How to deploy vCloud Director Appliance with Terraform and govc

Recently I’ve been looking at a tool to automate the provisioning of the vCloud Director appliance. I wanted something that could quickly take JSON as input for the OVF properties and consistently deploy the appliance with the same outcome. I first tried Terraform, but that didn’t quite work out as I expected: the vSphere provider’s vsphere_virtual_machine resource is not able to deploy OVAs or OVFs directly.

Here’s what HashiCorp has to say about that…

NOTE: Neither the vsphere_virtual_machine resource nor the vSphere provider supports importing of OVA or OVF files as this is a workflow that is fundamentally not the domain of Terraform. The supported path for deployment in Terraform is to first import the virtual machine into a template that has not been powered on, and then clone from that template. This can be accomplished with Packer, govc’s import.ovf and import.ova subcommands, or ovftool.

The way this could be done is to first import the OVA without vApp properties, convert it to a template, then use Terraform to create a new VM from that template and use the vapp section to customise the appliance:

vapp {
  properties = {
    "guestinfo.tf.internal.id" = "42"
  }
}

This didn’t work for me as not all vApp properties are implemented in the vsphere_virtual_machine resource yet. Let me know if you are able to get this to work.

So that’s where govc came in handy.

govc is a vSphere CLI built on top of govmomi.

The CLI is designed to be a user friendly CLI alternative to the GUI and well suited for automation tasks. It also acts as a test harness for the govmomi APIs and provides working examples of how to use the APIs.
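If you don’t already have govc installed, pre-built binaries are published on the govmomi GitHub releases page; alternatively, assuming you have a reasonably recent Go toolchain, you can build it from source:

go install github.com/vmware/govmomi/govc@latest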

Once you’ve installed govc, you can then set up the environment by entering the following into your shell:

export GOVC_URL="https://vcenter-onprem.vcd.lab"
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='My$ecureP4ssw0rd!'
export GOVC_INSECURE=true
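At this point it’s worth checking that govc can actually reach your vCenter. govc about prints basic version information about the endpoint, so if it returns cleanly your URL and credentials are good:

govc about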

To deploy the appliance we will use the govc import.ova command.

However, before you can do that, you need to generate the JSON file that contains all the OVF properties, edit it, and then pass it to govc import.ova as its options input.

To create the JSON file, run the following command:

govc import.spec /path_to_vcd_appliance.ova | python -m json.tool > vcd-appliance.json

For example:

govc import.spec /volumes/STORAGE/Terraform/VMware_vCloud_Director-10.0.0.4649-15450333_OVF10.ova | python -m json.tool > vcd-appliance.json

Edit the vcd-appliance.json file and enter the parameters for your vCD appliance, then deploy the appliance with the govc import.ova command.
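If you prefer to script the edit rather than open the file by hand, a one-liner along these lines should work, assuming you have jq installed and that the spec follows govc’s usual layout with a top-level Name field and a PropertyMapping list of Key/Value pairs (the property key shown is purely illustrative; use the keys that appear in your own vcd-appliance.json):

jq '.Name = "vcd-cell-01" | (.PropertyMapping[] | select(.Key == "vami.domain.VMware_vCloud_Director") | .Value) = "vcd-cell-01.vcd.lab"' vcd-appliance.json > vcd-appliance-edited.json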

The format of the govc import.ova command is:

govc import.ova --options=/path_to_vcd_appliance.json vcd_appliance.ova

For example:

govc import.ova -ds=NVMe --options=/Users/phanh/Downloads/terraformdir/govc/vcd-appliance.json /volumes/STORAGE/Terraform/VMware_vCloud_Director-10.0.0.4649-15450333_OVF10.ova
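govc’s common inventory flags can also be added if you want more control over placement and naming; the inventory paths below are from my lab and purely illustrative:

govc import.ova -ds=NVMe -pool=/Datacenter/host/Cluster/Resources -folder=/Datacenter/vm/vCD -name=vcd-cell-01 --options=/Users/phanh/Downloads/terraformdir/govc/vcd-appliance.json /volumes/STORAGE/Terraform/VMware_vCloud_Director-10.0.0.4649-15450333_OVF10.ova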

You should now see your vCD appliance being deployed to your vCenter server.
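Once the import task finishes, you can confirm the appliance is there from the same shell; the VM name to query is whatever Name you set in the JSON spec (vcd-cell-01 is just the example name used above):

govc vm.info vcd-cell-01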

This method also works for any other OVA/OVF deployment, including the NSX-T unified appliance, vROps and vRO.

The next natural step would be to continue the configuration of vCloud Director with the Terraform provider for vCloud Director.

Creating a VMware vCloud Director Cluster

Overview

A VMware vCloud Director (vCD) cluster contains one or more vCD servers; these servers are referred to as “Cells” and form the basis of the VMware cloud. A cloud can be formed of multiple cells.

This diagram is a good representation of the vCD Cluster concept.

To enable multiple servers to participate in a cluster, the same pre-requisites that apply to a single host also apply to each additional host, and the following must also be met:

  • each host must mount the shared transfer server storage at $VCLOUD_HOME/data/transfer, which is typically located at /opt/vmware/cloud-director/data/transfer.

This shared storage could be an NFS mount, mounted on all participating servers with rw access for root. It is important to decide whether a cluster is required before configuring the first server. If you intend to use a vCD cluster, configure the shared transfer server storage before executing the vCD installer.
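Before touching the cells it’s worth confirming that the export is actually visible from each host. showmount (part of nfs-utils) lists a server’s exports; the server name below is the FreeNAS host used later in this post:

showmount -e vcd-freenas.vmwire.local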

Check out the vCloud Director Installation and Configuration Guide for pre-requisites.

Shared Transfer Server Storage

For this post, I’ve set up an NFS volume on FreeNAS and given all cluster members rw permissions to the volume. It is assumed that you have a completely clean installation of RHEL 5 x64 (or, if like me you are running this in a lab, CentOS 5 x64), with all the latest updates and pre-requisite packages.

Now to mount the volume on all hosts:

  1. Connect to your first host using SSH or log in directly.
  2. Edit your /etc/fstab file and add the following line, remembering to change the NFS server and mount point to suit your environment:

     vcd-freenas.vmwire.local:/mnt/SSD /opt/vmware/cloud-director/data/transfer nfs rw,soft,_netdev  0 0

  3. The resulting /etc/fstab should contain that line alongside your existing entries.
  4. Now create the shared transfer server storage folder structure, /opt/vmware/cloud-director/data/transfer (just do a mkdir command).
  5. Run chkconfig netfs on.
  6. Repeat steps 1-5 for any other hosts.
  7. Restart the servers (or mount the volume straight away using the commands shown after this list).
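If you’d rather not reboot immediately, the new fstab entry can be picked up and checked with standard commands (nothing vCD-specific here):

mkdir -p /opt/vmware/cloud-director/data/transfer
mount -a
df -h /opt/vmware/cloud-director/data/transfer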

Now you are ready to install vCD onto the first host, making sure that you have met all the pre-requisites as detailed in the vCloud Director Installation and Configuration Guide. Once completed, you should have a working cell with its shared transfer server storage folder located on the NFS volume.
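For reference, kicking off the installation on the first cell is simply a case of making the installer binary executable and running it; the build number below matches the binary referenced later in this post:

chmod u+x vmware-cloud-director-1.0.0-285979.bin
./vmware-cloud-director-1.0.0-285979.bin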

Setting up a second cell as part of the Cloud Director Cluster

At this point you should already have a working cell with the vCD shared transfer server storage located on the NFS volume. Before you install vCD onto a second server, the following must be done:

  1. All pre-requisites for a single server installation must also be met for subsequent servers as part of a vCD Cluster
  2. The second server must also have rw access for root to the shared transfer server storage
  3. The second server must have access to the response file, which is located at /opt/vmware/cloud-director/etc/responses.properties on the first successfully installed server
  4. Copy the above file to the second server or to the shared transfer server storage (see the example after this list)
  5. It is important to note that the response file contains values that were used for the first server. Subsequent servers will use the response file, so if you stored the certificates.ks file for the first server in a location not recognised by subsequent servers, the installation script will prompt you to enter the correct path to the certificates.ks file on each subsequent server. To avoid this, you could create the certificates.ks files for all cluster members up front and place them in the shared transfer server storage, with unique names of course, such as vcd-cell1-certificates.ks and vcd-cell2-certificates.ks.
  6. You can now install vCD onto subsequent servers with the command vmware-cloud-director-1.0.0-285979.bin -r /opt/vmware/cloud-director/data/transfer/responses.properties
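Because the transfer directory is shared, the simplest way to stage the response file is to copy it there from the first cell and then point the installer on the second cell at that path; a sketch of the two commands, run on cell 1 and cell 2 respectively:

cp /opt/vmware/cloud-director/etc/responses.properties /opt/vmware/cloud-director/data/transfer/
./vmware-cloud-director-1.0.0-285979.bin -r /opt/vmware/cloud-director/data/transfer/responses.properties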

The installer will automatically complete most prompts for you, but you will still need to select the correct eth adapter for the http and consoleproxy services; everything else is automatic.
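Once the installer completes on each cell, you can check that the cell services have come up, assuming the default vmware-vcd service name on RHEL/CentOS:

service vmware-vcd status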

Go ahead and have a play and maybe even deploy a load balancer on top.

Here’s a screenshot of my two cells working side by side, connecting to the same shared transfer server storage and Oracle database, and managing the same vCenter Servers.

For more information, read the overview at Yellow Bricks, which also includes links to the product pages.