Playing with Tanzu – persistent volume claims, deployments & services

Learning the k8s ropes…

This is not a how-to article on getting vSphere with Tanzu up and running; there are plenty of guides out there. This post is more of a “let’s have some fun with Kubernetes now that I have a vSphere with Tanzu cluster to play with“.

Answering the following question is a good way to start getting to grips with Kubernetes from a VMware perspective.

How do I do the things that I used to do in a VM, but with Kubernetes in a container context instead?

For example building the certbot application in a container instead of a VM.

Let’s try to create an Ubuntu deployment that deploys one Ubuntu container into a vSphere Pod with persistent storage and a load balancer service from NSX-T, so we can get to the /bin/bash shell of the deployed container.

Let’s go!

I created two yaml files for this, accessible from Github. You can read up on what these objects are in the Kubernetes documentation.

Filename | What’s it for? | What does it do? | Github link
certbot-deployment.yaml | k8s deployment specification | Deploys one ubuntu pod, claims a 16Gi volume, mounts it at /mnt/sdb and creates a load balancer to enable remote management with ssh. | ubuntu-deployment.yaml
certbot-pvc.yaml | persistent volume claim specification | Creates a 16Gi persistent volume from the underlying vSphere storage class named tanzu-demo-storage. The PVC is then consumed by the deployment. |
Table 1. The only two files that you need.

Here’s the certbot-deployment.yaml file that shows the required fields and object spec for a Kubernetes Deployment and its LoadBalancer Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: certbot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: certbot
  template:
    metadata:
      labels:
        app: certbot
    spec:
      volumes:
      - name: certbot-storage
        persistentVolumeClaim:
          claimName: certbot-pvc
      containers:
      - name: ubuntu
        image: ubuntu:latest
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/mnt/sdb"
          name: certbot-storage
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: certbot
  name: certbot
spec:
  ports:
  - port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: certbot
  sessionAffinity: None
  type: LoadBalancer

Here’s the certbot-pvc.yaml file that shows the required fields and object spec for a Kubernetes Persistent Volume Claim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certbot-pvc
  labels:
    storage-tier: tanzu-demo-storage
    availability-zone: home
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: tanzu-demo-storage
  resources:
    requests:
      storage: 16Gi

First, create the PVC with this command:

kubectl apply -f certbot-pvc.yaml

Then deploy the deployment with this command:

kubectl apply -f certbot-deployment.yaml

Magic happens, and you can monitor the vSphere client and kubectl for status. Here are a couple of screenshots to show you what’s happening.

kubectl describe deployment certbot
Name:                   certbot
Namespace:              new
CreationTimestamp:      Thu, 11 Mar 2021 23:40:25 +0200
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=certbot
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=certbot
    Image:      ubuntu:latest
    Port:       <none>
    Host Port:  <none>
    Environment:  <none>
      /mnt/sdb from certbot-storage (rw)
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  certbot-pvc
    ReadOnly:   false
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   certbot-68b4747476 (1/1 replicas created)
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  44m   deployment-controller  Scaled up replica set certbot-68b4747476 to 1
kubectl describe pvc
Name:          certbot-pvc
Namespace:     new
StorageClass:  tanzu-demo-storage
Status:        Bound
Volume:        pvc-418a0d4a-f4a6-4aef-a82d-1809dacc9892
Labels:        availability-zone=home
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:    []
Capacity:      16Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    certbot-68b4747476-pq5j2
Events:        <none>
kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
certbot     1/1     1            1           47m

kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
certbot-68b4747476-pq5j2     1/1     Running   0          47m

kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
certbot-pvc   Bound    pvc-418a0d4a-f4a6-4aef-a82d-1809dacc9892   16Gi       RWO            tanzu-demo-storage   84m

Let’s log into our pod; note the name from the kubectl get pods command above.

It’s not yet possible to log into the pod using SSH, since this fresh container does not have SSH installed. Let’s first log in using kubectl and install SSH.

kubectl exec --stdin --tty certbot-68b4747476-pq5j2 -- /bin/bash

You will then be inside the container at the /bin/bash prompt.

root@certbot-68b4747476-pq5j2:/# ls
bin   dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
boot  etc  lib   lib64  media   opt  root  sbin  sys  usr

Let’s install some tools and configure SSH.

# Refresh the package lists
apt-get update
# Install ping and the OpenSSH server
apt-get install iputils-ping
apt-get install ssh

# Set a root password to use for SSH logins
passwd root

service ssh restart
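One catch worth flagging: Ubuntu’s stock sshd_config disallows root password logins (PermitRootLogin prohibit-password), so the root login used later will be refused unless that directive is changed. A minimal sketch of the edit, demonstrated here against a temporary copy of the file rather than the real /etc/ssh/sshd_config:

```shell
# Demo against a temp copy; in the container you would edit
# /etc/ssh/sshd_config itself and then `service ssh restart`
cfg=$(mktemp)
printf '#PermitRootLogin prohibit-password\nPasswordAuthentication yes\n' > "$cfg"
# Uncomment/force the PermitRootLogin directive to "yes"
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' "$cfg"
grep '^PermitRootLogin' "$cfg"   # PermitRootLogin yes
rm -f "$cfg"
```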


Before we can log into the container over an SSH connection, we need to find out what the external IP is for the SSH service that the NSX-T load balancer configured for the deployment. You can find this using the command:

kubectl get services

kubectl get services
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
certbot     LoadBalancer    22:31731/TCP   51m

The EXTERNAL-IP shown is the address we use to reach the Ubuntu container over SSH. Let’s try that with a putty/terminal session…
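If you want to grab that address programmatically, one option is to parse the EXTERNAL-IP column of the output. This is a sketch; the sample output piped in below is illustrative, with a placeholder IP rather than a real one:

```shell
# Hypothetical helper: print the EXTERNAL-IP column (field 4) for a named service.
# Normally you would pipe real output in: kubectl get services | get_external_ip certbot
get_external_ip() { awk -v svc="$1" '$1 == svc { print $4 }'; }

printf 'NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE\ncertbot   LoadBalancer   10.96.0.10   192.0.2.10    22:31731/TCP   51m\n' \
  | get_external_ip certbot    # prints 192.0.2.10
```

kubectl can also emit the address directly with `-o jsonpath`, which avoids text parsing altogether.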

login as: root
certbot@'s password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 4.19.126-1.ph3-esx x86_64)

 * Documentation:
 * Management:
 * Support:

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


$ ls
bin   dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
boot  etc  lib   lib64  media   opt  root  sbin  sys  usr
$ df
Filesystem     1K-blocks   Used Available Use% Mounted on
overlay           258724 185032     73692  72% /
/mnt/sdb        16382844  45084  16321376   1% /mnt/sdb
tmpfs             249688     12    249676   1% /run/secrets/kubernetes.io/serviceaccount
/dev/sda          258724 185032     73692  72% /dev/termination-log

You can see that there is a 16GB mount point at /mnt/sdb, just as we specified in the specification, and remote SSH access is working.

How to import existing infrastructure into Terraform management

Terraform is a great framework for getting started with infrastructure-as-code to manage resources. It provides awesome benefits such as extremely fast deployment through automation, managing configuration drift, applying configuration changes, and destroying entire environments with a few keystrokes. Plus it supports many providers, so you can easily use the same code logic to deploy and manage different resources, for example on VMware clouds, AWS or Azure, at the same time.

For more information, if you haven’t looked at Terraform before, please take a quick run through HashiCorp’s website.

Getting started with Terraform is really quite simple when the environment that you are starting to manage is green-field; that is, you are starting from a completely fresh deployment on day zero. If we take AWS as an example, this is as fresh as signing up to the AWS free tier with a new account and having nothing deployed in your AWS console.

Terraform has a few simple files that are used to build and manage infrastructure through code: the configuration and the state, the basic building blocks of Terraform. There are other files and concepts that could be used, such as variables and modules, but I won’t cover these in much detail in this post.

How do you bring in infrastructure that is already deployed into Terraform’s management?

This post will focus on how to import existing infrastructure (brown-field) into Terraform’s management. A typical scenario: you’ve already deployed infrastructure and have only recently started looking into infrastructure as code, or you’ve tried PowerShell, Ansible and other tools but found none quite as useful as Terraform.


First, let’s assume that you’ve deployed the Terraform CLI or are already using Terraform Cloud; the concepts are pretty much the same. I will be using the Terraform CLI for the examples in this post, together with AWS. I’m also going to assume that you know how to obtain access and secret keys from your AWS Console.

This import method works with any supported Terraform provider, including all the VMware ones. For this exercise, I will work with AWS.

My AWS environment consists of the following infrastructure; yours will be different, of course. I’m using the infrastructure below in the examples.

You will need to obtain the AWS resource IDs from your environment, use the AWS Console or API to obtain this information.

# | Resource | Name | AWS Resource ID
1 | VPC | VPC | vpc-02d890cacbdbaaf87
2 | Subnet | PublicSubnetA | subnet-0f6d45ef0748260c6
3 | Subnet | PublicSubnetB | subnet-092bf59b48c62b23f
4 | Subnet | PrivateSubnetA | subnet-03c31081bf98804e0
5 | Subnet | PrivateSubnetB | subnet-05045746ac7362070
6 | Internet gateway | IGW | igw-09056bba88a03f8fb
7 | Network ACL | NACL | acl-0def8bcfeff536048
8 | Route table | PublicRoute | rtb-082be686bca733626
9 | Route table | PrivateRoute | rtb-0d7d3b5eacb25a022
10 | EC2 instance | Instance1 | i-0bf15fecd31957129
11 | Elastic load balancer | elb-158WU63HHVD3 | elb-158WU63HHVD3
12 | Security group | ELBSecurityGroup | sg-0b8f9ee4e1e2723e7
13 | Security group | AppServerSecurityGroup | sg-031fadbb59460a776
Table 1. AWS Resource IDs

But I used CloudFormation to deploy my infrastructure…

If you used CloudFormation to deploy your infrastructure and you now want to use Terraform, then you will need to set the CloudFormation deletion policy to Retain before bringing any resources into Terraform. This is important, as any accidental deletion or change to the CloudFormation stack would otherwise impact your Terraform configuration and state. I recommend setting this policy before importing resources with Terraform.

The AWS documentation has more information to help you enable the deletion policy on all resources.

For example to change a CloudFormation configuration with the deletion policy enabled, the code would look like this:


Resources:
  VPC:
    Type: AWS::EC2::VPC
    DeletionPolicy: Retain
    Properties:
      InstanceTenancy: default
      EnableDnsSupport: 'true'
      EnableDnsHostnames: 'true'

Let’s get started!

Set up your configuration file for a new project that will import an existing AWS infrastructure. The first version of the file will look like this, with the VPC being the only resource that we import. It’s always good to work with a single resource first, to ensure that your import works before going all out and importing all the rest.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.28.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "eu-west-1"
  access_key = "my_access_key"
  secret_key = "my_secret_key"
}

resource "aws_vpc" "VPC" {
  # (resource arguments)
}
Run the following to initialize the AWS provider in Terraform.

terraform init

Import the VPC resource with this command in your terminal

terraform import aws_vpc.VPC vpc-02d890cacbdbaaf87

You can then review the Terraform state file; it should be named terraform.tfstate and will look something like this (open it in a text editor).

  "version": 4,
  "terraform_version": "0.14.6",
  "serial": 13,
  "lineage": "xxxx",
  "outputs": {},
  "resources": [    {
  "mode": "managed",
      "type": "aws_vpc",
      "name": "VPC",
      "provider": "provider[\"\"]",
      "instances": [
          "schema_version": 1,
          "attributes": {
            "arn": "xxxx",
            "assign_generated_ipv6_cidr_block": false,
            "cidr_block": "",
            "default_network_acl_id": "acl-067e11c10e2327cc9",
            "default_route_table_id": "rtb-0a55b9e1683991242",
            "default_security_group_id": "sg-0db58c5c159b1ebf9",
            "dhcp_options_id": "dopt-7d1b121b",
            "enable_classiclink": false,
            "enable_classiclink_dns_support": false,
            "enable_dns_hostnames": true,
            "enable_dns_support": true,
            "id": "vpc-02d890cacbdbaaf87",
            "instance_tenancy": "default",
            "ipv6_association_id": "",
            "ipv6_cidr_block": "",
            "main_route_table_id": "rtb-0a55b9e1683991242",
            "owner_id": "xxxxxxx",
            "tags": {
              "Name": "VPC",
              "environment": "aws",
              "project": "Imported by Terraform"
          "sensitive_attributes": [],
          "private": "xxxxxx"

Notice that the VPC and all of the VPC settings have now been imported into Terraform.

Now that we have successfully imported the VPC, we can continue and import the rest of the infrastructure. The remaining AWS services we need to import are detailed in Table 1. AWS Resource IDs.

To import the remaining infrastructure, we need to add code for the other resources. Edit your configuration file so that it looks like this. Notice that all thirteen resources are defined in the configuration file and the resource arguments are all empty. We will update the resource arguments later; initially we just need to import the resources into the Terraform state, and then update the configuration to match that state.

Terraform does not support automatic creation of a configuration out of a state.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.28.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "eu-west-1"
  access_key = "my_access_key"
  secret_key = "my_secret_key"
}

resource "aws_vpc" "VPC" {
  # (resource arguments)
}

resource "aws_subnet" "PublicSubnetA" {
  # (resource arguments)
}

resource "aws_subnet" "PublicSubnetB" {
  # (resource arguments)
}

resource "aws_subnet" "PrivateSubnetA" {
  # (resource arguments)
}

resource "aws_subnet" "PrivateSubnetB" {
  # (resource arguments)
}

resource "aws_internet_gateway" "IGW" {
  # (resource arguments)
}

resource "aws_network_acl" "NACL" {
  # (resource arguments)
}

resource "aws_route_table" "PublicRoute" {
  # (resource arguments)
}

resource "aws_route_table" "PrivateRoute" {
  # (resource arguments)
}

resource "aws_instance" "Instance1" {
  # (resource arguments)
}

resource "aws_elb" "elb-158WU63HHVD3" {
  # (resource arguments)
}

resource "aws_security_group" "ELBSecurityGroup" {
  # (resource arguments)
}

resource "aws_security_group" "AppServerSecurityGroup" {
  # (resource arguments)
}

Run the following commands in your terminal to import the remaining resources into Terraform.

terraform import aws_subnet.PublicSubnetA subnet-0f6d45ef0748260c6
terraform import aws_subnet.PublicSubnetB subnet-092bf59b48c62b23f
terraform import aws_subnet.PrivateSubnetA subnet-03c31081bf98804e0
terraform import aws_subnet.PrivateSubnetB subnet-05045746ac7362070
terraform import aws_internet_gateway.IGW igw-09056bba88a03f8fb
terraform import aws_network_acl.NACL acl-0def8bcfeff536048
terraform import aws_route_table.PublicRoute rtb-082be686bca733626
terraform import aws_route_table.PrivateRoute rtb-0d7d3b5eacb25a022
terraform import aws_instance.Instance1 i-0bf15fecd31957129
terraform import aws_elb.elb-158WU63HHVD3 elb-158WU63HHVD3
terraform import aws_security_group.ELBSecurityGroup sg-0b8f9ee4e1e2723e7
terraform import aws_security_group.AppServerSecurityGroup sg-031fadbb59460a776
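Typing these out is repetitive. A small sketch (assuming a hypothetical tab-separated resources.txt that maps each resource address to its AWS ID) can generate the commands for you:

```shell
# Generate `terraform import` commands from "address<TAB>id" lines;
# pipe the result to sh once you are happy with it.
# (Inline printf stands in for: cat resources.txt)
printf 'aws_subnet.PublicSubnetA\tsubnet-0f6d45ef0748260c6\naws_internet_gateway.IGW\tigw-09056bba88a03f8fb\n' \
  | awk -F'\t' '{ printf "terraform import %s %s\n", $1, $2 }'
```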

Now that all thirteen resources are imported, you will need to manually update the configuration file with the resource arguments that correspond to the current state of all the resources that were just imported. The easiest way to do this is to first look at the Terraform AWS provider documentation to find the mandatory fields. Let’s use aws_subnet as an example:

From the documentation, we need two things:

cidr_block – (Required) The CIDR block for the subnet.

vpc_id – (Required) The VPC ID.

We know that we need these two as a minimum, but what about other configuration that was done in the AWS Console or CloudFormation before you started working with Terraform, such as tags and other parameters? You want to update your configuration file with the same configuration as what was just imported into the state. This is very important.

To do this, do not use the terraform.tfstate file directly, but instead run the following command.

terraform show

You’ll get an output of the current state of your AWS environment that you can then copy and paste the resource arguments into your configuration.

I won’t cover all thirteen resources in this post, so I’ll again use one of the aws_subnet resources as the example. Here is the PublicSubnetA aws_subnet resource information, copied and pasted straight out of the terraform show output.

# aws_subnet.PublicSubnetA:
resource "aws_subnet" "PublicSubnetA" {
    arn                             = "arn:aws:ec2:eu-west-1:xxxx:subnet/subnet-0f6d45ef0748260c6"
    assign_ipv6_address_on_creation = false
    availability_zone               = "eu-west-1a"
    availability_zone_id            = "euw1-az2"
    cidr_block                      = ""
    id                              = "subnet-0f6d45ef0748260c6"
    map_customer_owned_ip_on_launch = false
    map_public_ip_on_launch         = true
    owner_id                        = "xxxx"
    tags                            = {
        "Name"        = "PublicSubnetA"
        "environment" = "aws"
        "project"     = "my_project"
    }
    vpc_id                          = "vpc-02d890cacbdbaaf87"

    timeouts {}
}

Not all resource arguments are needed; again, review the documentation. Here is an example of my changes to the configuration file, with some of the settings taken from the output of the terraform show command.

resource "aws_subnet" "PublicSubnetA" {
    assign_ipv6_address_on_creation = false
    cidr_block                      = var.cidr_block_PublicSubnetA
	map_public_ip_on_launch         = true
    tags                            = {
        Name        = "PublicSubnetA"
        environment = "aws"
        project     = "my_project"
    vpc_id                          = var.vpc_id

    timeouts {}

Notice that I have turned the values for cidr_block and vpc_id into variables.

Using Variables

Using variables simplifies a lot of your code. I’m not going to explain what these are in this post; you can read up on them on HashiCorp’s website.

However, the contents of my terraform.tfvars file looks like this:

cidr_block = ""
vpc_id = "vpc-02d890cacbdbaaf87"
cidr_block_PublicSubnetA = ""
cidr_block_PublicSubnetB = ""
cidr_block_PrivateSubnetA = ""
cidr_block_PrivateSubnetB = ""
instance_type = "t2.micro"
ami_id = "ami-047bb4163c506cd98"
instance_port = "80"
instance_protocol = "http"
lb_port = "80"
lb_protocol = "http"

Just place your terraform.tfvars file in the same location as your configuration file. Terraform automatically loads the default terraform.tfvars, or you can reference a different variable file; again, refer to the documentation.
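Each value in terraform.tfvars also needs a matching variable declaration. A sketch of what that could look like (the file name variables.tf is just the usual convention, and these declarations are my assumptions based on the tfvars above, not taken from the original post):

```hcl
# variables.tf (assumed) - declarations matching terraform.tfvars
variable "vpc_id" {
  type        = string
  description = "ID of the imported VPC"
}

variable "cidr_block_PublicSubnetA" {
  type        = string
  description = "CIDR block for PublicSubnetA"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}
```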

Finalizing the configuration

Once you’ve updated your configuration with all the correct resource arguments, you can test to see if what is in the configuration is the same as what is in the state. To do this run the following command:

terraform plan

If you copied, pasted and updated your configuration correctly, you will get output from your terminal similar to the following:

terraform plan
[ Removed content to save space ]

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

Congratulations, you’ve successfully imported an infrastructure that was built outside of Terraform.

You can now proceed to manage your infrastructure with Terraform. For example, changing the terraform.tfvars parameters to

lb_port = "443"
lb_protocol = "https"

and then running plan and apply will update the elastic load balancer elb-158WU63HHVD3 to health check on port 443 instead of port 80.

terraform plan
[ removed content to save space ]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_elb.elb-158WU63HHVD3 will be updated in-place
  ~ resource "aws_elb" "elb-158WU63HHVD3" {
      ~ health_check {
          ~ target              = "TCP:80" -> "TCP:443"           
terraform apply
[ content removed to save space] 

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

And that’s how you import existing resources into Terraform. I hope you find this post useful. Please comment below if you have a better method or any suggestions for improvements, and feel free to ask if you have questions and need help.

Reducing HCX on-premises appliances resources

When HCX is deployed, three appliances are deployed as part of the Service Mesh. These are detailed below.

Appliance | Role | vCPU | Memory (GB)
IX | Interconnect appliance | 8 | 3
NE | L2 network extension appliance | 8 | 3
WO | WAN optimization appliance | 8 | 14

As you can see, these three appliances require a lot of resources just for one Service Mesh. A Service Mesh is created on a 1:1 basis between source and destination. If you connected your on-premises environment to another destination, you would need another service mesh.

For example, if you had the following hybrid cloud requirements:

Service Mesh | Source site | Destination site | vCPUs | Memory (GB)
1 | On-premises | VCPP Provider | 24 | 20
2 | On-premises | VMware Cloud on AWS | 24 | 20
3 | On-premises | Another on-premises site | 24 | 20

As you can see, resource requirements will add up.
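To make the arithmetic explicit, each Service Mesh consumes one of each appliance from the first table, and the per-mesh totals multiply out across the three meshes:

```shell
# Per-mesh totals from the appliance table: IX + NE + WO
vcpus=$((8 + 8 + 8))   # 24 vCPUs per mesh
mem=$((3 + 3 + 14))    # 20 GB per mesh
echo "one mesh:     $vcpus vCPUs, $mem GB"
echo "three meshes: $((3 * vcpus)) vCPUs, $((3 * mem)) GB"
```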

If you’re testing or deploying these in a nested lab, the resource requirements may be too high for your infrastructure. This post shows you how to edit the OVF files so that the appliances are deployed with lower resource requirements.

Disclaimer: The following is unsupported by VMware. Reducing vCPU and memory on any of the HCX appliances will impact HCX services.

  1. Log into your HCX Manager appliance with the admin account.
  2. Run su - to gain root access (use the same password).
  3. Go into the /common/appliances directory.
  4. Here you’ll see folders for sp and vcc; these are the only two that you need to work in.
  5. First, let’s start with sp. sp stands for Silver Peak, which is what runs the WAN optimization.
  6. Go into the /common/appliances/sp/ directory.
  7. vi the file VX-0000-
  8. Go to the section where the virtual CPUs and memory are configured and change it to the following. (I find that reducing the WO appliance to four vCPUs and 7GB RAM works well.)
         <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
         <rasd:Description>Number of Virtual CPUs</rasd:Description>
         <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
         <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
         <rasd:Description>Memory Size</rasd:Description>
         <rasd:ElementName>7168MB of memory</rasd:ElementName>
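Note that the excerpt above shows the human-readable ElementName labels; in OVF hardware items the actual count typically lives in a rasd:VirtualQuantity element alongside them, so make sure that value is changed too. A sketch of the edit, demonstrated on a sample line rather than the real appliance file:

```shell
# Demo on a sample line; on the appliance you would edit the OVF in place with vi
ovf=$(mktemp)
printf '<rasd:VirtualQuantity>8</rasd:VirtualQuantity>\n' > "$ovf"
# Drop the vCPU count from 8 to 4
sed -i 's#>8<#>4<#' "$ovf"
cat "$ovf"   # <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
rm -f "$ovf"
```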

Next, configure the IX and NE (L2 extension) appliances; these are both contained in the vcc directory.

  1. go to the /common/appliances/vcc/3.5.0 directory
  2. vi the vcc-va-large-3.5.3-17093722.ovf file, again changing the vCPU count to 4 and leaving RAM at 3 GB.
         <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
         <rasd:Description>Number of Virtual CPUs</rasd:Description>
         <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>

Once you save your changes and create a Service Mesh, you will notice that the new appliances will be deployed with reduced virtual hardware requirements.

Hope this helps your testing with HCX!

Upgrade Cloud Director to 10.2

A very quick post on how to upgrade your Cloud Director cluster to 10.2. This post contains some shortcuts that remove some repetitive steps.

Download the latest Cloud Director 10.2 update package from the VMware downloads site. You can read more about the latest updates in the release notes.

Primarily, VCD 10.2 brings some significant new functionality, including:

  1. NSX-T Advanced functional parity – NSX Advanced Load Balancer (Avi), Distributed Firewall, VRF-lite, Cross VDC networking, IPv6, Dual stack (IPv4/IPv6) on the same network, SLAAC, DHCPv6, CVDS (vSphere 7.0/NSX-T 3.0), L2VPN – API only.
  2. vSphere with Kubernetes support – Provider and tenant UI for managing and consuming Kubernetes clusters.

Start on the primary appliance

  1. You can find which appliance is primary by going to https://<ip-of-vcd-appliance>:5480.
  2. Copy the appliance update package to one of the appliances, directly into the transfer share so that you don’t have to do this for all the appliances in your cluster.
  3. Once copied do the following on the first primary appliance.
root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# ls
VMware_Cloud_Director_10.2.0.5190-17029810_update.tar.gz cells

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# chmod u+x VMware_Cloud_Director_10.2.0.5190-17029810_update.tar.gz

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# mkdir update-package

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# tar -zxf VMware_Cloud_Director_10.2.0.5190-17029810_update.tar.gz -C update-package/

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --repo file://opt/vmware/vcloud-director/data/transfer/update-package

Set local repository address…

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --check
Checking for available updates, this process can take a few minutes….
Available Updates - Build 17029810

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# /opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --install latest

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# /opt/vmware/appliance/bin/create-db-backup

2020-10-16 08:41:01 | Invoking Database backup utility
2020-10-16 08:41:01 | Command line usage to create embedded PG DB backup: create-db-backup
2020-10-16 08:41:01 | Using "vcloud" as default PG DB to backup since DB_NAME is not provided
2020-10-16 08:41:01 | Creating back up directory /opt/vmware/vcloud-director/data/transfer/pgdb-backup if it does not already exist …
2020-10-16 08:41:01 | Creating the "vcloud" DB backup at /opt/vmware/vcloud-director/data/transfer/pgdb-backup…
2020-10-16 08:41:03 | "vcloud" DB backup has been successfully created.
2020-10-16 08:41:03 | Copying the primary node's properties and certs …
2020-10-16 08:41:04 | "vcloud" DB backup, Properties files and certs have been successfully saved to /opt/vmware/vcloud-director/data/transfer/pgdb-backup/db-backup-2020-10-16-084101.tgz.

Note: To restore the postgres DB dump copy this tar file to the remote system.

root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# /opt/vmware/vcloud-director/bin/upgrade

Welcome to the VMware Cloud Director upgrade utility
Verify that you have a valid license key to use the version of the
VMware Cloud Director software to which you are upgrading.
This utility will apply several updates to the database. Please
ensure you have created a backup of your database prior to continuing.

Do you wish to upgrade the product now? [Y/N] y

Examining database at URL: jdbc:postgresql://
The next step in the upgrade process will change the VMware Cloud Director database schema.

Backup your database now using the tools provided by your database vendor.

Enter [Y] after the backup is complete. y

Running 5 upgrade tasks
Executing upgrade task:
Successfully ran upgrade task
Executing upgrade task:
Successfully ran upgrade task
Executing upgrade task:
Successfully ran upgrade task
Executing upgrade task:
Successfully ran upgrade task
Executing upgrade task:
Successfully ran upgrade task
Database upgrade complete
Upgrade complete

Would you like to start the Cloud Director service now? If you choose not
to start it now, you can manually start it at any time using this command:
service vmware-vcd start

Start it now? [y/n] n

Skipping start up for now

On the remaining appliances

root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --repo file://opt/vmware/vcloud-director/data/transfer/update-package

Set local repository address…

root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --check
Checking for available updates, this process can take a few minutes….
Available Updates - Build 17029810

root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# /opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown
Please enter the administrator password:
Cell successfully deactivated and all tasks cleared in preparation for shutdown

root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --install latest
Installing version - Build 17029810

Reboot appliances

Now reboot each appliance one at a time starting with the primary, wait for it to rejoin the PostgreSQL cluster before rebooting the other appliances.

Using Let’s Encrypt certificates with Cloud Director

Let’s Encrypt (LE) is a certificate authority that issues free SSL certificates for use in your web applications. This post details how to set up LE to support Cloud Director, specifically with a wildcard certificate.


LE uses an application called certbot to request, automatically download and renew certificates. You can think of certbot as the client for LE.

First you’ll need to create a client machine that can request certificates from LE. I started with a simple CentOS VM. For more details about installing certbot on your preferred OS, read the certbot documentation.

Once your certbot machine is on the network with outbound internet access, you can start by performing the following.

 # Update software
 yum update
 # Install wget if not already installed
 yum install wget
 # Download the certbot application.
 # Move certbot into a local application directory
 sudo mv certbot-auto /usr/local/bin/certbot-auto
 # Set ownership to root
 sudo chown root /usr/local/bin/certbot-auto
 # Change permissions for certbot
 sudo chmod 0755 /usr/local/bin/certbot-auto

Now you’re ready to request certificates. Run the following command, replacing your desired domain within the quotes.

/usr/local/bin/certbot-auto --config-dir $HOME/.certbot --work-dir $HOME/.certbot/work --logs-dir $HOME/.certbot/logs  certonly --manual --preferred-challenges=dns -d '*'

This will create a request for a wildcard certificate for *

You’ll then be asked to create a new DNS TXT record on your public DNS server for the domain that you are requesting to validate that you can manage that domain. Here’s what mine looks like for the above.
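The screenshot of my record hasn’t survived, but the TXT record generally takes this shape (hypothetical domain and validation token, not real values):

```
_acme-challenge.example.com.   300   IN   TXT   "gfj9Xq...hypothetical-validation-token...Rg85nM"
```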

This also means that you can only request public certificates with LE; private certificates are not supported.

You will then see a response from LE such as the following:

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2020-12-24. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot-auto
   again. To non-interactively renew *all* of your certificates, run
   "certbot-auto renew"
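LE certificates are only valid for 90 days, so it is worth scheduling the renewal rather than running it by hand. A minimal sketch of a crontab entry, reusing the non-root directories from the install steps above (the schedule itself is just an example):

```shell
# Example crontab entry: attempt renewal twice a day; certbot only renews
# certificates that are close to expiry, so running it frequently is safe
0 3,15 * * * /usr/local/bin/certbot-auto renew --config-dir $HOME/.certbot --work-dir $HOME/.certbot/work --logs-dir $HOME/.certbot/logs --quiet
```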

Updating Cloud Director certificates

Before you can use the new certificate, you need to perform some operations with the JAVA Keytool to import the pem formatted certificates into the certificates.ks file that Cloud Director uses.

The issued certificate is available in the directory


Navigate there using an SSH client and you’ll see a structure like this

Download the entire folder for the next steps. Within the folder you’ll see the following files:

cert.pem – your certificate in pem format
chain.pem – the Let’s Encrypt root CA certificate in pem format
fullchain.pem – your wildcard certificate AND the LE root CA certificate in pem format
privkey.pem – the private key for your certificate (without passphrase)
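Before going further, you can sanity-check what LE issued with openssl. A quick sketch, run from inside the downloaded folder (the guards simply skip files that aren’t present):

```shell
# Show the subject, issuer and expiry of the issued certificate
if [ -f cert.pem ]; then
  openssl x509 -in cert.pem -noout -subject -issuer -enddate
fi
# Confirm the private key is intact (prints "RSA key ok")
if [ -f privkey.pem ]; then
  openssl rsa -in privkey.pem -noout -check
fi
```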

We need to rename the files to something that the JAVA Keytool can work with. I renamed mine to the following:

Original filename – New filename
cert.pem – vmwire-com.crt
chain.pem – vmwire-com-ca.crt
privkey.pem – vmwire-com.key
fullchain.pem – not needed
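The renaming itself is just a couple of copies. A sketch, run from inside the downloaded folder (cp keeps the originals intact, and the target names match the keytool commands below):

```shell
# Copy the LE output files to the names that the keytool steps expect
if [ -f cert.pem ]; then cp cert.pem vmwire-com.crt; fi
if [ -f chain.pem ]; then cp chain.pem vmwire-com-ca.crt; fi
if [ -f privkey.pem ]; then cp privkey.pem vmwire-com.key; fi
```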

Copy the three new files to one of the Cloud Director cells, use the /tmp directory.

Now launch an SSH session to one of the Cloud Director cells and perform the following.

# Import the certificate and the private key into a new pfx format certificate
openssl pkcs12 -export -out /tmp/vmwire-com.pfx -inkey /tmp/vmwire-com.key -in /tmp/vmwire-com.crt

# Create a new certificates.ks file and import the pfx formatted certificate
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storepass Vmware1! -keypass Vmware1! -storetype JCEKS -importkeystore -srckeystore /tmp/vmwire-com.pfx -srcstorepass Vmware1!

# Change the alias for the first entry to be http
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storetype JCEKS -changealias -alias 1 -destalias http -storepass Vmware1!

# Import the certificate again, this time creating alias 1 again (we will use the same wildcard certificate for the consoleproxy)
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storepass Vmware1! -keypass Vmware1! -storetype JCEKS -importkeystore -srckeystore /tmp/vmwire-com.pfx -srcstorepass Vmware1!

# Change the alias for the first entry to be consoleproxy
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storetype JCEKS -changealias -alias 1 -destalias consoleproxy -storepass Vmware1!

# Import the root certificate into the certificates.ks file
/opt/vmware/vcloud-director/jre/bin/keytool -importcert -alias root -file /tmp/vmwire-com-ca.crt -storetype JCEKS -keystore /tmp/certificates.ks -storepass Vmware1!

# List all the entries, you should now see three, http, consoleproxy and root
/opt/vmware/vcloud-director/jre/bin/keytool  -list -keystore /tmp/certificates.ks -storetype JCEKS -storepass Vmware1!

# Stop the Cloud Director service on all cells
service vmware-vcd stop

# Make a backup of the current certificate
mv /opt/vmware/vcloud-director/certificates.ks /opt/vmware/vcloud-director/certificates.ks.old

# Copy the new certificate to the Cloud Director directory
cp /tmp/certificates.ks /opt/vmware/vcloud-director/

# List all the entries, you should now see three, http, consoleproxy and root
/opt/vmware/vcloud-director/jre/bin/keytool  -list -keystore /opt/vmware/vcloud-director/certificates.ks -storetype JCEKS -storepass Vmware1!

# Reconfigure the Cloud Director application to use the new certificate

# Start the Cloud Director application
service vmware-vcd start

# Monitor startup logs
tail -f /opt/vmware/vcloud-director/logs/cell.log

Copy the certificates.ks file to the other cells and run the configure script on each of them to update the certificates for all cells. Don’t forget to update the certificate on the load balancer too. This other post shows how to do it with the NSX-T load balancer.

Check out the new certificate at

Automate NSX-T Load Balancer setup for Cloud Director and the Tenant App

This post describes how to use the NSX-T Policy API to automate the creation of load balancer configurations for Cloud Director and the vRealize Operations Tenant App.

Postman collection

I’ve included a Postman collection that contains all of the necessary API calls to get everything configured. There is also a Postman environment that contains the necessary variables to successfully configure the load balancer services.

To get started import the collection and environment into Postman.

You’ll see the collection in Postman named NSX-T Load Balancer Setup. All the steps are numbered: first import certificates, then configure the Cloud Director load balancer services. I’ve also included the calls to create the load balancer services for the vRealize Operations Tenant App.

Before you run any of those API calls, you’ll first want to import the Postman environment. Once imported you’ll see the environments in the top right screen of Postman, the environment is called NSX-T Load Balancer Setup.

Complete your environment variables.

Note: for all certificate and private key variables below, the APIs only accept the value as a single line with no spaces in the certificate chain; use \n as the end of line character.

Variable – Value Description
nsx_vip – nsx-t manager cluster virtual ip
nsx-manager-user – nsx-t manager username, usually admin
nsx-manager-password – nsx-t manager password
vcd-public-ip – public ip address for the vcd service to be configured on the load balancer
tenant-app-public-ip – public ip address for the tenant app service to be configured on the load balancer
vcd-cert-name – a name for the imported vcd http certificate
vcd-cert-private-key – vcd http certificate private key in pem format
vcd-cert-passphrase – vcd private key passphrase
vcd-certificate – vcd http certificate in pem format
ca-cert-name – a name for the imported ca root certificate
ca-certificate – ca root certificate in pem format
vcd-node1-name – the hostname for the first vcd appliance
vcd-node1-ip – the dmz ip address for the first vcd appliance
vcd-node2-name – the hostname for the second vcd appliance
vcd-node2-ip – the dmz ip address for the second vcd appliance
vcd-node3-name – the hostname for the third vcd appliance
vcd-node3-ip – the dmz ip address for the third vcd appliance
tenant-app-node-name – the hostname for the vrealize operations tenant app appliance
tenant-app-node-ip – the dmz ip address for the vrealize operations tenant app appliance
tenant-app-cert-name – a name for the imported tenant app certificate
tenant-app-cert-private-key – tenant app certificate private key in pem format
tenant-app-cert-passphrase – tenant app private key passphrase
tenant-app-certificate – tenant app certificate in pem format
tier1-full-path – the full path to the nsx-t tier1 gateway that will run the load balancer, for example /infra/tier-1s/stage1-m-ec01-t1-gw01
vcd-dmz-segment-name – the portgroup name of the vcd dmz portgroup, for example stage1-m-vCDFront
allowed_ip_a – an ip address that is allowed to access the /provider URI and the admin API
allowed_ip_b – an ip address that is allowed to access the /provider URI and the admin API
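Since several of the certificate variables above must be supplied as a single line with \n separators, here is one way to convert a pem file (named cert.pem in this sketch) into that form:

```shell
# Print the pem file as one line, with each line break replaced by a literal \n,
# ready to paste into a Postman environment variable
if [ -f cert.pem ]; then
  awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' cert.pem
fi
```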

Now you’re ready to run the calls.

The collection and environment are available to download from Github.

Protecting Cloud Director with NSX-T Load Balancer L7 HTTP Policies

Running Cloud Director (formerly vCloud Director) over the Internet has its benefits; however, it also opens up the portal to security risks. To prevent this, we can use the native load balancing capabilities of NSX-T to serve only HTTP access to the URIs that are required, preventing access to unnecessary URIs from the rest of the Internet.

An example of this is to disallow the /provider and /cloudapi/1.0.0/sessions/provider URIs as these are provider side administrator only URIs that a service provider uses to manage the cloud and should not be accessible from the Internet.

A previous article of mine describes the safe URIs and unsafe URIs that can be exposed over the Internet; you can find that article here. That article discusses implementing the L7 HTTP policies using Avi. This article goes through how you can achieve the same with the built-in NSX-T load balancer.

This article assumes that you already have the Load Balancer configured with the Cloud Director Virtual Servers, Server Pools and HTTPS Profiles and Monitors already set up. If you need a guide on how to do this, then please visit Tomas Fojta’s article here.

The L7 HTTP rules can be set up under Load Balancing | Virtual Servers. Edit the Virtual Server rule for the Cloud Director service and open up the Load Balancer Rules section.

Click on the Set link next to HTTP Access Phase. I’ve already set mine up so you can see that I already have two rules. You should also end up with two rules once this is complete.

Go ahead and add a new rule with the Add Rule button.

The first rule we want to set up is to prevent access from the Internet to the /provider URI but allow an IP address or group of IP addresses to access the service for provider side administration, such as a management bastion host.

Set up your rule as follows:

What we are doing here is creating a condition so that when the /provider URI is requested, we drop all incoming connections unless the connection is initiated from the management jump box. The Negate option is enabled to achieve this. Think of negate as the opposite of the rule: with negate, connections to /provider are not dropped when the source IP address is that of the jump box.

Here’s the brief explanation from the official NSX-T 3.0 Administration Guide.

If negate is enabled, when Connection Drop is configured, all requests not
matching the specified match condition are dropped. Requests matching the
specified match condition are allowed.

Save this rule and let’s set up another one to prevent access to the admin API. Set up this second rule as follows:

This time use /cloudapi/1.0.0/sessions/provider as the URI. Again, use the Negate option for your management IP address. Save your second rule and Apply all the changes.

Now you should be able to access /tenant URIs over the Internet but not the /provider URI. However, accessing the /provider URI from the management jump box (or whatever your equivalent is) will work.

Doing this with the API

Do a PUT against /policy/api/v1/infra/lb-virtual-servers/vcloud with the following.

(Note that the Terraform provider for NSX-T doesn’t support HTTP Access yet. So to automate, use the NSX-T API directly instead.)

{
  "enabled": true,
  "ip_address": "<IP_address_of_this_load_balancer>",
  "ports": [
    "443"
  ],
  "access_log_enabled": false,
  "lb_persistence_profile_path": "/infra/lb-persistence-profiles/default-source-ip-lb-persistence-profile",
  "lb_service_path": "/infra/lb-services/vcloud",
  "pool_path": "/infra/lb-pools/vcd-appliances",
  "application_profile_path": "/infra/lb-app-profiles/vcd-https",
  "client_ssl_profile_binding": {
    "ssl_profile_path": "/infra/lb-client-ssl-profiles/default-balanced-client-ssl-profile",
    "default_certificate_path": "/infra/certificates/my-signed-certificate",
    "client_auth": "IGNORE",
    "certificate_chain_depth": 3
  },
  "server_ssl_profile_binding": {
    "ssl_profile_path": "/infra/lb-server-ssl-profiles/default-balanced-server-ssl-profile",
    "server_auth": "IGNORE",
    "certificate_chain_depth": 3,
    "client_certificate_path": "/infra/certificates/my-signed-certificate"
  },
  "rules": [
    {
      "match_conditions": [
        {
          "uri": "/cloudapi/1.0.0/sessions/provider",
          "match_type": "CONTAINS",
          "case_sensitive": false,
          "type": "LBHttpRequestUriCondition",
          "inverse": false
        },
        {
          "source_address": "",
          "type": "LBIpHeaderCondition",
          "inverse": true
        }
      ],
      "match_strategy": "ALL",
      "phase": "HTTP_ACCESS",
      "actions": [
        {
          "type": "LBConnectionDropAction"
        }
      ]
    },
    {
      "match_conditions": [
        {
          "uri": "/provider",
          "match_type": "EQUALS",
          "case_sensitive": false,
          "type": "LBHttpRequestUriCondition",
          "inverse": false
        },
        {
          "source_address": "",
          "type": "LBIpHeaderCondition",
          "inverse": true
        }
      ],
      "match_strategy": "ALL",
      "phase": "HTTP_ACCESS",
      "actions": [
        {
          "type": "LBConnectionDropAction"
        }
      ]
    }
  ],
  "log_significant_event_only": false,
  "resource_type": "LBVirtualServer",
  "id": "vcloud",
  "display_name": "vcloud",
  "_revision": 1
}
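Before doing the PUT, it is worth validating the payload locally, since NSX-T rejects malformed JSON with a generic 400 error. A sketch, assuming the body above has been saved as vcloud-vs.json (a hypothetical filename):

```shell
# Validate the request body locally before sending the PUT to NSX-T
if [ -f vcloud-vs.json ]; then
  python3 -m json.tool vcloud-vs.json > /dev/null && echo "payload is valid JSON"
fi
```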

Workflow for end-to-end tenant provisioning with VMware Cloud Director

Firstly, apologies to all those who asked for the workflow at VMworld 2019 in Barcelona and also e-mailed me for a copy. It’s been hectic in my professional and personal life. I also wanted to clean up the workflows and remove any customer specific items that are not relevant to this workflow. Sorry it took so long!

If you’d like to see an explanation video of the workflows in action, please take a look at the VMworld session recording.


These vRealize Orchestrator workflows were co-created and developed by Benoit Serratrice and Henri Timmerman.

You can download a copy of the workflow using this link here.

What does it do?

Commission Customer Process

The workflow does the following:

  1. Creates an organization based on your initial organisation name as an input.
  2. Creates a vDC in this organization.
  3. Adds a gateway to the vDC.
  4. Adds a routed network with a gateway CIDR that you enter.
  5. Adds a direct external network.
  6. Converts the organization network to use distributed routing.
  7. Adds a default outbound firewall rule for the routed network.
  8. Adds a source NAT rule to allow the routed network to go to the external network.
  9. Adds a catalog.
Commission Customer vRO Workflow

It also cleans up the provisioning if there is a failure. I have also included a separate Decommission Customer workflow to enable you to delete vCD objects quickly and easily. It is designed for lab environments; bear this in mind when using it.

Other caveats: the workflows contained in this package are unsupported. I’ll help in the comments below as much as I can.

Getting Started

Import the package after downloading it from github.

The first thing you need to do is setup the global settings in the Global, Commission, storageProfiles and the other configurations. You can find these under Assets > Configurations.

You should then see the Commission Customer v5 workflow under Workflows in your vRO client, it should look something like this.

Enter a customer name and enter the gateway IP in CIDR into the form.

Press Run, then sit back and enjoy the show.

Known Issues

Commissioning a customer when there are no existing edge gateways deployed that use an external network. You see the following error in the vRO logs:

item: 'Commission Customer v5/item12', state: 'failed', business state: 'null', exception: 'TypeError: Cannot read property "ipAddress" from null (Workflow:Commission Customer v5 / get next ip (item8)#5)'

This happens because no IP addresses are in use from the external network pool. The Commission Customer workflow calculates the next IP address to assign to the edge gateway; it cannot do this if the last IP in use is null. Manually provision something that uses one IP address from the external network IP pool, then run the Commission Customer workflow again and it should now work.

Commissioning a customer workflow completes successfully, however you see the following errors:

[2020-03-22 19:30:44.596] [I] orgNetworkId: 545b5ef4-ff89-415b-b8ef-bae3559a1ac7
[2020-03-22 19:30:44.662] [I] =================================================================== Converting Org network to a distributed interface...
[2020-03-22 19:30:44.667] [I] ** API endpoint:
[2020-03-22 19:30:44.678] [I] error caught!
[2020-03-22 19:30:44.679] [I] error details: InternalError: Cannot execute the request:  (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.680] [I] error details: Cannot execute the request:  (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.728] [I] Network converted succesfully.

The workflow attempts to convert the org network from an internal interface to a distributed interface, but it does not work even though the log says it was successful. Let me know if you are able to fix this.

VMworld 2019 Rewatch: Building a Modern Cloud Hosting Platform on VMware Cloud Foundation with VMware vCloud Director (HBI1321BE)

Rewatch my session with Onni Rautanen at VMworld EMEA 2019 where we cover the clouds that we are building together with Tieto.

Description: In this session, you will get a technical deep dive into Tieto’s next generation service provider cloud hosting platform running on VMware vCloud Director Cloud POD architecture deployed on top of VMware Cloud Foundation. Administrators and cloud engineers will learn from Tieto cloud architects about their scalable design and implementation guidance for building a modern multi-tenant hosting platform for 10,000+ VMs. Other aspects of this session will discuss the API integration of ServiceNow into the VMware cloud stack, Backup and DR, etc.

You’ll need to create a free VMworld account to access this video and many other videos that are made available during and after the VMworld events.

Load Balancing and Protecting Cloud Director with Avi Networks


The Avi Vantage platform is built on software-defined principles, enabling a next generation architecture to deliver the flexibility and simplicity expected by IT and lines of business. The Avi Vantage architecture separates the data and control planes to deliver application services beyond load balancing, such as application analytics, predictive autoscaling, micro-segmentation, and self-service for app owners in both on-premises or cloud environments. The platform provides a centrally managed, dynamic pool of load balancing resources on commodity x86 servers, VMs or containers, to deliver granular services close to individual applications. This allows network services to scale near infinitely without the added complexity of managing hundreds of disparate appliances.

Avi components

Controllers – these are the management appliances that are responsible for state data; Service Engines are deployed by the controllers. The controllers run in a management network.

Service Engines – the load balancing services run in here. These generally run in a DMZ network. Service Engines can have one or more network adaptors connected to multiple networks. At least one network with routing to the controllers, and the remaining networks as data networks.

Deployment modes

Avi can be installed in a variety of deployment types. For VMware Cloud on AWS, it is not currently possible to deploy using ‘write access’ as vCenter is locked-down in VMC and it also has a different API from vSphere 6.7 vCenter Server. You’ll also find that other tools may not work with vCenter in a VMware Cloud on AWS SDDC, such as govc.

Instead Avi needs to be deployed using ‘No Access’ mode.

You can refer to this link for instructions to deploy Avi Controllers in ‘No Access’ mode.

Since it is only possible to use ‘No Access’ mode with VMC based SDDCs, it’s also a requirement to deploy the service engines manually. To do this follow the guide in this link, and start at the section titled Downloading Avi Service Engine on OVA.

If you’re using Avi with on-premises deployments of vCenter, then ‘Write Mode’ can be used to automate the provisioning of service engines. Refer to this link for more information on the different modes.

Deploying Avi Controller with govc

You can deploy the Avi Controller onto non VMware Cloud on AWS vCenter servers using the govc tool. Refer to this other post on how to do so. I’ve copied the JSON for the controller.ova for your convenience below.

{
    "DiskProvisioning": "flat",
    "IPAllocationPolicy": "dhcpPolicy",
    "IPProtocol": "IPv4",
    "PropertyMapping": [
        {
            "Key": "avi.mgmt-ip.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.mgmt-mask.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.default-gw.CONTROLLER",
            "Value": ""
        },
        {
            "Key": "avi.sysadmin-public-key.CONTROLLER",
            "Value": ""
        }
    ],
    "NetworkMapping": [
        {
            "Name": "Management",
            "Network": ""
        }
    ],
    "MarkAsTemplate": false,
    "PowerOn": false,
    "InjectOvfEnv": false,
    "WaitForIP": false,
    "Name": null
}


For a high-level architecture overview, this link provides a great starting point.

Figure 1. Avi architecture

Service Engine Typical Deployment Architecture

Generally, in legacy deployments where BGP is not used, the service engines tend to have three network interfaces. These are typically used for frontend, backend and management networks. This is typical of traditional deployments with F5 LTM, for example.

For our example here, I will use three networks for the SEs as laid out below.

Network name – Gateway CIDR – Purpose

The service engines are configured with the following details. It is important to make a note of the MAC addresses in ‘No access’ mode as you will need this information later.

Service Engine – avi-se1 – avi-se2
Management MAC address – 00:50:56:8d:c0:2e – 00:50:56:8d:38:33
Backend MAC address – 00:50:56:8d:8e:41 – 00:50:56:8d:53:f6
Frontend MAC address – 00:50:56:8d:89:b4 – 00:50:56:8d:80:41

The Management network is used for communications between the SEs and the Avi controllers. For the port requirements, please refer to this link.

The Backend network is used for communications between the SEs and the application that is being load balanced and protected by Avi.

The Frontend network is used for upstream communications to the clients, in this case the northbound router or firewall towards the Internet.

Sample Application

Let’s use VMware Cloud Director as the sample application for configuring Avi. vCD, as it is more commonly known (since renamed VMware Cloud Director), is a cloud platform which is deployed with an Internet facing portal. Due to this, it is always best to protect the portal from malicious attacks by employing a number of methods.

Some of these include SSL termination and web application filtering. The following two documents explain this in more detail.

vCloud Director Security and VMware vCloud Director Security Hardening Guide.

The vCD application is configured as below:

vCD Appliance 1 – eth0 ip address, eth1 ip address
vCD Appliance 2 – eth0 ip address, eth1 ip address
Static route (both appliances) – via

You’ll notice that the eth0 and eth1 interfaces are connected to two different management networks. For vCD, it is generally good practice to separate the two interfaces into separate networks.

Network name – Purpose
sddc-cgw-vcd-mgmt-1 – Frontend: UI/API/VM Remote Console
sddc-cgw-vcd-mgmt-2 – Backend: PostgreSQL, SSH etc.

For simplicity, I also deployed my Avi controllers onto the sddc-cgw-vcd-mgmt-2 network.

The diagram below summarises the above architecture for the HTTP interface for vCD. For this guide, I’ve used VMware Cloud on AWS together with Avi Networks to protect vCD running as an appliance inside the SDDC. This is not a typical deployment model, as Cloud Director service will be able to use VMware Cloud on AWS SDDC resources soon, but I wanted to showcase the possibilities and constraints when using Avi with VMC based SDDCs.

Figure 2 . vCD HTTP Diagram

Configuring Avi for Cloud Director

After you have deployed the Avi Controllers and the Service Engines, there are a few more steps needed before vCD is fully up and operational. The remaining steps can be summarised as follows:

  1. Setup networking for the service engines by assigning the right IP address to the correct MAC addresses for the data networks
  2. Configure the network subnets for the service engines
  3. Configure static routes for the service engines to reach vCD
  4. Setup Legacy HA mode for the service engine group
  5. Setup the SSL certificate for the HTTP service
  6. Setup the Virtual Services for HTTP and Remote Console (VMRC)
  7. Setup the server pools
  8. Setup health monitors
  9. Setup HTTP security policies

Map Service Engine interfaces

Using the Avi Vantage Controller, navigate to Infrastructure > Service Engine, select one of the Service Engines then click on the little pencil icon. Then map the MAC addresses to the correct IP addresses.

Configure the network subnets for the service engines

Navigate to Infrastructure > Networks and create the subnets.

Configure static routes

Navigate to Infrastructure > Routing and set up any static routes. You’ll notice from figure 2 that since the service engine has three network interfaces on different networks, we need to create a static route on the interfaces that do not have the default gateway. This is so the service engines know which gateway to use for particular traffic types; in this case, which gateway to use to route the HTTP and Remote Console traffic southbound to the vCD cells.

Setup Legacy HA mode for the service engine group

Navigate to Infrastructure > Service Engine Group.

Set the HA mode to Legacy HA. This is the simplest configuration; you can use Elastic HA if you wish.

Configure the HTTP and Remote Console Virtual Services

Navigate to Applications > Virtual Services.

Creating a Virtual Service has a few sub-tasks, which include the creation of the downstream server pools and SSL certificates.

Create a new Virtual Service for the HTTP service, this is for the Cloud Director UI and API. Please use this example to create another Virtual Service for the Remote Console.

For the Remote Console service, you will need to accept TCP 443 on the load balancer but connect southbound to the Cloud Director appliances on port TCP 8443. TCP 8443 is the port that VMRC uses as it shares the same IP addresses as the HTTP service.

You may notice that the screenshot is for an already configured Virtual Service for the vCD HTTP service. The server pool and SSL certificate are already configured. Below are the screenshots for those.

Certificate Management

You may already have a signed HTTP certificate that you wish to use with the load balancer for SSL termination. To do so, you will need to use the JAVA keytool to manipulate the HTTP certificate, obtaining the private key and converting from JCEKS to PKCS12. The JAVA keytool is available on the vCD appliance at /opt/vmware/vcloud-director/jre/bin/.

Figure 3. SSL termination on load balancer

For detailed instructions on creating a signed certificate for vCD, please follow this guide.

Convert the keystore file certificates.ks file from JCEKS to PKCS12

keytool -importkeystore -srcstoretype JCEKS -srckeystore certificates.ks -destkeystore certificates_pkcs12.ks -deststoretype PKCS12

Export private key for the HTTP certificate from the certificates_pkcs12.ks file

keytool -importkeystore -srckeystore certificates_pkcs12.ks -srcalias http -destalias http -destkeystore httpcert.p12 -deststoretype PKCS12

Now that you have the private key for the HTTP certificate, you can go ahead and configure the HTTP certificate on the load balancer.

For the certificate file, you can either paste the text or upload the certificate file (.cer, .crt) from the certificate authority for the HTTP certificate.

For the Key (PEM) or PKCS12 file, you can use the httpcert.p12 file that you extracted from the certificates_pkcs12.ks file above.

The Key Passphrase is the password that you used to secure the httpcert.p12 file earlier.
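If you want to double-check the bundle before uploading it, openssl can read PKCS12 files directly. A sketch, assuming the passphrase used earlier was Vmware1! (a placeholder, matching the examples elsewhere in this post):

```shell
# Verify that the PKCS12 bundle opens with the expected passphrase
if [ -f httpcert.p12 ]; then
  openssl pkcs12 -in httpcert.p12 -passin 'pass:Vmware1!' -noout && echo "keystore OK"
fi
```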

Note that the vCD Remote Console (VMRC) must use pass-through for SSL termination, e.g., termination of the VMRC session must happen on the Cloud Director cell. Therefore, the above certificate management activities on Avi are not required for the VMRC.

Health Monitors

Navigate to Applications > Pools.

Edit the HTTP pool using the pencil icon and click on the Add Active Monitor green button.

Health monitoring of the HTTP service uses

GET /cloud/server_status HTTP/1.0

With an expected server response of

Service is up.

And a response code of 200.

The vCD Remote Console Health monitor is a lot simpler as you can see below.

Layer 7 HTTP Security

Layer 7 HTTP security is very important and is highly recommended for any application exposed to the Internet. Layer 3 firewalling and SSL certificates alone are never enough to protect and secure applications.

Navigate to Applications > Virtual Services.

Click on the pencil icon for the HTTP virtual service and then click on the Policies tab. Then click on the HTTP Security policy. Add a new policy with the following settings. You can read more about Layer 7 HTTP policies here.

Allowed Strings – Required by
/tenant – Tenant use
/network – Access to networking
/tenant-networking – Access to networking
/cloud – For SAML/SSO logins
/transfer – Uploads/Downloads of ISO and templates
/api – General API access
/cloudapi – General API access
/docs – Swagger API browser

Blocked Strings
/cloudapi/1.0.0/sessions/provider – Specifically block admin APIs from the Internet

This will drop all provider side services when accessed from the Internet. To access provider side services, such as /provider or admin APIs, use an internal connection to the Cloud Director cells.

Change Cloud Director public addresses

If not already done so, you should also change the public address settings in Cloud Director.

Testing the Cloud Director portal

Try to access

You won’t be able to access it as /provider is not on the list of allowed URI strings that we configured in the L7 HTTPS Security settings.

However, if you try to access, you will be able to reach the tenant portal for the organisation named VMwire.

Many thanks to Mikael Steding, our Avi Network Systems Engineer for helping me with setting this up.

Please reach out to me if you have any questions.

How to deploy vCloud Director Appliance with Terraform and govc

Recently I’ve been looking at a tool to automate the provisioning of the vCloud Director appliance. I wanted something that could quickly take JSON as input for the OVF properties and be able to consistently deploy the appliance with the same outcome. I tried Terraform, however that didn’t quite work out as I expected, as the Terraform provider for vSphere’s vsphere_virtual_machine resource is not able to deploy OVAs or OVFs directly.

Here’s what HashiCorp has to say about that…

NOTE: Neither the vsphere_virtual_machine resource nor the vSphere provider supports importing of OVA or OVF files as this is a workflow that is fundamentally not the domain of Terraform. The supported path for deployment in Terraform is to first import the virtual machine into a template that has not been powered on, and then clone from that template. This can be accomplished with Packer, govc‘s import.ovf and import.ova subcommands, or ovftool.

The way that this could be done is to first import the OVA without vApp properties, then convert it to a template, then use Terraform to create a new VM from that template and use the vapp section to customise the appliance.

vapp {
    properties = {
      "" = "42"
    }
}

This didn’t work for me as not all vApp properties are implemented in the vsphere_virtual_machine resource yet. Let me know if you are able to get this to work.

So that’s where govc came in handy.

govc is a vSphere CLI built on top of govmomi.

The CLI is designed to be a user friendly CLI alternative to the GUI and well suited for automation tasks. It also acts as a test harness for the govmomi APIs and provides working examples of how to use the APIs.

Once you’ve installed govc, you can then setup the environment by entering the following examples into your shell:

export GOVC_URL="https://vcenter-onprem.vcd.lab"

export GOVC_USERNAME='administrator@vsphere.local'

export GOVC_PASSWORD='My$ecureP4ssw0rd!'

export GOVC_INSECURE=true

To deploy the appliance we will use the govc import.ova command.

However, before you can do that, you need to obtain the JSON file that contains all the OVF properties for you to edit and then use as an input into the import.ova options with govc.

To create the JSON file run the following command

govc import.spec /path_to_vcd_appliance.ova | python -m json.tool > vcd-appliance.json

govc import.spec /volumes/STORAGE/Terraform/VMware_vCloud_Director- | python -m json.tool > vcd-appliance.json

Edit the vcd-appliance.json file and enter the parameters for your vCD appliance, then deploy the appliance with the govc import.ova command.
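For reference, here is a trimmed sketch of the spec structure that govc import.spec produces. The top-level field names follow govc’s import spec format, but the property key and values shown are placeholders only — your generated vcd-appliance.json will contain the vCD appliance’s real OVF property keys for you to fill in:

```json
{
  "DiskProvisioning": "flat",
  "IPAllocationPolicy": "dhcpPolicy",
  "IPProtocol": "IPv4",
  "PropertyMapping": [
    { "Key": "example.property.key", "Value": "fill-me-in" }
  ],
  "NetworkMapping": [
    { "Name": "eth0 Network", "Network": "VM Network" }
  ],
  "MarkAsTemplate": false,
  "PowerOn": false,
  "InjectOvfEnv": false,
  "WaitForIP": false,
  "Name": "vcd-appliance"
}
```

Edit the Value entries (and the network mapping) before passing the file to import.ova with --options.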

The format for this command is

govc import.ova --options=/path_to_vcd_appliance.json vcd_appliance.ova

govc import.ova -ds=NVMe --options=/Users/phanh/Downloads/terraformdir/govc/vcd-appliance.json /volumes/STORAGE/Terraform/VMware_vCloud_Director-

You should now see your vCD appliance being deployed to your vCenter server.

This method also works for any OVA/OVF deployment, including the NSX-T Unified Appliance, vROps and vRO.

The next natural step would be to continue the configuration of vCloud Director with the Terraform provider for vCloud Director.

Securing VMware Cloud on AWS remote access to your SDDC with an SSL VPN

The Use Case

What is an SSL VPN?

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialised client software on the end user’s computer.



  • SSL VPN is not an available feature of the Management Gateway or Compute Gateway in VMware Cloud on AWS
  • Enable client VPN connections over SSL to an SDDC in VMware Cloud on AWS for secure access to the resources
  • Avoid site-to-site VPN configurations between on-premises and the Management Gateway
  • Avoid opening vCenter to the Internet

Not all customers want to set up site-to-site VPNs using IPsec or route-based VPNs between their on-premises data centre and an SDDC on VMware Cloud on AWS. A client VPN such as an SSL VPN instead enables a client-side device to set up a secure tunnel directly to the SDDC.


  • Improve remote administrative security
  • Enable users to access SDDC resources, including vCenter, over a secure SSL VPN from anywhere with an Internet connection


This article goes through the requirements and steps needed to get OpenVPN up and running. Of course, you can use any SSL VPN software; OpenVPN is a freely available open-source option that is quick and easy to set up, and is used in this article as a working example.

Review the following basic requirements before proceeding:

  • Access to your VMware Cloud on AWS SDDC
  • Basic knowledge of Linux
  • Basic knowledge of VMware vSphere
  • Basic knowledge of firewall administration


vCenter Server

In this section you’ll deploy the OpenVPN appliance. The steps can be summarised below:

  • Download the OpenVPN appliance to the SDDC. The latest VMware version is available with this link:

Make a note of the IP address of the appliance; you’ll need this later to NAT a public IP to this internal IP for the HTTPS service. My appliance is using an IP of

  • Log in as root with the default password of openvpnas, then change the password for the openvpn user. This user is used for administering the admin web interface for OpenVPN.

VMware Cloud on AWS

In this section you’ll need to create a number of firewall rules as summarised in the tables further below.

Here’s a quick diagram to show how the components relate.

What does the workflow look like?

  1. A user connects to OpenVPN over the SSL VPN using the public IP address
  2. HTTPS (TCP 443) is NAT’d from the public IP to the OpenVPN appliance’s internal IP on the HTTPS service.
  3. OpenVPN is configured with the subnets that VPN users are allowed to access (the OpenVPN-network subnet and the Infrastructure Subnet). OpenVPN configures the SSL VPN tunnel to route to these two subnets.
  4. The user can then open a browser session on their laptop and connect to vCenter server using its internal IP address

Rules Configured on Management Gateway

Rule # Rule name Source Destination Services Action
1 Allow the OpenVPN appliance to access vCenter only on port 443 OpenVPN appliance vCenter HTTPS Allow

The rule should look similar to the following.

Rules Configured on Compute Gateway

Rule # Rule name Source Destination Services Action
2 Allow port 443 access to the OpenVPN appliance Any OpenVPN appliance HTTPS Allow
3 Allow the OpenVPN-network outbound access to any destination OpenVPN-network Any Any Allow

The two rules should look similar to the following.

I won’t go into detail on how to create these rules. However, you will need to create a few User Defined Groups for some of the Source and Destination objects.

NAT Rules

Rule name Public IP Service Public Ports Internal IP Internal Ports
NAT HTTPS Public IP to OpenVPN appliance HTTPS 443 443

You’ll need to request a new Public IP before configuring the NAT rule.

The NAT rule should look similar to the following.

OpenVPN Configuration

We need to configure OpenVPN before it will accept SSL VPN connections. Ensure you’ve gone through the initial configuration detailed in this document

  • Connect to the OpenVPNAppliance VM using a web browser. The URL for my appliance is
  • Login using openvpn and use the password you set earlier.

  • Click on the Admin button

Configure Network Settings

  • Click on Network Settings and enter the public IP that was issued by VMware Cloud on AWS earlier.
  • Also, only enable the TCP daemon.

  • Leave everything else on default settings.
  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Routing

  • Click on VPN Settings and enter the subnet that vCenter runs on under the Routing section. I use the Infrastructure Subnet.

  • Leave all other settings default, however this depends on what you configured when you deployed the OpenVPN appliance initially. My settings are below:

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Configure Users and Users’ access to networks

  • Click on User Permissions and add a new user
  • Click on the More Settings pencil icon, configure a password, and add the subnets that you want this user to be able to access: the OpenVPN-network subnet, and the Infrastructure Subnet used by vCenter and ESXi in the SDDC. This will allow clients connected through the SSL VPN to connect directly to vCenter.

If you don’t know the Infrastructure Subnet you can obtain it by going to Network & Security > Overview

  • Press Save Settings at the bottom.
  • Press the Update Running Server button.

Installing the OpenVPN SSL VPN client onto a client device

The desktop client is only required if you do not want to use the web browser to initiate the SSL VPN. Unfortunately, we need signed certificates configured on OpenVPN to use the browser. I don’t have any for this example, so we will use the desktop client to connect instead.

For this section I will use my laptop to connect to the VPN.

  • Open up a HTTPS browser session to the public IP address that was provisioned by VMware Cloud on AWS earlier. For me this is
  • Accept any certificates to proceed. Of course, you can use real signed certificates with your OpenVPN configuration.
  • Enter the username of the user that was created earlier, the password and select the Connect button.

  • Click on the continue link to download the SSL VPN client

  • Once downloaded, launch the installation file.
  • Once complete, you can close the browser; it won’t connect automatically because we are not using signed certificates.

Connecting to the OpenVPN SSL VPN client from a client device

Now that the SSL VPN client is installed we can open an SSL VPN tunnel.

  • Launch the OpenVPN Connect client. I’m on OSX, so Spotlight (⌘ + Space) and typing “OpenVPN Connect” will bring up the client.
  • Once launched, you can click on the small icon at the top of your screen.

  • Connect to the public IP relevant to your OpenVPN configuration.
  • Enter the credentials then click on Connect.
  • Accept all certificate prompts and the VPN should now be connected.

Connect to vCenter

Open up a HTTPS browser session and use the internal IP address of vCenter. You may need to add a hosts file entry for the public FQDN for vCenter to redirect to the internal IP instead. That’s it! You’re now accessing vCenter over an SSL VPN.
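For example, a hosts file entry might look like the following. Both the IP address and FQDN here are made-up placeholders; substitute your own vCenter’s internal IP and public FQDN:

```
# /etc/hosts on macOS/Linux, or C:\Windows\System32\drivers\etc\hosts on Windows
10.2.32.4    vcenter.sddc-12-34-56-78.vmwarevmc.com
```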

It’s also possible to use this method to connect to other network segments. Just follow the procedures above to add additional network segments and rules in the Compute Gateway and also add additional subnets to the Access Control section when adding/editing users to OpenVPN.

Call to Action

Learn more with these resources:

Using FaceTime on your Mac for Conference Calls with Webex, GoToMeeting and GlobalMeet

If, like me, you’re generally plugged into your laptop with a headset when working in a nice comfy place, and dislike using your cellphone’s speaker and mic or the Apple headset for calls, then you probably prefer to take calls on your laptop using the Calls From iPhone feature.


This enables you to easily transition from whatever you were doing on your laptop – listening to Apple Music, watching YouTube and so on – and seamlessly pick up a call or make a new call directly from your laptop. The benefit is that you don’t need to take off your headset and can continue working without switching devices or changing audio inputs, even with Bluetooth-connected headsets.

But have you noticed that the FaceTime interface on OSX has no keypad? This is a problem when you need to pick up a call-back from Webex, for example. Webex asks you to press ‘1’ on the keypad to be connected to the conference. Likewise, if you need to dial into a conference call with Webex, GoToMeeting or GlobalMeet, you’ll need a keypad to enter the correct input, generally followed by ‘#’, to connect. That’s a little difficult with no keypad, right?

If you tried to open up the keypad on your iPhone whilst connected to a call on your Mac, then the audio will transfer from your Mac to your iPhone and you cannot transfer it back.


Luckily there is a workaround. Well two actually, one will enable you to use the call-back functions from conference call systems and the other will enable you to dial into the meeting room directly.

When you receive a call-back call from Webex for example, and are asked to enter ‘1’ to continue, press the Mute button, then use your keyboard’s keys to provide the necessary inputs – press Mute, press 1, press #, then unmute as necessary.


The second workaround involves direct dial: just type or paste the conference number and attendee access code directly into FaceTime before making the call. A comma ‘,’ sends a pause to the call, giving the system time to answer before the attendee access code and any other inputs are sent. For example, dialling 4085550100,,123456# (illustrative numbers) calls the bridge, pauses, then sends the access code followed by ‘#’.


I find that both these work very well for me, mute works for call-back functions and direct-dial works very well when I need to join a call directly. The mute workaround is also very effective when using an IVR phone system too, think banking, customer services systems.

I hope this helps!

Atlantis USX 3.5 – What’s New?

I’m excited to announce the latest enhancements to the Atlantis USX product following the release of Atlantis USX 3.5.

Before we delve too deep in what’s new in USX 3.5, let’s take a brief recap on some of the innovative features from our previous releases.

We delivered USX 2.2 back in February 2015 with XenServer support and LDAP authentication. USX 3.0 followed in August 2015 with support for VMware VVOLs, volume-level snapshots and replication, and the release of Atlantis Insight. USX 3.1 gave us deduplication-aware stretched clusters and multi-site disaster recovery in October 2015. Two-node clusters were enabled in USX 3.1.2, along with enhancements to SnapClone for workspace, in January 2016.

Some of these were industry-first features: support for VMware VVOLs on a hyperconverged platform, all-flash hyperconverged before it became an industry standard, and deduplication-aware stretched clusters using the Teleport technology that we pioneered in 2014 and released with USX 2.0.


Figure 1. Consistent Innovation

The feature richness and consistent innovation is something that we strive to continue to deliver with USX 3.5 coupled with additional stability and operationally ready feature set.

Let’s look at the key areas of this latest release and what makes it different from previous versions. The three main themes of the USX 3.5 enhancements are Simplify, Solidify and Optimize, all targeted at providing a better user experience for both administrators and end users.


XenServer 7 – USX 3.5 adds support for running USX on XenServer 7, in addition to vSphere 6.2.

Health Checks – We’ve added the ability to perform system health checks at any time, which is useful when planning a new installation or an upgrade of USX. You can also run a health check on your USX environment at any time to make sure that everything is functioning as it should; this helps identify configuration issues prior to deploying volumes. The tool gives a pass or fail result for each test item. Not every failed item prevents you from continuing your deployment; such items are flagged as warnings. For example, Internet accessibility is not a requirement for USX – it is only used to upload Insight logs or check for USX updates.


Figure 2. Health Checks

Operational Simplicity – making things easier to do. On-demand SnapClone has been added to the USX user interface (UI). This lets you create a full SnapClone – essentially a full backup of the contents of an in-memory volume to disk – before any maintenance is done on that volume. When you need to quickly take a hypervisor host down for maintenance, the ability to instantly create a SnapClone through the UI makes this much easier than in previous versions.


Figure 3. On-demand and scheduled SnapClones

Simple Maintenance Mode – We’ve also added maintenance mode for Simple Volumes. Simple Volumes can be located on local storage to present the memory from that hypervisor as a high-performance in-memory volume for virtual machine workloads such as VDI desktops. You can now enable maintenance mode on simple volumes using the Atlantis USX Manager UI or the REST API. This migrates the volume from one host to another, enabling you to put the source host into maintenance mode to perform any maintenance operations. It works with both VMware and Citrix hypervisors.


Figure 4. Simple Maintenance Mode


Alerting is an area that has also been improved. We have added new alerts to highlight utilization of the backing disk that a volume uses, and alerts for snapshot utilization are now available too. Alerts are designed to be non-invasive, yet highly visible within the Alerts menu in the USX web UI for quick access.

Disaster Recovery for Simple Hybrid Volumes

Although this is a new feature in USX 3.5, we’ve actually been deploying it with some of our larger customers for a few years, and the automation and workflows are now exposed in the USX 3.5 UI. This feature enables simple hybrid volumes to be replicated by underlying replication-enabled storage; coupled with the automation and workflows, simple hybrid volumes can be recovered at the DR site, with volume objects such as export IP addresses and volume identities changed to suit the environment at the DR site.


The Plugin Framework is now a key part of USX’s capabilities. It is an additional framework integrated into the USX web UI that allows importing and running Atlantis- and community-created plugins, written in Python, that extend the functionality of USX – for example, guest VM operations or guest VM queries. These plugins enable guest-side operations such as restarting all VMs within a USX volume, or querying the DNS name of every guest VM residing in a USX volume.


Figure 5. USX Plugin Framework

I hope you’ll agree that the plugin framework will provide an additional level of capabilities on top of the great capabilities we already have for automation and management such as the USX REST API and USX PowerShell Cmdlets.

Reduced resource requirements for volume container memory – we’ve decreased the metadata memory requirement by 40%. In previous versions the amount of memory assigned to metadata was a percentage of the volume export size before data reduction; for example, if you exported a volume of 1TB in size, the amount of memory reserved for metadata would be 50GB. With USX 3.5 this is reduced to just 30GB, while still providing the same great performance and data reduction capabilities with fewer memory resources. USX 3.5 optimizations also include a reduction in the local flash storage required for the performance tier when using hybrid volumes – we’ve decreased the flash storage requirement by 95%!

In addition to reducing the metadata and local flash requirements, we’ve also reduced the amount of storage required for SnapClone space by 50%. This shrinks the SnapClone storage footprint on the underlying local or shared storage, so you need less storage to run USX.


ROBO to support vSphere Essentials.

The ROBO use case is now even more cost effective with USX 3.5. This enhancement enables the VMware vSphere Essentials licensing model for customers who prefer the VMware hypervisor over Citrix XenServer. It is a great option for remote and branch offices with three or fewer servers that want high-performance, data-reduction-aware storage at remote sites.


Atlantis USX 3.5 is available now from the Atlantis Portal. Download now and let me know what you think of the new capabilities.


Release notes and online documentation are available here.

Deduplication – By the Numbers

Atlantis HyperScale appliances come with effective capacities of 12TB, 24TB and 48TB depending on the model that is deployed. These capacities are what we refer to as effective capacity, i.e. the capacity available after the in-line de-duplication that occurs when data is stored onto HyperScale volumes. HyperScale volumes always de-duplicate data before writing it down to the local flash drives. This is known as in-line de-duplication, which is very different from post-process de-duplication, where data is de-duplicated after it has been written to disk; the latter incurs a storage capacity overhead because you need enough capacity to hold the data before the post-process de-duplication can run. This is why HyperScale appliances only require three SSDs per node to provide 12TB of effective capacity at 70% de-duplication.

Breaking it down

HyperScale SuperMicro CX-12
Number of nodes 4
Number of SSDs per node 3
SSD capacity 400GB
Usable flash capacity per node 1,200GB
Cluster RAW flash capacity 4,800GB
Cluster failure tolerance 1
Usable flash capacity per cluster 3,600GB
Effective capacity with 70% dedupe 12,000GB


Data Reduction Table

De-Dupe Rate (%) Reduction Rate (X)
50% 2.0x
60% 2.5x
70% 3.3x
80% 5.0x
90% 10.0x

Formula for calculating Reduction Rate: Reduction Rate (X) = 1 / (1 – De-Dupe Rate)

Taking the usable capacity of a typical HyperScale appliance, 3,600GB, at 70% de-duplication this gives 12,000GB of effective capacity.


HyperScale provides a guarantee of 12TB per CX-12 appliance, however some workloads such as DEV/TEST private clouds and stateless VDI workloads could see as much as 90% data reduction. That’s 36,000GB of effective capacity. Do the numbers yourself, in-line de-duplication eliminates the need for lots of local flash drives or slower high capacity SAS or SATA drives. HyperScale runs the same codebase as USX and as such utilizes RAM to perform the in-line de-duplication which eliminates the need for add-in hardware cards or SSDs as staging capacity for de-duplication.
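The arithmetic above can be sketched in a few lines. The reduction-rate formula below is inferred from the figures quoted in this post (70% de-dupe on 3,600GB usable giving 12,000GB effective, 90% giving 36,000GB):

```python
def reduction_rate(dedupe_rate: float) -> float:
    """Convert a de-duplication ratio (0.0-1.0) into a reduction multiplier (X)."""
    return 1 / (1 - dedupe_rate)

def effective_capacity_gb(usable_gb: float, dedupe_rate: float) -> int:
    """Effective capacity after in-line de-duplication, rounded to whole GB."""
    return round(usable_gb * reduction_rate(dedupe_rate))

# A CX-12 cluster: 4 nodes x 3 x 400GB SSDs = 4,800GB raw flash;
# with one node of failure tolerance (FTT=1) that leaves 3,600GB usable.
usable_gb = 3600
print(effective_capacity_gb(usable_gb, 0.70))  # 12000
print(effective_capacity_gb(usable_gb, 0.90))  # 36000
```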

For more information please visit this site

Introducing Atlantis HyperScale

What is it?

A hyper-converged appliance running pre-installed USX software on either XenServer or VMware vSphere and on the hardware of your choice – Lenovo, HP, SuperMicro and Cisco.

How is it installed?

HyperScale comes pre-installed by Atlantis Channel Partners. HyperScale runs exactly the same software as USX, however HyperScale is installed automatically from USB key by the Channel Partner. When it is delivered to your datacenter, it is a simple 5 step process to get the HyperScale appliance ready to use.

Watch the video.

Step 1

Step 2

Step 3

Step 4

Step 5


What do you get?

The appliance is ready to use in about 30 minutes with three data stores ready for use. You can of course create more volumes and also attach and optimize external storage such as NAS/SAN in addition to the local flash devices that come with the appliance.

Atlantis HyperScale Server Specifications
Server Specifications Per Node CX-12 CX-24 CX-48 (Phase 2)
Server Compute Dual Intel E5-2680 v3
Hypervisor VMware vSphere 5.5 or Citrix XenServer 6.5
Memory 256-512 GB 384-512 GB TBD
Networking 2x 10GbE & 2x 1GbE
Local Flash Storage 3x 400GB Intel 3710 SSD 3x 800GB Intel 3710 SSD TBD
Total All-Flash Effective Capacity (4 Nodes)* 12 TB 24 TB 48 TB
Failure Tolerance 1 node failure (FTT=1)
Number of Deployed Volumes 3
IOPs per Volume More than 50,000 IOPs
Latency per Volume Less than 1ms
Throughput per Volume More than 210 MB/s

Key Differentiators vs other Hyper-converged Offerings

Apart from lower cost (more on that in another post – or you can read this post from Chris Mellor at The Register), HyperScale runs exactly the same codebase as USX. USX has advanced data services built on patented technology that provide very efficient data reduction and IO acceleration. For a brief overview of the data services please see this video.



Number of nodes = 4

SSDs per node = 3

SSD capacity = 400GB

Usable capacity per node = 1200GB

Usable capacity per appliance with FTT=1 = 3,600GB

Effective capacity with 70% de-duplication = 12,000GB


USX 2.1 Whats New?

USX 2.1 is now available. This release includes some major milestones along with a number of minor improvements.

USX Volume Dashboard

Major milestones:

  1. VMware support for USX on the VMware HCL, the VMware KB is in this link.
  2. VMware support for Atlantis NAS VAAI Plugin, the VMware Compatibility Guide for USX is in this link.
    • The Atlantis NAS VAAI Plugin is now officially supported as VMwareAccepted: VIBs with this acceptance level go through verification testing, but the tests do not fully test every function of the software. The partner runs the tests and VMware verifies the result. Today, CIM providers and PSA plugins are among the VIBs published at this level. VMware directs support calls for VIBs with this acceptance level to the partner’s support organization.
    • Atlantis NAS VAAI Plugin can now be installed using VMware Update Manager.

Minor improvements:

  1. Added Incremental Backups for SnapClones for Simple Volumes (VDI use cases).
  2. Added Session Timeout – A new preference was added so that you can configure the number of minutes that a session can be idle before it is terminated.
  3. Added vCenter hierarchy view for Mount and Unmount of Volumes.
  4. Added a new Volume Dashboard with availability and status reporting improvements, including colour codes for various conditions, all of which roll up to a redesigned volume dashboard that provides an overview of a volume’s configuration, resource use, and health.
  5. Improved Status Updates.
  6. Added Active Directory and LDAP Authentication to USX Manager.
  7. Added option to have one node failure for USX clusters of up to 5 nodes. Previously this was up to 4 nodes.
  8. REST API to support changing the USX database server.

Coming to VMworld US? Get yourself a VVOL compliant all software storage array

This article details all of my and Atlantis’ activities at VMworld US. Read on for an introduction to what we will be doing and announcing, plus a sneak peek at our upcoming technology roadmap that solves some of the major business issues around performance, capacity and availability today. It is indeed going to be a VMworld with ‘no limits’, and one of the great innovations that we will be announcing is Teleport. More on this later!

Teleport your files
No limits with Teleport

I’ll be at VMworld in San Francisco from Saturday 23rd August until Thursday 28th August, where I’ll be representing the USX team, looking after the Hands on Labs, running live demos and having expert one-on-ones at the booth. Come and visit to learn more about USX and how I can help you get more performance and capacity out of your VMware and storage infrastructure. I’d love to hear from you.

Where can you find me?

Atlantis is a Gold sponsor this year with Hands on Labs, a booth and multiple speaking sessions. Read on to find out what we’ll be announcing and where you can find my colleagues and me.

Booth in the Exhibitor Hall

I’ll mostly be located at booth 1529; you can find me and my colleagues next to the main VMware booth. Just head straight up past the HP, EMC, NetApp and Dell stands and come speak to me about how USX can help you claim more performance and capacity from these great enterprise storage arrays.

Speak to me about USX data services and I’ll show you some great live demos on how you can reclaim up to 5 times your storage capacity and gain 10 times more performance out of your VMware environment.

Here’s one showing USX as storage for vCloud Director in a Service Provider context and also for Horizon View.

If that’s not enough then come and speak to me about some of these great innovations:

  • If you’ve been waiting for a VVOL compliant all software vendor to try VVOLs with vSphere 6 beta then wait no more.
  • VMware VVOL Support – all of your storage past, present and future instantly become VVOL compliant with USX.
  • Teleport – the vMotion of the storage world which gives you the ability to move VMs, VMDKs and files between multiple data centers and the cloud in seconds to improve agility (if you’re thinking its Storage vMotion, trust me it is not).
  • And more….

Location in Solutions Exchange


We have three breakout sessions this year, two of them with our customers UHL and Northrim Bank, where Dave Rose and Erick Stoeckle respectively will take you through how they use USX in production.

The other breakout session is focused on VVols, VASA, VSAN and USX Data Services and will be delivered by our CTO and Founder Chetan Venkatesh (@chetan_). If you have not had the pleasure of hearing Chetan speak before, then please don’t miss this opportunity. The guy is insane and uses just one slide with one picture to explain everything to you. He is a great storyteller and you shouldn’t miss it – even if it’s just for the F bombs that he likes to drop.

Chetan will also do a repeat 20-minute condensed session in the Solutions Exchange for a brain dump of Atlantis USX Data Services. Don’t miss this! Chetan will take you through the great new technology in the Atlantis kitbag.

Session Title Speaker(s) When Where
STP3212 – Unleashing the Awesomeness of the SDDC with Atlantis USX Chetan Venkatesh – Founder and CTO, Atlantis Computing Tuesday, Aug 26, 11:20 AM – 11:40 AM Solutions Exchange Theater Booth 1901
INF2951-SPO – Unleashing SDDC Awesomeness with Atlantis USX: Building a Storage Infrastructure for Tier 1 VMs with vVOLS, VASA, VSAN and Atlantis USX Data Services Chetan Venkatesh – Founder and CTO, Atlantis Computing Wednesday, Aug 27, 12:30 PM – 1:30 PM Somewhere in the Moscone (TBC)
EUC2654 – UK Hospital Switches From Citrix XenApp to VMware Horizon Saving £2.5 Million and Improving Patient Care Dave Rose – Head of Design authority, UHL
Seth Knox – VP Products, Atlantis Computing
Wednesday, Aug 27, 1:00 PM – 2:00 PM Somewhere in the Moscone (TBC)
STO2767 – Northrim Bank and USX Erick Stoeckle , Northrim Bank
Nishi Das – Director of Product Management, ILIO USX, Atlantis Computing Inc.
Thursday, Aug 28, 1:30 PM – 2:30 PM Somewhere in the Moscone (TBC)

Hands on Labs

You can find the hands on labs in the Hands on Labs hall, I’ll also be here to support you if you’re taking this lab. The Atlantis USX HOL is titled:

HOL-PRT-1465 – Build a Software-based Storage Infrastructure for Tier 1 VM Workloads with Atlantis USX Data Services.

This HOL consists of three modules, each of which can be taken separately or one after the other.

Modules 1 and 2 are read and click modules where you will follow the instructions in the lab guide and create the USX constructs using the Atlantis USX GUI.

Module 3 however uses the Atlantis USX API browser to quickly perform the steps in Module 1 with some JSON code.

All three modules will take you approximately an hour and a half to complete.

I had an interesting time writing this lab which was a balancing exercise in working with the limited resources assigned to my Org VDC. Please provide feedback on this lab if you can, it’ll help with future versions of this HOL. Just tweet me at @hugophan. Thanks!

Note that performance will be an issue because we are using the VMworld Hands on Labs hosted on Project NEE/OneCloud. This is a vCloud Director cloud in which the ESXi servers that you will see in vCenter are actually all virtual machines. Any VMs that you run on these ESXi servers will themselves be what we call nested VMs. In some cases you could actually see two or more nested levels. How’s that for inception? Just be aware that the labs are for a GUI, concept and usability feel, and not for performance.

If you want to see performance, come to our booth!

VMware Hands on Labs with 3 layers of nested VMs!

Hands on Labs modules

Module #


Module Title

Atlantis USX – Deploying together with VMware VSAN to deliver optimized local storage

Module Narrative

Using Atlantis USX, IT organizations can pool VSANs with existing shared storage, while optimizing it with Atlantis USX In-Memory storage technology to boost performance, reduce storage capacity and provide storage services such as high availability, fast cloning and unified management across all datacenter storage hardware. The student will be taken through how to build a Hybrid virtual volume that optimizes VMware VSAN, allowing it to deliver high-performing virtual workloads from local storage.

  • Build an USX Capacity Pool using the underlying VMware VSAN datastore
  • Build an USX performance pool from local server RAM
  • Build a Hybrid USX virtual volume suitable for running SQL Server
  • Present the Atlantis USX virtual volume to ESX over NFS

Module Objectives
Development Notes

A customer has built a resilient datastore from local storage using VSAN. This is then pooled by Atlantis USX to provide the Deduplication and I/O optimization that server workloads require. A joint whitepaper of this solution has already been written here:
Estimated module duration: 45 minutes


Module #


Module Title

Atlantis USX – Build In Memory Storage

Module Narrative

With Atlantis USX In-Memory storage optimization, processing computationally intensive analytics becomes easier and more cost effective, allowing an increased amount of data to be processed per node and reducing the time to complete these IO-intensive jobs; workloads may include Hadoop, Splunk and MongoDB. During this lab the student will be taken through how to build an Atlantis USX virtual volume using local server memory.

  • Build a USX Performance Pool aggregating server RAM from a number of ESX hosts.
    • Log into the web based management interface, and connect it to the vCenter hosting the ESX infrastructure
    • Export the memory from the three ESX hosts onto the network using Atlantis aggregation technology.
    • Combine the discrete RAM resource into a protected performance pool with the Pool creation wizard.
  • Build an In-Memory virtual volume suitable for running a big data application
    • Run through the Create Virtual Volume wizard selecting In-Memory and deploying the In-Memory Virtual Volume
  • Present the Virtual Volume (datastore) to ESX over NFS.
    • Add the newly created datastore into ESX.
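The last step in both modules — presenting the virtual volume to ESX over NFS — can also be done from the ESXi command line rather than the vSphere client. As a rough sketch (the hostname, export path and datastore label below are placeholders, not values from the lab), mounting an NFS export as a datastore looks like this:

```shell
# Mount a USX-exported NFS share as an ESXi datastore.
# Hostname, share path and volume name are illustrative placeholders.
esxcli storage nfs add \
  --host usx-volume.example.com \
  --share /exports/usx-hybrid-vol01 \
  --volume-name usx-hybrid-vol01

# Verify the new datastore is mounted
esxcli storage nfs list
```

In the lab itself you would use the Add Storage wizard in the vSphere client; the CLI form is handy when scripting against many hosts.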

Module Objectives
Development Notes

The use case for this lab is increasing application performance by taking advantage of the storage optimization features in Atlantis USX. Estimated module duration: 30 minutes


Module #


Module Title

Atlantis USX – Using the RESTful API to drive automation and orchestration to scale a software-based storage infrastructure

Module Narrative

Atlantis USX has a powerful set of RESTful APIs. This module will give you insight into those APIs by using them to build out a Virtual Volume. In this module you will:

  • Connect to the USX API browser and review the available APIs
  • Create a Capacity and Memory Pool with the API
  • Create a Virtual Volume with the API

Module Objectives
Development Notes

The intent of this lab is to provide an example of how to use the Atlantis USX RESTful API to deploy USX at scale. Estimated module duration: 15 minutes
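To give a flavour of what driving a storage platform through a REST API involves, here is a minimal sketch of assembling a "create virtual volume" request. The base URL, endpoint path and JSON field names below are assumptions for illustration only — the lab's USX API browser is where you would discover the real schema.

```python
import json

# Hypothetical USX manager endpoint -- replace with your own deployment.
API_BASE = "https://usx-manager.example.com/usxmanager"

def build_volume_request(name, size_gb, volume_type="HYBRID"):
    """Return (url, json_body) for a hypothetical 'create virtual volume' call.

    Field names are illustrative assumptions, not the documented USX schema.
    """
    body = {
        "volumename": name,
        "size_gb": size_gb,
        "volumetype": volume_type,  # e.g. "IN_MEMORY" or "HYBRID"
        "export": "NFS",            # present the volume to ESX over NFS
    }
    return API_BASE + "/usx/inventory/volumes", json.dumps(body)

url, payload = build_volume_request("sql-vol-01", 500)
```

The resulting URL and JSON body would then be sent with an authenticated HTTP POST; separating "build the request" from "send it" keeps the payload logic easy to script and test at scale.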


Oculus giveaway! See the reality in software defined storage

That’s right! I’ll be giving some of these away at the booth, make sure you stop by to see the new reality in software defined storage!

You can also pick up some of the usual freebies like T-shirts, pens, notepads etc.

There are also Google Glasses, Chromecasts, quad copters and others. We’re also working on something special. Watch this space.

Live Demos at the Booth

Come and speak to me and my colleagues to learn how USX works. We will be running live demos of the following subjects:

  • USX – Storage Consolidation for any workload on any storage.
  • USX – Database Performance Acceleration.
    • Run Tier-1 workloads on commodity hardware.
    • Run Tier-1 high performance workloads on non all-flash or hybrid storage arrays.
  • USX – All Flash Hyper-Converged with SuperMicro/IBM/SANdisk.
  • USX – Teleport (think vMotion for VMs, VMDKs and files over long distances and high latency links). Come talk to me for a live demo.

Beam me up Scotty!

  • USX Tech Preview – Cloud Gateway – using USX data services with AWS S3 as primary storage.
  • USX – VDI on USX on VSAN.
  • VDI – NVIDIA 3D Graphics.

Atlantis Events

SF Giants Game

SF Giants Game, Mon, Aug 25th at 19:00. Please contact your Atlantis Representative or ping me a note if you haven’t received an invite.

USX Partner Training & Breakfast, Wed, Aug 27th at 08:00. Please contact your Atlantis Representative or ping me a note if you’re an Atlantis Partner but have not received an invite.

Let’s meet up!

If you’re at VMworld or in the SF Bay area then let’s meet up and expand our networks.

Event Date | Hours | Event Name | Where | Register
Sat, Aug 23rd | 19:00 – 22:00 | VMworld Community Kickoff | Johnny Foley's, 243 O'Farrell Street |
Sun, Aug 24th | 13:00 – 16:00 | #Opening Acts | City View at Metreon |
Sun, Aug 24th | 15:00 – 17:00 | #v0dgeball Charity Tournament | SOMA Rec Center – Corner of Folsom and 6th Streets |
Sun, Aug 24th | 16:00 – 19:00 | VMworld Welcome Reception | Solutions Exchange, Moscone Center | n/a
Sun, Aug 24th | 20:00 – 23:00 | #VMunderground | City View at Metreon |
Mon, Aug 25th | 19:00 – 23:00 | #vFlipCup VMworld Community TweetUp | Folsom Street Foundry |
Tues, Aug 26th | 16:30 – 18:00 | Hall Crawl | Solutions Exchange, Moscone Center | n/a
Tues, Aug 26th | 19:00 – 22:00 | #VCDX, #vExpert Party | E&O Restaurant & Lounge, 314 Sutter St | Invite only
Tues, Aug 26th | 20:00 – 23:00 | #vBacon | Ferry Building, 1 Sausalito |
Wed, Aug 27th | 17:00 – 19:00 | VMware vCHS Tweetup | 111 Minna |
Wed, Aug 27th | 19:00 – 22:00 | VMworld Party | Moscone Center | n/a

Can’t meet up?

Follow me and my colleagues on Twitter for live updates during VMworld and send us messages and questions, we’d love to hear from you.

Hugo Phan @hugophan

Chetan Venkatesh @chetan_

Seth Knox @seth_knox

Mark Nijmeijer @MarkNijmeijerCA

Gregg Holzrichter @gholzrichter

Toby Colleridge @tobyjcol

Is Atlantis USX the future of Software Defined Storage?

More insight on #USX in a great write-up from Storage Swiss – The Home of Storage Switzerland

Software Defined Storage (SDS) has certainly caught the attention of IT planners looking to reduce the cost of storage by liberating them from traditional storage hardware lock-in. As SDS evolves the promise of lower storage CAPEX, increased deployment and architecture flexibility, paired with lower OPEX through decreased complexity may emerge from suppliers of this technology. Atlantis USX looks to lead this trend, claiming to deliver all-flash array performance for half the cost of a traditional SAN.

Atlantis USX Architecture

From an architectural perspective, USX has the same roots as Atlantis’s VDI solution, except that it’s focused on virtual server workloads instead of virtual desktops. As part of the enhancements for server virtualization, USX has added the ability to pool any storage resource between servers (SAN, NAS, Flash, RAM, SAS, SATA), it’s added data protection to ensure reliability in case of a host failure and has built its own high availability…

View original post 1,335 more words

Virtual Volumes – Explained with Carousels, Horses and Unicorns – in pictures


[Tongue in cheek. There’s no World Cup on today so I made this. Please don’t take this too seriously.]

A SAN is like a carousel

  • It provides capacity (just like a carousel) and performance (when the carousel goes around).
  • People ride on static horses bolted to the carousel and try to enjoy the ride.
  • This horse is like a LUN. The horse does not know who is riding it.
  • Everybody travels at the same speed unless you happen to sit on the outside where things go a little bit faster.
  • The speed is relative to how fast the carousel rotates and how quickly you can get to an outside seat (if you want that extra speed and wind through your hair).
  • If you want to guarantee an outside seat, you can get to the front of the queue by having a FastPass+.
  • Get a bigger motor, or increase the speed, the carousel…

View original post 168 more words