Click on CREATE, Root/Intermediate CA Certificate. Then import each certificate individually starting from the bottom. Click on Validate and then Import.
Do this again for the other two certificates, the ISRG Root X1 certificate and then the R3 intermediate certificate. Once done, you’ll see the following.
The Subscriber certificate is done differently.
Click on CREATE, Controller Certificate. Then give the certificate a name, click on the Import option and browse to the fullchain.pem file and also the privkey.pem file. A passphrase is not required as Let’s Encrypt does not create a passphrase. Click on Validate and then Import.
Once done, you’ll see the following.
Now that we’ve imported the Let’s Encrypt CA certificates, we can proceed to change the SSL certificate used by the Avi Controller for HTTPS web management.
Navigate to Administration, Settings, Access Settings, then click on the pencil icon.
Delete all of the current certificates in the SSL/TLS Certificate box, then select the new Subscriber certificate that we imported earlier. In my case I named it star-vmwire-com.
Once you press Save, you can close the browser session and open up a new one to start enjoying secure connections to your Avi Controller.
Updating Let’s Encrypt SSL Certificates for NSX-T Manager
Updating NSX-T Manager to use a CA signed SSL certificate is a little bit different from how we updated the vCenter certificate. It requires interacting with the NSX-T API.
First let's import the certificate into NSX-T. Again, you'll need the fullchain.pem file, but with the appended DST Root CA X3 certificate that was prepared in this article.
Navigate to System and then under Settings, click on the Certificates link.
First we need to import each of the CA certificates in the chain before we import the certificate for NSX-T Manager.
Again, the certificates in the fullchain.pem file are, in order:
Click on IMPORT, Import CA Certificate. Then import each certificate individually, starting from the bottom, and make sure to deselect the Service Certificate slider, as we are not using these certificates for virtual services.
It's important to import bottom-up as this enables NSX-T to check the issuer of each subsequent certificate that you import. So import in reverse order of the fullchain.pem file, starting with this order:
Once you've imported all three of the CA root and intermediate certificates (the DST Root CA X3 certificate, the ISRG Root X1 CA and the R3 CA certificate), you can import the Subscriber Certificate *.vmwire.com last. Once all done you'll see the following.
Summarized in the following table.
Order in fullchain.pem | Name in NSX-T | Issued By
Subscriber Certificate | star-vmwire-com | R3
R3 Certificate | R3 | ISRG Root X1
ISRG Root X1 Certificate | ISRG Root X1 | DST Root CA X3
DST Root CA X3 Certificate | DST Root CA X3 | DST Root CA X3
You’ll need the certificate ID for the certificate star-vmwire-com to use to update the NSX-T Manager certificate.
Click on the ID column of that certificate and copy the ID to your clipboard.
Now you’ll need to open a tool such as Postman to make the change.
First let's validate that our certificate is OK by using this GET against the NSX-T API. Paste the certificate ID into the URL.
GET https://nsx.vmwire.com/api/v1/trust-management/certificates/21fd7e8a-3a2e-4938-9dc7-5f3eccd791e7/?action=validate
If the status is “OK”, we’re good to continue.
Next, we will POST the certificate ID against the following URL.
POST https://nsx.vmwire.com/api/v1/node/services/http?action=apply_certificate&certificate_id=21fd7e8a-3a2e-4938-9dc7-5f3eccd791e7
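If you prefer curl over Postman for these two calls, here is a minimal sketch (assuming basic authentication with the admin account; -k is only there because the manager is still presenting its self-signed certificate at this point):

# Validate the imported certificate chain
curl -k -u admin 'https://nsx.vmwire.com/api/v1/trust-management/certificates/21fd7e8a-3a2e-4938-9dc7-5f3eccd791e7/?action=validate'

# Apply the certificate to the HTTP service (web UI/API)
curl -k -u admin -X POST 'https://nsx.vmwire.com/api/v1/node/services/http?action=apply_certificate&certificate_id=21fd7e8a-3a2e-4938-9dc7-5f3eccd791e7'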
Once done, close your NSX-T Manager browser session, and enjoy using a CA signed certificate with NSX-T.
Updating Let’s Encrypt SSL Certificates for vCenter Server
I prefer to use wildcard certificates for my environment to reduce the number of certificates that I need to manage. This is because Let's Encrypt limits their certificates to 90 days, which means that you'll need to renew each certificate every 90 days (or sooner). Using a wildcard certificate reduces your operational overhead. However, vCenter does not support wildcard certificates.
After you’ve prepped the fullchain.pem file according to the previous article, you can now update the vCenter SSL certificate using vCenter’s Certificate Management tool.
Navigate to Menu then Administration and click on Certificate Management.
Under the Machine SSL Certificate, click on Actions and choose Import and Replace Certificate.
Select the Replace with external CA certificate (requires private key).
Copy the Subscriber Certificate section into the Machine SSL Certificate box, and then the rest of the chain into the Chain of trusted root certificates box.
Copy the contents of the privkey.pem file into the Private Key box.
Once you click on Replace, vCenter will restart its services and you can open a new browser window to the FQDN of vCenter and enjoy a secured vCenter session.
Let's Encrypt is a great service that provides free SSL certificates. I recently rebuilt my lab and decided to use SSL certs for my management appliances. However, none of the management appliances would accept the certificates issued by Let's Encrypt due to an incomplete chain. This post summarizes how to fix this issue.
TL;DR the Let’s Encrypt DST Root CA X3 certificate is missing from the fullchain.pem and chain.pem files, therefore errors such as the following prevent certificates from being imported by VMware appliances such as NSX-T and vCenter.
Certificate chain validation failed. Make sure a valid chain is provided in order leaf,intermediate,root certificate. (Error code: 2076)
Get your certbot tool up and running; you can read more at this link.
Grab your files from the /etc/letsencrypt/live folder for your vCenter certificate. Mine is in /etc/letsencrypt/live/vcenter.vmwire.com.
You should now have the following files.
cert.pem
chain.pem
fullchain.pem
privkey.pem
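Before going further, it's worth checking what each file actually contains. A quick sketch with openssl (note that openssl x509 only prints the first certificate in a multi-certificate file):

# Show the subject and issuer of the leaf certificate
openssl x509 -in cert.pem -noout -subject -issuer

# Show the first certificate in the chain file
openssl x509 -in chain.pem -noout -subject -issuer

# Count the certificates in fullchain.pem
grep -c 'BEGIN CERTIFICATE' fullchain.pem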
A note on the Let's Encrypt certificate chain: if you look at the Certification Path for Let's Encrypt certificates, you'll notice something like this.
figure 1.
vcenter.vmwire.com is issued by the R3 CA certificate. This is Let’s Encrypt’s Intermediate certificate.
R3 is issued by the DST Root CA X3 certificate. This is Let's Encrypt's root certificate.
Then the DST Root CA X3 certificate needs to be trusted by all of our management appliances, vCenter, NSX-T and Avi Controller.
What I found is that this is not the case, and trying to import a Let's Encrypt certificate without the root certificate that issued the DST Root CA X3 certificate will fail. Here's an example from NSX-T when importing the chain.pem certificate.
figure 2. Importing the chain.pem certificate to NSX
The chain.pem file contains the R3 certificate and the DST Root CA X3 certificate. When you open it in notepad++ it looks like this.
figure 3. chain.pem
So we have a problem. We need the certificate that issued the DST Root CA X3 certificate to complete the chain and pass the chain validation.
Let's take a look at Let's Encrypt certificates on their website.
So looking up the chain, it appears that my certificate vcenter.vmwire.com corresponds to the Subscriber Cert, which is issued by R3. This confirms the assumptions above in figure 1. However, it looks like the R3 certificate is not issued by the DST Root CA X3 certificate but in fact another certificate named ISRG Root X1.
Let's test this theory and import each of the certificates in the chain.pem file individually using NSX-T.
After importing, you can see that this is in fact the ISRG Root X1 certificate that is issued by the DST Root CA X3 certificate. My assumption from figure 3. is then incorrect.
So what is the top certificate in the chain.pem file?
Let's import it and find out. Yup, it's the R3 certificate.
So where is the DST Root CA X3 certificate that we need to complete the validation chain?
We can obtain this from the Let’s Encrypt website. Scroll all the way down to the bottom of that page and you’ll see the following:
Clicking on that link will get you to the following page with this link.
And we will get closer to our DST Root CA X3 certificate when we click on that link.
Clicking on that link gets us to this page.
Then clicking on that link will get us to this page.
We can now grab our certificate with the link highlighted here.
When you click on this link, you'll be able to download a file named 8395.crt; this is the DST Root CA X3 certificate that we need to complete the chain. However, it is in .crt format and we need to work with .pem.
To convert a crt certificate to pem use the following command.
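The original command isn't reproduced here, but something like this works. The output file name dst-root-ca-x3.pem is just my choice; if the downloaded .crt already contains a BEGIN CERTIFICATE block, it is already PEM and only needs renaming:

# Convert the DER-encoded .crt to PEM
openssl x509 -inform der -in 8395.crt -out dst-root-ca-x3.pem

# If 8395.crt is already PEM (it opens as readable text), just copy it instead
# cp 8395.crt dst-root-ca-x3.pem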
That means we just need to append our new DST Root CA X3 certificate to the bottom of the fullchain.pem file to get a valid chain. It will now look like this.
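From the shell, using the file name from the previous step, the append and a quick sanity check look like this:

# Append the DST Root CA X3 certificate to the bottom of fullchain.pem
cat dst-root-ca-x3.pem >> fullchain.pem

# fullchain.pem should now contain four certificates:
# Subscriber, R3, ISRG Root X1 and DST Root CA X3
grep -c 'BEGIN CERTIFICATE' fullchain.pem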
Deploying your first pod with a persistent volume claim and service on vSphere with Tanzu. With sample code for you to try.
Learning the k8s ropes…
This is not a how-to article to get vSphere with Tanzu up and running, there are plenty of guides out there, here and here. This post is more of a "let's have some fun with Kubernetes now that I have a vSphere with Tanzu cluster to play with".
Answering the following question would be a good start to get to grips with understanding Kubernetes from a VMware perspective.
How do I do things that I did in the past in a VM but now do it with Kubernetes in a container context instead?
For example building the certbot application in a container instead of a VM.
Let's try to create an Ubuntu deployment that deploys one Ubuntu container into a vSphere Pod with persistent storage, and a load balancer service from NSX-T to get to the /bin/bash shell of the deployed container.
Let’s go!
I created two yaml files for this, accessible from Github. You can read up on what these objects are here.
Filename | What's it for? | What does it do? | Github link
certbot-deployment.yaml | k8s deployment specification | Deploys one Ubuntu pod, claims a 16Gb volume and mounts it to /dev/sdb, and creates a load balancer to enable remote management with SSH. |

The PVC spec creates a persistent volume of 16Gb size from the underlying vSphere storage class named tanzu-demo-storage; the PVC is then consumed by the deployment.
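To deploy them, log into the Supervisor Cluster and apply both manifests. A sketch follows; the login server address, namespace/context name and the PVC file name are placeholders for your own values:

# Log in to the Supervisor Cluster and switch to your namespace context
kubectl vsphere login --server=<supervisor-ip> --vsphere-username administrator@vsphere.local
kubectl config use-context <your-namespace>

# Apply the PVC first, then the deployment that consumes it
kubectl apply -f <your-pvc-file>.yaml
kubectl apply -f certbot-deployment.yaml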
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
certbot 1/1 1 1 47m
kubectl get pods
NAME READY STATUS RESTARTS AGE
certbot-68b4747476-pq5j2 1/1 Running 0 47m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
certbot-pvc Bound pvc-418a0d4a-f4a6-4aef-a82d-1809dacc9892 16Gi RWO tanzu-demo-storage 84m
Let’s log into our pod, note the name from the kubectl get pods command above.
certbot-68b4747476-pq5j2
It's not yet possible to log into the pod using SSH since this is a fresh container that does not have SSH installed. Let's log in first using kubectl and install SSH.
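A sketch of what that looks like (package names assume the stock Ubuntu image running in the pod):

# Open a shell inside the running container
kubectl exec -it certbot-68b4747476-pq5j2 -- /bin/bash

# Inside the container: install and start the SSH server
apt-get update && apt-get install -y openssh-server
passwd root                 # set a password for the account you'll SSH in with
service ssh start           # you may also need to allow the login in /etc/ssh/sshd_config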
You will then be inside the container at the /bin/bash prompt.
root@certbot-68b4747476-pq5j2:/# ls
bin dev home lib32 libx32 mnt proc run srv tmp var
boot etc lib lib64 media opt root sbin sys usr
root@certbot-68b4747476-pq5j2:/#
Before we can log into the container over an SSH connection, we need to find out what the external IP is for the SSH service that the NSX-T load balancer configured for the deployment. You can find this using the command:
kubectl get services
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
certbot LoadBalancer 10.96.0.44 172.16.2.3 22:31731/TCP 51m
The IP that we use to get to the Ubuntu container over SSH is 172.16.2.3. Let's try that with a putty/terminal session…
login as: root
certbot@172.16.2.3's password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 4.19.126-1.ph3-esx x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
$ ls
bin dev home lib32 libx32 mnt proc run srv tmp var
boot etc lib lib64 media opt root sbin sys usr
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 258724 185032 73692 72% /
/mnt/sdb 16382844 45084 16321376 1% /mnt/sdb
tmpfs 249688 12 249676 1% /run/secrets/kubernetes.io/serviceaccount
/dev/sda 258724 185032 73692 72% /dev/termination-log
$
You can see that there is a 16Gb mount point at /mnt/sdb, just as we specified in the deployment specification, and remote SSH access is working.
Terraform is a great framework to use to start developing and working with infrastructure-as-code to manage resources. It provides awesome benefits such as extremely fast deployment through automation, managing configuration drift, adding configuration changes and destroying entire environments with a few key strokes. Plus it supports many providers so you can easily use the same code logic to deploy and manage different resources, for example on VMware clouds, AWS or Azure at the same time.
For more information if you haven’t looked at Terraform before, please take a quick run through HashiCorp’s website:
Getting started with Terraform is really quite simple when the environment that you are starting to manage is green-field; that is, you are starting from a completely fresh deployment on Day 0. If we take AWS as an example, this is as fresh as signing up to the AWS free tier with a new account and having nothing deployed in your AWS console.
Terraform has a few simple files that are used to build and manage infrastructure through code: the configuration and the state. These are the basic building blocks of Terraform. There are other files and concepts that could be used, such as variables and modules, but I won't cover these in much detail in this post.
How do you bring in infrastructure that is already deployed into Terraform’s management?
This post will focus on how to import existing infrastructure (brown-field) into Terraform’s management. Some scenarios where this could happen is that you’ve already deployed infrastructure and have only recently started to look into infrastructure as code and maybe you’ve tried to use PowerShell, Ansible and other tools but none are quite as useful as Terraform.
Assumptions
First, let's assume that you've deployed Terraform CLI or are already using Terraform Cloud; the concepts are pretty much the same. I will be using Terraform CLI for the examples in this post together with AWS. I'm also going to assume that you know how to obtain access and secret keys from your AWS Console.
By all means this import method works with any supported Terraform provider, including all the VMware ones. For this exercise, I will work with AWS.
My AWS environment consists of the following infrastructure; yours will be different of course, and I'm using the infrastructure below in the examples.
You will need to obtain the AWS resource IDs from your environment, use the AWS Console or API to obtain this information.
# | Resource | Name | AWS Resource ID
1 | VPC | VPC | vpc-02d890cacbdbaaf87
2 | PublicSubnetA | PublicSubnetA | subnet-0f6d45ef0748260c6
3 | PublicSubnetB | PublicSubnetB | subnet-092bf59b48c62b23f
4 | PrivateSubnetA | PrivateSubnetA | subnet-03c31081bf98804e0
5 | PrivateSubnetB | PrivateSubnetB | subnet-05045746ac7362070
6 | IGW | IGW | igw-09056bba88a03f8fb
7 | NetworkACL | NACL | acl-0def8bcfeff536048
8 | RoutePublic | PublicRoute | rtb-082be686bca733626
9 | RoutePrivate | PrivateRoute | rtb-0d7d3b5eacb25a022
10 | Instance1 | Instance1 | i-0bf15fecd31957129
11 | elb | elb-UE360LJ7779C | elb-158WU63HHVD3
12 | SGELB | ELBSecurityGroup | sg-0b8f9ee4e1e2723e7
13 | SGapp | AppServerSecurityGroup | sg-031fadbb59460a776
Table 1. AWS Resource IDs
But I used CloudFormation to deploy my infrastructure…
If you used CloudFormation to deploy your infrastructure and you now want to use Terraform, then you will need to set the CloudFormation deletion policy to retain before bringing any resources into Terraform. This is important, as any accidental deletion or change to the CloudFormation stack would impact your Terraform configuration and state. I recommend setting this policy before importing resources with Terraform.
This link has some more information that will help you enable the deletion policy on all resources.
Set up your main.tf configuration file for a new project that will import an existing AWS infrastructure. The first version of our main.tf file will look like this, with the only resource that we will import being the VPC. It's always good to work with a single resource first to ensure that your import works before going all out and importing all the rest.
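The configuration itself isn't reproduced here, but a minimal sketch looks something like this (the region and the resource label "VPC" are my assumptions; credentials come from your AWS CLI profile or environment variables):

# main.tf - first iteration: provider plus a single empty resource block to receive the import
provider "aws" {
  region = "eu-west-1"    # assumption - use your own region
}

resource "aws_vpc" "VPC" {
  # arguments are filled in after the import, using the output of 'terraform show'
}

With that in place, initialise the project and import the VPC using its ID from Table 1:

terraform init
terraform import aws_vpc.VPC vpc-02d890cacbdbaaf87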
Notice that the VPC and all of the VPC settings have now been imported into Terraform.
Now that we have successfully imported the VPC, we can continue and import the rest of the infrastructure. The remaining AWS services we need to import are detailed in Table 1. AWS Resource IDs.
To import the remaining infrastructure we need to add the code to the main.tf file for the other resources. Edit your main.tf so that it looks like this. Notice that all thirteen resources are defined in the configuration file and the resource arguments are all empty. We will update the resource arguments later; initially we just need to import the resources into the Terraform state and then update the configuration with the known state.
Terraform does not support automatic creation of a configuration out of a state.
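The imports themselves are one command per resource. A few examples using the IDs from Table 1 (the Terraform resource types and labels shown are the usual mappings and my own naming; adjust them to match the blocks in your main.tf):

terraform import aws_subnet.PublicSubnetA subnet-0f6d45ef0748260c6
terraform import aws_subnet.PublicSubnetB subnet-092bf59b48c62b23f
terraform import aws_internet_gateway.IGW igw-09056bba88a03f8fb
terraform import aws_network_acl.NACL acl-0def8bcfeff536048
terraform import aws_route_table.PublicRoute rtb-082be686bca733626
terraform import aws_instance.Instance1 i-0bf15fecd31957129
terraform import aws_security_group.ELBSecurityGroup sg-0b8f9ee4e1e2723e7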
Now that all thirteen resources are imported, you will need to manually update the configuration file, in our case main.tf, with the resource arguments that correspond to the current state of all the resources that were just imported. The easiest way to do this is to first take a look at the Terraform provider for AWS documentation to find the mandatory fields that are needed. Let's use the aws_subnet as an example:
We know that we need these two as a minimum, but what about other configuration items that were done in the AWS Console or CloudFormation before you started to work with Terraform? An example of this is of course tags and other configuration parameters. You want to update your main.tf file with the same configuration as what was just imported into the state. This is very important.
To do this, do not use the terraform.tfstate but instead run the following command.
terraform show
You’ll get an output of the current state of your AWS environment that you can then copy and paste the resource arguments into your main.tf configuration.
I won't cover how to do all thirteen resources in this post, so I'll again use our example for one of the aws_subnet resources. Here is the PublicSubnetA aws_subnet resource information copied and pasted straight out of the terraform show command.
Not all resource arguments are needed; again, review the documentation. Here is an example of my changes to the main.tf file with some of the settings taken from the output of the terraform show command.
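For example, after pulling the arguments out of terraform show, my PublicSubnetA block in main.tf ended up looking roughly like this (the CIDR, availability zone and tag values below are placeholders; use the values from your own terraform show output):

resource "aws_subnet" "PublicSubnetA" {
  vpc_id                  = "vpc-02d890cacbdbaaf87"
  cidr_block              = "10.0.1.0/24"       # placeholder - copy from terraform show
  availability_zone       = "eu-west-1a"        # placeholder
  map_public_ip_on_launch = true

  tags = {
    Name = "PublicSubnetA"
  }
}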
Just place your terraform.tfvars file in the same location as your main.tf file. Terraform automatically picks up the default variable file, or you can reference a different one; again, refer to the documentation.
Finalizing the configuration
Once you’ve updated your main.tf configuration with all the correct resource arguments, you can test to see if what is in the configuration is the same as what is in the state. To do this run the following command:
terraform plan
If you copied and pasted and updated your main.tf correctly then you would get output from your terminal similar to the following:
terraform plan
[ Removed content to save space ]
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Congratulations, you’ve successfully imported an infrastructure that was built outside of Terraform.
You can now proceed to manage your infrastructure with Terraform. For example changing the terraform.tfvars parameters for
lb_port = "443"
lb_protocol = "https"
And then running plan and apply will update the elastic load balancer elb-158WU63HHVD3, changing the health check from port 80 to port 443.
terraform plan
[ removed content to save space ]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# aws_elb.elb-158WU63HHVD3 will be updated in-place
~ resource "aws_elb" "elb-158WU63HHVD3" {
~ health_check {
~ target = "TCP:80" -> "TCP:443"
}
}
terraform apply
[ content removed to save space]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
And that’s how you import existing resources into Terraform, I hope you find this post useful. Please comment below if you have a better method or have any suggestions for improvements. And feel free to comment below if you have questions and need help.
When HCX is deployed there are three appliances that are deployed as part of the Service Mesh. These are detailed below.
Appliance | Role | vCPU | Memory (GB)
IX | Interconnect appliance | 8 | 3
NE | L2 network extension appliance | 8 | 3
WO | WAN optimization appliance | 8 | 14
Total | | 24 | 20
As you can see, these three appliances require a lot of resources just for one Service Mesh. A Service Mesh is created on a 1:1 basis between source and destination. If you connected your on-premises environment to another destination, you would need another service mesh.
For example, if you had the following hybrid cloud requirements:
Service Mesh | Source site | Destination site | vCPUs | Memory (GB)
1 | On-premises | VCPP Provider | 24 | 20
2 | On-premises | VMware Cloud on AWS | 24 | 20
3 | On-premises | Another on-premises | 24 | 20
Total | | | 72 | 60
As you can see, resource requirements will add up.
If you’re running testing or deploying these in a nested lab, the resource requirements may be too high for your infrastructure. This post shows you how you can edit the OVF appliances to be deployed with lower resource requirements.
Disclaimer: The following is unsupported by VMware. Reducing vCPU and memory on any of the HCX appliances will impact HCX services.
Log into your HCX Manager appliance with the admin account.
Do a su - to gain root access (use the same password).
Go into the /common/appliances directory.
Here you'll see folders for sp and vcc; these are the only two that you need to work in.
First, let's start with sp. sp stands for Silverpeak, which is what is running the WAN optimization.
Go into the /common/appliances/sp/7.3.9.0 directory.
vi the file VX-0000-7.3.9.0_62228.ovf.
Go to the section where virtual CPUs and memory are configured and change to the following. (I find that reducing to four vCPUs and 7GB RAM for the WO appliance is good.)
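The exact OVF snippet isn't reproduced here, but the values live in the VirtualHardwareSection of the OVF. A sketch of how to find and change them (memory values are in MB, so 14 GB = 14336 and 7 GB = 7168, assuming the AllocationUnits are megabytes):

cd /common/appliances/sp/7.3.9.0

# Locate the CPU and memory Items in the OVF
grep -n -B2 -A2 "VirtualQuantity" VX-0000-7.3.9.0_62228.ovf

# In vi, change the VirtualQuantity of the CPU Item from 8 to 4,
# and the VirtualQuantity of the memory Item from 14336 to 7168.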
Once you save your changes and create a Service Mesh, you will notice that the new appliances will be deployed with reduced virtual hardware requirements.
Copy the appliance update package to one of the appliances, directly into the transfer share so that you don’t have to do this for all the appliances in your cluster.
Once copied do the following on the first primary appliance.
root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# ls
VMware_Cloud_Director_10.2.0.5190-17029810_update.tar.gz  cells  appliance-nodes  responses.properties
root@vcd01 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --check
Checking for available updates, this process can take a few minutes….
Available Updates - 10.2.0.5190 Build 17029810
2020-10-16 08:41:01 | Invoking Database backup utility
2020-10-16 08:41:01 | Command line usage to create embedded PG DB backup: create-db-backup
2020-10-16 08:41:01 | Using "vcloud" as default PG DB to backup since DB_NAME is not provided
2020-10-16 08:41:01 | Creating back up directory /opt/vmware/vcloud-director/data/transfer/pgdb-backup if it does not already exist …
2020-10-16 08:41:01 | Creating the "vcloud" DB backup at /opt/vmware/vcloud-director/data/transfer/pgdb-backup…
2020-10-16 08:41:03 | "vcloud" DB backup has been successfully created.
2020-10-16 08:41:03 | Copying the primary node's properties and certs …
2020-10-16 08:41:04 | "vcloud" DB backup, Properties files and certs have been successfully saved to /opt/vmware/vcloud-director/data/transfer/pgdb-backup/db-backup-2020-10-16-084101.tgz.
Note: To restore the postgres DB dump copy this tar file to the remote system.
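For reference, the commands that drive this part of the update look roughly like this (based on the log output above and the VMware appliance tooling; check the release notes for your exact version):

# Quiesce and stop the cell before updating (repeat on each cell)
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown

# Back up the embedded PostgreSQL database (primary appliance)
/opt/vmware/appliance/bin/create-db-backup

# Install the staged update package
vamicli update --install latest

# Then, on the primary appliance only, upgrade the database schema
/opt/vmware/vcloud-director/bin/upgrade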
Welcome to the VMware Cloud Director upgrade utility
Verify that you have a valid license key to use the version of the VMware Cloud Director software to which you are upgrading.
This utility will apply several updates to the database. Please ensure you have created a backup of your database prior to continuing.
Do you wish to upgrade the product now? [Y/N] y
Examining database at URL: jdbc:postgresql://172.16.2.28:5432/vcloud?socketTimeout=90&ssl=true
The next step in the upgrade process will change the VMware Cloud Director database schema.
Backup your database now using the tools provided by your database vendor.
Enter [Y] after the backup is complete. y
Running 5 upgrade tasks
Executing upgrade task: Successfully ran upgrade task
Executing upgrade task: Successfully ran upgrade task
Executing upgrade task: Successfully ran upgrade task
Executing upgrade task: …..\Successfully ran upgrade task
Executing upgrade task: ……………[15] Successfully ran upgrade task
Database upgrade complete
Upgrade complete
Would you like to start the Cloud Director service now? If you choose not to start it now, you can manually start it at any time using this command: service vmware-vcd start
root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# vamicli update --check
Checking for available updates, this process can take a few minutes….
Available Updates - 10.2.0.5190 Build 17029810
root@vcd02 [ /opt/vmware/vcloud-director/data/transfer ]# /opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown
Please enter the administrator password:
Cell successfully deactivated and all tasks cleared in preparation for shutdown
Let’s Encrypt (LE) is a certificate authority that issues free SSL certificates for use in your web applications. This post details how to get LE setup to support Cloud Director specifically with a wildcard certificate.
Certbot
LE uses an application called certbot to request, automatically download and renew certificates. You can think of certbot as the client for LE.
First you’ll need to create a client machine that can request certificates from LE. I started with a simple CentOS VM. For more details about installing certbot into your preferred OS read this page here.
Once your client machine is on the network with outbound internet access, you can start by performing the following.
# Update software
yum update
# Install wget if not already installed
yum install wget
# Download the certbot application.
wget https://dl.eff.org/certbot-auto
# Move certbot into a local application directory
sudo mv certbot-auto /usr/local/bin/certbot-auto
# Set ownership to root
sudo chown root /usr/local/bin/certbot-auto
# Change permissions for certbot
sudo chmod 0755 /usr/local/bin/certbot-auto
Now you're ready to request certificates. Run the following command, of course replacing 'your.domain.here' with your desired domain.
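A sketch of the request for my domain (the manual DNS-01 challenge is what triggers the TXT record step described next):

certbot-auto certonly --manual --preferred-challenges dns -d '*.vmwire.com'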
This will create a request for a wildcard certificate for *.vmwire.com.
You’ll then be asked to create a new DNS TXT record on your public DNS server for the domain that you are requesting to validate that you can manage that domain. Here’s what mine looks like for the above.
This means that you can only request certificates for public domains with LE; private, internal-only domains are not supported.
You will then see a response from LE such as the following:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/root/.certbot/live/vmwire.com/fullchain.pem
Your key file has been saved at:
/root/.certbot/live/vmwire.com/privkey.pem
Your cert will expire on 2020-12-24. To obtain a new or tweaked
version of this certificate in the future, simply run certbot-auto
again. To non-interactively renew *all* of your certificates, run
"certbot-auto renew"
Updating Cloud Director certificates
Before you can use the new certificate, you need to perform some operations with the Java keytool to import the pem formatted certificates into the certificates.ks file that Cloud Director uses.
The issued certificate is available in the directory
/root/.certbot/live/
Navigate there using an SSH client and you'll see a structure like this.
Download the entire folder for the next steps. Within the folder you’ll see the following files
Filename | Purpose
cert.pem | your certificate in pem format
chain.pem | the Let's Encrypt root CA certificate in pem format
fullchain.pem | your wildcard certificate AND the LE root CA certificate in pem format
privkey.pem | the private key for your certificate (without passphrase)
We need to rename the files to something that the Java keytool can work with. I renamed mine to the following:
Original filename | New filename
cert.pem | vmwire-com.crt
chain.pem | vmwire-com-ca.crt
fullchain.pem | not needed
privkey.pem | vmwire-com.key
Copy the three new files to one of the Cloud Director cells, use the /tmp directory.
Now launch an SSH session to one of the Cloud Director cells and perform the following.
# Import the certificate and the private key into a new pfx format certificate
openssl pkcs12 -export -out /tmp/vmwire-com.pfx -inkey /tmp/vmwire-com.key -in /tmp/vmwire-com.crt
# Create a new certificates.ks file and import the pfx formatted certificate
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storepass Vmware1! -keypass Vmware1! -storetype JCEKS -importkeystore -srckeystore /tmp/vmwire-com.pfx -srcstorepass Vmware1!
# Change the alias for the first entry to be http
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storetype JCEKS -changealias -alias 1 -destalias http -storepass Vmware1!
# Import the certificate again, this time creating alias 1 again (we will use the same wildcard certificate for the consoleproxy)
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storepass Vmware1! -keypass Vmware1! -storetype JCEKS -importkeystore -srckeystore /tmp/vmwire-com.pfx -srcstorepass Vmware1!
# Change the alias for the first entry to be consoleproxy
/opt/vmware/vcloud-director/jre/bin/keytool -keystore /tmp/certificates.ks -storetype JCEKS -changealias -alias 1 -destalias consoleproxy -storepass Vmware1!
# Import the root certificate into the certificates.ks file
/opt/vmware/vcloud-director/jre/bin/keytool -importcert -alias root -file /tmp/vmwire-com-ca.crt -storetype JCEKS -keystore /tmp/certificates.ks -storepass Vmware1!
# List all the entries, you should now see three, http, consoleproxy and root
/opt/vmware/vcloud-director/jre/bin/keytool -list -keystore /tmp/certificates.ks -storetype JCEKS -storepass Vmware1!
# Stop the Cloud Director service on all cells
service vmware-vcd stop
# Make a backup of the current certificate
mv /opt/vmware/vcloud-director/certificates.ks /opt/vmware/vcloud-director/certificates.ks.old
# Copy the new certificate to the Cloud Director directory
cp /tmp/certificates.ks /opt/vmware/vcloud-director/
# List all the entries, you should now see three, http, consoleproxy and root
/opt/vmware/vcloud-director/jre/bin/keytool -list -keystore /opt/vmware/vcloud-director/certificates.ks -storetype JCEKS -storepass Vmware1!
# Reconfigure the Cloud Director application to use the new certificate
/opt/vmware/vcloud-director/bin/configure
# Start the Cloud Director application
service vmware-vcd start
# Monitor startup logs
tail -f /opt/vmware/vcloud-director/logs/cell.log
Copy the certificates.ks file to the other cells and perform the configure on the other cells to update the certificates for all cells. Don’t forget to update the certificate on the load balancer too. This other post shows how to do it with the NSX-T load balancer.
This post describes how to use the NSX-T Policy API to automate the creation of load balancer configurations for Cloud Director and the vRealize Operations Tenant App.
Postman collection
I’ve included a Postman collection that contains all of the necessary API calls to get everything configured. There is also a Postman environment that contains the necessary variables to successfully configure the load balancer services.
To get started import the collection and environment into Postman.
You'll see the collection in Postman named NSX-T Load Balancer Setup. All the steps are numbered: import the certificates, then configure the Cloud Director load balancer services. I've also included the calls to create the load balancer services for the vRealize Operations Tenant App.
Before you run any of those API calls, you'll first want to import the Postman environment. Once imported, you'll see the environment in the top right of the Postman screen; it is called NSX-T Load Balancer Setup.
Complete your environment variables.
Variable
Value Description
nsx_vip
nsx-t manager cluster virtual ip
nsx-manager-user
nsx-t manager username, usually admin
nsx-manager-password
nsx-t manager password
vcd-public-ip
public ip address for the vcd service to be configured on the load balancer
tenant-app-public-ip
public ip address for the tenant app service to be configured on the load balancer
vcd-cert-name
a name for the imported vcd http certificate
vcd-cert-private-key
vcd http certificate private key in pem format, the APIs only accept single line and no spaces in the certificate chain, use \n as an end of line character.
vcd http certificate in pem format, the APIs only accept single line and no spaces in the certificate chain, use \n as an end of line character.
For example: —–BEGIN CERTIFICATE—–\nMIIGADCCBOigAwIBAgIRALUVXndtVGMeRM1YiMqzBCowDQYJKoZIhvcNAQELBQAw\ngY8xCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO\nBgNVBAcTB1NhbGZvcmQxGDAWBgNVBAoTD1NlY3RpZ28gTGltaXRlZDE3MDUGA1UE\nAxMuU2VjdGlnbyBSU0EgRG9tYWluIFZhbGlkYXRpb24gU2VjdXJlIFNlcnZlciBD\nQTAeFw0xOTA4MjMwMDAwMDBaFw0yMDA4MjIyMzU5NTlaMFUxITAfBgNVBAsTGERv\nbWFpbiBDb250cm9sIFZhbGlkYXRlZDEUMBIGA1UECxMLUG9zaXRpdmVTU0wxGjAY\nBgNVBAMTEXZjbG91ZC52bXdpcmUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A\nMIIBCgKCAQEAqh9sn6bNiDmmg3fJSG4zrK9IbrdisALFqnJQTkkErvoky2ax0RzV\n/ZJ/1fNHpvy1yT7RSZbKcWicoxatYPCgFHDzz2JwgvfwQCRMOfbPzohTSAhrPZph\n4FOPnrF8iwGggTxp+/2/ixg0DjQZL32rc9ax1qEvSURt571hUE7uLkRbPrdbocSZ\n4c2atVh8K1fp3uBqEbAs0UyjW5PK3wIN5ZRFArxc5kiGW0btN1RmoWwOmuJkAtu7\nzuaAJcgr/UVb1PP+GgAvKdmikssB1MWQALTRHm7H2GJp2MlbyGU3ZROSPkSSaNsq\n4otCJxtvQze/lB5QGWj5V2B7YbNJKwJdXQIDAQABo4ICjjCCAoowHwYDVR0jBBgw\nFoAUjYxexFStiuF36Zv5mwXhuAGNYeEwHQYDVR0OBBYEFNhZaRisExXrYrqfIIm6\n9TP8JrqwMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQG\nCCsGAQUFBwMBBggrBgEFBQcDAjBJBgNVHSAEQjBAMDQGCysGAQQBsjEBAgIHMCUw\nIwYIKwYBBQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMAgGBmeBDAECATCB\nhAYIKwYBBQUHAQEEeDB2ME8GCCsGAQUFBzAChkNodHRwOi8vY3J0LnNlY3RpZ28u\nY29tL1NlY3RpZ29SU0FEb21haW5WYWxpZGF0aW9uU2VjdXJlU2VydmVyQ0EuY3J0\nMCMGCCsGAQUFBzABhhdodHRwOi8vb2NzcC5zZWN0aWdvLmNvbTAzBgNVHREELDAq\nghF2Y2xvdWQudm13aXJlLmNvbYIVd3d3LnZjbG91ZC52bXdpcmUuY29tMIIBAgYK\nKwYBBAHWeQIEAgSB8wSB8ADuAHUAsh4FzIuizYogTodm+Su5iiUgZ2va+nDnsklT\nLe+LkF4AAAFsv3BsIwAABAMARjBEAiBat+l0e3BTu+EBcRJfR8hCA/CznWm1mbVl\nxZqDoKM6tAIgON6U0YoqA91xxpXH2DyA04o5KSdSvNT05wz2aa7zkzwAdQBep3P5\n31bA57U2SH3QSeAyepGaDIShEhKEGHWWgXFFWAAAAWy/cGw+AAAEAwBGMEQCIDHl\njofAcm5GqECwtjBfxYD7AFkJn4Ez0IGRFrux4ldiAiAaNnkMbf0P9arSDNno4hQT\nIJ2hUaIWNfuKBEIIkfqhCTANBgkqhkiG9w0BAQsFAAOCAQEAZCubBHRV+m9iiIeq\nCoaFV2YZLQUz/XM4wzQL+73eqGHINp6xh/+kYY6vw4j+ypr9P8m8+ouqichqo7GJ\nMhjtbXrB+TTRwqQgDHNHP7egBjkO+eDMxK4aa3x1r1AQoRBclPvEbXCohg2sPUG5\nZleog76NhPARR43gcxYC938OH/2TVAsa4JApF3vbCCILrbTuOy3Z9rf3aQLSt6Jp\nkh85w6AlSkXhQJWrydQ1o+NxnfQmTOuIH8XEQ2Ne1Xi4sbiMvWQ7dlH5/N8L8qWQ\nEPCWn+5HGxHIJFXMsgLEDypvuXGt28ZV/T91DwPLeGCEp8kUC3N+uamLYeYMKOGD\nMrToTA==\n—–END CERTIFICATE—–
ca-cert-name
a name for the imported ca root certificate
ca-certificate
ca root certificate in pem format, the APIs only accept single line and no spaces in the certificate chain, use \n as an end of line character.
vcd-node1-name
the hostname for the first vcd appliance
vcd-node1-ip
the dmz ip address for the first vcd appliance
vcd-node2-name
the hostname for the second vcd appliance
vcd-node2-ip
the dmz ip address for the second vcd appliance
vcd-node3-name
the hostname for the third vcd appliance
vcd-node3-ip
the dmz ip address for the third vcd appliance
tenant-app-node-name
the hostname for the vrealize operations tenant app appliance
tenant-app-node-ip
the dmz ip address for the vrealize operations tenant app appliance
tenant-app-cert-name
a name for the imported tenant app certificate
tenant-app-cert-private-key
tenant app certificate private key in pem format, the APIs only accept single line and no spaces in the certificate chain, use \n as an end of line character.
tenant app certificate in pem format, the APIs only accept single line and no spaces in the certificate chain, use \n as an end of line character.
For example: —–BEGIN CERTIFICATE—–\nMIIGADCCBOigAwIBAgIRALUVXndtVGMeRM1YiMqzBCowDQYJKoZIhvcNAQELBQAw\ngY8xCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO\nBgNVBAcTB1NhbGZvcmQxGDAWBgNVBAoTD1NlY3RpZ28gTGltaXRlZDE3MDUGA1UE\nAxMuU2VjdGlnbyBSU0EgRG9tYWluIFZhbGlkYXRpb24gU2VjdXJlIFNlcnZlciBD\nQTAeFw0xOTA4MjMwMDAwMDBaFw0yMDA4MjIyMzU5NTlaMFUxITAfBgNVBAsTGERv\nbWFpbiBDb250cm9sIFZhbGlkYXRlZDEUMBIGA1UECxMLUG9zaXRpdmVTU0wxGjAY\nBgNVBAMTEXZjbG91ZC52bXdpcmUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A\nMIIBCgKCAQEAqh9sn6bNiDmmg3fJSG4zrK9IbrdisALFqnJQTkkErvoky2ax0RzV\n/ZJ/1fNHpvy1yT7RSZbKcWicoxatYPCgFHDzz2JwgvfwQCRMOfbPzohTSAhrPZph\n4FOPnrF8iwGggTxp+/2/ixg0DjQZL32rc9ax1qEvSURt571hUE7uLkRbPrdbocSZ\n4c2atVh8K1fp3uBqEbAs0UyjW5PK3wIN5ZRFArxc5kiGW0btN1RmoWwOmuJkAtu7\nzuaAJcgr/UVb1PP+GgAvKdmikssB1MWQALTRHm7H2GJp2MlbyGU3ZROSPkSSaNsq\n4otCJxtvQze/lB5QGWj5V2B7YbNJKwJdXQIDAQABo4ICjjCCAoowHwYDVR0jBBgw\nFoAUjYxexFStiuF36Zv5mwXhuAGNYeEwHQYDVR0OBBYEFNhZaRisExXrYrqfIIm6\n9TP8JrqwMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQG\nCCsGAQUFBwMBBggrBgEFBQcDAjBJBgNVHSAEQjBAMDQGCysGAQQBsjEBAgIHMCUw\nIwYIKwYBBQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMAgGBmeBDAECATCB\nhAYIKwYBBQUHAQEEeDB2ME8GCCsGAQUFBzAChkNodHRwOi8vY3J0LnNlY3RpZ28u\nY29tL1NlY3RpZ29SU0FEb21haW5WYWxpZGF0aW9uU2VjdXJlU2VydmVyQ0EuY3J0\nMCMGCCsGAQUFBzABhhdodHRwOi8vb2NzcC5zZWN0aWdvLmNvbTAzBgNVHREELDAq\nghF2Y2xvdWQudm13aXJlLmNvbYIVd3d3LnZjbG91ZC52bXdpcmUuY29tMIIBAgYK\nKwYBBAHWeQIEAgSB8wSB8ADuAHUAsh4FzIuizYogTodm+Su5iiUgZ2va+nDnsklT\nLe+LkF4AAAFsv3BsIwAABAMARjBEAiBat+l0e3BTu+EBcRJfR8hCA/CznWm1mbVl\nxZqDoKM6tAIgON6U0YoqA91xxpXH2DyA04o5KSdSvNT05wz2aa7zkzwAdQBep3P5\n31bA57U2SH3QSeAyepGaDIShEhKEGHWWgXFFWAAAAWy/cGw+AAAEAwBGMEQCIDHl\njofAcm5GqECwtjBfxYD7AFkJn4Ez0IGRFrux4ldiAiAaNnkMbf0P9arSDNno4hQT\nIJ2hUaIWNfuKBEIIkfqhCTANBgkqhkiG9w0BAQsFAAOCAQEAZCubBHRV+m9iiIeq\nCoaFV2YZLQUz/XM4wzQL+73eqGHINp6xh/+kYY6vw4j+ypr9P8m8+ouqichqo7GJ\nMhjtbXrB+TTRwqQgDHNHP7egBjkO+eDMxK4aa3x1r1AQoRBclPvEbXCohg2sPUG5\nZleog76NhPARR43gcxYC938OH/2TVAsa4JApF3vbCCILrbTuOy3Z9rf3aQLSt6Jp\nkh85w6AlSkXhQJWrydQ1o+NxnfQmTOuIH8XEQ2Ne1Xi4sbiMvWQ7dlH5/N8L8qWQ\nEPCWn+5HGxHIJFXMsgLEDypvuXGt28ZV/T91DwPLeGCEp8kUC3N+uamLYeYMKOGD\nMrToTA==\n—–END CERTIFICATE—–
tier1-full-path
the full path to the nsx-t tier1 gateway that will run the load balancer,
for example /infra/tier-1s/stage1-m-ec01-t1-gw01
vcd-dmz-segment-name
the portgroup name of the vcd dmz portgroup,
for example stage1-m-vCDFront
allowed_ip_a
an ip address that is allowed to access the /provider URI and the admin API
allowed_ip_b
an ip address that is allowed to access the /provider URI and the admin API
Variables
Now you’re ready to run the calls.
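If you're curious what the collection does under the hood, the certificate import calls essentially PATCH the PEM data to the policy certificates endpoint. A rough sketch using the environment variables above (the exact body schema is in the collection, so treat this as illustrative only):

curl -k -u admin -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "{{vcd-cert-name}}",
        "pem_encoded": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
        "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
      }' \
  'https://{{nsx_vip}}/policy/api/v1/infra/certificates/{{vcd-cert-name}}'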
The collection and environment are available to download from Github.
Running Cloud Director (formerly vCloud Director) over the Internet has its benefits however opens up the portal to security risks. To prevent this, we can use the native load balancing capabilities of NSX-T to serve only HTTP access to the URIs that are required and preventing access to unnecessary URIs from the rest of the Internet.
An example of this is to disallow the /provider and /cloudapi/1.0.0/sessions/provider URIs as these are provider side administrator only URIs that a service provider uses to manage the cloud and should not be accessible from the Internet.
The other article that I wrote previously describes the safe URIs and unsafe URIs that can be exposed over the Internet; you can find that article here. That article discusses doing the L7 HTTP policies using Avi. This article will go through how you can achieve the same with the built-in NSX-T load balancer.
This article assumes that you already have the Load Balancer configured with the Cloud Director Virtual Servers, Server Pools and HTTPS Profiles and Monitors already set up. If you need a guide on how to do this, then please visit Tomas Fojta’s article here.
The L7 HTTP rules can be set up under Load Balancing | Virtual Servers. Edit the Virtual Server rule for the Cloud Director service and open up the Load Balancer Rules section.
Click on the Set link next to HTTP Access Phase. I’ve already set mine up so you can see that I already have two rules. You should also end up with two rules once this is complete.
Go ahead and add a new rule with the Add Rule button.
The first rule we want to set up is to prevent access from the Internet to the /provider URI but allow an IP address or group of IP addresses to access the service for provider side administration, such as a management bastion host.
Set up your rule as follows:
What we are doing here is creating a condition that when the /provider URI is requested, we drop all incoming connections unless the connection is initiated from the management jump box, this jump box has an IP address of 10.37.5.30. The Negate option is enabled to achieve this. Think of negate as the opposite of the rule, so negate does not drop connections to /provider when the source IP address is 10.37.5.30.
If negate is enabled, when Connection Drop is configured, all requests not matching the specified match condition are dropped. Requests matching the specified match condition are allowed.
Save this rule and let's set up another one to prevent access to the admin API. Set up this second rule as follows:
This time use /cloudapi/1.0.0/sessions/provider as the URI. Again, use the Negate option for your management IP address. Save your second rule and Apply all the changes.
Now you should be able to access /tenant URIs over the Internet but not the /provider URI. However, accessing the /provider URI from 10.37.5.30 (or whatever your equivalent is) will work.
Doing this with the API
Do a PUT against /policy/api/v1/infra/lb-virtual-servers/vcloud with the following.
(Note that the Terraform provider for NSX-T doesn’t support HTTP Access yet. So to automate, use the NSX-T API directly instead.)
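The body isn't reproduced in full here, but the part that matters is the rules array on the virtual server object. A trimmed sketch of the /provider rule (remember that a PUT needs the rest of the existing virtual server definition included as well; condition and action type names are from the NSX-T Policy API load balancer rule model):

"rules": [
  {
    "phase": "HTTP_ACCESS",
    "match_strategy": "ALL",
    "match_conditions": [
      { "type": "LBHttpRequestUriCondition", "uri": "/provider", "match_type": "CONTAINS" },
      { "type": "LBIpHeaderCondition", "source_address": "10.37.5.30", "inverse": true }
    ],
    "actions": [
      { "type": "LBConnectionDropAction" }
    ]
  }
]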
VMware vRealize Orchestrator workflows for VMware Cloud Director to automate the provisioning of cloud services.
Firstly, apologies to all those who asked for the workflow at VMworld 2019 in Barcelona and also e-mailed me for a copy. It’s been hectic in my professional and personal life. I also wanted to clean up the workflows and remove any customer specific items that are not relevant to this workflow. Sorry it took so long!
If you’d like to see an explanation video of the workflows in action, please take a look at the VMworld session recording.
Credits
These vRealize Orchestrator workflows were co-created and developed by Benoit Serratrice and Henri Timmerman.
Creates an organization based on your initial organisation name as an input.
Creates a vDC into this organization.
Adds a gateway to the vDC.
Adds a routed network with a gateway CIDR that you enter.
Adds a direct external network.
Converts the organization network to use distributed routing.
Adds a default outbound firewall rule for the routed network.
Adds a source NAT rule to allow the routed network to go to the external network.
Adds a catalog.
Commission Customer vRO Workflow
It also cleans up the provisioning if there is a failure. I have also included a Decommission Customer workflow separately to enable you to delete vCD objects quickly and easily. It is designed for lab environments; bear this in mind when using it.
Other caveats: the workflows contained in this package are unsupported. I’ll help in the comments below as much as I can.
Getting Started
Import the package after downloading it from github.
The first thing you need to do is setup the global settings in the Global, Commission, storageProfiles and the other configurations. You can find these under Assets > Configurations.
You should then see the Commission Customer v5 workflow under Workflows in your vRO client, it should look something like this.
Enter a customer name and enter the gateway IP in CIDR into the form.
Press Run, then sit back and enjoy the show.
Known Issues
Commissioning a customer when there are no existing edge gateways deployed that use an external network. You see the following error in the vRO logs:
item: 'Commission Customer v5/item12', state: 'failed', business state: 'null', exception: 'TypeError: Cannot read property "ipAddress" from null (Workflow:Commission Customer v5 / get next ip (item8)#5)'
This happens because no IP addresses are in use from the external network pool. The Commission Customer workflow calculates the next IP address to assign to the edge gateway, it cannot do this if the last IP in use is null. Manually provision something that uses one IP address from the external network IP pool. Then use the Commission Customer workflow, it should now work.
Commissioning a customer workflow completes successfully, however you see the following errors:
[2020-03-22 19:30:44.596] [I] orgNetworkId: 545b5ef4-ff89-415b-b8ef-bae3559a1ac7
[2020-03-22 19:30:44.662] [I] =================================================================== Converting Org network to a distributed interface...
[2020-03-22 19:30:44.667] [I] ** API endpoint: vcloud.vmwire.com/api/admin/network/545b5ef4-ff89-415b-b8ef-bae3559a1ac7/action/convertToDistributedInterface
[2020-03-22 19:30:44.678] [I] error caught!
[2020-03-22 19:30:44.679] [I] error details: InternalError: Cannot execute the request: (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.680] [I] error details: Cannot execute the request: (Workflow:Convert net to distributed interface / Post to vCD (item4)#21)
[2020-03-22 19:30:44.728] [I] Network converted succesfully.
The workflow attempts to convert the org network from an internal interface to a distributed interface, but it does not work even though the log says it was successful. Let me know if you are able to fix this.
Rewatch my session with Onni Rautanen at VMworld EMEA 2019 where we cover the clouds that we are building together with Tieto.
Description: In this session, you will get a technical deep dive into Tieto’s next generation service provider cloud hosting platform running on VMware vCloud Director Cloud POD architecture deployed on top of VMware Cloud Foundation. Administrators and cloud engineers will learn from Tieto cloud architects about their scalable design and implementation guidance for building a modern multi-tenant hosting platform for 10,000+ VMs. Other aspects of this session will discuss the API integration of ServiceNow into the VMware cloud stack, Backup and DR, etc.
You’ll need to create a free VMworld account to access this video and many other videos that are made available during and after the VMworld events.
This article covers protecting and load balancing the Cloud Director application with Avi Networks. It covers SSL termination, health monitoring and layer 7 HTTP filtering. It can also be used as a reference for other load balancer products such as F5 LTM or NGINX.
Overview
The Avi Vantage platform is built on software-defined principles, enabling a next generation architecture to deliver the flexibility and simplicity expected by IT and lines of business. The Avi Vantage architecture separates the data and control planes to deliver application services beyond load balancing, such as application analytics, predictive autoscaling, micro-segmentation, and self-service for app owners in both on-premises or cloud environments. The platform provides a centrally managed, dynamic pool of load balancing resources on commodity x86 servers, VMs or containers, to deliver granular services close to individual applications. This allows network services to scale near infinitely without the added complexity of managing hundreds of disparate appliances.
Controllers – these are the management appliances that are responsible for state data, Service Engines are deployed by the controllers. The controllers run in a management network.
Service Engines – the load balancing services run in here. These generally run in a DMZ network. Service Engines can have one or more network adaptors connected to multiple networks. At least one network with routing to the controllers, and the remaining networks as data networks.
Deployment modes
Avi can be installed in a variety of deployment types. For VMware Cloud on AWS, it is not currently possible to deploy using ‘write access’ as vCenter is locked-down in VMC and it also has a different API from vSphere 6.7 vCenter Server. You’ll also find that other tools may not work with vCenter in a VMware Cloud on AWS SDDC, such as govc.
Instead Avi needs to be deployed using ‘No Access’ mode.
You can refer to this link for instructions to deploy Avi Controllers in ‘No Access’ mode.
Since it is only possible to use ‘No Access’ mode with VMC based SDDCs, its also a requirement to deploy the service engines manually. To do this follow the guide in this link, and start at the section titled Downloading Avi Service Engine on OVA.
If you’re using Avi with on-premises deployments of vCenter, then ‘Write Mode’ can be used to automate the provisioning of service engines. Refer to this link for more information on the different modes.
Deploying Avi Controller with govc
You can deploy the Avi Controller onto non VMware Cloud on AWS vCenter servers using the govc tool. Refer to this other post on how to do so. I’ve copied the JSON for the controller.ova for your convenience below.
For a high-level architecture overview, this link provides a great starting point.
Figure 1. Avi architecture
Service Engine Typical Deployment Architecture
Generally, in legacy deployments where BGP is not used, the service engines tend to have three network interfaces. These are typically used for frontend, backend and management networks. This is typical of traditional deployments with F5 LTM, for example.
For our example here, I will use three networks for the SEs as laid out below.
Network name | Gateway CIDR | Purpose
sddc-cgw-vcd-dmz1 | 10.104.125.1/24 | Management
sddc-cgw-vcd-dmz2 | 10.104.126.1/24 | Backend
sddc-cgw-vcd-dmz3 | 10.104.127.1/24 | Frontend
The service engines are configured with the following details. It is important to make a note of the MAC addresses in ‘No access’ mode as you will need this information later.
Service Engine | avi-se1 | avi-se2
Management | IP Address 10.104.125.11, MAC Address 00:50:56:8d:c0:2e | IP Address 10.104.125.12, MAC Address 00:50:56:8d:38:33
Backend | IP Address 10.104.126.11, MAC Address 00:50:56:8d:8e:41 | IP Address 10.104.126.12, MAC Address 00:50:56:8d:53:f6
Frontend | IP Address 10.104.127.11, MAC Address 00:50:56:8d:89:b4 | IP Address 10.104.127.12, MAC Address 00:50:56:8d:80:41
The Management network is used for communications between the SEs and the Avi controllers. For the port requirements, please refer to this link.
The Backend network is used for communications between the SEs and the application that is being load balanced and protected by Avi.
The Frontend network is used for upstream communications to the clients, in this case the northbound router or firewall towards the Internet.
Sample Application
Let's use VMware Cloud Director as the sample application for configuring Avi. vCD, as it is more commonly known (to be renamed VMware Cloud Director), is a cloud platform which is deployed with an Internet-facing portal. Due to this, it is always best to protect the portal from malicious attacks by employing a number of methods.
Some of these include, SSL termination and web application filtering. The following two documents explain this in more detail.
You’ll notice that the eth0 and eth1 interfaces are connected to two different management networks 10.104.123.0/24 and 10.104.124.0/24 respectively. For vCD, it is generally good practice to separate the two interfaces into separate networks.
Network name | Gateway CIDR | Purpose
sddc-cgw-vcd-mgmt-1 | 10.104.123.1/24 | vCD Frontend UI/API/VM Remote Console
sddc-cgw-vcd-mgmt-2 | 10.104.124.1/24 | vCD Backend PostgreSQL, SSH etc.
For simplicity, I also deployed my Avi controllers onto the sddc-cgw-vcd-mgmt-2 network.
The diagram below summarises the above architecture for the HTTP interface for vCD. For this guide, I've used VMware Cloud on AWS together with Avi Networks to protect vCD running as an appliance inside the SDDC. This is not a typical deployment model, as Cloud Director service will be able to use VMware Cloud on AWS SDDC resources soon, but I wanted to showcase the possibilities and constraints when using Avi with VMC-based SDDCs.
Figure 2. vCD HTTP Diagram
Configuring Avi for Cloud Director
After you have deployed the Avi Controllers and the Service Engines, there are a few more steps needed before vCD is fully up and operational. These steps can be summarised as follows:
Setup networking for the service engines by assigning the right IP address to the correct MAC addresses for the data networks
Configure the network subnets for the service engines
Configure static routes for the service engines to reach vCD
Setup Legacy HA mode for the service engine group
Setup the SSL certificate for the HTTP service
Setup the Virtual Services for HTTP and Remote Console (VMRC)
Setup the server pools
Setup health monitors
Setup HTTP security policies
Map Service Engine interfaces
Using the Avi Vantage Controller, navigate to Infrastructure > Service Engine, select one of the Service Engines then click on the little pencil icon. Then map the MAC addresses to the correct IP addresses.
Configure the network subnets for the service engines
Navigate to Infrastructure > Networks and create the subnets.
Configure static routes
Navigate to Infrastructure > Routing and set up any static routes. You'll notice from figure 2 that since the service engine has three network interfaces on different networks, we need to create a static route on the interface that does not have the default gateway. This is so the service engines know which gateway to use for particular traffic types; in this case, the static route tells the service engine which gateway to use to route HTTP and Remote Console traffic southbound to the vCD cells.
Setup Legacy HA mode for the service engine group
Navigate to Infrastructure > Service Engine Group.
Set the HA mode to Legacy HA. This is the simplest configuration; you can use Elastic HA if you wish.
Configure the HTTP and Remote Console Virtual Services
Navigate to Applications > Virtual Services.
Creating a Virtual Service has a few sub-tasks, which include the creation of the downstream server pools and SSL certificates.
Create a new Virtual Service for the HTTP service; this is for the Cloud Director UI and API. Use this example to create another Virtual Service for the Remote Console.
For the Remote Console service, you will need to accept TCP 443 on the load balancer but connect southbound to the Cloud Director appliances on TCP 8443. TCP 8443 is the port that VMRC uses, as it shares the same IP address as the HTTP service.
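Before building the Remote Console pool, it can be worth confirming that the console proxy is answering on TCP 8443 on the cells. A quick probe such as the one below shows the certificate being served; the cell IP here is a hypothetical example, so substitute one of your own cells.

# Show the subject and validity of the certificate presented on the console proxy port (hypothetical cell IP)
openssl s_client -connect 10.104.124.21:8443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates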
You may notice that the screenshot is for an already configured Virtual Service for the vCD HTTP service. The server pool and SSL certificate are already configured; the screenshots for those are below.
Certificate Management
You may already have a signed HTTP certificate that you wish to use with the load balancer for SSL termination. To do so, you will need to use the Java keytool to manipulate the HTTP certificate: extract the private key and convert the keystore from JCEKS to PKCS12. The Java keytool is available on the vCD appliance at /opt/vmware/vcloud-director/jre/bin/.
Figure 3. SSL termination on load balancer
For detailed instructions on creating a signed certificate for vCD, please follow this guide.
Convert the certificates.ks keystore file from JCEKS to PKCS12 and extract the HTTP certificate and private key into httpcert.p12.
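A minimal sketch of this with the appliance's bundled keytool is shown below. The file names follow the examples in this guide, and I'm assuming the vCD default alias of http for the UI/API certificate, so check your own keystore with keytool -list first. keytool prompts for the keystore passwords as it runs.

# Run from /opt/vmware/vcloud-director/jre/bin/ on the vCD appliance
# Convert the JCEKS keystore to PKCS12
./keytool -importkeystore -srckeystore certificates.ks -srcstoretype JCEKS -destkeystore certificates_pkcs12.ks -deststoretype PKCS12

# Export only the 'http' alias (UI/API certificate and private key) into its own PKCS12 file
./keytool -importkeystore -srckeystore certificates_pkcs12.ks -srcstoretype PKCS12 -srcalias http -destkeystore httpcert.p12 -deststoretype PKCS12 -destalias http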
Now that you have the private key for the HTTP certificate, you can go ahead and configure the HTTP certificate on the load balancer.
For the certificate file, you can either paste the text or upload the certificate file (.cer, .crt) from the certificate authority for the HTTP certificate.
For the Key (PEM) or PKCS12 file, you can use the httpcert.p12 file that you extracted from the certificates_pkcs12.ks file above.
The Key Passphrase is the password that you used to secure the httpcert.p12 file earlier.
Note that the vCD Remote Console (VMRC) must use SSL pass-through, i.e., termination of the VMRC session must happen on the Cloud Director cell. Therefore, the above certificate management activities on Avi are not required for the VMRC.
Health Monitors
Navigate to Applications > Pools.
Edit the HTTP pool using the pencil icon and click on the Add Active Monitor green button.
Health monitoring of the HTTP service uses the following request:

GET /cloud/server_status HTTP/1.0

with an expected server response of "Service is up." and a response code of 200.
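If you want to verify the health-check target by hand before configuring the monitor, a quick request against one of the cells is enough; the cell address below is a hypothetical example.

# Expect an HTTP 200 response with the body "Service is up."
curl -k -i https://10.104.124.21/cloud/server_status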
The vCD Remote Console Health monitor is a lot simpler as you can see below.
Layer 7 HTTP Security
Layer 7 HTTP security is very important and is highly recommended for any application exposed to the Internet. Layer 3 firewalling and SSL certificates alone are never enough to protect and secure applications.
Navigate to Applications > Virtual Services.
Click on the pencil icon for the HTTP virtual service and then click on the Policies tab. Then click on the HTTP Security policy. Add a new policy with the following settings. You can read more about Layer 7 HTTP policies here.
| Allowed Strings | Required by |
|---|---|
| /tenant | Tenant use |
| /login | Login |
| /network | Access to networking |
| /tenant-networking | Access to networking |
| /cloud | For SAML/SSO logins |
| /transfer | Uploads/downloads of ISOs and templates |
| /api | General API access |
| /cloudapi | General API access |
| /docs | Swagger API browser |

| Blocked Strings | Reason |
|---|---|
| /cloudapi/1.0.0/sessions/provider | Specifically block admin APIs from the Internet |
This will drop all provider side services when accessed from the Internet. To access provider side services, such as /provider or admin APIs, use an internal connection to the Cloud Director cells.
Change Cloud Director public addresses
If you have not already done so, you should also change the public address settings in Cloud Director.
Recently I've been looking at a tool to automate the provisioning of the vCloud Director appliance. I wanted something that could quickly take JSON as input for the OVF properties and consistently deploy the appliance with the same outcome. I tried Terraform; however, that didn't quite work out as I expected, as the Terraform provider for vSphere's vsphere_virtual_machine resource is not able to deploy OVAs or OVFs directly.
Here’s what HashiCorp has to say about that…
NOTE: Neither the vsphere_virtual_machine resource nor the vSphere provider supports importing of OVA or OVF files as this is a workflow that is fundamentally not the domain of Terraform. The supported path for deployment in Terraform is to first import the virtual machine into a template that has not been powered on, and then clone from that template. This can be accomplished with Packer, govc's import.ovf and import.ova subcommands, or ovftool.
The way that this could be done is to first import the OVA without vApp properties, then convert it to a template, then use Terraform to create a new VM from that template and use the vapp section to customise the appliance.
vapp {
  properties = {
    "guestinfo.tf.internal.id" = "42"
  }
}
This didn’t work for me as not all vApp properties are implemented in the vsphere_virtual_machine resource yet. Let me know if you are able to get this to work.
govc is designed to be a user-friendly CLI alternative to the GUI and is well suited to automation tasks. It also acts as a test harness for the govmomi APIs and provides working examples of how to use them.
Once you’ve installed govc, you can then setup the environment by entering the following examples into your shell:
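The following is a minimal example of the environment variables govc expects. The vCenter URL, credentials and placement values below are placeholders, so substitute your own.

# Connection details for the target vCenter (placeholder values)
export GOVC_URL='https://vcenter.vmwire.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='VMware1!'
export GOVC_INSECURE=1   # allow self-signed certificates

# Default placement for deployed VMs (placeholder values)
export GOVC_DATACENTER='SDDC-Datacenter'
export GOVC_DATASTORE='WorkloadDatastore'
export GOVC_NETWORK='sddc-cgw-vcd-mgmt-2'
export GOVC_RESOURCE_POOL='Compute-ResourcePool'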
To deploy the appliance we will use the govc import.ova command.
However, before you can do that, you need to obtain the JSON file that contains all the OVF properties, so that you can edit it and then pass it back to govc import.ova as an options file.
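A sketch of how this could look with govc is below; the OVA file name is a placeholder for whatever version you downloaded, and the JSON you get back is what you edit and feed back in via -options.

# Extract the OVF/vApp properties from the OVA into an editable JSON spec
govc import.spec VMware_vCloud_Director.ova > vcd.json

# Edit vcd.json (networks, OVF properties such as passwords and IP settings),
# then deploy the appliance using the edited spec
govc import.ova -options=vcd.json -name=vcd-cell-01 VMware_vCloud_Director.ova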
Not all customers want to set up site-to-site VPNs using IPsec or route-based VPNs between their on-premises data centre and an SDDC on VMware Cloud on AWS. Instead, a client VPN such as an SSL VPN can be used to enable a client-side device to set up a secure tunnel to the SDDC.
The Use Case
What is an SSL VPN?
An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialised client software on the end user's computer. – www.bitpipe.com
Why?
SSL VPN is not an available feature of the Management Gateway or Compute Gateway in VMware Cloud on AWS
Enable client VPN connections over SSL to an SDDC in VMware Cloud on AWS for secure access to the resources
Avoid site-to-site VPN configurations between on-premises and the Management Gateway
Avoid opening vCenter to the Internet
Benefits
Improve remote administrative security
Enable users to access SDDC resources, including vCenter, over a secure SSL VPN from anywhere with an Internet connection
Summary
This article goes through the requirements and steps needed to get OpenVPN up and running. Of course, you can use any SSL VPN software; OpenVPN is a freely available open-source option that is quick and easy to set up and is used in this article as a working example.
Review the following basic requirements before proceeding:
Access to your VMware Cloud on AWS SDDC
Basic knowledge of Linux
Basic knowledge of VMware vSphere
Basic knowledge of firewall administration
Steps
vCenter Server
In this section you’ll deploy the OpenVPN appliance. The steps can be summarised below:
Download the OpenVPN appliance to the SDDC. The latest VMware version is available with this link:
Make a note of the IP address of the appliance, you’ll need this to NAT a public IP to this internal IP using the HTTPS service later. My appliance is using an IP of 192.168.1.201.
Log in as root with the password openvpnas and change the password for the openvpn user. This user is used to administer the OpenVPN admin web interface.
VMware Cloud on AWS
In this section you’ll need to create a number of firewall rules as summarised in the tables further below.
Here’s a quick diagram to show how the components relate.
What does the workflow look like?
A user connects to the OpenVPN SSL VPN using the public IP address 3.122.197.159.
HTTPS (TCP 443) is NAT'd from 3.122.197.159 to the OpenVPN appliance at 192.168.1.201, also on the HTTPS service.
OpenVPN is configured with subnets that VPN users are allowed to access. 192.168.1.0/24 and 10.71.0.0/16 are the two allowed subnets. OpenVPN configures the SSL VPN tunnel to route to these two subnets.
The user can open up a browser session on his laptop and connect to vCenter server using https://10.71.224.4.
Rules Configured on Management Gateway
| Rule # | Rule name | Source | Destination | Services | Action |
|---|---|---|---|---|---|
| 1 | Allow the OpenVPN appliance to access vCenter only on port 443 | OpenVPN appliance | vCenter | HTTPS | Allow |
The rule should look similar to the following.
Rules Configured on Compute Gateway
| Rule # | Rule name | Source | Destination | Services | Action |
|---|---|---|---|---|---|
| 2 | Allow port 443 access to the OpenVPN appliance | Any | OpenVPN appliance | HTTPS | Allow |
| 3 | Allow the OpenVPN-network outbound access to any destination | OpenVPN-network | Any | Any | Allow |
The two rules should look similar to the following.
I won’t go into detail on how to create these rules. However, you will need to create a few User Defined Groups for some of the Source and Destination objects.
NAT Rules
| Rule name | Public IP | Service | Public Ports | Internal IP | Internal Ports |
|---|---|---|---|---|---|
| NAT HTTPS Public IP to OpenVPN appliance | 3.122.197.159 | HTTPS | 443 | 192.168.1.201 | 443 |
You’ll need to request a new Public IP before configuring the NAT rule.
The NAT rule should look similar to the following.
OpenVPN Configuration
We need to configure OpenVPN before it will accept SSL VPN connections. Ensure you've gone through the initial configuration detailed in this document.
Connect to the OpenVPN appliance VM using a web browser. The URL for my appliance is https://192.168.1.201:943
Login using openvpn and use the password you set earlier.
Click on the Admin button
Configure Network Settings
Click on Network Settings and enter the public IP that was issued by VMware Cloud on AWS earlier.
Also, only enable the TCP daemon.
Leave everything else on default settings.
Press Save Settings at the bottom.
Press the Update Running Server button.
Configure Routing
Click on VPN Settings and enter the subnet that vCenter runs on under the Routing section. I use the Infrastructure Subnet, 10.71.0.0/16.
Leave all other settings at their defaults; however, this depends on what you configured when you deployed the OpenVPN appliance initially. My settings are below:
Press Save Settings at the bottom.
Press the Update Running Server button.
Configure Users and Users’ access to networks
Click on User Permissions and add a new user
Click on the More Settings pencil icon and configure a password and add in the subnets that you want this user to be able to access. I am using 192.168.1.0/24 – this is the OpenVPN-network subnet and also 10.71.0.0/16 – this is the Infrastructure Subnet for vCenter, ESXi in the SDDC. This will allow clients connected through the SSL VPN to connect directly to vCenter.
If you don’t know the Infrastructure Subnet you can obtain it by going to Network & Security > Overview
Press Save Settings at the bottom.
Press the Update Running Server button.
Installing the OpenVPN SSL VPN client onto a client device
The desktop client is only required if you do not want to use the web browser to initiate the SSL VPN. Unfortunately, we need signed certificates configured on OpenVPN to use the browser. I don’t have any for this example, so we will use the desktop client to connect instead.
For this section I will use my laptop to connect to the VPN.
Open up a HTTPS browser session to the public IP address that was provisioned by VMware Cloud on AWS earlier. For me this is https://3.122.197.159.
Accept any certificates to proceed. Of course, you can use real signed certificates with your OpenVPN configuration.
Enter the username of the user that was created earlier, the password and select the Connect button.
Click on the continue link to download the SSL VPN client
Once downloaded, launch the installation file.
Once complete, you can close the browser; it won't connect automatically because we are not using signed certificates.
Connecting to the OpenVPN SSL VPN client from a client device
Now that the SSL VPN client is installed we can open an SSL VPN tunnel.
Launch the OpenVPN Connect client. I'm on OSX, so a Spotlight search for "OpenVPN Connect" will bring up the client.
Once launched, you can click on the small icon at the top of your screen.
Connect to the public IP relevant to your OpenVPN configuration.
Enter the credentials then click on Connect.
Accept all certificate prompts and the VPN should now be connected.
Connect to vCenter
Open up a HTTPS browser session and use the internal IP address of vCenter. You may need to add a hosts file entry for the public FQDN for vCenter to redirect to the internal IP instead. That’s it! You’re now accessing vCenter over an SSL VPN.
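For example, a hosts file entry on the client could look like the line below. The FQDN here is a hypothetical placeholder, so use the vCenter FQDN shown in your VMC console.

# /etc/hosts on macOS/Linux (C:\Windows\System32\drivers\etc\hosts on Windows)
10.71.224.4   vcenter.sddc-12-34-56-78.vmwarevmc.com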
It’s also possible to use this method to connect to other network segments. Just follow the procedures above to add additional network segments and rules in the Compute Gateway and also add additional subnets to the Access Control section when adding/editing users to OpenVPN.
Using FaceTime on your Mac to make and receive conference calls.
If, like me, you're generally plugged into your laptop with a headset when working somewhere comfortable, you may dislike using your cellphone's speaker and mic or the Apple headset for calls, and instead prefer to take calls on your laptop using the Calls From iPhone feature.
This lets you easily transition from whatever you were doing on your laptop – listening to Apple Music, watching YouTube and so on – and seamlessly pick up a call or make a new call directly from your laptop. The benefit is that you don't need to take off your headset and can continue working without switching devices or changing audio inputs (for those with Bluetooth-connected headsets).
But have you noticed that the FaceTime interface on OSX has no keypad? This is a problem when you need to pick up a call-back from Webex, for example. Webex asks you to press '1' on the keypad to be connected to the conference. Likewise, if you need to dial into a conference call with Webex, GoToMeeting or Globalmeet, you'll need a keypad to enter the correct input, generally followed by '#', to connect. This is a little difficult if there is no keypad, right?
If you try to open the keypad on your iPhone whilst connected to a call on your Mac, the audio will transfer from your Mac to your iPhone and you cannot transfer it back.
Luckily there is a workaround. Well, two actually: one enables you to use the call-back functions from conference call systems, and the other enables you to dial into the meeting room directly.
When you receive a call-back from Webex, for example, and are asked to enter '1' to continue, use your keyboard to provide the necessary inputs – press Mute, press 1, press #, then unmute as necessary.
The second workaround is direct dial: type or paste the conference number and attendee access code directly into FaceTime before making the call. A comma ',' inserts a pause into the dial string, giving time to enter the attendee access code and any other inputs you need, for example in the form <conference number>,,<attendee access code>#.
I find that both of these work very well for me: mute works for call-back functions, and direct dial works well when I need to join a call directly. The mute workaround is also very effective when using an IVR phone system – think banking or customer service systems.
I’m excited to announce the latest enhancements to the Atlantis USX product following the release of Atlantis USX 3.5.
Before we delve too deep into what's new in USX 3.5, let's briefly recap some of the innovative features from our previous releases.
We delivered USX 2.2 back in February 2015 with XenServer support and LDAP authentication; USX 3.0 followed in August 2015 with support for VMware VVOLs, volume-level snapshots and replication, and the release of Atlantis Insight. USX 3.1 gave us deduplication-aware stretched clusters and multi-site disaster recovery in October 2015. Two-node clusters were enabled in USX 3.1.2, along with enhancements to SnapClone for workspaces, in January 2016.
Some of these features were industry firsts, for example, support for VMware VVOLs on a hyperconverged platform, all-flash hyperconverged before it became an industry standard, and deduplication-aware stretched clusters using the Teleport technology that we pioneered in 2014 and released with USX 2.0.
Figure 1. Consistent Innovation
This feature richness and consistent innovation is something we strive to continue delivering with USX 3.5, coupled with additional stability and an operationally ready feature set.
Let's look at the key areas of this latest release and what makes it different from previous versions. The three main areas of the USX 3.5 enhancements are Simplify, Solidify and Optimize, all targeted at providing a better user experience for both administrators and end users.
Simplify
XenServer 7 – USX 3.5 adds support for running USX on XenServer 7, in addition to vSphere 6.2.
Health Checks – We've added the ability to perform system health checks at any time, which is useful when planning a new installation or an upgrade of USX. You can, of course, also run a health check on your USX environment at any time to make sure everything is functioning as it should. This feature helps identify configuration issues prior to deploying volumes. The tool gives a pass or fail result for each test item; not all failed items prevent you from continuing your deployment, and these are flagged as warnings. For example, Internet accessibility is not a requirement for USX – it is only used to upload Insight logs or check for USX updates.
Figure 2. Health Checks
Operational Simplicity – making things easier to do. On-demand SnapClone has been added to the USX user interface (UI), enabling you to create a full SnapClone – essentially a full backup of the contents of an in-memory volume to disk – before any maintenance is done on that volume. This helps when you need to quickly take a hypervisor host down for maintenance; the ability to instantly create a SnapClone through the UI makes this easier than in previous versions.
Figure 3. On-demand and scheduled SnapClones
Simple Maintenance Mode – We've also added the ability to perform maintenance mode for Simple Volumes. Simple Volumes can be located on local storage to present the memory of that hypervisor as a high-performance in-memory volume for virtual machine workloads such as VDI desktops. You can now enable maintenance mode on simple volumes using the Atlantis USX Manager UI or the REST API. This migrates the volume from one host to another, enabling you to put the source host into maintenance mode and perform any maintenance operations. This works with both VMware and Citrix hypervisors.
Figure 4. Simple Maintenance Mode
Solidify
Alerting has also been improved. We have added new alerts to highlight utilization of the backing disk that a volume uses, and alerts to highlight snapshot utilization are now available as well. Alerts are designed to be non-invasive, yet remain highly visible within the Alerts menu of the USX web UI for quick access.
Disaster Recovery for Simple Hybrid Volumes
Although this is a new feature in USX 3.5, we've actually been deploying it with some of our larger customers for a few years, and the automation and workflows are now exposed in the USX 3.5 UI. This feature enables simple hybrid volumes to be replicated by underlying storage with replication technology; coupled with the automation and workflows, simple hybrid volumes can be recovered at the DR site, with volume objects such as export IP addresses and volume identities changed to suit the environment at the DR site.
Optimize
The Plugin Framework is now a key part of the USX capabilities. It is an additional framework integrated into the USX web UI that allows Atlantis and community-created plugins, written in Python, to be imported and run to enhance the functionality of USX – for example, guest VM operations or guest VM query capabilities. These plugins enable guest-side operations such as restarting all VMs within a USX volume or querying the DNS names of all guest VMs residing in a USX volume.
Figure 5. USX Plugin Framework
I hope you'll agree that the plugin framework adds another level of capability on top of the automation and management features we already have, such as the USX REST API and USX PowerShell cmdlets.
Reduced resource requirements for volume container memory – we've decreased the metadata memory requirement by 40%. In previous versions, the amount of memory assigned to metadata was a percentage of the volume export size before data reduction; for example, a 1TB exported volume would reserve 50GB of memory for metadata. With USX 3.5 this is reduced to just 30GB, while still providing the same performance and data reduction capabilities with fewer memory resources. USX 3.5 also reduces the local flash storage required for the performance tier when using hybrid volumes – we've decreased the flash storage requirement by 95%!
In addition to reducing the metadata memory and local flash requirements, we've also reduced the amount of storage required for SnapClone space by 50%. This shrinks the SnapClone footprint on the underlying local or shared storage, so you need less storage to run USX.
ROBO to support vSphere Essentials.
The ROBO use case is now even more cost-effective with USX 3.5. This enhancement enables the use of the VMware vSphere Essentials licensing model for customers who prefer the VMware hypervisor over Citrix XenServer. It is a great option for remote and branch offices with three or fewer servers that want high-performance, data-reduction-aware storage at remote sites.
Availability
Atlantis USX 3.5 is available now from the Atlantis Portal. Download now and let me know what you think of the new capabilities.
Release notes and online documentation are available here.
Atlantis HyperScale appliances come with effective capacities of 12TB, 24TB and 48TB, depending on the model that is deployed. These capacities are what we refer to as effective capacity, i.e., the capacity available after the in-line de-duplication that occurs when data is stored onto HyperScale volumes. HyperScale volumes always de-duplicate data before writing it down to the local flash drives. This is known as in-line de-duplication, which is very different from post-process de-duplication, which de-duplicates data after it has been written to disk. The latter incurs a storage capacity overhead, as you need the capacity to store the data before the post-process de-duplication can run. This is why HyperScale appliances only require three SSDs per node to provide 12TB of effective capacity at 70% de-duplication.
Breaking it down
| | HyperScale SuperMicro CX-12 |
|---|---|
| Number of nodes | 4 |
| Number of SSDs per node | 3 |
| SSD capacity | 400GB |
| Usable flash capacity per node | 1,200GB |
| Cluster RAW flash capacity | 4,800GB |
| Cluster failure tolerance | 1 |
| Usable flash capacity per cluster | 3,600GB |
| Effective capacity with 70% dedupe | 12,000GB |
Data Reduction Table
| De-Dupe Rate (%) | Reduction Rate (X) |
|---|---|
| 95 | 20.00 |
| 90 | 10.00 |
| 80 | 5.00 |
| 75 | 4.00 |
| 70 | 3.33 |
| 65 | 2.86 |
| 60 | 2.50 |
| 55 | 2.22 |
| 50 | 2.00 |
| 45 | 1.82 |
| 40 | 1.67 |
| 35 | 1.54 |
| 30 | 1.43 |
| 25 | 1.33 |
| 20 | 1.25 |
| 15 | 1.18 |
| 10 | 1.11 |
| 5 | 1.05 |
Formulas
Formula for calculating Reduction Rate
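The reduction rate follows directly from the de-dupe percentage, consistent with the table above:

Reduction Rate (X) = 1 / (1 - De-Dupe Rate)

For example, 70% de-duplication gives 1 / (1 - 0.70) = 3.33x.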
Taking the usable capacity of a typical HyperScale appliance, 3,600GB, a 70% de-dupe rate (a 3.33x reduction) gives roughly 12,000GB of effective capacity.
Summary
HyperScale guarantees 12TB of effective capacity per CX-12 appliance; however, some workloads such as DEV/TEST private clouds and stateless VDI could see as much as 90% data reduction – that's 36,000GB of effective capacity. Do the numbers yourself: in-line de-duplication eliminates the need for lots of local flash drives or slower high-capacity SAS or SATA drives. HyperScale runs the same codebase as USX and, as such, uses RAM to perform the in-line de-duplication, which eliminates the need for add-in hardware cards or SSDs as staging capacity for de-duplication.
Atlantis HyperScale is a hyper-converged appliance running pre-installed USX software on either XenServer or VMware vSphere, on the hardware of your choice – Lenovo, HP, SuperMicro or Cisco.
How is it installed?
HyperScale comes pre-installed by Atlantis channel partners. It runs exactly the same software as USX; however, it is installed automatically from a USB key by the channel partner. When it is delivered to your datacenter, a simple five-step process gets the appliance ready to use.
The appliance is ready to use in about 30 minutes with three data stores ready for use. You can of course create more volumes and also attach and optimize external storage such as NAS/SAN in addition to the local flash devices that come with the appliance.
Atlantis HyperScale Server Specifications
| Server Specifications Per Node | CX-12 | CX-24 | CX-48 (Phase 2) |
|---|---|---|---|
| Server Compute | Dual Intel E5-2680 v3 | Dual Intel E5-2680 v3 | Dual Intel E5-2680 v3 |
| Hypervisor | VMware vSphere 5.5 or Citrix XenServer 6.5 | VMware vSphere 5.5 or Citrix XenServer 6.5 | VMware vSphere 5.5 or Citrix XenServer 6.5 |
| Memory | 256-512 GB | 384-512 GB | TBD |
| Networking | 2x 10GbE & 2x 1GbE | 2x 10GbE & 2x 1GbE | 2x 10GbE & 2x 1GbE |
| Local Flash Storage | 3x 400GB Intel 3710 SSD | 3x 800GB Intel 3710 SSD | TBD |
| Total All-Flash Effective Capacity (4 Nodes)* | 12 TB | 24 TB | 48 TB |

| Summary | |
|---|---|
| Failure Tolerance | 1 node failure (FTT=1) |
| Number of Deployed Volumes | 3 |
| IOPs per Volume | More than 50,000 IOPs |
| Latency per Volume | Less than 1ms |
| Throughput per Volume | More than 210 MB/s |
Key Differentiators vs other Hyper-converged Offerings
Apart from lower cost (another post to follow – or you can read this post from Chris Mellor at The Register), HyperScale runs on exactly the same codebase as USX. USX has advanced data services that provide very efficient, patented data reduction and IO acceleration technology. For a brief overview of the data services, please see this video.
Pricing
Sizing
Number of nodes = 4
SSDs per node = 3
SSD capacity = 400GB
Usable capacity per node = 1,200GB
Usable capacity per appliance with FTT=1 = 3,600GB
Effective capacity with 70% de-duplication = 12,000GB
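Working through the arithmetic: 4 nodes x 3 SSDs x 400GB = 4,800GB of raw flash; tolerating one node failure leaves 3 x 1,200GB = 3,600GB usable; and at 70% de-duplication (a 3.33x reduction rate) that yields approximately 3,600GB x 3.33 ≈ 12,000GB of effective capacity.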