VMware Cloud Director, Container Service Extension and App Launchpad Running in Kubernetes

I’ve been experimenting with the VMware Cloud Director, Container Service Extension and App Launchpad applications and wanted to test if these applications would run in Kubernetes.

The short answer is yes!

I initially deployed these apps as standalone Docker containers to see if they would run as containers at all. I wanted to eventually get them running in a Kubernetes cluster to benefit from all the goodies that Kubernetes provides.

Packaging the apps wasn’t too difficult; it just needed patience and a lot of Googling. The process was as follows:

  • run a Docker container from a base Linux image: CentOS for VCD and Photon OS for ALP and CSE.
  • prepare all the prerequisites inside the container, such as running yum update or tdnf update.
  • commit the container as an image and push it to a Harbor registry.
  • build a Helm chart that deploys the applications from those images, together with a shell script that runs when the container starts to install and launch the applications.

Well, it’s not that simple, but you can take a look at the code for all three Helm charts on my Github or pull them from my public Harbor repository.
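To give a feel for the image side of that workflow, here’s a rough sketch of the run, update, commit and push cycle for one of the Photon-based images. The names, tags and package steps are illustrative, not the literal commands behind the published images:

# Start an interactive container from the Photon OS base image
docker run -it --name alp-build photon:4.0 /bin/bash
# inside the container: bring the packages up to date, then exit
tdnf update -y
exit
# back on the host: commit the container as an image and push it to Harbor
docker commit alp-build harbor.vmwire.com/library/app-launchpad:base
docker login harbor.vmwire.com
docker push harbor.vmwire.com/library/app-launchpad:base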

VMware Cloud Director

Github: https://github.com/hugopow/vmware-cloud-director

Helm Chart: helm pull oci://harbor.vmwire.com/library/vmware-cloud-director

How to install: Update values.yaml and then run

helm install vmware-cloud-director oci://harbor.vmwire.com/library/vmware-cloud-director --version 0.5.0 -n vmware-cloud-director

Notice how easy that was to install?

The values.yaml file is the only file you’ll need to edit; just update it to suit your environment.

# Default values for vmware-cloud-director.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

installFirstCell:
  enabled: true

installAdditionalCell:
  enabled: false

storageClass: iscsi
pvcCapacity: 2Gi

vcdNfs:
  server: 10.92.124.20
  mountPath: /mnt/nvme/vcd-k8s

vcdSystem:
  user: administrator
  password: Vmware1!
  email: admin@domain.local
  systemName: VCD
  installationId: 1

postgresql:
  dbHost: postgresql.vmware-cloud-director.svc.cluster.local
  dbName: vcloud
  dbUser: vcloud
  dbPassword: Vmware1!

# Availability zones in deployment.yaml are setup for TKG and must match VsphereFailureDomain and VsphereDeploymentZones
availabilityZones:
  enabled: false

httpsService:
  type: LoadBalancer
  port: 443

consoleProxyService:
  port: 8443

publicAddress:
  uiBaseUri: https://vcd-k8s.vmwire.com
  uiBaseHttpUri: http://vcd-k8s.vmwire.com
  restapiBaseUri: https://vcd-k8s.vmwire.com
  restapiBaseHttpUri: http://vcd-k8s.vmwire.com
  consoleProxy: vcd-vmrc.vmwire.com

tls:
  certFullChain: |-
    -----BEGIN CERTIFICATE-----
          wildcard certificate
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
          intermediate certificate
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
          root certificate
    -----END CERTIFICATE-----
  certKey: |-
    -----BEGIN PRIVATE KEY-----
          wildcard certificate private key
    -----END PRIVATE KEY-----

The installation process is quite fast: less than three minutes to get the first pod up and running and two minutes for each subsequent pod. That means a multi-cell VCD system can be up and running in less than ten minutes.

I’ve deployed VCD as a StatefulSet with three replicas. Since the replica count is set to three, three VCD pods are deployed; in the old world these would be the cells. Here you can see three pods running, which provides both load balancing and high availability. The other pod is the PostgreSQL database that these cells use. You should also be able to see that Kubernetes has scheduled each pod on a different worker node; I have three worker nodes in this Kubernetes cluster.
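Kubernetes will often spread StatefulSet replicas across nodes by itself, but if you want to make that behaviour explicit, pod anti-affinity on the StatefulSet template is the standard way. Below is a minimal sketch of such a StatefulSet; the labels, names and image are illustrative and not copied from the chart:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vmware-cloud-director
spec:
  serviceName: vmware-cloud-director
  replicas: 3
  selector:
    matchLabels:
      app: vmware-cloud-director
  template:
    metadata:
      labels:
        app: vmware-cloud-director
    spec:
      affinity:
        podAntiAffinity:
          # prefer to place each cell on a different worker node
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: vmware-cloud-director
      containers:
        - name: vmware-cloud-director
          image: harbor.vmwire.com/library/vmware-cloud-director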

Below is the view in VCD of the three cells.

The StatefulSet also has a LoadBalancer service configured to load balance the HTTP and console proxy traffic on TCP 443 and TCP 8443 respectively.

You can see the LoadBalancer service has configured the services for HTTP and console proxy. Note that this is done automatically by Kubernetes using a manifest in the Helm chart.
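For reference, a LoadBalancer Service exposing both ports looks roughly like this. This is a minimal sketch; the chart’s own manifest is what actually gets applied:

apiVersion: v1
kind: Service
metadata:
  name: vmware-cloud-director
spec:
  type: LoadBalancer
  selector:
    app: vmware-cloud-director
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: consoleproxy
      port: 8443
      targetPort: 8443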

Migrating an existing VCD instance to Kubernetes

If you want to migrate an existing instance to Kubernetes, then use this post here.

Container Service Extension

Github: https://github.com/hugopow/container-service-extension

Helm Chart: helm pull oci://harbor.vmwire.com/library/container-service-extension

How to install: Update values.yaml and then run

helm install container-service-extension oci://harbor.vmwire.com/library/container-service-extension --version 0.2.0 -n container-service-extension

Here’s CSE running as a pod in Kubernetes. Since CSE is a stateless application, I’ve configured it to run as a Deployment.
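If you want to verify this in your own cluster after installing the chart, the usual kubectl commands will show the Deployment and its pod (the namespace matches the install command above):

kubectl get deploy,po -n container-service-extension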

CSE also does not need a database, as it communicates with VCD purely through a message bus such as MQTT or RabbitMQ. Additionally, no external access to CSE is required since everything goes via VCD, so no load balancer is needed either.

You can see that when CSE is idle it needs only 1 millicore of CPU and 102 MiB of RAM. In terms of resource requirements this is far better than running CSE in a VM, and it is one of the advantages of running pods over VMs: pods use considerably fewer resources.
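If you want to put a ceiling on what the CSE pod can consume, a conservative requests and limits block in the Deployment’s container spec would look something like this. The numbers are illustrative, derived from the idle figures above rather than taken from the chart:

resources:
  requests:
    cpu: 10m        # roughly what CSE needs when idle
    memory: 128Mi
  limits:
    cpu: 500m       # headroom for cluster create/delete operations
    memory: 512Mi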

App Launchpad

Github: https://github.com/hugopow/app-launchpad

Helm Chart: helm pull oci://harbor.vmwire.com/library/app-launchpad

How to install: Update values.yaml and then run

helm install app-launchpad oci://harbor.vmwire.com/library/app-launchpad --version 0.4.0 -n app-launchpad

The values.yaml file is the only file you’ll need to edit; just update it to suit your environment.

# Default values for app-launchpad.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

alpConnect:
  saUser: "svc-alp"
  saPass: Vmware1!
  url: https://vcd-k8s.vmwire.com
  adminUser: administrator@system
  adminPass: Vmware1!
  mqtt: true
  eula: accept
# If you accept the EULA then type "accept" in the EULA key value to install ALP. You can find the EULA in the README.md file.

I’ve already written an article about ALP here. That article contains a lot more detail, so below I’ll just share a few screenshots of ALP.

Just like CSE, ALP is a stateless application and is deployed as a Deployment. ALP also does not require external access through a load balancer as it too communicates with VCD using the MQTT or RabbitMQ message bus.

You can see that ALP, when idle, requires just 3 millicores of CPU and 400 MiB of RAM.

ALP can be deployed with multiple instances to provide load balancing and high availability. This is done by deploying RabbitMQ and connecting ALP and VCD to the same exchange. VCD does not support multiple instances of ALP if MQTT is used.

When RabbitMQ is configured, ALP can be scaled by increasing the Deployment’s replica count to two or more; Kubernetes will then deploy additional ALP pods.
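With RabbitMQ in place, scaling out is a one-liner. This assumes the Deployment name matches the Helm release name used earlier:

kubectl scale deployment app-launchpad -n app-launchpad --replicas=2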

Running VMware Cloud Director App Launchpad in Kubernetes

VMware Cloud Director App Launchpad allows users to deploy applications from public and private registries very easily into their VCD clouds, either as virtual machines or as containers into Kubernetes clusters provisioned into VCD by Container Service Extension.

The official documentation is here.

Here are a few screenshots of ALP in action in VCD.

How is App Launchpad Installed in a VM?

Installing ALP requires a Linux system, installing the application from an RPM file, and then working through a series of configuration commands to connect ALP to the VCD system. Tedious at best and prone to errors.

This post shows how you can run ALP as a Kubernetes pod in a Kubernetes cluster instead of running ALP in a VM.

Disclaimer: This is unsupported. This post is an example of how you can run App Launchpad in Kubernetes instead of deploying it on a traditional VM. Use at your own risk. Please continue to run ALP in supported configurations in production environments.

VMs vs Containers

Running containers in Kubernetes instead of VMs provides a number of benefits, such as:

  • Containers are more lightweight than VMs, as their images are measured in megabytes rather than gigabytes
  • Containers require fewer IT resources to deploy, run, and manage
  • Containers spin up in milliseconds
  • Since they are an order of magnitude smaller, a single system can host many more containers than VMs
  • Containers are easier to deploy and fit in well with infrastructure as code concepts
  • Developing, testing, running and managing applications are easier and more efficient with containers.

A short list; you can of course read more here.

Running App Launchpad in a Kubernetes cluster

What have I done to ALP to make it work as a container running in a Kubernetes cluster?

  • Built a Docker image based on the Photon OS image, with all the prerequisites needed to run ALP installed.
  • Built a Helm chart to easily deploy ALP into any Kubernetes cluster (you can render its manifests locally before installing, as shown below).
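If you want to see what the chart will actually create before deploying it, helm template renders the manifests locally without touching the cluster. This assumes a Helm version with OCI registry support (which the install commands above already require) and a local values.yaml, as used later in this post:

helm template app-launchpad oci://harbor.vmwire.com/library/app-launchpad --version 0.4.0 -f values.yaml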

What does the Helm chart look like?

There are three main files in the Helm chart that make this work:

  • values.yaml - holds the configuration information which can be changed by the user, such as parameters for the VCD system that ALP will connect to.
  • deployment.yaml - the Kubernetes Deployment that uses the other two files to deploy the ALP application into Kubernetes.
  • configmap.yaml - contains the run-alp.sh script that installs and configures ALP using the parameters in the values.yaml file.
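The way those pieces hang together is the standard ConfigMap-as-script pattern: the Deployment mounts the ConfigMap as a file and runs it as the container’s command. A simplified sketch of that wiring follows; the names and paths here are illustrative, not copied from the chart:

containers:
  - name: app-launchpad
    image: harbor.vmwire.com/library/app-launchpad
    # run the script delivered by the ConfigMap at container start
    command: ["/bin/bash", "/scripts/run-alp.sh"]
    volumeMounts:
      - name: run-script
        mountPath: /scripts
volumes:
  - name: run-script
    configMap:
      name: app-launchpad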

You can find the Helm chart on my Github repo here.

How to deploy ALP into Kubernetes?

Pull the Helm chart from my registry

helm pull oci://harbor.vmwire.com/library/app-launchpad

Extract it to your local directory

tar zxvf app-launchpad-0.4.0.tgz

You’ll find the values.yaml file in the /app-launchpad directory. Edit it to suit your environment and accept the ALP EULA; you’ll also find the EULA in the README.md file.

alpConnect:
  saUser: "svc-alp"
  saPass: Vmware1!
  url: https://vcd.vmwire.com
  adminUser: administrator@system
  adminPass: Vmware1!
  mqtt: true
  eula: accept
# If you accept the EULA then type "accept" in the EULA key value to install ALP.

You can either package the chart and place it into your own registry or just use mine.

To install the chart, run

kubectl create ns app-launchpad

helm install app-launchpad oci://harbor.vmwire.com/library/app-launchpad -n app-launchpad -f /home/alp/app-launchpad/values.yaml

You’ll see output like this

NAME: app-launchpad
LAST DEPLOYED: Fri Mar 18 09:54:16 2022
NAMESPACE: app-launchpad
STATUS: deployed
REVISION: 1
TEST SUITE: None

Running the following command will show that the deployment is successful

helm list -n app-launchpad
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
app-launchpad   app-launchpad   1               2022-03-18 09:54:16.560871812 +0000 UTC deployed        app-launchpad-0.4.0     2.1.1

Running the following commands, you’ll see that the pod has started

kubectl get deploy -n app-launchpad
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
app-launchpad   1/1     1            1           25m
kubectl get po -n app-launchpad
NAME                             READY   STATUS    RESTARTS   AGE
app-launchpad-669786b6dd-p8fjw   1/1     Running   0          25m

Getting the logs, you’ll see something like this

kubectl logs app-launchpad-669786b6dd-p8fjw -n app-launchpad
Uninstalling...
Removed /etc/systemd/system/multi-user.target.wants/alp.service.
Removed /etc/systemd/system/multi-user.target.wants/alp-deployer.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
warning: %postun(vmware-alp-2.1.1-19234432.x86_64) scriptlet failed, exit status 1
warning: /home/vmware-alp-2.1.1-19234432.ph3.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 001e5cc9: NOKEY
Verifying...                          ########################################
Preparing...                          ########################################
Updating / installing...
vmware-alp-2.1.1-19234432             ########################################
New installing...
Found the /opt/vmware/alp/log, change log directory owner and permission ...
chmod: /opt/vmware/alp/log/*: No such file or directory
chown: /opt/vmware/alp/log/*: No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/alp.service → /lib/systemd/system/alp.service.
Created symlink /etc/systemd/system/multi-user.target.wants/alp-deployer.service → /lib/systemd/system/alp-deployer.service.
Setup ALP connections
VMWARE END USER LICENSE AGREEMENT
Last updated: 03 May 2021
--- snipped ---
Cloud Director Setting for App Launchpad
+--------------------------------+-----------------------------------------------+
|             Cloud Director URL | https://vcd.vmwire.com                        |
|  App Launchpad Service Account | svc-alp                                       |
| App Launchpad Service Password | Vmware1!                                      |
|                   MQTT Triplet | VMware/AppLaunchpad/1.0.0                     |
|                     MQTT Token | e089774e-389c-4e12-82d0-11378a30981d          |
|          MQTT Topic of Monitor | topic/extension/VMware/AppLaunchpad/1.0.0/ext |
|         MQTT Topic of Response | topic/extension/VMware/AppLaunchpad/1.0.0/vcd |
|   App Launchpad extension UUID | 9ba4f6c8-a1e4-3a57-bd4c-e5ca5c2f8375          |
+--------------------------------+-----------------------------------------------+
Successfully connected and configured with Cloud Director for App Launchpad.
start ALP Deployer service
Start ALP service
==> /opt/vmware/alp/deployer/log/deployer/default.log <==
{"level":"info","timestamp":"2022-03-18T09:54:24.446Z","caller":"cmd/deployer.go:68","msg":"Starting server","Config":{"ALP":{"System":{"Deployer":{"AuthToken":"***"}},"VCDEndpoint":{"URL":"https://vcd.vmwire.com","FingerprintsSHA256":"f4:e0:1b:7c:9c:d2:da:15:94:52:58:6f:80:02:2a:46:8f:ab:a5:91:d7:43:f6:8b:85:60:23:16:93:8b:2a:87"},"Deployer":{"Host":"127.0.0.1","Port":8087,"KubeRESTClient":{"QPS":256,"Burst":512,"Timeout":180000,"CertificateValidation":false},"ChartCacheSize":128}},"Logging":{"Stdout":false,"File":{"Path":"log/deployer/"},"Level":{"Com":{"VMware":{"ALP":"INFO"}}}}}}
{"level":"info","timestamp":"2022-03-18T09:54:24.550Z","caller":"server/manager.go:59","msg":"The manager is starting mux-router"}
 __     __  __  __  __        __     _      ____    _____            _      _       ____
 \ \   / / |  \/  | \ \      / /    / \    |  _ \  | ____|          / \    | |     |  _ \
  \ \ / /  | |\/| |  \ \ /\ / /    / _ \   | |_) | |  _|           / _ \   | |     | |_) |
   \ V /   | |  | |   \ V  V /    / ___ \  |  _ <  | |___         / ___ \  | |___  |  __/
    \_/    |_|  |_|    \_/\_/    /_/   \_\ |_| \_\ |_____|       /_/   \_\ |_____| |_|

  :: Spring Boot Version : 2.4.13
  :: VMware vCloud Director App LaunchPad Version : 2.1.1-19234432, Build Date: Thu Jan 20 02:18:37 GMT 2022
=================================================================================================================

What next?

It will take around two minutes until ALP is ready.
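If you’d rather watch it come up than wait, you can follow the pod status or tail the logs (same namespace as the install above):

kubectl get po -n app-launchpad -w
kubectl logs -f deploy/app-launchpad -n app-launchpad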

Open the VCD provider portal and click on the More menu to open up App Launchpad.

From here you can configure App Launchpad and enjoy using the app in a container running in a Kubernetes cluster.

Some other details

You’ll notice (if you deployed the Kubernetes Dashboard) that the pod uses minimal resources once it has started and settled into an idle state.

It uses pretty much no CPU and around 300 MB of memory. This is so much better than running it in a VM, right?

Note that I have used MQTT for the message bus between ALP and VCD. If you use RabbitMQ, you can in fact deploy multiple ALP pods and have Kubernetes run ALP as a clustered service. MQTT does not support multiple instances of ALP.

Just change the replicaCount value from 1 to 2, and also edit the configMap to change from MQTT to RabbitMQ.
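As a rough sketch, assuming the chart exposes the replica count as replicaCount and keeps the message-bus toggle under alpConnect as in the values.yaml shown earlier, an upgrade could look like this. The exact RabbitMQ connection settings depend on the chart, so treat these keys as placeholders:

helm upgrade app-launchpad oci://harbor.vmwire.com/library/app-launchpad -n app-launchpad -f values.yaml --set replicaCount=2 --set alpConnect.mqtt=false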

To finish off

I’ve found that moving my lab applications such as ALP and Container Service Extension to Kubernetes has freed up a lot of memory and CPU. This is the main use case for me as I run a lot of labs and demo environments. It is also just a lot easier to deploy these applications with Helm into Kubernetes than using virtual machines.

This is just one example of modernizing some of the VCPP applications to take advantage of the benefits of running in Kubernetes.

I hope this helps you too. Feel free to comment below if you find this useful. I am also working on improving my Container Service Extension Helm chart and will publish that when it is ready.