Migrating VMware Cloud Director to Kubernetes

This post summarizes how you can migrate the VMware Cloud Director database from PostgreSQL running in the VCD appliance to a PostgreSQL pod running in Kubernetes, and then create new VCD cells running as pods in Kubernetes to run the VCD services. In summary, modernizing VCD into a modern application.

I wanted to experiment with VMware Cloud Director to see if it would run in Kubernetes. One of the reasons for this is to reduce resource consumption in my home lab. The VCD appliance can be quite a resource-hungry VM, needing a minimum of 2 vCPUs and 6 GB of RAM. Running VCD in Kubernetes definitely reduces this and frees up much-needed RAM for other applications. Running this workload in Kubernetes also brings faster deployment, higher availability, easier lifecycle management and operations, and additional benefits from the ecosystem such as observability tools.

Here’s a view of the current VCD appliance in the portal. 172.16.1.34 is the IP of the appliance, 172.16.1.0/27 is the network for the NSX-T segment that I’ve created for the VCD DMZ network. At the end of this post, you’ll see VCD running in Kubernetes pods with IP addresses assigned by the CNI instead.

Tanzu Kubernetes Grid Shared Services Cluster

I am using a Tanzu Kubernetes Grid cluster set up for shared services. It's the ideal place to run applications that, in the virtual machine world, would have been running in a traditional vSphere management cluster. I also run the Container Service Extension and App Launchpad Kubernetes pods in this cluster.

Step 1. Deploy PostgreSQL with Kubeapps into a Kubernetes cluster

If you have Kubeapps, this is the easiest way to deploy PostgreSQL.

Copy my settings below to create a PostgreSQL database server and the vcloud user and database that are required for the database restore.
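For reference, here is a minimal sketch of the equivalent chart values (assuming the Bitnami PostgreSQL chart; the passwords are placeholders, use your own):

# values.yaml sketch for bitnami/postgresql
auth:
  postgresPassword: "Vmware1!"   # postgres superuser password (placeholder)
  username: vcloud               # application user required by VCD
  password: "Vmware1!"           # vcloud user password (placeholder)
  database: vcloud               # database that the backup will be restored into
architecture: replication        # creates the postgresql-primary-0 and postgresql-read-0 pods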

Step 1. Alternatively, use Helm directly.

# Create database server using KubeApps or Helm, vcloud user with password

helm repo add bitnami https://charts.bitnami.com/bitnami

# Pull the chart, unzip then edit values.yaml
helm pull bitnami/postgresql
tar zxvf postgresql-11.1.11.tgz

helm install postgresql bitnami/postgresql -f /home/postgresql/values.yaml -n vmware-cloud-director

# Expose postgres service using load balancer
k expose pod -n vmware-cloud-director postgresql-primary-0 --type=LoadBalancer --name postgresql-public

# Get the IP address of the load balancer service
k get svc -n vmware-cloud-director postgresql-public

# Connect to database as postgres user from VCD appliance to test connection
psql --host 172.16.4.70 -U postgres -p 5432

# Type password you used when you deployed postgresql

# Quit
\q

Step 2. Backup database from VCD appliance and restore to PostgreSQL Kubernetes pod

Log into the VCD appliance using SSH.

# Stop vcd services on all VCD appliances
service vmware-vcd stop

# Backup database and important files on VCD appliance
/opt/vmware/appliance/bin/create_backup.sh

# Extract the backup archive into /opt/vmware/vcloud-director/data/transfer/backups

# Restore database using pg_dump backup file. Do this from the VCD appliance as it already has the postgres tools installed.

pg_restore --host 172.16.4.70 -U postgres -p 5432 -C -d postgres /opt/vmware/vcloud-director/data/transfer/backups/vcloud-database.sql
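# Optional: verify the restore by listing the databases and checking that vcloud is present
psql --host 172.16.4.70 -U postgres -p 5432 -c '\l'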

# Edit responses.properties and change the database server address from the load balancer IP to the in-cluster FQDN of the PostgreSQL service, e.g. postgresql-primary.vmware-cloud-director.svc.cluster.local
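# The in-cluster FQDN follows the Kubernetes pattern <service>.<namespace>.svc.cluster.local,
# so the new cells reach the database directly rather than via the external load balancer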

# Shut down the VCD appliance, it's no longer needed

Step 3. Deploy Helm Chart for VCD

# Pull the Helm Chart
helm pull oci://harbor.vmwire.com/library/vmware-cloud-director

# Uncompress the Helm Chart
tar zxvf vmware-cloud-director-0.5.0.tgz

# Edit the values.yaml to suit your needs
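As an illustration only (this values.yaml belongs to the custom chart above, so the exact keys may differ), the settings referenced later in this post look like this:

# Illustrative values.yaml sketch
replicas: 1                # number of VCD cells (pods); increased later to scale out
availabilityZones:
  enabled: false           # set to true to spread cells across availability zones (see High Availability below)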

# Deploy the Helm Chart
helm install vmware-cloud-director vmware-cloud-director --version 0.5.0 -n vmware-cloud-director -f /home/vmware-cloud-director/values.yaml

# Wait for about five minutes for the installation to complete

# Monitor logs
k logs -f  -n vmware-cloud-director vmware-cloud-director-0

Known Issues

If you see an error such as:

Error starting application: Unable to create marker file in the transfer spooling area: VfsFile[fileObject=file:///opt/vmware/vcloud-director/data/transfer/cells/4c959d7c-2e3a-4674-b02b-c9bbc33c5828]

This is caused by the transfer share having been created by the vcloud user on the original VCD appliance, which has a different Linux user ID (normally 1000 or 1001). We need to change the ownership so that it works with the new vcloud user.

Run the following commands to resolve this issue:

# Launch a bash session into the VCD pod
k exec -it -n vmware-cloud-director vmware-cloud-director-0 -- /bin/bash

# Change ownership of the /transfer share to the vcloud user
chown -R vcloud:vcloud /opt/vmware/vcloud-director/data/transfer
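# Verify the new ownership (owner and group should now both be vcloud)
ls -ld /opt/vmware/vcloud-director/data/transfer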

# type exit to quit
exit

Once that’s done, the cell can start and you’ll see the following:

Successfully verified transfer spooling area: VfsFile[fileObject=file:///opt/vmware/vcloud-director/data/transfer]
Cell startup completed in 2m 26s

Accessing VCD

The VCD pod is exposed using a load balancer in Kubernetes. Ports 443 and 8443 are exposed on a single IP, just as they are on the VCD appliance.

Run the following to obtain the new load balancer IP address of VCD.

k get svc -n vmware-cloud-director  vmware-cloud-director
vmware-cloud-director   LoadBalancer   100.64.230.197   172.16.4.71   443:31999/TCP,8443:30016/TCP   16m

Redirect your DNS server record to point to this new IP address for both the HTTP and VMRC services, e.g., 172.16.4.71.

If everything ran successfully, you should now be able to log into VCD. Here’s my VCD instance that I use for my lab environment which was previously running in a VCD appliance, now migrated over to Kubernetes.

Notice that the old cell is now inactive because it is powered off. It can now be removed from VCD and deleted from vCenter.

The pod vmware-cloud-director-0 is now running the VCD application. Notice its assigned IP address of 100.107.74.159. This is the pod’s IP address.

Everything else works as normal: any UI customizations and TLS certificates are kept just as they were before the migration, because we restored the database and used responses.properties to add the new cells.

Even opening a remote console to a VM will continue to work.

Load Balancer is NSX Advanced LB (Avi)

Avi provides the load balancing services automatically through the Avi Kubernetes Operator (AKO).

AKO automatically configures the services in Avi for you when services are exposed.

Deploy another VCD cell, I mean pod

It is now very easy to scale VCD by deploying additional replicas.

Edit the values.yaml file and change the replicas number from 1 to 2.
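In the values.yaml this is a one-line change:

# values.yaml: two VCD cells instead of one
replicas: 2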

# Upgrade the Helm Chart
helm upgrade vmware-cloud-director vmware-cloud-director --version 0.5.0 -n vmware-cloud-director -f /home/vmware-cloud-director/values.yaml

# Wait for about five minutes for the installation to complete

# Monitor logs
k logs -f  -n vmware-cloud-director vmware-cloud-director-1

When the VCD services start up successfully, you’ll notice that the cell will appear in the VCD UI and Avi is also updated automatically with another pool.

We can also see that Avi is load balancing traffic across the two pods.

Deploy as many replicas as you like.

Resource usage

Here’s a very brief overview of what we have deployed so far.

Notice that the two PostgreSQL pods together are using only around 700 MB of RAM. The VCD pods consume more, but this is still a vast improvement over the 6 GB that a single appliance needed previously.
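If metrics-server is installed in the cluster, you can pull the same numbers from the command line:

# Show CPU and memory usage per pod (requires metrics-server)
k top pods -n vmware-cloud-director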

High Availability

You can ensure that the VCD pods are scheduled on different Kubernetes worker nodes by using a multi-availability-zone topology. To do this, just change the values.yaml.

# Availability zones in deployment.yaml are set up for TKG and must match VsphereFailureDomain and VsphereDeploymentZones
availabilityZones:
  enabled: true

This ensures that if you scale up the vmware-cloud-director StatefulSet, Kubernetes will not place any two of the pods on the same worker node.
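Under the hood, this kind of placement is typically implemented with pod anti-affinity or topology spread constraints. Here is a sketch of such a rule (not the chart's exact template; the app label is an assumption):

# Sketch: require VCD pods to be scheduled into different availability zones
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: vmware-cloud-director   # assumed pod label
        topologyKey: topology.kubernetes.io/zone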

As you can see from the Kubernetes Dashboard output under Resource usage above, vmware-cloud-director-0 and vmware-cloud-director-1 pods are scheduled on different worker nodes.

More importantly, you can see that I have done the same for the postgresql-primary-0 and postgresql-read-0 pods. It is important to keep these separate in case a worker node, or the ESXi host it runs on, fails.

Finally

Here are a few screenshots of VCD, CSE and ALP all running in my Shared Services Kubernetes cluster.

Backing up the PostgreSQL database

For Day 2 operations such as backing up the PostgreSQL database, you can use Velero or simply take a backup of the database using the pg_dump tool.

Backing up the database with pg_dump using a Docker container

It's super easy to take a database backup using a Docker container; just make sure you have Docker running on your workstation and that it can reach the load balancer IP address for the PostgreSQL service.

docker run -it  -e PGPASSWORD=Vmware1! postgres:14.2  pg_dump  -h 172.16.4.70 -U postgres vcloud > backup.sql

The command will create a file in the current working directory named backup.sql.
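To restore from this plain-SQL dump, the matching command runs in the other direction (a sketch, assuming the same image, credentials and target database):

docker run -i -e PGPASSWORD=Vmware1! postgres:14.2 psql -h 172.16.4.70 -U postgres -d vcloud < backup.sql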

Backing up the database with Velero

Please see this other post on how to set up Velero and Restic to back up Kubernetes pods and persistent volumes.

To create a backup of the PostgreSQL database using Velero run the following command.

velero backup create postgresql --ordered-resources 'statefulsets=vmware-cloud-director/postgresql-primary' --include-namespaces=vmware-cloud-director

Describe the backup

velero backup describe postgresql

Show backup logs

velero backup logs postgresql

To delete the backup

velero backup delete postgresql
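If you ever need to restore from that backup, Velero can recreate the resources with:

velero restore create --from-backup postgresql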

Running Kubernetes Dashboard with signed certificates

In a previous post I went through how to deploy the Kubernetes Dashboard into a Kubernetes cluster with default settings, running with a self-signed certificate. This post covers how to update the configuration to use a signed certificate. I’m a fan of Let’s Encrypt so will be using a signed wildcard certificate from Let’s Encrypt for this post.

You can prepare Let’s Encrypt by referring to a previous post here.

Step 1. Create a new namespace

Create a new namespace for Kubernetes Dashboard

kubectl create ns kubernetes-dashboard

Step 2. Upload certificates

Upload your certificate and private key to $HOME/certs in pem format. Let’s Encrypt just happens to issue certificates in pem format with the following names:

cert.pem and privkey.pem

All we need to do is to rename these to:

tls.crt and tls.key

And then upload them to $HOME/certs on the machine where kubectl is installed.
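For example, assuming the usual Let's Encrypt live directory layout (adjust the domain path to your own):

# Copy and rename the Let's Encrypt files into $HOME/certs
mkdir -p $HOME/certs
cp /etc/letsencrypt/live/example.com/cert.pem $HOME/certs/tls.crt
cp /etc/letsencrypt/live/example.com/privkey.pem $HOME/certs/tls.key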

Step 3. Create secret

Create the secret for the custom certificate by running this command.

kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kubernetes-dashboard

You can check that the secret is ready by issuing the following command:

kubectl describe secret -n kubernetes-dashboard kubernetes-dashboard-certs
Name:         kubernetes-dashboard-certs
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
tls.key:  1705 bytes
tls.crt:  1835 bytes

Step 4. Edit the deployment

We need to download the deployment yaml and then edit it to ensure that it uses the Let’s Encrypt signed certificates.

Run the following command to download the Kubernetes Dashboard deployment yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Now edit it by using your favorite editor and add in the following two lines

        - --tls-cert-file=/tls.crt
        - --tls-key-file=/tls.key

under the following

Deployment – kubernetes-dashboard – spec.template.spec.containers.args

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --tls-cert-file=/tls.crt
            - --tls-key-file=/tls.key
            - --auto-generate-certificates

Step 5. Expose Kubernetes Dashboard using a load balancer

Let’s expose the app using a load balancer. I’m using NSX ALB (Avi), but the code below can be used with any load balancer.

Continue editing the recommended.yaml file with the following contents from line 32:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer

Save changes to the file. Now we’re ready to deploy.

If you want to use the Avi Services API (the Kubernetes Gateway API), then add labels to the service, like this. This will ensure that the service uses the Avi gateway.

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    ako.vmware.com/gateway-name: gateway-tkg-workload-vip
    ako.vmware.com/gateway-namespace: default
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer

Step 6. Deploy Kubernetes Dashboard

Deploy the app with the following command:

kubectl apply -f recommended.yaml

Step 7. Get full access to the cluster for Kubernetes Dashboard

To give the kubernetes-dashboard service account full cluster access, run the following:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

Step 8. Obtain the login token

To log in, we’ll need to obtain a token with the following command:

kubectl describe -n kubernetes-dashboard secret kubernetes-dashboard-token

Copy just the token and paste it into the browser to log in. Enjoy a secure connection to Kubernetes Dashboard!
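Note that on newer clusters (Kubernetes 1.24 and later) the ServiceAccount token secret is no longer created automatically; in that case you can mint a token instead:

kubectl -n kubernetes-dashboard create token kubernetes-dashboard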

Using Avi’s Support for Gateway API

Avi (NSX Advanced Load Balancer) supports Kubernetes Gateway API. This post shows how to install and use the Gateway API to expose applications using this custom resource definition (CRD).

Introduction

Gateway API is an open source project managed by the SIG-NETWORK community. It is a collection of resources that model service networking in Kubernetes. These resources – GatewayClass, Gateway, HTTPRoute, TCPRoute, Service, etc – aim to evolve Kubernetes service networking through expressive, extensible, and role-oriented interfaces that are implemented by many vendors and have broad industry support.

https://gateway-api.sigs.k8s.io/

For a quick introduction to the Kubernetes Gateway API, read this link and this link from the Avi documentation.

Why use Gateway API?

You would want to use the Gateway API if you had the following requirements:

  1. Network segmentation – exposing applications from the same Kubernetes cluster to different network segments
  2. Shared IP – exposing multiple services that use both TCP and UDP ports on the same IP address

NSX Advanced Load Balancer supports both of these requirements through the use of the Gateway API. The following section describes how this is implemented.

The Gateway API introduces a few new resource types:

GatewayClasses are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.

Gateways are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.

AVI Infra Setting

AviInfraSetting provides a way to segregate Layer-4/Layer-7 virtual services so that they take on properties based on different underlying infrastructure components, such as the Service Engine Group, the intended VIP network, etc.

A sample Avi Infra Setting is as shown below:

apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: aviinfrasetting-tkg-workload-vip
spec:
  seGroup:
    name: tkgvsphere-tkgworkload-group10
  network:
    vipNetworks:
      - networkName: tkg-workload-vip
        cidr: 172.16.4.64/27
    enableRhi: false

AviInfraSetting is a cluster-scoped CRD that can be attached to the intended Services, and this attachment is done using the Gateway APIs.

GatewayClass

Gateway APIs provide interfaces to structure Kubernetes service networking.

AKO supports Gateway APIs via the servicesAPI flag in the values.yaml.
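For reference, the flag lives in AKO's own Helm values (a sketch; check your AKO version's documentation for the exact location of this setting):

# AKO values.yaml sketch: enable Gateway (services) API support
AKOSettings:
  servicesAPI: true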

The AviInfraSetting resource can be attached to a GatewayClass object via .spec.parametersRef, as shown below:

apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: gatewayclass-tkg-workload-vip
spec:
  controller: ako.vmware.com/avi-lb
  parametersRef:
    group: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-workload-vip

Gateway

The Gateway object provides a way to configure multiple Services as backends to the Gateway using label matching. The labels are specified as constant key-value pairs, the keys being ako.vmware.com/gateway-namespace and ako.vmware.com/gateway-name. The values corresponding to these keys must match the Gateway namespace and name respectively for AKO to consider the Gateway valid. If either of the label keys is not provided as part of matchLabels, or the namespace/name provided in the label values does not match the actual Gateway namespace/name, AKO will consider the Gateway invalid.

Please see https://avinetworks.com/docs/ako/1.5/gateway/.

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: gateway-tkg-workload-vip
  namespace: default
spec:
  gatewayClassName: gatewayclass-tkg-workload-vip
  listeners:
  - protocol: TCP
    port: 80
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: gateway-tkg-workload-vip
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: TCP
    port: 443
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: gateway-tkg-workload-vip
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service

How to use GatewayAPI

Tying all of these CRDs together.

A Gateway uses a GatewayClass, which in turn uses an AviInfraSetting. Therefore, when a Gateway is used by a Service via the relevant labels, that Service is exposed on the network referenced by the AviInfraSetting's .spec.network.vipNetworks.

https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes/blob/master/docs/crds/avinfrasetting.md#aviinfrasetting-with-servicesingressroutes

In your Helm charts, for any service that previously needed a LoadBalancer service, you would now use ClusterIP instead and add labels such as the following:

apiVersion: v1
kind: Service
metadata:
  name: web-statefulset-service-1
  namespace: default
  labels:
    ako.vmware.com/gateway-name: gateway-tkg-workload-vip
    ako.vmware.com/gateway-namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP

The Labels

ako.vmware.com/gateway-name: gateway-tkg-workload-vip
ako.vmware.com/gateway-namespace: default

together with the ClusterIP type, tell the AKO operator to use the Gateway. Each Gateway sits on a separate network segment for traffic separation, selected via its spec.gatewayClassName, and the GatewayClass in turn selects the AviInfraSetting via spec.parametersRef.name.

Playing with Tanzu – persistent volume claims, deployments & services

Deploying your first pod with a persistent volume claim and service on vSphere with Tanzu. With sample code for you to try.

Learning the k8s ropes…

This is not a how-to article to get vSphere with Tanzu up and running, there are plenty of guides out there, here and here. This post is more of a “let’s have some fun with Kubernetes now that I have a vSphere with Tanzu cluster to play with”.

Answering the following question is a good start to getting to grips with Kubernetes from a VMware perspective.

How do I do things that I did in the past in a VM but now do it with Kubernetes in a container context instead?

For example building the certbot application in a container instead of a VM.

Let’s try to create an Ubuntu Deployment that deploys one Ubuntu container into a vSphere Pod with persistent storage and a load balancer service from NSX-T, so we can get to the /bin/bash shell of the deployed container.

Let’s go!

I created two yaml files for this, accessible from Github. You can read up on what these objects are here.

certbot-deployment.yaml (Kubernetes Deployment specification): deploys one Ubuntu pod, claims a 16Gi volume mounted at /mnt/sdb, and creates a load balancer to enable remote management with SSH. GitHub: ubuntu-deployment.yaml
certbot-pvc.yaml (persistent volume claim specification): creates a 16Gi persistent volume from the underlying vSphere storage class named tanzu-demo-storage; the PVC is then consumed by the Deployment. GitHub: ubuntu-pvc.yaml
Table 1. The only two files that you need.

Here’s the certbot-deployment.yaml file that shows the required fields and object spec for a Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: certbot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: certbot
  template:
    metadata:
      labels:
        app: certbot
    spec:
      volumes:
      - name: certbot-storage
        persistentVolumeClaim:
          claimName: certbot-pvc
      containers:
      - name: ubuntu
        image: ubuntu:latest
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/mnt/sdb"
          name: certbot-storage
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: certbot
  name: certbot
spec:
  ports:
  - port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: certbot
  sessionAffinity: None
  type: LoadBalancer

Here’s the certbot-pvc.yaml file that shows the required fields and object spec for a Kubernetes Persistent Volume Claim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certbot-pvc
  labels:
    storage-tier: tanzu-demo-storage
    availability-zone: home
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: tanzu-demo-storage
  resources:
    requests:
        storage: 16Gi

First, deploy the PVC with this command:

kubectl apply -f certbot-pvc.yaml

Then deploy the deployment with this command:

kubectl apply -f certbot-deployment.yaml

Magic happens and you can monitor the vSphere Client and kubectl for status. Here are a couple of screenshots to show you what’s happening.

kubectl describe deployment certbot
Name:                   certbot
Namespace:              new
CreationTimestamp:      Thu, 11 Mar 2021 23:40:25 +0200
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=certbot
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=certbot
  Containers:
   ubuntu:
    Image:      ubuntu:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sleep
      3650d
    Environment:  <none>
    Mounts:
      /mnt/sdb from certbot-storage (rw)
  Volumes:
   certbot-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  certbot-pvc
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   certbot-68b4747476 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  44m   deployment-controller  Scaled up replica set certbot-68b4747476 to 1
kubectl describe pvc
Name:          certbot-pvc
Namespace:     new
StorageClass:  tanzu-demo-storage
Status:        Bound
Volume:        pvc-418a0d4a-f4a6-4aef-a82d-1809dacc9892
Labels:        availability-zone=home
               storage-tier=tanzu-demo-storage
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
               volumehealth.storage.kubernetes.io/health: accessible
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      16Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    certbot-68b4747476-pq5j2
Events:        <none>
kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
certbot     1/1     1            1           47m


kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
certbot-68b4747476-pq5j2     1/1     Running   0          47m


kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
certbot-pvc   Bound    pvc-418a0d4a-f4a6-4aef-a82d-1809dacc9892   16Gi       RWO            tanzu-demo-storage   84m

Let’s log into our pod, note the name from the kubectl get pods command above.

certbot-68b4747476-pq5j2

It’s not yet possible to log into the pod using SSH since this is a fresh container that does not have SSH installed, so let’s log in first using kubectl and install SSH.

kubectl exec --stdin --tty certbot-68b4747476-pq5j2 -- /bin/bash

You will then be inside the container at the /bin/bash prompt.

root@certbot-68b4747476-pq5j2:/# ls
bin   dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
boot  etc  lib   lib64  media   opt  root  sbin  sys  usr
root@certbot-68b4747476-pq5j2:/#

Let’s install some tools and configure SSH.

apt-get update
apt-get install iputils-ping
apt-get install ssh

passwd root

service ssh restart
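# Note (assumption): a stock Ubuntu image refuses root password logins over SSH by default.
# If the SSH login further below is rejected, permit it and restart sshd again:
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
service ssh restart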

exit

Before we can log into the container over an SSH connection, we need to find out what the external IP is for the SSH service that the NSX-T load balancer configured for the deployment. You can find this using the command:

kubectl get services

kubectl get services
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
certbot     LoadBalancer   10.96.0.44   172.16.2.3    22:31731/TCP   51m

The IP that we use to get to the Ubuntu container over SSH is 172.16.2.3. Let’s try that with a PuTTY/terminal session…

login as: root
certbot@172.16.2.3's password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 4.19.126-1.ph3-esx x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

$ ls
bin   dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
boot  etc  lib   lib64  media   opt  root  sbin  sys  usr
$ df
Filesystem     1K-blocks   Used Available Use% Mounted on
overlay           258724 185032     73692  72% /
/mnt/sdb        16382844  45084  16321376   1% /mnt/sdb
tmpfs             249688     12    249676   1% /run/secrets/kubernetes.io/serviceaccount
/dev/sda          258724 185032     73692  72% /dev/termination-log
$

You can see that there is a 16 GiB mount point at /mnt/sdb, just as we specified, and that remote SSH access is working.