Single node clusters with TKG

Single-node clusters are a Tech Preview feature in TKG 2.1 and later on vSphere. It's not strictly a single-node cluster per se, but rather a collapsed Kubernetes node with both the control plane and the worker node roles on one virtual machine, which can be deployed as a single node or as a cluster of several such nodes.

Use cases include edge deployments and hardware-constrained environments.

You can deploy a single node, or several nodes, each of which has both the control plane and the worker node roles. To Kubernetes, each node is recognised as a control plane node, but pods are allowed to be scheduled on the nodes because spec.topology.variables sets controlPlaneTaint: false in the cluster object spec.

A few things to know about single-node clusters:

  • Supported on TKG 2.1 and newer with the standalone management cluster only; not supported with vSphere with Tanzu (TKG with Supervisor).
  • Single-node clusters are supported with Cluster Class based clusters only. Legacy clusters are not supported.
  • Single-node clusters otherwise behave just like any other TKG cluster, so they support everything you are used to.
  • You can deploy nodes that are both control plane and workers in odd numbers only (1, 3, 5 and so on). This is because Kubernetes still treats these nodes as control plane nodes, while allowing any pod to be scheduled on them. Scaling the cluster up from one node to three, five, seven and so on is possible with a simple one-line command: tanzu cluster scale <cluster-name> -c #. Here is a cluster with five nodes; as you can see, Kubernetes assigns the control-plane role to each node. On any other cluster type you would see the taint node-role.kubernetes.io/control-plane:NoSchedule, but this taint is removed for single-node clusters, as the check after the node listing below shows.
k get no -o wide
NAME                     STATUS   ROLES           AGE     VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
tkg-single-ngbmw-dcljq   Ready    control-plane   17m     v1.25.7+vmware.2   172.16.3.84   172.16.3.84   Ubuntu 20.04.6 LTS   5.4.0-144-generic   containerd://1.6.18-1-gdbc99e5b1
tkg-single-ngbmw-mm6tp   Ready    control-plane   9m51s   v1.25.7+vmware.2   172.16.3.85   172.16.3.85   Ubuntu 20.04.6 LTS   5.4.0-144-generic   containerd://1.6.18-1-gdbc99e5b1
tkg-single-ngbmw-mvdv2   Ready    control-plane   14m     v1.25.7+vmware.2   172.16.3.70   172.16.3.70   Ubuntu 20.04.6 LTS   5.4.0-144-generic   containerd://1.6.18-1-gdbc99e5b1
tkg-single-ngbmw-ngqxd   Ready    control-plane   12m     v1.25.7+vmware.2   172.16.3.75   172.16.3.75   Ubuntu 20.04.6 LTS   5.4.0-144-generic   containerd://1.6.18-1-gdbc99e5b1
tkg-single-ngbmw-tqq79   Ready    control-plane   3h1m    v1.25.7+vmware.2   172.16.3.82   172.16.3.82   Ubuntu 20.04.6 LTS   5.4.0-144-generic   containerd://1.6.18-1-gdbc99e5b1
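To confirm, describe one of the nodes; on these collapsed nodes the command below should return Taints: <none>, whereas on any other cluster type it returns the NoSchedule taint:

k describe node tkg-single-ngbmw-dcljq | grep Taints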
  • You can also scale down:
k get no
NAME                     STATUS   ROLES           AGE   VERSION
tkg-single-ngbmw-mm6tp   Ready    control-plane   18m   v1.25.7+vmware.2
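For example, scaling the cluster above up to five nodes and back down to one is a single command each way:

tanzu cluster scale tkg-single -c 5
tanzu cluster scale tkg-single -c 1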
  • You can register single-node clusters with TMC. This is possible because TKG sets the metadata for single-node clusters to the workload cluster type. You can see this by looking at the tkg-metadata config map with k get cm -n tkg-system-public tkg-metadata -o yaml; note type: workload on line 6 below.
apiVersion: v1
data:
  metadata.yaml: |
    cluster:
        name: tkg-single
        type: workload
        plan: dev
        kubernetesProvider: VMware Tanzu Kubernetes Grid
        tkgVersion: v2.2.0
        edition: tkg
        infrastructure:
            provider: vsphere
        isClusterClassBased: true
    bom:
        configmapRef:
            name: tkg-bom
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-29T14:47:14Z"
  name: tkg-metadata
  namespace: tkg-system-public
  resourceVersion: "250"
  uid: 944a120b-595c-4367-a570-db295af54d11

To deploy a single-node cluster, you can refer to the documentation here.

  • In summary: switch to the TKG management cluster context and enable single-node clusters with tanzu config set features.cluster.single-node-clusters true.
  • Create a cluster config file as normal and save it as a YAML file, for example tkg-single.yaml.
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
ALLOW_LEGACY_CLUSTER: false
INFRASTRUCTURE_PROVIDER: vsphere
CLUSTER_PLAN: dev
NAMESPACE: default
# CLUSTER_API_SERVER_PORT: # For deployments without NSX Advanced Load Balancer
CNI: antrea
ENABLE_DEFAULT_STORAGE_CLASS: false

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
#CONTROLPLANE_SIZE: small
#WORKER_SIZE: small

# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096

VSPHERE_CONTROL_PLANE_NUM_CPUS: 4
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096

# CONTROL_PLANE_MACHINE_COUNT:
# WORKER_MACHINE_COUNT:
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

#VSPHERE_CLONE_MODE: "fullClone"
VSPHERE_NETWORK: tkg-workload
# VSPHERE_TEMPLATE:
# VSPHERE_TEMPLATE_MOID:
# IS_WINDOWS_WORKLOAD_CLUSTER: false
# VIP_NETWORK_INTERFACE: "eth0"
VSPHERE_SSH_AUTHORIZED_KEY: <-- snipped -->
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: 
# VSPHERE_REGION:
# VSPHERE_ZONE:
# VSPHERE_AZ_0:
# VSPHERE_AZ_1:
# VSPHERE_AZ_2:
# USE_TOPOLOGY_CATEGORIES: false
VSPHERE_SERVER: vcenter.vmwire.com
VSPHERE_DATACENTER: home.local
VSPHERE_RESOURCE_POOL: tkg-vsphere-workload
VSPHERE_DATASTORE: lun01
VSPHERE_FOLDER: tkg-vsphere-workload
# VSPHERE_STORAGE_POLICY_ID
# VSPHERE_WORKER_PCI_DEVICES:
# VSPHERE_CONTROL_PLANE_PCI_DEVICES:
# VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST:
VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS: 'ethernet0.ctxPerDev=3,ethernet0.pnicFeatures=4,sched.cpu.shares=high'
# VSPHERE_WORKER_CUSTOM_VMX_KEYS: 'ethernet0.ctxPerDev=3,ethernet0.pnicFeatures=4,sched.cpu.shares=high'
# WORKER_ROLLOUT_STRATEGY: "RollingUpdate"
# VSPHERE_CONTROL_PLANE_HARDWARE_VERSION:
# VSPHERE_WORKER_HARDWARE_VERSION:
VSPHERE_TLS_THUMBPRINT: <-- snipped -->
VSPHERE_INSECURE: false
# VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip
# VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
# VSPHERE_ADDITIONAL_FQDN:
AVI_CONTROL_PLANE_HA_PROVIDER: true


#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

ADDITIONAL_IMAGE_REGISTRY_1: "harbor.vmwire.com"
ADDITIONAL_IMAGE_REGISTRY_1_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE: <-- snipped -->


# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
# TKG_PROXY_CA_CERT: ""

ENABLE_AUDIT_LOGGING: false

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false

Then use the --dry-run option and save the cluster object spec file with tanzu cluster create <name-of-new-cluster> -f tkg-single.yaml --dry-run > tkg-single-spec.yaml. This creates a new file called tkg-single-spec.yaml that you need to edit before creating the single-node cluster.

Edit the tkg-single-spec.yaml file and change the following sections.

Under spec.topology.variables, add the following:

- name: controlPlaneTaint
  value: false

Under spec.topology.workers, delete the entire block, including the workers section heading.

Your changed file should look like the example below.

apiVersion: csi.tanzu.vmware.com/v1alpha1
kind: VSphereCSIConfig
metadata:
  name: tkg-single
  namespace: default
spec:
  vsphereCSI:
    config:
      datacenter: /home.local
      httpProxy: ""
      httpsProxy: ""
      noProxy: ""
      region: null
      tlsThumbprint: <-- snipped -->
      useTopologyCategories: false
      zone: null
    mode: vsphereCSI
---
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: ClusterBootstrap
metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.25.7---vmware.2-tkg.1
  name: tkg-single
  namespace: default
spec:
  additionalPackages:
  - refName: metrics-server*
  - refName: secretgen-controller*
  - refName: pinniped*
  - refName: tkg-storageclass*
    valuesFrom:
      inline:
        infraProvider: ""
  csi:
    refName: vsphere-csi*
    valuesFrom:
      providerRef:
        apiGroup: csi.tanzu.vmware.com
        kind: VSphereCSIConfig
        name: tkg-single
  kapp:
    refName: kapp-controller*
---
apiVersion: v1
kind: Secret
metadata:
  name: tkg-single
  namespace: default
stringData:
  password: 
  username: administrator@vsphere.local
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    osInfo: ubuntu,20.04,amd64
    tkg/plan: dev
  labels:
    tkg.tanzu.vmware.com/cluster-name: tkg-single
  name: tkg-single
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 100.96.0.0/11
    services:
      cidrBlocks:
      - 100.64.0.0/13
  topology:
    class: tkg-vsphere-default-v1.0.0
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
      replicas: 1
    variables:
    - name: controlPlaneTaint
      value: false
    - name: cni
      value: antrea
    - name: controlPlaneCertificateRotation
      value:
        activate: true
        daysBefore: 90
    - name: additionalImageRegistries
      value:
      - caCert: <-- snipped -->
        host: harbor.vmwire.com
        skipTlsVerify: false
    - name: auditLogging
      value:
        enabled: false
    - name: podSecurityStandard
      value:
        audit: baseline
        deactivated: false
        warn: baseline
    - name: aviAPIServerHAProvider
      value: true
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: /home.local
        datastore: /home.local/datastore/lun01
        folder: /home.local/vm/tkg-vsphere-workload
        network: /home.local/network/tkg-workload
        resourcePool: /home.local/host/cluster/Resources/tkg-vsphere-workload
        server: vcenter.vmwire.com
        storagePolicyID: ""
        template: /home.local/vm/Templates/ubuntu-2004-efi-kube-v1.25.7+vmware.2
        tlsThumbprint: <-- snipped -->
    - name: user
      value:
        sshAuthorizedKeys:
        - <-- snipped -->
    - name: controlPlane
      value:
        machine:
          customVMXKeys:
            ethernet0.ctxPerDev: "3"
            ethernet0.pnicFeatures: "4"
            sched.cpu.shares: high
          diskGiB: 40
          memoryMiB: 8192
          numCPUs: 4
    - name: worker
      value:
        count: 1
        machine:
          diskGiB: 40
          memoryMiB: 4096
          numCPUs: 2
    version: v1.25.7+vmware.2-tkg.1
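
Finally, create the single-node cluster from the edited object spec file, using the same tanzu cluster create command but without --dry-run this time:

tanzu cluster create -f tkg-single-spec.yaml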

AviInfraSetting with IngressClass

Avi Infra Setting provides a way to segregate Layer 4/Layer 7 virtual services so that they take properties from different underlying infrastructure components, such as the Service Engine Group or the intended VIP network.

Here I have a different network that I want a new Ingress to use, in this case the tkg-wkld-trf-vip network (172.16.4.96/27). Let's assume it's used for 5G traffic connectivity and that its NSX-T T1 is connected to a different T0 VRF. This isolates the traffic between VRFs, so that we can expose certain applications on different VRFs.

In this example, I’ll change Grafana from using the default VIP network to the tkg-wkld-trf-vip network instead. You can read up on how this was originally done using the default VIP network in the previous post.

aviinfrasetting-tkg-wkld-trf-vip.yaml

---
apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: aviinfrasetting-tkg-wkld-trf-vip
spec:
  seGroup:
    name: tkg-workload1
  network:
    vipNetworks:
      - networkName: tkg-wkld-trf-vip
        cidr: 172.16.4.96/27
    enableRhi: false

Attaching Avi Infra Setting to Ingress

Avi Infra Settings can be applied to Ingress resources, using the IngressClass construct. IngressClass provides a way to configure Controller-specific load balancing parameters and applies these configurations to a set of Ingress objects. AKO supports listening to IngressClass resources in Kubernetes version 1.19+. The Avi Infra Setting reference can be provided in the Ingress Class as shown below:

aviingressclass-tkg-wkld-trf-vip.yaml

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: aviingressclass-tkg-wkld-trf-vip
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-trf-vip

dashboard-ingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: tanzu-system-dashboards
  annotations:
    ako.vmware.com/enable-tls: "true"
  labels:
    app: dashboard-ingress
spec:
  ingressClassName: aviingressclass-tkg-wkld-trf-vip
  rules:
    - host: "grafana.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: grafana
                port:
                  number: 80
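
Apply the manifests in order: the AviInfraSetting first, then the IngressClass that references it, and finally the updated Ingress:

k apply -f aviinfrasetting-tkg-wkld-trf-vip.yaml
k apply -f aviingressclass-tkg-wkld-trf-vip.yaml
k apply -f dashboard-ingress.yaml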

Below you can see that Grafana is now using the new AviInfraSetting and has been assigned an IP address of 172.16.4.98.

Introduction to Avi Ingress and Replacing Contour for Prometheus and Grafana

Avi Ingress is an alternative to Contour and NGINX ingress controllers.

Tanzu Kubernetes Grid ships with Contour as the default ingress controller, which Tanzu Packages uses to expose Prometheus and Grafana. Prometheus and Grafana are configured to use Contour if you set ingress: true in their values.yaml files.

This post details how to set Avi Ingress up and use it to expose these applications using signed TLS certificates.

Let’s start

Install AKO with Helm as normal, using ClusterIP as the service type in the AKO values.yaml config file.

Reference link to documentation:

https://avinetworks.com/docs/ako/1.9/networking-v1-ingress/

Create a secret for the ingress certificate. Using a wildcard certificate enables Avi to automatically secure all applications with the same TLS certificate.

The tls.key and tls.crt values must be in base64-encoded format.
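
Alternatively, if you have the certificate and key as plain PEM files, kubectl can create the secret and handle the base64 encoding for you:

kubectl create secret tls router-certs-default --cert=tls.crt --key=tls.key -n avi-system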

router-certs-default.yaml

apiVersion: v1
kind: Secret
metadata:
  name: router-certs-default
  namespace: avi-system
type: kubernetes.io/tls
data:
  tls.key: --snipped--
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVjVENDQTFtZ0F3SUJBZ0lTQTI0MDJNMStJN01kaTIwRWZlK2hlQitQTUEwR0NTcUdTSWIzRFFFQkN3VUEKTURJeEN6QUpCZ05WQkFZVEFsVlRNUll3RkFZRFZRUUtFdzFNWlhRbmN5QkZibU55ZVhCME1Rc3dDUVlEVlFRRApFd0pTTXpBZUZ3MHlNekF6TWpReE1qSTBNakphRncweU16QTJNakl4TWpJME1qRmFNQ1V4SXpBaEJnTlZCQU1NCkdpb3VkR3RuTFhkdmNtdHNiMkZrTVM1MmJYZHBjbVV1WTI5dE1Ga3dFd1lIS29aSXpqMENBUVlJS29aSXpqMEQKQVFjRFFnQUVmcEs2MUQ5bFkyQUZzdkdwZkhwRlNEYVl1alF0Nk05Z21yYUhrMG5ySHJhTUkrSEs2QXhtMWJyRwpWMHNrd2xDWEtrWlNCbzRUZmFlTDF6bjI1N0M1QktPQ0FsY3dnZ0pUTUE0R0ExVWREd0VCL3dRRUF3SUhnREFkCkJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBZEJnTlYKSFE0RUZnUVVxVjMydlU4Yzl5RFRpY3NVQmJCMFE0MFNsZFl3SHdZRFZSMGpCQmd3Rm9BVUZDNnpGN2RZVnN1dQpVQWxBNWgrdm5Zc1V3c1l3VlFZSUt3WUJCUVVIQVFFRVNUQkhNQ0VHQ0NzR0FRVUZCekFCaGhWb2RIUndPaTh2CmNqTXVieTVzWlc1amNpNXZjbWN3SWdZSUt3WUJCUVVITUFLR0ZtaDBkSEE2THk5eU15NXBMbXhsYm1OeUxtOXkKWnk4d0pRWURWUjBSQkI0d0hJSWFLaTUwYTJjdGQyOXlhMnh2WVdReExuWnRkMmx5WlM1amIyMHdUQVlEVlIwZwpCRVV3UXpBSUJnWm5nUXdCQWdFd053WUxLd1lCQkFHQzN4TUJBUUV3S0RBbUJnZ3JCZ0VGQlFjQ0FSWWFhSFIwCmNEb3ZMMk53Y3k1c1pYUnpaVzVqY25sd2RDNXZjbWN3Z2dFR0Jnb3JCZ0VFQWRaNUFnUUNCSUgzQklIMEFQSUEKZHdCNk1veFUyTGN0dGlEcU9PQlNIdW1FRm5BeUU0Vk5POUlyd1RwWG8xTHJVZ0FBQVljVHlxNTJBQUFFQXdCSQpNRVlDSVFEekZNSklaT3NKMG9GQTV2UVVmNUpZQUlaa3dBMnkxNE92K3ljcTU0ZDZmZ0loQUxOcmNnM0lrNllsCkxlMW1ROHFVZmttNWsxRTZBSDU4OFJhYWZkZlhONTJCQUhjQTZEN1EyajcxQmpVeTUxY292SWxyeVFQVHk5RVIKYSt6cmFlRjNmVzBHdlc0QUFBR0hFOHF1VlFBQUJBTUFTREJHQWlFQW9Wc3ZxbzhaR2o0cmszd1hmL0xlSkNCbApNQkg2UFpBb2UyMVVkbko5aThvQ0lRRGoyS1Q1eWlUOGtRdjFyemxXUWgveHV6VlRpUGtkdlBHL3Zxd3J0SWhjCjJEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFFczlKSTFwZ3R6T2JyRmd0Vnpsc1FuZC8xMi9QYWQ5WXI2WVMKVE5XM3F1bElhaEZ4UDdVcVRIT0xVSGw0cGdpTThxZ2ZlcmhyTHZXbk1wOUlxQ3JVVElTSnFRblh5bnkyOHA2Zwoyc2NqS2xFSWt2RURvcExoek0ydGpCenc4a1dUYUdYUE8yM0dhcHBHWW14OS9Ma2NkUDVSS0xKMmlRTEJXZlhTCmNQRlNmZWsySEc3dEw1N0s0Uit4eDB4MTdsZ2RLeFdOL1JYQ2RvcHFPY3RyTCtPL0lwWVVWZXNiVzNJbkpFZDkKdjZmS1RmVE84K3JVVnlkajVmUGdFUWJva2Q2L3BDTGdIYS81UVpQMjZ1ZytRa1llUEJvUWRrTkpGOTk4a2NHWQpBZGc0THpJZjdYdU9SNDB4eU90aHIyN1p4Y1FXZnhMM2M4bGJuUlJrMXZNL3pMMDhIdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=

k apply -f router-certs-default.yaml

Here is an example online store website deployment using Ingress with the certificate. Let's play with this before we get around to exposing Prometheus and Grafana.

sample-ingress.yaml

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: http-ingress-deployment
  labels:
    app: http-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-ingress
  template:
    metadata:
      labels:
        app: http-ingress
    spec:
      containers:
        - name: http-ingress
          image: ianwijaya/hackazon
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
      imagePullSecrets:
      - name: regcred
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-svc
  labels:
    svc: ingress-svc
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: http-ingress
  type: ClusterIP

avisvcingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: avisvcingress
  annotations:
    ako.vmware.com/enable-tls: "true"
  labels:
    app: avisvcingress
spec:
  ingressClassName: avi-lb
  rules:
    - host: "hackazon.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: ingress-svc
                port:
                  number: 80

Note that the Service uses ClusterIP and the Ingress is annotated with ako.vmware.com/enable-tls: "true" to use the default TLS certificate specified in router-certs-default.yaml. Also add the ingressClassName to the Ingress manifest.

k apply -f sample-ingress.yaml

k apply -f avisvcingress.yaml

k get ingress avisvcingress

NAME            CLASS    HOSTS                               ADDRESS       PORTS   AGE
avisvcingress   avi-lb   hackazon.tkg-workload1.vmwire.com   172.16.4.69   80      13m
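
Assuming DNS resolves hackazon.tkg-workload1.vmwire.com to the VIP 172.16.4.69, a quick curl should return the site over TLS (-k is only needed if your client does not trust the certificate chain):

curl -k https://hackazon.tkg-workload1.vmwire.com/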

Let’s add another host

Append another host to the avisvcingress.yaml file.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: avisvcingress
  annotations:
    ako.vmware.com/enable-tls: "true"
  labels:
    app: avisvcingress
spec:
  ingressClassName: avi-lb
  rules:
    - host: "hackazon.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: ingress-svc
                port:
                  number: 80
    - host: "nginx.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx-service
                port:
                  number: 80

k replace -f avisvcingress.yaml

And use the trusty statefulset file to create an nginx webpage: statefulset-topology-aware.yaml.

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx-service
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - az-1
                - az-2
                - az-3
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: failure-domain.beta.kubernetes.io/zone
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: install
        image: busybox
        command:
        - wget
        - "-O"
        - "/www/index.html"
        - https://raw.githubusercontent.com/hugopow/cse/main/index.html
        volumeMounts:
        - name: www
          mountPath: "/www"
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
            - name: logs
              mountPath: /logs
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: tanzu-local-ssd
        resources:
          requests:
            storage: 2Gi
    - metadata:
        name: logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: tanzu-local-ssd
        resources:
          requests:
            storage: 1Gi

k apply -f statefulset-topology-aware.yaml

k get ingress avisvcingress

NAME            CLASS    HOSTS                                                             ADDRESS       PORTS   AGE
avisvcingress   avi-lb   hackazon.tkg-workload1.vmwire.com,nginx.tkg-workload1.vmwire.com   172.16.4.69   80      7m33s

Notice that another host is added to the same ingress, and both hosts share the same VIP.

Let's add Prometheus to this!

Create a new manifest for Prometheus to use, called monitoring-ingress.yaml.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitoring-ingress
  namespace: tanzu-system-monitoring
  annotations:
    ako.vmware.com/enable-tls: "true"
  labels:
    app: monitoring-ingress
spec:
  ingressClassName: avi-lb
  rules:
    - host: "prometheus.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: prometheus-server
                port:
                  number: 80

Note that since Tanzu Packages deploys Prometheus into the namespace tanzu-system-monitoring, we need to create the new Ingress in the same namespace.

Deploy Prometheus following the documentation here, but do not enable ingress in the prometheus-data-values.yaml file; that would use Contour, and we are using Avi Ingress instead.

Add Grafana too!

Create a new manifest for Grafana to use, called dashboard-ingress.yaml.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: tanzu-system-dashboards
  annotations:
    ako.vmware.com/enable-tls: "true"
  labels:
    app: dashboard-ingress
spec:
  ingressClassName: avi-lb
  rules:
    - host: "grafana.tkg-workload1.vmwire.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: grafana
                port:
                  number: 80

Note that since Tanzu Packages deploys Grafana into the namespace tanzu-system-dashboards, we need to create the new Ingress in the same namespace.

Deploy Grafana following the documentation here, but do not enable ingress in the grafana-data-values.yaml file; that would use Contour, and we are using Avi Ingress instead.

Summary

Ingress with Avi is really nice, I like it! A single secret stores the TLS certificate, and all hosts are automatically configured to use TLS. You also only need to expose TCP 80 as ClusterIP Services; Avi does the rest for you and exposes the applications over TCP 443 using the TLS certificate.

Here you can see all four of our applications (hackazon, nginx running across three AZs, Grafana and Prometheus) all using Ingress and sharing a single IP address.

Very cool indeed!

k get ingress -A

NAMESPACE                 NAME                 CLASS    HOSTS                                                              ADDRESS       PORTS   AGE
default                   avisvcingress        avi-lb   hackazon.tkg-workload1.vmwire.com,nginx.tkg-workload1.vmwire.com   172.16.4.69   80      58m
tanzu-system-dashboards   dashboard-ingress    avi-lb   grafana.tkg-workload1.vmwire.com                                   172.16.4.69   80      3m47s
tanzu-system-monitoring   monitoring-ingress   avi-lb   prometheus.tkg-workload1.vmwire.com                                172.16.4.69   80      14m

CSE TKG Clusters can’t pull from GitHub

During TKG cluster creation you might see the following errors.

Error: failed to get
provider components for the "cluster-api:v1.1.3" provider: failed to get
repository client for the CoreProvider with name cluster-api: error creating
the GitHub repository client: failed to get GitHub latest version: failed to
get repository versions: failed to get repository versions: rate limit for
github api has been reached. Please wait one hour or get a personal API
token and assign it to the GITHUB_TOKEN environment variable

This is due to GitHub rate limiting for anonymous access. CSE TKG clusters pull Cluster API provider components from GitHub, and if you pull too many within a short period of time, you will eventually hit the rate limits.

To ensure that you don't hit the limits, a GitHub Access Token is needed.

Then configure CSE to use the GitHub Access Token using the CSE documentation here.
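
If you hit the same rate limit when running the tanzu CLI directly, the token can also be exported as an environment variable, as the error message itself suggests:

export GITHUB_TOKEN=<your-github-token>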

Scaling TKG Management Cluster Nodes Vertically

In a previous post I wrote about how to scale workload cluster control plane and worker nodes vertically. This post explains how to do the same for the TKG Management Cluster nodes.

Scaling vertically means increasing or decreasing the CPU, memory or disk, or changing other things such as the network for the nodes. Using the Cluster API it is possible to make these changes on the fly; Kubernetes will use rolling updates to make the necessary changes.

First, change to the TKG Management Cluster context to make the changes.

Scaling Worker Nodes

Run the following to list all the vSphereMachineTemplates.

k get vspheremachinetemplates.infrastructure.cluster.x-k8s.io -A
NAMESPACE    NAME                         AGE
tkg-system   tkg-mgmt-control-plane       20h
tkg-system   tkg-mgmt-worker              20h

These resources are immutable, so we will need to make a copy of the YAML and edit it to create a new vSphereMachineTemplate.

k get vspheremachinetemplates.infrastructure.cluster.x-k8s.io -n tkg-system tkg-mgmt-worker -o yaml > tkg-mgmt-worker-new.yaml

Now edit the new file named tkg-mgmt-worker-new.yaml

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","kind":"VSphereMachineTemplate","metadata":{"annotations":{"vmTemplateMoid":"vm-9726"},"name":"tkg-mgmt-worker","namespace":"tkg-system"},"spec":{"template":{"spec":{"cloneMode":"fullClone","datacenter":"/home.local","datastore":"/home.local/datastore/lun01","diskGiB":40,"folder":"/home.local/vm/tkg-vsphere-tkg-mgmt","memoryMiB":8192,"network":{"devices":[{"dhcp4":true,"networkName":"/home.local/network/tkg-mgmt"}]},"numCPUs":2,"resourcePool":"/home.local/host/Management/Resources/tkg-vsphere-tkg-Mgmt","server":"vcenter.vmwire.com","storagePolicyName":"","template":"/home.local/vm/Templates/photon-3-kube-v1.22.9+vmware.1"}}}}
    vmTemplateMoid: vm-9726
  creationTimestamp: "2022-12-23T15:23:56Z"
  generation: 1
  name: tkg-mgmt-worker
  namespace: tkg-system
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    name: tkg-mgmt
    uid: 9acf6370-64be-40ce-9076-050ab8c6f41f
  resourceVersion: "3069"
  uid: 4a8f305f-0b61-4d33-ba02-7fb3fcc8ba22
spec:
  template:
    spec:
      cloneMode: fullClone
      datacenter: /home.local
      datastore: /home.local/datastore/lun01
      diskGiB: 40
      folder: /home.local/vm/tkg-vsphere-tkg-mgmt
      memoryMiB: 8192
      network:
        devices:
        - dhcp4: true
          networkName: /home.local/network/tkg-mgmt
      numCPUs: 2
      resourcePool: /home.local/host/Management/Resources/tkg-vsphere-tkg-Mgmt
      server: vcenter.vmwire.com
      storagePolicyName: ""
      template: /home.local/vm/Templates/photon-3-kube-v1.22.9+vmware.1

Change the name of the resource on line 10 (metadata.name), for example to tkg-mgmt-worker-new. Make any other changes you need, such as CPU (numCPUs) on line 32 or RAM (memoryMiB) on line 27. Also remove the server-generated fields (creationTimestamp, generation, ownerReferences, resourceVersion and uid), since the API server will reject a create request that includes a resourceVersion. Save the file.

Now you’ll need to create the new vSphereMachineTemplate.

k apply -f tkg-mgmt-worker-new.yaml

Now we’re ready to make the change.

Lets first take a look at the MachineDeployments.

k get machinedeployments.cluster.x-k8s.io -A

NAMESPACE    NAME            CLUSTER    REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE   VERSION
tkg-system   tkg-mgmt-md-0   tkg-mgmt   2          2       2         0             Running   20h   v1.22.9+vmware.1

Now edit this MachineDeployment.

k edit machinedeployments.cluster.x-k8s.io -n tkg-system tkg-mgmt-md-0

You need to make the change to the section spec.template.spec.infrastructureRef, at around line 56.

 53       infrastructureRef:
 54         apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
 55         kind: VSphereMachineTemplate
 56         name: tkg-mgmt-worker

Change line 56 to the new VSphereMachineTemplate we created earlier.

 53       infrastructureRef:
 54         apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
 55         kind: VSphereMachineTemplate
 56         name: tkg-mgmt-worker-new

Save and quit. You'll notice that a new VM immediately starts being cloned in vCenter. Wait for it to complete; this new VM is the new worker with the updated CPU and memory sizing, and it will replace the current worker node. Eventually, after a few minutes, the old worker node is deleted and you are left with a new worker node with the updated CPU and RAM specified in the new VSphereMachineTemplate.
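
A quick way to follow the rollout is to watch the Cluster API machine objects from the management cluster context:

k get machines.cluster.x-k8s.io -n tkg-system -w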

Scaling Control Plane Nodes

Scaling the control plane nodes is similar.

k get vspheremachinetemplates.infrastructure.cluster.x-k8s.io -n tkg-system tkg-mgmt-control-plane -o yaml > tkg-mgmt-control-plane-new.yaml

Edit the file and perform the same steps as the worker nodes.

You'll notice that there is no MachineDeployment for the control plane nodes of a TKG Management Cluster. Instead we have to edit the resource named KubeadmControlPlane.

Run this command

k get kubeadmcontrolplane -A

NAMESPACE    NAME                     CLUSTER    INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
tkg-system   tkg-mgmt-control-plane   tkg-mgmt   true          true                   1          1       1         0             21h   v1.22.9+vmware.1

Now we can edit it:

k edit kubeadmcontrolplane -n tkg-system tkg-mgmt-control-plane

Change the section under spec.machineTemplate.infrastructureRef, around line 106.

102   machineTemplate:
103     infrastructureRef:
104       apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
105       kind: VSphereMachineTemplate
106       name: tkg-mgmt-control-plane
107       namespace: tkg-system

Change line 106 to

102   machineTemplate:
103     infrastructureRef:
104       apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
105       kind: VSphereMachineTemplate
106       name: tkg-mgmt-control-plane-new
107       namespace: tkg-system

Save the file. You'll notice that another VM starts cloning, and eventually you'll have a new control plane node up and running. This new control plane node replaces the older one. It will take longer than the worker node, so be patient.

Rights for VMware Data Solutions

Creating a new Global Role

You’ll need to create a new global role with the correct rights to be able to deploy data solutions into a TKG cluster.

The easiest way to do this is to clone the role named Kubernetes Cluster Author created by CSE 4.0 and add additional rights for Data Solutions.

Administrator View: VMWARE:CAPVCDCLUSTER
Administrator View: VMWARE:DSCONFIG
Administrator View: VMWARE:DSINSTANCETEMPLATE
Administrator View: VMWARE:DSINSTANCE
Administrator View: VMWARE:DSPROVISIONING
Administrator View: VMWARE:DSCLUSTER

Administrator Full Control: VMWARE:DSINSTANCE

View: VMWARE:DSCONFIG
View: VMWARE:DSPROVISIONING
View: VMWARE:DSINSTANCE
View: VMWARE:DSINSTANCETEMPLATE
View: VMWARE:DSCLUSTER

Full Control: VMWARE:DSPROVISIONING
Full Control: VMWARE:DSCLUSTER
Full Control: VMWARE:DSINSTANCE

Edit: VMWARE:DSINSTANCE
Edit: VMWARE:DSCLUSTER
Edit: VMWARE:DSPROVISIONING

Now publish this new Global Role to a tenant and assign a tenant user this new role and you can then deploy Data Solutions into a TKG cluster.

Best practices for installing CSE 4.0

Container Service Extension 4 was released recently. This post aims to ease the setup of CSE 4.0, as it has a different deployment model using the Solutions framework, instead of deploying the CSE appliance into the traditional management cluster used by service providers to run VMware management components such as vCenter, NSX-T Managers, Avi Controllers and other management systems.

Step 1 – Create a CSE Service Account

Perform these steps using the administrator@system account or an equivalent system administrator role.

Set up a Service Account in the Provider (system) organization with the role CSE Admin Role.

In my environment I created a user to use as a service account named svc-cse. You’ll notice that this user has been assigned the CSE Admin Role.

The CSE Admin Role is created automatically by CSE when you use the CSE Management UI as a Provider administrator; just do these steps using the administrator@system account.

Step 2 – Create a token for the Service Account

Log out of VCD and log back into the Provider organization as the service account you created in Step 1 above. Once logged in, it should look like the following screenshot, notice that the svc-cse user is logged into the Provider organization.

Click on the downward arrow at the top right of the screen, next to the user svc-cse and select User Preferences.

Under Access Tokens, create a new token and copy the token to a safe place. This is what you use to deploy the CSE appliance later.

Log out of VCD and log back in as administrator@system to the Provider organization.

Step 3 – Deploy CSE appliance

Create a new tenant Organization where you will run CSE. This new organization is dedicated to VCD extensions such as CSE and is managed by the service provider.

For example, you can name this new organization something like "solutions-org". Create an Org VDC within this organization and also the necessary network infrastructure, such as a T1 router and an organization network with internet access.

Still logged into the Provider organization, open another tab by clicking on the Open in Tenant Portal link to your “solutions-org” organization. You must deploy the CSE vApp as a Provider.

Now you can deploy the CSE vApp.

Use the Add vApp From Catalog workflow.

Accept the EULA and continue with the workflow.

When you get to Step 8 of the Create vApp from Template workflow, ensure that you set up the OVF properties like my screenshot below:

The important thing to note is to ensure that you are using the correct service account username and use the token from Step 2 above.

Also, since the service account must be in the Provider organization, leave the default system organization as the CSE service account's org.

The last value is very important: it must be set to the tenant organization that will run the CSE appliance, in our case the "solutions-org" org.

Once the OVA is deployed you can boot it up, or if you want to customize the root password, do so before you start the vApp. If not, the default credentials are root and vmware.

Rights required for deploying TKG clusters

Ensure that the user that is logged into a tenant organization has the correct rights to deploy a TKG cluster. This user must have at a minimum the rights in the Kubernetes Cluster Author Global Role.

App LaunchPad

You’ll also need to upgrade App Launchpad to the latest version alp-2.1.2-20764259 to support CSE 4.0 deployed clusters.

Also ensure that the App-Launchpad-Service role has the rights to manage CAPVCD clusters.

Otherwise you may encounter errors when App Launchpad attempts to manage CSE 4.0 provisioned clusters.

VCD API Protected by Web Application Firewalls

If you are using a web application firewall (WAF) in front of your VCD cells and are blocking access to the provider-side APIs, you will need to add the SNAT IP address of the T1 from the solutions-org organization to the WAF whitelist.

The CSE appliance will need access to the VCD provider side APIs.

I wrote about using a WAF in front of VCD in the past to protect provider side APIs. You can read those posts here and here.

Container Service Extension with a one-arm load balancer

Load balancer service requests for application services default to using a two-arm load balancer with NSX Advanced Load Balancer (Avi) in Container Service Extension (CSE) provisioned Tanzu Kubernetes Grid (TKG) clusters deployed in VMware Cloud Director (VCD).

VCD tells NSX-T to create a DNAT towards an internal-only IP range of 192.168.8.x. This may be undesirable for some customers, and it is now possible to remove the need for this and just use a one-arm load balancer instead.

This capability is available only in VCD 10.4.x; in prior versions of VCD this support was not available.

The requirements are:

  • CSE 4.0
  • VCD 10.4
  • Avi configured for VCD
  • A TKG cluster provisioned by the CSE UI.

If you’re still running VCD 10.3.x then this blog article is irrelevant.

The vcloud-ccm-configmap config map stores vcloud-ccm-config.yaml, which is used by the vmware-cloud-director-ccm deployment.

Step 1 – Make a copy of the vcloud-ccm-configmap

k get cm -n kube-system vcloud-ccm-configmap -o yaml

apiVersion: v1
data:
  vcloud-ccm-config.yaml: "vcd:\n host: https://vcd.vmwire.com\n org: tenant1\n
    \ vdc: tenant1-vdc\nloadbalancer:\n oneArm:\n startIP: \"192.168.8.2\"\n endIP:
    \"192.168.8.100\"\n ports:\n http: 80\n https: 443\n network: default-organization-network\n
    \ vipSubnet: \n enableVirtualServiceSharedIP: false # supported for VCD >= 10.4\nvAppName:
    tkg-1\n"
immutable: true
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"vcloud-ccm-config.yaml":"vcd:\n host: https://vcd.vmwire.com\n org: tenant1\n vdc: tenant1-vdc\nloadbalancer:\n oneArm:\n startIP: \"192.168.8.2\"\n endIP: \"192.168.8.100\"\n ports:\n http: 80\n https: 443\n network: default-organization-network\n vipSubnet: \n enableVirtualServiceSharedIP: false # supported for VCD \u003e= 10.4\nvAppName: tkg-1\n"},"immutable":true,"kind":"ConfigMap","metadata":{"annotations":{},"name":"vcloud-ccm-configmap","namespace":"kube-system"}}
  creationTimestamp: "2022-11-19T15:08:27Z"
  name: vcloud-ccm-configmap
  namespace: kube-system
  resourceVersion: "1014"
  uid: 5e8f2136-124f-4fc0-b4e6-49741ee5545b

Make a copy of the config map to edit it and then apply, since the current config map is immutable.

k get cm -n kube-system vcloud-ccm-configmap -o yaml > vcloud-ccm-configmap.yaml

Step 2 – Edit the vcloud-ccm-configmap

Edit the file: under data, delete the oneArm:\n entry, delete the startIP and endIP lines, and change the value of the key enableVirtualServiceSharedIP to true. Ignore the rest of the file.

apiVersion: v1
data:
  vcloud-ccm-config.yaml: "vcd:\n host: https://vcd.vmwire.com\n org: tenant1\n
    \ vdc: tenant1-vdc\nloadbalancer:\n
    \ ports:\n http: 80\n https: 443\n network: default-organization-network\n
    \ vipSubnet: \n enableVirtualServiceSharedIP: true # supported for VCD >= 10.4\nvAppName:
    tkg-1\n"
immutable: true
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"vcloud-ccm-config.yaml":"vcd:\n host: https://vcd.vmwire.com\n org: tenant1\n vdc: tenant1-vdc\nloadbalancer:\n oneArm:\n startIP: \"192.168.8.2\"\n endIP: \"192.168.8.100\"\n ports:\n http: 80\n https: 443\n network: default-organization-network\n vipSubnet: \n enableVirtualServiceSharedIP: false # supported for VCD \u003e= 10.4\nvAppName: tkg-1\n"},"immutable":true,"kind":"ConfigMap","metadata":{"annotations":{},"name":"vcloud-ccm-configmap","namespace":"kube-system"}}
  creationTimestamp: "2022-11-19T15:08:27Z"
  name: vcloud-ccm-configmap
  namespace: kube-system
  resourceVersion: "1014"
  uid: 5e8f2136-124f-4fc0-b4e6-49741ee5545b

Step 3 – Apply the new config map

To apply the new config map, you need to delete the old configmap first.

k delete cm -n kube-system vcloud-ccm-configmap
configmap "vcloud-ccm-configmap" deleted

Apply the new config map with the yaml file that you just edited.

k apply -f vcloud-ccm-configmap.yaml

configmap/vcloud-ccm-configmap created

To finalize the change, you have to take a backup of the vmware-cloud-director-ccm deployment and then delete it so that it can be recreated with the new config map.

You can check the config map that this deployment uses by typing:

k get deploy -n kube-system vmware-cloud-director-ccm -o yaml

Step 4 – Redeploy the vmware-cloud-director-ccm deployment

Take a backup of the vmware-cloud-director-ccm deployment by typing:

k get deploy -n kube-system vmware-cloud-director-ccm -o yaml > vmware-cloud-director-ccm.yaml

No need to edit this time. Now delete the deployment:

k delete deploy -n kube-system vmware-cloud-director-ccm

deployment.apps "vmware-cloud-director-ccm" deleted

You can now recreate the deployment from the yaml file:

k apply -f vmware-cloud-director-ccm.yaml

deployment.apps/vmware-cloud-director-ccm created

Now when you deploy an application and request a load balancer service, NSX ALB (Avi) will route the external VIP towards the Kubernetes worker nodes directly, instead of to the NSX-T DNAT segment on 192.168.8.x first.

Step 5 – Deploy a load balancer service

k apply -f https://raw.githubusercontent.com/hugopow/tkg-dev/main/web-statefulset.yaml

You’ll notice a few things happening with this example. A new statefulset with one replica is scheduled with an nginx pod. The statefulset also requests a 1 GiB PVC to store the website. A load balancer service is also requested.

Note that there is no DNAT set up in this tenant's NAT services. This is because, after the config map change, the vmware-cloud-director cloud controller manager no longer uses a two-arm load balancer architecture, so there is no need to do anything with NSX-T NAT rules.

If you check your NSX ALB settings you'll notice that it is indeed now using a one-arm configuration, where the external VIP IP address is 10.149.1.113 and the port is TCP 80. NSX ALB routes that to the two worker nodes with IP addresses 192.168.0.100 and 192.168.0.102 on port TCP 30020.

k get svc -n web-statefulset

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
web-statefulset-service   LoadBalancer   100.66.198.78   10.149.1.113   80:30020/TCP   13m
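
A quick curl against the external VIP confirms that Avi is serving the application directly on TCP 80:

curl http://10.149.1.113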

k get no -o wide


NAME                                        STATUS   ROLES    AGE     VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
tkg-1-worker-node-pool-1-68c67d5fd6-c79kr   Ready    <none>   5h      v1.22.9+vmware.1   192.168.0.102   192.168.0.102   Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.11
tkg-1-worker-node-pool-2-799d6bccf5-8vj7l   Ready    <none>   4h46m   v1.22.9+vmware.1   192.168.0.100   192.168.0.100   Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.11

Cleaning up CSE 4.0 beta

For those partners that have been testing the beta, you’ll need to remove all traces of it before you can install the GA version. VMware does not support upgrading or migrating from beta builds to GA builds.

This is a post to help you clean up your VMware Cloud Director environment in preparation for the GA build of CSE 4.0.

If you don’t clean up, when you try to configure CSE again with the CSE Management wizard, you’ll see the message below:

“Server configuration entity already exists.”

Delete CSE Roles

First delete all the CSE Roles that the beta has set up; the GA version of CSE will recreate these for you when you use the CSE management wizard. Don't forget to assign the new role to your CSE service account when you deploy the CSE GA OVA.

Use the Postman Collection to clean up

I’ve included a Postman collection on my Github account, available here.

Hopefully it is self-explanatory: authenticate against the VCD API, then run each API request in order, making sure you obtain the entity and entityType IDs before you delete.

If you're unable to delete the entity or entityTypes, you may need to delete all of the CSE clusters first; that means cleaning up all PVCs, PVs and deployments, and then the clusters themselves.

Deploy CSE GA Normally

You’ll now be able to use the Configure Management wizard and deploy CSE 4.0 GA as normal.

Known Issues

If you’re unable to delete any of these entities then run a POST using /resolve.

For example, https://vcd.vmwire.com/api-explorer/provider#/definedEntity/resolveDefinedEntity

Once it is resolved, you can go ahead and delete the entity.

VMware Cloud Director, Container Service Extension and App Launchpad Running in Kubernetes

I’ve been experimenting with the VMware Cloud Director, Container Service Extension and App Launchpad applications and wanted to test if these applications would run in Kubernetes.

The short answer is yes!

I initially deployed these apps as standalone Docker containers to see if they would run as containers. I wanted to eventually get them to run in a Kubernetes cluster to benefit from all the goodies that Kubernetes provides.

Packaging the apps wasn't too difficult; it just needed patience and a lot of Googling. The process was as follows:

  • run a Docker container from a base Linux image: CentOS for VCD, Photon for ALP and CSE.
  • prepare all the pre-requisites, such as yum update and tdnf update.
  • commit the image to a Harbor registry.
  • build a Helm chart to deploy the applications using those images, with a shell script that runs when the container starts to install and launch the applications (a rough sketch follows this list).
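
As a rough sketch of that packaging flow (the container name, base image and registry path here are illustrative only, not the actual published images):

docker run -it --name vcd-base centos:7 bash
# inside the container: run yum update -y and install the other pre-requisites, then exit
docker commit vcd-base harbor.vmwire.com/library/vcd-base:1.0
docker push harbor.vmwire.com/library/vcd-base:1.0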

Well, it's not that simple, but you can take a look at the code for all three Helm charts on my Github or pull them from my public Harbor repository.

VMware Cloud Director

Github: https://github.com/hugopow/vmware-cloud-director

Helm Chart: helm pull oci://harbor.vmwire.com/library/vmware-cloud-director

How to install: Update values.yaml and then run

helm install vmware-cloud-director oci://harbor.vmwire.com/library/vmware-cloud-director --version 0.5.0 -n vmware-cloud-director

Notice how easy that was to install?

The values.yaml file is the only file you’ll need to edit, just update to suit your environment.

# Default values for vmware-cloud-director.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

installFirstCell:
  enabled: true

installAdditionalCell:
  enabled: false

storageClass: iscsi
pvcCapacity: 2Gi

vcdNfs:
  server: 10.92.124.20
  mountPath: /mnt/nvme/vcd-k8s

vcdSystem:
  user: administrator
  password: Vmware1!
  email: admin@domain.local
  systemName: VCD
  installationId: 1

postgresql:
  dbHost: postgresql.vmware-cloud-director.svc.cluster.local
  dbName: vcloud
  dbUser: vcloud
  dbPassword: Vmware1!

# Availability zones in deployment.yaml are setup for TKG and must match VsphereFailureDomain and VsphereDeploymentZones
availabilityZones:
  enabled: false

httpsService:
  type: LoadBalancer
  port: 443

consoleProxyService:
  port: 8443

publicAddress:
  uiBaseUri: https://vcd-k8s.vmwire.com
  uiBaseHttpUri: http://vcd-k8s.vmwire.com
  restapiBaseUri: https://vcd-k8s.vmwire.com
  restapiBaseHttpUri: http://vcd-k8s.vmwire.com
  consoleProxy: vcd-vmrc.vmwire.com

tls:
  certFullChain: |-
    -----BEGIN CERTIFICATE-----
          wildcard certificate
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
          intermediate certificate
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
          root certificate
    -----END CERTIFICATE-----
  certKey: |-
    -----BEGIN PRIVATE KEY-----
          wildcard certificate private key
    -----END PRIVATE KEY-----

The installation process is quite fast: less than three minutes to get the first pod up and running, and two minutes for each subsequent pod. That means a VCD multi-cell system up and running in less than ten minutes.

I've deployed VCD as a StatefulSet with three replicas. Since the replica count is set to three, three VCD pods are deployed; in the old world these would be the cells. Here you can see three pods running, which provides both load balancing and high availability. The other pod is the PostgreSQL database that these cells use. You should also be able to see that Kubernetes has scheduled each pod on a different worker node; I have three worker nodes in this Kubernetes cluster.

Below is the view in VCD of the three cells.

The StatefulSet also has a LoadBalancer service configured for performing the load balancing of the HTTP and Console Proxy traffic on TCP 443 and TCP 8443 respectively.

You can see the LoadBalancer service has configured the services for HTTP and Console Proxy. Note, that this is done automatically by Kubernetes using a manifest in the Helm Chart.

Migrating an existing VCD instance to Kubernetes

If you want to migrate an existing instance to Kubernetes, then use this post here.

Container Service Extension

Github: https://github.com/hugopow/container-service-extension

Helm Chart: helm pull oci://harbor.vmwire.com/library/container-service-extension

How to install: Update values.yaml and then run helm install container-service-extension oci://harbor.vmwire.com/library/container-service-extension --version 0.2.0 -n container-service-extension

Here’s CSE running as a pod in Kubernetes. Since CSE is a stateless application, I’ve configured it to run as a Deployment.

CSE also does not need a database as it purely communicates with VCD through a message bus such as MQTT or RabbitMQ. Additionally no external access to CSE is required as this is done via VCD, so no load balancer is needed either.

You can see that when CSE is idle it only needs 1 millicore of CPU and 102 MiB of RAM. This is much better in terms of resource requirements than running CSE in a VM, and is one of the advantages of running pods vs VMs: pods use considerably fewer resources than VMs.

App Launchpad

Github: https://github.com/hugopow/app-launchpad

Helm Chart: helm pull oci://harbor.vmwire.com/library/app-launchpad

How to install: Update values.yaml and then run helm install app-launchpad oci://harbor.vmwire.com/library/app-launchpad --version 0.4.0 -n app-launchpad

The values.yaml file is the only file you’ll need to edit, just update to suit your environment.

# Default values for app-launchpad.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

alpConnect:
  saUser: "svc-alp"
  saPass: Vmware1!
  url: https://vcd-k8s.vmwire.com
  adminUser: administrator@system
  adminPass: Vmware1!
  mqtt: true
  eula: accept
# If you accept the EULA then type "accept" in the EULA key value to install ALP. You can find the EULA in the README.md file.

I’ve already written an article about ALP here. That article contains a lot more details so I’ll share a few screenshots below for ALP.

Just like CSE, ALP is a stateless application and is deployed as a Deployment. ALP also does not require external access through a load balancer as it too communicates with VCD using the MQTT or RabbitMQ message bus.

You can see that ALP when idle requires just 3 millicores of CPU and 400 MiB of RAM.

ALP can be deployed with multiple instances to provide load balancing and high availability. This is done by deploying RabbitMQ and connecting ALP and VCD to the same exchange. VCD does not support multiple instances of ALP if MQTT is used.

When RabbitMQ is configured, ALP can be scaled by changing the Deployment's number of replicas to two or more; Kubernetes then deploys additional ALP pods.
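
For example, assuming the chart is installed into the app-launchpad namespace and the Deployment carries the release name (adjust both to your environment):

k scale deployment app-launchpad -n app-launchpad --replicas=2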

Using Velero with Restic for Kubernetes Data Protection

Overview

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:

  • Take backups of your cluster and restore in case of loss.
  • Migrate cluster resources to other clusters.
  • Replicate your production cluster to development and testing clusters.

Velero consists of:

  • A server that runs on your Kubernetes cluster
  • A command-line client that runs locally

Velero works with any Kubernetes cluster, including Tanzu Kubernetes Grid and Kubernetes clusters deployed using Container Service Extension with VMware Cloud Director.

This solution can be used for air-gapped environments where the Kubernetes clusters do not have Internet access and cannot use public services such as Amazon S3, or Tanzu Mission Control Data Protection. These services are SaaS services which are pretty much out of bounds in air-gapped environments.

Install Velero onto your workstation

Download the latest Velero release for your preferred operating system; this is usually the machine where you have your kubectl tools.

https://github.com/vmware-tanzu/velero/releases

Extract the contents.

tar zxvf velero-v1.8.1-linux-amd64.tar.gz

You’ll see a folder structure like the following.

ls -l
total 70252
-rw-r----- 1 phanh users    10255 Mar 10 09:45 LICENSE
drwxr-x--- 4 phanh users     4096 Apr 11 08:40 examples
-rw-r----- 1 phanh users    15557 Apr 11 08:52 values.yaml
-rwxr-x--- 1 phanh users 71899684 Mar 15 02:07 velero

Copy the velero binary to the /usr/local/bin location so it is usable from anywhere.

sudo cp velero /usr/local/bin/velero

sudo chmod +x /usr/local/bin/velero

sudo chmod 755 /usr/local/bin/velero

If you want to enable bash auto completion, please follow this guide.

Setup an S3 service and bucket

I'm using TrueNAS' S3-compatible storage in my lab. TrueNAS is an S3-compliant object storage system and is incredibly easy to set up. You can use other S3-compatible object stores such as Amazon S3. A full list of supported providers can be found here.

Follow these instructions to setup S3 on TrueNAS.

  1. Add a certificate: go to System, Certificates
  2. Add, Import Certificate, copy and paste cert.pem and cert.key
  3. Storage, Pools, click on the three dots next to the pool that will hold the S3 root bucket.
  4. Add a Dataset, give it a name such as s3-storage
  5. Services, S3, click on the pencil icon.
  6. Set it up like the example below.

Set up the access key and secret key for this configuration.

access key: AKIAIOSFODNN7EXAMPLE
secret key: wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY

Update DNS so that s3.vmwire.com points to 10.92.124.20 (the IP of TrueNAS). Note that this FQDN and IP address need to be accessible from the Kubernetes worker nodes. For example, if you are installing Velero onto Kubernetes clusters in VCD, the worker nodes on the Organization network need to be able to route to your S3 service. If you are a service provider, you can place your S3 service on the services network that is accessible by all tenants in VCD.
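
A quick sanity check from a worker node or jump host (using this lab’s FQDN and port) might look like this:

# Confirm the S3 endpoint resolves and responds
nslookup s3.vmwire.com
curl -kI https://s3.vmwire.com:9000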

Test access

Download and install the S3 browser tool https://s3-browser.en.uptodown.com/windows

Set up the connection to your S3 service using the access key and secret key.

Create a new bucket to store some backups. If you are using Container Service Extension with VCD, create a new bucket for each tenant organization; this ensures multi-tenancy is maintained. I’ve created a new bucket named tenant1 which corresponds to one of my tenant organizations in my VCD environment.

Install Velero into the Kubernetes cluster

You can use the velero-plugin-for-aws and the AWS provider with any S3 API compatible system, this includes TrueNAS, Cloudian Hyperstore etc.

Set up a file with your access key and secret key details; the file is named credentials-velero.

vi credentials-velero
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY

Change your Kubernetes context to the cluster that you want to enable for Velero backups. The Velero CLI will connect to your Kubernetes cluster and deploy all the resources for Velero.

velero install \
    --use-restic \
    --default-volumes-to-restic \
    --use-volume-snapshots=false \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.4.0 \
    --bucket tenant1 \
    --backup-location-config region=default,s3ForcePathStyle="true",s3Url=https://s3.vmwire.com:9000 \
    --secret-file ./credentials-velero

To install Restic, use the --use-restic flag in the velero install command. See the install overview for more details on other flags for the install command.

velero install --use-restic

When using Restic on a storage provider that doesn’t have Velero support for snapshots, the --use-volume-snapshots=false flag prevents an unused VolumeSnapshotLocation from being created on installation. The VCD CSI provider does not provide native snapshot capability, which is why Restic is a good option here.

I’ve enabled the default behaviour of including all persistent volumes in pod backups by running the velero install command with the --default-volumes-to-restic flag. Refer to the install overview for details.

Specify the bucket with the --bucket flag, I’m using tenant1 here to correspond to a VCD tenant that will have its own bucket for storing backups in the Kubernetes cluster.

For the --backup-location-config flag, configure your settings like mine, and use the s3Url flag to point to your S3 object store; if you don’t set this, Velero will use AWS’ public S3 URIs.

A working deployment looks like this

time="2022-04-11T19:24:22Z" level=info msg="Starting Controller" logSource="/go/pkg/mod/github.com/bombsimon/logrusr@v1.1.0/logrusr.go:111" logger=controller.downloadrequest reconciler group=velero.io reconciler kind=DownloadRequest
time="2022-04-11T19:24:22Z" level=info msg="Starting controller" controller=restore logSource="pkg/controller/generic_controller.go:76"
time="2022-04-11T19:24:22Z" level=info msg="Starting controller" controller=backup logSource="pkg/controller/generic_controller.go:76"
time="2022-04-11T19:24:22Z" level=info msg="Starting controller" controller=restic-repo logSource="pkg/controller/generic_controller.go:76"
time="2022-04-11T19:24:22Z" level=info msg="Starting controller" controller=backup-sync logSource="pkg/controller/generic_controller.go:76"
time="2022-04-11T19:24:22Z" level=info msg="Starting workers" logSource="/go/pkg/mod/github.com/bombsimon/logrusr@v1.1.0/logrusr.go:111" logger=controller.backupstoragelocation reconciler group=velero.io reconciler kind=BackupStorageLocation worker count=1
time="2022-04-11T19:24:22Z" level=info msg="Starting workers" logSource="/go/pkg/mod/github.com/bombsimon/logrusr@v1.1.0/logrusr.go:111" logger=controller.downloadrequest reconciler group=velero.io reconciler kind=DownloadRequest worker count=1
time="2022-04-11T19:24:22Z" level=info msg="Starting workers" logSource="/go/pkg/mod/github.com/bombsimon/logrusr@v1.1.0/logrusr.go:111" logger=controller.serverstatusrequest reconciler group=velero.io reconciler kind=ServerStatusRequest worker count=10
time="2022-04-11T19:24:22Z" level=info msg="Validating backup storage location" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:114"
time="2022-04-11T19:24:22Z" level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
time="2022-04-11T19:25:22Z" level=info msg="Validating backup storage location" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:114"
time="2022-04-11T19:25:22Z" level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"

To see all resources deployed, use this command.

k get all -n velero
NAME                          READY   STATUS    RESTARTS   AGE
pod/restic-x6r69              1/1     Running   0          49m
pod/velero-7bc4b5cd46-k46hj   1/1     Running   0          49m

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/restic   1         1         1       1            1           <none>          49m

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/velero   1/1     1            1           49m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/velero-7bc4b5cd46   1         1         1       49m

Example to test Velero and Restic integration

Please use this link here: https://velero.io/docs/v1.5/examples/#snapshot-example-with-persistentvolumes

You may need to edit the with-pv.yaml manifest if you don’t have a default storage class.
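
For example, a minimal sketch of the PersistentVolumeClaim from that manifest with an explicit storage class added (the class name is a placeholder; pick one from kubectl get sc):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: nginx-example
spec:
  storageClassName: vsphere-storage-class   # placeholder, use your own class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi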

Useful commands

velero get backup-locations
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        tenant1          Available   2022-04-11 19:26:22 +0000 UTC   ReadWrite     true

Create a backup example

velero backup create nginx-backup --selector app=nginx

Show backup logs

velero backup logs nginx-backup

Delete a backup

velero delete backup nginx-backup

Show all backups

velero backup get
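
Restore from a backup

velero restore create --from-backup nginx-backup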

To back up the VCD PostgreSQL database (see this previous blog post), run:

velero backup create postgresql --ordered-resources 'statefulsets=vmware-cloud-director/postgresql-primary' --include-namespaces=vmware-cloud-director

Show logs for this backup

velero backup logs postgresql

Describe the postgresql backup

velero backup describe postgresql

Describe volume backups

kubectl -n velero get podvolumebackups -l velero.io/backup-name=nginx-backup -o yaml

apiVersion: v1
items:
- apiVersion: velero.io/v1
  kind: PodVolumeBackup
  metadata:
    annotations:
      velero.io/pvc-name: nginx-logs
    creationTimestamp: "2022-04-13T17:55:04Z"
    generateName: nginx-backup-
    generation: 4
    labels:
      velero.io/backup-name: nginx-backup
      velero.io/backup-uid: c92d306a-bc76-47ba-ac81-5b4dae92c677
      velero.io/pvc-uid: cf3bdb2f-714b-47ee-876c-5ed1bbea8263
    name: nginx-backup-vgqjf
    namespace: velero
    ownerReferences:
    - apiVersion: velero.io/v1
      controller: true
      kind: Backup
      name: nginx-backup
      uid: c92d306a-bc76-47ba-ac81-5b4dae92c677
    resourceVersion: "8425774"
    uid: 1fcdfec5-9854-4e43-8bc2-97a8733ee38f
  spec:
    backupStorageLocation: default
    node: node-7n43
    pod:
      kind: Pod
      name: nginx-deployment-66689547d-kwbzn
      namespace: nginx-example
      uid: 05afa981-a6ac-4caf-963b-95750c7a31af
    repoIdentifier: s3:https://s3.vmwire.com:9000/tenant1/restic/nginx-example
    tags:
      backup: nginx-backup
      backup-uid: c92d306a-bc76-47ba-ac81-5b4dae92c677
      ns: nginx-example
      pod: nginx-deployment-66689547d-kwbzn
      pod-uid: 05afa981-a6ac-4caf-963b-95750c7a31af
      pvc-uid: cf3bdb2f-714b-47ee-876c-5ed1bbea8263
      volume: nginx-logs
    volume: nginx-logs
  status:
    completionTimestamp: "2022-04-13T17:55:06Z"
    path: /host_pods/05afa981-a6ac-4caf-963b-95750c7a31af/volumes/kubernetes.io~csi/pvc-cf3bdb2f-714b-47ee-876c-5ed1bbea8263/mount
    phase: Completed
    progress:
      bytesDone: 618
      totalBytes: 618
    snapshotID: 8aa5e473
    startTimestamp: "2022-04-13T17:55:04Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Migrating VMware Cloud Director to Kubernetes

This post summarizes how you can migrate the VMware Cloud Director database from PostgreSQL running in the VCD appliance into a PostgreSQL pod running in Kubernetes, and then create new VCD cells running as pods in Kubernetes to run the VCD services. In summary, modernizing VCD into a modern application.

I wanted to experiment with VMware Cloud Director to see if it would run in Kubernetes. One of the reasons for this is to reduce resource consumption in my home lab. The VCD appliance can be quite a resource-hungry VM, needing a minimum of 2 vCPUs and 6GB of RAM. Running VCD in Kubernetes would definitely reduce this and free up much needed RAM for other applications. Running this workload in Kubernetes also brings faster deployment, higher availability, easier lifecycle management and operations, and additional benefits from the ecosystem such as observability tools.

Here’s a view of the current VCD appliance in the portal. 172.16.1.34 is the IP of the appliance, 172.16.1.0/27 is the network for the NSX-T segment that I’ve created for the VCD DMZ network. At the end of this post, you’ll see VCD running in Kubernetes pods with IP addresses assigned by the CNI instead.

Tanzu Kubernetes Grid Shared Services Cluster

I am using a Tanzu Kubernetes Grid cluster set up for shared services. It’s the ideal place to run applications that, in the virtual machine world, would have been running in a traditional vSphere Management Cluster. I also run the Container Service Extension and App Launchpad Kubernetes pods in this cluster too.

Step 1. Deploy PostgreSQL with Kubeapps into a Kubernetes cluster

If you have Kubeapps, this is the easiest way to deploy PostgreSQL.

Copy my settings below to create a PostgreSQL database server and the vcloud user and database that are required for the database restore.

Step 1 (alternative). Use Helm directly.

# Create database server using KubeApps or Helm, vcloud user with password

helm repo add bitnami https://charts.bitnami.com/bitnami

# Pull the chart, unzip then edit values.yaml
helm pull bitnami/postgresql
tar zxvf postgresql-11.1.11.tgz

helm install postgresql bitnami/postgresql -f /home/postgresql/values.yaml -n vmware-cloud-director

# Expose postgres service using load balancer
k expose pod -n vmware-cloud-director postgresql-primary-0 --type=LoadBalancer --name postgresql-public

# Get the IP address of the load balancer service
k get svc -n vmware-cloud-director postgresql-public

# Connect to database as postgres user from VCD appliance to test connection
psql --host 172.16.4.70 -U postgres -p 5432

# Type password you used when you deployed postgresql

# Quit
\q

Step 2. Backup database from VCD appliance and restore to PostgreSQL Kubernetes pod

Log into the VCD appliance using SSH.

# Stop vcd services on all VCD appliances
service vmware-vcd stop

# Backup database and important files on VCD appliance
/opt/vmware/appliance/bin/create_backup.sh

# Unzip the zip file into /opt/vmware/vcloud-director/data/transfer/backups

# Restore database using pg_dump backup file. Do this from the VCD appliance as it already has the postgres tools installed.

pg_restore --host 172.16.4.70 -U postgres -p 5432 -C -d postgres /opt/vmware/vcloud-director/data/transfer/backups/vcloud-database.sql

# Edit responses.properties and change the IP address of the database server from the load balancer IP to the assigned FQDN for the postgresql pod, e.g. postgresql-primary.vmware-cloud-director.svc.cluster.local
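
# For example, a sed one-liner can swap the address. The path and current value below
# are assumptions from this lab; check your responses.properties before editing.
sed -i 's/172.16.4.70/postgresql-primary.vmware-cloud-director.svc.cluster.local/g' \
  /opt/vmware/vcloud-director/data/transfer/responses.properties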

# Shut down the VCD appliance, it's no longer needed

Step 3. Deploy Helm Chart for VCD

# Pull the Helm Chart
helm pull oci://harbor.vmwire.com/library/vmware-cloud-director

# Uncompress the Helm Chart
tar zxvf vmware-cloud-director-0.5.0.tgz

# Edit the values.yaml to suit your needs

# Deploy the Helm Chart
helm install vmware-cloud-director vmware-cloud-director --version 0.5.0 -n vmware-cloud-director -f /home/vmware-cloud-director/values.yaml

# Wait for about five minutes for the installation to complete

# Monitor logs
k logs -f  -n vmware-cloud-director vmware-cloud-director-0

Known Issues

If you see an error such as:

Error starting application: Unable to create marker file in the transfer spooling area: VfsFile[fileObject=file:///opt/vmware/vcloud-director/data/transfer/cells/4c959d7c-2e3a-4674-b02b-c9bbc33c5828]

This is due to the transfer share being created by a different vcloud user on the original VCD appliance. That user has a different Linux user ID, normally 1000 or 1001, so we need to change the ownership to work with the new vcloud user.

Run the following commands to resolve this issue:

# Launch a bash session into the VCD pod
k exec -it -n vmware-cloud-director vmware-cloud-director-0 -- /bin/bash

# change ownership of the /transfer share to the vcloud user
chown -R vcloud:vcloud /opt/vmware/vcloud-director/data/transfer

# type exit to quit
exit

Once that’s done, the cell can start and you’ll see the following:

Successfully verified transfer spooling area: VfsFile[fileObject=file:///opt/vmware/vcloud-director/data/transfer]
Cell startup completed in 2m 26s

Accessing VCD

The VCD pod is exposed using a load balancer in Kubernetes. Ports 443 and 8443 are exposed on a single IP, just like how it is configured on the VCD appliance.

Run the following to obtain the new load balancer IP address of VCD.

k get svc -n vmware-cloud-director  vmware-cloud-director
vmware-cloud-director   LoadBalancer   100.64.230.197   172.16.4.71   443:31999/TCP,8443:30016/TCP   16m

Redirect your DNS server record to point to this new IP address for both the HTTP and VMRC services, e.g., 172.16.4.71.

If everything ran successfully, you should now be able to log into VCD. Here’s my VCD instance that I use for my lab environment which was previously running in a VCD appliance, now migrated over to Kubernetes.

Notice, the old cell is now inactive because it is powered-off. It can now be removed from VCD and deleted from vCenter.

The pod vmware-cloud-director-0 is now running the VCD application. Notice its assigned IP address of 100.107.74.159. This is the pod’s IP address.

Everything else will work as normal; any UI customizations and TLS certificates are kept just as before the migration. This is because we restored the database and used responses.properties to add new cells.

Even opening a remote console to a VM will continue to work.

Load Balancer is NSX Advanced LB (Avi)

Avi provides the load balancing services automatically through the Avi Kubernetes Operator (AKO).

AKO automatically configures the services in Avi for you when services are exposed.

Deploy another VCD cell, I mean pod

It is now very easy to scale VCD by deploying additional replicas.

Edit the values.yaml file and change the replicas number from 1 to 2.
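
Assuming the chart follows the usual convention (the key name is an assumption; check the chart's values.yaml), the change is just:

# values.yaml excerpt
replicaCount: 2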

# Upgrade the Helm Chart
helm upgrade vmware-cloud-director vmware-cloud-director --version 0.5.0 -n vmware-cloud-director -f /home/vmware-cloud-director/values.yaml

# Wait for about five minutes for the installation to complete

# Monitor logs
k logs -f  -n vmware-cloud-director vmware-cloud-director-1

When the VCD services start up successfully, you’ll notice that the cell will appear in the VCD UI and Avi is also updated automatically with another pool.

We can also see that Avi is load balancing traffic across the two pods.

Deploy as many replicas as you like.

Resource usage

Here’s a very brief overview of what we have deployed so far.

Notice that the two PostgreSQL pods together are only using 700 MB of RAM. The VCD pods are consuming much more, but this is still a vast improvement over the 6GB that one appliance needed previously.

High Availability

You can ensure that the VCD pods are scheduled on different Kubernetes worker nodes by using multi availability zone topology. To do this just change the values.yaml.

# Availability zones in deployment.yaml are setup for TKG and must match VsphereFailureDomain and VsphereDeploymentZones
availabilityZones:
  enabled: true

This makes sure that if you scale up the vmware-cloud-director statefulset, Kubernetes will ensure that each of the pods will not be placed on the same worker node.
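
Under the hood this is typically achieved with topology spread constraints or pod anti-affinity. A minimal sketch of what the pod template might carry (an assumption; the chart's actual implementation may differ):

# Pod spec excerpt: spread vmware-cloud-director pods across zones
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: vmware-cloud-director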

As you can see from the Kubernetes Dashboard output under Resource usage above, vmware-cloud-director-0 and vmware-cloud-director-1 pods are scheduled on different worker nodes.

More importantly, you can see that I have also done the same for the postgresql-primary-0 and postgresql-read-0 pods. These are really important to keep separate in case of failure of a worker node or of the ESXi host that the worker node runs on.

Finally

Here are a few screenshots of VCD, CSE and ALP all running in my Shared Services Kubernetes cluster.

Backing up the PostgreSQL database

For Day 2 operations, such as backing up the PostgreSQL database you can use Velero or just take a backup of the database using the pg_dump tool.

Backing up the database with pg_dump using a Docker container

It’s super easy to take a database backup using a Docker container; just make sure you have Docker running on your workstation and that it can reach the load balancer IP address for the PostgreSQL service.

docker run -i -e PGPASSWORD=Vmware1! postgres:14.2 pg_dump -h 172.16.4.70 -U postgres vcloud > backup.sql

The command will create a file in the current working directory named backup.sql.
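
Restoring is just as easy; here is a sketch using the same assumed host, password and database as above:

docker run -i -e PGPASSWORD=Vmware1! postgres:14.2 psql -h 172.16.4.70 -U postgres -d vcloud < backup.sql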

Backing up the database with Velero

Please see this other post on how to setup Velero and Restic to backup Kubernetes pods and persistent volumes.

To create a backup of the PostgreSQL database using Velero run the following command.

velero backup create postgresql --ordered-resources 'statefulsets=vmware-cloud-director/postgresql-primary' --include-namespaces=vmware-cloud-director

Describe the backup

velero backup describe postgresql

Show backup logs

velero backup logs postgresql

To delete the backup

velero backup delete postgresql

Kubernetes Gateway API with NSX Advanced Load Balancer (Avi)

Using LoadBalancers, Gateways, GatewayClasses, AviInfraSettings, IngressClasses and Ingresses

Gateway API replaces services of type LoadBalancer in applications that require shared IP with multiple services and network segmentation. The Gateway API can be used to meet the following requirements:

  1. Shared IP – supporting multiple services, protocols and ports on the same load balancer external IP address
  2. Network segmentation – supporting multiple networks, e.g., oam, signaling and traffic on the same load balancer

NSX Advanced Load Balancer (Avi) supports both of these requirements through the use of the Gateway API. The following section describes how this is implemented.

The Gateway API introduces a few new resource types:

  • GatewayClasses are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
  • Gateways are the deployed instances of GatewayClasses. They are the logical representation of the data-plane that performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.

AviInfraSetting

Avi Infra Setting provides a way to segregate Layer-4/Layer-7 virtual services to have properties based on different underlying infrastructure components, such as the Service Engine Group, the intended VIP network, etc.

A sample Avi Infra Setting is as shown below:

apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: aviinfrasetting-tkg-wkld-oam
spec:
  seGroup:
    name: tkgvsphere-tkgworkload-group10
  network:
    vipNetworks:
      - networkName: tkg-wkld-oam-vip
        cidr: 10.223.63.0/26
    enableRhi: false

Avi Infra Setting is a cluster-scoped CRD and can be attached to the intended Services. Avi Infra Setting resources can be attached to Services using Gateway APIs.

GatewayClass

Gateway APIs provide interfaces to structure Kubernetes service networking.

AKO supports Gateway APIs via the servicesAPI flag in the values.yaml.

The Avi Infra Setting resource can be attached to a Gateway Class object, via the .spec.parametersRef as shown below:

apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: avigatewayclass-tkg-wkld-oam
spec:
  controller: ako.vmware.com/avi-lb
  parametersRef:
    group: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-oam

Gateway

The Gateway object provides a way to configure multiple Services as backends to the Gateway using label matching. The labels are specified as constant key-value pairs, the keys being ako.vmware.com/gateway-namespace and ako.vmware.com/gateway-name. The values corresponding to these keys must match the Gateway namespace and name respectively for AKO to consider the Gateway valid. If any one of the label keys is not provided as part of matchLabels, or the namespace/name provided in the label values do not match the actual Gateway namespace/name, AKO will consider the Gateway invalid. Please see https://avinetworks.com/docs/ako/1.5/gateway/.

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: app-gateway-admin-0
  namespace: default
spec:
  gatewayClassName: avigatewayclass-tkg-wkld-oam
  listeners:
  - protocol: UDP
    port: 161
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: TCP
    port: 80
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: TCP
    port: 443
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service

How to use the GatewayAPI

In your Helm charts, for any service that previously needed a LoadBalancer service, you would now use ClusterIP instead, together with labels such as the following:

apiVersion: v1
kind: Service
metadata:
  name: web-statefulset-service-oam
  namespace: default
  labels:
    ako.vmware.com/gateway-name: app-gateway-admin-0
    ako.vmware.com/gateway-namespace: default
spec:
  selector:
    app: nginx
  ports:
  - port: 8443
    targetPort: 443
    protocol: TCP
  type: ClusterIP

The gateway labels

ako.vmware.com/gateway-name: app-gateway-admin-0
ako.vmware.com/gateway-namespace: default

and the ClusterIP type tell the Avi Kubernetes Operator (AKO) to use the gateways; each gateway is on a separate network segment for traffic separation.

The gateways also carry the relevant ports that the application uses. Configure your gateway and change your Helm chart to use the gateway objects.

Ingress Class

Avi Infra Settings can be applied to Ingress resources, using the IngressClass construct. IngressClass provides a way to configure Controller-specific load balancing parameters and applies these configurations to a set of Ingress objects. AKO supports listening to IngressClass resources in Kubernetes version 1.19+. The Avi Infra Setting reference can be provided in the Ingress Class as shown below:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-oam
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-oam
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-trf
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-trf
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-sigtran
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-sigtran

Using IngressClass

The Avi Infra Setting resource can be attached to a GatewayClass object or an IngressClass object via .spec.parametersRef. However, if you use annotations on a LoadBalancer object instead of labels with a Gateway API object, you will not be able to share a protocol and port on the same IP address, for example TCP and UDP 53 on the same LoadBalancer IP address. This is not supported yet, until MixedProtocolLB is supported by Kubernetes.

To have a Controller implement a given Ingress, in addition to creating the IngressClass object, specify the ingressClassName in the Ingress so that it matches the IngressClass name. The Ingress looks as shown below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: avi-ingress-class-oam
  rules:
    - host: my-website.my-domain.com
      http:
        paths:
        - path: /foo
          backend:
            serviceName: web-service-1
            servicePort: 443

Using Annotation with Services of type LoadBalancer

Services of Type LoadBalancer can specify the Avi Infra Setting using an annotation as shown below without using Gateway API objects:

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-sigtran"

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-trf"

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-oam"

Automated installation of Container Service Extension 3.1.2

This post is an update to enable the automated installation of Container Service Extension version 3.1.2; the script is also updated for better efficiency.

You can find the details on my github account under the repository named cse-automated.

https://github.com/hugopow/cse-automated

Ensure you review the README.MD and read the comments in the script too.

Pre-Requisites

  1. Deploy Photon OVA into vSphere, 2 VCPUs, 4GB RAM is more than enough
  2. Assign the VM a hostname and a static IP
  3. Ensure it can reach the Internet
  4. Ensure it can also reach VCD on TCP 443 and the vCenter servers registered in VCD on TCP 443.
  5. SSH into the Photon VM
  6. Note that my environment has CA signed SSL certs and the script has been tested against this environment. I have not tested the script in environments with self-signed certificates.

Download cse-install.sh script to Photon VM

# Download the script to the Photon VM
curl https://raw.githubusercontent.com/hugopow/cse-automated/main/cse-install.sh --output cse-install.sh

#  Make script executable
chmod +x cse-install.sh

Change the cse-install.sh script

Make sure you change passwords, CA SSL certificates and environment variables to suit your environment.

Launch the script, sit back and relax

# Run as root
sh cse-install.sh

Demo Video

Old video of CSE 3.0.4 automated install, but still the same process.

Running VMware Cloud Director App Launchpad in Kubernetes

VMware Cloud Director App Launchpad allows users to deploy applications from public and private registries very easily into their VCD clouds, either as virtual machines or as containers into Kubernetes clusters provisioned into VCD by Container Service Extension.

The official documentation is here.

Here are a few screenshots of ALP in action in VCD.

How is App Launchpad Installed in a VM?

Installing ALP requires a Linux system, followed by installing the application from an RPM file and then going through some configuration commands to connect ALP to the VCD system. Tedious at best and prone to errors.

This post shows how you can run ALP as a Kubernetes pod in a Kubernetes cluster instead of running ALP in a VM.

Disclaimer: This is unsupported. This post is an example of how you can run App Launchpad in Kubernetes instead of deploying it on a traditional VM. Use at your own risk. Please continue to run ALP in supported configurations in production environments.

VMs vs Containers

Running containers in Kubernetes instead of VMs provides enhanced benefits, such as:

  • Containers are more lightweight than VMs, as their images are measured in megabytes rather than gigabytes
  • Containers require fewer IT resources to deploy, run, and manage
  • Containers spin up in milliseconds
  • Since they are an order of magnitude smaller, a single system can host many more containers than VMs
  • Containers are easier to deploy and fit in well with infrastructure as code concepts
  • Developing, testing, running and managing applications is easier and more efficient with containers.

A short list, you can of course read more here.

Running App Launchpad in a Kubernetes cluster

What have I done to ALP to make it work as a container running in a Kubernetes cluster?

  • Built a Docker image based on the Photon Docker image, with all the pre-requisites to run ALP installed.
  • Built a Helm chart to easily deploy ALP into any Kubernetes cluster.

What does the Helm chart look like?

There are three main files in the Helm chart that make this work.

File             Purpose
values.yaml      Holds the configuration information which can be changed by the user, such as parameters for the VCD system that ALP will connect to.
deployment.yaml  Kubernetes Deployment that uses the other two files to deploy the ALP application into Kubernetes.
configmap.yaml   Contains the run-alp.sh script that will install and configure ALP using the parameters in the values.yaml file.

You can find the Helm chart on my Github repo here.

How to deploy ALP into Kubernetes?

Pull the Helm chart from my registry

helm pull oci://harbor.vmwire.com/library/app-launchpad

Extract it to your local directory

tar zxvf app-launchpad-0.4.0.tgz

You’ll find the values.yaml file in the /app-launchpad directory. Edit it to your liking and also accept the ALP EULA; you’ll also find the EULA in the README.md file.

alpConnect:
  saUser: "svc-alp"
  saPass: Vmware1!
  url: https://vcd.vmwire.com
  adminUser: administrator@system
  adminPass: Vmware1!
  mqtt: true
  eula: accept
# If you accept the EULA then type "accept" in the EULA key value to install ALP.

You can either package the chart and place it into your own registry or just use mine.

To install the chart, run

kubectl create ns app-launchpad

helm install app-launchpad oci://harbor.vmwire.com/library/app-launchpad -n app-launchpad -f /home/alp/app-launchpad/values.yaml

You’ll see output like this

NAME: app-launchpad
LAST DEPLOYED: Fri Mar 18 09:54:16 2022
NAMESPACE: app-launchpad
STATUS: deployed
REVISION: 1
TEST SUITE: None

Running the following command will show that the deployment is successful

helm list -n app-launchpad
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
app-launchpad   app-launchpad   1               2022-03-18 09:54:16.560871812 +0000 UTC deployed        app-launchpad-0.4.0     2.1.1

Run the following commands and you’ll see that the pod has started

kubectl get deploy -n app-launchpad
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
app-launchpad   1/1     1            1           25m
kubectl get po -n app-launchpad
NAME                             READY   STATUS    RESTARTS   AGE
app-launchpad-669786b6dd-p8fjw   1/1     Running   0          25m

Get the logs and you’ll see something like

kubectl logs app-launchpad-669786b6dd-p8fjw -n app-launchpad
Uninstalling...
Removed /etc/systemd/system/multi-user.target.wants/alp.service.
Removed /etc/systemd/system/multi-user.target.wants/alp-deployer.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
warning: %postun(vmware-alp-2.1.1-19234432.x86_64) scriptlet failed, exit status 1
warning: /home/vmware-alp-2.1.1-19234432.ph3.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 001e5cc9: NOKEY
Verifying...                          ########################################
Preparing...                          ########################################
Updating / installing...
vmware-alp-2.1.1-19234432             ########################################
New installing...
Found the /opt/vmware/alp/log, change log directory owner and permission ...
chmod: /opt/vmware/alp/log/*: No such file or directory
chown: /opt/vmware/alp/log/*: No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/alp.service → /lib/systemd/system/alp.service.
Created symlink /etc/systemd/system/multi-user.target.wants/alp-deployer.service → /lib/systemd/system/alp-deployer.service.
Setup ALP connections
VMWARE END USER LICENSE AGREEMENT
Last updated: 03 May 2021
--- snipped ---
Cloud Director Setting for App Launchpad
+--------------------------------+-----------------------------------------------+
|             Cloud Director URL | https://vcd.vmwire.com                        |
|  App Launchpad Service Account | svc-alp                                       |
| App Launchpad Service Password | Vmware1!                                      |
|                   MQTT Triplet | VMware/AppLaunchpad/1.0.0                     |
|                     MQTT Token | e089774e-389c-4e12-82d0-11378a30981d          |
|          MQTT Topic of Monitor | topic/extension/VMware/AppLaunchpad/1.0.0/ext |
|         MQTT Topic of Response | topic/extension/VMware/AppLaunchpad/1.0.0/vcd |
|   App Launchpad extension UUID | 9ba4f6c8-a1e4-3a57-bd4c-e5ca5c2f8375          |
+--------------------------------+-----------------------------------------------+
Successfully connected and configured with Cloud Director for App Launchpad.
start ALP Deployer service
Start ALP service
==> /opt/vmware/alp/deployer/log/deployer/default.log <==
{"level":"info","timestamp":"2022-03-18T09:54:24.446Z","caller":"cmd/deployer.go:68","msg":"Starting server","Config":{"ALP":{"System":{"Deployer":{"AuthToken":"***"}},"VCDEndpoint":{"URL":"https://vcd.vmwire.com","FingerprintsSHA256":"f4:e0:1b:7c:9c:d2:da:15:94:52:58:6f:80:02:2a:46:8f:ab:a5:91:d7:43:f6:8b:85:60:23:16:93:8b:2a:87"},"Deployer":{"Host":"127.0.0.1","Port":8087,"KubeRESTClient":{"QPS":256,"Burst":512,"Timeout":180000,"CertificateValidation":false},"ChartCacheSize":128}},"Logging":{"Stdout":false,"File":{"Path":"log/deployer/"},"Level":{"Com":{"VMware":{"ALP":"INFO"}}}}}}
{"level":"info","timestamp":"2022-03-18T09:54:24.550Z","caller":"server/manager.go:59","msg":"The manager is starting mux-router"}
 __     __  __  __  __        __     _      ____    _____            _      _       ____
 \ \   / / |  \/  | \ \      / /    / \    |  _ \  | ____|          / \    | |     |  _ \
  \ \ / /  | |\/| |  \ \ /\ / /    / _ \   | |_) | |  _|           / _ \   | |     | |_) |
   \ V /   | |  | |   \ V  V /    / ___ \  |  _ <  | |___         / ___ \  | |___  |  __/
    \_/    |_|  |_|    \_/\_/    /_/   \_\ |_| \_\ |_____|       /_/   \_\ |_____| |_|

  :: Spring Boot Version : 2.4.13
  :: VMware vCloud Director App LaunchPad Version : 2.1.1-19234432, Build Date: Thu Jan 20 02:18:37 GMT 2022
=================================================================================================================

What next?

It will take around two minutes until ALP is ready.

Open the VCD provider portal and click on the More menu to open up App Launchpad.

From here you can configure App Launchpad and enjoy using the app in a container running in a Kubernetes cluster.

Some other details

You’ll notice (if you deployed Kubernetes Dashboard), that the pod uses minimal resources after it has started and settled down to an idle state.

Using pretty much no CPU and around 300 MB of memory. This is so much better than running this thing in a VM, right?

Note that I have used MQTT for the message bus between ALP and VCD. If you use RabbitMQ, you can in fact deploy multiple pods of ALP and let Kubernetes run ALP as a clustered service. MQTT does not support multiple instances of ALP.

Just change the replicaCount value from 1 to 2, and also edit the configMap to change from MQTT to RabbitMQ, as sketched below.
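
In values.yaml terms, a sketch of those changes might look like this (the RabbitMQ keys are an assumption; check the chart's README for the exact names):

replicaCount: 2
alpConnect:
  mqtt: false   # switch the message bus from MQTT to RabbitMQ
  # RabbitMQ connection details would also be needed here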

To finish off

I’ve found that moving my lab applications such as ALP and Container Service Extension to Kubernetes has freed up a lot of memory and CPU. This is the main use case for me as I run a lot of labs and demo environments. It is also just a lot easier to deploy these applications with Helm into Kubernetes than using virtual machines.

This is just one example of modernizing some of the VCPP applications to take advantage of the benefits of running in Kubernetes.

I hope this helps you too. Feel free to comment below if you find this useful. I am also working on improving my Container Service Extension Helm chart and will publish that when it is ready.

How to add spaces to an entire block of lines in Vi

Working with files in Linux can be a pain, especially when you want to add spaces (YAML files) or multiple spaces to a large block of text (certificates). This post shows you how to do just that.

The commands below can be used to add spaces into YAML files very easily. Very useful when adding spaces to tls.crt files, for example.

  1. Run this command to add line numbers

:set number

  2. Add four spaces from line 3 to line 37; note that between the last / and the preceding / there are four spaces

:3,37s/^/    /
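
Similarly, to indent every line in the file by four spaces, use % for the whole-file range:

:%s/^/    /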

Enable Feature Gates for kube-apiserver on TKG clusters

Feature gates are a set of key=value pairs that describe Kubernetes features. You can turn these features on or off using a ytt overlay file or by editing the KubeadmControlPlane or VSphereMachineTemplate. This post shows you how to enable a feature gate by adding the MixedProtocolLBService flag to the TKG kube-apiserver. The same method can be used to enable other feature gates as well; I am using MixedProtocolLBService to test this at one of my customers.

Note that enabling feature gates on TKG clusters is unsupported.

The customer has a requirement to test mixed protocols in the same load balancer service (multiple ports and protocols on the same load balancer IP address). This feature is currently in alpha and getting a head start on alpha features is always a good thing to do to stay ahead.

For example, to do this in a LoadBalancer service (with the MixedProtocolLBService feature gate enabled):

apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-dns
spec:
  type: LoadBalancer
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
  selector:
    app: my-dns-server

Today, without this feature gate enabled, the same result can only be achieved using the Gateway API. The gateway object would look something like this:

apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: gateway-tkg-dns
  namespace: default
spec:
  gatewayClassName: gatewayclass-tkg-workload
  listeners:
  - protocol: TCP
    port: 53
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: gateway-tkg-dns
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: UDP
    port: 53
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: gateway-tkg-dns
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service

And the service would look something like this.

apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-dns
  namespace: default
  labels:
    ako.vmware.com/gateway-name: gateway-tkg-dns
    ako.vmware.com/gateway-namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 53
      targetPort: 53
      protocol: TCP
    - port: 53
      targetPort: 53
      protocol: UDP
  type: ClusterIP

Let’s assume that you want to enable this feature gate before deploying a new TKG cluster. I’ll show you how to enable this on an existing cluster further down the post.

Greenfield – before creating a new TKG cluster

Create a new overlay file named kube-apiserver-feature-gates.yaml. Place this file in your ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory. For more information on ytt overlays, please read this link.

#! Please add any overlays specific to vSphere provider under this file.

#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#! Enable MixedProtocolLBService feature gate on kube api.
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          #@overlay/match missing_ok=True
          feature-gates: MixedProtocolLBService=true

Deploy the TKG cluster.
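
For example (the cluster name and config file are placeholders):

tanzu cluster create tkg-test --file ~/tkg-test-config.yaml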

Inspect the kube-apiserver pod for feature gate

k get po -n kube-system kube-apiserver-tkg-test-control-plane-#####  -o yaml

You should see on line 44 that the overlay has enabled the feature gate.

kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.16.3.66:6443
    kubernetes.io/config.hash: 15fb674a0f0f4d8b5074593f74365f98
    kubernetes.io/config.mirror: 15fb674a0f0f4d8b5074593f74365f98
    kubernetes.io/config.seen: "2022-03-08T22:05:59.729647404Z"
    kubernetes.io/config.source: file
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2022-03-08T22:06:00Z"
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver-tkg-test-control-plane-fmpw2
  namespace: kube-system
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name: tkg-test-control-plane-fmpw2
    uid: 9fa5077e-4802-46ac-bce7-0cf62252e0e6
  resourceVersion: "2808"
  uid: fe22305b-5be1-48b3-b4be-d660d1d307b6
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.3.66
    - --allow-privileged=true
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cloud-provider=external
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --feature-gates=MixedProtocolLBService=true
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=100.64.0.0/13
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

Inspect kubeadmcontrolplane; this is the control plane template for the master node and all subsequent master nodes that are deployed. You can see on line 32 that the feature gate flag is enabled.

k get kubeadmcontrolplane tkg-test-control-plane -o yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  creationTimestamp: "2022-03-08T22:03:12Z"
  finalizers:
  - kubeadm.controlplane.cluster.x-k8s.io
  generation: 1
  labels:
    cluster.x-k8s.io/cluster-name: tkg-test
  name: tkg-test-control-plane
  namespace: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Cluster
    name: tkg-test
    uid: b0d75a37-9968-4119-bc56-c9fa2347be55
  resourceVersion: "8160318"
  uid: 72d74b68-d386-4f75-b54b-b1a8ab63b379
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/kubernetes/audit.log
          audit-policy-file: /etc/kubernetes/audit-policy.yaml
          cloud-provider: external
          feature-gates: MixedProtocolLBService=true

Now if you created a service with mixed protocols, the kube-apiserver will accept the service and will tell the load balancer to deploy the service.

Brownfield – enable feature gates on an existing cluster

Enabling feature gates on an already deployed cluster is a little bit harder to do, as you need to be extra careful that you don’t break your current cluster.

Let’s edit the KubeadmControlPlane template. You need to do this in the tkg-mgmt cluster context.

kubectl config use-context tkg-mgmt-admin@tkg-mgmt
kubectl edit kubeadmcontrolplane tkg-hugo-control-plane

Find the line:

spec.kubeadmConfigSpec.clusterConfiguration.apiServer.extraArgs

Add in the following line:

feature-gates: MixedProtocolLBService=true

so that section now looks like this:

spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          feature-gates: MixedProtocolLBService=true
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/kubernetes/audit.log
          audit-policy-file: /etc/kubernetes/audit-policy.yaml
          cloud-provider: external

Save the changes with :wq!

You’ll see that TKG has immediately started to clone a new control plane VM. Wait for the new VM to replace the current one.

If you inspect the new control plane VM, you’ll see that it has the feature gate applied. You need to do this in the worker cluster context that you want the feature gate enabled on, in my case tkg-hugo.

Note that adding the feature gate to spec.kubeadmConfigSpec.clusterConfiguration.apiServer.extraArgs actually enables the feature gate on the kube-apiserver, which in TKG runs as a pod.

kubectl config use-context tkg-hugo-admin@tkg-hugo
k get po kube-apiserver-tkg-hugo-control-plane-#### -n kube-system -o yaml

Go to the spec.containers.command section for kube-apiserver. You’ll see something like the following:

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.3.82
    - --allow-privileged=true
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cloud-provider=external
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --feature-gates=MixedProtocolLBService=true

Congratulations, the feature gate is now enabled!

Deploy Harbor Registry with Tanzu Packages and expose with Ingress

In the previous post, I described how to install Harbor using Helm to utilize ChartMuseum for running Harbor as a Helm chart repository.

The Harbor registry that comes shipped with TKG 1.5.1 uses Tanzu Packages to deploy Harbor into a TKG cluster. This version of Harbor does not support Helm charts using ChartMuseum; VMware dropped support for ChartMuseum in TKG and is adopting OCI registries instead. This post describes how to deploy Harbor using Tanzu Packages (kapp) and use Harbor as an OCI registry that fully supports Helm charts. This is the preferred way to use chart and image registries.

These are the latest package versions as of TKG 1.5.1 (February 2022).

Package       Version
cert-manager  1.5.3+vmware.2-tkg.1
contour       1.18.2+vmware.1-tkg.1
harbor        2.3.3+vmware.1-tkg.1

Or run the following to see the latest available versions.

tanzu package available list harbor.tanzu.vmware.com -A

Pre-requisites

Before installing Harbor, you need to install Cert Manager and Contour. You can follow this other guide here to get started. This post uses Ingress, which requires NSX Advanced Load Balancer (Avi). The previous post will show you how to install these pre-requisites.

Deploy Harbor

Create a configuration file named harbor-data-values.yaml. This file configures the Harbor package. Follow the steps below to obtain a template file.

image_url=$(kubectl -n tanzu-package-repo-global get packages harbor.tanzu.vmware.com.2.3.3+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')

imgpkg pull -b $image_url -o /tmp/harbor-package-2.3.3+vmware.1-tkg.1

cp /tmp/harbor-package-2.3.3+vmware.1-tkg.1/config/values.yaml harbor-data-values.yaml

Set the mandatory passwords and secrets in the harbor-data-values.yaml file by automatically generating random passwords and secrets:

bash /tmp/harbor-package-2.3.3+vmware.1-tkg.1/config/scripts/generate-passwords.sh harbor-data-values.yaml

Specify other settings in the harbor-data-values.yaml file.

Set the hostname setting to the hostname you want to use to access Harbor via ingress. For example, harbor.yourdomain.com.

To use your own certificates, update the tls.crt, tls.key, and ca.crt settings with the contents of your certificate, key, and CA certificate. The certificate can be signed by a trusted authority or be self-signed. If you leave these blank, Tanzu Kubernetes Grid automatically generates a self-signed certificate.

The format of the tls.crt and tls.key looks like this:

tlsCertificate:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ---snipped---
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    ---snipped---
    -----END PRIVATE KEY-----

If you used the generate-passwords.sh script, optionally update the harborAdminPassword with something that is easier to remember.

Optionally update other persistence settings to specify how Harbor stores data.

If you need to store a large quantity of container images in Harbor, set persistence.persistentVolumeClaim.registry.size to a larger number.

If you do not update the storageClass under persistence settings, Harbor uses the cluster’s default storageClass.
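
For example, the relevant section of harbor-data-values.yaml might look like the sketch below (the size and class name are placeholders; the key paths follow the settings named above):

persistence:
  persistentVolumeClaim:
    registry:
      storageClass: vsphere-storage-class   # leave empty to use the default storageClass
      size: 100Gi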

Remove all comments in the harbor-data-values.yaml file:

yq -i eval '... comments=""' harbor-data-values.yaml

Install the Harbor package:

tanzu package install harbor \
--package-name harbor.tanzu.vmware.com \
--version 2.3.3+vmware.1-tkg.1 \
--values-file harbor-data-values.yaml \
--namespace my-packages

Obtain the address of the Envoy service load balancer.

kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.status.loadBalancer.ingress[0]}'

Update your DNS record to point the hostname to the IP address above.

Update Harbor

To update the Harbor installation in any way, such as updating the TLS certificate, make your changes to the harbor-data-values.yaml file and then run the following to update Harbor.

tanzu package installed update harbor --version 2.3.3+vmware.1-tkg.1 --values-file harbor-data-values.yaml --namespace my-packages

Using Harbor as an OCI Registry for Helm Charts

Login to the registry

helm registry login -u admin harbor2.vmwire.com

Package a helm chart if you haven’t got one already packaged

helm package buildachart

Upload a chart to the registry

helm push buildachart-0.1.0.tgz oci://harbor2.vmwire.com/chartrepo

The chart can now be seen in the Harbor UI in the view as where normal Docker images are.

OCI based Harbor

Notice that this is an OCI registry and not a Helm repository based on ChartMuseum; that’s why you won’t see the ‘Helm Charts’ tab next to the ‘Repositories’ tab.

ChartMuseum based Harbor

Deploy an application with Helm

Let’s deploy the buildachart application; this is a simple nginx application that can use TLS, giving us a secure site with HTTPS.

Create a new namespace and the TLS secret for the application. Copy the tls.crt and tls.key files in PEM format to $HOME/certs/.

# Create a new namespace for cherry
k create ns cherry

# Create a TLS secret with the contents of tls.key and tls.crt in the cherry namespace
kubectl create secret tls cherry-tls --key $HOME/certs/tls.key --cert $HOME/certs/tls.crt -n cherry

Deploy the app using Harbor as the Helm chart repository

helm install buildachart oci://harbor2.vmwire.com/chartrepo/buildachart --version 0.1.0 -n cherry
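
A quick way to verify the deployment, assuming the chart creates an ingress that uses the cherry-tls secret:

# Check the Helm release
helm list -n cherry

# Check the pods and ingress created by the chart
kubectl get po,ingress -n cherry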

If you need to install Helm

Follow the links below, or use the install script shown after them.

https://helm.sh/docs/topics/registries/

https://opensource.com/article/20/5/helm-charts

https://itnext.io/helm-3-8-0-oci-registry-support-b050ff218911
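
As a quick alternative, the install script documented by the Helm project can be used; a sketch, assuming you are happy to run the script from the official repository:

# Download and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh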

Deploy Harbor Registry with Helm and expose with Ingress

Intro

The Harbor registry that ships with TKG 1.5.1 uses Tanzu Packages to deploy Harbor into a TKG cluster. This version of Harbor does not support Helm charts using ChartMuseum; VMware dropped support for ChartMuseum in TKG and is adopting OCI registries instead. This post describes how to deploy the upstream Harbor distribution, which supports ChartMuseum as a Helm repository. Follow this other post here to deploy Harbor with Tanzu Packages (Kapp) with support for OCI.

The example below uses the following components:

  • TKG 1.5.1
  • AKO 1.6.1
  • Contour 1.18.2
  • Helm 3.8.0

Use the previous post to deploy the prerequisites.

Step 1 – Download the harbor helm chart

helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --untar

Step 2 – Edit the values.yaml file

You only need to change the following lines.

Line Number     Specification
5               loadBalancer or ingress (contour)
13              use TLS certificate
30 & 35         secret name (created in Step 3)
38 & 39         FQDN of your harbor and notary DNS A records
215, 221, etc.  a storage class, if you don’t have a default storage class; leave blank to use your default storage class
355             admin password

After editing, the expose section of values.yaml looks like this.

expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  type: ingress
  tls:
    # Enable the tls or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: true
    # The source of the tls certificate. Set it as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: secret
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: ""
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "harbor-cert"
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: "harbor-cert"
  ingress:
    hosts:
      core: harbor.vmwire.com
      notary: notary.harbor.vmwire.com
   
---snipped---

Step 3 – Create a TLS secret for ingress

Copy the tls.crt and tls.key files in PEM format to $HOME/certs/.

# Create a new namespace for harbor
k create ns harbor

# Create a TLS secret with the contents of tls.key and tls.crt in the harbor namespace
kubectl create secret tls harbor-cert --key $HOME/certs/tls.key --cert $HOME/certs/tls.crt -n harbor

Step 4 – Install Harbor

Ensure you’re in the same directory where you ran Step 2.

helm install harbor . -n harbor

Monitor deployment with

kubectl get po -n harbor
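
You can also confirm that the ingress objects were created for the FQDNs set in Step 2 (the exact names depend on the release name):

kubectl get ingress -n harbor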

Log in

Use admin and the password you set on line 355 of the values.yaml file. The default password is Harbor12345.

Quick guide to install cert-manager, contour, prometheus and grafana into TKG using Tanzu Packages (Kapp)

Intro

For an overview of Kapp, please see this link here.

The latest versions as of TKG 1.5.1, February 2022.

Package         Version
cert-manager    1.5.3+vmware.2-tkg.1
contour         1.18.2+vmware.1-tkg.1
prometheus      2.27.0+vmware.2-tkg.1
grafana         7.5.7+vmware.2-tkg.1

Or run the following to see the latest available versions.

tanzu package available list cert-manager.tanzu.vmware.com -A
tanzu package available list contour.tanzu.vmware.com -A
tanzu package available list prometheus.tanzu.vmware.com -A
tanzu package available list grafana.tanzu.vmware.com -A

Install Cert Manager

tanzu package install cert-manager \
--package-name cert-manager.tanzu.vmware.com \
--namespace my-packages \
--version 1.5.3+vmware.2-tkg.1 \
--create-namespace
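
A quick check that cert-manager is running; the package is expected to deploy its pods into the cert-manager namespace:

kubectl get po -n cert-manager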

I’m using ingress with Contour, which needs a load balancer to expose the ingress services. Install AKO and NSX Advanced Load Balancer (Avi) by following this previous post.

Install Contour

Create a file named contour-data-values.yaml. This example uses NSX Advanced Load Balancer (Avi).

---
infrastructure_provider: vsphere
namespace: tanzu-system-ingress
contour:
 configFileContents: {}
 useProxyProtocol: false
 replicas: 2
 pspNames: "vmware-system-restricted"
 logLevel: info
envoy:
 service:
   type: LoadBalancer
   annotations: {}
   nodePorts:
     http: null
     https: null
   externalTrafficPolicy: Cluster
   disableWait: false
 hostPorts:
   enable: true
   http: 80
   https: 443
 hostNetwork: false
 terminationGracePeriodSeconds: 300
 logLevel: info
 pspNames: null
certificates:
 duration: 8760h
 renewBefore: 360h

Remove comments in the contour-data-values.yaml file.

yq -i eval '... comments=""' contour-data-values.yaml

Deploy contour

tanzu package install contour \
--package-name contour.tanzu.vmware.com \
--version 1.18.2+vmware.1-tkg.1 \
--values-file contour-data-values.yaml \
--namespace my-packages
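
Once Contour has reconciled, the Envoy service should receive a load balancer IP from Avi. You can check with:

kubectl get svc envoy -n tanzu-system-ingress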

Install Prometheus

Download the default values file and copy it to prometheus-data-values.yaml so that you can customise it to use ingress.

image_url=$(kubectl -n tanzu-package-repo-global get packages prometheus.tanzu.vmware.com.2.27.0+vmware.2-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')

imgpkg pull -b $image_url -o /tmp/prometheus-package-2.27.0+vmware.2-tkg.1

cp /tmp/prometheus-package-2.27.0+vmware.2-tkg.1/config/values.yaml prometheus-data-values.yaml

Edit the file and change any settings you need, such as adding the TLS certificate and private key for ingress. It’ll look something like this.

ingress:
  enabled: true
  virtual_host_fqdn: "prometheus-tkg-mgmt.vmwire.com"
  prometheus_prefix: "/"
  alertmanager_prefix: "/alertmanager/"
  prometheusServicePort: 80
  alertmanagerServicePort: 80
  tlsCertificate:
    tls.crt: |
      -----BEGIN CERTIFICATE-----
      --- snipped---
      -----END CERTIFICATE-----
    tls.key: |
      -----BEGIN PRIVATE KEY-----
      --- snipped---
      -----END PRIVATE KEY-----

Remove comments in the prometheus-data-values.yaml file.

yq -i eval '... comments=""' prometheus-data-values.yaml

Deploy prometheus

tanzu package install prometheus \
--package-name prometheus.tanzu.vmware.com \
--version 2.27.0+vmware.2-tkg.1 \
--values-file prometheus-data-values.yaml \
--namespace my-packages

Install Grafana

Download the default values file and copy it to grafana-data-values.yaml.

image_url=$(kubectl -n tanzu-package-repo-global get packages grafana.tanzu.vmware.com.7.5.7+vmware.2-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')

imgpkg pull -b $image_url -o /tmp/grafana-package-7.5.7+vmware.2-tkg.1

cp /tmp/grafana-package-7.5.7+vmware.2-tkg.1/config/values.yaml grafana-data-values.yaml

Generate a Base64-encoded password and edit the grafana-data-values.yaml file to update the default admin password.

echo -n 'Vmware1!' | base64

Also update the TLS configuration to use signed certificates for ingress. It will look something like this.

  secret:
    type: "Opaque"
    admin_user: "YWRtaW4="
    admin_password: "Vm13YXJlMSE="

ingress:
  enabled: true
  virtual_host_fqdn: "grafana-tkg-mgmt.vmwire.com"
  prefix: "/"
  servicePort: 80
  #! [Optional] The certificate for the ingress if you want to use your own TLS certificate.
  #! We will issue the certificate by cert-manager when it's empty.
  tlsCertificate:
    #! [Required] the certificate
    tls.crt: |
      -----BEGIN CERTIFICATE-----
      ---snipped---
      -----END CERTIFICATE-----
    #! [Required] the private key
    tls.key: |
      -----BEGIN PRIVATE KEY-----
      ---snipped---
      -----END PRIVATE KEY-----

Since I’m using ingress to expose the Grafana service, also change line 33 from LoadBalancer to ClusterIP. This prevents Kapp from creating an unnecessary LoadBalancer service that would consume an IP address.

#! Grafana service configuration
   service:
     type: ClusterIP
     port: 80
     targetPort: 3000
     labels: {}
     annotations: {}

Remove comments in the grafana-data-values.yaml file.

yq -i eval '... comments=""' grafana-data-values.yaml

Deploy Grafana

tanzu package install grafana \
--package-name grafana.tanzu.vmware.com \
--version 7.5.7+vmware.2-tkg.1 \
--values-file grafana-data-values.yaml \
--namespace my-packages

Accessing Grafana

Since I’m using ingress with the FQDN set to grafana-tkg-mgmt.vmwire.com and TLS enabled, I can now access the Grafana UI at https://grafana-tkg-mgmt.vmwire.com over a secure connection.

Listing all installed packages

tanzu package installed list -A

Making changes to Contour, Prometheus or Grafana

If you need to make changes to any of the configuration files, you can then update the deployment with the tanzu package installed update command.

tanzu package installed update contour \
--version 1.18.2+vmware.1-tkg.1 \
--values-file contour-data-values.yaml \
--namespace my-packages
tanzu package installed update prometheus \
--version 2.27.0+vmware.2-tkg.1 \
--values-file prometheus-data-values.yaml \
--namespace my-packages
tanzu package installed update grafana \
--version 7.5.7+vmware.2-tkg.1 \
--values-file grafana-data-values.yaml \
--namespace my-packages

Removing Cert Manager, Contour, Prometheus or Grafana

tanzu package installed delete cert-manager -n my-packages
tanzu package installed delete contour -n my-packages
tanzu package installed delete prometheus -n my-packages
tanzu package installed delete grafana -n my-packages

Copypasta for doing this again on another cluster

Place all your completed data-values files into a directory and just run the entire code block below to set everything up in one go.

# Deploy cert-manager
tanzu package install cert-manager \
--package-name cert-manager.tanzu.vmware.com \
--namespace my-packages \
--version 1.5.3+vmware.2-tkg.1 \
--create-namespace

# Deploy contour
yq -i eval '... comments=""' contour-data-values.yaml
tanzu package install contour \
--package-name contour.tanzu.vmware.com \
--version 1.18.2+vmware.1-tkg.1 \
--values-file contour-data-values.yaml \
--namespace my-packages

# Deploy prometheus
yq -i eval '... comments=""' prometheus-data-values.yaml
tanzu package install prometheus \
--package-name prometheus.tanzu.vmware.com \
--version 2.27.0+vmware.2-tkg.1 \
--values-file prometheus-data-values.yaml \
--namespace my-packages

# Deploy grafana
yq -i eval '... comments=""' grafana-data-values.yaml
tanzu package install grafana \
--package-name grafana.tanzu.vmware.com \
--version 7.5.7+vmware.2-tkg.1 \
--values-file grafana-data-values.yaml \
--namespace my-packages