Preparing NSX Advanced Load Balancer for Tanzu Kubernetes Grid Ingress Services

In this post I describe how to set up NSX ALB (Avi) in preparation for use with Tanzu Kubernetes Grid, more specifically, the Avi Kubernetes Operator (AKO).

AKO is a Kubernetes operator which works as an ingress controller and performs Avi-specific functions in a Kubernetes environment with the Avi Controller. It runs as a pod in the cluster and translates the required Kubernetes objects to Avi objects and automates the implementation of ingresses/routes/services on the Service Engines (SE) via the Avi Controller.
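
To make what AKO does a little more concrete, here is a minimal, generic example of a Kubernetes Service of type LoadBalancer (the name and ports are hypothetical and not specific to this environment). With AKO running in a workload cluster, an object like this is translated into an Avi virtual service on the Service Engines, with the VIP allocated from the front-end VIP network we configure later in this post.

# Hypothetical example: a standard Kubernetes Service of type LoadBalancer.
# AKO watches objects like this and creates the corresponding Avi virtual
# service on the Service Engines via the Avi Controller.
apiVersion: v1
kind: Service
metadata:
  name: demo-app            # hypothetical application name
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 80              # VIP port exposed by the Service Engines
      targetPort: 8080      # container port inside the pods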

Figure: Avi Kubernetes Operator Architecture

First, let's describe the architecture for TKG + AKO.

For each tenant that you have, you will have at least one AKO configuration.

A tenant can have one or more TKG workload clusters, and more than one TKG workload cluster can share an AKO configuration. This is important to remember for multi-tenant services when using Tanzu Kubernetes Grid. However, you can of course configure a separate AKO config for each TKG workload cluster if you wish. This will require more Service Engines and Service Engine Groups, as we will discuss further below.

So at a minimum, you will have several AKO configurations, summarized in the following table.

| AKO Config | Description | Specification |
| --- | --- | --- |
| install-ako-for-all | The default AKO configuration, used for the TKG Management Cluster and deployed by default | Provider-side AKO configuration for the TKG Management Cluster only. |
| ako-for-tkg-ssc | The AKO configuration for the Tanzu Shared Services Cluster | Provider-side AKO configuration for the Tanzu Shared Services Cluster only. tkg-ssc-akodeploymentconfig.yaml |
| ako-for-tenant-1 | The AKO configuration for Tenant 1 | AKO configuration prepared by the Provider and deployed for the tenant to use. tkg-tenant-1-akodeploymentconfig.yaml |
| ako-for-tenant-x | The AKO configuration for Tenant x | |

Although TKG deploys a default AKO config, we do not use any ingress services for the TKG Management Cluster. Therefore, we do not need to deploy a Service Engine Group and Service Engines for this cluster.

Service Engine Groups and Service Engines are only required if you need ingress services for your applications. We of course need this for the Tanzu Shared Services Cluster and for any applications deployed into a workload cluster.

I will go into more detail in a follow-up post, where I will demonstrate how to set up the Tanzu Shared Services Cluster using the preparation steps described in this post.

Let's start the Avi Controller configuration. Although I am using the Tanzu Shared Services Cluster as the example for this guide, the same steps can be repeated for all additional Tanzu Kubernetes Grid workload clusters. All that is needed is a few changes to the .yaml files and you're good to go.

Clouds

I prefer not to use the Default-Cloud, and will always create a new cloud.

The benefit of using NSX ALB in write mode (orchestration mode) is that NSX ALB will orchestrate the creation of Service Engines for you and also scale out more Service Engines if your applications demand more capacity. However, if you are using VMware Cloud on AWS, this is not possible due to the RBAC constraints within VMC, so only non-orchestration mode is possible with VMC.

In this post I’m using my home lab which is running vSphere.

Navigate to Infrastructure, Clouds, click on the CREATE button, and select the VMware vCenter/VMware vSphere ESX option. This post uses vCenter as the cloud type.

Fill in the details as my screenshots show. You can leave the IPAM Profile settings empty for now; we will complete these in a later step.

Select the Data Center within your vSphere hierarchy. I'm using my home lab for this example. Again, leave all the other settings at their defaults.

The next tab takes you to the network options for the management network that the Service Engines will use. This network needs to be routable between the Avi Controller(s) and the Service Engines.

The network tab will show you the networks that it finds from the vCenter connection; I am using my management network. This network is where I run all of the management appliances, such as vCenter, NSX-T, Avi Controllers etc.

It's best to configure a static IP pool for the service engines. Generally, you'll need just a handful of IP addresses, as each service engine group will have two service engines and each service engine only needs one management IP. A service engine group can provide Kubernetes load balancing services for the entire Kubernetes cluster. This of course depends on your sizing requirements and can be reviewed here. For my home lab, fourteen IP addresses are more than sufficient for my needs.

Service Engine Group

While we’re in the Infrastructure settings lets proceed to setup a new Service Engine Group. Navigate to Infrastructure, Service Engine Group, select the new cloud that we previously setup and then click on the CREATE button. Its important that you ensure you select your new cloud from that drop down menu.

Give your new service engine group a name; I tend to use a naming format such as tkg-<cluster-name>-se-group. For this example, I am setting up a new SE group for the Tanzu Shared Services Cluster.

Reduce the maximum number of service engines if you wish. You can leave all other settings at their defaults.

Click on the Advanced tab to set up some vSphere specifics. Here you can configure options that help you identify the SEs in the vSphere hierarchy, place the SEs into a VM folder, include or exclude compute clusters or hosts, and even include or exclude a datastore.

Service Engine Groups are important as they are the boundary that TKG clusters use for L4 services. Each SE Group needs a unique name; each TKG workload cluster references this name in its AKODeploymentConfig file, which is the config file that maps a TKG cluster to NSX ALB for L4 load balancing services.

With TKG, when you create a TKG workload cluster you must specify key-value pairs (labels) that tie the cluster to a Service Engine Group, and these are then applied in the AKODeploymentConfig file.

The following table shows where these relationships lie. I will go into more detail in a follow-up post, where I will demonstrate how to set up the Tanzu Shared Services Cluster.

| Avi Controller | TKG cluster deployment file | AKO Config file |
| --- | --- | --- |
| Service Engine Group name: tkg-ssc-se-group | AVI_LABELS: 'cluster': 'tkg-ssc' | clusterSelector.matchLabels: cluster: tkg-ssc; serviceEngineGroup: tkg-ssc-se-group |
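
To tie the table together, here is a trimmed sketch of what tkg-ssc-akodeploymentconfig.yaml could look like. This is a sketch only, based on my understanding of the AKODeploymentConfig CRD in TKG 1.3: treat the apiVersion, the adminCredentialRef/certificateAuthorityRef references, and the cloud and controller values as assumptions to verify against your own environment and TKG version.

# Hedged sketch of an AKODeploymentConfig for the Tanzu Shared Services Cluster.
# Field names are based on the TKG 1.3 AKODeploymentConfig CRD (assumption);
# the cloud name and controller address below are placeholders.
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-for-tkg-ssc
spec:
  cloudName: tkg-cloud                     # the new cloud created earlier (example name)
  controller: avi-controller.example.com   # Avi Controller FQDN or IP (placeholder)
  serviceEngineGroup: tkg-ssc-se-group     # SE Group created for the shared services cluster
  clusterSelector:
    matchLabels:
      cluster: tkg-ssc                     # matches AVI_LABELS set at cluster creation
  dataNetwork:
    name: tkg-ssc-vip                      # front-end VIP network from the table below
    cidr: 172.16.4.32/27
  adminCredentialRef:
    name: avi-controller-credentials       # assumption: secrets created by TKG in
    namespace: tkg-system-networking       # the tkg-system-networking namespace
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking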

Networks

Navigate to Infrastructure, Networks, and again ensure that you select your new cloud from the drop-down menu.

The Avi Controller will show you all the networks that it has detected using the vCenter connection you configured. In this section we configure the networks that NSX ALB will use when it provisions a load balancer service for Kubernetes. Generally, depending on how you set up your network architecture for TKG, you will have one network that the TKG cluster nodes use and another for the front-end VIPs. The VIP network is where the Kubernetes services are exposed; think of it as a load balancer DMZ network.

In my home lab, I use the following setup.

| Network | Description | Specification |
| --- | --- | --- |
| tkg-mgmt | TKG Management Cluster | Network: 172.16.3.0/27; Static IP Pool: 172.16.3.26 – 172.16.3.29 |
| tkg-ssc | TKG Shared Services Cluster | Network: 172.16.3.32/27; Static IP Pool: 172.16.3.59 – 172.16.3.62 |
| tkg-ssc-vip | TKG Shared Services Cluster front-end VIPs | Network: 172.16.4.32/27; Static IP Pool: 172.16.4.34 – 172.16.4.62 |

IPAM Profile

Create an IPAM profile by navigating to Templates, Profiles, IPAM/DNS Profiles, clicking on the CREATE button, and selecting IPAM Profile.

Select the cloud that you set up above, and select all of the usable networks for applications that will consume the load balancer service from NSX ALB; these are the networks you configured in the previous step.

Avi Controller Certificate

We also need the SSL certificate used by the Avi Controller. In my home lab I am using a certificate signed by Let's Encrypt, which I wrote about in a previous post.

Navigate to Templates, Security, SSL/TLS Certificates and click on the icon with a downward arrow in a circle next to the certificate for your Avi Controller; it's normally the first one in the list.

Click on the Copy to clipboard button and paste the certificate into Notepad++ or similar.
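
If you are scripting your TKG deployment, this certificate is typically supplied to the cluster configuration in base64-encoded form (for example via the AVI_CA_DATA_B64 variable in TKG 1.3 cluster configuration files; confirm the variable name against your TKG version). A quick way to produce the encoded string on the bootstrap machine, assuming you saved the pasted certificate as avi-controller.pem (a file name I've made up for this example):

# avi-controller.pem is a placeholder name; paste the copied certificate into it first.
base64 -w 0 avi-controller.pem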

At this point we have NSX ALB set up for deploying a new TKG workload cluster using the new Service Engine Group that we have prepared. In the next post, I'll demonstrate how to set up the Tanzu Shared Services Cluster to use NSX ALB for ingress services.

Tanzu Bootstrap using Photon OS 4.0

A quick post on how to set up a Tanzu Kubernetes Grid bootstrap VM using Photon OS.

This topic explains how to install and initialize the Tanzu command line interface (CLI) on a bootstrap machine. I've found that using Photon OS 4.0 is the fastest and most straightforward method to get going with Tanzu. I've tried Ubuntu and CentOS, but these require a lot more preparation with prerequisites and dependencies, such as building a VM from an ISO and installing Docker, which Photon OS already comes with. The other thing I've noticed with Linux distros such as Ubuntu is that other components might interfere with your Tanzu deployment; AppArmor and the ufw firewall come to mind.

The Tanzu bootstrap machine is the laptop, host, or server that you deploy management and workload clusters from, and that keeps the Tanzu and Kubernetes configuration files for your deployments. The bootstrap machine is typically local, but it can also be a physical machine or VM that you access remotely. We will be using a ready built Photon OS OVA that is supported by VMware.

Photon OS is an open-source, minimalist Linux operating system from VMware that is optimized for cloud computing platforms, VMware vSphere deployments, and applications native to the cloud.

Photon OS is a Linux container host optimized for vSphere and cloud-computing platforms such as Amazon Elastic Compute Cloud and Google Compute Engine. As a lightweight and extensible operating system, Photon OS works with the most common container formats, including Docker, Rocket, and Garden. Photon OS includes a yum-compatible, package-based lifecycle management system called tdnf.

Once the Tanzu CLI is installed, the second and last step to deploying Tanzu Kubernetes Grid is using the Tanzu CLI to create or designate a management cluster on each cloud provider that you use. The Tanzu CLI then communicates with the management cluster to create and manage workload clusters on the cloud provider.

Figure: Tanzu bootstrap high-level architecture

Download the Photon OS OVA from this direct link.

Deploy that OVA using vCenter Linux Guest Customization and ensure that you deploy it on the same network as the TKG management cluster network that you intend to use. This is important, as the bootstrap will set up a temporary kind cluster in Docker whose state is then moved over to the TKG management cluster.

Ensure your VM has a minimum of 6GB of RAM. You can read up on other pre-requisites here.

Log in as root with the password changeme; you will be asked to update the root password. All the steps below are done using the root account. If you wish to use a non-root account with Photon OS, ensure that you add that account to the docker group; more details in this link here.
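
If you do go down the non-root route, the commands look roughly like this (the user name is just an example; log out and back in afterwards so the group change takes effect):

useradd -m -s /bin/bash tanzu-admin    # example user name
usermod -aG docker tanzu-admin         # allow this user to talk to the Docker daemon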

Install the tar package, which we will need later to extract the Tanzu CLI.

tdnf install tar

The first thing to do is create a new directory called /tanzu.

mkdir /tanzu

and then go into that directory; we will be performing most tasks in this directory.

cd /tanzu

Copy the following files to the /tanzu directory. You can get these files from my.vmware.com under the Tanzu Kubernetes Grid product listing.

kubectl-linux-v1.20.5-vmware.1.gz: the kubectl tool.

tanzu-cli-bundle-v1.3.1-linux-amd64.tar: the Tanzu CLI.

Unpack the Tanzu CLI

tar -xvf tanzu-cli-bundle-v1.3.1-linux-amd64.tar

Unzip the kubectl tool

gunzip kubectl-linux-v1.20.5-vmware.1.gz

Install kubectl

install kubectl-linux-v1.20.5-vmware.1 /usr/local/bin/kubectl

Now go to the /cli directory

cd /tanzu/cli

Install Tanzu CLI

install core/v1.3.1/tanzu-core-linux_amd64 /usr/local/bin/tanzu
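
At this point it is worth a quick sanity check that both binaries are on the PATH and executable:

tanzu version
kubectl version --client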

Install Tanzu CLI Plugins

Now go back to the /tanzu directory

cd /tanzu
tanzu plugin clean
tanzu plugin install --local cli all
tanzu plugin list

Enable bash auto completion for tanzu cli

source <(tanzu completion bash)

Enable bash auto completion for kubectl

source <(kubectl completion bash)
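
Note that the two source commands above only apply to the current shell session. If you want completion to survive a logout or reboot of the bootstrap VM, append them to your .bashrc (assuming bash is your shell):

echo 'source <(tanzu completion bash)' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc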

Install Carvel Tools – ytt, kapp, kbld and imgpkg

Install ytt

gunzip ytt-linux-amd64-v0.31.0+vmware.1.gz
chmod ugo+x ytt-linux-amd64-v0.31.0+vmware.1
mv ./ytt-linux-amd64-v0.31.0+vmware.1 /usr/local/bin/ytt

Install kapp

gunzip kapp-linux-amd64-v0.36.0+vmware.1.gz
chmod ugo+x kapp-linux-amd64-v0.36.0+vmware.1
mv ./kapp-linux-amd64-v0.36.0+vmware.1 /usr/local/bin/kapp

Install kbld

gunzip kbld-linux-amd64-v0.28.0+vmware.1.gz
chmod ugo+x kbld-linux-amd64-v0.28.0+vmware.1
mv ./kbld-linux-amd64-v0.28.0+vmware.1 /usr/local/bin/kbld

Install imgpkg

gunzip imgpkg-linux-amd64-v0.5.0+vmware.1.gz
chmod ugo+x imgpkg-linux-amd64-v0.5.0+vmware.1
mv ./imgpkg-linux-amd64-v0.5.0+vmware.1 /usr/local/bin/imgpkg
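
Optionally, verify that each tool landed in /usr/local/bin correctly. The Carvel CLIs each ship a version subcommand as far as I'm aware; adjust if your versions differ:

ytt version
kapp version
kbld version
imgpkg version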

Stop Docker

systemctl stop docker

Upgrade software on Photon OS

tdnf -y upgrade docker

Start Docker

systemctl start docker

Add your SSH public key so you can log in with your private key. Please see this link for a guide. Log out and use your private key to log in.

vi ~/.ssh/authorized_keys
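
SSH is strict about permissions on these files, so if you are creating them by hand something along these lines is needed (the key string is obviously a placeholder for your own public key):

mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-ed25519 AAAA...placeholder user@workstation" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys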

Start TKG UI Installer

tanzu management-cluster create --ui --bind <ip-of-your-photon-vm>:8080 --browser none
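
Because the installer is started with --browser none, nothing opens on the Photon VM itself; browse to http://<ip-of-your-photon-vm>:8080 from your workstation instead. If that network is not directly reachable from your workstation, an SSH tunnel works too (a workaround I use, not an official requirement):

ssh -L 8080:<ip-of-your-photon-vm>:8080 root@<ip-of-your-photon-vm>
# then browse to http://localhost:8080 on your workstation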