Kubernetes Load Balancer Service for CSE on Cloud Director

This article describes how to set up vCenter, VCD, NSX-T and NSX Advanced Load Balancer to support exposing Kubernetes applications in Kubernetes clusters provisioned into VCD.

At the end of this post, you will be able to run this command:

kubectl expose deployment webserver --port=80 --type=LoadBalancer

… and have NSX ALB together with VCD and NSX-T automate the provisioning and setup of everything that allows you to expose that application to the outside world using a Kubernetes service of type LoadBalancer.

Create a Content Library for NSX ALB

In vCenter (the resource vCenter managing VCD PVDCs), create a Content Library that NSX Advanced Load Balancer will use to upload the service engine OVA.

Create a T1 for the Avi Service Engine management network

Create a T1 gateway for the Avi Service Engine management network. You can either attach this T1 to the default T0 or create a new T0.

  • enable DHCP server for the T1
  • enable All Static Routes and All Connected Segments & Service Ports under Route Advertisement
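
If you prefer to script this step, a rough equivalent using the NSX-T Policy API might look like the sketch below. This is an illustration only; the gateway ID, Tier-0 path and DHCP server profile path are placeholders for your own environment:

# create or update the SE management T1, attach it to a T0,
# attach a DHCP server profile, and advertise static routes
# plus connected segments and service ports
curl -k -u admin:'<password>' -X PATCH \
  https://nsx.vmwire.com/policy/api/v1/infra/tier-1s/avi-se-mgmt-t1 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "avi-se-mgmt-t1",
        "tier0_path": "/infra/tier-0s/<your-t0-id>",
        "dhcp_config_paths": ["/infra/dhcp-server-configs/<your-dhcp-profile>"],
        "route_advertisement_types": ["TIER1_STATIC_ROUTES", "TIER1_CONNECTED"]
      }'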

Create a network segment for the Service Engine management network

Create a network segment for the Avi Service Engine management network. Attach the segment to the T1 that was created in the previous step.

Ensure you enable DHCP; this will assign IP addresses to the service engines automatically, so you won't need to set up IPAM profiles in Avi Vantage.
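
The equivalent Policy API call for the segment might look like this sketch; the segment name, subnet, DHCP range and transport zone path are placeholders:

# create the SE management segment on the T1 with a DHCP range
curl -k -u admin:'<password>' -X PATCH \
  https://nsx.vmwire.com/policy/api/v1/infra/segments/avi-se-mgmt \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "avi-se-mgmt",
        "connectivity_path": "/infra/tier-1s/avi-se-mgmt-t1",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz>",
        "subnets": [{
          "gateway_address": "192.168.10.1/24",
          "dhcp_ranges": ["192.168.10.10-192.168.10.200"]
        }]
      }'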

NSX Advanced Load Balancer Settings

A couple of things to set up here.

  • You do not need to create any tenants in NSX ALB; just use the default admin context.
  • No IPAM/DNS Profiles are required as we will use DHCP from NSX-T for all networks.
  • Use FQDNs instead of IP addresses
  • Use the same FQDN in all systems for consistency and to ensure that registration between the systems works
    • NSX ALB
    • VCD
    • NSX-T
  • Navigate to Administration, User Credentials and set up user credentials for the NSX-T controller and vCenter server
  • Navigate to Administration, Settings, Tenant Settings and ensure that the Tenant Settings are configured correctly for your deployment

Set up an NSX-T Cloud

Navigate to Infrastructure, Clouds. Set up your cloud similar to mine; I have called my NSX-T cloud nsx.vmwire.com (which is the FQDN of my NSX-T Controller).

Let's go through these settings from the top.

  • use the FQDN of your NSX-T manager for the name
  • click the DHCP option; we will be using NSX-T's DHCP server, so we can ignore IPAM/DNS later
  • enter something for the Object Name Prefix; this gives the SE VM names a prefix so they can be identified in vCenter. I used avi here.
  • type the FQDN of the NSX-T manager into the NSX-T Manager Address
  • choose the NSX-T Manager Credentials that you configured earlier
  • select the Transport Zone that you are using in VCD for your tenants
    • under Management Network Segment, select the T1 that you created earlier for SE management networking
    • under Segment ID, select the network segment that you created earlier for the SE management network
  • click ADD under the Data Network Segment(s)
    • select the T1 that is used by the tenant in VCD
    • select the tenant organization routed network that is attached to the T1 selected in the previous step
  • the two previous settings tell NSX ALB where to place the data/VIP network for front-end load balancing. NSX ALB will create a new segment for this in NSX-T automatically, and VCD will create DNAT rules automatically when a virtual service is requested in NSX ALB
  • the last step is to add the vCenter server; this is the vCenter server that manages the PVDCs used in VCD.

Now wait for a while until the status icon turns green and shows Complete.

Set up a Service Engine Group

Decide whether you want to use a shared service engine group for all VCD tenants or a dedicated service engine group for each tenant.

I use the dedicated model.

  • navigate to Infrastructure, Service Engine Group
  • change the cloud to the NSX-T cloud that you setup earlier
  • create a new service engine group with your preferred settings, you can read about the options here.

Set up Avi in VCD

Log into VCD as a Provider and navigate to Resources, Infrastructure Resources, NSX-ALB, Controllers and click on the ADD link.

Wait for a while for Avi to sync with VCD. Then continue to add the NSX-T Cloud.

Navigate to Resources, Infrastructure Resources, NSX-ALB, NSX-T Clouds and click on the ADD link.

Proceed when you can see the status is healthy.

Navigate to Resources, Infrastructure Resources, NSX-ALB, Service Engine Groups and click on the ADD link.

Staying logged in as a Provider, navigate to the tenant for which you wish to enable NSX ALB load balancing services, then go to Networking, Edge Gateways, Load Balancer, Service Engine Groups and add the service engine group to this tenant.

This will enable this tenant to use NSX ALB load balancing services.
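
If you want to verify the registration from the API side, a quick sketch against the VCD CloudAPI might look like this; the FQDN, API version and bearer token are placeholders for your environment:

# list the NSX ALB service engine groups that VCD has imported (provider context)
curl -k -H 'Accept: application/json;version=36.0' \
     -H "Authorization: Bearer $VCD_TOKEN" \
     https://vcd.vmwire.com/cloudapi/1.0.0/loadBalancer/serviceEngineGroups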

Deploy a new Kubernetes cluster in VCD with Container Service Extension

Deploy a new Kubernetes cluster using Container Service Extension in VCD as normal.

Once the cluster is ready, download the kube config file and log into the cluster.
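
If you use vcd-cli with the CSE extension, fetching the kubeconfig is a one-liner. A sketch, with the VCD FQDN, org, user and cluster name all placeholders:

# log in to VCD as the tenant and pull the cluster's kubeconfig
vcd login vcd.vmwire.com tenant1 tenant-admin -p '<password>'
vcd cse cluster config k8s-cluster-1 > ~/.kube/config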

Check that all the nodes and pods are up as normal.

kubectl get nodes
kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-7nlqs                          2/2     Running   0          21m
kube-system   antrea-agent-q5qc8                          2/2     Running   0          24m
kube-system   antrea-controller-5cd95c574d-r4q2z          0/1     Pending   0          8m38s
kube-system   coredns-6598d898cd-qswn8                    0/1     Pending   0          24m
kube-system   coredns-6598d898cd-s4p5m                    0/1     Pending   0          24m
kube-system   csi-vcd-controllerplugin-0                  0/3     Pending   0          4m29s
kube-system   etcd-mstr-zj9p                              1/1     Running   0          24m
kube-system   kube-apiserver-mstr-zj9p                    1/1     Running   0          24m
kube-system   kube-controller-manager-mstr-zj9p           1/1     Running   0          24m
kube-system   kube-proxy-76m4h                            1/1     Running   0          24m
kube-system   kube-proxy-9229x                            1/1     Running   0          21m
kube-system   kube-scheduler-mstr-zj9p                    1/1     Running   0          24m
kube-system   vmware-cloud-director-ccm-99fd59464-qjj7n   1/1     Running   0          24m

You might see that the following pods in the kube-system namespace are in a Pending state. If everything is already working, then move on to the next section.

kube-system   coredns-6598d898cd-qswn8     0/1     Pending
kube-system   coredns-6598d898cd-s4p5m     0/1     Pending
kube-system   csi-vcd-controllerplugin-0   0/3     Pending

This is due to the cluster waiting for the csi-vcd-controllerplugin-0 to start.

To get this working, we just need to configure the csi-vcd-controllerplugin-0 with the instructions in this previous post.
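
For reference, the fix in that post boils down to giving the CSI controller valid VCD credentials. A minimal sketch, assuming your build of the driver reads them from a vcloud-basic-auth secret in kube-system (check the linked post for the exact steps for your version):

# create the secret that the csi-vcd controller uses to talk to VCD
kubectl create secret generic vcloud-basic-auth \
  --namespace kube-system \
  --from-literal=username='<vcd-username>' \
  --from-literal=password='<vcd-password>'

# delete the Pending controller pod so the StatefulSet recreates it
kubectl delete pod csi-vcd-controllerplugin-0 -n kube-system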

Once done, you’ll see that the pods are all now healthy.

kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-7nlqs                          2/2     Running   0          23m
kube-system   antrea-agent-q5qc8                          2/2     Running   0          26m
kube-system   antrea-controller-5cd95c574d-r4q2z          1/1     Running   0          10m
kube-system   coredns-6598d898cd-qswn8                    1/1     Running   0          26m
kube-system   coredns-6598d898cd-s4p5m                    1/1     Running   0          26m
kube-system   csi-vcd-controllerplugin-0                  3/3     Running   0          60s
kube-system   csi-vcd-nodeplugin-twr4w                    2/2     Running   0          49s
kube-system   etcd-mstr-zj9p                              1/1     Running   0          26m
kube-system   kube-apiserver-mstr-zj9p                    1/1     Running   0          26m
kube-system   kube-controller-manager-mstr-zj9p           1/1     Running   0          26m
kube-system   kube-proxy-76m4h                            1/1     Running   0          26m
kube-system   kube-proxy-9229x                            1/1     Running   0          23m
kube-system   kube-scheduler-mstr-zj9p                    1/1     Running   0          26m
kube-system   vmware-cloud-director-ccm-99fd59464-qjj7n   1/1     Running   0          26m

Testing the Load Balancer service

Let's deploy an nginx webserver and expose it using all of the infrastructure that we set up above.

kubectl create deployment webserver --image nginx

Wait for the deployment to start and the pod to go into a running state. You can use this command to check:

kubectl get deploy webserver
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver   1/1     1            1           7h47m

We can't access the nginx default web page until we expose it using the load balancer service.

kubectl expose deployment webserver --port=80 --type=LoadBalancer
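
If you prefer the declarative route, the expose command above is roughly equivalent to applying this Service manifest (kubectl create deployment labels the pods app=webserver, which the selector below relies on):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  type: LoadBalancer
  selector:
    app: webserver
  ports:
    - port: 80
      targetPort: 80
EOF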

Wait for the load balancer service to be provisioned and for an external IP to be assigned. During this time, you'll see the service engines being provisioned automatically by NSX ALB. It'll take 10 minutes or so to get everything up and running.

You can use this command to check when the load balancer service has completed provisioning and to see the EXTERNAL-IP.

kubectl get service webserver
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
webserver   LoadBalancer   100.71.45.194   10.149.1.114   80:32495/TCP   7h48m

You can see that NSX ALB, VCD and NSX-T all worked together to expose the nginx application to the outside world.

The external IP of 10.149.1.114 in my environment is an uplink segment on a T0 that I have configured for VCD tenants to use as egress and ingress into their organization VDC. It is the external network for their VDCs.

Paste the external IP into a web browser and you should see the nginx web page.
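
You can also test from the command line; the stock nginx welcome page has a recognisable title:

curl -s http://10.149.1.114 | grep '<title>'
<title>Welcome to nginx!</title>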

In the next post, I'll go over the end-to-end network flow to show how this all connects NSX ALB, VCD, NSX-T and Kubernetes together.

Author: Hugo Phan

@hugophan
