Kubernetes Gateway API with NSX Advanced Load Balancer (Avi)

Using LoadBalancers, Gateways, GatewayClasses, AviInfraSettings, IngressClasses and Ingresses

Gateway API replaces services of type LoadBalancer in applications that require shared IP with multiple services and network segmentation. The Gateway API can be used to meet the following requirements:

  1. Shared IP – supporting multiple services, protocols and ports on the same load balancer external IP address
  2. Network segmentation – supporting multiple networks, e.g., oam, signaling and traffic on the same load balancer

NSX Advanced Load Balancer (Avi) supports both of these requirements through the use of the Gateway API. The following section describes how this is implemented.

The Gateway API introduces a few new resource types:

  • GatewayClasses are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
  • Gateways are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.

AviInfraSetting

AviInfraSetting provides a way to segregate Layer-4/Layer-7 virtual services based on properties of different underlying infrastructure components, such as the Service Engine Group and the intended VIP network.

A sample AviInfraSetting is shown below:

apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: aviinfrasetting-tkg-wkld-oam
spec:
  seGroup:
    name: tkgvsphere-tkgworkload-group10
  network:
    vipNetworks:
      - networkName: tkg-wkld-oam-vip
        cidr: 10.223.63.0/26
    enableRhi: false

AviInfraSetting is a cluster-scoped CRD. AviInfraSetting resources can be attached to the intended Services using the Gateway API, as described below.

GatewayClass

Gateway APIs provide interfaces to structure Kubernetes service networking.

AKO supports the Gateway API via the servicesAPI flag in its values.yaml.
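
To enable it, set the flag to true when deploying AKO with Helm. A minimal sketch of the relevant values.yaml fragment is shown below; the exact placement of the flag (assumed here to be under AKOSettings, as in recent AKO releases) should be checked against your AKO version:

AKOSettings:
  clusterName: my-cluster   # name of this workload cluster
  servicesAPI: true         # enable Gateway API (services API) support in AKO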

The AviInfraSetting resource can be attached to a GatewayClass object via .spec.parametersRef, as shown below:

apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: avigatewayclass-tkg-wkld-oam
spec:
  controller: ako.vmware.com/avi-lb
  parametersRef:
    group: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-oam

Gateway

The Gateway object provides a way to configure multiple Services as backends to the Gateway using label matching. The labels are specified as constant key-value pairs, the keys being ako.vmware.com/gateway-namespace and ako.vmware.com/gateway-name. The values corresponding to these keys must match the Gateway namespace and name respectively for AKO to consider the Gateway valid. If either of the label keys is not provided as part of matchLabels, or if the namespace/name provided in the label values does not match the actual Gateway namespace/name, AKO will consider the Gateway invalid. See https://avinetworks.com/docs/ako/1.5/gateway/ for details.

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: app-gateway-admin-0
  namespace: default
spec:
  gatewayClassName: avigatewayclass-tkg-wkld-oam
  listeners:
  - protocol: UDP
    port: 161
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: TCP
    port: 80
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: TCP
    port: 443
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: app-gateway-admin-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service

How to use the Gateway API

In your Helm charts, for any Service that previously required type LoadBalancer, use type ClusterIP instead and add labels such as the following:

apiVersion: v1
kind: Service
metadata:
  name: web-statefulset-service-oam
  namespace: default
  labels:
    ako.vmware.com/gateway-name: app-gateway-admin-0
    ako.vmware.com/gateway-namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 8443
    targetPort: 443
    protocol: TCP

The Gateway labels

ako.vmware.com/gateway-name: app-gateway-admin-0
ako.vmware.com/gateway-namespace: default

and the ClusterIP type tell the Avi Kubernetes Operator (AKO) to use the Gateways; each Gateway is on a separate network segment for traffic separation.

Each Gateway also exposes the ports that the application uses. Configure your Gateways accordingly, then change your Helm chart to use the Gateway objects.

IngressClass

AviInfraSettings can be applied to Ingress resources using the IngressClass construct. IngressClass provides a way to configure controller-specific load balancing parameters and apply these configurations to a set of Ingress objects. AKO supports listening to IngressClass resources in Kubernetes version 1.19+. The AviInfraSetting reference can be provided in the IngressClass as shown below:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-oam
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-oam
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-trf
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-trf
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-sigtran
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: aviinfrasetting-tkg-wkld-sigtran

Using IngressClass

The AviInfraSetting resource can be attached to a GatewayClass or an IngressClass object via .spec.parametersRef. Note, however, that if you use annotations on a Service of type LoadBalancer instead of labels with a Gateway API object, you will not be able to share a protocol and port on the same IP address, for example TCP and UDP 53 on the same LoadBalancer IP address. This is not supported until Kubernetes supports MixedProtocolLB.
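
To illustrate the shared-IP capability, a single Gateway can carry both TCP and UDP listeners on the same port, and therefore on the same VIP. The following is a hypothetical sketch that follows the same schema as the Gateway example above; the name dns-gateway-0 and its backing Services are illustrative, not part of the deployment described here:

kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: dns-gateway-0
  namespace: default
spec:
  gatewayClassName: avigatewayclass-tkg-wkld-oam
  listeners:
  - protocol: TCP
    port: 53
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: dns-gateway-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service
  - protocol: UDP
    port: 53
    routes:
      selector:
        matchLabels:
          ako.vmware.com/gateway-name: dns-gateway-0
          ako.vmware.com/gateway-namespace: default
      group: v1
      kind: Service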

To have a controller implement a given Ingress, in addition to creating the IngressClass object, specify an ingressClassName in the Ingress that matches the IngressClass name. The Ingress looks as shown below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: avi-ingress-class-oam
  rules:
    - host: my-website.my-domain.com
      http:
        paths:
        - path: /foo
          pathType: Prefix
          backend:
            service:
              name: web-service-1
              port:
                number: 443

Using Annotation with Services of type LoadBalancer

Services of Type LoadBalancer can specify the Avi Infra Setting using an annotation as shown below without using Gateway API objects:

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-sigtran"

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-trf"

annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-oam"
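
For example, a complete Service of type LoadBalancer carrying the annotation might look like the following sketch; the Service name, selector and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-statefulset-service-sigtran
  namespace: default
  annotations:
    aviinfrasetting.ako.vmware.com/name: "aviinfrasetting-tkg-wkld-sigtran"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 8443
    targetPort: 443
    protocol: TCP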

Installing Contour and Envoy with Container Service Extension and VMware Cloud Director

This post summarizes how to get Contour and Envoy up and running with Kubernetes clusters running in VMware Cloud Director.

Pre-requisites

  1. A Kubernetes cluster deployed by Container Service Extension in VCD
  2. NSX Advanced Load Balancer setup for the Kubernetes cluster
  3. A VMware Cloud Director deployment

Step 1. Upload an SSL certificate for Contour to VCD

Obtain the cluster ID from Kubernetes Container Clusters and copy the entire cluster ID.

Navigate to Certificate Management > Certificates Library and click Import.

I used a Let’s Encrypt signed certificate. I wrote about using Let’s Encrypt in a previous post.

For the friendly name, paste in the cluster ID and append “-cert” to the end.

Continue the wizard by uploading the certificate and the private key for the certificate.

Step 2. Download the Contour Helm chart

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm fetch bitnami/contour

tar xvf contour-<version>.tgz

Step 3. Running Envoy as a non-root user

Envoy is configured to run as a non-root user by default. This is much more secure, but it means we cannot use any ports lower than 1024. Therefore we must change the values.yaml file for Contour.

Edit the values.yaml file located in the directory that you untarred the tgz file into and search for

envoy.containerPorts.http

Change the http port to 8080 and the https port to 8443.

It should end up looking like this:

  containerPorts:
    http: 8080
    https: 8443

Step 4. Installing Contour (and Envoy)

Install Contour by running the following command

helm install ingress <path-to-contour-directory>

You should get a DaemonSet named ingress-contour-envoy and a Deployment named ingress-contour-contour. These spin up two pods.

You will also see two services starting: one called ingress-contour with a service type of ClusterIP, and another called ingress-contour-envoy with a service type of LoadBalancer. Wait for NSX ALB to assign an external IP for the Envoy service from your Organization network IP pool.
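
You can watch for the IP assignment with kubectl, for example:

kubectl get service ingress-contour-envoy --watch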

This IP is now your Kubernetes cluster IP for ingress services. Make a note of this IP address. My example uses 10.149.1.116 as the external IP.

Step 5. Setup DNS

The next step is to set up DNS. I’m using Windows DNS in my lab, so I set up a subdomain called apps.vmwire.com with a wildcard A record for *.apps.vmwire.com.

*.apps.vmwire.com 10.149.1.116

DNS is now set up to point *.apps.vmwire.com to the external IP assigned to Envoy. From this point forward, any DNS request for *.apps.vmwire.com will resolve to that IP and be handled by Contour.
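
You can verify the wildcard record with a quick lookup; any hostname under the subdomain should resolve to the Envoy external IP (the hostname below is just an example):

nslookup circles.apps.vmwire.com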

Testing ingress with some apps

Download the following files from my Github.

https://raw.githubusercontent.com/hugopow/cse/main/shapes.yaml

https://raw.githubusercontent.com/hugopow/cse/main/shapes-ingress.yaml

These two YAML files deploy a sample web application and expose it using Contour and Envoy.

You don’t have to edit the shapes.yaml file, but you will need to edit the shapes-ingress.yaml file and change lines 9 and 16 to your desired DNS settings.

In this example, Contour will use circles.apps.vmwire.com to expose the circles application and triangles.apps.vmwire.com to expose the triangles application. Note that we are not adding circles. or triangles. A records into the DNS server.

Let’s deploy the circles and triangles apps.

kubectl apply -f shapes.yaml

And then expose the applications with Contour

kubectl apply -f shapes-ingress.yaml

Now open up a web browser and navigate to http://circles.<your-domain> or http://triangles.<your-domain> to see the apps being exposed by Contour. If you don’t get a connection, it’s probably because you haven’t enabled port 80 through your Edge Gateway.
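
If the browser cannot reach the apps, you can also test from the command line with curl, using the example external IP and hostname from this post:

curl -H "Host: circles.apps.vmwire.com" http://10.149.1.116/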