Avi Infra Setting provides a way to segregate Layer-4/Layer-7 virtual services so that they take on properties from different underlying infrastructure components, such as the Service Engine Group, the intended VIP network and so on.
Here I have a different network that I want a new Ingress to use, in this case the tkg-wkld-trf-vip network (172.16.4.97/27). Let's assume it is used for 5G traffic connectivity and that its NSX-T T1 is connected to a different T0 VRF. This isolates the traffic between VRFs, so that we can expose certain applications on different VRFs.
In this example, I’ll change Grafana from using the default VIP network to the tkg-wkld-trf-vip network instead. You can read up on how this was originally done using the default VIP network in the previous post.
Avi Infra Settings can be applied to Ingress resources using the IngressClass construct. An IngressClass provides a way to configure controller-specific load balancing parameters and applies these configurations to a set of Ingress objects. AKO supports listening to IngressClass resources in Kubernetes 1.19+. The Avi Infra Setting reference can be provided in the IngressClass as shown below:
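A minimal sketch of what that reference looks like for this scenario follows. The AviInfraSetting name (tkg-wkld-trf-vip) and the IngressClass name (avi-ingress-class-trf) are assumptions for illustration; substitute whatever names you create in your environment.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class-trf        # hypothetical IngressClass name
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: tkg-wkld-trf-vip           # hypothetical AviInfraSetting that selects the trf VIP network

Any Ingress that sets ingressClassName to this class, such as the Grafana Ingress later in this post, will then have its virtual service placed on the tkg-wkld-trf-vip network.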
Avi Ingress is an alternative to Contour and NGINX ingress controllers.
Tanzu Kubernetes Grid ships with Contour as the default Ingress controller, which Tanzu Packages uses to expose Prometheus and Grafana. Prometheus and Grafana are configured to use Contour if you set ingress: true in their values.yaml files.
This post details how to set Avi Ingress up and use it to expose these applications using signed TLS certificates.
Let’s start
Install AKO with Helm as normal, and set the service type to ClusterIP in the AKO values.yaml config file.
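For reference, a minimal sketch of the relevant part of the AKO values.yaml and the install command is shown below; the chart version, repository alias and namespace are assumptions, so adjust them to your environment.

L7Settings:
  defaultIngController: true
  serviceType: ClusterIP   # Ingress backends are reached via ClusterIP Services

helm install ako/ako --generate-name --version 1.5.1 -f values.yaml --namespace avi-system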
Here is the example online store website deployment that uses an Ingress with the certificate. Let's play with this before we get around to exposing Prometheus and Grafana.
Note that the Service uses ClusterIP and the Ingress is annotated with ako.vmware.com/enable-tls: "true" to use the default TLS certificate specified in router-certs-default.yaml. Also add the ingressClassName to the Ingress manifest.
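A minimal sketch of what avisvcingress.yaml could look like is shown below; the backend Service name (hackazon) and port are assumptions based on the sample application, so match them to your deployment.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: avisvcingress
  annotations:
    ako.vmware.com/enable-tls: "true"   # use the default TLS certificate from router-certs-default.yaml
spec:
  ingressClassName: avi-lb
  rules:
    - host: hackazon.tkg-workload1.vmwire.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hackazon           # assumed ClusterIP Service name
                port:
                  number: 80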
k apply -f sample-ingress.yaml
k apply -f avisvcingress.yaml
k get ingress avisvcingress
NAME CLASS HOSTS ADDRESS PORTS AGE
avisvcingress avi-lb hackazon.tkg-workload1.vmwire.com 172.16.4.69 80 13m
Let’s add another host
Append another host to the avisvcingress.yaml file.
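For example, to add the nginx host that appears in the summary output later, another rule could be appended under spec.rules as sketched below (the backend Service name nginx is an assumption):

    - host: nginx.tkg-workload1.vmwire.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx              # assumed ClusterIP Service name
                port:
                  number: 80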
Note that since Prometheus, when deployed by Tanzu Packages, is installed into the tanzu-system-monitoring namespace, we also need to create the new Ingress in the same namespace.
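A minimal sketch of monitoring-ingress.yaml is shown below, assuming the Prometheus Service created by the Tanzu package is named prometheus-server and listens on port 80; verify with kubectl get svc -n tanzu-system-monitoring.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitoring-ingress
  namespace: tanzu-system-monitoring
  annotations:
    ako.vmware.com/enable-tls: "true"
spec:
  ingressClassName: avi-lb
  rules:
    - host: prometheus.tkg-workload1.vmwire.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-server   # assumed Service name; check your deployment
                port:
                  number: 80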
Deploy Prometheus following the documentation here, but do not enable ingress in the prometheus-data-values.yaml file, as that setting uses Contour. We don't want that because we are using Avi Ingress instead.
Add Grafana too!
Create a new manifest for Grafana to use called dashboard-ingress.yaml.
Note that since Grafana, when deployed by Tanzu Packages, is installed into the tanzu-system-dashboards namespace, we also need to create the new Ingress in the same namespace.
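A minimal sketch of dashboard-ingress.yaml is shown below, assuming the Grafana Service created by the Tanzu package is named grafana, is of type ClusterIP and listens on port 80; verify with kubectl get svc -n tanzu-system-dashboards.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: tanzu-system-dashboards
  annotations:
    ako.vmware.com/enable-tls: "true"
spec:
  ingressClassName: avi-lb
  rules:
    - host: grafana.tkg-workload1.vmwire.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana             # assumed Service name; check your deployment
                port:
                  number: 80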
Deploy Grafana following the documentation here, but do not enable ingress in the grafana-data-values.yaml file, as that setting uses Contour. We don't want that because we are using Avi Ingress instead.
Summary
Ingress with Avi is really nice, I like it! A single secret stores the TLS certificates, and all hosts are automatically configured to use TLS. You also just need to expose TCP 80 as ClusterIP Services and Avi will do the rest for you, exposing the application over TCP 443 using the TLS certificate.
Here you can see that all four of our applications (hackazon, nginx running across three AZs, Grafana and Prometheus) are using Ingress and sharing a single IP address.
Very cool indeed!
k get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default avisvcingress avi-lb hackazon.tkg-workload1.vmwire.com,nginx.tkg-workload1.vmwire.com 172.16.4.69 80 58m
tanzu-system-dashboards dashboard-ingress avi-lb grafana.tkg-workload1.vmwire.com 172.16.4.69 80 3m47s
tanzu-system-monitoring monitoring-ingress avi-lb prometheus.tkg-workload1.vmwire.com 172.16.4.69 80 14m
Using LoadBalancers, Gateways, GatewayClasses, AviInfraSettings, IngressClasses and Ingresses
The Gateway API replaces Services of type LoadBalancer in applications that require a shared IP for multiple services and network segmentation. The Gateway API can be used to meet the following requirements:
Shared IP – supporting multiple services, protocols and ports on the same load balancer external IP address
Network segmentation – supporting multiple networks, e.g., oam, signaling and traffic on the same load balancer
NSX Advanced Load Balancer (Avi) supports both of these requirements through the use of the Gateway API. The following section describes how this is implemented.
The Gateway API introduces a few new resource types:
GatewayClasses are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
Gateways are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.
AviInfraSetting
Avi Infra Setting provides a way to segregate Layer-4/Layer-7 virtual services so that they take on properties from different underlying infrastructure components, such as the Service Engine Group, the intended VIP network and so on.
Avi Infra Setting is a cluster-scoped CRD and can be attached to the intended Services. Avi Infra Setting resources can be attached to Services using the Gateway API.
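A minimal sketch of an AviInfraSetting for the tkg-wkld-trf-vip network described earlier is shown below. The Service Engine Group name is an assumption; use the names defined on your Avi Controller.

apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: tkg-wkld-trf-vip
spec:
  seGroup:
    name: tkg-workload1-se-group        # assumed Service Engine Group name
  network:
    vipNetworks:
      - networkName: tkg-wkld-trf-vip   # VIP network as defined on the Avi Controller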
GatewayClass
Gateway APIs provide interfaces to structure Kubernetes service networking.
AKO supports Gateway APIs via the servicesAPI flag in the values.yaml.
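In recent AKO releases this flag sits under AKOSettings in the Helm values.yaml; the exact location can vary by version, so treat this as a sketch:

AKOSettings:
  servicesAPI: true   # enables AKO to watch Gateway API (services API) resources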
The Avi Infra Setting resource can be attached to a GatewayClass object via .spec.parametersRef, as shown below:
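A sketch is shown below, assuming the AviInfraSetting from the previous section and the Gateway API version supported by AKO 1.5 (networking.x-k8s.io/v1alpha1):

apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: avi-gateway-class
spec:
  controller: ako.vmware.com/avi-lb
  parametersRef:
    group: ako.vmware.com
    kind: AviInfraSetting
    name: tkg-wkld-trf-vip   # the AviInfraSetting from the previous section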
The Gateway object provides a way to configure multiple Services as backends to the Gateway using label matching. The labels are specified as constant key-value pairs, the keys being ako.vmware.com/gateway-namespace and ako.vmware.com/gateway-name. The values corresponding to these keys must match the Gateway namespace and name respectively for AKO to consider the Gateway valid. If either of the label keys is not provided as part of matchLabels, or the namespace/name provided in the label values does not match the actual Gateway namespace/name, AKO will consider the Gateway invalid. Please see https://avinetworks.com/docs/ako/1.5/gateway/.
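A sketch of a Gateway that follows this pattern is shown below, assuming the GatewayClass above; the listener port and protocol are illustrative.

apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: trf-gateway
  namespace: default
spec:
  gatewayClassName: avi-gateway-class
  listeners:
    - protocol: TCP
      port: 80
      routes:
        selector:
          matchLabels:
            ako.vmware.com/gateway-namespace: default   # must match this Gateway's namespace
            ako.vmware.com/gateway-name: trf-gateway    # must match this Gateway's name
        group: v1
        kind: Service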
In your Helm charts, for any Service that would previously have needed to be of type LoadBalancer, you would now use ClusterIP instead, together with labels such as the following:
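A sketch of such a Service is shown below, assuming the trf-gateway Gateway above; the application name and ports are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    ako.vmware.com/gateway-namespace: default   # must match the Gateway's namespace
    ako.vmware.com/gateway-name: trf-gateway    # must match the Gateway's name
spec:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080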
The labels tie the Service to the Gateway, and the ClusterIP type tells the Avi Kubernetes Operator (AKO) to use the Gateways; each Gateway is on a separate network segment for traffic separation.
The Gateways also have the relevant ports that the application uses. Configure your Gateway, then change your Helm chart to use the Gateway objects.
IngressClass
Avi Infra Settings can be applied to Ingress resources using the IngressClass construct. An IngressClass provides a way to configure controller-specific load balancing parameters and applies these configurations to a set of Ingress objects. AKO supports listening to IngressClass resources in Kubernetes 1.19+. The Avi Infra Setting reference can be provided in the IngressClass as shown below:
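This is the same pattern shown earlier in the post; a generic sketch with illustrative names follows.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-ingress-class
spec:
  controller: ako.vmware.com/avi-lb
  parameters:
    apiGroup: ako.vmware.com
    kind: AviInfraSetting
    name: my-infrasetting   # illustrative AviInfraSetting name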
The Avi Infra Setting resource can be attached to a GatewayClass object via .spec.parametersRef and to an IngressClass object via .spec.parameters. However, if you use annotations on a LoadBalancer Service instead of labels with a Gateway API object, you will not be able to share protocols and ports on the same IP address, for example TCP and UDP 53 on the same LoadBalancer IP address. This is not supported until MixedProtocolLB is supported by Kubernetes.
To have a controller implement a given Ingress, in addition to creating the IngressClass object, the ingressClassName that matches the IngressClass name should be specified in the Ingress. The Ingress looks as shown below:
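A sketch, reusing the illustrative IngressClass from above (hostname and backend Service are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ako.vmware.com/enable-tls: "true"
spec:
  ingressClassName: avi-ingress-class   # must match the IngressClass name
  rules:
    - host: my-app.tkg-workload1.vmwire.com   # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80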
Envoy is configured to run as a non-root user by default. This is much more secure, but it means we won't be able to use any ports lower than 1024. Therefore we must change the values.yaml file for Contour.
Edit the values.yaml file located in the directory that you untarred the bundle into and search for
envoy.containerPorts.http
Change the http port to 8080 and the https port to 8443.
It should end up looking like this:
containerPorts:
  http: 8080
  https: 8443
Step 4. Installing Contour (and Envoy)
Install Contour by running the following command:
helm install ingress <path-to-contour-directory>
You should get one DaemonSet named ingress-contour-envoy and one Deployment named ingress-contour-contour. These spin up two pods.
You will also see two Services starting: one called ingress-contour with a service type of ClusterIP, and another called ingress-contour-envoy with a service type of LoadBalancer. Wait for NSX ALB to assign an external IP for the envoy Service from your Organization network IP pool.
This IP is now your Kubernetes cluster IP for ingress services. Make a note of this IP address. My example uses 10.149.1.116 as the external IP.
Step 5. Set up DNS
The next step is to set up DNS. I'm using Windows DNS in my lab, so what I've done is set up a sub-domain called apps.vmwire.com and also created a wildcard A record for *.apps.vmwire.com pointing to the external IP.
*.apps.vmwire.com 10.149.1.116
DNS is now set up to point *.apps.vmwire.com to the external IP assigned to Envoy. From this point forward, any DNS request that hits *.apps.vmwire.com will be directed to Contour.
There are two YAML files that deploy a sample web application and then expose the applications using Contour and Envoy.
You don’t have to edit the shapes.yaml file, but you will need to edit the shapes-ingress.yaml file and change lines 9 and 16 to your desired DNS settings.
In this example, Contour will use circles.apps.vmwire.com to expose the circles application and triangles.apps.vmwire.com to expose the triangles application. Note that we are not adding circles. or triangles. A records into the DNS server; the wildcard record covers them.
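As a rough sketch of where those hostnames sit in shapes-ingress.yaml (the actual file, backend Service names and line layout may differ, so treat this purely as an illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shapes-ingress
spec:
  rules:
    - host: circles.apps.vmwire.com       # change to your desired hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: circles             # assumed Service name from shapes.yaml
                port:
                  number: 80
    - host: triangles.apps.vmwire.com     # change to your desired hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: triangles           # assumed Service name from shapes.yaml
                port:
                  number: 80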
Let's deploy the circles and triangles apps.
kubectl apply -f shapes.yaml
And then expose the applications with Contour
kubectl apply -f shapes-ingress.yaml
Now open up a web browser and navigate to http://circles.<your-domain> or http://triangles.<your-domain> and see the apps being exposed by Contour. If you don't get a connection, it's probably because you haven't enabled port 80 through your Edge Gateway.