Using Contour for KubeApps with TLS

Kubeapps provides a cloud-native solution to browse, deploy and manage the lifecycle of applications on a Kubernetes cluster. A basic installation of Kubeapps does not expose the application outside of the Kubernetes cluster, because the default service type is ClusterIP.

You can easily expose it using a LoadBalancer service, but the application will then be served with a self-signed certificate.

This post shows how to expose Kubeapps through Contour ingress instead and secure access with a TLS certificate.

Deploy KubeApps

You’ll need to have Contour installed before installing KubeApps.

Deploy KubeApps as normal using Helm.

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps
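
Before continuing, it's worth confirming that the chart deployed cleanly. A quick check (standard kubectl and helm commands, nothing Kubeapps-specific) could look like this:

# All Kubeapps pods should reach Running/Ready
kubectl get pods -n kubeapps

# Optionally check the Helm release status
helm status kubeapps -n kubeapps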

Create a demo credential to access KubeApps

kubectl create --namespace default serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kubeapps-operator-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: kubeapps-operator
type: kubernetes.io/service-account-token
EOF
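
As an optional sanity check, you can confirm that the token controller has populated the secret before moving on:

# The 'token' field should be populated once the secret is reconciled
kubectl describe secret kubeapps-operator-token -n default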

Create Contour HTTPProxy and TLS secret

Use the manifest below to create a TLS secret (type kubernetes.io/tls) and an HTTPProxy resource for Contour.

Paste in the tls.crt and tls.key values in base64 format and update the fqdn to match your environment.
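
If you have the certificate and key as PEM files, one way to produce the base64 values is shown below; the file names cert.pem and key.pem are placeholders for your own files.

# Base64-encode the PEM files on a single line (GNU coreutils; -w0 disables line wrapping)
base64 -w0 cert.pem
base64 -w0 key.pem

# Alternatively, let kubectl generate the whole secret manifest for you
kubectl create secret tls kubeapps-host-tls -n kubeapps \
  --cert=cert.pem --key=key.pem --dry-run=client -o yaml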

kubeapps-contour.yaml

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: kubeapps-grpc
  namespace: kubeapps
spec:
  virtualhost:
    fqdn: kubeapps.tenant1.vmwire.com
    tls:
      secretName: kubeapps-host-tls
  routes:
    - conditions:
      - prefix: /apis/
      pathRewritePolicy:
        replacePrefix:
        - replacement: /
      services:
        - name: kubeapps-internal-kubeappsapis
          port: 8080
          protocol: h2c
    - services:
      - name: kubeapps
        port: 80
---
apiVersion: v1
data:
  tls.crt: |
    ---snipped---
  tls.key: |
    ---snipped---
kind: Secret
metadata:
  name: kubeapps-host-tls
  namespace: kubeapps
type: kubernetes.io/tls

Then apply the manifest to create the secret and HTTPProxy with kubectl apply -f kubeapps-contour.yaml.
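
A quick way to confirm Contour accepted the configuration (assuming the resources were created in the kubeapps namespace as above):

# The HTTPProxy should report a STATUS of 'valid' once Contour has processed it
kubectl get httpproxy kubeapps-grpc -n kubeapps

# Confirm the TLS secret exists and has the kubernetes.io/tls type
kubectl get secret kubeapps-host-tls -n kubeapps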

Update DNS

Update DNS records for the FQDN to point to the IP address of the Envoy service. You can find the external IP address used by Envoy with kubectl get svc -n tanzu-system-ingress envoy.
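
Once the DNS record has propagated, a simple check from your workstation (using the FQDN from the manifest above) can confirm that Envoy is serving the expected certificate:

# -v prints the TLS handshake details, including the certificate subject
curl -v https://kubeapps.tenant1.vmwire.com/ -o /dev/null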

Obtain token and log in

Obtain the token to log in:

kubectl get --namespace default secret kubeapps-operator-token -o go-template='{{.data.token | base64decode}}'

Open a browser and go to the FQDN of the virtual host. You should now be able to log into KubeApps over a secure TLS connection.

Deploying Kubeapps on TKG in vCloud Director Clouds

Kubeapps is a web-based UI for deploying and managing applications in Kubernetes clusters. This guide shows how you can deploy Kubeapps into your TKG clusters deployed in VMware Cloud Director.

With Kubeapps you can:

  • browse and deploy Helm charts from public or private chart repositories
  • upgrade, manage and delete the applications deployed in your cluster
  • customise deployments through an intuitive web interface

Pre-requisites:

  • A Kubernetes cluster deployed in VCD
  • Avi set up for VCD to provide L4 load balancing for Kubernetes services
  • NSX-T set up for VCD
  • A default StorageClass defined for your Kubernetes cluster (a quick check is shown after this list)
  • Helm installed on your workstation; if you're using Photon OS, it's already installed
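
A couple of quick checks for the last two prerequisites (plain kubectl and helm commands):

# One StorageClass should be annotated as the default
kubectl get storageclass

# Confirm Helm is available on your workstation
helm version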

Step 1: Install KubeApps

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps
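
It can take a minute or two for all of the Kubeapps pods to become ready. One way to wait for them (standard kubectl; the 5-minute timeout is an arbitrary choice):

# Wait until every pod in the kubeapps namespace reports Ready
kubectl wait --for=condition=Ready pods --all -n kubeapps --timeout=300s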

Step 2: Create demo credentials

kubectl create --namespace default serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

Step 3: Obtain a token to log into KubeApps

kubectl get --namespace default secret $(kubectl get --namespace default serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o go-template='{{.data.token | base64decode}}' && echo
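
Note that on Kubernetes 1.24 and later, token secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case you can either create the secret explicitly (as in the Contour section earlier) or request a short-lived token directly, for example:

# Requires kubectl 1.24+; issues a time-limited token for the service account
kubectl create token kubeapps-operator -n default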

Step 4: Expose KubeApps using Avi load balancer

kubectl edit svc kubeapps -n kubeapps

Change the line

type: ClusterIP

to

type: LoadBalancer

Alternatively, you can expose Kubeapps using the Gateway API by adding ako.vmware.com labels to the kubeapps service, as shown below (note: this is not supported in VCD clouds):

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: kubeapps
    meta.helm.sh/release-namespace: kubeapps
  creationTimestamp: "2022-03-26T13:47:45Z"
  labels:
    ako.vmware.com/gateway-name: gateway-tkg-workload-vip
    ako.vmware.com/gateway-namespace: default
    app.kubernetes.io/component: frontend
    app.kubernetes.io/instance: kubeapps
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubeapps
    helm.sh/chart: kubeapps-7.8.13
  name: kubeapps
  namespace: kubeapps
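
Rather than editing the whole manifest, the two labels can also be added in place with kubectl label. The gateway name below is taken from the example above and will differ in your environment:

# Add the AKO Gateway API labels to the existing service
kubectl label svc kubeapps -n kubeapps \
  ako.vmware.com/gateway-name=gateway-tkg-workload-vip \
  ako.vmware.com/gateway-namespace=default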

Step 5: Log into KubeApps with the token
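
Once the external IP has been assigned, browse to it and paste in the token from Step 3. One way to retrieve the IP is shown below (this assumes the load balancer reports an IP rather than a hostname):

# Print the external IP assigned by the Avi load balancer
kubectl get svc kubeapps -n kubeapps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'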