Sometimes it is convenient to expose the Kubernetes Dashboard and manage the cluster through it instead of using the console and kubectl.

In this post I am going to show you:

  • how to set up automatic certificate renewal with cert-manager;
  • how to expose the Kubernetes Dashboard to a public nginx Ingress over an HTTPS connection;
  • how to configure simple basic authentication as an additional security layer.

The only prerequisite is to have a running Kubernetes cluster. I have tested the configuration provided in this post on k3s (Rancher) and microk8s clusters.

A note to k3s users: k3s ships with the Traefik ingress controller, while in this post I am using the nginx ingress controller. To disable Traefik, you will need to add

disable:
  - traefik

to /etc/rancher/k3s/config.yaml and restart k3s (e.g., systemctl restart k3s).
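For reference, here is a minimal sketch of the whole procedure; it assumes root access on the k3s server node and that the config file does not exist yet (otherwise, edit it by hand):

# Append the Traefik exclusion to the k3s config and restart the service
sudo mkdir -p /etc/rancher/k3s
printf 'disable:\n  - traefik\n' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s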

Step 1. Install Kubernetes Dashboard

If the Kubernetes Dashboard has not been installed yet, the first thing you will need to do is install it.

For microk8s, this is as easy as running

microk8s enable rbac dashboard

For k3s, this will be a bit more difficult:

GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

Or you can even use this Helm Chart to install the Dashboard.

Now that the Dashboard has been installed, you will need to create a user:

  1. Create a service account, service-account.yaml:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    

    Note: microk8s puts the Dashboard into the kube-system namespace instead of kubernetes-dashboard. If you are using microk8s, please adjust metadata.namespace accordingly.

  2. Create a cluster role binding, cluster-role-binding.yaml:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kubernetes-dashboard
    

    microk8s users: please see the note above.

  3. Apply the configuration:
    kubectl apply -f service-account.yaml
    kubectl apply -f cluster-role-binding.yaml
    

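If you plan to sign in to the Dashboard with a bearer token, you can generate a short-lived one for this service account; on Kubernetes 1.24 and newer, kubectl create token does the job (adjust the namespace for microk8s as per the note above):

# Prints a token you can paste into the Dashboard login form
kubectl -n kubernetes-dashboard create token admin-user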
k3s (Rancher): Install nginx Ingress Controller

Assuming that the Traefik Ingress Controller is disabled, you can use the following commands to install the nginx Ingress Controller:

GITHUB_URL=https://github.com/kubernetes/ingress-nginx/releases
VERSION_INGRESS_NGINX=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/${VERSION_INGRESS_NGINX}/deploy/static/provider/cloud/deploy.yaml
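Before moving on, it does not hurt to check that the controller has started; the official manifest installs everything into the ingress-nginx namespace:

# The controller pod should be Running, and the LoadBalancer service
# should eventually get an external address
kubectl -n ingress-nginx get pods,svc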

Step 2. Install cert-manager

cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources.

To install cert-manager, just run:

GITHUB_URL=https://github.com/jetstack/cert-manager/releases
VERSION_CERT_MANAGER=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/${VERSION_CERT_MANAGER}/cert-manager.yaml
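A quick sanity check (the manifest installs everything into the cert-manager namespace):

# The cert-manager, cainjector and webhook pods should all be Running
kubectl -n cert-manager get pods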

Step 3. Set up Certificate Issuer

The next step is to set up an Issuer or a ClusterIssuer.

Issuers, and ClusterIssuers, are Kubernetes resources that represent certificate authorities (CAs) that can generate signed certificates by honoring certificate signing requests.

I use Cloudflare, and therefore it makes sense for me to use a DNS01 Challenge Provider instead of HTTP01; moreover, cert-manager supports Cloudflare out of the box.

I need to create a secret with my Cloudflare API Token, cf-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: A…M

Important: this secret needs to be in the same namespace as cert-manager; otherwise, cert-manager will be unable to read it.

Now let us create a ClusterIssuer, cluster-issuer.yaml. I put all resources related to the Dashboard into the namespace where the Dashboard lives: for k3s this is kubernetes-dashboard, for microk8s it will be kube-system. Check the metadata.namespace field.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: clusterissuer-le
  namespace: kubernetes-dashboard
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - selector:
          dnsZones:
            - wildwolf.name
            - "*.wildwolf.name"
        dns01:
          cloudflare:
            email: [email protected]
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token

apiTokenSecretRef.name references the name of the Cloudflare secret.
privateKeySecretRef.name is the name of the secret that cert-manager will create to store the account’s private key.

If you don’t use Cloudflare, a ClusterIssuer for an HTTP01 challenge will look like this:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: clusterissuer-le
  namespace: kubernetes-dashboard
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

Now let us apply this configuration:

# Only run this command if you are using Cloudflare
kubectl apply -f cf-secret.yaml

kubectl apply -f cluster-issuer.yaml
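You can then check that the ACME account has been registered and the issuer is ready:

# The READY column should show True; describe shows the details if it does not
kubectl get clusterissuer clusterissuer-le
kubectl describe clusterissuer clusterissuer-le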

Step 4. Create a Certificate

Now let us create a Certificate, certificate.yaml.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  secretName: certificate-prod-dashboard
  dnsNames:
    - test-cluster-dashboard.wildwolf.name
  issuerRef:
    name: clusterissuer-le
    kind: ClusterIssuer

spec.issuerRef.name references Issuer’s metadata.name, and spec.issuerRef.kind must match Issuer’s kind.
secretName is the name of the secret created by the cert-manager to store the certificate and its private key.

It is time to create the certificate:

kubectl apply -f certificate.yaml

You can monitor the creation process with

kubectl -n kubernetes-dashboard describe certificate kubernetes-dashboard

The namespace (the -n parameter) must obviously match the certificate’s metadata.namespace, and the name (the last parameter on the command line) must match metadata.name.
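A quicker way to check the result:

# READY should eventually become True, and the secret from spec.secretName should appear
kubectl -n kubernetes-dashboard get certificate kubernetes-dashboard
kubectl -n kubernetes-dashboard get secret certificate-prod-dashboard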

Step 5. Create Ingress

Now it is time to expose the Dashboard. Let us create ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: clusterissuer-le
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - test-cluster-dashboard.wildwolf.name
      secretName: certificate-prod-dashboard
  rules:
    - host: test-cluster-dashboard.wildwolf.name
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443

Some important things here:

  • The Dashboard is usually available only over HTTPS; in order for nginx to connect to it, we need to tell nginx that it needs to connect to an HTTPS-enabled backend. This is what the nginx.ingress.kubernetes.io/backend-protocol annotation does;
  • cert-manager.io/cluster-issuer must reference the name of the ClusterIssuer (if you use a namespaced Issuer instead, the annotation is cert-manager.io/issuer);
  • spec.tls[0].secretName must reference Certificate’s spec.secretName.

Apply the configuration:

kubectl apply -f ingress.yaml
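Optionally, you can verify from the command line that the Ingress has been assigned an address and that the Let’s Encrypt certificate is being served (using the example host name from above):

kubectl -n kubernetes-dashboard get ingress kubernetes-dashboard
# --verbose prints the TLS handshake details, including the certificate issuer
curl -I --verbose https://test-cluster-dashboard.wildwolf.name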

Now, if everything went well, we can navigate to our Dashboard and see that the connection is secure.

Step 6. Configure HTTP Basic Authentication

To protect the Dashboard, we can configure HTTP Basic Authentication. To do that, we will need the htpasswd utility (or we can use a Docker container: wildwildangel/alpine-htpasswd):

htpasswd -bnm username very-secure-password >> htpasswd-dashboard

Or, with Docker,

docker run --rm -it wildwildangel/alpine-htpasswd -bnm username very-secure-password >> htpasswd-dashboard

You can run this command multiple times to create several users, if necessary.

When done, we need to create a secret with the contents of the htpasswd-dashboard file:

kubectl -n kubernetes-dashboard create secret generic htpasswd-dashboard --from-file=auth=htpasswd-dashboard

After that, we will need to add three annotations to our ingress.yaml:

  annotations:
    # …
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: htpasswd-dashboard
    nginx.ingress.kubernetes.io/auth-realm: "Restricted Area"

nginx.ingress.kubernetes.io/auth-secret should be the name of the secret we have created.

Apply the changes:

kubectl apply -f ingress.yaml
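A quick test from the command line, using the example credentials from above: without credentials nginx should answer with 401, and with them the request should reach the Dashboard:

# Expect HTTP 401 without credentials
curl -I https://test-cluster-dashboard.wildwolf.name
# With credentials, the response should come from the Dashboard itself
curl -I -u username:very-secure-password https://test-cluster-dashboard.wildwolf.name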

After that, if you reload the Dashboard, you should see a prompt for your username and password.

We have set up cert-manager, configured automatic certificate renewal, exposed our Kubernetes Dashboard to a public Ingress over a secure connection, and finally, configured HTTP basic authentication to protect the Dashboard.
