
DevOps Friday: Configure Helm for AKS

There are many ways to automate the same task in the world of DevOps, and Helm is one of the tools in the arsenal that can be used for the job. I’m sure you’ve heard of it already; my point here is not to teach you how to run Helm charts, but how to get started by configuring Helm for Azure Kubernetes Service (AKS).

Before we get started, let’s agree on terminology:

  • Helm chart (or chart) – a Helm package that contains enough information to install a set of Kubernetes resources into a Kubernetes cluster
  • Tiller – the in-cluster (server-side) component of Helm. It talks directly to your cluster’s Kubernetes APIs to install, upgrade, query, and remove Kubernetes resources
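
For orientation before we dive in, a chart is simply a directory with a conventional layout. A minimal sketch (‘mychart’ is a hypothetical name):

mychart/
  Chart.yaml      # chart name, version, and description
  values.yaml     # default configuration values for the templates
  templates/      # Kubernetes manifests rendered with those values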

When you start working with Helm in AKS, you may notice that Pods named ‘tiller*’ are already pre-installed in the cluster. Well, that won’t work: a simple command like ‘helm status’ returns an error message instead of any status output. To fix the situation, Tiller needs to be reinstalled following a procedure that works within AKS.
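
Before changing anything, it’s worth confirming what is already running. A quick check (the filter assumes the default ‘tiller’ naming; on Windows, use findstr instead of grep):

kubectl get pods --namespace kube-system | grep tiller

With that confirmed, here are the steps: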

  1. Get the helm executable (GitHub release repo or direct link). Make sure that you’ve added ‘helm.exe’ to the PATH after extracting the archive so you can use it from the command line
  2. Delete any Tiller deployments you may have by default:
kubectl delete deployment tiller-deploy --namespace kube-system
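
Depending on how Tiller was installed, a leftover Service with the same name may remain. An optional extra check (the service name is an assumption based on the default ‘tiller-deploy’ naming):

kubectl get services --namespace kube-system | grep tiller
# only if the previous command returned a result:
kubectl delete service tiller-deploy --namespace kube-system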
  3. Add a service account for Tiller (this is the best practice within AKS)
kubectl create -f Tiller_rbac.yaml

If the cluster is RBAC-enabled, this step is mandatory. The full listing of the manifest file can be found below.
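
Once the manifest is applied, both objects can be verified with standard kubectl queries:

kubectl get serviceaccount tiller --namespace kube-system
kubectl get clusterrolebinding tiller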

  4. Initialize Helm with the Tiller service account
helm init --service-account tiller
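
Under the hood, ‘helm init’ installs Tiller as a Deployment named ‘tiller-deploy’ (the same name we deleted in step 2), so you can watch it come back up before moving on:

kubectl rollout status deployment/tiller-deploy --namespace kube-system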
  5. Sync the Helm version with the server (this upgrades Tiller to match the client)
helm init --upgrade
  6. Check the version
helm version
  7. For testing purposes, deploy Traefik (or any other chart you have in mind) with its Helm chart
helm install stable/traefik --set dashboard.enabled=true,dashboard.domain=dashboard.localhost
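
Note that Helm 2 generates a random release name unless you pass --name. A predictable name makes the status commands below easier to use (‘traefik’ here is just an example name):

helm install stable/traefik --name traefik --set dashboard.enabled=true,dashboard.domain=dashboard.localhost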
  8. Other useful commands:

Get the status of a release: helm status <release-name>

List Helm releases: helm list
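
Once you’re done testing, the release can be removed; in Helm 2, adding --purge also frees the release name for reuse (‘traefik’ assumes the --name example above):

helm delete traefik --purge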

 Note: The cluster-admin ClusterRole is created by default in a Kubernetes cluster, so you don’t have to define it explicitly.
---

# Tiller_rbac.yaml file content

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

That’s all you need to do. Hopefully, this helped you understand the configuration steps so you can avoid surfing the internet trying to figure out why it didn’t work initially. Happy configuring! 😉
