Quick Guide For Azure Kubernetes

Azure Kubernetes Service (AKS) is the managed Kubernetes offering from Azure. Operating a Kubernetes cluster in a private data center involves many time-consuming tasks. With AKS, Azure takes care of setting up and managing the Kubernetes cluster so that you can focus more on your application.

In this article, we are going to introduce the concepts of AKS to help you get started. Then, we will create an AKS cluster using the Azure web portal.

AKS Architecture

AKS is a Cloud Native Computing Foundation (CNCF) certified Kubernetes offering. The CNCF certification ensures that AKS exposes the required APIs, similar to the open-source community versions of Kubernetes. Therefore, you do not have to worry about compatibility when migrating an application between open-source Kubernetes and AKS.

A Kubernetes cluster consists of a control plane and a set of worker nodes. The control plane is responsible for managing the nodes and scheduling the containers. The nodes run the containerized workloads.

Control Plane

The Kubernetes control plane consists of several components such as kube-apiserver, etcd, kube-scheduler, etc. In AKS, this control plane is fully managed by Azure, so you will not get direct access to it. You can create a new AKS cluster via Azure CLI or the Azure web portal. Then, you can use the `kubectl` CLI tool, Kubernetes dashboard, or Kubernetes API to interact with the AKS cluster.
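As a sketch of that workflow, the following Azure CLI commands create a small cluster and connect `kubectl` to it. The resource group name, cluster name, region, and node count are illustrative assumptions, not values from this article.

```shell
# Assumed names: resource group "myResourceGroup", cluster "myAKSCluster".
az group create --name myResourceGroup --location eastus

# Create a two-node AKS cluster; the control plane is provisioned
# and managed by Azure.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig,
# then verify access with kubectl.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```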

The AKS control plane is single-tenant: it is dedicated to your cluster and is not shared with any other user account in Azure.

An AKS cluster can run multiple containerized applications. You can use Kubernetes namespaces to isolate different applications within the same cluster. However, namespaces provide only logical isolation; Kubernetes still does not implement enough security measures to safely isolate non-secure applications within the same cluster.
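Namespace-based separation can be sketched with `kubectl` as follows; the namespace and deployment names are illustrative assumptions.

```shell
# Create one namespace per team or application.
kubectl create namespace team-a
kubectl create namespace team-b

# Deploy a workload into a specific namespace; resources in
# different namespaces are logically isolated, not security-isolated.
kubectl -n team-a create deployment web --image=nginx
kubectl -n team-a get pods
```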

You can overcome this problem by deploying multiple AKS clusters. Creating numerous Kubernetes clusters in an on-premises data center could sharply increase your administrative overhead. But since AKS is a managed service, you can efficiently operate multiple AKS clusters without that additional work. For example, if your organization has several departments, each department can have a dedicated AKS cluster for its applications. Azure caps a single account at 5,000 AKS clusters, which is a reasonably large limit for most organizations.

Nodes

The nodes run the containerized workload applications. In an on-premises Kubernetes cluster, the nodes can be either VMs or bare-metal servers, but AKS nodes are always Azure VMs. You cannot have bare-metal nodes in AKS.

The main component in the node is the container runtime, which is responsible for the actual execution of the containers. Since version 1.19, AKS uses containerd as the container runtime. Earlier AKS versions used Moby, the upstream open-source project on which Docker is built.

AKS logically groups the nodes into node pools. When you create an AKS cluster, a default node pool is also created. This node pool is called the `system node pool` because it contains critical system pods such as `CoreDNS` and `metrics-server`.

You can create additional node pools, called `user node pools`. While you could also deploy your applications into the `system node pool`, it is recommended to use separate `user node pools` for that purpose. This prevents non-secure user applications from disturbing the system pods in the `system node pool`.
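Adding a user node pool can be sketched with the Azure CLI as shown below; the pool name, node count, and cluster names are assumptions.

```shell
# Add a user node pool for application workloads.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 3 \
  --mode User

# List all node pools in the cluster.
az aks nodepool list \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --output table
```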

An AKS cluster can have a maximum of 100 node pools. The cluster can include a maximum of 1000 nodes across all the node pools.

Serverless Containers with AKS

Azure virtual nodes is an AKS feature for running containers in a serverless mode. Based on the open-source Virtual Kubelet project, virtual nodes let you run containers without provisioning VMs as worker nodes.
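A minimal sketch of enabling virtual nodes on an existing cluster is shown below. It assumes the cluster already uses Azure CNI networking with a dedicated subnet for Azure Container Instances; the subnet name is an illustrative assumption.

```shell
# Enable the virtual nodes add-on; serverless pods are scheduled onto
# Azure Container Instances via the assumed subnet "myVirtualNodeSubnet".
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```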

Upgrading Kubernetes Version in AKS

At the time of this writing, AKS supports Kubernetes versions 1.18, 1.19, and 1.20. As new Kubernetes versions are released by the open-source Kubernetes community, AKS adds support for them and deprecates the older versions in turn. You should expect to upgrade your AKS clusters at least once a year to stay on a version supported in Azure.

You can upgrade your AKS clusters manually using the Azure CLI or let Azure perform the upgrades automatically. If you select the auto-upgrade option, you have to choose one of three upgrade channels: patch, stable, and rapid. To understand their behavior, you need to know the Kubernetes version numbering scheme. Consider version 1.19.3 as an example: `1` is the major version, `19` is the minor version, and `3` is the patch version.

The `patch` channel automatically upgrades the cluster to the latest patch version but does not change the cluster's minor version. The `stable` channel automatically upgrades the cluster to the minor version one behind the latest supported minor version; this is the recommended option for most production workloads. The `rapid` channel automatically upgrades the cluster to the latest supported version.
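Both upgrade paths can be sketched with the Azure CLI; the cluster names are assumptions, and the target version shown is only an example from the 1.20 series.

```shell
# Check which Kubernetes versions the cluster can be upgraded to.
az aks get-upgrades \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --output table

# Manually upgrade to a specific version (example value).
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.20.2

# Or opt in to automatic upgrades via a channel: patch, stable, or rapid.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --auto-upgrade-channel stable
```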

High Availability in AKS

By default, AKS creates a cluster or a node pool in a single Azure availability zone. Such a cluster or node pool can become unavailable if that availability zone experiences a fault. To overcome this problem, AKS can provide high availability by distributing the control plane and nodes across multiple availability zones.

This high availability option has to be enabled at the creation time of the cluster or the node pool. You cannot change high availability settings in an existing cluster or a node pool.
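A zone-redundant cluster can therefore only be sketched at creation time, as below; the names, region, and zone numbers are illustrative, and the region must actually support availability zones.

```shell
# Create a cluster whose nodes are spread across three availability zones.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --zones 1 2 3 \
  --generate-ssh-keys
```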

Storage in AKS

A container is considered an ephemeral component that can be created and destroyed dynamically. However, containerized applications often also need to store some data permanently. AKS provides two types of storage for this purpose.

1. Volumes – A volume is defined as part of a pod. Its life cycle is bound to that of the pod, so the volume is deleted when the pod is deleted.

2. Persistent volumes – If you need a volume to exist beyond the life cycle of an individual pod, you must use `persistent volumes`. A persistent volume can exist throughout the entire life cycle of an application. The volumes can be backed by either Azure Disks or Azure Files. An Azure Disk can be attached to only a single pod at a time, but an Azure Files volume can be accessed by multiple pods simultaneously.
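Requesting disk-backed persistent storage can be sketched with a persistent volume claim; the claim name and size are assumptions, while `managed-premium` is one of the storage classes AKS provisions by default.

```shell
# Create a claim that AKS fulfills by dynamically provisioning
# an Azure Disk (ReadWriteOnce: attachable to one pod at a time).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
EOF
```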

Node Auto-repair

If you choose to deploy a Kubernetes cluster in a private data center, you are responsible for monitoring the health of your worker nodes and taking necessary action to restore the nodes that become faulty.

In AKS, the nodes are Azure VMs, and Azure implements a mechanism for monitoring their health. If a node reports a `NotReady` status or does not report any status to the monitoring system, Azure either reboots or recreates the node.

Scaling Applications on AKS

AKS supports two features for scaling containerized applications.

The `cluster autoscaler` scales the number of nodes in a node pool according to the CPU/memory requests of pods that cannot be scheduled on the existing nodes.

The `horizontal pod autoscaler` is another scaling feature on AKS. It adds or removes pod replicas for an application by monitoring metrics such as CPU utilization.
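Both features can be sketched as follows; the cluster names, node-count bounds, deployment name, and CPU target are all illustrative assumptions.

```shell
# Enable the cluster autoscaler on an existing node pool,
# letting AKS vary the node count between 1 and 5.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

# Create a horizontal pod autoscaler for an assumed deployment "web",
# targeting 70% average CPU utilization across 2 to 10 replicas.
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10
```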

Pricing and SLA

AKS itself is free to use: you are not charged for the hours an AKS cluster is active. You are only charged for the VMs that you provision as nodes.

The default SLA for an AKS cluster is 99.5%. For organizations that need stricter SLAs, AKS offers an Uptime SLA, which must be enabled separately for each cluster. When the Uptime SLA is enabled, the cluster is charged 0.10 USD per hour. AKS guarantees 99.95% availability for Uptime SLA-enabled clusters distributed across multiple availability zones, and 99.9% availability for clusters in a single availability zone.
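Enabling the Uptime SLA on an existing cluster can be sketched with the Azure CLI flag below (cluster names assumed; the flag reflects the CLI at the time of writing and may change in later releases).

```shell
# Opt the cluster into the paid Uptime SLA tier.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --uptime-sla
```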
