By: Saikrishna Madupu – Sr. DevOps Engineer
Deploying Kubernetes with KinD (Kubernetes in Docker) is an easy way to set up a test environment where you can build multi-node or even multiple clusters.
If you want to create clusters on virtual machines, you need the resources to run those virtual machines: each machine needs adequate disk space, memory, and CPU. An alternative that avoids this resource overhead is to use containers in place of virtual machines. Containers let you add nodes as needed, create and delete them in minutes, and run multiple clusters on a single host. To show how to run a cluster locally using only containers, we will use Kubernetes in Docker (KinD) to create a Kubernetes cluster on a Docker host.
Why pick KinD for test environments?
- KinD can create a new multi-node cluster in minutes
- It separates the control plane and worker nodes, providing a more “realistic” cluster
- To limit the hardware requirements and to keep Ingress easy to configure, a two-node cluster is enough for most testing
- A multi-node cluster can be created in a few minutes and, once testing is complete, torn down in a few seconds, as shown in the short example after this list
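A minimal illustration of that create/tear-down cycle (assuming Docker and KinD are already installed; the cluster name scratch is just an example):
kind create cluster --name scratch      # a new cluster is typically up within a couple of minutes
kubectl get nodes --context kind-scratch
kind delete cluster --name scratch      # tears the cluster down in seconds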
Prerequisites:
- A running Docker daemon to create the cluster (see the quick check after this list)
- KinD supports most of the platforms below:
- Linux
- macOS running Docker Desktop
- Windows running Docker Desktop
- Windows running WSL2
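Before installing KinD, a quick way to confirm that the Docker daemon is reachable:
docker version
The Server section of the output only appears when the daemon is running and reachable.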
How KinD works:
At a high level, you can think of a default KinD cluster as a single Docker container that runs both the control plane and worker components; in a multi-node cluster, each node is its own container. To make deployment easy and reproducible, KinD bundles every required Kubernetes component into a single image, known as a node image, which is used to create single-node or multi-node clusters. Once a cluster is up and running, you can use Docker to exec into a control plane node container. The nodes run the standard Kubernetes components and ship with a default CNI, Kindnet. You can also disable the default CNI and install an alternative such as Calico, Flannel, or Cilium. Since KinD uses Docker as the container engine to run the cluster nodes, every cluster is subject to the same network constraints as any other Docker container. You can also run other containers on the cluster's network by passing an extra argument, --net=kind, to the docker run command.
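For example, assuming the default cluster name kind (so the node container is called kind-control-plane and the Docker network is named kind), you can inspect the node container directly:
docker ps --filter "name=kind"            # list the node container(s) backing the cluster
docker exec -it kind-control-plane bash   # open a shell inside the control plane node
docker run -it --rm --net=kind alpine sh  # run another container on the cluster's Docker network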
KinD Installation:
I’m using a Mac for this demonstration, but will also point out the steps to install the binary manually.
Option 1 (Homebrew):
brew install kind
Option 2 (download the binary for your platform; the example below is for Linux amd64):
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin
You can verify the installation of kind by simply running:
kind version
kind v0.11.1 go1.16.4 darwin/arm64
- Running kind create cluster creates a new Kubernetes cluster, with all components in a single Docker container, and names it kind by default, as shown below:
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day!
- As part of cluster creation, KinD also creates or updates the kubeconfig file (~/.kube/config) that is used to access the cluster (see the check after the node listing below)
- We can verify the newly built cluster by running kubectl get nodes:
NAME                 STATUS   ROLES                  AGE     VERSION
kind-control-plane   Ready    control-plane,master   5m54s   v1.21.1
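To see the entry that KinD added to the kubeconfig file, list the contexts; the context for the default cluster name is kind-kind:
kubectl config get-contexts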
KinD lets us create and delete clusters very quickly. To delete a cluster we use kind delete cluster, as in the example below; this also removes the entry that was appended to our ~/.kube/config file when the cluster was created.
kind delete cluster --name <cluster name>
Creating a multi-node cluster:
To create a multi-node cluster with custom options, we need to create a cluster config file. Setting values in this file lets you customize the KinD cluster, including the number of nodes, API options, and more. A sample config is shown below:
Config file:
cluster01-kind.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  disableDefaultCNI: true
  apiServerPort: 6443
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: "10.96.0.1/12"
    podSubnet: "10.240.0.0/16"
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 2379
    hostPort: 2379
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 2222
    hostPort: 2222
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
apiServerAddress:
The IP address the API server will listen on. By default it uses 127.0.0.1, but since we plan to use the cluster from other networked machines, we have chosen to listen on all IP addresses.
disableDefaultCNI:
Enables or disables the Kindnet installation. The default value is false; we set it to true so that we can install a different CNI.
kubeadmConfigPatches:
This section allows you to set values for other cluster options during the installation. For our configuration, we are setting the CIDR ranges for the serviceSubnet and the podSubnet.
nodes:
For our cluster, we will create a single control plane node and a single worker node.
role: control-plane:
The first role section is for the control plane. We have added options to mount the local host's /dev and /var/run/docker.sock, which will be used in the Falco chapter later in the book.
role: worker:
This is the second node section, which allows you to configure options that the worker nodes will use. For our cluster, we have added the same local mounts that will be used for Falco, and we have also exposed additional ports for our Ingress controller.
extraPortMappings:
To expose ports on your KinD nodes, you need to add them to the extraPortMappings section of the configuration. Each mapping has two values: the container port and the host port. The host port is the port you use to target the cluster from the host, while the container port is the port that the node container is listening on.
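Once a cluster built from this config is running, you can confirm the mappings on the worker node container. Assuming the cluster was created with --name cluster01, the worker container is named cluster01-worker:
docker port cluster01-worker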
extraMounts:
The extraMounts section allows you to add extra mount points to the containers. This comes in handy for exposing mounts such as /dev and /var/run/docker.sock, which we will need for the Falco chapter.
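With the config file saved, the two-node cluster can be created by pointing KinD at it (the name cluster01 is an example chosen to match the file name):
kind create cluster --name cluster01 --config cluster01-kind.yaml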
Multi-node cluster configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

To create this cluster, run:
kind create cluster --name multinode --config multinode.yaml
Set kubectl context to “kind-multinode”
You can now use your cluster with:
kubectl cluster-info --context kind-multinode
Note: The --name option sets the name of the cluster to multinode, and --config tells the installer to use the multinode.yaml config file.
Multiple control plane servers introduce additional complexity, since our configuration files can only target a single host or IP. To make this configuration usable, we need to deploy a load balancer in front of our cluster. If you deploy multiple control plane nodes, the installation creates an additional container running an HAProxy load balancer.
Creating cluster "multinode" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Configuring the external load balancer
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing StorageClass
 ✓ Joining more control-plane nodes
 ✓ Joining worker nodes
Since we are on a single host, each control plane node and the HAProxy container run on unique ports. Each container needs to be exposed to the host so that it can receive incoming requests. In this example, the important one to note is the port assigned to HAProxy, since that is the target port for the cluster. In the Kubernetes config file, we can see that it targets https://127.0.0.1:42673, which is the port that has been allocated to the HAProxy container.
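You can see these port assignments by listing the cluster's containers; for a cluster named multinode, the load balancer container is typically named multinode-external-load-balancer:
docker ps --filter "name=multinode" --format "table {{.Names}}\t{{.Ports}}"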
When a command is executed with kubectl, the request goes to the HAProxy server. Using a configuration file that KinD created during the cluster's creation, HAProxy routes the traffic between the three control plane nodes. Inside the HAProxy container, we can verify this by viewing the configuration file found at /usr/local/etc/haproxy/haproxy.cfg:
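For example, assuming the load balancer container name above:
docker exec multinode-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg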
# generated by kind
global
  log /dev/log local0
  log /dev/log local1 notice
  daemon

resolvers docker
  nameserver dns 127.0.0.11:53

defaults
  log global
  mode tcp
  option dontlognull
  # TODO: tune these
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  # allow to boot despite dns don't resolve backends
  default-server init-addr none

frontend control-plane
  bind *:6443
  default_backend kube-apiservers

backend kube-apiservers
  option httpchk GET /healthz
  # TODO: we should be verifying (!)
  server multinode-control-plane multinode-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane2 multinode-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane3 multinode-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
As shown in the preceding configuration file, there is a backend section called kube-apiservers that contains the three control plane containers. Each entry contains the Docker IP address of a control plane node with a port assignment of 6443, targeting the API server running in that container. When you request https://127.0.0.1:42673, the request hits the HAProxy container and, using the rules in the HAProxy configuration file, is routed to one of the three nodes in the list.
Since our cluster is now fronted by a load balancer, you have a highly available control plane for testing.
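A quick, optional way to see the high availability in action is to stop one of the control plane node containers and confirm that kubectl still responds through HAProxy (this assumes the default container names for a cluster named multinode):
docker stop multinode-control-plane2          # take one control plane node offline
kubectl get nodes --context kind-multinode    # requests are still served by the remaining nodes
docker start multinode-control-plane2         # bring the node back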