This article reviews the process of setting up a Kubernetes cluster using the Docker container runtime, with one master node and one worker node, on VMware-based RHEL 8 instances.
Unless noted otherwise, all of the commands listed should be run on both the master node and the worker node.
Let’s start by enabling the Red Hat repos.
#Setup RHEL subscription
subscription-manager register
subscription-manager refresh
#Install commonly used repos
subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
Update the system packages and install yum-utils.
yum update -y
yum install -y yum-utils
Since this is a lab environment, we will be disabling firewalls. If it is a production environment, you can open specific ports for communication of your applications, and for Kubernetes components instead of disabling the firewall completely. (For a list of the required ports see: https://kubernetes.io/docs/reference/networking/ports-and-protocols/)
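For reference, a production setup would typically open the required ports with firewalld rather than disabling it. The sketch below covers the default control-plane ports from the Kubernetes documentation; adjust it to the components you actually run.
#Example only (not used in this lab): open the default control-plane ports instead of disabling firewalld
firewall-cmd --permanent --add-port=6443/tcp        #Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   #etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       #Kubelet API
firewall-cmd --permanent --add-port=10259/tcp       #kube-scheduler
firewall-cmd --permanent --add-port=10257/tcp       #kube-controller-manager
firewall-cmd --reload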
#Disable firewall
systemctl disable firewalld
systemctl stop firewalld
Disable swap. You MUST disable swap in order for the kubelet to work properly.
swapoff -a
#Comment out the swap line in /etc/fstab so swap stays off after a reboot
vi /etc/fstab
#/dev/mapper/rhel-swap swap swap defaults 0 0
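As an optional sanity check (not part of the original steps), swap should report 0B after the change.
#Optional check: swap should show 0B
free -h | grep -i swap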
Install Docker and the cri-dockerd container runtime.
#Installing Docker
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf repolist -v
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
systemctl enable docker
systemctl start docker
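Optionally, confirm the Docker daemon is up before moving on.
#Optional check: Docker should report as active
systemctl is-active docker
docker version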
###Install cri-dockerd Container Runtime
git clone https://github.com/Mirantis/cri-dockerd.git
# Run these commands as root
###Install GO###
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile
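Optionally, confirm that Go is on the PATH before building cri-dockerd.
#Optional check: the go command should be available
go version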
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
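Optionally, verify the cri-dockerd socket is active; kubeadm will point at this socket later.
#Optional check: the CRI socket used later by kubeadm should exist
systemctl is-active cri-docker.socket
ls -l /run/cri-dockerd.sock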
Installing kubeadm, kubelet, and kubectl.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
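Optionally, confirm the installed versions match on both nodes. The kubelet will restart in a crash loop until the cluster is initialized, which is expected at this point.
#Optional check: versions should match on the master and worker nodes
kubeadm version
kubelet --version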
Forwarding IPv4 and letting iptables see bridged traffic.
#Load the required kernel modules on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
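Optionally, verify that the modules are loaded and the sysctl values took effect.
#Optional check: both modules should be listed and all three values should be 1
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward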
ON MASTER NODE ONLY
Deploy the cluster via Kubeadm then deploy the Flannel networking component.
#Deploy the Kubernetes cluster specifying the cluster network cidr and the container runtime
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/cri-dockerd.sock
#After deploying the cluster, you will receive a join command similar to the one below; save it to run on the worker node.
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
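If the join command is lost, it can be regenerated later on the master node.
#Optional: regenerate the join command with a new token if needed
kubeadm token create --print-join-command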
###On the master node only, as the non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=~/.kube/config
Deploy Flannel as the non-root user.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
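Optionally, watch the Flannel pods until they reach the Running state (the namespace may be kube-flannel or kube-system depending on the manifest version).
#Optional check: Flannel pods should reach Running
kubectl get pods -A | grep flannel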
ON WORKER NODE ONLY
Run the join command to add the node to the cluster.
#Join the node to the cluster
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
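Note: because both containerd and cri-dockerd are installed, kubeadm may report that it found multiple CRI endpoints when joining. If it does, append the same CRI socket that was used during kubeadm init, for example:
#Only needed if kubeadm reports multiple CRI endpoints
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
        --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34 \
        --cri-socket /run/cri-dockerd.sock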
After joining the worker node to the cluster run the following on the master node to confirm cluster status.
kubectl get nodes -o wide
#The result should look similar to the following
NAME      STATUS   ROLES           AGE    VERSION   INTERNAL-IP      OS-IMAGE                       CONTAINER-RUNTIME
master    Ready    control-plane   2d5h   v1.25.0   192.168.16.73    Red Hat Enterprise Linux 8.7   docker://23.0.4
worker1   Ready    <none>          2d1h   v1.25.0   192.168.16.153   Red Hat Enterprise Linux 8.7   docker://23.0.4
About the Author
Delroy Hall, DevOps Engineer. Delroy is an IT professional and tech enthusiast with a passion for providing IT solutions. He draws inspiration from challenges, believing there is always room for optimization. He is experienced in cloud computing and DevOps automation, and holds certifications for AWS and Terraform.