Let us look at setting up a Kubernetes cluster with 1 master node (kubemaster.bpic.local) and 2 worker nodes (kubenode1.bpic.local, kubenode2.bpic.local) on VMware-based RHEL 7 instances.
We have set up an additional user (other than root) on these machines, as we will be running kubectl (client) commands as the non-root user.
First, we will need to prepare all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all three machines: kubemaster, kubenode1, and kubenode2.
subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
The rhel-7-server-extras-rpms repo contains Docker and other utilities.
Since this is our lab environment, we will be disabling the firewall. In a production environment, open only the specific ports required by your applications and by the Kubernetes components, instead of disabling the firewall completely.
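As a sketch of that production alternative, the loop below opens the ports commonly required for kubeadm-based clusters of this era (API server, etcd, kubelet, scheduler, controller-manager, and the NodePort range). `open_k8s_ports` is a hypothetical helper name, and you should adjust the port list for your own applications:

```shell
# Sketch: open specific ports with firewalld instead of disabling it.
# Port list per the standard kubeadm requirements; adjust as needed.
open_k8s_ports() {
  # 6443 API server, 2379-2380 etcd, 10250 kubelet,
  # 10251 scheduler, 10252 controller-manager, 30000-32767 NodePort range
  for port in 6443 2379-2380 10250 10251 10252 30000-32767; do
    if command -v firewall-cmd >/dev/null 2>&1; then
      firewall-cmd --permanent --add-port="${port}/tcp"
    else
      echo "would open ${port}/tcp"
    fi
  done
}

open_k8s_ports
```

Run `firewall-cmd --reload` afterwards for the permanent rules to take effect.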
systemctl disable firewalld
systemctl stop firewalld
Since we are using VMware VMs, it is recommended to set up VMware Tools:
yum install perl
mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cp /mnt/cdrom/VMwareTools-version.tar.gz /tmp/
cd /tmp
tar -zxvf VMwareTools-version.tar.gz
/tmp/vmware-tools-distrib/vmware-install.pl
umount /mnt/cdrom
Update the packages via yum and install yum-utils
yum -y update
yum install yum-utils
Configure additional settings
swapoff -a
Also, comment out the swap line in /etc/fstab:

#/dev/mapper/rhel-swap swap swap defaults 0 0
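A quick sanity check that swap is fully off (by default, kubelet will refuse to start on a machine with swap enabled) is to look at the swap total reported by free; this one-liner is just an illustrative sketch:

```shell
# Prints "swap disabled" when the swap total reported by free is zero.
free -m | awk '/^Swap:/ { print ($2 == 0) ? "swap disabled" : "swap still enabled" }'
```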
Install and enable docker
yum -y install docker
systemctl enable docker
systemctl start docker
Set up repo for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Adjust the SELinux enforcement settings
setenforce 0
Update the config file to make the SELinux change persistent across reboots
vi /etc/selinux/config
Change the settings from
SELINUX=enforcing to SELINUX=permissive
Install and enable kubelet service
yum -y install kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet
Enable sysctl settings
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Alternatively, you can update the /etc/sysctl.conf file
vi /etc/sysctl.conf
Add/update the following lines
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
On the Kubernetes master node only, we will initialize the cluster and then set up the flannel networking component from its manifest. Note that kubeadm init must complete, and kubectl must be configured for your user (as shown below), before the flannel manifest can be applied:

kubeadm init --pod-network-cidr=10.244.0.0/16
kubeadm token create --print-join-command
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Capture the results of the above command, specifically the kubeadm join command that describes how to add nodes to this cluster.
You can now join any number of machines by running the following on each node as root:
kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
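If this output scrolls away, kubeadm token create --print-join-command will regenerate it. The discovery hash can also be derived by hand: it is the sha256 digest of the cluster CA's DER-encoded public key. The sketch below assumes the default RSA-keyed CA that kubeadm generates, and ca_hash is a hypothetical helper name:

```shell
# Sketch: compute the --discovery-token-ca-cert-hash value from the
# cluster CA certificate (on the master: /etc/kubernetes/pki/ca.crt).
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{ print $NF }'
}

# On the master, this prints the sha256:<hash> used in the join command:
if [ -f /etc/kubernetes/pki/ca.crt ]; then
  echo "sha256:$(ca_hash /etc/kubernetes/pki/ca.crt)"
fi
```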
And then, after changing to a non-root user, run the following commands:
su - nonrootuser
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On the Kubernetes nodes (kubenode1 and kubenode2), we will run the join command, to add those nodes to the cluster:
kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34

You can now test the cluster by running the below command on either of the nodes, or the master, as the non-root user:
kubectl get nodes
You should see results like this (with your own system names) showing the cluster configuration:
NAME                    STATUS   ROLES    AGE   VERSION
kubemaster.bpic.local   Ready    master   15h   v1.17.3
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3
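If you want to turn that check into something scriptable, the STATUS column can be filtered. The check_ready helper below is a hypothetical name for this small sketch; it prints any node that does not report Ready:

```shell
# Sketch: list any node whose STATUS column is not "Ready".
check_ready() {
  awk '$2 != "Ready" { print $1 }'
}

# On the cluster (guarded so the sketch is harmless elsewhere):
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes --no-headers | check_ready
fi
```

An empty result means every node in the cluster is healthy.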
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to info@keyvatech.com.
Anuj Tuli is the chief technology officer at Keyva. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli helps customers chart a prescriptive strategy for Application Containerization, CI/CD Pipeline Implementations, API abstraction, Application Modernization, and Cloud Automation integrations. He leads the development and management of Cloud Automation IP and related professional services. With an application developer background, he provides a hands-on perspective towards various technologies.