This article reviews the process to set up a Kubernetes cluster using the Docker container runtime, with one master node and one worker node, on VMware-based RHEL 8 instances.
All the commands listed will be run against both the master and the worker node.
Let's start by enabling the Red Hat repos.
#Setup RHEL subscription
subscription-manager register
subscription-manager refresh
#Install commonly used repos
subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
Update the Yum repositories.
yum update -y
yum install -y yum-utils
Since this is a lab environment, we will be disabling firewalls. If it is a production environment, you can open specific ports for communication of your applications, and for Kubernetes components instead of disabling the firewall completely. (For a list of the required ports see: https://kubernetes.io/docs/reference/networking/ports-and-protocols/)
#Disable firewall
systemctl disable firewalld
systemctl stop firewalld
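If you would rather keep firewalld running, the control-plane ports from the linked reference can be opened instead. This is a sketch of the master-node rules (the worker mainly needs 10250/tcp and the NodePort range); it was not used in this lab:
firewall-cmd --permanent --add-port=6443/tcp        #Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   #etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       #kubelet API
firewall-cmd --permanent --add-port=10259/tcp       #kube-scheduler
firewall-cmd --permanent --add-port=10257/tcp       #kube-controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp #NodePort services (worker nodes)
firewall-cmd --reload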
Disable swap. You MUST disable swap in order for the kubelet to work properly.
swapoff -a
#Comment out the swap line in /etc/fstab
#/dev/mapper/rhel-swap swap swap defaults 0 0
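If you prefer not to edit the file by hand, a sed one-liner such as the following also comments out the swap entry (it assumes the entry contains the word swap surrounded by spaces, as in the line above):
sed -i '/ swap / s/^/#/' /etc/fstab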
Install Docker and the cri-dockerd container runtime interface.
#Installing Docker
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf repolist -v
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
systemctl enable docker
systemctl start docker
###Install the cri-dockerd container runtime interface
git clone https://github.com/Mirantis/cri-dockerd.git
# Run these commands as root
###Install GO###
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
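Before moving on, it is worth a quick check that both Docker and the cri-dockerd socket are active:
systemctl is-active docker
systemctl is-active cri-docker.socket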
Installing Kubeadm, Kubelet and Kubectl.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Forwarding IPv4 and letting iptables see bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
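You can confirm the modules are loaded and the sysctl values are applied with:
lsmod | grep -e br_netfilter -e overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward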
ON MASTER NODE ONLY
Deploy the cluster via Kubeadm then deploy the Flannel networking component.
#Deploy the Kubernetes cluster specifying the cluster network cidr and the container runtime
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/cri-dockerd.sock
#After deploying the cluster you will receive a join command which you will save to run on the worker node.
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
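The bootstrap token printed by kubeadm init expires after 24 hours by default. If you need the join command again later, it can be regenerated on the master node:
kubeadm token create --print-join-command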
###On the master node only, as a non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=~/.kube/config
Deploy Flannel as the non-root user
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
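You can confirm the Flannel pods come up (depending on the release of the manifest they land in the kube-flannel or kube-system namespace):
kubectl get pods --all-namespaces | grep -i flannel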
ON WORKER NODE ONLY
Run the join command to add the node to the cluster.
#Join the node to the cluster
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
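Note that because both containerd and cri-dockerd are present on these hosts, kubeadm may report multiple CRI endpoints on the worker as well. If it does, append the same socket flag used during init to the join command (token and hash are the values from your own init output):
kubeadm join masternode.bpic.local:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket /run/cri-dockerd.sock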
After joining the worker node to the cluster, run the following on the master node to confirm cluster status.
kubectl get nodes -o wide
#The result should look similar to the following
NAME STATUS ROLES AGE VERSION INTERNAL-IP OS-IMAGE CONTAINER-RUNTIME
master Ready control-plane 2d5h v1.25.0 192.168.16.73 Red Hat Enterprise Linux 8.7 docker://23.0.4
worker1 Ready <none> 2d1h v1.25.0 192.168.16.153 Red Hat Enterprise Linux 8.7 docker://23.0.4
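As a final smoke test, a throwaway deployment confirms that pods schedule onto the worker node:
#Create a test deployment, check where the pod lands, then clean up
kubectl create deployment hello --image=nginx
kubectl get pods -o wide
kubectl delete deployment hello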
TerraCognita Infrastructure as Code (IaC) Automation
By: Saikrishna Madupu – Sr Devops Engineer
This blog describes how to bring a large amount of existing AWS infrastructure under Terraform. This process is relevant to an organization that built its AWS infrastructure manually and wants to capture it as Terraform code for improved automation and cost savings. It is a substantial undertaking with a number of benefits.
Through research I found that Terraform's native import only brings in a single resource at a time. Because there is no way to import resources from several accounts at once, you will need to import resources one account at a time if you want to bring your whole cloud organization under Terraform.
TerraCognita fills this gap by importing many resources in one run and writing out the corresponding HCL and state files. Using include and exclude flags, you can limit the import to particular resource types, and the same approach restricts which variables are written to the generated variable.tf file.
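For context, this is what the native, one-resource-at-a-time workflow looks like; the resource block must already exist in your code before running it, and the bucket name here is just an example:
terraform import aws_s3_bucket.example my-existing-bucket-name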
Installation:
Go Libraries:
You can build and install from the latest sources. It uses Go Modules, so Go 1.17+ is required.
Linux:
curl -L https://github.com/cycloidio/terracognita/releases/latest/download/terracognita-linux-amd64.tar.gz -o terracognita-linux-amd64.tar.gz
tar -xf terracognita-linux-amd64.tar.gz
chmod u+x terracognita-linux-amd64
sudo mv terracognita-linux-amd64 /usr/local/bin/terracognita
MacOs:
brew install terracognita
Prerequisites:
Use Cases:
Sample CLI command to import all s3 buckets for AWS:
terracognita aws --hcl s3 --tfstate terraform.tfstate --aws-default-region us-east-1 -i aws_s3_bucket
Returns Output:
terracognita-s3 % terracognita aws --hcl s3 --tfstate terraform.tfstate --aws-default-region us-east-1 -i aws_s3_bucket
We are about to remove all content from "s3", are you sure? Yes/No (Y/N):
y
Starting Terracognita with version v0.8.1
Importing with filters:
Tags: [],
Include: [aws_s3_bucket],
Exclude: [],
Targets: [],
Importing aws_s3_bucket [6/6] Done!
Writing HCL Done!
Writing TFState Done!
saikrishnamadupu@Administrators-MacBook-Pro terracognita-s3 % ls -ltr
total 32
-rw-r--r-- 1 saikrishnamadupu staff 13164 Feb 12 00:07 terraform.tfstate
drwx------ 4 saikrishnamadupu staff 128 Feb 12 00:07 s3
This writes all of the bucket information into a folder named s3, which contains an s3_storage.tf file listing every bucket and its configuration.
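A typical next step is to initialize and plan against the generated code to confirm it matches what is actually deployed (a clean plan means no drift):
cd s3
terraform init
terraform plan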
Post-import manual work:
Supported providers:
Ref: Terracognita – https://github.com/cycloidio/terracognita
Tail Logs from Multiple K8 Pods
By: Saikrishna Madupu – Sr Devops Engineer
This article reviews how to tail logs from multiple Kubernetes pods with Stern.
Kubernetes (K8) is a scalable container orchestrator. It is lightweight enough to support IoT appliances, yet it can also handle huge business systems with hundreds of applications and hosts.
Stern is a tool for tailing multiple Kubernetes pods and the containers that make up each pod. To make debugging faster, each result is color-coded.
Because the query is a regular expression, the pod name can be filtered easily and the exact ID is not required, for instance by omitting the deployment hash. When a pod is deleted it is removed from the tail, and when a new pod is added it is automatically tailed.
Stern can tail all of the containers in a pod instead of having to do each one manually. You can specify the --container flag to limit which containers are displayed; by default, all containers are monitored.
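For example, limiting the tail to a single container within the matched pods might look like this (the container name is assumed from the deployment below):
stern -n keyva nginx --container nginx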
Deploying an nginx service:
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
kubectl apply --filename nginx-svc.yaml -n keyva
Output: service/nginx unchanged
deployment.apps/nginx created
We can validate and verify that the service and pods are up and running:
kubectl get all -n keyva
NAME READY STATUS RESTARTS AGE
pod/nginx-cd55c47f5-gwtkn 1/1 Running 0 12s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx ClusterIP 10.96.58.31 <none> 80/TCP 88d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 12s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-cd55c47f5 1 1 1 12s
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-cd55c47f5-gwtkn 1/1 Running 0 30s
kubectl has limits:
Using label selection, kubectl can read logs from numerous pods; however, this technique has a drawback.
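For example, tailing by label selector looks like this (namespace and label taken from the deployment above; kubectl caps the number of streamed pods via --max-log-requests):
kubectl logs -n keyva -l app=nginx --follow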
The reason is that --follow streams logs through the API server: you open a connection to the API server per pod, and it opens a connection to the associated kubelet to stream logs continually. This does not scale well and results in many incoming and outgoing connections to the API server, so the number of concurrent connections is restricted by design.
Using Stern:
The command is fairly straightforward: Stern retrieves the logs from the given namespace for the specified application. With Stern you can view not only logs from a single Kubernetes object, such as a deployment or service, but also logs from all related objects. Example:
stern -n keyva nginx
+ nginx-cd55c47f5-86ql5 › nginx
+ nginx-cd55c47f5-bm55t › nginx
+ nginx-cd55c47f5-gwtkn › nginx
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker processes
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 35
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 36
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 37
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker processes
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 36
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 37
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 39
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
If you want to use Stern inside Kubernetes pods, you need to create the following ClusterRole and bind it to a ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stern
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "watch", "list"]
Stern also supports custom output of log messages. Using the --output flag, you may utilize the following prepared templates:
output | description |
default | Displays the namespace, pod and container, and decorates it with color depending on --color |
raw | Only outputs the log message itself, useful when your logs are json and you want to pipe them to jq |
json | Marshals the log struct to json. Useful for programmatic purposes |
It takes a custom template through the --template flag, which is then compiled into a Go template and used for each log message. The following struct is passed to this Go template:
property | type | description |
Message | string | The log message itself |
NodeName | string | The node name where the pod is scheduled on |
Namespace | string | The namespace of the pod |
PodName | string | The name of the pod |
ContainerName | string | The name of the container |
In addition to the built-in functions, the template includes the following functions:
func | arguments | description |
json | object | Marshal the object and output it as a json text |
color | color.Color, string | Wrap the text in color (.ContainerColor and .PodColor provided) |
parseJSON | string | Parse string as JSON |
extjson | string | Parse the object as json and output colorized json |
ppextjson | string | Parse the object as json and output pretty-print colorized json |
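Putting the struct fields together, a custom template invocation might look like this (a sketch; adjust the fields to taste):
stern -n keyva nginx --template '{{.Message}} ({{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{"\n"}}'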
Kubernetes can add complexity, and software developers need logs quickly to fix problems. If you are using Kubernetes and have access to view logs on your cluster, set up your CLI with some aliases and start tailing logs from your applications in real time.
Reloader: Rapid K8 Rollouts
By: Saikrishna Madupu – Sr Devops Engineer
Reloader is a Kubernetes tool that automatically reloads configuration in running containers when a change is detected. This is useful for rolling out updated configuration without having to manually restart your application. This blog walks through the process of setting up Reloader in Kubernetes, using a MySQL deployment as an example: the deployment is annotated so that Reloader watches its Secret for changes.
Installation:
• kubectl:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
• Helm:
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install reloader stakater/reloader
Notes:
By default, Reloader is deployed in the default namespace and monitors all namespaces for changes to secrets and configmaps.
Example:
Create a secret for the MySQL DB:
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
  annotations:
    reloader.stakater.com/auto: "true"
data:
  password: dGVzdGluZzEyMzQK
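The password value is just base64-encoded text; the value above can be reproduced with:
echo 'testing1234' | base64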
kubectl apply -f secret.yaml
kubectl describe secret demo-secret
Name: demo-secret
Namespace: keyva
Labels: <none>
Annotations: reloader.stakater.com/auto: true
Type: Opaque
Data
====
password: 13 bytes
username: 8 bytes
Create the PV and PVC for MySQL:
persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl apply -f persistent-volume.yaml
kubectl describe pvc
Name: mysql-pv-claim
Namespace: keyva
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: mysql-7ddf8fdbb8-bbbms
mysql-7ddf8fdbb8-thdv9
mysql-7ddf8fdbb8-z2rjj
Events: <none>
kubectl describe pv
Name: mysql-pv-volume
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: keyva/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
Create the deployment YAML for the MySQL container using the Secret, PVC, and PV above:
kubectl apply -f my-sql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: mysql
  replicas: 3
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo-secret
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Once deployed, you can verify the pod status:
kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-5b8d8bd99b-277mf 1/1 Running 0 10s
mysql-5b8d8bd99b-6x5wz 1/1 Running 0 10s
mysql-5b8d8bd99b-srlnb 1/1 Running 0 10s
Describe the deployment to view its details:
kubectl describe deployment mysql
Name: mysql
Namespace: keyva
CreationTimestamp: Sun, 19 Feb 2023 20:28:02 -0600
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
reloader.stakater.com/auto: true
Selector: app=mysql
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql:5.6
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'demo-secret'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-5b8d8bd99b (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set mysql-5b8d8bd99b to 3
Reloader can monitor changes to ConfigMaps and Secrets, and performs rolling upgrades on Pods through their associated DeploymentConfigs, Deployments, DaemonSets, StatefulSets, and Rollouts.
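To exercise this in the lab, one way to trigger a change is to patch the secret with a new base64-encoded password. The value below is just an example (echo 'newpassword123' | base64):
kubectl patch secret demo-secret -p '{"data":{"password":"bmV3cGFzc3dvcmQxMjMK"}}'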
Validation of reloader:
kubectl logs reloader-reloader-7f6b8d49f7-9lxrx
time="2023-02-22T01:59:48Z" level=info msg="Changes detected in 'demo-secret' of type 'SECRET' in namespace 'keyva', Updated 'mysql' of type 'Deployment' in namespace 'keyva'"
Verify the age of Pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-7bd6b6d789-fqfgd 1/1 Running 0 3s
mysql-7bd6b6d789-tbxxp 1/1 Running 0 3s
mysql-7bd6b6d789-trd7v 1/1 Running 0 3s
Ref: K8-Reloader – https://github.com/stakater/Reloader