CTO Talks: DevSecOps - Security in a Digital Era is a Top Concern
Keyva Chief Technology Officer Anuj Tuli discusses how DevSecOps allows security to be innately tied to the development and operational work being done by IT teams.

How to set up a Kubernetes cluster with Dockerd container runtime on Red Hat Enterprise Linux 8
This article reviews the process to set up a Kubernetes cluster using the Docker container runtime, with one master node and one worker node, on VMware-based RHEL 8 instances.
All of the commands listed will be run on both the master and worker nodes unless a section says otherwise.
Let's start by enabling the Red Hat repos.
#Setup RHEL subscription
subscription-manager register
subscription-manager refresh
#Install commonly used repos
subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
Update the Yum repositories.
yum update -y
yum install -y yum-utils
Since this is a lab environment, we will disable the firewall. In a production environment, open the specific ports needed by your applications and by the Kubernetes components instead of disabling the firewall completely; a sketch of that alternative follows the commands below. (For a list of the required ports see: https://kubernetes.io/docs/reference/networking/ports-and-protocols/)
#Disable firewall
systemctl disable firewalld
systemctl stop firewalld
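If you would rather keep firewalld running, a minimal sketch for a control-plane node, assuming the default port assignments from the Kubernetes documentation linked above, looks like this (worker nodes would instead open 10250/tcp and the NodePort range 30000-32767/tcp):
#Open the control-plane ports instead of disabling firewalld
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --reload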
Disable swap. You MUST disable swap in order for the kubelet to work properly.
swapoff -a
#Comment out the swap line in /etc/fstab
vi /etc/fstab
#/dev/mapper/rhel-swap swap swap defaults 0 0
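If you prefer to comment out the swap entry non-interactively, a one-liner along these lines should work on a stock RHEL 8 layout (review /etc/fstab afterward to confirm only the swap line was changed; a .bak backup is kept):
#Comment out any swap entries in /etc/fstab in place
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab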
Install Docker, then build and install cri-dockerd, the shim that lets Kubernetes use the Docker Engine as its container runtime.
#Installing Docker
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf repolist -v
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
systemctl enable docker
systemctl start docker
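Before moving on, it is worth confirming the Docker engine is up; something like the following should do (exact output will vary by version):
#Verify the Docker service and the client/daemon versions
systemctl status docker --no-pager
docker version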
###Install the cri-dockerd container runtime
git clone https://github.com/Mirantis/cri-dockerd.git
# Run these commands as root
###Install Go###
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
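At this point the cri-dockerd socket that kubeadm will use later should exist; a quick check, assuming the default socket path used in this guide, is:
#Confirm the cri-dockerd socket is active and present
systemctl status cri-docker.socket --no-pager
ls -l /run/cri-dockerd.sock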
Install kubeadm, kubelet, and kubectl.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
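You can verify the tooling installed cleanly before continuing; note that it is normal for the kubelet service to restart repeatedly until kubeadm init or join has run:
#Check installed versions and kubelet service state
kubeadm version
kubectl version --client
systemctl status kubelet --no-pager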
Forwarding IPv4 and letting iptables see bridged traffic.
# Load the required kernel modules and persist them across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
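A quick sanity check that the modules are loaded and the sysctl values took effect (all three values should come back as 1):
#Verify kernel modules and sysctl settings
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward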
ON MASTER NODE ONLY
Deploy the cluster via kubeadm, then deploy the Flannel networking component.
#Deploy the Kubernetes cluster specifying the cluster network cidr and the container runtime
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/cri-dockerd.sock
#After the cluster deploys, kubeadm prints a join command; save it to run on the worker node later.
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
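If the join command is lost, or the token expires (tokens are valid for 24 hours by default), a fresh one can be generated on the master node:
#Regenerate a join command with a new token
kubeadm token create --print-join-command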
###On the master node only, as a non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=~/.kube/config
Deploy Flannel as the non-root user.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
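Flannel takes a short while to roll out, and the node should move to Ready once its pods are running. One way to watch for that (the namespace varies with the manifest version, so the grep keeps it general):
#Watch the Flannel pods come up and the control-plane node become Ready
kubectl get pods -A -o wide | grep -i flannel
kubectl get nodes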
ON WORKER NODE ONLY
Run the join command to add the node to the cluster.
#Join the node to the cluster
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34
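If kubeadm reports that it found multiple container runtime endpoints (both containerd and cri-dockerd are installed in this setup), it may be necessary to point the join at the same socket used during init, for example:
#Only needed if kubeadm complains about multiple CRI endpoints
kubeadm join masternode.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \
--discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34 \
--cri-socket /run/cri-dockerd.sock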
After joining the worker node to the cluster, run the following on the master node to confirm cluster status.
kubectl get nodes -o wide
#The result should look similar to the following
NAME STATUS ROLES AGE VERSION INTERNAL-IP OS-IMAGE CONTAINER-RUNTIME
master Ready control-plane 2d5h v1.25.0 192.168.16.73 Red Hat Enterprise Linux 8.7 docker://23.0.4
worker1 Ready <none> 2d1h v1.25.0 192.168.16.153 Red Hat Enterprise Linux 8.7 docker://23.0.4
Case Study: IT Modernized
Case Study: Infrastructure Modernized for Business Critical Application
Case Study: Cloud Infrastructure Consolidation

CTO Talks: Keyva 5th Anniversary - Lessons Learned
Keyva Chief Technology Officer Anuj Tuli celebrates the organization's 5th Anniversary and discusses lessons learned over the last five years.
TerraCognita Infrastructure as Code (IaC) Automation
By: Saikrishna Madupu – Sr. DevOps Engineer
This blog describes how to bring a large amount of existing AWS infrastructure under Terraform. The process is relevant to an organization that built its AWS infrastructure manually and wants to capture it as Terraform code for improved automation and cost savings. It is a substantial undertaking, but one with a number of benefits.
In my research I found that Terraform's native import only brings in a single resource at a time. There is also no way to import resources from several accounts at once, so you will need to work through one account at a time when onboarding your organization in the cloud.
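For comparison, a single native import looks like the sketch below; the bucket and resource names are hypothetical, and the empty resource block must already exist in your configuration before the import will succeed:
#main.tf must already declare the target resource, e.g. resource "aws_s3_bucket" "logs" {}
terraform import aws_s3_bucket.logs my-example-logs-bucket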
TerraCognita, by contrast, can import many resources in one run and writes out the corresponding HCL, state, and a variables file that can get rather large. Using flags, you can restrict the import to particular resource types, as the CLI example below shows, and the same filtering limits what ends up in the generated variables.tf file.
Installation:
Go Libraries:
You can build and install from the latest sources. TerraCognita uses Go Modules, so Go 1.17+ is required.
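A minimal build-from-source sketch, assuming Go 1.17+ and git are already installed (where the binary lands depends on your GOBIN/GOPATH settings):
#Build TerraCognita from source
git clone https://github.com/cycloidio/terracognita.git
cd terracognita
go install ./...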
Linux:
curl -L https://github.com/cycloidio/terracognita/releases/latest/download/terracognita-linux-amd64.tar.gz -o terracognita-linux-amd64.tar.gz
tar -xf terracognita-linux-amd64.tar.gz
chmod u+x terracognita-linux-amd64
sudo mv terracognita-linux-amd64 /usr/local/bin/terracognita
macOS:
brew install terracognita
Prerequisites:
Use Cases:
Sample CLI command to import all S3 buckets from AWS:
terracognita aws --hcl s3 --tfstate terraform.tfstate --aws-default-region us-east-1 -i aws_s3_bucket
This returns the following output:
terracognita-s3 % terracognita aws --hcl s3 --tfstate terraform.tfstate --aws-default-region us-east-1 -i aws_s3_bucket
We are about to remove all content from "s3", are you sure? Yes/No (Y/N):
y
Starting Terracognita with version v0.8.1
Importing with filters:
Tags: [],
Include: [aws_s3_bucket],
Exclude: [],
Targets: [],
Importing aws_s3_bucket [6/6] Done!
Writing HCL Done!
Writing TFState Done!
saikrishnamadupu@Administrators-MacBook-Pro terracognita-s3 % ls -ltr
total 32
-rw-r--r-- 1 saikrishnamadupu staff 13164 Feb 12 00:07 terraform.tfstate
drwx------ 4 saikrishnamadupu staff 128 Feb 12 00:07 s3
The command writes all of the bucket information into a folder called s3, which contains an s3_storage.tf file listing every imported bucket and its configuration.
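A reasonable follow-up step, not shown in the output above, is to validate the generated code against the imported state. Assuming the default local backend, copying the state file next to the HCL and running a plan should report no changes if the import captured everything and the generated variables have defaults:
#Verify the generated HCL matches the imported state
cp terraform.tfstate s3/
cd s3
terraform init
terraform plan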
Post-import manual work:
Supported providers:
Ref: TerraCognita (https://github.com/cycloidio/terracognita)