
Blog & Insights

Deploy Kubernetes using KinD

By: Saikrishna Madupu - Sr Devops Engineer

Deploying Kubernetes using KinD can help you set up a test environment where you can build multi-node or even multiple clusters.

If you want to create clusters on virtual machines, you need the resources to run those virtual machines: each one needs adequate disk space, memory, and CPU. An alternative that avoids this resource overhead is to use containers in place of virtual machines. Containers let you run additional nodes as needed, create and delete them in minutes, and run multiple clusters on a single host. To show how to run a cluster locally using only containers, we will use Kubernetes in Docker (KinD) to create a Kubernetes cluster on a Docker host.

Why pick KinD for test environments?

Pre-requisites:

How KinD works:

At a high level, you can think of a KinD cluster as a set of Docker containers, one per node: a control plane node and, optionally, worker nodes, which together form a Kubernetes cluster. To make the deployment easy and robust, KinD bundles all of the required Kubernetes components into a single image, known as a node image. This node image contains everything needed to create a single-node or multi-node cluster. Once a cluster is up and running, you can use Docker to exec into a control plane node container. It ships with the standard Kubernetes components and a default CNI, Kindnet; you can also disable the default CNI and install another one, such as Calico, Flannel, or Cilium. Since KinD uses Docker as the container engine to run the cluster nodes, all clusters are limited to the same network constraints as a standard Docker container. We can also run other containers on the KinD network by passing an extra argument, --net=kind, to the docker run command.
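For example, attaching another container to that network looks like this; once attached, it can reach the node containers by name over Docker's network DNS (a minimal sketch, the image and container name are just illustrative):

docker run -d --rm --name kind-net-test --net=kind nginx:alpine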


KinD Installation:

I'm using a Mac for this demonstration and will also point out the steps to install it manually.

Option 1 (Homebrew):

 brew install kind

Option 2 (manual download; the binary below is the Linux amd64 build, so adjust the URL for your platform):

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin

You can verify the installation of kind by simply running:

kind version 

kind v0.11.1 go1.16.4 darwin/arm64

Creating cluster "kind" ...

 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass

Set kubectl context to "kind-kind"

You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day!

You can check the node status with kubectl:

kubectl get nodes

NAME                 STATUS   ROLES                  AGE     VERSION
kind-control-plane   Ready    control-plane,master   5m54s   v1.21.1

KinD makes it very quick to create and delete clusters. To delete a cluster, use kind delete cluster; this also removes the entry that was appended to the ~/.kube/config file when the cluster was created.

kind delete cluster --name <cluster name>
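If you are not sure which clusters exist on the host, you can list the ones kind is managing first:

kind get clusters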

Creating a multi-node cluster:

When creating a multi-node cluster with custom options, we need to create a cluster config file. Setting values in this file allows you to customize the KinD cluster, including the number of nodes, API options, and more. A sample config is shown below:

Config file:

cluster01-kind.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  disableDefaultCNI: true
  apiServerPort: 6443
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: "10.96.0.1/12"
    podSubnet: "10.240.0.0/16"
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 2379
    hostPort: 2379
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 2222
    hostPort: 2222
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock

 

apiServerAddress:

The IP address the API server will listen on. By default it uses 127.0.0.1, but since we plan to use the cluster from other networked machines, we have chosen to listen on all IP addresses.

disableDefaultCNI: Enables or disables the Kindnet installation. The default value is false.

kubeadmConfigPatches:
This section allows you to set values for other cluster options during the installation. For our configuration, we are setting the CIDR ranges for the serviceSubnet and the podSubnet.

nodes:
For our cluster, we will create a single control plane node and a single worker node.

role: control-plane:

The first role section is for the control plane node. We have added options to map the local host's /dev and /var/run/docker.sock into the container, which will be used later for Falco.

role: worker:
This is the second node section, which allows you to configure options the worker nodes will use. For our cluster, we have added the same local mounts that will be used for Falco, as well as additional ports to expose for our Ingress controller.

extraPortMappings:

To expose ports on your KinD nodes, you need to add them to the extraPortMappings section of the configuration. Each mapping has two values: the container port and the host port. The host port is the port you use to target the cluster, while the container port is the port the container listens on.

extraMounts:

The extraMounts section allows you to add extra mount points to the containers. This comes in handy for exposing mounts like /dev and /var/run/docker.sock, which we will need for Falco.
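With the fields covered, the file above (saved as cluster01-kind.yaml) can be used to create the cluster, and the extra mounts can be spot-checked by exec'ing into the node container. This is a minimal sketch; the container name cluster01-control-plane assumes kind's usual <cluster-name>-control-plane naming:

kind create cluster --name cluster01 --config cluster01-kind.yaml
docker exec cluster01-control-plane ls -l /var/run/docker.sock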

Multi-node cluster configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

Save the configuration above (for example, as multinode.yaml) and create the cluster:

kind create cluster --name multinode --config multinode.yaml

Set kubectl context to "kind-multinode"

You can now use your cluster with:

kubectl cluster-info --context kind-multinode

Note: The --name option sets the name of the cluster, and --config tells the installer which cluster config file to use.

Multiple control plane servers introduce additional complexity since we can only target a single host or IP in our configuration files. To make this configuration usable, we need to deploy a load balancer in front of our cluster. If you do deploy multiple control plane nodes, the installation will create an additional container running a HAProxy load balancer.
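You can see that extra container alongside the node containers with docker ps (a sketch; the names assume a cluster called multinode, for which kind names the load balancer multinode-external-load-balancer):

docker ps --filter "name=multinode" --format "table {{.Names}}\t{{.Ports}}"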


Since we have a single host, each control plane node and the HAProxy container run on unique ports. Each container needs to be exposed to the host so that it can receive incoming requests. In this example, the important one to note is the port assigned to HAProxy, since that's the target port for the cluster. In the Kubernetes config file, we can see that it targets https://127.0.0.1:42673, which is the port allocated to the HAProxy container.

When a command is executed using kubectl, the request goes to the HAProxy server. Using a configuration file that KinD created during cluster creation, HAProxy routes traffic between the three control plane nodes. In the HAProxy container, we can verify this by viewing the config file found at /usr/local/etc/haproxy/haproxy.cfg.
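One way to view it is to exec into the load balancer container (again assuming a cluster named multinode):

docker exec multinode-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg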

# generated by kind

global
  log /dev/log local0
  log /dev/log local1 notice
  daemon
resolvers docker
  nameserver dns 127.0.0.11:53
defaults
  log global
  mode tcp
  option dontlognull
  # TODO: tune these
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  # allow to boot despite dns don't resolve backends
  default-server init-addr none
frontend control-plane
  bind *:6443
  default_backend kube-apiservers
backend kube-apiservers
  option httpchk GET /healthz
  # TODO: we should be verifying (!)
  server multinode-control-plane multinode-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane2 multinode-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane3 multinode-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4

As shown in the preceding configuration file, there is a backend section called kube-apiservers that contains the three control plane containers. Each entry contains the Docker IP address of a control plane node with a port assignment of 6443, targeting the API server running in the container. When you send a request to https://127.0.0.1:42673, it hits the HAProxy container and, using the rules in the HAProxy configuration file, is routed to one of the three control plane nodes in the list.

Since our cluster is now fronted by a load balancer, you have a highly available control plane for testing.

Create EKS clusters in AWS using eksctl

By Anuj Tuli, CTO

If you have used EKS or provisioned it using Terraform, you know the various components and resources you need to account for as pre-requisites to getting the cluster set up. For example, setting up IAM roles, policies, security groups, VPC settings, Kubernetes config map, updating kubeconfig file, and more. Although Terraform gives you the ability to do all of that, the IaC developer has to account for these items by creating those resources in Terraform. The CLI eksctl provided by AWS can be used as an alternative to create the cluster and have all the dependencies and pre-requisites accounted for. You can find more info on installing eksctl and using it here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html 

Let's look at the steps involved in using eksctl to spin up an EKS cluster. We will do this on a Mac, so some steps may differ if you're running another OS: 

Download and install eksctl: 

brew install weaveworks/tap/eksctl 

Once installed, you can validate you have the version you want to run:

eksctl version 

 

Next, make sure you have a ssh key set up that you'd like to use. This key will be used for the EKS nodes that get provisioned. In our case, we will create a new private key: 

ssh-keygen -t rsa 

 

This should place the private key under:

~/.ssh/id_rsa 

 

We will now set up the yaml file that will capture the various properties we want to have for this EKS cluster. An example file is shown below. You can adjust it with the private key path or other values as necessary. We will call this file:

my-eks-cluster.yaml 

 

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-eks-cluster
  region: us-east-2
nodeGroups:
  - name: nodegroup-1
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
  - name: nodegroup-2
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub

 

Run the create cluster command: 

eksctl create cluster -f my-eks-cluster.yaml 

 

We will be using nodegroups for our cluster. You can also provision a Fargate cluster using the command below (for default profile settings), or by defining a fargateProfiles resource within your config file:

eksctl create cluster --fargate 

 

And that should do it. Your EKS cluster should now be provisioned via AWS CloudFormation stacks with all the default settings for the pre-requisite resources. You can modify the config file above with declarations for any resources (like IAM groups) that you want customized.
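eksctl merges the new cluster's credentials into your kubeconfig, so you can list the nodes right away as a sanity check; the matching teardown command is shown as well (a sketch, assuming the same config file name used above):

kubectl get nodes
eksctl delete cluster -f my-eks-cluster.yaml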

If you have any questions or comments on the tutorial content above or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

Are There Any Alternatives to HP Software / Micro Focus Automation Tools?

In this post we'll briefly explore the history of the Opsware automation portfolio and talk about modern equivalents and replacements you should be considering.  

A Brief History of Opsware  

Let’s start with defining what we are talking about in today's blog. I'm focusing specifically on the IT datacenter automation software, namely: Cloud Service Automation (CSA), Server Automation (SA), Network Automation (NA), and Operations Orchestration (OO) - a product which once had the acronym HPOO… you can't make it up! 

If we allow ourselves to hop in the way-back machine, the story starts with a Bay Area startup called Loudcloud, which was founded by Ben Horowitz and Marc Andreessen in 1999. Loudcloud was an infrastructure and application hosting company and developed really cool management software to manage its clients' IT infrastructure. The company went public in 2001. In 2002 Loudcloud sold its managed services business to EDS. (Ed. note: EDS briefly became HP ES in an acquisition on its ultimate voyage into the sun and a merger with CSC, the joint company becoming known as DXC Technology in 2017.) Loudcloud rebranded as an enterprise software company called Opsware that focused on developing and selling its IT datacenter lifecycle management software. In 2007 Opsware was acquired by HP Software. In 2017 HP sold the software business to Micro Focus. The software that Loudcloud / Opsware built back in the late 1990s / early 2000s is the aforementioned suite of automation software, specifically: Server Automation (System), Network Automation (System), and Process Automation (System) - all of which were rebranded slightly after the 2007 acquisition by HP Software.

So other than exercising some knowledge on the history of the software, why mention all of this? It's because it is truly old tech. It's been upgraded and expanded and rewritten since the early days, but it is still that kind of old school top-down management interface for IT environments with more modern amenities like the ability to write automation in YAML stapled to the side of it. At their peak these software solutions were used to manage tens of thousands of operating systems, network devices, and to automate endpoints leveraging an agent-based architecture. And it wasn't cheap! Solutions like Server Automation, Operations Orchestration and other similar market offerings (anyone remember BMC Bladelogic, now TrueSight?) were closed-source and partially responsible for the explosion of enterprise open source software. Sales teams had a number back then, if your device count was smaller than that number they knew there was no business case for you to evaluate that type of software - you just couldn't get there. A good chunk of mid-market and large, but not large-enough, IT enterprises were left with no good enterprise automation solutions.  

What Else Is Out There? 

So what happens? People start looking for (and building) their own solutions in the mid-2000s. Open source solutions start getting community adoption, and IT staff are able to go way beyond things like CFEngine, adopting solutions like Chef and Puppet and learning more modern languages like Ruby. Chef and Puppet provide an early example of how to build a userbase on open source software but quickly realize no one wants to suddenly pay for things they'd been previously given for free. Licensing models change; some products go open core and paywall subsequently developed features. Far more recently, that is, in the last 10 years (geez, I am getting old), open source software supporting modern software development and hybrid cloud architectures has become the standard. And if you find yourself in a traditional IT environment, or at least one with some tech debt you're looking to retire, you really owe it to yourself to look at Ansible & Terraform.

Red Hat Ansible & HashiCorp Terraform 

Ansible began life as an open source project in 2012. Automation is written in YAML, a simple, human-readable format that anyone can learn, and Ansible uses an agentless architecture. Ansible was acquired by Red Hat in 2015, and to their great credit, Red Hat not only left Ansible core as open source, they went and open-sourced the enterprise version, Ansible Tower (the community version of which is AWX)! Awesome move for the community. Due to the commitment to open source, Red Hat's market reach, and the extraordinarily simple YAML syntax, usage of Ansible in enterprises of all sizes has skyrocketed. If you're not using it today, you're in luck: you're a simple web search and download away from having an enterprise-grade solution that really acts as a jack-of-all-trades for endpoint configuration regardless of the operating system running on the target. It's been used quite successfully for years at very large scale in organizations of every size.

HashiCorp Terraform launched in the community in 2014. It has since seen massive growth as an open source project and as both SaaS-based and on-premise enterprise software solutions. Terraform is an extremely powerful tool which enables infrastructure-as-code use cases. Terraform manages external resources using what it calls providers and gives the end-user the ability to declare the end-state configuration leveraging those external providers. This declarative architecture allows for highly modular, scalable, and reusable code to configure highly complex end points, platform-as-a-service, etc.  

In practice, we see Ansible + Terraform being used in concert with code release processes, as well as being front-ended by service catalogs like ServiceNow to enable a limitless variety of push-button IT capabilities. Please contact us if you'd like to learn more about using Ansible or Terraform.

Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part II)

By Brad Johnson, Lead DevOps Engineer

Continuing from 'Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part I)', we are going to install the OpenShift cluster in this section. We will use a public Route53 domain name for our install.

If you wish to create a private cluster, you will need to do a bit more setup. See the following pages for more information on creating a private cluster that does not require DNS. The first page has the Red Hat solution for the install-config program not supporting private clusters and contains an install-config yaml file to use instead of the install-config command.

https://access.redhat.com/solutions/5158831

https://access.redhat.com/sites/default/files/attachments/aws-internal-install-config.yml

This page has more info on the install process and limitations of private clusters:
https://docs.openshift.com/container-platform/4.5/installing/installing_aws/installing-aws-private.html

First, create the install-config yaml file and back it up, as it is consumed by manifest creation.
Note: from here on out, all commands are run from the openshift_windows_cluster directory unless otherwise stated.

$ mkdir ~/openshift_windows_cluster && cd ~/openshift_windows_cluster

$ openshift-install create install-config
? Platform aws
INFO Credentials loaded from the "default" profile in file "/home/ec2-user/.aws/credentials"
? Region us-east-2
? Base Domain example.com
? Cluster Name win-test-cluster
? Pull Secret [? for help] (Paste your Pull Secret from the Red Hat web site or text file you downloaded)

$ sed -i 's/OpenShiftSDN/OVNKubernetes/g' install-config.yaml

$ cp -p install-config.yaml install-config.yaml.backup

Now we can create the manifest files and set up the OVN CNI settings:

$ openshift-install create manifests
INFO Credentials loaded from the "default" profile in file "/home/ec2-user/.aws/credentials"
INFO Consuming Install Config from target directory

$ cp -p manifests/cluster-network-02-config.yml manifests/cluster-network-03-config.yml

$ vi manifests/cluster-network-03-config.yml

The important things to change in this file are the apiVersion and defaultNetwork settings. It is important that the hybrid cluster network CIDR does not overlap with the cluster network CIDR. If you are following this guide exactly, you can use our network config file below.

Here are the contents of our manifests/cluster-network-03-config.yml file:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
status: {}

Creation of the Cluster

With those files in place we can now create the cluster. Take a coffee break, this will take around 30 minutes to complete.

$ openshift-install create cluster
INFO Consuming Openshift Manifests from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Master Machines from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Common Manifests from target directory
INFO Credentials loaded from the "default" profile in file "/home/ec2-user/.aws/credentials"
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at https://api.win-test-cluster.example.com:6443...
INFO API v1.18.3+5302882 up
INFO Waiting up to 40m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.win-test-cluster.example.com:6443 to initialize...
I1015 22:40:12.502855 1042 trace.go:116] Trace[1959950141]: "Reflector ListAndWatch" name:k8s.io/client-go/tools/watch/informerwatcher.go:146 (started: 2020-10-15
22:39:55.810110164 +0000 UTC m=+886.539985514) (total time: 16.692708687s):
Trace[1959950141]: [16.692655552s] [16.692655552s] Objects listed

INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ec2-user/openshift_windows_cluster/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.win-test-cluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "XXXXX-XXXXX-XXXXX-XXXXX"
INFO Time elapsed: 30m48s

Now you can run the export command and start using oc commands.

$ export KUBECONFIG=/home/ec2-user/openshift_windows_cluster/auth/kubeconfig
$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-128-115.us-east-2.compute.internal Ready master 1h v1.18.3+970c1b3
ip-10-0-150-141.us-east-2.compute.internal Ready worker 1h v1.18.3+970c1b3
ip-10-0-161-110.us-east-2.compute.internal Ready worker 1h v1.18.3+970c1b3
ip-10-0-186-69.us-east-2.compute.internal Ready master 1h v1.18.3+970c1b3
ip-10-0-201-57.us-east-2.compute.internal Ready master 1h v1.18.3+970c1b3
ip-10-0-220-129.us-east-2.compute.internal Ready worker 1h v1.18.3+970c1b3
$ oc version
Client Version: 4.5.14
Server Version: 4.5.14
Kubernetes Version: v1.18.3+5302882

To verify you have the proper network running you can run this command:

 $ oc get network.operator cluster -o yaml

Look at the spec section of the yaml output. It should look like this.

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
    type: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16


Bootstrapping the Windows Worker Nodes

If you already have an SSH keypair in AWS you can use that; if not, you can generate a new one with the steps below. Note that you cannot use a key with a passphrase for Windows machines.

$ ssh-keygen -t rsa -b 4096 -N "" -C "example-key" -f ~/.ssh/example-key

$ aws --region us-east-2 ec2 import-key-pair --key-name "example-key" --public-key-material file://$HOME/.ssh/example-key.pub

Now we need to download the Windows node bootstrapper and create our Windows nodes. This will take about 5 minutes to run.

See this page for the latest releases: https://github.com/openshift/windows-machine-config-bootstrapper/releases

See this page for more info on wni: https://github.com/openshift/windows-machine-config-bootstrapper/tree/master/tools/windows-node-installer

Note: Due to a bug in the Intel 82599 network adapter used in most Intel-based instances that causes issues with overlay networks, we suggest using AMD-based instances like m5a.large.

$ wget https://github.com/openshift/windows-machine-config-bootstrapper/releases/download/v4.5.2-alpha/wni -O ~/bin/wni

$ chmod +x ~/bin/wni && mkdir windowsnodeinstaller

$ wni aws create --kubeconfig $KUBECONFIG --credentials ~/.aws/credentials --credential-account default --instance-type m5a.large --ssh-key example-key --private-key ~/.ssh/example-key --dir ./windowsnodeinstaller/
2020/10/16 20:05:13 kubeconfig source: /home/ec2-user/openshift_windows_cluster/auth/kubeconfig
2020/10/16 20:05:14 Added rule with port 5986 to the security groups of your local IP
2020/10/16 20:05:14 Added rule with port 22 to the security groups of your local IP
2020/10/16 20:05:14 Added rule with port 3389 to the security groups of your local IP
2020/10/16 20:05:14 Using existing Security Group: sg-0123456789012345
2020/10/16 20:09:41 External IP: 4.138.182.84
2020/10/16 20:09:41 Internal IP: 10.0.42.50

After creating the node we can get the login info and run Ansible to finish node setup.

See this page for more information: https://github.com/openshift/windows-machine-config-bootstrapper/tree/master/tools/ansible

Get the Windows node Instance ID from the json file and get the Windows Administrator password. This password can also be used for RDP.

$ cat windowsnodeinstaller/windows-node-installer.json
{"InstanceIDs":["i-0123456789012345"],"SecurityGroupIDs":["sg-0123456789012345"]}

$ aws ec2 get-password-data --instance-id i-0123456789012345 --priv-launch-key ~/.ssh/example-key

Ansible Windows Node Finalization

Now we need to create an Ansible inventory file.

 $ vi inventory.ini

Your file should look like this, with your Windows node password and node address. Be sure to put the password in single quotes and set the cluster address to match the name of your cluster and private IP to match your node as well.

[win]
4.138.182.84 ansible_password='YOURWINDOWSNODEPASSWORDHERE' private_ip=10.0.42.50

[win:vars]
ansible_user=Administrator
cluster_address=win-test-cluster.example.com
ansible_connection=winrm
ansible_ssh_port=5986
ansible_winrm_server_cert_validation=ignore

Verify Ansible connectivity with this command and look for SUCCESS in the output:

$ ansible win -i inventory.ini -m win_ping
4.138.182.84 | SUCCESS => {
"changed": false,
"ping": "pong"
}

Clone the Windows Machine Config Bootstrapper repo and run the ansible playbook against the node:

$ git clone https://github.com/openshift/windows-machine-config-bootstrapper.git

$ ansible-playbook -v -i inventory.ini windows-machine-config-bootstrapper/tools/ansible/tasks/wsu/main.yaml

This will produce a lot of output and take 10 minutes or so. In the end you should see the Play Recap. As long as 'failed=0' then everything should be good.

To check the node is good and working in the cluster run this command:

$ oc get nodes -o wide -l kubernetes.io/os=windows
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-42-50.us-east-2.compute.internal Ready worker 29m v1.18.3 10.0.42.50 3.138.182.84 Windows Server 2019 Datacenter 10.0.17763.1518 docker://19.3.12

At this point you should use RDP to connect to the Windows worker node using the Administrator user and the password you pulled earlier. Just add the Windows Worker Node to a security group allowing RDP and then open a connection. After logging in start a powershell session with admin rights and run 'docker ps'.

Deploy a Windows sample application:

$ oc create -f https://raw.githubusercontent.com/keyvatech/blog_files/master/kubernetes_windows_web_server.yaml -n default


You can check it is running in OpenShift with this command:

$ oc rollout status deployment win-webserver -n default
deployment "win-webserver" successfully rolled out

On Windows docker output should look like this:

PS C:\Users\Administrator> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
09c8bbd2a7e8 mcr.microsoft.com/windows/servercore "powershell.exe -com…" 13 minutes ago Up 13 minutes k8s_windowswebserver_win-webserver-85b49f8677-cgqkq_default_01fe28db-5ae7-4ead-8e84-5d9d5cd2cb01_0
52d42f33de9d mcr.microsoft.com/k8s/core/pause:1.2.0 "cmd /S /C 'cmd /c p…" 16 minutes ago Up 16 minutes k8s_POD_win-webserver-85b49f8677-cgqkq_default_01fe28db-5ae7-4ead-8e84-5d9d5cd2cb01_0

If you have any issues try waiting 15 minutes and then redeploying with one of the following commands:

$ oc rollout restart deployment/win-webserver
$ oc rollout retry deployment/win-webserver

To look at logs for the container, do this:

$ oc get pods
NAME READY STATUS RESTARTS AGE
win-webserver-564d75c5f7-l4kk2 1/1 Running 0 96s
$ oc logs win-webserver-564d75c5f7-l4kk2
Listening at http://*:80/

After the application is up and running, DNS can take up to 5 minutes to populate, so if this doesn't work at first, try again. Check that the service is up and running by getting the external IP for the service and curling it.

$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.30.0.1 <none> 443/TCP 23h
openshift ExternalName <none> kubernetes.default.svc.cluster.local <none> 23h
win-webserver LoadBalancer 172.30.88.146 a038a9aa4571f4a7cafaf15ebf7ae270-23672059.us-east-2.elb.amazonaws.com 80:32601/TCP 35m
$ curl a038a9aa4571f4a7cafaf15ebf7ae270-23672059.us-east-2.elb.amazonaws.com
<html><body><H1>Windows Container Web Server</H1></body></html>

Deleting the Cluster

If you're all done and want to tear down here are the commands:

$ wni aws destroy --kubeconfig $KUBECONFIG --credentials ~/.aws/credentials --credential-account default --dir ./windowsnodeinstaller/

$ openshift-install destroy cluster

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at info@keyvatech.com.

Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part I)

By Brad Johnson, Lead DevOps Engineer

This guide covers how to set up an OpenShift cluster in AWS with Windows worker nodes. Because this requires the OVN Kubernetes container network interface, you cannot simply add Windows nodes to existing clusters. Please also understand that this functionality is still considered preview or beta by Red Hat and is not supported in production environments at this time. It also requires OpenShift 4.4 or later; we tested this using OpenShift 4.5, which was the latest version when this was published.

Requirements:
- Ansible 2.9+
- Python 3
- Python winrm module
- AWS CLI
- OpenShift 4.4+
- OC CLI 4.4+
- GIT
- AWS IAM User with programmatic access key and AdministratorAccess policy attached

Environment Setup:
If you don't have an environment that meets the above specs then create an EC2 instance with Amazon Linux 2.
I used a t2.micro instance and a security group allowing SSH on port 22. This environment already has the AWS CLI set up. During my run I only needed 4GB total disk space so the default disk size is fine.

After the instance is launched, SSH to the new VM as 'ec2-user' using your keyfile.
Run the following commands to set up python pre-reqs:

$ sudo yum install python3 python3-pip git
$ pip3 install --user pywinrm ansible

Navigate to https://cloud.redhat.com/openshift/install/aws/installer-provisioned and log in with your Red Hat account. This page provides links to the latest installer and CLI. You will also need to download your pull secret from here. These links are correct as of Oct 2020; however, if you have an issue, please use the links on the latest page from Red Hat.

Download the OpenShift CLI and installer and place the binaries in your $PATH. Note: /home/ec2-user/bin is in the default $PATH on Amazon Linux 2, and the openshift-client archive also contains a kubectl binary.

$ cd ~

$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz

$ mkdir bin && tar -xvf openshift-client-linux.tar.gz --directory bin && mv bin/README.md ~/openshift-client-README.md

$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz

$ tar -xvf openshift-install-linux.tar.gz --directory bin && mv bin/README.md ~/openshift-install-README.md

Check the versions of the pre-reqs. Here is the output from when I tested this example as well.

$ ansible --version
ansible 2.10.2
config file = None
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/.local/lib/python3.7/site-packages/ansible
executable location = /home/ec2-user/.local/bin/ansible
python version = 3.7.9 (default, Aug 27 2020, 21:59:41) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]

$ aws --version
aws-cli/1.18.107 Python/2.7.18 Linux/4.14.193-149.317.amzn2.x86_64 botocore/1.17.31

$ oc version
Client Version: 4.5.14

$ openshift-install version
openshift-install 4.5.14
built from commit 9893a482f310ee72089872f1a4caea3dbec34f28
release image quay.io/openshift-release-dev/ocp-release@sha256:95cfe9273aecb9a0070176210477491c347f8e69e41759063642edf8bb8aceb6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2-0-g52c56ce", GitCommit:"d7f3ccf9a5bdc96ba92e31526cf014b3de4c46aa", GitTreeState:"clean", BuildDate:"2020-09-16T15:25:59Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

$ pip3 freeze
ansible==2.10.1
ansible-base==2.10.2
certifi==2020.6.20
cffi==1.14.3
chardet==3.0.4
cryptography==3.1.1
idna==2.10
Jinja2==2.11.2
MarkupSafe==1.1.1
ntlm-auth==1.5.0
packaging==20.4
pycparser==2.20
pyparsing==2.4.7
pywinrm==0.4.1
PyYAML==5.3.1
requests==2.24.0
requests-ntlm==1.1.0
six==1.15.0
urllib3==1.25.10
xmltodict==0.12.0

$ pip3 show pywinrm
Name: pywinrm
Version: 0.4.1
Summary: Python library for Windows Remote Management
Home-page: http://github.com/diyan/pywinrm/
Author: Alexey Diyan
Author-email: alexey.diyan@gmail.com
License: MIT license
Location: /home/ec2-user/.local/lib/python3.7/site-packages
Requires: xmltodict, requests, requests-ntlm, six


Configure the AWS and the AWS CLI

You will need an AWS IAM user with a programmatic access key and the AdministratorAccess policy attached. You will also need to set up Route53 for a public cluster, but this is not required; if you wish to create a private cluster, see our steps below.
See this page for information on setting up your AWS account. https://docs.openshift.com/container-platform/4.5/installing/installing_aws/installing-aws-account.html

If you need information on names for availability zones you can run one of the following commands.
Be sure you are using a region supported by Red Hat for OpenShift on AWS.

$ aws ec2 describe-regions
$ aws ec2 describe-availability-zones --region us-east-2
$ aws ec2 describe-availability-zones --all-availability-zones

Run these commands to set up the AWS CLI

$ aws configure
AWS Access Key ID [None]: YOURACCESSKEYID
AWS Secret Access Key [None]: YOURSECRETACCESSKEY
Default region name [None]: us-east-2
Default output format [None]: json

We are now ready to set up the OpenShift Cluster. Please go to 'Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part II)'.


Helpful links: 

https://cloud.redhat.com/openshift/install/

If you are interested in deploying Windows worker nodes with Rancher,  please see our post here.

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at info@keyvatech.com.

Ansible vs. Terraform: Understanding the Differences

By Brad Johnson, Lead DevOps Engineer

When considering infrastructure automation, Terraform and Ansible usually come up. Both do some things really well, but both also have limitations. Terraform is an infrastructure-as-code tool, whereas Ansible is a configuration management tool that can also do infrastructure as code. I've had people ask how the tools compare and which one to use and when, so let's explore these tools and talk about the benefits of each. 

First, why would you use Terraform? The single most important reason is that Terraform, like Ansible, is platform-agnostic. This means that if you have a hybrid or multi-cloud application or service, you can use Terraform to manage the infrastructure in a single repository. Cloud-vendor-specific solutions like AWS CloudFormation templates work well, but they are limited to use within the platform they are available in. With Terraform's ability to support multiple providers, you can do things like managing the infrastructure code definition of on-premise and AWS/GCP/Azure cloud VMs, load balancers, DNS, or network configuration in the same set of files. Using a single common configuration language means greater flexibility in transitioning to new environments and reducing vendor lock-in. Another reason to consider Terraform is that, unlike Ansible, it works on the principle of understanding the current vs. desired state. This means that if you do something like deploy a VM via Terraform, then later delete that block of configuration, Terraform will delete the VM. So your Terraform code is declarative of your infrastructure. With Ansible you would need to write additional code to perform a similar operation, as Ansible is not aware of the state from previous runs. Another benefit of Terraform is that you can see what it will do before you run it by using the 'terraform plan' command. 
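For reference, that plan-then-apply loop is just the standard Terraform CLI workflow (a minimal sketch):

terraform init
terraform plan -out=tfplan
terraform apply tfplan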

However, if you already have a significant amount of infrastructure deployed, it can be time-consuming to import your current environment to manage under Terraform. You can use it for new deployments without importing existing environment configurations; however, it won't be able to manage those existing resources. Terraform also stores the state of what was provisioned in a state file. This means that if multiple people are working on the code, they must run it out of a single common location with the same state file. Cloud providers can use cloud storage buckets to store the state file. The ideal solution might be using a CI or orchestration system to run 'terraform apply' to deploy infrastructure changes, and gating the process via approvals in ITSM. It is critical to ensure actual changes are applied from a single source of truth, like a master git branch. Also, while Terraform is extensible with custom providers, you will need to write them in Go, which is not yet as widely used as Python. 

Now let's look at why Ansible. The best thing about Ansible is that it can handle a wide variety of configuration and deployment tasks using standard modules and it's easily extensible with Python. You can deploy a VM, use templates in case of custom configuration files, communicate with REST APIs, interact with git repos, and easily configure Linux or custom software all using already available standard modules. Building your own custom Ansible modules, which typically isn't needed given the exhaustive Ansible library, requires minimal programming effort. An example 'hello world' module only requires 4 lines of Python code. Drop the code in a 'library' directory next to your playbook and you're ready to use it. Ansible also comes with 'ansible-vault' which provides a way to store sensitive variables in encrypted yaml files in your playbook repo, which can be decrypted at runtime using a vault password. Because of these features, you can easily implement a wide variety of use cases using Ansible to achieve configuration as code. Some example cases we've used Ansible for include deployment of Linux OS hardening changes to meet security standard compliance,  configuring Apache Tomcat and Oracle Weblogic as part of application server deployment, integrating with ITSM (IT Service Management) and CMDB (Configuration Management Database) platforms, and interacting with silent installers and CLIs using Keyva built custom modules like one for Python Pexpect. 
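As a small illustration of the vault workflow mentioned above, you can encrypt a variables file in the repo and supply the vault password at playbook runtime (the file and inventory names here are just examples):

ansible-vault encrypt group_vars/all/vault.yml
ansible-playbook -i inventory.ini site.yml --ask-vault-pass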

Now, given that Ansible does not store state of the resources, you will need to write playbooks to handle removal of resources. Meaning, even if you deployed something and Ansible made sure it was 'present', to remove it you would usually need to run the same function with the named resource as 'absent'. For simple things like removing a file, this is easy and you just need to remove the code after it is run once everywhere. For more complex use cases, you can get around this limitation by writing playbooks in a way that queries existing resources into variable lists, compares to what is in Ansible, then removes the items that do not match. However, this would take additional time, is more complex, and does not account for any changes that were made on target resources manually. From a configuration, compliance and remediation standpoint, this may actually be desirable for some organizations. 

What's great about both tools is that they can work with each other. There's no reason to believe that one tool needs to own the whole process. Given their differences in scope, while they can do similar things, they are in no way replacements for the full functionality of the other. Terraform can be set up to run Ansible on a host after provisioning to do the configuration of that host. Likewise, Ansible can use the Terraform module to plan or apply a Terraform project as a step within a playbook. The Ansible module for Terraform also returns the outputs from Terraform as variables that Ansible can consume and use for further action. When designing and implementing  infrastructure-as-code in your environment, it is important to consider which tool is best suited for each part of the task. It is also imperative to consider combining Terraform with Ansible when deploying infrastructure. If you need help getting started or advice on best practices around implementing infrastructure-as-code, please reach out to info@keyvatech.com. 

How to set up PowerBI for reporting from AWS Aurora MySQL Database

By Anuj Tuli, CTO

Many organizations that use PowerBI for business insights and analytics have a need to run their reports against various data sources, including workloads that they may have residing in Amazon AWS. There can be a number of various data sources configured for AWS; this blog walks through how to set up connectivity between PowerBI and AWS Aurora MySQL Database.  

Assumptions:  

First, let's look at various configurations that we need to set up on the AWS side -  

Next, we will configure the PowerBI components -  

One of the most common ODBC errors we've seen is when the ODBC connector is unable to connect to the database. This usually happens either because the public subnet for the VPC is not associated with the Windows EC2 instance, or the public accessibility flag for the database is not set.  
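If you hit that error, one quick check is to confirm the public accessibility flag on the database with the AWS CLI (a sketch; the instance identifier is a placeholder):

aws rds describe-db-instances --db-instance-identifier my-aurora-instance --query 'DBInstances[0].PubliclyAccessible'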

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to info@keyvatech.com 

Big Data and Snowflake

By Anuj Tuli, CTO

Organizations that have embarked on the journey to collecting and analyzing data are tasked with three distinct workstreams to achieve their goal – 1) Identifying the right data to capture, 2) Bringing data from various sources into the data warehouse, 3) Performing guided analysis on the captured data so as to derive meaning from it.  

A modern data warehouse platform helps bring these activities together, so that you can easily identify, capture and retrieve data from various sources, and provide visibility and reporting capabilities for chosen interpretation. Snowflake is built for data scientists and data engineers, and it supports modern data and applications that use as much unstructured data as structured data. 

Snowflake offers SaaS data warehousing services and has also made available a number of connectors for data retrieval on its GitHub here - https://github.com/snowflakedb. There is also a community page that provides hands-on exposure to the Snowflake platform, along with other educational videos. More info here - https://community.snowflake.com/s/education-services 

Keyva provides services and offerings around the Snowflake data warehousing platform. You can always reach our team at info@keyvatech.com to request additional information. 

 

[post_title] => Big Data and Snowflake [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => big-data-and-snowflake [to_ping] => [pinged] => [post_modified] => 2020-09-24 18:41:45 [post_modified_gmt] => 2020-09-24 18:41:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2908 [menu_order] => 7 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 8 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 3218 [post_author] => 14 [post_date] => 2022-04-20 14:33:21 [post_date_gmt] => 2022-04-20 14:33:21 [post_content] =>

By: Saikrishna Madupu - Sr Devops Engineer

Deploying Kubernetes using KinD can help setup a test environment where you can build multi-nodes or multiple clusters.

If you want to create clusters on virtual machines, you should have the resources to run the virtual machines. Each machine should have adequate disk space, memory, and CPU utilization. An alternate way to overcome this high volume of resources is to use containers in place. Using containers provides the advantage to run additional nodes, as per the requirements, by creating/deleting them in minutes and helps run multiple clusters on a single host. To explain how to run a cluster using only containers locally, use Kubernetes in Docker (KinD) to create a Kubernetes cluster on your Docker host.

Why pick KIND for test env’s[KH1] ?

Pre-requisites:

How kind works:

At a high level, you can think of a KinD cluster as consisting of a single Docker container that runs a control plane node and a worker node to create a Kubernetes cluster. To make the deployment easy and robust, KinD bundles every Kubernetes object into a single image, known as a node image. This node image contains all the required Kubernetes components to create a single-node or multi-node cluster. Once it is up and running, you can use Docker to exec into a control plane node container. It comes with the standard k8 components and comes with default CNI [KINDNET]. We can also disable default CNI and enable such as Calico, Falnnel, Cilium.  Since KinD uses Docker as the container engine to run the cluster nodes, all clusters are limited to the same network constraints that a standard Docker container is limited to. We can also run other containers on our kind env by passing an extra argument –net=kind to the docker run command.


KinD Installation:

I’m using Mac for demonstration and will also point out the steps to install it manually.

Option1:

 brew install kind

Option2:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin

You can verify the installation of kind by simply running:

kind version 

kind v0.11.1 go1.16.4 darwin/arm64

Creating cluster "kind" ...

 ✓ Ensuring node image (kindest/node:v1.21.1) ?

 ✓ Preparing nodes ? 

 ✓ Writing configuration ?

 ✓ Starting control-plane ?️

 ✓ Installing CNI ?

 ✓ Installing StorageClass ?

Set kubectl context to "kind-kind"

You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! ?

NAME                         STATUS ROLES                             AGE       VERSION
kind-control-plane   Ready    control-plane,master  5m54s   v1.21.1

KinD helps us to create and delete the cluster very quick. In order to delete the cluster we use KinD delete cluster in this example, it also deletes entry in our ~/.kube/config file that gets appended when cluster gets created.

kind delete cluster --name <cluster name>

Creating a multi-node cluster:

When creating a multi-node cluster, with custom options we need to create a cluster config file. Setting values in this file allows you to customize the KinD cluster, including the number of nodes, API options, and more. Sample config is shown below:

Config file:

/Cluster01-kind.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  disableDefaultCNI: true
  apiServerPort: 6443
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: "10.96.0.1/12"
    podSubnet: "10.240.0.0/16"
nodes:
- role: control-plane
 extraPortMappings:
  - containerPort: 2379
    hostPort: 2379
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 2222
    hostPort: 2222
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
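Assuming the file above is saved as cluster01-kind.yaml, you can create the cluster from it with:

kind create cluster --name cluster01 --config cluster01-kind.yaml

The options used in this file are explained below.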

 

apiServerAddress:

This sets the IP address the API server will listen on. By default it uses 127.0.0.1, but since we plan to use the cluster from other networked machines, we have chosen to listen on all IP addresses.

disableDefaultCNI: Enables or disables the default Kindnet installation. The default value is false; setting it to true means we must install a CNI ourselves.
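As a rough example of installing one afterwards (the Calico manifest URL is the one documented by the Calico project at the time of writing; substitute your preferred CNI and version):

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml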

kubeadmConfigPatches:
This section allows you to set values for other cluster options during the installation. For our configuration, we are setting the CIDR ranges for the serviceSubnet and the podSubnet.
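After the cluster is up, one way to confirm these values (assuming the standard kubeadm-config ConfigMap that kubeadm creates in kube-system) is:

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -E 'serviceSubnet|podSubnet'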

Nodes:
For our cluster, we will create a single control plane node, and a single worker node.

role: control-plane:

The first role section is for the control plane. We have added options to map the local host's /dev and /var/run/docker.sock into the node; these mounts are used later by tools such as Falco.

role: worker:
This is the second node section, which allows you to configure options that the worker nodes will use. For our cluster, we have added the same local mounts that will be used for Falco, and we have also added additional ports to expose for our Ingress controller.

extraPortMappings:

To expose ports to your KinD nodes, you need to add them to the extraPortMappings section of the configuration. Each mapping has two values, the container port, and the host port. The host port is the port you would use to target the cluster, while the container port is the port that the container is listening on.
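For example, with the worker mappings above, traffic sent to port 80 on the Docker host lands on port 80 of the worker node container. You can list the published ports on a node container with docker (the node name cluster01-worker assumes a cluster named cluster01):

docker port cluster01-worker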

extraMounts:

The extraMounts section allows you to add extra mount points to the containers. This comes in handy to expose mounts like /dev and /var/run/docker.sock, which tools such as Falco will need later.
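To double-check that the mounts made it into a node container, docker inspect can print them (again assuming a node named cluster01-worker):

docker inspect cluster01-worker --format '{{ json .Mounts }}'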

Multi-node cluster configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

kind create cluster --name multinode --config cluster-01.yaml

Set kubectl context to "kind-multinode"

You can now use your cluster with:

kubectl cluster-info --context kind-multinode

Note: The --name option sets the name of the cluster (multinode here, which is why the context is kind-multinode), and --config tells the installer which config file to use (cluster-01.yaml, containing the multi-node configuration above).

Multiple control plane servers introduce additional complexity, since we can only target a single host or IP in our configuration files. To make this configuration usable, we need to deploy a load balancer in front of the cluster. If you deploy multiple control plane nodes, the installation creates an additional container running an HAProxy load balancer.


Since we have a single host, each control plane node and the HAProxy container run on unique ports. Each container needs to be exposed to the host so that it can receive incoming requests. In this example, the important one to note is the port assigned to HAProxy, since that's the target port for the cluster. In the Kubernetes config file (~/.kube/config), we can see that kubectl is targeting https://127.0.0.1:42673, which is the port that has been allocated to the HAProxy container.
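One quick way to see which host port each container received (the exact ports vary per run) is:

docker ps --format 'table {{.Names}}\t{{.Ports}}'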

When a command is executed using kubectl, the request goes to the HAProxy server first. Using a configuration file that KinD created when the cluster was built, HAProxy routes the traffic between the three control plane nodes. Inside the HAProxy container, we can verify this by viewing the config file found at /usr/local/etc/haproxy/haproxy.cfg.
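One way to dump it, assuming the load balancer container follows KinD's <cluster>-external-load-balancer naming convention (multinode-external-load-balancer in this example):

docker exec multinode-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg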

# generated by kind

global
  log /dev/log local0
  log /dev/log local1 notice
  daemon
resolvers docker
  nameserver dns 127.0.0.11:53
defaults
  log global
  mode tcp
  option dontlognull
  # TODO: tune these
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  # allow to boot despite dns don't resolve backends
  default-server init-addr none
frontend control-plane
  bind *:6443
  default_backend kube-apiservers
backend kube-apiservers
  option httpchk GET /healthz
  # TODO: we should be verifying (!)
  server multinode-control-plane multinode-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane2 multinode-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane3 multinode-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4

As shown in the preceding configuration file, there is a backend section called kube-apiservers that contains the three control plane containers. Each entry contains the Docker DNS name of a control plane node with a port assignment of 6443, targeting the API server running in that container. When you request https://127.0.0.1:42673, the request hits the HAProxy container; then, using the rules in the HAProxy configuration file, it is routed to one of the three nodes in the list.
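To confirm which endpoint your kubectl is actually pointed at, you can print the server URL from the active context:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'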

Since our cluster is now fronted by a load balancer, you have a highly available control plane for testing.

