Listed below are some great events coming up that you should check out!

Red Hat Summit Connect - Minneapolis 12/2
Finally! An in-person event. Come check out this day-long event with Red Hat in Minneapolis and catch up on what's new with Red Hat. There are a lot of great sessions on Ansible and OpenShift at this event. To register for the event and view the agenda, visit the following link: https://www.redhat.com/en/summit/connect/minneapolis

AWS re:Invent - Las Vegas & Virtual 11/29 - 12/3
This is the event of the year for the AWS ecosystem. If you're doing anything with AWS, you're going to want to attend. Virtual passes for the event are free, and breakout sessions are available to watch on demand. Register here: https://reinvent.awsevents.com/register/

HashiCorp: Vault + Zero Trust Security - Virtual 12/16
Join HashiCorp for a hands-on workshop on zero-trust, identity-based security. During this workshop, participants will learn about the HashiCorp security model, which is predicated on the principle of identity-based access and security: for any machine or user to do anything, they must authenticate who or what they are, and their identity and policies define what they're allowed to do. After an overview of zero-trust security, participants will go through a hands-on workshop of HashiCorp Vault. Register here: https://events.hashicorp.com/zero-trust-security-workshop-dec16

Microsoft Azure Virtual Training Day: DevOps with GitHub - Virtual 12/1 & 12/2
Never stop learning! Free two-day virtual training with Microsoft, including several hands-on activities. To attend the event, register at the following link: https://mktoevents.com/Microsoft+Event/302741/157-GQE-382

By: Saikrishna Madupu - Sr DevOps Engineer

Deploying Kubernetes using KinD can help you set up a test environment where you can build multi-node or multiple clusters. If you want to create clusters on virtual machines, you need the resources to run those virtual machines, and each machine needs adequate disk space, memory, and CPU. An alternative that avoids this resource overhead is to use containers in their place. Using containers gives you the ability to run additional nodes as requirements change, to create and delete them in minutes, and to run multiple clusters on a single host. To show how to run a cluster locally using only containers, we will use Kubernetes in Docker (KinD) to create a Kubernetes cluster on your Docker host.

Why pick KinD for test environments?

Pre-requisites:

How KinD works: At a high level, you can think of a KinD cluster as a set of Docker containers, one per node, that together form a Kubernetes cluster - for example, one control plane node and one worker node. To make the deployment easy and robust, KinD bundles every Kubernetes component into a single image, known as a node image. This node image contains all the required Kubernetes components to create a single-node or multi-node cluster. Once the cluster is up and running, you can use Docker to exec into a control plane node container. KinD ships with the standard Kubernetes components and a default CNI, Kindnet. You can also disable the default CNI and enable an alternative such as Calico, Flannel, or Cilium. Since KinD uses Docker as the container engine to run the cluster nodes, all clusters are limited to the same network constraints that a standard Docker container is limited to. You can also run other containers alongside your KinD environment by passing an extra argument, --net=kind, to the docker run command.
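For example, here is a quick sketch of both of those ideas - exec'ing into a node container once a cluster is up, and attaching another container to the KinD network (this assumes the default cluster name of kind, so the control plane container is named kind-control-plane, and a recent KinD release that creates a Docker network named kind):

docker ps --filter name=control-plane     # list the KinD node containers running on this Docker host
docker exec -it kind-control-plane bash   # open a shell inside the control plane node container
docker run -d --net=kind nginx            # run an extra container on the same Docker network as the KinD nodes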
KinD installation: I'm using a Mac for this demonstration and will also point out the steps to install KinD manually. There are two options, shown in the commands later in this post: option 1 is to install with Homebrew, and option 2 is to download the release binary manually. You can verify the installation by simply running kind version. Creating a default cluster with kind create cluster produces output like the following:

Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋

KinD makes it very quick to create and delete clusters. To delete the cluster in this example we use kind delete cluster; this also deletes the entry in our ~/.kube/config file that was appended when the cluster was created.

When creating a multi-node cluster with custom options, we need to create a cluster config file. Setting values in this file allows you to customize the KinD cluster, including the number of nodes, API options, and more. A sample config (cluster01-kind.yaml) is shown below.

apiServerAddress: the IP address the API server will listen on. By default it will use 127.0.0.1, but since we plan to use the cluster from other networked machines, we have selected to listen on all IP addresses.
disableDefaultCNI: enables or disables the Kindnet installation. The default value is false.
role: control-plane: The first role section is for the control plane. We have added options to map the local host's /dev and /var/run/docker.sock, which will be used later for Falco.
extraPortMappings: To expose ports to your KinD nodes, you need to add them to the extraPortMappings section of the configuration. Each mapping has two values: the container port and the host port. The host port is the port you would use to target the cluster, while the container port is the port that the container is listening on.
extraMounts: The extraMounts section allows you to add extra mount points to the containers. This comes in handy to expose mounts like /dev and /var/run/docker.sock that we will need for Falco.

When the multi-node cluster comes up, the end of the creation output looks like this:

Set kubectl context to "kind-multinode"
You can now use your cluster with:
kubectl cluster-info --context kind-multinode

Note: The --name option will set the name of the cluster to cluster01, and --config tells the installer to use the cluster01-kind.yaml config file.

Multiple control plane servers introduce additional complexity, since we can only target a single host or IP in our configuration files. To make this configuration usable, we need to deploy a load balancer in front of our cluster. If you do deploy multiple control plane nodes, the installation will create an additional container running an HAProxy load balancer. Since we have a single host, each control plane node and the HAProxy container run on unique ports, and each container needs to be exposed to the host so that it can receive incoming requests. In this example, the important one to note is the port assigned to HAProxy, since that's the target port for the cluster. In the Kubernetes config file, we can see that it is targeting https://127.0.0.1:42673, which is the port that has been allocated to the HAProxy container. When a command is executed using kubectl, it is directed to the HAProxy server; using a configuration file that KinD created during the cluster's creation, HAProxy then routes the traffic between the three control plane nodes.
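To confirm which host port your kubeconfig is actually targeting, here is a quick check (a sketch; the port number will be different on every machine):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # prints something like https://127.0.0.1:42673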
In the HAProxy container, we can verify the configuration by viewing the config file found at /usr/local/etc/haproxy/haproxy.cfg (the full generated file is shown later in this post). As shown in that configuration file, there is a backend section called kube-apiservers that contains the three control plane containers. Since our cluster is now fronted by a load balancer, you have a highly available control plane for testing.

By: Anuj Tuli, CTO

If you have used EKS or provisioned it using Terraform, you know the various components and resources you need to account for as prerequisites to getting the cluster set up - for example, setting up IAM roles, policies, security groups, VPC settings, the Kubernetes config map, updating the kubeconfig file, and more. Although Terraform gives you the ability to do all of that, the IaC developer has to account for these items by creating those resources in Terraform. The eksctl CLI, the official CLI for Amazon EKS, can be used as an alternative to create the cluster with all the dependencies and prerequisites accounted for. You can find more info on installing eksctl and using it here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

Let's look at the steps involved in using eksctl to spin up an EKS cluster. We will do this on a Mac, so some steps may differ if you're running another OS.

Download and install eksctl (the commands for this and the following steps are shown later in this post). Once installed, you can validate that you have the version you want to run. Next, make sure you have an SSH key set up that you'd like to use; this key will be used for the EKS nodes that get provisioned. In our case, we will create a new private key, which will be placed under ~/.ssh/id_rsa.

We will now set up the YAML file that captures the various properties we want for this EKS cluster. An example file is shown below; you can adjust it with the private key path or other values as necessary. We will call this file my-eks-cluster.yaml.

Run the create cluster command: eksctl create cluster -f my-eks-cluster.yaml

We will be using nodegroups for our cluster. You can also provision a Fargate cluster using the command below (for default profile settings), or have a fargateProfiles resource defined within your config file.

And that should do it. Your EKS cluster, provisioned via AWS CloudFormation stacks, should be created with all the default settings for prerequisite resources. You can modify the config file above with declarations for any resources (like IAM groups) that you want customized. If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

By: Jesse Langhoff, Director of Sales

Today is one of those days. Not the best day to be Amazon Web Services. And it's not a great day to have all of your mission-critical services sitting on top of it alone. The dreaded network outage. Its impact is being felt everywhere: AWS Chime web conferencing services are unavailable, delivery trucks are sidelined, many of the SaaS services we all consume on a daily basis are unreachable, and far-flung things like hosted blockchain nodes are down - it's even impossible for me to communicate with my Roomba via its mobile app. My poor, directionless Roomba.

AWS is a premier provider of hundreds of cloud-based services globally. It's a de facto standard, and it's in almost every conversation when an enterprise talks about going from on-prem to the cloud.
However, outages like these, as infrequent as they may be, remind us that the cloud (should you choose just one) can be a single point of failure. It's why we advocate for platform-independent technologies like Snowflake, Terraform, and Ansible, and why your cloud journey should consider how you can straddle multiple clouds or operate between the cloud and on-prem. Failure is inevitable. How we adapt to failure is up to us. While the cloud is fault tolerant, it isn't faultless. Drop us a line if you want to talk hybrid or multicloud.

Listed below are some great events coming up that you should check out!

MSP Ansible Meetup - local, in-person meetup this Thursday, 8/19. A great opportunity to meet and talk shop with local Ansible users. https://www.meetup.com/Ansible-Minneapolis/events/

AnsibleFest 2021 - 9/29 - 9/30: The yearly Ansible conference, online this year. Learn more at: https://www.redhat.com/en/events/ansiblefest-2021

Red Hat: Automation for you - beginners, experts, and everyone else - Aug 17th: https://www.redhat.com/en/events/webinar/automation-for-you-beginners-experts-and-everyone-else

Shift Left Container Security 8/19 (AWS & McAfee) & Cloud Native Application Protection Workshop (8/26)

Microsoft Azure Virtual Training: Migrating on-prem infrastructure - 9/2 and 9/3: https://mktoevents.com/Microsoft+Event/287240/157-GQE-382

Kong Summit - 9/27-9/29: https://konghq.com/kong-summit/

Accelerating Digital Transformation (On Demand)

Rancher Desktop: Kubernetes & Container Management - 8/18: https://more.suse.com/RancherAug2021OnlineMeetup.html

Spring is here, and while we're all clamoring to get outside and enjoy some fresh air, spring is also conference season, and there are some great events you should check out.

Red Hat Virtual Summit - 4/27-4/29
This is Red Hat's major yearly event. In happier times this event switches between coasts every other year, historically alternating between Boston and San Francisco. The event was held virtually last week, but if you missed it, no problem. You can register and view all of the great sessions and presentations on demand here: https://www.redhat.com/en/summit

HashiCorp & Cisco: Making application-centric infrastructure a reality with Cisco ACI and HashiCorp Consul - 5/4
This is a joint webinar between HashiCorp and Cisco that highlights using Consul (a service mesh) with Cisco ACI. This is a good session for anyone interested in how these solutions can work well together in your environment. If you're interested, register here:

AWS Virtual Workshop: Cloud and Hybrid Operations Best Practices in a Modern Enterprise - 5/10-14
AWS hosts tons and tons of events every month. This workshop is especially relevant for anyone who is using AWS but would like to know more about best practices for hybrid operations. Register at the link below:
https://pages.awscloud.com/AWS-Virtual-Workshop_2021_VW_s14-MGT.html?trk=ep_card-el_a134p000006vlZJAAY&trkCampaign=2021_VW_s14-MGT&sc_channel=el&sc_campaign=pac_Q2-2021_exlinks_events_VW_14&sc_outcome=Product_Adoption_Campaigns&sc_geo=NAMER&sc_country=mult

ServiceNow Knowledge 2021 - 5/11
ServiceNow's Knowledge is the event to attend if you're interested in all things ServiceNow and Service Management. Like the other major conferences, this one is virtual and free to attend this year. Register at the link below and build your session agenda. https://knowledge.servicenow.com/

Achieving Security Goals with Vault and AWS - 5/20
We frequently work with our clients to evaluate which in-cloud services they should use vs. cloud-agnostic solutions like Vault.
This session with AWS and HashiCorp details how to use Vault in conjunction with AWS services to achieve robust cloud security. Register at the link below: https://www.brighttalk.com/webinar/achieving-security-goals-with-vault-and-aws/

Azure Webinar Series: K8s on Azure: Lessons from Real-World Deployments - 5/18
An upcoming webinar from Microsoft focused on real-world deployments of Kubernetes workloads on Azure. This is a great opportunity to learn and ask your questions around deploying workloads into Azure. https://info.microsoft.com/ww-landing-kubernetes-on-azure-lessons-from-real-world-deployments.html

Hey everyone! I hope you're staying warm through this historic cold snap. There's no better time than the present to stay indoors and check out some upcoming virtual tech events.

The first event I'd like to mention is coming up this Thursday, Feb 18th, 2021: the Open Source North Speaker Series. It's free to attend, but you need to register in advance. Here's a look at this Thursday's speaker agenda:

Open Source North Speaker Series
Thursday, February 18, 12:00 - 1:00 PM CST
For details on each presentation and speaker - and to register - please visit https://opensourcenorth.com/speaker-series

The next event I want to mention is coming up with the CTO of Kong, Marco Palladino. Marco is a dynamic speaker and excellent CTO - always a fun, informative watch.

Kong - Automatic Observability With Service Mesh
Friday, February 26, 11:00 - 12:00 PM CT
Key takeaways: In this session you'll learn how a service mesh can observe all of your traffic in new, modern applications running on both Kubernetes and virtual machines.
To register go here: Register Now

The last event I want to draw your attention to this month is focused on the OpenShift Developer Sandbox, hosted by Red Hat DevNation, during which you will create an account and an OpenShift cluster and deploy a sample app. The dev cluster will be available for you to use for 14 days thereafter.

OpenShift Developer Sandbox
Thursday, February 18, 11:00 - 12:00 PM CST
Have you heard of the new OpenShift Developer Sandbox? Join this DevNation Tech Talk where you will be guided through the process of creating an account, creating/configuring your Developer Sandbox cluster, and deploying a sample application on OpenShift. Your OpenShift cluster will be available for your use for 14 days. If you've ever wanted to test out OpenShift, this is your chance to do it! Produced by the Red Hat Developer team, DevNation Tech Talks are live discussions led by the Red Hat technologists who create our products. Sessions include real solutions and code to help you build with open source, plus sample projects, robust discussion, and live Q&A to help you get started. Are you new to DevNation Tech Talks? See what you've missed. To register for this event go here: Register for the OpenShift Sandbox Tech Talk

In this post we'll briefly explore the history of the Opsware automation portfolio and talk about modern equivalents and replacements you should be considering.

A Brief History of Opsware
Let's start with defining what we are talking about in today's blog. I'm focusing specifically on the IT datacenter automation software, namely: Cloud Service Automation (CSA), Server Automation (SA), Network Automation (NA), and Operations Orchestration (OO) - a product which once had the acronym HPOO… you can't make it up! If we allow ourselves to hop in the way-back machine, the story starts with a Bay Area startup called Loudcloud, which was founded by Ben Horowitz and Marc Andreessen in 1999.
Loudcloud was an infrastructure and application hosting company that developed really cool management software to manage its clients' IT infrastructure. The company went public in 2001. In 2002, Loudcloud sold its managed services business to EDS. (Ed. note: EDS briefly became HP ES in an acquisition on its ultimate voyage into the sun, then merged with CSC, the joint company becoming known as DXC Technology in 2017.) Loudcloud rebranded as an enterprise software company called Opsware that focused on developing and selling its IT datacenter lifecycle management software. In 2007, Opsware was acquired by HP Software. In 2017, HP sold the software business to Micro Focus.

This software that Loudcloud / Opsware built back in the late 1990s / early 2000s is the aforementioned suite of automation software, specifically: Server Automation (System), Network Automation (System), and Process Automation (System) - all of which were rebranded slightly after the 2007 acquisition by HP Software.

So other than exercising some knowledge of the history of the software, why mention all of this? Because it is truly old tech. It's been upgraded and expanded and rewritten since the early days, but it is still that kind of old-school, top-down management interface for IT environments, with more modern amenities like the ability to write automation in YAML stapled to the side of it. At their peak, these software solutions were used to manage tens of thousands of operating systems and network devices, and to automate endpoints leveraging an agent-based architecture. And it wasn't cheap! Solutions like Server Automation, Operations Orchestration, and other similar market offerings (anyone remember BMC BladeLogic, now TrueSight?) were closed-source and partially responsible for the explosion of enterprise open source software. Sales teams had a number back then: if your device count was smaller than that number, they knew there was no business case for you to evaluate that type of software - you just couldn't get there. A good chunk of mid-market and large, but not large-enough, IT enterprises were left with no good enterprise automation solutions.

What Else Is Out There?
So what happens? People start looking for (and building) their own solutions in the mid-2000s. Open source solutions start getting community adoption, and IT staff are able to go way beyond things like CFEngine, adopting solutions like Chef and Puppet and learning more modern languages like Ruby. Chef and Puppet provide an early example of how to build a userbase on open source software, but quickly realize no one wants to suddenly pay for things they'd previously been given for free. Licensing models change; some products go open core and paywall subsequently developed features. Far more recently - that is, in the last 10 years (geez, I am getting old) - open source software supporting modern software development and hybrid cloud architectures has become the standard. And if you find yourself in a traditional IT environment, or at least one with some tech debt you're looking to retire, you really owe it to yourself to look at Ansible and Terraform.

Red Hat Ansible & HashiCorp Terraform
Ansible began life as an open source project in 2012. Automation is written in YAML, a simple, human-readable format that anyone can learn, and Ansible uses an agentless architecture.
Ansible was acquired by Red Hat in 2015, and to their great credit, Red Hat not only left Ansible Core as open source, they went and open-sourced the enterprise version, Ansible Tower (the community version of which is AWX)! An awesome move for the community. Due to the commitment to open source, Red Hat's market reach, and the extraordinarily simple YAML-based automation language, usage of Ansible in enterprises of all sizes has skyrocketed. If you're not using it today, you're in luck: you're a simple web search and download away from having an enterprise-grade solution that really acts as a jack-of-all-trades for endpoint configuration, regardless of the operating system running on the target. It's been used quite successfully for years, at very large scale, in organizations of every size.

HashiCorp Terraform launched in the community in 2014. It has since seen massive growth as an open source project and as both SaaS-based and on-premises enterprise software solutions. Terraform is an extremely powerful tool that enables infrastructure-as-code use cases. Terraform manages external resources using what it calls providers and gives the end user the ability to declare the end-state configuration leveraging those external providers. This declarative architecture allows for highly modular, scalable, and reusable code to configure highly complex endpoints, platform-as-a-service offerings, and more.

In practice, we see Ansible + Terraform being used in concert with code release processes, as well as being front-ended by service catalogs like ServiceNow to enable a limitless variety of push-button IT capabilities. Please contact us if you'd like to learn more about using Ansible or Terraform.

Option 1:
brew install kind
Option 2:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin
kind version
kind v0.11.1 go1.16.4 darwin/arm64
kubectl cluster-info --context kind-kind
kubectl get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
kind-control-plane   Ready    control-plane,master   5m54s   v1.21.1
kind delete cluster --name <cluster name>
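To confirm that the delete also cleaned up the kubeconfig entry mentioned earlier, a quick sketch (context names follow the kind-<cluster name> pattern):

kind get clusters             # the deleted cluster should no longer be listed
kubectl config get-contexts   # its kind-<cluster name> context entry is gone from ~/.kube/config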
Creating a multi-node cluster:
cluster01-kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  disableDefaultCNI: true
  apiServerPort: 6443
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: "10.96.0.1/12"
    podSubnet: "10.240.0.0/16"
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 2379
    hostPort: 2379
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 2222
    hostPort: 2222
  extraMounts:
  - hostPath: /dev
    containerPath: /dev
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
kubeadmConfigPatches: This section allows you to set values for other cluster options during the installation. For our configuration, we are setting the CIDR ranges for the serviceSubnet and the podSubnet.
nodes: For our cluster, we will create a single control plane node and a single worker node.
role: worker: This is the second node section, which allows you to configure options that the worker nodes will use. For our cluster, we have added the same local mounts that will be used for Falco, and we have also added additional ports to expose for our Ingress controller.

Multi-node cluster configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
kind create cluster --name cluster01 --config cluster01-kind.yaml
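Note that the custom cluster config shown earlier sets disableDefaultCNI: true, so the nodes will stay NotReady until you install a CNI yourself. A minimal sketch using Calico (the manifest URL is an assumption and may change between Calico releases; the kubectl context follows the kind-<cluster name> pattern):

kubectl apply --context kind-cluster01 -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get nodes --context kind-cluster01   # nodes move from NotReady to Ready once the CNI is running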
# generated by kind
global
  log /dev/log local0
  log /dev/log local1 notice
  daemon
resolvers docker
  nameserver dns 127.0.0.11:53
defaults
  log global
  mode tcp
  option dontlognull
  # TODO: tune these
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  # allow to boot despite dns don't resolve backends
  default-server init-addr none
frontend control-plane
  bind *:6443
  default_backend kube-apiservers
backend kube-apiservers
  option httpchk GET /healthz
  # TODO: we should be verifying (!)
  server multinode-control-plane multinode-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane2 multinode-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server multinode-control-plane3 multinode-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
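One way to pull that file straight from the Docker host, as a quick sketch (this assumes the cluster was created with --name multinode, so KinD names the load balancer container multinode-external-load-balancer):

docker ps --filter name=external-load-balancer                                        # find the HAProxy container KinD created
docker exec multinode-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg   # print the generated config shown above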
Each entry in the kube-apiservers backend contains the Docker IP address of a control plane node with a port assignment of 6443, targeting the API server running in the container. When you request https://127.0.0.1:32791, that request hits the HAProxy container; then, using the rules in the HAProxy configuration file, the request is routed to one of the three control plane nodes in the list.

# Download and install eksctl (macOS, via Homebrew)
brew install weaveworks/tap/eksctl
eksctl version
ssh-keygen -t rsa
~/.ssh/id_rsa
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-eks-cluster
  region: us-east-2
nodeGroups:
  - name: nodegroup-1
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
  - name: nodegroup-2
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub

eksctl create cluster --fargate
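Once the create command finishes, a few sanity checks and the matching teardown, as a sketch (these assume the region and config file used above, and that eksctl has already written the new context into your kubeconfig):

eksctl get cluster --region us-east-2          # confirm the cluster shows up as ACTIVE
kubectl get nodes                              # the nodegroup instances should register as Ready
eksctl delete cluster -f my-eks-cluster.yaml   # tear the cluster and its CloudFormation stacks back down when finished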