Blog & Insights

Creating a Rancher cluster with Windows worker nodes

By Brad Johnson, Lead DevOps Engineer

In this guide we will walk through building a Rancher cluster with Windows worker nodes. The cluster will still need a Linux master and a Linux worker node as well. As with our last Rancher blog post we will be using CentOS 7. If you do not already have a Rancher management node, please see our previous blog post about setting one up; that part of the process is the same. We are going to assume you are starting at the point where you have a Rancher management interface up and accessible to log in to.

To use Windows worker nodes we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher’s ability to boot nodes for us automatically, and we will need to create the nodes by hand before we bring up our Rancher cluster.

We are going to use VMware vSphere 6.7 for our VM deployments. The Windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image and do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339, and did not need to install any additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See the link below for additional details on Windows Server version information.
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

  1. Provision two CentOS 7 nodes in VMware with 2 CPUs and 4 GB of RAM or greater.
  2. After they have booted, log in to the nodes and prepare them to be added to Rancher. We have created the following script to help with this. Please add any steps your org needs as well. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-node-prep.sh
  3. Provision the Windows Server worker node in vSphere. Note that 1.5 CPUs and 2.5 GB of RAM are reserved for Windows, so you may want to over-provision this node a bit. I used 6 CPUs and 8 GB of RAM so there was some overhead in my lab.
  4. Modify the Windows node CPU settings and enable “Hardware virtualization”, then make any other changes you need and boot the node.
  5. You can confirm the Windows node version by running ‘winver’ at the PowerShell prompt.
  6. Check to make sure the VMware Tools version you are running is 11.1.0 or later.
  7. After you boot the Windows node, open an admin PowerShell prompt and run the commands in this PowerShell script to set up the system, install Docker, and open the proper firewall ports. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-windows-node-prep.ps1
  8. After you run the script, set the hostname, make any other changes for your org, and reboot.
  9. Once the reboot is complete, open a PowerShell prompt as admin and run ‘docker ps’, then run ‘docker run hello-world’ to test the install (see the example below).
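
As a quick sanity check, the verification in step 9 looks something like this from an elevated PowerShell prompt (a minimal sketch; exact output will vary by environment):

docker version          # confirm the Docker client and engine both respond
docker ps               # a fresh install should return an empty container list
docker run hello-world  # pulls a test image and prints a confirmation message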

There are more details here on the docker install method we used:
https://github.com/OneGet/MicrosoftDockerProvider

This page contains documentation on an alternate install method for docker on windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html

For some Windows containers it is important that your base image matches your Windows version. Check your Windows version with ‘winver’ at the command prompt.
If you are running 1809, this is the command to pull the current Microsoft Nano Server image:

docker image pull mcr.microsoft.com/windows/nanoserver:1809

Now that we have our nodes provisioned in VMware with Docker installed, we are ready to create a cluster in Rancher.

  1. Log in to the Rancher management web interface, select the global cluster screen, and click “add cluster”.
  2. Choose “From existing nodes (custom)”; this is currently the only option that supports Windows.
  3. Set a cluster name, choose your Kubernetes version, and for Network Provider select “Flannel” from the dropdown.
  4. Flannel is the only network type that supports Windows; the Windows Support option should now allow you to select “Enabled”. Leave the Flannel Backend set to VXLAN.
  5. You can now review the other settings, but you likely don’t need to make any other changes. Click “Next” at the bottom of the page.
  6. You are now presented with a screen showing the docker commands used to add nodes. Copy these commands and run them by hand on each node. Be sure to run the Windows command in an admin PowerShell prompt.
    1. For the master node select Linux with etcd and Control Plane.
    2. For the Linux worker select Linux with only Worker.
    3. For the Windows worker node select Windows; Worker is the only option.
  7. The cluster will now provision itself and come up. This may take 5-10 minutes.
  8. After the cluster is up, select the cluster name from the main dropdown in the upper left, then go to “Projects/Namespaces” and click on “Project: System”. Be sure you are on the Resources > Workloads page. All services should say “Active”. If anything is not active, you may need to troubleshoot further.

Troubleshooting

Every environment is different, so you may need to go through some additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. Most of the issues we have seen getting started were caused by DNS, firewalls, SELinux being set to “enforcing”, and automatically generated certs that used “.local” domains or short hostnames. The commands below can help rule out the SELinux and firewall causes on the Linux nodes.
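
On the CentOS 7 nodes, these standard commands can help rule out the SELinux and firewall causes while troubleshooting (a sketch for lab use; adjust to your org's security policy):

getenforce                                  # shows Enforcing, Permissive, or Disabled
setenforce 0                                # switch to permissive until the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # persist the change across reboots
systemctl status firewalld                  # confirm whether the host firewall is running
systemctl stop firewalld                    # lab only: take the firewall out of the equation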

If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

You can use these commands in Windows to check the Docker service configuration and to stop and start it.

sc.exe qc docker
sc.exe stop docker
sc.exe start docker

The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. This seems to happen most often when organizations adopt new and emerging technologies. Often teams haven't built up their internal skills and capabilities around tech like Kubernetes or automation platforms such as Red Hat Ansible. Or perhaps the team is already very skilled, but wants someone to help with their OpenShift 3.x -> 4.x upgrade path, or needs someone to write a new Ansible module so that they can expand their ability to offer automation capabilities via their playbooks.

There hasn't been a good way to get this kind of incremental help – no granular consumption model for technical expertise. It's not a function of your vendor's L1 support; your vendor will tell you to buy a TAM or their own expensive consulting services. It's also not something readily available in the community at large. There are user forums and networks for days, but will you get a response to your questions? Will the responses be correct?

Keyva created Guru Services to address this exact issue. It's more than L1 support and not as heavy as a consulting engagement; it's enterprise-grade and far more reliable than crowdsourcing the community for answers and assistance.

Guru Services is just as easy to use: choose from three service levels and you're on your way. You'll have access to our client portal, from which you can schedule your On Demand Guru. We'll send you a meeting invite with web conference information and you'll be over the hump and on your way in no time. We currently provide On Demand Gurus for Red Hat Ansible, OpenShift, and Kong, and are actively adding technologies to our suite of Guru Services. To learn more about these offerings, check out our vendor pages for Kong (https://keyvatech.com/kong-enterprise/) and Red Hat (https://keyvatech.com/red-hat/). Reach out to our Keyva team at info@keyvatech.com to request additional information or a quote on our Guru Services.

July Virtual Events You Should Attend

With most of us in IT still working from home and unable to attend in-person events, there's been an absolute explosion of online events. In our opinion, here are some of the best technology events you can attend while we wait for the all-clear to resume attending our local events. The following is a selection of events being put on by some of our vendor partners, arranged by vendor.

HashiCorp:  

We are a go-to-market and services delivery partner of HashiCorp. By now you've probably heard of them and their portfolio. Terraform is one of their flagship products, an excellent solution for delivering infrastructure as code and platform provisioning. They have three interesting events, the first of which is a customer roundtable (you need to register an account, but do not need to be a customer). These events are usually very good for seeing how other organizations are using Terraform. Additionally, there are two workshops later in the month where you can get some hands-on time with Terraform in AWS or Azure. Visit the links below to register. Note all times are in CDT.

Customer Roundtable – 7/16 – 12-2:30: https://events.hashicorp.com/strategydays/july16 

Terraform on AWS Workshop – 7/14 – 3-7: https://events.hashicorp.com/workshops/terraform-july14 

Terraform on Azure Workshop – 7/28 – 11-2:30: https://events.hashicorp.com/workshops/terraform-july28 

ServiceNow:  

We are an ISV of ServiceNow and have been building NOW certified integrations between ServiceNow and other major vendor solutions for years. This event should be interesting to all of us: Optimizing Agile Enterprises post COVID. Visit the link below to register for the event.  

Building, sustaining, and optimizing an agile enterprise post COVID-19 – 7/21 11-12: https://go.servicenow.com/LP=14515 

Snowflake:  

Snowflake is a next-gen, cloud-based data warehousing solution that is a truly unique and transformational technology. If you have an old, crusty data warehouse on-prem gathering dust while it bleeds your team dry, you should check out these events. They have a live demo every Thursday, and you can hop into their well-attended users group taking place later this month. Visit the links below to register.

Live Demo Every Thursday: https://www.snowflake.com/live-demo/?utm_cta=events-page-featured-live-demo (click Americas and select the link for the session)

Snowflake users group – 7/15 – 11 AM: https://usergroups.snowflake.com/events/details/snowflake-group-by-presents-group-by-data-heroes-a-virtual-symposium-led-by-snowflake-users/?_ga=2.135575154.299628294.1594135230-1207227298.1594135230#/ 

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer

In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to easily deploy a Kubernetes cluster in VMware with only minimal configuration. The intent of this tutorial is to give you a base for a scalable development cluster where you can test deploying applications or configuring other Kubernetes software without setting up DNS or external load balancers.

We will use VMware vSphere 6.7 for our deployment. For the OS and software versions we are going to use the ones recommended by Rancher support. As of May 2020, Docker has an issue with cluster DNS and firewalld interfering with each other on CentOS/RHEL 8, so we will be using CentOS 7 and Docker 19.03.x for our management server; however, you can use any supported OS. For the master and worker nodes we will be using RancherOS or CentOS. Using RancherOS eliminates the need to build a custom VM template in vSphere that uses cloud-init.

Requirements for this exercise:
- Admin access to vSphere or a service account with access.
- Ability to create RHEL/CentOS 7 VMs in vSphere.
- Guest VM network has internet access.

In this deployment Rancher has two primary components: the Rancher cluster manager and the Kubernetes cluster we will manage. For production use, the cluster management component would be a container deployed on its own Kubernetes cluster. For ease of install and use in a testing and lab deployment, we can simply deploy the management application as a Docker container on a single server. This configuration is not recommended for production and cannot be converted into a production scenario later. If you want a single-node cluster manager that can be converted into a production-ready setup, deploy the management container on a one-node Kubernetes cluster that can later be scaled up.

Rancher management server deployment

All commands run as root or with sudo unless noted:

Spin up a standard or minimal CentOS/RHEL 7 server, 2 CPU, 4GB RAM. I used a 100GB thin provisioned primary disk.

Install docker using the Rancher script. Alternatively, install by hand using documentation from docker.

curl https://releases.rancher.com/install-docker/19.03.sh | sh

Create a directory for persistent Rancher data storage

mkdir /opt/rancher

Run the Rancher container with a persistent data mount, listening on ports 80/443. This uses a self-signed cert for SSL.

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:latest
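
If you want to confirm the Rancher container came up before moving on, a quick check (illustrative only) is:

docker ps                      # the rancher/rancher container should show a status of Up
docker logs --tail 20 $(docker ps -q --filter ancestor=rancher/rancher:latest)   # recent startup log lines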

Log in to the rancher web interface using your web browser. The first login will prompt you to set the password for the admin user. Set a password and you should see the main management user interface.

Optional - Creating a CentOS 7 node template for cluster nodes that includes cloud-init.
Cloud-init will allow you to specify additional configuration in Rancher that happen when Rancher creates new nodes, like firewall settings.

    1. Boot a new VM with a CentOS iso attached and install the OS manually
    2. Customize disk layout as needed
    3. Leave the system as DHCP
    4. Set a default root password
    5. Make any changes needed by your org
    6. After booting the system, clean things up so you can turn it into a template. We have created a script for this; please edit it as needed. It sets SELinux to permissive, as Rancher may have issues with the DNS service in enforcing mode without additional configuration. The last command in this script will shut down the VM.
https://raw.githubusercontent.com/keyvatech/blog_files/master/centos7_cloudinit_vmtemplate.sh

In vCenter find the VM, right-click on it, then select Clone > Clone To Template.
This template can now be used in Rancher with cloud-init for additional provisioning.
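
For reference, the cloud config you hand to Rancher is plain cloud-init YAML. A minimal illustrative sketch (not the config used in this guide; that one is linked in the cluster steps below) might simply disable firewalld on lab nodes:

#cloud-config
# Illustrative only: run these commands once on first boot of a new node
runcmd:
  - systemctl stop firewalld
  - systemctl disable firewalld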

Now we can create your new Rancher cluster. Note that the Rancher coredns workload will not work with SELinux set to enforcing; if you require enforcing mode you will need additional configuration. It is also important to use consistent DNS names when deploying. FQDNs are best, but do not mix short and full hostnames, as that causes certificate issues. Rancher will generate self-signed certs if you do not provide your own.

1) From the main web interface cluster page click add cluster, then select vSphere

2) Enter a cluster name like "rancher1"

3) Create a node template for your nodes. This can be used for both master and worker nodes.

    1. Click "Add Node Template"
    2. Fill out the Account Access section with your vSphere login info. If the credentials worked you will see the scheduling section populate. If it failed, you can add a new credential with a new name, then delete the ones that didn't work later by clicking on the user profile picture and selecting "cloud credentials".
    3. Fill in the scheduling information for your data center, resource pool, data store and folder.
    4. Edit the instance options and specify 2 CPUs and 4096MB RAM or more.
    5. Under Creation Method select either "Install from Boot2Docker ISO (legacy)" or the CentOS 7 node template if you made one.
    6. If you are using a CentOS template with cloud-init fill in the Cloud Config YAML section. We have created the following config which handles firewall config. You can extend this as needed or modify it and create a different template for each node type if desired.
      https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-cloud-init-config.txt
    7. Select a Network to deploy to.
    8. Review the remaining settings and adjust if you need them in your environment.
    9. Name the template at the bottom of the page. The template can likely be used for multiple types if desired so keep the name generic. I prefer to use names that indicate node OS and resources like "centos7-2CPU-4GB"
    10. Click create.

4) Enter the name prefix for your master and worker nodes, for example "rancher1-master" and "rancher1-worker". When nodes are created, a number will be appended to the end.

5) For the master node select the etcd and control plane checkboxes

6) For the worker node select the worker checkbox.

7) Click Create at the bottom of the page. Rancher will now provision your nodes in vCenter.

You should now have a basic functional Kubernetes cluster.

If you are interested in deploying Windows worker nodes with Rancher please see our post here.

 

Helpful links: 

https://rancher.com/support-maintenance-terms/#2.4.x

https://rancher.com/docs/rancher/v2.x/en/installation/requirements/

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at info@keyvatech.com.


Evolving Solutions and Keyva Named Red Hat Apex Partner

MINNEAPOLIS, JULY 1, 2020 – Evolving Solutions and its affiliated company, Keyva, jointly announced today that they have been named as a Red Hat Apex Partner in North America. As part of Red Hat’s partner ecosystem, Evolving Solutions and Keyva have shown that they have deep expertise in emerging technologies and hybrid cloud infrastructure platforms. Red Hat Apex Partners have made investments in Red Hat’s portfolio of application development, delivery, and integration resources, and bring industry expertise to Red Hat regulated industries. They are well-trained and very committed to working with Red Hat on business opportunities.

An invitation-only program, Red Hat Apex Partners are able to support implementations of Red Hat’s emerging technologies including Red Hat OpenShift, Red Hat Ansible Automation, Red Hat OpenStack Platform, Red Hat Middleware, Red Hat CloudForms, Red Hat OpenShift Container Storage, Red Hat Ceph Storage, and Red Hat Gluster Storage, to position customers for success. Partners such as Evolving Solutions and Keyva offer the technical expertise and working practices to deliver a high degree of client satisfaction across a range of deployment scenarios and projects.

Evolving Solutions and Keyva develop and support certified integrations between Red Hat emerging technologies such as OpenShift and Ansible and other leading technologies such as ServiceNow. End-to-end automation and integration is at the core of how the companies help clients drive business value and technical capabilities from their technology investments. “Our family of companies has a deep level of expertise in the Red Hat solutions and the broader technology marketplace,” says Jaime Gmach, President/CEO of Evolving Solutions and Keyva. “As highly skilled integrators, this collaboration allows us to offer clients greater benefit and deliver value to their organizations.”

According to Ernest Jones, Vice President of North American Partner Sales at Red Hat, “Red Hat’s partner ecosystem is a vital component in delivering powerful, flexible, and open solutions to global enterprises. We’re pleased to have Evolving Solutions and Keyva as Apex Partners and look forward to delivering open innovation to our joint clients with them.”

“The Apex partnership signifies Red Hat’s acknowledgment of Evolving Solutions’ and Keyva’s investment and success in delivering value for our clients using the Red Hat portfolio. As one of a small group of Apex partners in North America, we are well-positioned to help our clients succeed with their digital transformation utilizing best of breed technologies such as Ansible Tower and OpenShift,” states Gmach.

###

Media Contact

Beth Naffziger

Director of Marketing

Beth.n@evolvingsol.com

About Evolving Solutions

Evolving Solutions is a technology solutions provider that helps clients modernize and automate their mission-critical infrastructure to support digital transformation. Our business is client-centric, providing consulting and delivering technical solutions to enable modern operations in the hybrid cloud.

Evolving Solutions has deep partner relationships with IBM, HPE, Cisco, AppDynamics, Dynatrace, NetApp, Nutanix, Azure, Red Hat, AWS and over 70 other vendors that help us deliver exceptional results to service our clients’ technology needs. Learn more at www.evolvingsol.com.

About Keyva

Keyva is a consulting firm focused on delivering innovative technology solutions. Keyva simplifies IT to free up time and allow businesses to focus on their core offering and on client value. Keyva consultants help enterprises automate multi-clouds, multi-vendors, processes, applications and infrastructure within their environment, while leading transformation initiatives to allow companies to take the next step on their business journey.  Learn more at www.keyvatech.com.

###

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, Ansible and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. The OpenStack Word Mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission. Red Hat is not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

Clustering guide for Red Hat Ansible Tower

This guide will walk through how to set up Red Hat Ansible Tower in a highly available configuration. In this example, we will set up 4 different systems – 1 for the PostgreSQL database (towerdb) and 3 web nodes for Tower (tower1, tower2, tower3).

We will be using Ansible Tower v3.6 and PostgreSQL 10 on RHEL 7 systems running in VMware for this technical guide. The commands for setting up the same configuration on RHEL 8 will differ in some cases. This guide does not account for clustering of the PostgreSQL database; if you are setting up Tower in an HA capacity for production environments, it is recommended to follow best practices for PostgreSQL clustering to avoid a single point of failure.

First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 4 systems – towerdb, tower1, tower2, tower3 

subscription-manager register 

subscription-manager refresh 

subscription-manager attach --auto 

subscription-manager repos --list  

subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms 

subscription-manager repos --enable rhel-7-server-rpms 

subscription-manager repos --enable rhel-7-server-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms 

subscription-manager repos --enable rhel-7-server-optional-source-rpms 

subscription-manager repos --enable rhel-7-server-extras-rpms 

sudo yum update 

sudo yum install wget 

sudo yum install python36 

sudo pip3 install httpie

Also: 

  a) Update the /etc/hosts file on all 4 hosts with entries for all systems
  b) Generate and copy the SSH keys on all systems (see the sketch below)
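
For reference, a minimal sketch of that prep (hostnames and IP addresses below are placeholders for your environment):

# /etc/hosts entries on every system
192.0.2.10  towerdb.example.com  towerdb
192.0.2.11  tower1.example.com   tower1
192.0.2.12  tower2.example.com   tower2
192.0.2.13  tower3.example.com   tower3

# generate a key pair once, then copy it to each host so the systems can reach one another over SSH
ssh-keygen -t rsa
for host in towerdb tower1 tower2 tower3; do ssh-copy-id $host; done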

On the Database system (towerdb), we will now set up PostgreSQL 10 

sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm 

sudo yum install postgresql10 postgresql10-server 

Initialize the database 

/usr/pgsql-10/bin/postgresql-10-setup initdb 

systemctl enable postgresql-10 

systemctl start postgresql-10 

Verify you can log in to the database 

sudo su - postgres 

psql  

# \list 

This command will show you the existing (default) database list. 

Next, we will configure the database to make sure it can talk to all the Tower web nodes: 

sudo vi /var/lib/pgsql/10/data/pg_hba.conf 

Add/update the line with 'md5' entry to allow all hosts:  

host    all             all             0.0.0.0/0            md5 

Update the postgresql.conf file 

sudo vi /var/lib/pgsql/10/data/postgresql.conf

Add/update the entry to listen to all incoming requests:  

listen_addresses = '*' 

Restart the database services, to pick up the changes made: 

sudo systemctl restart postgresql-10 

sudo systemctl status postgresql-10 

 

On each of the Tower web nodes (tower1, tower2, tower3), we will set up the Ansible Tower binaries: 

mkdir ansible-tower 

cd ansible-tower/ 

wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz 

tar xvzf ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz  

cd ansible-tower-setup-bundle-3.6.2-1 

python -c 'from hashlib import md5; print("md5" + md5("password" + "awx").hexdigest())' 

md5f58b4d5d85dbde46651335d78bb56b8c 

Here 'password' is the password you will be hashing and then using when authenticating against the database. (Note that this one-liner assumes Python 2, the default /usr/bin/python on RHEL 7; under Python 3 the strings would need to be passed as bytes.) 

Back on the database server (towerdb), we will go ahead and set up the database schema pre-requisites for Tower install:  

sudo su - postgres 

psql  

postgres=# CREATE USER awx; CREATE DATABASE awx OWNER awx; ALTER USER awx WITH password 'password'; 

On tower1, tower2, and tower3, update the inventory file and run the setup. Make sure your inventory file contents match on all tower web tier systems. 

You will need to update at least the following values and customize them for your environment: 

admin_password='password' 

pg_password='password' 

rabbitmq_password='password' 

Under the [tower] section, you will have to add entries for all your tower web hosts. The first entry will typically serve as the primary node when the cluster is run.  
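
For illustration, the relevant portions of the setup bundle's inventory file end up looking roughly like this (hostnames are placeholders; check the inventory shipped with your setup bundle for the full list of variables):

[tower]
tower1.example.com
tower2.example.com
tower3.example.com

[database]
towerdb.example.com

[all:vars]
admin_password='password'
pg_host='towerdb.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
rabbitmq_password='password'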

We will now run the setup script: 

./setup.sh 

You can either copy this inventory file to the other 2 tower systems (tower2 and tower3) or replicate its content to match the file on tower1, and then run the setup script on the other 2 tower systems as well.  

Once the setup script has run successfully on all hosts, you will be able to test your cluster instance. You can do so by going to one of the tower hosts' URLs, initiating a job template, and seeing which tower node it runs on – based on the node designated as the primary at that time. You will also be able to view the same console details and job run logs regardless of which tower web URL you go to.  

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to info@keyvatech.com.

How to set up a Kubernetes cluster on Red Hat Enterprise Linux 7

Let us look at setting up a Kubernetes cluster with 1 master node (kubemaster.bpic.local) and 2 worker nodes (kubenode1.bpic.local, kubenode2.bpic.local) on VMware based RHEL 7 instances. 

We have set up an additional user (other than root) on these machines, as we will be running kubectl (client) commands as the non-root user.

First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 3 components – kubemaster, kubenode1, kubenode2

subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms

The rhel-7-server-extras-rpms repo contains docker and other utilities. 

Since this is our lab environment, we will be disabling firewalls. In a production environment, you should instead open the specific ports needed by your applications and by the Kubernetes components rather than disabling the firewall completely.

systemctl disable firewalld 
systemctl stop firewalld

Since we are using VMware VMs, it is recommended to set up VMware-tools 

yum install perl 
mkdir /mnt/cdrom 
mount /dev/cdrom /mnt/cdrom 
cp /mnt/cdrom/VMwareTools-version.tar.gz /tmp/ 
cd /tmp && tar -zxvf VMwareTools-version.tar.gz 
/tmp/vmware-tools-distrib/vmware-install.pl  
umount /mnt/cdrom 

Update the yum repositories

yum -y update 
yum install yum-utils

Configure additional settings

swapoff -a

Also, comment out the swap line in /etc/fstab:

#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0 

Install and enable docker

yum -y install docker 
systemctl enable docker 
systemctl start docker 


Set up repo for Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes 
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
EOF 

Additional enforcement settings

setenforce 0

Update the config file to change the selinux settings

vi /etc/selinux/config 

Change the settings from 

SELINUX=enforcing to SELINUX=permissive 

Install and enable kubelet service

yum -y install kubelet kubeadm kubectl 
systemctl enable kubelet 
systemctl start kubelet 

Enable sysctl settings

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Alternatively, you can update the /etc/sysctl.conf file 

vi /etc/sysctl.conf 

Add/update the following lines

net/bridge/bridge-nf-call-iptables = 1 

net/ipv4/ip_forward = 1 

On the Kubernetes master node only, initialize the cluster with the pod network CIDR that flannel expects:

kubeadm init --pod-network-cidr=10.244.0.0/16 

Once kubeadm init has completed and the kubeconfig has been set up for your user (see the non-root user steps below), apply the flannel networking manifest:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

If you need to print the node join command again later, run:

kubeadm token create --print-join-command 

Capture the results of the above command, specifically the part describing how to add nodes to this cluster

You can now join any number of machines by running the following on each node as root:

kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \ 

    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34

And then, after changing to a non-root user, run the following commands to set up the kubeconfig:

su - nonrootuser 
mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config 

On the Kubernetes nodes (kubenode1 and kubenode2), we will run the join command, to add those nodes to the cluster:

kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \ 

    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34 

You can now test the cluster by running the command below on any of the nodes, or on the master, as the non-root user:
kubectl get nodes

You should see results like this (changed for your system names) showing the cluster configuration:

 
NAME                    STATUS   ROLES    AGE   VERSION 
kubemaster.bpic.local   Ready    master   15h   v1.17.3 
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3 
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3
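
As an optional smoke test (illustrative, not part of the original setup), you can deploy a test workload and confirm it schedules on one of the worker nodes:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide     # the pod should land on kubenode1 or kubenode2 and reach Running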

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to info@keyvatech.com.

Step-by-step guide: Set up a Windows worker node for Kubernetes cluster

By Anuj Tuli, Chief Technology Officer

Typically when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But there are a large number of organizations that use Windows and .NET-based applications, and they are still trying to determine the best way forward for containerizing their Windows-based, business-critical applications.  

Kubernetes added support for Windows based components (worker nodes) starting with release v1.14.  

In the example below, we will join a Windows worker node (v1.16.x) with a Kubernetes cluster v1.17.x. 

As of this writing, Windows worker nodes are supported on the Windows Server 2019 operating system only. In this example, we will leverage the flannel network set up on our RHEL master node (see the cluster setup instructions above).  

Step 1: Download the sig-windows-tools repository from https://github.com/kubernetes-sigs/sig-windows-tools and extract the files 

Step 2: Navigate to and update the Kubernetes configuration file at C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json 

In our instance, we will update the following values: 

  • Interface Name 
  • Control Plane details – IP address, username, KubeadmToken, KubeadmCAHash. The values for these come from the output of the kubeadm join command printed when the master node was set up (see the sketch after this list).
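
For orientation only, the fields being edited live in a JSON structure roughly like the sketch below. The exact layout varies between sig-windows-tools releases, so treat this as an illustration and follow the file you actually downloaded; the placeholder values come from your own cluster's kubeadm join output:

{
  "Cni": {
    "InterfaceName": "Ethernet0"
  },
  "Kubernetes": {
    "ControlPlane": {
      "IpAddress": "kubemaster.bpic.local",
      "Username": "root",
      "KubeadmToken": "<token from the kubeadm join command>",
      "KubeadmCAHash": "<discovery-token-ca-cert-hash from the kubeadm join command>"
    }
  }
}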

Step 3: Open a PowerShell console in Admin mode and install Kubernetes via the downloaded script. This step requires a reboot of the server. 

PS C:\Users\Administrator> cd C:\<Download-Path>\kubernetes\kubeadm 

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -install 

Step 4: Once Kubernetes is installed, join the node to the existing Kubernetes cluster. This step uses the values you entered in the modified Kubeclustervxlan.json file. 

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -join 

Step 5: Verify that the Windows worker node was successfully added to the cluster. You can do this by running the kubectl command from any client (Windows or Linux nodes on the cluster)  

PS C:\<Download-Path>\kubernetes\kubeadm> kubectl get nodes 

NAME                    STATUS   ROLES    AGE   VERSION 
kubemaster.bpic.local   Ready    master   15h   v1.17.3 
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3 
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3 
win-eo5rgh4493r         Ready    <none>   12h   v1.16.2 

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to info@keyvatech.com 

[post_title] => Step-by-step guide: Set up a Windows worker node for Kubernetes cluster [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => step-by-step-guide-set-up-a-windows-worker-node-for-kubernetes-cluster [to_ping] => [pinged] => [post_modified] => 2020-03-26 18:42:22 [post_modified_gmt] => 2020-03-26 18:42:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2282 [menu_order] => 6 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 8 [current_post] => -1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 2825 [post_author] => 11 [post_date] => 2020-07-30 14:39:25 [post_date_gmt] => 2020-07-30 14:39:25 [post_content] =>

By Brad Johnson, Lead DevOps Engineer

In this guide we will deal with building a Rancher cluster with windows worker nodes. The cluster will still need a Linux master and worker node as well. As with our last Rancher blog post we will be using CentOS 7. Please see our last blog post about setting up a Rancher management node if you do not already have one. That part of the process is the same. We are going to assume you are starting at the point that you have a Rancher management interface up and accessible to log in to.

In order to allow us to use Windows worker nodes we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher’s ability to automatically boot nodes for us and we will need to create the nodes by hand before we bring up our Rancher cluster.

We are going to use VMware vSphere 6.7 for our VM deployments. The windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image and do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339 and did not need to install and additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See here for additional details on version information.
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

  1. Provision two CentOS 7 nodes in VMware with 2CPUs and 4GB of RAM or greater.
  2. After they have booted, log in to the nodes and prepare them to be added to Rancher. We have created the following script to help with this. Please add any steps your org needs as well. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-node-prep.sh
  3. Provision the windows server worker node in vSphere, note that 1.5 CPUs and 2.5GB of RAM are reserved for windows. You may want to over-provision this node by a bit. I used 6CPUs and 8GB ram so there was some overhead in my lab.
  4. Modify the windows node CPU settings and enable “Hardware virtualization”, then make any other changes you need and boot the node.
  5. You can confirm the windows node version by running ‘winver’ at the powershell prompt.
  6. Check to make sure the VMware Tools version you are running is 11.1.0 or later.
  7. After you boot the windows node open an admin powershell prompt and run the commands in this powershell script to set up the system, install docker and open the proper firewall ports. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-windows-node-prep.ps1
  8. After you run the script you can then set the hostname, make any other changes for your org and reboot.
  9. Once the reboot is complete open a powershell prompt as admin and run ‘docker ps‘, then run ‘docker run hello-world‘ to test the install.

There are more details here on the docker install method we used:
https://github.com/OneGet/MicrosoftDockerProvider

This page contains documentation on an alternate install method for docker on windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html

For some windows containers it is important your base images matches your windows version. Check your Windows version with ‘winver’ on the command prompt.
If you are running 1809 this is the command to pull the current microsoft nanoserver image:

docker image pull mcr.microsoft.com/windows/nanoserver:1809

Now that we have our nodes provisioned in VMware with docker installer we are ready to create a cluster in Rancher.

  1. Log in to the rancher management web interface, select the global cluster screen and click “add cluster”.
  2. Choose “From existing nodes (custom)” this is the only option where windows is supported currently.
  3. Set a cluster name, choose your kubernetes version, for Network Provider select “Flannel” from the dropdown.
  4. Flannel is the only network type to support windows, the windows support option should now allow you to select “Enabled“. Leave the Flannel Backend set to VXLAN.
  5. You can now review the other settings, but you likely don’t need to make any other changes. Click “Next” at the bottom of the page.
  6. You are now presented with the screen showing docker commands to add nodes. You will need to copy these commands and run them by hand on each node. Be sure to run the windows command in an admin powershell prompt.
    1. For the master node select Linux with etcd and Control Plane.
    2. For the linux worker select Linux with only Worker.
    3. For the windows worker node select windows, worker is the only option.
  7. This cluster will now provision itself and come up. This may take 5-10 mins.
  8. After the cluster is up select the cluster name from the main drop down in the upper left, then go to “Projects/Namespaces” and click on “Project: System”. Be sure you are on the Resources > Workloads page. All services should say “Active”. If there are any issues here you may need to troubleshoot further.

Troubleshooting

Every environment is different, so you may need to go through some additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. A majority of the issues we have seen getting started were caused by DNS, firewalls, selinux being set to “enforcing”, and automatic certs that were generated using “.local” domains or short hostnames.

If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

You can use these commands in windows to check on the docker service status and restart it.

sc.exe qc docker
sc.exe stop docker
sc.exe start docker
[post_title] => Creating a Rancher cluster with Windows worker nodes [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => creating-a-rancher-cluster-with-windows-worker-nodes [to_ping] => [pinged] => [post_modified] => 2020-08-06 14:25:22 [post_modified_gmt] => 2020-08-06 14:25:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2825 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 52 [max_num_pages] => 7 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => c5f8786ab4150007f777a5c4c925f836 [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) [tribe_is_event] => [tribe_is_multi_posttype] => [tribe_is_event_category] => [tribe_is_event_venue] => [tribe_is_event_organizer] => [tribe_is_event_query] => [tribe_is_past] => )
code displayed on computer monitor

Creating a Rancher cluster with Windows worker nodes

By Brad Johnson, Lead DevOps Engineer In this guide we will deal with building a Rancher cluster with windows worker nodes. The cluster will still need a Linux master and worker node as well. As with our ...
Read more
person typing on electronic device and reviewing graphs

The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. This seems to happen most often when ...
Read more
working at future speed

July Virtual Events You Should Attend

With most of us in IT still working from home and unable to attend in-person events there’s been an absolute explosion of online events. In our opinion here are some of the best technology events you can ...
Read more
code displayed on computer monitor

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to easily deploy a Kubernetes cluster in VMware ...
Read more

EVOLVING SOLUTIONS AND KEYVA NAMED RED HAT APEX PARTNER

Evolving Solutions and Keyva Named Red Hat Apex Partner MINNEAPOLIS, JULY 1, 2020 – Evolving Solutions and its affiliated company, Keyva, jointly announced today that they have been named as a Red Hat Apex Partner in North ...
Read more

Clustering guide for Red Hat Ansible Tower

This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes ...
Read more

How to set up a Kubernetes cluster on Red Hat Enterprise Linux 7

Let us look at setting up a Kubernetes cluster with 1 master node (kubemaster.bpic.local) and 2 worker nodes (kubenode1.bpic.local, kubenode2.bpic.local) on VMware based RHEL 7 instances.  We have set up an additional user (other than root) on ...
Read more

Step-by-step guide: Set up a Windows worker node for Kubernetes cluster

By Anuj Tuli, Chief Technology Officer Typically when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But there are a large number of organizations that use Windows and .NET ...
Read more