
Blog & Insights

ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Paris release

By Anuj Tuli, CTO

Keyva announces the certification of its ServiceNow App for Red Hat Ansible Tower against the Paris release (the latest release) of ServiceNow. ServiceNow has announced early availability of Paris, the newest version in its long line of releases.

Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for Red Hat Ansible Tower from previous ServiceNow releases – Madrid, New York, Orlando – to Paris.

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow Store: https://bit.ly/3jMkbPn

How to use REST APIs for OpenShift Online via Postman

Red Hat OpenShift Container Platform is an enterprise Kubernetes offering from Red Hat that allows users to deploy cloud-native applications and manage the lifecycle of microservices deployed in containers. OpenShift Online is Red Hat's SaaS offering for OpenShift. It takes away the effort of setting up OpenShift clusters on-prem and lets organizations quickly leverage everything OpenShift offers, including the Developer console, without worrying about managing the underlying infrastructure.

OpenShift Online provides REST-based APIs for all functions that can be carried out via the console and the oc command line, so teams can build automation that drives the Kubernetes cluster through the OpenShift management plane. Today we will look at one such function: creating a Project. Any user who wants to create a project via the APIs must have the appropriate role bindings in the namespace they want to create or manage projects in. By default, OpenShift Online lets you create Projects via the console using the ProjectRequest API call.

Assuming you have the oc command line set up, the command to create a project is:

$ oc new-project <project_name> --description="<description>" --display-name="<display_name>"

We will take a look at how to create a Project in OpenShift Online using the REST API, using Postman to trigger the API call. This sample was run against OpenShift v3.11 with Postman v7.30.1.

1) The first thing we will do is log in to our OpenShift Online console and, in the drop-down under your name at the top right, select 'Copy Login Command'. Paste the copied contents into Notepad and capture the 'token' value.

2) Download and import the Postman collection for this sample API call here  

3) Paste the copied token value under 'Authorization' section of the request 

4) Update the sections in bold with an appropriate name for your Project

{
    "kind": "ProjectRequest",
    "apiVersion": "v1",
    "displayName": "82520759",
    "description": "test project from postman",
    "metadata": {
        "labels": {
            "name": "82520759",
            "namespace": "82520759"
        },
        "name": "82520759"
    }
}

5) Execute the Postman call. You should now see a new project created under your OpenShift Online instance.  
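Outside of Postman, the same ProjectRequest call can be scripted. The sketch below assumes the v3.11 endpoint POST /oapi/v1/projectrequests; the api_host and token values are placeholders you would replace with your own:

```python
import json
import urllib.request


def make_project_request(name: str, display_name: str = "", description: str = "") -> dict:
    """Build the same ProjectRequest body used in the Postman example."""
    return {
        "kind": "ProjectRequest",
        "apiVersion": "v1",
        "displayName": display_name or name,
        "description": description,
        "metadata": {
            "labels": {"name": name, "namespace": name},
            "name": name,
        },
    }


def create_project(api_host: str, token: str, body: dict) -> urllib.request.Request:
    """Prepare the authenticated POST request; pass it to urlopen() to execute."""
    return urllib.request.Request(
        f"https://{api_host}/oapi/v1/projectrequests",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


body = make_project_request("82520759", description="test project from postman")
print(body["metadata"]["name"])  # prints 82520759
```

To actually execute the call you would hand the prepared Request object to urllib.request.urlopen(); a 201 response indicates the project was created.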

You can adjust the Body of the sample call to pass in more values associated with the ProjectRequest object. For reference, the object schema includes the fields below:

https://docs.openshift.com/container-platform/3.11/rest_api/oapi/v1.ProjectRequest.html 

apiVersion:
description:
displayName:
kind:
metadata:
  annotations:
  clusterName:
  creationTimestamp:
  deletionGracePeriodSeconds:
  deletionTimestamp:
  finalizers:
  generateName:
  generation:
  initializers:
  labels:
  name:
  namespace:
  ownerReferences:
  resourceVersion:
  selfLink:
  uid:
 

Once you've unit tested the REST call with Postman against your OpenShift Online environment, you can easily port it to one of the existing Ansible modules and make it a step in your playbook.

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

Creating a Rancher cluster with Windows worker nodes

In this guide we will build a Rancher cluster with Windows worker nodes. The cluster will still need a Linux master and a Linux worker node as well. As in our last Rancher blog post, we will be using CentOS 7. Please see that post for setting up a Rancher management node if you do not already have one; that part of the process is the same. We will assume you are starting with a Rancher management interface up and accessible to log in to.

In order to use Windows worker nodes we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher's ability to automatically boot nodes for us; we will need to create the nodes by hand before we bring up our Rancher cluster.

We are going to use VMware vSphere 6.7 for our VM deployments. The Windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image or do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339, and did not need to install any additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See here for additional details on version information:
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

  1. Provision two CentOS 7 nodes in VMware with 2 CPUs and 4GB of RAM or greater.
  2. After they have booted, log in to the nodes and prepare them to be added to Rancher. We have created the following script to help with this; please add any steps your org needs as well. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-node-prep.sh
  3. Provision the Windows Server worker node in vSphere. Note that 1.5 CPUs and 2.5GB of RAM are reserved for Windows, so you may want to over-provision this node a bit. We used 6 CPUs and 8GB of RAM so there was some overhead in the lab.
  4. Modify the Windows node CPU settings and enable "Hardware virtualization", then make any other changes you need and boot the node.
  5. You can confirm the Windows node version by running 'winver' at the PowerShell prompt.
  6. Check that the VMware Tools version you are running is 11.1.0 or later.
  7. After booting the Windows node, open an admin PowerShell prompt and run the commands in this PowerShell script to set up the system, install Docker, and open the proper firewall ports. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-windows-node-prep.ps1
  8. After you run the script, set the hostname, make any other changes for your org, and reboot.
  9. Once the reboot is complete, open a PowerShell prompt as admin and run 'docker ps', then run 'docker run hello-world' to test the install.

There are more details here on the docker install method we used:
https://github.com/OneGet/MicrosoftDockerProvider

This page contains documentation on an alternate install method for docker on windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html

For some Windows containers it is important that your base image matches your Windows version. Check your Windows version with 'winver' at the command prompt.
If you are running 1809, this is the command to pull the current Microsoft Nano Server image:

docker image pull mcr.microsoft.com/windows/nanoserver:1809

Now that we have our nodes provisioned in VMware with Docker installed, we are ready to create a cluster in Rancher.

  1. Log in to the Rancher management web interface, select the global cluster screen, and click "Add Cluster".
  2. Choose "From existing nodes (custom)"; this is currently the only option that supports Windows.
  3. Set a cluster name and choose your Kubernetes version. For Network Provider, select "Flannel" from the dropdown.
  4. Flannel is the only network type that supports Windows; the Windows Support option should now allow you to select "Enabled". Leave the Flannel Backend set to VXLAN.
  5. You can now review the other settings, but you likely don't need to make any other changes. Click "Next" at the bottom of the page.
  6. You are now presented with a screen showing Docker commands to add nodes. You will need to copy these commands and run them by hand on each node. Be sure to run the Windows command in an admin PowerShell prompt.
    1. For the master node select Linux with etcd and Control Plane.
    2. For the Linux worker select Linux with only Worker.
    3. For the Windows worker node select Windows; Worker is the only option.
  7. The cluster will now provision itself and come up. This may take 5-10 minutes.
  8. After the cluster is up, select the cluster name from the main drop-down in the upper left, then go to "Projects/Namespaces" and click on "Project: System". Be sure you are on the Resources > Workloads page. All services should say "Active". If anything is not active you may need to troubleshoot further.

Troubleshooting

Every environment is different, so you may need to go through additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. The majority of issues we have seen getting started were caused by DNS, firewalls, SELinux set to "enforcing", and automatic certificates generated using ".local" domains or short hostnames.

If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

You can use these commands in Windows to check the Docker service status and restart it.

sc.exe qc docker
sc.exe stop docker
sc.exe start docker
The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. This seems to happen most often when organizations adopt new and emerging technologies. Often teams haven't built up their internal skills and capabilities around tech like Kubernetes or automation platforms such as Red Hat Ansible. Or, perhaps the team is already very skilled, but want someone to help with their OpenShift 3.x -> 4.x upgrade path, or need someone to write a new Ansible module so that they can expand their ability to offer automation capabilities via their playbooks.  

There hasn't been a good way to get this kind of incremental help – no granular consumption model for technical expertise. It's not a function of your vendor's L1 support; your vendor will tell you to buy a TAM or their own expensive consulting services. It's also not something readily available in the community at large. There are user forums and networks for days, but will you get a response to your questions? Will the responses be correct?

Keyva created Guru Services to address exactly this issue. It's more than L1 support and not as heavy as a consulting engagement; it's enterprise grade and far more reliable than crowdsourcing the community for answers and assistance.

Guru Services is just as easy to use: choose from three service levels and you're on your way. You'll have access to our client portal, from which you can schedule your On Demand Guru. We'll send you a meeting invite with web conference information and you'll be over the hump and on your way in no time. We currently provide On Demand Gurus for Red Hat Ansible, OpenShift, and Kong, and are actively adding technologies to our suite of Guru Services. To learn more about these offerings, check out our vendor pages for Kong (https://keyvatech.com/kong-enterprise/) and Red Hat (https://keyvatech.com/red-hat/). Reach out to our Keyva team at info@keyvatech.com to request additional information or a quote on our Guru Services.

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer

In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to easily deploy a Kubernetes cluster in VMware with only minimal configuration. The intent of this tutorial is to give you a base for a scalable development cluster where you can test deploying applications or configuring other Kubernetes software without setting up DNS or external load balancers.

We will use VMware vSphere 6.7 for our deployment. For the OS and software versions we are going to use the ones recommended by Rancher support. As of May 2020, Docker currently has an issue with cluster DNS and firewalld interfering with each other in CentOS/RHEL 8, so we will be using CentOS 7 and Docker 19.03.x for our management server, however you can use any supported OS. For the Master and Worker nodes we will be using RancherOS or CentOS. Using RancherOS eliminates the need to build a custom VM template in vSphere that uses cloud-init.

Requirements for this exercise:
- Admin access to vSphere or a service account with access.
- Ability to create RHEL/CentOS 7 VMs in vSphere.
- Guest VM network has internet access.

In this deployment Rancher has two primary components: the Rancher cluster manager and the Kubernetes cluster it manages. For production use, the cluster management component would be a container deployed on its own Kubernetes cluster. For ease of install and use in a testing and lab deployment, we can simply deploy the management application as a Docker container on a single server. This configuration is not recommended for production and cannot be converted into a production scenario later. If you want a single-node cluster manager that can be converted into a production-ready setup, deploy the management container on a one-node Kubernetes cluster, which can later be scaled up.

Rancher management server deployment

All commands run as root or with sudo unless noted:

Spin up a standard or minimal CentOS/RHEL 7 server, 2 CPU, 4GB RAM. I used a 100GB thin provisioned primary disk.

Install docker using the Rancher script. Alternatively, install by hand using documentation from docker.

curl https://releases.rancher.com/install-docker/19.03.sh | sh

Create a directory for persistent Rancher data storage

mkdir /opt/rancher

Run the Rancher container with a persistent data mount, listening on ports 80/443. This uses a Docker self-signed cert for SSL.

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:latest

Log in to the rancher web interface using your web browser. The first login will prompt you to set the password for the admin user. Set a password and you should see the main management user interface.

Optional - Creating a CentOS 7 node template for cluster nodes that includes cloud-init.
Cloud-init allows you to specify additional configuration in Rancher, such as firewall settings, that is applied when Rancher creates new nodes.

    1. Boot a new VM with a CentOS ISO attached and install the OS manually
    2. Customize the disk layout as needed
    3. Leave the system as DHCP
    4. Set a default root password
    5. Make any changes needed by your org
    6. After booting the system, clean things up so you can turn it into a template. We have created a script for this; please edit as needed. It sets SELinux to permissive, as Rancher may have issues with the DNS service in enforcing mode without additional configuration. The last command in this script will shut down the VM.
https://raw.githubusercontent.com/keyvatech/blog_files/master/centos7_cloudinit_vmtemplate.sh

In vCenter find the VM, right-click on it, then select Clone > Clone To Template.
This template can now be used in Rancher with cloud-init for additional provisioning.

Now we can create your new Rancher cluster. Note that the Rancher coredns workload will not work with SELinux set to enforcing; if you require enforcing mode you will need additional configuration. It is also important to use consistent DNS names when deploying. FQDNs are best, but do not mix short and full hostnames, as that causes certificate issues. Rancher will generate self-signed certs if you do not provide your own.

1) From the main web interface cluster page click add cluster, then select vSphere

2) Enter a cluster name like "rancher1"

3) Create a node template for your nodes. This can be used for both master and worker nodes.

    1. Click "Add Node Template"
    2. Fill out the Account Access section with your vSphere login info. If the credentials worked you will see the scheduling section populate. If it failed, you can add a new credential with a new name, then delete the ones that didn't work later by clicking on the user profile picture and selecting "cloud credentials".
    3. Fill in the scheduling information for your data center, resource pool, data store and folder.
    4. Edit the instance options and specify 2 CPUs and 4096MB RAM or more.
    5. Under Creation Method select either "Install from Boot2Docker ISO (legacy)" or the CentOS 7 node template if you made one.
    6. If you are using a CentOS template with cloud-init fill in the Cloud Config YAML section. We have created the following config which handles firewall config. You can extend this as needed or modify it and create a different template for each node type if desired.
      https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-cloud-init-config.txt
    7. Select a Network to deploy to.
    8. Review the remaining settings and adjust if you need them in your environment.
    9. Name the template at the bottom of the page. The template can likely be used for multiple types if desired so keep the name generic. I prefer to use names that indicate node OS and resources like "centos7-2CPU-4GB"
    10. Click create.

4) Enter the name prefix for your master and worker nodes, for example "rancher1-master" and "rancher1-worker". When nodes are created, a number will be appended to the end.

5) For the master node select the etcd and control plane checkboxes

6) For the worker node select the worker checkbox.

7) Click Create at the bottom of the page. Rancher will now provision your nodes in vCenter.

You should now have a basic functional Kubernetes cluster.

If you are interested in deploying Windows worker nodes with Rancher please see our post here.

 

Helpful links: 

https://rancher.com/support-maintenance-terms/#2.4.x

https://rancher.com/docs/rancher/v2.x/en/installation/requirements/

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at info@keyvatech.com.

Clustering guide for Red Hat Ansible Tower

This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3). 

We will be using Ansible Tower v3.6 and PostgreSQL 10, on RHEL 7 systems running in VMware, for this technical guide. The commands for setting up the same configuration on RHEL 8 will differ in some cases. This guide does not cover clustering of the PostgreSQL database; if you are setting up Tower in an HA capacity for production environments, it is recommended to follow best practices for PostgreSQL clustering to avoid a single point of failure.

First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 4 systems – towerdb, tower1, tower2, tower3 

subscription-manager register 

subscription-manager refresh 

subscription-manager attach --auto

subscription-manager repos --list

subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms 

subscription-manager repos --enable rhel-7-server-rpms 

subscription-manager repos --enable rhel-7-server-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms 

subscription-manager repos --enable rhel-7-server-optional-source-rpms 

subscription-manager repos --enable rhel-7-server-extras-rpms 

sudo yum update 

sudo yum install wget 

sudo yum install python36 

sudo pip3 install httpie

Also: 

  a) Update the /etc/hosts file on all 4 hosts with entries for all systems
  b) Add and copy the SSH keys to all systems

On the Database system (towerdb), we will now set up PostgreSQL 10 

sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm 


sudo yum install postgresql10 postgresql10-server 

Initialize the database 

/usr/pgsql-10/bin/postgresql-10-setup initdb 

systemctl enable postgresql-10 

systemctl start postgresql-10 

Verify you can log in to the database 

sudo su - postgres

psql

# \list 

This command will show you the existing (default) database list. 

Next, we will configure the database to make sure it can talk to all the Tower web nodes: 

sudo vi /var/lib/pgsql/10/data/pg_hba.conf 

Add/update the line with 'md5' entry to allow all hosts:  

host    all             all             0.0.0.0/0            md5 
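The 0.0.0.0/0 entry above opens the database to any host. If you would rather scope access to just the Tower nodes, a tighter rule (the 10.0.0.0/24 subnet below is a placeholder for your Tower network, and awx is the database/user created later in this guide) could look like:

```
host    awx             awx             10.0.0.0/24          md5
```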

Update the postgresql.conf file 

sudo vi /var/lib/pgsql/10/data/postgresql.conf

Add/update the entry to listen to all incoming requests:  

listen_addresses = '*' 

Restart the database services, to pick up the changes made: 

sudo systemctl restart postgresql-10 

sudo systemctl status postgresql-10 

 

On each of the Tower web nodes (tower1, tower2, tower3), we will set up the Ansible Tower binaries: 

mkdir ansible-tower 

cd ansible-tower/ 

wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz 

tar xvzf ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz  

cd ansible-tower-setup-bundle-3.6.2-1 

python -c 'from hashlib import md5; print("md5" + md5("password" + "awx").hexdigest())' 

md5f58b4d5d85dbde46651335d78bb56b8c 

Here, "password" is the database password being hashed; PostgreSQL's md5 auth format is "md5" followed by the MD5 hash of the password concatenated with the username (awx). Note that this one-liner assumes Python 2, the default python on RHEL 7.
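On systems where python is Python 3, the string must be encoded to bytes before hashing. A small helper, written as a sketch of the same computation:

```python
from hashlib import md5


def pg_md5_password(password: str, username: str) -> str:
    """PostgreSQL md5 auth string: "md5" + hex MD5 of password concatenated with username."""
    return "md5" + md5((password + username).encode()).hexdigest()


print(pg_md5_password("password", "awx"))
```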

Back on the database server (towerdb), we will go ahead and set up the database schema pre-requisites for Tower install:  

sudo su - postgres

psql

postgres=# CREATE USER awx; CREATE DATABASE awx OWNER awx; ALTER USER awx WITH password 'password'; 

On tower1, tower2, and tower3, update the inventory file and run the setup. Make sure the inventory contents match on all Tower web tier systems.

You will need to update at least the following values and customize them for your environment: 

admin_password='password' 

pg_password='password' 

rabbitmq_password='password'

Under the [tower] section, you will have to add entries for all your Tower web hosts. The first entry will typically serve as the primary node when the cluster is running.
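For reference, a minimal inventory sketch for this four-node topology might look like the following (hostnames and passwords are placeholders; check the inventory file shipped with the setup bundle for the full set of variables):

```ini
[tower]
tower1
tower2
tower3

[database]
towerdb

[all:vars]
admin_password='password'

pg_host='towerdb'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'

rabbitmq_password='password'
```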

We will now run the setup script: 

./setup.sh 

You can either copy this inventory file to the other two Tower systems (tower2 and tower3), or replicate the content to match the file on tower1, and run the setup script on those systems as well.

Once the setup script has run successfully on all hosts, you can test your cluster. Go to one of the Tower hosts' URLs, initiate a job template, and see which Tower node it runs on – based on which node is designated primary at that time. You will also be able to view the same console details and job run logs regardless of which Tower web URL you go to.

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.


Let us look at setting up a Kubernetes cluster with 1 master node (kubemaster.bpic.local) and 2 worker nodes (kubenode1.bpic.local, kubenode2.bpic.local) on VMware-based RHEL 7 instances.

We have set up an additional user (other than root) on these machines, as we will be running kubectl (client) commands as the non-root user.

First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 3 components – kubemaster, kubenode1, kubenode2

subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
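The repo enablement above can also be scripted. This sketch is a dry run: it writes the subscription-manager commands to a reviewable file instead of executing them:

```shell
# Dry run: emit one "repos --enable" command per repo into a script for review
for repo in rhel-7-server-rh-common-beta-rpms rhel-7-server-rpms \
            rhel-7-server-source-rpms rhel-7-server-rh-common-source-rpms \
            rhel-7-server-rh-common-debug-rpms rhel-7-server-optional-source-rpms \
            rhel-7-server-extras-rpms; do
    echo subscription-manager repos --enable "$repo"
done | tee /tmp/enable-repos.sh
```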

The rhel-7-server-extras-rpms repo contains docker and other utilities. 

Since this is a lab environment, we will disable the firewall. In a production environment, open the specific ports needed by your applications and by the Kubernetes components instead of disabling the firewall completely.

systemctl disable firewalld 
systemctl stop firewalld
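If you do keep the firewall on, the standard kubeadm port list can be opened with firewalld. The sketch below is a dry run that prints the firewall-cmd invocations rather than running them; the port list follows the upstream kubeadm documentation, and 8472/udp assumes the flannel VXLAN backend used later in this guide:

```shell
# Dry run: print the firewall-cmd commands for the standard Kubernetes ports.
# 6443 API server, 2379-2380 etcd, 10250 kubelet, 10251 scheduler,
# 10252 controller-manager, 8472/udp flannel VXLAN (assumes flannel CNI).
for port in 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 8472/udp; do
    echo firewall-cmd --permanent --add-port="$port"
done | tee /tmp/open-k8s-ports.sh
echo firewall-cmd --reload | tee -a /tmp/open-k8s-ports.sh
```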

Since we are using VMware VMs, it is recommended to install VMware Tools:

yum install perl 
mkdir /mnt/cdrom 
mount /dev/cdrom /mnt/cdrom 
cp /mnt/cdrom/VMwareTools-version.tar.gz /tmp/ 
cd /tmp && tar -zxvf VMwareTools-version.tar.gz 
/tmp/vmware-tools-distrib/vmware-install.pl 
umount /mnt/cdrom 

Update all packages and install yum-utils

yum -y update 
yum -y install yum-utils

Disable swap

swapoff -a

Also, comment out the swap line in /etc/fstab so swap stays disabled after a reboot:

#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0 
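The edit can be scripted with sed. The sketch below demonstrates it on a scratch copy of fstab (the file contents are a made-up example) so you can verify the pattern before touching the real /etc/fstab:

```shell
# Demonstrate commenting out the swap entry on a scratch copy of fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
EOF
# Prefix any line containing a whitespace-delimited "swap" field with "#"
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```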

Install and enable docker

yum -y install docker 
systemctl enable docker 
systemctl start docker 


Set up repo for Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes 
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 
EOF 

Set SELinux to permissive mode for the current session

setenforce 0

Update the config file to change the selinux settings

vi /etc/selinux/config 

Change the setting from

SELINUX=enforcing to SELINUX=permissive 
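The same change can be made non-interactively with sed; this sketch applies it to a scratch copy of the config file first:

```shell
# Demonstrate the enforcing -> permissive change on a scratch copy
cat > /tmp/selinux-config.demo <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /tmp/selinux-config.demo
grep '^SELINUX=' /tmp/selinux-config.demo
```

Point the same sed expression at /etc/selinux/config once the scratch run looks right.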

Install and enable kubelet service

yum -y install kubelet kubeadm kubectl 
systemctl enable kubelet 
systemctl start kubelet 

Enable sysctl settings

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Alternatively, you can update the /etc/sysctl.conf file 

vi /etc/sysctl.conf 

Add/update the following lines

net.bridge.bridge-nf-call-iptables = 1 

net.ipv4.ip_forward = 1 

On the Kubernetes master node only, initialize the cluster with the pod network CIDR that flannel expects:

kubeadm init --pod-network-cidr=10.244.0.0/16 

Then, after configuring kubectl access for a user (see the non-root user steps below), set up the flannel networking component by applying its manifest:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

If you need to regenerate the join command later, run:

kubeadm token create --print-join-command 

Capture the results of the above command, specifically the part describing how to add nodes to this cluster

You can now join any number of machines by running the following on each node as root:

kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \ 
    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34

Then, after switching to the non-root user, run the following commands to configure kubectl access:

su - nonrootuser 
mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config 

On the Kubernetes nodes (kubenode1 and kubenode2), we will run the join command, to add those nodes to the cluster:

kubeadm join kubemaster.bpic.local:6443 --token cll0gw.50jagb64e80uw0da \ 
    --discovery-token-ca-cert-hash sha256:4d699e7f06ce0e7e80b78eadc47453e465358021aee52d956dceed1dfbc0ee34 

You can now test the cluster by running the command below on any of the nodes, or on the master, as the non-root user:
kubectl get nodes

You should see results like the following (with your system names in place), showing the cluster configuration:
NAME                    STATUS   ROLES    AGE   VERSION 
kubemaster.bpic.local   Ready    master   15h   v1.17.3 
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3 
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3
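That readiness check can be scripted as well. The sketch below runs a column filter with awk against a saved copy of the output above; the heredoc stands in for a live `kubectl get nodes` call:

```shell
# Save sample `kubectl get nodes` output; in practice, pipe the live command
cat > /tmp/nodes.demo <<'EOF'
NAME                    STATUS   ROLES    AGE   VERSION
kubemaster.bpic.local   Ready    master   15h   v1.17.3
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3
EOF
# Count nodes whose STATUS column is not "Ready"; 0 means the cluster is healthy
awk 'NR > 1 && $2 != "Ready" { bad++ } END { print bad + 0 }' /tmp/nodes.demo
```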

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

[Post: How to set up a Kubernetes cluster on Red Hat Enterprise Linux 7 — published 2020-03-31]

Step-by-step guide: Set up a Windows worker node for Kubernetes cluster (2020-03-26)

By Anuj Tuli, Chief Technology Officer

Typically, when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But a large number of organizations use Windows and .NET based applications, and they are still determining the best path to containerizing their business-critical Windows applications.

Kubernetes added support for Windows based worker nodes starting with release v1.14.

In the example below, we will join a Windows worker node (v1.16.x) with a Kubernetes cluster v1.17.x. 

At the time of writing, Windows worker nodes are supported only on Windows Server 2019. In this example, we will leverage the flannel network set up on our master node on RHEL (see the instructions above).

Step 1: Download the sig-windows-tools repository from https://github.com/kubernetes-sigs/sig-windows-tools and extract the files.

Step 2: Update the Kubernetes configuration file at C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json 

In our instance, we will update the following values: 

Step 3: Open a PowerShell console in Admin mode and install Kubernetes via the downloaded script. This step requires a reboot of the server.

PS C:\Users\Administrator> cd C:\<Download-Path>\kubernetes\kubeadm 

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -install 

Step 4: Once Kubernetes is installed, join the node to the existing Kubernetes cluster. This step uses the values you entered in the modified Kubeclustervxlan.json file.

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -join 

Step 5: Verify that the Windows worker node was successfully added to the cluster. You can do this by running kubectl from any client (Windows or Linux nodes in the cluster).

PS C:\<Download-Path>\kubernetes\kubeadm> kubectl get nodes 

NAME                    STATUS   ROLES    AGE   VERSION 
kubemaster.bpic.local   Ready    master   15h   v1.17.3 
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3 
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3 
win-eo5rgh4493r         Ready    <none>   12h   v1.16.2 

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

[Post: Step-by-step guide: Set up a Windows worker node for Kubernetes cluster — published 2020-03-26]

ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Paris release (2020-09-08)

By Anuj Tuli, CTO

Keyva announces the certification of its ServiceNow App for Red Hat Ansible Tower against the Paris release (the latest release) of ServiceNow. ServiceNow has announced early availability of Paris, the newest version in its long line of software updates.

Upon general availability of the Paris release, customers will be able to upgrade their ServiceNow App for Red Hat Ansible Tower seamlessly from previous ServiceNow releases – Madrid, New York, Orlando – to Paris.

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow Store here: https://bit.ly/3jMkbPn

[Post: ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Paris release — published 2020-09-08]

More posts:


How to use REST APIs for OpenShift Online via Postman

Red Hat OpenShift Container Platform is an Enterprise Kubernetes offering by Red Hat, that allows users to deploy cloud native applications as well as manage lifecycle of microservices deployed in ...

Creating a Rancher cluster with Windows worker nodes

In this guide we will deal with building a Rancher cluster with windows worker nodes. The cluster will still need a Linux master and worker node as well. As with ...

The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. ...

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to ...