
Blog & Insights

How to use REST APIs for OpenShift Online via Postman

Red Hat OpenShift Container Platform is an enterprise Kubernetes offering from Red Hat that allows users to deploy cloud-native applications and manage the lifecycle of microservices deployed in containers. OpenShift Online is Red Hat's SaaS offering for OpenShift. It takes away the effort required to set up OpenShift clusters on-prem and allows organizations to quickly leverage all that OpenShift offers, including the Developer console, without worrying about managing the underlying infrastructure.

OpenShift Online provides REST-based APIs for all functions that can be carried out via the console and the oc command line, so teams can build automated functionality against the Kubernetes cluster using the OpenShift management plane. Today, we will look at one such function: creating a Project. Any user that wants to create a project via the APIs must have the appropriate role bindings in the namespace they want to create or manage projects in. By default, OpenShift Online lets you create Projects via the console using the ProjectRequest API call.

Assuming you have the oc command line setup, the command to create a project is: 

$ oc new-project <project_name>  
--description="<description>" --display-name="<display_name>" 

We will take a look at how to create a Project in OpenShift Online using the REST API. We will be using Postman to trigger our API call. This sample was run against OpenShift v3.11, Postman v7.30.1.  

1) The first thing we will do is log into our OpenShift Online console and, in the drop-down under your name at the top right, select 'Copy Login Command'. Paste the copied contents into Notepad and capture the 'token' value.

2) Download and import the Postman collection for this sample API call here  

3) Paste the copied token value under 'Authorization' section of the request 

4) Update the placeholder values (shown as "82520759" below) to the name you want your Project to have

{
    "kind": "ProjectRequest",
    "apiVersion": "v1",
    "displayName": "82520759",
    "description": "test project from postman",
    "metadata": {
        "labels": {
            "name": "82520759",
            "namespace": "82520759"
        },
        "name": "82520759"
    }
}

5) Execute the Postman call. You should now see a new project created under your OpenShift Online instance.  
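Outside Postman, the same request can be reproduced with curl. This is only a sketch: the cluster URL and token are placeholders for your environment, and the payload mirrors the body above (OpenShift 3.11 exposes ProjectRequest under the project.openshift.io/v1 API group in addition to the legacy oapi endpoint).

```shell
# Build the ProjectRequest payload for a given project name.
project_request_payload() {
  local name="$1"
  cat <<EOF
{"kind": "ProjectRequest", "apiVersion": "v1",
 "displayName": "${name}", "description": "test project from postman",
 "metadata": {"labels": {"name": "${name}", "namespace": "${name}"},
  "name": "${name}"}}
EOF
}

# The POST only fires when OC_URL and OC_TOKEN are set, e.g.:
#   OC_URL=https://api.example.com:8443 OC_TOKEN=<token> sh create_project.sh myproject
if [ -n "${OC_URL:-}" ] && [ -n "${OC_TOKEN:-}" ]; then
  project_request_payload "${1:-myproject}" | curl -sk -X POST \
    -H "Authorization: Bearer ${OC_TOKEN}" \
    -H "Content-Type: application/json" \
    --data-binary @- \
    "${OC_URL}/apis/project.openshift.io/v1/projectrequests"
fi
```

Guarding the curl call behind environment variables keeps the script safe to source while you are still assembling credentials.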

You can adjust the Body of the sample call to pass in more values associated with the ProjectRequest object. For reference, the object schema includes the following fields:

https://docs.openshift.com/container-platform/3.11/rest_api/oapi/v1.ProjectRequest.html 

apiVersion:
description:
displayName:
kind:
metadata:
  annotations:
  clusterName:
  creationTimestamp:
  deletionGracePeriodSeconds:
  deletionTimestamp:
  finalizers:
  generateName:
  generation:
  initializers:
  labels:
  name:
  namespace:
  ownerReferences:
  resourceVersion:
  selfLink:
  uid:

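For instance, a request body that also sets a description and an annotation might look like the following. The names and annotation value here are illustrative, not required by the API:

```json
{
  "kind": "ProjectRequest",
  "apiVersion": "v1",
  "displayName": "Team Sandbox",
  "description": "Sandbox project created via the REST API",
  "metadata": {
    "name": "team-sandbox",
    "labels": { "name": "team-sandbox" },
    "annotations": { "openshift.io/requester": "api-user" }
  }
}
```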
Once you've unit tested the REST call with Postman against your OpenShift Online environment, you can easily port it over to one of the existing Ansible modules (for example, the uri module) and make it a step within your playbook.

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected] 

Creating a Rancher cluster with Windows worker nodes

In this guide we will walk through building a Rancher cluster with Windows worker nodes. The cluster will still need a Linux master and a Linux worker node as well. As with our last Rancher blog post, we will be using CentOS 7. Please see our last blog post about setting up a Rancher management node if you do not already have one; that part of the process is the same. We are going to assume you are starting at the point where you have a Rancher management interface up and accessible to log in to.

In order to allow us to use Windows worker nodes we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher’s ability to automatically boot nodes for us and we will need to create the nodes by hand before we bring up our Rancher cluster.

We are going to use VMware vSphere 6.7 for our VM deployments. The Windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image and do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339, and did not need to install any additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See here for additional details on version information.
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

  1. Provision two CentOS 7 nodes in VMware with 2 CPUs and 4GB of RAM or greater.
  2. After they have booted, log in to the nodes and prepare them to be added to Rancher. We have created the following script to help with this. Please add any steps your org needs as well. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-node-prep.sh
  3. Provision the Windows Server worker node in vSphere; note that 1.5 CPUs and 2.5GB of RAM are reserved for Windows, so you may want to over-provision this node by a bit. I used 6 CPUs and 8GB RAM so there was some overhead in my lab.
  4. Modify the Windows node CPU settings and enable "Hardware virtualization", then make any other changes you need and boot the node.
  5. You can confirm the Windows node version by running 'winver' at the PowerShell prompt.
  6. Check to make sure the VMware Tools version you are running is 11.1.0 or later.
  7. After you boot the Windows node, open an admin PowerShell prompt and run the commands in this PowerShell script to set up the system, install Docker, and open the proper firewall ports. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-windows-node-prep.ps1
  8. After you run the script you can then set the hostname, make any other changes for your org, and reboot.
  9. Once the reboot is complete, open a PowerShell prompt as admin and run 'docker ps', then run 'docker run hello-world' to test the install.

There are more details here on the docker install method we used:
https://github.com/OneGet/MicrosoftDockerProvider

This page contains documentation on an alternate install method for docker on windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html

For some Windows containers it is important that your base image matches your Windows version. Check your Windows version with 'winver' at the command prompt.
If you are running 1809, this is the command to pull the current Microsoft nanoserver image:

docker image pull mcr.microsoft.com/windows/nanoserver:1809

Now that we have our nodes provisioned in VMware with Docker installed, we are ready to create a cluster in Rancher.

  1. Log in to the Rancher management web interface, select the global cluster screen, and click "Add Cluster".
  2. Choose "From existing nodes (Custom)"; this is currently the only option where Windows is supported.
  3. Set a cluster name and choose your Kubernetes version. For Network Provider, select "Flannel" from the dropdown.
  4. Flannel is the only network type to support Windows; the Windows support option should now allow you to select "Enabled". Leave the Flannel Backend set to VXLAN.
  5. You can now review the other settings, but you likely don't need to make any other changes. Click "Next" at the bottom of the page.
  6. You are now presented with the screen showing docker commands to add nodes. You will need to copy these commands and run them by hand on each node. Be sure to run the Windows command in an admin PowerShell prompt.
    1. For the master node select Linux with etcd and Control Plane.
    2. For the Linux worker select Linux with only Worker.
    3. For the Windows worker node select Windows; Worker is the only option.
  7. This cluster will now provision itself and come up. This may take 5-10 minutes.
  8. After the cluster is up, select the cluster name from the main drop-down in the upper left, then go to "Projects/Namespaces" and click on "Project: System". Be sure you are on the Resources > Workloads page. All services should say "Active". If there are any issues here you may need to troubleshoot further.
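The Linux node commands from step 6 generally take the shape sketched below. Everything here is illustrative: the agent version, server URL, token, and checksum are placeholders, and the Windows variant Rancher shows is a different, PowerShell-based command. Always copy the exact commands from the Rancher UI.

```shell
# Print the general shape of a Rancher v2 custom-cluster join command.
# All values below are placeholders, not real credentials.
print_join_cmd() {
  local role_flags="$1"
  cat <<EOF
sudo docker run -d --privileged --restart=unless-stopped --net=host \\
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \\
  rancher/rancher-agent:v2.4.5 --server https://rancher.example.com \\
  --token <token> --ca-checksum <checksum> ${role_flags}
EOF
}

print_join_cmd "--etcd --controlplane"   # Linux master
print_join_cmd "--worker"                # Linux worker
```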

Troubleshooting

Every environment is different, so you may need to go through some additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. A majority of the issues we have seen getting started were caused by DNS, firewalls, selinux being set to “enforcing”, and automatic certs that were generated using “.local” domains or short hostnames.

If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

You can use these commands in windows to check on the docker service status and restart it.

sc.exe qc docker
sc.exe stop docker
sc.exe start docker
The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. This seems to happen most often when organizations adopt new and emerging technologies. Often teams haven't built up their internal skills and capabilities around tech like Kubernetes or automation platforms such as Red Hat Ansible. Or, perhaps the team is already very skilled, but want someone to help with their OpenShift 3.x -> 4.x upgrade path, or need someone to write a new Ansible module so that they can expand their ability to offer automation capabilities via their playbooks.  

There hasn't been a good way to get this kind of incremental help – no granular consumption model for technical expertise – it's not a function of your vendor's L1 support. Your vendor will tell you to buy a TAM or their own expensive consulting services. It's also not something readily available in the community at large. There are user forums and networks for days, but will you get a response to your questions? Will the responses be correct?  

Keyva created Guru Services to address this exact issue. It's more than L1 support yet not as heavy as a consulting engagement; it's enterprise-grade and far more reliable than crowdsourcing the community for answers and assistance.

Guru Services are just as easy to use: choose from 3 different service levels and you're on your way. You'll have access to our client portal, from which you can schedule your On Demand Guru. We'll send you a meeting invite with web conference information, and you'll be over the hump and on your way in no time. We currently provide On Demand Gurus for Red Hat Ansible, OpenShift, and Kong, and are actively adding technologies to our suite of Guru Services. To learn more about these offerings, check out our vendor pages for Kong (https://keyvatech.com/kong-enterprise/) and Red Hat (https://keyvatech.com/red-hat/). Reach out to our Keyva team at [email protected] to request additional information or a quote on our Guru Services.

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer

In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to easily deploy a Kubernetes cluster in VMware with only minimal configuration. The intent of this tutorial is to give you a base for a scalable development cluster where you can test deploying applications or configuring other Kubernetes software without setting up DNS or external load balancers.

We will use VMware vSphere 6.7 for our deployment. For the OS and software versions we are going to use the ones recommended by Rancher support. As of May 2020, Docker has an issue with cluster DNS and firewalld interfering with each other on CentOS/RHEL 8, so we will be using CentOS 7 and Docker 19.03.x for our management server; however, you can use any supported OS. For the Master and Worker nodes we will be using RancherOS or CentOS. Using RancherOS eliminates the need to build a custom VM template in vSphere that uses cloud-init.

Requirements for this exercise:
- Admin access to vSphere or a service account with access.
- Ability to create RHEL/CentOS 7 VMs in vSphere.
- Guest VM network has internet access.

In this deployment Rancher has two primary components: the Rancher cluster manager and the Kubernetes cluster we will manage. For production use, the cluster management component would be a container deployed on its own Kubernetes cluster. For ease of install and use in a testing and lab deployment, we can simply deploy the management application as a Docker container on a single server. This configuration is not recommended for production and cannot be converted into a production setup later. If you want a single-node cluster manager that can be converted into a production-ready setup, deploy the management container on a one-node Kubernetes cluster, which can later be scaled up.

Rancher management server deployment

All commands run as root or with sudo unless noted:

Spin up a standard or minimal CentOS/RHEL 7 server, 2 CPU, 4GB RAM. I used a 100GB thin provisioned primary disk.

Install docker using the Rancher script. Alternatively, install by hand using documentation from docker.

curl https://releases.rancher.com/install-docker/19.03.sh | sh

Create a directory for persistent Rancher data storage

mkdir /opt/rancher

Run the Rancher container with a persistent data mount, listening on ports 80/443. This uses a Docker self-signed cert for SSL.

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:latest
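Before logging in, you can poll the container's health endpoint until it answers. This is a sketch: Rancher v2 serves a /ping health check once it is up, and -k is needed because of the self-signed cert.

```shell
# Poll Rancher's /ping health endpoint until it answers or we give up.
wait_for_rancher() {
  local url="${1:-https://localhost/ping}" tries="${2:-30}" i
  for i in $(seq "$tries"); do
    # -k: self-signed cert; -f: treat HTTP errors as failure
    if curl -ksf --max-time 5 "$url" >/dev/null 2>&1; then
      return 0      # endpoint answered
    fi
    sleep 2
  done
  return 1          # never came up
}

# usage: wait_for_rancher https://localhost/ping 30 && echo "Rancher is up"
```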

Log in to the rancher web interface using your web browser. The first login will prompt you to set the password for the admin user. Set a password and you should see the main management user interface.

Optional - Creating a CentOS 7 node template for cluster nodes that includes cloud-init.
Cloud-init will allow you to specify additional configuration in Rancher that happen when Rancher creates new nodes, like firewall settings.

    1. Boot a new VM with a CentOS iso attached and install the OS manually
    2. Customize disk layout as needed
    3. Leave the system as DHCP
    4. Set a default root password
    5. Make any changes needed by your org
    6. After booting the system, clean things up so you can turn it into a template. We have created a script for this; please edit as needed. It sets selinux to permissive, as Rancher may have issues with the DNS service in enforcing mode without additional configuration. The last command in this script will shut down the VM
https://raw.githubusercontent.com/keyvatech/blog_files/master/centos7_cloudinit_vmtemplate.sh

In vCenter find the VM, right-click on it, then select Clone > Clone To Template.
This template can now be used in Rancher with cloud-init for additional provisioning.

Now we can create your new Rancher cluster. Note that the Rancher coredns workload will not work with selinux set to enforcing; if you require enforcing mode you will need additional configuration. It is also important to use consistent DNS names when deploying. FQDNs are best, but do not mix short and full hostnames, as that causes certificate issues. Rancher will generate self-signed certs if you do not provide your own.

1) From the main web interface cluster page click add cluster, then select vSphere

2) Enter a cluster name like "rancher1"

3) Create a node template for your nodes. This can be used for both master and worker nodes.

    1. Click "Add Node Template"
    2. Fill out the Account Access section with your vSphere login info. If the credentials worked you will see the scheduling section populate. If it failed, you can add a new credential with a new name, then delete the ones that didn't work later by clicking on the user profile picture and selecting "cloud credentials".
    3. Fill in the scheduling information for your data center, resource pool, data store and folder.
    4. Edit the instance options and specify 2 CPUs and 4096MB RAM or more.
    5. Under Creation Method select either "Install from Boot2Docker ISO (legacy)" or the CentOS 7 node template if you made one.
    6. If you are using a CentOS template with cloud-init fill in the Cloud Config YAML section. We have created the following config which handles firewall config. You can extend this as needed or modify it and create a different template for each node type if desired.
      https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-cloud-init-config.txt
    7. Select a Network to deploy to.
    8. Review the remaining settings and adjust if you need them in your environment.
    9. Name the template at the bottom of the page. The template can likely be used for multiple types if desired so keep the name generic. I prefer to use names that indicate node OS and resources like "centos7-2CPU-4GB"
    10. Click create.

4) Enter the name prefix for your master and worker nodes, for example "rancher1-master" and "rancher1-worker". When nodes are created, a number will be appended to the end.

5) For the master node select the etcd and control plane checkboxes

6) For the worker node select the worker checkbox.

7) Click Create at the bottom of the page. Rancher will now provision your nodes in vCenter.

You should now have a basic functional Kubernetes cluster.

If you are interested in deploying Windows worker nodes with Rancher please see our post here.

Helpful links: 

https://rancher.com/support-maintenance-terms/#2.4.x

https://rancher.com/docs/rancher/v2.x/en/installation/requirements/

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at [email protected].

Clustering guide for Red Hat Ansible Tower

This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3). 

We will be using Ansible Tower v3.6 and PostgreSQL 10, on RHEL 7 systems running in VMware, for this technical guide. The commands for setting up the same configuration on RHEL 8 will be different in some cases. This guide does not account for clustering of the PostgreSQL database. If you are setting up Tower in HA capacity for production environments, it is recommended to follow best practices for PostgreSQL clustering, to avoid a single point of failure.

First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 4 systems – towerdb, tower1, tower2, tower3 

subscription-manager register 

subscription-manager refresh 

subscription-manager attach --auto

subscription-manager repos --list

subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms 

subscription-manager repos --enable rhel-7-server-rpms 

subscription-manager repos --enable rhel-7-server-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-source-rpms 

subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms 

subscription-manager repos --enable rhel-7-server-optional-source-rpms 

subscription-manager repos --enable rhel-7-server-extras-rpms 

sudo yum update 

sudo yum install wget 

sudo yum install python36 

sudo pip3 install httpie

Also: 

  a) Update the /etc/hosts file on all 4 hosts with entries for all systems
  b) Generate SSH keys and copy them across all systems

On the Database system (towerdb), we will now set up PostgreSQL 10 

sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm 


sudo yum install postgresql10 postgresql10-server 

Initialize the database 

/usr/pgsql-10/bin/postgresql-10-setup initdb 

systemctl enable postgresql-10 

systemctl start postgresql-10 

Verify you can log in to the database 

$ sudo su - postgres
$ psql
postgres=# \list

This command will show you the existing (default) database list. 

Next, we will configure the database to make sure it can talk to all the Tower web nodes: 

sudo vi /var/lib/pgsql/10/data/pg_hba.conf 

Add/update the line with 'md5' entry to allow all hosts:  

host    all             all             0.0.0.0/0            md5 

Update the postgresql.conf file 

sudo vi /var/lib/pgsql/10/data/postgresql.conf

Add/update the entry to listen to all incoming requests:  

listen_addresses = '*' 

Restart the database services, to pick up the changes made: 

sudo systemctl restart postgresql-10 

sudo systemctl status postgresql-10 
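With listen_addresses opened up, each tower node should now be able to reach the database port. Below is a quick reachability check, a sketch using bash's /dev/tcp; "towerdb" and 5432 are the hostname and default PostgreSQL port used in this guide.

```shell
# Return 0 if host:port accepts a TCP connection within 3 seconds.
check_pg_port() {
  local host="${1:?host required}" port="${2:-5432}"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# usage: check_pg_port towerdb && echo "towerdb:5432 reachable"
```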

 

On each of the Tower web nodes (tower1, tower2, tower3), we will set up the Ansible Tower binaries: 

mkdir ansible-tower 

cd ansible-tower/ 

wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz 

tar xvzf ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz  

cd ansible-tower-setup-bundle-3.6.2-1 

python3 -c 'from hashlib import md5; print("md5" + md5(("password" + "awx").encode()).hexdigest())'

md5f58b4d5d85dbde46651335d78bb56b8c

Here "password" is the password you will be hashing and "awx" is the database username; the result is the md5 auth string used when authenticating against the database. (The .encode() call is required under Python 3; the original one-liner only works under Python 2.)
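If you prefer not to depend on the local Python version, the same auth string can be produced with coreutils. This is a sketch; 'password' is a placeholder for your real database password.

```shell
# PostgreSQL md5 auth strings are "md5" + md5(password + username);
# the username defaults to 'awx', matching the Tower database user.
pg_md5() {
  local password="$1" username="${2:-awx}"
  printf 'md5%s' "$(printf '%s%s' "$password" "$username" | md5sum | awk '{print $1}')"
}

# usage: pg_md5 'password'
```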

Back on the database server (towerdb), we will go ahead and set up the database schema pre-requisites for Tower install:  

$ sudo su - postgres
$ psql

postgres=# CREATE USER awx; CREATE DATABASE awx OWNER awx; ALTER USER awx WITH password 'password'; 

On tower1, tower2, and tower3, update the inventory file and run the setup. Make sure the inventory file contents match on all tower web-tier systems.

You will need to update at least the following values and customize them for your environment: 

admin_password='password' 

pg_password='password' 

rabbitmq_password='password'

Under the [tower] section, you will have to add entries for all your tower web hosts. The first entry will typically serve as the primary node when the cluster is run.  
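Putting those pieces together, a clustered inventory might look roughly like the sketch below. Hostnames and passwords are placeholders; check the inventory file shipped with your setup bundle for the authoritative variable names.

```ini
[tower]
tower1.example.com
tower2.example.com
tower3.example.com

[database]
towerdb.example.com

[all:vars]
admin_password='password'

pg_host='towerdb.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='password'

rabbitmq_username=tower
rabbitmq_password='password'
```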

We will now run the setup script: 

./setup.sh 

You can either copy this inventory file to the other 2 tower systems (tower2 and tower3), or replicate the content to match the file on tower1, and then run the setup script on the other 2 tower systems as well.

Once the setup script has finished successfully on all hosts, you will be able to test your cluster instance. You can do so by going to one of the tower hosts' URLs, launching a job template, and seeing which tower node it runs on, based on the tower node designated as primary at that time. You will also be able to view the same console details and logs of job runs regardless of which tower web URL you go to.

If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected].

Step-by-step guide: Set up a Windows worker node for Kubernetes cluster

By Anuj Tuli, Chief Technology Officer

Typically when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But there are a large number of organizations that use Windows and .NET based applications, and they are still trying to determine the best way forward for containerization of their Windows based business critical applications.  

Kubernetes added support for Windows based components (worker nodes) starting with release v1.14.  

In the example below, we will join a Windows worker node (v1.16.x) with a Kubernetes cluster v1.17.x. 

As of this writing, Windows worker nodes are supported on the Windows Server 2019 operating system only. In this example, we will leverage the flannel network set up on our master node on RHEL (see instructions above).

Step 1: Download the sig-windows-tools repository from https://github.com/kubernetes-sigs/sig-windows-tools , and extract the files 

Step 2: Navigate to, and update, the Kubernetes configuration file at C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json

In our instance, we will update the configuration values to match our existing cluster and environment.

Step 3: Open a PowerShell console in Admin mode and install Kubernetes via the downloaded script. This step requires a reboot of the server

PS C:\Users\Administrator> cd C:\<Download-Path>\kubernetes\kubeadm 

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -install 

Step 4: Once Kubernetes is installed, join the node to the existing Kubernetes cluster. This step uses the values you entered in the modified Kubeclustervxlan.json file

PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -join 

Step 5: Verify that the Windows worker node was successfully added to the cluster. You can do this by running the kubectl command from any client (Windows or Linux nodes on the cluster)  

PS C:\<Download-Path>\kubernetes\kubeadm> kubectl get nodes 

NAME                    STATUS   ROLES    AGE   VERSION 
kubemaster.bpic.local   Ready    master   15h   v1.17.3 
kubenode1.bpic.local    Ready    <none>   14h   v1.17.3 
kubenode2.bpic.local    Ready    <none>   14h   v1.17.3 
win-eo5rgh4493r         Ready    <none>   12h   v1.16.2 

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to [email protected] 

[post_title] => Step-by-step guide: Set up a Windows worker node for Kubernetes cluster [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => step-by-step-guide-set-up-a-windows-worker-node-for-kubernetes-cluster [to_ping] => [pinged] => [post_modified] => 2020-03-26 18:42:22 [post_modified_gmt] => 2020-03-26 18:42:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2282 [menu_order] => 6 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 2278 [post_author] => 7 [post_date] => 2020-03-24 15:27:04 [post_date_gmt] => 2020-03-24 15:27:04 [post_content] =>

By Anuj Tuli, CTO

Keyva announces the certification of their ServiceNow App for Red Hat Ansible Tower against the Orlando release (the latest release) of ServiceNow. ServiceNow announced the release of Orlando on January 23rd, 2020; it is the newest version in the company's long line of software updates.  

Customers can now seamlessly upgrade their ServiceNow App for Ansible Tower from previous ServiceNow releases – London, Madrid, New York – to the Orlando release. 

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow store here: http://bit.ly/2W5tYHv

[post_title] => ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Orlando release [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => servicenow-app-for-red-hat-ansible-tower-now-certified-against-orlando-release [to_ping] => [pinged] => [post_modified] => 2020-03-24 15:27:07 [post_modified_gmt] => 2020-03-24 15:27:07 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2278 [menu_order] => 7 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 2221 [post_author] => 7 [post_date] => 2020-02-18 08:09:33 [post_date_gmt] => 2020-02-18 08:09:33 [post_content] =>

By Brad Johnson, Lead DevOps Engineer

When developing automation you may be faced with challenges that are simply too complicated or tedious to accomplish with Ansible alone. There may even be cases where you are told that “it can’t be automated”. However, when you combine the abilities of Ansible and custom Python using the pexpect module, you can automate practically anything you can do on the command line. In this post we will discuss the basics of creating a custom Ansible module in Python.  

Here are a few examples of cases where you might need to create a custom module: 

For the purposes of this article we will focus on the first case. When writing a traditional Linux shell or bash script, it simply isn’t possible to continue your script when a command you run drops you into a new shell or interactive interface. If these tools provided a non-interactive mode or a config/script input, we would not need to do this. To overcome this situation we need to use Python with pexpect. The native Ansible expect module provides a simple interface to this functionality and should be evaluated before writing a custom module. However, when you need more complex interactions, want specific data returned, or want to provide a reusable, simpler interface to an underlying program for others to consume, then custom development is warranted.  
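For comparison, before reaching for custom code, a minimal task using the native Ansible expect module might look like this (the script path and prompt text are hypothetical):

```yaml
- name: "Run myscript and answer its password prompt"
  ansible.builtin.expect:
    command: /path/to/myscript.sh
    responses:
      'Enter password:': "{{ myscript_password }}"
    timeout: 60
  no_log: true
```

If your interaction fits this simple prompt/response shape, the built-in module is enough; the custom module below is for cases where you need branching logic, retries, or structured return data.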

In this guide I will talk about the requirements and steps needed to create your own library module. The source code with our example is located here and contains notes in the code as well. The pexpect code is intentionally complex to demonstrate some use cases.

Module Code (Python)

#!/usr/bin/env python
import os
import getpass

DOCUMENTATION = '''
---
module: my_module

short_description: This is a custom module using pexpect to run commands in myscript.sh

description:
    - "This module runs commands inside a script in a shell. When run without commands it returns current settings only."

options:
    commands:
        description:
            - The commands to run inside myscript in order
        required: false
    options:
        description:
            - options to pass the script
        required: false
    timeout:
        description:
            - Timeout for finding the success string or running the program
        required: false
        default: 300
    password:
        description:
            - Password needed to run myscript
        required: true

author:
    - Brad Johnson - Keyva
'''

EXAMPLES = '''
- name: "Run myscript to set up myprogram"
  my_module:
    options: "-o myoption"
    password: "{{ myscript_password }}"
    commands:
      - "set minheap 1024m"
      - "set maxheap 5120m"
      - "set port 7000"
      - "set webport 80"
    timeout: 300
'''

RETURN = '''
current_settings: String containing current settings after last command was run and settings saved
    type: str
    returned: On success
logfile: String containing logfile location on the remote host from our script
    type: str
    returned: On success
'''


def main():
    # This is the import required to make this code an Ansible module
    from ansible.module_utils.basic import AnsibleModule
    # This instantiates the module class and provides Ansible with
    # input argument information, it also enforces input types
    module = AnsibleModule(
        argument_spec=dict(
            commands=dict(required=False, type='list', default=[]),
            options=dict(required=False, type='str', default=""),
            password=dict(required=True, type='str', no_log=True),
            timeout=dict(required=False, type='int', default=300)
        )
    )
    commands = module.params['commands']
    options = module.params['options']
    password = module.params['password']
    timeout = module.params['timeout']

    try:
        # Importing the modules here allows us to catch them not being installed on remote hosts
        # and pass back a failure via ansible instead of a stack trace.
        import pexpect
    except ImportError:
        module.fail_json(msg="You must have the pexpect python module installed to use this Ansible module.")

    try:
        # Run our pexpect function
        current_settings, changed, logfile = run_pexpect(commands, options, password, timeout)
        # Exit on success and pass back objects to ansible, which are available as registered vars
        module.exit_json(changed=changed, current_settings=current_settings, logfile=logfile)
    # Use python exception handling to keep all our failure handling in our main function
    except pexpect.TIMEOUT as err:
        module.fail_json(msg="pexpect.TIMEOUT: Unexpected timeout waiting for prompt or command: {0}".format(err))
    except pexpect.EOF as err:
        module.fail_json(msg="pexpect.EOF: Unexpected program termination: {0}".format(err))
    except pexpect.exceptions.ExceptionPexpect as err:
        # This catches any pexpect exceptions that are not EOF or TIMEOUT
        # This is the base exception class
        module.fail_json(msg="pexpect.exceptions.{0}: {1}".format(type(err).__name__, err))
    except RuntimeError as err:
        module.fail_json(msg="{0}".format(err))


def run_pexpect(commands, options, password, timeout=300):
    import pexpect
    changed = True
    script_path = '/path/to/myscript.sh'
    if not os.path.exists(script_path):
        raise RuntimeError("Error: the script '{0}' does not exist!".format(script_path))
    if script_path == '/path/to/myscript.sh':
        raise RuntimeError("This module example is based on a hypothetical command line interactive program and "
                           "can not run. Please use this as a basis for your own development and testing.")
    # Set prompt to expect with username embedded in it
    # YOU MAY NEED TO CHANGE THIS PROMPT FOR YOUR SYSTEM
    # My default RHEL prompt regex
    prompt = r'\[{0}\@.+?\]\$'.format(getpass.getuser())
    output = ""

    child = pexpect.spawn('/bin/bash')
    try:
        # Look for initial bash prompt
        child.expect(prompt)
        # Start our program
        child.sendline("{0} {1}".format(script_path, options))
        # look for our scripts logfile prompt
        # Example text seen in output: 'Logfile: /path/to/mylog.log'
        child.expect(r'Logfile\:.+?/.+?\.log')
        # Note that child.after contains the text of the matching regex
        logfile = child.after.split()[1]
        # Look for password prompt
        i = child.expect([r"Enter password\:", '>'])
        if i == 0:
            # Send password
            child.sendline(password)
            child.expect('>')
        # Increase timeout for longer running interactions after quick initial ones
        child.timeout = timeout
        try:
            # Look for program internal prompt or new config dialog
            i = child.expect([r'Initialize New Config\?', '>'])
            # pexpect will return the index of the regex it found first
            if i == 0:
                # Answer 'y' to initialize new config prompt
                child.sendline('y')
                child.expect('>')
            # If any commands were passed in loop over them and run them one by one.
            for command in commands:
                child.sendline(command)
                i = child.expect([r'ERROR.+?does not exist', r'ERROR.+?$', '>'])
                if i == 0:
                    # Attempt to intelligently add items that may have multiple instances and are missing
                    # e.g. "socket.2" may need "add socket" run before it.
                    # Try to allow the user just to use the set command and run add as needed
                    try:
                        new_item = child.after.split('"')[1].split('.')[0]
                    except IndexError:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after))
                    child.sendline('add {0}'.format(new_item))
                    i = child.expect([r'ERROR.+?$', '>'])
                    if i == 0:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after.strip()))
                    # Retry the failed original command after the add
                    child.sendline(command)
                    i = child.expect([r'ERROR.+?$', '>'])
                    if i == 0:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after.strip()))
                elif i == 1:
                    raise RuntimeError("ERROR: unspecified error running a myscript command\n"
                                       " {0}".format(child.after.strip()))
            # Set timeout shorter for final commands
            child.timeout = 15
            # If we processed any commands run the save function last
            if commands:
                child.sendline('save')
                # Using true loops with expect statements allows us to process multiple items in a block until
                # some kind of done or exit condition is met where we then call a break.
                while True:
                    i = child.expect([r'No changes made', r'ERROR.+?$', '>'])
                    if i == 0:
                        changed = False
                    elif i == 1:
                        raise RuntimeError("ERROR: unexpected error saving configuration\n"
                                           " {0}".format(child.after.strip()))
                    elif i == 2:
                        break
            # Always print out the config data from our script and return it to the user
            child.sendline('print config')
            child.expect('>')
            # Note that child.before contains the output from the last expected item and this expect
            current_settings = child.before.strip()
            # Run the 'exit' command that is inside myscript
            child.sendline('exit')
            # Look for a linux prompt to see if we quit
            child.expect(prompt)
        except pexpect.TIMEOUT:
            raise RuntimeError("ERROR: timed out waiting for a prompt in myscript")
        # Get shell/bash return code of myscript
        child.sendline("echo $?")
        child.expect(prompt)
        # process the output into a variable and remove any whitespace
        exit_status = child.before.split('\r\n')[1].strip()
        if exit_status != "0":
            raise RuntimeError("ERROR: The command returned a non-zero exit code! '{0}'\n"
                               "Additional info:\n{1}".format(exit_status, output))
        child.sendline('exit 0')
        # run exit as many times as needed to exit the shell or subshells
        # This might be useful if you ran a script that put you into a new shell where you then ran some other scripts
        while True:
            i = child.expect([prompt, pexpect.EOF])
            if i == 0:
                child.sendline('exit 0')
            elif i == 1:
                break
    finally:
        # Always try to close the pexpect process
        child.close()
    return current_settings, changed, logfile


if __name__ == '__main__':
    main()

In order to create a module you need to put your new "mymodule.py" file somewhere in the Ansible module library path, typically the "library" directory next to your playbook or the "library" directory inside your role. It’s also important to note that Ansible library modules run on the target Ansible host, so if you want to use the Ansible “expect” module or make a custom module with pexpect in it, you will need to install the Python pexpect module on the remote host before running the module. (Note: the pexpect version provided in the RHEL/CentOS repos is old and will not support the Ansible “expect” module; install via pip instead for the latest version.) 

 Information on the library path is located here: 

https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html 

Your example.py file needs to be a standard Python file with a shebang header, and it must import the AnsibleModule class. Here is the bare minimum amount of code needed for an Ansible module. 

#!/usr/bin/env python 
from ansible.module_utils.basic import AnsibleModule 
module = AnsibleModule(argument_spec=dict(mysetting=dict(required=False, type='str'))) 
try: 
    return_value = "mysetting value is: {0}".format(module.params['mysetting']) 
except: 
    module.fail_json(msg="Unable to process input variable into string") 
module.exit_json(changed=True, my_output=return_value) 

With this example you can see how variables are passed into and out of the module. It also includes a basic exception handler for dealing with errors and allowing Ansible to handle the failure. This exception clause is too broad for normal use, as it will catch and hide all errors that could happen in the try block. When you create your module you should only catch exception types that you anticipate, to avoid hiding the stack traces of unexpected errors from your logs. 
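To make that narrowing concrete, here is a sketch in plain Python (the Ansible plumbing is stubbed out as return values so it can run standalone; `format_setting` and `run_module` are hypothetical names for illustration) that catches only the anticipated error type and lets everything else propagate:

```python
def format_setting(params):
    """Build the module's return string; raises KeyError if the input is missing."""
    return "mysetting value is: {0}".format(params['mysetting'])


def run_module(params):
    """Mimic the module body: catch only the error we anticipate (a missing key),
    so any unexpected exception still surfaces with a full stack trace."""
    try:
        return_value = format_setting(params)
    except KeyError as err:
        # Stand-in for module.fail_json(msg=...)
        return {'failed': True, 'msg': "missing input variable: {0}".format(err)}
    # Stand-in for module.exit_json(changed=True, my_output=...)
    return {'changed': True, 'my_output': return_value}


print(run_module({'mysetting': 'foo'}))  # succeeds
print(run_module({}))                    # fails cleanly with a message
```

A bare `except:` would have turned a typo like `params['mysettng']` into the same generic failure message; catching only `KeyError` on the lookup keeps real bugs visible.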

 

Now we can add in some custom pexpect processing code. This is again a very basic example; the example code linked in this blog post has a complicated and in-depth example. This function would then be called from within our try-except block in the code above. 

def run_pexpect(password): 
    import pexpect 
    child = pexpect.spawn('/path/to/myscript.sh') 
    child.timeout = 60 
    child.expect(r"Enter password\:") 
    child.sendline(password) 
    child.expect('Thank you') 
    child.sendline('exit') 
    child.expect(pexpect.EOF) 
    exit_dialog = child.before.strip() 

    return exit_dialog

There are some important things to note here when dealing with pexpect and Ansible. 

 

When creating custom modules, I would encourage you to give thought to making the simplest, most maintainable, and most modular modules possible. It can be easy to create one module/script to rule them all, but the Linux concept of having one tool do one thing well will save you from rewriting chunks of code that do the same thing, and will also help future maintainers of the automation you create. 

 

Helpful links: 

https://docs.ansible.com/ansible/latest/modules/expect_module.html 

https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html 

https://pexpect.readthedocs.io/en/stable/overview.html 

 

If you have any questions about the steps documented here, would like more information on the custom development process, or have any feedback or requests, please let us know at [email protected].

[post_title] => Build custom Red Hat Ansible modules: pexpect [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => build-custom-red-hat-ansible-modules-pexpect [to_ping] => [pinged] => [post_modified] => 2022-01-26 13:18:26 [post_modified_gmt] => 2022-01-26 13:18:26 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2221 [menu_order] => 10 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 8 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 2854 [post_author] => 7 [post_date] => 2020-08-27 12:28:05 [post_date_gmt] => 2020-08-27 12:28:05 [post_content] =>

Red Hat OpenShift Container Platform is an enterprise Kubernetes offering from Red Hat that allows users to deploy cloud-native applications and manage the lifecycle of microservices deployed in containers. OpenShift Online is a SaaS offering of OpenShift provided by Red Hat. OpenShift Online removes the effort required to set up OpenShift clusters on-prem and allows organizations to quickly leverage all that OpenShift offers, including the Developer console, without worrying about managing the underlying infrastructure.  

OpenShift Online provides REST-based APIs for all functions that can be carried out via the console and the oc command line. Therefore, teams can build automated functionality that leverages the Kubernetes cluster through the OpenShift management plane. Today, we will look at one such function: creating a Project. Any user who wants to create a project using the APIs is required to have the appropriate role bindings in the specific namespace they want to create or manage projects in. By default, OpenShift Online provides the ability to create Projects via the console, using the ProjectRequest API call.  

Assuming you have the oc command line setup, the command to create a project is: 

$ oc new-project <project_name>  
--description="<description>" --display-name="<display_name>" 

We will take a look at how to create a Project in OpenShift Online using the REST API, using Postman to trigger our API call. This sample was run against OpenShift v3.11 and Postman v7.30.1.  

1) First, log into your OpenShift Online console and, from the drop-down under your name at the top right, select 'Copy Login Command'. Paste the copied contents into Notepad and capture the 'token' value. 

2) Download and import the Postman collection for this sample API call here  

3) Paste the copied token value into the 'Authorization' section of the request 

4) Update the sections in bold for an appropriate name you want your Project to have 

{ 
    "kind": "ProjectRequest", 
    "apiVersion": "v1", 
    "displayName": "82520759", 
    "description": "test project from postman", 
    "metadata": { 
        "labels": { 
            "name": "82520759", 
            "namespace": "82520759" 
        }, 
        "name": "82520759" 
    } 
} 

5) Execute the Postman call. You should now see a new project created under your OpenShift Online instance.  
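Once the call works in Postman, the same request can be scripted. The sketch below uses only the Python standard library; the cluster URL and token are hypothetical placeholders, and it targets the v3.11 `/oapi/v1/projectrequests` endpoint (newer releases use `/apis/project.openshift.io/v1/projectrequests`):

```python
import json
import urllib.request

API = "https://openshift.example.com:8443"  # hypothetical cluster URL
TOKEN = "<token copied from the console>"   # the value captured in step 1


def build_project_request(name, display_name, description):
    """Build the same ProjectRequest body used in the Postman call above."""
    return {
        "kind": "ProjectRequest",
        "apiVersion": "v1",
        "displayName": display_name,
        "description": description,
        "metadata": {
            "labels": {"name": name, "namespace": name},
            "name": name,
        },
    }


def create_project(name, display_name, description):
    """POST the ProjectRequest and return the parsed API response."""
    body = json.dumps(build_project_request(name, display_name, description)).encode()
    req = urllib.request.Request(
        API + "/oapi/v1/projectrequests",
        data=body,
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `create_project("82520759", "82520759", "test project from postman")` against a live cluster should create the same project as the Postman request.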

You can adjust the Body of the sample call to pass in more values associated with the ProjectRequest object. For reference, the object schema includes the fields below  

https://docs.openshift.com/container-platform/3.11/rest_api/oapi/v1.ProjectRequest.html 

apiVersion: 
description: 
displayName: 
kind: 
metadata: 
  annotations: 
  clusterName: 
  creationTimestamp: 
  deletionGracePeriodSeconds: 
  deletionTimestamp: 
  finalizers: 
  generateName: 
  generation: 
  initializers: 
  labels: 
  name: 
  namespace: 
  ownerReferences: 
  resourceVersion: 
  selfLink: 
  uid: 
 

Once you've unit tested the REST call with Postman against your OpenShift Online environment, you can very easily port it over to one of the existing modules in Ansible, making it a step within your playbook.  

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to [email protected] 

[post_title] => How to use REST APIs for OpenShift Online via Postman [post_excerpt] => This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3).  [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => how-to-use-rest-apis-for-openshift-online-via-postman [to_ping] => [pinged] => [post_modified] => 2023-06-28 18:04:54 [post_modified_gmt] => 2023-06-28 18:04:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2854 [menu_order] => 4 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 118 [max_num_pages] => 15 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => 1 [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => 5af415623278b5326d82b5c298fb9f9b [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [allow_query_attachment_by_filename:protected] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) [tribe_is_event] => [tribe_is_multi_posttype] => [tribe_is_event_category] => [tribe_is_event_venue] => [tribe_is_event_organizer] => [tribe_is_event_query] => [tribe_is_past] => )

How to use REST APIs for OpenShift Online via Postman

Red Hat OpenShift Container Platform is an Enterprise Kubernetes offering by Red Hat, that allows users to deploy cloud native applications as well as manage lifecycle of microservices deployed in ...

Creating a Rancher cluster with Windows worker nodes

In this guide we will deal with building a Rancher cluster with windows worker nodes. The cluster will still need a Linux master and worker node as well. As with ...

The On Demand Guru

At Keyva we often meet with clients that need just a little help, something to get them over the hump and continue on their way building out new and exciting IT capabilities. ...

Getting started with Kubernetes using Rancher and VMware vSphere

By Brad Johnson, Lead DevOps Engineer In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to ...

Clustering guide for Red Hat Ansible Tower

This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for ...

Step-by-step guide: Set up a Windows worker node for Kubernetes cluster

By Anuj Tuli, Chief Technology Officer Typically when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But there are a large number ...

ServiceNow App for Red Hat Ansible Tower “NOW Certified” against Orlando release

By Anuj Tuli, CTO Keyva announces the certification of their ServiceNow App for Red Hat Ansible Tower against the Orlando release (latest release) of ServiceNow. ServiceNow announced its release of Orlando on January 23rd, 2020, ...

Build custom Red Hat Ansible modules: pexpect

By Brad Johnson, Lead DevOps Engineer When developing automation you may be faced with challenges that are simply too complicated or tedious to accomplish with Ansible alone. There may even ...