
Blog & Insights

Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part I)

By Brad Johnson, Lead DevOps Engineer

This guide covers how to set up an OpenShift cluster in AWS with Windows worker nodes. Because this requires the OVN-Kubernetes container network interface, you cannot simply add Windows nodes to existing clusters. Please also understand that this functionality is still considered preview/beta by Red Hat and is not supported in production environments at this time. It also requires OpenShift 4.4 or later; we tested with OpenShift 4.5, which was the latest release when this was published.

Requirements:
- Ansible 2.9+
- Python 3
- Python winrm module
- AWS CLI
- OpenShift 4.4+
- OC CLI 4.4+
- Git
- AWS IAM User with programmatic access key and AdministratorAccess policy attached

Environment Setup:
If you don't have an environment that meets the above requirements, create an EC2 instance with Amazon Linux 2.
I used a t2.micro instance and a security group allowing SSH on port 22. This environment already has the AWS CLI installed. During my run I only needed 4GB of total disk space, so the default disk size is fine.
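
If you prefer to launch the instance from the CLI rather than the console, here is a minimal sketch; the AMI ID, key pair name, and security group ID below are placeholders you must replace with your own values:

# Placeholders: your region's Amazon Linux 2 AMI, an existing key pair, and a security group allowing SSH
$ aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
    --key-name my-keypair --security-group-ids sg-0123456789abcdef0 --region us-east-2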

After the instance is launched, SSH to the new VM as 'ec2-user' using your keyfile.
Run the following commands to set up the Python prerequisites:

$ sudo yum install python3 python3-pip git
$ pip3 install --user pywinrm ansible

Navigate to https://cloud.redhat.com/openshift/install/aws/installer-provisioned and log in with your Red Hat account. This page provides links to the latest installer and CLI. You will also need to download your pull secret from here. These links are correct as of October 2020; if you have an issue, please use the links on the latest page from Red Hat.

Download the OpenShift CLI and installer and place the binaries in your $PATH. Note: /home/ec2-user/bin is in the default $PATH on Amazon Linux 2, and the openshift-client archive also contains a kubectl binary.

$ cd ~

$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz

$ mkdir bin && tar -xvf openshift-client-linux.tar.gz --directory bin && mv bin/README.md ~/openshift-client-README.md

$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz

$ tar -xvf openshift-install-linux.tar.gz --directory bin && mv bin/README.md ~/openshift-install-README.md
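
To confirm that all three binaries are picked up from ~/bin before proceeding, you can check:

$ which oc kubectl openshift-install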

Check the versions of the prerequisites. Here is the output from my test run.

$ ansible --version
ansible 2.10.2
config file = None
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/.local/lib/python3.7/site-packages/ansible
executable location = /home/ec2-user/.local/bin/ansible
python version = 3.7.9 (default, Aug 27 2020, 21:59:41) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]

$ aws --version
aws-cli/1.18.107 Python/2.7.18 Linux/4.14.193-149.317.amzn2.x86_64 botocore/1.17.31

$ oc version
Client Version: 4.5.14

$ openshift-install version
openshift-install 4.5.14
built from commit 9893a482f310ee72089872f1a4caea3dbec34f28
release image quay.io/openshift-release-dev/ocp-release@sha256:95cfe9273aecb9a0070176210477491c347f8e69e41759063642edf8bb8aceb6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2-0-g52c56ce", GitCommit:"d7f3ccf9a5bdc96ba92e31526cf014b3de4c46aa", GitTreeState:"clean", BuildDate:"2020-09-16T15:25:59Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

$ pip3 freeze
ansible==2.10.1
ansible-base==2.10.2
certifi==2020.6.20
cffi==1.14.3
chardet==3.0.4
cryptography==3.1.1
idna==2.10
Jinja2==2.11.2
MarkupSafe==1.1.1
ntlm-auth==1.5.0
packaging==20.4
pycparser==2.20
pyparsing==2.4.7
pywinrm==0.4.1
PyYAML==5.3.1
requests==2.24.0
requests-ntlm==1.1.0
six==1.15.0
urllib3==1.25.10
xmltodict==0.12.0

$ pip3 show pywinrm
Name: pywinrm
Version: 0.4.1
Summary: Python library for Windows Remote Management
Home-page: http://github.com/diyan/pywinrm/
Author: Alexey Diyan
Author-email: alexey.diyan@gmail.com
License: MIT license
Location: /home/ec2-user/.local/lib/python3.7/site-packages
Requires: xmltodict, requests, requests-ntlm, six


Configure AWS and the AWS CLI

You will need an AWS IAM user with a programmatic access key and the AdministratorAccess policy attached. You will also need to set up Route53 if you are building a public cluster, but this is not required; if you wish to create a private cluster, see our steps below.
See this page for information on setting up your AWS account: https://docs.openshift.com/container-platform/4.5/installing/installing_aws/installing-aws-account.html
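
For a public cluster, you can confirm that the CLI can see your Route53 hosted zone; this lists the zone names in your account:

$ aws route53 list-hosted-zones --query "HostedZones[].Name"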

If you need the names of availability zones, you can run one of the following commands.
Be sure you are using a region supported by Red Hat for OpenShift on AWS.

$ aws ec2 describe-regions
$ aws ec2 describe-availability-zones --region us-east-2
$ aws ec2 describe-availability-zones --all-availability-zones
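
If you only want the zone names, the --query flag can trim the output to just that field, for example:

$ aws ec2 describe-availability-zones --region us-east-2 --query 'AvailabilityZones[].ZoneName' --output text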

Run these commands to set up the AWS CLI:

$ aws configure
AWS Access Key ID [None]: YOURACCESSKEYID
AWS Secret Access Key [None]: YOURSECRETACCESSKEY
Default region name [None]: us-east-2
Default output format [None]: json
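
To verify that the credentials work before moving on:

$ aws sts get-caller-identity    # returns your account ID and IAM user ARN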

We are now ready to set up the OpenShift cluster. Please continue to 'Creating an OpenShift Cluster in AWS with Windows Worker Nodes (Part II)'.


Helpful links: 

https://cloud.redhat.com/openshift/install/

If you are interested in deploying Windows worker nodes with Rancher, please see our post here.

If you have any questions about the steps documented here, or have any feedback or requests, please let us know at info@keyvatech.com.

Ansible vs. Terraform: Understanding the Differences

By Brad Johnson, Lead DevOps Engineer

When considering infrastructure automation, Terraform and Ansible usually come up. Both do some things really well, but both also have limitations. Terraform is an infrastructure-as-code tool, whereas Ansible is a configuration management tool that can also do infrastructure as code. I've had people ask how the tools compare and which one to use when, so let's explore both and talk about the benefits of each.

First, why would you use Terraform? The single most important reason is that Terraform, like Ansible, is platform agnostic. This means that if you have a hybrid or multi-cloud application or service, you can use Terraform to manage the infrastructure in a single repository. Cloud-vendor-specific solutions like AWS CloudFormation templates work well, but they are limited to the platform they belong to. With Terraform's ability to support multiple providers, you can manage the infrastructure code definition of on-premise and AWS/GCP/Azure cloud VMs, load balancers, DNS, or network configuration in the same set of files. Using a single common configuration language means greater flexibility in transitioning to new environments and reduced vendor lock-in. Another reason to consider Terraform is that, unlike Ansible, it works on the principle of comparing current state with desired state. This means that if you deploy a VM via Terraform and later delete that block of configuration, Terraform will delete the VM; your Terraform code is declarative of your infrastructure. With Ansible you would need to write additional code to perform an operation like this, as Ansible is not aware of the state from previous runs. Another benefit of Terraform is that you can see what it will do before you run it by using the 'terraform plan' command.
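
In practice, that review-then-apply cycle looks like this from a shell, run inside a directory containing your .tf files:

$ terraform init     # download providers and initialize the state backend
$ terraform plan     # preview changes against the current state
$ terraform apply    # apply the changes after reviewing the plan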

However, if you already have a significant amount of infrastructure deployed, it can be time consuming to import your current environment into Terraform. You can use it for new deployments without importing existing environment configurations, but it won't be able to manage those existing resources. Terraform also stores the state of what was provisioned in a state file. This means that if multiple people work on the code, they must run it from a single common location with the same state file; cloud providers offer storage buckets for remote state. The ideal solution might be using a CI or orchestration system to run 'terraform apply' to deploy infrastructure changes, gating the process via approvals in ITSM. It is critical to ensure actual changes are applied from a single source of truth, like a master git branch. Also, while Terraform is extensible with custom providers, you will need to write them in Go, which is not yet as widely used as Python.

Now let's look at why Ansible. The best thing about Ansible is that it can handle a wide variety of configuration and deployment tasks using standard modules, and it's easily extensible with Python. You can deploy a VM, use templates for custom configuration files, communicate with REST APIs, interact with git repos, and easily configure Linux or custom software, all using already-available standard modules. Building your own custom Ansible modules, which typically isn't needed given the exhaustive Ansible library, requires minimal programming effort; an example 'hello world' module only requires 4 lines of Python code. Drop the code in a 'library' directory next to your playbook and you're ready to use it. Ansible also comes with 'ansible-vault', which provides a way to store sensitive variables in encrypted YAML files in your playbook repo and decrypt them at runtime using a vault password. Because of these features, you can implement a wide variety of configuration-as-code use cases with Ansible. Some cases we've used Ansible for include deploying Linux OS hardening changes to meet security compliance standards, configuring Apache Tomcat and Oracle WebLogic as part of application server deployment, integrating with ITSM (IT Service Management) and CMDB (Configuration Management Database) platforms, and interacting with silent installers and CLIs using Keyva-built custom modules, like one for Python Pexpect.
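
As a rough sketch of that ansible-vault workflow (the file and playbook names are illustrative):

$ ansible-vault create group_vars/all/vault.yml    # write sensitive variables into an encrypted YAML file
$ ansible-playbook site.yml --ask-vault-pass       # prompt for the vault password and decrypt at runtime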

Now, given that Ansible does not store resource state, you will need to write playbooks to handle removal of resources. Even if you deployed something and Ansible made sure it was 'present', to remove it you would usually need to run the same task with the resource marked 'absent'. For simple things like removing a file this is easy, and you just need to remove the code after it has run once everywhere. For more complex use cases, you can get around this limitation by writing playbooks that query existing resources into variable lists, compare them to what is defined in Ansible, then remove the items that do not match. However, this takes additional time, is more complex, and does not account for changes made manually on target resources. From a configuration, compliance, and remediation standpoint, that behavior may actually be desirable for some organizations.
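
As a concrete illustration of the present/absent pattern, two ad-hoc runs of the standard file module against localhost (the path is illustrative):

$ ansible localhost -m file -a "path=/tmp/example.conf state=touch"     # ensure the file exists
$ ansible localhost -m file -a "path=/tmp/example.conf state=absent"    # remove it again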

What's great about both tools is that they work well together. There's no reason one tool needs to own the whole process. Given their differences in scope, while they can do similar things, neither is a replacement for the full functionality of the other. Terraform can be set up to run Ansible on a host after provisioning to handle configuration of that host. Likewise, Ansible can use its Terraform module to plan or apply a Terraform project as a step within a playbook; that module also returns Terraform's outputs as variables that Ansible can consume for further action. When designing and implementing infrastructure as code in your environment, consider which tool is best suited for each part of the task, and consider combining Terraform with Ansible when deploying infrastructure. If you need help getting started or advice on best practices around implementing infrastructure as code, please reach out to info@keyvatech.com.
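
One simple way to chain the two from a shell, assuming your Terraform project defines an output named host_ip and you are on Terraform 0.14+ for the -raw flag (both are assumptions; adjust to your project):

$ terraform apply -auto-approve
$ ansible-playbook -i "$(terraform output -raw host_ip)," configure.yml    # trailing comma makes this an inline inventory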

How to set up PowerBI for reporting from AWS Aurora MySQL Database

By Anuj Tuli, CTO

Many organizations that use PowerBI for business insights and analytics need to run reports against various data sources, including workloads residing in Amazon AWS. A number of AWS data sources can be configured; this blog walks through how to set up connectivity between PowerBI and an AWS Aurora MySQL database.

Assumptions:  

First, let's look at various configurations that we need to set up on the AWS side -  

Next, we will configure the PowerBI components -  

One of the most common ODBC errors we've seen is the ODBC connector being unable to connect to the database. This usually happens either because the VPC's public subnet is not associated with the Windows EC2 instance, or because the database's public accessibility flag is not set.
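
A quick way to test basic reachability before digging into ODBC settings is to connect with the MySQL client from a host with network access to the database; the endpoint and user here are placeholders for your own Aurora cluster values:

$ mysql -h mycluster.cluster-abc123.us-east-2.rds.amazonaws.com -P 3306 -u admin -p -e "SELECT 1"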

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to info@keyvatech.com.

Big Data and Snowflake

By Anuj Tuli, CTO

Organizations that have embarked on the journey of collecting and analyzing data face three distinct workstreams: 1) identifying the right data to capture, 2) bringing data from various sources into the data warehouse, and 3) performing guided analysis on the captured data to derive meaning from it.

A modern data warehouse platform brings these activities together, so that you can easily identify, capture, and retrieve data from various sources, and it provides visibility and reporting capabilities for guided interpretation. Snowflake is built for data scientists and data engineers, and it supports modern data and applications that use as much unstructured data as structured data.

Snowflake offers SaaS data warehousing services and has also made available a number of connectors for data retrieval on its GitHub here - https://github.com/snowflakedb. There is also a community page that provides hands-on exposure to the Snowflake platform, along with other educational videos. More info here - https://community.snowflake.com/s/education-services

Keyva provides services and offerings around the Snowflake data warehousing platform. You can always reach our team at info@keyvatech.com to request additional information.

Kong Enterprise on Red Hat Marketplace

By Anuj Tuli, CTO

Kong recently announced the availability of its certified container-based Kong Enterprise on Red Hat Marketplace. You can find the press release announcement here. 

Kong Enterprise provides the ability to configure RBAC, includes enterprise-wide support, and offers many other features in addition to the agility and speed of the community version. Red Hat OpenShift is one of the most widely used enterprise container platforms. With Kong's addition to Red Hat Marketplace, organizations that use OpenShift can now leverage API abstraction natively as part of deploying their microservices-based workloads, while managing the full lifecycle of the deployed API layer (abstraction, monetization, reporting, throttling) via the Kong interface.

Keyva has strategic partnerships with both Red Hat and Kong, and provides project-managed, deliverable-based consulting services around Red Hat OpenShift and Ansible offerings, as well as Kong Enterprise offerings. Keyva's IP offerings include certified ServiceNow integrations for OpenShift as well as for Kong.

ServiceNow Paris Release

By Anuj Tuli, CTO

ServiceNow recently announced the general availability of its latest Paris release. Highlights include Process Automation Designer, to manage automation workflows through a single console; Predictive Intelligence Workbench, which provides platform recommendations based on machine learning; and Playbooks for Customer Service Management, for enhanced customer service processes.

You can find the release notes for the Paris release here - https://docs.servicenow.com/bundle/paris-release-notes/page/release-notes/family-release-notes.html

Keyva is a Premier Partner of ServiceNow and has multiple "NOW" Certified integrations available on the ServiceNow Store. You can find out more about these integrations here. Our ServiceNow App for Red Hat Ansible Tower and ServiceNow App for Red Hat OpenShift offerings are already certified against the latest Paris release.

You can always reach our team at: info@keyvatech.com to request additional information. 

Docker Enterprise - Launchpad 2020

By Anuj Tuli, CTO

The Docker container engine is generally accepted as the de facto standard for container runtimes. Docker Enterprise is the supported enterprise option that provides a container orchestration layer leveraging Kubernetes (or Swarm) and supports highly available, highly scalable cluster architectures.

Launchpad 2020 is an inaugural virtual event covering technical sessions and other major announcements around the Docker Enterprise platform. The event will be held on 16 September 2020 and is scheduled to reveal a new offering called Docker Enterprise Container Cloud. The technical tracks are categorized into Docker Enterprise, Operations and IT, and developer-focused talks.

You can find the detailed agenda for this event here - https://mirantis.events.cube365.net/mirantis/launchpad-2020/agenda 

Keyva provides services and offerings around open-source Docker and the Docker Enterprise platform. You can always reach our team at info@keyvatech.com to request additional information.

ServiceNow App for Red Hat OpenShift "NOW Certified" against Paris release

By Anuj Tuli, CTO

Keyva announces the certification of its ServiceNow App for Red Hat OpenShift against the Paris release (the latest release) of ServiceNow. ServiceNow recently announced early availability of Paris, the newest version in the company's long line of software updates.

Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for OpenShift from previous ServiceNow releases – Madrid, New York, Orlando – to Paris.

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow Store here - https://bit.ly/2Z3uPJn
