By Anuj Tuli, CTO
Keyva announces the certification of its ServiceNow App for Red Hat OpenShift against the Paris release, the latest version of ServiceNow. ServiceNow has announced early availability of Paris, the newest release in its line of software updates.
Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for OpenShift from previous ServiceNow releases (Madrid, New York, Orlando) to Paris.
You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow Store here - https://bit.ly/2Z3uPJn
[post_title] => ServiceNow App for Red Hat OpenShift "NOW Certified" against Paris release [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => servicenow-app-for-red-hat-openshift-now-certified-against-paris-release [to_ping] => [pinged] => [post_modified] => 2020-09-11 15:12:09 [post_modified_gmt] => 2020-09-11 15:12:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2884 [menu_order] => 7 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [1] => WP_Post Object ( [ID] => 2878 [post_author] => 7 [post_date] => 2020-09-08 15:22:08 [post_date_gmt] => 2020-09-08 15:22:08 [post_content] =>
By Anuj Tuli, CTO
Keyva announces the certification of its ServiceNow App for Red Hat Ansible Tower against the Paris release, the latest version of ServiceNow. ServiceNow has announced early availability of Paris, the newest release in its line of software updates.
Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for Red Hat Ansible Tower from previous ServiceNow releases (Madrid, New York, Orlando) to Paris.
You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow Store here - https://bit.ly/3jMkbPn
[post_title] => ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Paris release [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => servicenow-app-for-red-hat-ansible-tower-now-certified-against-paris-release [to_ping] => [pinged] => [post_modified] => 2022-01-26 13:17:48 [post_modified_gmt] => 2022-01-26 13:17:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2878 [menu_order] => 7 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [2] => WP_Post Object ( [ID] => 2854 [post_author] => 7 [post_date] => 2020-08-27 12:28:05 [post_date_gmt] => 2020-08-27 12:28:05 [post_content] =>
Red Hat OpenShift Container Platform is an enterprise Kubernetes offering from Red Hat that allows users to deploy cloud-native applications and manage the lifecycle of microservices deployed in containers. OpenShift Online is Red Hat's SaaS offering of OpenShift. It removes the effort of setting up OpenShift clusters on-prem and allows organizations to quickly leverage everything OpenShift offers, including the Developer console, without worrying about managing the underlying infrastructure.
OpenShift Online provides REST-based APIs for all functions that can be carried out via the console and the oc command line, so teams can build automation that drives the Kubernetes cluster through the OpenShift management plane. Today, we will look at one such function: creating a Project. Any user who wants to create a project using the APIs needs appropriate role bindings in the specific namespace they want to create or manage projects in. By default, OpenShift Online provides the ability to create Projects via the console, using the ProjectRequest API call.
Assuming you have the oc command line setup, the command to create a project is:
$ oc new-project <project_name>
--description="<description>" --display-name="<display_name>"
We will take a look at how to create a Project in OpenShift Online using the REST API, with Postman triggering the API call. This sample was run against OpenShift v3.11 and Postman v7.30.1.
1) First, log into the OpenShift Online console and, in the drop-down under your name at the top right, select 'Copy Login Command'. Paste the copied contents into Notepad and capture the 'token' value.
2) Download and import the Postman collection for this sample API call here
3) Paste the copied token value under 'Authorization' section of the request
4) Update the name and displayName values in the request body ("82520759" in this sample) with the name you want your Project to have
{
    "kind": "ProjectRequest",
    "apiVersion": "v1",
    "displayName": "82520759",
    "description": "test project from postman",
    "metadata": {
        "labels": {
            "name": "82520759",
            "namespace": "82520759"
        },
        "name": "82520759"
    }
}
5) Execute the Postman call. You should now see a new project created under your OpenShift Online instance.
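The same call can also be scripted outside Postman. Below is a minimal sketch in Python using only the standard library; the API URL and token are placeholders for your OpenShift Online endpoint and the token captured in step 1, and the /oapi/v1/projectrequests path is the OpenShift 3.11 endpoint.

```python
import json
import urllib.request

# Placeholders: substitute your OpenShift Online API endpoint and the
# token captured from 'Copy Login Command'.
API_URL = "https://api.example.openshift.com:443"
TOKEN = "<your-token>"

def build_project_request(name, display_name, description):
    # Same request body as the Postman sample above.
    return {
        "kind": "ProjectRequest",
        "apiVersion": "v1",
        "displayName": display_name,
        "description": description,
        "metadata": {
            "labels": {"name": name, "namespace": name},
            "name": name,
        },
    }

def create_project(name, display_name, description):
    # POST the ProjectRequest to the v3.11 endpoint.
    body = json.dumps(build_project_request(name, display_name, description)).encode()
    req = urllib.request.Request(
        API_URL + "/oapi/v1/projectrequests",
        data=body,
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A successful call returns the created Project object as JSON, just as Postman shows in its response pane.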
You can adjust the Body of the sample call to pass in more values associated with the ProjectRequest object. For reference, the object schema includes the following fields:
https://docs.openshift.com/container-platform/3.11/rest_api/oapi/v1.ProjectRequest.html
apiVersion:
description:
displayName:
kind:
metadata:
  annotations:
  clusterName:
  creationTimestamp:
  deletionGracePeriodSeconds:
  deletionTimestamp:
  finalizers:
  generateName:
  generation:
  initializers:
  labels:
  name:
  namespace:
  ownerReferences:
  resourceVersion:
  selfLink:
  uid:
Once you've unit tested the REST call with Postman against your OpenShift Online environment, you can easily port it over to one of the existing Kubernetes modules in Ansible and make it a step within your playbook.
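As a sketch of what that playbook step might look like, here is an illustrative task using the community k8s module; the module availability in your Ansible install, and the project name shown, are assumptions:

```yaml
- name: Create an OpenShift project via ProjectRequest
  k8s:
    state: present
    definition:
      kind: ProjectRequest
      apiVersion: project.openshift.io/v1
      displayName: "82520759"
      description: test project from postman
      metadata:
        name: "82520759"
```

The k8s module authenticates using your kubeconfig or explicit host/token parameters, so the same token captured earlier can be reused.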
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected]
[post_title] => How to use REST APIs for OpenShift Online via Postman [post_excerpt] => This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3). [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => how-to-use-rest-apis-for-openshift-online-via-postman [to_ping] => [pinged] => [post_modified] => 2023-06-28 18:04:54 [post_modified_gmt] => 2023-06-28 18:04:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2854 [menu_order] => 4 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [3] => WP_Post Object ( [ID] => 2825 [post_author] => 2 [post_date] => 2020-07-30 14:39:25 [post_date_gmt] => 2020-07-30 14:39:25 [post_content] =>In this guide we will build a Rancher cluster with Windows worker nodes. The cluster will still need a Linux master node and a Linux worker node. As with our last Rancher blog post, we will be using CentOS 7. If you do not already have a Rancher management node, please see our previous post on setting one up; that part of the process is the same. We assume you are starting from the point where you have a Rancher management interface up and accessible to log in to.
To allow Windows worker nodes, we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher's ability to automatically boot nodes for us; we will need to create the nodes by hand before we bring up our Rancher cluster.
We are going to use VMware vSphere 6.7 for our VM deployments. The Windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image and do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339, and did not need to install any additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See here for additional details on version information.
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info
There are more details here on the docker install method we used:
https://github.com/OneGet/MicrosoftDockerProvider
This page contains documentation on an alternate install method for docker on windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html
For some Windows containers it is important that your base image matches your Windows version. Check your Windows version with 'winver' at the command prompt.
If you are running 1809, this is the command to pull the current Microsoft Nano Server image:
docker image pull mcr.microsoft.com/windows/nanoserver:1809
Now that we have our nodes provisioned in VMware with Docker installed, we are ready to create a cluster in Rancher.
Troubleshooting
Every environment is different, so you may need to go through some additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. The majority of issues we have seen getting started were caused by DNS, firewalls, SELinux set to "enforcing", and automatically generated certs that used ".local" domains or short hostnames.
If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/
You can use these commands in Windows to check the Docker service status and restart it:
sc.exe qc docker
sc.exe stop docker
sc.exe start docker
[post_title] => Creating a Rancher cluster with Windows worker nodes
[post_excerpt] =>
[post_status] => publish
[comment_status] => closed
[ping_status] => closed
[post_password] =>
[post_name] => creating-a-rancher-cluster-with-windows-worker-nodes
[to_ping] =>
[pinged] =>
[post_modified] => 2024-05-28 17:31:12
[post_modified_gmt] => 2024-05-28 17:31:12
[post_content_filtered] =>
[post_parent] => 0
[guid] => https://keyvatech.com/?p=2825
[menu_order] => 0
[post_type] => post
[post_mime_type] =>
[comment_count] => 0
[filter] => raw
) [4] => WP_Post Object
(
[ID] => 2823
[post_author] => 2
[post_date] => 2020-07-24 14:38:58
[post_date_gmt] => 2020-07-24 14:38:58
[post_content] =>At Keyva we often meet with clients that need just a little help – something to get them over the hump so they can continue building out new and exciting IT capabilities. This happens most often when organizations adopt new and emerging technologies. Often teams haven't yet built up their internal skills and capabilities around tech like Kubernetes or automation platforms such as Red Hat Ansible. Or perhaps the team is already very skilled, but wants help with an OpenShift 3.x to 4.x upgrade path, or needs someone to write a new Ansible module to expand the automation capabilities offered via their playbooks.
There hasn't been a good way to get this kind of incremental help – no granular consumption model for technical expertise. It's not a function of your vendor's L1 support; your vendor will tell you to buy a TAM or their own expensive consulting services. Nor is it readily available in the community at large: there are user forums and networks for days, but will you get a response to your questions? Will the responses be correct?
Keyva created Guru Services to address this exact issue. It's more than L1 support, lighter than a consulting engagement, enterprise-grade, and far more reliable than crowdsourcing the community for answers and assistance.
Guru Services is just as easy to use: choose from three service levels and you're on your way. You'll have access to our client portal, from which you can schedule your On Demand Guru. We'll send you a meeting invite with web conference information, and you'll be over the hump and on your way in no time. We currently provide On Demand Gurus for Red Hat Ansible, OpenShift, and Kong, and are actively adding technologies to our suite of Guru Services. To learn more about these offerings, check out our vendor pages for Kong (https://keyvatech.com/kong-enterprise/) and Red Hat (https://keyvatech.com/red-hat/). Reach out to our Keyva team at [email protected] to request additional information or a quote on our Guru Services.
[post_title] => The On Demand Guru [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-on-demand-guru [to_ping] => [pinged] => [post_modified] => 2023-06-28 18:01:14 [post_modified_gmt] => 2023-06-28 18:01:14 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2823 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 2742 [post_author] => 7 [post_date] => 2020-07-15 12:00:59 [post_date_gmt] => 2020-07-15 12:00:59 [post_content] =>By Brad Johnson, Lead DevOps Engineer
In this tutorial we are going to get Rancher set up for testing and development use. Rancher is fully open-source and allows us to easily deploy a Kubernetes cluster in VMware with only minimal configuration. The intent of this tutorial is to give you a base for a scalable development cluster where you can test deploying applications or configuring other Kubernetes software without setting up DNS or external load balancers.
We will use VMware vSphere 6.7 for our deployment. For the OS and software versions, we will use the ones recommended by Rancher support. As of May 2020, Docker has an issue in CentOS/RHEL 8 where cluster DNS and firewalld interfere with each other, so we will use CentOS 7 and Docker 19.03.x for our management server; however, you can use any supported OS. For the master and worker nodes we will use RancherOS or CentOS. Using RancherOS eliminates the need to build a custom VM template in vSphere that uses cloud-init.
Requirements for this exercise:
- Admin access to vSphere or a service account with access.
- Ability to create RHEL/CentOS 7 VMs in vSphere.
- Guest VM network has internet access.
In this deployment Rancher has two primary components: the Rancher cluster manager and the Kubernetes cluster we will manage. For production use, the cluster management component would be a container deployed on its own Kubernetes cluster. For ease of install and use in a testing and lab deployment, we can simply deploy the management application as a Docker container on a single server. This configuration is not recommended for production and cannot be converted into a production scenario later. If you want a single-node cluster manager that can be converted into a production-ready setup, deploy the management container on a one-node Kubernetes cluster, which can later be scaled up.
Rancher management server deployment
All commands run as root or with sudo unless noted:
Spin up a standard or minimal CentOS/RHEL 7 server, 2 CPU, 4GB RAM. I used a 100GB thin provisioned primary disk.
Install docker using the Rancher script. Alternatively, install by hand using documentation from docker.
curl https://releases.rancher.com/install-docker/19.03.sh | sh
Create a directory for persistent Rancher data storage
mkdir /opt/rancher
Run the Rancher container with a persistent data mount, listening on ports 80/443. This uses a self-signed certificate for SSL.
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:latest
Log in to the rancher web interface using your web browser. The first login will prompt you to set the password for the admin user. Set a password and you should see the main management user interface.
Optional - Creating a CentOS 7 node template for cluster nodes that includes cloud-init.
Cloud-init allows you to specify additional configuration in Rancher that is applied when Rancher creates new nodes, such as firewall settings.
https://raw.githubusercontent.com/keyvatech/blog_files/master/centos7_cloudinit_vmtemplate.sh
In vCenter find the VM, right-click on it, then select Clone > Clone To Template.
This template can now be used in Rancher with cloud-init for additional provisioning.
Now we can create your new Rancher cluster. Note that the Rancher coredns workload will not work with SELinux set to enforcing; if you require enforcing mode you will need additional configuration. It is also important to use consistent DNS names when deploying. FQDNs are best, but do not mix short and full hostnames, as this causes certificate issues. Rancher will generate self-signed certs if you do not provide your own.
1) From the main web interface cluster page click add cluster, then select vSphere
2) Enter a cluster name like "rancher1"
3) Create a node template for your nodes. This can be used for both master and worker nodes.
https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-cloud-init-config.txt
4) Enter the name prefix for your master and worker nodes, for example "rancher1-master" and "rancher1-worker". When nodes are created, a number will be appended to the end.
5) For the master node select the etcd and control plane checkboxes
6) For the worker node select the worker checkbox.
7) Click Create at the bottom of the page. Rancher will now provision your nodes in vCenter.
You should now have a basic functional Kubernetes cluster.
If you are interested in deploying Windows worker nodes with Rancher please see our post here.
https://rancher.com/support-maintenance-terms/#2.4.x
https://rancher.com/docs/rancher/v2.x/en/installation/requirements/
If you have any questions about the steps documented here, or have any feedback or requests, please let us know at [email protected].
This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3).
We will be using Ansible Tower v3.6 and PostgreSQL 10, on RHEL 7 systems running in VMware, for this technical guide. The commands for setting up the same configuration on RHEL 8 will differ in some cases. This guide does not account for clustering of the PostgreSQL database. If you are setting up Tower in an HA capacity for production environments, it is recommended to follow best practices for PostgreSQL clustering to avoid a single point of failure.
First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 4 systems – towerdb, tower1, tower2, tower3
subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
sudo yum update
sudo yum install wget
sudo yum install python36
sudo pip3 install httpie
On the Database system (towerdb), we will now set up PostgreSQL 10
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install postgresql10 postgresql10-server
Initialize the database
/usr/pgsql-10/bin/postgresql-10-setup initdb
systemctl enable postgresql-10
systemctl start postgresql-10
Verify you can log in to the database
sudo su - postgres
psql
postgres=# \list
This command will show you the existing (default) database list.
Next, we will configure the database to make sure it can talk to all the Tower web nodes:
sudo vi /var/lib/pgsql/10/data/pg_hba.conf
Add/update the line with 'md5' entry to allow all hosts:
host all all 0.0.0.0/0 md5
Update the postgresql.conf file
sudo vi /var/lib/pgsql/10/data/postgresql.conf
Add/update the entry to listen to all incoming requests:
listen_addresses = '*'
Restart the database services, to pick up the changes made:
sudo systemctl restart postgresql-10
sudo systemctl status postgresql-10
On each of the Tower web nodes (tower1, tower2, tower3), we will set up the Ansible Tower binaries:
mkdir ansible-tower
cd ansible-tower/
wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz
tar xvzf ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz
cd ansible-tower-setup-bundle-3.6.2-1
python -c 'from hashlib import md5; print("md5" + md5("password" + "awx").hexdigest())'
md5f58b4d5d85dbde46651335d78bb56b8c
Here 'password' is the database password; the command prints the md5 hash you will use when authenticating against the database.
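Note that the one-liner above relies on Python 2 string semantics. On Python 3, the same hash can be computed as follows; 'password' and 'awx' are the example password and database user from this guide:

```python
from hashlib import md5

def pg_md5(password: str, user: str) -> str:
    # PostgreSQL md5 auth string: 'md5' prefix + md5 of password
    # concatenated with the username, hex-encoded.
    return "md5" + md5((password + user).encode()).hexdigest()

print(pg_md5("password", "awx"))
```

The printed value is what you supply as the hashed password when configuring database authentication.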
Back on the database server (towerdb), we will go ahead and set up the database schema pre-requisites for Tower install:
:~nbsp;sudo su – postgres
:~
nbsp;Psql
postgres=# CREATE USER awx; CREATE DATABASE awx OWNER awx; ALTER USER awx WITH password 'password';
On tower1, tower2, and tower3, update the inventory file and run the setup. Make sure the inventory contents match on all Tower web-tier systems.
You will need to update at least the following values and customize them for your environment:
admin_password='password'
pg_password='password'
rabbitmq_password='password'
Under the [tower] section, you will have to add entries for all your tower web hosts. The first entry will typically serve as the primary node when the cluster is run.
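As an illustration, a minimal inventory might look like the following. The hostnames and passwords are placeholders, and the inventory shipped with your setup bundle contains additional variables you should leave in place:

```ini
[tower]
tower1.example.com
tower2.example.com
tower3.example.com

[database]
towerdb.example.com

[all:vars]
admin_password='password'
pg_host='towerdb.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
rabbitmq_password='password'
```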
We will now run the setup script:
./setup.sh
You can either copy this inventory file to the other two Tower systems (tower2 and tower3), or replicate its contents to match the file on tower1, and then run the setup script on those two systems as well.
Once the setup script has finished successfully on all hosts, you can test your cluster instance. Go to the URL of one of the Tower hosts, launch a job template, and see which Tower node it runs on – based on which node is designated as primary at that time. You will see the same console details and job logs regardless of which Tower web URL you use.
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected].
[post_title] => Clustering guide for Red Hat Ansible Tower
[post_excerpt] => This guide will walk through how to set up Red Hat Ansible Tower in a highly-available configuration. In this example, we will set up 4 different systems – 1 for PostgreSQL database (towerdb), and 3 web nodes for Tower (tower1, tower2, tower3).
[post_status] => publish
[comment_status] => closed
[ping_status] => closed
[post_password] =>
[post_name] => clustering-guide-for-red-hat-ansible-tower
[to_ping] =>
[pinged] =>
[post_modified] => 2023-06-28 18:05:17
[post_modified_gmt] => 2023-06-28 18:05:17
[post_content_filtered] =>
[post_parent] => 0
[guid] => https://keyvatech.com/?p=2361
[menu_order] => 4
[post_type] => post
[post_mime_type] =>
[comment_count] => 0
[filter] => raw
) [7] => WP_Post Object
(
[ID] => 2282
[post_author] => 7
[post_date] => 2020-03-26 11:45:40
[post_date_gmt] => 2020-03-26 11:45:40
[post_content] =>By Anuj Tuli, Chief Technology Officer
Typically, when you hear about containers and Kubernetes, it is in the context of Linux or Unix platforms. But a large number of organizations run Windows and .NET based applications, and they are still trying to determine the best way forward for containerizing their Windows-based business-critical applications.
Kubernetes added support for Windows based components (worker nodes) starting with release v1.14.
In the example below, we will join a Windows worker node (v1.16.x) with a Kubernetes cluster v1.17.x.
At the time of writing, Windows worker nodes are supported only on the Windows Server 2019 operating system. In this example, we will leverage the flannel network set up on our master node on RHEL (see instructions above).
Step 1: Download the sig-windows-tools repository from https://github.com/kubernetes-sigs/sig-windows-tools , and extract the files
Step 2: Navigate to, and update, the Kubernetes configuration file at C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json
In our instance, we will update the following values:
Step 3: Open a PowerShell console in Admin mode and install Kubernetes via the downloaded script. This step requires a reboot of the server.
PS C:\Users\Administrator> cd C:\<Download-Path>\kubernetes\kubeadm
PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -install
Step 4: Once Kubernetes is installed, join the node to the existing Kubernetes cluster. This step uses the values you entered in the modified Kubeclustervxlan.json file.
PS C:\<Download-Path>\kubernetes\kubeadm> .\KubeCluster.ps1 -ConfigFile C:\<Download-Path>\kubernetes\kubeadm\v1.16.0\Kubeclustervxlan.json -join
Step 5: Verify that the Windows worker node was successfully added to the cluster. You can do this by running the kubectl command from any client (Windows or Linux node in the cluster).
PS C:\<Download-Path>\kubernetes\kubeadm> kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster.bpic.local Ready master 15h v1.17.3
kubenode1.bpic.local Ready <none> 14h v1.17.3
kubenode2.bpic.local Ready <none> 14h v1.17.3
win-eo5rgh4493r Ready <none> 12h v1.16.2
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected]
[post_title] => Step-by-step guide: Set up a Windows worker node for Kubernetes cluster [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => step-by-step-guide-set-up-a-windows-worker-node-for-kubernetes-cluster [to_ping] => [pinged] => [post_modified] => 2020-03-26 18:42:22 [post_modified_gmt] => 2020-03-26 18:42:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=2282 [menu_order] => 6 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) )