
Blog & Insights

Big Data and Snowflake

By Anuj Tuli, CTO

Organizations that have embarked on the journey of collecting and analyzing data face three distinct workstreams to achieve their goal: 1) identifying the right data to capture, 2) bringing data from various sources into the data warehouse, and 3) performing guided analysis on the captured data to derive meaning from it.

A modern data warehouse platform brings these activities together, so that you can easily identify, capture, and retrieve data from various sources, and gain visibility and reporting capabilities over the results. Snowflake is built for data scientists and data engineers, and it supports modern data and applications that use as much unstructured data as structured data.

Snowflake offers SaaS data warehousing services, and has also made available a number of connectors for data retrieval on its GitHub here - https://github.com/snowflakedb. There is also a community page that provides hands-on exposure to the Snowflake platform, along with other educational videos. More info here - https://community.snowflake.com/s/education-services
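As a quick illustration of those connectors, the sketch below uses the snowflake-connector-python package from the snowflakedb GitHub org. The account name and credentials are placeholders, not real values, so treat this as a minimal sketch rather than a production pattern.

```python
# Minimal sketch using snowflake-connector-python.
# Account, user, and password are placeholders.

def connect_kwargs(account: str, user: str, password: str) -> dict:
    """Assemble the keyword arguments snowflake.connector.connect() expects."""
    return {"account": account, "user": user, "password": password}

def fetch_snowflake_version(account: str, user: str, password: str) -> str:
    """Connect and return the running Snowflake version string."""
    import snowflake.connector  # pip install snowflake-connector-python

    conn = snowflake.connector.connect(**connect_kwargs(account, user, password))
    try:
        cur = conn.cursor()
        cur.execute("SELECT CURRENT_VERSION()")
        return cur.fetchone()[0]
    finally:
        conn.close()
```

Against a real account, calling fetch_snowflake_version("my_account", "my_user", "my_password") should return the service version reported by SELECT CURRENT_VERSION().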

Keyva provides services and offerings around the Snowflake data warehousing platform. You can always reach our team at [email protected] to request additional information.

 

Kong Enterprise on Red Hat Marketplace

By Anuj Tuli, CTO

Kong recently announced the availability of its certified container-based Kong Enterprise on Red Hat Marketplace. You can find the press release announcement here. 

Kong Enterprise provides the ability to configure RBAC, includes enterprise-wide support, and offers many other features, in addition to the agility and speed of the community version. Red Hat OpenShift is one of the most widely used enterprise container platforms. With Kong's addition to Red Hat Marketplace, organizations that use OpenShift can now leverage API abstraction natively as part of deploying their microservices-based workloads, while managing the full lifecycle of the deployed API layer (abstraction, monetization, reporting, throttling) via the Kong interface.

Keyva has strategic partnerships with both Red Hat and Kong – and provides project-managed, deliverable-based consulting services around Red Hat OpenShift and Ansible offerings, as well as Kong Enterprise offerings. Keyva's IP offerings include certified ServiceNow integrations for OpenShift, as well as for Kong.

ServiceNow Paris Release

By Anuj Tuli, CTO

ServiceNow recently announced the general availability of their latest Paris release. Highlights in this release include Process Automation Designer to manage your automation workflows through a single console, Predictive Intelligence Workbench which provides platform recommendations based on machine learning, and Playbooks for Customer Service Management to provide enhanced customer service processes.  

You can look up the release notes for the Paris release here - https://docs.servicenow.com/bundle/paris-release-notes/page/release-notes/family-release-notes.html

Keyva is a Premier Partner of ServiceNow and has multiple "NOW" Certified integrations available on the ServiceNow store. You can find out more about these integrations here. Our ServiceNow App for Red Hat Ansible Tower and ServiceNow App for Red Hat OpenShift offerings are already certified against the latest Paris release.

You can always reach our team at: [email protected] to request additional information. 

Docker Enterprise - Launchpad 2020

By Anuj Tuli, CTO

The Docker container engine is generally accepted as the de facto standard for container runtimes. Docker Enterprise is the supported enterprise option that provides a container orchestration layer leveraging Kubernetes (or Swarm), and also supports highly available, highly scalable cluster architectures.

Launchpad 2020 is an inaugural virtual event covering technical sessions and other major announcements around the Docker Enterprise platform. The event will be held on 16th September 2020 – and is scheduled to reveal a new offering called Docker Enterprise Container Cloud. The technical tracks are categorized into Docker Enterprise, Operations and IT, and Developer-focused talks.

You can find the detailed agenda for this event here - https://mirantis.events.cube365.net/mirantis/launchpad-2020/agenda 

Keyva provides services and offerings around open-source Docker and Docker Enterprise platforms. You can always reach our team at: [email protected] to request additional information. 

 

ServiceNow App for Red Hat OpenShift "NOW Certified" against Paris release

By Anuj Tuli, CTO

Keyva announces the certification of its ServiceNow App for Red Hat OpenShift against the Paris release (the latest release) of ServiceNow. ServiceNow has announced early availability of Paris, the newest version in the long line of software updates since the company's creation.

Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for OpenShift from previous ServiceNow releases – Madrid, New York, Orlando – to the Paris release.

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow store here - https://bit.ly/2Z3uPJn 

 

 

ServiceNow App for Red Hat Ansible Tower "NOW Certified" against Paris release

By Anuj Tuli, CTO

Keyva announces the certification of its ServiceNow App for Red Hat Ansible Tower against the Paris release (the latest release) of ServiceNow. ServiceNow has announced early availability of Paris, the newest version in the long line of software updates since the company's creation.

Upon general availability of the Paris release, customers will be able to seamlessly upgrade their ServiceNow App for Red Hat Ansible Tower from previous ServiceNow releases – Madrid, New York, Orlando – to the Paris release.

You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow store here - https://bit.ly/3jMkbPn 

 

 

How to use REST APIs for OpenShift Online via Postman

Red Hat OpenShift Container Platform is an enterprise Kubernetes offering from Red Hat that allows users to deploy cloud-native applications and manage the lifecycle of microservices deployed in containers. OpenShift Online is a SaaS offering of OpenShift provided by Red Hat. OpenShift Online takes away the effort required to set up OpenShift clusters on-prem and allows organizations to quickly leverage everything OpenShift offers, including the Developer console, without worrying about managing the underlying infrastructure.

OpenShift Online provides REST-based APIs for all functions that can be carried out via the console or the oc command line. Teams can therefore build automated functionality that leverages the Kubernetes cluster through the OpenShift management plane. Today, we will look at one such function: creating a Project. Any user who wants to create a project using the APIs must have the appropriate role bindings in the specific namespace they want to create or manage projects in. By default, OpenShift Online lets you create Projects via the console, using the ProjectRequest API call.

Assuming you have the oc command line set up, the command to create a project is:

$ oc new-project <project_name> --description="<description>" --display-name="<display_name>"

We will take a look at how to create a Project in OpenShift Online using the REST API, using Postman to trigger the API call. This sample was run against OpenShift v3.11 and Postman v7.30.1.

1) First, log in to your OpenShift Online console and, in the drop-down under your name at the top right, select 'Copy Login Command'. Paste the copied contents into Notepad and capture the 'token' value.

2) Download and import the Postman collection for this sample API call here.

3) Paste the copied token value under the 'Authorization' section of the request.

4) Update the name fields (shown below as "82520759") to the name you want your Project to have:

{
    "kind": "ProjectRequest",
    "apiVersion": "v1",
    "displayName": "82520759",
    "description": "test project from postman",
    "metadata": {
        "labels": {
            "name": "82520759",
            "namespace": "82520759"
        },
        "name": "82520759"
    }
}

5) Execute the Postman call. You should now see a new project created under your OpenShift Online instance.  
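Once the call works in Postman, the same ProjectRequest can also be scripted. The sketch below is a minimal Python example using only the standard library; the cluster URL, token, and project name are placeholders, and the /oapi/v1/projectrequests path is the OpenShift 3.11 API used in this sample.

```python
import json
import urllib.request

def build_project_request(name: str) -> dict:
    """Assemble the same ProjectRequest body used in the Postman call."""
    return {
        "kind": "ProjectRequest",
        "apiVersion": "v1",
        "displayName": name,
        "description": "test project from postman",
        "metadata": {
            "labels": {"name": name, "namespace": name},
            "name": name,
        },
    }

def create_project(api_url: str, token: str, name: str) -> dict:
    """POST the ProjectRequest; api_url and token are placeholders for your cluster."""
    req = urllib.request.Request(
        f"{api_url}/oapi/v1/projectrequests",
        data=json.dumps(build_project_request(name)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Against a real cluster, create_project("https://<your-cluster-url>", "<token>", "82520759") should return the created Project object as a dict, mirroring what the Postman call shows.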

You can adjust the Body of the sample call to pass in more values associated with the ProjectRequest object. For reference, the object schema (documented at https://docs.openshift.com/container-platform/3.11/rest_api/oapi/v1.ProjectRequest.html) includes the fields below:

apiVersion:
description:
displayName:
kind:
metadata:
  annotations:
  clusterName:
  creationTimestamp:
  deletionGracePeriodSeconds:
  deletionTimestamp:
  finalizers:
  generateName:
  generation:
  initializers:
  labels:
    name:
    namespace:
  ownerReferences:
  resourceVersion:
  selfLink:
  uid:
 

Once you've unit tested the REST call with Postman against your OpenShift Online environment, you can easily port it to one of the existing Ansible modules and make it a step within your playbook.

If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to [email protected]

Creating a Rancher cluster with Windows worker nodes

In this guide we will build a Rancher cluster with Windows worker nodes. The cluster will still need a Linux master and worker node as well. As with our last Rancher blog post, we will be using CentOS 7. Please see our last blog post about setting up a Rancher management node if you do not already have one; that part of the process is the same. We will assume you are starting at the point where you have a Rancher management interface up and accessible to log in to.

In order to use Windows worker nodes, we will need to create a custom cluster in Rancher. This means we will not be able to use Rancher's ability to automatically boot nodes for us, and we will need to create the nodes by hand before bringing up our Rancher cluster.

We are going to use VMware vSphere 6.7 for our VM deployments. The Windows node must run Windows Server 2019, version 1809 or 1903. Kubernetes may fail to run if you are using an older image and do not have the latest updates from Microsoft. In our testing we used version 1809, build 17763.1339, and did not need to install any additional KBs manually. Builds prior to 17763.379 are known to be missing required updates. It is also critical that you have VMware Tools 11.1.x or later installed on the Windows guest VM. See here for additional details on version information.
https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

  1. Provision two CentOS 7 nodes in VMware with 2 CPUs and 4 GB of RAM or greater.
  2. After they have booted, log in to the nodes and prepare them to be added to Rancher. We have created the following script to help with this; please add any steps your org needs as well. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-centos7-node-prep.sh
  3. Provision the Windows Server worker node in vSphere; note that 1.5 CPUs and 2.5 GB of RAM are reserved for Windows, so you may want to over-provision this node a bit. We used 6 CPUs and 8 GB of RAM so there was some overhead in the lab.
  4. Modify the Windows node CPU settings and enable "Hardware virtualization", then make any other changes you need and boot the node.
  5. You can confirm the Windows node version by running 'winver' at the PowerShell prompt.
  6. Check to make sure the VMware Tools version you are running is 11.1.0 or later.
  7. After you boot the Windows node, open an admin PowerShell prompt and run the commands in this PowerShell script to set up the system, install Docker, and open the proper firewall ports. https://raw.githubusercontent.com/keyvatech/blog_files/master/rancher-windows-node-prep.ps1
  8. After you run the script, set the hostname, make any other changes for your org, and reboot.
  9. Once the reboot is complete, open a PowerShell prompt as admin and run 'docker ps', then run 'docker run hello-world' to test the install.

There are more details on the Docker install method we used here:
https://github.com/OneGet/MicrosoftDockerProvider

This page contains documentation on an alternate install method for Docker on Windows:
https://docs.mirantis.com/docker-enterprise/current/dockeree-products/docker-engine-enterprise/dee-windows.html

For some Windows containers it is important that your base image matches your Windows version. Check your Windows version with 'winver' at the command prompt.
If you are running 1809, this is the command to pull the current Microsoft Nano Server image:

docker image pull mcr.microsoft.com/windows/nanoserver:1809

Now that we have our nodes provisioned in VMware with Docker installed, we are ready to create a cluster in Rancher.

  1. Log in to the Rancher management web interface, select the global cluster screen, and click "Add Cluster".
  2. Choose "From existing nodes (custom)"; this is currently the only option where Windows is supported.
  3. Set a cluster name and choose your Kubernetes version. For Network Provider, select "Flannel" from the dropdown.
  4. Flannel is the only network type that supports Windows; the Windows Support option should now allow you to select "Enabled". Leave the Flannel Backend set to VXLAN.
  5. You can now review the other settings, but you likely don't need to make any other changes. Click "Next" at the bottom of the page.
  6. You are now presented with a screen showing the docker commands to add nodes. You will need to copy these commands and run them by hand on each node. Be sure to run the Windows command in an admin PowerShell prompt.
    1. For the master node, select Linux with etcd and Control Plane.
    2. For the Linux worker, select Linux with only Worker.
    3. For the Windows worker node, select Windows; Worker is the only option.
  7. The cluster will now provision itself and come up. This may take 5-10 minutes.
  8. After the cluster is up, select the cluster name from the main dropdown in the upper left, then go to "Projects/Namespaces" and click on "Project: System". Be sure you are on the Resources > Workloads page. All services should say "Active". If there are any issues here, you may need to troubleshoot further.

Troubleshooting

Every environment is different, so you may need to go through some additional steps to set up Windows nodes with Rancher. This guide may help you get past the initial setup challenges. A majority of the issues we have seen getting started were caused by DNS, firewalls, SELinux being set to "enforcing", and automatic certs generated using ".local" domains or short hostnames.

If you need to wipe Rancher from any nodes and start over see this page:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

You can use these commands on Windows to check the Docker service status and restart it:

sc.exe qc docker
sc.exe stop docker
sc.exe start docker
