Blog & Insights

How to Create a Cognito User Pool: A Quick Guide

Amazon Cognito is a powerful service provided by AWS that allows you to manage user identities and authentication for your applications easily. In this short guide, I will walk you through the steps to create a Cognito User Pool, a fundamental component for handling user sign-ups, sign-ins, and identity management. Let’s get started!

Step One: Configure Sign-in Experience

Step Two: Configure Security Requirements

Step Three: Configure Sign-Up Experience

Step Four: Configure Message Delivery

Step Five: Connect Federated Identity Providers

Step Six: Integrate Your App
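The steps above follow the AWS console wizard. As a rough CLI sketch of the same setup (the pool name, client name, and password policy below are illustrative placeholders, not values from this guide):

```shell
# Create a user pool that signs users in with email and
# enforces a basic password policy
aws cognito-idp create-user-pool \
  --pool-name my-demo-pool \
  --username-attributes email \
  --auto-verified-attributes email \
  --policies 'PasswordPolicy={MinimumLength=8,RequireUppercase=true,RequireNumbers=true}'

# Create an app client so your application can integrate with the pool
# (use the pool ID returned by the previous command)
aws cognito-idp create-user-pool-client \
  --user-pool-id <pool-id-from-previous-output> \
  --client-name my-demo-app-client
```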

Creating N-tier Architecture in Azure with Terraform

In this post, we will use Terraform to create an architecture that can be used to deploy a front-end and back-end web application. N-tier architectures are split into multiple distributed tiers. A common example is the 3-tier architecture, made up of a presentation, application, and data tier, but this code can easily be scaled to add more tiers if needed.

We will deploy network infrastructure, which in Azure is called a virtual network. Within that virtual network we will deploy two small subnets. To run the web application, we will use Azure App Service, a managed PaaS offering that lets you easily scale out your application by adding new instances of web apps. Azure App Service can also host mobile back ends and REST APIs.

Requirements:

Terraform

Azure Account

Environment Setup:

Before you start, confirm you have a valid Azure Cloud account. Also, ensure you have Terraform installed on your local machine. Terraform provides official documentation on how to do this.

Start by creating a new directory in the desired location and navigate to it. Then paste the following code to create the resource group, two private subnets, service plan and app service:

# Azure provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.75.0"
    }
  }
}

provider "azurerm" {
  features {
  }
}

# create a resource group
resource "azurerm_resource_group" "TerraformCreate" {
  name     = "TerraformCreate"
  location = "eastus"
}

# Create a vnet with two private subnets
resource "azurerm_virtual_network" "TerraformCreateVM" {
  name                = "TerraformCreateVM"
  location            = azurerm_resource_group.TerraformCreate.location
  resource_group_name = azurerm_resource_group.TerraformCreate.name
  address_space       = ["10.0.0.0/16"]

  subnet {
    name           = "private-subnet-1"
    address_prefix = "10.0.1.0/24"
  }

  subnet {
    name           = "private-subnet-2"
    address_prefix = "10.0.2.0/24"
  }
}

# Create an App Service plan and web app
resource "azurerm_service_plan" "TerraformCreateASP" {
  name                = "TerraformCreateASP"
  location            = azurerm_resource_group.TerraformCreate.location
  resource_group_name = azurerm_resource_group.TerraformCreate.name
  os_type             = "Linux"
  sku_name            = "P1v2"
}

resource "azurerm_linux_web_app" "TerraformCreateWA" {
  name                = "TerraformCreateWA"
  location            = azurerm_resource_group.TerraformCreate.location
  resource_group_name = azurerm_resource_group.TerraformCreate.name
  service_plan_id     = azurerm_service_plan.TerraformCreateASP.id

  site_config {
    application_stack {
      node_version = "18-lts"
    }
  }
}

Now let’s break down the above code:

We have a required terraform block that specifies the Azure provider and its version, which is the latest at the time of writing. It is always good to check for the latest version of the Azure provider and update your code accordingly.

Next, we create a resource group, a virtual network, and two private subnets. A resource group is a collection of related resources that makes monitoring, provisioning, access control, and de-provisioning convenient and effective. A virtual network houses our network resources, such as our subnets.

From there we create a service plan and a web app. Our web apps are instances of our application; they are connected to our service plan, which defines the resources the application runs on. In this example, we are using Node.js on Linux instances, and we place all resources in the resource group. You might ask why we use App Service over a virtual machine. As mentioned above, App Service is a managed service: Azure takes more of the responsibility off the customer's hands so you can deploy your application quickly and easily. You specify your runtime and manage your data and your application, and Microsoft Azure takes care of the rest. When you deploy on a virtual machine, you have more to manage, such as your runtime and your OS.

Creating our application stack:

Then we will run the following commands to create the above resources:

terraform init

terraform plan

terraform apply

After running apply, you should see a successful run reporting four resources created. Be sure to destroy any resources you are no longer using.
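To clean up when you are done, the same configuration can tear everything down:

```shell
# Preview what would be deleted, then remove every resource
# defined in this configuration
terraform plan -destroy
terraform destroy
```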

CTO Talks: Containerize Workloads at Scale
https://youtu.be/BuqdSBrGByA

Keyva CTO Anuj Tuli discusses how Kubernetes is necessary for organizations to adopt container technology.

Keyva Service Integration Hub for Red Hat Ansible and OpenShift Automation Platforms – Certified for Vancouver Release

Keyva is pleased to announce the certification of the Keyva Service Integration Hub for the Red Hat Ansible and OpenShift Automation Platforms for the new ServiceNow Vancouver release. Clients can now seamlessly upgrade their ServiceNow app from previous ServiceNow releases (Tokyo, Rome, San Diego, Utah) to the Vancouver release.

The Vancouver release has new solutions that enhance security and governance, simplify critical processes in healthcare and finance, and accelerate talent transformation through AI.

To learn more about the Keyva ServiceNow Integrations Hub for Red Hat products and view all the ServiceNow releases for which Keyva has been certified, visit the ServiceNow Store: Ansible at https://bit.ly/3RKgoGA and OpenShift at https://bit.ly/3PXPGZE.

CTO Talks: Automated Remediation
https://youtu.be/hk_mEVSzP3g

Keyva CTO Anuj Tuli discusses how automated remediation helps organizations reduce staff time spent on repetitive tasks.

Leveraging Terraform to Create a BigQuery Database in Google Cloud

In this blog post, we will explore how Terraform can be used to create a BigQuery database in Google Cloud (GCP). BigQuery is one of the most popular GCP services thanks to its many advantages: it is a fully managed, petabyte-scale data warehouse that uses SQL, and it is serverless, allowing you to query external data sources without having to store that data inside GCP itself. A major advantage of BigQuery's pricing model is that you pay for the data scanned by your queries rather than for pre-provisioned compute capacity.

Requirements:

Terraform

GCP Account

Environment setup:

Before you begin, make sure you have a valid Google Cloud account and project set up. We are also going to use a service account to create the database, per Google's recommended best practices. Also, make sure you have Terraform installed on your local machine; Terraform provides official documentation on how to do this.

Create a new directory in the desired location, navigate to it, and paste the following code to create the BigQuery dataset and table:

provider "google" {
  credentials = file("<service_account_key_file>.json")

  project = "<ID of your GCP project>"
  region  = "us-central1"
  zone    = "us-central1-a"
}

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.51.0"
    }
  }
}

resource "google_bigquery_dataset" "bigquery_blog" {
  dataset_id    = "bigquery_blog"
  friendly_name = "blog"
  description   = "Dataset for blog"
  location      = "US"

  labels = {
    env = "dev"
  }
}

resource "google_bigquery_table" "bigquery_blog" {
  dataset_id = google_bigquery_dataset.bigquery_blog.dataset_id
  table_id   = "blogs"

  time_partitioning {
    type = "DAY"
  }

  labels = {
    env = "dev"
  }

  schema = <<EOF
[
  {
    "name": "blog_title",
    "type": "STRING",
    "mode": "NULLABLE",
    "description": "Name of blog"
  },
  {
    "name": "blog_date",
    "type": "DATETIME",
    "mode": "NULLABLE",
    "description": "Date of blog"
  }
]
EOF
}

Now let’s break down the above code:

The provider block uses the Google provider, a plugin used for resource management. Here we define the service account credentials file that we want to use to create the database, along with the project ID, region, and zone. For the service account, we follow least-privilege access and scope its permissions to BigQuery only.

Next, we have the resource blocks. Before we create the actual table, we need to create a dataset. A BigQuery dataset can be thought of as a container for tables: you can house multiple tables in a dataset or just a single one. Here we set the location to “US” and add labels so that we can easily identify the resources. For the table resource, note that we added a time-partitioning configuration; partitioning tables and data is recommended because it helps with maintainability and query performance.

Creating the database:

Then we will run the following commands to create the database in GCP with our service account.

terraform init

terraform plan

terraform apply

After running apply, you should see output confirming that the terraform apply succeeded.
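As a quick sanity check, assuming the Google Cloud SDK's bq command-line tool is installed and authenticated against the same project, you can inspect the new table and query it:

```shell
# Show the schema, labels, and partitioning settings of the new table
bq show bigquery_blog.blogs

# Run a query against the (currently empty) table
bq query --use_legacy_sql=false \
  'SELECT blog_title, blog_date FROM bigquery_blog.blogs LIMIT 10'
```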

Keyva: 5 Years of IT Automation and Innovation

The company continues its mission to help clients address the complexity of modern IT environments

“Good people attract good people,” says Jaime Gmach, Chief Executive Officer of Keyva as he reflects on the critical role the Keyva team has played in the company's growth and success.

Keyva, established in 2018, is dedicated to simplifying technology and enabling businesses to focus on innovation. As the organization celebrates its five-year anniversary, Gmach emphasizes the need for automation to accelerate business results.

“We launched Keyva to address a significant need for automation in the technology space,” says Gmach. “There are so many manual, repetitive tasks that can be automated to free up resources and deliver greater value to organizations.”

Keyva addresses the increasing complexity of modern IT environments by prioritizing three key areas:

  1. Automation and Orchestration: Keyva assists clients in automating processes and implementing solutions that enhance business efficiency, reduce risk, and cut costs.
  2. DevOps: Keyva emphasizes organizational culture change, encompassing people, processes, and technology, to help organizations become more agile and adopt a DevOps mindset.
  3. Hybrid Cloud: Keyva guides clients in planning and streamlining their cloud journey, ensuring incremental value, cost reduction, complexity mitigation, and risk management at every step.

When engaging with new clients, Keyva prioritizes small wins to demonstrate the potential of automation. This approach enables clients to spread the message within their organization, highlighting the positive impacts of automation across the entire enterprise. These small wins may involve, for example, freeing up teams to focus on mission-critical IT activities instead of repetitive tasks or automating data transfer across applications.

Keyva also has a focus on continuous education for both clients and team members. Keyva's approach involves tackling problems at the ground level and documenting the solutions to empower clients to handle similar issues independently in the future.

A key part of Keyva’s success is its partnership with Evolving Solutions, which helps enable modern operations in a hybrid cloud world. “The powerful combination of our comprehensive solutions and specialized expertise creates numerous benefits for our clients and it enables us to deliver exceptional outcomes,” says Gmach.

Looking ahead, Gmach aims to have Keyva and Evolving Solutions work together to help clients realize the potential of end-to-end automation. The need for automation in IT is not going away—and disruption is now the norm. Gmach notes, “Automation can’t be a future consideration. It is a requirement today because the benefits it offers in terms of efficiency, cost reduction, accuracy, scalability, customer experience, and competitive advantage make it essential for organizations to thrive in a rapidly changing and increasingly digitized world.”

The Keyva team's dedication, expertise, and commitment to simplifying technology have paved the way for a bright future. With automation as the driving force, the organization is poised to continue transforming businesses, enabling them to stay ahead in an era of constant disruption and transformation.

“As we embrace automation, we unlock new possibilities, elevate our efficiency, and position ourselves for sustained success in an ever-evolving technological landscape,” says Gmach. “I am excited by the opportunity to embark on a journey with our clients to harness the power of automation to create a future that is filled with endless opportunities.”

Migrating Docker Images Between Kubernetes Clusters Using Nexus Registry

In the world of containerized applications, Kubernetes has emerged as the standard for container orchestration, empowering developers to deploy and manage applications at scale. Docker images serve as the building blocks for these containers, and a reliable container registry like Nexus is essential for storing and distributing these images securely.

In this blog, we will explore how to migrate Docker images from one Kubernetes cluster to another using two separate Nexus registries: one for development and another for production. This process ensures a smooth and controlled transition of applications from dev to prod environments.

Prerequisites 

Before we proceed, ensure that Docker is installed and properly configured on your machine with access to both the dev and prod Kubernetes clusters. Set up Docker credentials to authenticate with the Nexus registries.

1. Pull Image from Dev Registry

docker login <dev-nexus-registry-url>

docker pull <dev-nexus-registry-url>/<image-name>:<tag>

Replace <dev-nexus-registry-url> with the URL of your dev Nexus registry, <image-name> with the image's name, and <tag> with the specific version or tag of the image.

2. Tag the Image for Prod Registry

docker tag <dev-nexus-registry-url>/<image-name>:<tag> <prod-nexus-registry-url>/<image-name>:<tag>

3. Push the Image to Prod Registry

docker login <prod-nexus-registry-url>

docker push <prod-nexus-registry-url>/<image-name>:<tag>

4. Verify the Pushed Image
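One possible way to verify is to pull the image back from the prod registry and confirm it is present locally with the expected digest:

```shell
# Pull the image back from the prod registry
docker pull <prod-nexus-registry-url>/<image-name>:<tag>

# Confirm the image is present and check its digest
docker images --digests <prod-nexus-registry-url>/<image-name>
```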

Conclusion:

Migrating Docker images between Kubernetes clusters using Nexus registries is a crucial process for safely moving applications from dev to production environments. By following the steps outlined in this blog, you can ensure a controlled transition, reducing the risk of discrepancies and unexpected behavior in your production environment.

