In the world of containerized applications, Kubernetes has emerged as the standard for container orchestration, empowering developers to deploy and manage applications at scale. Docker images serve as the building blocks for these containers, and a reliable container registry like Nexus is essential for storing and distributing these images securely.
In this blog, we will explore how to migrate Docker images from one Kubernetes cluster to another using two separate Nexus registries: one for development and another for production. This process ensures a smooth and controlled transition of applications from dev to prod environments.
Prerequisites
Before we proceed, ensure that Docker is installed and properly configured on your machine with access to both the dev and prod Kubernetes clusters. Set up Docker credentials to authenticate with the Nexus registries.
1. Pull Image from Dev Registry
docker login <dev-nexus-registry-url>
docker pull <dev-nexus-registry-url>/<image-name>:<tag>
Replace <dev-nexus-registry-url> with the URL of your dev Nexus registry, <image-name> with the image's name, and <tag> with the specific version or tag of the image.
2. Tag the Image for Prod Registry
docker tag <dev-nexus-registry-url>/<image-name>:<tag> <prod-nexus-registry-url>/<image-name>:<tag>
3. Push the Image to Prod Registry
docker login <prod-nexus-registry-url>
docker push <prod-nexus-registry-url>/<image-name>:<tag>
4. Verify the Pushed Image
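The post lists no commands for this step. One way to verify, shown here as a dry-run sketch with hypothetical registry and image values (not ones from this post), is to pull the image back from the prod registry and inspect it:

```shell
# Hypothetical values; substitute your own registry URL, image name, and tag.
PROD_REGISTRY="nexus-prod.example.com:8083"
IMAGE="myteam/myapp"
TAG="1.4.2"
REF="$PROD_REGISTRY/$IMAGE:$TAG"

# Commands are echoed as a dry run; remove "echo" to actually execute them.
echo docker pull "$REF"           # re-pull the image from the prod registry
echo docker image inspect "$REF"  # confirm the digest and labels locally
```

You can also browse the repository in the Nexus UI to confirm the tag is present.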
Conclusion:
Migrating Docker images between Kubernetes clusters using Nexus registries is a crucial process for safely moving applications from dev to production environments. By following the steps outlined in this blog, you can ensure a controlled transition, reducing the risk of discrepancies and unexpected behavior in your production environment.
(Post title: Migrating Docker Images Between Kubernetes Clusters Using Nexus Registry)

In my previous blog post, I demonstrated how to use Azure Storage to set up Remote Terraform State. In this post, I will illustrate the process of setting up an Azure Virtual Network (VNet).
This step is essential as it serves as a prerequisite for a future post, where I’ll explain how to deploy an Azure Kubernetes Service (AKS) cluster in a custom Azure VNet.
Azure Virtual Network
Azure Virtual Network offers several advantages for cloud networking. It allows you to isolate your resources, providing network security and access control through features like network security groups and virtual network service endpoints. VNet enables hybrid connectivity, connecting your Azure resources with on-premises infrastructure or other cloud environments. It also facilitates subnet and IP address management, allowing you to organize and control your resources effectively. VNet integrates with various Azure services, enabling seamless communication and integration, while regional connectivity and VNet peering support scalability and resource distribution.
Note: This article assumes you have Linux and Terraform experience.
Prerequisites
Code
You can find the GitHub Repo here.
Brief Overview of the Directory Structure
/terraform-aks/
├── modules
│ ├── 0-remotestate
│ └── 1-vnet
└── tf
└── dev
├── global
│ └── 0-remotestate
└── westus2
└── aksdemo
└── 1-vnet
Usage
To utilize the modules, make the necessary modifications to the main.tf file located at the root level, /tf/dev/westus2/aksdemo/1-vnet, according to your specific criteria.
Let's take a look at the resources needed to create our Virtual Network within our child module, located in terraform-aks/modules/1-vnet/main.tf.
We are creating the following:
Let’s go through main.tf and variables.tf files to understand the Terraform code.
main.tf
NAT Gateway Configuration: This section sets up a NAT (Network Address Translation) gateway and associates it with the VNet and subnet created earlier.
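The post's actual Terraform for this section isn't reproduced here. As a rough sketch of what such a configuration can look like with the azurerm provider (resource names, variables, and the subnet reference are illustrative assumptions, not the post's code), written to a scratch file:

```shell
# Write an illustrative NAT gateway configuration to a scratch file.
# Names, variables, and references below are assumptions, not the post's code.
cat > nat-gateway-sketch.tf <<'EOF'
resource "azurerm_nat_gateway" "this" {
  name                = "aksdemo-natgw"
  location            = var.location
  resource_group_name = var.resource_group_name
}

# Associate the NAT gateway with the subnet created earlier.
resource "azurerm_subnet_nat_gateway_association" "this" {
  subnet_id      = azurerm_subnet.this.id
  nat_gateway_id = azurerm_nat_gateway.this.id
}
EOF
```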
variables.tf
This file defines various input variables that can be customized when running the Terraform code. Here’s a breakdown of some of the variables:
The Terraform code sets up the Azure provider and defines local variables to extract information from the working directory path. It then calls the child module from terraform-aks/modules/1-vnet/ to create an Azure VNet and associated resources. The state file for Terraform is stored in an Azure Storage Account using the specified backend configuration. The placeholders <resource-group-name>, <storage_account_name>, <container_name>, and <key> will need to be replaced with actual values for your Azure environment.
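A minimal sketch of the backend block being described, written to a scratch file here; the four placeholders are the ones named above and must be replaced with your own values:

```shell
# Write a sketch of the azurerm backend block to a scratch file.
cat > backend-sketch.tf <<'EOF'
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource-group-name>"
    storage_account_name = "<storage_account_name>"
    container_name       = "<container_name>"
    key                  = "<key>"
  }
}
EOF
```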
/tf/dev/westus2/aksdemo/1-vnet/main.tf
Module “network”: This block calls the child module 1-vnet located at terraform-aks/modules/1-vnet. The child module is used to create an Azure Virtual Network (VNet) and associated networking resources. The module is invoked with the following arguments:
Terraform Block: This block specifies some Terraform-specific configurations:
Now that we have set up the root module in the terraform-aks/tf/dev/westus2/aksdemo/1-vnet/main.tf directory, it's time to provision the necessary resources using the child module located at terraform-aks/modules/1-vnet.
terraform init:
terraform init
terraform plan:
terraform plan
terraform apply:
terraform apply
Keyva CTO Anuj Tuli discusses our expertise in developing point-to-point integrations.
(Post title: CTO Talks: Integrations)

This article explores the process of utilizing Infrastructure as Code with Terraform to provision Azure Resources for creating storage and managing Terraform Remote State based on the environment.
Terraform State
Terraform state refers to the information and metadata that Terraform uses to manage your infrastructure. It includes details about the resources created, their configurations, dependencies, and relationships.
Remote state storage enhances security by preventing sensitive information from being stored locally and allows for controlled access to the state. It enables state locking, which prevents conflicts. It simplifies recovery and auditing by acting as a single source of truth and maintaining a historical record of changes.
Azure Storage
Azure Storage is a highly scalable and durable solution for storing various types of data. To store Terraform state in Azure Storage, we will be utilizing the Azure Blob Storage backend. Azure Blob Storage is a component of Azure Storage that provides a scalable and cost-effective solution for storing large amounts of unstructured data, such as documents, images, videos, and log files.
Note: This article assumes you have Linux and Terraform experience.
Prerequisites
Code
You can find the GitHub Repo here.
Brief Overview of the Directory Structure
/terraform-aks/
├── modules
│   └── 0-remotestate
└── tf
    └── dev
        └── global
            └── 0-remotestate
Usage
To use the module, modify the root-level main.tf and the child module's main.tf based on your criteria.
Let's take a look at the resources needed to create the storage for our remote state within our child module, located in terraform-aks/modules/0-remotestate/main.tf.
We are creating the following:
Resource Group: A resource group is a logical container for grouping and managing related Azure resources.
Storage Account: Storage accounts are used to store and manage large amounts of unstructured data.
Storage Container: A storage container is a logical entity within a storage account that acts as a top-level directory for organizing and managing blobs. It provides a way to organize related data within a storage account.
We use the Terraform Azure provider (azurerm) to define a resource group (azurerm_resource_group) in Azure. It creates a resource group with a dynamically generated name, using a combination of <youruniquename> and the value of the var.environment variable. The resource group is assigned a location specified by the var.location variable, and tags specified by the var.tags variable.

The Azure storage account (azurerm_storage_account) block is used to create a storage account. The name attribute is set using a combination of <var.name>, a placeholder that should be replaced with your own name, and the value of the var.environment variable.

An Azure storage container (azurerm_storage_container) is defined. The name attribute specifies the name of the container using the <youruniquename> placeholder, which should be replaced with your desired name.
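The module's main.tf itself isn't reproduced in the post. Below is a minimal sketch consistent with the description above, written to a scratch file; the account tier, replication type, and resource labels are assumptions, and argument names should be checked against your azurerm provider version:

```shell
# Write an illustrative version of the described resources to a scratch file.
# Tier, replication type, and labels are assumptions, not the post's code.
cat > remotestate-sketch.tf <<'EOF'
resource "azurerm_resource_group" "this" {
  name     = "<youruniquename>-${var.environment}"
  location = var.location
  tags     = var.tags
}

resource "azurerm_storage_account" "this" {
  name                     = "${var.name}${var.environment}"
  resource_group_name      = azurerm_resource_group.this.name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags
}

resource "azurerm_storage_container" "this" {
  name                  = "<youruniquename>"
  storage_account_name  = azurerm_storage_account.this.name
  container_access_type = "private"
}
EOF
```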
Next, let's take a look at the variables inside terraform-aks/modules/0-remotestate/variables.tf.

These variables provide flexibility and configurability to the 0-remotestate module, allowing you to customize various aspects of the resource provisioning process, such as names, locations, access types, and more, based on your specific requirements and preferences.

Next, let's take a look at the outputs located in terraform-aks/modules/0-remotestate/outputs.tf.

By defining these outputs, the outputs.tf file allows you to capture and expose specific information about the created resource group, storage account, and container from the 0-remotestate module.
Let's navigate to our root module located at terraform-aks/tf/dev/global/0-remotestate/main.tf.

This code begins by defining the azurerm provider, which enables Terraform to interact with Azure resources. The features {} block is empty, indicating that no specific provider features are being enabled or configured in this case.
The locals block is used to define local variables. In this case, it defines the following variables:

- cwd: This variable extracts the current working directory path, splits it by slashes ("/"), and then reverses the resulting list. This is done to extract specific values from the path.
- environment: This variable captures the third element from the cwd list, representing the environment.
- location: This variable is set to the value "westus2", specifying the Azure region where resources will be deployed.
- name: This variable is set to the value "aksdemo", representing the name of the project or deployment.
- tags: This variable is a map that defines various tags for categorizing and organizing resources. The values within the map can be customized based on your specific needs.

The next code block declares a module named remote_state and configures it to use the module located at ../../../../modules/0-remotestate. The source parameter specifies the relative path to the module. The remaining parameters (name, location, environment, and tags) are passed to the module as input variables, using values from the local variables defined earlier.
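Put together, the module call being described could look like the following sketch, written to a scratch file; this is an illustration assembled from the description, not the repo's exact file:

```shell
# Write a sketch of the described module call to a scratch file.
cat > module-call-sketch.tf <<'EOF'
module "remote_state" {
  source      = "../../../../modules/0-remotestate"
  name        = local.name
  location    = local.location
  environment = local.environment
  tags        = local.tags
}
EOF
```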
This code also includes a data block to fetch the current Azure client configuration. This data is useful for authentication and obtaining access credentials when interacting with Azure resources.
The commented-out terraform block represents a backend configuration for storing Terraform state remotely. This block is typically uncommented after the necessary Azure resources (resource group, storage account, container, and key) are created. It allows you to configure remote state storage in Azure Blob Storage for better state management.
Now that we have set up the root module in the terraform-aks/tf/dev/global/0-remotestate/ directory, it's time to provision the necessary resources using the child module located at terraform-aks/modules/0-remotestate. The root module acts as the orchestrator, leveraging the functionalities and configurations defined within the child module to create the required infrastructure resources.
After executing terraform apply and successfully creating your storage resources, you can proceed to uncomment the backend block in the main.tf file. This block contains the configuration for storing your Terraform state remotely. Once uncommented, run another terraform init command to initialize the backend and store your state in the newly created storage account. This ensures secure and centralized management of your Terraform state, enabling collaborative development and simplified infrastructure updates.
Enter a value of yes when prompted.
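The re-initialization described above can be sketched as a dry run. The -migrate-state flag is one way to copy existing local state into the newly configured backend; treat it as an assumption to verify against your Terraform version:

```shell
# Dry run: store and echo the command rather than executing it.
CMD="terraform init -migrate-state"
echo "$CMD"
```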
(Post title: Terraform Remote State With Azure Storage)

This article details the process of setting up email notifications for stopped tasks in Amazon Elastic Container Service.
Amazon Elastic Container Service (ECS)
Amazon Elastic Container Service (ECS) is a fully managed container orchestration service provided by AWS. It enables you to easily run and scale containerized applications in the cloud. ECS simplifies the deployment, management, and scaling of containers by abstracting away the underlying infrastructure.
An ECS task represents a logical unit of work and defines how containers are run within the service. A task can consist of one or more containers that are tightly coupled and need to be scheduled and managed together.
Amazon Simple Notification Service (SNS)
Amazon Simple Notification Service is a fully managed messaging service provided by AWS that enables you to send messages or notifications to various distributed recipients or subscribers. SNS simplifies the process of sending messages to a large number of subscribers, such as end users, applications, or other distributed systems, by handling the message distribution and delivery aspects.
Amazon EventBridge
Amazon EventBridge is a fully managed event bus service provided by AWS. It enables you to create and manage event-driven architectures by integrating and routing events from various sources to different target services. EventBridge acts as a central hub for event routing and allows decoupled and scalable communication between different components of your applications.
Get Started
This demo assumes you have a running ECS cluster.
1. Configure an SNS topic.
2. Subscribe to the SNS topic you created.
3. Confirm the subscription.
4. Create an Amazon EventBridge rule to trigger the SNS topic when an ECS task's state changes to STOPPED.
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "lastStatus": ["STOPPED"],
    "stoppedReason": ["Essential container in task exited"]
  }
}
Below is an example of the code
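The screenshot isn't reproduced here, but the same rule can also be created from the CLI. Below is a dry-run sketch using a trimmed version of the pattern above; the rule name and topic ARN are hypothetical placeholders:

```shell
# Hypothetical names and ARN; commands are echoed as a dry run.
RULE_NAME="ecs-task-stopped"
TOPIC_ARN="arn:aws:sns:region:account-id:topic-name"
PATTERN='{"source":["aws.ecs"],"detail-type":["ECS Task State Change"],"detail":{"lastStatus":["STOPPED"]}}'

echo aws events put-rule --name "$RULE_NAME" --event-pattern "$PATTERN"
echo aws events put-targets --rule "$RULE_NAME" --targets "Id=sns1,Arn=$TOPIC_ARN"
```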
5. Add permissions that enable EventBridge to publish SNS topics.
{
  "Sid": "PublishEventsToMyTopic",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Action": "sns:Publish",
  "Resource": "arn:aws:sns:region:account-id:topic-name"
}
Below is an example of how to use the JSON converter with the above code.
aws sns set-topic-attributes --topic-arn "arn:aws:sns:region:account-id:topic-name" \
  --attribute-name Policy \
  --attribute-value
Below is an example of how I used the AWS SNS set-topic-attributes command to set the new policy. This also contains the string I created using the JSON converter that adds the permissions.
You can confirm the updated policy with the aws sns get-topic-attributes --topic-arn command.

6. Test your rule
Verify that the rule is working by running a task that exits shortly after it starts.
{
"containerDefinitions":[
{
"command":[
"sh",
"-c",
"sleep 5"
],
"essential":true,
"image":"amazonlinux:2",
"name":"test-sleep"
}
],
"cpu":"256",
"family":"fargate-task-definition",
"memory":"512",
"networkMode":"awsvpc",
"requiresCompatibilities":[
"FARGATE"
]
}
Below is an example of how the code looks in the JSON editor
7. Run the task.
8. Monitor the task.
If your event rule is configured correctly, you will receive an email message within a few minutes with the event text.
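Steps 7 and 8 can also be done from the CLI. Below is a dry-run sketch; the cluster name, subnet ID, and <task-arn> are hypothetical placeholders, while the task definition family matches the JSON above:

```shell
# Hypothetical cluster/subnet; the family name matches the task definition above.
CLUSTER="demo-cluster"
TASK_DEF="fargate-task-definition"

# Commands are echoed as a dry run; remove "echo" to actually run the task.
echo aws ecs run-task --cluster "$CLUSTER" --launch-type FARGATE \
  --task-definition "$TASK_DEF" \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}"
echo aws ecs describe-tasks --cluster "$CLUSTER" --tasks "<task-arn>"
```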
(Post title: ECS: Setting Up Email Notifications For Stopped Tasks)

This article reviews the process to upgrade an Amazon DocumentDB cluster from version 4.0 to 5.0 with DMS.
Amazon DocumentDB
Amazon DocumentDB is a fully managed, NoSQL database service provided by AWS. It is compatible with MongoDB, which is a popular open-source document database. Amazon DocumentDB is designed to be highly scalable, reliable, and performant, making it suitable for applications that require low-latency and high-throughput database operations.
AWS Database Migration Service
AWS DMS simplifies the process of database migration by providing an efficient and reliable solution for moving databases to AWS or between different database engines. It supports a wide range of database sources, including on-premises databases, databases running on AWS, and databases hosted on other cloud platforms.
Get Started
This demo assumes you have an existing DocumentDB cluster with version 4.0.
1. Create a new DocumentDB cluster with version 5.0. Use this link to help you get started.
2. Authenticate to your Amazon DocumentDB cluster 4.0 using the mongo shell and execute the following commands:
db.adminCommand({
  modifyChangeStreams: 1,
  database: "db_name",
  collection: "",
  enable: true
});
AWS DMS requires access to the cluster’s change streams.
3. Migrate your indexes with the Amazon DocumentDB Index Tool.
(Screenshot: connection demonstration, with hostname removed.)
4. Create a replication instance.
5. Update Security Groups.
6. Create the source endpoint.
7. Create the target endpoint.
8. Create the database migration task.
9. Monitor the migration task.
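Steps 4 through 8 map onto a handful of CLI calls; the console wizard is an equally valid route. Below is a heavily abbreviated dry-run sketch with hypothetical identifiers, hostnames, and credentials:

```shell
# Hypothetical identifiers, hostnames, and credentials; echoed as a dry run.
REPL_ID="docdb-upgrade"
REPL_CLASS="dms.t3.medium"

echo aws dms create-replication-instance \
  --replication-instance-identifier "$REPL_ID" \
  --replication-instance-class "$REPL_CLASS"
echo aws dms create-endpoint --endpoint-identifier docdb4-source \
  --endpoint-type source --engine-name docdb \
  --server-name docdb4.cluster.example.com --port 27017 \
  --username dbadmin --password '<password>'
echo aws dms create-endpoint --endpoint-identifier docdb5-target \
  --endpoint-type target --engine-name docdb \
  --server-name docdb5.cluster.example.com --port 27017 \
  --username dbadmin --password '<password>'
```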
You are now ready to change your application’s database connection endpoint from your source Amazon DocumentDB 4.0 cluster to your target Amazon DocumentDB 5.0 cluster.
(Post title: Upgrading an Amazon DocumentDB Cluster From Version 4.0 to 5.0 With DMS)

Automation is an essential aspect of modern operations, offering numerous benefits such as increased efficiency, reduced errors, and improved productivity. However, implementing automation without proper planning and strategy can lead to disappointing results and wasted resources. To ensure success, organizations need to follow a systematic approach.
At Keyva and Evolving Solutions, we work with an array of clients who range from being highly mature in their automation processes and tools to organizations that are just starting and need guidance to attain operational efficiencies. Across this spectrum, many organizations lack an overarching framework for automation.
To simplify the process, we have outlined the nine essential steps for implementing automation.
Implementing automation in your organization can revolutionize your IT operations and drive significant benefits. By following the steps outlined above, you can ensure that your automation efforts are successful and aligned with your organization's objectives.
Let's talk. If you would like to discuss how Keyva and Evolving Solutions can help you implement automation strategies that drive better business outcomes in your organization, contact us.
(Post title: Mastering Automation: Nine Steps to Implementing Automation Effectively)

(Post title: CTO Talks: How DevOps has Transformed)