Blog & Insights

3 Steps to Achieving Configuration Management Database (CMDB) Nirvana

You might've heard this before: the configuration management database (CMDB) should be the single source of truth. But what does that mean? And how can you achieve it? With all the different third-party applications in your environment, consolidating all the data into the CMDB may look like a gargantuan effort. It takes a decent amount of work, but it is easier than most would anticipate. By taking the three-step approach below, you can come close to CMDB nirvana: a current and accurate CMDB.

Step 1: Develop and agree on a CMDB schema and the necessary mappings

This step requires the IT Service Management (ITSM) teams and the various business units to agree on a CMDB schema and how it will be organized. The ITSM teams then create data mappings to map data captured from the various software components in the environment into the specified fields of the CMDB schema. This also includes customizing fields within the CMDB forms and configuring API access.
Step 2: Integrate and automate

Integrate the CMDB with all sources of data per the identified data mappings, either by leveraging existing integrations or by creating new ones. The CMDB can be populated retrospectively, as part of an extract-transform-load process, or prospectively, as part of CI creation through automation. Population is a multi-step process whereby data is captured by one of the discovery tools and automatically updated in the CMDB. Automating CI population also helps create relationships between CIs and Change or Incident tickets, making the review process for Change Advisory Boards much easier.

Step 3: Optimize and reconcile

Once the data is in the CMDB, it is important to make sure it is accurate. Given that many different sources of data may compete for the same target field within the CMDB, weights can be assigned to each source to improve accuracy. For example, the asset tag of a device may carry a higher weight when it comes from a discovery tool, while the CPU information captured by a configuration management system can be trusted more than any other source. Furthermore, the data captured from all the various sources should be placed in staging datasets; it is then up to the administrators of the system to define reconciliation rules that automatically filter the required data into the production dataset for consumption.

The above may seem like an oversimplification of the tasks required to have a fully functional CMDB, but many organizations have successfully adopted a version of this breakdown. It is highly likely that the most time will be spent upfront, during the schema and data mapping exercise.
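The source-weighting described in Step 3 can be sketched as follows. This is a minimal illustration, not a feature of any particular CMDB product; the source names, field names, and weights are all hypothetical.

```python
# Sketch of source-weighted CMDB reconciliation (illustrative only).
# Source names, weights, and fields are hypothetical and not tied to
# any specific CMDB product.

SOURCE_WEIGHTS = {
    "discovery_tool": {"asset_tag": 90, "cpu_info": 40},
    "config_mgmt":    {"asset_tag": 50, "cpu_info": 80},
}

def reconcile(staged_records):
    """Merge staged records into one production CI record.

    For each field, keep the value reported by the source that carries
    the highest weight for that field.
    """
    production = {}
    best_weight = {}
    for source, record in staged_records.items():
        for field, value in record.items():
            weight = SOURCE_WEIGHTS.get(source, {}).get(field, 0)
            if weight > best_weight.get(field, -1):
                best_weight[field] = weight
                production[field] = value
    return production

staged = {
    "discovery_tool": {"asset_tag": "A-1001", "cpu_info": "4 vCPU"},
    "config_mgmt":    {"asset_tag": "A-9999", "cpu_info": "8 vCPU"},
}
# Asset tag wins from discovery; CPU info wins from config management.
print(reconcile(staged))
```

In a real deployment the staging datasets and weights would live inside the CMDB platform itself; the point here is only that per-field weighting resolves competing sources deterministically.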
By investing time and effort in an accurate CMDB, organizations can understand the various configurations and their relationships within the environment, and thereby easily track and manage them. Associates at Keyva have been helping customers set up and optimize their ITSM and CMDB systems for the past two decades. We've also helped organizations develop integrations between the CMDB and third-party application software to accelerate CMDB population and keep it current and accurate. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at info@keyvatech.com.
About the Author

Anuj Tuli, Chief Technology Officer

Anuj joined Keyva from Tech Data, where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on cloud automation, DevOps, cloud readiness assessment, and migration projects for healthcare, banking, ISP, telecommunications, government, and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the cloud and automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC, and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/

Configuration Drift: The Bane of Continuous Deployments

If you have tried setting up a DevOps pipeline to achieve continuous deployment and run into configuration drift issues, you will know the pains quite well. A consistent and uniform configuration is a prerequisite to any automation.
If the automated workflow finds that the target configuration is not as anticipated, it will either take the exception route or fall back to a notification for manual intervention.

What is configuration drift?

When a given system configuration or application configuration changes from the "blessed" or "vetted" state to another state, that is configuration drift. For example, if the IT team provides a web server instance with a preset configuration file, and that configuration is changed as part of the application deployment or customization process, that constitutes configuration drift. Similar drifts can occur at the OS level, for packages, or for commercial software.

Why does it matter?

It is important to mitigate and remediate configuration drift because without doing so the environment becomes unmanageable, especially as you scale. Consider the web server example above when there is an application outage in your production environment. As part of finding the root cause of the failure, you now also need to walk back every configuration change from the original version to rule out issues caused by those changes, which diverts time and effort to tangential activities. With consistent deployment automation, you can confidently evaluate the issues plaguing the core application, holding all other things constant. Another reason that reducing or preventing configuration drift is paramount is to make sure that any additional deployments on the base tier can be automated. In the example above, it is much easier to automate application deployment on a web server delivered as PaaS than to deploy the same application on a web server that has been customized, or has drifted from its desired state.

How to mitigate or prevent configuration drift?

There are many ways to address configuration drift.
The common factor in all scenarios is making sure the deployments are automated. First, define what constitutes configuration drift. For example, if you are providing your customers with IaaS machines, would adding a new printer constitute configuration drift? Or would that be classified as an allowed customization with no material effect on your service delivery? The rules of configuration need to be defined. Second, automate the deployments of your builds and configurations. This could mean using an orchestration framework to deploy the desired service through a self-service catalog, or another automated mechanism. Third, make sure that the release and update process for the various infrastructure and application components flows through a source control management (SCM) system. Any and all deployments should pick up the latest version of the configs from SCM, and by deploying a systems configuration management solution, you can check the configuration state of your target systems against the latest versions of those configurations in SCM. There are many other steps you can take, depending on the severity of the drift and the penalty you pay for not addressing it. Keyva has helped several organizations address the common challenge of preventing configuration drift, using processes and tools tailored to their infrastructure and applications. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at info@keyvatech.com.
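As a rough illustration of the comparison at the heart of any drift check, comparing a target system's actual settings against the vetted version held in SCM, here is a minimal sketch. The configuration keys, values, and the "allowed customization" list are all hypothetical.

```python
# Minimal sketch of configuration-drift detection: compare a system's
# actual settings against the desired ("blessed") version from SCM.
# Keys and values are hypothetical.

def detect_drift(desired, actual, ignored=()):
    """Return {key: (desired, actual)} for every setting that drifted.

    Keys listed in `ignored` are treated as allowed customizations and
    are never reported as drift.
    """
    drift = {}
    for key in set(desired) | set(actual):
        if key in ignored:
            continue
        if desired.get(key) != actual.get(key):
            drift[key] = (desired.get(key), actual.get(key))
    return drift

desired = {"max_clients": 200, "keepalive": "on", "doc_root": "/var/www"}
actual  = {"max_clients": 512, "keepalive": "on", "doc_root": "/var/www",
           "default_printer": "lab-prn-01"}

# "default_printer" is classified as an allowed customization,
# so only max_clients is reported as drift.
print(detect_drift(desired, actual, ignored={"default_printer"}))
```

A real systems configuration management tool does this at scale, but the design choice is the same: the rules declaring which keys count as drift must be defined before any automated remediation can act on the result.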
Application Modernization: A Path to Cloud-Native

Over the last few years, we've all been exposed to different meanings of the word "Cloud". Other terms, like "application development" and "application modernization", are sometimes used just as nebulously.
Application development could mean a few different things: creating an integration between two existing software modules using APIs, mobile application development, developing a standalone program that serves a business purpose, or all of the above. No one definition is more correct than another; it depends on what is relevant for you and how you define an application. Application modernization can also cover a few things: recoding an application in a new programming language, moving the application from one platform to another, moving it to the public cloud, or making the application architecture more agile by breaking it down into a microservices framework. If the objective is to have your legacy application leverage all the benefits available from a distributed cloud architecture, the process of making the necessary modifications to that application's architecture or implementation is referred to as application modernization. "Cloud-native" is the term used to describe the characteristics of born-in-the-cloud applications: they are built to run in a distributed fashion, and are services-aware, resilient, and scalable. But the largest proportion of applications in many industries still runs in legacy on-premises environments. Short of doing a rip-and-replace of those functional applications, there is a need to transform and modernize them to fit the new cloud architectures. Let's look at an example. Your legacy application may be monitored for metrics with application-specific context using one of the commercial APM tools. As part of the modernization process, applying a best-practices approach, the metrics and logs can be exposed as a service (a microservice) via a /metrics endpoint through application instrumentation.
This makes it much easier to monitor the metrics microservice and filter out the needed readings. It also makes it easier to upgrade the metrics service if you add or remove the exposure of specific parameters. Associates at Keyva have helped multiple organizations assess their application readiness and carry out application modernization, including refactoring existing applications, adding wrappers over current applications so they can be consumed easily by DevOps processes, and more. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at info@keyvatech.com.
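The /metrics pattern described above can be sketched in a few lines. This is an illustration only, not tied to any particular APM tool; the metric names and port are hypothetical, and a real service would more likely use an established metrics client library.

```python
# Illustrative sketch of exposing application metrics via a /metrics
# endpoint. Metric names and port are hypothetical.

from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = {"app_requests_total": 1024, "app_errors_total": 3}

def render_metrics(metrics):
    """Render metrics in a simple 'name value' plain-text format."""
    return "\n".join(f"{name} {value}" for name, value in metrics.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics(METRICS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def serve(port=8000):
    """Block and serve /metrics on the given port."""
    HTTPServer(("", port), MetricsHandler).serve_forever()

# serve(8000)  # uncomment to run the endpoint
```

Because the endpoint is just a small, separately deployable service, a monitoring system can scrape it on a schedule, and the metrics surface can be versioned and upgraded independently of the legacy application it instruments.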
2018 DORA Report Review

by Anuj Tuli, Chief Technology Officer

If you haven't had a chance to review this year's DORA report yet, check it out. It has great information on software delivery performance, organizational performance on DevOps, and trends around cloud, platform, and open source. A few key observations come to light through this report. Let's take a look.
Organizations that focused on high software delivery performance were able to deliver quickly on new requirements, achieve customer satisfaction, and keep up with regulatory requirements. A small group of organizations, aptly named "elite performers", practice software delivery performance at the highest levels: the time it takes them to move code from commit to production is less than an hour. Organizational performance is directly tied to software delivery performance. The elite performers in the categories of automation, configuration management, testing, deployments, and change approvals all had 10% or less manual work in those areas, freeing up as much as 50% of high-performing teams' time for net-new work (e.g. new features and functionality). That is a huge gain in productivity and innovation, and the analysis echoes the fact that maturing the automation of IT processes in your environment tangibly and positively affects organizational productivity. For organizations to raise their level of software delivery performance, it is important to adopt cloud-like characteristics not only in their underlying infrastructure but in their organizational culture as well. Cloud characteristics are easier to achieve when using Platform-as-a-Service and deploying infrastructure as code. For cloud-native applications, public cloud offerings are an obvious fit, but these applications must be built to be resilient, elastic, and easy to manage. In the survey, 58% of respondents said they were using open-source components, libraries, and platforms, which points to the expansion of open-source software in organizations today. Users that implement infrastructure as code to manage their software deployments, and ones that use containers, are more likely to be elite performers. You can download the full report from DORA's website.
Red Hat OpenShift: Day 1 Install Guide

By Anuj Tuli, Chief Technology Officer

Here are the steps to install Red Hat OpenShift Container Platform from scratch for your lab or dev environments. We will walk through setting up the cluster with 1 master and 1 node, but you can set up as many nodes as you'd like. Since we are not setting up the master nodes in an HA configuration, we recommend limiting this setup to your lab environments. This guide is valid for RHEL 7 and OpenShift Container Platform 3.5. We will set up OpenShift Container Platform on VMware virtual machines; if you are using KVM or another hypervisor, the hardware configuration steps may differ slightly. First, let us take a look at the prerequisite steps that need to be addressed before we begin our work; these apply to both VMs (the Master VM and the Node VM).

Prepare and Install Packages (on Master and Nodes)

Since we set up RHEL with minimal packages, we would need to enable all the needed rpms. First register with subscription manager using your Red Hat profile credentials.
subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
To enable OpenShift rpms, you will need to find the associated Pool ID and attach it separately.
subscription-manager list --available --all
Find the pool ID associated with the Red Hat OpenShift Container Platform, and run:
subscription-manager attach --pool <Pool ID>
You will now be able to enable the associated repos.
subscription-manager repos --enable rhel-7-server-ose-3.5-rpms
subscription-manager repos --enable rhel-7-server-openstack-10-rpms
Optionally, if you want to set up OC Cluster in HA configuration:
subscription-manager repos --enable="rhel-ha-for-rhel-7-server-rpms"
Finish setting up other utils:
yum repolist
yum -y update
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
yum install gcc python-virtualenv
yum install atomic-openshift-utils

Set up dnsmasq (on Master and Nodes)

When using OpenShift SaaS offerings, the service provider takes care of setting up DNS and routing. But since we are setting up the cluster from the ground up, we need to set up these components manually. We will be using dnsmasq for our lab.
yum -y install dnsmasq bind-utils
We will now modify the /etc/dnsmasq.conf configuration file. It is recommended that you back up the existing file before modification, in case you need to revert to it later. On the Master, add or modify the #address and #resolv-file sections as follows:
address=/<subdomain.domain.com>/<master IP>
resolv-file=/<path>/<custom-filename>
On each Node(s), add or modify the #address and the #resolv-file sections as follows:
address=/<subdomain.domain.com>/<nodeIP>
resolv-file=/<path>/<custom-filename>
This /<path>/<custom-filename> is where we will list our nameserver (in most cases, your subnet gateway) IP address. On the Master, create the file
vi <path>/<custom-filename>
And add the line
nameserver <IP>
On each Node(s), create the file
vi <path>/<custom-filename>
And add the line
nameserver <IP>
We will also need to update the /etc/resolv.conf file and modify the existing nameserver entry to be a loopback address. On the Master and Node(s), open the /etc/resolv.conf file and modify the nameserver entry
nameserver 127.0.0.1
Note that if you reboot your VMs, you may need to update the nameserver entry in this file again. On both the Master and the Node(s), we have disabled the firewall service already (as a pre-requisite). We will now enable the dnsmasq service:
systemctl enable dnsmasq && systemctl start dnsmasq
In order to make sure the dnsmasq service is working correctly, you can try to ping the <subdomain>.<domain> address you defined in the /etc/dnsmasq.conf file under the address section.
ping <subdomain>.<domain>
If you run this command on the Master, it should return the IP address of your Master server. You could also add another custom subdomain in front (any string), and it should return the same IP address. For example -
ping <my_sub>.<subdomain>.<domain>
should return the IP of the Master server as well.

Configuring Docker (on Master and Nodes)

Our next step is to set up Docker on these machines.
yum -y install docker-1.12.6
We will be modifying the /etc/sysconfig/docker-storage-setup file. It is recommended that you back up the existing file before modification. But first, we need to find out what our volume is named. If you recall, we set up additional 40 GB volumes on our machines for use as Docker storage. The output of fdisk -l will give you the name of your additional disk volume; in my case, it was /dev/sdb, so we will use the sdb name in our docker-storage-setup file. Open /etc/sysconfig/docker-storage-setup in your favorite editor, comment out all existing lines, and add the following entries:
DEVS=sdb
VG=docker-vg
Save and close the file. We will disable the cluster locking configuration for LVM
lvmconf --disable-cluster
And then run our Docker storage setup
docker-storage-setup
You can verify the setup using the command
lvs
It will show you the attributes and sizes associated with the various volumes. We can now start the Docker service
systemctl enable docker && systemctl start docker

Openshift install (on Master)

We can now finally get started with the OpenShift install steps.
yum -y install atomic-openshift-docker-excluder atomic-openshift-excluder atomic-openshift-utils bridge-utils bind-utils git iptables-services net-tools wget
Once we have all the packages ready to go, we run
atomic-openshift-installer install
The setup asks a number of questions. After selecting a user that you'd like to enable for SSH access, you will be asked to select a variant for the install. We will select option [1] for OpenShift Container Platform, which is also the default. You will be asked to enter the hostname or IP of your Master node, and to choose whether the host will be RPM-based or container-based. The installer will then provide a brief summary of the information entered and will prompt for additional hosts. We will select y, and this time we will enter the hostname or IP of our Node server. You can configure additional Node servers in this section as well. For the 'New Default Subdomain', you can enter the <subdomain>.<domain> information as you defined it in the /etc/dnsmasq.conf file; this can be used later for external routing. If you have any http or https proxies, you can configure them on the next screen. The installer then shows a summary of all the information captured and what the configuration will look like. Once you confirm, the installer kicks off the setup; it can take a while to complete. Once the installation has completed successfully, you can verify the running services using the following command
systemctl status | grep openshift
The output of this command will list the services running on both master and node(s). If you run the same command on the node(s), it will only show the services running on that node. You can also run some sample OC commands on the Master to make sure all looks good
oc get pods
oc get projects
oc get nodes
That should do it! You have now set up a single-node OpenShift cluster in your lab environment. The process of creating users depends on which identity provider you set up with OpenShift. You can access the OpenShift console via https://<Master_IP_or_FQDN>:8443/console. If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at info@keyvatech.com.

Closed-Loop Automation: A Primer

By Anuj Tuli, Chief Technology Officer

As the industry moves towards self-healing containers, agile applications, and seamless infrastructures, there is an impending need to set up auto-remediation of incidents and configuration drifts. Infrastructure and operations teams have to depend heavily on automated tools, systems, and processes to manage the ever-expanding IT landscape. The Closed-Loop Incident Process is one such subset of closed-loop automation, and works as follows:
  1. You receive an alert for a service down in your operations center console
  2. An automation framework picks up the alert, and fetches information contained in the various fields (e.g. reason for alert, configuration item). If the configuration item that alerted does not exist in the CMDB, then it creates the corresponding CI in the CMDB (Configuration Management Database). If the CI already exists in the CMDB, it creates an Incident Ticket in your IT Service Management system.
  3. The framework auto-remediates the issue based on the custom runbooks you have defined for your organization. For example, if the disk is full, it deletes the logs and removes any temporary files. The incident ticket is also updated with the results of the remediation effort.
  4. If the auto-remediation succeeds, the associated incident ticket is updated, and closed. If the auto-remediation fails for any reason, a notification is then sent out for human intervention.
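The steps above can be sketched as a simplified loop. The runbook, CMDB, and ticket structures here are hypothetical stand-ins for real ITSM and monitoring APIs, and the flow is condensed for illustration.

```python
# Simplified sketch of the closed-loop incident flow described above.
# The CMDB, ticketing, and runbook structures are hypothetical stubs
# standing in for real ITSM/monitoring APIs.

RUNBOOKS = {
    "disk_full": lambda ci: f"cleaned logs and temp files on {ci}",
}

def handle_alert(alert, cmdb, tickets):
    """Process one alert end to end; return the closing ticket note."""
    ci = alert["ci"]
    if ci not in cmdb:                       # step 2: ensure the CI exists
        cmdb[ci] = {"source": "auto-created"}
    ticket = {"ci": ci, "reason": alert["reason"], "status": "open"}
    tickets.append(ticket)
    runbook = RUNBOOKS.get(alert["reason"])  # step 3: try auto-remediation
    if runbook is None:
        ticket["status"] = "needs-human"     # step 4: escalate on failure
        return "notified on-call engineer"
    ticket["status"] = "closed"              # step 4: close on success
    return runbook(ci)

cmdb, tickets = {}, []
note = handle_alert({"ci": "web01", "reason": "disk_full"}, cmdb, tickets)
print(note)
```

In production, each stub call would be an API integration (monitoring console, CMDB, ITSM ticketing, orchestration engine), but the control flow, detect, record, remediate, then close or escalate, is the essence of the closed loop.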
Many organizations have already adopted this automated remediation process and expanded it to cover the top five alert types on which they spend the most time. In most cases, they are automating consistent, repeatable processes that an engineer would otherwise work through again and again, day in and day out. Automating these processes has saved these organizations many manual hours, reduced human error, and added tangible efficiency to their infrastructure and operations teams. If you need assistance in building an auto-remediation framework, Keyva can help. If you'd like to talk about how other organizations have garnered benefits from such automation, please reach us at info@keyvatech.com.

By Anuj Tuli, Chief Technology Officer

Organizations are always looking to improve efficiencies within their infrastructure. One such area centers on what to use to run applications: virtual machines or containers? Comparing the two is a lot like comparing apples and oranges. These are two very distinct, very different technologies, and the pros and cons of each vary widely depending on your needs.

High level component architecture for Containers and VMs

Here are some simple comparisons to consider as you explore which option is best for your environment:
VIRTUALIZATION
• VMs virtualize hardware.
• Containers virtualize applications and dependent libraries.
ENCAPSULATION
• Virtual machines encapsulate the entire operating system.
• Containers encapsulate only the application layer (or database layer) and application libraries.
HOSTING
• VMs are hosted on physical machines, managed through a hypervisor layer, and consume the resources of the hardware on which they reside.
• Containers can be hosted by physical or virtual machines, managed through an orchestration service (like Kubernetes), and consume the resources of the host and the operating system on which they reside.
PORTABILITY
• VMs are (generally) not portable. Only if the same hypervisor layer hosts the VMs on-premises and in the cloud can they be dynamically ported to achieve a seamless hybrid architecture.
• Containers are natively portable, since the application runtimes are encapsulated within the container, and are a great fit for hybrid architectures.
SCALABILITY
• Scripting or automation needs to be set up to dynamically scale VMs in or out.
• With container orchestration modules, scale-in and scale-out features are natively available to containers.
STORAGE
VMs and containers are both able to attach storage. The difference is in the scope and lifecycle of the storage volume.
• Multiple containers on the same VM can have attached storage volumes that are separated in scope from each other.
• Container-attached storage goes away when the container shuts down.
NETWORK
VMs and containers can both achieve network segmentation, either at the service level or at the individual unit level.
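The scalability contrast above is easiest to see in the scaling decision itself: a container orchestrator computes desired replica counts natively (Kubernetes' Horizontal Pod Autoscaler uses roughly the calculation below), whereas for VMs the equivalent logic must be scripted by hand. A sketch of that calculation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """HPA-style scaling decision: scale the replica count by the ratio
    of the observed metric (e.g. average CPU) to its target, rounded up.
    Roughly: ceil(currentReplicas * currentMetric / targetMetric)."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))
```

For example, 4 replicas averaging 90% CPU against a 60% target would be scaled out to 6; the same formula scales back in when load drops.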
The question of whether to use VMs or containers is less a matter of comparing features and benefits, and more a question of the use case at hand. If you are an organization that runs fewer apps, you might look to VMs as your preferred framework, while an application-centric company may consider containers. When the goal is to make the most of your physical hardware infrastructure, VMs are tremendously useful. When the goal is to make sure your applications are scalable, resilient, secure, and offer zero downtime despite needing frequent updates, an implementation of containers might be worth considering. If you are still unsure which option is best, don't be afraid to involve a trusted partner like Keyva, who knows that every company's today and tomorrow looks different and will meet you where you are.
CTO Anuj Tuli

Anuj joined Keyva from Tech Data, where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments, and Migrations projects for the healthcare, banking, ISP, telecommunications, government, and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC, and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/

Security Considerations for Modern Applications

As application architectures evolve to accommodate current trends and technologies, the security model needs to evolve with them. Developer and operations teams need to think about securing various aspects of the application lifecycle.
Organizations should consider the following security paradigms:
• Physical security – security of the datacenter that houses the application infrastructure, and controlled access to the racks and switches.
• Network security – access to the organization's network via secure VPN tunnels, firewalls for access to specific ports, network micro-segmentation, traffic isolation, partitioned LANs, protection against DDoS attacks, intrusion detection and elimination, and security of the private gateway connecting on-premises and public cloud components.
• Logical access security – role-based access control, a hierarchical Active Directory structure, and controlled privileged access.
• Data security – encryption capability, data integrity and backup, data classification, persistent protection, and controlled sharing.
• Application security – authentication rules, authorization rules, session management, role-based access, limiting the exposure of functions (via API), the latest versions of binaries, the latest patches applied to the underlying platform, limited direct access to the database, exception handling, logging and auditing, and SSL certificates.
There is no panacea for protecting your application or the data within it – it is an ongoing process. All aspects of security require constant review and updates. But by following a combination of industry best practices and strategies to secure access to the application and the content within it, IT teams can rest easy knowing that their business-critical applications will be available when their users want them. Keyva can provide a holistic assessment of your current security state, and recommendations toward a future steady state. Are you interested in learning more about how various organizations are achieving security for their applications and data? If so, please reach out to one of our associates and we'd be glad to talk with you about our experiences.
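One of the paradigms listed above, role-based access control, reduces to a small core of logic: map roles to permissions, then grant an action only if some role carries it. A minimal sketch (the role names and permission sets here are invented for illustration):

```python
# Hypothetical role-to-permission mapping for an RBAC illustration.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(roles, action):
    """Grant the action if any of the user's roles carries it;
    unknown roles contribute no permissions."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)
```

Real systems layer session management and auditing on top of a check like this, but the authorization decision itself stays this simple.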
3 Steps to Achieving Configuration Management Database (CMDB) Nirvana

You might've heard this before – the configuration management database (CMDB) should be the single source of truth. But what does that mean? And how can you achieve it? With all the different third-party applications in your environment, you might think consolidating all the data into the CMDB would be a gargantuan effort. It may be a decent amount of work, but the reality is that it is easier than most would anticipate. By taking the three-step approach below, you can come close to CMDB nirvana – a current and accurate CMDB:

Step 1: Develop and agree on a CMDB schema and the necessary mappings

This step requires the IT Service Management (ITSM) teams and the various business units to agree on a CMDB schema and how it will be organized. The ITSM teams then create data mappings to map the data captured from various software components in the environment into the specified fields of the CMDB schema. This also includes customizing various fields within the CMDB forms and customizing API access.
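The data-mapping exercise in Step 1 amounts to a translation table from each source tool's field names to the agreed CMDB schema. A sketch of that idea, with source names and field names invented for illustration:

```python
# Hypothetical field mappings from two data sources into one CMDB schema.
# Each entry maps a source field name to its target CMDB field.
FIELD_MAPS = {
    "discovery_tool": {"hostName": "name", "serialNo": "asset_tag"},
    "config_mgmt":    {"fqdn": "name", "cpu_count": "cpu"},
}

def map_to_cmdb(source, record):
    """Translate a raw record from a given source into CMDB schema fields,
    dropping any fields the agreed mapping does not cover."""
    mapping = FIELD_MAPS[source]
    return {cmdb_field: record[src_field]
            for src_field, cmdb_field in mapping.items()
            if src_field in record}
```

The integration work in Step 2 then amounts to running every captured record through its source's mapping before loading it into the CMDB (or its staging dataset).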
Step 2: Integrate and automate

Integrate the CMDB with all sources of data according to the identified data mappings. This can be done by leveraging existing integrations or by creating new ones. The population of data within the CMDB can happen as part of an extract-transform-load process (retrospectively), or as part of the creation of a CI using automation (prospectively). CMDB population is a multi-step process whereby data is captured via one of the discovery tools and automatically updated within the CMDB. Automating CI population also helps create relationships between CIs and Change or Incident tickets, thereby making the review process for Change Advisory Boards much easier.

Step 3: Optimize and reconcile

Once the data is in the CMDB, it is important to make sure it is accurate. Given that many different sources of data may compete for the same target field within the CMDB, weights can be assigned to each source to improve accuracy. For example, the asset tag of a device may carry a higher weight when that data comes from a discovery tool, while the CPU information captured by a configuration management system can be trusted more than any other source. Furthermore, the data captured from all the various sources should be put in staging datasets. It is up to the administrators of the system to define rule sets and reconciliation rules that automatically filter the required data into the production dataset for consumption.

The above may seem like an oversimplification of the tasks required to have a fully functional CMDB, but many organizations have successfully adopted a version of this breakdown. It is highly likely that the most time will be spent upfront during the CMDB schema and data-mapping exercise.
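The weighted reconciliation in Step 3 can be sketched as choosing, per CMDB field, the value proposed by the most trusted source. The weights and source names below are illustrative, following the example in the text (discovery wins on asset tag, the configuration management system wins on CPU):

```python
# Illustrative per-field trust weights for competing data sources.
WEIGHTS = {
    "asset_tag": {"discovery_tool": 90, "config_mgmt": 50},
    "cpu":       {"discovery_tool": 40, "config_mgmt": 90},
}

def reconcile(field, staged_values):
    """staged_values maps source name -> proposed value from the staging
    dataset; return the value from the highest-weighted source for
    this field (unknown sources default to weight 0)."""
    best_source = max(staged_values, key=lambda src: WEIGHTS[field].get(src, 0))
    return staged_values[best_source]
```

A production reconciliation engine adds rule sets, precedence tie-breaking, and audit trails, but this is the core precedence decision those rules encode.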
By investing time and effort toward an accurate CMDB, organizations can effectively understand the various configurations and their relationships within the environment, and thereby easily track and manage them. Associates at Keyva have been helping customers set up and optimize their ITSM and CMDB systems for the past two decades. We've also helped organizations develop integrations between the CMDB and third-party application software to accelerate the population of the CMDB and keep it current and accurate. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at info@keyvatech.com