
Blog & Insights

How to set up API abstraction service in Kong

In our most recent post we talked about how to set up Kong in your environment. We will now take a look at how to set up an API abstraction service in Kong, so you can route your requests to backend fulfillment APIs.  

In the example we look at today, we will set up a generic call for "Get Incident Ticket" and have it translated via Kong to a backend ServiceNow API call for ServiceNow Incident Management. You can use this example to set up similar API call translations to any microservice for any custom or commercial application.  

Step 1 – Check to make sure the Kong service is up and running 

kong health

If the service is not already running, start it:

kong start

Then confirm the Admin API is responding (port 8001 by default):

curl -i http://<kong_FQDN_or_IP>:8001/
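If you want a quick machine-readable check as well, the Kong Admin API also exposes a /status endpoint (shown here as an optional extra, using the same placeholder host as above):

curl -i http://<kong_FQDN_or_IP>:8001/status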

Step 2 – Set up a Service in Kong for the ServiceNow Incident API 

curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services --data 'name=servicenow-sample-get-incident' --data 'url=https://<servername>.service-now.com/api/now/table/incident?sysparm_limit=1'

Step 3 – Create a route for the API endpoint

curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services/servicenow-sample-get-incident/routes --data 'hosts[]=itsm-server' --data 'paths[]=/get-incident' --data 'methods[]=GET' 
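To verify that the service and route were created, you can read them back from the Admin API (an optional check; the object name matches the one created above):

curl -i http://<kong_FQDN_or_IP>:8001/services/servicenow-sample-get-incident

curl -i http://<kong_FQDN_or_IP>:8001/services/servicenow-sample-get-incident/routes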

Step 4 – Test the API routing call. Note that the proxy, which performs the API translation, listens on port 8000 by default. You will also provide the username and password (if required) for the backend service; in our case, we pass basic authentication credentials for ServiceNow.

curl -i -X GET --url http://localhost:8000/get-incident --header "Host: itsm-server" -u username:password 

The call returns the JSON-formatted response from ServiceNow, which will look similar to the following:

{"result":[{"parent":"","made_sla":"true","caused_by":"","watch_list":"","upon_reject":"cancel","sys_updated_on":"2019-09-05 11:30:23","child_incidents":"0","hold_reason":"","approval_history":"","number":"INC0010001","resolved_by":"","sys_updated_by":"admin","opened_by":{"link":"https://itsm-server/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441","value":"6816f79cc0a8016401c5a33be04be441"},"user_input":"","sys_created_on":"2019-09-05 11:30:16","sys_domain":{"link":"https://itsm-server/api/now/table/sys_user_group/global","value":"global"},"state":"2","sys_created_by":"admin","knowledge":"false","order":"","calendar_stc":"","closed_at":"","cmdb_ci":"","delivery_plan":"","contract":"","impact":"3","active":"true","work_notes_list":"","business_service":"","priority":"5","sys_domain_path":"/","rfc":"","time_worked":"","expected_start":"","opened_at":"2019-09-05 11:30:16","business_duration":"","group_list":"","work_end":"","caller_id":"","reopened_time":"","resolved_at":"","approval_set":"","subcategory":"","work_notes":"","short_description":"keyva_snow_test","close_code":"","correlation_display":"","delivery_task":"","work_start":"","assignment_group":"","additional_assignee_list":"","business_stc":"","description":"keyva snow test description","calendar_duration":"","close_notes":"","notify":"1","service_offering":"","sys_class_name":"incident","closed_by":"","follow_up":"","parent_incident":"","sys_id":"c1341204dbf3b70045a1f26039961932","contact_type":"","reopened_by":"","incident_state":"2","urgency":"3","problem_id":"","company":"","reassignment_count":"0","activity_due":"","assigned_to":{"link":"https://itsm-server/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441","value":"6816f79cc0a8016401c5a33be04be441"},"severity":"3","comments":"","approval":"not requested","sla_due":"","comments_and_work_notes":"","due_date":"","sys_mod_count":"1","reopen_count":"0","sys_tags":"","escalation":"0","upon_approval":"proceed","correlation_id":"","location":"","category":"inquiry"}]}   

This quick walk-through showed you how you can easily create an API abstraction layer using Kong for specific back-end fulfillment calls. You can create similar calls for any level of infrastructure and application APIs and build capabilities towards an Infrastructure-as-Code implementation. 
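As a sketch of how the same pattern generalizes (the backend URL, names, and path below are hypothetical placeholders, not part of the walk-through above), a generic /get-vm-status call routed to an imaginary infrastructure API would be registered the same way:

curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services --data 'name=infra-sample-get-vm-status' --data 'url=https://<infra-api-server>/api/v1/vms/status'

curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services/infra-sample-get-vm-status/routes --data 'hosts[]=infra-server' --data 'paths[]=/get-vm-status' --data 'methods[]=GET'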

Keyva helps organizations implement API abstraction and leverage it to deliver Infrastructure-as-Code. The team at Keyva has years of experience with Kong and other API abstraction tools. We also offer lunch-and-learn sessions for discussions around how other organizations are using these technologies and what use cases would work best for your organization. Please contact us if you're interested in discussing API abstraction and how it can work for you.  


CTO Anuj Tuli

Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for healthcare, banking, ISP, telecommunications, government and other sectors.

During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies.

Like what you read? Follow Anuj on LinkedIn at: https://www.linkedin.com/in/anujtuli/

Join the Keyva Community! Follow Keyva on LinkedIn at:

ServiceNow App for Red Hat OpenShift

Keyva is excited to announce the release of ServiceNow App for Red Hat OpenShift, certified by ServiceNow. 

Many teams and organizations that have containerized critical application workloads are using the Red Hat OpenShift implementation of Kubernetes. Traditionally, the IT Service Management teams and the Development teams haven't had a common intersection point. To increase compliance, governance, and auditability, team owners are constantly challenged and encouraged to reduce shadow IT, and provide a common gateway to consume any and all IT services. ServiceNow has established itself as a prominent player in the IT Service Management domain, and can act as the single point of entry for all IT requests.  

This integration allows organizations to consume application services that are built in OpenShift as deployment jobs, from a ServiceNow service catalog request. The integration allows for any and all customizations within ServiceNow while leveraging existing approval processes, and also allows you to define specific trigger points within ServiceNow for when to launch the build jobs.  

With this integration, you can:

· Trigger Red Hat OpenShift build jobs from ServiceNow Catalog Requests, Change Requests, Incident Requests, and more

· Accelerate the adoption of Red Hat OpenShift Container Application Platform as the container platform of choice

· Allow ServiceNow teams to fulfill IT automation requests via OpenShift

· Easily map field values in a ServiceNow record and pass them as arguments to an OpenShift build job

· Leverage a best-practices integration methodology to integrate disparate domain tools

· Get a fully supported integration built using best practices in specific domains

If you'd like a free trial or require more details, please reach out to one of our associates at [email protected]

You can find the integration listing here



Code dependency and maintainability – Boon or Curse for CI CD?

We've all heard that Continuous Integration (CI) and Continuous Delivery (CD) are a major part of the DevOps release cycle. The question is – how do we get to a point where we can make full use of the advantages offered by CI/CD processes? Getting there will require anything from minor tweaks to major modifications of the workloads (i.e. applications) passing through the pipeline. A majority of the work to make applications ready to fully utilize CI/CD processes is around mapping the library dependencies of the application in question, and modularizing the application's functions so that they can be developed and released independently without impacting other code sections.

Another major benefit of having modular code being developed by different teams is that it can be automatically tested as it gets deployed. Smaller code releases are preferable over larger patches, especially when you need to go through integration testing with other code sections that may or may not have been updated. It also matters how your cloud infrastructure is implemented. For example, if you are pooling testing or development resources, you may have limited capacity to make progress in parallel.  

The next step then is to determine the code dependencies between various sections, and how changing code in one section, or changing a commonly used library version, can affect other functionality. Determining this web of dependencies can be a daunting task, depending on the complexity of the application and the overall purpose the application delivers to the business. Multiply this by the hundreds or thousands of applications that you may have in your environment. This code maintainability evaluation exercise can be very beneficial if you want to have multiple teams work on different sections of the code. On the other hand, if the web of dependencies is not understood in detail, a change can unintentionally have massive repercussions for other application functionality.

Associates at Keyva have helped multiple organizations assess their application readiness and have helped with application modernization. This includes things like refactoring code for existing applications, adding a wrapper over current applications so they can be consumed easily by DevOps processes, and more. Keyva also uses code analysis and application discovery tools from CAST software in conjunction with an analysis of your CMDB to provide you a holistic view of the application dependency mapping. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]



Transform into Cloud-Native

Take the first step towards transforming your apps into cloud-native

If you are a medium-sized or large organization that depends on its IT teams for on-demand infrastructure and support for business-critical applications, or an organization with a sprawl of thousands of applications that is planning (or already on) the journey to cloud-native, you have likely faced questions like "Where do I start in migrating these applications to a new cloud platform?" or "Once I've migrated an application onto a cloud platform, how do I make sure my application code updates don't drift away from leveraging the most the platform has to offer?"

Let's take a real-world example: an organization with 1,500 applications, about 80% of which are Commercial-Off-The-Shelf (COTS) apps and about 20% custom home-grown. These applications mostly run on Unix-based systems, with some instances on Windows hosts, and the organization is looking for help getting started with questions like: How do we decide which applications to move? What changes need to be made to these applications to be compatible with the new platform? What risks and vulnerabilities are associated with not taking any action on these applications? How long will the effort to migrate these applications take? Are these applications even ready to be migrated? And so on.

CAST Software gives organizations the ability to run automated application assessments across an entire portfolio of applications written in various programming languages, and to profile them based on multiple quality and quantity indicators. Using a combination of an assessment questionnaire and the automated code insights, CAST helps you decide which applications to target for migration first, understand how code changes affect an application's resiliency, identify security vulnerabilities in the existing application code, and much more. CAST also lets you export the results of your application assessment without exporting any source code, by leveraging the CAST API. For the example customer mentioned above with thousands of applications, the process of evaluating the entire portfolio can easily be automated using this function.

Here is an example of what the command-line call looks like; it exports the application metrics without exporting any source code:

java -jar HighlightAutomation.jar --workingDir "/samples/pathToWorkingDir" --sourceDir "/samples/sourceDir/src/" --skipUpload 

Since jar files can be run on Unix and Windows systems alike, the command remains the same for both platforms. You can also use the command wrapper created by Keyva (https://github.com/keyva/casthighlight_wrapper) to run the assessment.  

For the aforementioned customer, coupling the ability to run these commands with their configuration management system or a workflow automation system like Red Hat Ansible means they can scan the source code across their server inventory – on-premises or cloud-based – and automatically produce an application portfolio assessment report on a scheduled basis.
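As a minimal sketch of how that scheduling could look (the paths, schedule, and host layout below are hypothetical placeholders, not part of the CAST or Keyva tooling), a cron entry on a scan host could loop over checked-out application sources and run the same CLI for each one:

# /etc/cron.d/cast-highlight-scan – hypothetical weekly schedule (Sundays at 02:00)
0 2 * * 0 root /opt/scripts/run_highlight_scan.sh

# /opt/scripts/run_highlight_scan.sh – hypothetical wrapper around the command shown above
#!/bin/bash
for APP_DIR in /srv/app-sources/*/ ; do
  java -jar /opt/cast/HighlightAutomation.jar --workingDir "/tmp/highlight-work" --sourceDir "$APP_DIR" --skipUpload
done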

To get started with our application assessment questionnaire, please visit us at https://keyvatech.com/survey/. We also provide a free assessment for one of your applications built using Java or Python and help you roadmap the required effort and the steps you would need to take to assess and migrate your entire portfolio of applications.  

The Closed Loop Incident Process (CLIP)

Organizations today are wrestling with managing their infrastructure and application performance metrics. This is only exacerbated by the myriad of technologies and tools now on the market. How can Operations teams keep up with the increasing demand of managing larger and ever-changing workloads under shrinking budgets? That's a question many organizational teams are left to answer while finding a solution that fits their needs. Are you faced with this challenge? One step that can get your team closer to a solution is to genericize your processes independent of tools, so that teams can follow a unified process and gain end-to-end visibility into their entire infrastructure – regardless of the underlying technology that is deployed. The Closed-Loop-Incident-Process is one such operational process, and it has tremendous benefits when coupled with automation and tools consolidation.

A Closed-Loop-Incident-Process

A Closed-Loop-Incident-Process, or CLIP for short, is when you automatically take action on alerts on your Network Operations Center (NOC) unified console, including auto-remediation, while integrating the remediation process with your ticketing system (e.g. Incident tickets). It does not matter which APM or infrastructure monitoring tools you use, and the process holds true for any and all IT Service Management (ITSM) and Configuration Management Database (CMDB) systems you have. Once your teams agree on an end-to-end process that works for their environment and organization, you can begin the work of integrating your various tool sets to achieve the end goal.

Fig 1a – CLIP framework

It is important to keep your CMDB accurate and current, and many organizations end up spending a lot of cycles and redundant time trying to achieve that state. Eventually, organizations can use the CIs and CI relationships within the CMDB to implement event correlation and operational intelligence that can proactively reduce alerts that would've been classified as noise. Check out the CLIP framework (fig 1a) here.

How Keyva Can Help

Keyva has helped several customers integrate and automate their operational processes to achieve time and cost savings. Keyva can help genericize the many different processes you may have and integrate tool sets to achieve end-to-end use case automation, with the end goal of achieving Operational Intelligence – so that operations teams can put time towards automating complex remediation tasks, rather than repetitive manual tasks. Are you ready to automate?
If you have any questions, or feedback, please reach out to a Keyva associate at [email protected]

ServiceNow App for Red Hat Ansible Automation

Keyva announces the release of an open source version of the ServiceNow App that integrates with Red Hat Ansible using Ansible Tower (or AWX) APIs. The integration allows users to trigger Ansible jobs from within ServiceNow Catalog Requests or Change tickets. Users have the ability to customize triggers to suit their own needs – not only to launch the Ansible job from a specific ServiceNow application, but also to define the specific conditions (e.g. Status field set to 'In Progress'). Many organizations use Ansible as the automation and orchestration layer while using ServiceNow as their ITSM suite and CMDB. There are several common use cases that require an integration between the two offerings.

Fig 1a – Sample Provisioning Use Case

A similar use case can be implemented using this integration for Day 2 tasks like patching or unprovisioning. Customers looking to launch a service request through a centralized portal like ServiceNow, with Ansible as their orchestration fulfillment engine, can leverage this open sourced integration. Check out the sample provisioning use case (fig 1a) here.
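For context on what such a trigger does under the hood, launching an Ansible Tower or AWX job template through its REST API looks roughly like the following (the host, credentials, and template ID are placeholders for illustration, not values from the app itself):

curl -s -k -u admin:password -X POST https://<tower_or_awx_host>/api/v2/job_templates/<template_id>/launch/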
You can check out the integration on our GitHub repository here – https://github.com/keyva/ansible

If you have any questions, or feedback, please reach out to a Keyva associate at [email protected]

DevOps and Data Warehousing

Many organizations have started utilizing DevOps practices and tools for data warehousing and data lake setups. Data Analysts and Database Managers can follow DevOps practices for managing updates and new database releases across various environments in a uniform fashion, to produce repeatable results. Just like application teams create and manage the CI/CD pipeline for applications, the data that these applications consume can have its own release pipeline, managed by the database teams. In many cases, cloud based data warehousing platforms provide the ability to host the applications that consume this data, all within the same environment.

Applications that consume data housed in a data warehouse may also leverage Kafka or other DevOps tools to achieve low latency query performance. As you release updates to your applications, you may also need to account for updates to the service bus layer and the database layer. Continuous deployment and continuous integration become all the more important. Data teams that institute DevOps practices and tools for data warehousing can promote an agile culture within their silos. This includes the process of fetching or discovering the data for data warehousing, the process of making sure it is current and accurate for the consuming applications, and the process of organizing it for data mining and analysis.

You can apply DevOps practices and policies to data automation (just like infrastructure automation) – from self-service models to request new data instances, to requesting updates and other data lifecycle steps. Many organizations have built entire data platforms on containers. For infrastructure and database teams, it is imperative to provide data "as-a-service" with measured and tracked SLAs and costs – whether these services are provided on container platforms or otherwise. Public cloud platforms have made it easy for consumers to leverage SaaS data warehousing solutions.

Using DevOps practices does not have to be limited to providing the underlying infrastructure or service; they can also be applied to the building of reports. Jenkins automation can be used to release database updates, integration tools can be used to fetch the relevant data from multiple sources to populate the target systems, and open source tools like Grafana can be used for dashboards. The primary objective of such a setup is to capture data from various components and locations within the environment into a centralized location via ETL, and process that data to produce business intelligence.
When bringing data in from multiple sources for data warehousing, the exercise of data mapping, reconciliation, and sanitization usually takes the most time and effort upfront. Architectural considerations also include monitoring the data warehouse components, as well as the data within them. Data processing engines like Hadoop MapReduce or Spark, along with the database serving platforms, form the core components of any data warehouse setup. By implementing a best-practices architecture, and tuning specifically for your environment, you can optimize your data warehouse setup to achieve a balance between performance and cost.

Various industry use cases – fraud prevention in banking, storing health records and doctors' notes in healthcare, customer profiling in retail, real-time streaming in media, and others – have already leveraged the benefits provided by data lakes for capturing and storing unstructured data, and data warehousing for structured data. With the adoption of blockchain technologies, the relevance of Big Data is only anticipated to grow. Most enterprises depend heavily on applications for their business, and have thereby adopted agile processes for application releases. Combining the consumption of Big Data with an emphasis on extracting relevant and accurate data at the right time is paramount for business critical applications. The adoption of DevOps practices and tools for data warehousing within data teams is still in its nascent stage, but is being picked up by more and more data experts every day.

If you need assistance with data warehousing to move your disparate data from various sources, or need help assessing the feasibility of a data warehouse platform without substantially affecting your business critical applications, Keyva can help. Associates at Keyva have worked with many different organizations in various verticals on data migration and application modernization projects. These include things like creating a data migration factory, creating ETL strategies with data mapping, refactoring existing applications, adding a wrapper over current applications so they can be consumed easily by DevOps processes, modifying existing applications to consume data from SaaS platforms, and more. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected].
How to set up Hadoop two node cluster and run MapReduce jobs

This write-up walks through setting up a two node Hadoop v3.1.1 cluster and running a couple of sample MapReduce jobs.

Prerequisites: two machines set up with RHEL 7 – in this walk-through the master (namenode) is hadoop1 and the worker (datanode) is hadoop2 – with Java 1.8 (OpenJDK) installed and the Hadoop 3.1.1 binaries extracted to /hadoop/hadoop-3.1.1 on both nodes.

On both machines, stop and disable the firewall:
systemctl stop firewalld 
systemctl disable firewalld
Set the following environment variables (the JAVA_HOME path should match the Java install on your systems):

export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root" 
export HDFS_SECONDARYNAMENODE_USER="root" 
export YARN_RESOURCEMANAGER_USER="root" 
export YARN_NODEMANAGER_USER="root" 
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/ 
export PATH=$PATH:$JAVA_HOME/bin
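Note that these exports only apply to the current shell session. A common way to make them persistent (an assumption about your setup, not a step from the original write-up) is to append the same export lines to the root user's ~/.bashrc on each node, or to Hadoop's own environment file, for example:

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/' >> /hadoop/hadoop-3.1.1/etc/hadoop/hadoop-env.sh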
Update the core-site file (on the master node):

vi /hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml

Modify the <configuration> section as per below:
<configuration> 
 <property> 
  <name>fs.defaultFS</name> 
  <value>hdfs://hadoop1:9000</value> 
 </property> 
</configuration> 
Update the hdfs-site file (on the master node)
vi /hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
Modify the <configuration> section as per below:
<configuration> 
 <property> 
  <name>dfs.replication</name> 
  <value>1</value> 
 </property> 
</configuration>
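The datanode needs the same configuration so it can locate the namenode. The original steps edit these files on the master node only; copying them to the second machine (shown here as an assumed step, using the same paths as above) keeps both nodes consistent:

scp /hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml /hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml root@hadoop2:/hadoop/hadoop-3.1.1/etc/hadoop/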
Set up the machines for passwordless SSH access (on both machines):
 ssh-keygen 
 ssh-copy-id -i ~/.ssh/id_rsa.pub root@hadoop1 
 ssh-copy-id -i ~/.ssh/id_rsa.pub root@hadoop2
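As an optional sanity check (not part of the original steps), confirm that passwordless login works before continuing:

ssh root@hadoop2 hostname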
On the master node, update the workers file to reflect the slave nodes
vi /hadoop/hadoop-3.1.1/etc/hadoop/workers
Add the entry
hadoop2
And then on the master node, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs namenode -format
On the datanode, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs datanode -format
On the master node, start the dfs service:
/hadoop/hadoop-3.1.1/sbin/start-dfs.sh
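As an additional check (optional, and not part of the original write-up), the NameNode web UI in Hadoop 3.x listens on port 9870 by default and shows the live datanodes; you can open http://hadoop1:9870/ in a browser or probe it from the shell:

curl -i http://hadoop1:9870/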
On the master node, run the dfsadmin report, to validate the availability of datanodes
/hadoop/hadoop-3.1.1/bin/hdfs dfsadmin -report
The output of this command should show two entries for datanodes – one for hadoop1 and one for hadoop2. The nodes are now set up to handle MapReduce jobs.

We will look at two examples, using the sample jobs from the hadoop-mapreduce-examples-3.1.1.jar file under the share folder. There are a large number of open source Java projects available which run various kinds of MapReduce jobs. We will run these exercises on the master node.

Exercise 1: We will solve a sudoku puzzle using MapReduce. First we will need to create a sudoku directory under the root folder in the HDFS file system.
/hadoop/hadoop-3.1.1/bin/hdfs dfs -mkdir /sudoku
Then create an input file with the sudoku puzzle, under your current directory:
vi solve_this.txt
Update the file with the below text. Each entry on the same line is separated by a space.
? 9 7 ? ? ? ? ? 5
? 6 3 ? 4 ? 2 ? ?
? ? ? 9 ? ? ? 8 ?
? ? 9 ? ? ? ? 7 ?
? ? ? 1 ? 6 ? ? ?
2 5 4 8 3 ? ? ? 1
? 7 ? ? ? 1 8 ? ?
? 8 ? ? 7 ? 6 ? 4
5 ? ? ? ? 2 ? 9 ?
Now move (put) the file from your current directory into the HDFS folder (sudoku) that we created earlier.
/hadoop/hadoop-3.1.1/bin/hdfs dfs -put solve_this.txt /sudoku/solve_this.txt
To make sure that the file was copied:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -ls /sudoku
Run the mapreduce job, to solve the puzzle:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar sudoku solve_this.txt
Solving solve_this.txt
1 9 7 6 2 8 4 3 5
8 6 3 7 4 5 2 1 9
4 2 5 9 1 3 7 8 6
6 1 9 2 5 4 3 7 8
7 3 8 1 9 6 5 4 2
2 5 4 8 3 7 9 6 1
9 7 2 4 6 1 8 5 3
3 8 1 5 7 9 6 2 4
5 4 6 3 8 2 1 9 7
Found 1 solutions

Exercise 2: We will run the wordcount method on the sudoku puzzle file, and have the output stored in the wcount_result folder.
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /sudoku/solve_this.txt /sudoku/wcount_result
The lengthy console output lists the details of the job run. To see the results, cat the contents of the output files:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -cat /sudoku/wcount_result/*
1 3
2 3
3 2
4 3
5 3
6 3
7 4
8 4
9 4
? 52
The above output captures the number of times each token (each digit, and the '?' placeholder) appears in the puzzle input file. To see all the different sample methods available under hadoop-mapreduce-examples-3.1.1.jar, run the following command:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar
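For instance, one of the bundled samples estimates the value of pi with a quasi-Monte Carlo method; a quick way to try it (the map and sample counts here are arbitrary) is:

/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 2 10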
If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at [email protected].
