Take the first step towards transforming your apps into cloud-native
If you are a medium-sized or large organization that depends on its IT teams for on-demand infrastructure and support for business-critical applications, or an organization with thousands of sprawling applications that is planning, or already on, the journey to cloud-native, you have likely faced questions like "Where do I start in migrating these applications to a new cloud platform?" or "Once I've migrated an application onto a cloud platform, how do I make sure my application code updates don't drift away from leveraging the most the platform has to offer?"
Let's take a real-world example: an organization with 1,500 applications, about 80% of which are Commercial-Off-The-Shelf (COTS) apps and about 20% custom home-grown. These applications mostly run on Unix-based systems, with some instances on Windows hosts, and the organization is looking for help getting started with questions like: How do we decide which applications to move? What changes need to be made to these applications to make them compatible with the new platform? What risks and vulnerabilities come with not taking any action on these applications? How long will the effort to migrate these applications take? Are these applications even ready to be migrated? And so on.
CAST Software gives organizations the ability to run automated application assessments across an entire portfolio of applications written in various programming languages, and to profile them against multiple quality and quantity indicators. Using a combination of an assessment questionnaire and automated code insights, CAST helps you decide which applications to target for migration first, understand how code changes affect an application's resiliency, identify security vulnerabilities in the existing application code, and much more. CAST also lets you export the results of your application assessment without exporting any source code, by leveraging the CAST API. For the example customer above, with well over a thousand applications, evaluating the entire portfolio can easily be automated using this capability.
Here is an example of what the command-line API call that exports the application metrics, without exporting any source code, looks like:
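The original command is not reproduced here; the following is a representative invocation of the CAST Highlight command-line agent (a Java jar), where the jar path, directories, company ID, application ID, and API token are placeholders and the exact flag names may differ by Highlight version:
java -jar HighlightAutomation.jar --workingDir /tmp/highlight --sourceDir /opt/myapp/src --serverUrl https://rpa.casthighlight.com --companyId <COMPANY_ID> --applicationId <APPLICATION_ID> --tokenAuth <API_TOKEN>
The agent scans the source tree locally and uploads only the computed metrics to the Highlight portal, which is what allows the assessment to run without exporting any source code.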
Since jar files can be run on Unix and Windows systems alike, the command remains the same for both platforms. You can also use the command wrapper created by Keyva (https://github.com/keyva/casthighlight_wrapper) to run the assessment.
For the aforementioned customer, coupling the ability to run these API commands with their configuration management system, or with a workflow automation system like Red Hat Ansible, means they can scan the source code across their entire server inventory, on-premises or in the cloud, and automatically produce an application portfolio assessment report on a scheduled basis.
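As a sketch of how that scheduling could look (the cron schedule, user, inventory path, and playbook name are assumptions for illustration, not part of any specific Keyva deliverable), a weekly entry in /etc/cron.d on an Ansible control node could kick off a playbook that runs the scan command shown above on each inventory host and collects the results centrally:
# /etc/cron.d/highlight-scan -- illustrative only; names and paths are placeholders
0 2 * * 0 automation ansible-playbook -i /etc/ansible/inventory/all_servers scan_portfolio.yml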
To get started with our application assessment questionnaire, please visit us at https://keyvatech.com/survey/. We also provide a free assessment for one of your applications built using Java or Python and help you roadmap the required effort and the steps you would need to take to assess and migrate your entire portfolio of applications.
CAST Highlight Webinar, in partnership with Keyva
Not knowing the current state of your application portfolio can add risk and cost to cloud migration efforts. With cloud project failure rates so high, how can you be sure if, and which, of your applications are cloud ready? Would a prioritized list of applications and an actionable roadmap to deliver the applications as cloud-ready workloads be critical to your success? In this webinar you will:
Learn how CAST Highlight can help you identify and prioritize candidate applications for migration
Learn how Keyva can help you take insights from CAST Highlight and:
Understand the level of effort associated with the required application changes
Understand how your application is connected to your infrastructure
Build a plan to migrate your application to the cloud and reconcile the underlying infrastructure
Make go/no-go decisions based on the TCO required to move an app to the cloud
Speakers
Anuj Tuli - Chief Technology Officer | Keyva
In his current role at Keyva, Anuj helps organizations adopt Containers, implement CI/CD methodology, modernize their applications, and develop an automation framework which supports end-to-end application lifecycle - planning, development, testing, deployment, and operations.
Kevin Furet - Senior Solutions Specialist | CAST
Kevin is a Senior Solutions Specialist at CAST, helping partners leverage Software Intelligence to build value-focused offerings for software modernization, application diagnostics and risk management programs. As a member of the Strategic Partnerships team, he helps Advisories and Consulting firms assess large IT organizations’ application landscapes and define actionable, future-proof strategies.
Passionate about open source? Join us at the next Milwaukee Red Hat User Group on June 26th!
This RHUG will feature two exciting presentations: Integrate Red Hat Ansible Tower and ServiceNow for end-to-end use case automation and Systemd by Example. This event is sponsored by Keyva Technologies.
Integrate Red Hat Ansible Tower and ServiceNow for end-to-end use case automation
This presentation will be given by Anuj Tuli, Chief Technology Officer at Keyva Technologies.
In his current role at Keyva, Anuj helps organizations adopt Containers, implement CI/CD methodology, modernize their applications, and develop an automation framework which supports the end-to-end application lifecycle: planning, development, testing, deployment, and operations.
Systemd by Example
Become comfortable with the new systemd init system used in Linux distributions to bootstrap the user space and to manage system processes after booting. This exciting tool is already available on your servers. Now is your chance to see examples of service security, sandboxing, container management, metric data collection, and logging aggregation.
Presenter: Keith Resar, Automation Developer, Red Hat
Keith Resar is an Automation Developer with Red Hat and a regular open source contributor. He has consulted with dozens of organizations, helping them successfully implement cultural change through the adoption of new technology. With expertise in cloud automation and container PaaS, he has made deploying applications more accessible throughout the enterprise.
Keyva will be at the Red Hat Ansible Meetup on June 20th.
Keyva is excited to be presenting at the Red Hat Ansible Meetup. Here is the information regarding the event.
Keyva's open source ServiceNow App for Red Hat Ansible Automation allows users to kick off deployment jobs in Ansible from ServiceNow Catalog Requests. This integration allows you to:
Trigger Red Hat Ansible jobs from ServiceNow Catalog Requests, Change Requests, Incident Requests, and more
Accelerate adoption of Red Hat Ansible as the automation tool of choice
Allow ServiceNow teams the ability to fulfill IT automation requests via Ansible
Easily map field values in a ServiceNow record and pass them as arguments to an Ansible job
Leverage best-practices integration methodology to integrate disparate domain tools
Contribute to the open source project, and customize the existing code to fit your specific needs
The session will be presented by Keyva's Anuj Tuli. Keyva meets its clients where they are on their journey to become more agile and adopt DevOps practices, and partners with you to get from where you are to where you need to be. We accomplish this through an agile approach to IT problem-solving focused on delivering technical capabilities that meet your business objectives. We work hard to simplify your IT in order to free up your time so that you can focus on your core business and driving value for your customers.
Doors open at 6:30; we'll have drinks and pizza until 7:00, and then the talk will start. Hope to see you there! The doors lock automatically, so if you can't get in, leave a message; we try to have someone waiting at the door until 7:00, and after that please post to the message/discussion board. To sign up for the event, click here.
The Closed Loop Incident Process (CLIP)
Organizations today are rattled with managing their infrastructure and application performance metrics. This is only exacerbated by the myriad of technologies and tools now in the market. How can Operations teams keep up with the increasing demand of managing larger and ever-changing workloads under shrinking budgets? That's a question many organizational teams are left to answer while finding a solution that fits their needs. Are you faced with this challenge? One step that can get your team closer to a solution is to genericize your processes independent of tools, so that teams can follow a unified process and gain end-to-end visibility into their entire infrastructure, regardless of the underlying technology that is deployed. The Closed-Loop Incident Process is one such operational process, and it has tremendous benefits when coupled with automation and tools consolidation.
A Closed-Loop Incident Process
A Closed-Loop Incident Process, or CLIP for short, is when you automatically take action on alerts on your Network Operations Center (NOC) unified console, including auto-remediation, while integrating the remediation process with your ticketing system (e.g. Incident tickets). It does not matter which APM or infrastructure monitoring tools you use, and the process holds true for any IT Service Management (ITSM) and Configuration Management Database (CMDB) systems you have. Once your teams agree on an end-to-end process that works for their environment and organization, you can begin the work of integrating the various tool sets you have to achieve the end goal. It is important to keep your CMDB accurate and current, and many organizations end up spending a lot of cycles and redundant time trying to achieve that state. Eventually, organizations can use the CIs and CI relationships within the CMDB to implement event correlation and operational intelligence that proactively reduces alerts that would otherwise be classified as noise. Check out the CLIP framework (Fig 1a) here.
How Keyva Can Help
Keyva has helped several customers integrate and automate their operational processes to achieve time and cost savings. Keyva can help genericize the many different processes you may have and integrate tool sets to achieve end-to-end use case automation, with the end goal of achieving operational intelligence, so the operations teams can put time towards automating complex remediation tasks rather than repetitive manual tasks. Are you ready to automate? If you have any questions or feedback, please reach out to a Keyva associate at info@keyvatech.com.
Anuj Tuli serves as the Chief Technology Officer for Keyva. In his current role at Keyva, Anuj helps organizations adopt IT Process Automation and Containers, implement CI/CD methodology, modernize their applications, and develop an automation framework which supports the end-to-end application lifecycle: planning, development, testing, deployment, and operations. He joined Keyva from Tech Data where he was the Director of Automation Solutions. In that role, he specialized in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for the healthcare, banking, ISP, telecommunications, government and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/
ServiceNow App for Red Hat Ansible Automation
Keyva announces the release of an open source version of its ServiceNow App that integrates with Red Hat Ansible using the Ansible Tower (or AWX) APIs. The integration allows users to trigger Ansible jobs from within ServiceNow Catalog Requests or Change tickets. Users can customize triggers to suit their own needs: not only launching the Ansible job from a specific ServiceNow application, but also defining the specific conditions (e.g. Status field set to 'In Progress'). Many organizations use Ansible as the automation and orchestration layer while using ServiceNow as their ITSM suite and CMDB, and there are several common use cases that require an integration between the two offerings (see Fig 1a - Sample Provisioning Use Case). A similar use case can be implemented using this integration for Day 2 tasks like patching or unprovisioning. Customers looking to launch a service request through a centralized portal like ServiceNow, with Ansible as their orchestration fulfillment engine, can leverage this open sourced integration. Check out the sample provisioning use case (Fig 1a) here. You can check out the integration on our GitHub repository here: https://github.com/keyva/ansible. If you have any questions or feedback, please reach out to a Keyva associate at info@keyvatech.com. For a sense of what an integration like this does under the covers, an illustrative Tower/AWX API call is sketched below.
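This sketch is illustrative only and is not taken from the app's source code. Ansible Tower and AWX expose a REST endpoint for launching a job template, and a request against it looks roughly like the following, where the host name, job template ID (42), token, and extra variables are placeholders; passing extra_vars this way assumes the job template is configured to prompt for variables on launch:
curl -s -X POST \
  -H "Authorization: Bearer <TOWER_API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"extra_vars": {"hostname": "app01", "requested_by": "servicenow"}}' \
  https://tower.example.com/api/v2/job_templates/42/launch/
A ServiceNow business rule or workflow can issue the same kind of outbound REST call when a catalog request or change ticket meets the configured conditions.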
DevOps and Data Warehousing
Many organizations have started utilizing DevOps practices and tools for data warehousing and data lake setups. Data analysts and database managers can follow DevOps practices for managing updates and new database releases across various environments in a uniform fashion, to produce repeatable results. Just like application teams create and manage the CI/CD pipeline for applications, the data that these applications consume can have its own release pipeline, managed by the database teams. In many cases, cloud-based data warehousing platforms provide the ability to host the applications that consume this data within the same environment. Applications that consume data housed in a data warehouse may also leverage Kafka or other tools to achieve low-latency query performance. As you release updates to your applications, you may also need to account for updates to the service bus layer and the database layer, which makes continuous integration and continuous deployment all the more important.
Data teams that institute DevOps practices and tools for data warehousing can promote an agile culture within their silos. This includes the process of fetching or discovering the data for data warehousing, the process of making sure it is current and accurate for the consuming applications, and the process of organizing it for data mining and analysis. You can apply DevOps practices and policies to data automation just like infrastructure automation, starting from self-service models to request new data instances, to requesting updates, and other data lifecycle steps. Many organizations have built entire data platforms on containers. For infrastructure and database teams, it is imperative to provide data "as-a-service" with measured and tracked SLAs and costs, whether these services are provided on container platforms or otherwise. Public cloud platforms have made it easy for consumers to leverage SaaS data warehousing solutions.
Using DevOps practices does not have to be limited to providing the underlying infrastructure or service; it can also be applied to the building of reports. Jenkins automation can be used to release database updates, integration tools can be used to fetch the relevant data from multiple sources to populate the target systems, and open source tools like Grafana can be used for dashboards. The primary objective of such a setup is to capture data from various components and locations within the environment into a centralized location via ETL, and to process that data to produce business intelligence. When bringing data in from multiple sources for data warehousing, the exercises of data mapping, data reconciliation, and sanitization usually take the most time and effort upfront. Architectural considerations also include monitoring the data warehouse components, as well as the data within them. Data processing engines like Hadoop MapReduce or Spark, along with the database serving platforms, form the core components of any data warehouse setup.
By implementing a best-practices architecture, and tuning specifically for your environment, you can optimize your data warehouse setup to achieve a balance between performance and cost. Various industry use cases, like fraud prevention in banking, storing health records and doctors' notes in healthcare, customer profiling for retail, and real-time streaming in media, have already leveraged the benefits provided by data lakes for capturing and storing unstructured data, and by data warehousing for structured data. With the adoption of blockchain technologies, the relevance of Big Data is only anticipated to grow. Most enterprises depend heavily on applications for their business, and thereby have adopted agile processes for application releases. Combining the consumption of Big Data with an emphasis on extracting relevant and accurate data at the right time is paramount for business-critical applications. The adoption of DevOps practices and tools for data warehousing within data teams is still in its nascent stage, but is being picked up by more and more data experts every day.
If you need assistance with data warehousing to move your disparate data from various sources, or need help assessing the feasibility of a data warehouse platform without substantially affecting your business-critical applications, Keyva can help. Associates at Keyva have worked with many different organizations in various verticals on data migration and application modernization projects. These include creating a data migration factory, creating ETL strategies with data mapping, refactoring existing applications, adding a wrapper over current applications so they can be consumed easily by DevOps processes, modifying existing applications to consume data from SaaS platforms, and more. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at info@keyvatech.com.
Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for the healthcare, banking, ISP, telecommunications, government and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/
How to set up a Hadoop two-node cluster and run MapReduce jobs
This write-up walks through setting up a two-node Hadoop v3.1.1 cluster and running a couple of sample MapReduce jobs. Prerequisites:
Two machines set up with RHEL 7. You could use another distribution, but the commands may vary.
Perl, wget, and other required packages downloaded using yum
Disable the firewall, or open up connectivity between the two machines. Since we are setting it up as a lab instance, we will go ahead and disable the firewall
hadoop1 will be the master node, and hadoop2 will be the datanode.
Add entries for hadoop1 and hadoop2 under /etc/hosts on both machines. We will need a JDK installation (on both machines):
yum install java-1.8.0-openjdk -y
You can validate that java is installed by querying for the installed version
java -version
Create a separate directory under '/' path where we will download the bits for hadoop (on both machines)
mkdir hadoop
cd /hadoop/
wget http://mirror.cc.columbia.edu/pub/software/apache/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
tar -xzf hadoop-3.1.1.tar.gz
In order to point hadoop to the correct java installation, we will need to capture the full path of java install
readlink -f $(which java)
Export the path as an environment variable (on both machines).
We will modify the .bashrc profile file to make sure that all the required environment variables are available when we log in to the machine console. This change is made on both machines:
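The exact values depend on your environment; a representative set of additions to ~/.bashrc (the JAVA_HOME below is an example - use the path returned by the readlink command above, minus the trailing /bin/java) would be:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.x86_64/jre
export HADOOP_HOME=/hadoop/hadoop-3.1.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload the profile with 'source ~/.bashrc' (or log out and back in) so the variables take effect.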
On the master node, update the workers file to reflect the slave nodes
vi /hadoop/hadoop-3.1.1/etc/hadoop/workers
Add the entry
hadoop2
And then on the master node, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs namenode -format
On the datanode, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs datanode -format
On the master node, start the dfs service:
/hadoop/hadoop-3.1.1/sbin/start-dfs.sh
On the master node, run the dfsadmin report, to validate the availability of datanodes
/hadoop/hadoop-3.1.1/bin/hdfs dfsadmin -report
The output of this command should show two entries for datanodes: one for hadoop1 and one for hadoop2. The nodes are now set up to handle MapReduce jobs. We will look at two examples, using the sample jobs from the hadoop-mapreduce-examples-3.1.1.jar file under the share folder. There are a large number of open source Java projects available which run various kinds of MapReduce jobs. We will run these exercises on the master node.
Exercise 1: We will solve a sudoku puzzle using MapReduce. First we will need to create a sudoku directory under the root folder in the HDFS file system.
/hadoop/hadoop-3.1.1/bin/hdfs dfs -mkdir /sudoku
Then create an input file with the sudoku puzzle, under your current directory:
vi solve_this.txt
Update the file with the below text. Each entry on the same line is separated by a space.
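The puzzle file from the original write-up is not reproduced here. The sample sudoku solver bundled with Hadoop expects nine rows of nine space-separated entries, with '?' marking the unknown cells (for example, a row might look like: ? 2 ? 5 ? 1 ? 9 ?). Once the file is saved, the solver can be run against the local puzzle file; the invocation below assumes the file sits in the current directory:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar sudoku solve_this.txt
The solver prints the completed grid and a summary line at the end.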
In this case the tail of the output confirms the result: Found 1 solutions.
Exercise 2: We will run the wordcount method on the sudoku puzzle file and have the output stored in the wcount_result folder.
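The wordcount job reads its input from HDFS, so the puzzle file first needs to be copied into the /sudoku directory created earlier (this step is implied by the original write-up):
/hadoop/hadoop-3.1.1/bin/hdfs dfs -put solve_this.txt /sudoku/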
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /sudoku/solve_this.txt /sudoku/wcount_result
The lengthy output lists the results of the analysis conducted on the file. We can then cat the result files to inspect the word counts:
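MapReduce writes its output as part files under the target directory. A typical way to view them (the part file is usually named part-r-00000, but the wildcard works regardless) is:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -cat /sudoku/wcount_result/*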
The above output captures the total number of times each entry appears in the puzzle file. To see all the different sample methods available under hadoop-mapreduce-examples-3.1.1.jar, run the following command:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar
If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at info@keyvatech.com.
Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for healthcare, banking, ISP, telecommunications, government and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/