Keyva is excited to announce the release of the ServiceNow App for Red Hat OpenShift, certified by ServiceNow.
Many teams and organizations that have containerized critical application workloads are using the Red Hat OpenShift implementation of Kubernetes. Traditionally, IT Service Management teams and development teams haven't had a common intersection point. To increase compliance, governance, and auditability, team owners are constantly challenged to reduce shadow IT and to provide a common gateway for consuming all IT services. ServiceNow has established itself as a prominent player in the IT Service Management domain, and can act as the single point of entry for all IT requests.
This integration allows organizations to consume application services built in OpenShift, as deployment jobs, from a ServiceNow service catalog request. It accommodates customizations within ServiceNow while leveraging existing approval processes, and it also lets you define specific trigger points within ServiceNow for when to launch the build jobs.
With this integration, you can:
· Trigger Red Hat OpenShift build jobs from ServiceNow Catalog Requests, Change Requests, Incident Requests, and more
· Accelerate the adoption of Red Hat OpenShift Container Application Platform as the container platform of choice
· Allow ServiceNow teams to fulfill IT automation requests via OpenShift
· Easily map field values in a ServiceNow record and pass them as arguments to an OpenShift build job
· Leverage best practices integration methodology to integrate disparate domain tools
· Get a fully supported integration built using best practices in specific domains
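As a sketch of what the field-mapping capability above might drive on the OpenShift side, values captured in a ServiceNow record can be assembled into arguments for a build trigger. The field names, BuildConfig name, and environment key below are hypothetical, and the actual `oc` call is shown commented out since it requires a live cluster login:

```shell
# Hypothetical field values pulled from a ServiceNow catalog request
APP_NAME="my-app"          # maps to the OpenShift BuildConfig name
GIT_REF="release-1.2"      # maps to a branch/tag field on the request

# Assemble the argument string the integration would pass along
BUILD_ARGS="--env=GIT_REF=${GIT_REF}"

# oc start-build "$APP_NAME" $BUILD_ARGS --follow   # requires `oc login` first
echo "would trigger: oc start-build $APP_NAME $BUILD_ARGS"
```

The point of the sketch is the mapping step: each ServiceNow field becomes one argument or environment variable on the build job.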
If you'd like a free trial or more details, please reach out to one of our associates at [email protected]
You can find the integration listing here
Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for healthcare, banking, ISP, telecommunications, government and other sectors.
During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies.
Like what you read? Follow Anuj on LinkedIn at: https://www.linkedin.com/in/anujtuli/
Code dependency and maintainability – Boon or Curse for CI/CD?

We've all heard that Continuous Integration (CI) and Continuous Delivery (CD) are a major part of the DevOps release cycle. The question is: how do we get to a point where we can make full use of the advantages offered by CI/CD processes? Getting there will require anywhere from minor tweaks to major modifications for the workloads (i.e., applications) passing through the pipeline. A majority of the work to make applications ready for full utilization of CI/CD processes will be around mapping the library dependencies of the application in question, and around modularizing the application's functions so they can be developed and released independently without impacting other code sections.
Another major benefit of having modular code developed by different teams is that it can be tested automatically as it gets deployed. Smaller code releases are preferable to larger patches, especially when you need to go through integration testing with other code sections that may or may not have been updated. How your cloud infrastructure is implemented also matters: for example, if you are pooling testing or development resources, you may have limited capacity to make progress in parallel.
The next step is to determine the code dependencies between various sections, and how changing code in one section, or changing a commonly used library version, can affect other functionality. Mapping this web of dependencies can be a daunting task, depending on the complexity of the application and the overall purpose the application delivers to the business. Multiply this by the hundreds or thousands of applications you may have in your environment. This code maintainability evaluation exercise can be very beneficial if you want multiple teams to work on different sections of the code. On the other hand, if the web of dependencies is not understood in detail, changes can unintentionally lead to massive repercussions for other application functionality.
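A first, crude pass at surfacing such a dependency web can be as simple as listing which source files reference a shared module, so you know the blast radius of changing it. The sketch below fabricates two tiny Java files purely for illustration; the package name is hypothetical:

```shell
# Create a throwaway sample source tree (illustration only)
mkdir -p depdemo/src
printf 'import com.example.common.Util;\nclass OrderService {}\n' > depdemo/src/OrderService.java
printf 'class AuditLog {}\n' > depdemo/src/AuditLog.java

# Every file listed here is impacted when the shared module changes
grep -rl "import com.example.common" depdemo/src | sort
```

Real dependency mapping needs language-aware tooling, but even this level of visibility tells you which teams to pull into integration testing when a shared library changes.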
Associates at Keyva have helped multiple organizations assess their application readiness and have helped with application modernization. This includes refactoring code for existing applications, adding a wrapper over current applications so they can be consumed easily by DevOps processes, and more. Keyva also uses code analysis and application discovery tools from CAST Software, in conjunction with an analysis of your CMDB, to provide a holistic view of your application dependency mapping. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
Transform into Cloud-Native

Take the first step towards transforming your apps into cloud-native.
If you are a medium-sized or large organization that depends on your IT teams for on-demand infrastructure and support for your business-critical applications, or if you are an organization with thousands of sprawling applications that is planning to take the journey to cloud-native (or is already on that path), you have likely faced questions like "Where do I start in migrating these applications to a new cloud platform?" or "Once I've migrated an application onto a cloud platform, how do I make sure my application code updates don't drift away from leveraging the most the cloud platform has to offer?"
Let's take a real-world example: an organization with 1,500 applications, about 80% of which are Commercial-Off-The-Shelf (COTS) apps and about 20% custom home-grown. These applications mostly run on Unix-based systems, with some instances on Windows hosts, and the organization is looking for assistance with questions like: How do we decide which applications to move? What changes need to be made to these applications to be compatible with the new platform? What risks and vulnerabilities are associated with not taking any action on these applications? How long will the effort to migrate these applications take? Are these applications even ready to be migrated?
CAST Software gives organizations the ability to run automated application assessments across an entire portfolio of applications written in various programming languages, and to profile them based on multiple quality and quantity indicators. Using a combination of an assessment questionnaire and the automated code insights, CAST helps you decide which applications to target for migration first, understand how code changes affect an application's resiliency, identify security vulnerabilities in the existing application code, and much more. CAST also lets you export the results of your application assessment without exporting any source code, by leveraging the CAST API. For the example customer above with thousands of applications, the process of evaluating the entire portfolio can easily be automated using this function.
Here is an example of what the command-line API call looks like; it exports the application metrics without exporting any source code:
java -jar HighlightAutomation.jar --workingDir "/samples/pathToWorkingDir" --sourceDir "/samples/sourceDir/src/" --skipUpload
Since jar files can be run on Unix and Windows systems alike, the command remains the same for both platforms. You can also use the command wrapper created by Keyva (https://github.com/keyva/casthighlight_wrapper) to run the assessment.
By coupling these API commands with their configuration management system or a workflow automation system like Red Hat Ansible, the aforementioned customer can scan for source code across their server inventory, for all on-premises or cloud-based servers, and automatically generate an application portfolio assessment report on a scheduled basis.
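A scheduled portfolio scan like the one described above can be sketched as a loop over application source trees, invoking the CAST Highlight CLI once per application. The directory layout here is an assumption, and the actual `java -jar` call is commented out since it requires the CLI jar:

```shell
# Stand-in portfolio layout: one directory per application (illustration only)
mkdir -p portfolio/billing-app portfolio/claims-app

for app in portfolio/*/; do
  name=$(basename "$app")
  # Real scan (jar path assumed); --skipUpload keeps results local:
  # java -jar HighlightAutomation.jar \
  #   --workingDir "work/$name" --sourceDir "$app" --skipUpload
  echo "queued scan for $name"
done
```

Wrapping this loop in a cron job or an Ansible playbook gives the recurring, portfolio-wide assessment described above.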
To get started with our application assessment questionnaire, please visit us at https://keyvatech.com/survey/. We also provide a free assessment for one of your applications built using Java or Python, and we help you road-map the effort and steps needed to assess and migrate your entire portfolio of applications.
Organizations today are wrestling with managing their infrastructure and application performance metrics. This is only exacerbated by the myriad of technologies and tools now in the market. How can operations teams keep up with the increasing demand of managing larger and ever-changing workloads under shrinking budgets? That's a question many organizational teams are left to answer by finding a solution that fits their needs. Are you faced with this challenge? One step that can get your team closer to a solution is to genericize your processes independent of tools, so that teams can follow a unified process and gain end-to-end visibility into their entire infrastructure, regardless of the underlying technology deployed. The Closed-Loop-Incident-Process is one such operational process, with tremendous benefits when coupled with automation and tool consolidation. A Closed-Loop-Incident-Process, or CLIP for short, is when you automatically take action on alerts on your Network Operations Center (NOC) unified console, including auto-remediation, while integrating the remediation process with your ticketing system (e.g. Incident tickets).
It does not matter whether you use one of the APM tools or infrastructure monitoring tools, and the process holds true for any IT Service Management (ITSM) and Configuration Management Database (CMDB) systems you have. Once your teams agree on an end-to-end process that works for their environment and organization, you can begin the work of integrating your various tool sets to achieve the end goal.

First, stop and disable the firewall (on both machines):

systemctl stop firewalld
systemctl disable firewalld
yum install java-1.8.0-openjdk -y

You can validate that Java is installed by querying for the installed version:
java -version

Create a separate directory under the '/' path where we will download the Hadoop bits (on both machines):
mkdir /hadoop
cd /hadoop/
wget http://mirror.cc.columbia.edu/pub/software/apache/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
tar -xzf hadoop-3.1.1.tar.gz

In order to point Hadoop to the correct Java installation, we will need to capture the full path of the Java install:
readlink -f $(which java)

Export the path as an environment variable (on both machines):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/

We will modify the .bashrc profile file to make sure that all the required environment variables are available when we log in to the machine console. This change is made on both machines:
vi ~/.bashrc

Add the following lines to the file:
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/
export PATH=$PATH:$JAVA_HOME/bin

Update the core-site file (on the master node):

vi /hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml

Modify the <configuration> section as per below:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
</configuration>

Update the hdfs-site file (on the master node):
vi /hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml

Modify the <configuration> section as per below:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Set up the machines for passwordless SSH access (on both machines):
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hadoop2

On the master node, update the workers file to reflect the slave nodes:
vi /hadoop/hadoop-3.1.1/etc/hadoop/workers

Add the entry:
hadoop2

And then on the master node, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs namenode -format

On the datanode, format the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs datanode -format

On the master node, start the dfs service:
/hadoop/hadoop-3.1.1/sbin/start-dfs.sh

On the master node, run the dfsadmin report to validate the availability of the datanodes:
/hadoop/hadoop-3.1.1/bin/hdfs dfsadmin -report

The output of this command should show two entries for datanodes: one for hadoop1 and one for hadoop2. The nodes are now set up to handle MapReduce jobs. We will look at two examples, using the sample jobs from the hadoop-mapreduce-examples-3.1.1.jar file under the share folder. There are a large number of open-source Java projects available that run various kinds of MapReduce jobs. We will run these exercises on the master node.

Exercise 1: We will solve a sudoku puzzle using MapReduce. First, we need to create a sudoku directory under the root folder in the hdfs file system:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -mkdir /sudoku

Then create an input file with the sudoku puzzle, under your current directory:
vi solve_this.txt

Update the file with the text below. Entries on the same line are separated by a space.
? 9 7 ? ? ? ? ? 5
? 6 3 ? 4 ? 2 ? ?
? ? ? 9 ? ? ? 8 ?
? ? 9 ? ? ? ? 7 ?
? ? ? 1 ? 6 ? ? ?
2 5 4 8 3 ? ? ? 1
? 7 ? ? ? 1 8 ? ?
? 8 ? ? 7 ? 6 ? 4
5 ? ? ? ? 2 ? 9 ?

Now move (put) the file from your current directory into the hdfs folder (sudoku) that we created earlier:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -put solve_this.txt /sudoku/solve_this.txt

To make sure that the file was copied:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -ls /sudoku

Run the mapreduce job to solve the puzzle:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar sudoku solve_this.txt

Solving solve_this.txt
1 9 7 6 2 8 4 3 5
8 6 3 7 4 5 2 1 9
4 2 5 9 1 3 7 8 6
6 1 9 2 5 4 3 7 8
7 3 8 1 9 6 5 4 2
2 5 4 8 3 7 9 6 1
9 7 2 4 6 1 8 5 3
3 8 1 5 7 9 6 2 4
5 4 6 3 8 2 1 9 7
Found 1 solutions

Exercise 2: We will run the wordcount method on the sudoku puzzle file, and have the output stored in the wcount_result folder:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /sudoku/solve_this.txt /sudoku/wcount_result

The lengthy output lists the results of the detailed analysis conducted on the file. We will cat the stored results:
/hadoop/hadoop-3.1.1/bin/hdfs dfs -cat /sudoku/wcount_result/*
1  3
2  3
3  2
4  3
5  3
6  3
7  4
8  4
9  4
?  52

The above output captures the total number of times each character appears in the input puzzle file. To see all the different sample methods available under hadoop-mapreduce-examples-3.1.1.jar, run the following command:
/hadoop/hadoop-3.1.1/bin/hadoop jar /hadoop/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar

If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at [email protected].
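One last sanity check worth scripting: the dfsadmin report prints one "Name:" line per live datanode, so a quick count confirms both nodes registered. The report text below is mocked for illustration (addresses are placeholders); on the cluster you would pipe the real `/hadoop/hadoop-3.1.1/bin/hdfs dfsadmin -report` output instead:

```shell
# Mocked dfsadmin report fragment (one "Name:" line per live datanode)
report='Name: 192.168.1.10:9866 (hadoop1)
Name: 192.168.1.11:9866 (hadoop2)'

# Count the datanode entries; a two-node cluster should report 2
count=$(printf '%s\n' "$report" | grep -c '^Name:')
echo "live datanodes: $count"
```

Dropping a check like this into cron gives you an early warning if a datanode falls out of the cluster.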
First, enable the required repositories:

sudo subscription-manager repos --enable rhel-7-server-ansible-2.6-rpms
subscription-manager repos --enable rhel-7-desktop-optional-rpms

You can install the latest version of Ansible using Yum:
yum install ansible

(Since we will be installing Ansible Tower on this same machine, it is recommended to use the Yum method to install Ansible.)

-OR-

You can build the RPM package by downloading the latest version of the Ansible code from Git. If choosing this method, we will first need to get all the prerequisite libraries ready (some of these are optional):
yum update
yum install python-dev python-pip wget
yum install git
yum update -y nss curl libcurl
yum install rpm-build
yum -y install python

Download the latest code, and build:
mkdir ansible
cd ansible/
git clone https://github.com/ansible/ansible.git
systemctl stop firewalld
systemctl disable firewalld
cd ./ansible/
make rpm
rpm -Uvh ./rpm-build/ansible-*.noarch.rpm

Once installed, you can view and modify the default Ansible hosts file at /etc/ansible/hosts. You can also verify a successful installation using the command:
ansible --version

Now we can go ahead and set up Ansible Tower on this machine. We will be using the integrated installation, which installs the GUI, the REST API, and the database, all on the same machine:
mkdir ansible-tower
cd ansible-tower/
wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.4.1-1.el7.tar.gz
tar xvzf ansible-tower-setup-bundle-3.4.1-1.el7.tar.gz
cd ansible-tower-setup-bundle-3.4.1-1.el7/

Tower connects to the PostgreSQL database using password authentication. We will need to create an md5 hash to configure Tower to talk with the database. Replace <CUSTOM-DB-PASSWORD> with a password of your choosing:
python -c 'from hashlib import md5; print("md5" + md5("<CUSTOM-DB-PASSWORD>" + "awx").hexdigest())'

Make a note of the hash key generated by this command; we will use it in the next step. We now have to update the inventory file (located within the ansible-tower-setup-bundle-3.4.1-1.el7 directory) with the password for the database, the hash key generated above, and a custom password of our choosing for RabbitMQ. Find the following lines and update them accordingly. First, set the admin password for the console:
admin_password='AdminPassword'

Next, set the password for database connectivity. Please note, this password should be the same as what you used to replace <CUSTOM-DB-PASSWORD> during the hash key generation step above. Also, paste the copied hash key as the value of the hashed password line:
pg_password='password'
pg_hashed_password='md5f58b4d5d85dbde46651335d78bb56b8c'

And finally, choose a custom password for RabbitMQ:
rabbitmq_password='password'

We are now ready to run the setup script:
./setup.sh

Once all the steps are completed successfully, you can verify the Tower installation by going to the URL:
https://<MACHINE-IP-OR-FQDN>:443

You can use the admin credentials (username: admin, with the admin password as defined in the inventory file) to log in and access the console. You can request a free Ansible Tower license for an evaluation environment of up to 10 nodes, or purchase a Red Hat subscription for larger environments and some additional logging, management, and support features. If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at [email protected].
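A footnote on the hash step earlier: the python one-liner concatenates your password with the literal "awx" salt and assumes Python 2 (on Python 3, md5() requires bytes, so it fails as written). If Python isn't handy, the same salted md5 value can be produced with coreutils; "password" below is a placeholder, not a recommendation:

```shell
DB_PASS="password"   # placeholder; use your real <CUSTOM-DB-PASSWORD>

# md5 of password+awx, prefixed with "md5": the same format the Python
# one-liner emits and the inventory file expects
PG_HASH="md5$(printf '%s' "${DB_PASS}awx" | md5sum | cut -d' ' -f1)"
echo "$PG_HASH"      # 35 characters: "md5" + 32 hex digits
```

Either method works; just make sure pg_password and the password fed into the hash are identical, or Tower will fail to authenticate against PostgreSQL.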