By Anuj Tuli, CTO
Keyva announces the certification of its ServiceNow App for Red Hat Ansible Tower against the Orlando release (the latest release) of ServiceNow. ServiceNow announced the Orlando release on January 23, 2020; it is the newest version in the company's long line of platform updates.
Customers can now seamlessly upgrade their ServiceNow App for Ansible Tower from previous ServiceNow releases – London, Madrid, New York – to the Orlando release.
You can find out more about the App, and view all the ServiceNow releases it is certified against, on the ServiceNow store here: http://bit.ly/2W5tYHv
By Brad Johnson, Lead DevOps Engineer
When developing automation, you may be faced with challenges that are simply too complicated or tedious to accomplish with Ansible alone. There may even be cases where you are told that "it can't be automated." However, when you combine Ansible with custom Python using the pexpect module, you can automate practically anything you can do on the command line. In this post we will discuss the basics of creating a custom Ansible module in Python.
There are a few cases where you might need to create a custom module. The one we will focus on in this article is when a command you run drops you into a new shell or a new interactive interface, which a traditional Linux shell or bash script simply cannot continue past. If these tools provided a non-interactive mode or a config/script input we would not need to do this; to overcome the situation we use Python with pexpect. The native Ansible "expect" module provides a simple interface to this functionality and should be evaluated before writing a custom module. However, when you need more complex interactions, want specific data returned, or want to provide a reusable and simpler interface to an underlying program for others to consume, then custom development is warranted.
In this guide I will talk about the requirements and steps needed to create your own library module. The source code with our example is located here and contains notes in the code as well. The pexpect code is intentionally complex to demonstrate some use cases.
#!/usr/bin/env python
import os
import getpass
DOCUMENTATION = '''
---
module: my_module
short_description: This is a custom module using pexpect to run commands in myscript.sh
description:
    - "This module runs commands inside a script in a shell. When run without commands it returns current settings only."
options:
    commands:
        description:
            - The commands to run inside myscript in order
        required: false
    options:
        description:
            - options to pass the script
        required: false
    timeout:
        description:
            - Timeout for finding the success string or running the program
        required: false
        default: 300
    password:
        description:
            - Password needed to run myscript
        required: true
author:
    - Brad Johnson - Keyva
'''
EXAMPLES = '''
- name: "Run myscript to set up myprogram"
  my_module:
    options: "-o myoption"
    password: "{{ myscript_password }}"
    commands:
      - "set minheap 1024m"
      - "set maxheap 5120m"
      - "set port 7000"
      - "set webport 80"
    timeout: 300
'''
RETURN = '''
current_settings:
    description: String containing current settings after last command was run and settings saved
    type: str
    returned: On success
logfile:
    description: String containing logfile location on the remote host from our script
    type: str
    returned: On success
'''
def main():
    # This is the import required to make this code an Ansible module
    from ansible.module_utils.basic import AnsibleModule
    # This instantiates the module class and provides Ansible with
    # input argument information, it also enforces input types
    module = AnsibleModule(
        argument_spec=dict(
            commands=dict(required=False, type='list', default=[]),
            options=dict(required=False, type='str', default=""),
            password=dict(required=True, type='str', no_log=True),
            timeout=dict(required=False, type='int', default=300)
        )
    )
    commands = module.params['commands']
    options = module.params['options']
    password = module.params['password']
    timeout = module.params['timeout']
    try:
        # Importing the modules here allows us to catch them not being installed on remote hosts
        # and pass back a failure via ansible instead of a stack trace.
        import pexpect
    except ImportError:
        module.fail_json(msg="You must have the pexpect python module installed to use this Ansible module.")
    try:
        # Run our pexpect function
        current_settings, changed, logfile = run_pexpect(commands, options, password, timeout)
        # Exit on success and pass back objects to ansible, which are available as registered vars
        module.exit_json(changed=changed, current_settings=current_settings, logfile=logfile)
    # Use python exception handling to keep all our failure handling in our main function
    except pexpect.TIMEOUT as err:
        module.fail_json(msg="pexpect.TIMEOUT: Unexpected timeout waiting for prompt or command: {0}".format(err))
    except pexpect.EOF as err:
        module.fail_json(msg="pexpect.EOF: Unexpected program termination: {0}".format(err))
    except pexpect.exceptions.ExceptionPexpect as err:
        # This catches any pexpect exceptions that are not EOF or TIMEOUT
        # This is the base exception class
        module.fail_json(msg="pexpect.exceptions.{0}: {1}".format(type(err).__name__, err))
    except RuntimeError as err:
        module.fail_json(msg="{0}".format(err))
def run_pexpect(commands, options, password, timeout=300):
    import pexpect
    changed = True
    script_path = '/path/to/myscript.sh'
    if not os.path.exists(script_path):
        raise RuntimeError("Error: the script '{0}' does not exist!".format(script_path))
    if script_path == '/path/to/myscript.sh':
        raise RuntimeError("This module example is based on a hypothetical command line interactive program and "
                           "can not run. Please use this as a basis for your own development and testing.")
    # Set prompt to expect with username embedded in it
    # YOU MAY NEED TO CHANGE THIS PROMPT FOR YOUR SYSTEM
    # My default RHEL prompt regex
    prompt = r'\[{0}\@.+?\]\$'.format(getpass.getuser())
    output = ""
    # encoding='utf-8' makes pexpect return strings instead of bytes on Python 3
    child = pexpect.spawn('/bin/bash', encoding='utf-8')
    try:
        # Look for initial bash prompt
        child.expect(prompt)
        # Start our program
        child.sendline("{0} {1}".format(script_path, options))
        # look for our scripts logfile prompt
        # Example text seen in output: 'Logfile: /path/to/mylog.log'
        child.expect(r'Logfile\:.+?/.+?\.log')
        # Note that child.after contains the text of the matching regex
        logfile = child.after.split()[1]
        # Look for password prompt
        i = child.expect([r"Enter password\:", '>'])
        if i == 0:
            # Send password
            child.sendline(password)
            child.expect('>')
        # Increase timeout for longer running interactions after quick initial ones
        child.timeout = timeout
        try:
            # Look for program internal prompt or new config dialog
            i = child.expect([r'Initialize New Config\?', '>'])
            # pexpect will return the index of the regex it found first
            if i == 0:
                # Answer 'y' to initialize new config prompt
                child.sendline('y')
                child.expect('>')
            # If any commands were passed in loop over them and run them one by one.
            for command in commands:
                child.sendline(command)
                i = child.expect([r'ERROR.+?does not exist', r'ERROR.+?$', '>'])
                if i == 0:
                    # Attempt to intelligently add items that may have multiple instances and are missing
                    # e.g. "socket.2" may need "add socket" run before it.
                    # Try to allow the user just to use the set command and run add as needed
                    try:
                        new_item = child.after.split('"')[1].split('.')[0]
                    except IndexError:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after))
                    child.sendline('add {0}'.format(new_item))
                    i = child.expect([r'ERROR.+?$', '>'])
                    if i == 0:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after.strip()))
                    # Retry the failed original command after the add
                    child.sendline(command)
                    i = child.expect([r'ERROR.+?$', '>'])
                    if i == 0:
                        raise RuntimeError("ERROR: unable to automatically add new item in myscript,"
                                           " file a bug\n {0}".format(child.after.strip()))
                elif i == 1:
                    raise RuntimeError("ERROR: unspecified error running a myscript command\n"
                                       " {0}".format(child.after.strip()))
            # Set timeout shorter for final commands
            child.timeout = 15
            # If we processed any commands run the save function last
            if commands:
                child.sendline('save')
                # Using true loops with expect statements allow us to process multiple items in a block until
                # some kind of done or exit condition is met where we then call a break.
                while True:
                    i = child.expect([r'No changes made', r'ERROR.+?$', '>'])
                    if i == 0:
                        changed = False
                    elif i == 1:
                        raise RuntimeError("ERROR: unexpected error saving configuration\n"
                                           " {0}".format(child.after.strip()))
                    elif i == 2:
                        break
            # Always print out the config data from our script and return it to the user
            child.sendline('print config')
            child.expect('>')
            # Note that child.before contains the output from the last expected item and this expect
            current_settings = child.before.strip()
            # Run the 'exit' command that is inside myscript
            child.sendline('exit')
            # Look for a linux prompt to see if we quit
            child.expect(prompt)
        except pexpect.TIMEOUT:
            raise RuntimeError("ERROR: timed out waiting for a prompt in myscript")
        # Get shell/bash return code of myscript
        child.sendline("echo $?")
        child.expect(prompt)
        # process the output into a variable and remove any whitespace
        exit_status = child.before.split('\r\n')[1].strip()
        if exit_status != "0":
            raise RuntimeError("ERROR: The command returned a non-zero exit code! '{0}'\n"
                               "Additional info:\n{1}".format(exit_status, output))
        child.sendline('exit 0')
        # run exit as many times as needed to exit the shell or subshells
        # This might be useful if you ran a script that put you into a new shell where you then ran some other scripts
        # This is also a good example of looping over expect results until an EOF is reached
        while True:
            i = child.expect([prompt, pexpect.EOF])
            if i == 0:
                child.sendline('exit 0')
            elif i == 1:
                break
    finally:
        # Always try to close the pexpect process
        child.close()
    return current_settings, changed, logfile
if __name__ == '__main__':
    main()
In order to create a module you need to put your new "mymodule.py" file somewhere in the Ansible module library path, typically a "library" directory next to your playbook or a "library" directory inside your role. It's also important to note that Ansible library modules run on the target host, so if you want to use the Ansible "expect" module or build a custom module that uses pexpect, you will need to install the Python pexpect module on the remote host before running the module. (Note: the pexpect version provided in the RHEL/CentOS repos is old and will not support the Ansible "expect" module; install via pip instead to get the latest version.)
Information on the library path is located here:
https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html
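As a rough sketch of that layout, assuming a playbook in the current directory and a module file named my_module.py (both names are placeholders), the setup might look like this:
# On the Ansible control node: place the module next to your playbook
mkdir -p library
cp my_module.py library/my_module.py
# On each remote target host: install a current pexpect via pip,
# since the distro-provided package is typically too old for the Ansible "expect" module
pip install --upgrade pexpect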
Your example.py file needs to be a standard Python file with a shebang header, and it must import the AnsibleModule class. Here is the bare minimum amount of code needed for an Ansible module:
#!/usr/bin/env python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(argument_spec=dict(mysetting=dict(required=False, type='str')))
try:
    return_value = "mysetting value is: {0}".format(module.params['mysetting'])
except:
    module.fail_json(msg="Unable to process input variable into string")
module.exit_json(changed=True, my_output=return_value)
With this example you can see how variables are passed into and out of the module. It also includes a basic exception handler for dealing with errors and letting Ansible handle the failure. This exception clause is too broad for normal use, as it will catch and hide all errors that could happen in the try block. When you create your module you should catch only the error types that you anticipate, to avoid hiding stack traces of unexpected errors from your logs.
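Once the file is saved as library/my_module.py next to your playbook, one quick way to exercise it is an ad-hoc run against localhost (a minimal sketch; the module path and the mysetting value are just examples):
# Run the minimal module locally and print the JSON it returns, including my_output
ansible localhost -M ./library -m my_module -a "mysetting=hello"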
Now we can add in some custom pexpect processing code. This again is a very basic example; the example code linked in this blog post has a more complicated and in-depth version. This function would then be added into our try-except block in the code above.
def run_pexpect(password):
    import pexpect
    child = pexpect.spawn('/path/to/myscript.sh')
    child.timeout = 60
    child.expect(r"Enter password\:")
    child.sendline(password)
    child.expect('Thank you')
    child.sendline('exit')
    child.expect(pexpect.EOF)
    exit_dialog = child.before.strip()
    return exit_dialog
There are some important things to note here when dealing with pexpect and Ansible.
When creating custom modules, I would encourage you to give thought to making the simplest, most maintainable, and most modular modules possible. It can be easy to create one module/script to rule them all, but the Linux philosophy of having one tool do one thing well will save you from rewriting chunks of code that do the same thing, and it will also help future maintainers of the automation you create.
https://docs.ansible.com/ansible/latest/modules/expect_module.html
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html
https://pexpect.readthedocs.io/en/stable/overview.html
If you have any questions about the steps documented here, would like more information on the custom development process, or have any feedback or requests, please let us know at [email protected].
Kong Enterprise gives you the ability to rate limit traffic for various objects using the Rate Limiting Advanced plugin. In the example below, we will rate limit a service fronted by Kong Enterprise.
We will use our existing Kong Enterprise on RHEL 7 environment. The installation process for this environment is documented here.
First, let's make sure we have an existing service we can use. If your environment needs to have a service created, you can check out our blog on how to do so here.
We will also be using the RBAC controls and the user we set up in our earlier blog post. If you have not yet set up RBAC, you can learn how to do so here.
1) Create a service that we can use for this example
Log in to the Kong portal at https://<kong_FQDN_or_IP>:8445 and navigate to your chosen Workspace -> Services -> New Service
Fill in the fields for Service Name, Host, Path, Port and other fields as necessary
You can also run the step of creating a Service via the command line in the format below:
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services" --data "name=DemoService" --data "url=http://myurl.com" --header "Kong-Admin-Token: rbac_user_token_1"
Check to make sure the Service was created successfully by navigating through the console
Or running the following command line:
curl -i -X GET --url "http://<kong_FQDN_or_IP>:8001/services" --header "Kong-Admin-Token: rbac_user_token_1"
2) Next we will add a route for this service
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/routes" --data "hosts[]=mydemoexample.com" --header "Kong-Admin-Token: rbac_user_token_1"
3) Use the rate limiting plugin with our defined service
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --data "name=rate-limiting-advanced" --data "config.sync_rate=0" --data "config.window_size=60" --data "config.limit=2" --header "Kong-Admin-Token: rbac_user_token_1"
This configuration means that the DemoService service will not be allowed to process more than 2 requests per 60-second window.
4) Now we will test running more than 2 requests against the DemoService service.
Run the request below more than twice within the 60-second window:
curl -i -X GET --url "http://<kong_FQDN_or_IP>:8000/" --header "Host: mydemoexample.com" --header "Kong-Admin-Token: rbac_user_token_1"
After the third request, we get the following message:
HTTP/1.1 429 Too Many Requests
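If you want to reproduce this from the command line, a simple loop works well (a sketch; substitute your own gateway address, and expect the third request inside the window to return the 429 shown above):
for i in 1 2 3; do
  curl -s -o /dev/null -w "request ${i}: HTTP %{http_code}\n" \
    --url "http://<kong_FQDN_or_IP>:8000/" \
    --header "Host: mydemoexample.com"
done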
By controlling the volume of requests to a specific service, and by adding RBAC controls in front of it, you effectively create a quasi-firewall that helps protect east-west traffic against internal networking vulnerabilities.
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected]
If you've used the community version of the Kong API gateway, you have probably noticed that anyone who knows the server name or IP of your Kong community API gateway can access and modify existing objects, including services and routes. Kong Enterprise provides additional capabilities for setting up and using role-based access control (RBAC).
In this example, we will leverage the Kong Enterprise on RHEL 7 lab instance we set up earlier. You can read the install steps here.
Before getting started, please make sure enforce_rbac=on is set in the kong.conf file.
Log in to https://<Kong-Enterprise-VM-IP>:8445/login using kong_admin as the username and the password you set during the install process (this is the same password you assigned during the step of EXPORT_PASSWORD='password')
Click on Teams -> RBAC Users
Create a new user rbac_user_1 with a token of rbac_user_token_1
Make sure that enabled checkbox is checked
Add roles –> admin
Note that we are creating this user with 'admin' permissions, but not 'super-admin', so it will have access to all endpoints across all workspaces except the RBAC Admin API.
A new RBAC user, rbac_user_1, gets created
Now let's try and test the RBAC setup. We will use Postman (https://www.getpostman.com/) for this example.
First we will create a new Collection labeled 'Kong Enterprise' and then a new Request within that Collection called 'Get Services'.
Next, we will try to run a GET request against https://<Kong-Enterprise-VM-IP>:8445/services to list out all available services. If you don't pass any headers or credentials, you get the error notification "Invalid credentials. Token or User credentials required".
By adding a header with Kong-Admin-Token and the value of the token set in the earlier step, rbac_user_token_1, we run the request again and this time it succeeds.
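You can run the same check from the command line with curl if you prefer (a sketch; the -k flag is only needed if the Admin API uses a self-signed certificate):
# Without credentials - expect the "Invalid credentials" error
curl -i -k -X GET "https://<Kong-Enterprise-VM-IP>:8445/services"
# With the RBAC token header - expect a JSON list of services
curl -i -k -X GET "https://<Kong-Enterprise-VM-IP>:8445/services" \
  --header "Kong-Admin-Token: rbac_user_token_1"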
As you can see, with RBAC enabled, Kong Enterprise provides much greater control over who can access and modify various objects. The user permissions can be tailored to suit various team needs – depending upon how granular you want access to be.
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected]
We all know that Agile has been around for a while now. You have probably heard about it a thousand times over the past few years. But even today there are organizations (e.g. utilities, which prefer CapEx) that rely heavily on the waterfall model for software development, whereby all the project and implementation details are agreed to by the stakeholders upfront. In the past, this practice has proved to be risky, inflexible, and both time consuming and costly at the beginning of the project. When starting any new project, even when applying all possible game theory outcomes to determine the risk and estimated effort, at some point you don't know what you don't know. Agile helps alleviate some of these issues. Using various methodologies, the Agile process allows requirements to evolve and change, while a team of developers and experts from various organizational areas works together to address tasks as they evolve. This type of setup is typically led by a Scrum Master, who holds regular checkpoint meetings with the stakeholders, helps break the work down into smaller chunks to be picked up by the developers, and sets timelines for completion and accountability.
There are several methodologies and frameworks that you can follow to be more Agile – for example, Scrum, Test-Driven Development, DevOps, Continuous Integration, Continuous Delivery, Kanban, Extreme Programming, and more. The idea is to provide flexibility, avoid lock-in to a set process or tools, and have regular checkpoints with the stakeholders so that any shifts in direction can be accommodated earlier in the cycle.
The Manifesto for Agile Software Development outlines four values: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan.
It is also supported by twelve principles, which emphasize early and continuous delivery of valuable software, welcoming changing requirements, close daily collaboration between business people and developers, a sustainable pace, technical excellence, simplicity, self-organizing teams, and regular reflection and adjustment.
The Product Owner is one of the most important stakeholders in the Agile process and is responsible for setting the overall strategy and direction for the deliverable being worked on. Because Scrum tasks are at a very low level, it is easy to miss the big picture of what is being delivered and why. The Product Owner's role is to help make sense of all the small, unrelated tasks, so that the team delivers a product that provides business value to the organization.
Keyva has offerings available to help you with your Agile journey. You can find more information here. Please contact us if you'd like to have us review your environment and provide suggestions on what might work for you. Simply drop us a line here: [email protected]
As organizations transform and containerize their business-critical applications with the objective of making them ready for deployment to various cloud platforms, they also need to address application security. There are several considerations for achieving varying levels of security for your cloud-native app. Let's take a look at how Kong can help make this process easier:
Access Restrictions, RBAC and Traffic Restrictions
Kong API gateway can tie into your existing AD/LDAP setup and map to existing groups and users to provide role based access control (RBAC) in front of your cloud-native app. By setting up access rules in Kong, you can restrict the traffic consuming your application to specific IPs or DNS addresses. You can also easily manage and update these rules so that they can be tweaked based on the type of application and the desired functionality. For example, you can create a security profile for web-based applications that automatically allow incoming traffic on ports 80 and 8080, but block all other ports.
Throttling
If your back-end infrastructure is not yet able to handle large spikes of incoming application requests, or if you would like to maintain a set rate of incoming requests into a queue to accommodate for processing times, you can throttle the API calls to be limited to a fixed number using the Kong API gateway. Such throttling can help deter any DDoS based attacks, especially if your critical applications are exposed to the outside network.
Canary Testing and 'Promote to Production'
Kong gives you the ability to gradually and smoothly transition workloads from your lower environments to higher environments, and thereby automate the 'promote to production' process. By defining destination 'weights' for the incoming traffic, you can initially assign a higher weight to the lower environment, and as the results are hardened, you can reduce the weight for your test endpoint and increase it for your production endpoint. This allows you to reduce deployment risk and release new patches and updates with zero downtime.
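One way to implement those weights is with a Kong upstream that has two weighted targets, shifting the ratio as the release hardens (a hedged sketch against the Admin API; the upstream name and hostnames are placeholders):
# Create an upstream and register the test and production endpoints as weighted targets
curl -i -X POST http://<kong_FQDN_or_IP>:8001/upstreams --data "name=myapp-upstream"
curl -i -X POST http://<kong_FQDN_or_IP>:8001/upstreams/myapp-upstream/targets \
  --data "target=test.myapp.internal:8080" --data "weight=900"
curl -i -X POST http://<kong_FQDN_or_IP>:8001/upstreams/myapp-upstream/targets \
  --data "target=prod.myapp.internal:8080" --data "weight=100"
# As confidence grows, lower the test weight and raise the production weight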
Additionally, Kong offers a number of free plugins that can be used with its community open-source edition. By leveraging a base Kong API gateway with the Bot Detection plugin, you can protect your application service from the most common attack bots and blacklist or whitelist specific traffic sources.
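Enabling that plugin on a service is a single Admin API call (a sketch; my-service is a placeholder for your own service name):
curl -i -X POST http://<kong_FQDN_or_IP>:8001/services/my-service/plugins \
  --data "name=bot-detection"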
Keyva helps Fortune 500 organizations evaluate their existing business application portfolios and transform their applications to cloud-native architectures. Keyva can help you deliver new agile technical capabilities and drive adoption. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for healthcare, banking, ISP, telecommunications, government and other sectors.
During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies.
Like what you read? Follow Anuj on LinkedIn at: https://www.linkedin.com/in/anujtuli/
If you have an IT environment with hundreds or thousands of servers running multiple OSes, you know the operational challenges of patching and maintenance all too well. Many organizations have operational team members dedicated to patching systems on a weekly basis. If this process is carried out manually, it requires a lot of an engineer's time to go through the systems individually. The engineer needs to track the current patch level and deploy the latest patches to bring the systems into compliance, and then run integration tests to make sure nothing was adversely affected. Each OS type has a different methodology for patching, and this work is mostly targeted for off-hours, when the systems are more likely to be idle.
There are many different levels of automation that can be applied to OS patching. At the lowest level of automation, you can use configuration management systems to make sure that all devices under management are at a consistent level of software patching. This process can be triggered manually via the configuration management console or can be scheduled from within the tool. You can turn this into a more advanced automated use case by integrating with your ticketing system. An example: a Change ticket is scheduled for patching the targeted systems, it goes through the necessary team approvals, and automated configuration management is triggered once the approved change window is reached. Notifications will then be sent out to the operations team upon success or failure of the automation. In one of our customer studies the operations team saved over 800 hours (20 FTE weeks) per year by automating patching for 1500 of their Windows and Unix servers. The more devices or OSes in your environment, the more you have to gain from thorough automation.
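As a minimal illustration of that lowest level of automation, the patch run itself can be reduced to configuration management commands such as these (a hedged sketch using Ansible ad-hoc modules; the group names are placeholders and your patch policy will differ):
# Patch RHEL/CentOS servers to the latest package versions
ansible rhel_servers -b -m yum -a "name=* state=latest"
# Install outstanding security updates on Windows servers
ansible windows_servers -m win_updates -a "category_names=SecurityUpdates state=installed"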
Keyva helps Fortune 100 organizations assess their existing patch management processes and compare their "As Is" state to their recommended "To Be" state. By helping implement the recommendations around people, process, and tool changes, we've saved hundreds of hours of manual work for these organizations. This in turn has helped make their operations teams more efficient and able to provide more direct value-add to their core business rather than spend time on repetitive tasks. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
We talked about how Kong can help you get set up with API abstraction here, and we also talked about Composable Infrastructures here. Today we will discuss how you can combine both these ideas to enable the delivery of Automation-as-a-Service by setting up Composable Automation with Kong.
Most organizations that develop a lot of IT automation do so using multiple best-of-breed automation tools. We will consider Red Hat Ansible and Chef for today's example. These two tools require very different skill sets to administer and operate: Ansible is YAML- and Python-based, while Chef requires Ruby scripting. But given that an organization has already invested a lot of time and effort in developing automation in these different products, how do we maximize the value of those existing modules? Would we need another overarching tool to orchestrate them? Or do we simply replace one with the other? And given that they require different skills, how do we retrain the teams that use the tool being replaced? All of these questions are valid, and they require a deeper assessment of the targeted end goal.
One of the options to resolve this quandary is to create Composable Automation using the tools you already have, and add a layer of Kong API abstraction on top. In other words, the Red Hat Ansible and Chef teams would each create a deployable package that has all the platform prerequisites for a team to start creating playbooks or cookbooks, and make it available via a code repository. This way, the end users of these automation platforms can keep working with the tool of their choice and keep developing automation with their existing skill sets, without having to worry about administering those platforms or learning a new programming language. Adding a layer of Kong abstraction helps genericize the automation being called. For example, say there is automation built in both products for provisioning a VM; since both automation products have specific REST API calls to trigger that function, the API abstraction layer can redirect the "provision a VM" call to the appropriate automation tool on the back end depending upon the requested parameters (see the sketch below). This also helps from a business value perspective, as you can gradually replace or retire the automation platform you no longer need without affecting the calling service.
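As a rough sketch of that abstraction layer, you could front a hypothetical Ansible Tower job template with a generic Kong endpoint; the job template ID, hostnames, and route path below are placeholders:
# Create a Kong service that points at the Ansible Tower job template launch endpoint
curl -i -X POST http://<kong_FQDN_or_IP>:8001/services \
  --data "name=provision-vm" \
  --data "url=https://tower.example.com/api/v2/job_templates/42/launch/"
# Expose it through a generic route that callers can use without knowing the back-end tool
curl -i -X POST http://<kong_FQDN_or_IP>:8001/services/provision-vm/routes \
  --data "paths[]=/provision-vm"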
Keyva can help you assess your application readiness and adopt Agile methodologies. Through leveraging Infrastructure-As-Code deployments and implementing Composable Infrastructures you can rapidly modernize your applications. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]