Imagine wanting to make your company a sustainable organization that your employees and clients can be proud of. Wherever you stand against sustainability criteria today, the first step is to evaluate your current state. Next, you will want to identify the many (and ever-evolving) ways you could make your organization sustainable. At the same time, you will want to identify all the stakeholders who will have to implement these solutions together on the road to becoming sustainable.
Similarly, imagine you want your organization to collaborate in identifying infrastructure needs across departments so that resources (both IT and human) can be used optimally. Increasing operational efficiency translates directly into higher revenues and lower costs. Ideally, you would have a single view of all your IT infrastructure (hardware, software, and everything in between) so you can see where processes can be optimized: by eliminating redundant infrastructure, and by applying the expertise one department or vertical has gained for the benefit of newer ones. Perhaps some departments run their operations on-premises, some run only in the cloud, and others use a hybrid solution. Perhaps you want a consolidated view of ongoing operational costs. All of this is possible with the help of a CMDB such as ServiceNow, BMC Helix CMDB, or ManageEngine. Even if a consolidated view is not your priority and you simply need to move data from varied sources into one database or system, that data can then be used by multiple disparate teams, enabling better collaboration between them. Keyva Seamless Data Pump™ integrates data between various source and target systems within your infrastructure.
Traditional approach
Small companies usually start out provisioning infrastructure resources for their employees, whether individual workstations or data centers that host business-critical applications, using rudimentary methods such as an Excel sheet or a notepad file. When the company is small, infrastructure needs are modest and this works. As the company grows, these spreadsheets can balloon, making it tedious for a handful of employees to keep track of the infrastructure already provisioned. Servicing these resources is another big challenge: manual cross-verification must be performed on each resource to find its location, its allowed privileges, and its infrastructure relationships, so that none of these are inadvertently broken, rendering the resource non-functional. A single human error can make this process disastrous.
Companies sometimes add automation tools, such as BMC Discovery, that automatically discover infrastructure on the company’s internal network. Some discovered resources may need to be corrected if there is an error in discovery, and others may need to be added manually because of characteristics the tool cannot detect. All of this manual due diligence ensures that every infrastructure element is recognized within the discovery tool.
If you then want to enable employees to open incidents against any infrastructure element, a CMDB (Configuration Management Database) is usually employed; ServiceNow is a well-known example. Employees can find infrastructure elements that need maintenance and request actions against them, or, for example, request that a new server be provisioned to deploy a business-critical application for clients. Creating such a ticket sends the request to a DevOps team, which then performs the required maintenance or provisioning. The DevOps engineer often interacts with the requester through the CMDB ticket, and may ask the requester to verify the work through that same ticket once maintenance is done. This process may change the attributes of the infrastructure resource in question, and those changed attributes then need to be synced back into the discovery appliance. Conversely, new infrastructure found by the discovery appliance needs to be synced into the CMDB tool to make it visible to employees.
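The two sync directions described above can be pictured as a simple reconcile step: attribute changes flow from the CMDB back into the discovery data, and newly discovered elements flow into the CMDB. This is an illustrative sketch only; the dict-based record stores and the `reconcile` function are assumptions for illustration, not how either appliance actually stores or syncs data.

```python
def reconcile(discovery, cmdb):
    """Two-way sync between a discovery store and a CMDB store.

    Both arguments are dicts mapping element id -> attribute dict.
    - Elements present only in discovery are copied into the CMDB
      (newly discovered infrastructure becomes visible to employees).
    - For elements present in both, the CMDB attributes win
      (maintenance changes flow back into the discovery data).
    """
    for element_id, attrs in discovery.items():
        if element_id not in cmdb:
            cmdb[element_id] = dict(attrs)  # new discovery -> CMDB
        else:
            # maintenance change recorded in the CMDB -> discovery
            discovery[element_id] = dict(cmdb[element_id])

discovery = {"srv-01": {"ram_gb": 16}, "srv-02": {"ram_gb": 32}}
cmdb = {"srv-01": {"ram_gb": 64}}  # RAM upgraded during maintenance
reconcile(discovery, cmdb)
print(discovery["srv-01"]["ram_gb"])  # 64
print("srv-02" in cmdb)               # True
```

The sketch keeps both stores consistent after one pass; a real tool would also have to handle deletions, conflicts, and partial failures.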
Keyva Seamless Data Pump™ automates your data
The Data Pump is built for a specific source-to-target combination; BMC Discovery to ServiceNow, for example, is a popular combination among our clients. Clients set the scheduler to run, typically at an off-peak hour, to sync CMDB data from the source to the target. This gives the DevOps team accurate access to the current state of each infrastructure element in their CMDB tool. It removes human error from the syncing process, avoids tedious spreadsheet maintenance, and saves employee time, making teams more efficient in servicing requests. Scheduling the sync at a low-traffic time also keeps the added network load off your peak hours.
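The scheduled incremental sync described above amounts to a delta-detection loop: only records modified since the last successful run are pushed to the target. The sketch below illustrates that idea under assumed inputs; the record format, `modified` timestamps, and `incremental_sync` function are hypothetical, not the Data Pump's actual implementation.

```python
from datetime import datetime, timezone

def incremental_sync(source_records, target, last_synced):
    """Push only source records modified since the previous run.

    source_records: list of dicts, each with an 'id' and an ISO-8601
                    'modified' timestamp.
    target:         dict keyed by record id, standing in for the target CMDB.
    last_synced:    datetime of the last successful run.
    Returns the number of records pushed.
    """
    pushed = 0
    for record in source_records:
        modified = datetime.fromisoformat(record["modified"])
        if modified > last_synced:          # delta detection
            target[record["id"]] = record   # upsert into the target
            pushed += 1
    return pushed

# Example: two records, only one changed after the last sync.
last = datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc)
source = [
    {"id": "srv-01", "modified": "2024-01-01T01:00:00+00:00"},
    {"id": "srv-02", "modified": "2024-01-02T03:00:00+00:00"},
]
cmdb = {}
print(incremental_sync(source, cmdb, last))  # 1 (only srv-02 is pushed)
```

In practice this loop would be triggered by a scheduler at the chosen off-peak time, with `last_synced` persisted between runs.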
Keyva Seamless Data Pump™ has been servicing clients for over 20 years. Notable clients across the United States, Europe, and the Asia-Pacific region use it to transfer their CMDB data, which represents the infrastructure elements in each appliance, from a source to a target appliance. The target appliance (such as ServiceNow) can then be used to keep track of the infrastructure resources that have been provisioned. The Data Pump integrates varied source and target systems such as BMC products (TrueSight, Helix Atrium, Discovery, etc.), Jira, ServiceNow, Cherwell, VMware, and HPSA (HP Server Automation), among many others. We are continuously adding more integrations, such as IBM HMC (Hardware Management Console) to ServiceNow. This automation tool is trusted by clients and supported by a single dedicated team.
The Technology
The Data Pump uses standard, modern technologies, such as REST API calls, to communicate with source and target systems. It can be installed on any Windows or Linux OS with Java installed and can be set up and functional in under 45 minutes. The initial load from source to target is usually bulky and is optimized for speed using multiple simultaneous processing threads. The Scheduler lets you run your incremental load daily or periodically at a time of your choosing, keeping the load off your network's peak hours. The Extract, Transform, Load (ETL) logic in the Data Pump can be customized to your business needs: for example, you can ensure that empty or null values in specific source columns are never synced, or transform source data with provided functions to apply patterns uniformly and automatically. Source and target connection settings can be changed easily through a user-friendly UI, and complex extract query patterns are available to pull source data exactly the way your use case needs it. The Data Pump creates reports on data loads for manual or automated review by management or the team. To verify that an ETL works as intended, you can simulate it manually in the Data Pump before finalizing the source-target mappings, which are then fed automatically into the Scheduler. Drawing on our CMDB expertise, the tool ships with built-in maps that get you off the ground within minutes, providing the most commonly needed source extract queries, data transformation patterns, and source-target data mappings.
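To make the kinds of transform rules mentioned above concrete, here is a minimal sketch of a per-row transform that skips empty values in chosen columns and applies one pattern (lowercasing and trimming) uniformly to others. The function name, rule names, and row format are hypothetical illustrations, not the Data Pump's API.

```python
def transform_row(row, drop_if_null=("serial_no",), normalize=("hostname",)):
    """Apply simple ETL rules to one source row (a dict).

    - Columns listed in drop_if_null are removed when empty or None,
      so they are never synced to the target.
    - Columns listed in normalize are trimmed and lowercased, applying
      the same pattern uniformly to all source data.
    """
    out = {}
    for col, value in row.items():
        if col in drop_if_null and value in (None, ""):
            continue  # skip empty values entirely
        if col in normalize and isinstance(value, str):
            value = value.strip().lower()
        out[col] = value
    return out

row = {"hostname": "  WEB-01.Example.COM ", "serial_no": None, "os": "Linux"}
print(transform_row(row))  # {'hostname': 'web-01.example.com', 'os': 'Linux'}
```

A real ETL layer would express such rules declaratively in the mapping configuration rather than in code, but the effect on each row is the same.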
The Support
A dedicated support team handles client queries and enhancement requests. Support cases can be opened through our support portal, with contractual SLAs. Elaborate documentation (user guides, admin guides, and the product help menu) allows clients to customize the Data Pump independently. We invite and value our clients’ feedback: if your business needs to integrate with a specific source or target datapoint or appliance, we are excited to build that integration into the Data Pump. We offer multi-year support discounts, and both permanent (CapEx) and subscription (OpEx) licensing models are available.
The Benefits
The Data Pump keeps infrastructure elements in sync across your organization, whether they serve your employees or your customers. Your DevOps and DevSecOps teams no longer have to worry about keeping infrastructure resources in sync, so everyday provisioning and maintenance, whether for employees or for business-critical applications, can be automated. You can customize the Data Pump to keep your cloud, hybrid, and on-premises infrastructure in sync, giving you visibility into infrastructure across departments and verticals. It removes the need to build an in-house tool to connect all of your infrastructure, which could take months, specialized skills, and significant cost, and it removes the need for error-prone consolidation methods such as text files or spreadsheets.
Typically, the Data Pump is installed and customized by IT Architects, DevOps Engineers, DevSecOps Engineers, Platform Engineers, Systems Engineers or Infrastructure Engineers. These teams use the Data Pump to ensure that when an infrastructure resource is needed, its current state is readily and accurately available.
Let’s Talk. Contact [email protected] for a demo today! We will show you how to leverage the power of a connected organization. At Keyva, we take pride in helping our clients identify and automate routine tasks so that you can focus on your core business instead of mundane, time-consuming work.
Shubhangi Wakodikar, Senior Product Developer. Shubhangi is a skilled software engineer specializing in using Core Java, Gradle, and a diverse technology stack to maintain and enhance products for large-scale clients. Passionate about leveraging technology to simplify human life and promote environmentally friendly operations, she brings expertise across various programming languages, frameworks, and tools. Like what you read? Follow Shubhangi on LinkedIn.