
Blog & Insights

From Warm-Up Act to Headliner: The Rise of Kubernetes

At Evolving Solutions and Keyva, our Kubernetes expertise extends past the platform to your broader IT and business needs. While Kubernetes outperforms traditional virtual machine environments in flexibility and efficiency, its complexity can be daunting.

Testing Different Go-based Scripting Projects
Comparing Bitfield/Scripts and Risor as alternatives to shell scripts
I don't like writing shell scripts. I like uniformity and predictability, easy-to-grok interfaces, and, if I can get it, type hinting. Shell scripts have none of that, and they're often written quickly with little deference to the system they're participating in or to the future maintainer. So it's unfortunate to me that shell scripts are the glue of automation, connecting disparate components and data across Linux to form pipelines. It's equally unfortunate that, like glue, shell scripts are not always applied with care. When I came across Risor and "script" (github.com/bitfield/script), two projects with the goal of making script writing easier in Go, I had a lot of questions. How easy was it to write a script from scratch? What features are available? How easy was it to share scripts across hosts and pipelines? Could these really replace shell scripts? To answer those questions, I wanted to run a few tests to see how well each performed, whether either could perform as well as a regular shell script, and whether the drawbacks outweighed the benefits. I had 4 criteria I wanted to evaluate with each:
  1. Getting started: How difficult was it to install the tool or library and start executing a script?
  2. Performance: How well does the tool or library perform compared to standard shell scripts?
  3. Distribution: How difficult is it to share and execute scripts in different environments?
  4. Maintenance: How difficult is it to learn the tool or library to contribute or troubleshoot?
Weighing those together, though not on any scientific or numerical scale, will at least give an idea of how these projects compare to each other, how they could potentially replace Bash in existing workflows, and when they might be worth considering.
Risor
Risor is a CLI tool and Go package that reads and executes files written in Risor's DSL.
Installing and Getting Started with Risor
There are two ways of using Risor: as an executable and as a package. Installing the executable is simple for Mac users: Risor has a package available through Homebrew (brew install risor) that handles installing the tool and any dependencies. On other platforms, there isn't a precompiled binary (that I could find in the docs), so the tool needs to be built before it can be used. That means installing Go and configuring the environment, cloning the project, and running go install. Not especially difficult, but more work than is required on macOS. Using the Risor package is the same process on every platform: it requires Go's build tools and is pulled into the environment the same way as any other Go package.
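For reference, a from-source install might look like the sketch below; it assumes the project is hosted at github.com/risor-io/risor with the CLI entry point under cmd/risor, so check the repository layout before relying on these exact paths.
$ git clone https://github.com/risor-io/risor.git
$ cd risor
$ go install ./cmd/risor    # installs the risor binary into $GOBIN (default ~/go/bin)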
Risor modules
Functionality within Risor is implemented by compiling and installing various modules along with the project and then exposing them through the DSL at the script layer. Modules are included for interacting with the OS, DNS, JSON, and other tools and applications like Kubernetes. There are many cases where the DSL doesn't really provide a benefit, like reading files or printing text, where shell languages are already simple enough, but the more advanced and specific modules provide a much simpler interface. For example, the tablewriter module can take arrays of data and print nice-looking tables to the shell. I can see uses where I'm trying to display a table of data during a pipeline and implementing it in Risor is much easier than in other languages. Risor also has documentation for implementing custom modules and contributing them back to the project or distributing them privately.
Creating an example Risor script
For the first test, I created a temporary directory to start from and installed Risor.
$ cd $(mktemp -d)
$ brew install risor
Second, I created a script read_file to read a file and print the contents to stdout.
$ <<EOT >> read_file
#!/usr/bin/env risor --
my_file := os.args()[1]
printf(cat(my_file))
EOT
$ chmod u+x ./read_file
Lastly, I created an example file and executed the script to read it.
$ <<EOT >> test.txt
Hello, Risor!
EOT
$ ./read_file ./test.txt
Hello, Risor!
Altogether, a pretty simple process from start to finish, albeit one that only performs a very simple task.
Comparing Risor performance to shell
To compare against the shell performance, I wrote that same script in Zsh, reading a file from an argument and printing to standard out.
$ <<EOT >> shell_script
#!/bin/zsh
my_file="\${1}";
printf "%s" "\$(cat \${my_file})"
EOT
$ chmod u+x ./shell_script
To generate the comparison, I started a timer and a loop and executed each script 1000 times to get numbers that are a little easier to compare.
# Shell script
$ time zsh -c 'for i in {1..1000}; do ( ./shell_script ./test.txt >> /dev/null ); done'
zsh -c   0.02s user 0.15s system 3% cpu 4.420 total

# Risor
$  time zsh -c 'for i in {1..1000}; do ( ./read_file ./test.txt >> /dev/null ); done'
zsh -c 'for i in {1..1000}; do ( ./read_file ./test.txt >> /dev/null ); done'  0.04s user 0.20s system 1% cpu 18.706 total
Performing the file read 1000 times with Risor took 18.7 seconds. Performing that same operation with just Zsh took only 4.4 seconds. Side note: assigning the argument to a variable in the shell script slowed it down by almost half (47%), whereas assigning the argument to a variable in Risor made almost no difference (2%). Is it enough to be noticeable? Probably. Considering that this is a very small example, reading and printing a file, performed a thousand times, the difference for an individual run is milliseconds and, in isolation, may never be noticed. But for more complex scripts reading multiple files and performing string transforms, or multiple scripts run at different points in a pipeline, those delays start adding up.
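For the curious, the faster variant mentioned in the side note, with no intermediate variable assignment, is just a small change to the shell script (a sketch, not retested here):
#!/bin/zsh
# Read the file named by the first argument directly, without assigning it to a variable first
printf "%s" "$(cat "${1}")"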
Distributing Risor scripts
Risor runs these scripts by reading the script file, parsing the DSL, and executing it. There are a few available methods for distributing and executing Risor scripts across hosts:
  1. Install Risor and custom modules anywhere the scripts will need to run, following the normal installation steps,
  2. Create and build a Go project that utilizes the Risor package and distribute those binaries from a repository,
  3. Or compile the scripts using a separate tool, like github.com/rubiojr/rsx, and distribute the binaries.
Compared to sharing shell scripts, none of these solutions are particularly simple. All require the installation of outside tools or complicated methods of packaging. I had really hoped there would be a way to compile scripts using the same toolset that is used to execute them, but it's either not possible or not yet implemented.
Bitfield/Scripts
script is a Go package that exposes shell executables and functionality as Go-like objects.
Getting started with script
Since script is a Go package, installing and getting started doesn't require applications outside of the Go toolset, and the process is identical on every platform:
  1. Install and configure the Go environment
  2. Create a new module for the code: go mod init go.example.com/script
  3. Fetch and add the dependency: go get github.com/bitfield/script
While altogether not difficult, I've always lamented Go's requirement that everything be a module. It's difficult to quickly prototype a solution when I have to initialize a module, define dependencies, etc., when all I want to do is test whether my code works. For me, the benefits of using script get lost quickly because it doesn't provide anything that couldn't be gained by just writing in shell or Python. Because these scripts must be compiled before they can be run (leaving go run ... aside, since that still requires the entire Go environment), the quick nature of writing them is lost, and writing them is similar to writing any other Go-compiled executable.
Creating a script test
I implemented the same functionality in the script example that I did for Risor, so I started by creating a new temp directory and a new module.
$ cd $(mktemp -d)

$ go mod init go.example.com/script
go: creating new go.mod: module go.example.com/script
Then I added the script dependency and created the new script.
$ go get github.com/bitfield/script
go: added github.com/bitfield/script v0.24.0
go: added github.com/itchyny/gojq v0.12.13
go: added github.com/itchyny/timefmt-go v0.1.5
go: added mvdan.cc/sh/v3 v3.7.0

$ mkdir -p cmd/read_file

$ <<EOT >> cmd/read_file/main.go

package main

import (
     "strings"
     "github.com/bitfield/script"
)
func main() {
     my_file := script.Args().First(1)
     filename, err := my_file.String()
     if err != nil {
          panic(err)
     }
     script.File(strings.TrimSpace(filename)).Stdout()
}
EOT
Next, I built the project and created my test file.
$ go build ./...
$ <<EOT >> test.txt
Hello, script!
EOT
And ran my script.
$ ./read_file ./test.txt
Hello, script!
Not too bad, either. There's much more code involved, and it admittedly took a bit of debugging to figure out that .String() always appends a newline and that there isn't a way to access arguments individually, but once that was figured out it was simple to get going.
script performance vs. shell
Since my script is a compiled Go executable, I hoped that the performance would be improved as well. To test, I performed the same test as with Risor and ran the script through a loop 1000 times to get numbers that are easier to compare between the two.
# Shell script results (from earlier)
$ time zsh -c 'for i in {1..1000}; do ( ./shell_script ./test.txt >> /dev/null ); done'
zsh -c   0.02s user 0.15s system 3% cpu 4.420 total
$ time zsh -c 'for i in {1..1000}; do ( ./read_file ./test.txt >> /dev/null ); done'
zsh -c 'for i in {1..1000}; do ( ./read_file ./test.txt >> /dev/null ); done'  0.02s user 0.15s system 4% cpu 3.800 total
script completed 1000 iterations in 3.8 seconds, beating the Zsh script by 16%! In my previous experience, Go executables have rarely outperformed pure shell scripts. Will it make a noticeable difference in execution time in real-world use? I doubt it. It's a difference of half a second across a thousand operations. But being that close means performance is not a reason to avoid using script.
Distributing script executables
script projects produce executable binaries, similar to the binaries produced by any Go project. So an executable produced with script will run on similar hosts without needing to install additional dependencies. This is familiar for Go projects and is one of the main benefits of Go. That does come with its own challenges, though. Go builds binaries for the current CPU architecture unless given a target architecture to build for. So, when distributing a script, the build pipeline will need to build and push a version for each target architecture. The other option is to generate the binary on the host that will run it, but doing that loses one of the major benefits of Go, because it requires the Go toolset to be installed and configured everywhere the script will run. Building a project for multiple architectures is a familiar requirement for Go development, so it's not inherently a negative.
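As a sketch of what that multi-architecture build step might look like, GOOS and GOARCH are the standard Go toolchain variables for cross-compiling; the output paths here are only illustrative.
# Cross-compile the read_file example for 64-bit Linux and Apple Silicon
$ GOOS=linux GOARCH=amd64 go build -o dist/read_file_linux_amd64 ./cmd/read_file
$ GOOS=darwin GOARCH=arm64 go build -o dist/read_file_darwin_arm64 ./cmd/read_file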
script interpreter
I like that the scripts can be compiled, but I wished there were a way to execute a script without needing to compile it. Being able to view the source and execute it, without other dependencies, is something that shell scripts do well, and it can speed up debugging broken pipelines. The script documentation provides an example script for running script projects using just the provided code. That script takes the provided source, generates a temporary Go project, and compiles and executes it. I could see some issues with that right away, but I created a goscript.sh within the same project to test it. I'll save pasting the full goscript.sh here, but it can be found in the original post linked by the documentation. I created a .goscript file using the code from main.go.
$ <<EOT >> read_file.goscript
#!$(pwd)/goscript.sh
my_file := script.Args().First(1)
filename, err := my_file.String()
if err != nil {
     panic(err)
}
script.File(strings.TrimSpace(filename)).Stdout()
EOT
$ chmod u+x ./read_file.goscript
And then executed the file.
$ ./read_file.goscript
# command-line-arguments
./script.go:15:13: undefined: strings

So, immediately, there's a problem. Without altering goscript.sh, the interpreter script doesn't include any package dependencies, meaning it's limited to the functionality exposed by the github.com/bitfield/script package or built into Go. It's an interesting project, but the limitations are too imposing: the full Go toolset needs to be installed, and the script can only use the script package and API.
Comparing the tools
In the beginning, I laid out 4 criteria I was going to use to assess these projects:
  1. Getting started
  2. Performance
  3. Distribution
  4. Maintenance

Risor vs. script
First, I'll compare the two projects to each other, and then compare the "winner" to just using Shell languages.
1. Getting started
Under the "Getting Started" criterion, I give it to Risor. Risor provided me with an easier starting process. No project needed to be created, and installing the tool was a single command (thanks to Homebrew). script was a bit more involved, requiring everything needed of a Go project, including Go source, regardless of the platform it was developed on. Risor 1, script 0.
2. Performance
"Performance"-wise, the clear winner is script. script was over 400% faster than Risor, and was able to perform simple operations more efficiently than the same test in Risor. Risor 1, script 1.
3. Distribution
I give the "Distributing" category again to `script`. Risor is more flexible, allowing scripts to be interpreted and run on any platform that the Risor executable is installed on. But, those Risor scripts will always require installing external dependencies to run. Compiling for multiple architectures may be more work, but `script` projects can be installed and run as a single file, so sharing them amongst platforms doesn't require vetting additional dependencies to install. Risor 1, script 2.
4. Maintenance
For long-term maintenance, I give the point to script. Both projects require some knowledge of the API, Risor using a custom DSL and script using Go structs, interfaces, and functions. But, like introducing a new language to an environment, adopting a DSL should always be a conscious, deliberate choice. It takes time for a developer to learn an API and a project, and adding a DSL throws away the benefit of a language the team already knows. script can also be deployed to hosts without outside dependencies, so there are no additional packages to keep updated. Risor 1, script 3.
Better than shell?
After weighing all that, the question is, "Is script better than writing shell scripts?" In my non-scientific, completely arbitrary position: not particularly. I can see specific use cases where I will use script in the future. In places where I might otherwise put a shell script to perform structured operations, like querying known sources and generating JSON for pipeline inputs, I may look at script as an alternative. But that assumes I know ahead of time that the team I'm working with can take it over. If the maintainers don't know Go, then I might as well hand them a jigsaw puzzle. Shell languages are so ubiquitous that most developers likely have enough knowledge to troubleshoot issues without needing to learn an entirely different language and toolset, and what's provided by using script needs to be compelling enough to give up that simplicity. The performance of script was surprising, though, and I am excited to use script for personal projects. I'd held off from writing those personal projects in Go because the startup times were painful and it didn't provide enough of a benefit to rewrite them. But I'm eager to see if I can gain the distribution improvements and only have to download a single file rather than a whole library. If you would like to learn more or have a conversation about Go-based scripting, contact us.

The Data Velocity Advantage: Boosting Transfer and Sync Efficiency

Improving efficiency and productivity remains a core business imperative. As management expert Peter Drucker noted, "Efficiency is doing better what is already being done." It is in the pursuit of that endeavor that companies continually strive to enhance efficiency throughout their operations, seeking improvements in areas such as labor, logistics, supply chains, and resource management. In today's digital age, there's another critical area where businesses can extract greater efficiencies: data transfer and synchronization. In the past, industries like healthcare, finance, and retail were the primary data-driven sectors. Today, you would be hard pressed to find any business that doesn't rely on real-time data.
Data Latency, the Silent Productivity Killer
Latency is the nemesis of network administrators, negatively impacting application performance and user experiences. However, there is another form of latency that can prove just as detrimental: data latency. Data latency hampers timely decision-making, resulting in missed opportunities and suboptimal resource allocation. Business success hinges on delivering data to decision-makers at the velocity required for meaningful action. Not long ago, companies relied on monthly statements to monitor their accounts, constraining many critical decisions to a monthly cycle. Today, credit card holders receive instant notifications of their transactions in real time, exemplifying the power of immediate data flow. Without real-time data, financial statements may be outdated or incorrect, leading to poor financial decision-making. The flow of real-time data accelerates decision-making: the faster your company can make decisions, the faster it can capitalize on emerging opportunities and maintain a competitive edge. Speed, however, is only part of the equation. Data quality is equally crucial. According to a 2021 Gartner study, poor data quality costs organizations an average of $12.9 million a year. The combination of delayed data flow and poor data quality creates significant business risks. These risks take the form of higher operational costs as employees waste time manually gathering and validating information. Customer satisfaction suffers when inaccurate or delayed data leads to poor service and inconsistent communication, eroding trust. Supply chains falter when outdated data drives inventory and logistics decisions, resulting in stockouts, overstock, and delivery failures. Security weakens as inefficient data handling leaves sensitive information vulnerable to breaches. IT teams struggle under the burden of manual workarounds and troubleshooting, driving up costs while reducing efficiency.
A Magic Fountain
What if there were a magical fountain that could significantly improve data transfer and synchronization overnight? One that took care of everything behind the scenes, transforming your data management processes with nearly no effort on your part, so that data flowed fluidly and unabated across your organization? The good news is that this hypothetical fountain exists in the form of the Keyva Seamless Data Pump. This data integration platform efficiently transfers large datasets between on-premises and cloud systems while securing and transforming data to match each end user's required format.
Set It and Forget It
Once configured, the Data Pump operates with minimal intervention. It can be set up and functional in under 45 minutes, because it comes with preconfigured settings and built-in maps to get you started quickly. By transforming data integration into a set of repeatable, automated processes, it eliminates the tedious manual data entry of traditional Extract, Transform, Load (ETL) processes. The Data Pump has an internal scheduler that lets you automate data synchronization at off-peak hours to reduce network load and ensure consistent updates without manual oversight, putting you in charge of when the automation occurs. Thanks to its emphasis on security, you can also stop worrying about compliance issues with your data transfers: the Data Pump provides end-to-end encryption, access controls, and audit logging to meet operational requirements.
Reaping the Rewards of Efficient Data Transfer
Still wondering how a more efficient data transfer management platform can benefit your business? Let's explore three industry examples to illustrate.
Keyva
At Keyva, we understand that efficient, secure data flow is the lifeblood of modern business. We also know that you don't have the resources to reinvent the wheel, which is why our solutions are designed to transform using your existing assets and best-of-breed hybrid solutions. The Keyva Seamless Data Pump exemplifies our commitment to transforming client operations through innovative automation. Our team can assess your environment, identify inefficiencies, and implement a solution that optimizes your data transfer processes. This creates tangible and measurable value for your organization and customers. Whether you need a hands-off solution or a tailored approach for your unique enterprise requirements, we ensure your data flows seamlessly and securely at the necessary velocity to drive business success.

The Future of Data Automation: Keyva Seamless Data Pump in Action

In the same way that humans cannot live without water, today's digital businesses cannot function without data. As individuals, we consume water throughout the day in various forms and locations. We get a glass of water from the refrigerator or the water cooler at work. We take a quick gulp from a water bottle while on the treadmill or from the school water fountain. All this water is sourced from rivers, wells, or lakes, then transported via pipelines or highways, forming a distribution system so reliable we seldom consider its complexity. Similarly, the flow of data in modern enterprises demands robust and dependable systems to keep it accessible, secure, and usable.
The Keyva Seamless Data Pump
Keyva Seamless Data Pump is a data integration platform designed to transfer large data sets between multiple systems, both on-premises and cloud-based. It secures the data and transforms it into the required format for each end user. The Data Pump also ensures that you are only served the amount of data you need, whether it be a single serving or a weekly disaster recovery backup. This enhances both the relevance of the data and its security. This unique blend of prebuilt functionality and flexible customization sets Keyva Seamless Data Pump apart, empowering businesses to manage their data as effortlessly as turning on a tap.
Use Cases of Keyva Seamless Data Pump
Need to transform data from multiple sources in multiple datacenters into a centralized CMDB? Seamless Data Pump delivers. Here are two example use cases. A large global bank needed to make sure their CMDB was kept current and accurate. With multiple sources of data gathering and data inputs, limited capability in the discovery engine, and an engineering team that did not have expertise in every tool's API schema, they needed help getting data transformed and entered into the CMDB quickly and in an automated way. Any delays could exacerbate the problem of data drift and data integrity. The Data Pump helped consolidate data from multiple systems, used the power of CMDB reconciliation to assign appropriate weights to specific datasets, and helped control all of this through a standardized interface, which also reduced the need for staff training. Now consider an international manufacturing and distribution organization that had three geographically dispersed data centers: two for operational resiliency and one for disaster recovery. In this setup, enormous volumes of data were continuously transferred across dedicated links between these facilities. Just as water transportation incurs significant costs, moving large amounts of data can be expensive. They wanted to optimize data transfer costs without changing or losing any of the current endpoint functionality. The Data Pump addressed this challenge with several key features.

Scalability and Security
In today's fast-paced environments, data surges can be unpredictable. For instance, consider a hospital overwhelmed with patients following a natural disaster or construction accident. Such a surge generates a significant influx of data that must be processed quickly, but with Keyva Seamless Data Pump, there's no need to expand your data teams to manage the load. This highly scalable solution can instantly adjust to ensure that the relevant data is sent to target data sources. Security is equally critical. All data is encrypted during transit to thwart any unauthorized access. The Data Pump also supports the use of service accounts, so organizations can control the permissions model for the type and amount of data that gets processed. It uses secure connection protocols for the respective APIs of the source and target products, so that data is securely translated and loaded. The ability of the Data Pump to adapt to sudden data surges while maintaining stringent security protocols makes it an exceptional choice for organizations dealing with high-volume data transfers in dynamic environments.
Conclusion
Just as people rely on consistent, reliable drinking water from national distribution networks, businesses and stakeholders should expect the same dependability from their data infrastructure. While many solutions can move data, the Keyva Seamless Data Pump stands out for its consistency, reliability, and scalability. The Data Pump exemplifies Keyva's commitment to providing innovative tools that transform our clients' IT environments and businesses. Our team can assess your environment, understand your needs, and demonstrate how Keyva Seamless Data Pump can add value. We offer implementation and customization services to ensure the Data Pump works optimally for you.

Case Study: Automated Mainframe Storage Management

Read about a client who faced operational inefficiencies in managing its mainframe storage system. Download now

Intelligent Search: How Smarter Searches Result in Smarter Decisions

Think about how much time we spend searching for stuff. Whether it be trying to locate something in the junk drawer at home, searching for misplaced car keys, googling an unfamiliar topic, or querying an important document at work, we spend too much time every day trying to find the things we need. While wasted time searching for items in our personal lives leads to frustration, businesses experience more serious consequences in the form of diminished productivity, increased operational costs, and compromised decision quality when critical information remains elusive.
Navigating the Modern Data Maze
Today's workplace presents a fundamental shift in data storage. Unlike the past, when employees could find everything on a local NAS server, information now resides across multiple environments, including local storage, cloud platforms, SharePoint, CRM systems, Microsoft 365 applications, SQL databases, and various third-party SaaS solutions. This fragmentation has eliminated the convenience of a "one-stop shop," often requiring employees to navigate through scattered repositories to locate that one critical piece of information they need. We often hear the phrase "garbage in, garbage out" when it comes to data analytics. Good decision-making depends on clean data that is unbiased and timely. Additionally, ineffective search capabilities can lead to incomplete information, resulting in suboptimal decisions and missed opportunities. In essence, the power of your data is directly proportional to your ability to access and utilize it efficiently.
Empowering Every Employee to Use Intelligent Search
This is why organizations need to implement intelligent search systems into their enterprise. Intelligent search harnesses advanced technologies such as natural language processing, machine learning, and semantic understanding to interpret user intent and comprehend context. It enhances productivity by minimizing the time employees spend hunting for information. Every organization has power users who excel at searching for information, whether it's mastering the art of Google searches or wielding complex SQL commands. However, businesses shouldn’t depend on a select few with specialized skills to access critical information. Intelligent Search democratizes this capability, making effective information retrieval accessible to all employees, regardless of their technical expertise. This technology allows users to conduct simple searches using natural language, eliminating the need for specialized query syntax or deep technical knowledge. Once a plain language query is initiated, the system then:
  1. Interprets the user's intent
  2. Identifies relevant data sources
  3. Constructs and executes appropriate API queries
  4. Aggregates and presents results in a user-friendly format
Thanks to intelligent search, you don’t need to know the specific technical APIs to conduct the query, and you don’t have to know where to go. This makes for quick access to relevant information for everyone, improving productivity by reducing time spent on complex searches.
From Fragmented to Seamless
A foundational feature of modern network system management and security solutions is the "single pane of glass" approach, which gives administrators comprehensive visibility across all infrastructure areas from a single interface. Intelligent search extends this concept to end users, offering a powerful, centralized query capability that spans the entire network ecosystem. Even when users know where to look, they find themselves navigating different interfaces, repeatedly logging in, and adapting to various search mechanisms across today's hybrid enterprises. Intelligent search, on the other hand, integrates seamlessly with backend connections to various systems, eliminating the need for multiple logins and providing a consistent user experience regardless of the underlying data source. But streamlining the query process does not short-cut the need for security. Smart search utilizes role-based access to filter search results based on user roles, departments, or other attributes to ensure that users only see the information relevant to their position.
Intelligent Search Support with Keyva
At Keyva, we have worked on how to democratize data queries for years. A few years ago, we did it through middleware that leveraged APIs for all the endpoints we needed to integrate with, and the translation intelligence was built there. We utilized a tagging strategy for documents with weighted relevance to ensure the most appropriate data was returned to users. Today, the landscape has evolved dramatically with the advent of AI technologies. These advancements have significantly reduced the complexity and setup requirements that were once necessary for effective data integration and retrieval. The role once played by our custom middleware has been superseded by advanced AI algorithms that can adapt to individual user needs and behavior, and continuously learn and improve based on user interactions. AI now serves as the cornerstone for connecting disparate data sources and indexing them into a unified system for seamless discovery across the enterprise. We are indeed living in exciting times. At Keyva, we're at the forefront of this data revolution, and our position allows us to guide our clients through this transformative era in data management and accessibility. Find out how a smarter search can garner smarter decisions for your business.

Managing Secrets with Red Hat Ansible

"Config" is factor 3 of The Twelve-Factor App. Embedded in that config is the data an application uses to understand and work with the environment it is deployed in. Config may contain hostnames, URLs, tags, contacts, and, importantly, secrets: passwords, API keys, certificates, client tokens, and cryptographic keys used by the application to secure or access resources and data. Configuration is kept separate from an app to make it easy to configure per environment, but also to rotate secrets to ensure compliance or protect data in case of exposure. Deploying applications with Red Hat Ansible provides several ways to easily and securely inject secrets into the configuration, so that deployments are unique to the environment they're running in.
Managing secrets with Ansible Vaults
Ansible vault is a tool embedded within the Ansible package that encrypts and decrypts variables and files for use within the Ansible ecosystem.
Introduction to Ansible vaults
The key benefit of Ansible vaults is that they are the Ansible-native way to manage secrets within Red Hat Ansible infrastructure, and they integrate neatly with the rest of the Ansible Automation Platform, playbooks, and templates. Being the Ansible-native way of managing secrets, Ansible vault is probably already installed on the hosts and systems used to develop roles and deploy playbooks. Vault data can be included in playbooks with minor changes to deployment scripts on the CLI or to job templates in Ansible Automation Platform.
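For example, when running from the CLI, the only change to an existing deployment command is telling ansible-playbook where the vault password comes from. This sketch assumes a playbook named site.yml and a password file named .vault_pass, both illustrative names:
# Prompt interactively for the vault password
$ ansible-playbook site.yml --ask-vault-pass
# Or read the password from a protected file kept out of version control
$ ansible-playbook site.yml --vault-password-file .vault_pass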
Using Vault IDs
Vault IDs associate a password source with an encrypted object. To Ansible, they provide the root key that is used to encrypt the data, and the ID recorded in the vault header tells Ansible which password source to use when decrypting it later.
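As a quick sketch, the label@source pair below (dev@prompt) is an arbitrary example; the same ID used at encryption time is supplied again at run time so Ansible knows which password source to try:
# Encrypt a vars file under the "dev" vault ID, prompting for its password
$ ansible-vault encrypt --vault-id dev@prompt vars.yml
# Provide the matching ID and password source when running the play
$ ansible-playbook playbook.yml --vault-id dev@prompt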
Encrypting secrets with Ansible vaults
Encrypting and decrypting data with Ansible vault likely doesn't require installing any additional packages.
1. Create and activate a new virtual environment
$ python3 -m venv env
$ source env/bin/activate
2. Install Ansible
$ pip install ansible
Collecting ansible
3. Create file vars.yml with variables
$ <<EOT >> vars.yml
api_key: My API Key
db_password: MyD8Pa55word
EOT
4. Encrypt with Ansible vault (enter a password when prompted)
$ ansible-vault encrypt vars.yml
New Vault password:
Confirm New Vault password:
Encryption successful
To decrypt and edit the contents, run ansible-vault decrypt
$ ansible-vault decrypt vars.yml
Vault password:
Decryption successful
Working with vault data in Ansible
More likely, though, is that you'll be using those variables within an Ansible playbook execution.
1. The vars.yml needs to be re-encrypted:
$ ansible-vault encrypt vars.yml
New Vault password:
Confirm New Vault password:
Encryption successful
2. Update the playbook to use the newly created vars file
# playbook.yml
---
- name: Use API key
  hosts: localhost
  vars_files:
    - vars.yml
  vars:
    api_key: "{{ undef(hint='Specify your API key') }}"
    db_password: "{{ undef(hint='Provide a DB password') }}"
  tasks:
    - ansible.builtin.debug:
        msg: "my super secret API key: {{ api_key }}"
3. Specify --ask-vault-pass to prompt for the vault password when the playbook is executed.
$ ansible-playbook playbook.yml --ask-vault-pass
Vault password:
PLAY [Use API Key] *******************************************************************
TASK [Gathering Facts] *******************************************************************
ok: [localhost]
TASK [ansible.builtin.debug] *******************************************************************
ok: [localhost] => {
"msg": "my super secret API key: My API Key"
}
When Ansible encounters the encrypted variables, it will see the header defining them as a vault and then automatically use the provided IDs to decrypt the data. Credentials can be added within Ansible Automation Platform like any other credentials, and then included in the job template to be made available within the play.
Working with external secrets
Not all secrets will be stored within Ansible Automation Platform. Organizations may have security or audit policies that require secrets to be stored in a central platform or service. For that, there are two commonly used methods for accessing secrets: lookup plugins and modules.
Accessing external secrets with Ansible Lookup plugins
Lookup plugins add additional functionality to Jinja2, the templating framework utilized by Ansible, to return data using the configured provider. Many plugins are included in the default Ansible distribution, and authors can create and include their own plugins within collections, roles, and playbooks. A list of available plugins can be viewed by calling ansible-doc -t lookup -l in the CLI. With lookup plugins, playbooks can template in dynamic data directly within the configuration, eliminating a lot of additional config and code that may otherwise be needed to provide the values to a play. For example, the community.dns.lookup_as_dict plugin will query DNS and return a dictionary of DNS entries for the provided domain.
$ ansible localhost \
  -m 'ansible.builtin.debug' \
  -a "msg={{ lookup('community.dns.lookup_as_dict', 'example.org') }}"
localhost | SUCCESS => {
    "msg": [
        {
            "address": "96.7.128.186"
        },
        {
            "address": "23.215.0.132"
        },
        {
            "address": "96.7.128.192"
        },
        {
            "address": "23.215.0.133"
        }
    ]
}
The lookup function can just as easily be used with external secret managers, like Hashicorp Vault, AWS Secrets Manager, and Azure Key Vault. The lookup function is configured similarly to other Jinja functions and can be used throughout playbooks or templates to pull in secrets. The following example playbook and template file utilize the lookup plugin to grab secrets from Hashicorp Vault and inject them into the task or template.
# playbook.yml
---
- name: Query KV from Hashi Vault using Lookup plugin
  hosts: localhost
  vars:
    # Alternatively, export the VAULT_ADDR env to the Ansible runtime
    vault_address: https://my-vault-url.dev:8201
  tasks:
    - name: Using lookup within a playbook
      ansible.builtin.debug:
        msg: "{{ lookup('community.hashi_vault.vault_kv2_get', 'my_secret', url=vault_address) }}"
    - name: Using lookup with a template
      ansible.builtin.template:
        src: service_config.yml.j2
        dest: /etc/myservice/config.yml
        owner: user
        group: group
        mode: '0644'

{# service_config.yml.j2 #}
---
connection:
  hostname: my_url
  username: my_user
  password: {{ lookup('community.hashi_vault.vault_kv2_get', 'database/password', url=vault_address) }}
Using a lookup plugin reduces the potential for exposure of a secret by only accessing it when required, and writing it directly to the task or template.
Using Modules to manage secrets
Lookup plugins are probably the easiest way of reading secrets, but if a playbook needs to manage the full lifecycle of a secret, then it may be better to use Ansible modules. Modules make up the tasks that are performed by Ansible during a run. Here is an example of writing a secret to Hashicorp Vault using a module.
# playbook.yml
---
- name: Create secret in Hashi Vault using Module
  hosts: localhost
  module_defaults:
    group/community.hashi_vault.vault:
      url: https://my-vault-url.dev:8201
  tasks:
    - name: Write secret to Vault
      community.hashi_vault.vault_kv2_write:
        path: application/secret_value
        data:
          value: mysecretvalue

Protecting secrets in Ansible
There are a few things to be aware of, though, when writing playbooks that utilize Ansible vault and/or external secrets.
Keeping secrets at rest
Ansible vault uses AES-256 to encrypt vault data at rest, and encrypted HMACs to ensure the integrity of that data. To encrypt, Vault uses the provided password and a unique salt to generate data encryption keys for the data and the HMAC each time it performs encryption, and it uses that same password and salt to verify and decrypt the data when reading it back. The security of data stored by Ansible vault is protected by controlling access to the password(s) used to encrypt, rather than needing to control access to the data itself. Even though data is encrypted with AES-256, it's still important to keep secrets, even encrypted secrets, safe from unauthorized access by storing them within an artifact repository, object store, or secured filesystem.
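Because that protection hinges on the vault password, rotating the password periodically is cheap insurance. A minimal sketch using the built-in rekey subcommand, run against the vars.yml from earlier:
# Re-encrypt an existing vault under a new password; ansible-vault prompts
# for the current password and then for the new one
$ ansible-vault rekey vars.yml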
Protecting secrets during Ansible Plays
When Ansible accesses secrets during a play, the content of a secret, whether from Ansible vault or from an external provider, can potentially be output to logs. Printing to logs within Ansible Automation Platform isn't immediately an issue, but many organizations configure AAP to forward logs to a central repository for compliance and management. Again, this may be expected. But when that data is forwarded, it increases the risk that secrets can be seen or accessed by unauthorized individuals. Within Ansible Automation Platform, job template authors can change the configuration to limit logging verbosity, reducing the amount of information published to the output during a play. But that setting can still be overridden, and the risk of accidental exposure from unaware authors is still there. The best way to control secrets during Ansible plays is to set no_log: true for the task. With no_log enabled, Ansible will not print any information from the task, even with verbosity at level 4.
$ ansible-playbook playbook.yml --ask-vault-pass -vvvv
ansible-playbook [core 2.18.2]
...
Vault password:
...
PLAYBOOK: playbook.yml *******************************************************************
...
1 plays in playbook.yml
Trying secret <ansible.parsing.vault.PromptVaultSecret object at 0x1025ba510> for vault_id=default
...
PLAY [localhost] *******************************************************************
TASK [Gathering Facts] *******************************************************************
ok: [localhost]
TASK [ansible.builtin.debug] *******************************************************************
task path: .../playbook.yml:9
ok: [localhost] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
}
Even with connection debug enabled, in the truncated logs above, the task output is hidden. That secret can still be exposed in output, though, if it's utilized by other tasks. Any task that includes the secret within its config or output would need to enable no_log to prevent exposure. This can be frustrating, because hiding several tasks from the playbook makes it difficult to debug issues during a play. Ansible Automation Platform provides many tools to secure secrets and data within job templates and plays, and the available plugins and modules provide multiple methods of utilizing those secrets within plays. But authors need to be careful how those secrets are used within a play, and use the right features of Ansible to ensure secrets are not exposed or stored insecurely at rest. If you would like to learn more or have a conversation about how Ansible Automation Platform can provide value in your organization, contact us.

Case Study: Virtualization Automation & Infrastructure Management

Read about a client who faced significant challenges in automating their virtual machine (VM) image build and lifecycle management. Download now
