
Blog & Insights

Red Hat OpenShift: Day 1 Install Guide

By Anuj Tuli, Chief Technology Officer

Here are the steps to install Red Hat OpenShift Container Platform from scratch for your lab or dev environments. We will walk through setting up the OpenShift cluster with one master and one node, but you can add as many nodes as you'd like. Since we are not setting up the master nodes in an HA configuration, we recommend limiting this setup to your lab environments. This guide is valid for RHEL 7 and OSE 3.5. We will set up OpenShift Container Platform on VMware virtual machines; if you are using KVM or another hypervisor, the hardware configuration steps may differ slightly. First, let us look at the prerequisite steps that need to be addressed before we begin. These apply to both VMs (the Master VM and the Node VM):

Prepare and Install Packages (on Master and Nodes)

Since we set up RHEL with a minimal package selection, we need to enable all of the required RPM repositories. First, register with Subscription Manager using your Red Hat account credentials:
subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
To enable OpenShift rpms, you will need to find the associated Pool ID and attach it separately.
subscription-manager list --available --all
Find the pool ID associated with the Red Hat OpenShift Container Platform, and run:
subscription-manager attach --pool <Pool ID>
You will now be able to enable the associated repos.
subscription-manager repos --enable rhel-7-server-ose-3.5-rpms
subscription-manager repos --enable rhel-7-server-openstack-10-rpms
Optionally, if you want to set up the OpenShift cluster in an HA configuration:
subscription-manager repos --enable="rhel-ha-for-rhel-7-server-rpms"
Finish installing the other utilities:
yum repolist
yum -y update
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
yum install gcc python-virtualenv
yum install atomic-openshift-utils

Set up dnsmasq (on Master and Nodes)

When using OpenShift SaaS offerings, the service provider takes care of setting up DNS and routing. But since we are setting up the cluster from the ground up, we need to set up these components manually. We will be using dnsmasq for our lab.
yum -y install dnsmasq bind-utils
We will now modify the /etc/dnsmasq.conf configuration file. It is recommended that you back up the existing file before modifying it, in case you need to revert later. On the Master, add or modify the #address and #resolv-file sections as follows:
address=/<subdomain.domain.com>/<master IP>
resolv-file=/<path>/<custom-filename>
On each Node(s), add or modify the #address and the #resolv-file sections as follows:
address=/<subdomain.domain.com>/<nodeIP>
resolv-file=/<path>/<custom-filename>
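As a purely illustrative example, assume a lab domain of apps.lab.example.com, a Master at 192.168.1.10, a Node at 192.168.1.11, and an upstream resolver file at /etc/dnsmasq-upstream.conf (all placeholder values; substitute your own). The Master's entries would then look like:
address=/apps.lab.example.com/192.168.1.10
resolv-file=/etc/dnsmasq-upstream.conf
and the Node's entries would look like:
address=/apps.lab.example.com/192.168.1.11
resolv-file=/etc/dnsmasq-upstream.conf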
The /<path>/<custom-filename> file is where we will list our upstream nameserver IP address (in most cases, your subnet gateway). On the Master, create the file
vi <path>/<custom-filename>
And add the line
nameserver <IP>
On each Node(s), create the file
vi <path>/<custom-filename>
And add the line
nameserver <IP>
We will also need to update the /etc/resolv.conf file so that the existing nameserver entry points to the loopback address. On both the Master and the Node(s), open /etc/resolv.conf and change the nameserver entry to:
nameserver 127.0.0.1
Note that if you reboot your VMs, you may need to update the nameserver entry in this file again. On both the Master and the Node(s), we have already disabled the firewall service (as a prerequisite). We will now enable and start the dnsmasq service:
systemctl enable dnsmasq && systemctl start dnsmasq
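To confirm the service came up cleanly before testing name resolution, you can check its status:
systemctl status dnsmasq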
In order to make sure the dnsmasq service is working correctly, you can try to ping the <subdomain>.<domain> address you defined in the /etc/dnsmasq.conf file under the address section.
ping <subdomain>.<domain>
If you run this command on the Master, it should return the IP address of your Master server. You could also prepend another custom subdomain (any string), and it should return the same IP address. For example:
ping <my_sub>.<subdomain>.<domain>
should return the IP of the Master server as well.
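If you prefer a DNS-specific check, dig (installed earlier with bind-utils) can query dnsmasq directly. Using the hypothetical apps.lab.example.com domain from the earlier example (substitute your own <subdomain>.<domain>):
dig +short apps.lab.example.com @127.0.0.1
On the Master, this should return the Master's IP address.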

Configuring Docker (on Master and Nodes)

Our next step is to set up Docker on these machines.
yum -y install docker-1.12.6
We will be modifying the /etc/sysconfig/docker-storage-setup file. It is recommended that you back up the existing file before modifying it. But first, we need to find out what our additional volume is named. If you recall, we set up an additional 40 GB volume on each machine for Docker storage. The output of fdisk -l will give you the name of that additional disk; in my case it was /dev/sdb, so we will use sdb in the docker-storage-setup file. Open /etc/sysconfig/docker-storage-setup in your favorite editor, comment out all existing lines, and add the following entries:
DEVS=sdb
VG=docker-vg
Save and close the file. Next, we will disable cluster locking for LVM:
lvmconf --disable-cluster
And then run our Docker storage setup
docker-storage-setup
You can verify the setup using the command
lvs
This will show you the attributes and sizes associated with the various logical volumes. We can now enable and start the Docker service:
systemctl enable docker && systemctl start docker
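As an optional check (not part of the original walkthrough), you can confirm that Docker picked up the LVM-backed storage configuration by inspecting its storage driver settings:
docker info | grep -i storage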

OpenShift install (on Master)

We can now finally get started with the OpenShift install steps.
yum -y install atomic-openshift-docker-excluder atomic-openshift-excluder atomic-openshift-utils bridge-utils bind-utils git iptables-services net-tools wget
Once we have all the packages ready to go, we run
atomic-openshift-installer install
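As a side note (this is not part of the original walkthrough, and the exact flags may vary by version, so check atomic-openshift-installer --help before relying on it): the OCP 3.x quick installer typically records your interactive answers in a config file such as ~/.config/openshift/installer.cfg.yml, which you can reuse for an unattended rerun along these lines:
atomic-openshift-installer -u -c ~/.config/openshift/installer.cfg.yml install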
The setup asks a number of questions. After selecting the user you'd like to use for SSH access, you will be asked to select a variant for the install. We will select option [1] for OpenShift Container Platform, which is also the default. You will then be asked to enter the hostname or IP of your Master node, and to choose whether the host will be RPM based or container based. The installer will provide a brief summary of the information entered and prompt for additional hosts. We will select y, and this time enter the hostname or IP of our Node server; you can configure additional Node servers in this section as well. For the 'New Default Subdomain', enter the <subdomain>.<domain> value as you defined it in the address section of /etc/dnsmasq.conf; this is used later for external routing. If you have any http or https proxies, you can configure them on the next screen. The installer then shows a summary of all the information captured and what the configuration will look like. Once you confirm the configuration, the installer kicks off the setup. It can take a while for the install to complete. Once the installation has completed successfully, you can verify the running services using the following command:
systemctl status | grep openshift
The output of this command will list the services running on both the master and the node(s). If you run the same command on a node, it will only show the services running on that node. You can also run a few sample oc commands on the Master to make sure everything looks good:
oc get pods
oc get projects
oc get nodes
That should do it! You have now set up a single-node OpenShift cluster in your lab environment. The process for creating users depends on which identity provider you would like to set up with OpenShift. You can access the OpenShift console via https://<Master_IP_or_FQDN>:8443/console. If you have any questions about the steps documented here, would like more information on the installation procedure, or have any feedback or requests, please let us know at [email protected].

Closed-Loop Automation: A Primer

By Anuj Tuli, Chief Technology Officer

As the industry moves towards self-healing containers, agile applications, and seamless infrastructure, there is a pressing need to set up auto-remediation of incidents and configuration drift. Infrastructure and operations teams have to depend heavily on automated tools, systems, and processes to manage the ever-expanding scope of the IT landscape. The Closed-Loop Incident Process is one such subset of closed-loop automation, and is defined as follows:
  1. You receive an alert for a service down in your operations center console
  2. An automation framework picks up the alert, and fetches information contained in the various fields (e.g. reason for alert, configuration item). If the configuration item that alerted does not exist in the CMDB, then it creates the corresponding CI in the CMDB (Configuration Management Database). If the CI already exists in the CMDB, it creates an Incident Ticket in your IT Service Management system.
  3. The framework auto-remediates the issue based on the custom runbooks you have defined for your organization. For example, if a disk is full, it deletes old logs and removes any temporary files (see the sketch after this list). The incident ticket is also updated with the results of the remediation effort.
  4. If the auto-remediation succeeds, the associated incident ticket is updated, and closed. If the auto-remediation fails for any reason, a notification is then sent out for human intervention.
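To make the remediation step above concrete, here is a minimal, purely illustrative sketch of a disk-cleanup runbook action; the paths, threshold, and file patterns are placeholders, and real runbooks would typically live in your automation tooling rather than a standalone script:
# hypothetical runbook step: clean up when /var usage exceeds 90%
USAGE=$(df --output=pcent /var | tail -1 | tr -dc '0-9')
if [ "$USAGE" -gt 90 ]; then
  find /var/log/myapp -name '*.log' -mtime +7 -delete   # delete week-old logs (placeholder path)
  rm -rf /tmp/myapp-*                                    # remove temporary files (placeholder pattern)
fi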
Many organizations have already adopted this automated remediation process and expanded it to include the top five alert types on which they spend the most time. In most cases, they are automating consistent, repeatable processes that an engineer would otherwise work on again and again, day in and day out. Automating these processes has saved these organizations a great deal of manual effort, reduced human error, and added tangible efficiency to their infrastructure and operations teams. If you need assistance in building an auto-remediation framework, Keyva can help. If you'd like to talk about how other organizations have garnered benefits from such automation, please feel free to reach us at [email protected].

Apples and oranges: comparing virtual machines (VMs) and containers

By Anuj Tuli, Chief Technology Officer

Organizations are always looking to improve efficiency within their infrastructure. One area where organizations look to make improvements centers around what to use to run their applications: virtual machines or containers? Comparing the two is a lot like comparing apples and oranges. The fact is, these are two very distinct, very different technologies, and the pros and cons of each vary widely depending on your needs.

High-level component architecture for containers and VMs

Here are some simple comparisons to consider as you explore which option is best for your environment:
VIRTUALIZATION
•  VMs virtualize hardware.
•  Containers virtualize applications and dependent libraries.
ENCAPSULATION
•  Virtual machines encapsulate the entire operating system library.
•  Containers only encapsulate the application layer (or database layer) and application libraries.
HOSTING
• VMs are hosted on physical machines, managed through a hypervisor layer, and consume the resources of the hardware on which they reside.
• Containers can be hosted by physical or virtual machines, managed through an orchestration service (like Kubernetes), and consume the resources of the host and the operating system on which they reside.
PORTABILITY
• VMs are (generally) not portable. Only if the same hypervisor layer hosts the VMs on-premises and in the cloud can they be dynamically ported to achieve a seamless hybrid architecture.
• Containers are natively portable, since the application runtimes are encapsulated within the container, and are a great fit for hybrid architectures.
SCALABILITY
• Scripting or automation needs to be set up to dynamically scale VMs in or out.
• With container orchestration modules, scale-in and scale-out features are natively available to containers.
STORAGE
VMs and containers are both able to attach storage. The difference is in the scope and lifecycle of the storage volume.
• Multiple containers on the same VM can have attached storage that is separated in scope from the other containers' storage.
• Container-attached storage goes away if the container shuts down.
NETWORK
VMs and Containers both can achieve network segmentation, either at a service level or at an individual unit level.
The question of whether to use VMs or containers is less a matter of comparing features and benefits, and more a question of the use case at hand. If you are an organization that runs fewer apps, you might look to VMs as your preferred framework, while an application-centric company may consider containers. When the goal is to make the most of your physical hardware infrastructure, VMs are tremendously useful. When the goal is to make sure your applications are scalable, resilient, secure, and offer zero downtime despite needing frequent updates, an implementation of containers might be worth considering. If you are still unsure which option is best, don't be afraid to involve a trusted partner like Keyva, who knows that every company's today and tomorrow look different and will meet you where you are.
Anuj Tuli, Chief Technology Officer: Anuj joined Keyva from Tech Data, where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the "rip-and-replace" of existing IT investments. Tuli has worked on cloud automation, DevOps, cloud readiness assessments, and migration projects for healthcare, banking, ISP, telecommunications, government, and other sectors. During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the cloud and automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC, and ITIL, and offers a hands-on perspective on these technologies. Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/

Security Considerations for Modern Applications

As application architectures evolve to accommodate current trends and technologies, the security model needs to evolve with them. Developer and operations teams need to think about securing every aspect of the application lifecycle. Organizations should consider the following security paradigms:

Physical security – security of the datacenter that houses the application infrastructure, and controlled access to the racks and switches.

Network security – access to the organization's networking via secure VPN tunnels, firewalls for access to specific ports, network micro-segmentation, traffic isolation, partitioned LANs, protection against DDoS attacks, intrusion detection and elimination, and security of the private gateway connecting on-premises and public cloud components.

Logical access security – role-based access control, a hierarchical Active Directory structure, and control of privileged access.

Data security – encryption capability, data integrity and backup, data classification, persistent protection, and controlled sharing.

Application security – authentication rules, authorization rules, session management, role-based access, limiting the exposure of functions (via API), the latest version of binaries, the latest patches applied to the underlying platform, limiting direct access to the database, exception handling, logging and auditing, and SSL certificates.

There is no panacea for protecting your application or the data within it – it is an ongoing process. All aspects of security require constant review and updates. But by following a combination of industry best practices and strategies to secure access to the application and the content within it, IT teams can rest easy that their business-critical applications will be available when their users want them.
Keyva can provide a holistic assessment of your current security state, along with recommendations toward a future steady state. Are you interested in learning more about how various organizations are securing their applications and data? If so, please reach out to one of our associates and we'd be glad to talk with you about our experiences.

Your DevOps Journey: Building consensus when dealing with multiple stakeholders

At Keyva we believe it is incredibly important to meet our clients where they are in their digital transformation journey. We get that there are a lot of moving parts involved in your initiatives, and adopting DevOps is no exception. Among other things, you have to take into account the organizational impacts, including those affecting process. Equally vital is having the foresight to anticipate the implications for shared tooling. There are multiple stakeholders and, with occasionally conflicting preferences and processes, it can be hard to strike an accord and forge a single path forward. Building this consensus is an increasingly common challenge, and at Keyva we have found it is best addressed through a series of formal assessments that go beyond a standard workshop. The purpose of these assessments is to drive clarity, build that singular consensus, and provide an actionable path forward. So, how do we do it?
  1. Get it Together. We recommend scheduling time with all of the stakeholders to go over the need-to-knows for the project. What is the ultimate goal? Who is involved? What does everyone need to make it happen?
  2. Go Deep. After the initial review, it's time to gain greater insight into the current state of the organization and its goals. We like to take this time to talk about the desired outcomes for the project and work through the challenges and opportunities in front of you, to forge a clear path ahead.
  3. Assess. Review. Plan. Next, we compile an overall assessment of your situation. Keyva's assessments also leverage our experience and knowledge of industry best practices, to ensure every concern is met. The assessment documents should capture what you have learned about where you are today, what is important to you, and what technology capabilities you're looking to develop, as well as uncover the next steps necessary to move ahead with your plans.
Once you have undergone this series of assessments, you can come up with an actionable roadmap and begin to plan a general timeline for when you would like to meet your organization's needs. Admittedly, this is the time when it pays to have a strong partner like Keyva, whose recommendations not only help keep your timeline reasonable, but include detailed justifications and specific recommendations to help you and your stakeholders drive that newfound consensus and clarity, so that you can move forward in the best possible way for your organization. When it comes down to it, undergoing a digital transformation is an involved process. It is a journey you can certainly choose to take on your own, but if you could use a partner to guide you through building that consensus, coming up with actionable plans for moving ahead, and helping light the path, reach out to Keyva. We're here to help!
Jesse Langhoff is the Sales Director for Keyva, an information and technology services company based in Minneapolis, MN. He specializes in cloud, automation, and DevOps, with a focus on new and emerging technologies. He is a relationship-based sales executive, with his primary focus on meeting the business needs of his clients. Like what you read? Follow Jesse on LinkedIn at https://www.linkedin.com/in/jlanghoff/
New IT Consulting Firm Helps Businesses Prioritize Innovation

Minneapolis, MN – Keyva, a new IT innovation consulting firm, opened its doors for business today. Keyva is the vision of technology industry veteran Jaime Gmach, who founded the company to help businesses free up more time to focus on innovation. After founding and leading the technology solutions company Evolving Solutions for the last 23 years, Gmach came to realize that while many of Evolving Solutions' clients would like to prioritize innovation, they are simply too busy with everyday tasks. "Most organizations realize that improvement and innovation are critical to their success," says Gmach. "However, business needs force them to focus on simply keeping the hamster wheel spinning. Our goal with Keyva is to help organizations increase time spent on innovation from an industry average of 10% up to their desired target of 40%." The Keyva value proposition is two-fold: 1) create efficiency via automation, and 2) lead transformation by dedicating "new" time to innovation and future-forward initiatives. Keyva consultants meet with the client to identify business opportunities, create vendor-agnostic automation solutions tailored to their specific needs, and help them transform their overall capabilities. "We know that every client's 'today' and 'tomorrow' looks different," says Gmach. "Our approach is to meet our clients where they are at today, and then guide them forward from there. We make today more efficient, so they can innovate for tomorrow." Our consultants help enterprises automate multi-clouds, multi-vendors, processes, applications, and infrastructure within their environment. From determining issues, to developing a strategy, to executing the automation, we thoroughly walk our clients through each step. Current Evolving Solutions clients will have the opportunity to work with both companies. Evolving Solutions and Keyva will be affiliated businesses, and Gmach will lead both organizations as CEO. Both organizations possess experienced leadership teams, and the sales teams will be able to engage SMEs from both companies to deliver a much broader set of solutions for their clients.

About Keyva

Keyva is a consulting firm focused on delivering innovative technology solutions. Keyva simplifies IT to free up time and allow businesses to focus on their core offering and on customer value. Keyva consultants help enterprises automate multi-clouds, multi-vendors, processes, applications, and infrastructure within their environment, while leading transformation initiatives to allow companies to take the next step on their business journey. Learn more at www.keyvatech.com.

About Evolving Solutions

Evolving Solutions has been focused on creating long-term client relationships for over 23 years.
Equipped with exceptional data center technology and trusted talent, we offer expertise primarily in the areas of IT infrastructure, cloud, and software solutions. We help companies thrive by providing modern and innovative IT solutions to best manage their data. We're able to do this by hiring the best technical talent and partnering with quality manufacturers over years of experience in the industry. From the beginning, we've made hiring the right people a priority. We are careful to hire competent and talented individuals, so the job is always done right. Learn more at www.evolvingsol.com.

An Executive Perspective on Automation by Jaime Gmach

As a technology industry veteran of 35 years and a business owner for the past 24, I have had the opportunity to experience many different trends in technology throughout my career. From the growth of the personal computer, to the early days of the internet, to the remarkable developments in compute and storage innovation, I have been fortunate to see a lot. The impact on business, and by extension our personal lives, from each of these technology innovations has been profound. These past innovations bring me to automation today. I will be the first to admit that the areas of automation, orchestration, and DevOps were only slightly more than curiosities to me three years ago. While I understood the business value by definition, I was not convinced that the business outcome properly aligned with that definition. In other words, it seemed that it was still somewhat high on the hype curve and the return on investment was not yet clearly defined. My best, relatively recent analogy would be from five or six years ago, when many companies were all-in with their investments in analytics appliances before they fully understood the business problems they were trying to solve. Contrast my view three years ago with my view today, where those same words, automation, orchestration, and DevOps, are part of my daily lexicon and part of nearly every conversation with client business leaders. Rarely does a client discussion occur without reference to automation and its importance to their business in the coming year. It should then not have been a surprise to me when all six executives in a recent panel discussion identified automation as their most critical business initiative for 2019. In fact, so much emphasis was placed on the importance of automation that it dwarfed all other business initiatives discussed by the panel members. As I tried my hardest to get into the heads of the various business leaders to better understand the common thread that connected all of the interest in automation, it hit me. It made me think of one of my favorite books, Good to Great, by Jim Collins. The panel executives were desperate for innovation and recognized that automation was a vehicle to create separation between themselves and their competitors.
I believe there are many companies that are well run from an IT perspective but are not recognizing the opportunity to capitalize on automation and orchestration as key differentiators for their business. As I look back on how my understanding of the value of IT automation has changed over the past few years, I can honestly say that the clarity of its significance could not be more apparent. The impact of a clear and committed strategy to streamline processes through automation, orchestration, and a well-thought-out DevOps approach is not only important, it is critical to the success of almost every business. So much so that companies who do not buy in to its importance will be left behind their competitors, and could end up much like the companies depicted by Jim Collins, many of whom no longer exist today.

Jaime Gmach, Keyva CEO & President

DevOps: An operating system for IT in today's organizations

DevOps is not a physical or virtual product, nor a cloud-based offering. It is a set of processes (a framework) implemented in organizations to establish seamless communication between development and operations teams, primarily used to roll out new instances of or upgrades to applications quickly and reliably. Business applications are key to any business today, and keeping these applications up to date is crucial to fulfilling security and compliance requirements. DevOps serves as an operating system for today's organizations, which rely heavily on their applications being current and always available, anytime and anywhere, for their customers. Reducing the time to market for new products and releases is an ongoing challenge for businesses. Defining and following processes that allow developers to pick the latest stable version of an application, add or update any code sections that need tweaking, perform preliminary functional and integration testing, and deploy the updated code, all in an automated fashion with a single click of a button, immensely increases the speed at which errors discovered in production can be fixed, and guarantees a consistent deployment every time. Let us take a look at some of the common use cases for implementing DevOps, as well as some challenges associated with it.

Infrastructure as Code

Customers may have hundreds or thousands of business applications in their environment. These range from business-critical tier-1 applications to backend supporting applications. There may also be a mix of custom applications and commercial off-the-shelf (COTS) applications.
Application owners and architects, responsible for making sure their applications are always up and running, may end up consuming cloud capabilities and resources directly if their own internal infrastructure teams are unable to keep up with the ever-increasing computing demands and agility expected by the application teams. To address this, infrastructure teams need to adopt the same agile processes that have traditionally been the cornerstone of application development. Setting up infrastructure as code involves: a) breaking infrastructure into smaller, divisible core components that can be reused; b) creating "golden images" of vetted infrastructure configurations and saving them in central repositories; and c) deploying the same configurations consistently and automatically, to reduce or eliminate configuration drift caused by human error. By coordinating infrastructure release cycles with application releases, and using similar version-control methodologies, infrastructure teams can provide solutions (on-site or in the cloud) with flexibility, speed, and cost effectiveness for the hosted applications. This agility is what allows infrastructure teams to deliver infrastructure as code.

Implementing a CI/CD pipeline

A continuous integration and continuous delivery (CI/CD) pipeline is an implementation of the automated software delivery process. A CI/CD pipeline connects to a code repository and has an automated process that picks up the latest code changes, builds and packages them, and deploys them to higher environments (a minimal sketch of these stages follows below). As the code progresses from the Development to the QA environment, role-based access control prevents development teams from making further changes to the packaged code, preserving its integrity during testing. As the code moves from QA to Production, access is limited to the application and infrastructure administrators. A well-implemented CI/CD pipeline reduces mean time to resolution (MTTR), as it allows for smaller code changes and streamlined testing.

A few challenges

When implementing DevOps best practices, most of the challenges are related to changes in process or changes in culture. Traditional infrastructure teams need to be trained on microservices, agile methodology, and the concept of treating and deploying infrastructure as code. Traditional application owners and developers need to work with their internal IT teams to outline specific requirements for application availability, scalability, compliance, and security, so the IT teams can prepare for those requirements, rather than practicing shadow IT by consuming available cloud offerings directly. The teams need to understand the new metrics and measurable KPIs for the new processes, and get trained on the new tools and technologies that make DevOps easier to implement. All of this takes time and effort, but once implemented, the IT organization can help differentiate the business from its competitors, since the business is now able to respond to application challenges and security threats much faster.
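Returning to the CI/CD pipeline described above, here is a minimal, purely illustrative sketch of the stages such a pipeline automates; the repository URL, scripts, image name, and target environment are all placeholders, and a real pipeline would be driven by your CI/CD tooling rather than run by hand:
# illustrative pipeline stages (all names are placeholders)
git clone https://git.example.com/acme/webapp.git && cd webapp      # pick up the latest code changes
./run-tests.sh                                                       # preliminary functional and integration tests
docker build -t registry.example.com/acme/webapp:1.2.3 .             # build and package the change
docker push registry.example.com/acme/webapp:1.2.3                   # publish the packaged image
./deploy.sh qa                                                        # promote to the next (QA) environment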
