This guide walks through setting up Red Hat Ansible Tower in a highly available configuration. In this example, we will set up four systems: one for the PostgreSQL database (towerdb) and three web nodes for Tower (tower1, tower2, tower3).
We will be using Ansible Tower v3.6 and PostgreSQL 10 on RHEL 7 systems running in VMware. Some of the commands differ on RHEL 8. This guide does not cover clustering of the PostgreSQL database itself; if you are setting up Tower in an HA capacity for a production environment, follow best practices for PostgreSQL clustering to avoid a single point of failure.
First, we will need to prep all the RHEL instances by enabling the Red Hat repos. All the commands below are to be run on all 4 systems: towerdb, tower1, tower2, tower3.
subscription-manager register
subscription-manager refresh
subscription-manager attach --auto
subscription-manager repos --list
subscription-manager repos --enable rhel-7-server-rh-common-beta-rpms
subscription-manager repos --enable rhel-7-server-rpms
subscription-manager repos --enable rhel-7-server-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-source-rpms
subscription-manager repos --enable rhel-7-server-rh-common-debug-rpms
subscription-manager repos --enable rhel-7-server-optional-source-rpms
subscription-manager repos --enable rhel-7-server-extras-rpms
sudo yum update
sudo yum install wget
sudo yum install python36
sudo pip3 install httpie
Also:
- a) Update the /etc/hosts file on all 4 hosts with entries for all systems
- b) Add and copy the SSH keys on all systems
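For example, assuming the four systems sit on a 192.168.1.0/24 network (the addresses below are placeholders; substitute your own), the /etc/hosts file on every node would gain entries like:

```
192.168.1.10   towerdb
192.168.1.11   tower1
192.168.1.12   tower2
192.168.1.13   tower3
```

SSH keys can be generated with ssh-keygen and then distributed from each host to the others with ssh-copy-id.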
On the Database system (towerdb), we will now set up PostgreSQL 10
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install postgresql10 postgresql10-server
Initialize the database:
sudo /usr/pgsql-10/bin/postgresql-10-setup initdb
sudo systemctl enable postgresql-10
sudo systemctl start postgresql-10
Verify you can log in to the database:
sudo su - postgres
psql
postgres=# \list
This command will show you the existing (default) database list.
Next, we will configure the database to make sure it can talk to all the Tower web nodes:
sudo vi /var/lib/pgsql/10/data/pg_hba.conf
Add/update the 'md5' line to allow connections from all hosts:
host all all 0.0.0.0/0 md5
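Note that 0.0.0.0/0 accepts connections from any client address. For a production setup, you may prefer to scope the rule to just the subnet (or individual addresses) of the Tower nodes; for example (the subnet below is a placeholder):

```
host    all    all    192.168.1.0/24    md5
```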
Update the postgresql.conf file
sudo vi /var/lib/pgsql/10/data/postgresql.conf
Add/update the entry to listen to all incoming requests:
listen_addresses = '*'
Restart the database services, to pick up the changes made:
sudo systemctl restart postgresql-10
sudo systemctl status postgresql-10
On each of the Tower web nodes (tower1, tower2, tower3), we will set up the Ansible Tower binaries:
mkdir ansible-tower
cd ansible-tower/
wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz
tar xvzf ansible-tower-setup-bundle-3.6.2-1.el7.tar.gz
cd ansible-tower-setup-bundle-3.6.2-1
python -c 'from hashlib import md5; print("md5" + md5(("password" + "awx").encode()).hexdigest())'
md5f58b4d5d85dbde46651335d78bb56b8c
Here, password is the database password whose hash will be used when authenticating against the database; PostgreSQL's md5 scheme hashes the password concatenated with the username (awx).
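The one-liner above can also be written as a small helper script. The "password" value below is a placeholder for your real database password; the "awx" suffix is the database username, per PostgreSQL's md5 credential scheme:

```python
from hashlib import md5

def pg_md5_hash(password, user="awx"):
    """PostgreSQL-style md5 credential: 'md5' + md5(password + username)."""
    return "md5" + md5((password + user).encode()).hexdigest()

# "password" is a placeholder -- substitute your real database password.
print(pg_md5_hash("password"))
```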
Back on the database server (towerdb), we will go ahead and set up the database schema pre-requisites for Tower install:
sudo su - postgres
psql
postgres=# CREATE USER awx; CREATE DATABASE awx OWNER awx; ALTER USER awx WITH password 'password';
On tower1, tower2, and tower3, update the inventory file and run the setup. Make sure the inventory file contents match on all three Tower web nodes.
You will need to update at least the following values and customize them for your environment:
admin_password='password'
pg_password='password'
rabbitmq_password='password'
Under the [tower] section, you will have to add entries for all your tower web hosts. The first entry will typically serve as the primary node when the cluster is run.
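As a sketch, the relevant inventory sections might look like the following (host names and passwords are placeholders; the [database] group is left empty because we set up PostgreSQL ourselves and point Tower at it via pg_host):

```
[tower]
tower1
tower2
tower3

[database]

[all:vars]
admin_password='password'

pg_host='towerdb'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'

rabbitmq_password='password'
```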
We will now run the setup script:
./setup.sh
You can either copy this inventory file to the other two Tower systems (tower2 and tower3) or replicate its contents so they match the file on tower1, and then run the setup script on those two systems as well.
Once the setup script has run successfully on all hosts, you can test your cluster instance: go to one of the Tower host URLs, launch a job template, and note which Tower node it runs on (the node designated primary at that time). You will also be able to view the same console details and job run logs regardless of which Tower web URL you use.
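Another quick health check is Tower's ping endpoint (/api/v2/ping/), which you can query with the httpie client installed earlier, e.g. http --verify=no https://tower1/api/v2/ping/. The sketch below parses an illustrative response; the field names and values are assumptions for this example, not captured output from a real cluster:

```python
import json

# Illustrative /api/v2/ping/ response -- the exact payload shape is an
# assumption for this sketch, not captured output.
sample_ping = json.loads("""
{
  "ha": true,
  "version": "3.6.2",
  "active_node": "tower1",
  "instances": [
    {"node": "tower1", "capacity": 57},
    {"node": "tower2", "capacity": 57},
    {"node": "tower3", "capacity": 57}
  ]
}
""")

def summarize_cluster(ping):
    """One-line health summary: HA flag, active node, and member nodes."""
    nodes = [i["node"] for i in ping.get("instances", [])]
    return "HA={} active={} nodes={}".format(
        ping.get("ha"), ping.get("active_node"), ",".join(nodes))

print(summarize_cluster(sample_ping))
```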
If you have any questions or comments on the tutorial content above, or run in to specific errors not covered here, please feel free to reach out to [email protected].
Anuj Tuli is the chief technology officer at Keyva. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli helps customers chart a prescriptive strategy for Application Containerization, CI/CD Pipeline Implementations, API abstraction, Application Modernization, and Cloud Automation integrations. He leads the development and management of Cloud Automation IP and related professional services. With an application developer background, he provides a hands-on perspective towards various technologies.