finid and Caitlin Postal
The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.
Ansible is an open-source software tool used to automate the provisioning, configuration management, and deployment of servers and applications. You can use Ansible to automate tasks on one or more servers or run a distributed application across multiple servers. For a multi-server setup, completing the Initial Server Setup for each server can be time-consuming, whereas using Ansible will speed up the process with automation playbooks.
Ansible is agentless, so you do not need to install any Ansible component on the servers you want to run Ansible against. Those servers are the Ansible hosts and must run Python 3 and OpenSSH, both of which are pre-installed on Ubuntu 22.04 and most Linux distributions. The Ansible control node is the machine that initiates the automation; it can run any Unix-like operating system, or Windows with the Windows Subsystem for Linux (WSL) installed.
In this tutorial, you’ll use Ansible to automate the initial server setup of multiple Ubuntu 22.04 servers. On all the servers, you’ll accomplish initial setup tasks such as updating installed packages, creating a non-root user with sudo privileges, copying your SSH key to that user, locking down SSH access, and enabling a firewall.
Because you’ll use Ansible to run a comprehensive playbook defining each task, those tasks will be completed using just one command and without you needing to log in to the servers individually. You can run an optional secondary playbook to automate server management after the initial server setup.
To complete this tutorial, you will need:
Ansible installed on a machine that will act as your control node, which can be your local machine or a remote Linux server. To install Ansible, follow Step 1 of How To Install and Configure Ansible on Ubuntu 22.04, and you can refer to the official Ansible installation guide as needed for other operating systems.
Ansible Vault uses vi as its default editor. If your control node is a Linux machine and you prefer using nano, use the section on Setting the Ansible Vault Editor in the How To Use Ansible Vault tutorial to change the text editor linked to the EDITOR environment shell variable. This tutorial will use nano as the editor for Ansible Vault.
Two or more Ubuntu 22.04 servers and the public IPv4 address of each server. No prior setup is required as you’ll use Ansible to automate setup in Step 6, but you must have SSH access to these servers from the Ansible control node mentioned above. If you are using DigitalOcean Droplets, you’ll find the IPv4 address in each server’s Public Network section of the Networking tab in your dashboard.
An SSH key pair on your Ansible control node, with the public key added to each server. You can use ssh-copy-id to connect the key pair to the hosts.
You’ll modify a directive in your control node’s SSH client configuration file in this step. After making this change, you’ll no longer be prompted to accept the SSH key fingerprint of remote machines, as they will be accepted automatically. Manually accepting the SSH key fingerprints for each remote machine can be tedious, so this modification solves a scaling issue when using Ansible to automate the initial setup of multiple servers.
While you can use Ansible’s known_hosts
module to accept the SSH key fingerprint for a single host automatically, this tutorial deals with multiple hosts, so it is more effective to modify the SSH client configuration file on the control node (typically, your local machine).
To begin, launch a terminal application on your control node and, using nano
or your favorite text editor, open the SSH client configuration file:
- sudo nano /etc/ssh/ssh_config
Find the line that contains the StrictHostKeyChecking
directive. Uncomment it and change the value so that it reads as follows:
...
StrictHostKeyChecking accept-new
...
Save and close the file. You do not need to reload or restart the SSH daemon because you only modified the SSH client configuration file.
Note: If you do not wish to change the value of the StrictHostKeyChecking directive from ask to accept-new permanently, you can revert it to the default after you verify the setup in Step 7. While changing the value means your system accepts SSH key fingerprints automatically, it will reject subsequent connections from the same hosts if the fingerprints change. This behavior means the accept-new change is not as much of a security risk as changing the directive’s value to no.
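If you would rather not touch the system-wide client configuration at all, an alternative is to scope the directive to just your Ansible hosts in your per-user ~/.ssh/config file. The following is a minimal sketch; the addresses are placeholders that you would replace with your servers’ public IPs:
Host 203.0.113.10 203.0.113.20 203.0.113.30
    StrictHostKeyChecking accept-new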
Now that you have updated the SSH directive, you’ll begin the Ansible configuration, which you’ll do in the next steps.
The Ansible hosts
file (also called the inventory file) contains information on Ansible hosts. This information may include group names, aliases, domain names, and IP addresses. The file is located by default in the /etc/ansible
directory. In this step, you’ll add the IP addresses of the Ansible hosts you spun up in the Prerequisites section so that you can run your Ansible playbook against them.
To begin, open the hosts
file using nano
or your favorite text editor:
- sudo nano /etc/ansible/hosts
After the introductory comments in the file, add the following lines:
...
host1 ansible_host=host1-public-ip-address
host2 ansible_host=host2-public-ip-address
host3 ansible_host=host3-public-ip-address
[initial]
host1
host2
host3
[ongoing]
host1
host2
host3
host1
, host2
, and host3
are aliases for each host upon which you want to automate the initial server setup. Using aliases makes it easier to reference the hosts elsewhere. ansible_host
is an Ansible connection variable and, in this case, points to the IP addresses of the target hosts.
initial
and ongoing
are sample group names for the Ansible hosts. Choose group names that will make it easy to know what the hosts are used for. Grouping hosts in this manner makes it possible to address them as a unit. Hosts can belong to more than one group. The hosts in this tutorial have been assigned to two different groups because they’ll be used in two different playbooks: initial
for the initial server setup in Step 6 and ongoing
for the later server management in Step 8.
hostN-public-ip-address
is the IP address for each Ansible host. Be sure to replace host1-public-ip-address
and the subsequent lines with the IP addresses for the servers that will be part of the automation.
When you’re finished modifying the file, save and close it.
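Optionally, you can confirm that the control node can reach every host in the inventory before moving on by running Ansible's ping module ad hoc against the initial group. This assumes the SSH key from the prerequisites already gives you root access to the hosts:
- ansible initial -m ping -u root
If each host responds with pong, the inventory entries and SSH connectivity are working.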
Defining the hosts in the inventory file helps you to specify which hosts will be set up with Ansible automation. In the next step, you’ll clone the repository with sample playbooks to automate multi-server setup.
In this step, you’ll clone a sample repository from GitHub containing the necessary files for this automation.
This repo contains three files for a sample multi-server automation: initial.yml
, ongoing.yml
, and vars/default.yml
. The initial.yml
file is the main playbook that contains the plays and tasks you’ll run against the Ansible hosts for initial setup. The ongoing.yml
file contains tasks you’ll run against the hosts for ongoing maintenance after initial server setup. The vars/default.yml
file contains variables that will be called in both playbooks in Step 6 and Step 8.
To clone the repo, type the following command:
- git clone https://github.com/do-community/ansible-ubuntu.git
Alternatively, if you’ve added your SSH key to your GitHub account, you can clone the repo using:
- git clone git@github.com:do-community/ansible-ubuntu.git
You will now have a folder named ansible-ubuntu
in your working directory. Change into it:
- cd ansible-ubuntu
That will be your working directory for the rest of this tutorial.
In this step, you acquired the sample files for automating multiple Ubuntu 22.04 servers using Ansible. To prepare the files with information specific to your hosts, you will next update the vars/default.yml
file to work with your system.
This playbook will reference some information for automation that may need to be updated with time. Placing that information in one variable file and calling the variables in the playbooks will be more efficient than hard-coding them within the playbooks, so you will modify variables in the vars/default.yml
file to match your preferences and setup needs in this step.
To begin, open the file with nano
or your favorite text editor:
- nano vars/default.yml
You will review the contents of the file, which include the following variables:
create_user: sammy
ssh_port: 5995
copy_local_key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
The value of the create_user
variable should be the name of the sudo
user that will be created on each host. In this case, it is sammy
, but you can name the user whatever you like.
The ssh_port
variable holds the SSH port you’ll use to connect to the Ansible hosts after setup. The default port for SSH is 22
, but changing it will significantly reduce the number of automated attacks hitting your servers. This change is optional but will boost the security posture of your hosts. You should choose a lesser-known port that is between 1024
and 65535
and which is also not in use by another application on the Ansible hosts. In this example, you are using port 5995
.
Note: If your control node is running a Linux distribution, pick a number higher than 1023
and grep
for it in /etc/services
. For example, run grep 5995 /etc/services
to check if 5995
is being used. If there’s no output, then the port does not exist in that file and you may assign it to the variable. If your control node is not a Linux distribution and you don’t know where to find its equivalent on your system, you can consult the Service Name and Transport Protocol Port Number Registry.
The copy_local_key
variable references your control node’s SSH public key file. If the name of that file is id_rsa.pub
, then you don’t need to make any changes in that line. Otherwise, change it to match your control node’s SSH public key file. You can find the file under your control node’s ~/.ssh
directory. When you run the main playbook in Step 6 and after a user with sudo
privileges is created, the Ansible controller will copy the public key file to the user’s home directory, which enables you to log in as that user via SSH after the initial server setup.
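For example, if your control node’s key pair is an Ed25519 key rather than RSA, the lookup might instead read as follows (adjust the filename to whatever public key actually exists in your ~/.ssh directory):
copy_local_key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_ed25519.pub') }}"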
When you’re finished modifying the file, save and close it.
Now that you’ve assigned values to the variables in vars/default.yml
, Ansible will be able to call those variables while executing the playbooks in Step 6 and Step 8. In the next step, you’ll use Ansible Vault to create and secure the password for the user that will be created on each host.
Ansible Vault is used to create and encrypt files and variables that can be referenced in playbooks. Using Ansible Vault ensures that sensitive information is not transmitted in plaintext while executing a playbook. In this step, you’ll create and encrypt a file containing variables whose values will be used to create a password for the sudo
user on each host. By using Ansible Vault in this manner, you ensure the password is not referenced in plaintext in the playbooks during and after the initial server setup.
Still in the ansible-ubuntu
directory, use the following command to create and open a vault file:
- ansible-vault create secret
When prompted, enter and confirm a password that will be used to encrypt the secret
file. This is the vault password. You’ll need the vault password while running the playbooks in Step 6 and Step 8, so do not forget it.
After entering and confirming the vault password, the secret
file will open in the text editor linked to the shell’s EDITOR
environment variable. Add these lines to the file, replacing the values for type_a_strong_password_here
and type_a_salt_here
:
password: type_a_strong_password_here
password_salt: type_a_salt_here
The value of the password
variable will be the actual password for the sudo
user you’ll create on each host. The password_salt
variable uses a salt for its value. A salt is any long, random value used to generate hashed passwords. You can use an alphabetical or alphanumeric string, but a numeric string alone may not work. Adding a salt when generating a hashed password makes it more difficult to guess the password or crack the hashing. Both variables will be called while executing the playbooks in Step 6 and Step 8.
Note: In testing, we found that a salt made up of only numeric characters led to problems running the playbook in Step 6 and Step 8. However, a salt made up of only alphabetical characters worked. An alphanumeric salt should also work. Keep that in mind when you specify a salt.
When you’re finished modifying the file, save and close it.
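If you need to review or change these values later, Ansible Vault can reopen the encrypted file for you; you’ll be prompted for the same vault password:
- ansible-vault view secret
- ansible-vault edit secret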
You have now created an encrypted password file with variables that will be used to create a password for the sudo
user on the hosts. In the next step, you’ll automate the initial setup of the servers you specified in Step 2 by running the main Ansible playbook.
In this step, you’ll use Ansible to automate the initial server setup of as many servers as you specified in your inventory file. You’ll begin by reviewing the tasks defined in the main playbook. Then, you will execute the playbook against the hosts.
An Ansible playbook is made up of one or more plays with one or more tasks associated with each play. The sample playbook you’ll run against your Ansible hosts contains two plays with a total of 14 tasks.
Before you run the playbook, you’ll review each task involved in its setup process. To begin, open the file with nano
or your favorite text editor:
- nano initial.yml
The first section of the file contains the following keywords that affect the behavior of the play:
- name: Initial server setup tasks
  hosts: initial
  remote_user: root
  vars_files:
    - vars/default.yml
    - secret
...
name
is a short description of the play, which will display in the terminal as the play runs. The hosts
keyword indicates which hosts are the play’s target. In this case, the pattern passed to the keyword is the group name of the hosts you specified in the /etc/ansible/hosts
file in Step 2. You use the remote_user
keyword to tell the Ansible controller the username to use to log in to the hosts (in this case, root
). The vars_files
keyword points to the files containing variables the play will reference when executing the tasks.
With this setup, the Ansible controller will attempt to log in to the hosts as the root
user via SSH port 22
. For each host it is able to log in to, it will report an ok
response. Otherwise, it will report that the server is unreachable. The play’s tasks will then be executed on each host the controller could log in to. If you were completing this setup manually, this automation replaces logging in to each host with ssh root@host-ip-address.
Following the keywords section is a list of tasks to be executed sequentially. As with the play, each task starts with a name
that provides a short description of what the task will accomplish.
The first task in the playbook updates the package database:
...
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes
...
This task will update the package database using the ansible.builtin.apt
module, which is why it is defined with update_cache: yes
. This task accomplishes the same thing as when you log in to an Ubuntu server and type sudo apt update
, often a prelude to updating all installed packages.
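If you expect to rerun the playbook frequently, one optional variation is to skip refreshing a cache that was updated recently by adding the apt module’s cache_valid_time parameter. This is a sketch of that tweak, not part of the sample repository:
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes
        # only refresh the cache if it is older than an hour (3600 seconds)
        cache_valid_time: 3600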
The second task in the playbook updates packages:
...
    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest
...
Like the first task, this task also calls the ansible.builtin.apt
module. Here, you make sure all installed packages are up to date using a wildcard to specify packages (name: "*"
) and state: latest
, which would be the equivalent of logging into your servers and running the sudo apt upgrade -y
command.
The third task in the playbook ensures the Network Time Protocol (NTP) Daemon is active:
...
    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd
...
This task calls the ansible.builtin.systemd
module to ensure that systemd-timesyncd
, the NTP daemon, is running (state: started
). You run a task like this when you want to ensure that your servers keep the same time, which can help run a distributed application on those servers.
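If you later want to confirm this manually on one of the hosts, timedatectl reports whether the systemd time-sync service is active:
- timedatectl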
The fourth task in the playbook verifies that there’s a sudo group:
...
    - name: Make sure we have a 'sudo' group
      ansible.builtin.group:
        name: sudo
        state: present
...
This task calls the ansible.builtin.group
module to check that a group named sudo
exists on the hosts (state: present
). Because your next task depends on the presence of a sudo
group on the hosts, this task checks that sudo
groups exist, so you can be sure that the next task does not fail.
The fifth task in the playbook creates your non-root user with sudo
privileges:
...
    - name: Create a user with sudo privileges
      ansible.builtin.user:
        name: "{{ create_user }}"
        state: present
        groups: sudo
        append: true
        create_home: true
        shell: /bin/bash
        password: "{{ password | password_hash('sha512', password_salt) }}"
        update_password: on_create
...
Here, you create a user on each host by calling the ansible.builtin.user
module and appending the sudo
group to the user’s groups. The user’s name is derived from the value of the create_user
variable, which you specified in the vars/default.yml
. This task also ensures that a home directory is created for the user and assigned with the proper shell.
Using the password
parameter and a combination of the password and salt you set in Step 5, a function that calls the SHA-512 cryptographic hash algorithm generates a hashed password for the user. Paired with the secret
vault file, the password is never passed to the controller in plaintext. With update_password
, you ensure that the hashed password is only set the first time the user is created. If you rerun the playbook, the password will not be regenerated.
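If you are curious what the filter produces, you can generate a sample hash locally on the control node with an ad-hoc debug call. The values example-password and examplesalt below are placeholders, not the values stored in your vault:
- ansible localhost -m ansible.builtin.debug -a "msg={{ 'example-password' | password_hash('sha512', 'examplesalt') }}"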
The sixth task in the playbook sets the key for your user:
...
    - name: Set authorized key for remote user
      ansible.posix.authorized_key:
        user: "{{ create_user }}"
        state: present
        key: "{{ copy_local_key }}"
...
With this task, you copy your public SSH key to the hosts by calling on the ansible.posix.authorized_key
module. The value of user
is the name of the user created on the hosts in the previous task, and key
points to the key to be copied. Both variables are defined in the vars/default.yml
file. This task has the same effect as running the ssh-copy-id
command manually.
The seventh task in the playbook disables remote login for the root
user:
...
    - name: Disable remote login for root
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^PermitRootLogin yes'
        line: 'PermitRootLogin no'
...
Next, you call on the ansible.builtin.lineinfile
module. This task searches for a line that starts with PermitRootLogin
in the /etc/ssh/sshd_config
file using a regular expression (regexp
) and then replaces it with the value of line
. This task ensures remote login using the root
account will fail after running the play in this playbook. Only remote login with the user account created in task 6 will succeed. By disabling remote root login, you ensure that only regular users may log in and that a privilege escalation method, usually sudo
, will be required to gain admin privileges.
The eighth task in the playbook changes the SSH port:
...
    - name: Change the SSH port
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#Port 22'
        line: 'Port "{{ ssh_port }}"'
...
Because SSH listens on the well-known port 22
, it tends to be subject to automated attacks targeting that port. By changing the port that SSH listens on, you reduce the number of automated attacks hitting the hosts. This task uses the same ansible.builtin.lineinfile
module to search for a line that starts with the regexp
in the SSH daemon’s configuration file and changes its value to that of the line
parameter. The new port number that SSH listens on will be the port number that you assigned to the ssh_port
variable in Step 4. After restarting the hosts at the end of this play, you will not be able to log in to the hosts via port 22
.
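Once the playbook has run and you can log in again (Step 7), you can optionally confirm that the daemon picked up both SSH changes by asking sshd to print its effective configuration on one of the hosts:
- sudo sshd -T | grep -E '^(port|permitrootlogin)'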
The ninth task in the playbook allows SSH traffic:
...
    - name: UFW - Allow SSH connections
      community.general.ufw:
        rule: allow
        port: "{{ ssh_port }}"
...
Here, you call on the community.general.ufw
module to allow SSH traffic through the firewall. Notice that the port number of SSH is not 22
, but the custom port number you specified in the vars/default.yml
file in Step 4. This task is the equivalent of manually running the ufw allow 5995/tcp
command.
The tenth task guards against brute-force attacks:
...
    - name: Brute-force attempt protection for SSH
      community.general.ufw:
        rule: limit
        port: "{{ ssh_port }}"
        proto: tcp
...
Calling the community.general.ufw
module again, this task uses the rate-limiting rule
to deny login access to an IP address that has failed six or more connection attempts to the SSH port within a 30-second timeframe. The proto
parameter points to the target protocol (in this case, TCP).
The eleventh task enables the firewall:
...
    - name: UFW - Deny other incoming traffic and enable UFW
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming
...
Still relying on the community.general.ufw
module in this task, you enable the firewall (state: enabled
) and set a default policy
that denies all incoming traffic.
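For reference, on a single host the manual equivalents of these last two firewall tasks would look roughly like the following, with 5995 standing in for whichever port you chose:
- sudo ufw limit 5995/tcp
- sudo ufw default deny incoming
- sudo ufw enable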
The twelfth task in this play cleans up package dependencies:
...
    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes
...
By calling the ansible.builtin.apt
module again, this task removes package dependencies that are no longer required on the server, which is the equivalent of running the sudo apt autoremove
command manually.
The thirteenth task in this playbook restarts the SSH daemon:
...
    - name: Restart the SSH daemon
      ansible.builtin.systemd:
        state: restarted
        name: ssh
This task, the last in the first play, calls on the ansible.builtin.systemd
module to restart the SSH daemon. This restart has to be done for the changes made in the daemon’s configuration file to take effect. This task has the same effect as restarting the daemon with sudo systemctl restart ssh
.
The initial connection to the hosts was via port 22
as root
, but earlier tasks have changed the port number and disabled remote root login, which is why you need to restart the SSH daemon at this stage of the play. The second play will use different connection credentials (a username instead of root
and the newly defined port number that is not 22
).
This play starts after the last task in play 1 has completed successfully. It is affected by the following keywords:
...
- name: Rebooting hosts after initial setup
  hosts: initial
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"
...
The pattern passed to the hosts
keyword is the initial
group name specified in the /etc/ansible/hosts
file in Step 2. Because you’ll no longer be able to log in to the hosts using the default SSH port 22
, the port
keyword points to the custom SSH port configured in Step 4.
In the first play, the Ansible controller logged into the hosts as the root
user. With remote root logins disabled by the first play, you now have to specify the user the Ansible controller should log in as. The remote_user
keyword directs the Ansible controller to log in to each host as the sudo
user created in task 5 of the first play.
The become
keyword specifies that privilege escalation be used for task execution on the defined hosts. This keyword instructs the Ansible controller to assume root privileges for executing tasks on the hosts, when necessary. In this case, the controller will assume root privileges using sudo
. The ansible_become_pass
keyword sets the privilege escalation password, which is the password that will be used to assume root privileges. In this case, it points to the variable with the password you configured using Ansible Vault in Step 5.
In addition to pointing to the vars/default.yml
file, the vars_files
keyword also points to the secret
file you configured in Step 5, which tells the Ansible controller where to find the password
variable.
After the keywords section is the lone task that will be executed in this play.
Note: Though this is the first task of the second play, it’s numbered as Task 14 because the Ansible Controller sees it not as Task 1 of Play 2 but as Task 14 of the playbook.
The final task of the playbook will reboot all the hosts:
...
    - name: Reboot all hosts
      ansible.builtin.reboot:
Rebooting the hosts after completing the tasks in the first play will mean that any updates to the kernel or a library will take effect before you start installing your application(s).
The full playbook file looks like this:
- name: Initial server setup tasks
  hosts: initial
  remote_user: root
  vars_files:
    - vars/default.yml
    - secret

  tasks:
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes

    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest

    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd

    - name: Make sure we have a 'sudo' group
      ansible.builtin.group:
        name: sudo
        state: present

    - name: Create a user with sudo privileges
      ansible.builtin.user:
        name: "{{ create_user }}"
        state: present
        groups: sudo
        append: true
        create_home: true
        shell: /bin/bash
        password: "{{ password | password_hash('sha512', password_salt) }}"
        update_password: on_create

    - name: Set authorized key for remote user
      ansible.posix.authorized_key:
        user: "{{ create_user }}"
        state: present
        key: "{{ copy_local_key }}"

    - name: Disable remote login for root
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^PermitRootLogin yes'
        line: 'PermitRootLogin no'

    - name: Change the SSH port
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#Port 22'
        line: 'Port "{{ ssh_port }}"'

    - name: UFW - Allow SSH connections
      community.general.ufw:
        rule: allow
        port: "{{ ssh_port }}"

    - name: Brute-force attempt protection for SSH
      community.general.ufw:
        rule: limit
        port: "{{ ssh_port }}"
        proto: tcp

    - name: UFW - Deny other incoming traffic and enable UFW
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes

    - name: Restart the SSH daemon
      ansible.builtin.systemd:
        state: restarted
        name: ssh

- name: Rebooting hosts after initial setup
  hosts: initial
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"

  tasks:
    - name: Reboot all hosts
      ansible.builtin.reboot:
When finished reviewing the file, save and close it.
Note: You can add new tasks to the playbook or modify existing ones. However, changing the YAML file may corrupt it because YAML is sensitive to spacing, so take care if you choose to edit any aspect of the file. For more on working with Ansible playbooks, follow our series on How To Write Ansible Playbooks.
Now you can run the playbook. First, check the syntax:
- ansible-playbook --syntax-check --ask-vault-pass initial.yml
You’ll be prompted for the vault password you created in Step 5. If there are no errors with the YAML syntax after successful authentication, the output will be:
Output
playbook: initial.yml
You may now run the file with the following command:
- ansible-playbook --ask-vault-pass initial.yml
You will again be prompted for the vault password. After successful authentication, the Ansible controller will log in to each host as the root
user and perform all the tasks in the playbook. Rather than running the ssh root@node-ip-address
command on each server individually, Ansible connects to all the nodes specified in /etc/ansible/hosts
and then executes the tasks in the playbook.
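If you want to try the playbook against only one of the hosts first, you can restrict the run with the --limit flag; host1 here is one of the inventory aliases you defined in Step 2:
- ansible-playbook --ask-vault-pass --limit host1 initial.yml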
For the sample hosts in this tutorial, it took Ansible about three minutes to complete the tasks across three hosts. When the tasks are complete, you’ll get output like the following:
Output
PLAY RECAP *****************************************************************************************************
host1 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Each task and play keyword section that evaluates successfully will count toward the number in the ok
column. With 14 tasks across two plays, all evaluated successfully, the count is 16
. Of the tasks that were evaluated, only 11
led to changes on the servers, represented by the changed
column.
The unreachable
count shows the number of hosts the Ansible controller could not log in to. None of the tasks failed, so the count for failed
is 0
.
A task is skipped
when the condition specified in the task is not met (usually with the when
parameter). No tasks are skipped
in this case, but it will be applicable in Step 8.
The last two columns (rescued
and ignored
) relate to error handling either specified for a play or task.
You have now successfully automated the initial server setup of multiple Ubuntu 22.04 servers using Ansible to execute one command that completes all the tasks specified by the playbook.
To check that everything has worked as expected, you’ll next log in to one of the hosts to verify setup.
To confirm the output of the play recap at the end of the previous step, you can log in to one of your hosts using the credentials configured earlier and verify the setup. These actions are optional and purely for learning purposes, since the Ansible play recap already reports an accurate summary of what was completed.
Start by logging in to one of the hosts using the following command:
- ssh -p 5995 sammy@host1-public-ip-address
You use the -p
option to point to the custom port number you configured for SSH in Step 4 (5995
), and sammy
is the user created in Step 6. If you’re able to log in to the host as that user via that port, you know that Ansible completed those tasks successfully.
Once logged in, check if you are able to update the package database:
- sudo apt update
If you’re prompted for a password and can authenticate with the password you configured for the user in Step 5, you can confirm that Ansible successfully completed the tasks for creating a user and setting the user’s password.
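You can also confirm from within the host that SSH is now listening on the custom port instead of 22:
- sudo ss -tlnp | grep sshd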
Now that you know the setup playbook worked as intended, you can run a second playbook for ongoing maintenance.
The initial server setup playbook that was executed in Step 6 will scale to as many servers as you wish, but it cannot manage hosts after that initial setup. While you can log in to each host individually to run commands, that process does not scale as you work on more servers concurrently. As part of Step 3, you also pulled an ongoing.yml
playbook that can be used for continued maintenance. In this step, you’ll run the ongoing.yml
playbook to automate ongoing maintenance of the hosts set up in this tutorial.
Before you run the playbook, you’ll review each task. To begin, open the file with nano
or your favorite text editor:
- nano ongoing.yml
Unlike the initial setup playbook, the maintenance playbook contains just one play and fewer tasks.
The following keywords in the first section of the file affect the behavior of the play:
- hosts: ongoing
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"
...
Other than the group passed to the hosts
keyword, these are the same keywords used in the second play of the setup playbook.
After the keywords is a list of tasks to be executed sequentially. As in the setup playbook, each task in the maintenance playbook starts with a name
that provides a short description of what the task will accomplish.
The first task updates the package database:
...
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes
...
This task will update the package database using the ansible.builtin.apt
module, which is why it is defined with update_cache: yes
. This task accomplishes the same thing as when you log in to an Ubuntu server and type sudo apt update
, which is often a prelude to installing a package or updating all installed packages.
The second task updates packages:
...
    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest
...
Like the first task, this task also calls the ansible.builtin.apt
module. Here, you ensure that all installed packages are up to date using a wildcard to specify packages (name: "*"
) and state: latest
, which would be the equivalent of logging in to your servers and running the sudo apt upgrade -y
command.
The third task in the playbook ensures the NTP Daemon is set up:
...
    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd
...
Active services on a server might fail for a variety of reasons, so you want to make sure that such services remain active. This task calls the ansible.builtin.systemd
module to ensure that systemd-timesyncd
, the NTP daemon, remains active (state: started
).
The fourth task checks the status of the UFW firewall:
...
    - name: UFW - Is it running?
      ansible.builtin.command: ufw status
      register: ufw_status
...
You can check the status of the UFW firewall on Ubuntu with the sudo ufw status
command. The first line of the output will either read Status: active
or Status: inactive
. This task uses the ansible.builtin.command
module to run the same command, then saves (register
) the output to the ufw_status
variable. The value of that variable will be queried in the next task.
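If you ever want to inspect exactly what that task captures, you could temporarily add a debug task right after it. This is a small sketch and is not part of the sample repository:
    - name: Show captured UFW status
      ansible.builtin.debug:
        var: ufw_status.stdout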
The fifth task will re-enable the UFW firewall if it has been stopped:
...
    - name: UFW - Enable UFW and deny incoming traffic
      community.general.ufw:
        state: enabled
      when: "'inactive' in ufw_status.stdout"
...
This task calls the community.general.ufw
module to enable the firewall only when the term inactive
appears in the output of the ufw_status
variable. If the firewall is active, then the when
condition is not met, and the task is marked as skipped
.
The sixth task in this playbook cleans up package dependencies:
...
    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes
...
This task removes package dependencies that are no longer required on the server by calling the ansible.builtin.apt
module, which is the equivalent of running the sudo apt autoremove
command.
The seventh task checks if a reboot is required:
...
    - name: Check if reboot required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required
...
On Ubuntu, a newly installed or upgraded package signals that a reboot is needed for its changes to take effect by creating the /var/run/reboot-required
file. You can confirm if that file exists using the stat /var/run/reboot-required
command. This task calls the ansible.builtin.stat
module to do the same thing, then saves (register
) the output to the reboot_required
variable. The value of that variable will be queried in the next task.
The eighth task will reboot the server if necessary:
...
    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_required.stat.exists == true
By querying the reboot_required
variable from task 7, this task calls the ansible.builtin.reboot
module to reboot the hosts only when
the /var/run/reboot-required
exists. If a reboot is required and a host is rebooted, the task is marked as changed
. Otherwise, Ansible marks it as skipped
in the play recap.
The full playbook file for ongoing maintenance will be as follows:
- hosts: ongoing
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"

  tasks:
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes

    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest

    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd

    - name: UFW - Is it running?
      ansible.builtin.command: ufw status
      register: ufw_status

    - name: UFW - Enable UFW and deny incoming traffic
      community.general.ufw:
        state: enabled
      when: "'inactive' in ufw_status.stdout"

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes

    - name: Check if reboot required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_required.stat.exists == true
When finished reviewing the file, save and close it.
Note: You can add new tasks to or modify existing tasks in the playbook. However, changing the YAML file may corrupt it because YAML is sensitive to spacing, so take care if you choose to edit any aspect of the file. For more on working with Ansible playbooks, follow our series on How To Write Ansible Playbooks.
Now you can run the file. First, check the syntax:
- ansible-playbook --syntax-check --ask-vault-pass ongoing.yml
You’ll be prompted for the vault password you created in Step 5. If there are no errors with the YAML syntax after successful authentication, the output will be:
Output
playbook: ongoing.yml
You may now run the file with the following command:
- ansible-playbook --ask-vault-pass ongoing.yml
You’ll be prompted for your vault password. After successful authentication, the Ansible controller will log in to each host as sammy
(or the username you specified) to perform the tasks in the playbook. Rather than running the ssh -p 5995 sammy@host_ip_address
command on each server individually, Ansible connects to the nodes specified by the ongoing
group in /etc/ansible/hosts
and then executes the tasks in the playbook.
If the command completes successfully, the following output will print:
Output
PLAY RECAP *****************************************************************************************************
host1 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
host2 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
host3 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Unlike the play recap for the initial server setup, this play recap notes the two tasks that were skipped
because the condition set for each task with the when
parameter was not met.
You can use this playbook to maintain the hosts without needing to log in to each host manually. As you build and install applications on the hosts, you can add tasks to the playbook so that you can also manage those applications with Ansible.
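For example, a task that keeps a package such as fail2ban installed could be appended to the ongoing playbook’s task list. This is a minimal sketch; fail2ban is just an illustrative choice and is not part of the sample repository:
    - name: Ensure fail2ban is installed
      ansible.builtin.apt:
        name: fail2ban
        state: present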
In this tutorial, you used Ansible to automate the initial setup of multiple Ubuntu 22.04 servers. You also ran a secondary playbook for ongoing maintenance of those servers. Ansible automation is a time-saving tool when you need to set up an application like Cassandra or MinIO in a distributed or cluster mode.
More information on Ansible is available at the official Ansible documentation site. To further customize your playbook, you can review An Introduction to Configuration Management and Configuration Management 101: Writing Ansible Playbooks.