How To Use Ansible with Terraform for Configuration Management

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

Ansible is a configuration management tool that executes playbooks, which are lists of customizable actions written in YAML, on specified target servers. It can perform all bootstrapping operations, like installing and updating software, creating and removing users, and configuring system services. As such, it is suitable for bringing up servers you deploy using Terraform, which are created blank by default.

Ansible and Terraform are not competing solutions, because they resolve different phases of infrastructure and software deployment. Terraform allows you to define and create the infrastructure of your system, encompassing the hardware that your applications will run on. Conversely, Ansible configures and deploys software by executing its playbooks on the provided server instances. Running Ansible on the resources Terraform provisioned directly after their creation allows you to make the resources usable for your use case much faster. It also enables easier maintenance and troubleshooting, because all deployed servers will have the same actions applied to them.

In this tutorial, you’ll deploy Droplets using Terraform, and then immediately after their creation, you’ll bootstrap the Droplets using Ansible. You’ll invoke Ansible directly from Terraform when a resource deploys. You’ll also avoid introducing race conditions using Terraform’s remote-exec and local-exec provisioners in your configuration, which will ensure that the Droplet deployment is fully complete before further setup commences.

Prerequisites

To complete this tutorial, you will need the setup referenced throughout it: Terraform and Ansible installed on your local machine, a project directory named terraform-ansible containing a provider.tf file that defines the digitalocean provider with do_token and pvt_key variables, your DigitalOcean personal access token exported as the DO_PAT environment variable, and an SSH key named terraform uploaded to your DigitalOcean account.

Note: This tutorial has specifically been tested with Terraform 1.0.2.

Step 1 — Defining Droplets

In this step, you’ll define the Droplets on which you’ll later run an Ansible playbook, which will set up the Apache web server.

Assuming you are in the terraform-ansible directory, which you created as part of the prerequisites, you’ll define a Droplet resource, create three copies of it by specifying count, and output their IP addresses. You’ll store the definitions in a file named droplets.tf. Create and open it for editing by running:

  1. nano droplets.tf

Add the following lines:

~/terraform-ansible/droplets.tf
resource "digitalocean_droplet" "web" {
  count  = 3
  image  = "ubuntu-18-04-x64"
  name   = "web-${count.index}"
  region = "fra1"
  size   = "s-1vcpu-1gb"

  ssh_keys = [
      data.digitalocean_ssh_key.terraform.id
  ]
}

output "droplet_ip_addresses" {
  value = {
    for droplet in digitalocean_droplet.web:
    droplet.name => droplet.ipv4_address
  }
}

Here you define a Droplet resource running Ubuntu 18.04 with 1GB of RAM on one CPU core in the fra1 region. Terraform will pull the SSH key you defined in the prerequisites from your account and add it to the provisioned Droplet, passing its unique ID as a list element into ssh_keys. Because the count parameter is set to 3, Terraform will deploy the Droplet three times. The output block that follows will show the IP addresses of the three Droplets: the for expression traverses the list of Droplets and, for each instance, maps its name to its IP address in the resulting map.

Save and close the file when you’re done.
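
If you’d like to catch configuration errors early, you can optionally validate the project at this point. This assumes you initialized it with terraform init as part of the prerequisites:

  1. terraform validate

If the files parse correctly, Terraform will report that the configuration is valid.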

You have now defined the Droplets that Terraform will deploy. In the next step, you’ll write an Ansible playbook that will execute on each of the three deployed Droplets and will deploy the Apache web server. You’ll later go back to the Terraform code and add in the integration with Ansible.

Step 2 — Writing an Ansible Playbook

You’ll now create an Ansible playbook that performs the initial server setup tasks, such as creating a new user and upgrading the installed packages. You’ll instruct Ansible on what to do by writing tasks, which are units of action that are executed on target hosts. Tasks can use built-in functions, or specify custom commands to be run. Besides the tasks for the initial setup, you’ll also install the Apache web server and enable its mod_rewrite module.

Before writing the playbook, ensure that your public and private SSH keys, which correspond to the one in your DigitalOcean account, are available and accessible on the machine from which you’re running Terraform and Ansible. A typical location for storing them on Linux would be ~/.ssh (although you can store them in other places).

Note: On Linux, you’ll need to ensure that the private key file has appropriate permissions. You can set them by running:

  1. chmod 600 your_private_key_location

You already have a variable for the private key defined, so you’ll only need to add one for the public key location.

Open provider.tf for editing by running:

  1. nano provider.tf

Add a definition for the pub_key variable, so that the file contains the following:

~/terraform-ansible/provider.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}
variable "pvt_key" {}
variable "pub_key" {}

provider "digitalocean" {
  token = var.do_token
}

data "digitalocean_ssh_key" "terraform" {
  name = "terraform"
}

When you’re done, save and close the file.
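
As an optional convenience, you could store the key locations in a terraform.tfvars file so that you don’t have to pass them with -var on every command. The rest of this tutorial keeps passing them explicitly, and the paths below are only hypothetical placeholders:

~/terraform-ansible/terraform.tfvars
pvt_key = "/home/sammy/.ssh/id_rsa"
pub_key = "/home/sammy/.ssh/id_rsa.pub"

Terraform loads terraform.tfvars automatically, and any -var flags passed on the command line still take precedence over it.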

With the pub_key variable now defined, you’ll start writing the Ansible playbook. You’ll store it in a file called apache-install.yml. Create and open it for editing:

  1. nano apache-install.yml

You’ll be building the playbook gradually. First, you’ll need to define on which hosts the playbook will run, its name, and if the tasks should be run as root. Add the following lines:

~/terraform-ansible/apache-install.yml
- become: yes
  hosts: all
  name: apache-install

By setting become to yes, you instruct Ansible to run commands as the superuser, and by specifying all for hosts, you allow Ansible to run the tasks on any given server—even the ones passed in through the command line, as Terraform does.

The first task that you’ll add will create a new, non-root user. Append the following task definition to your playbook:

~/terraform-ansible/apache-install.yml
  tasks:
    - name: Add the user 'sammy' and add it to 'sudo'
      user:
        name: sammy
        group: sudo

You first define a list of tasks and then add a task to it. It will create a user named sammy and grant them superuser access using sudo by adding them to the appropriate group.
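
Note that group sets sudo as sammy’s primary group. If you’d rather keep the default primary group and add sudo as a secondary group, the user module also provides the groups and append parameters. The following variant is only a sketch and is not required for the rest of this tutorial:

    - name: Add the user 'sammy' and add it to 'sudo'
      user:
        name: sammy
        groups: sudo
        append: yes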

The next task will add your public SSH key to the new user’s authorized keys, so that you’ll be able to connect to the Droplets as sammy later on:

~/terraform-ansible/apache-install.yml
    - name: Add SSH key to 'sammy'
      authorized_key:
        user: sammy
        state: present
        key: "{{ lookup('file', pub_key) }}"

This task will ensure that the public SSH key, which is looked up from a local file, is present on the target. You’ll supply the value for the pub_key variable from Terraform in the next step.

You can now order the installation of Apache and the mod_rewrite module by appending the following tasks:

~/terraform-ansible/apache-install.yml
    - name: Wait for apt to unlock
      become: yes
      shell:  while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 5; done;
      
    - name: Install apache2
      apt:        
        name: apache2
        update_cache: yes
        state: latest
        
    - name: Enable mod_rewrite
      apache2_module:
        name: rewrite
        state: present
      notify:
        - Restart apache2

  handlers:
    - name: Restart apache2
      service:
        name: apache2
        state: restarted

The first task will wait until any in-progress package installation using the apt package manager is complete. The second task will use apt to install Apache. The third one will then ensure that the mod_rewrite module is present. After enabling the module, you need to restart Apache, which you can’t do from that task itself. To resolve this, the task notifies a handler, which issues the restart.

At this point, your playbook will look like the following:

~/terraform-ansible/apache-install.yml
- become: yes
  hosts: all
  name: apache-install
  tasks:
    - name: Add the user 'sammy' and add it to 'sudo'
      user:
        name: sammy
        group: sudo
        
    - name: Add SSH key to 'sammy'
      authorized_key:
        user: sammy
        state: present
        key: "{{ lookup('file', pub_key) }}"
        
    - name: Wait for apt to unlock
      become: yes
      shell:  while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 5; done;
      
    - name: Install apache2
      apt:
        name: apache2
        update_cache: yes
        state: latest
      
    - name: Enable mod_rewrite
      apache2_module:
        name: rewrite 
        state: present
      notify:
        - Restart apache2

  handlers:
    - name: Restart apache2
      service:
        name: apache2
        state: restarted

When you’re done, check that the indentation of all YAML elements is correct and matches the example above. This is all you need to define on the Ansible side, so save and close the playbook. You’ll now modify the Droplet deployment code to execute this playbook when the Droplets have finished provisioning.
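
Before wiring the playbook into Terraform, you can optionally ask Ansible to check it for syntax errors. Ansible may warn that no inventory was provided, which is safe to ignore here:

  1. ansible-playbook --syntax-check apache-install.yml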

Step 3 — Running Ansible on Deployed Droplets

Now that you have defined the actions Ansible will take on the target servers, you’ll modify the Terraform configuration to run it upon Droplet creation.

Terraform offers two provisioners that execute commands: local-exec and remote-exec, which run commands locally or remotely (on the target), respectively. remote-exec requires connection data, such as type and access keys, while local-exec does everything on the machine Terraform is executing on, and so does not require connection information. It’s important to note that local-exec runs immediately after the resource you have defined it for has finished provisioning; therefore, it does not wait for the resource to actually boot up. It runs after the cloud platform acknowledges its presence in the system.

You’ll now add provisioner definitions to your Droplet to run Ansible after deployment. Open droplets.tf for editing:

  1. nano droplets.tf

Add the remote-exec and local-exec provisioner blocks to the Droplet resource, so that the file looks like the following:

~/terraform-ansible/droplets.tf
resource "digitalocean_droplet" "web" {
  count  = 3
  image  = "ubuntu-18-04-x64"
  name   = "web-${count.index}"
  region = "fra1"
  size   = "s-1vcpu-1gb"

  ssh_keys = [
      data.digitalocean_ssh_key.terraform.id
  ]

  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]

    connection {
      host        = self.ipv4_address
      type        = "ssh"
      user        = "root"
      private_key = file(var.pvt_key)
    }
  }

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i '${self.ipv4_address},' --private-key ${var.pvt_key} -e 'pub_key=${var.pub_key}' apache-install.yml"
  }
}

output "droplet_ip_addresses" {
  value = {
    for droplet in digitalocean_droplet.web:
    droplet.name => droplet.ipv4_address
  }
}

Like Terraform, Ansible runs locally and connects to the target servers via SSH. To run it, you define a local-exec provisioner in the Droplet definition that runs the ansible-playbook command. This command passes in the username (root), the IP of the current Droplet (retrieved with ${self.ipv4_address}), the locations of the SSH private and public keys, and the playbook file to run (apache-install.yml). By setting the ANSIBLE_HOST_KEY_CHECKING environment variable to False, you skip host key verification, so Ansible won’t stop to ask whether it should trust a server it hasn’t connected to before.

As was noted, the local-exec provisioner runs without waiting for the Droplet to become available, so the execution of the playbook may precede the actual availability of the Droplet. To remedy this, you define a remote-exec provisioner containing commands to execute on the target server. For remote-exec to run, the target server must be available, and since remote-exec runs before local-exec, the server will be fully initialized by the time Ansible is invoked. The inline commands also install python3, which Ansible needs on the target; python3 comes preinstalled on Ubuntu 18.04, so you can comment out or remove that command as necessary.

When you’re done making changes, save and close the file.
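
The deployment command below reads your DigitalOcean personal access token from the DO_PAT environment variable, which you set as part of the prerequisites. If you’ve since opened a new shell session, you may need to export it again, substituting in your own token:

  1. export DO_PAT="your_personal_access_token"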

Then, deploy the Droplets by running the following command. Remember to replace private_key_location and public_key_location with the locations of your private and public keys respectively:

  1. terraform apply -var "do_token=${DO_PAT}" -var "pvt_key=private_key_location" -var "pub_key=public_key_location"

The output will be long. Terraform will provision the Droplets and then establish an SSH connection to each. Next, the remote-exec provisioner will run and install python3:

Output
...
digitalocean_droplet.web[1] (remote-exec): Connecting to remote host via SSH...
digitalocean_droplet.web[1] (remote-exec):   Host: ...
digitalocean_droplet.web[1] (remote-exec):   User: root
digitalocean_droplet.web[1] (remote-exec):   Password: false
digitalocean_droplet.web[1] (remote-exec):   Private key: true
digitalocean_droplet.web[1] (remote-exec):   Certificate: false
digitalocean_droplet.web[1] (remote-exec):   SSH Agent: false
digitalocean_droplet.web[1] (remote-exec):   Checking Host Key: false
digitalocean_droplet.web[1] (remote-exec): Connected!
...

After that, Terraform will run the local-exec provisioner for each of the Droplets, which executes Ansible. The following output shows this for one of the Droplets:

Output
...
digitalocean_droplet.web[2] (local-exec): Executing: ["/bin/sh" "-c" "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i 'ip_address,' --private-key private_key_location -e 'pub_key=public_key_location' apache-install.yml"]

digitalocean_droplet.web[2] (local-exec): PLAY [apache-install] **********************************************************

digitalocean_droplet.web[2] (local-exec): TASK [Gathering Facts] *********************************************************
digitalocean_droplet.web[2] (local-exec): ok: [ip_address]

digitalocean_droplet.web[2] (local-exec): TASK [Add the user 'sammy' and add it to 'sudo'] *******************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): TASK [Add SSH key to 'sammy'] **************************************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): TASK [Wait for apt to unlock] **************************************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): TASK [Install apache2] *********************************************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): TASK [Enable mod_rewrite] ******************************************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): RUNNING HANDLER [Restart apache2] **********************************************
digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

digitalocean_droplet.web[2] (local-exec): PLAY RECAP *********************************************************************
digitalocean_droplet.web[2] (local-exec): ip_address : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
...
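
If the playbook fails on one of the Droplets, for example because of a transient network issue, you don’t have to re-provision anything. You can re-run the same command that Terraform issued, substituting the placeholder IP address and key locations yourself:

  1. ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i 'droplet_ip_address,' --private-key private_key_location -e 'pub_key=public_key_location' apache-install.yml

Most of the playbook’s tasks are idempotent, so running it again on an already configured Droplet is generally safe.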

At the end of the output, you’ll receive a list of the three Droplets and their IP addresses:

Output
droplet_ip_addresses = {
  "web-0" = "..."
  "web-1" = "..."
  "web-2" = "..."
}

You can now navigate to one of the IP addresses in your browser. You will reach the default Apache welcome page, signifying the successful installation of the web server.

Apache Welcome Page

This means that Terraform provisioned your servers and that your Ansible playbook executed on them successfully.
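
If you prefer to verify from the terminal instead of a browser, a HEAD request with curl against one of the Droplet IP addresses should return a 200 OK status and an Apache Server header:

  1. curl -I http://droplet_ip_address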

To check that the SSH key was correctly added to sammy on the provisioned Droplets, connect to one of them with the following command:

  1. ssh -i private_key_location sammy@droplet_ip_address

Remember to put in the private key location and the IP address of one of the provisioned Droplets, which you can find in your Terraform output.

The output will look similar to the following:

Output
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0-121-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of ...

  System load:  0.0               Processes:           88
  Usage of /:   6.4% of 24.06GB   Users logged in:     0
  Memory usage: 20%               IP address for eth0: ip_address
  Swap usage:   0%                IP address for eth1: ip_address

0 packages can be updated.
0 updates are security updates.

New release '20.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

*** System restart required ***
Last login: ...
...

You’ve successfully connected to the target and obtained shell access for the sammy user, which confirms that the SSH key was correctly configured for that user.
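
While you’re connected, you can also confirm that mod_rewrite was enabled by listing Apache’s enabled modules; the rewrite.load symlink created when the module was enabled should appear in the output:

  1. ls /etc/apache2/mods-enabled/ | grep rewrite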

You can destroy the deployed Droplets by running the following command, entering yes when prompted:

  1. terraform destroy -var "do_token=${DO_PAT}" -var "pvt_key=private_key_location" -var "pub_key=public_key_location"

In this step, you have added Ansible playbook execution as a local-exec provisioner in your Droplet definition. To ensure that the server is available for connections, you’ve included a remote-exec provisioner, which also installs the python3 prerequisite, after which Ansible runs.

Conclusion

Terraform and Ansible together form a flexible workflow for spinning up servers with the needed software and hardware configurations. Running Ansible directly as part of the Terraform deployment process allows you to have the servers up and bootstrapped with dependencies for your development work and applications much faster.

This tutorial is part of the How To Manage Infrastructure with Terraform series. The series covers a number of Terraform topics, from installing Terraform for the first time to managing complex projects.

You can also find additional Ansible content resources on our Ansible topic page.
