Webinar Series: Getting Started with Containers

This article supplements a webinar series on deploying and managing containerized workloads in the cloud. The series covers the essentials of containers, including container lifecycle management, deploying multi-container applications, scaling workloads, and understanding Kubernetes, along with highlighting best practices for running stateful applications.

This tutorial includes the concepts and commands covered in the first session in the series, Getting Started with Containers.

Introduction

Docker is a platform to deploy and manage containerized applications. Containers are popular among developers, administrators, and devops engineers due to the flexibility they offer.

Docker has three essential components:

  • Docker Engine
  • Docker Tools
  • Docker Registry

Docker Engine provides the core capabilities of managing containers. It interfaces with the underlying Linux operating system to expose simple APIs to deal with the lifecycle of containers.

Docker Tools are a set of command-line tools that talk to the API exposed by the Docker Engine. They are used to run the containers, create new images, configure storage and networks, and perform many more operations that impact the lifecycle of a container.

Docker Registry is the place where container images are stored. Each image can have multiple versions identified through unique tags. Users pull existing images from the registry and push new images to it. Docker Hub is a hosted registry managed by Docker, Inc. It’s also possible to run a registry within your own environments to keep the images closer to the engine.
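To see how these pieces fit together, note that the docker command-line tool is just a client for the Engine's HTTP API. Once Docker is installed (Step 1 below), you can query that API directly over the Engine's Unix socket. This is a quick sketch, assuming the default socket path /var/run/docker.sock and a user that is allowed to access it (see the docker group note in the Prerequisites):

  1. curl --unix-socket /var/run/docker.sock http://localhost/version

The response is the Engine's version information as raw JSON, which is the same data the docker version command formats for you.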

By the end of this tutorial, you will have installed Docker on a DigitalOcean Droplet, managed containers, worked with images, added persistence, and set up a private registry.

Prerequisites

To follow this tutorial, you will need:

  • A DigitalOcean Droplet running Ubuntu with a non-root user that has sudo privileges.

By default, the docker command requires root privileges. However, you can execute the command without the sudo prefix by running docker as a user in the docker group.

To configure your Droplet this way, run the command sudo usermod -aG docker ${USER}. This will add the current user to the docker group. Then, run the command su - ${USER} to apply the new group membership.
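For reference, those two commands are:

  1. sudo usermod -aG docker ${USER}
  2. su - ${USER}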

This tutorial expects that your server is configured to run the docker command without the sudo prefix.

Step 1 — Installing Docker

After SSHing into the Droplet, run the following commands. They remove any existing docker-related packages, install the tools needed to fetch packages over HTTPS, add and verify Docker's GPG signing key, add the official Docker repository, refresh the package index, and finally install Docker itself:

  1. sudo apt-get remove docker docker-engine docker.io
  2. sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  3. curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  4. sudo apt-key fingerprint 0EBFCD88
  5. sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  6. sudo apt-get update
  7. sudo apt-get install -y docker-ce

After installing Docker, verify the installation with the following commands:

  1. docker info

The above command shows the details of Docker Engine deployed in the environment. The next command verifies that the Docker Tools are properly installed and configured. It should print the version of both Docker Engine and Tools.

  1. docker version

Step 2 — Launching Containers

Docker containers are launched from existing images which are stored in the registry. Images in Docker can be stored in private or public repositories. Private repositories require users to authenticate before pulling images. Public images can be accessed by anyone.

To search for an image named hello-world, run the command:

  1. docker search hello-world

There may be multiple images matching the name hello-world. Choose the one with the maximum stars, which indicates the popularity of the image.
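If the search returns a long list, docker search can also narrow it down with filters. For example, the following command (assuming your Docker CLI supports the --filter flag, as all recent versions do) shows only official images:

  1. docker search --filter is-official=true hello-world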

Check the available images in your local environment with the following command:

  1. docker images

Since we haven’t pulled or run any images yet, the list will be empty. We can now download the image and run it locally:

  1. docker pull hello-world
  2. docker run hello-world

If we execute the docker run command without pulling the image, Docker Engine will first pull the image and then run it. Running the docker images command again shows that we have the hello-world image available locally.

Let’s launch a more meaningful container: an Apache web server.

  1. docker run -p 80:80 --name web -d httpd

You may notice additional options passed to the docker run command. Here is an explanation of these switches:

  • -p — This tells Docker Engine to expose the container’s port 80 on the host’s port 80. Since Apache listens on port 80, we need to expose it on the host port.
  • --name — This switch assigns a name to our running container. If we omit this, Docker Engine will assign a random name.
  • -d — This option instructs Docker Engine to run the container in detached mode. Without this, the container will be launched in the foreground, blocking access to the shell. By pushing the container into the background, we can continue to use the shell while the container is still running.
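For example, if port 80 on the host were already in use, you could publish Apache on a different host port instead. This is a sketch using the arbitrary host port 8080 and the container name web2, chosen so it does not clash with the web container launched above:

  1. docker run -p 8080:80 --name web2 -d httpd
  2. curl localhost:8080
  3. docker rm -f web2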

To verify that our container is indeed running in the background, try this command:

  1. docker ps

The output shows that the container named web is running with port 80 mapped to the host port 80.

Now access the web server:

  1. curl localhost

Let’s stop and remove the running container with the following commands:

  1. docker stop web
  2. docker rm web

Running docker ps again confirms that the container is terminated.

Step 3 — Adding Storage to Containers

Containers are ephemeral, which means that anything stored within a container will be lost when the container is terminated. To persist data beyond the life of a container, we need to attach a volume to the container. Volumes are directories from the host file system.

Start by creating a new directory on the host:

  1. mkdir htdocs

Now, let’s launch the container with a new switch to mount the htdocs directory, pointing it to the Apache web server’s document root:

  1. docker run -p 80:80 --name web -d -v $PWD/htdocs:/usr/local/apache2/htdocs httpd

The -v switch mounts the host’s htdocs directory into the container at Apache’s document root. Any changes made to this directory are visible in both locations.
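If you want to double-check what ended up mounted where, docker inspect can print a container’s mounts. A minimal sketch using a Go template:

  1. docker inspect -f '{{ .Mounts }}' web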

Access the directory from the container by running the command:

  1. docker exec -it web /bin/bash

This command attaches our terminal to the container’s shell in interactive mode. You should see that you are now dropped inside the container.

Navigate to the htdocs folder and create a simple HTML file. Finally, exit the shell to return to the host:

  1. cd /usr/local/apache2/htdocs
  2. echo '<h1>Hello World from Container</h1>' > index.html
  3. exit

Executing the curl localhost command again shows that the web server is returning the page that we created.
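To check it yourself:

  1. curl localhost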

We can not only access this file from the host, but we can also modify it:

  1. cd htdocs
  2. cat index.html
  3. echo '<h1>Hello World from Host</h1>' | sudo tee index.html >/dev/null

Because the container’s root user created index.html, the copy on the host is owned by root, which is why the last command pipes echo through sudo tee instead of using a plain shell redirect; the >/dev/null simply hides tee’s copy of the output. Running curl localhost again confirms that the web server is serving the latest page created from the host.

Terminate the container with the following command. (The -f flag forces Docker to remove the container even though it is still running, instead of requiring you to stop it first.)

  1. docker rm -f web

Step 4 — Building Images

Apart from running existing images from the registry, we can create our own images and store them in the registry.

You can create new images from existing containers. The changes made to the container are first committed and then the images are tagged and pushed to the registry.

Let’s launch the httpd container again and modify the default document:

  1. docker run -p 80:80 --name web -d httpd
  2. docker exec -it web /bin/bash
  3. cd htdocs
  4. echo '<h1>Welcome to my Web Application</h1>' > index.html
  5. exit

The container is now running with a customized index.html. You can verify it with curl localhost.

Before we commit the changed container, it’s a good idea to stop it. After it is stopped we will run the commit command:

  1. docker stop web
  2. docker commit web doweb

Confirm the creation of the image with the docker images command. It shows the doweb image that we just created.

To tag and store this image in Docker Hub, run the following commands to push your image to the public registry:

  1. docker login
  2. docker tag doweb your_docker_hub_username/doweb
  3. docker push your_docker_hub_username/doweb

You can verify the new image by searching in Docker Hub from the browser or the command line.
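A simple command-line check is to pull the image back by its full name (again replacing your_docker_hub_username with your Docker Hub username):

  1. docker pull your_docker_hub_username/doweb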

Step 5 — Launching a Private Registry

It is possible to run the registry in private environments to keep the images more secure. It also reduces the latency between the Docker Engine and the image repository.

Docker Registry is available as a container that can be launched like any other container. Since the registry holds multiple images, it’s a good idea to attach a storage volume to it.

  1. docker run -d -p 5000:5000 --restart=always --name registry -v $PWD/registry:/var/lib/registry registry

Notice that the container is launched in the background with port 5000 exposed and the registry directory mapped to the host file system. You can verify that the container is running by executing the docker ps command.

We can now tag a local image and push it to the private registry. Let’s first pull the busybox image from Docker Hub and tag it.

  1. docker pull busybox
  2. docker tag busybox localhost:5000/busybox
  3. docker images

The last command confirms that the busybox image is now tagged with localhost:5000, so push the image to the private registry:

  1. docker push localhost:5000/busybox

With the image pushed to the local registry, let’s try removing it from the environment and pulling it back from the registry.

  1. docker rmi -f localhost:5000/busybox
  2. docker images
  3. docker pull localhost:5000/busybox
  4. docker images

We went through the full circle of pulling the image, tagging it, pushing it to the local registry, and, finally, pulling it back.

There may be instances where you would want to run the private registry on a dedicated host. Docker Engines running on different machines will then talk to the remote registry to pull and push images.

Since the registry is not secured, we need to modify the configuration of Docker Engine to enable access to an insecure registry. To do this, edit the daemon.json file located at /etc/docker/daemon.json. Create the file if it doesn’t exist.

Add the following entry:

Editing /etc/docker/daemon.json
{
  "insecure-registries" : ["REMOTE_REGISTRY_HOST:5000"]
}

Replace REMOTE_REGISTRY_HOST with the hostname or IP address of the remote registry. Restart Docker Engine to ensure that the configuration changes are applied.
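The exact restart command depends on how your host manages services; on an Ubuntu Droplet using systemd, restarting the Engine and then pushing to the remote registry might look like this sketch, where REMOTE_REGISTRY_HOST is the same placeholder used above:

  1. sudo systemctl restart docker
  2. docker tag busybox REMOTE_REGISTRY_HOST:5000/busybox
  3. docker push REMOTE_REGISTRY_HOST:5000/busybox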

Conclusion

This tutorial helped you get started with Docker. It covered the essential concepts, including installation, container management, image management, storage, and private registries. The upcoming sessions and articles in this series will help you go beyond the basics of Docker.
