This tutorial is out of date and no longer maintained.
This article is no longer current. If you are interested in writing an update for this article, please see DigitalOcean wants to publish your tech tutorial!
Reason: On December 22, 2016, CoreOS announced that it no longer maintains fleet. CoreOS recommends using Kubernetes for all clustering needs.
See Instead: For guidance using Kubernetes on CoreOS without fleet, see the Kubernetes on CoreOS Documentation.
Kubernetes is a system designed to manage applications built within Docker containers across clustered environments. It handles the entire life cycle of a containerized application including deployment and scaling.
In this guide, we’ll demonstrate how to get started with Kubernetes on a CoreOS cluster. This system will allow us to group related services together for deployment as a unit on a single host using what Kubernetes calls “pods”. It also provides health checking functionality, high availability, and efficient usage of resources.
This tutorial was tested with Kubernetes v0.7.0. Keep in mind that this software changes frequently. To see your version, once it’s installed, run:
kubecfg -version
We will start with the same basic CoreOS clusters we have used in previous CoreOS guides. To get this three member cluster up and running, follow our CoreOS clustering guide.
This will give you three servers to configure. While each node is essentially interchangeable at the CoreOS level, within Kubernetes, we'll need to assign more specialized roles. We need one node to act as the master; it will run a few extra services, such as an API server and a controller manager.
For this guide, we will use the following details:
| Hostname | Public IPv4 | Private IPv4 | Role |
| --- | --- | --- | --- |
| coreos-1 | 192.168.2.1 | 10.120.0.1 | Master |
| coreos-2 | 192.168.2.2 | 10.120.0.2 | Minion1 |
| coreos-3 | 192.168.2.3 | 10.120.0.3 | Minion2 |
In the configuration we will be following, the master will also be a fully functional minion server capable of completing work. The idea for this configuration was taken from Brian Ketelson’s guide on setting up Kubernetes on CoreOS here.
If you followed the guide above to create the cluster, both etcd and fleet should be configured to use each server's private IPv4 for communication. The public IP address can be used for connecting from your local machine.
This guide will take this basic CoreOS cluster and install a number of services on top of it.
First, we will configure flannel, a network fabric layer that provides each machine with an individual subnet for container communication. This is a relatively new CoreOS project made in large part to adapt to Kubernetes' assumptions about the networking environment.
We will configure Docker to use this networking layer for deployments. On top of this, we will set up Kubernetes. This involves a number of pieces. We need to configure a proxying service, an API layer, and a node-level “pod” management system called Kubelet.
The first thing we need to do is configure the flannel service. This is the component that provides individual subnets for each machine in the cluster. Docker will be configured to use this for deployments. Since this is a base requirement, it is a great place to start.
At the time of this writing, there are no pre-built binaries of flannel provided by the project. Due to this fact, we'll have to build the binary and install it ourselves. To save build time, we will be building this on a single machine and then later transferring the executable to our other nodes.
Like many parts of CoreOS, Flannel is built in the Go programming language. Rather than setting up a complete Go environment to build the package, we’ll use a container pre-built for this purpose. Google maintains a Go container specifically for these types of situations.
All of the applications we will be installing will be placed in the /opt/bin directory, which is not created automatically in CoreOS. Create the directory now:
sudo mkdir -p /opt/bin
Now we can build the project using the Go container. Just run this Docker command to pull the image from Docker Hub, run the container, and download and build the package within the container:
docker run -i -t google/golang /bin/bash -c "go get github.com/coreos/flannel"
When the operation is complete, we can copy the compiled binary out of the container. First, we need to know the container ID:
docker ps -l -q
The result will be an ID that looks like this:
004e7a7e4b70
We can use this ID to specify a copy operation into the /opt/bin directory. The binary has been placed at /gopath/bin/flannel within the container. Since the /opt/bin directory isn't writeable by our core user, we'll have to use sudo:
sudo docker cp 004e7a7e4b70:/gopath/bin/flannel /opt/bin/
We now have flannel available on our first machine. A bit later, we’ll copy this to our other machines.
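If you would like to confirm that the copy succeeded, a quick check is to make sure the binary exists and is executable:

ls -l /opt/bin/flannel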
Kubernetes is composed of quite a few different applications and layered technologies. Currently, the project does not contain pre-built binaries for the various components we need. We will build them ourselves instead.
We will only complete this process on one of our servers. Since our servers are uniform in nature, we can avoid unnecessary build times by simply transferring the binaries that we will produce.
The first step is to clone the project from its GitHub repository. We will clone it into our home directory:
cd ~
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
Next, we will go into the build directory within the repository. From here, we can build the binaries using an included script:
cd kubernetes/build
./release.sh
This process will take quite a long time. It will start up a Docker container to build the necessary binary packages.
When the build process is completed, you will be able to find the binaries in the ~/kubernetes/_output/dockerized/bin/linux/amd64
directory:
cd ~/kubernetes/_output/dockerized/bin/linux/amd64
ls
e2e kube-apiserver kube-proxy kubecfg kubelet
integration kube-controller-manager kube-scheduler kubectl kubernetes
We will transfer these to the /opt/bin directory that we created earlier:
sudo cp * /opt/bin
Our first machine now has all of the binaries needed for our project. We can now focus on getting these applications on our other servers.
Our first machine has all of the components necessary to start up a Kubernetes cluster. Before this will work, though, we need to copy these to our other machines.
Since Kubernetes is not a uniform installation (there is one master and multiple minions), each host does not need all of the binaries. Each minion server only needs the scheduler, docker, proxy, kubelet, and flannel executables.
However, transferring all of the executables gives us more flexibility down the road. It is also easier. We will be transferring everything in this guide.
When you connected to your first machine, you should have forwarded your SSH agent information by connecting with the -A flag (after starting the agent and adding your key). This is an important step. Disconnect and reconnect if you did not pass this flag earlier.

You will need to run the eval and ssh-add commands from the Step 2—Authenticate section of this tutorial before connecting with -A.
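If you need to reconnect with forwarding enabled, the full sequence looks something like this (substitute your own key path and the public IP address of the server you are connecting to):

eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa
ssh -A core@192.168.2.1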
Start by moving into the directory where we have placed our binaries:
cd /opt/bin
Now, we can copy the files in this directory to our other hosts. We will do this by tarring the executables directly to our shell’s standard out. We will then pipe this into our SSH command where we will connect to one of our other hosts.
The SSH command we will use will create the /opt/bin directory on our other host, change to the directory, and untar the information it receives through the SSH tunnel. The entire command looks like this:
tar -czf - . | ssh core@192.168.2.2 "sudo mkdir -p /opt/bin; cd /opt/bin; sudo tar xzvf -"
This will transfer all of the executables to the IP address you specified. Run the command again using your third host:
tar -czf - . | ssh core@192.168.2.3 "sudo mkdir -p /opt/bin; cd /opt/bin; sudo tar xzvf -"
You now have all of the executables in place on your three machines.
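If you want to verify that the transfer worked, you can list the remote directory over SSH before moving on:

ssh core@192.168.2.2 "ls /opt/bin"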
The next step is to set up our systemd unit files to correctly configure and launch our new applications. We will begin by handling the applications that will only run on our master server.
We will be placing these files in the /etc/systemd/system directory. Move there now:
cd /etc/systemd/system
Now we can begin building our service files. We will create two files on the master only, and five files that also belong on the minions. All of these files will be in /etc/systemd/system/*.service.
Master files:
apiserver.service
controller-manager.service
Minion files for all servers:
scheduler.service
flannel.service
docker.service
proxy.service
kubelet.service
The first file we will configure is the API server's unit file. The API server is used to serve information about the cluster, handle POST requests to alter information, schedule work on each server, and synchronize shared information.
We will be calling this unit file apiserver.service for simplicity. Create and open that file now:
sudo vim apiserver.service
Within this file, we will start with the basic metadata about our service. We need to make sure this unit is not started until the etcd and Docker services are up and running:
[Unit]
Description=Kubernetes API Server
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
Next, we will complete the [Service] section. This will mainly be used to start the API server with some parameters describing our environment. We will also set up restart conditions:
[Unit]
Description=Kubernetes API Server
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
[Service]
ExecStart=/opt/bin/kube-apiserver \
-address=127.0.0.1 \
-port=8080 \
-etcd_servers=http://127.0.0.1:4001 \
-portal_net=10.100.0.0/16 \
-logtostderr=true
ExecStartPost=-/bin/bash -c "until /usr/bin/curl http://127.0.0.1:8080; do echo \"waiting for API server to come online...\"; sleep 3; done"
Restart=on-failure
RestartSec=5
The above section establishes the networking address and port where the server will run, as well as the location where etcd is listening. The portal_net parameter gives the network range that the flannel service will use.
After we start the service, we check that it is up and running in a loop. This ensures that the service is actually able to accept connections before the dependent services are initiated. Not having this can lead to errors in the dependent services that would require a manual restart.
Finally, we will have to install this unit. We can do that with an [Install] section that will tell our host to start this service when the machine is completely booted:
[Unit]
Description=Kubernetes API Server
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
[Service]
ExecStart=/opt/bin/kube-apiserver \
-address=127.0.0.1 \
-port=8080 \
-etcd_servers=http://127.0.0.1:4001 \
-portal_net=10.100.0.0/16 \
-logtostderr=true
ExecStartPost=-/bin/bash -c "until /usr/bin/curl http://127.0.0.1:8080; do echo \"waiting for API server to come online...\"; sleep 3; done"
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
When you are finished, close the file.
The next piece required by Kubernetes is the Controller Manager server. This component is used to perform data replication among the cluster units.
Open up a file called controller-manager.service in the same directory:
sudo vim controller-manager.service
We’ll begin with the basic metadata again. This will follow the same format as the last file. In addition to the other dependencies, this service must start up after the API server unit we just configured:
[Unit]
Description=Kubernetes Controller Manager
After=etcd.service
After=docker.service
After=apiserver.service
Wants=etcd.service
Wants=docker.service
Wants=apiserver.service
For the [Service] portion of this file, we just need to pass a few parameters to the executable. Mainly, we are pointing the application to the location of our API server. Here, we have passed in each of our machines' private IP addresses, separated by commas. Modify these values to mirror your own configuration. Again, we will make sure this unit restarts on failure since it is required for our Kubernetes cluster to function correctly:
[Unit]
Description=Kubernetes Controller Manager
After=etcd.service
After=docker.service
After=apiserver.service
Wants=etcd.service
Wants=docker.service
Wants=apiserver.service
[Service]
ExecStart=/opt/bin/kube-controller-manager \
-master=http://127.0.0.1:8080 \
-machines=10.120.0.1,10.120.0.2,10.120.0.3
Restart=on-failure
RestartSec=5
We will also be using the same installation instructions so that this unit starts on boot as well:
[Unit]
Description=Kubernetes Controller Manager
After=etcd.service
After=docker.service
After=apiserver.service
Wants=etcd.service
Wants=docker.service
Wants=apiserver.service
[Service]
ExecStart=/opt/bin/kube-controller-manager \
-master=http://127.0.0.1:8080 \
-machines=10.120.0.1,10.120.0.2,10.120.0.3
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Save and close the file when you are finished.
Now that we have our master-specific services configured, we can configure the unit files that need to be present on all of our machines. This means that you should add these files to both the master and the minion servers and configure them accordingly.
These five files should be created on all machines, in /etc/systemd/system/*.service.
scheduler.service
flannel.service
docker.service
proxy.service
kubelet.service
The next component is the scheduler. The scheduler decides which minion each workload should run on and communicates that decision to the cluster so it is carried out.
Create and open a file for this unit now:
sudo vim scheduler.service
This unit starts off in much of the same way as the last one. It has dependencies on all of the same services:
[Unit]
Description=Kubernetes Scheduler
After=etcd.service
After=docker.service
After=apiserver.service
Wants=etcd.service
Wants=docker.service
Wants=apiserver.service
The service section itself is very straightforward. We only need to point the executable at the network address and port where the API server is listening. Again, we'll restart the service in case of failure.
The installation section mirrors the others we have seen so far:
[Unit]
Description=Kubernetes Scheduler
After=etcd.service
After=docker.service
After=apiserver.service
Wants=etcd.service
Wants=docker.service
Wants=apiserver.service
[Service]
ExecStart=/opt/bin/kube-scheduler -master=127.0.0.1:8080
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
When you are finished, save and close the file.
The next component that we need to get up and running is flannel, our network fabric layer. This will be used to give each node its own subnet for Docker containers.
Again, on each of your machines, change to the systemd configuration directory:
cd /etc/systemd/system
Create and open the flannel unit file in your text editor:
sudo vim flannel.service
Inside of this file, we will start with the metadata information. Since this service requires etcd to register the subnet information, we need to start this after etcd:
[Unit]
Description=Flannel network fabric for CoreOS
Requires=etcd.service
After=etcd.service
For the [Service] section, we're first going to source the /etc/environment file so that we can have access to the private IP address of our host.

The next step will be to place an ExecStartPre= line that attempts to register the subnet range with etcd. It will continually try to register with etcd until it is successful. We will be using the 10.100.0.0/16 range for this guide.

Then, we will start flannel with the private IP address we're sourcing from the environment file.

Afterwards, we want to check whether flannel has written its information to its file (so that Docker can read it in a moment) and sleep if it has not. This ensures that the Docker service does not try to read the file before it is available (this can happen on the first server to come online). We will configure the restart using the usual parameters and install the unit using the multi-user.target again:
[Unit]
Description=Flannel network fabric for CoreOS
Requires=etcd.service
After=etcd.service
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/bin/bash -c "until /usr/bin/etcdctl set /coreos.com/network/config '{\"Network\": \"10.100.0.0/16\"}'; do echo \"waiting for etcd to become available...\"; sleep 5; done"
ExecStart=/opt/bin/flannel -iface=${COREOS_PRIVATE_IPV4}
ExecStartPost=-/bin/bash -c "until [ -e /run/flannel/subnet.env ]; do echo \"waiting for write.\"; sleep 3; done"
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Save and close the file when you are finished. Create the same file on your other hosts.
The next file that we will create is not actually related to the executables in our /opt/bin directory. We need to create a Docker service file so that the service will be started with knowledge of the flannel networking overlay we just configured.
Create the appropriate unit file with your text editor:
sudo vim docker.service
Start with the usual metadata. We need this to start after the flannel service has been configured and brought online:
[Unit]
Description=Docker container engine configured to run with flannel
Requires=flannel.service
After=flannel.service
For the [Service] section, we'll need to source the file that flannel uses to store the environmental variables it is creating. This will have the current host's subnet information.

We then bring down the current docker0 bridge interface if it is running and delete it. This allows us to restart Docker with a clean slate. The process will configure the docker0 interface using the flannel networking information.

We use the same restart and [Install] details that we've been using with our other units:
[Unit]
Description=Docker container engine configured to run with flannel
Requires=flannel.service
After=flannel.service
[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=-/usr/bin/ip link set dev docker0 down
ExecStartPre=-/usr/sbin/brctl delbr docker0
ExecStart=/usr/bin/docker -d -s=btrfs -H fd:// --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Save and close the file when you are finished. Create this same file on each of your hosts.
The next logical unit to discuss is the proxy server that each of the cluster members runs. The Kubernetes proxy server is used to route and forward traffic to and from containers.
Open a proxy unit file with your text editor:
sudo vim proxy.service
For the metadata section, we will need to define dependencies on etcd and Docker. For the [Service] section, we just need to start the executable with the local etcd server's address. The restarting configuration and the installation details will mirror our previous service files:
[Unit]
Description=Kubernetes proxy server
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
[Service]
ExecStart=/opt/bin/kube-proxy -etcd_servers=http://127.0.0.1:4001 -logtostderr=true
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Save the file when you are finished. Create this same file on each of your hosts.
Now, we will create the Kubelet unit file. This component is used to manage container deployments. It ensures that the containers are in the state they are supposed to be in and monitors the system for changes in the desired state of the deployments.
Create and open the file in your text editor:
sudo vim kubelet.service
The metadata section will contain the same dependency information about etcd and Docker:
[Unit]
Description=Kubernetes Kubelet
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
For the [Service] section, we again have to source the /etc/environment file to get access to the private IP address of the host. We will then call the kubelet executable, setting its address and port. We also override the hostname to use the same private IP address and point the service to the local etcd instance. We provide the same restart and install details that we've been using:
[Unit]
Description=Kubernetes Kubelet
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/opt/bin/kubelet \
-address=${COREOS_PRIVATE_IPV4} \
-port=10250 \
-hostname_override=${COREOS_PRIVATE_IPV4} \
-etcd_servers=http://127.0.0.1:4001 \
-logtostderr=true
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Save and close the file when you are finished.
Now that you have all of your service files created, you can enable these units. Doing so processes the information in the [Install] section of each unit.
Since our units say that they are wanted by the multi-user target, this means that when the system tries to bring the server into multi-user mode, all of our services will be started automatically.
To accomplish this, go to your /etc/systemd/system directory:
cd /etc/systemd/system
From here, we can enable all of the scripts:
sudo systemctl enable *
This will create a multi-user.target.wants directory with symbolic links to our unit files. This directory will be processed by systemd toward the end of the boot process.
Repeat this step on each of your servers.
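If you would like to confirm that the links were created, you can list the target's wants directory:

ls /etc/systemd/system/multi-user.target.wants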
Now that we have our services enabled, we can reboot the servers in turn.
We will start with the master server, but you can do so in any order. While it is not necessary to reboot to start these services, doing so ensures that our unit files have been written in a way that permits a seamless dependency chain:
sudo reboot
Once the master comes back online, you can reboot your minion servers:
sudo reboot
Once all of your servers are online, make sure your services started correctly. You can check this by typing:
systemctl status service_name
Or you can check the journal by typing:
journalctl -b -u service_name
Look for an indication that the services are up and running correctly. If there are any issues, a restart of the specific service might help:
sudo systemctl restart service_name
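If you would rather check all of the units from this guide in one pass, a small loop like the one below works as well (on the minion servers, drop apiserver and controller-manager from the list, since those units only exist on the master):

for unit in apiserver controller-manager scheduler flannel docker proxy kubelet; do
    echo -n "$unit: "; systemctl is-active $unit
done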
When you are finished, you should be able to view your machines from your master server. After logging into your master server, check that all of the servers are available by typing:
kubecfg list minions
Minion identifier
----------
10.120.0.1
10.120.0.2
10.120.0.3
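The kubecfg tool is a thin client for the API server we configured earlier, so you can also inspect the raw data with curl if you are curious (the v1beta1 path below is an assumption based on the API version shipped with Kubernetes releases from this period):

curl http://127.0.0.1:8080/api/v1beta1/minions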
In a future guide, we’ll talk about how to use Kubernetes to schedule and control services on your CoreOS cluster.
You should now have Kubernetes set up across your CoreOS cluster. This gives you a great management and scheduling interface for working with services in logical groupings.
You probably noticed that the steps above lead to a very manual process. A large part of this is because we built the binaries on our machines. If you were to host the binaries on a web server accessible within your private network, you could pull down the binaries and automatically configure them by crafting special cloud-config files.
Cloud-config files are flexible enough that you could inject most of the unit files without any modification (with the exception of the controller-manager.service file, which needs access to the IPs of each node) and start them up as the CoreOS node is booted. This is outside of the scope of this guide, but it is a good next step in terms of automating the process.
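As a rough sketch of that idea, a provisioning step run on each node could pull pre-built binaries from a web server on your private network instead of compiling them locally. The address and path below are hypothetical placeholders:

# Hypothetical example: fetch pre-built binaries from an internal web server.
# Replace the URL with a host you control on your private network.
sudo mkdir -p /opt/bin
for bin in kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubecfg flannel; do
    sudo curl -s -o /opt/bin/$bin http://10.120.0.100/kubernetes/$bin
    sudo chmod +x /opt/bin/$bin
done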