How To Create a Kubernetes Cluster Using Kubeadm on CentOS 7

Updated on April 25, 2019

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.

Note: This tutorial uses version 1.14 of Kubernetes, the officially supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error-prone.

In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.


Goals

Your cluster will include the following physical resources:

  • One master node

    The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data, alongside the components that schedule workloads onto worker nodes.

  • Two worker nodes

    Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run your workloads once they’re assigned to it, even if the master goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

Prerequisites

To follow this guide, you will need:

  • Three servers running CentOS 7: one for the master node and two for the worker nodes. Each server should be reachable over SSH as the root user from your local machine. The master should have at least 2 CPUs and 2GB of RAM, the minimums kubeadm expects for the control plane, and swap should be disabled on every server, since kubeadm will refuse to initialize a cluster with swap enabled.

  • A non-root user named centos with sudo privileges on each server. The master playbook later in this guide runs several tasks as this user, and you will SSH into the master node as this user to run kubectl commands.

  • Ansible installed on your local machine.

Step 1 — Setting Up the Workspace Directory and Ansible Inventory File

In this section, you will create a directory on your local machine that will serve as your workspace. You will also configure Ansible locally so that it can communicate with and execute commands on your remote servers. To do this, you will create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

Out of your three servers, one will be the master with an IP displayed as master_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

  1. mkdir ~/kube-cluster
  2. cd ~/kube-cluster

This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

Create a file named ~/kube-cluster/hosts using vi or your favorite text editor:

  1. vi ~/kube-cluster/hosts

Press i to enter insert mode, then add the following text to the file, which will specify information about the logical structure of your cluster:

~/kube-cluster/hosts
[masters]
master ansible_host=master_ip ansible_user=root

[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root

When you are finished, press ESC followed by :wq to write the changes to the file and quit.

You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.

In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.
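
If you would like to confirm that Ansible can reach all three servers before moving on, you can optionally run Ansible’s ping module against the inventory (this quick check is an addition to the original steps, not required by them):

  1. ansible -i hosts all -m ping

Each server should respond with "ping": "pong" if SSH connectivity and the remote user are configured correctly.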

Having set up the server inventory with groups, let’s move on to installing operating-system-level dependencies and creating configuration settings.

Step 2 — Installing Kubernetes’ Dependencies

In this section, you will install the operating-system-level packages required by Kubernetes with CentOS’s yum package manager. These packages are:

  • Docker - a container runtime. This is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.

  • kubeadm - a CLI tool that will install and configure the various components of a cluster in a standard way.

  • kubelet - a system service/program that runs on all nodes and handles node-level operations.

  • kubectl - a CLI tool used for issuing commands to the cluster through its API Server.

Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

  1. vi ~/kube-cluster/kube-dependencies.yml

Add the following plays to the file to install these packages to your servers:

~/kube-cluster/kube-dependencies.yml
- hosts: all
  become: yes
  tasks:
   - name: install Docker
     yum:
       name: docker
       state: present
       update_cache: true

   - name: start Docker
     service:
       name: docker
       state: started

   - name: disable SELinux
     command: setenforce 0

   - name: disable SELinux on reboot
     selinux:
       state: disabled

   - name: ensure net.bridge.bridge-nf-call-ip6tables is set to 1
     sysctl:
      name: net.bridge.bridge-nf-call-ip6tables
      value: 1
      state: present

   - name: ensure net.bridge.bridge-nf-call-iptables is set to 1
     sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: 1
      state: present

   - name: add Kubernetes' YUM repository
     yum_repository:
      name: Kubernetes
      description: Kubernetes YUM repository
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      gpgcheck: yes

   - name: install kubelet
     yum:
        name: kubelet-1.14.0
        state: present
        update_cache: true

   - name: install kubeadm
     yum:
        name: kubeadm-1.14.0
        state: present

   - name: start kubelet
     service:
       name: kubelet
       enabled: yes
       state: started

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     yum:
        name: kubectl-1.14.0
        state: present
        allow_downgrade: yes

The first play in the playbook does the following:

  • Installs Docker, the container runtime.

  • Starts the Docker service.

  • Disables SELinux since it is not fully supported by Kubernetes yet.

  • Sets a few netfilter-related sysctl values required for networking. This will allow Kubernetes to set iptables rules for receiving bridged IPv4 and IPv6 network traffic on the nodes.

  • Adds the Kubernetes YUM repository to your remote servers’ repository lists.

  • Installs kubelet and kubeadm.

  • Starts the kubelet service and enables it to start on boot.

The second play consists of a single task that installs kubectl on your master node.

Note: While the Kubernetes documentation recommends you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial.

Save and close the file when you are finished.

Next, execute the playbook:

  1. ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

On completion, you will see output similar to the following:

Output
PLAY [all] ****

TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [install Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [disable SELinux] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [disable SELinux on reboot] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [ensure net.bridge.bridge-nf-call-ip6tables is set to 1] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [ensure net.bridge.bridge-nf-call-iptables is set to 1] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [start Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [add Kubernetes' YUM repository] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubelet] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubeadm] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [start kubelet] ****
changed: [master]
changed: [worker1]
changed: [worker2]

PLAY [master] *****

TASK [Gathering Facts] *****
ok: [master]

TASK [install kubectl] ******
ok: [master]

PLAY RECAP ****
master                     : ok=9    changed=5    unreachable=0    failed=0
worker1                    : ok=7    changed=5    unreachable=0    failed=0
worker2                    : ok=7    changed=5    unreachable=0    failed=0

After execution, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl commands only from the master. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
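
If you want to confirm the installation before continuing, you can optionally run an ad-hoc Ansible command from your workspace to print the installed kubeadm version on every node (an optional check, not part of the original playbook):

  1. ansible -i hosts all -m command -a "kubeadm version"

Each node should report version 1.14.0.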

All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

Step 3 — Setting Up the Master Node

In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
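
As a point of reference, a minimal Pod manifest looks something like the following. This is an illustrative sketch only; the tutorial does not ask you to create standalone pods, and the name and image here are arbitrary:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80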

Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.

Create an Ansible playbook named master.yml on your local machine:

  1. vi ~/kube-cluster/master.yml

Add the following play to the file to initialize the cluster and install Flannel:

~/kube-cluster/master.yml
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: centos
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/centos/.kube/config
        remote_src: yes
        owner: centos

    - name: install Pod network
      become: yes
      become_user: centos
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

Here’s a breakdown of this play:

  • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm to use the same subnet.

  • The second task creates a .kube directory at /home/centos. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

  • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root centos user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

  • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of objects required for setting up Flannel in the cluster.

Save and close the file when you are finished.

Execute the playbook:

  1. ansible-playbook -i hosts ~/kube-cluster/master.yml

On completion, you will see output similar to the following:

Output
PLAY [master] ****

TASK [Gathering Facts] ****
ok: [master]

TASK [initialize the cluster] ****
changed: [master]

TASK [create .kube directory] ****
changed: [master]

TASK [copy admin.conf to user's kube config] *****
changed: [master]

TASK [install Pod network] *****
changed: [master]

PLAY RECAP ****
master                     : ok=5    changed=4    unreachable=0    failed=0

To check the status of the master node, SSH into it with the following command:

  1. ssh centos@master_ip

Once inside the master node, execute:

  1. kubectl get nodes

You will now see the following output:

Output
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1d        v1.14.0

The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.
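
While you are still on the master node, you can optionally confirm that the control plane and Flannel pods are running before adding the workers (an optional check; exact pod names and counts will vary):

  1. kubectl get pods -n kube-system

All of the listed pods should eventually reach the Running status; the coredns pods may remain Pending until the Flannel pods are up.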

Step 4 — Setting Up the Worker Nodes

Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master’s API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

Navigate back to your workspace and create a playbook named workers.yml:

  1. vi ~/kube-cluster/workers.yml

Add the following text to the file to add the workers to the cluster:

~/kube-cluster/workers.yml
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} --ignore-preflight-errors all  >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Here’s what the playbook does:

  • The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

  • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

Save and close the file when you are finished.

Execute the playbook:

  1. ansible-playbook -i hosts ~/kube-cluster/workers.yml

On completion, you will see output similar to the following:

Output
PLAY [master] ****

TASK [get join command] ****
changed: [master]

TASK [set join command] *****
ok: [master]

PLAY [workers] *****

TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]

TASK [join cluster] *****
changed: [worker1]
changed: [worker2]

PLAY RECAP *****
master                     : ok=2    changed=1    unreachable=0    failed=0
worker1                    : ok=2    changed=1    unreachable=0    failed=0
worker2                    : ok=2    changed=1    unreachable=0    failed=0
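
If you would like an additional signal that the join succeeded, you can optionally check from your local machine that the kubelet service is active on both workers (an ad-hoc check added here for convenience, not part of the original steps):

  1. ansible -i hosts workers -m command -a "systemctl is-active kubelet"

Each worker should report active.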

With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let’s verify that the cluster is working as intended.

Step 5 — Verifying the Cluster

A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let’s verify the cluster and ensure that the nodes are operating correctly.

You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

  1. ssh centos@master_ip

Then execute the following command to get the status of the cluster:

  1. kubectl get nodes

You will see output similar to the following:

Output
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1d        v1.14.0
worker1   Ready     <none>    1d        v1.14.0
worker2   Ready     <none>    1d        v1.14.0

If all of your nodes have the value Ready for STATUS, it means that they’re part of the cluster and ready to run workloads.

If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven’t finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.
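
If a node remains NotReady, inspecting it often points to the cause, such as a pod network that has not finished starting (a hedged troubleshooting suggestion; substitute the name of the affected node):

  1. kubectl describe node worker1

The Conditions and Events sections of the output usually indicate what the kubelet is waiting for.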

Now that your cluster is verified successfully, let’s schedule an example Nginx application on the cluster.

Step 6 — Running An Application on the Cluster

You can now deploy any containerized application to your cluster. To keep things familiar, let’s deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

Still within the master node, execute the following command to create a deployment named nginx:

  1. kubectl create deployment nginx --image=nginx

A deployment is a type of Kubernetes object that ensures there’s always a specified number of pods running based on a defined template, even if a pod crashes during the cluster’s lifetime. The above deployment will create a pod with one container from Docker Hub’s official Nginx image.
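
For reference, the object created by this command is roughly equivalent to the following Deployment manifest, which you could apply with kubectl apply -f instead (a sketch for illustration; the replica count and labels reflect the command’s defaults):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx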

Next, run the following command to create a service named nginx that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

  1. kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.
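
For comparison, the NodePort service created by the expose command above corresponds roughly to this manifest (again a sketch; Kubernetes picks the node port itself unless you set nodePort explicitly):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80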

Run the following command:

  1. kubectl get services

This will output text similar to the following:

Output
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000 automatically, while ensuring that the port is not already bound by another service.

To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx’s familiar welcome page.
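
If you prefer to check from the command line, you can also fetch the assigned port with kubectl and curl one of the workers from the master node (a hedged alternative to the browser test; worker_1_ip is the same placeholder used above):

  1. curl http://worker_1_ip:$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')

The response should contain the HTML of Nginx’s welcome page.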

If you would like to remove the Nginx application, first delete the nginx service from the master node:

  1. kubectl delete service nginx

Run the following to ensure that the service has been deleted:

  1. kubectl get services

You will see the following output:

Output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

Then delete the deployment:

  1. kubectl delete deployment nginx

Run the following to confirm that this worked:

  1. kubectl get deployments
Output
No resources found.

Conclusion

In this guide, you’ve successfully set up a Kubernetes cluster on CentOS 7 using Kubeadm and Ansible for automation.

If you’re wondering what to do with the cluster now that it’s set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here’s a list of links with further information that can guide you in the process:

  • Dockerizing applications - lists examples that detail how to containerize applications using Docker.

  • Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.

  • Deployments Overview - this provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.

  • Services Overview - this covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.

Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.

Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.
