Tutorial

How to Set Up a K3s Kubernetes Cluster on Ubuntu 22.04

Published on December 23, 2023

Introduction

Kubernetes is among the industry-preferred tools for container orchestration. However, setting up a Kubernetes cluster from scratch can be a daunting task that requires numerous configuration steps. There are also multiple ways to get started with a Kubernetes cluster, but most of them are time-consuming unless your goal is to establish a production-grade cluster.

To simplify Kubernetes cluster setup and make it possible to deploy clusters in remote, resource-constrained locations - which makes it a strong candidate for edge computing - Rancher Labs developed K3s. K3s is a lightweight Kubernetes distribution that lets you install a Kubernetes cluster from a single small binary within a few minutes.

In this tutorial, you will learn how to install K3s on Ubuntu and about the additional configuration options available in K3s.

Prerequisites

To complete this tutorial, you will need:

One server running Ubuntu 22.04 with a sudo-enabled non-root user. This tutorial uses a user named sammy.

Step 1 — Installing K3s

In this step, you will install the latest version of K3s on your Ubuntu machine.

Log in to your server as your sudo-enabled user (in this tutorial, sammy). If you are using password-based login, use the following command:

  ssh sammy@your_server_ip

Next, install K3s using the following command:

  curl -sfL https://get.k3s.io | sh -

You will be prompted to enter the user’s password to execute the script.

The command uses curl to download the script located at https://get.k3s.io and executes it by piping it to sh -. When the script runs, K3s is installed with the default configuration options, which creates a single-node Kubernetes cluster.
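If you prefer to inspect the installation script before running it, you can download it to a file first and then execute it yourself; the result is the same as piping it directly to sh. The filename install_k3s.sh below is an arbitrary choice:

  curl -sfL https://get.k3s.io -o install_k3s.sh
  less install_k3s.sh
  sh install_k3s.sh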

You will receive an output similar to this:


[secondary_label Output]

[INFO]  Finding release for channel stable

[INFO]  Using v1.27.7+k3s2 as release

[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.27.7+k3s2/sha256sum-amd64.txt

[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.27.7+k3s2/k3s

[INFO]  Verifying binary download

[INFO]  Installing k3s to /usr/local/bin/k3s

[INFO]  Skipping installation of SELinux RPM

[INFO]  Creating /usr/local/bin/kubectl symlink to k3s

[INFO]  Creating /usr/local/bin/crictl symlink to k3s

[INFO]  Creating /usr/local/bin/ctr symlink to k3s

[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh

[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh

[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env

[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service

[INFO]  systemd: Enabling k3s unit

Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

[INFO]  systemd: Starting k3s

...

The script output shows the steps the installation script performs to install and start the cluster. Next, check the status of the K3s service with systemctl to verify that it is running:

  systemctl status k3s

This command will show the status as active (running):


[secondary_label Output]

● k3s.service - Lightweight Kubernetes

     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)

     Active: active (running) since Mon 2023-11-27 16:52:01 UTC; 19s ago

       Docs: https://k3s.io

    Process: 8396 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exi>

    Process: 8398 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)

    Process: 8399 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)

   Main PID: 8400 (k3s-server)

      Tasks: 20

     Memory: 467.3M

        CPU: 12.952s

     CGroup: /system.slice/k3s.service

             ├─8400 "/usr/local/bin/k3s server"

             └─8421 "containerd " "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" >



...
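You can also confirm that the node has registered with the cluster and is ready to schedule workloads by listing the nodes (the exact name and version in the output will depend on your server):

  sudo kubectl get nodes

The output should show a single node with a STATUS of Ready.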

In this step, you installed K3s on Ubuntu to create a single-node Kubernetes cluster. Next, you will look at the Kubernetes objects that are deployed on the cluster by default.

Step 2 — Checking Default Kubernetes Objects

In this step, you will check the default Kubernetes objects deployed after the installation of K3s.

Execute the following command to see all the Kubernetes objects deployed in the kube-system namespace. kubectl is installed automatically as part of the K3s installation, so it does not need to be installed separately.

  sudo kubectl get all -n kube-system

You will receive an output similar to this:


[secondary_label Output]

NAME                                         READY   STATUS      RESTARTS   AGE

pod/local-path-provisioner-957fdf8bc-t8vpx   1/1     Running     0          4m34s

pod/coredns-77ccd57875-4hrd9                 1/1     Running     0          4m34s

pod/helm-install-traefik-crd-j2sqs           0/1     Completed   0          4m34s

pod/helm-install-traefik-mvxhw               0/1     Completed   1          4m34s

pod/metrics-server-648b5df564-gqxcz          1/1     Running     0          4m34s

pod/svclb-traefik-18597fcd-2cf68             2/2     Running     0          4m6s

pod/traefik-768bdcdcdd-srb8d                 1/1     Running     0          4m7s



NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE

service/kube-dns         ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       4m44s

service/metrics-server   ClusterIP      10.43.69.115    <none>           443/TCP                      4m43s

service/traefik          LoadBalancer   10.43.149.125   159.65.159.115   80:32266/TCP,443:32628/TCP   4m7s



NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE

daemonset.apps/svclb-traefik-18597fcd   1         1         1       1            1           <none>          4m7s



NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/local-path-provisioner   1/1     1            1           4m44s

deployment.apps/coredns                  1/1     1            1           4m44s

deployment.apps/metrics-server           1/1     1            1           4m44s

deployment.apps/traefik                  1/1     1            1           4m7s



NAME                                               DESIRED   CURRENT   READY   AGE

replicaset.apps/local-path-provisioner-957fdf8bc   1         1         1       4m34s

replicaset.apps/coredns-77ccd57875                 1         1         1       4m34s

replicaset.apps/metrics-server-648b5df564          1         1         1       4m34s

replicaset.apps/traefik-768bdcdcdd                 1         1         1       4m7s



NAME                                 COMPLETIONS   DURATION   AGE

job.batch/helm-install-traefik-crd   1/1           28s        4m41s

job.batch/helm-install-traefik       1/1           31s        4m41s



...

The above output shows the different objects deployed within the Kubernetes cluster. For example, four Deployments are running: coredns, local-path-provisioner, metrics-server, and traefik.
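If you want to look at one of these components more closely, you can describe it with kubectl. For example, the following command (using the traefik Deployment purely as an illustration) prints its labels, replica counts, and recent events:

  sudo kubectl describe deployment traefik -n kube-system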

If you run the command without using sudo, you may get the following error.

[secondary_label Output]
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied
...

To avoid needing sudo while executing kubectl commands, change the permissions of the config file with chmod, as shown below.

  sudo chmod 644 /etc/rancher/k3s/k3s.yaml
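Alternatively, if you would rather not make the file world-readable, a common approach is to copy the kubeconfig into your home directory and point kubectl at the copy with the KUBECONFIG environment variable. The ~/.kube/config path below is a conventional choice rather than something K3s requires:

  mkdir -p ~/.kube
  sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
  sudo chown $(id -u):$(id -g) ~/.kube/config
  export KUBECONFIG=~/.kube/config

Add the export line to your shell profile (for example, ~/.bashrc) if you want it to persist across sessions.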

In this step, you verified the status of the Kubernetes objects deployed in the K3s cluster by default. Next, you will learn about the configuration options available in K3s and how to modify them during and after installation.

Step 3 — Understanding and Modifying Configuration Options in K3s

You installed K3s using the default setup; however, it is possible to adjust the configuration to achieve specific custom behavior in the cluster. In this step, you will learn how to use environment variables to adjust the K3s configuration through the install script.

For example, the default setup installs the Traefik ingress controller. In some cases, you might want to disable the ingress controller during installation.

The environment variable INSTALL_K3S_EXEC can be used to pass flags to the K3s service. The following command disables Traefik during the K3s installation:

  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -

Check the Kubernetes objects to verify; this time, the ingress controller resources will not be present:

  sudo kubectl get all -n kube-system

The complete list of available server flags can be found in the K3s documentation at https://docs.k3s.io/cli/server.
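The install script also supports other environment variables. For example, INSTALL_K3S_VERSION pins a specific K3s release, and multiple flags can be combined in INSTALL_K3S_EXEC. The version and the extra --disable value below are only illustrative:

  curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.27.7+k3s2" INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb" sh -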

In addition to modifying the configuration options with environment variables, you can also do so through the K3s configuration file. Specify the necessary options in the configuration file and then restart the K3s service to apply the changes.

K3s uses the configuration file present at /etc/rancher/k3s/config.yaml.

Execute the following command to create and write to the configuration file.

  sudo nano /etc/rancher/k3s/config.yaml

Write the following to the config file:

  disable: traefik

Save and exit the file by pressing CTRL+X, then Y, and ENTER. The line disable: traefik instructs the K3s service to remove the resources related to the Traefik installation.
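The configuration file accepts the same options as the server command-line flags, written without the leading --. For example, a file that disables Traefik and also sets the kubeconfig permissions from Step 2 could look like the following; the write-kubeconfig-mode entry is optional and shown only as an illustration:

  write-kubeconfig-mode: "0644"
  disable: traefik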

Next, restart the K3s service using the following command to apply the changes.

  sudo systemctl restart k3s

Now, verify by listing all the Kubernetes objects in the kube-system namespace. There should no longer be any Traefik-related resources present:

  sudo kubectl get all -n kube-system

In this step, you learned how to modify the K3s configuration during and after installation. Next, you will uninstall the K3s cluster to clean up the virtual machine.

Step 4 — Uninstalling K3s

To uninstall K3s, run the shell script at /usr/local/bin/k3s-uninstall.sh. The script is generated automatically during the K3s installation and performs a full cleanup: it deletes any K3s configuration and cluster tools that were created or installed during the installation.

Execute the following command to uninstall K3s:

  /usr/local/bin/k3s-uninstall.sh

Verify the uninstallation by checking the K3s service status using the following command:

  systemctl status k3s

You will receive an output similar to this:


[secondary_label Output]

Unit k3s.service could not be found.
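As an additional check, you can confirm that the k3s binary and the kubectl, crictl, and ctr symlinks have been removed from /usr/local/bin. Assuming you have no other installation of these tools on the system, the following command prints nothing once they are gone:

  command -v k3s kubectl crictl ctr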

Conclusion

In this article, you installed a K3s cluster on Ubuntu, learned about the configuration options available in K3s, and saw how to apply them. Now that you have set up your own Kubernetes cluster, you can explore the various types of objects and their functionality in Kubernetes.
