A bare metal Kubernetes deployment is the process of setting up a Kubernetes cluster directly on bare metal servers. This approach provides unparalleled performance and control, eliminating the overhead associated with virtualization. It is ideal for workloads that demand high efficiency, such as AI/ML applications and high-performance computing tasks. In this tutorial, you will learn to set up a Kubernetes cluster on bare metal infrastructure, covering prerequisites, installation steps, networking configurations, monitoring, and best practices.
| Feature | Bare-Metal Kubernetes | VM-Based Kubernetes |
|---|---|---|
| Performance | High | Moderate |
| Overhead | Low | High |
| Flexibility | High | Moderate |
| Resource Utilization | Efficient | Inefficient |
| Scalability | High | Moderate |
| Cost | Low | High |
Bare-Metal Kubernetes is a deployment approach that sets up a Kubernetes cluster directly on bare metal servers. This approach provides high performance, low overhead, and high flexibility. It efficiently utilizes resources and is highly scalable, all at a low cost.
On the other hand, VM-based Kubernetes deploys a Kubernetes cluster on virtual machines. This approach provides moderate performance, high overhead, and moderate flexibility. It utilizes resources inefficiently and is moderately scalable, all at a high cost.
Before diving into the deployment process, it’s essential to understand the advantages of running Kubernetes on bare metal:
Enhanced Performance: Direct access to hardware resources ensures minimal latency and maximized throughput.
Resource Efficiency: Eliminating the hypervisor layer reduces overhead, allowing applications to utilize the full potential of the hardware.
Cost Savings: By optimizing resource utilization, organizations can achieve better performance-per-dollar compared to virtualized environments.
Flexibility: Full control over hardware configurations enables customization tailored to specific workload requirements.
Ensure the following prerequisites are met before proceeding:
Hardware Requirements:
Master Node: At least 4 CPUs, 16GB RAM, and 100GB SSD storage.
Worker Nodes: At least 2 CPUs, 8GB RAM, and 100GB SSD storage per node.
Operating System: Ubuntu 24.04 LTS (or later) or CentOS 9 Stream installed on all nodes.
Network Configuration:
Static IP addresses are assigned to each node.
Proper DNS settings configured.
Access:
Root or sudo privileges on each node.
Follow the steps in this section on all master and worker nodes.
Run the following command on all nodes to ensure they are up-to-date:
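A typical update sequence on Ubuntu looks like the following (on CentOS Stream, substitute `dnf`):

```shell
# Refresh the package index and apply available upgrades (Ubuntu/Debian)
sudo apt-get update && sudo apt-get upgrade -y

# On CentOS 9 Stream, use dnf instead:
# sudo dnf upgrade -y
```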
Assign a unique hostname to each node. On each node, run:
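For example, using `hostnamectl` (the hostnames shown here are placeholders; choose names that fit your own scheme):

```shell
# On the master node:
sudo hostnamectl set-hostname master-node

# On the first worker node (repeat with a unique name on each worker):
sudo hostnamectl set-hostname worker-node-1
```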
Edit `/etc/hosts` on all nodes to include the IP addresses and hostnames of all other nodes:
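The entries might look like the following; the IP addresses and hostnames below are examples and must be replaced with your own:

```
# /etc/hosts — example entries; substitute your nodes' real IPs and hostnames
192.168.1.10  master-node
192.168.1.11  worker-node-1
192.168.1.12  worker-node-2
```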
Note: If the nodes are on the same private network (for example, in the same VPC or subnet), use their private IP addresses for better security and performance. Use public IPs only when the nodes are on different networks.
Swap is a space on a disk that is used when the amount of physical RAM memory is full. When a Linux system runs out of RAM, inactive pages are moved from the RAM to the swap space. Disabling swap is recommended for Kubernetes as it can cause issues with the container runtime. To disable the swap, run the following commands:
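A common way to do this is to turn swap off for the running system and comment out any swap entries in `/etc/fstab` so it stays off after a reboot:

```shell
# Turn off swap immediately
sudo swapoff -a

# Comment out swap entries in /etc/fstab so swap stays disabled after reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab
```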
Run the following on all nodes to enable the required networking modules:
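The standard setup loads the `overlay` and `br_netfilter` kernel modules and applies the sysctl settings Kubernetes networking depends on:

```shell
# Load the kernel modules Kubernetes networking relies on, now and at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```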
A container runtime is the software responsible for running containers, and it is a crucial component of any containerized environment. Well-known container runtimes include Docker, containerd, and CRI-O. You will need to install a container runtime on all master and worker nodes.
Follow these steps on all master and worker nodes. In this tutorial, you will use the `containerd` container runtime.
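On Ubuntu, a minimal `containerd` setup looks like the following; the key step is switching the cgroup driver to `systemd`, which is what the kubelet expects on systemd-based distributions:

```shell
# Install containerd from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y containerd

# Generate a default config and switch the cgroup driver to systemd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd and enable it at boot
sudo systemctl restart containerd
sudo systemctl enable containerd
```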
On all the master and worker nodes, follow these steps to install the Kubernetes components:
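On Ubuntu, the components are installed from the official Kubernetes package repository. The commands below follow the upstream `pkgs.k8s.io` instructions; `v1.30` is shown as an example minor release, so adjust it to the version you want to run:

```shell
# Prerequisites for adding the repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings

# Add the Kubernetes apt repository signing key and source (v1.30 shown)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm, and kubectl, and pin their versions
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```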
Once the installation is complete, verify the installation by checking the versions:
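Each component reports its own version:

```shell
kubeadm version
kubectl version --client
kubelet --version
```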
Check the status of the kubelet service:
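Use `systemctl` for this. Note that it is normal for the kubelet to restart in a crash loop at this point; it settles down once the cluster is initialized with `kubeadm`:

```shell
sudo systemctl status kubelet
```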
If the service is not active, start it:
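The `--now` flag starts the service immediately and also enables it at boot:

```shell
sudo systemctl enable --now kubelet
```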
That’s it! You have now installed the Kubernetes components on your bare metal Ubuntu machines.
On the master node, initialize the Kubernetes cluster using the following command:
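A typical invocation looks like this. The `--pod-network-cidr` value must match what your CNI plugin expects; `10.244.0.0/16` is Flannel's default, which fits this tutorial since Flannel is installed later:

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```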
Note: You may encounter the following error while initializing the Kubernetes cluster with the above command on the master node:
The error message indicates that the `ip_forward` setting on your system is not enabled. Kubernetes needs this setting to forward network traffic between pods and nodes. To fix this error, enable IP forwarding.
Enable IP Forwarding Temporarily:
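The first command below enables forwarding for the current boot only; the second pair makes the setting persistent across reboots:

```shell
# Enable IP forwarding for the running system
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system
```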
This command will initialize the Kubernetes control plane and generate a `kubeconfig` file. It will also print the next steps and the command to join the worker nodes to the cluster.
After the initialization is complete, you will see a message that looks like this:
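An abbreviated, representative version of that output is shown below; the actual token and hash values will be unique to your cluster:

```
Your Kubernetes control-plane has initialized successfully!
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <MASTER_NODE_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>
```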
Note: In the above output of a successful Kubernetes control plane deployment, make a note of the `kubeadm join` command, including the `--token` and `--discovery-token-ca-cert-hash sha256:` values, as you will need them in the upcoming steps when joining worker nodes to the Kubernetes cluster.
To start using your cluster, you need to run the following as a regular user:
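These are the commands `kubeadm init` itself prints for a non-root user; they copy the admin kubeconfig into your home directory:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```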
Alternatively, if you are the `root` user, you can run:
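```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```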
You should now deploy a pod network to the cluster.
To deploy a pod network to your Kubernetes cluster, you can use network plugins like Calico, Flannel, or Weave. Here, you will use Flannel.
This command should be run on the master node.
Before installing the pod network, you need to ensure that the required Kubernetes ports are open. These ports are used by various Kubernetes services:

- Port `6443`: the default secure port for the Kubernetes API server.
- Port `10250`: the default port for the Kubelet API.
- Ports `2379-2380`: used by the `etcd` server.

To open these ports, you can use the following commands:
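For example, with `ufw` on Ubuntu (on CentOS Stream, use `firewall-cmd` equivalents):

```shell
# Open the Kubernetes control-plane ports
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw reload
```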
Now, let’s install the Pod network using the Flannel network plugin.
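Flannel is installed by applying its manifest on the master node. The URL below is the upstream `flannel-io` release manifest; pin a specific release if you need reproducible deployments:

```shell
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```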
A successful pod network deployment prints a `created` line for each Flannel resource in the manifest.
This command will deploy the Flannel pod network to your cluster. You can verify the deployment by running:
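```shell
kubectl get pods --all-namespaces
```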
You should see the Flannel pods running in the kube-system
namespace.
If you want to get your Kubernetes cluster details, you can use the following command:
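```shell
kubectl cluster-info
```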
This should give you the following output:
To join the worker nodes to the Kubernetes cluster, run the following `kubeadm join` command on each worker node.
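The command follows the template printed by `kubeadm init`; the angle-bracket placeholders are filled in as described below:

```shell
sudo kubeadm join <MASTER_NODE_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>
```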
Replace `<MASTER_NODE_IP>` with the IP address of your master node, `<TOKEN>` with the token generated during the master node setup, and `<HASH>` with the hash generated during the master node setup.
Once you have run this command on all worker nodes, you can verify that they have joined the cluster by running:
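```shell
kubectl get nodes
```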
You should see all worker nodes listed and in the `Ready` state.
In this step, you will deploy a sample application, specifically an `nginx` server. This can be done using the following command:
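```shell
kubectl create deployment nginx --image=nginx
```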
This command will create a deployment named `nginx` and use the `nginx` image.
Next, we will expose the `nginx` deployment to the outside world on port `80`. The `--type=NodePort` flag specifies that the service should be exposed on a NodePort.
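```shell
kubectl expose deployment nginx --port=80 --type=NodePort
```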
After running these commands, you can access the `nginx` server by using the IP address of any of your worker nodes and the NodePort assigned to the `nginx` service. You should see the newly created `nginx` service listed, along with its NodePort, in the output of the following command:
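```shell
kubectl get svc nginx
```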
In this example, the NodePort for the `nginx` service is `32224`. You can access the `nginx` server by using the IP address of any of your worker nodes and the NodePort. For example, if the IP address of your worker node is `10.111.19.83`, you can access the `nginx` server at `http://10.111.19.83:32224`.
Kubernetes provides a rich set of monitoring tools and integrations, with Prometheus and Grafana among the most popular.
To set up monitoring for your Kubernetes cluster, you can follow this tutorial on How to Set Up DigitalOcean Kubernetes Cluster Monitoring with Helm and Prometheus Operator.
Yes, Kubernetes can be installed on physical servers without virtualization, providing direct access to hardware resources.
The simplest way to deploy Kubernetes on bare metal is by using `kubeadm`, which automates the cluster setup process.
Yes, Kubernetes supports other container runtimes like `containerd` and `CRI-O`. Docker is no longer required since Kubernetes deprecated Docker support in v1.20.
Docker performs better on bare metal because it eliminates the overhead of a hypervisor, allowing direct access to system resources. However, VMs provide better isolation and security.
Yes, you can manually deploy applications using Kubernetes manifests (`kubectl apply -f`). However, Helm simplifies package management and application deployment.
Bare metal Kubernetes refers to running Kubernetes directly on physical machines instead of virtualized environments. It is used for enhanced performance, reduced latency, and better resource efficiency, making it ideal for AI/ML and high-performance workloads.
In cloud setups, Kubernetes nodes run on virtual machines managed by a cloud provider. Bare metal deployments require manual setup and configuration, but offer greater flexibility, cost efficiency, and performance.
You need:

- `kubeadm`, `kubelet`, and `kubectl` for cluster setup and management.
- A container runtime like `containerd` or CRI-O.
- Load balancers like MetalLB for service exposure.
Yes, but scaling requires manual provisioning of additional nodes and network configurations, unlike cloud environments where resources are automatically allocated.
For additional guidance, check out DigitalOcean DOKS Managed Kubernetes Networking.
In this comprehensive tutorial, you learned how to deploy Kubernetes on bare metal infrastructure. We covered the key aspects of setting up a Kubernetes cluster on physical machines, including the use of `kubeadm` for automation, the importance of container runtimes like `containerd` and `CRI-O`, and the role of networking plugins such as Calico and Flannel.
By following this tutorial, you should now have a solid understanding of how to deploy and manage a Kubernetes cluster on bare metal, taking advantage of the performance, control, and efficiency it offers.
Thanks for learning with the DigitalOcean Community. Check out our offerings for compute, storage, networking, and managed databases.