Tutorial

Recommended Steps to Secure a DigitalOcean Kubernetes Cluster

Published on February 29, 2020

The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.

Introduction

Kubernetes, the open-source container orchestration platform, is steadily becoming the preferred solution for automating, scaling, and managing high-availability clusters. As a result of its increasing popularity, Kubernetes security has become more and more relevant.

Considering the moving parts involved in Kubernetes and the variety of deployment scenarios, securing Kubernetes can sometimes be complex. Because of this, the objective of this article is to provide a solid security foundation for a DigitalOcean Kubernetes (DOKS) cluster. Note that this tutorial covers basic security measures for Kubernetes, and is meant to be a starting point rather than an exhaustive guide. For additional steps, see the official Kubernetes documentation.

In this guide, you will take basic steps to secure your DigitalOcean Kubernetes cluster. You will configure secure local authentication with TLS/SSL certificates, grant permissions to local users with role-based access control (RBAC), grant permissions to Kubernetes applications and deployments with service accounts, and set up resource limits with the ResourceQuota and LimitRange admission controllers.

Prerequisites

In order to complete this tutorial you will need:

  • A DigitalOcean Kubernetes (DOKS) managed cluster with 3 Standard nodes configured with at least 2 GB RAM and 1 vCPU each. For detailed instructions on how to create a DOKS cluster, read our Kubernetes Quickstart guide. This tutorial uses DOKS version 1.16.2-do.1.
  • A local client configured to manage the DOKS cluster, with a cluster configuration file downloaded from the DigitalOcean Control Panel and saved as ~/.kube/config. For detailed instructions on how to configure remote DOKS management, read our guide How to Connect to a DigitalOcean Kubernetes Cluster. In particular, you will need:
    • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation. This tutorial will use kubectl version 1.17.0-00.
    • The official DigitalOcean command-line tool, doctl. For instructions on how to install this, see the doctl GitHub page. This tutorial will use doctl version 1.36.0.

Step 1 — Enabling Remote User Authentication

After completing the prerequisites, you will end up with one Kubernetes superuser that authenticates through a predefined DigitalOcean bearer token. However, sharing those credentials is not a good security practice, since this account can cause large-scale and possibly destructive changes to your cluster. To mitigate this possibility, you can set up additional users to be authenticated from their respective local clients.

In this section, you will authenticate new users to the remote DOKS cluster from local clients using secure SSL/TLS certificates. This will be a three-step process: First, you will create Certificate Signing Requests (CSR) for each user, then you will approve those certificates directly in the cluster through kubectl. Finally, you will build each user a kubeconfig file with the appropriate certificates. For more information regarding additional authentication methods supported by Kubernetes, refer to the Kubernetes authentication documentation.

Creating Certificate Signing Requests for New Users

Before starting, check the DOKS cluster connection from the local machine configured during the prerequisites:

kubectl cluster-info

Depending on your configuration, the output will be similar to this one:

Output
Kubernetes master is running at https://a6616782-5b7f-4381-9c0f-91d6004217c7.k8s.ondigitalocean.com
CoreDNS is running at https://a6616782-5b7f-4381-9c0f-91d6004217c7.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

This means that you are connected to the DOKS cluster.

Next, create a local folder for the client certificates. For the purpose of this guide, ~/certs will be used to store all certificates:

mkdir ~/certs

In this tutorial, we will authorize a new user called sammy to access the cluster. Feel free to change this to a user of your choice. Using the SSL/TLS toolkit OpenSSL, generate a new private key for your user with the following command:

openssl genrsa -out ~/certs/sammy.key 4096

The -out flag will make the output file ~/certs/sammy.key, and 4096 sets the key as 4096-bit. For more information on OpenSSL, see our OpenSSL Essentials guide.

Now, create a certificate signing request configuration file. Open the following file with a text editor (for this tutorial, we will use nano):

nano ~/certs/sammy.csr.cnf

Add the following content into the sammy.csr.cnf file to specify in the subject the desired username as common name (CN), and the group as organization (O):

~/certs/sammy.csr.cnf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = sammy
O = developers
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth

The certificate signing request configuration file contains the user's identity and the usage parameters required for the certificate. The last line, extendedKeyUsage=serverAuth,clientAuth, will allow users to authenticate their local clients with the DOKS cluster using the certificate once it is signed.

Next, create the sammy certificate signing request:

openssl req -config ~/certs/sammy.csr.cnf -new -key ~/certs/sammy.key -nodes -out ~/certs/sammy.csr

The -config flag lets you specify the configuration file for the CSR, and -new signals that you are creating a new CSR for the key specified by -key.

You can check your certificate signing request by running the following command:

openssl req -in ~/certs/sammy.csr -noout -text

Here you pass in the CSR with -in and use -text to print out the certificate request in text.

The output will show the certificate request, the beginning of which will look like this:

Output
Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: CN = sammy, O = developers
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (4096 bit)
...

Repeat the same procedure to create CSRs for any additional users. Once you have all certificate signing requests saved in the administrator’s ~/certs folder, proceed with the next step to approve them.

Managing Certificate Signing Requests with the Kubernetes API

You can approve or deny TLS certificate requests sent to the Kubernetes API by using the kubectl command-line tool. This gives you the ability to ensure that the requested access is appropriate for the given user. In this section, you will send the certificate request for sammy and approve it.

To send a CSR to the DOKS cluster use the following command:

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: sammy-authentication
spec:
  groups:
  - system:authenticated
  request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
  - client auth
EOF

Using a Bash here document, this command uses cat to pass the certificate request to kubectl apply.

Let’s take a closer look at the certificate request:

  • name: sammy-authentication creates a metadata identifier, in this case called sammy-authentication.
  • request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n') sends the sammy.csr certificate signing request to the cluster encoded as Base64.
  • server auth and client auth specify the intended usage of the certificate. In this case, the purpose is user authentication.

The output will look similar to this:

Output
certificatesigningrequest.certificates.k8s.io/sammy-authentication created
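
Note: If your cluster runs Kubernetes 1.19 or later, the certificates.k8s.io/v1beta1 API used above has been removed in favor of certificates.k8s.io/v1, which also requires a spec.signerName field. The following is a sketch of the equivalent request for newer clusters; note that the kubernetes.io/kube-apiserver-client signer is intended for client certificates, so the server auth usage is dropped:

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: sammy-authentication
spec:
  groups:
  - system:authenticated
  request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF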

You can check certificate signing request status using the command:

kubectl get csr

Depending on your cluster configuration, the output will be similar to this:

Output
NAME                   AGE   REQUESTOR       CONDITION
sammy-authentication   37s   your_DO_email   Pending

Next, approve the CSR by using the command:

kubectl certificate approve sammy-authentication

You will get a message confirming the operation:

Output
certificatesigningrequest.certificates.k8s.io/sammy-authentication approved

Note: As an administrator you can also deny a CSR by using the command kubectl certificate deny sammy-authentication. For more information about managing TLS certificates, please read Kubernetes official documentation.

Now that the CSR is approved, you can download the signed certificate to the local machine by running:

kubectl get csr sammy-authentication -o jsonpath='{.status.certificate}' | base64 --decode > ~/certs/sammy.crt

This command decodes the Base64 certificate for proper usage by kubectl, then saves it as ~/certs/sammy.crt.
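
If you would like to confirm what was signed, you can optionally inspect the certificate with OpenSSL. This is only a sanity check and is not required for the rest of the tutorial:

openssl x509 -in ~/certs/sammy.crt -noout -subject -issuer -dates

The subject should contain the CN and O values you set in the CSR configuration file.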

With the sammy signed certificate in hand, you can now build the user’s kubeconfig file.

Building Remote Users' Kubeconfigs

Next, you will create a specific kubeconfig file for the sammy user. This will give you more control over the user’s access to your cluster.

The first step in building a new kubeconfig is making a copy of the current kubeconfig file. For the purpose of this guide, the new kubeconfig file will be called config-sammy:

cp ~/.kube/config ~/.kube/config-sammy

Next, edit the new file:

nano ~/.kube/config-sammy

Keep the first eight lines of this file, as they contain the necessary information for the SSL/TLS connection with the cluster. Then, starting from the users parameter, replace the text so that the file looks similar to the following:

config-sammy
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: certificate_data
  name: do-nyc1-do-cluster
contexts:
- context:
    cluster: do-nyc1-do-cluster
    user: sammy
  name: do-nyc1-do-cluster
current-context: do-nyc1-do-cluster
kind: Config
preferences: {}
users:
- name: sammy
  user:
    client-certificate: /home/your_local_user/certs/sammy.crt
    client-key: /home/your_local_user/certs/sammy.key

Note: For both client-certificate and client-key, use the absolute path to their corresponding certificate location. Otherwise, kubectl will produce an error.

Save and exit the file.
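
Alternatively, if you prefer not to edit YAML by hand, you can build the same file with kubectl config subcommands. The following is a minimal sketch that assumes the cluster and context in your copied file are named do-nyc1-do-cluster, as in the example above; adjust the names to match your own cluster:

kubectl --kubeconfig="$HOME/.kube/config-sammy" config set-credentials sammy \
  --client-certificate="$HOME/certs/sammy.crt" \
  --client-key="$HOME/certs/sammy.key"
kubectl --kubeconfig="$HOME/.kube/config-sammy" config set-context do-nyc1-do-cluster \
  --cluster=do-nyc1-do-cluster --user=sammy
kubectl --kubeconfig="$HOME/.kube/config-sammy" config use-context do-nyc1-do-cluster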

You can test the new user connection using kubectl cluster-info:

kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy cluster-info

You will see an error similar to this:

Output
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Error from server (Forbidden): services is forbidden: User "sammy" cannot list resource "services" in API group "" in the namespace "kube-system"

This error is expected because the user sammy has no authorization to list any resource on the cluster yet. Granting authorization to users will be covered in the next step. For now, the output is confirming that the SSL/TLS connection was successful and the sammy authentication credentials were accepted by the Kubernetes API.

Step 2 — Authorizing Users Through Role Based Access Control (RBAC)

Once a user is authenticated, the API server determines its permissions using Kubernetes' built-in Role-Based Access Control (RBAC) model. RBAC is an effective method of restricting user rights based on the role assigned to them. From a security point of view, RBAC allows setting fine-grained permissions to limit users from accessing sensitive data or executing superuser-level commands. For more detailed information regarding user roles, refer to the Kubernetes RBAC documentation.

In this step, you will use kubectl to assign the predefined role edit to the user sammy in the default namespace. In a production environment, you may want to use custom roles and/or custom role bindings.
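
For reference, a custom role is declared as a Role object listing the verbs allowed per resource, and is then attached to a user with a RoleBinding. The following manifest is a minimal sketch only; the pod-reader role and its rules are illustrative and are not used elsewhere in this tutorial:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
# Allow read-only access to pods and their logs in the default namespace.
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: sammy-pod-reader
subjects:
# Bind the custom role to the sammy user authenticated in Step 1.
- kind: User
  name: sammy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io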

Granting Permissions

In Kubernetes, granting permissions means assigning the desired role to a user. Assign edit permissions to the user sammy in the default namespace using the following command:

kubectl create rolebinding sammy-edit-role --clusterrole=edit --user=sammy --namespace=default

This will give output similar to the following:

Output
rolebinding.rbac.authorization.k8s.io/sammy-edit-role created

Let’s analyze this command in more detail:

  • create rolebinding sammy-edit-role creates a new role binding, in this case called sammy-edit-role.
  • --clusterrole=edit references the predefined edit ClusterRole, which is defined at cluster scope.
  • --user=sammy specifies what user to bind the role to.
  • --namespace=default grants the user role permissions within the specified namespace, in this case default.

Next, verify the user's permissions by asking the API whether sammy can get pods in the default namespace. RBAC authorization is working as expected if the answer is yes:

kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy auth can-i get pods

You will get the following output:

Output
yes

Now that you have assigned permissions to sammy, you can practice revoking those permissions in the next section.

Revoking Permissions

Revoking permissions in Kubernetes is done by removing the user role binding.

For this tutorial, delete the edit role from the user sammy by running the following command:

kubectl delete rolebinding sammy-edit-role

You will get the following output:

Output
rolebinding.rbac.authorization.k8s.io "sammy-edit-role" deleted

Verify that the user's permissions were revoked as expected by listing the pods in the default namespace:

kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy --namespace=default get pods

You will receive the following error:

Output
Error from server (Forbidden): pods is forbidden: User "sammy" cannot list resource "pods" in API group "" in the namespace "default"

This shows that the authorization has been revoked.
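
If you want a broader view, recent versions of kubectl can also list everything a user is still allowed to do in a namespace. This is optional and shown here only as a quick way to double-check the result:

kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy auth can-i --list --namespace=default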

From a security standpoint, the Kubernetes authorization model gives cluster administrators the flexibility to change users' rights on demand as required. Moreover, role-based access control is not limited to physical users; you can also grant and revoke permissions for cluster services, as you will learn in the next section.

For more information about RBAC authorization and how to create custom roles, please read the official documentation.

Step 3 — Managing Application Permissions with Service Accounts

As mentioned in the previous section, RBAC authorization mechanisms extend beyond human users. Non-human cluster users, such as applications, services, and processes running inside pods, authenticate with the API server using what Kubernetes calls service accounts. When a pod is created within a namespace, you can either let it use the default service account or you can define a service account of your choice. The ability to assign individual SAs to applications and processes gives administrators the freedom of granting or revoking permissions as required. Moreover, assigning specific SAs to production-critical applications is considered a best security practice. Since service accounts are used for authentication, and thus for RBAC authorization checks, cluster administrators could contain security threats by changing service account access rights and isolating the offending process.

To demonstrate service accounts, this tutorial will use an Nginx web server as a sample application.

Before assigning a particular SA to your application, you need to create the SA. Create a new service account called nginx-sa in the default namespace:

kubectl create sa nginx-sa

You will get:

Output
serviceaccount/nginx-sa created

Verify that the service account was created by running the following:

kubectl get sa

This will give you a list of your service accounts:

Output
NAME       SECRETS   AGE
default    1         22h
nginx-sa   1         80s

Now you will assign a role to the nginx-sa service account. For this example, grant nginx-sa the same permissions as the sammy user:

kubectl create rolebinding nginx-sa-edit \
--clusterrole=edit \
--serviceaccount=default:nginx-sa \
--namespace=default

Running this will yield the following:

Output
rolebinding.rbac.authorization.k8s.io/nginx-sa-edit created

This command uses the same format as for the user sammy, except for the --serviceaccount=default:nginx-sa flag, where you assign the nginx-sa service account in the default namespace.

Check that the role binding was successful using this command:

kubectl get rolebinding

This will give the following output:

Output
NAME            AGE
nginx-sa-edit   23s
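
You can also confirm what the service account itself is allowed to do by impersonating it with your administrator credentials. This check is optional and assumes your admin user is permitted to impersonate other identities, which is the case for the cluster-admin credentials used so far:

kubectl auth can-i get pods --namespace=default --as=system:serviceaccount:default:nginx-sa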

Once you’ve confirmed that the role binding for the service account was successfully configured, you can assign the service account to an application. Assigning a particular service account to an application will allow you to manage its access rights in real-time and therefore enhance cluster security.

For the purpose of this tutorial, an nginx deployment will serve as the sample application. Create it and specify the nginx-sa service account with the following command:

kubectl run nginx --image=nginx --port 80 --serviceaccount="nginx-sa"

The first portion of the command creates a new deployment running an nginx web server on port 80, and the last portion, --serviceaccount="nginx-sa", indicates that its pod should use the nginx-sa service account and not the default SA.

This will give you output similar to the following:

Output
deployment.apps/nginx created
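
Note: On newer kubectl releases, the --serviceaccount flag for kubectl run has been deprecated and later removed. If the flag is not available on your client, you can express the same idea declaratively by setting serviceAccountName in the pod template of a Deployment manifest. The following is a sketch of an equivalent manifest, not an exact reproduction of what kubectl run generates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      # Run the pod under the nginx-sa service account instead of the default SA.
      serviceAccountName: nginx-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

You could save this to a file (for example, a hypothetical nginx-deployment.yaml) and create it with kubectl apply -f.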

Verify that the new application is using the service account by using kubectl describe:

kubectl describe deployment nginx

This will output a lengthy description of the deployment parameters. Under the Pod Template section, you will see output similar to this:

Output
...
Pod Template:
  Labels:           run=nginx
  Service Account:  nginx-sa
...

In this section, you created the nginx-sa service account in the default namespace and assigned it to the nginx webserver. Now you can control nginx permissions in real-time by changing its role as needed. You can also group applications by assigning the same service account to each one and then make bulk changes to permissions. Finally, you could isolate critical applications by assigning them a unique SA.

Summing up, the idea behind assigning roles to your applications/deployments is to fine-tune permissions. In real-world production environments, you may have several deployments requiring different permissions, ranging from read-only to full administrative privileges. Using RBAC gives you the flexibility to restrict access to the cluster as needed.

Next, you will set up admission controllers to control resources and safeguard against resource starvation attacks.

Step 4 — Setting Up Admission Controllers

Kubernetes admission controllers are optional plug-ins that are compiled into the kube-apiserver binary to broaden security options. Admission controllers intercept requests after they pass the authentication and authorization phase. Once the request is intercepted, admission controllers execute the specified code just before the request is applied.

While the outcome of either an authentication or authorization check is a boolean that allows or denies the request, admission controllers can be much more diverse. Admission controllers can validate requests in the same manner as authentication, but can also mutate or change the requests and modify objects before they are admitted.

In this step, you will use the ResourceQuota and LimitRange admission controllers to protect your cluster by validating and mutating requests that could otherwise contribute to resource starvation or a Denial-of-Service attack. The ResourceQuota admission controller allows administrators to restrict computing resources, storage resources, and the quantity of any object within a namespace, while the LimitRange admission controller limits the amount of resources used by individual containers. Using these two admission controllers together will protect your cluster from attacks that render your resources unavailable.

To demonstrate how ResourceQuota works, you will implement a few restrictions in the default namespace. Start by creating a new ResourceQuota object file:

nano resource-quota-default.yaml

Add in the following object definition to set constraints for resource consumption in the default namespace. You can adjust the values as needed depending on your nodes’ physical resources:

resource-quota-default.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-default
spec:
  hard:
    pods: "2"
    requests.cpu: "500m"
    requests.memory: 1Gi
    limits.cpu: "1000m"
    limits.memory: 2Gi
    configmaps: "5"
    persistentvolumeclaims: "2"
    replicationcontrollers: "10"
    secrets: "3"
    services: "4"
    services.loadbalancers: "2"

This definition uses the hard keyword to set hard constraints, such as the maximum number of pods, configmaps, PersistentVolumeClaims, ReplicationControllers, secrets, services, and loadbalancers. It also sets constraints on compute resources, like:

  • requests.cpu, which sets the maximum CPU value of requests in milliCPU, or one thousandth of a CPU core.
  • requests.memory, which sets the maximum memory value of requests in bytes.
  • limits.cpu, which sets the maximum CPU value of limits in milliCPUs.
  • limits.memory, which sets the maximum memory value of limits in bytes.

Save and exit the file.

Now, create the object in the namespace running the following command:

kubectl create -f resource-quota-default.yaml --namespace=default

This will yield the following:

Output
resourcequota/resource-quota-default created

Notice that you are using the -f flag to indicate to Kubernetes the location of the ResourceQuota file and the --namespace flag to specify which namespace will be updated.

Once the object has been created, your ResourceQuota will be active. You can check the default namespace quotas with describe quota:

kubectl describe quota --namespace=default

The output will look similar to this, with the hard limits you set in the resource-quota-default.yaml file:

Output
Name:                   resource-quota-default
Namespace:              default
Resource                Used  Hard
--------                ----  ----
configmaps              0     5
limits.cpu              0     1
limits.memory           0     2Gi
persistentvolumeclaims  0     2
pods                    1     2
replicationcontrollers  0     10
requests.cpu            0     500m
requests.memory         0     1Gi
secrets                 2     3
services                1     4
services.loadbalancers  0     2

ResourceQuotas are expressed in absolute units, so adding additional nodes will not automatically increase the values defined here. If more nodes are added, you will need to manually edit the values to apportion the resources proportionally. ResourceQuotas can be modified as often as you need, and can be deleted like any other Kubernetes object if you no longer need to constrain the namespace.
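
Keep in mind that once a ResourceQuota constrains requests.cpu, requests.memory, limits.cpu, or limits.memory, every new pod in the namespace must declare those values explicitly, or receive them from a LimitRange default (which you will configure later in this step); otherwise the API server rejects the pod. As an illustration only, a pod that fits within the quota above might declare its resources like this (the quota-example name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: quota-example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      # Requests and limits stay within the namespace totals set by the quota.
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        cpu: 500m
        memory: 1Gi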

If you need to modify a particular ResourceQuota, update the corresponding .yaml file and apply the changes using the following command:

kubectl apply -f resource-quota-default.yaml --namespace=default

For more information regarding the ResourceQuota Admission Controller, refer to the official documentation.

Now that your ResourceQuota is set up, you will move on to configuring the LimitRange admission controller. While the ResourceQuota enforces aggregate limits on a namespace, the LimitRange enforces per-container constraints by validating requests and mutating them with default values where needed.

In a similar way to before, start by creating the object file:

nano limit-range-default.yaml

Now, you can use the LimitRange object to restrict resource usage as needed. Add the following content as an example of a typical use case:

limit-range-default.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-default
spec:
  limits:
  - max:
      cpu: "400m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "100Mi"
    default:
      cpu: "250m"
      memory: "800Mi"
    defaultRequest:
      cpu: "150m"
      memory: "256Mi"
    type: Container

The sample values used in limit-range-default.yaml restrict container memory to a maximum of 1Gi and limit CPU usage to a maximum of 400m, which is a Kubernetes metric equivalent to 400 milliCPU, meaning the container is limited to using a little under half of a core.

Next, deploy the object to the API server using the following command:

kubectl create -f limit-range-default.yaml --namespace=default

This will give the following output:

Output
limitrange/limit-range-default created

Now you can check the new limits with the following command:

kubectl describe limits --namespace=default

Your output will look similar to this:

Output
Name:       limit-range-default
Namespace:  default
Type        Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---   ---------------  -------------  -----------------------
Container   cpu       100m   400m  150m             250m           -
Container   memory    100Mi  1Gi   256Mi            800Mi          -

To see LimitRanger in action, deploy a standard nginx container with the following command:

kubectl run nginx --image=nginx --port=80 --restart=Never

This will give the following output:

Output
pod/nginx created

Check how the admission controller mutated the container by running the following command:

kubectl get pod nginx -o yaml

This will give many lines of output. Look in the container specification section to find the resource limits specified in the LimitRange Admission Controller:

Output
...
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        cpu: 250m
        memory: 800Mi
      requests:
        cpu: 150m
        memory: 256Mi
...

This would be the same as if you manually declared the resources and requests in the container specification.

In this step, you used the ResourceQuota and LimitRange admission controllers to protect against malicious attacks toward your cluster's resources. For more information about the LimitRange admission controller, read the official documentation.

Conclusion

Throughout this guide, you configured a basic Kubernetes security template. This established user authentication and authorization, application privileges, and cluster resource protection. Combining all the suggestions covered in this article, you will have a solid foundation for a production Kubernetes cluster deployment. From there, you can start hardening individual aspects of your cluster depending on your scenario.

If you would like to learn more about Kubernetes, check out our Kubernetes resource page, or follow our Kubernetes for Full-Stack Developers self-guided course.
