How startups scale on DigitalOcean Kubernetes: Best Practices Part VI - Security

Posted: October 8, 2024 · 12 min read

This article is the final part of a 6-part series on DigitalOcean Kubernetes best practices.

In Part 5, we focused on disaster recovery, highlighting how to recover from disasters like hardware failures, data center outages, downtime caused by human error, and, as we’ll focus on in this part, security breaches.

Let’s picture a scenario: you’re a startup that just had a bad actor infiltrate your Kubernetes cluster and delete your database instance containing thousands of records of customer data. Thankfully, you’re following the best practices outlined in Part 5 and were able to successfully restore your customer data from backups. However, some of the customer data the hacker accessed was sensitive, so now you have to let all your customers know about the leak (which affects the perception of the business) and rotate all leaked credentials. That’s a lot of work! And if you’re not following robust security practices, it can take a long time to recover from such a breach. In this part, we’re going to review security best practices: preventative measures that make it harder for bad actors to infiltrate your cluster, and corrective measures that allow your business to recover effectively if a leak occurs.

There are three security concepts we want to highlight in this post. There are plenty more worth looking into and adopting depending on your use case, but these three are fundamental and fairly easy to adopt in a production Kubernetes environment.

Zero Trust security

Zero Trust is the idea that we cannot trust any of our services on the same network, and so all requests between services must be authenticated and authorized. For example, we shouldn’t assume the communication between an app and the database service is secure. Instead, we should assume it has been compromised and that verification is always needed before starting communication. This way of thinking is more realistic in the world of security, because it’s not about if your business will be hacked but rather when.

Least Privilege

Least Privilege is the idea of scoping permissions for a user or service down to the bare minimum required, so that when a user or service is compromised, the hacker’s exploit is limited to what that user or service can already do. While setting up granular permissions feels like one extra step that takes away time from deploying your brand new app to prod, loose permissions are the perfect gift for a hacker to abuse. One common form of Least Privilege is setting up access tokens with expiration and limited scope. When working with other DigitalOcean compute resources, you might, for example, have tokens in place that expire in 30 days and have read-only access. However, there are many other ways to limit privileges, and we’ll explore some of the most effective ways to do so in a Kubernetes cluster.

Encryption at Rest and Encryption in Transit

Encryption at Rest is the idea of ensuring your application data is actually encrypted and not stored in plain text. Encryption in Transit is the idea of ensuring your secrets are pulled into your application over a secure channel. DOKS clusters provide Encryption at Rest by enabling etcd secret encryption to protect customer data in the cluster. However, you’ll likely have application credentials stored outside your DOKS cluster as well, and that’s when it’s worth considering secret management solutions that allow for safe storage and retrieval of your secrets, as well as syncing those secrets within the cluster.

Checklist: Set up Network Policies

By default, networking within a Kubernetes cluster is pretty open: any service can reach any other service within the cluster. This is far from ideal when a hacker is able to hijack a more exposed service that normally doesn’t talk to the database service but can do so now because no network policies prevent it.

With the built-in network policies offered by Kubernetes, you can control which pods are able to talk to which, while still allowing communication across namespaces. This is done at the IP level, in that you specify which network connections (and connection types) are allowed between pods.
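
As a minimal sketch, the following built-in NetworkPolicy only allows pods labeled app: backend to reach the database pods on their PostgreSQL port (the labels, namespace, and port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: production
spec:
  # apply this policy to the database pods
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  # only accept TCP connections on port 5432 from backend pods
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 5432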

Cilium network policy support in DOKS

While built-in policies are great when starting out, a more powerful option supported by DOKS is Cilium network policies, provided by the open source Cilium CNI. These policies are more flexible because they associate pods with Cilium identities rather than IPs. Cilium identities are based on Kubernetes labels, so in a more dynamic environment where pods are restarted, the Cilium identity stays associated with the pod even if the pod gets a new IP. Managing network policies by identity rather than by IP means less overhead when scaling. Furthermore, DOKS has built-in support for Cilium Hubble, which provides a nice UI to view and monitor your network traffic in detail and get security insights.
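
For comparison, here’s the same rule sketched as a CiliumNetworkPolicy, matching on labels (Cilium identities) rather than IPs; the labels are again hypothetical:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: production
spec:
  # select the database pods; Cilium derives an identity from these labels
  endpointSelector:
    matchLabels:
      app: database
  ingress:
  # only allow traffic from pods holding the backend identity
  - fromEndpoints:
    - matchLabels:
        app: backend
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP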

Checklist: Use mTLS to encrypt and authenticate traffic

Network policies allow us to control where traffic is allowed to go in a cluster, but they don’t handle the encryption and authentication of that traffic between applications via a mechanism like TLS. One of the most effective ways to achieve this in a Kubernetes cluster is to invest in a service mesh like Istio or Linkerd.

Let’s say we have a service called Service A and a database service called Service D in our cluster. We set up network policies so that Service A is the only one able to talk to Service D. But how does Service A know that the service it’s reaching out to, Service D, is indeed the service fronting the database for this app? More importantly, how does Service D know that the service requesting information from the database is indeed Service A? It doesn’t, if the traffic is unencrypted and unauthenticated.

Man in the middle

In the situation described earlier with services A and D, it’s possible for a hacker to sit in the middle of the communication between these two services. At some point, the hacker may try to spoof Service A and start making queries to Service D just like Service A could. If Service D has no way of verifying that the service requesting information from the database is indeed Service A, then it will willingly give out information to the hacker.

This is what mutual TLS (mTLS) solves: it is the mechanism that ensures the two services, A and D, are indeed who they say they are. Both services need to verify each other’s identities with TLS and establish a secure connection before starting communication. This embraces the concept of Zero Trust, as we don’t assume either service is who it says it is until we verify with TLS. So if Service D can’t verify that the service trying to talk to it is Service A, it doesn’t establish a connection with that service, thereby protecting access to the database. Quickly setting up mTLS for all your services in the cluster is one of the key benefits of introducing a service mesh.
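
For example, if the mesh you adopt is Istio, enforcing mTLS for every workload in the mesh is a single-resource sketch (applied in Istio’s root namespace, typically istio-system):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  # STRICT mode rejects any plain-text traffic between workloads in the mesh
  mtls:
    mode: STRICT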

Benefits of using a service mesh

It should be noted that TLS encryption can be done without a service mesh, but configuring it in every service, especially as the number of services in your organization grows, can become demanding; that’s where a service mesh makes things easier.

Furthermore, one of the key benefits of a service mesh is network segmentation. This allows you to divide up networking within your cluster and control which service can talk to which. Now you might be thinking: isn’t that what network policies do? Yes, but network policies work at the IP layer while this works at the application layer. So you don’t need network policies for mapping service-to-service communication, but they can still be useful to block traffic before it reaches the workloads within the mesh.
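
To make application-layer segmentation concrete, here’s a sketch of an Istio AuthorizationPolicy (assuming Istio is your mesh; all names are hypothetical) that only lets Service A’s mTLS-verified identity reach the pods behind Service D:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: service-d-allow-service-a
  namespace: production
spec:
  # apply to the pods backing Service D
  selector:
    matchLabels:
      app: service-d
  action: ALLOW
  rules:
  # only the identity of Service A's service account may connect
  - from:
    - source:
        principals:
        - cluster.local/ns/production/sa/service-a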

Checklist: Use RBAC to limit access to the Kubernetes API

As mentioned earlier, network policies and a service mesh are great ways to limit access for service-to-service communication. However, you might have services that need to interact dynamically with Kubernetes resources (like secrets), which means they need to talk directly to the Kubernetes API. Using Kubernetes RBAC through service accounts allows you to limit that access granularly. By scoping permissions down to the bare minimum required by a service, you are adopting the idea of Least Privilege: if a service is hijacked, a hacker cannot make it do anything beyond what it is already capable of, limiting their ability to exploit it.
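
Here’s a minimal sketch: a service account bound to a Role that can only read Secrets in its own namespace and nothing more (all names are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: production
rules:
# the bare minimum this service needs: read access to secrets, nothing else
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-secret-reader
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: production
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io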

Checklist: Invest in a secret manager

When managing application secrets outside your cluster, a good secret management solution uses the latest encryption standards to keep your data secure (Encryption at Rest) and, furthermore, makes retrieving those secrets secure as well (Encryption in Transit).

It should also have a good API that makes it easy to pull the latest secret values, so your application can be injected with the credentials on every new deployment. HashiCorp Vault has become the de facto open source standard for secret management.

Quickly rotating secrets with a Kubernetes operator

Great, so you’re managing secrets in your secret manager. However, let’s say your organization’s credentials were backed up in a third-party service, and that service announced a data breach. Now you have to rotate every leaked secret in your Kubernetes cluster. This could mean manually going into the secret manager UI, updating the leaked secrets, and redeploying the changes to your cluster. A quicker, cloud-native solution is to use a secrets operator: a Kubernetes operator that syncs your secret manager with the Kubernetes secrets in your cluster. If you update a secret in the secret manager, the operator notices the change and reflects it in the Kubernetes secret - no re-deploy required! The Vault Secrets Operator (VSO) is the open source solution that achieves this for Vault.
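
As a rough sketch (assuming VSO is installed and a VaultAuth resource named vault-auth already exists), a VaultStaticSecret resource tells the operator which Vault path to keep in sync with which Kubernetes secret:

apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  vaultAuthRef: vault-auth
  # the KV v2 mount and path in Vault to watch
  mount: secret
  type: kv-v2
  path: myapp/db
  # the Kubernetes secret the operator keeps in sync
  destination:
    name: db-credentials
    create: true
  # how often to re-check Vault for rotated values
  refreshAfter: 30s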

Checklist: Secure your containers!

One thing that’s easy to overlook is the most fundamental part of your Kubernetes cluster: the containers inside the pods hosting your applications. By default, containers in Kubernetes are still very open, and most of your applications don’t need such loose permissions (remember, loose permissions are the best gift a hacker can get during an exploit). Here are some ways you can make your containers more secure:

Run as non-root

Most of your applications likely don’t need to run as the root user. Running as root makes it easier for a hacker to escape the container and access the host file system (“container breakout”). So dropping root access is what you should do for most of your pods; for the pods that do require elevated permissions to achieve certain behavior, this can still be done without root access.

Capabilities

Linux capabilities let you enable selected elevated actions for the container without running as the root user. For example, the NET_BIND_SERVICE capability allows binding to privileged ports (like port 80). On container startup, this capability is enabled, so when the process tries to bind to port 80, the kernel notices the capability and allows it to proceed even though the process is non-root.

Seccomp Profiles

To further adopt the idea of Least Privilege, seccomp (secure computing mode) profiles are an even more granular way to limit process behavior by restricting which system calls are allowed. They can be used separately or in conjunction with capabilities. For example, let’s say you want to allow a non-root process to bind to port 80, so you use the capability NET_BIND_SERVICE, but you also don’t want to allow the setsockopt system call (to prevent port reuse), so in addition you would add a seccomp profile that blocks that system call. What’s convenient about seccomp is that you don’t have to create a custom profile and hope you haven’t missed any powerful system calls. Instead, there is a default profile called RuntimeDefault which disables many powerful system calls not needed by containers. However, to be as strict as possible with your profiles, it’s a good idea to start with the default and modify it as needed to meet your use case. Furthermore, you can monitor system calls to see if there are any that should be blocked using the seccomp notifier. After identifying any additional system calls you believe should be blocked, a slow rollout of the seccomp profile is a sound way to ensure your workloads are not disrupted.

Here’s an example pod spec that applies capabilities and a seccomp profile (note that capabilities and seccompProfile are set in the container-level securityContext):

apiVersion: v1
kind: Pod
metadata:
  name: more-secure-pod
spec:
  containers:
  - name: app
    # ... image and other container fields ...
    securityContext:
      # set to a non-zero UID to run as non-root
      runAsUser: 1000
      capabilities:
        # add the capability that allows managing network settings
        add:
        - NET_ADMIN
        # drop all other capabilities so only NET_ADMIN is granted
        drop:
        - ALL
      # don't allow powerful system calls not needed by the container
      seccompProfile:
        type: RuntimeDefault

Don’t use the latest tag

In a production environment, especially where application versioning is important, you shouldn’t use the latest tag for your container image. While it is convenient during early development, if a bad actor infiltrates your container registry, it’s possible for them to push a bad image that will be pulled by your pod on your next deployment (or sooner, depending on the image pull policy).
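
For instance, instead of my-app:latest, pin an explicit version, or stricter still, the immutable image digest (the registry, names, and digest below are hypothetical):

containers:
- name: my-app
  # pin an explicit version you have tested...
  image: registry.example.com/my-app:1.4.2
  # ...or pin the exact image content by digest so a tag can't be re-pointed:
  # image: registry.example.com/my-app@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945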

Scan your images

Scanning your images is an automated way to ensure that there are no misconfigurations or vulnerabilities in the images loaded into your cluster. Usually you run image scanning in your CI/CD pipeline, but some tools also provide a Kubernetes operator that scans the workloads already running in the cluster. An image scanning tool like Trivy can be used in both ways.
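
For example, in a GitHub Actions pipeline, a scan step using Trivy’s official action might look like the following sketch (the image name is hypothetical; check the trivy-action docs for current inputs):

- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: registry.example.com/my-app:1.4.2
    # only report serious findings, and fail the build when any are found
    severity: HIGH,CRITICAL
    exit-code: "1"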

Checklist: Automating security checks with webhooks

So now you’ve applied the best practices listed above to your workloads, but what about future workloads? Is every developer going to remember to do this? Probably not. One thing you could do is write documentation on the best practices for securing your workloads, but what if someone doesn’t see the document or forgets a step? And you’ll likely have to keep it up to date. This is where automating these checks with guardrails (or “policies” to comply with) is the better way to ensure everyone at your company is deploying secure workloads and adhering to general best practices in your cluster. Open Policy Agent (OPA) is an engine that lets you define policies as code, and it can be used generically with other technologies, not just Kubernetes. This means that by itself it doesn’t tightly integrate with Kubernetes, but that’s where OPA Gatekeeper comes in.

OPA Gatekeeper

The OPA Gatekeeper project adds Kubernetes integrations like CRDs and pre-defined policies, and most importantly ships mutating and validating admission webhooks which ensure any defined policies are adhered to before a request is satisfied by the Kubernetes API server. This means any request to create a new resource is modified (by the mutating webhook) and validated (by the validating webhook) before the resource is created. If the validation fails, the developer knows their workload is not complying with all the required policies. Learn more in the OPA Gatekeeper documentation.
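
As a sketch, assuming the K8sRequiredLabels ConstraintTemplate from the Gatekeeper policy library is installed, this constraint rejects any namespace created without an owner label (the label name is hypothetical):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    # any namespace created without this label is rejected by the webhook
    labels: ["owner"]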

Conclusion

Okay, so you’ve set up network policies to restrict what types of connections and IP ranges are allowed in your cluster, a service mesh which ensures every service knows who it’s supposed to be talking to, and a secret manager for effectively managing application credentials inside and outside the cluster. Furthermore, you’ve updated your containers to follow best practices and set up webhooks that will enforce the policies outlined by your company for all future workloads. You’re on a roll!

However, this blog post is not an exhaustive list of every possible security measure you could take with your workload; rather, its goal is to highlight some of the most effective ways to secure critical parts of your DOKS workloads. So where can you find more information about securing your workloads? Reference OWASP! The Open Web Application Security Project (OWASP) specifies best practices for securing your workloads in general, whether that’s on a Kubernetes cluster or on a single machine like a Droplet. For Kubernetes specifically, there’s a great OWASP resource covering the Top 10 Security Risks in Kubernetes. Some of the risks we highlighted here are mentioned there as well.

Thank you!

Thank you for taking part in this six-part series on How SMBs and Startups Scale on DigitalOcean Kubernetes! We hope this series has been an informative step in your Kubernetes journey!
