Kubernetes admission controllers are plugins that govern and enforce how the cluster is used. They can be thought of as a gatekeeper that intercepts (authenticated) API requests and may change the request object or deny the request altogether. For more information, visit: https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/
Creating admission controllers may involve editing kube-apiserver or setting up a webhook service. You cannot directly access or modify the kube-apiserver configuration in DOKS because the control plane is fully managed by DigitalOcean.
Unlike setting up a webhook service, using a policy engine offers a Kubernetes-native, declarative approach to defining and enforcing admission policies. It eliminates the need to build, deploy, and maintain webhook servers.
Admission controllers are important for maintaining policy enforcement, resource management, and security within Kubernetes clusters. However, they can also introduce challenges such as performance overhead, unintended resource denials, complexity, and the potential for outages. Proper design, testing, documentation, and error handling can mitigate many of these adverse effects.
- kubectl get clusterpolicy
- kubectl get clusterpolicy \<policy-name\> -o yaml
The following instructions use Kyverno, a policy engine designed specifically for Kubernetes.
- brew install kustomize
- kustomize build https://github.com/kyverno/policies/pod-security | kubectl apply -f -
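The kustomize build above installs a set of Kyverno ClusterPolicy resources. As a rough, simplified sketch of what one of them looks like (modeled on disallow-privileged-containers; not the exact upstream manifest), a ClusterPolicy has this general shape:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Audit   # report violations; Enforce would block the request instead
  background: true                 # also scan existing resources, not only new API requests
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged mode is disallowed. securityContext.privileged must be unset or set to false."
        pattern:
          spec:
            containers:
              # =() is a Kyverno conditional anchor: if the field is present, it must match
              - =(securityContext):
                  =(privileged): "false"
```

The upstream policy also covers initContainers and ephemeralContainers; the sketch above is trimmed to the core rule.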
These Kyverno policies are based on the Kubernetes Pod Security Standards definitions.
According to the Kubernetes documentation, the Pod Security Standards define three policies (Privileged, Baseline, and Restricted) that broadly cover the security spectrum. These policies are cumulative and range from highly permissive to highly restrictive. The Privileged policy is purposely open and entirely unrestricted. It is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.
The Privileged policy is defined by an absence of restrictions. If you define a Pod where the Privileged security policy applies, the Pod you define is able to bypass typical container isolation mechanisms. For example, you can define a Pod that has access to the node’s host network.
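As an illustration, a hypothetical Pod manifest like the following (name and image are placeholders) relies on the Privileged policy being allowed, because it both joins the host network and runs a privileged container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example        # placeholder name
spec:
  hostNetwork: true               # shares the node's host network namespace
  containers:
    - name: app
      image: nginx                # placeholder image
      securityContext:
        privileged: true          # bypasses most container isolation on the node
```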
Restricting the admission of privileged pods in DOKS clusters is important because privileged pods bypass standard container isolation and can gain broad access to the underlying node.
Restricting the admission of privileged pods in DOKS clusters can have some negative impacts, especially in use cases where elevated permissions are necessary.
Ensure the DOKS cluster is restricting privileged containers by running:
- kubectl get clusterpolicy
This lists all cluster policies. The desired output includes the following:

| NAME | ADMISSION | BACKGROUND | VALIDATE ACTION | READY | AGE | MESSAGE |
| --- | --- | --- | --- | --- | --- | --- |
| disallow-privileged-containers | true | true | Audit | True | 60m | Ready |
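Because the policy runs with the Audit action, violations are reported rather than blocked. Depending on your Kyverno version, you can review the recorded results through the policy report resources Kyverno generates, for example:

- kubectl get policyreports -A
- kubectl get clusterpolicyreports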
The following instructions use Kyverno, a policy engine designed specifically for Kubernetes.
Install Kyverno by following the Kyverno Installation Guide.
Install Kustomize:
- brew install kustomize
- kustomize build https://github.com/kyverno/policies/pod-security | kubectl apply -f -
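The pod-security policies are applied with the Audit action shown in the table above. If you want disallow-privileged-containers to block requests rather than only report them, one option is to switch its validation action to Enforce, for example (depending on your Kyverno version, the accepted value may be Enforce or enforce):

- kubectl patch clusterpolicy disallow-privileged-containers --type merge -p '{"spec":{"validationFailureAction":"Enforce"}}'

Test the change in a non-production cluster first, since enforcement can deny workloads that legitimately require elevated permissions.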
A container running with the `allowPrivilegeEscalation` flag set to `true` may have processes that can gain more privileges than their parent.
There should be at least one admission control policy defined that does not permit containers to allow privilege escalation. The allowPrivilegeEscalation option defaults to true so that set-user-id binaries can run.
If you need to run containers which use set-user-id binaries or require privilege escalation, this should be defined in a separate policy and you should carefully check to ensure that only limited service accounts and users are given permission to use that policy.
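For workloads that do not need it, the flag can be set explicitly in the Pod spec. A minimal hypothetical example (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-escalation-example          # placeholder name
spec:
  containers:
    - name: app
      image: nginx                     # placeholder image
      securityContext:
        allowPrivilegeEscalation: false   # child processes cannot gain more privileges than the parent
```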
Restricting privilege escalation is important because it prevents processes inside a container from gaining more privileges than the parent process that started them.
Disabling privilege escalation may limit the ability of applications to perform certain functions that require elevated privileges.
Ensure the DOKS cluster is disallowing privilege escalation by running:
- kubectl get clusterpolicy
This lists all cluster policies. The desired output includes the following:

| NAME | ADMISSION | BACKGROUND | VALIDATE ACTION | READY | AGE | MESSAGE |
| --- | --- | --- | --- | --- | --- | --- |
| disallow-privilege-escalation | true | true | Audit | True | 60m | Ready |
The following instructions use Kyverno, a policy engine designed specifically for Kubernetes.
- brew install kustomize
- kustomize build https://github.com/kyverno/policies/pod-security | kubectl apply -f -
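For reference, the core of the disallow-privilege-escalation policy applied above is a validate rule that requires the flag to be explicitly false. A simplified sketch (not the exact upstream manifest, which also covers initContainers and ephemeralContainers):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privilege-escalation
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: privilege-escalation
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privilege escalation is disallowed. allowPrivilegeEscalation must be set to false."
        pattern:
          spec:
            containers:
              - securityContext:
                  allowPrivilegeEscalation: "false"
```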
Follow the Audit Procedure to ensure disallow-privilege-escalation is enabled.
During an upgrade, the control plane is replaced with a new control plane running the new version of Kubernetes. This process takes a few minutes, during which API access to the cluster is unavailable, but workloads are not impacted.
Once the control plane is replaced, the worker nodes are replaced in a rolling fashion, one worker pool at a time. DOKS uses the following replacement process for the worker nodes: Kubernetes reschedules each worker node’s workload, then replaces the node with a new node running the new version and reattaches any DigitalOcean Block Storage volumes to the new nodes. The new worker nodes have new IP addresses.

As nodes are upgraded, workloads may experience downtime if there is no additional capacity to host the node’s workload during the replacement. If you enable surge upgrades, up to 10 new nodes for a given node pool are created up front before the existing nodes of that node pool start getting drained. Since everything happens concurrently, one node stalling the drain process doesn’t stop the other nodes from proceeding. However, because only one pool is upgraded at a time, DOKS doesn’t move to the next node pool until the current node pool finishes.
Upgrading a DOKS cluster is important for several reasons, including access to new Kubernetes features, ongoing bug fixes, and security patches.
Upgrades may create downtime, so we recommend enabling surge upgrades on existing clusters. Any data stored on the local disks of the worker nodes is lost in the upgrade process, so we recommend using persistent volumes for data storage and not relying on local disks for anything other than temporary data.
Visit the Overview tab of the cluster in the control panel. You will see a View Available Upgrade button if there is a new version available for your cluster.
Review the How to Upgrade DOKS Clusters to Newer Versions documentation for on-demand and automated upgrades.
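If you prefer working from the command line, doctl can also check for and apply upgrades. A minimal sketch, assuming doctl is installed and authenticated, and that example-cluster is a placeholder for your cluster name:

- doctl kubernetes cluster get-upgrades example-cluster
- doctl kubernetes cluster upgrade example-cluster --version \<version-slug\>

The version slug must be one of the values returned by the get-upgrades command.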