
Getting Started with Kubernetes: A kubectl Cheat Sheet

Published on September 19, 2019

Introduction

Kubectl is a command-line tool designed to manage Kubernetes objects and clusters. It provides an interface for performing common operations like creating and scaling Deployments, switching contexts, and accessing a shell in a running container.

How to Use This Guide:

  • This guide is in cheat sheet format with self-contained command-line snippets.
  • It is not an exhaustive list of kubectl commands, but contains many common operations and use cases. For a more thorough reference, consult the Kubectl Reference Docs.
  • Jump to any section that is relevant to the task you are trying to complete.

Prerequisites

To follow this guide, you will need access to a Kubernetes cluster, whether self-managed or provided by a cloud provider's managed Kubernetes offering, as well as a local machine on which to install kubectl.

Sample Deployment

To demonstrate some of the operations and commands in this cheat sheet, we’ll use a sample Deployment that runs 2 replicas of Nginx:

nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Copy and paste this manifest into a file called nginx-deployment.yaml.

Installing kubectl

Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to install kubectl on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

First, update your local package index and install required dependencies:

  1. sudo apt-get update && sudo apt-get install -y apt-transport-https

Then add the Google Cloud GPG key to APT and make the kubectl package available to your system:

  1. curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  2. echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  3. sudo apt-get update

Finally, install kubectl:

  1. sudo apt-get install -y kubectl

Test that the installation succeeded using version:

  1. kubectl version
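
If you haven't configured access to a cluster yet, you can restrict the check to the locally installed client with the --client flag:

  1. kubectl version --client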

Setting Up Shell Autocompletion

Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to set up autocompletion on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

kubectl includes a shell autocompletion script that you can make available to your system’s existing shell autocompletion software.

Installing kubectl Autocompletion

First, check if you have bash-completion installed:

  1. type _init_completion

You should see some script output.

Next, source the kubectl autocompletion script in your ~/.bashrc file:

  1. echo 'source <(kubectl completion bash)' >>~/.bashrc
  2. . ~/.bashrc

Alternatively, you can add the completion script to the /etc/bash_completion.d directory (note that you will need root privileges to write to this directory):

  1. kubectl completion bash >/etc/bash_completion.d/kubectl

Usage

To use the autocompletion feature, press the TAB key to display available kubectl commands:

  1. kubectl TAB TAB
Output
annotate apply autoscale completion cordon delete drain explain kustomize options port-forward rollout set uncordon api-resources attach certificate config cp describe . . .

You can also display available commands after partially typing a command:

  1. kubectl d TAB
Output
delete describe diff drain

Connecting, Configuring and Using Contexts

Connecting

To test that kubectl can authenticate with and access your Kubernetes cluster, use cluster-info:

  1. kubectl cluster-info

If kubectl can successfully authenticate with your cluster, you should see the following output:

Output
Kubernetes master is running at https://kubernetes_master_endpoint
CoreDNS is running at https://coredns_endpoint

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl is configured using kubeconfig configuration files. By default, kubectl will look for a file called config in the $HOME/.kube directory. To change this, you can set the $KUBECONFIG environment variable to a custom kubeconfig file, or pass in the custom file at execution time using the --kubeconfig flag:

  1. kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file
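
If you'd rather set the environment variable than pass the flag each time, you can export KUBECONFIG for your current shell session:

  1. export KUBECONFIG=path_to_your_kubeconfig_file
  2. kubectl cluster-info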

Note: If you’re using a managed Kubernetes cluster, your cloud provider should have made its kubeconfig file available to you.

If you don’t want to use the --kubeconfig flag with every command, and there is no existing ~/.kube/config file, create a directory called ~/.kube in your home directory if it doesn’t already exist, then copy the kubeconfig file into it, renaming it to config:

  1. mkdir ~/.kube
  2. cp your_kubeconfig_file ~/.kube/config

Now, run cluster-info once again to test your connection.

Modifying your kubectl Configuration

You can also modify your configuration using the set of kubectl config subcommands.

To view your kubectl configuration, use the view subcommand:

  1. kubectl config view
Output
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
. . .

Modifying Clusters

To fetch a list of clusters defined in your kubeconfig, use get-clusters:

  1. kubectl config get-clusters
Output
NAME
do-nyc1-sammy

To add a cluster to your config, use the set-cluster subcommand:

  1. kubectl config set-cluster new_cluster --server=server_address --certificate-authority=path_to_certificate_authority

To delete a cluster from your config, use delete-cluster:

Note: This only deletes the cluster from your config and does not delete the actual Kubernetes cluster.

  1. kubectl config delete-cluster cluster_name

Modifying Users

You can perform similar operations for users using set-credentials:

  1. kubectl config set-credentials username --client-certificate=/path/to/cert/file --client-key=/path/to/key/file

To delete a user from your config, you can run unset:

  1. kubectl config unset users.username

Contexts

A context in Kubernetes is an object that contains a set of access parameters for your cluster. It consists of a cluster, namespace, and user triple. Contexts allow you to quickly switch between different sets of cluster configuration.
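
For reference, a context entry in a kubeconfig file has roughly the following structure (the names below match the sample cluster shown in this guide's output; the default Namespace is assumed):

contexts:
- context:
    cluster: do-nyc1-sammy
    namespace: default
    user: do-nyc1-sammy-admin
  name: do-nyc1-sammy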

To see your current context, you can use current-context:

  1. kubectl config current-context
Output
do-nyc1-sammy

To see a list of all configured contexts, run get-contexts:

  1. kubectl config get-contexts
Output
CURRENT   NAME            CLUSTER         AUTHINFO              NAMESPACE
*         do-nyc1-sammy   do-nyc1-sammy   do-nyc1-sammy-admin

To set a context, use set-context:

  1. kubectl config set-context context_name --cluster=cluster_name --user=user_name --namespace=namespace

You can switch between contexts with use-context:

  1. kubectl config use-context context_name
Output
Switched to context "do-nyc1-sammy"

And you can delete a context with delete-context:

  1. kubectl config delete-context context_name

Using Namespaces

A Namespace in Kubernetes is an abstraction that allows you to subdivide your cluster into multiple virtual clusters. By using Namespaces you can divide cluster resources among multiple teams and scope objects appropriately. For example, you can have a prod Namespace for production workloads, and a dev Namespace for development and test workloads.

To fetch and print a list of all the Namespaces in your cluster, use get namespace:

  1. kubectl get namespace
Output
NAME              STATUS   AGE
default           Active   2d21h
kube-node-lease   Active   2d21h
kube-public       Active   2d21h
kube-system       Active   2d21h

To set a Namespace for your current context, use set-context --current:

  1. kubectl config set-context --current --namespace=namespace_name
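
To verify the Namespace attached to your current context, one quick check is to filter the minified config output with grep:

  1. kubectl config view --minify | grep namespace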

To create a Namespace, use create namespace:

  1. kubectl create namespace namespace_name
Output
namespace/sammy created

Similarly, to delete a Namespace, use delete namespace:

Warning: Deleting a Namespace will delete everything in the Namespace, including running Deployments, Pods, and other workloads. Only run this command if you’re sure you’d like to kill whatever’s running in the Namespace or if you’re deleting an empty Namespace.

  1. kubectl delete namespace namespace_name

To fetch all Pods in a given Namespace or to perform other operations on resources in a given Namespace, make sure to include the --namespace flag:

  1. kubectl get pods --namespace=namespace_name
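
If you'd like to list Pods across every Namespace in the cluster at once, you can use the --all-namespaces flag instead:

  1. kubectl get pods --all-namespaces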

Managing Kubernetes Resources

General Syntax

The general syntax for most kubectl management commands is:

  1. kubectl command type name flags

Where

  • command is an operation you’d like to perform, like create
  • type is the Kubernetes resource type, like deployment
  • name is the resource’s name, like app_frontend
  • flags are any optional flags you’d like to include

For example, the following command retrieves information about a Deployment named app_frontend:

  1. kubectl get deployment app_frontend

Declarative Management and kubectl apply

The recommended approach to managing workloads on Kubernetes is to rely on the cluster’s declarative design as much as possible. This means that instead of running a series of commands to create, update, delete, and restart running Pods, you should define the workloads, services, and systems you’d like to run in YAML manifest files, and provide these files to Kubernetes, which will handle the rest.

In practice, this means using the kubectl apply command, which applies a particular configuration to a given resource. If the target resource doesn’t exist, then Kubernetes will create the resource. If the resource already exists, then Kubernetes will save the current revision and update the resource according to the new configuration. This declarative approach exists in contrast to the imperative approach of running the kubectl create, kubectl edit, and kubectl scale set of commands to manage resources. To learn more about the different ways of managing Kubernetes resources, consult Kubernetes Object Management from the Kubernetes docs.

Rolling out a Deployment

For example, to deploy the sample Nginx Deployment to your cluster, use apply and provide the path to the nginx-deployment.yaml manifest file:

  1. kubectl apply -f nginx-deployment.yaml
Output
deployment.apps/nginx-deployment created

The -f flag is used to specify a filename or URL containing a valid configuration. If you’d like to apply a kustomization directory (one containing a kustomization.yaml file and the manifests it references), you can use the -k flag:

  1. kubectl apply -k manifests_dir

You can track the rollout status using rollout status:

  1. kubectl rollout status deployment/nginx-deployment
Output
Waiting for deployment "nginx-deployment" rollout to finish: 1 of 2 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

An alternative to rollout status is the kubectl get command, along with the -w (watch) flag:

  1. kubectl get deployment -w
Output
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/2     2            0           3s
nginx-deployment   1/2     2            1           3s
nginx-deployment   2/2     2            2           3s

Using rollout pause and rollout resume, you can pause and resume the rollout of a Deployment:

  1. kubectl rollout pause deployment/nginx-deployment
Output
deployment.extensions/nginx-deployment paused
  1. kubectl rollout resume deployment/nginx-deployment
Output
deployment.extensions/nginx-deployment resumed

Modifying a Running Deployment

If you’d like to modify a running Deployment, you can make changes to its manifest file and then run kubectl apply again to apply the update. For example, we’ll modify the nginx-deployment.yaml file to change the number of replicas from 2 to 3:

nginx-deployment.yaml
. . .
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
. . .

The kubectl diff command allows you to see a diff between currently running resources, and the changes proposed in the supplied configuration file:

  1. kubectl diff -f nginx-deployment.yaml

Now allow Kubernetes to perform the update using apply:

  1. kubectl apply -f nginx-deployment.yaml

Running another get deployment should confirm the addition of a third replica.

If you run apply again without modifying the manifest file, Kubernetes will detect that no changes were made and won’t perform any action.

Using rollout history, you can see a list of the Deployment’s previous revisions:

  1. kubectl rollout history deployment/nginx-deployment
Output
deployment.extensions/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
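
To inspect the details of a particular revision before reverting to it, pass the --revision flag to rollout history:

  1. kubectl rollout history deployment/nginx-deployment --revision=1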

With rollout undo, you can revert a Deployment to any of its previous revisions:

  1. kubectl rollout undo deployment/nginx-deployment --to-revision=1

Deleting a Deployment

To delete a running Deployment, use kubectl delete:

  1. kubectl delete -f nginx-deployment.yaml
Output
deployment.apps "nginx-deployment" deleted

Imperative Management

You can also use a set of imperative commands to directly manipulate and manage Kubernetes resources.

Creating a Deployment

Use create to create an object from a file, URL, or STDIN. Note that unlike apply, if an object with the same name already exists, the operation will fail. The --dry-run flag allows you to preview the result of the operation without actually performing it:

  1. kubectl create -f nginx-deployment.yaml --dry-run
Output
deployment.apps/nginx-deployment created (dry-run)

We can now create the object:

  1. kubectl create -f nginx-deployment.yaml
Output
deployment.apps/nginx-deployment created
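
If you'd rather generate a starting manifest than write one from scratch, you can combine create with --dry-run and -o yaml to print a skeleton Deployment without creating anything (the name nginx here is only an example):

  1. kubectl create deployment nginx --image=nginx --dry-run -o yaml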

Modifying a Running Deployment

Use scale to scale the number of replicas for the Deployment from 2 to 4:

  1. kubectl scale --replicas=4 deployment/nginx-deployment
Output
deployment.extensions/nginx-deployment scaled

You can edit any object in-place using kubectl edit. This will open up the object’s manifest in your default editor:

  1. kubectl edit deployment/nginx-deployment

You should see the following manifest file in your editor:

nginx-deployment
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
. . . 
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
. . .

Change the replicas value from 4 to 2, then save and close the file.

Now run a get to inspect the changes:

  1. kubectl get deployment/nginx-deployment
Output
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           6m40s

We’ve successfully scaled the Deployment back down to 2 replicas on-the-fly. You can update most of a Kubernetes object’s fields in a similar manner.

Another useful command for modifying objects in-place is kubectl patch. Using patch, you can update an object’s fields on-the-fly without having to open up your editor. patch also allows for more complex updates with various merging and patching strategies. To learn more about these, consult Update API Objects in Place Using kubectl patch.

The following command will patch the nginx-deployment object to update the replicas field from 2 to 4; deploy is shorthand for the deployment object.

  1. kubectl patch deploy nginx-deployment -p '{"spec": {"replicas": 4}}'
Output
deployment.extensions/nginx-deployment patched

We can now inspect the changes:

  1. kubectl get deployment/nginx-deployment
Output
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   4/4     4            4           18m
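
patch also accepts other patch types; for example, a JSON patch (shown here purely as an illustration) replaces a specific field path rather than merging an object:

  1. kubectl patch deploy nginx-deployment --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value": 2}]'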

You can also create a Deployment imperatively using the run command. At the time this guide was written, run created a Deployment from an image provided as a parameter (note that in newer versions of kubectl, run creates a single Pod instead, and the --replicas flag has been removed):

  1. kubectl run nginx-deployment --image=nginx --port=80 --replicas=2

The expose command lets you quickly expose a running Deployment with a Kubernetes Service, allowing connections from outside your Kubernetes cluster:

  1. kubectl expose deploy nginx-deployment --type=LoadBalancer --port=80 --name=nginx-svc
Output
service/nginx-svc exposed

Here we’ve exposed the nginx-deployment Deployment as a LoadBalancer Service, opening up port 80 to external traffic and directing it to container port 80. We name the service nginx-svc. Using the LoadBalancer Service type, a cloud load balancer is automatically provisioned and configured by Kubernetes. To get the Service’s external IP address, use get:

  1. kubectl get svc nginx-svc
Output
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.245.26.242   203.0.113.0   80:30153/TCP   22m

You can access the running Nginx containers by navigating to EXTERNAL-IP in your web browser.
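
You can also test the Service from the command line with curl, substituting the EXTERNAL-IP value from the previous output:

  1. curl http://203.0.113.0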

Inspecting Workloads and Debugging

There are several commands you can use to get more information about workloads running in your cluster.

Inspecting Kubernetes Resources

kubectl get fetches a given Kubernetes resource and displays some basic information associated with it:

  1. kubectl get deployment -o wide
Output
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
nginx-deployment   4/4     4            4           29m   nginx        nginx    app=nginx

Since we did not provide a Deployment name or Namespace, kubectl fetches all Deployments in the current Namespace. The -o flag sets the output format; here, the wide format adds columns like CONTAINERS, IMAGES, and SELECTOR.
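
The -o flag accepts other output formats as well; for example, yaml or json will print the full object, which is useful for inspecting fields that the default output omits:

  1. kubectl get deployment nginx-deployment -o yaml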

In addition to get, you can use describe to fetch a detailed description of the resource and associated resources:

  1. kubectl describe deploy nginx-deployment
Output
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Wed, 11 Sep 2019 12:53:42 -0400
Labels:                 run=nginx-deployment
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx-deployment
. . .

The set of information presented will vary depending on the resource type. You can also use this command without specifying a resource name, in which case information will be provided for all resources of that type in the current Namespace.

explain allows you to quickly pull configurable fields for a given resource type:

  1. kubectl explain deployment.spec

By appending additional fields you can dive deeper into the field hierarchy:

  1. kubectl explain deployment.spec.template.spec

Gaining Shell Access to a Container

To gain shell access into a running container, use exec. First, find the Pod that contains the running container you’d like access to:

  1. kubectl get pod
Output
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-8859878f8-7gfw9   1/1     Running   0          109m
nginx-deployment-8859878f8-z7f9q   1/1     Running   0          109m

Let’s exec into the first Pod. Since this Pod has only one container, we don’t need to use the -c flag to specify which container we’d like to exec into.

  1. kubectl exec -i -t nginx-deployment-8859878f8-7gfw9 -- /bin/bash
Output
root@nginx-deployment-8859878f8-7gfw9:/#

You now have shell access to the Nginx container. The -i flag passes STDIN to the container, and -t gives you an interactive TTY. The -- double-dash acts as a separator for the kubectl command and the command you’d like to run inside the container. In this case, we are running /bin/bash.

To run commands inside the container without opening a full shell, omit the -i and -t flags, and substitute the command you’d like to run instead of /bin/bash:

  1. kubectl exec nginx-deployment-8859878f8-7gfw9 -- ls
Output
bin boot dev etc home lib lib64 media . . .

Fetching Logs

Another useful command is logs, which prints logs for Pods and containers, including terminated containers.

To stream logs to your terminal output, you can use the -f flag:

  1. kubectl logs -f nginx-deployment-8859878f8-7gfw9
Output
10.244.2.1 - - [12/Sep/2019:17:21:33 +0000] "GET / HTTP/1.1" 200 612 "-" "203.0.113.0" "-"
2019/09/16 17:21:34 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "203.0.113.0", referrer: "http://203.0.113.0"
. . .

This command will keep running in your terminal until interrupted with a CTRL+C. You can omit the -f flag if you’d like to print log output and exit immediately.

You can also use the -p flag to fetch logs for a terminated container. When this option is used within a Pod that had a prior running container instance, logs will print output from the terminated container:

  1. kubectl logs -p nginx-deployment-8859878f8-7gfw9

The -c flag allows you to specify the container you’d like to fetch logs from, if the Pod has multiple containers. You can use the --all-containers=true flag to fetch logs from all containers in the Pod.
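
For example, since the container in the sample Deployment's manifest is named nginx, you could request that container's logs explicitly:

  1. kubectl logs nginx-deployment-8859878f8-7gfw9 -c nginx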

Port Forwarding and Proxying

To gain network access to a Pod, you can use port-forward:

  1. sudo kubectl port-forward pod/nginx-deployment-8859878f8-7gfw9 80:80
Output
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

In this case we use sudo because local port 80 is a protected port. For most other ports you can omit sudo and run the kubectl command as your system user.

Here we forward local port 80 (preceding the colon) to the Pod’s container port 80 (after the colon).

You can also use deploy/nginx-deployment as the resource type and name to forward to. If you do this, the local port will be forwarded to the Pod selected by the Deployment.
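
For example, to forward an unprivileged local port (no sudo required) to the Deployment's container port:

  1. kubectl port-forward deploy/nginx-deployment 8080:80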

The proxy command can be used to access the Kubernetes API server locally:

  1. kubectl proxy --port=8080
Output
Starting to serve on 127.0.0.1:8080

In another shell, use curl to explore the API:

  1. curl http://localhost:8080/api/
Output
{ "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "203.0.113.0:443" } ]

Close the proxy by hitting CTRL+C.

Conclusion

This guide covers some of the more common kubectl commands you may use when managing a Kubernetes cluster and workloads you’ve deployed to it.

You can learn more about kubectl by consulting the official Kubernetes reference documentation.

There are many more commands and variations that you may find useful as part of your work with kubectl. To learn more about all of your available options, you can run:

  1. kubectl --help
