Along with tracing and logging, monitoring and alerting are essential components of a Kubernetes observability stack. Setting up monitoring for your Kubernetes cluster allows you to track your resource usage and analyze and debug application errors.
A monitoring system usually consists of a time-series database that houses metric data and a visualization layer. In addition, an alerting layer creates and manages alerts, handing them off to integrations and external services as necessary. Finally, one or more components generate or expose the metric data that will be stored, visualized, and processed for alerts by this monitoring stack.
One popular monitoring solution is the open-source Prometheus, Grafana, and Alertmanager stack: Prometheus collects and stores time-series metric data, Grafana visualizes that data in dashboards, and Alertmanager creates and routes alerts.
In addition, tools like kube-state-metrics and node_exporter expose cluster-level Kubernetes object metrics as well as machine-level metrics like CPU and memory usage.
Implementing this monitoring stack on a Kubernetes cluster can be complicated, but luckily some of this complexity can be managed with the Helm package manager and CoreOS’s Prometheus Operator and kube-prometheus projects. These projects bake in standard configurations and dashboards for Prometheus and Grafana, and abstract away some of the lower-level Kubernetes object definitions. The Helm `prometheus-operator` chart allows you to get a full cluster monitoring solution up and running by installing Prometheus Operator and the rest of the components listed above, along with a default set of dashboards, rules, and alerts useful for monitoring Kubernetes clusters.
In this tutorial, we will demonstrate how to install the `prometheus-operator` Helm chart on a DigitalOcean Kubernetes cluster. By the end of the tutorial, you will have installed a full monitoring stack into your cluster.
To follow this tutorial, you will need:

- A DigitalOcean Kubernetes cluster.
- The Helm package manager installed on your local machine, with Tiller installed on your cluster.
- The `kubectl` command-line interface installed on your local machine and configured to connect to your cluster. You can read more about installing and configuring `kubectl` in its official documentation.

Step 1: Creating a Custom Values File

Before we install the `prometheus-operator` Helm chart, we’ll create a custom values file that will override some of the chart’s defaults with DigitalOcean-specific configuration parameters. To learn more about overriding default chart values, consult the Helm Install section of the Helm docs.
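If you’d like to review the defaults you’ll be overriding, you can optionally dump the chart’s full default values to a local file first. A quick sketch, assuming the Helm 2 CLI this tutorial uses (`default-values.yaml` is just a local filename of your choosing):

- helm inspect values stable/prometheus-operator > default-values.yaml

You can then skim this file to see exactly which parameters our custom values change.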
To begin, create and open a file called `custom-values.yaml` on your local machine using `nano` or your favorite editor:
- nano custom-values.yaml
Copy and paste in the following custom values, which enable persistent storage for the Prometheus, Grafana, and Alertmanager components, and disable monitoring for Kubernetes control plane components not exposed on DigitalOcean Kubernetes:
# Define persistent storage for Prometheus (PVC)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: do-block-storage
          resources:
            requests:
              storage: 5Gi

# Define persistent storage for Grafana (PVC)
grafana:
  # Set password for Grafana admin user
  adminPassword: your_admin_password
  persistence:
    enabled: true
    storageClassName: do-block-storage
    accessModes: ["ReadWriteOnce"]
    size: 5Gi

# Define persistent storage for Alertmanager (PVC)
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: do-block-storage
          resources:
            requests:
              storage: 5Gi

# Change default node-exporter port
prometheus-node-exporter:
  service:
    port: 30206
    targetPort: 30206

# Disable Etcd metrics
kubeEtcd:
  enabled: false

# Disable Controller metrics
kubeControllerManager:
  enabled: false

# Disable Scheduler metrics
kubeScheduler:
  enabled: false
In this file, we override some of the default values packaged with the chart in its `values.yaml` file.

We first enable persistent storage for Prometheus, Grafana, and Alertmanager so that their data persists across Pod restarts. Behind the scenes, this defines a `5Gi` Persistent Volume Claim (PVC) for each component, using the DigitalOcean Block Storage storage class. You should modify the size of these PVCs to suit your monitoring storage needs. To learn more about PVCs, consult Persistent Volumes from the official Kubernetes docs.
Next, replace `your_admin_password` with a secure password that you’ll use to log in to the Grafana metrics dashboard with the admin user.
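If you lose this password after installation, you should be able to recover it from the Kubernetes Secret the Grafana chart creates. This is a sketch, assuming the `doks-cluster-monitoring` release name we’ll use in Step 2 and the Grafana chart’s default `admin-password` Secret key:

- kubectl get secret -n monitoring doks-cluster-monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo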
We’ll then configure a different port for node-exporter. Node-exporter runs on each Kubernetes node and provides OS and hardware metrics to Prometheus. We must change its default port to get around the DigitalOcean Kubernetes firewall defaults, which will block port 9100 but allow ports in the range 30000-32767. Alternatively, you can configure a custom firewall rule for node-exporter. To learn how, consult How to Configure Firewall Rules from the official DigitalOcean Cloud Firewalls docs.
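Once the stack is installed, you can sanity-check this port choice by fetching metrics directly from a worker node. This is an illustrative example only; `your_node_ip` is a placeholder for one of your nodes’ IP addresses, not a value from this tutorial:

- curl http://your_node_ip:30206/metrics | head

If the firewall and port configuration are correct, this should print the first lines of node-exporter’s Prometheus-format metrics.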
Finally, we’ll disable metrics collection for three Kubernetes control plane components that do not expose metrics on DigitalOcean Kubernetes: the Kubernetes Scheduler, the Controller Manager, and the etcd cluster data store.
To see the full list of configurable parameters for the `prometheus-operator` chart, consult the Configuration section from the chart repo README or the default values file.
When you’re done editing, save and close the file. We can now install the chart using Helm.
Step 2: Installing the prometheus-operator Chart

The `prometheus-operator` Helm chart will install the following monitoring components into your DigitalOcean Kubernetes cluster:

- Prometheus Operator, which manages and configures Prometheus and Alertmanager instances running in the cluster
- Prometheus, the time-series database that scrapes and stores metric data
- Alertmanager, which handles alerts generated by Prometheus and hands them off to integrations and external services
- Grafana, the visualization layer, along with a preconfigured set of cluster monitoring dashboards
- kube-state-metrics, which exposes cluster-level metrics about Kubernetes objects
- node-exporter, which runs on each node and exposes machine-level metrics like CPU and memory usage

By default, along with scraping metrics generated by node-exporter, kube-state-metrics, and the other components listed above, Prometheus will be configured to scrape metrics from core cluster components like the Kubernetes API server and the kubelets (including cAdvisor container metrics).
On your local machine, let’s begin by installing the `prometheus-operator` Helm chart and passing in the custom values file we created above:
- helm install --namespace monitoring --name doks-cluster-monitoring -f custom-values.yaml stable/prometheus-operator
Here we run `helm install` and install all components into the `monitoring` namespace, which we create at the same time. This allows us to cleanly separate the monitoring stack from the rest of the Kubernetes cluster. We name the Helm release `doks-cluster-monitoring` and pass in the custom values file we created in Step 1. Finally, we specify that we’d like to install the `prometheus-operator` chart from the Helm `stable` repository.
You should see the following output:
Output
NAME: doks-cluster-monitoring
LAST DEPLOYED: Mon Apr 22 10:30:42 2019
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                             STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS      AGE
doks-cluster-monitoring-grafana  Pending  do-block-storage  10s

==> v1/ServiceAccount
NAME                                        SECRETS  AGE
doks-cluster-monitoring-grafana             1        10s
doks-cluster-monitoring-kube-state-metrics  1        10s

. . .

==> v1beta1/ClusterRoleBinding
NAME                                                  AGE
doks-cluster-monitoring-kube-state-metrics            9s
psp-doks-cluster-monitoring-prometheus-node-exporter  9s

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=doks-cluster-monitoring"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.
This indicates that Prometheus Operator, Prometheus, Grafana, and the other components listed above have successfully been installed into your DigitalOcean Kubernetes cluster.
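You can also confirm that Helm registered the release. A quick check, assuming the Helm 2 CLI used above:

- helm ls doks-cluster-monitoring

This should list the doks-cluster-monitoring release with a DEPLOYED status.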
Following the note in the `helm install` output, check the status of the release’s Pods using `kubectl get pods`:
- kubectl --namespace monitoring get pods -l "release=doks-cluster-monitoring"
You should see the following:
Output
NAME                                                         READY  STATUS   RESTARTS  AGE
doks-cluster-monitoring-grafana-9d7f984c5-hxnw6              2/2    Running  0         3m36s
doks-cluster-monitoring-kube-state-metrics-dd8557f6b-9rl7j   1/1    Running  0         3m36s
doks-cluster-monitoring-pr-operator-9c5b76d78-9kj85          1/1    Running  0         3m36s
doks-cluster-monitoring-prometheus-node-exporter-2qvxw       1/1    Running  0         3m36s
doks-cluster-monitoring-prometheus-node-exporter-7brwv       1/1    Running  0         3m36s
doks-cluster-monitoring-prometheus-node-exporter-jhdgz       1/1    Running  0         3m36s
This indicates that all the monitoring components are up and running, and you can begin exploring Prometheus metrics using Grafana and its preconfigured dashboards.
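This is also a good moment to confirm that the persistent storage we configured in Step 1 was provisioned:

- kubectl get pvc -n monitoring

Each PVC should eventually report a Bound status once its DigitalOcean Block Storage volume has been created and attached.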
Step 3: Accessing Grafana and Exploring Metrics Data

The `prometheus-operator` Helm chart exposes Grafana as a `ClusterIP` Service, which means that it’s only accessible via a cluster-internal IP address. To access Grafana outside of your Kubernetes cluster, you can either use `kubectl patch` to update the Service in place to a public-facing type like `NodePort` or `LoadBalancer`, or use `kubectl port-forward` to forward a local port to a Grafana Pod port.
In this tutorial we’ll forward ports, but to learn more about `kubectl patch` and Kubernetes Service types, you can consult Update API Objects in Place Using kubectl patch and Services from the official Kubernetes docs.
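For reference, a patch along the following lines would switch the Grafana Service to a `NodePort` type. This is an illustrative sketch that we won’t run in this tutorial:

- kubectl patch svc doks-cluster-monitoring-grafana -n monitoring -p '{"spec": {"type": "NodePort"}}'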
Begin by listing running Services in the `monitoring` namespace:
- kubectl get svc -n monitoring
You should see the following Services:
Output
NAME                                               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
alertmanager-operated                              ClusterIP  None            <none>       9093/TCP,6783/TCP  34m
doks-cluster-monitoring-grafana                    ClusterIP  10.245.105.130  <none>       80/TCP             34m
doks-cluster-monitoring-kube-state-metrics         ClusterIP  10.245.140.151  <none>       8080/TCP           34m
doks-cluster-monitoring-pr-alertmanager            ClusterIP  10.245.197.254  <none>       9093/TCP           34m
doks-cluster-monitoring-pr-operator                ClusterIP  10.245.14.163   <none>       8080/TCP           34m
doks-cluster-monitoring-pr-prometheus              ClusterIP  10.245.201.173  <none>       9090/TCP           34m
doks-cluster-monitoring-prometheus-node-exporter   ClusterIP  10.245.72.218   <none>       30206/TCP          34m
prometheus-operated                                ClusterIP  None            <none>       9090/TCP           34m
We are going to forward local port `8000` to port `80` of the `doks-cluster-monitoring-grafana` Service, which will in turn forward to port `3000` of a running Grafana Pod. These Service and Pod ports are configured in the `stable/grafana` Helm chart values file:
- kubectl port-forward -n monitoring svc/doks-cluster-monitoring-grafana 8000:80
You should see the following output:
Output
Forwarding from 127.0.0.1:8000 -> 3000
Forwarding from [::1]:8000 -> 3000
This indicates that local port `8000` is being forwarded successfully to a Grafana Pod.
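Before switching to your browser, you can optionally verify the tunnel from another shell using Grafana’s standard health endpoint:

- curl http://localhost:8000/api/health

A small JSON response reporting a database status of ok indicates that Grafana is reachable through the tunnel.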
Visit `http://localhost:8000` in your web browser, where you should be presented with the Grafana login page.
Enter admin as the username and the password you configured in `custom-values.yaml`, then hit Log In.
You’ll be brought to the Grafana Home Dashboard.

In the left-hand navigation bar, select the Dashboards button, then click on Manage.
You’ll be brought to a dashboard management interface that lists the dashboards installed by the `prometheus-operator` Helm chart.
These dashboards are generated by `kubernetes-mixin`, an open-source project that allows you to create a standardized set of cluster monitoring Grafana dashboards and Prometheus alerts. To learn more, consult the Kubernetes Mixin GitHub repo.
Click into the Kubernetes / Nodes dashboard, which visualizes CPU, memory, disk, and network usage for a given node.
Describing each dashboard and how to use it to visualize your cluster’s metrics data goes beyond the scope of this tutorial. To learn more about the USE method for analyzing a system’s performance, you can consult Brendan Gregg’s The Utilization Saturation and Errors (USE) Method page. Google’s SRE Book is another helpful resource, in particular Chapter 6: Monitoring Distributed Systems. To learn how to build your own Grafana dashboards, check out Grafana’s Getting Started page.
In the next step, we’ll follow a similar process to connect to and explore the Prometheus monitoring system.
Step 4: Accessing Prometheus and Alertmanager

To connect to the Prometheus Pods, we once again have to use `kubectl port-forward` to forward a local port. If you’re done exploring Grafana, you can close the port-forward tunnel by hitting `CTRL-C`. Alternatively, you can open a new shell and create a new port-forward connection.
Begin by listing running Services in the `monitoring` namespace:
- kubectl get svc -n monitoring
You should see the following Services:
Output
NAME                                               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
alertmanager-operated                              ClusterIP  None            <none>       9093/TCP,6783/TCP  34m
doks-cluster-monitoring-grafana                    ClusterIP  10.245.105.130  <none>       80/TCP             34m
doks-cluster-monitoring-kube-state-metrics         ClusterIP  10.245.140.151  <none>       8080/TCP           34m
doks-cluster-monitoring-pr-alertmanager            ClusterIP  10.245.197.254  <none>       9093/TCP           34m
doks-cluster-monitoring-pr-operator                ClusterIP  10.245.14.163   <none>       8080/TCP           34m
doks-cluster-monitoring-pr-prometheus              ClusterIP  10.245.201.173  <none>       9090/TCP           34m
doks-cluster-monitoring-prometheus-node-exporter   ClusterIP  10.245.72.218   <none>       30206/TCP          34m
prometheus-operated                                ClusterIP  None            <none>       9090/TCP           34m
We are going to forward local port `9090` to port `9090` of the `doks-cluster-monitoring-pr-prometheus` Service:
- kubectl port-forward -n monitoring svc/doks-cluster-monitoring-pr-prometheus 9090:9090
You should see the following output:
Output
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
This indicates that local port `9090` is being forwarded successfully to a Prometheus Pod.
Visit `http://localhost:9090` in your web browser. You should see the Prometheus Graph page.
From here you can use PromQL, the Prometheus query language, to select and aggregate time series metrics stored in its database. To learn more about PromQL, consult Querying Prometheus from the official Prometheus docs.
In the Expression field, type `machine_cpu_cores` and hit Execute. You should see a list of time series with the metric `machine_cpu_cores` that reports the number of CPU cores on a given node. You can see which node generated the metric and which job scraped the metric in the metric labels.
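You can also run queries against Prometheus’s HTTP API through the same tunnel. As a quick sketch, this uses the standard `/api/v1/query` endpoint to evaluate a small PromQL aggregation that sums core counts across all nodes:

- curl 'http://localhost:9090/api/v1/query?query=sum(machine_cpu_cores)'

The response is a JSON document containing the evaluated result of the `sum(machine_cpu_cores)` expression.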
Finally, in the top navigation bar, click on Status and then Targets to see the list of targets Prometheus has been configured to scrape. You should see a list of targets corresponding to the list of monitoring endpoints described at the beginning of Step 2.
To learn more about Prometheus and how to query your cluster metrics, consult the official Prometheus docs.
We’ll follow a similar process to connect to Alertmanager, which manages the alerts generated by Prometheus. You can explore these alerts by clicking into Alerts in the Prometheus top navigation bar.
To connect to the Alertmanager Pods, we will once again use `kubectl port-forward` to forward a local port. If you’re done exploring Prometheus, you can close the port-forward tunnel by hitting `CTRL-C`. Alternatively, you can open a new shell and create a new port-forward connection.
Begin by listing running Services in the `monitoring` namespace:
- kubectl get svc -n monitoring
You should see the following Services:
Output
NAME                                               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
alertmanager-operated                              ClusterIP  None            <none>       9093/TCP,6783/TCP  34m
doks-cluster-monitoring-grafana                    ClusterIP  10.245.105.130  <none>       80/TCP             34m
doks-cluster-monitoring-kube-state-metrics         ClusterIP  10.245.140.151  <none>       8080/TCP           34m
doks-cluster-monitoring-pr-alertmanager            ClusterIP  10.245.197.254  <none>       9093/TCP           34m
doks-cluster-monitoring-pr-operator                ClusterIP  10.245.14.163   <none>       8080/TCP           34m
doks-cluster-monitoring-pr-prometheus              ClusterIP  10.245.201.173  <none>       9090/TCP           34m
doks-cluster-monitoring-prometheus-node-exporter   ClusterIP  10.245.72.218   <none>       30206/TCP          34m
prometheus-operated                                ClusterIP  None            <none>       9090/TCP           34m
We are going to forward local port `9093` to port `9093` of the `doks-cluster-monitoring-pr-alertmanager` Service:
- kubectl port-forward -n monitoring svc/doks-cluster-monitoring-pr-alertmanager 9093:9093
You should see the following output:
Output
Forwarding from 127.0.0.1:9093 -> 9093
Forwarding from [::1]:9093 -> 9093
This indicates that local port `9093` is being forwarded successfully to an Alertmanager Pod.
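As with Prometheus, you can optionally confirm the tunnel from another shell before opening the browser. This sketch assumes the v1 Alertmanager API shipped with Alertmanager releases of this era:

- curl http://localhost:9093/api/v1/alerts

This should return a JSON list of the alerts Alertmanager currently knows about.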
Visit `http://localhost:9093` in your web browser. You should see the Alertmanager Alerts page.
From here, you can explore firing alerts and optionally silence them. To learn more about Alertmanager, consult the official Alertmanager documentation.
Conclusion

In this tutorial, you installed a Prometheus, Grafana, and Alertmanager monitoring stack into your DigitalOcean Kubernetes cluster with a standard set of dashboards, Prometheus rules, and alerts. Since this was done using Helm, you can use `helm upgrade`, `helm rollback`, and `helm delete` to upgrade, roll back, or delete the monitoring stack. To learn more about these functions, consult How To Install Software on Kubernetes Clusters with the Helm Package Manager.
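For example, if you later adjust `custom-values.yaml`, you could apply the change to the running release. A minimal sketch, assuming the Helm 2 CLI used throughout this tutorial:

- helm upgrade doks-cluster-monitoring stable/prometheus-operator -f custom-values.yaml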
The `prometheus-operator` chart helps you get cluster monitoring up and running quickly using Helm. You may wish to build, deploy, and configure Prometheus Operator manually. To do so, consult the [Prometheus Operator](https://github.com/coreos/prometheus-operator) and [kube-prometheus](https://github.com/coreos/kube-prometheus) GitHub repos.