The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.
Kubernetes is a popular way to host websites and other services that benefit from its reliability and scalability. As more websites interact with sensitive data, such as personal information or passwords, browsers are starting to require that all websites use TLS to secure their traffic. However, it can be difficult to manage all the moving parts required to host a TLS-based site, from acquiring TLS certificates to renewing those certificates on time and configuring your server to use them.
Fortunately, there are services you can run in your Kubernetes cluster to manage a lot of this complexity for you. You can use Traefik Proxy (pronounced like “traffic”) as a network proxy with cert-manager as the service that acquires and manages secure certificates. Using these services with Let’s Encrypt, a provider of free and automated secure certificates, reduces the burden of managing your certificates, typically to the point where you only need to do the initial setup.
In this tutorial, you will set up cert-manager, Traefik, and Let’s Encrypt in your Kubernetes cluster, along with an example website service, to acquire, renew, and use secure certificates with your website automatically.
To complete this tutorial, you will need:

- A Kubernetes cluster with your connection configured as the kubectl default. If you need to create a cluster, DigitalOcean has a Kubernetes Quickstart.
- kubectl for interacting with your cluster. See the product documentation for installing kubectl on Linux, macOS, and Windows.
- doctl installed and configured, along with a DigitalOcean Personal Access Token with read and write access, which cert-manager will use to complete DNS challenges. To set this up, see our product documentation for How To Install and Configure doctl.
- Experience deploying an image to your cluster with kubectl. To get started, follow our tutorial, Build and Deploy Your First Image to Your First Cluster.
- A domain name, referred to throughout this tutorial as your_domain. You can purchase a domain name from Namecheap, get one for free with Freenom, or use the domain registrar of your choice.

Traditionally, when setting up secure certificates for a website, you would need to generate a certificate signing request and pay a trusted certificate authority to generate a certificate for you. You would then need to configure your web server to use that certificate and remember to go through that same process every year to keep your certificates up-to-date.
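That manual workflow can be sketched with openssl. This is only an illustration of the old process, not a step in this tutorial, and your_domain is a placeholder, not a real host:

```shell
# Sketch of the traditional manual step: generate a private key and a
# certificate signing request (CSR) that you would send to a certificate
# authority. your_domain is a placeholder value.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout your_domain.key -out your_domain.csr \
  -subj "/CN=your_domain"

# Inspect the CSR's subject to confirm which domain it targets.
openssl req -in your_domain.csr -noout -subject
```

With Let's Encrypt and cert-manager, every one of these steps is automated away.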
However, with the creation of Let’s Encrypt in 2014, it’s now possible to acquire free certificates through an automated process. These certificates are only valid for a few months instead of a year, though, so using an automated system to renew those certificates is a requirement. To handle that, you’ll use cert-manager, a service designed to run in Kubernetes that automatically manages the lifecycle of your certificates.
In this section, you will set up cert-manager to run in your cluster in its own cert-manager
namespace.
First, install cert-manager using kubectl
with cert-manager’s release file:
- kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
By default, cert-manager will install in its own namespace named cert-manager
. As the file is applied, a number of resources will be created in your cluster, which will appear in your output (some of the output is removed due to length):
Output
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
# some output excluded
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
In this section, you installed cert-manager to manage your secure certificates. Now, you need to set up a way to tell cert-manager how you want your certificates to be issued. In the next section, you’ll set up a Let’s Encrypt issuer in your cluster.
Using a secure certificate for your website is a way to tell your users they can trust that the site they’re viewing came from your servers. To do this, the certificate authority must validate that you own the domain the certificate is for. Let’s Encrypt does this by using a standard called ACME, which uses challenges to prove you own the domain you’re generating a certificate for. cert-manager supports both DNS and HTTP challenges for various providers, but in this tutorial, you’ll use the DNS-01 challenge with DigitalOcean’s DNS provider.
In this section, you will create a ClusterIssuer
for your cluster to tell cert-manager how to issue certificates from Let’s Encrypt and which credentials to use to complete the DNS challenges required by Let’s Encrypt.
Note: This tutorial assumes you are using DigitalOcean for your DNS provider and configures the ClusterIssuer
with that assumption. cert-manager supports a number of different cloud providers for both HTTP and DNS challenges, so the same concepts can be applied to them.
For more information about other providers supported by cert-manager, see the ACME Introduction in cert-manager’s documentation.
Before you create the ClusterIssuer
for your cluster, you’ll want to create a directory for your cluster configuration. Use the mkdir
command to create a directory and then cd
to enter that directory:
- mkdir tutorial-cluster-config
- cd tutorial-cluster-config
Once you’ve created your directory, you’ll need the Personal Access Token for DNS access that you created as part of this tutorial’s prerequisites. A DigitalOcean access token will look similar to dop_v1_4321...
with a long string of numbers.
To store your access token as a secret in Kubernetes, you’ll need to base-64 encode it. To do this, you can use the echo
command to pipe your token to the base64
command, replacing the highlighted portion with your access token:
- echo -n 'dop_v1_4321...' | base64
This command will send your access token from echo
to the base64
command to encode it. The -n
option ensures that a new line isn’t included at the end. Depending on your access token, you will receive output similar to the following:
Output
ZG9wX3YxX3RoaXNpc25vdGFyZWFsdG9rZW5idXRpbXB1dHRpbmdhYnVuY2hvZnN0dWZmaW5oZXJlc29sZW5ndGhzbWF0Y2g=
This output is your base-64 encoded access token. Copy this because you’ll be using it next.
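If you want to double-check the encoding before storing it, you can round-trip it locally. This is an optional sanity check using a made-up token value; it also shows why the -n flag matters:

```shell
# Hypothetical token value for illustration; substitute your real access token.
token='dop_v1_example'

# Without -n, echo appends a newline, which gets encoded into the secret and
# produces an invalid token value. Note the two encodings differ:
echo "$token" | base64
printf '%s' "$token" | base64

# Round-trip check: decoding the correct encoding returns the original token.
printf '%s' "$token" | base64 --decode
```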
Using nano
or your favorite editor, create and open a new file called lets-encrypt-do-dns.yaml
:
- nano lets-encrypt-do-dns.yaml
Add the following code to create a Kubernetes Secret
. Be sure to use your base-64 encoded access token in the access-token
field:
apiVersion: v1
kind: Secret
metadata:
  namespace: cert-manager
  name: lets-encrypt-do-dns
data:
  access-token: ZG9wX3Y...
This Secret
will be called lets-encrypt-do-dns
and is stored in the namespace cert-manager
. In the data
section, you include the base-64 encoded access-token
you created earlier. This Secret securely stores the access token you will reference when creating the Let’s Encrypt issuer.
Next, save your file and apply it to the cluster using kubectl apply
:
- kubectl apply -f lets-encrypt-do-dns.yaml
In the output, you’ll receive a message that your secret has been created in the cluster:
Output
secret/lets-encrypt-do-dns created
Now, create a new file named lets-encrypt-issuer.yaml
to contain cert-manager’s ClusterIssuer
, which you’ll use to issue your Let’s Encrypt certificates:
- nano lets-encrypt-issuer.yaml
Add the following lines, entering your email address in the spec.acme.email
field (this is the address Let’s Encrypt will associate with the certificates it provides):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    email: your_email_address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-issuer-account-key
    solvers:
      - selector: {}
        dns01:
          digitalocean:
            tokenSecretRef:
              name: lets-encrypt-do-dns
              key: access-token
In the first two lines, the apiVersion
and kind
say this Kubernetes resource is a cert-manager ClusterIssuer
. Next, you name it letsencrypt-issuer
. In this case, you didn’t include a namespace
field because the resource is a Cluster
resource, meaning it applies to the entire cluster instead of a single namespace.
Next, in the spec
section, you define the acme
challenge section to tell cert-manager this ClusterIssuer
should use ACME to issue certificates using the letsencrypt-issuer
. The email
is your email address to which Let’s Encrypt will send any certificate-related communications, such as renewal reminders if there’s a problem and cert-manager doesn’t renew them in time. The server
field specifies the URL to contact for requesting the ACME challenges and is set to the production Let’s Encrypt URL. After the server
field, you include the privateKeySecretRef
field with the name of the secret that cert-manager will use to store its generated private key for your cluster.
One of the most important sections in the spec.acme
section is the solvers
section. In this section, you configure the ACME challenge solvers you want to use for the letsencrypt-issuer
. In this case, you include a single solver, the dns01
solver. The first part of the solver configuration, the selector
, is configured to be {}
, which means “anything.” If you wanted to use different solvers for other certificates in your cluster, you could set up additional selectors in the same issuer. You can find more information about how to do this in cert-manager’s ACME Introduction.
Inside the dns01
section, you add a digitalocean
section to say this issuer should use DigitalOcean as the DNS-01 solver. If you are using a different cloud provider, this is where you would configure the other provider. Inside this section, you include a tokenSecretRef
to reference the lets-encrypt-do-dns
access-token
field of the Secret
you created earlier. cert-manager will use this access token when creating DNS records on your behalf.
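Let's Encrypt rate-limits its production endpoint, so while you're experimenting it can help to have a second ClusterIssuer that points at the Let's Encrypt staging environment, which issues untrusted test certificates with much higher limits. A sketch, reusing the same Secret; the issuer and key names here are illustrative:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    email: your_email_address
    # Staging endpoint: issues untrusted test certificates, but with
    # much higher rate limits than production.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-issuer-account-key
    solvers:
      - selector: {}
        dns01:
          digitalocean:
            tokenSecretRef:
              name: lets-encrypt-do-dns
              key: access-token
```

Once your setup works end-to-end against staging, you can switch back to the production issuer.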
Once you’ve saved your issuer file, apply it to the cluster using kubectl apply
:
- kubectl apply -f lets-encrypt-issuer.yaml
The output will confirm that the ClusterIssuer
, named letsencrypt-issuer
, has been created:
Output
clusterissuer.cert-manager.io/letsencrypt-issuer created
In this section, you set up cert-manager and configured it to issue certificates from Let’s Encrypt. However, no certificates are being requested, nothing is serving your website, and you don’t have a website service running in your cluster. In the next section, you’ll set up Traefik as the proxy between the outside world and your websites.
Traefik is an open-source proxy service designed to integrate with Kubernetes for website traffic and other network traffic coming in and out of your cluster. As your network traffic grows, you may want to increase the number of Traefik instances running in your cluster to spread out resource usage across different Kubernetes nodes. To use a single address to refer to multiple service instances like this, you can use a load balancer to accept the network connections and send them to the different Traefik instances, in effect balancing the network traffic load.
In this section, you’ll install Traefik into your cluster and prepare it to be used with the certificates managed by cert-manager and the website you’ll add in Step 5. You will also set up a load balancer, which will send incoming network traffic to your Traefik service from outside your cluster, as well as prepare you to handle multiple instances of Traefik, should you choose to run them.
First, create a namespace called traefik
where you’ll install Traefik. To do this, open a file named traefik-ns.yaml
:
- nano traefik-ns.yaml
Enter a Kubernetes Namespace
resource:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
After saving your file, apply it to your cluster using kubectl apply
:
- kubectl apply -f traefik-ns.yaml
Once your command runs, the cluster’s output will confirm that the namespace has been created:
Output
namespace/traefik created
After creating the traefik
namespace, you will install the Traefik service itself. For this, you’ll use a utility called Helm. Helm is a package manager for Kubernetes that makes installing Kubernetes services similar to installing an app on your computer. In Helm, a package is called a chart.
First, you’ll need to add the traefik
Helm repository to your available repositories, which will allow Helm to find the traefik
package:
- helm repo add traefik https://helm.traefik.io/traefik
Once the command completes, you’ll receive confirmation that the traefik
repository has been added to your computer’s Helm repositories:
Output
"traefik" has been added to your repositories
Next, update your chart repositories:
- helm repo update
The output will confirm that the traefik
chart repository has been updated:
Output
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈
Finally, install traefik
into the traefik
namespace you created in your cluster:
- helm install --namespace=traefik traefik traefik/traefik
There are a lot of traefik
s in this command, so let’s go over what each one does. The first traefik
in your command, with --namespace=traefik
, tells Helm to install Traefik in the traefik
namespace you created earlier. Next, the second traefik is the name you want to give to this installation of Traefik in your cluster. This way, if you have multiple installations of Traefik in the same cluster, you can give them different names, such as traefik-website1
and traefik-website2
. Since you’ll only have one Traefik installation in your cluster right now, you can just use the name traefik
. The third traefik (the part before the /) is the repository you added earlier and want to install from. Finally, the last traefik
is the name of the chart you want to install.
Once you run the command, output similar to the following will print to the screen:
NAME: traefik
LAST DEPLOYED: Sun Oct 2 16:32:57 2022
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
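The installation above uses the chart's defaults. If you later want more than one Traefik instance behind the load balancer, chart values can be overridden at install time with a values file. A hedged sketch: the deployment.replicas key assumes the current layout of the official traefik chart, so confirm it with helm show values traefik/traefik before relying on it:

```yaml
# values.yaml -- hypothetical override file; pass it with:
#   helm install --namespace=traefik -f values.yaml traefik traefik/traefik
deployment:
  # Run two Traefik pods; the LoadBalancer Service spreads traffic across them.
  replicas: 2
```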
Once the Helm chart is installed, Traefik will begin starting up on your cluster. To see whether Traefik is up and running, run kubectl get all
to see all the Traefik resources created in the traefik
namespace:
- kubectl get -n traefik all
Your output will appear similar to the output below:
Output
NAME READY STATUS RESTARTS AGE
pod/traefik-858bb8459f-k4ztp 1/1 Running 0 94s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/traefik LoadBalancer 10.245.77.251 <pending> 80:31981/TCP,443:30188/TCP 94s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/traefik 1/1 1 1 94s
NAME DESIRED CURRENT READY AGE
replicaset.apps/traefik-858bb8459f 1 1 1 94s
Depending on your cluster and when you ran the previous command, some of the names and ages may be different. If you see <pending>
under EXTERNAL-IP
for your service/traefik
, keep running the kubectl get -n traefik all
command until an IP address is listed. The EXTERNAL-IP
is the IP address the load balancer is available from on the internet. Once an IP address is listed, make note of that IP address as your traefik_ip_address
. You’ll use this address in the next section to set up your domain.
In this section, you installed Traefik into your cluster and have an EXTERNAL-IP
you can direct your website traffic to. In the next section, you’ll make the changes to your DNS to send traffic from your domain to the load balancer.
Now that you have Traefik set up in your cluster and accessible on the internet with a load balancer, you’ll need to update your domain’s DNS to point to your Traefik load balancer. Before you continue, be sure your domain is added to your DigitalOcean account. cert-manager will need to be able to update DNS settings for your domain using the access token you set up earlier. You’ll use doctl
to set up your domain’s DNS records to point to Traefik’s load balancer.
Note: This section assumes you’re using DigitalOcean as your DNS host. If you’re using a DNS host other than DigitalOcean, you’ll still create the same DNS record types with the same values, but you’ll need to refer to your DNS host’s documentation for how to add them.
First, create a DNS A
record for your domain named tutorial-proxy.your_domain
that points to your traefik_ip_address
:
- doctl compute domain records create your_domain --record-name tutorial-proxy --record-type A --record-data traefik_ip_address
A DNS A
record tells the DNS to point a given hostname to a specific IP address. In this case, tutorial-proxy.your_domain
will point to traefik_ip_address
. So, if someone requests the website at tutorial-proxy.your_domain
, the DNS servers will direct them to traefik_ip_address
.
After running the command, you’ll receive confirmation that your record has been created:
Output
ID Type Name Data Priority Port TTL Weight
12345678 A tutorial-proxy traefik_ip_address 0 0 1800 0
Now, create a CNAME
-type DNS record named tutorial-service.your_domain
and direct it to tutorial-proxy.your_domain
. Since you’ll likely have several services running in your cluster at some point, using an A
record to point each domain to your Traefik proxy could be a lot of work if you ever need to change your proxy’s IP address. Using a CNAME
tells DNS to use the address of the domain it’s pointing to. In this case, the domain is tutorial-proxy.your_domain
, so you only need to update your one A
record to point to a new IP address instead of multiple A
records.
To create the CNAME
record, use the doctl
command again. Be sure to include the trailing period (.
) in --record-data
:
- doctl compute domain records create your_domain --record-name tutorial-service --record-type CNAME --record-data tutorial-proxy.your_domain.
This will create your tutorial-service.your_domain
CNAME
DNS record pointing to tutorial-proxy.your_domain
. Now, when someone requests tutorial-service.your_domain
, the DNS server will tell them to connect to the IP address tutorial-proxy.your_domain
is pointing to. The trailing .
in the --record-data
tells the DNS server that it’s the end of the domain being provided and it shouldn’t append any other information on the end, similar to how a period (.
) is used to end a sentence.
After running the command, you will see output similar to the following:
Output
ID Type Name Data Priority Port TTL Weight
12345679 CNAME tutorial-service tutorial-proxy.your_domain 0 0 1800 0
Since DigitalOcean is your primary DNS server, you can query the server directly to determine whether it’s set up correctly instead of waiting for other DNS servers on the internet to be updated. To verify your settings are coming through the DNS servers correctly, use the dig
command to view what ns1.digitalocean.com
, DigitalOcean’s primary DNS server, thinks the records should be:
Note: If you are using a DNS host other than DigitalOcean, replace ns1.digitalocean.com
in this command with one of the DNS servers your DNS host had you set up on your domain.
- dig @ns1.digitalocean.com +noall +answer +domain=your_domain tutorial-proxy tutorial-service
dig
is a utility that connects directly to DNS servers to “dig” into the DNS records to find the one you’re looking for. In this case, you provide @ns1.digitalocean.com
to tell dig
you want to query the ns1.digitalocean.com
server for its DNS records. The +noall +answer
options tell dig
to only output a shorter response. (You can remove these two options if you want more information about the DNS query.) For more about dig
, check out our guide to Retrieve DNS Information Using Dig.
Next, using +domain=your_domain
tells dig
to add .your_domain
to the end of any hostnames provided to the command. Finally, tutorial-proxy
and tutorial-service
are the hostnames to look up. Since you’re using the +domain
option, you don’t need to use the full phrase tutorial-proxy.your_domain
, as it will automatically be added on the end.
You should receive output similar to the following, with your own values for your_domain
and traefik_ip_address
:
Output
tutorial-proxy.your_domain. 1662 IN A traefik_ip_address
tutorial-service.your_domain. 1800 IN CNAME tutorial-proxy.your_domain.
tutorial-proxy.your_domain. 1800 IN A traefik_ip_address
The first line of the output shows that tutorial-proxy.your_domain
is an A
(IN A
) record that points to traefik_ip_address
. The second confirms that tutorial-service.your_domain
is a CNAME
(IN CNAME
) record that points to tutorial-proxy.your_domain
. Finally, the last line is the query dig
runs to find the address your CNAME
record points to. Since it’s tutorial-proxy.your_domain
, it will show the same A
record IP address as before.
In this section, you added an A
-type DNS record and a CNAME
-type DNS record to your domain so that network clients, such as browsers, know where to go to connect to your Traefik service. In the next section, you’ll set up a temporary web server in your cluster to complete your configuration.
In the previous sections, you set up cert-manager and Traefik to handle your website’s secure certificates and route web traffic to your web service. At this point, though, you don’t have a web service to send traffic to. In this section, you’ll use the Nginx web server to simulate a website you’d host in your cluster.
To simulate the website, you’ll set up a Deployment
using the nginx
Docker image. It will only show the Nginx “Welcome!” page, but this is enough to ensure everything is connected correctly and working as expected.
First, create a file named tutorial-service.yaml
:
- nano tutorial-service.yaml
Add the following code, which creates a Namespace
called tutorial
and a Deployment
named tutorial-service
:
apiVersion: v1
kind: Namespace
metadata:
  name: tutorial
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tutorial
  name: tutorial-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: tutorial-service
      app.kubernetes.io/part-of: tutorial
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tutorial-service
        app.kubernetes.io/part-of: tutorial
    spec:
      containers:
        - name: service
          image: nginx
          ports:
            - containerPort: 80
Similar to the traefik
namespace you created earlier, the first resource in this file will create a new namespace in your cluster named tutorial
. The next resource, the tutorial-service
Deployment
, specifies that you want three replicas of the website running in your cluster, so if one crashes, you’ll still have two others until the third comes back.
The next section, the selector
, tells Kubernetes how to find any pods associated with this Deployment
. In this case, it will find any pods with labels that match. Under the template
section, you define what you want each of your pods to look like. The metadata
section provides the labels that will be matched in the selector
, and the spec
specifies that you want one container in the pod named service
that uses the nginx
image and listens for network connections on port 80
.
Once you’ve saved your changes, apply them to the cluster:
- kubectl apply -f tutorial-service.yaml
The output will confirm that the tutorial
namespace and the tutorial-service
deployment have been created:
Output
namespace/tutorial created
deployment.apps/tutorial-service created
To check whether your deployment is running, you can use the kubectl get pods
command to list the pods running in the tutorial
namespace:
- kubectl get -n tutorial pods
Output similar to the following will print:
Output
NAME READY STATUS RESTARTS AGE
tutorial-service-568b4f8477-hpstl 1/1 Running 0 2m15s
tutorial-service-568b4f8477-mcpqd 1/1 Running 0 2m15s
tutorial-service-568b4f8477-mg8mb 1/1 Running 0 2m15s
You should find a list of three pods with the STATUS
of Running
and random names following tutorial-service-
. The AGE
will vary depending on how much time has passed between running the kubectl apply
and the kubectl get
commands.
Now that your web service is up and running, you need a way to send traffic across all three pods. In Kubernetes, you use a Service
for this. Any traffic sent to the Service
will be load balanced between the various pods the Service
points to.
To create your Service
, open your tutorial-service.yaml
file again and add a Service
to the end:
...
        - name: service
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: tutorial
  name: tutorial-service
spec:
  selector:
    app.kubernetes.io/name: tutorial-service
    app.kubernetes.io/part-of: tutorial
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Similar to the Deployment
, your Service
has a selector
section listing the labels for finding the pods to which you want to send traffic. These labels match the labels you included in the pod template
section in the Deployment
. The Service
also has one port listed in the ports
section that says any TCP
traffic sent to port: 80
of the service should be sent to targetPort: 80
on the pod chosen by the load balancer.
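As an aside, Kubernetes also lets targetPort refer to a named container port rather than a number, which keeps the Service correct even if the container's port number changes later. A sketch of how the two halves would line up (the name http here is illustrative, not part of this tutorial's manifests):

```yaml
# In the Deployment's container spec, give the port a name:
ports:
  - name: http
    containerPort: 80
---
# In the Service, reference the container port by that name:
ports:
  - protocol: TCP
    port: 80
    targetPort: http
```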
After saving your changes, apply the Service
to the cluster:
- kubectl apply -f tutorial-service.yaml
This time in the output, the namespace and deployment are listed as unchanged
(because you didn’t make any changes to them) and the tutorial-service
has been created
:
Output
namespace/tutorial unchanged
deployment.apps/tutorial-service unchanged
service/tutorial-service created
Once your tutorial-service
is created, you can test that you can access the service by using the kubectl port-forward
command to make the service available on your local computer:
- kubectl port-forward -n tutorial service/tutorial-service 8888:80
This command forwards any traffic sent to port 8888
on your local computer to port 80
of tutorial-service
in the cluster. In your Kubernetes cluster, you set up the tutorial-service
Service
to listen for connections on port 80
, and you need a way to send traffic from your local computer to that service in the cluster. In the command, you specify you want to port-forward
to service/tutorial-service
in the tutorial
namespace, and then provide the combination of ports 8888:80
. The first port listed is the port your local computer will listen on, while the second port (after the :
) is the port on service/tutorial-service
where the traffic will be sent. When you send traffic to port 8888
on your local computer, all that traffic will be sent to port 80
on service/tutorial-service
, and ultimately to the pods service/tutorial-service
is pointing to.
When you run the command, you’ll receive output similar to the following:
Output
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80
Note that the command will not return and will keep running to forward the traffic.
To make a request against your service, open a second terminal on your computer and use the curl
command to make a request to your computer on port 8888
:
- curl http://localhost:8888/
This command makes an HTTP request through your forwarded port (8888
) to the cluster’s tutorial-service
, and returns an HTML response containing the Nginx welcome page:
Output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
In your original terminal, you can now press CONTROL+C
to stop the port-forward
command. You will also see some additional output from when you made the curl
connection:
Output
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80
Handling connection for 8888
In this section, you set up an nginx
web service in your cluster using a Deployment
and a Service
. You then used kubectl port-forward
with the curl
command to ensure nginx
is running correctly. Now that cert-manager, Traefik, and your service are set up, you’ll bring them all together in the next section and make your service available over HTTPS on the internet with cert-manager and Traefik.
Even though you have all the individual services running in your cluster, they’re all running relatively independently. cert-manager is just sitting there, Traefik doesn’t know about any sites it should serve, and your Nginx website is only available if you port forward to the cluster. In this section, you’ll create an Ingress
resource to connect all your services.
First, open tutorial-service.yaml
again:
- nano tutorial-service.yaml
Add an Ingress
at the end of the file, after the tutorial-service
Service
you added earlier. Be sure to update the configuration with your own domain name and include the ---
at the beginning to separate your Ingress
resource from the Service
resource above it:
...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tutorial-service-ingress
  namespace: tutorial
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: letsencrypt-issuer
spec:
  rules:
    - host: tutorial-service.your_domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tutorial-service
                port:
                  number: 80
  tls:
    - secretName: tutorial-service-cert
      hosts:
        - tutorial-service.your_domain
These lines include the rules and annotations to tell everything how to fit together. The Ingress
resource includes references to Traefik, cert-manager, and your tutorial-service
. The annotations
section includes a few different but important annotations.
The traefik.ingress.kubernetes.io/router.entrypoints
annotation tells Traefik that traffic for this Ingress
should be available via the websecure
entrypoint. This is an entrypoint the Helm chart configures by default to handle HTTPS traffic and listens on traefik_ip_address
port 443
, the default for HTTPS.
The next annotation, traefik.ingress.kubernetes.io/router.tls
, is set to true
to tell Traefik to only respond to HTTPS traffic and not to HTTP traffic. Since your website needs to be secure to handle any sensitive data, you don’t want your users accidentally using an insecure version.
The last annotation, cert-manager.io/cluster-issuer,
is set to letsencrypt-issuer
to tell cert-manager the issuer you’d like to use when issuing secure certificates for this Ingress
. At this point, letsencrypt-issuer
is the only issuer you have configured, but you could add more later and use different ones for different sites.
In the Ingress
spec.rules
section, you include one rule for routing traffic sent to the Ingress
. It says for the host
named tutorial-service.your_domain
, use http
for the given paths
. The only path
included is the root /
path with a pathType
of Prefix
, which means any request whose path begins with / should be sent to the provided backend
. The backend
section says it’s a service
, that the service it should send traffic to is the tutorial-service
Service
you created earlier, and that traffic should be sent to port 80
of the Service
.
The spec.tls
section of the Ingress
provides the information cert-manager needs to request and issue your secure certificates as well as the information Traefik needs to use those certificates. The secretName
is the Kubernetes Secret
where cert-manager will put the issued secure certificate, and the Secret
Traefik will use to load the issued certificate. The hosts
section lists the hostnames cert-manager will request the certificates for. In this case, it will only be the tutorial-service.your_domain
hostname, but you could also include others you own if you’d like the site to respond to multiple hostnames.
After saving the Ingress you created, use kubectl apply again to apply the new resource to your cluster:
- kubectl apply -f tutorial-service.yaml
The Ingress will be created, and the other resources will remain unchanged:
Output
namespace/tutorial unchanged
deployment.apps/tutorial-service unchanged
service/tutorial-service unchanged
ingress.networking.k8s.io/tutorial-service-ingress created
Once the Ingress is created, Traefik will begin to configure itself and cert-manager will begin the challenge/response process to have the certificate issued. This can take a few minutes, so you can check whether the certificate has been issued by reviewing the certificates in your tutorial namespace:
- kubectl get -n tutorial certificates
You will receive output similar to the following:
Output
NAME                    READY   SECRET                  AGE
tutorial-service-cert   False   tutorial-service-cert   12m
If the READY field is False, the certificate has not been issued yet. You can keep running the same command to watch for it to turn to True. It can take some time to be issued, but if it takes longer than a few minutes, it could mean something is wrong with your configuration.
Note: If your certificate is not issued after 10-15 minutes, it can be helpful to look at the log messages for cert-manager to see if it’s having trouble requesting the certificate. Use the following command to watch the logs, and press CTRL+C to stop following them:
- kubectl logs -n cert-manager deployment/cert-manager --tail=10 -f
Once your certificate is ready, you can make an HTTPS request against your cluster using curl:
- curl https://tutorial-service.your_domain
Note: Depending on how long ago you updated your DNS records and how long those records take to spread across the internet’s DNS servers, you may see an error that your domain couldn’t be found, or the request may go to the wrong place. If this happens, you can use a curl workaround to skip the DNS lookup for now by running the following command:
- curl https://tutorial-service.your_domain --resolve 'tutorial-service.your_domain:443:traefik_ip_address'
This command uses curl’s --resolve option to override any DNS resolution for tutorial-service.your_domain on port 443 with traefik_ip_address instead. Since the DNS result curl is getting is incorrect, this lets you connect to Traefik inside your cluster until DNS is fully updated.
In your output, you will get the same Nginx “Welcome!” page from the port forwarding earlier, but this time it’s accessible over the internet:
Output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
However, if you try to request the HTTP version of your site, you won’t receive a response:
- curl http://tutorial-service.your_domain
A 404 error will load instead:
Output
404 page not found
Since you configured your site in the Ingress not to respond to HTTP traffic, Traefik never set up a site at that address and returns a 404 error. This can confuse your users if they know they should see a website, so many administrators configure their servers to redirect HTTP traffic to the HTTPS site automatically. Traefik allows you to do this as well, by upgrading your Traefik installation to tell it to redirect all web traffic to the websecure port:
- helm upgrade --namespace=traefik traefik traefik/traefik --set 'ports.web.redirectTo=websecure'
The --set 'ports.web.redirectTo=websecure' option tells Traefik to reconfigure itself to do the redirection automatically.
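If you prefer keeping chart options in a values file rather than passing --set flags, the equivalent configuration would look like the sketch below. The filename traefik-values.yaml is an assumption, not something from this tutorial.

```yaml
# traefik-values.yaml (hypothetical filename): values-file equivalent
# of --set 'ports.web.redirectTo=websecure'. Redirects all HTTP
# traffic on the web entrypoint to the HTTPS websecure entrypoint.
ports:
  web:
    redirectTo: websecure
```

You would then apply it with helm upgrade --namespace=traefik traefik traefik/traefik --values traefik-values.yaml, which makes the redirect setting easier to track alongside any other chart overrides.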
You should see a message similar to the one below indicating that the traefik installation has been “upgraded”:
Output
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Oct 2 19:17:34 2022
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
Now if you make a request to your HTTP location, you’ll receive output that says the site was moved:
- curl http://tutorial-service.your_domain
This response is expected:
Output
Moved Permanently
Since you want all your traffic to go to your HTTPS site, Traefik now returns an automatic redirect from the HTTP site to the HTTPS site on your behalf. Web browsers follow this redirect automatically, but curl requires an additional option, -L, to tell it to follow redirects. Update your curl command with the -L option:
- curl -L http://tutorial-service.your_domain
The output will contain the Nginx welcome page from your HTTPS site:
Output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
This output confirms that the redirect works as expected.
In this section, you tied cert-manager, Traefik, and your Nginx website together using a Kubernetes Ingress resource. You also updated your Traefik configuration to redirect HTTP traffic to HTTPS, ensuring users can find your website either way.
In this tutorial, you installed a few different services in your Kubernetes cluster to make it easier to run a website with secure certificates. You installed the cert-manager service to handle the lifecycle of TLS certificates issued from Let’s Encrypt. You installed Traefik to make your websites available outside your cluster and to use the TLS certificates issued by Let’s Encrypt. Lastly, you created an Nginx website in your cluster to test your cert-manager and Traefik configurations.
Now that you have cert-manager and Traefik configured in your cluster, you could also set up more websites with different Ingress resources, serving many websites from the same cluster with a single cert-manager and Traefik installation.
You can read the Traefik Proxy documentation for more about the different functionalities Traefik can provide in your cluster. cert-manager also has extensive documentation on how to use it with other types of Let’s Encrypt challenges, as well as sources other than Let’s Encrypt.
To continue configuring your Kubernetes cluster, check out our other tutorials on Kubernetes.