How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes


Introduction

Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services.

Popular Ingress Controllers include Nginx, Contour, HAProxy, and Traefik. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer services, each of which uses its own dedicated Load Balancer.

In this guide, we’ll set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once we’ve set up the Ingress, we’ll install cert-manager into our cluster to manage and provision TLS certificates for encrypting HTTP traffic to the Ingress. This guide does not use the Helm package manager. For a guide on rolling out the Nginx Ingress Controller using Helm, consult How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm.

Prerequisites

Before you begin with this guide, you should have the following available to you:

  • A Kubernetes cluster running version 1.19 or later (required by the networking.k8s.io/v1 Ingress API used in this guide) with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster, but you are free to create a cluster using another method.
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation. If you are using a DigitalOcean Kubernetes cluster, please refer to How to Connect to a DigitalOcean Kubernetes Cluster to learn how to connect to your cluster using kubectl.
  • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
  • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

Once you have these components set up, you’re ready to begin with this guide.

Step 1 — Setting Up Dummy Backend Services

Before we deploy the Ingress Controller, we’ll first create and roll out two dummy echo Services to which we’ll route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched. To learn more about http-echo, consult its GitHub Repo, and to learn more about Kubernetes Services, consult Services from the official Kubernetes docs.

On your local machine, create and edit a file called echo1.yaml using nano or your favorite editor:

nano echo1.yaml

Paste in the following Service and Deployment manifest:

echo1.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678

In this file, we define a Service called echo1 which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo’s default port.

We then define a Deployment, also called echo1, which manages Pods with the app: echo1 Label Selector. We specify that the Deployment should have 2 Pod replicas, and that the Pods should start a container called echo1 running the hashicorp/http-echo image. We pass in the text parameter and set it to echo1, so that the http-echo web server returns echo1. Finally, we open port 5678 on the Pod container.

Once you’re satisfied with your dummy Service and Deployment manifest, save and close the file.

Then, create the Kubernetes resources using kubectl apply with the -f flag, specifying the file you just saved as a parameter:

kubectl apply -f echo1.yaml

You should see the following output:

Output
service/echo1 created
deployment.apps/echo1 created

Verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:

kubectl get svc echo1

You should see the following output:

Output
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80. It will forward traffic to containerPort 5678 on the Pods it selects.
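
You can also optionally confirm that the Deployment's two Pod replicas are running, using the app=echo1 label we defined in the manifest:

kubectl get pods -l app=echo1

You should see two Pods in the Running state (your Pod names and ages will differ).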

Now that the echo1 Service is up and running, repeat this process for the echo2 Service.

Create and open a file called echo2.yaml:

nano echo2.yaml

echo2.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678

Here, we essentially use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, to provide some variety, we create only 1 Pod replica. We ensure that we set the text parameter to echo2 so that the web server returns the text echo2.

Save and close the file, and create the Kubernetes resources using kubectl:

kubectl apply -f echo2.yaml

You should see the following output:

Output
service/echo2 created
deployment.apps/echo2 created

Once again, verify that the Service is up and running:

kubectl get svc

You should see both the echo1 and echo2 Services with assigned ClusterIPs:

Output
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

Now that our dummy echo web services are up and running, we can move on to rolling out the Nginx Ingress Controller.

Step 2 — Setting Up the Kubernetes Nginx Ingress Controller

In this step, we’ll roll out v1.1.1 of the Kubernetes-maintained Nginx Ingress Controller. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide and Nginx Inc. maintains kubernetes-ingress. The instructions in this tutorial are based on those from the official Kubernetes Nginx Ingress Controller Installation Guide.

The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. For example, an Ingress rule can specify that HTTP traffic arriving at the path /web1 should be directed towards the web1 backend web server. Using Ingress Resources, you can also perform host-based routing: for example, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.

In this case, because we’re deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that provisions a DigitalOcean Load Balancer to which all external traffic will be directed. This Load Balancer will route external traffic to the Ingress Controller Pod running Nginx, which then forwards traffic to the appropriate backend Services.

We’ll begin by creating the Nginx Ingress Controller Kubernetes resources. These consist of ConfigMaps containing the Controller’s configuration, Role-based Access Control (RBAC) Roles to grant the Controller access to the Kubernetes API, and the actual Ingress Controller Deployment which uses v1.1.1 of the Nginx Ingress Controller image. To see a full list of these required resources, consult the manifest from the Kubernetes Nginx Ingress Controller’s GitHub repo.

Note: In this tutorial, we’re following the official installation instructions for the DigitalOcean Provider. You should choose the appropriate manifest file depending on your Kubernetes provider.

To create the resources, use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml

We use apply here so that in the future we can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them. To learn more about apply, consult Managing Resources from the official Kubernetes docs.

You should see the following output:

Output
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created

This output also serves as a convenient summary of all the Ingress Controller objects created from the deploy.yaml manifest.

Confirm that the Ingress Controller Pods have started:

kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx --watch
Output
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-l2jhk       0/1     Completed   0          13m
ingress-nginx-admission-patch-hsrzf        0/1     Completed   0          13m
ingress-nginx-controller-c96557986-m47rq   1/1     Running     0          13m

Hit CTRL+C to return to your prompt.

Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

kubectl get svc --namespace=ingress-nginx

After several minutes, you should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:

Output
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.201.120   203.0.113.0   80:31818/TCP,443:31146/TCP   14m
ingress-nginx-controller-admission   ClusterIP      10.245.239.119   <none>        443/TCP                      14m

Note down the Load Balancer’s external IP address, as you’ll need it in a later step.

Note: By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them. External traffic policies are beyond the scope of this tutorial, but to learn more you can consult A Deep Dive into Kubernetes External Traffic Policies and Source IP for Services with Type=LoadBalancer from the official Kubernetes docs.
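
If you'd like to confirm this setting on your own Service, you can optionally pull the field out with a jsonpath query:

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.externalTrafficPolicy}'

This should print Local.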

This load balancer receives traffic on HTTP and HTTPS ports 80 and 443, and forwards it to the Ingress Controller Pod. The Ingress Controller will then route the traffic to the appropriate backend Service.
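
Since no Ingress rules exist yet, Nginx should answer any request to the Load Balancer with its default 404 backend. As an optional check, you can curl the external IP you noted above, substituting your own Load Balancer IP for 203.0.113.0 (the exact response body may vary with your Nginx version):

curl http://203.0.113.0

Output
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
</body>
</html>

A 404 response here is expected, and confirms that traffic is flowing from the Load Balancer to the Ingress Controller.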

We can now point our DNS records at this external Load Balancer and create some Ingress Resources to implement traffic routing rules.

Step 3 — Creating the Ingress Resource

Let’s begin by creating a minimal Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service.

In this guide, we’ll use the test domain example.com. You should substitute this with the domain name you own.

We’ll first create a simple rule to route traffic directed at echo1.example.com to the echo1 backend service and traffic directed at echo2.example.com to the echo2 backend service.

Begin by opening up a file called echo_ingress.yaml in your favorite editor:

nano echo_ingress.yaml

Paste in the following ingress definition:

echo_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: echo1
            port:
              number: 80
  - host: echo2.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: echo2
            port:
              number: 80

When you’ve finished editing your Ingress rules, save and close the file.

Here, we’ve specified that we’d like to create an Ingress Resource called echo-ingress, and route traffic based on the Host header. An HTTP request Host header specifies the domain name of the target server. To learn more about Host request headers, consult the Mozilla Developer Network definition page. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

You can now create the Ingress using kubectl:

kubectl apply -f echo_ingress.yaml

You’ll see the following output confirming the Ingress creation:

Output
ingress.networking.k8s.io/echo-ingress created
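
If you'd like to verify the routing rules before touching DNS, you can optionally simulate the Host header against the Load Balancer IP directly. This sketch assumes 203.0.113.0 is your Load Balancer's external IP:

curl -H "Host: echo1.example.com" http://203.0.113.0

If the Ingress is working, this should return echo1 even though no DNS records exist yet.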

To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer’s external IP. The Load Balancer’s external IP is the external IP address for the ingress-nginx Service, which we fetched in the previous step. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.

Once you’ve created the necessary echo1.example.com and echo2.example.com DNS records, you can test the Ingress Controller and Resource you’ve created using the curl command line utility.

From your local machine, curl the echo1 Service:

curl echo1.example.com

You should get the following response from the echo1 service:

Output
echo1

This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

Now, perform the same test for the echo2 Service:

curl echo2.example.com

You should get the following response from the echo2 Service:

Output
echo2

This confirms that your request to echo2.example.com is being correctly routed through the Nginx ingress to the echo2 backend Service.

At this point, you’ve successfully set up a minimal Nginx Ingress to perform virtual host-based routing. In the next step, we’ll install cert-manager to provision TLS certificates for our Ingress and enable the more secure HTTPS protocol.

Step 4 — Installing and Configuring Cert-Manager

In this step, we’ll install v1.7.1 of cert-manager into our cluster. cert-manager is a Kubernetes add-on that provisions TLS certificates from Let’s Encrypt and other certificate authorities (CAs) and manages their lifecycles. Certificates can be automatically requested and configured by annotating Ingress Resources, appending a tls section to the Ingress spec, and configuring one or more Issuers or ClusterIssuers to specify your preferred certificate authority. To learn more about Issuer and ClusterIssuer objects, consult the official cert-manager documentation on Issuers.

Install cert-manager and its Custom Resource Definitions (CRDs), such as Issuers and ClusterIssuers, by following the official installation instructions. Note that the manifest creates a cert-manager namespace into which the cert-manager objects will be deployed:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml

You should see the following output:

Output
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
. . .
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

To verify our installation, check the cert-manager Namespace for running pods:

kubectl get pods --namespace cert-manager
Output
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-578cd6d964-hr5v2              1/1     Running   0          99s
cert-manager-cainjector-5ffff9dd7c-f46gf   1/1     Running   0          100s
cert-manager-webhook-556b9d7dfd-wd5l6      1/1     Running   0          99s

This indicates that the cert-manager installation succeeded.
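
Since the ClusterIssuer we create next depends on cert-manager's Custom Resource Definitions, you can also optionally confirm that they were installed:

kubectl get crds | grep cert-manager.io

This should list CRDs such as certificates.cert-manager.io, clusterissuers.cert-manager.io, and issuers.cert-manager.io.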

Before we begin issuing certificates for our echo1.example.com and echo2.example.com domains, we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we’ll use the Let’s Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

Let’s create a test ClusterIssuer to make sure the certificate provisioning mechanism is functioning correctly. A ClusterIssuer is not namespace-scoped and can be used by Certificate resources in any namespace.

Open a file named staging_issuer.yaml in your favorite text editor:

nano staging_issuer.yaml

Paste in the following ClusterIssuer manifest:

staging_issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Here we specify that we’d like to create a ClusterIssuer called letsencrypt-staging, and use the Let’s Encrypt staging server. We’ll later use the production server to roll out our certificates, but the production server rate-limits requests made against it, so for testing purposes you should use the staging URL.

We then specify an email address for the ACME registration, and create a Kubernetes Secret called letsencrypt-staging to store the ACME account’s private key. We also use the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

Roll out the ClusterIssuer using kubectl:

kubectl create -f staging_issuer.yaml

You should see the following output:

Output
clusterissuer.cert-manager.io/letsencrypt-staging created
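
Before moving on, you can optionally confirm that cert-manager was able to reach the staging server and register an ACME account:

kubectl describe clusterissuer letsencrypt-staging

In the Status section you should find a Ready condition set to True, with a message indicating that the ACME account was registered with the ACME server.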

We’ll now repeat this process to create the production ClusterIssuer. Note that certificates will only be created after annotating and updating the Ingress resource provisioned in the previous step.

Open a file called prod_issuer.yaml in your favorite editor:

nano prod_issuer.yaml

Paste in the following manifest:

prod_issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Note the different ACME server URL, and the letsencrypt-prod secret key name.

When you’re done editing, save and close the file.

Roll out this Issuer using kubectl:

kubectl create -f prod_issuer.yaml

You should see the following output:

Output
clusterissuer.cert-manager.io/letsencrypt-prod created

Now that we’ve created our Let’s Encrypt staging and prod ClusterIssuers, we’re ready to modify the Ingress Resource we created above and enable TLS encryption for the echo1.example.com and echo2.example.com paths.

If you’re using DigitalOcean Kubernetes, you first need to implement a workaround so that Pods can communicate with other Pods using the Ingress. If you’re not using DigitalOcean Kubernetes, you can skip ahead to Step 6.

Step 5 — Enabling Pod Communication through the Load Balancer (optional)

Before it provisions certificates from Let’s Encrypt, cert-manager first performs a self-check to ensure that Let’s Encrypt can reach the cert-manager Pod that validates your domain. For this check to pass on DigitalOcean Kubernetes, you need to enable Pod-Pod communication through the Nginx Ingress load balancer.

To do this, we’ll create a DNS A record that points to the external IP of the cloud load balancer, and annotate the Nginx Ingress Service manifest with this subdomain.

Begin by navigating to your DNS management service and create an A record for workaround.example.com pointing to the DigitalOcean Load Balancer’s external IP. The Load Balancer’s external IP is the external IP address for the ingress-nginx Service, which we fetched in Step 2. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records. Here we use the subdomain workaround but you’re free to use whichever subdomain you prefer.

Now that you’ve created a DNS record pointing to the Ingress load balancer, annotate the Ingress LoadBalancer Service with the do-loadbalancer-hostname annotation. Open a file named ingress_nginx_svc.yaml in your favorite editor and paste in the following LoadBalancer manifest:

ingress_nginx_svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    service.beta.kubernetes.io/do-loadbalancer-hostname: "workaround.example.com"
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

This Service manifest was extracted from the complete Nginx Ingress manifest file that you installed in Step 2. Be sure to copy the Service manifest corresponding to the Nginx Ingress version you installed; in this tutorial, this is 1.1.1. Also be sure to set the do-loadbalancer-hostname annotation to the workaround.example.com domain.
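
As an alternative to copying the manifest from GitHub, you can dump the live Service from your cluster and edit that instead, which guarantees the labels match your installed version. This is a sketch of that approach; if you use it, strip read-only fields such as status, metadata.resourceVersion, and metadata.uid before re-applying:

kubectl get svc ingress-nginx-controller -n ingress-nginx -o yaml > ingress_nginx_svc.yaml

You would then add the two service.beta.kubernetes.io annotations shown above to the metadata.annotations block.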

When you’re done, save and close the file.

Modify the running ingress-nginx-controller Service using kubectl apply:

kubectl apply -f ingress_nginx_svc.yaml

You should see the following output:

Output
service/ingress-nginx-controller configured

This confirms that you’ve annotated the ingress-nginx-controller Service, and that Pods in your cluster can now communicate with one another through the ingress-nginx-controller Load Balancer.
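
You can optionally verify this from inside the cluster by making a request from a temporary Pod to one of the echo domains, which resolves to the Load Balancer’s external IP. A minimal sketch using a throwaway busybox Pod (substitute your own domain):

kubectl run tmp-test --rm -it --restart=Never --image=busybox -- wget -qO- http://echo1.example.com

If the workaround is in place, this should print echo1 rather than timing out.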

Step 6 — Issuing Staging and Production Let’s Encrypt Certificates

To issue a staging TLS certificate for our domains, we’ll annotate echo_ingress.yaml with the ClusterIssuer created in Step 4. This will use ingress-shim to automatically create and issue certificates for the domains specified in the Ingress manifest.

Open up echo_ingress.yaml in your favorite editor:

nano echo_ingress.yaml

Add the following to the Ingress resource manifest:

echo_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - echo1.example.com
    - echo2.example.com
    secretName: echo-tls
  rules:
    - host: echo1.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo1
                port:
                  number: 80
    - host: echo2.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo2
                port:
                  number: 80

Here we add an annotation to set the cert-manager ClusterIssuer to letsencrypt-staging, the test certificate ClusterIssuer created in Step 4. We also add the kubernetes.io/ingress.class annotation, which tells the nginx Ingress Controller to handle this resource.

We also add a tls block to specify the hosts for which we want to acquire certificates, and specify a secretName. This secret will contain the TLS private key and issued certificate. Be sure to swap out example.com with the domain for which you’ve created DNS records.

When you’re done making changes, save and close the file.

We’ll now push this update to the existing Ingress object using kubectl apply:

kubectl apply -f echo_ingress.yaml

You should see the following output:

Output
ingress.networking.k8s.io/echo-ingress configured

You can use kubectl describe to track the state of the Ingress changes you’ve just applied:

kubectl describe ingress
Output
Events:
  Type    Reason             Age               From                      Message
  ----    ------             ----              ----                      -------
  Normal  UPDATE             6s (x3 over 80m)  nginx-ingress-controller  Ingress default/echo-ingress
  Normal  CreateCertificate  6s                cert-manager              Successfully created Certificate "echo-tls"

Once the certificate has been successfully created, you can run a describe on it to further confirm its successful creation:

kubectl describe certificate

You should see the following output in the Events section:

Output
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Requested  64s   cert-manager  Created new CertificateRequest resource "echo-tls-vscfw"
  Normal  Issuing    40s   cert-manager  The certificate has been successfully issued

This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for the two domains configured.
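
Behind the scenes, cert-manager stores the issued certificate and private key in the echo-tls Secret named in the Ingress manifest. You can optionally inspect it:

kubectl describe secret echo-tls

The output should show a Secret of type kubernetes.io/tls containing tls.crt and tls.key data entries.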

We’re now ready to send a request to a backend echo server to test that HTTPS is functioning correctly.

Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

wget --save-headers -O- echo1.example.com

You should see the following output:

Output
. . .
HTTP request sent, awaiting response... 308 Permanent Redirect
. . .
ERROR: cannot verify echo1.example.com's certificate, issued by ‘CN=(STAGING) Artificial Apricot R3,O=(STAGING) Let's Encrypt,C=US’:
  Unable to locally verify the issuer's authority.
To connect to echo1.example.com insecurely, use `--no-check-certificate'.

This indicates that HTTPS has successfully been enabled, but the certificate cannot be verified as it’s a fake temporary certificate issued by the Let’s Encrypt staging server.
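
If you'd like to inspect the staging certificate directly, you can optionally use openssl to print its issuer and validity window (the exact staging issuer names may vary over time):

openssl s_client -connect echo1.example.com:443 -servername echo1.example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates

The issuer should reference the Let's Encrypt staging environment rather than a production CA.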

Now that we’ve tested that everything works using this temporary fake certificate, we can roll out production certificates for the two hosts echo1.example.com and echo2.example.com. To do this, we’ll use the letsencrypt-prod ClusterIssuer.

Update echo_ingress.yaml to use letsencrypt-prod:

nano echo_ingress.yaml

Make the following change to the file:

echo_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - echo1.example.com
    - echo2.example.com
    secretName: echo-tls
  rules:
    - host: echo1.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo1
                port:
                  number: 80
    - host: echo2.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo2
                port:
                  number: 80

Here, we update the ClusterIssuer name to letsencrypt-prod.

Once you’re satisfied with your changes, save and close the file.

Roll out the changes using kubectl apply:

kubectl apply -f echo_ingress.yaml
Output
ingress.networking.k8s.io/echo-ingress configured

Wait a couple of minutes for the Let’s Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

kubectl describe certificate echo-tls

Once you see the following output, the certificate has been issued successfully:

Output
Normal  Issuing    28s                  cert-manager  Issuing certificate as Secret was previously issued by ClusterIssuer.cert-manager.io/letsencrypt-staging
Normal  Reused     28s                  cert-manager  Reusing private key stored in existing Secret resource "echo-tls"
Normal  Requested  28s                  cert-manager  Created new CertificateRequest resource "echo-tls-49gmn"
Normal  Issuing    2s (x2 over 4m52s)   cert-manager  The certificate has been successfully issued

We’ll now perform a test using curl to verify that HTTPS is working correctly:

curl echo1.example.com

You should see the following:

Output
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>

This indicates that HTTP requests are being redirected to use HTTPS.

Run curl on https://echo1.example.com:

curl https://echo1.example.com

You should now see the following output:

Output
echo1

You can run the previous command with the verbose -v flag to dig deeper into the certificate handshake and to verify the certificate information.
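
For example, you can filter the verbose output down to the certificate's subject and issuer lines (the exact issuer depends on Let's Encrypt's current intermediate certificates):

curl -v https://echo1.example.com 2>&1 | grep -E 'subject:|issuer:'

You should see your domain in the subject, and a Let's Encrypt production intermediate rather than a staging CA in the issuer.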

At this point, you’ve successfully configured HTTPS using a Let’s Encrypt certificate for your Nginx Ingress.

Conclusion

In this guide, you set up an Nginx Ingress to load balance and route external requests to backend Services inside of your Kubernetes cluster. You also secured the Ingress by installing the cert-manager certificate provisioner and setting up a Let’s Encrypt certificate for the two configured hosts.

There are many alternatives to the Nginx Ingress Controller. To learn more, consult Ingress controllers from the official Kubernetes documentation.

For a guide on rolling out the Nginx Ingress Controller using the Helm Kubernetes package manager, consult How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm.

Comments

I like the article very much. Alternatively, you can expose the ingress controller through a NodePort if you want to save money on the load balancer/domain, and access the application as below:

curl --resolve echo1.example.com:$IC_HTTPS_PORT:$IC_IP https://echo1.example.com:$IC_HTTPS_PORT/ --insecure

Hey! Can you please elaborate on this?

What does your nginx service look like? Is it as simple as changing LoadBalancer to NodePort?

Does this mean that if the node goes down then all services go down?

I don’t really like paying for 2 loadbalancers when only 1 is needed, so I like your approach :)

Affirmative, using Nodeport is a bad idea for this reason. When your cluster gets upgraded, all the services will go down.

You can still pay for only one LoadBalancer though.

The idea of an IngressController is that it goes between the LB and the actual service so you can have multiple services per LB.


    scrawford, DigitalOcean Employee, December 21, 2018

    Awesome tutorial! Thanks for putting this together. For the installation of cert-manager via helm, I was able to create with --set createCustomResources=true from the start and avoid creating with the flag set to false and then updating it to true.

    helm install --name cert-manager --namespace kube-system stable/cert-manager --set createCustomResource=true
    

    This is the correct way to do it. See my comment below.

    Hanif Jetha, DigitalOcean Employee, January 3, 2019

    Thank you! I’m glad you found the tutorial helpful. That --set createCustomResources workaround was due to this bug which has since been resolved by upgrading Helm to v2.12.1. I’ve updated the tutorial accordingly.

    Excellent tutorial! FYI - On my Windows 10 machine, in order to get cert-manager to recognize the ClusterIssuer, I needed to run this to set up the CRDs: kubectl apply -f 00-crds.yaml

    I had an issue while creating the ClusterIssuer, whether for staging or production.

    kubectl create -f staging_issuer.yaml
    
    

    error: unable to recognize “staging_issuer.yaml”: no matches for kind “ClusterIssuer” in version “certmanager.k8s.io/v1alpha1

    I fixed this by running helm del --purge cert-manager

    and then

    helm install --name cert-manager --namespace kube-system stable/cert-manager --set createCustomResource=true
    
    Hanif Jetha, DigitalOcean Employee, January 3, 2019

    Thanks for catching this! That --set createCustomResources workaround was due to this bug which has since been resolved by upgrading Helm to v2.12.1. I’ve updated the tutorial accordingly.

    Great tutorial - thanks very much.

    Thanks for this article. I can now actually use my DO K8s cluster with https setup.

    When I look at my DigitalOcean Load Balancer I see that it has an issue where it shows one of my two droplets as being in a status of “Down”.
    I increased the resources by adding another node and it shows that one as being down too. Is that a concern?

    Hanif Jetha, DigitalOcean Employee, January 8, 2019

    Thanks for your comment!

    This is expected, see the note in Step 2. Since the default Nginx Ingress Deployment only runs 1 Ingress Pod, the K8S node running the Pod will appear as “healthy” with the other nodes appearing “down” so that Ingress traffic does not get routed to them. See the attached links in the aforementioned note for more details.

    Hope this is helpful!

    Thanks for the reply DO.

    Were you able to fix this? I’m also having this problem currently.

    Update: Okay, simply comment out the externalTrafficPolicy: Local line in the lb yaml and both are available.

    Worked for me too. Is there any problem with this fix?

    Thanks for the tutorial.

    I tried to install cert-manager using the static manifest (without helm)

    kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/cert-manager.yaml
    

    However, I can’t get certificate issued

    Events:
      Type    Reason        Age   From          Message
      ----    ------        ----  ----          -------
      Normal  Generated     32m   cert-manager  Generated new private key
      Normal  OrderCreated  32m   cert-manager  Created Order resource "letsencrypt-staging-714061730"
    

    Do you know what might be the problem here?

    Thanks

    OK, seems like I was still using example.com in the hostname.

    I had the same symptom, with a different solution. https://github.com/jetstack/cert-manager/issues/2439#issuecomment-562873934

    I managed to trace the cause of my problem to a classic RTFM. So, that's on me.
    
    Previous installations of cert-manager left some of the CRDs behind in the cluster.
    
    Following upgrade instruction, I managed to double check my cluster.
    
    ## find all CRDs left behind from previous installations
    kubectl get crd | grep certmanager.k8s.io
    
    ## delete them
    kubectl delete crd CRD_NAME
    After that, uninstall and reinstall cert-manager from scratch fixed the issue. The CertificateRequest was successfully created and the challenge worked!
    

    In addition to this, an odd hack needed for me was to add an empty selector at the bottom of staging_issuer.yaml like this:

      # Enable the HTTP-01 challenge provider
        solvers:
        - http01:
            ingress:
              class: nginx
          selector: {}
    

    (found on a github issue)

    What is the proper way to set up forwarding of the real IP from HTTP requests?

    Great tutorial! Demystified a lot for me, thanks for posting!

    According to the docs, cross-namespace ingress isn’t possible, yet in your example the 2 echo servers are in the default namespace but the ingress-nginx namespace holds the info for the 2 ingress paths.

    I’ve followed your example and noticed you are going across namespaces too; am I right or have I missed something?

    Hanif Jetha, DigitalOcean Employee, February 6, 2019

    That’s correct! Cross-namespace ingress isn’t possible per https://github.com/kubernetes/kubernetes/issues/17088.

    The echo services live in the default namespace, and the Ingress Controller lives in the ingress-nginx namespace. The Ingress Resource (also commonly just called Ingress) which defines the routing rules does not have a namespace specified and so lives in the default namespace along with the echo services. So the Ingress/Ingress Resource and services are actually in the same namespace.

    Hopefully this helps, and thanks for reading!


      After creating the ClusterIssuer and updating the Ingress, inspecting the ingress shows that it does not create the certificate. Why would it not create the certificate?

      kubectl describe ingress
      

      Events:
        Type    Reason  Age   From                      Message
        ----    ------  ----  ----                      -------
        Normal  CREATE  28m   nginx-ingress-controller  Ingress default/echo-ingress
        Normal  UPDATE  27m   nginx-ingress-controller  Ingress default/echo-ingress

      kubectl describe certificate
      

      Empty

      Hanif Jetha, DigitalOcean Employee, February 6, 2019

      This could be a lot of things! Did you change the hosts from example.com to your own domain, and point the A records to your DO load balancer accordingly?

      I’m getting the same thing. I have verified that my domain and 2 subdomains are setup correctly and their A records are pointing to the DO load balancer. At first I had forgotten to update the example.com records in the echo_ingress.yaml but I updated and re-applied but no luck :(

      Same here. I have set up the domain to point to the ingress endpoint. It works fine with no certificate, but kubectl describe certificate always returns an empty result.

      all set

      Events:
        Type    Reason         Age   From          Message
        ----    ------         ----  ----          -------
        Normal  Generated      40m   cert-manager  Generated new private key
        Normal  OrderCreated   40m   cert-manager  Created Order resource "letsencrypt-staging-4138224086"
        Normal  OrderComplete  40m   cert-manager  Order "letsencrypt-staging-4138224086" completed successfully
        Normal  CertIssued     40m   cert-manager  Certificate issued successfully

      but when opening the site in a browser, it is not opening on https://

      Just leaving this as a note.

      Had to do a couple of things to get this working with 0.6.0 of the cert-manager helm chart (may or may not be related to the version number). Both were documented here.

      1. Add the custom CRDs
      kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml

      2. Disable cert-manager validation on the kube-system namespace since it’s not a new namespace.
      kubectl label namespace kube-system certmanager.k8s.io/disable-validation="true"
      
      Hanif Jetha, DigitalOcean Employee, February 6, 2019

      Awesome, thanks for surfacing this! Will update the guide ASAP with these changes.

      Hanif Jetha, DigitalOcean Employee, March 6, 2019

      Tutorial has been updated to work with v0.6.0 of cert-manager. Thanks again for flagging the changes!

      Hey @hjet thanks for the tutorial!

      Please challenge me on this but I believe you have mistaken the relationship between the privateKeySecretRef in the cluster issuer and the secretName in the ingress resources (spec.tls.hosts.secretName). They are NOT the same secret. Per the documentation, the privateKeySecretRef stores the “ACME account private key” whereas the ingress’s secretName stores the TLS private key and certificate.

      In your example you say “Finally, we add a tls block to specify the hosts for which we want to acquire certificates, and specify the private key we created earlier.” which I believe is incorrect and misleading.

      Please let me know your thoughts on this.

      Hanif Jetha, DigitalOcean Employee, February 6, 2019

      You are correct! Have updated the language in the tutorial accordingly. Thanks for the catch.

      Hello. How can I use the nginx ingress controller without an external load balancer being automatically created? I have a single-node cluster and don’t need an external load balancer.

      Thank you.

      Hanif Jetha, DigitalOcean Employee, February 7, 2019

      Thanks for your question! You can also use a ServiceType: NodePort. To learn more, consult the official Ingress Controller docs. To learn more about NodePort services, consult the Kubernetes docs.

      Thanks for this awesome manual. Trying to execute

      $ helm install --name cert-manager --namespace kube-system stable/cert-manager --version v0.5.2
      

      I’ve got the following error

      Error: namespaces "kube-system" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "kube-system"
      

      You should add the following commands before installing cert-manager into a cluster

      $ kubectl create serviceaccount --namespace kube-system tiller
      $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      $ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
      
      Hanif Jetha, DigitalOcean Employee, February 15, 2019

      Hey slotix, thanks for the feedback!!

      Those steps are actually covered in How To Install Software on Kubernetes Clusters with the Helm Package Manager which we link to in the Tutorial’s Prerequisites section.

      Glad you found the tutorial useful!

      Thank you for pointing it out! I’ve run into one more problem. Please, can you provide information about how to set up Nginx Ingress to support wss:// web sockets? Unfortunately, I didn’t find clear info about that.

      Hey, were you able to get Web Sockets to work?

      Dude, you are the fucking man.

      I followed this tutorial to get my k8s cluster up and running. However, while it works some of the time, I am often running into outages with an error:

      curl https://www.redacted
      curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.redacted:443

      Often, deleting the ingress-nginx pods will cause this to go away, but not always. Nothing in the pod logs seem to indicate an error. The only non-trivial difference from your setup is I have each of my services and ingress routes in namespaces.

      Would there be any reference as to how to renew the certificates under this type of configuration? Perhaps even how we could automate it with certbot?

      Thank you for the tutorial. I have a question regarding external loadbalancer and nodeports. The serviceType of the ingress-nginx service is “loadbalancer”. This creates an external cloud loadbalancer in DO. But the communication from the loadbalancer to the ingress is done via nodeports. If I edit the service definition I see that nodeports are configured magically. If I delete the nodeport lines in the service definition they are added again with different nodeports automatically. After having done that the echo1 and echo2 service can not be reached anymore.

      If you want to get real client IP you shoud enable proxy protocol:

      add service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol to ingress service

      kind: Service
      apiVersion: v1
      metadata:
        name: ingress-nginx
        namespace: ingress-nginx
        annotations:
          service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
        labels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
      spec:
        externalTrafficPolicy: Local
        type: LoadBalancer
        selector:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
        ports:
          - name: http
            port: 80
            targetPort: http
          - name: https
            port: 443
            targetPort: https
      
      ---
      
      

      and add use-proxy-protocol: “true” to nginx-configuration ConfigMap:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: nginx-configuration
        namespace: ingress-nginx
        labels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
      data:
        use-proxy-protocol: "true"
      

      You’re the man! Thank you for sharing the annotation required for enabling proxy protocol. I could not find it documented anywhere. Your comment has saved me countless hours.

      I’ve been setting proxy-enabled manually from the web dashboard, but every time DO performs maintenance, the settings would get reset.

      There are issues with this guide in regards to cert-manager 0.7.0+

      First off, when installing from the helm chart, you must make a cert-manager namespace or download and edit the helm charts as mentioned in the docs.

      Additionally, the stable/cert-manager version is bugged, use the jetstack/cert-manager image after running:

      helm repo add jetstack https://charts.jetstack.io
      helm repo update
      

      Also, the ingress is missing the challenge annotation:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        namespace: global-services
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
          certmanager.k8s.io/acme-challenge-type: http01
      

      These fixes were required to make this tutorial work as of April 4, 2019

      I’m seeing some serious performance issues when using this setup, and I’ve no idea how to get past it. Has anyone else had issues with timeouts and slow responses using the combination of nginx-ingress and a DO load balancer?

      Hi,

      Thanks for the tutorial, it works like a charm.

      I’m trying to create two Ingresses with 2 domains and different annotations, but can’t make it work. Do you have any example? The big problem is that the second ingress is not creating or updating the certificate.

      Regards

      What I have noticed is that there is no way to specify custom TLS via the Nginx Ingress without updating the controller to use

      - --default-ssl-certificate=default/tls-secret-name
      

      Currently, it seems that it won’t be able to support multiple TLS for different hostnames since we cannot customize the TLS name via a custom secret without using LetsEncrypt as issuer.

      Thank you for the awesome article!

      I’m having some trouble

      After

      helm install --name cert-manager --namespace kube-system stable/cert-manager --version v0.6.6
      
      

      I get the following error:

      Error: release cert-manager failed: namespaces "kube-system" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "kube-system"
      

      Great article! It should be updated for cert-manager 0.7. Example here

      Hanif Jetha, DigitalOcean Employee, June 16, 2019

      Thanks for your feedback!! We’ve updated the tutorial to v0.8.0 of cert-manager and v0.24.1 of ingress-nginx.

      I saw a message at the end of the cert-manager package installation output

      **This Helm chart is deprecated**.
      All future changes to the cert-manager Helm chart should be made in the
      official repository: https://github.com/jetstack/cert-manager/tree/master/deploy.
      The latest version of the chart can be found on the Helm Hub: https://hub.helm.sh/charts/jetstack/cert-manager.
      

      You can use this instead (as documented in official repository):

      helm repo add jetstack https://charts.jetstack.io
      helm install --name my-release --namespace cert-manager jetstack/cert-manager
      
      Hanif Jetha, DigitalOcean Employee, June 16, 2019

      Thanks for surfacing this! We’ve since updated the tutorial to v0.8.0 of cert-manager and now use the jetstack Helm repo, as recommended.

      I really appreciate these in depth tutorials from DO, really good job guys. I’m wondering if it is possible to update this tutorial since the kubernetes guys moved the “mandatory.yaml”, so your tutorial kinda broke with that. I tried following the deploy guide over at kubernetes/ingress-nginx but it does not work as I would expect it to work.

      Thanks!

      For now you can go to the releases page and select the previous releases tag, this will allow the paths to be correct. For example:

      https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/provider/cloud-generic.yaml
      

      Hello! The kubernetes repository has undergone major changes and this tutorial is now broken! Thanks. :)

      Hanif Jetha, DigitalOcean Employee, June 16, 2019

      Updated!! Thanks for catching that!

      Great article!

      One question: what about internal communication between services? E.g. if we have API endpoints, and microservice1 (ms1) handles requests on /api/endpoint1 and microservice2 (ms2) handles /api/endpoint2. Let’s say that for some reason ms1 needs data which ms2 provides, but we don’t want to call it directly; we still want the indirection which ingress provides. Is there a way for a service to call ingress directly? Is ingress a discoverable service itself via internal DNS names?

      Thanks!

      Hello, is there a way I can send external bindings to the ACME server in the issuer.yaml or anywhere else… I need to pass 2 parameters to the ACME server but am not understanding where to set them.

      Is there a way to add external bindings in the yaml files or anywhere else? Can anyone guide me?

      Awesome tutorial! I set up my architecture to follow yours. I am facing one big issue: my application works flawlessly until making a POST request with a file larger than ~1 MB. It seems like my connection is getting reset. I have tested this functionality with my container running locally and it works fine at any request size.

      Could this have something to do with the load balancer or ingress? CPU usage doesn’t look high.

      Really having trouble finding any other help online. I would really appreciate any help.

      Thank you

      You should add this annotation:

      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: my-app-name
        namespace: my-namespace
        annotations:
          kubernetes.io/ingress.class: "nginx"
          [...]
          nginx.ingress.kubernetes.io/proxy-body-size: 16m
      

      One thing I’m curious about is the Load Balancer created by the cloud-generic.yaml file. In Digital Ocean it states that only one of the nodes is healthy in that load balancer. This is because in the mandatory.yaml, the Deployment is set to replicas: 1. Isn’t this a waste of Load Balancer resources? My assumption is that one should either update the mandatory.yaml to include more than 1 replica, or use the baremetal one. Or is one meant to use the autoscaling feature?

      Good question, it’s interesting to me too. Did you try increasing the replica count?

      Setting the secretName to the same name as the issuer name is confusing. Once readers start adding more ingresses, they might (just like me) keep the secretName letsencrypt-prod.

      When you’re creating the echo_ingress.yaml you should NOT name the tls secretName ‘letsencrypt-staging’. To the learner, this may suggest that it needs to be the same name as the secret that stores the account info for the issuer within the kube-system namespace.

      Please rename this in the tutorial to something like secretName: echo-1-2-staging-secret and, for production, secretName: echo-1-2-prod-secret.

      I had set up ingress using this a while back. Is there a way I can set it to auto update certs? I am now getting emails from letsencrypt that my certificates will be invalid in some time.

      Hi Hanif, Excellent article and thanks for putting this together. I have a specific question about the certificate. Appreciate if you can help me out. Let me explain the situation. I am running an application “User Registration” (REST API Based) in GKE cluster with HAProcxy Ingress Controller and HTTP(S) L7 loadbalancer. I have an existing domain (e.g. mydomain.com) where I am hosting my website. The hosting platform has provided a SSL certificate which is securing wild card domain (*.mydomain.com) and my website opens with https://mydomain.com. I have created a subdomain apps.mydomain.com and pointed “A” record to the GCP HTTP(S) loadbalancer IP, so that I can access the application over the internet. I can access my application over the internet on port 80. But it does not work on Post 443 (with https).

      As per the article, I need to generate a certificate and key using ACME, and the same need to be used in the Cluster Issuer and Ingress.

      Do I need to ask my domain/hosting provider for the Certificate and Key? Or I can use ACME to generate another certificate with hostname: apps.mydomain.com?

      I even downloaded the certificate and key from my hosting provider’s website (there was an option - use your own server) and used the same in the Ingress only (used the certificate and key as a secret). But my website turned into an insecure mode, and when curling the API https://apps.mydomain.com/CreteUser there was an error:

      TCP_NODELAY set

      • schannel: failed to receive handshake, need more data
      • schannel: SSL/TLS connection with apps.mydomain.com port 443 (step 2/3)
      • schannel: encrypted data got 1845
      • schannel: encrypted data buffer: offset 1845 length 4096
      • schannel: SNI or certificate check failed: SEC_E_WRONG_PRINCIPAL (0x80090322) - The target principal name is incorrect.
      • Closing connection 0
      • schannel: shutting down SSL/TLS connection with demo.apps.product.barnsleypujo.co.uk port 443
      • schannel: clear security context handle curl: (35) schannel: SNI or certificate check failed: SEC_E_WRONG_PRINCIPAL (0x80090322) - The target principal name is incorrect.

      Could you please help me out?

      Thanks, Suvendu

      This command fails in Helm 3, where the --name flag has been removed:

      helm install --name cert-manager --namespace kube-system jetstack/cert-manager --version v0.8.0
      

      This worked for me.

      helm install cert-manager jetstack/cert-manager --namespace kube-system --version v0.12.0
      

      This is a very helpful tutorial! However is this still working? I’m following it and everything is going smoothly until this instruction

      helm install --name cert-manager --namespace kube-system jetstack/cert-manager --version v0.8.0
      
      Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1"
      

      I have no idea how to fix this. Any idea?

      thanks!

      Anyone stuck at Waiting for CertificateRequest?

      Status:
        Conditions:
          Last Transition Time:  2019-12-18T11:27:58Z
          Message:               Waiting for CertificateRequest "echo-tls-4274852748" to complete
          Reason:                InProgress
          Status:                False
          Type:                  Ready
      Events:
        Type    Reason        Age   From          Message
        ----    ------        ----  ----          -------
        Normal  GeneratedKey  15m   cert-manager  Generated a new private key
        Normal  Requested     15m   cert-manager  Created new CertificateRequest resource "echo-tls-4274852748"
      

      Thank you for this great tutorial. I was wondering two things about the LoadBalancer instance.

      First, the graphs section shows: You do not have a forwarding rule configured for HTTP traffic. Some data cannot be displayed. You have at least one non-HTTP forwarding rule on this Load Balancer. Some data cannot be displayed.

      Is there a way, and if yes, how, to adapt the configuration in order to access those graphs?

      Secondly, I understood that only the droplet hosting the ingress pod will appear as healthy. My concern is about the HTTPS load my system will take when it is online (around 100 HTTPS hits/sec); I’m migrating to Kubernetes partly because my former docker-compose setup takes too much load in its load-balancing docker instance (90% CPU…).

      So, is it feasible and good practice to add replicas to the ingress? How do I do that smartly in order to make sure they balance the load across all pods?

      This guide helped me big time to troubleshoot a deployment on GitLab with DigitalOcean k8s, helm and the gitlab auto devops charts.

      I thought I’d leave a note here in case anybody else runs into this, since it’s really hard to find good info if you are a beginner with all this stuff… I beat my head against all this CI/CD stuff for probably 2 weeks total and felt like a fool.

      I kept getting the Ingress Controller Fake Certificate no matter what I did. In my case, I had an old-school SSL cert that I wanted to use, and I had to make sure that the tls secret was getting created in the same namespace as the ingress+service (I put it in my helm charts).

      Out of the box, I never got letsencrypt to work with Gitlabs ingress + cert-manager integration. Very frustrating…

      I tried to move from Traefik to nginx+certmanager several months ago and failed. This tutorial finally got me through successfully.

      Thanks!
