Tutorial

How To Set Up ReadWriteMany (RWX) Persistent Volumes with NFS on DigitalOcean Kubernetes

Published on February 19, 2020

This tutorial is out of date and no longer maintained.

Introduction

With the distributed and dynamic nature of containers, managing and configuring storage statically has become a difficult problem on Kubernetes, with workloads now being able to move from one Virtual Machine (VM) to another in a matter of seconds. To address this, Kubernetes manages volumes with a system of Persistent Volumes (PV), API objects that represent a storage configuration/volume, and PersistentVolumeClaims (PVC), a request for storage to be satisfied by a Persistent Volume. Additionally, Container Storage Interface (CSI) drivers can help automate and manage the handling and provisioning of storage for containerized workloads. These drivers are responsible for provisioning, mounting, unmounting, removing, and snapshotting volumes.

The digitalocean-csi driver integrates a Kubernetes cluster with the DigitalOcean Block Storage product. A developer can use it to dynamically provision Block Storage volumes for containerized applications in Kubernetes. However, applications sometimes require data to be persisted and shared across multiple Droplets. DigitalOcean’s default Block Storage CSI solution cannot mount one block storage volume to many Droplets simultaneously, which makes it a ReadWriteOnce (RWO) solution: the volume is confined to one node. The Network File System (NFS) protocol, on the other hand, does support exporting the same share to many consumers. This is called ReadWriteMany (RWX), because many nodes can mount the volume as read-write. We can therefore use an NFS server within our cluster to provide storage that combines the reliable backing of DigitalOcean Block Storage with the flexibility of NFS shares.
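The distinction shows up directly in a PersistentVolumeClaim’s accessModes field. The claim below is only an illustrative sketch (the claim name and size are placeholders, and the nfs storage class will not exist until you create it in Step 1); you will write a real claim as part of Step 2:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-shared-claim      # placeholder name, for illustration only
spec:
  accessModes:
    - ReadWriteMany               # RWX: many nodes may mount the volume read-write
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs           # the default do-block-storage class only supports ReadWriteOnce (RWO)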

In this tutorial, you will configure dynamic provisioning for NFS volumes within a DigitalOcean Kubernetes (DOKS) cluster in which the exports are stored on DigitalOcean Block Storage volumes. You will then deploy multiple instances of a demo Nginx application and test the data sharing between each instance.

Note: The deployment of nfs-server described in this tutorial is not highly available, and is therefore not recommended for use in production. Instead, the setup described here is meant as a lighter-weight option for development, or for testing ReadWriteMany (RWX) Persistent Volumes for educational purposes.

Prerequisites

Before you begin this guide you’ll need the following:

  • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation.

  • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. To create a Kubernetes cluster on DigitalOcean, see our Kubernetes Quickstart. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

Note: Starting with Helm version 3.0, Tiller no longer needs to be installed for Helm to work. If you are using the latest version of Helm, see the Helm installation documentation for instructions.
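If you want to confirm your tooling before moving on, the following checks (assuming kubectl and helm are already on your PATH) print the client versions and the cluster your current kubectl context points at:

kubectl version
kubectl cluster-info
helm version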

Step 1 — Deploying the NFS Server with Helm

To deploy the NFS server, you will use a Helm chart. Deploying a Helm chart is an automated solution that is faster and less error-prone than creating the NFS server deployment by hand.

First, make sure that the default chart repository stable is available to you by adding the repo:

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

Next, pull the metadata for the repository you just added. This will ensure that the Helm client is updated:

helm repo update

To verify access to the stable repo, perform a search on the charts:

helm search repo stable

This will give you a list of available charts, similar to the following:

Output
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
stable/acs-engine-autoscaler    2.2.2           2.1.1         DEPRECATED Scales worker nodes within agent pools
stable/aerospike                0.3.2           v4.5.0.5      A Helm chart for Aerospike in Kubernetes
stable/airflow                  5.2.4           1.10.4        Airflow is a platform to programmatically autho...
stable/ambassador               5.3.0           0.86.1        A Helm chart for Datawire Ambassador
...

This result means that your Helm client is running and up-to-date.

Now that you have Helm set up, install the nfs-server-provisioner Helm chart to set up the NFS server. If you would like to examine the contents of the chart, take a look at its documentation on GitHub.

When you deploy the Helm chart, you are going to set a few variables for your NFS server to further specify the configuration for your application. You can also investigate other configuration options and tweak them to fit the application’s needs.
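If you would like to review every setting the chart accepts before installing it, you can print the chart’s default values. The command below assumes Helm 3; with Helm 2, helm inspect values stable/nfs-server-provisioner serves the same purpose:

helm show values stable/nfs-server-provisioner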

To install the Helm chart, use the following command:

helm install nfs-server stable/nfs-server-provisioner --set persistence.enabled=true,persistence.storageClass=do-block-storage,persistence.size=200Gi

This command provisions an NFS server with the following configuration options:

  • Adds a persistent volume for the NFS server with the --set flag. This ensures that all NFS shared data persists across pod restarts.
  • For the persistent storage, uses the do-block-storage storage class.
  • Provisions a total of 200Gi for the NFS server to be able to split into exports.

Note: The persistence.size option will determine the total capacity of all the NFS volumes you can provision. At the time of this publication, only DOKS version 1.16.2-do.3 and later support volume expanding, so resizing this volume will be a manual task if you are on an earlier version. If this is the case, make sure to set this size with your future needs in mind.
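If you would rather keep these settings in version control than on the command line, the same configuration can be supplied as a values file. The sketch below is equivalent to the --set flags shown above; the filename nfs-values.yaml is only a placeholder:

# nfs-values.yaml (equivalent to the --set flags used in this step)
persistence:
  enabled: true
  storageClass: do-block-storage
  size: 200Gi

You would then install the chart with helm install nfs-server stable/nfs-server-provisioner -f nfs-values.yaml instead of passing --set.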

After this command completes, you will get output similar to the following:

Output
NAME: nfs-server
LAST DEPLOYED: Thu Feb 13 19:30:07 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.

A storage class named 'nfs' has now been created and is available to provision dynamic volumes.

You can use this storageclass by creating a PersistentVolumeClaim with the correct storageClassName attribute. For example:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

To see the NFS server you provisioned, run the following command:

kubectl get pods

This will show the following:

Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          11m

Next, check for the storageclass you created:

kubectl get storageclass

This will give output similar to the following:

Output
NAME                         PROVISIONER                                        AGE
do-block-storage (default)   dobs.csi.digitalocean.com                          90m
nfs                          cluster.local/nfs-server-nfs-server-provisioner    3m

You now have an NFS server running, as well as a storageclass that you can use for dynamic provisioning of volumes. Next, you can create a deployment that will use this storage and share it across multiple instances.

Step 2 — Deploying an Application Using a Shared PersistentVolumeClaim

In this step, you will create an example deployment on your DOKS cluster in order to test your storage setup. This will be an Nginx web server app named web.

To deploy this application, first write the YAML file to specify the deployment. Open up an nginx-test.yaml file with your text editor; this tutorial will use nano:

nano nginx-test.yaml

In this file, add the following lines to define the deployment with a PersistentVolumeClaim named nfs-data:

nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs

Save the file and exit the text editor.

This deployment is configured to use the accompanying PersistentVolumeClaim nfs-data and mount it at /data.

In the PVC definition, you will find that the storageClassName is set to nfs. This tells the cluster to satisfy this storage using the rules of the nfs storageClass you created in the previous step. The new PersistentVolumeClaim will be processed, and then an NFS share will be provisioned to satisfy the claim in the form of a Persistent Volume. The pod will attempt to mount that PVC once it has been provisioned. Once it has finished mounting, you will verify the ReadWriteMany (RWX) functionality.

Run the deployment with the following command:

kubectl apply -f nginx-test.yaml

This will give the following output:

Output
deployment.apps/web created
persistentvolumeclaim/nfs-data created
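You can also confirm that the claim bound successfully and inspect the Persistent Volume the provisioner created for it:

kubectl get pvc nfs-data
kubectl get pv

Once the claim is bound, kubectl get pvc will list nfs as its storage class and RWX among its access modes.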

Next, check to see the web pod spinning up:

kubectl get pods

This will output the following:

Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          23m
web-64965fc79f-b5v7w                  1/1     Running   0          4m

Now that the example deployment is up and running, you can scale it out to three instances using the kubectl scale command:

kubectl scale deployment web --replicas=3

This will give the output:

Output
deployment.extensions/web scaled

Now run the kubectl get command again:

kubectl get pods

You will find the scaled-up instances of the deployment:

Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          24m
web-64965fc79f-q9626                  1/1     Running   0          5m
web-64965fc79f-qgd2w                  1/1     Running   0          17s
web-64965fc79f-wcjxv                  1/1     Running   0          17s

You now have three instances of your Nginx deployment that are connected to the same Persistent Volume. In the next step, you will make sure that they can share data with each other.

Step 3 — Validating NFS Data Sharing

For the final step, you will validate that the data is shared across all the instances that are mounted to the NFS share. To do this, you will create a file under the /data directory in one of the pods, then verify that the file exists in another pod’s /data directory.

To validate this, you will use the kubectl exec command. This command lets you specify a pod and perform a command inside that pod. To learn more about inspecting resources using kubectl, take a look at our kubectl Cheat Sheet.

To create a file named hello_world within one of your web pods, use kubectl exec to pass along the touch command. Note that the characters after web- in the pod name will be different for you, so make sure to replace the highlighted pod name with one of your own pods, as found in the output of kubectl get pods in the last step.

kubectl exec web-64965fc79f-q9626 -- touch /data/hello_world

Next, change the name of the pod and use the ls command to list the files in the /data directory of a different pod:

kubectl exec web-64965fc79f-qgd2w -- ls /data

Your output will show the file you created within the first pod:

Output
hello_world

This shows that all the pods share data using NFS and that your setup is working properly.
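As a further check, you can write actual content from one pod and read it back from another. Again, the pod names below are the ones from this example, so substitute your own:

kubectl exec web-64965fc79f-q9626 -- sh -c 'echo "hello from the first pod" > /data/hello_world'
kubectl exec web-64965fc79f-qgd2w -- cat /data/hello_world

The second command should print the text written by the first, confirming that writes are immediately visible across pods.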

Conclusion

In this tutorial, you created an NFS server that was backed by DigitalOcean Block Storage. The NFS server then used that block storage to provision and export NFS shares to workloads over an RWX-compatible protocol. In doing this, you were able to work around a technical limitation of DigitalOcean Block Storage and share the same PVC data across many pods. By following this tutorial, your DOKS cluster is now set up to accommodate a much wider set of deployment use cases.

If you’d like to learn more about Kubernetes, check out our Kubernetes for Full-Stack Developers curriculum, or look through the product documentation for DigitalOcean Kubernetes.




Comments



Do you have any advice on how to handle the situation of the NFS provisioner pod being evicted from an unhealthy node?

Occasionally this can happen, but the unhealthy node fails to detach the DO CSI volume, which means the NFS provisioner is unable to start on a different node, bringing down all pods that have an NFS volume mounted.

The NFS Server Provisioner Helm chart that this tutorial relies on is no longer maintained and is deprecated.

https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

Moreover, what the chart used to deploy (Kubernetes External Storage NFS) is also deprecated.

https://github.com/kubernetes-retired/external-storage

The final location for what carried over from the project seems to be at:

https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner

The old instructions in this thread no longer seem applicable, and it’s not certain whether the latest state of the project above is compatible with DO Kubernetes.

I think someone should look into this and update this guide.

I need to mount an NFS volume from a different cluster: the nfs-server deployment runs as a pod in one cluster, and I need a shared filesystem mounted by pods in a different cluster. I also tried Spaces object storage with the csi-s3 plugin for Kubernetes, but it is very slow and blocks my system. Can you help me find the best solution with DigitalOcean?

Do you have any ideas?

All of this is dependent on: https://github.com/kubernetes-retired/external-storage

But that is apparently retired/dead/not to be used anymore, and certainly no longer updated (think security).

Will there be an update to this article with alternatives or a replacement NFS solution?

Ok so if I run…

helm install nfs-server stable/nfs-server-provisioner --set persistence.enabled=true,persistence.storageClass=do-block-storage,persistence.size=200Gi

…then a volume in DO is created and attached to one node. That’s fine, but is there a way (an extra parameter in the --set argument) to explicitly set the name of the newly created volume, to make it easier to distinguish which Helm release corresponds to which volume in the DO admin area?

Currently, the volume names look like this:

pvc-1dxxxxa6-xxxx-490c-xxxx-642cae4fxxxx

which doesn’t help.

I’d really appreciate an answer because I can’t find this on Google. Maybe there are other interesting parameters out there; a link to the DO manual would also be useful.

Is there a way to set the NFS mount options with this setup?

Interesting. I’m still seeing a single point of failure here: since DO Block Storage is attached to one node, how does this storage solution behave when that node fails?


Thanks, perfect tutorial.

Has anyone noticed performance issues in this setup?
