I have been experimenting with the early access of Kubernetes. In GKE, for example, it’s easy to set up a standard VM with an NFS server configured and then install the Helm chart for nfs-client-provisioner, so you can have shared storage among pods (for things like redundant Django apps running in different pods). I tried this with DO and it did not work, even after adjusting the firewall settings for the k8s nodes.
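For context, the GKE setup described above is roughly the following; the server address and export path here are hypothetical placeholders, and the chart name is from the old `stable` Helm repo:

```shell
# Sketch only: install nfs-client-provisioner (Helm 2 syntax), pointing it at
# an existing NFS server VM. 10.0.0.2 and /exports are placeholder values.
helm install stable/nfs-client-provisioner \
  --name nfs-client \
  --set nfs.server=10.0.0.2 \
  --set nfs.path=/exports
```

The provisioner then exposes a StorageClass that dynamically carves out subdirectories of that export for each claim.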
I am unaware of another inexpensive shared-storage option; the standard PVCs offered by DO don’t permit ReadWriteMany. Does anyone have a better solution?
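For reference, this is the kind of claim that fails against do-block-storage but works with an NFS-backed StorageClass (names and sizes here are illustrative):

```shell
# Illustrative PVC requesting ReadWriteMany. Against do-block-storage this
# access mode is not supported; against an NFS-backed StorageClass (e.g. one
# created by nfs-client-provisioner, assumed here as "nfs-client") it binds.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi
EOF
```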
DigitalOcean support comes through again–they let me know that they had only just started to deploy the NFS tools on worker nodes, and that if I deployed a cluster with k8s version 1.11.3-do.1 it should work. I tried again today and nfs-server-provisioner works now, as does rook.io with CephFS.
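If you want to try the same thing on a new cluster, a rough sketch (Helm 2 syntax, `stable` repo chart; the release name and size are illustrative):

```shell
# Deploy nfs-server-provisioner backed by a DO block-storage volume. It then
# provides its own StorageClass ("nfs") that supports ReadWriteMany claims.
helm install stable/nfs-server-provisioner \
  --name nfs-server \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=20Gi
```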
Quick bonnie++ test, two different clusters, same worker node configuration:
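One way to run such a test from inside the cluster is a throwaway pod that mounts the shared claim; the claim name, image, and paths below are assumptions:

```shell
# Sketch: run bonnie++ against a mounted shared volume from a one-off pod.
# "shared-data" is a hypothetical existing ReadWriteMany claim.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bonnie-test
spec:
  restartPolicy: Never
  containers:
  - name: bonnie
    image: debian:stretch
    command: ["bash", "-c",
      "apt-get update && apt-get install -y bonnie++ && bonnie++ -d /data -u root"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-data
EOF
kubectl logs -f bonnie-test
```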
This example can also help: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
Looks like the workers just don’t have the NFS tools needed to mount NFS shares. Both nfs-server-provisioner over “do-block-storage” and an external NFS server (already used on my old k8s cluster) failed to mount.
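A quick way to see whether this is your problem is to check the failing pod’s events, where the kubelet surfaces the mount error (`<pod-name>` is a placeholder):

```shell
# When the NFS client utilities are missing on a worker, the mount fails and
# the error appears in the pod's events. Typical symptoms include messages
# like "mount: wrong fs type" or "mount.nfs: command not found".
kubectl describe pod <pod-name> | grep -A 10 Events
```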