The service works with one replica, but when I try to scale it I get a Multi-Attach error for the volume. I think this has something to do with DO Block Storage, which supports ReadWriteOnce (RWO) only. Is there a way to make it work?
Hi there @jthegreat,
I believe that NFS is not really needed. I've recently tested this, and it works as expected with a PersistentVolumeClaim backed by DigitalOcean Block Storage.
You can follow the steps on how to do that here:
https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/
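For reference, a minimal claim might look something like the sketch below (the name and size are just placeholders; `do-block-storage` is the default StorageClass on DOKS). Note the ReadWriteOnce access mode, which is what limits the volume to pods on a single node:

```bash
# Minimal sketch: a PVC backed by DigitalOcean Block Storage.
# "my-pvc" and the 5Gi size are placeholders; do-block-storage is
# the default StorageClass that DOKS ships with.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
EOF
```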
Let me know how it goes.
Regards,
Bobby
Answer from DO support:
Thank you for reaching out!
I understand your concern here about ReadWriteMany. We won't be able to support ReadWriteMany volumes because of some inherent limitations of our Block Storage product. Our Engineering team is working towards a solution or workaround for this ReadWriteMany issue; however, I can't provide a specific ETA for it.
Our do-block-storage product does have a few technical limitations that can cause trouble for some use cases. You can find more information on these limitations in our documentation:
https://www.digitalocean.com/docs/kubernetes/overview/#persistent-data
You can work around these limitations by using a different protocol to share and mount the storage, for example by exporting it to the rest of the cluster over NFS.
There is a Helm chart, nfs-server-provisioner, that creates a containerized NFS server. You can then back the NFS server with do-block-storage and use NFS exports for shared-storage use cases, or to get around the mounting limitations. You can find more information and instructions on the Helm chart and the Helm project in the links below.
nfs-server-provisioner: https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
helm: https://helm.sh/docs/using_helm/#quickstart
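To illustrate, a rough sketch of that setup could look like the following, assuming Helm 3 syntax and the chart's default values (the release name, sizes, and claim name are placeholders; by default the chart registers a StorageClass named `nfs`):

```bash
# Sketch: run a containerized NFS server, itself backed by a
# do-block-storage volume (release name and size are placeholders).
helm install nfs-server stable/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=11Gi

# The chart's default StorageClass is named "nfs" and supports
# ReadWriteMany; claims against it are served by the NFS server.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 5Gi
EOF
```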
Currently, setting up NFS is the workaround for RWX using a PVC. However, the nfs-server deployment described here is not highly available and is therefore not recommended for production use: if a node fails, the storage is down until Kubernetes redeploys the nfs-server pod. There are more production-ready setups out there, but this is still an easy and quick way to get up and running during development or when testing application proofs of concept that need RWX volumes.
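To tie this back to the original question, here is a hedged sketch of what scaling then looks like: a Deployment whose replicas all mount the same NFS-backed claim (the name, image, and mount path are placeholders):

```bash
# Sketch: multiple replicas sharing one RWX volume. Every pod mounts
# the same NFS-backed PVC, so scaling past one replica no longer
# triggers the Multi-Attach error seen with plain RWO block volumes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: nginx:1.21   # placeholder image
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: nfs-data   # the RWX claim from above
EOF
```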
The Helm chart above is good for testing proofs of concept or low-stakes workflows. The reason is that it is not highly available and is thus not recommended for production: if your nfs-server pod goes down for any reason, all of your storage exports become stale, which can lead to issues, and it typically doesn't recover automatically but needs a manual pod restart to get back up and running.
There are other setups out there, such as the https://rook.io/ project, which is highly available and allows RWX volumes, but its configuration and setup is a bit more involved.