Question

NFS server inside the same k8s cluster as its clients

Hello, I’m trying to set up an NFS server in k8s and make its exports available to other pods. The server manifests follow. For the server implementation I’m using this image: itsthenetwork/nfs-server-alpine:12

I defined a headless Service called nfs-server that exposes the required port 2049:

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  selector:
    app: nfs-server
  clusterIP: None

Then I defined a StatefulSet that starts a pod with a block-storage volume. The volume is mounted at the exported path, and the pod runs as privileged:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: 'itsthenetwork/nfs-server-alpine:12'
        imagePullPolicy: Always
        ports:
        - containerPort: 2049
        env:
        - name: SHARED_DIRECTORY
          value: "/nfs"
        volumeMounts:
        - mountPath: /nfs
          name: nfs-volume-claim
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: nfs-volume-claim
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: do-block-storage

When I start a pod for testing purposes, it never becomes ready because the volume mount fails:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: busybox
        command:
        - sh
        - -c
        - 'while true; do date > /tmp/test; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /resources/
          name: nfs-volume-m
      volumes:
      - name: nfs-volume-m
        nfs:
          #server: nfs-server
          server: nfs-server.default.svc.cluster.local
          path: /nfs
          readOnly: false

It seems the system cannot reach the NFS server via the Service name, even though a simple ping from inside a container resolves the IP correctly. I tried the following server names in the NFS volume specification: nfs-server and nfs-server.default.svc.cluster.local.

I think k8s is trying to resolve the name from the worker node’s point of view, but I’m not sure about it. How can I approach this problem?
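(Editor’s note, a possible explanation rather than a verified answer: the mount for an `nfs` volume is performed by the kubelet on the worker node, and the node’s resolver typically cannot resolve cluster-internal Service names, which would match the symptom that in-pod ping works but the mount fails. A common workaround is to make the Service non-headless so it gets a stable ClusterIP, then point the volume at that IP. A minimal sketch based on the manifests above; the IP shown is a placeholder, and the `path: /` detail reflects the nfs-server-alpine image’s documented behavior of exporting SHARED_DIRECTORY as the NFSv4 root, so verify it against the image’s README.)

```yaml
# Regular (non-headless) Service: identical to the one above,
# except `clusterIP: None` is dropped so a virtual IP is allocated.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  selector:
    app: nfs-server
---
# In the client pod spec, reference the Service's ClusterIP
# (look it up with `kubectl get svc nfs-server`) instead of a DNS name:
      volumes:
      - name: nfs-volume-m
        nfs:
          server: 10.245.0.123   # placeholder: substitute the actual ClusterIP
          path: /                # the image exports SHARED_DIRECTORY as the NFS root
          readOnly: false
```

If hard-coding a looked-up IP feels fragile, one option is to pin a static `clusterIP` in the Service spec (choosing a free address inside the cluster’s service CIDR) so the client manifest can reference it declaratively.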

Thank you, N

