NFS Server CentOS7


Hi guys,

we are setting up an NFS server on a CentOS 7 system. Everything works OK
except the speed, which drops badly over NFS: if we run a dd command directly on the server we get around 1.4 Gbps, but from a client connected over NFS we only get about 200 Mbps.

Do you maybe have some advice on what we should check?

Thank you!

Best, Erik

6 thoughts on - NFS Server CentOS7

  • I would start with something like iperf to measure the actual network throughput between the client and server.
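
    For example, a quick sketch, assuming iperf3 is installed on both ends (the hostname is a placeholder):

    # on the NFS server
    iperf3 -s

    # on the client: 30-second test with 4 parallel streams
    iperf3 -c nfs-server.example.com -P 4 -t 30
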
    — Skylar Thompson (skylar2@u.washington.edu)
    — Genome Sciences Department (UW Medicine), System Administrator
    — Foege Building S046, (206)-685-7354
    — Pronouns: He/Him/His

  • Make sure you have a good physical connection speed along the whole path from the server to the client(s), and enable jumbo frames on all switches along the path. NFS experts will add NFS-specific tuning.
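
    A minimal sketch for one Linux host (the interface name ens192 and the hostname are placeholders; the same MTU has to be configured end to end, including any vSwitches):

    # set MTU 9000 on the interface and verify it
    ip link set dev ens192 mtu 9000
    ip link show dev ens192 | grep mtu

    # end-to-end check: an 8972-byte ping must pass without fragmentation
    ping -M do -s 8972 nfs-server.example.com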

    Valeri

  • Hi guys,

    Thank you all for the replies, and sorry, I had digest mode enabled so I did not get all the messages in time.

    Here are answers:

    Message 2:

    The client and server are virtual machines inside a VMware environment connected at 10 Gbps. The network is divided between the virtual machines. The physical servers are connected to a Nexus switch at 10 Gbps.

    Message 3:

    We have tried measuring speeds with iperf; they are between 1 and 6 Gbps, sometimes up to 8 Gbps, depending on the network load. All checks were done in sync mode (NFS will be used for important data and we do not want to lose anything).

    Message 4:

    The config is the default, just installed and tested with both NFSv3 and NFSv4.

    Message 5:

    We did not enable jumbo frames on the network.

    For all: We are testing the speed with this script (PHP 7.4):

    [the PHP script was stripped by the list archive; only this fragment remains]

    … => 1, ‘over’ => 1.25, ‘fail’ => BENCHFAIL_SLOWHARDDRIVE); … ?>
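
    As a cross-check, a plain dd run over the NFS mount takes the PHP layer out of the measurement; a sketch with placeholder paths and a sync export:

    # on the server, /etc/exports entry in sync mode
    /export/data 10.0.0.0/24(rw,sync,no_subtree_check)

    # on the client, write 1 GiB and flush it before dd reports the rate
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fdatasync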

  • NFSv4.x works best with newer kernels. Maybe you could try an ELRepo kernel.
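
    A rough sketch of that on CentOS 7 (double-check the current URLs and package names on elrepo.org):

    # add the ELRepo repository and install the mainline kernel
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml

    # make the new kernel the default and reboot
    grub2-set-default 0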

    Increase the number of NFS threads on the NFS server (default is 8, which is too little).

    RPCNFSDCOUNT=32
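
    On CentOS 7 that line normally goes into /etc/sysconfig/nfs; then restart and verify, for example:

    systemctl restart nfs-server
    cat /proc/fs/nfsd/threads   # running nfsd thread count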

    This is an example of kernel tuning for a 10 Gb network; adjust it to your setup:

    vm.min_free_kbytes = 1048576

    vm.swappiness = 10

    net.core.somaxconn = 4096

    # set max buffers to 64MB
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864

    # increase Linux autotuning TCP buffer size to 32MB
    net.ipv4.tcp_rmem = 4096 87380 33554432
    net.ipv4.tcp_wmem = 4096 65536 33554432

    net.core.netdev_max_backlog = 250000

    net.ipv4.neigh.default.unres_qlen = 100

    net.ipv4.neigh.ens3f0.unres_qlen = 100

    net.ipv4.neigh.ens3f1.unres_qlen = 100

    fs.file-max = 98584540
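
    These settings can go into a drop-in file such as /etc/sysctl.d/90-nfs-tuning.conf (the file name is arbitrary) and be loaded with:

    sysctl --system                              # reload all sysctl configuration files
    sysctl net.core.rmem_max net.core.wmem_max   # spot-check a couple of values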

    You can calculate the buffer size with this tool:

    https://www.switch.ch/network/tools/tcp_throughput/?do+new+calculation=do+new+calculation
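
    The underlying figure is just the bandwidth-delay product; a quick sketch with assumed numbers (10 Gbit/s link, 2 ms round-trip time):

    # buffer ≈ bandwidth in bytes/s × RTT in seconds
    echo $(( 10 * 1000 * 1000 * 1000 / 8 * 2 / 1000 ))   # 2500000 bytes ≈ 2.5 MB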

    Enable readahead on the underlying block devices, for instance:

    blockdev --setra 16384 /dev/mapper/mpatha1

    for x in sd[a-d]; do blockdev --setra 16384 /dev/$x ; done
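
    (--setra is in 512-byte sectors, so 16384 is 8 MB.) The current value can be checked with, for example:

    blockdev --getra /dev/mapper/mpatha1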

    It also helps if you disable flow control on the network cards (and switches).
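
    For example, on the NIC side (interface name as used above; the switch port needs the matching setting):

    ethtool -a ens3f0                 # show current pause/flow-control settings
    ethtool -A ens3f0 rx off tx off   # disable RX/TX flow control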

    You could also try to cap the maximum transmit rate on the NFS host (set the max rate to 8 Gbit, for example).
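
    One possible sketch with tc (the tbf parameters are rough guesses and need tuning for your traffic):

    tc qdisc add dev ens3f0 root tbf rate 8gbit burst 10mb latency 50ms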

    It would be best to test the actual speed with an iperf3 test, as someone else already suggested. But since the performance is so bad, I would suspect that the source of the problem is in the network, not the server.

    Cheers,

    Barbara