NFS Performance On CentOS 7


I am setting up a file server with CentOS 7. I’m seeing performance that is considerably slower than on a similar server running CentOS 6.6. A 3 GB directory can be copied to/from the CentOS 6.6 server in about 50 seconds. The same directory takes about 270 seconds to copy to/from the CentOS 7 system.

I see the same performance difference with NFS-mounted file systems or using scp, so it doesn’t appear to be an NFS issue. The MTU on the NICs on both systems is 1500, and changing it to 6000 on the CentOS 7 system had no effect.

Anyone have any ideas what might cause this problem or how to fix it?
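For reference, this is roughly how the MTU was checked and changed (a sketch; eth0 is a placeholder for the actual interface name):

```shell
# Show the current MTU on the interface:
ip link show eth0 | grep -o 'mtu [0-9]*'

# Set it back to 1500 for an apples-to-apples comparison (takes effect
# immediately, but does not persist across reboots):
ip link set dev eth0 mtu 1500
```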

4 thoughts on - NFS Performance On CentOS 7

  • My first guess would be that stat() operations are the bottleneck. Are you using network authentication of some kind? If so, I’d try to identify differences in the authentication cache.

    For instance, CentOS 6 may be using nslcd or nscd, while CentOS 7 is using sssd or nslcd. Repeated UID/GID lookups without effective caching will slow things down as you describe.
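A quick way to check that theory is to time repeated lookups (a sketch; substitute a network-authenticated account for "root"):

```shell
# Time 100 repeated UID lookups. With a warm local cache this finishes
# almost instantly; if every call round-trips to LDAP/AD it will crawl.
time (for i in $(seq 1 100); do id root > /dev/null; done)

# sssd keeps its own on-disk cache; to see truly cold lookups on the
# CentOS 7 box, invalidate it first:
#   sss_cache -E
```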

  • You could test your network performance in isolation, using iperf, iperf3, or netperf, and then test your disk I/O separately (others may suggest tools for that). Currently you are measuring two big subsystems at once, each of which can have its own issues and/or affect the other. By splitting the tests you will get a clearer picture of what is better or worse.

    If you can sustain near line rate with iperf, the issue is probably somewhere else.

    Marcelo

  • 3 GB / 50 seconds ≈ 480 Mbps

    Can’t speak directly to CentOS6/7 differences nor NFS on CentOS6/7….

    I’ve seen NFS (NetApp filer to VMware host to Windows VM) sustain 10,000 Mbps, so the NFS protocol itself isn’t the bottleneck given sufficient hardware. Since scp performs similarly to NFS, the on-the-wire protocol isn’t the problem.

    Verify the MTU setting end to end (the payload size is the MTU minus 28 bytes of IP/ICMP headers):
    ping a.b.c.d -M do -s 8972
    Or in your case:
    ping a.b.c.d -M do -s 5972
    (6000 is a very odd MTU)

    I’d start by getting the latest/validated driver from $NICVendor.

    What IO throughput does the local file system give?
    Test with hdparm / dd / iometer / sqlio / cp -a /path /dev/null

    Test server to server with iperf as others suggested.

    Hope that points you in the right direction.

    Steven Tardy
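The local file system check above can be sketched with dd (a rough sequential test; /tmp/ddtest is a throwaway file):

```shell
# Sequential write: 1 GiB, bypassing the page cache so the figure
# reflects the disk rather than RAM (oflag=direct needs a filesystem
# that supports O_DIRECT).
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

# Sequential read of the same file, also uncached:
dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct

rm -f /tmp/ddtest
```

Comparing these numbers between the CentOS 6.6 and CentOS 7 boxes will show whether local disk throughput explains the gap before blaming the network.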