NFS Shutdown Issue


Hi all,

I have an odd interaction on a CentOS 7 file server. The basic setup is a minimal 7.x install. I have 4 internal drives (/dev/sd[a-d]) configured in a RAID5 and mounted locally on /data. This is exported via NFS to ~12 workstations, which use the exported file system for /home. I have an external drive connected via USB (/dev/sde) and mounted on /rsnapshot. I use rsnapshot to back up the RAID5 to the external drive periodically. I also export /rsnapshot (read-only) to the workstations so users can grab a backup copy of /home data if needed. These two mounts (/data and /rsnapshot) should be independent of each other. However, a couple of times now somebody has accidentally unplugged my backup drive (sde), which causes NFS to shut down. Here are the /var/log/messages entries from when this happens.

Oct 28 16:34:12 linux-fs01 kernel: xhci_hcd 0000:00:14.0: Cannot set link state.
Oct 28 16:34:12 linux-fs01 kernel: usb usb4-port2: cannot disable (err -32)
Oct 28 16:34:12 linux-fs01 kernel: usb 4-2: USB disconnect, device number 2
Oct 28 16:34:17 linux-fs01 systemd: Stopping NFS server and services...
Oct 28 16:34:17 linux-fs01 systemd: Stopped NFS server and services.
Oct 28 16:34:17 linux-fs01 systemd: Stopping NFSv4 ID-name mapping service...
Oct 28 16:34:17 linux-fs01 systemd: Stopping NFS Mount Daemon...
Oct 28 16:34:17 linux-fs01 systemd: Stopped NFSv4 ID-name mapping service.
Oct 28 16:34:17 linux-fs01 rpc.mountd[10570]: Caught signal 15, un-registering and exiting.
Oct 28 16:34:17 linux-fs01 systemd: Stopped NFS Mount Daemon.
Oct 28 16:34:17 linux-fs01 systemd: Stopped target Local File Systems.
Oct 28 16:34:17 linux-fs01 systemd: Unmounting /rsnapshot...
Oct 28 16:34:17 linux-fs01 kernel: XFS (sde1): Unmounting Filesystem
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): metadata I/O error: block 0x2800ccac8 ("xlog_iodone") error 5 numblks 64
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): xfs_do_force_shutdown(0x2) called from line 1221 of file fs/xfs/xfs_log.c. Return address = 0xffffffffc06cec30
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): Log I/O Error Detected. Shutting down filesystem
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): Please umount the filesystem and rectify the problem(s)
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): Unable to update superblock counters. Freespace may not be correct on next mount.
Oct 28 16:34:20 linux-fs01 systemd: Unmounted /rsnapshot.
Oct 28 16:34:20 linux-fs01 kernel: nfsd: last server has exited, flushing export cache

Should it be expected that NFS stops when one of the exports is unmounted? I don’t think so. Below is some more info on the setup. Another set of eyes on this would be much appreciated.

Darby

/etc/fstab entries

/dev/md/Storage /data xfs defaults 0 0
LABEL=rsnapshot /rsnapshot xfs defaults 0 0
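
One thing I’ve been wondering about, though I haven’t tested it yet: whether adding nofail (and a device timeout) to the sde entry would keep a disappearing drive from failing Local File Systems. Both options are documented in systemd.mount(5); a sketch of the entry would be:

LABEL=rsnapshot /rsnapshot xfs defaults,nofail,x-systemd.device-timeout=10 0 0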

/etc/exports

/data 10.0.223.0/22(rw,async,no_root_squash)
/rsnapshot 10.0.223.0/22(ro,sync,no_root_squash)
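
Relatedly, exports(5) documents a mountpoint option that only exports a directory if something is actually mounted there. I haven’t tried it, but a guarded version of the backup export would look like:

/rsnapshot 10.0.223.0/22(ro,sync,no_root_squash,mountpoint)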

mdadm info

[root@linux-fs01 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md125 : active raid5 sda[3] sdb[2] sdc[1] sdd[0]
134217728 blocks super external:/md127/0 level 5, 64k chunk, algorithm 0 [4/4] [UUUU]

md126 : active raid5 sda[3] sdb[2] sdc[1] sdd[0]
5440012288 blocks super external:/md127/1 level 5, 64k chunk, algorithm 0 [4/4] [UUUU]

md127 : inactive sda[3](S) sdb[2](S) sdd[1](S) sdc[0](S)
20804 blocks super external:imsm

unused devices: <none>
[root@linux-fs01 ~]# mdadm -D /dev/md/System
/dev/md/System:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 134217728 (128.00 GiB 137.44 GB)
Used Dev Size : 44739328 (42.67 GiB 45.81 GB)
Raid Devices : 4
Total Devices : 4

State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-asymmetric
Chunk Size : 64K

Consistency Policy : resync

UUID : b359c1f9:04655baa:52b3ed0c:5aa3c9b6
Number Major Minor RaidDevice State
3 8 0 0 active sync /dev/sda
2 8 16 1 active sync /dev/sdb
1 8 32 2 active sync /dev/sdc
0 8 48 3 active sync /dev/sdd
[root@linux-fs01 ~]#

3 thoughts on "NFS Shutdown Issue"

  • I disagree. This is probably intentional behavior in the NFS server: it catches the system being shut down so that clients can be cleanly notified and disconnected.

    When the NFS mountd exports a filesystem, it gets a kernel lock on that volume/directory (the Linux nfsd is a kernel thread, not a userspace daemon). I don’t believe the NFS server knows how to un-export filesystems on a per-volume basis as they come and go.

    If you expect the USB device not to be connected at all times, then you shouldn’t export it as an NFS share. I don’t think you can export an automount either.
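
    If you do want the backups reachable only while the drive is actually present, one alternative (a sketch only; I haven’t run this setup on CentOS 7) is to leave /rsnapshot out of /etc/exports entirely and have the backup script add and remove the export around the mount with exportfs:

    exportfs -o ro,sync,no_root_squash 10.0.223.0/22:/rsnapshot   # export after mounting the drive
    exportfs -u 10.0.223.0/22:/rsnapshot                          # un-export before unmounting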

  • Interesting – thanks for the info. I can understand that behavior in a shutdown situation, but I’m surprised that unmounting (either cleanly or uncleanly) one of two exported filesystems causes the NFS service to shut down completely. Is this consistent with other people’s experiences?

  • I don’t have block devices fail very often, but if you run "systemctl list-dependencies nfs-server" you should see your mount points listed as dependencies. For example, I export a filesystem mounted at "/export", and I see "export.mount" in the dependency list for that service. Given that, I would expect nfs-server to stop if the "export.mount" systemd unit (the /export mount) failed.
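
    On your box the check would look roughly like this (a sketch; the unit name rsnapshot.mount is my assumption of what systemd-fstab-generator creates for a /rsnapshot fstab entry):

    [root@linux-fs01 ~]# systemctl list-dependencies nfs-server | grep '\.mount'
    [root@linux-fs01 ~]# systemctl show rsnapshot.mount -p Before,WantedBy,RequiredBy

    The second command should show how the mount unit is tied into local-fs.target, which is presumably the chain that pulled nfs-server down when sde vanished.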