NFS Not Recognizing Available File Space


Hi,

I have a server running CentOS 5.8 and I appear to be in a situation in which the NFS file server is not recognizing the available space on a particular disk (actually a hardware RAID-6 of 13 × 2 TB disks). If I try to write to the disk I get the following error message:

[root@nas-0-1 mseas-data-0-1]# touch dum
touch: cannot touch `dum': No space left on device

However, if I check the available space, there seems to be plenty:

[root@nas-0-1 mseas-data-0-1]# df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1              21T   20T  784G  97% /mseas-data-0-1

[root@nas-0-1 mseas-data-0-1]# df -i .
Filesystem            Inodes   IUsed      IFree IUse% Mounted on
/dev/sdb1         3290047552 4391552 3285656000    1% /mseas-data-0-1

I don’t know if the following is relevant but the disk in question is served as one of 3 bricks in a gluster namespace.

Based on the test with touch, which happens directly on the NFS server (bypassing gluster), this seems to be an NFS rather than a gluster issue. I couldn't find any file in /var/log whose timestamp corresponded to the failed touch test, and I didn't see anything in dmesg. We have tried rebooting this system. What else should we look at, or try, to resolve or debug this issue?

Thanks.

Pat

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301

8 thoughts on - NFS Not Recognizing Available File Space

  • Maybe you’re hitting the allocation of reserved blocks for root?
    With your disk usage of 97% I’d think that could be the case.

    You didn’t say what file system you’re using for that 21TB array, so we
    (this list) won’t be of too much help without knowing that.

    tune2fs [0] is your friend
    – use it to determine whether reserved blocks are set
    – use it to adjust the setting (see the sketch at the end of this reply)

    [0] https://wiki.archlinux.org/index.php/ext4#Remove_reserved_blocks

    If you have a non-root shell account on that box, can you write to that array directly on the NFS host?
    (Take NFS out of the equation.)
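
    To make that concrete, a minimal sketch along these lines (assuming an
    ext3/ext4 filesystem on /dev/sdb1 — tune2fs does not apply to other
    filesystem types — and "someuser" is just a placeholder account):

    # show the root-reserved block count (the default reserve is 5%)
    tune2fs -l /dev/sdb1 | grep -i 'reserved block count'

    # on a large data-only array, reduce the root reserve to 0%
    tune2fs -m 0 /dev/sdb1

    # as a non-root user, try a local write to take NFS out of the picture
    su - someuser -c 'touch /mseas-data-0-1/write-test'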

  • Hi:

    It's an XFS file system. The fstab line for this array is:

    /dev/sdb1 /mseas-data-0-1 xfs defaults 1 0

    If I read the man pages correctly, tune2fs will not work for XFS. From xfs_info I get the following for /mseas-data-0-1:

    [root@nas-0-1 mseas-data-0-1]# xfs_info .
    meta-data=/dev/sdb1            isize=256    agcount=32, agsize=167846667 blks
             =                     sectsz=512   attr=1
    data     =                     bsize=4096   blocks=5371093344, imaxpct=25
             =                     sunit=0      swidth=0 blks, unwritten=1
    naming   =version 2            bsize=4096
    log      =internal             bsize=4096   blocks=32768, version=1
             =                     sectsz=512   sunit=0 blks, lazy-count=0
    realtime =none                 extsz=4096   blocks=0, rtextents=0

    Unfortunately, I don't know how to interpret this or whether it gives information relevant to the question at hand.

    Unfortunately we only have a root account on that box.
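
    (For the archives: since tune2fs is ext-only, a couple of read-only
    XFS-side checks can help interpret a situation like this. The device
    and mount point here are taken from the fstab line above; note that
    xfs_db output can be slightly stale on a mounted filesystem.)

    # summarize the free-space extents XFS still sees (read-only)
    xfs_db -r -c "freesp -s" /dev/sdb1

    # see which options the filesystem is actually mounted with
    # (depending on the kernel, inode64 may or may not be listed)
    grep mseas-data-0-1 /proc/mounts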

  • Hi James,

    My system did not recognize the delaylog option, but when I mounted with nobarrier,inode64
    things worked and I was able to write to the array!

    Thanks!

    Pat
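
    (For the archives, the change that worked here presumably amounted to
    something like the following. The exact commands are a sketch, and on
    older kernels inode64 only takes effect on a fresh mount rather than a
    remount.)

    # remount the array with the options that worked
    # (inode64 lets XFS allocate new inodes beyond the first ~1 TB of the
    #  volume, which is a common cause of ENOSPC on large, mostly-full XFS
    #  filesystems even when df shows free space)
    umount /mseas-data-0-1
    mount -o nobarrier,inode64 /dev/sdb1 /mseas-data-0-1

    # make the options persistent via /etc/fstab:
    /dev/sdb1  /mseas-data-0-1  xfs  defaults,nobarrier,inode64  1 0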

  • Pat Haley wrote:
    a) Please don't top post. b) A couple of things. First, is your system
    on a UPS? barrier is *supposed* to help make transactions atomic, so
    that files/dbs are not left in an undefined state after a power blip or
    outage. Second, nobarrier will *VERY* much speed up your NFS writes. We
    finally started moving people's home directories from 5.x to 6.x when
    we found that on 5.x, uncompressing and untarring a 25MB file that
    expanded to about 105MB on an NFS-mounted drive took about 30 seconds,
    while on 6.x it ran around 7 MINUTES. Then we found nobarrier... and it
    went back down to pretty much what it had been on 5.x.

    mark
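
    (A rough way to reproduce that comparison — the tarball name is just a
    placeholder: run this in an NFS-mounted directory before and after
    adding nobarrier, with a sync included so client-side caching doesn't
    flatter the timing.)

    time sh -c 'tar xzf sample-25MB.tar.gz && sync'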

  • and please DO NOT forget to trim all the unwanted quoted text :-)


    Many thanks.