The State Of Xfs On CentOS 6?


We’re looking at getting an HBR (that’s a technical term: honkin’ big RAID). What I’m considering, rather than chopping it up into 14TB or
16TB filesystems, is using xfs for really big filesystems. The question that’s come up is: what’s the state of xfs on CentOS 6? I’ve seen a number of older threads describing problems with it – has that mostly been resolved?
How does it work if we have some *huge* files, and lots and lots of smaller files?

mark

7 thoughts on - The State Of Xfs On CentOS 6?

  • I have some largish (>20TB) xfs filesystems on CentOS 6, and things seem fine. The one issue I had, quite a while ago (and maybe even in
    CentOS 5?), was with growing the fs, but I grew one on CentOS 6 recently with no problems (a quick growfs sketch follows this reply).

    Define “huge”. It seems fine for our use with multi-dozen-GB files
    (possibly getting to >100GB files) and many small files, but our load is generally not that heavy.

    –keith
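
    A minimal sketch of the create-and-grow cycle, assuming a hypothetical /dev/sdb and mount point (the device name and path are made up, not from this thread):

        # one-time creation; mkfs.xfs picks sane defaults even for big volumes
        mkfs.xfs -L bigraid /dev/sdb
        mkdir -p /export/bigraid
        mount /dev/sdb /export/bigraid

        # after enlarging the underlying RAID volume, grow the fs online --
        # note that xfs_growfs takes the *mounted* filesystem, not the device
        xfs_growfs /export/bigraid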

  • Keith Keller wrote:

    At the moment, files that are tens of gigs, but I would not be at *all*
    surprised to see another digit there in the next year or two. The HBR
    is a Jetstor 742 with 42 4TB drives…. I’m assuming they’ll want me to do RAID 6 for this, as we’ve been doing on other RAIDs.

    mark

  • That’s significantly bigger than what I have running, but I have heard of people running XFS on larger filesystems than I have.

    If you’re making one large filesystem out of that, the warning James mentioned is even more important: you will want a boatload of memory to xfs_repair that fs quickly. The rough rule of thumb I’ve read (which is probably akin to the 2x RAM==swap guideline) is at least 1GB of memory for every 1TB of storage, but the more the better, I’d wager.

    –keith
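
    Putting rough numbers on that rule of thumb for the array in question (these are estimates, not measurements): 42 x 4TB drives in RAID 6 leaves about 40 drives of data, i.e. roughly 160TB, so on the order of 160GB of RAM for a comfortable xfs_repair. Recent xfsprogs can also cap repair memory explicitly; a sketch with a hypothetical /dev/sdb (the filesystem must be unmounted first):

        # check-only pass, makes no modifications
        xfs_repair -n /dev/sdb

        # full repair, capped at ~32GB of memory (-m takes megabytes)
        xfs_repair -m 32768 /dev/sdb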

  • I’ve had good luck with XFS filesystems of 80TB or so for nearline archival storage. That’s 36 3TB SAS drives organized as 3 x 11 raid6+0,
    with 3 hot spares.

    I’ve found two minor(?) gotchas so far with XFS:

    1) NFS doesn’t like 64-bit inodes. You can A) only nfs-share the root of the giant XFS filesystem (this *is* the traditional way, but people from a Windows background seem to like to micromanage their shares), B) use UUID exports (not compatible with all nfs clients, in my experience), or C) specify fsid=NNN with an arbitrary unique NNN for each export on a given server. We opted for C (a sketch follows the list).

    2) I just discovered the other night that KVM doesn’t like booting disk image files stored on xfs on a 4K-sector device (in my case, an SSD). The solution was to specify cache=writeback, which somehow bypasses O_DIRECT (sketch at the end of this post). There are probably other fixes, but that works well enough.
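
    For option C above, a minimal /etc/exports sketch (paths and client subnet are made up; the fsid values just need to be small integers that are unique on the server):

        # manually assigned fsid= works around the 64-bit inode problem
        # when exporting subdirectories of one big xfs filesystem
        /export/bigfs/projects  192.168.1.0/24(rw,fsid=101)
        /export/bigfs/scratch   192.168.1.0/24(rw,fsid=102)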

    Also, there was a bad kernel in 6.3 or something that had a serious bug with XFS. The fix came out 2-3 weeks after 6.3 was released, but I’ve
    run into internal operations people who don’t update production systems: if you say you tested something on 6.3, they use 6.3 forever.
    They pathologically skip my installation step 2, “yum -y update”.
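
    And for gotcha 2, the fix is just the cache mode on the drive; a one-line sketch, assuming a hypothetical image path (the default cache=none opens the image with O_DIRECT, which is what failed here):

        # boot the guest with writeback caching instead of O_DIRECT
        qemu-kvm -m 1024 -drive file=/vmstore/guest.img,if=virtio,cache=writeback

    Under libvirt, the same thing is cache='writeback' on the disk’s <driver> element.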

  • No, this bug actually caused a big chunk of directories to disappear after a power failure event. I might be wrong about 6.3; it could have been 6.2 or something.

    I agree, XFS is my go-to Linux filesystem for large volumes. ZFS is my other favorite, but I still don’t trust it on Linux.