CentOS 7.0 And Mismatched Swap File


Everyone,

I am putting together a new mail server for our firm using a SuperMicro with CentOS 7.0. When I performed the install of the OS, I had put the 16 GB of memory in the wrong slots on the motherboard, which caused the SuperMicro to recognize 8 GB instead of 16 GB. When I installed CentOS 7.0, this error made the swap 8070 MB instead of the over 16000 MB I would have expected.

I am using the default XFS file system on the other partitions. Is there a way to expand the swap space? If not, is this problem bad enough that I should start over with a new install? I do not want to start over unless I need to.

Thanks for your help!!!

Greg Ennis

16 thoughts on - CentOS 7.0 And Mismatched Swap File

  • You can grow XFS, or create more swap as files on an existing filesystem (“swap in file”).
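
    For example, a minimal sketch of adding an 8 GB swap file (the path and size here are assumptions, adjust to taste):

        # create the file with dd rather than a sparse/fallocated file, which swapon may reject
        dd if=/dev/zero of=/swapfile bs=1M count=8192
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        # make it persistent across reboots
        echo '/swapfile none swap defaults 0 0' >> /etc/fstab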

  • 8G of swap should be more than enough for a 16G system unless you plan to severely over-commit memory.

    Regards,
    Dennis
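
    For reference, a quick way to see how much of that swap is actually being touched and what the overcommit policy is (standard tools, nothing CentOS-specific):

        free -m                              # current RAM and swap usage
        swapon -s                            # per-device swap usage
        cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (the default)
        cat /proc/sys/vm/swappiness          # how eagerly the kernel swaps (0-100)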

  • I’d like to think you will want to avoid the use of swap at almost all costs, because it will slow the system down by a lot. So you really don’t need a 1:1 RAM-to-swap ratio for a server that isn’t using suspend-to-disk. You only need enough swap so that things can chug along (slowly) without totally imploding, giving you enough time to kill whatever is hogging RAM or do a more graceful reboot than you otherwise would.

    It’s fairly stable now, but the longer-term solution for this is LVM thin provisioning, which means the bulk of the extents remain free (not associated with any LV) until they’re needed. You wouldn’t use a thin volume for swap itself, but those free extents can be used to make the conventional LV used for swap bigger. And for thin volumes it mostly obviates having to resize them at all. A sketch of growing a swap LV out of free extents follows below.
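
    As a rough sketch of that last point (the VG and LV names are assumptions):

        swapoff /dev/vg0/swap            # take the swap LV offline
        lvextend -L +8G /dev/vg0/swap    # grab free extents from the volume group
        mkswap /dev/vg0/swap             # re-initialize the larger LV as swap
        swapon /dev/vg0/swap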

  • You lucked out, honestly. You really don’t want 8GB of swap on your system. What will most likely happen is that you’ll have a process that starts running away eating memory, and it’ll try to use all of that swap before the kernel’s OOM killer can kick in. You will not enjoy thrashing 8GB of swap for probably hours.

    Really what you should do is drastically reduce the amount of swap you have allocated, and reclaim most of that 8GB of swap space for storage filesystems. In my experience, a few hundred MB of swap is more than sufficient to be able to swap out seldom-used memory while not taking too long to OOM. If you really find a need for more swap later, you can allocate a swap file; it’s slightly less efficient than a swap partition, but compared to real memory the difference will be negligible.

    –keith
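
    If the installer put swap on LVM (the CentOS 7 default layout), shrinking it and handing the space to an XFS LV might look roughly like this; the LV names and sizes are assumptions, and note that XFS itself can only be grown, never shrunk:

        swapoff /dev/centos/swap
        lvreduce -L 512M /dev/centos/swap        # shrink swap to 512 MB
        mkswap /dev/centos/swap
        swapon /dev/centos/swap
        lvextend -l +100%FREE /dev/centos/home   # hand the freed extents to another LV
        xfs_growfs /home                         # grow the XFS filesystem on it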

  • I am sure glad I did not start over on the installation. Thank you to everyone for the information and education!!!!!

    Thanks again!!!!

    Greg

  • A neat trick for a server whose memory is less than ideal compared to the storage it has is a pile of swap on an SSD. Having xfs_repair use swap on an SSD is a lot faster than its fallback behavior when memory is low and there’s no swap. Whereas swapping to an HDD… brutal.

    Chris Murphy
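
    A sketch of that setup, assuming /dev/sdb2 is a partition set aside on the SSD (hypothetical name); giving it a higher priority makes the kernel use it before any HDD swap:

        mkswap /dev/sdb2
        swapon -p 10 /dev/sdb2   # higher priority than swap added without an explicit priority
        # or persistently, in /etc/fstab:
        # /dev/sdb2  none  swap  defaults,pri=10  0 0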

  • Morning Gregory,

    Sunday, February 15, 2015, 6:42:32 PM, you wrote:

    I think that on today’s systems, swap is just “the last resort” before a system crashes. If you ever run into a situation where you really need swap, you will have to put more memory into the machine anyway. I wouldn’t worry about it.

    Btw., are you sure you want to use XFS for a mail server? I ran some tests about a year ago and found that ext4 was faster than XFS by a factor of 10. The tests used a “maildir”-style Postfix installation, which results in many thousands of files in the user directories. The only problem was that CentOS was not able to format huge ext4 partitions out of the box; this was the case for C6, I don’t know about C7.

    best regards
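
    (On that last point: with a 64-bit-capable e2fsprogs, formatting an ext4 filesystem larger than 16 TiB is a matter of enabling the 64bit feature; the device name below is an assumption:

        mkfs.ext4 -O 64bit /dev/sdc1   # 64bit feature allows ext4 filesystems beyond 16 TiB

    The C6-era e2fsprogs did not support this, which matches the “out of the box” limitation mentioned above.)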

  • Here is a recent benchmark using Postmark, which supposedly simulates mail servers; XFS stacks up a bit better than ext4: http://www.phoronix.com/scan.php?page=article&item=linux-3.19-ssd-fs&num=3

    A neat trick for big, busy mail servers that comes up on linux-raid@ and the XFS list from time to time is using md linear/concat to put the physical drives together into a single logical block device and then formatting it as XFS. XFS will create multiple AGs (allocation groups) across all of those devices and do parallel writes across all of them. It often performs quite a bit better than raid0, precisely because of the many-thousands-of-small-files-in-many-directories workload.
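
    A minimal sketch of that arrangement (drive names and mount point are assumptions; mkfs.xfs picks a sensible allocation group count on its own, but it can be forced with agcount):

        mdadm --create /dev/md0 --level=linear --raid-devices=4 \
              /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.xfs /dev/md0                # or: mkfs.xfs -d agcount=16 /dev/md0
        mount /dev/md0 /var/spool/mail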

  • Hey Chris,

    I am not sure I understand what you wrote: “XFS will create multiple AG’s across all of those devices.” Are you comparing md linear/concat to md raid0, and saying that XFS will run on top of them?

    (Just to make sure I understood what you have written.)

    Eliezer

  • Yes to the first question; I’m not sure I understand the second. Allocation groups are created at mkfs time. When the workload I/O involves a lot of concurrency, XFS over linear will beat XFS or ext4 over raid0, whereas for streaming-performance workloads striped raid will work better. If redundancy is needed, mdadm permits creating raid1+linear, as compared to raid10. http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html

    You can think of XFS on linear as being something like raid0 at a file level rather than at a block level. On a completely empty filesystem, if you start copying a pile of dozens or more (typically hundreds or thousands) of files into mail directories, XFS distributes those across AGs, and hence across all drives, in parallel. ext4 would for the most part focus all writes on the first device until it is mostly full, then the 2nd device, then the 3rd. And on raid0 you get a bunch of disk contention that isn’t really necessary, because everyone’s files are striped across all drives.

    So contrary to the popular opinion that XFS is mainly useful for large files, it’s actually quite useful for concurrent read/write workloads of many small files on a many-disk linear/concat arrangement. This extends to using raid1+linear instead of raid10 if some redundancy is desired.
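
    A sketch of the raid1+linear variant (all device names assumed): mirror pairs first, then concatenate the mirrors and put XFS on top.

        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
        mdadm --create /dev/md3 --level=linear --raid-devices=2 /dev/md1 /dev/md2
        mkfs.xfs /dev/md3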

  • Thanks Chris for the detailed response!

    I couldn’t quite follow the complex sentence about XFS and was almost convinced that XFS might offer a new way to spread data across multiple disks.

    And in this case it’s mainly me and not you.

    Now I understand how a md linear/concat array can be exploited with XFS!

    Not directly related, but given that XFS has commercial support, that can be an advantage over other file systems which are built to handle lots of small files but might not have commercial support.

    Eliezer

  • The other plus is that growing linear arrays is cake. They just get added to the end of the concat, and xfs_growfs is used. Takes less than a minute. Whereas md raid0 grow means converting to raid4, then adding the device, then converting back to raid0. And further, linear grow can be any size drive, whereas clearly with raid0 the drive sizes must all be the same.
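
    The grow itself is just a couple of commands (device and mount point are assumptions):

        mdadm --grow /dev/md0 --add /dev/sdf   # append the new drive to the linear array
        xfs_growfs /var/spool/mail             # grow XFS into the new space, online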

  • It’s not new in XFS, it’s behaved like this forever. But it’s new in that other filesystems don’t allocate this way. I’m not aware of any other filesystem that does.

    Btrfs could possibly be revised to do this somewhat more easily than other filesystems, since it also has a concept of allocation chunks. Right now its single data profile allocates in 1 GB chunks until full, and the next 1 GB chunk goes on the next device in a sequence mainly determined by free space. This is how it’s able to use different-sized devices (including raid0, 1, 5, 6). So it can read files from multiple drives at the same time, but it tends to only write to one drive at a time (unless using one of the striping raid-like profiles).
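
    That chunk-level allocation can be watched per device with the standard btrfs tools, for example:

        btrfs filesystem show /mnt   # devices in the filesystem and how much is allocated on each
        btrfs filesystem df /mnt     # chunk allocation by type (data / metadata / system)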

  • Nice!
    I have been learning about md arrays and have seen the details about the grow operation, but it’s an aspect I wasn’t thinking about at first. For now I am not planning any storage changes, but it might come in handy later on.

    Thanks, Eliezer