Kernel Memory Accounting


Hi CentOS experts,

I am using CentOS 7 and am trying to disable kernel memory accounting.
According to https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt, passing cgroup.memory=nokmem to the kernel at boot time should achieve that.

However, that is not the case on my system. This is what I have now:
$ grep CONFIG_MEMCG_KMEM /boot/config-3.10.0-327.36.3.el7.x86_64

CONFIG_MEMCG_KMEM=y

$ cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-327.36.3.el7.x86_64
root=UUID=568066-5719-46d9-981d-278c7559689b ro quiet cgroup.memory=nokmem systemd.log_level
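As a first sanity check (a sketch; the kernel-config path assumes a stock CentOS install), it is worth confirming that the parameter actually made it onto the live command line and that the running kernel was built with kmem accounting:

```shell
# Show just the cgroup.memory= token from the live command line, if any.
grep -o 'cgroup\.memory=[^ ]*' /proc/cmdline || echo "parameter not present"
# Confirm the running kernel was built with kmem accounting
# (path follows the stock CentOS layout; may differ elsewhere).
grep CONFIG_MEMCG_KMEM "/boot/config-$(uname -r)" 2>/dev/null || true
```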

5 thoughts on - Kernel Memory Accounting

  • First – why in the world would you want to disable kernel memory accounting? I don’t think that is even possible (though I am not a kernel programmer myself), because the kernel must account for every bit of real and virtual memory in the system in order to do its job.

    Second – the first note in the doc to which you refer says that it is hopelessly out of date, and further down it indicates that it covers 2.6 kernels, while we are now at 4.9.

    So now my question boils down to – what is it that you are trying to do that makes you think you have to disable kernel memory accounting?

  • I have the 3.10 kernel. I am running a data processing job that first needs to copy big (>5 GB) input files. The jobs were killed because the system thought I had used 5 GB of memory for the file copying.

    On Fri, Mar 10, 2017 at 3:04 PM, David Both
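One way to see whether that 5 GB is really just the copy’s page cache rather than the job’s own memory is to look at the job’s cgroup in memory.stat. A sketch, assuming cgroup v1; the cgroup path `step_0` here is hypothetical and should be adjusted to wherever the job actually runs:

```shell
# Hypothetical cgroup path -- adjust to the cgroup your job runs in.
CG=/sys/fs/cgroup/memory/step_0
if [ -f "$CG/memory.stat" ]; then
    # "cache" is reclaimable page cache; "rss" is anonymous memory.
    # A large "cache" value right after a big copy is normal.
    grep -E '^(cache|rss) ' "$CG/memory.stat"
else
    echo "no memory cgroup at $CG"
fi
```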

  • Well, that is exactly what it is supposed to do. The easy way to fix this is to add more memory. A wildly impractical attempt to turn off memory accounting will result in a really borked system that will suck up all your time trying to recompile the kernel to make it work. Don’t even go down that road.

    Memory is very cheap these days. Your time is one of the most valuable commodities on the planet.

    And oh, by the way – are you sure it is RAM you ran out of and not hard drive space?

    So my questions now become –

    How much RAM do you have?

    How much swap space?

    What error message did you get?

    Are you using something like top, htop, iotop, or glances to monitor your system and discover the root cause of this problem?

    Do you have SAR installed and enabled? You would also need to set the granularity to 1 minute instead of the default 10.

    What does SAR tell you?

    What devices or media are you copying the data from and to?

    But no matter how many questions you answer, my response will probably still be the same – get more RAM. Or at least more of the limiting resource – and that does sound like RAM right now.
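For reference, the SAR granularity change mentioned above lives in sysstat’s cron file; a sketch, assuming the stock CentOS 7 layout:

```
# /etc/cron.d/sysstat -- the stock entry collects every 10 minutes:
#   */10 * * * * root /usr/lib64/sa/sa1 1 1
# change the interval to every minute:
*/1 * * * * root /usr/lib64/sa/sa1 1 1
```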

  • If you’re using ‘cp’ you probably aren’t using 5G of RAM. That’s not how ‘cp’ works. Actual errors might be helpful here.

    If you are running batch processing and don’t want the OOM Killer to ever get involved, the cgroup memory accounting features actually let you turn it off with memory.oom_control=1 in the cgroup. You can also turn off the heuristic overcommit memory manager (see https://www.kernel.org/doc/Documentation/vm/overcommit-accounting for details) but I suggest figuring out your problem with copying files first.


    Jonathan Billings
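The memory.oom_control suggestion above can be sketched as follows. The cgroup name `batchjobs` is hypothetical (not from the thread), and this needs root plus a mounted cgroup-v1 memory controller:

```shell
# Hypothetical cgroup name for the batch jobs -- an assumption.
CG=/sys/fs/cgroup/memory/batchjobs
if [ -d /sys/fs/cgroup/memory ] && mkdir -p "$CG" 2>/dev/null \
   && [ -w "$CG/memory.oom_control" ]; then
    # Writing 1 disables the OOM killer for this cgroup: tasks that hit
    # the memory limit pause (under_oom) instead of being killed.
    echo 1 > "$CG/memory.oom_control"
    cat "$CG/memory.oom_control"
else
    echo "need root and a mounted cgroup-v1 memory controller"
fi
```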

  • Thank you! With ‘cp’ the job was killed. There was an error message:
    Mar 10 09:50:04 kernel: SLUB: Unable to allocate memory on node -1 (gfp=0x8020)
    Mar 10 09:50:04 kernel: cache: kmalloc-64(5:step_0), object size: 64, buffer size: 64, default order: 0, min order: 0
    Mar 10 09:50:04 kernel: node 0: slabs: 4, objs: 256, free: 0
    Mar 10 09:50:04 kernel: node 1: slabs: 0, objs: 0, free: 0

    When I replaced the cp command with ‘dd bs=4M iflag=direct oflag=direct …’, the file copying ran happily to completion.
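For anyone reproducing this, here is a self-contained sketch of a dd-based copy on small temp files. The direct-I/O flags from the thread (iflag=direct oflag=direct) are left out to keep the demo portable, since O_DIRECT needs filesystem support and aligned block sizes; adding them back is what sidesteps the page cache:

```shell
# Self-contained demo of a dd-based copy on small temp files.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null   # 8 MiB test input
# The thread's variant adds iflag=direct oflag=direct here to bypass
# the page cache entirely.
dd if="$src" of="$dst" bs=4M 2>/dev/null              # the copy itself
cmp -s "$src" "$dst" && echo "copy OK"                # prints "copy OK"
rm -f "$src" "$dst"
```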

    CentOS mailing list CentOS@CentOS.org https://lists.CentOS.org/mailman/listinfo/CentOS