Ext4 Deadlock Issue


I’m having an occasional problem with a box. It’s a Supermicro 16-core Xeon running CentOS 6.3 with kernel 2.6.32-279.el6.x86_64, 96 GB of RAM, and an Areca 1882ix-24 RAID controller with 24 disks, 23 in RAID6 plus a hot spare. The RAID is divided into three partitions: two of 25 TB plus one for the rest.

Lately, I’ve noticed sporadic hangs on writing to the RAID, which
“resolve” themselves with the following message in dmesg:

INFO: task jbd2/sdb2-8:3607 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
jbd2/sdb2-8 D 000000000000000a 0 3607 2 0x00000080
ffff881055d03d20 0000000000000046 ffff881055d03ca0 ffffffff811ada77
ffff8810552e2400 ffff88109c6566e8 0000000000002f38 ffff8810546e1540
ffff8810546e1af8 ffff881055d03fd8 000000000000fb88 ffff8810546e1af8
Call Trace:
[] ? __set_page_dirty+0x87/0xf0
[] ? prepare_to_wait+0x4e/0x80
[] jbd2_journal_commit_transaction+0x19f/0x14b0 [jbd2]
[] ? __switch_to+0xd0/0x320
[] ? lock_timer_base+0x3c/0x70
[] ? autoremove_wake_function+0x0/0x40
[] kjournald2+0xb8/0x220 [jbd2]
[] ? autoremove_wake_function+0x0/0x40
[] ? kjournald2+0x0/0x220 [jbd2]
[] kthread+0x96/0xa0
[] child_rip+0xa/0x20
[] ? kthread+0x0/0xa0
[] ? child_rip+0x0/0x20

I get two of these messages in close succession, then things proceed normally. It doesn’t happen often, and only under load. I’m not certain, but I think it happens just once per boot: once it has occurred, it doesn’t repeat until the next reboot.

Things otherwise seem fine: other logs, the RAID controller’s event log, and its physical-disk status and health reports are all normal. After getting this error once, rewriting the exact same file on the filesystem produces no errors and works normally, so I doubt it’s a physical problem with the drives.

Anyone seen anything like this before? It’s not very frequent, but it’s very annoying.

7 thoughts on - Ext4 Deadlock Issue

  • I haven’t, but the first thing I’d do in the situation you describe is update the firmware on the RAID card.

    I looked around at other discussions of the same error, and it looks like some people have also resolved that problem by switching to the deadline scheduler:

    # echo noop > /sys/block/sdb/queue/scheduler

    …but I wouldn’t do that unless you update the firmware and still see the error.
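    For what it’s worth, a sketch of checking and switching the scheduler (assuming the array is sdb, as in the trace above); the runtime change is lost at reboot, so making it stick needs a boot parameter:

    ```shell
    # The active scheduler is the bracketed name in the sysfs file,
    # e.g. "noop anticipatory deadline [cfq]"
    cat /sys/block/sdb/queue/scheduler

    # Switch to deadline at runtime (reverts on reboot)
    echo deadline > /sys/block/sdb/queue/scheduler

    # To persist across reboots on CentOS 6, append elevator=deadline
    # to the kernel line in /boot/grub/grub.conf
    ```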

  • Ok, will try the firmware first. I saw some talk of the scheduler, but I
    was uncertain if that applied in my case. By the way, doesn’t the command you include switch to the noop scheduler? Shouldn’t it be “echo deadline”?

  • I’ve had a chance to test this a bit more. Updating the firmware on the controller had no effect, so I tried changing the scheduler, but that doesn’t seem to work either. I have confirmed, however, that this happens exactly once per boot. I can test it by doing something like:

    dd if=/dev/zero of=test1.bin bs=16777216 count=1024 conv=sync
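    While that dd runs, the stall is easy to watch from another terminal; a minimal sketch reading the kernel’s standard writeback counters from /proc/meminfo:

    ```shell
    # Sample the dirty/writeback page counters a few times; during the
    # hang, Dirty keeps climbing while Writeback stays flat until the
    # jbd2 thread gets unstuck.
    for i in 1 2 3 4 5; do
        grep -E '^(Dirty|Writeback):' /proc/meminfo
        sleep 1
    done
    ```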