Tune2fs: Filesystem Has Unsupported Feature(s) While Trying To Open

I have an ext4 filesystem for which I’m trying to use “tune2fs -l”. Here is the listing of the filesystem from the “mount” command:

# mount | grep share
/dev/mapper/VolGroup_Share-LogVol_Share on /share type ext4 (rw,noatime,nodiratime,usrjquota=aquota.user,jqfmt=vfsv0,data=writeback,nobh,barrier=0)

When I try to run “tune2fs” on it, I get the following error:

# tune2fs -l /dev/mapper/VolGroup_Share-LogVol_Share
tune2fs 1.41.12 (17-May-2010)
tune2fs: Filesystem has unsupported feature(s) while trying to open /dev/mapper/VolGroup_Share-LogVol_Share
Couldn't find valid filesystem superblock.

This filesystem was created on this system (i.e. not imported from another system). I have other ext4 filesystems on this server, and they all work with “tune2fs -l”.
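
For what it’s worth, the feature flags can in principle be read straight off the superblock with the other e2fsprogs tools; a rough sketch (the 1.41.12 builds here may well fail the same way, in which case a newer build of e2fsprogs would be needed):

# Sketch only: show the feature flags recorded in the ext4 superblock.
dumpe2fs -h /dev/mapper/VolGroup_Share-LogVol_Share | grep -i features
debugfs -R features /dev/mapper/VolGroup_Share-LogVol_Share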

Basic system info:

# rpm -qf `which tune2fs`
e2fsprogs-1.41.12-18.el6.x86_64

# cat /etc/redhat-release
CentOS release 6.5 (Final)

# uname -a
Linux lnxutil8 2.6.32-504.12.2.el6.x86_64 #1 SMP Wed Mar 11 22:03:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

I did a little web searching on this; most of the hits were for much older systems where, for example, e2fsprogs only supported up to ext3 but the user had an ext4 filesystem. Obviously that’s not the case here: the filesystem was created with the mkfs.ext4 binary from the same e2fsprogs package as the tune2fs binary I’m trying to use.
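
For what it’s worth, that is easy to double-check with rpm; a quick sketch:

# Sketch: confirm mkfs.ext4 and tune2fs come from the same e2fsprogs build.
rpm -qf $(which mkfs.ext4) $(which tune2fs)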

Anyone ever seen anything like this?

Thanks!

13 thoughts on - Tune2fs: Filesystem Has Unsupported Feature(s) While Trying To Open

  • That’s in the CentOS 6.4 repo; I don’t see a newer one through 6.7, but I didn’t do a thorough check, just a quick Google search with the site: filter.

    And that’s a CentOSplus kernel from the 6.6 repo, while the regular kernel for 6.7 is currently kernel-2.6.32-573.22.1.el6.src.rpm. So I’m going to guess you’d have this problem even if you weren’t using the CentOSplus kernel.

    I suggest you do a yum upgrade anyway (6.7 is current), clean it up, and test it. Chances are the problem will still be there, and in that case it’s probably a legitimate bug worth filing. In the meantime you’ll have to upgrade your e2fsprogs yourself; a rough sketch of that follows at the end of this comment.

    Well, the date of the kernel doesn’t tell the whole story; you need a secret decoder ring to figure out what’s been backported into these distro kernels. There’s far, far less backporting happening in the user-space tools, so it’s not difficult for them to get stale while the kernel is gaining new features. My guess is that the kernel has newer features than the progs support, and the progs are too far behind.

    And yes, this happens on the XFS and Btrfs lists too, where people are using old progs with new kernels and it can be a problem. Sometimes new progs and old kernels are a problem too, but that’s less common.
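
    For reference, a rough sketch of building a newer e2fsprogs in a scratch directory and running its tune2fs without installing it system-wide (the release and URL are illustrative, not a tested recommendation):

    # Build a current e2fsprogs from source; the version shown is only an example.
    wget https://www.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs/v1.42.13/e2fsprogs-1.42.13.tar.gz
    tar xzf e2fsprogs-1.42.13.tar.gz
    cd e2fsprogs-1.42.13
    ./configure && make
    # The freshly built binaries live under misc/ in the source tree.
    ./misc/tune2fs -l /dev/mapper/VolGroup_Share-LogVol_Share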

  • tune2fs against an LVM logical volume (albeit one formatted with ext4) is not the same as tune2fs against ext4.

    Could this possibly be a machine where uptime has outlived its usefulness?

  • tune2fs operates on the content of a block device. A logical volume containing an ext4 filesystem is exactly the same as a partition containing an ext4 filesystem.

  • Then you either made a mistake or ran into a bug. Both “normal” disk partitions and logical volumes are regular block devices, and tune2fs or any other tool operating on block devices will see no difference between them and will treat them identically.
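
    One quick way to see that is to check the device node type; a partition and a logical volume both show up as plain block devices. A minimal sketch (the partition name is just an example):

    # Both paths resolve to block special files; tools see no difference.
    stat -L -c '%n: %F' /dev/sda1 /dev/mapper/VolGroup_Share-LogVol_Share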

  • uptime=insecurity. Patches must be kept up to date these days, or your uptime won’t matter when your server gets compromised.

  • This sounds like an MS Windows admin’s statement. Are there any Unix admins still around who remember systems whose kernel didn’t need [security] patching for a few years? And a libc that didn’t need security patches often? I almost said glibc, but on those Unixes it was libc; glibc, however, wasn’t getting security patches very often some time ago either. After all, it is only the kernel and libc/glibc that actually require a reboot (no ksplice or similar for me on servers, thank you).

    It sounds to me like the system you are talking about, and we, the sysadmins administering it, are pretty much in the MS Windows ballpark already. Right?

    Sorry about my rant. I still consider not-well-debugged code to be not-well-debugged code…

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • ALL systems need patching, so obsessing about uptime is insecurity on its face. It does not matter if it is Windows or Linux or anything else.

  • As I said, I feel like I’m hearing MS Windows admins on this list. There are only two things that require a reboot in the UNIX and Linux worlds: kernel patches, or rather the installation of a patched kernel (and again, no ksplice or similar on my servers), and libc or glibc (since everything is linked against libc/glibc, it is virtually impossible to swap in a patched libc/glibc in RAM).

    All other updates/patches do not require a reboot (at least for those who know what they are doing).
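
    For what it’s worth, a sketch of how one can check what actually still needs restarting after patching, instead of rebooting by reflex (needs-restarting comes from the yum-utils package on EL6; the lsof check is a heuristic, not an exact test):

    # Open files that have already been deleted on disk (often replaced libraries).
    lsof +L1 | grep '\.so'
    # Summary of processes that should be restarted after updates (yum-utils).
    needs-restarting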

    Just my $0.02.

    Valeri

  • I like to reboot every few months just to make sure all the services that are supposed to come up do come up, so that if the machine unexpectedly goes down and is brought back up automatically, it is more likely to function as it should.

    But that has nothing to do with patches.

  • I guess I was trying to stress my point the wrong way (or to stress the wrong point). I probably should have asked:

    How many users do you have to see logged into your server before you become obsessed with not rebooting it? Or: do users on your number crunchers ever run jobs that take a month or longer? (Yes, my users and I do know about checkpointing, still…)

    But if it is a single-user server or a laptop, who cares, right? On the other hand, who would even care to mention it?

    Just my humble point of view.

    Valeri

  • I’m on both sides. On the one hand, rebooting serves a particularly utilitarian purpose: identifying changes that were made at runtime but inadvertently not made persistent. In that context, rebooting often is better than not. On the other hand, I, like Valeri, get tired of having to explain that you don’t have to reboot a server every time your app runs out of memory or hangs, as that promotes the false thinking that ‘the OS was the cause of my app hanging, because when we restarted the OS everything was fine again.’

    Now, that said, if someone is hanging their hat on “I have high uptime, therefore my system is secure,” then they deserve what they get. That’s really a separate topic from reboots in the *nix world, I think.