Information Week: RHEL 7 Released Today


Excerpt:
Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with embedded support for Docker containers and support for direct use of Microsoft’s Active Directory. The update uses XFS as its new file system.
— end excerpt —

30 thoughts on - Information Week: RHEL 7 Released Today

  • “direct use of Microsoft’s Active Directory” sounds interesting. Via
    Samba 4, or via some other implementation?

    Eero

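    For what it’s worth, the direct AD support in RHEL 7 is built on
    realmd and SSSD, with Samba libraries used for the join itself, as I
    understand it. A minimal sketch of what a join looks like; the domain
    and account names below are placeholders:

        #!/usr/bin/env python
        # Minimal sketch: discover and join an AD domain through realmd's
        # CLI. "example.com" and "Administrator" are placeholder names.
        import subprocess

        DOMAIN = "example.com"    # placeholder AD domain
        ADMIN = "Administrator"   # placeholder account with join rights

        # DNS/Kerberos discovery: shows what realmd can see of the domain.
        subprocess.check_call(["realm", "discover", DOMAIN])

        # Join the machine (prompts for the password); afterwards AD users
        # should resolve through SSSD, e.g. getent passwd user@example.com.
        subprocess.check_call(["realm", "join", "--user=" + ADMIN, DOMAIN])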

  • Tom Bishop wrote:

    Actually, I can wait a few weeks or so… that way, the first bug fixes will be out from RH. I am *always* happy to wait for release x.0.1 of anything, and not go with x.0….

    mark

  • I have a new (to me) server running *buntu at the moment since I have been waiting for 7 to be released so I will roll with it at home as soon as it becomes available.

    Go team CentOS!! :)

  • It’s different this time. The CentOS people have had inside access to RHEL since December last year.[1]

    So, when will CentOS 7 be released? :)

    Yes, I see Jim Perrin’s post today[2] that it is “in the build
    process,” but how long does that take, if nothing goes wrong, and then
    how long does ISO mastering and mirror seeding take? That is to say,
    what is the shortest possible delay? Then, how confident are those who
    have been doing this work that we will get through this build process
    without errors, based on what happened in the beta and RC stages?

    Ballpark guess, that’s all I’m asking. This week? This month? While the leaves are still on the trees in North America?

    [1] http://goo.gl/VkkRCw
    [2] http://goo.gl/fFhz93

  • They have the same “inside access to RHEL” as everyone else; namely the RHEL 7 beta and RC releases.

    John

  • Really? http://sdt.bz/content/article.aspx?ArticleIDh721&page=1

    Several CentOS core developers are now Red Hat employees, and this changes nothing?

    During the ~6-month RHEL 6 to CentOS 6 effort, the reason for the
    delay was that Red Hat surprised everyone by changing a bunch of
    things. Now we’ve had people on the inside for 6 months, but
    everyone’s still ignorant?

  • It has been repeatedly stated that there is an intellectual firewall in place between the CentOS project and the relevant parts within the Red Hat organization. I have seen nothing to date to doubt that statement.

    That was part of the published reason. Another factor was the
    overhaul of the build system. There have also been builds of the EL7
    beta and RC components for months now. I have thousands of reports
    from the mailing list for these, and complete build logs are
    available at http://buildlogs.CentOS.org as well now.

    John

    I wonder if the certification for 6.x will still be viable and
    available… and for how long? (sigh!) Guess I now have to find an
    RHEL 7 certification guide! …LOL!

    EGO II

  • m.roth@5-cent.us wrote:

    Does XFS have any advantages over ext4 for normal users, e.g. with laptops?
    I’ve only seen it touted for machines with enormous disks, 200TB plus.

    Does XFS have the same problems that LVM has if there are disk faults?

  • Timothy Murphy wrote:

    That’s monstrous. No, what I’ve read is that you want to use it when
    you need to go over 16TB. I’ve seen a lot of comments for ext4, and I
    think one or two in code that I googled, that say “really need to fix
    this to handle > 16TB”… I started seeing those around ’09 or ’10, and
    it’s generally documented, I believe, that this is still the case.
    (And you *don’t* want to do an fsck on a multi-TB filesystem that
    people need to use that day, or the next….)

    No data.

    mark

    It is generally better at handling a lot of files – faster
    creation/deletion when there are a large number in the same
    directory. The only downside for a long time has been on 32-bit
    machines, where the RH default 4k kernel stacks were too small.

    You can’t really expect any file system to work if the disk underneath is bad. Raid is your friend there.
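
    A quick way to see that difference yourself is to time mass creation
    and deletion in a single directory on each filesystem. A rough sketch
    (the target path is a placeholder; point it at the mount under test):

        #!/usr/bin/env python
        # Rough benchmark sketch: create and delete many small files in
        # one directory. Run once on an XFS mount, once on ext4, compare.
        import os, shutil, time

        TARGET = "/mnt/test/many-files"   # placeholder path
        COUNT = 100000

        os.makedirs(TARGET)

        start = time.time()
        for i in range(COUNT):
            with open(os.path.join(TARGET, "f%06d" % i), "w") as f:
                f.write("x")
        print("create: %.1fs" % (time.time() - start))

        start = time.time()
        shutil.rmtree(TARGET)
        print("delete: %.1fs" % (time.time() - start))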

  • On a slightly related note, is there anything, anywhere that would show the differences in the documentation between versions?
    Something like one of those side-by-side, color-coded diff listings
    that you can do with source code would be ideal to flip through on a
    wide screen.

  • No one is INSIDE … the CentOS team works from the same place now as we always have … our homes.

    The 4 people hired by Red Hat work for a group inside of Red Hat
    called the Open Source and Standards group. We have no access to the
    RHEL build system or RHEL source code inside Red Hat.

    We get the source code for the older releases from http://ftp.redhat.com, just like everyone else … we get the source code for RHEL 7 from git.CentOS.org just like everyone else.

    We just brought members of the CERN Linux team into the CentOS team
    (they volunteered from the community … they do not work for Red Hat)
    to help us with the community build system:

    http://lists.CentOS.org/pipermail/CentOS-devel/2014-June/010517.html

    Red Hat brought in the CentOS team because Red Hat wanted to have a
    community platform for their community projects like oVirt, OpenShift
    Origin, Gluster, Ceph, RDO, etc. They also brought us in to do
    community-related Special Interest Groups. The FAQ is here:

    http://community.redhat.com/CentOS-faq/

    I wish we had special access and special knowledge … but it’s just not true.

    Thanks, Johnny Hughes

  • You could save all the docs as single-page HTML and try to compare
    them? Maybe with some HTML compare app?
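
    Python’s standard difflib can generate exactly the kind of
    side-by-side, color-coded listing asked about above. A sketch,
    assuming the two versions have been saved locally (file names are
    placeholders):

        #!/usr/bin/env python
        # Sketch: side-by-side, color-coded HTML diff of two saved doc
        # versions, standard library only. File names are placeholders.
        import difflib

        with open("guide-el6.txt") as f:
            old = f.readlines()
        with open("guide-el7.txt") as f:
            new = f.readlines()

        html = difflib.HtmlDiff(wrapcolumn=80).make_file(
            old, new, fromdesc="EL6 docs", todesc="EL7 docs")

        with open("docs-diff.html", "w") as f:
            f.write(html)
        # Open docs-diff.html in a browser for the wide-screen view.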

  • Les Mikesell wrote:

    I’m wondering if, for the home user, BackupPC would be a good test of that?
    Otherwise I can’t think of a case where I would have a very large number of files in the same directory.

    Do you mean that that is a down side of XFS, or ext4?

    In my meagre experience, when a disk shows signs of going bad I have
    been able to copy most of an ext3/ext4 disk before complete failure,
    while LVM disks have been beyond (my) rescue. Actually, this was in
    the time of SCSI disks, which seemed quite good at giving advance
    warning of failure.

  • If you graph machine size — in whatever dimension you like — vs number deployed, I think you’d find all laptops over on the left side of the CentOS deployment curve. I’d expect that curve to be a skewed bell, with a long tail of huge servers over on the right side.

    ext* came up from the consumer world at the same time that XFS was coming down from the Big Iron world. The gap between them has thus been shrinking, so that as implemented in EL7, ext4 has an awful lot of overlap with XFS in terms of features and capabilities.

    XFS still offers a lot more upside, and is more appropriate for the server systems that CentOS will most often be used on. It is a more sensible default, being the right answer for the biggest subset of the CentOS user base.

    Since you’re over there on the left side of the curve, you may well decide that ext4 still makes more sense for you.

    That said, there really isn’t anything about laptop use that argues
    *against* using XFS. It isn’t a perfect filesystem, but then, neither is ext4.

    ext4 in EL7 only goes to 50 TiB, whereas XFS is effectively unlimited[*]. Red Hat will only support up to 500 TiB with XFS in EL7, but I suspect it isn’t due to any XFS implementation limit, but just a more professional way for them to say “Don’t be silly.”

    [*] The absolute XFS filesystem size limit is about 8 million
    terabytes, which requires about 500 cubic meters of the densest HDDs
    available today. You’d need 13 standard shipping containers (1 TEU)
    to transport them all, without any space for packing material. If we
    add 20% more disks for a reasonable level of redundancy, put them in
    24-disk 4U chassis, and mount those chassis in full-size racks, we
    need about half a soccer field of floor space — something like
    ~4000 m^2 — after accounting for walking space, network switches,
    redundant power, and whatnot to run it all. It’s so many HDDs that
    you’d need four or five full-time employees in 3 shifts to respond to
    drive failures fast enough to keep an 8 EiB array from falling over
    due to insufficient redundancy. You simply wouldn’t make a single XFS
    filesystem that big today, so QED: effectively unlimited.
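
    If anyone wants to check that arithmetic, it reproduces in a few
    lines. Every constant below is a rough assumption (6TB drives,
    standard 3.5″ dimensions, ten usable 4U chassis per rack), not a
    measurement:

        #!/usr/bin/env python
        # Back-of-envelope check of the footnote's figures.
        DRIVE_TB = 6.0                      # densest 3.5" drive circa 2014
        DRIVE_M3 = 0.147 * 0.1016 * 0.0261  # 3.5" drive volume, ~0.00039 m^3
        FS_TB = 8e6                         # "about 8 million terabytes"

        drives = FS_TB / DRIVE_TB
        volume = drives * DRIVE_M3
        print("drives: %.2e  volume: %.0f m^3" % (drives, volume))  # ~520 m^3

        TEU_M3 = 38.3                       # 20-ft container, external volume
        print("containers: %.1f TEU" % (volume / TEU_M3))           # ~13

        drives *= 1.2                       # +20% redundancy
        racks = drives / 24 / 10            # 24-disk 4U chassis, 10 per rack
        print("racks: %.0f" % racks)                                # ~6700
        print("footprint: %.0f m^2" % (racks * 0.6))                # ~4000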

  • There are users on the BackupPC list who recommend XFS – but for
    ‘home’-size systems it probably doesn’t matter that much.

    XFS – it needs more working space… Red Hat’s choice to configure the
    kernel for 4k stacks on 32-bit systems is probably the reason XFS
    wasn’t the default filesystem in earlier versions. And now that I
    think of it, this may be an issue again if CentOS revives 32-bit
    support.

    I’m not sure what controls the number of soft retries before giving up at the hardware layer. My only experience is that with RAID1 pairs a mirror drive seems to get kicked out at the first hint of an error but the last remaining drive will try much harder before giving up.
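
    As far as I know, the two knobs involved are the drive’s SCT Error
    Recovery Control setting (how long the drive itself retries
    internally – desktop drives often ship with it disabled, which would
    explain a lone surviving drive “trying harder”) and the kernel’s
    per-device SCSI timeout. A sketch for inspecting both (the device
    name is a placeholder; reading ERC needs root and smartmontools):

        #!/usr/bin/env python
        # Sketch: inspect the timeout knobs that interact with RAID error
        # handling. "sda" is a placeholder; needs root and smartmontools.
        import subprocess

        DEV = "sda"  # placeholder device name

        # Kernel side: how long the SCSI layer waits before erroring out.
        with open("/sys/block/%s/device/timeout" % DEV) as f:
            print("kernel SCSI timeout: %s s" % f.read().strip())

        # Drive side: SCT Error Recovery Control (a.k.a. TLER/CCTL).
        subprocess.call(["smartctl", "-l", "scterc", "/dev/%s" % DEV])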

  • Isn’t there some ratio of RAM to filesystem size (or maybe number of files or inodes) that you need to make it through an fsck?

  • Aside from some corporation, or a home business where expansion is
    expected, I don’t think I would attempt this. But I’m sure there are
    those who actually need to do something like this to ensure their
    site remains stable, reliable, and robust. I can only imagine the
    nightmares that would begin for me trying to get this all up and
    running.

    EGO II

  • The floods that knocked out WDC were in Thailand, not Taiwan.

    And I do believe the spinning media industry is running into physics:
    much further progress at doubling areal density past the 6TB per 3.5″
    drive level will be harder and less reliable. Meanwhile,
    price::capacity in SSD/flash is linear.