Wow! Double Wow!

I’m running CentOS 6 (6.5 iirc) on my wife’s machine, which I’ve been updating pretty much every day. Today yum got 425 packages!

Somewhere a dam must have broken. Sometimes some of us don’t appreciate how much work the developers do.

Strength to their arms, and many heartfelt thanks!

30 thoughts on “Wow! Double Wow!”

  • Me too. I was [mistakenly, apparently] always considering 5.[n+1], 6.[m+1] to be just re-spins, providing the latest packages with _backported_ security patches and bugfixes, aimed at providing installation media that does not entail millions of updates. “Releases” with newer versions, drivers in the kernel shuffled around, a new kernel (without any necessity for it) that brings the hassle of rebooting the box… This all effectively defeats the “Enterprise” portion of the name of the system, doesn’t it?

    Do not take this as me not appreciating the great job the distribution maintainers do. I’m just trying to give the view of us “users”, who have to deal with the consequences…

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • I had a customer with a Violin SAN, and they couldn’t update their RHEL/CentOS servers any higher than a certain point release, not because the driver broke, but because the rest of the provided glue broke. I can’t recall the fine details, but I’m pretty sure it was a major change to udev in the middle of a major release.

    I don’t understand the direction that has been taken. Anything that runs on 6.0 should run flawlessly on 6.6. Period.

  • Looking back over the list of packages installed, I notice that most end in “el6,” but there are some with “el6_6.” Does that mean she’s now actually running 6.6 rather than 6.5?

    I’ve been wondering when it would be best to switch to CentOS 7. Is there something like fedup in Fedora to do it, or is a fresh install the only way?

  • She is running CentOS 6 with all current updates. This currently equates to 6.6.

    RHEL, and therefore CentOS, does not support maintaining a specific point release version. Updating any CentOS 6 system will now result in an update to 6.6. It is possible to prevent the 6.6 updates from being installed, but this will leave you with no further updates (security or otherwise).

    There is a method to upgrade (there was a recent thread about it in this group), but the recommended method is to install from scratch.
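
    For anyone who wants to check for themselves, a minimal sketch (standard commands on a stock CentOS 6 box; nothing beyond rpm and the release file is assumed):

        # Show which point release the installed centos-release package claims:
        cat /etc/centos-release

        # List the packages that arrived with the 6.6 update batch;
        # the el6_6 tag marks builds made for the 6.6 point release:
        rpm -qa | grep 'el6_6' | sort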

  • Once upon a time, Bowie Bailey said:

    That’s not true for RHEL. A subscription can be switched to an extended x.y.z release train (but that’s a “you get what you pay for” kind of thing; that level of extended support is time consuming).

  • I like that “if” clause of yours… Basically, if one thinks he knows more than the system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics the privilege of being on their own. As we, normal people, know: if the distro maintainers had to update the kernel, they had a reason (otherwise, something else breaks). So we are left running _this_ system, and even though it’s stressful, it is still not as stressful as running “bleeding edge” Fedora, right? ;-)

    Valeri

  • I’m sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in… You do the math.

    Valeri

  • Yep, that’s exactly what I did. I do not feel justified in using the Department’s money (for extra hardware), so I invested just my time, and built servers based on FreeBSD, with services running in different jails, etc. So you can imagine how much I am hit by the need to reboot into an updated Linux kernel, can’t you? Note, you will not find a Linux kernel running on a FreeBSD box (just teasing ;-)

    Valeri

  • Things break and need maintenance. If your services can’t tolerate that, you need more redundancy. As for the OS updates (which are only one of the many things that can break…), they are ‘pretty well’ vetted by upstream, so breakage is rare and your odds are better installing them than not. But you don’t have to reboot right now; schedule it for a convenient time.
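
    To make that scheduling concrete, a minimal sketch for checking whether a kernel update is actually waiting on a reboot (plain rpm and uname, nothing distribution-specific assumed):

        # Kernel currently running:
        uname -r

        # Most recently installed kernel; if its version differs from the
        # one above, a reboot is pending to pick it up:
        rpm -q --last kernel | head -n 1

    The needs-restarting script from the yum-utils package, if installed, will also list running processes that are still using files an update has replaced.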

  • That 24/7 expectation is a corner we have painted ourselves into. It’s not a law of nature. Civilized organisations will always allow a maintenance window. In the Windows world it is not an issue; servers can be rebooted with much more freedom than in the Linux/Unix world.

    Cheers,

    Cliff

  • Yes, indeed. Those are blasted Unix sysadmins (hm, I flatter myself by thinking I am one too) who push themselves into being too responsible to their users… No, I don’t think Unix admins will head in the direction of the Windows world, sorry. I don’t even like the Windows world being mentioned as an example for the Unix world! (Don’t take me too literally; everybody welcomes the good things “other worlds” have…)

    Valeri

  • In my enterprise world, production systems are fully redundant and have staging servers running identical software configurations. All upgrades and upgrade procedures are tested on staging before being deployed to production. Quite often the staging systems double as the disaster-recovery systems, but that’s another story. Virtually all production systems either have a schedulable downtime (2am Sunday morning?) or support rolling upgrades with no downtime (such as our 24/7 factory operations, where downtime == no product).

    Personally, I’m very glad I work in development, where our informal SLA is more like 9-9, 5 days a week (developers like to work late).

  • Sounds like you have a dream job, John! Or at the very least you work for a company that spends money on proper hardware!

  • That’s exactly what I mean. It’s not a matter of “heading in the direction of the Windows world”. My point was that Windows admins have not become obsessed with “uptime”, and hence have not given their users the expectation of 100% availability.

    I’m all for being responsible to users, and that means patching. If that means some downtime, then users in general would not be put out, provided their expectations had not been raised to expect no downtime.

    Cheers,

    Cliff

  • I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?

    We soon went to 24×7, but the reason was not because the users wanted it. It was because the engineers and systems programmers wanted time with no users.

    Cheers,

    Cliff

  • The main reason I remember for keeping stuff running was that it was more reliable if the temperature was relatively constant… temperature fluctuations led to more hardware failures than any other input variable.

  • Bend a spoon 100 times and it will break. Keep the temperature the same, hot or cold, and there are no bends, thus the traces do not break…

    It’s not about 22°C versus 28°C; it is about keeping the temperature the same, because as the temperature changes the metal expands and contracts.

    Regards, Michael Cole

  • If I remember the Unix world correctly, patching almost never led to downtime and could almost always be accomplished with users logged in.

    Valeri

  • Once upon a time, Valeri Galtsev said:

    I think that’s a rose-colored glasses look in the rear-view mirror. The
    “traditional” Unix flavors I dealt with (Solaris and DEC Unix) required reboots; DEC Unix pretty much required going to single-user mode to even install a patch kit. When it wasn’t required, it was highly recommended by the documentation.

  • RHEL has kpatch:
    http://rhelblog.redhat.com/2014/02/26/kpatch/

    Technologies like kpatch, ksplice, kGraft, etc. will make it so you don’t have to reboot to get kernel patches. However, I’m more concerned with updating software like glibc, openssl, nss, etc. underneath running processes. It doesn’t matter whether you’re running Linux or FreeBSD or another UNIX: if you update the applications and libraries under a user’s processes, there’s always a chance (and quite a likely one) that something will break.
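
    To see that risk concretely, here is a minimal sketch for spotting processes still running against replaced libraries (requires lsof; the exact output format varies between versions):

        # After an update, a process keeps the old, now-deleted copy of a
        # library mapped until it restarts. List such processes; these are
        # the ones that have not picked up the new code yet:
        lsof -n 2>/dev/null | grep 'DEL.*lib' | awk '{print $1, $2}' | sort -u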

  • In my early days, the entire system was powered down before the last person went home.

    Regards,

    Paul. England, EU.

  • Technically a kernel patch isn’t for something “that broke”, it’s for something “that was written wrong to begin with”…

    Just to be pedantic.


    Nate Duehr denverpilot@me.com

  • True, but pretty much everything was written wrong to begin with, back in the day when everyone thought bad guys just shouldn’t be allowed to use the network. And the fixes are trickling in bit by bit.


    Les Mikesell
    lesmikesell@gmail.com

  • Been hearing that “back in the day” excuse since Novell / IPX was big. Wash, rinse, repeat.

    There have always been “bad guys” on networks.

    That excuse will still be used long after I’m dead… but an excuse, it most certainly is.

    You can find all sorts of examples of things in the kernel, written long after Internet security was a known given, that had to be replaced. The same goes for just about every piece of application software.


    Nate Duehr denverpilot@me.com

  • …which would have been the 1980s to the mid-90s.

    The fundamental IP application protocols like FTP and Telnet date back to the late 60s and early 1970s, concurrent with the development of TCP/IP and ARPANET. There /was/ no ‘network’ before this for ‘bad guys’ to be on.


    john r pierce 37N 122W
    somewhere on the middle of the left coast

  • It was made official in 1988 with the first known instance of an internet worm, which exploited sendmail. The person who released the viral code was held responsible, rather than the vendors that shipped the obvious vulnerability, even the commercial vendors that repackaged it and charged for it. Thus the next several decades of taking no responsibility for shipping horrible vulnerabilities were set in motion. And of course there is an assortment of conspiracy theories about how some of the back doors were intentional.


    Les Mikesell
    lesmikesell@gmail.com