Design Changes Are Done In Fedora


Well, despite the hype from Wall St., Bay St., and The City, a large number of organisations in the world run on software that is decades old and cannot be economically replaced. In government and business, seven years is a typical time-frame in which to get a major software system built and installed, and I have witnessed longer.

So seven, even ten, years of stability is really nothing at all. And as Linux seeks ever more profoundly valuable employment, the kind of changes we witnessed from v6 to v7 are simply not going to be tolerated. In fact, it is my considered belief that Red Hat, with EL7, has done itself a serious injury with respect to corporate adoption for core systems. Perhaps they seek a different market?

Think about it. What enterprise can afford to rewrite all of its software every ten years? What enterprise can afford to retrain all of its personnel every seven years to use different tools to accomplish the exact same tasks? The desktop software churn to which the PC has inured people simply does not scale to the enterprise.

If you wish to see what change for change’s sake produces in terms of market share, consider what Mozilla has done with Firefox. There is no interface as easy to use as the one you have been working with for the past ten years, and that salient fact seems to be completely ignored by much of the FOSS community.

104 thoughts on - Design Changes Are Done In Fedora

  • Yes exactly. Do you want your bank to manage your accounts with new and not-well-tested software every 7 years or would you prefer the stability of incremental improvements?

    It’s worse than that – since you can’t just replace all of your servers and code at once, your staff has to be trained on at least two and probably three major versions at any given time – and aware of which server runs what, and which command set has to be used. And the cost and risk of errors increases with the number of arbitrary changes across versions.

  • I said elsewhere that these changes are partly induced by changes started in the kernel some five years ago. But now I realize that at least some of them were pushed at the kernel level by folks from the Red Hat team…

    Well, there are similar changes in other areas of our [human] communication with computer hardware. Take the step “up” from Gnome 2 to Gnome 3, for instance. From a scheme that worked for two decades (logical, tree-like access to what you need), everything switched to please people who cannot categorize things and are only able to search. You can go on describing the differences, each confirming the same point. Which leads me to say:

    Welcome to the iPad generation, folks!

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Yes, but Apple knows enough to stay out of the server business, where stability matters – and they are more into selling content than code anyway. Client-side things do need to deal with mobility these days – reconnecting automatically after sleep/wakeup and handling network connection changes transparently – but those things don’t need to break existing usage.

  • Not exactly. They have claimed to be in the server business forever. There is something called MacOS Server, which is an incarnation of their OS with some scripts added. But (apart from the fact that it has no documentation – “click here, then click there… and you are done” doesn’t count) they do not maintain its consistency for any decent period of time. That is, as soon as they release the next version of the system, you can say goodbye to some of the components of your MacOS Server.

    So, as far as “clever Apple” is concerned, I disagree with you. Unless we both agree they are clever enough to be able to fool their customers ;-)

    Valeri

  • You can’t disagree with the fact that they make a lot of money. They do it by targeting consumers without technical experience or a need for backwards compatibility to preserve the value of that experience. That’s obviously a big market. But whenever someone else tries to copy that model, it is a loss for all of the existing work and experience built on earlier versions that needs compatibility to continue. For what it’s worth, I haven’t found it much harder to find Mac-ported versions of complex open source software (e.g. vlc) than RHEL/CentOS versions – they all break things pretty badly on major upgrades, and there is usually just one OSX version to support, versus a bazillion Linux flavors with arbitrary differences.

  • As a software developer, I think I can speak to both halves of that point.

    First, the world where you design, build, and deploy The System is disappearing fast.

    The world is moving toward incrementalism, where the first version of The System is the smallest thing that can possibly do anyone any good. That is deployed ASAP, and is then built up incrementally over years.

    Though you spend the same amount of time, you will not end up in the same place because the world has changed over those years. Instead of building on top of an increasingly irrelevant foundation, you track the actual evolving needs of the organization, so that you end up where the organization needs you to be now, instead of where you thought it would need to be 7 years ago.

    Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver new functionality to production every 1-4 weeks, achieving 100% of the desired feature set over the course of years.

    This isn

  • Sure, if you don’t care if you lose data, you can skip those steps. Lots of free services that call everything they release ‘beta’ can get away with that, and when it breaks it’s not the developer answering the phones if anyone answers at all.

    That works if it was designed for rolling updates. Most stuff isn’t, some stuff can’t be.

    If you are, say, adding up dollars, how many times do you want that functionality to change?

    How many people do you have answering the phone about the wild and crazy changes you are introducing weekly? How much does it cost to train them?

    Please quantify that. How much should a business expect to spend per person to re-train their operations staff to keep their systems working across a required OS update? Not to add functionality. To keep something that was working running the way it was? And separately, how much developer time would you expect to spend to follow the changes and perhaps eventually make something work better?

    How many customers for your service did you keep running non-stop across those transitions? Or are you actually talking about providing a reliable service?

    Again, it’s only useful to talk about this if you can quantify the cost: what you expect to pay to re-train operations staff _just_ for this change, _just_ to keep things working the same. And separately, what will it cost in development time to take advantage of any new functionality?

    We’ve got lots of stuff that will drop into Windows server versions spanning well over a 10 year range. And operators that don’t have a lot of special training on the differences between them.

    No, Linux doesn’t offer stability either.

    Were you paying attention when Microsoft wanted to make XP obsolete?
    There is a lot of it still running.

    Not really. Ask the IRS what platform they use. And estimate what it is going to cost us when they change.

    No, that is the way things work. And the reason Microsoft is in business.

    With their eternally beta software? With the ability to just drop things they don’t feel like supporting any more? Not everyone has that luxury.

    So again, quantify that. How much should it cost a business _just_ to keep working the same way? And why do you think it is a good thing for this to be a hard problem or for every individual user to be forced to solve it himself?

    But it could be better, if anyone cared.


    Les Mikesell
    lesmikesell@gmail.com

  • How did you jump from incremental feature roll-outs to data loss? There is no necessary connection there.

    In fact, I

  • No, it’s not necessary for either code interfaces or data structures to change in backward-incompatible ways. But the people who push one kind of change aren’t likely to care about the other either.

    I’m not really arguing about the timing of changes; I’m concerned about the cost of unnecessary user interface changes, code interface breakage, and data incompatibility, regardless of when they happen. RHEL’s reason for existence is that it mostly shields users from that within a major release. That doesn’t make the breakage any better when you are forced to move to the next one.

    Are you offering to do it for free?

    That’s fine if you have one machine and can afford to shut down while you make something work. Most businesses aren’t like that.

    And it is time consuming and expensive.

    Beg your pardon? How about not breaking the things that trigger the calls in the first place – or taking some responsibility for it. Do you think other people have nothing better to do?

    We never change more than half of a load-balanced set of servers at once. So all changes have to be compatible when running concurrently, or worth rolling out a whole replacement farm.

    If you run continuous services you either have to be able to run new/old concurrently or completely duplicate your server farm as you roll out incompatible clients.
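    The half-at-a-time constraint described above can be sketched in shell; drain_node, deploy_to, and undrain_node are hypothetical stubs standing in for whatever the load balancer and deployment tooling actually provide:

    ```shell
    # Sketch of updating half of a load-balanced pool at a time.
    # These functions are made-up stand-ins: replace them with your
    # balancer's real drain mechanism and your deployment command.
    drain_node()   { echo "draining $1"; }
    deploy_to()    { echo "updating $1"; }   # e.g. ssh "$1" yum -y update myapp
    undrain_node() { echo "restoring $1"; }

    # Update only the first half; the second half keeps serving the old
    # version, so old and new must be able to run concurrently.
    for host in web1 web2; do
        drain_node "$host"
        deploy_to "$host"
        undrain_node "$host"
    done
    ```

    Only after verifying mixed old/new operation would you repeat the loop for the second half.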

    OK, but they have to not break existing interfaces when they do that. And that’s not the case with OS upgrades.

    I’m asking whether computer science has advanced to the point where adding up a total needs new functionality, or whether you would like the same total for the same numbers that you got last year. Or, more to the point: if the same program ran correctly last year, wouldn’t it be nice if it still ran the same way this year, in spite of the OS upgrade you need because of the security bugs that keep getting shipped while developers spend their time making arbitrary changes to user interfaces?

    When your system requires extensive testing, the fewer times it breaks, the better. Never would be nice…

    That’s nonsense for any complex system. There are always _many_ different OS versions in play and many different development groups that only understand a subset, and every new change they need to know about costs time and risks mistakes.

    And it is expensive. Unnecessarily so, in my opinion.

    That’s a very different scenario than a farm of data servers that have to be available 24/7.

    I have a few of those, but I don’t believe that is a sane thing to recommend.

    You’d probably be better off in java if you aren’t already.

    I ask it as if I think that software developers could make changes without breaking existing interfaces. And yes, I do think they could if they cared about anyone who built on those interfaces.

    Well, that has done a great job of keeping Microsoft in business.

    Yes, there are changes – and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare (and I don’t like it when they do it, either…).

    Well, some things you have to get right in the first place – and then stability is good.

    And conversely, they felt it was worth _not_ doing for a very, very long time. So can the rest of us wait until we have Google’s resources?

    Maybe I misunderstood – I thought you were defending the status quo – and the Fedora developers that bring it to us.


    Les Mikesell
    lesmikesell@gmail.com

  • Any organization using open source. More specifically, any organization that uses the Linux Kernel. Or have you never read the linux kernel mailing list (LKML)? Open source by its very nature is somewhat
    ‘undisciplined’ in that any particular project’s discipline (or, to use a fifty-dollar word, ‘governance’) is entirely self-imposed by the project members, and some projects have ‘better’ discipline/governance than others. This is the cost of decentralized development of core pieces of the operating system; it is an acceptable cost for my uses.
    And I intentionally put ‘better’ in quotes because what is ‘better’ is entirely subjective.

    Had EL7 been a straight clone of Fedora 20 it would/could have been much worse. Try out F20 for a while to see the differences.

    Define ‘normal’ upgrades. Are you talking about the ‘quarterly’ updates that masquerade as ‘point’ releases? (Yeah, yeah, I know they’re not strictly quarterly, but go read some of the early EL literature…)

    The Update 6 releases seem from my view to have been substantial upheavals and opportunities to get somewhat major things pushed in. EL5 update 6 was no exception, and, as I recall, EL4 update 6 wasn’t either, but that’s been a while, so I may not be remembering it completely; and I don’t remember much of anything about the 3u5 to 3u6 transition. At least during this cycle Red Hat staggered the releases, unlike the triple threat posed last major release cycle, where 4u10, 5u6, and 6GA all ‘hit’ within weeks of each other.

    Red Hat is walking a tightrope here, and, honestly, I think they are doing a fantastic job in what they do, given that they are not going to please all of their users at any particular time. The users’ requirements are just too varied, and many of those requirements are mutually exclusive. They’re not going to please any one of the users all the time, either.

  • You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.

    Take it from a software developer: it isn’t.

  • OK, but should one developer make an extra effort or the bazillion people affected by it?

    That’s what it takes to build and keep a user base.

    It’s hard to the extent that you made bad choices of interfaces in the first place. Microsoft’s job was hard. But Unix System V, which Linux basically emulates, wasn’t so bad. Maybe a few size definitions could have been better.

    And the user base that depended on them.

    So either it “isn’t hard”, or “you need a trained, experienced, professional staff to do it”. Big difference. Which is it?

    If you are embedding business logic in your library interfaces, something is wrong. I’m talking about things that are shipped in the distribution and the commands to manage them. The underlying jobs they do were pretty well established long ago.

    All of our customer-facing services – and most internal infrastructure. Admittedly, not individual boxes – but who wants to have systems running concurrently with major differences in code base and operations/maintenance procedures?

    Yes, everything is redundant. But when changes are not backwards compatible, it makes piecemeal updates far harder than they should be. Take something simple like the dhcp server in the distro. It allows for redundant servers – but the versions are not compatible. How do you manage that via individual node upgrades when they won’t fail over to each other?
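    For context, pairing two ISC dhcpd servers looks roughly like the declaration below (the peer name and addresses are invented for illustration). The complaint here is that when the two nodes run sufficiently different dhcpd versions, they will not peer with each other, so a node-by-node upgrade loses the shared lease state:

    ```
    # Hypothetical dhcpd.conf failover declaration (primary side).
    failover peer "dhcp-pair" {
        primary;                    # the partner declares "secondary"
        address 192.0.2.10;         # this server
        port 647;
        peer address 192.0.2.11;    # the partner server
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        mclt 3600;                  # max client lead time, seconds
        split 128;                  # share the address pool 50/50
    }

    subnet 192.0.2.0 netmask 255.255.255.0 {
        pool {
            failover peer "dhcp-pair";
            range 192.0.2.100 192.0.2.200;
        }
    }
    ```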

    How nice for you…

    Which sort of points out that the wild and crazy changes in the mainstream distributions weren’t all that necessary either…

    I do. We have a broad mix of languages, some with requirements that force it, some just for historical reasons and the team that maintains it. The java stuff has been much less problematic in porting across systems – or running the same code concurrently under different OS’s/versions at once. I don’t think the C++ guys have even figured out a sane way to use a standard boost version on two different Linuxes, even doing separate builds for them.

    Maybe. I think there’s a bigger pile of not-so-good reasons that things aren’t done portably. Java isn’t the only way to be portable, but you don’t see much on the scale of elasticsearch, jenkins or opennms done cross-platform in other languages.

    The syntax is cumbersome – but there are things like groovy or jruby that run on top of it. And there’s a lot of start-up overhead, but that doesn’t matter much to long-running servers.

    Yes, I’m forced to deal with #1. That doesn’t keep me from wishing that whatever code change had been done had kept backwards compatibility in the user interface commands and init scripts department.

    And you only get that with code that keeps users instead of driving them away.

    What google does points out how unsuitable the distro really is. I just don’t see why it has to stay that way.


    Les Mikesell
    lesmikesell@gmail.com

  • Where did you get the 5% from? According to Google there are

    “over 200 billion lines of existing COBOL code, much of it running mission-critical 24/7 applications, it is simply too costly (in the short run) for many organizations to convert.”

    And what about Fortran, RPG, etc.?

    Also, how big is the outfit you work for? It sounds like you have no shortage of help; a lot of places don’t have the unlimited resources you seem to have.


    Stephen Clark
    *NetWolves Managed Services, LLC.*
    Director of Technology Phone: 813-579-3200
    Fax: 813-882-0209
    Email: steve.clark@netwolves.com http://www.netwolves.com

  • Is one to infer from your mantra ‘cope with change’ that one is not supposed to express any opinion whatsoever, ever, on any forum; on the externalised cost of changes made to software with no evident technical justification? And that to do so is evidence of some moral or intellectual defect in oneself?

    We all cope with change until we die. That is not a philosophy or program. It is an observation on the state of existence; and is no more useful than the observation that, eventually, we all die.

  • First of all, I must say that I agree with you, James, on almost all of your points – or disagree with your opponents on the majority of theirs. Moreover, I suffer myself from “unnecessary change”; for some tasks I even switched to a different system (though if the stuff that affects you is already in the kernel, you cannot just switch from one Linux distro to another ;-( ). I have been told to shut up when I was too loud/persistent about it (luckily I do not remember by whom and do not care to remember ;-)

    Nonetheless, even though I try to speak up when I’m unhappy – hopefully providing feedback for developers and architects – I have come to realize that [open source] software developers most likely will not listen to me, even though I represent a certain number of their end customers. Take my latest, worst displeasure as an example. I upgraded my FreeBSD workstation from 10.0 to 10.1, which made me step up from Gnome 2 to Gnome 3, and I cannot bear the change. So, after a couple of weeks of frustration just trying to do what I usually do on a workstation, I decided to abandon Gnome altogether. Whoever has made that transition knows what I’m talking about. It is pretty much like switching from CentOS 6 to CentOS 7.

    What I told myself (not that I’m suggesting others should…) is this. Developers often work without monetary reward; their only reward is seeing the result of their programming, and the best reward comes from seeing something new and fancy. That rather contradicts utilitarian programming (achieving the particular goal your program is for). In the latter case we always followed the principle (yes, I was a programmer too): do not make any changes unless they are absolutely necessary. That appears to contradict the goals many developers have (KDE, GNOME, Firefox, Windows 8… you can continue the list). All seem to abandon structured, logical, tree-like arrangements of your tools and switch you to a dumb search for what you need. Welcome to the iPad generation, folks!

    Thus, I decided for myself to tolerate the change as long as I can and keep being grateful to the developers whose products I use, but to switch to something more suitable for my way of working as soon as I cannot stand the change.

    I hope, this helps someone ;-)

    Happy New Year, everybody! (And welcome to the iPad generation! ;-)

    Valeri

  • An industry publication, probably 10-15 years ago. (You know, back when they printed these things on paper.) I haven

  • That developer is either being paid by a company with their own motivations or is scratching his own itch. You have no claim on his time.

    Open source is a do-ocracy: those who do the work, rule.

    People throw all kinds of hate at Poettering, but he is *doing things* and getting those things into consequential Linux distributions. The haters are just crying about it, not out there trying to do things differently, not trying to win an audience.

    I don

  • Agreed – but I’m not going to say I like his breakage.

    Yes, but I’d rather be building things on top of a solid foundation than just using planned obsolescence as job security since the same thing needs to be done over and over. And I’ll admit I can’t do it the right way with the approach google uses of just tossing the distribution and its tools.

    If the program won’t start or the distribution libraries are incompatible (which is very, very likely) then it isn’t going to add anything.

    It’s my experience, or I wouldn’t have mentioned it. I built a CentOS 7 server to match my old CentOS 5 pair. It can do the same thing, but there is no way to make them actively cluster together so that the new one is aware of the outstanding leases at cutover, or to retain the ability to revert if the new one introduces problems.

    The ability to fail back is important, unless you think new software is always perfect. Look through some changelogs if you think that…

    Well, yes. We have computers doing specific things, and those things span much longer than 10 years. If you are very young you might not understand that.

    Lots of people do lots of stupid things that I can’t explain. But if numbers impress you: counting android/Dalvik, which is close enough to java to be the stuff of lawsuits, there are probably more instances of running java programs than of anything else.

    You didn’t come up with portable non-java counterexamples to elasticsearch, jenkins, opennms, etc. I’d add eclipse, jasper reports and the Pentaho tools to the list too – all used here.

    I don’t think CMake is happy with that, since it knows where the stock version should be and will find duplicates. And then you have to work out how to distribute your binaries. Are you really advocating copying unmanaged, unpackaged libraries around to random places?

    Yes, they can add without changing/breaking interfaces that people use or commands they already know. The reason people use RHEL at all is that they do a pretty good job of that within the life of a major version. How can you possibly think that the people attracted to that stability want it only for a short length of time relative to the life of their businesses?


    Les Mikesell
    lesmikesell@gmail.com

  • Yes, it is just sad that it is necessary to isolate your work from the disruptive nature of the OS distribution. But it is clearly becoming necessary.

    Agreed – and some of the value of Red Hat shows in the fact that the breakage is not packaged into a mid-rev update.

    Maybe document how you were supposed to deal with the situation, keeping your lease history intact and the ability to fail over during the transition. My point here is that there are people using RHEL that care about this sort of thing, but the system design is done in Fedora, where I’m convinced that no one actually manages any machines doing jobs that matter or cares what a change might break. That is, the RHEL/Fedora split divided the community that originally built RH into people who need to maintain working systems and people who just want change. And they let the ones who want change design the next release.

    So, after you’ve spent at least 10 years rolling out machines to do things as fast as you can, and teaching the others in your organization to spell ‘chkconfig’ and use ‘service …’ commands, wouldn’t you rather continue to be productive and do more, instead of having to start over and re-do the same set of things just to keep the old stuff working?

    I’m counting the running/useful instances of actual program code, not the interpreters that might be able to run something. But JavaScript is on the rise mostly because the interpreters upgrade transparently and the hassle is somewhat hidden.

    We spend time ‘maintaining’ because the OS underneath churns. Otherwise we would pretty quickly have all of the programs anyone needs completed. I thought CPAN was approaching that long ago, or at least getting to the point where the new code you have to write to do about anything would take about half a page of calls to existing modules.

    Well, Bourne didn’t deal with sockets. My opinion is that you’d be better off switching to perl at the first hint of needing arrays/sockets or any library modules that already exist in CPAN, instead of extending a shell beyond the basic shell-ish stuff. But nobody asked me about that.

    Of course I used shar archives – and I value the fact that (unlike with most other languages) things written in Bourne-compatible shell syntax at any time in the past will still run, and fairly portably – except perhaps for the plethora of external commands that were normally needed. Perl doesn’t have quite as long a history but is just about as good at backwards compatibility: with the exception of interpolating @ in double-quoted strings starting with version 5, pretty much anything you ever wrote in perl will still work. I’m holding off on python and ruby until they have a similar history of not breaking existing work with incompatible updates.
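    That perl 5 exception is easy to demonstrate from the shell (assuming a perl 5 interpreter on the PATH):

    ```shell
    # Since perl 5, a bare @ inside a double-quoted string interpolates an
    # array, so pre-5 code with literal @ signs had to start escaping them.
    perl -e 'print "user\@example.com\n"'                    # prints user@example.com
    perl -e 'my @host = ("a", "b"); print "list: @host\n"'   # prints list: a b
    ```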

    Yes, on Windows you can just pick a boost version out of several available, and jenkins and the compiler and tools seem to do the right thing. On Linux it may be possible to do that, but you have to fight with everything that knows where the stock shared libraries and include files are supposed to be. While every developer almost certainly has his own way of maintaining multiple versions of things through development and testing, the distribution pretends that only one version of anything should ever be installed – or, if multiple versions are installed, that there must be a system-wide default for which one executes, set by symlinks to obscure real locations. Never mind that Linux is multi-user and different users might want different versions. Or at least it was that way before ‘software collections’.
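    One workaround (short of software collections) is to keep each Boost under its own prefix and point the toolchain at it explicitly. The prefix below is a made-up example, and the sketch only prints the build line it would run rather than compiling anything:

    ```shell
    # Hypothetical private Boost prefix; not something the distro provides.
    BOOST=/opt/boost-1.55

    # -I/-L pick the private headers/libs at build time; the rpath bakes the
    # private lib directory into the binary so the matching .so is found at
    # runtime instead of the distro's stock copy.
    build_cmd="g++ -I$BOOST/include -o myapp myapp.cpp -L$BOOST/lib -Wl,-rpath,$BOOST/lib -lboost_system"
    echo "$build_cmd"
    ```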

    No, what I wish is that every change would be vetted by people actively managing large sets of systems, with some documentation about how to handle the necessary conversions to keep things running. I don’t believe anyone involved in Fedora and their wild and crazy changes actually has anything running that they care about maintaining or a staff of people to retrain as procedures change. There’s no evidence that anyone has weighed the cost of backwards-incompatible changes against the potential benefits – or even knows what those costs are or how best to deal with them.


    Les Mikesell
    lesmikesell@gmail.com

  • Having been through a bunch of these transitions already (SysV -> Linux bingo -> BSD -> OS X

  • Now means the current time. Now is not, and never will be, The (unknown) Future.

    In the real world of using computers productively for repetitive tasks, people want stability and perhaps faster running programmes. No one ever wants a major upset of being forced to use a different method to perform the same tasks.

    Young men are enthusiastic about implementing new ideas. Old men with substantially more experience wisely want to avoid disrupting well-running systems. Time is money. Disruptions waste money and cause errors.


    Regards,

    Paul. England, EU.

  • Not really, but start with the number of running android devices – and I think it is reasonable to assume that they are running something, checking gmail, etc. It’s safe enough to say that’s a big number.

    No, I’m saying it pretty much owns the phone/tablet space and the examples of elasticsearch (and other lucene-based stuff), jenkins, etc. show it scales to the other end as well.

    node.js seems like the worst of all worlds – a language designed _not_ to speak to the OS directly, with an OS library glued in. But it is a good example of how programmers think – if one says you shouldn’t do something, it pretty much guarantees that someone else will.

    I’d settle for just not changing the languages in ways that break already written software. But even that seems too much to expect.

    I think you are underestimating the way the working interpreters get to the users. And the way the code libraries mask the differences. If users were exposed to version incompatibilities the popularity would vanish. In fact, being able to actively detect and work around incompatibilities is a large part of the reason it is used at all.

    Not in the sense that there’s nothing new to invent, but every sysadmin in every organization, and every home user, should not need to invent new processes to manage their machines.

    Really? How have they made it any easier to manage your 2nd machine than your first?

    It is also purely reproducible. Do something right once and no one should ever have to waste that thought again.

    Precisely. Once there was close to a 1-1 mapping of shell and external commands to system calls. Now there isn’t.

    I wasn’t ignoring it. I just was avoiding another rant about how those have not been maintained in a backwards compatible way either. Even though shell syntax is stable, the programs are usually just glue around an assortment of tools that no longer match section 1 of the unix manuals.

    I might have done that for 8-bit z80 code, but I’ve since learned that it is rarely worth getting that intimate with the hardware.

    Well no, at this point you can’t say that java is yet another new thing – probably not even groovy since that’s mix/match with java. Maybe scala.

    I think everyone involved with perl is sane enough to know that perl 6 is not a replacement for perl 5. It’s something different. I just hope the Fedora people get it.

    Well, I’ve avoided a couple of traumas then. But those transitions don’t seem to have completed. And they probably won’t until all distros ship multiple versions so programs can co-exist long enough to fix the broken parts.

    No – mostly hoping someone would point out something I had overlooked that makes the transition easy. I thought the computers were supposed to work for us instead of the other way around.


    Les Mikesell
    lesmikesell@gmail.com

  • Some people are annoyed that CentOS keeps changing on them, and they go to greater and greater lengths to argue that CentOS should not change.

    I am explaining to them why this is not a productive view.

  • No one has said it should not change – just that it breaks all users’ existing work when it changes in non-backwards-compatible ways.

    The non-productive part is that every user who has ever built anything out of non-stable components has to deal with the problem on an individual basis. Is there any centralized approach to converting something that worked on CentOS6 to run on CentOS7? Does the program that is supposed to try to automatically upgrade versions have any tricks hidden away to fix things so they work after the upgrade, and could any of them be run separately?

  • You seem to forget. Computers were invented to perform repetitive tasks. Computer usage should be serving mankind – not making it more difficult for mankind.


    Regards,

    Paul. England, EU.

  • Brilliant task to assign to Warren Young. That will keep him away from his disruptive “improvements” philosophy.

  • Or maybe, some of us just seem to remember it differently. In my opinion, robots/automatons were invented to perform repetitive tasks; computers were invented to perform logic operations faster and more-reliably than humans.

  • John R. Dennison wrote on 2015-01-07 04:49:

    I for one read this thread with interest. Let it be. And IMHO the topics are relevant for anybody professionally involved with computers.

  • There’s still a very odd mix of art and science involved. This is part of the fun, but still it seems like when everyone has the same problem from the same causes there would be some way to automate or re-use the knowledge of the fix instead of making everyone spend time on their own new creative version.

    My recollection was influenced by my discovery in 1966 of Power Samas (later acquired by ICT) 40? 36? column small punch cards fed into a printing machine to produce invoices for a then major international publisher. The replacement/upgrade was a Honeywell 1200 with tapes, an 80-column card reader, a printer and the ubiquitous air conditioning and humidity control system. Luxuries like keyboards, disks and screens had not been invented. It was plain, simple, effective and fairly reliable Data Processing.

    Two years later on a smaller H-120 I was writing commercial Cobol programmes for a Norwegian angling distributor – orders, invoices, statements, stock control. Again it was essentially repetitive processing done much faster than before the usage of mainframes.

    Computers can not function without logic. One of the most important advancements was the “stored programme” feature, instead of having to reload the data processing programme, contained on punched cards in strict sequence, every time a job needed doing.

    Never bumped into “robots/automatons” anywhere at all in Data Processing nor encountered anyone using that term. Computers evolved very slowly from automated machine processing. A major advancement was made at the USA’s famous Bell Labs (subsequently destroyed in the interests of shareholder profits) who invented the DTMF. It was the IC.

    SPAM was another USA invention and so too were Microsoft-suitable viruses.

  • Agreed.

    If you want to participate in how the upstream OS is being shaped, I
    suggest looking at the Fedora Project, which is driven by volunteers.

    If you notice the Subject: of this thread, it is “Design changes are done in Fedora”. Pretty clear message.

  • New, disruption-causing ideas are also the food for innovation and progress. Bring them on; we need them. I think RH / CentOS strike a reasonable, measured balance: more or less freeze a set of function / capability for each major release, releasing only security patches and changes necessary to keep things functioning across the web. Then with each major release do a jump. Yes, this is disruptive; yes, it means learning new ways of doing old things. Hopefully those making these decisions at RH have the knowledge and perspective to make the necessary calls / evaluations. If you think you can do better, go get a job at RH.

  • There is a common word disguised in the letter E that we find in both the initialism RHEL and the acronym CentOS. It is the word Enterprise. It is my observation that the subscriber base of this list tends to those who have wider responsibilities than deciding what the corporate GUI desktop layout should look like next year.

    Most here also seem to be somewhat concerned with concepts of cost and benefit. It is evident in many posts that the increasing costs of supporting large numbers of people negatively impacted by changes introduced to CentOS from outside of their span of control is beginning to impinge more and more upon decision making. In some cases that consideration is evidently influencing the decision to deploy CentOS or not.

    Now, what does the subscriber base to Fedora developers list look like? Well, to begin with there are no fewer than 209 official
    ‘Fedora’ lists. Which should one join to ‘influence’ anything? Let us grant that a number, say half, are self-evidently not of great concern to operational deployment and employment of RHEL, CentOS or SL
    in the Enterprise environment. The oddly named ‘UK-Ambassadors’ or the narrowly focused language translation mailing lists for instance.

    That still leaves in excess of a hundred lists. Where should one apply pressure? What forum exists to discuss the economic costs to Enterprises of introducing a marginal, possibly questionable, improvement to an existing UI or common utility? The devel list? The users list?

    A perusal of the contents of both the Fedora devel list and users list does not give one much hope that such a point of view would be tolerated, much less welcomed. For example, one of the notable contributors to those forums is himself banned from this list. Further, discussions tend to be far, far down in the weeds, if not subterranean altogether, when viewed from the perspective of the question: what is this change, improvement, alteration or deprecation going to cost the installed base to implement?

    No, no-one presently on this list is likely to have very much of an impact on the folks that are the Fedora project. Their objectives are far removed from the concerns of those tasked to keep automated systems working and invisible to the Enterprises that employ them. The overarching concern of the Enterprise is to employ capital and labour to produce value; and not simply to prove technologies or advance personal or political agendas. Not that the latter two situations are uncommon in the Enterprise either.

    One might, however, consider that the CentOS list is a concentration of people that evidently have some status within a number of Enterprises. And these influential people have chosen not to pay RH
    for their offering. It might be of some interest to RH in determining why this is so. It might also be the case that this forum, being concerned with issues such as deployment on a large scale and the costs of upgrading RH flagship product, provides valuable insight on how RH’s paying customers might be viewing their product as well.

    After all, because we use CentOS rather than RHEL and forgo the provision of RH’s expert advice, then we ourselves and our organisations are a self-identified technologically advanced user community. And we are concerned more with the entire package than with any particular component or detail. If we have concerns and reservations then perhaps RH should have concerns. If we express them here then there is a chance, a small chance but a chance nonetheless, that someone at RH with a view a little broader than that evidenced in most of the traffic on the Fedora devel list, might take notice.

    IMHO, FWIW.

  • Exactly. They don’t care about breakage, only change. In the early days I tried to follow Fedora development and no one paid any attention to complaints about breaking interfaces that other things rely on. I eventually just gave up when they pushed a kernel update mid-rev that wouldn’t boot on my (mainstream IBM) test box and subsequent updates were the same. I don’t really expect them to care
    – the people who have something invested in existing components and interfaces have been split out of that community. I just see it as a big mistake to let them control the future of the distribution to people that need stability and ongoing interface compatibility.

    I used to try to encourage using more Linux vs. Windows here and deployed CentOS for infrastructure myself wherever possible because it used to be easier to manage and more stable. But now that I’m approaching retirement and realizing that the current management processes aren’t going to continue to work, I think that may have been a mistake.

  • But it doesn’t matter how pretty Gnome3 is on some other box. I use remote connections through NX/freenx or x2go exclusively. Gnome3
    won’t work that way. And that’s typical of the changes.


    Les Mikesell
    lesmikesell@gmail.com

  • Let me second you. I for one have fled from Gnome (on my FreeBSD workstation, once the upgrade made me switch from Gnome 2 to Gnome 3). I need the job done, and want my GUI to be what was perfectly suitable for a long time. I do not care about its looks or the fanciness of an “ultimately different” user experience. Therefore I fled from Gnome to MATE. (The decision was made after 2 weeks of frustration doing in Gnome 3 the work I usually was doing on my workstation.)

    This is just my $0.02 (and note, this is from me, the one who is “not ready to join ipad generation”).

    Just on a side note: I question the intelligence of an attitude that something (that works for some people) has to be destroyed to make room for something else one thinks to be more appropriate.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • I think this essentially sums up your point, and elucidates what I
    think is the error in your thinking.

    At the point where anything is deployed in CentOS, all the decisions that have been made regarding the technologies in question have been made. The technologies had probably a year of use in Fedora and most likely several months of testing in RHEL’s internal development. If you’re talking about critical core software, whole infrastructures have been built around it, documentation, build infrastructure, deployment guidelines, etc.

    I agree that following the Upstream is a significant time investment, however, as the saying goes, “You Get What You Pay For”. Posting to a CentOS list asking for change is probably too little, too late.

  • Yes, it is pretty clearly an error to think anyone else cares. Red Hat will make some money on new training classes helping people cope with the breakage. And CentOS will continue to be a copy with no input. That still doesn’t make it any better for the CentOS users of things that now have an expiration date.

  • Let me start by saying I’m also not a fan of Gnome3, and prefer MATE. However, I believe the new interface provided by Gnome3 is both well thought out and based on the results of research on Human-Computer interactions. Gnome has published their GNOME Human Interface Guidelines here:

    https://developer.gnome.org/hig-book/3.2/

    https://developer.gnome.org/hig/stable/

    The idea is to have a uniform and consistent interface that is intuitive to all potential users. You’ll probably agree with me that UNIX/Linux interfaces tend to be extremely inconsistent between programs, and even between elements of a display interface. Most of us who have been using UNIX for decades are familiar with many of the quirks and have long since adapted. I don’t fault Gnome for trying to actually provide some guidelines for design. Apple has been praised for many years over its easy-to-use interface, largely because they have very strict control over their interfaces and a walled garden approach to apps. It would be very difficult to duplicate the ease of use from Apple while maintaining the free/open spirit in FOSS, so Gnome has a difficult path to tread.

  • I read it and there are very familiar feelings. Some 7 or so years ago I found myself at a meeting (that was an OpenSolaris meeting) with a bunch of people who had started looking for alternatives to migrate their boxes to from Linux. (I may be off on the time; it was right after Oracle acquired Sun Microsystems, so there was already this joke out there: what do you call that system? If you repeat Sun-Oracle very fast you will likely get it right: “snorkel” ;-) One of the pushing points was: already then, on average every 30-45 days there was either a glibc or kernel update, meaning you have to reboot the box (and on multiple threads here a bunch of other unpleasant things were mentioned, so I’ll skip them…). Linux went from being a Unix-like system to being more Windows-like (sorry if it offends anyone, but I can’t hold myself back and not repeat what one of the people called Linux then: “Lindoze”).

    So, several of those people fled to open solaris. My journey was different, I ended up migrating a bunch of most important boxes to FreeBSD. I know how many on this list are already allergic to me saying this, I promise, this is the last time. I’m using as excuse something familiar I feel in James’s post…

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Stupid me: Sorry, in Les’s post I meant to say..

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • The amount of actively-maintained software has always matched the available brainpower given over to its maintenance. Therefore, if we are going to add more features, some old things have to be left to die.

    I do not mean

  • My only point there was: do not destroy what works in the name of building something better. On the other hand, if it is open source software one cannot demand that developers continue to maintain what they do not want to maintain. Luckily (for some of us), MATE forked off Gnome at that fundamental decision moment. My only disagreement was with “one needs to destroy something to build the new fancy…”. That doesn’t sound to me like a sound way to do things. (Building a new building in downtown Chicago being a rare exception ;-)

    Yes, and no. On the “no” part: have you ever heard someone saying “sometimes you need to trick the Macintosh into doing what you actually want to do”? That is: when you are using that nice GUI interface, not at the level of the command line, of course.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Yeah, that’s going to be fun when the ‘push incompatible changes’
    mentality reaches the Python interpreter. Oh, wait – when it affects RHEL’s own work, they hold back…

    You get programming efficiency from well-designed, high-level library support with stable interfaces, regardless of the language itself. How long would it take you to hook a perl or java program to a new sql database (assuming you aren’t the first one ever to do it)? Or parse some xml? The things that kill programmer time are having to do tedious tasks like that from scratch, or dealing with interface changes in the code that was supposed to handle them.
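    To make the stable-interface point concrete, here is a hedged sketch using Python’s DB-API, which has kept the same connect/cursor/execute shape across databases for decades. The table, the values and the choice of the stdlib sqlite3 driver are illustrative assumptions, not anything from this thread:

```python
import sqlite3  # stdlib DB-API driver; stands in for any SQL backend

# The same few calls work whether the backend is SQLite, PostgreSQL
# or MySQL; only the connect() line changes when you swap databases.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hosts (name TEXT, os TEXT)")
cur.execute("INSERT INTO hosts VALUES (?, ?)", ("web1", "CentOS 6"))
cur.execute("SELECT os FROM hosts WHERE name = ?", ("web1",))
print(cur.fetchone()[0])  # -> CentOS 6
conn.close()
```

    That is the whole job of wiring a program to a database when the library interface is stable; the cost only explodes when the interface itself churns.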


    Les Mikesell
    lesmikesell@gmail.com

  • As long as it is right and people keep arguing with it, I guess.

    Well, it was what I wanted. Now it’s different.

    A different distro with unique maintenance requirements is exactly what I don’t want. But I don’t see how to avoid having multiple systems with different required procedures overlapping for some time now.

    I don’t blame anyone in CentOS-land for the breakage – I’m sure it eats some of your time too. But, is there anything that could help automate a transition? Are there tricks buried inside the automated
    6x-7x upgrade tool that could be separated out to help build a working-but-parallel 7x system to test before cutting over? How are others dealing with the end of freenx and the inability of x2go to run gnome3? That’s ‘almost’ a uniquely CentOS issue because freenx was so easy on CentOS5/6.

  • is that a promise ? please, hurry up. I, for one, am tired of your diatribes about how change is bad, and I suspect I’m not the only one.

  • Oh No. Just because something works well it does not stop being
    “technology” unless the USA people, who have decimated my language
    (English), have a new definition for “technology”.

    Warren are you serious that things that do not work well are
    “technology” but things that do work well are *not* technology ?

  • Is change for advantage and betterment or merely because someone wants to do the same tasks in a different manner and wants everyone else to abandon their existing methods of working ?

    Changes are not inherently bad but changes prompted by others’ amusement are usually unproductively disruptive.

    Enterprise, in the RHEL context, suggests stability or have I
    misunderstood the USA definition of “Enterprise” ?

  • Enterprise to me implies large business. Businesses that don’t adapt to external changes become fossils and die off.

  • The one east of the Atlantic continues to decline and is only “great”
    when compared in size to Brittany in the north-west of La France.

    Great Britain actually means a Brittany bigger than the French area called Bretagne.

    The USA has certainly damaged my language. These days (ever since George W) one no longer devises a “plan”. Instead one makes a “road map” :-)

  • Adapt sounds more pleasant than “disruptive change”. It suggests a less abrupt change process :-)

  • Enterprise literally means ‘undertaking’. It has been used euphemistically since the later 1980s as a code word for associations having a legally recognised form that operate for some sensibly describable outcome. So one has large, medium and small enterprises, not-for-profit enterprises, commercial enterprises, social enterprises and so forth.

    The greatest threat to the survival of any organism or organisation is a change to its environment. It is because of this that widespread adoption of so much innovation is delayed using societal pressure.
    This is not done entirely out of narrow self-interest but from a sensible appreciation of the limits to the speed at which people can adapt to change.

    As is noted elsewhere, change is inevitable. But there are many kinds of change. For instance, there is the change wrought by sudden and dramatic increases in productivity. How many here are cognisant of the fact that the O2 steel making process introduced in the 1950s lowered the labour content of a Tonne of steel by three orders of magnitude? Without that single change much of what we invisibly accept as part of the urban landscape today would not exist. Without that change it is likely that Bethlehem and Republic would still be in business. Without that change hundreds of thousands would still be employed in the steel mills of North America.

    Then there is fashion.

    An enterprise has its hands full with just dealing with the former type of change. It can ill afford to waste resources on the later.

    With respect to RHEL7 the question is: Which are we dealing with, substance or fashion? Or rather, which type predominates?

    I have no argument against claiming the switch to xfs is substance, not fashion. But then again that changeover, however beneficial, is nearly invisible to most of us; subsumed as it is in the overarching effort of setting up a new system from scratch. Once a host is set up, its file-system certainly has little further discernible day-to-day impact upon anyone, much less end-users.

    But Gnome3? Systemd? These seem highly intrusive changes that directly affect, often negatively, the daily tasks of many people. Are these substance or fashion? Do the changes they make fundamentally improve RHEL or just do the same things a little differently? How much is it worth to an Enterprise to have a similar desktop metaphor on the workstation as on a tablet? How many desktop workstations will be replaced by the smart-phone, the tablet? I do not have an answer but I suspect, not much and not many.

    What does systemd buy the enterprise that sysinit did not provide?
    Leaving aside upstart as a sterile diversion.

    I am not certain of anything here either. I have learned that my initial resistance to change, any change, is just as emotionally charged as that of the next person. So, I tend to wait and see. But, I do ask questions. If only to discover if I am alone in my concerns. I am but one person and I need the views of others, agreeable or disagreeable to my prejudices as the case may be, so as to form an informed opinion.

    I am admittedly somewhat concerned about the overall direction of the RHEL product. I fundamentally disagree with their Frozen Chosen approach to key software components. And with the lock-step forced upgrades that are the result. I am not at all certain that back-porting security fixes to obsolescent software is a profitable activity when often for much the same effort, if not less, the most recent software could be made to run on the older release without adverse effects elsewhere.

    However, I offer no answers and promote no particular course of action, saving only reflection of what is happening now and the price that is paid for it. I am simply seeking the alternative views of others on these issues.

  • I am a newcomer to CentOS and I appreciate the discussion. It would seem to me – and I am sure I am not the first one to state the obvious – Fedora is primarily a desktop OS while CentOS is primarily a server OS.

    The user needs are very different, the features needed are very, very different, hence many of the current features, or future features, should remain in one or the other and not cross over. It seems CentOS is at risk of losing features highly appreciated by its core group of users, obviously a very different type of user that depends on and appreciates Fedora.

    Yes, I know CentOS is derived from RedHat and simply follows upstream development.

  • For those who don’t know, as of version 21, Fedora has split into 3
    streams: workstation, server, and cloud. This addresses many of the concerns raised in this thread. See https://getfedora.org/ for details. I
    gather we’ll see the impact of this change with CentOS-8.

    Kal

  • Well, (re)starting services in a reliable way? Ensuring that services are up and running?

    About which sysinit are you talking, btw? The init process in RHEL 6 was upstart.

    systemd has its ugly downsides, but it _does_ provide much-needed features.

    if you don’t know them or if you ignore them or if you think you don’t need them:
    fine

    but don’t think others don’t know or need them.

    HTH

    Sven

  • That sounds like you have collected and counted “votes” pro and against systemd. (Mine, BTW, is against, and I do not feel it fair to be discounted as a stupid minority, as is implied in your post.) There is no point repeating the list of ugly sides of systemd – which you said yourself are there. As far as “advantages” are concerned: I didn’t see any compared to sysvinit or upstart. I don’t care that a _laptop_ with systemd starts 3 times faster – it’s brilliant when you have to start it right on the podium a few seconds before giving your presentation. However, my life is more influenced by the servers I maintain. BTW, when “counting votes” keep in mind the existence of an army of refugees from Linux; they have already voted against the ugliness here, there, …

    Just my $0.02

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • How could it sound like I collected “votes”? I don’t care about votes when it comes to technical superiority.

    You are doing an excellent job of ignoring my argument. Again: how do you ensure that your system services are up and running with sysvinit?

    Then how do you maintain servers with sysvinit?

    I can’t take this seriously, as it seems you didn’t research any of the design goals of systemd or any of the shortcomings of old init systems.

    kind regards

    Sven

  • It is not just 3 times faster .. you can also list prereqs. So if something requires httpd to be started, you don’t just mark one to start with a 10 and the other with a 15 .. then hold your breath and hope that it takes 2 seconds or less for the item marked 10 to start before the item marked 15 starts .. the daemon with a requirement does not attempt to start before the prereq has started and registered as started.
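    The ordering mechanism described above looks roughly like this in a unit file. The unit name and paths are hypothetical; `Requires=` pulls the dependency in and `After=` enforces that startup actually waits:

```ini
# Hypothetical /etc/systemd/system/myapp.service (sketch, not a shipped unit)
[Unit]
Description=App that must not start until httpd is up
# Start httpd.service whenever this unit is started
Requires=httpd.service
# Do not begin starting until httpd.service has finished starting
After=httpd.service

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```

    This replaces the S10/S15 numeric guesswork: the dependency is stated once and the init system enforces it.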

    You also act like starting servers faster is no big deal .. ask Amazon if it was a big deal that the hundreds of thousands of servers they needed to restart for the AWS Xen update took 1/3 the time to restart. Ask them how much money it would have cost them for things to take far longer to restart.

    Ask any of the cloud providers how much time/money it can save if you can spin things up faster.

    You guys can’t just ignore the advantages of systemd and dismiss the points like they don’t exist. Here is a prime example: with sysvinit you would need another piece of software to do something systemd does out of the box. You need something like monit (http://mmonit.com/monit/) to monitor daemons.
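    The supervision point above can be sketched as a unit-file fragment; the daemon name and path are hypothetical, and this is an illustration of the `Restart=` directive rather than any real shipped unit:

```ini
# Hypothetical /etc/systemd/system/mydaemon.service (sketch)
[Unit]
Description=Example daemon supervised by systemd itself

[Service]
ExecStart=/usr/sbin/mydaemon
# Restart automatically if the process crashes or exits with an error
Restart=on-failure
# Wait 5 seconds between restart attempts
RestartSec=5

[Install]
WantedBy=multi-user.target
```

    Under sysvinit the equivalent supervision had to come from an external tool such as monit, which is the gap being described.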

    I agree with Sven .. this is a religious argument (like vi/emacs or kde/gnome or even gnome2/gnome3) and not a technical argument now.

    Stuff changes, get used to it.

  • On Sun, 2015-01-11 at 20:04 +0100, Sven Kieske wrote to Valeri Galtsev ….

    Design goals ? Compatibility with and/or minimum disruption to existing systems ?

    It was arrogant change with absolutely no regard for the existing CentOS/RHEL users. That *is* a strange “design goal” (or ‘objective’ in English). Some may consider that “goal” an inadvertent omission.

    Obviously designed by non-CentOS/RHEL users for their personal amusement and pleasure, and not as an acceptable enhancement that could be implemented, perhaps in phases, with minimum disruption to existing systems reliant on stable CentOS/RHEL. Yes, I know it takes brains to properly consider all the implications of major changes. On this occasion it seems the ‘brains’ were holidaying away from the influence of due diligence and old-fashioned common sense.

    Why should the ‘brains’ care ? They don’t run systems that require stability and reliability – that is why they lurk in Fedora where disruption is a scheduled “design goal”.

    Remember that English phrase? Fools step in where wise men fear to tread.

    Hopefully the next “improvement” will consider the adverse effect on the non-Fedora users and on their well-tuned systems.

  • There’s always the option of just NOT upgrading…and using what you currently have…(I’m just now going from CentOS 5 to CentOS 6!….) I’m just saying.

    EGO II

  • Indeed. Or another system altogether (sigh). I’m just extending your thought half a step farther ;-)

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • And that’s the beauty of it…the “extending” of thoughts to achieve a common goal…..

    EGO II

  • Anyone who already has ‘enterprise’ software already running on a distribution without systemd (e.g. any earlier RHEL/CentOS) clearly is able to get along without those changes.

    There was a time when having one piece of software do one specific job was considered an advantage, since the complexity of large monolithic programs makes them harder to debug. I think the way systemd will be judged in the long run will relate more to whether a bug affecting a large number of systems is allowed to slip through QA. It’s not impossible to get this right – Microsoft hasn’t made a big mistake in a long time… But it seems risky.

    But it doesn’t even simplify things. You can’t just start something with ‘service program start’ and know whether it worked.

    Yes, definitely – a lot of people would be vocally unhappy if a distribution dropped vi and made everyone use something different –
    and it is unreasonable to expect anything else. Gnome3 vs gnome2 is a practical matter, though, given that gnome3 doesn’t work with x2go. It’s not really about ‘differences’ it is about making changes that break existing infrastructure without regard to the damage to users.

  • Systemd does support managing and starting SysV init scripts. In fact, it does a better job than SysV init does — putting them into their own cgroup and capturing stdout and stderr into the journal.

    Making ‘chkconfig’ and ‘service’ work with systemd instead of SysVinit makes it so you have a fairly minimal impact, interface-wise.

    I know this might sound crazy, but have you considered… just once… that maybe the design of RHEL7 might have happened in a planned manner, with the full understanding of its developers? You make it seem like the multi-year development effort to produce RHEL7
    was done in some sort of drunken haze by untrained interns with no scrutiny by experienced linux developers.

    I know conspiracy theories are fun but your argument is simply absurd and insulting. At least try to assemble a convincing argument other than ad hominem and “change = bad”.

  • Or going even farther, if you like CentOS but not systemd, do the work to get CentOS working without it. Unhappy Debian users are trying to do this with Devuan. It seems extremely unlikely that complaining about downstream is going to change anything.

    –keith

    I know… But: systemd is in a mainstream kernel. All Linux distros inevitably have a Linux kernel… There are different levels to which I care about the two different groups of boxes I maintain. One of them stays with Linux and is upgraded to the latest whenever appropriate, no matter whether there is systemd or anything else I might not like. The other group… I’ve been discussing their issues on different mailing lists for quite some time already. So, I’m happy. And wish the same to everybody else ;-)

    Valeri

    PS I guess I’ll just mention it: I’m quite happy about CentOS (or RedHat, if I look back). One day I realized how happy I am that I chose RedHat way back – that was when all the Debian admins (and those of its clones like Ubuntu, …) were fighting with the consequences of this:
    http://www.debian.org/security/2008/dsa-1571 . If I had a Debian machine I would not only regenerate all key pairs, certs, etc.; I would question the sanity of that box, and would never be certain what confidential stuff could have been stolen from it… I realized then that a flop that big never happened to RedHat. I couldn’t even point to anything that would constitute a big flop for the RedHat of that time. One only criticizes something while one cares about it ;-)


  • Heartbleed was pretty scary, no? I’d consider that at least as bad as the predictable number generator issue.

    –keith

    Well, heartbleed and shellshock were pretty much global: all systems using openssl or bash were affected (not only Linuxes, not to say particular Linux distributions – my FreeBSD boxes were affected too)… Equally bad, yet these were not flops of a particular distribution, so whichever system you decided to stick with, you had them. Not certain about you, but this kind of thing makes a difference for me. When I say I’m happy about [choosing way back] RedHat: heartbleed or no heartbleed, no difference.

    Valeri


  • I guess everyone will have an opinion of systemd whether it be good or bad. The only resolution is to either use a distro that has systemd on it, use a distro that DOESN’T have systemd on it…or build your OWN
    distro and don’t include systemd! I guess when it all boils down to it, there’s STILL choice…..even when it doesn’t seem like there is!

    EGO II

    I wouldn’t quite agree with you about someone building one’s own Linux distro without systemd. You see, systemd _IS_ in the mainstream Linux kernel, which you inevitably have to use. Having a distro with a kernel non-mainstream to the point that the systemd-related stuff is stripped out of it is quite a task. Less than writing one’s own kernel and building a system based on it, but still…

    Valeri


  • I am sorry…you’re right. I was basing that statement on the devs who forked Debian to make Devuan. I assumed that they are building a version of the linux kernel with no systemd in it. (Maybe I’m wrong?….will have to check out a few articles and find out what’s really going on!)
    My apologies…once again….

    EGO II

  • No, you are correct. They would just have to figure out how to do it on their own in a way that works.

    The bottom line is that every bit of the code that is used for CentOS is released to everyone. One needs to either use what is compiled or be smart enough to take the source code and make it do what they want.

    That can be done .. but it is much easier to bitch about what someone else is doing than actually do something themselves .. so what you will see is a bunch of whining all over the Internet and people using whatever is released .. because the whiners are too lazy to actually work on an open source project.

    I will admit to being a bit of a whiner when I first came to Linux, and it was over the massive changes that took place in Gnome 3. It was so long ago that I can’t even remember what I was complaining about… but after about a month the issue was “reverted”, or reinstated, and I’ve never complained since then. And the reason I don’t complain anymore? I once got an email response (will have to dig through the millions I have to find it! …unless I deleted it) from a person who worked on a project. It wasn’t the one I had been complaining about, but it was something popular, and he went into great detail about what is needed and required of him on a daily basis just to make sure the project “worked” for the millions of people who would download it. After reading his story… I will NEVER complain again! These people dedicate a LOT of their personal time to working on these things, and it’s kinda unfair to whine to them about one little feature when they’ve got bugs to fix… features to improve upon… updates to address, which then have to be compatible with not only what’s current… but what’s “older” as well. So yeah… you guys won’t hear a peep out of me regarding systemd… or anything else for that matter… unless of course it’s a valid bug that needs to be dealt with! LoL!

    EGO II

  • “You keep using that word. I do not think it means what you think it means.” — Inigo Montoya, The Princess Bride

    ‘systemd’ isn’t part of the Linux kernel. The init system ‘systemd’
    requires a Linux kernel (and won’t work on the BSD or Solaris kernel, for example). Unless you’re using ‘kernel’ as in the core part of the distro OS, which would include both the Linux kernel and init system… which would be either misleading or confusing. I’m hoping you understand the difference.

  • Generalizations are always bad.

    Some changes work best as a disruption; some changes work best as a gradual thing. It really depends upon the change.

    I experienced one of the nicer things about CentOS 7 in the desktop setting today, as I hotplugged a DisplayPort to HDMI adapter connected to a projector system into my Dell Precision M6500 laptop and watched it automatically configure the resolution and extend the desktop to the projector. I experienced a similar nicety when I docked the laptop that has two DisplayPort outputs connected to two Dell 24 inch displays and automatically got three-head operation (ATI/AMD Firepro 7820 here). And when a power glitch took the dock out for 5 seconds, the laptop didn’t go crazy, and everything came back up in a reasonable and elegant manner, including the network, the two external monitors, the external HD, and the external trackball. I had that happen on CentOS 6 once, and had to reboot to get the external monitor (only one at that time) back up.

    Enterprise != server-exclusive. We have several EL workstations here, running a mix of EL5 through EL7, in addition to our almost-exclusively-CentOS server farm running a mix of EL5 through EL7.
    The user experience of EL7 has thus far been very positive on the desktop side, but I’m still gathering data on the server side. Admin on the server side has been pretty seamless, with relatively minimal retraining required. Systemd is just not that much different from upstart, really; just a couple of different paradigms to deal with and relatively minor syntax differences. It is somewhat different from shell-script-assisted SysV init, but not in a negative way, just a neutral ‘different’ for the most part. It does seem to be more robust in error conditions (like when the admin shut down one of a cluster for removal from the rack for cleaning or an upgrade or whatnot, and either plugged the ethernet into the wrong port or didn’t plug it in at all;
    the EL7 box dealt with that quite elegantly, where an EL5 box had to have all services restarted).
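
    For a sense of those “relatively minor syntax differences”, here is a minimal unit file; the daemon name and path are hypothetical, written via a heredoc purely for illustration:

```shell
# Write a minimal unit file (hypothetical daemon and path):
cat > mydaemon.service <<'EOF'
[Unit]
Description=Example daemon (illustrative only)
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Those three declarative sections replace the start/stop/status/restart
# boilerplate of a SysV init script.  Day-to-day management becomes:
#   systemctl enable mydaemon    # was: chkconfig mydaemon on
#   systemctl status mydaemon    # was: service mydaemon status
grep -c '^\[' mydaemon.service   # prints 3 (the three sections)
```

    The retraining cost is mostly learning where things live, not a new scripting language, which matches the “neutral ‘different’” experience described above.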

  • This is what I was referring to:
    http://www.linuxquestions.org/questions/slackware-14/it-seems-that-in-future-linux-kernel-itself-will-force-the-use-of-systemd-4175483653/

    And after carefully digesting it, it pretty much boils down to what I said. But again, I personally only care about it to the level I care about the part of my boxes that stay Linux, systemd or no systemd, no matter. I’m just trying to help others realize that there is no way to build a systemd-free Linux distribution (that will have any future). You can hate anyone who doesn’t like systemd, but you can not disagree with them when they say there will be no Linux without systemd in any observable future. To me it is merely a statement of fact ;-)

    Valeri


  • Oh, boy, I like this! Do we finally converge on not rebooting machines often?!

    Valeri


  • What? Did you only read the title of that page? This is what you get when you base your opinion on random forum (a slackware forum?)
    posts.

    The Linux Kernel developers who work on the ‘cgroups’ code are changing the interface to cgroups. The systemd developers are adapting systemd to use that API. systemd relies on the cgroup functionality, so that’s expected.

    If you don’t want to use systemd, eventually you’ll have to use some other interface that can use the new API. Most likely there will continue to be a compatibility layer that’ll let you continue to use the filesystem API for a while. It’s pretty clear that the kernel developers aren’t fond of that interface, both for consistency and security reasons.
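
    For the curious, the filesystem API in question is visible on any running Linux box; the sample line below is illustrative, and the parsing is plain shell:

```shell
# Every process's cgroup membership is exposed through /proc.  On a
# systemd host each service lands in its own group, e.g.
# ".../system.slice/httpd.service" (systemd-cgls shows the whole tree).
cat /proc/self/cgroup 2>/dev/null || true   # tolerate non-Linux hosts

# Pulling the unit name out of one such line (sample shown here, not
# read from a live host):
line="1:name=systemd:/system.slice/httpd.service"
unit=${line##*/}     # strip everything up to the last '/'
echo "$unit"         # -> httpd.service
```

    This is exactly the kind of direct filesystem poking the kernel developers want mediated by a single writer, which is why the API change lands on systemd first.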

    I still fail to see how this makes you think that systemd is part of the Linux Kernel. Is systemd’s development heavily influenced by the development in the Kernel? Sure! Does that make it part of the Kernel? I wouldn’t think so. I could probably make a stronger argument that GCC is part of the Linux Kernel, since both their development seem to be intertwined.

    Is one to infer from that remark that the E in RHEL has no meaning whatsoever? And that it should be ignored? Or perhaps redefined to whatever is convenient for the moment and the POV of the definer? In which case is it anything more than noise? In any case, the point of defining the word was to show that Enterprise != Large, nothing more.

    This is, as you are no doubt quite cognisant, a straw-man argument. Nowhere in this discussion has anyone defined ‘rules’ or claimed that rules exist, in Merriam-Webster or elsewhere, in whatever form you imagine them to take.

    This issue, as I see it, is not about CentOS-7 per se. It is about the path that RHEL seems to be following at the moment and what that might mean to current users sometime in the future. This forum is where I find those who share my interest in RHEL, albeit in the form of CentOS. I am seeking their views on the matter. I do not expect a solution here. Nor would I look for one on any of the multitudinous mailing lists associated with Fedora.

    A solution postulates a problem to solve. I am simply checking whether the RH environment suits our needs and whether it is likely to continue to suit; or is likely to change in ways that might prove most inconvenient, for us.

    We moved to RH5 (or 6; it was a long time ago, pre-RHEL) from HPUX. That change was driven equally by economics and a political change at HP with respect to its clients. It took the better part of five years to complete. If RHEL is changing such that 8 will be less useful than 7, or considerably more expensive to deploy than we can reasonably afford, then we need to be looking now for a replacement.

    Why is that necessary? I am expressing my opinion about the value derived from the resources expended. I was not aware that I am not permitted to express such unless I can point to a representative distribution which somehow manifests an approach which affirms that opinion. That seems a little like saying only a tailor can comment on whether the emperor is wearing any clothes.

    In any case, it seems to me that the rather recent innovation of software collections indicates that perhaps I am not alone in that observation.

    As it happens a most useful, to me at least, piece of information was revealed in the course of this thread. That was the existence of a server based stream for Fedora. I have downloaded that ISO and intend to install it on a VM in the near future. If the results of that investigation prove satisfactory then that will go a long way to alleviating the doubts that my, admittedly limited, experience with CentOS-7 has engendered thus far.

  • The place where it matters is for companies large enough that they have written their own applications and need a stable OS and library set to run on. Every interface change and install/operating procedure change is going to cost development and training time that would generally be better spent improving your own application. If you just run the applications included in the distribution, it doesn’t matter so much since even if the internal interfaces change, they stay consistent within the packages the distribution ships together – that is, someone else has already been forced to deal with the breakage.

    And the need for docker as an even more extreme defense against OS/lib instability really points out the problem.

    That’s an interesting turn of events, but is this just a separation of packages or is there really a group in Fedora that actually maintains large server farms and has an interest in keeping their applications working?

  • For those who want to track what is going on in Fedora, http://fedoramagazine.org/ highlights discussions from the “multitudinous”
    mailing lists, forums, meetings, etc.

    For those interested in Fedora Server, its goals, and the people working on it, http://fedoraproject.org/wiki/Server seems a good place to start, in particular, http://fedoraproject.org/wiki/Server/Product_Requirements_Document. This is still a very new project: if you want to help shape what happens in the future, get involved.

    Kal


    Kahlil (Kal) Hodgson GPG: C9A02289
    Head of Technology (m) +61 (0) 4 2573 0382
    DealMax Pty Ltd

  • Hallo, hallo, the majority of the world is not the US of A. Our chosen dictionaries are not US of A ones. Probably within the next ten years a Chinese originated version of Linux will supplant many of the US of A versions. No doubt .mil is currently seeking a more secure version of any commercial or free operating system after its publicly embarrassing hacking. The US of A’s DoD is never ever going to confess how deep the hackers penetrated.

    Being in the real world rather than in the hectic and unstable ‘change every 6 months Fedora environment’, just what are the RHEL/CentOS 8
    options at this moment? Real users of RHEL/SL/CentOS want

    1. stability
    2. reliability
    3. security revisions
    4. bug fixes

    Many real users lack the time – because time is always finite – to comprehensively monitor the multitude of Fedora lists.

    Ideally, before RH decides to impose an abstract version of Fedora upon the world, RH could ask for comments and give everyone sufficient time to respond.

    Bored clever people who never really run anything on a daily basis should remember that if they wish to play games, then CentOS is probably not the best Linux. Neither is RHEL/SL. Having a high IQ is never an indication of common, or of any other practical, sense.

    Progress for progress’s sake is not beneficial.


    Regards,

    Paul. England, EU. Je suis Charlie.

  • For the love of Pete! If you use RHEL or CentOS, you’ll have a stable, reliable operating system with bug and security fixes for upward of 10
    years! For free (in the case of CentOS)!

    Will you give it a rest already?

    If you can’t tolerate moderate changes at a 10 year interval, then there probably isn’t any option for you.

    So you want them to do what? 5 years of development and then get your approval on hundreds or thousands of those efforts, and then once they have your approval, another few years of development and testing to make sure that whatever you approve actually works?

    Oh, and you don’t want to pay for it, either.

    I don’t know what world you live in, but economies don’t work that way.
    They never did. Free Software is a participation economy. Your feeling of entitlement to dictate how Fedora, Red Hat, and CentOS
    function, without contributing any of your own time or money, is shockingly disconnected from reality.

    Please stop arguing about this. It is annoying.

  • Just to note: Fedora has been upstream for RHEL for many years. New features are tested in Fedora for a long time before they hit RHEL. For example, systemd was first introduced in Fedora 15 (we are currently at
    21). Ample time has been given to discuss, critique, provide feedback and to help shape what ends up in RHEL. If you are running RHEL/CentOS, consider running an instance of Fedora in a VM or testing environment so you get years of warning about new features before they hit RHEL. If you are concerned about what happens to RHEL, get involved:
    https://fedoraproject.org/wiki/Join.

    Kal