CentOS 7: Systemd Alternatives?


Reading about systemd, it seems it is not well liked and is reminiscent of Microsoft's “put everything into the Windows Registry” approach (Win 95 onwards).

Is there a practical alternative to the omnipresent, or invasive, systemd?

232 thoughts on - CentOS 7: Systemd Alternatives?

  • So you are following the thread on the Fedora list? I have been ignoring it.

    Best I can tell is learn it and use it. And if you have any services, fix them so that they work with systemd. I work with one that does not and it is very slow to complete its startup.

  • To the tune of YMCA

    Young man, you don’t like systemd
    Oh young man, you get no sympathy
    Young man, you will find that your luck
    Is slowly running out with Linux

    So young man, if you want to stick
    To something, that more resembles Unix
    And young man, if you want to sing
    Goodbye to Poettering,

    (bah bah bah bah)

    FreeeeeeeeeBSD (yeah yeah yeah)
    FreeeeeeeeeBSD

    etc. I just made this up at work today, and that’s as far as I got.

  • No. I read some of http://www.phoronix.com/scan.php?page=news_topic&q=systemd

    The systemd proponent, advocate and chief developer? wants to abolish /etc and /var in favour of having the /etc and /var data in /usr.

    Seems a big revolution is being forced on Linux users when stability and the “same old familiar Linux” is desired by many, including me.

    I was keenly waiting to upgrade to C7. Perhaps I’ll upgrade to C6 and retain familiar Linux.

  • Ah, a Broadway musical next ? :-)

    I’m an old man who remembers multics, GECOS/GCOS and the ‘B’ programming language.

    Is FreeBSD systemd-less? Got a FreeBSD manual.

  • As far as I know there is no going back to SystemV at this point and I am fine with that.

    systemd is just fine. It has been around on Fedora for a few releases now. It is quite compatible with old SystemV start scripts and systemd simply uses the SystemV start scripts as configuration files to start those services.

    What you are probably seeing is the result of a side effect of the new systemd strategy. systemd only starts services when they are actually needed. systemd does this by simply creating a socket on which it listens for requests for that service. The service is only started when a request is made to that socket. Of course some services are up and running from the beginning, but those not needed are left to load and start when a request is made on the socket for that service.

    So the delay in starting your SystemV service means that your service is waiting for a reply from a service on which it depends and which has not yet been started. systemd receives the request from your service on the socket intended for the service yours is requesting. systemd then starts that service and returns the result – after a bit of a delay – to your SystemV service. After the first request to the systemd-managed service, there should be no further delays, unless the service is seldom used and systemd determines it can remove the service from memory with minimal impact.
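    The socket-activation mechanism described above can be sketched as a pair of unit files; the names, path, and port here are made up purely for illustration:

    ```
    # example.socket -- systemd listens on this port on the service's behalf
    [Socket]
    ListenStream=12345

    [Install]
    WantedBy=sockets.target

    # example.service -- started only when traffic arrives on the socket
    # (in reality these are two separate files sharing the same base name)
    [Service]
    ExecStart=/usr/local/bin/example-daemon
    ```

    With the socket unit enabled, the daemon itself stays unstarted until the first connection arrives.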

    I like systemd a lot. I still like SystemV a lot, too.

    I have a page on one of my web sites that does not explain systemd, but rather provides a number of very good links that do explain it – in morbid detail. These links also discuss the philosophy behind the change. Good reading!

    http://www.databook.bz/?page_id%78

    The latest Fedora documentation has good information about using systemd to manage services and managing and configuring systemd itself.

  • No systemd in FreeBSD. It isn’t Linux, and like any O/S, has its own oddities.

    It would take more adjustment, IMHO, to go from CentOS 6.x to FreeBSD than to go to 7.x. (I’m saying this as someone who uses both FreeBSD and Fedora which has given a hint of what we’ll see in CentOS 7.)

  • That’s a good point. Systemd may be the “abomination of desolation” that causes me to finally start moving to a BSD variant. Or at least start looking at one.

    And I chose the word “abomination” carefully and deliberately. And I also chose the Biblical allegory very deliberately. Just as the “abomination of desolation” is that which will portend the end of the world, systemd is the “abomination of desolation”
    that will portend the beginnings of the destruction of the Linux most of us hold dear.

    But that’s not why I mentioned this… I’d never thought of the BSDs before, but considering Heartbleed and how OpenBSD forked OpenSSL into LibreSSL and is taking security very seriously, it’s actually something I am going to give a great deal of consideration to.

    I’ve been a Linux admin for nearly 20 years now – I was around when it was 0.99 and everyone was cheering about it being POSIX finally. Maybe it’s just time to move on.

    –Russell

  • The answer to this is no; replacing systemd with something else is just way too invasive.

    Since new versions of CentOS, Ubuntu, Debian, RHEL, Fedora, OpenSUSE, Arch, Mageia and other Linux distros are all switching to systemd as the default .. I would suggest that learning how to use it is going to be the way to go.

    Of course, there are alternatives, including using CentOS-6 until 2020.

  • Johnny Hughes wrote:

    I just hope that the distributions implementing systemd are not as shortsighted (or rather as heavy-handed) as Fedora/RHEL, and that besides it they also offer other alternatives (OpenRC is IMO a very good candidate, although I was also satisfied with sysvinit and upstart – both did their jobs _reliably_).

    Franta Hanzlik

  • It’s already started. Some configs have already moved from /etc to /usr under el7.

    Whilst I’m as resistant to change as the next man, I’ve learned you can’t fight it so best start getting used to it ;-)

  • Y’know, I was considered a troll when I said on Fedora forums that systemd going into server systems might start driving people away from RH to the BSDs. (And to be honest, I was being trollish there, in a friendly way–in the same way at work I’ll say something about Arch loudly enough for our Arch lover to hear.)

    Now that it’s insinuated itself in the RHEL system, I do wonder if it is going to start driving people away. In many ways, IMHO, RH has become the Windows of Linux, with no serious competitors, at least here in the US. Sure, some companies use something else, but when I had to job hunt last year, 90-95 percent of the Linux admin jobs were for RedHat/CentOS/OEL/SL
    admins.

  • If you want to avoid firewalld for now you can uninstall it and instead install the iptables-services package. This replaces the old init scripts and provides an “iptables” systemd unit file that starts and stops iptables and if you require the old “service iptables save”
    command you can reach that using “/usr/libexec/iptables/iptables.init”.

    Also if you want to keep NetworkManager on a Server you can install the NetworkManager-config-server package. This only contains a config chunk with two settings:
    no-auto-default=*
    ignore-carrier=*

    With this package installed you get more traditional handling of the network: interfaces don’t get shut down when the cable is pulled, there is no automatic configuration of unconfigured interfaces, and no automatic reload of configuration files (the last one doesn’t require the package and is now the NetworkManager default behaviour).
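    As a concrete sketch, the two settings above land in a small NetworkManager keyfile; the exact file path below is an assumption based on typical packaging:

    ```
    # shipped by NetworkManager-config-server (path may differ by release)
    # /usr/lib/NetworkManager/conf.d/00-server.conf
    [main]
    no-auto-default=*
    ignore-carrier=*
    ```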

    Regards,
    Dennis

  • That presumes that your conservative attitude is the majority opinion, though. Systemd is one of the features that I have been looking forward to in CentOS 7 because of the new capabilities it provides. While this will surely drive some people away, it will actually attract others, and if you think that this will lead to some sort of great exodus then I think you are mistaken. Not everybody is this uncomfortable with change.

    Regards,
    Dennis

  • I still prefer IPTables, so in Fedora I simply disabled firewalld and enabled IPTables. No need to uninstall. I have read that IPTables will continue to be available alongside firewalld for the unspecified future.

    Note that IPTables rule syntax and structure have evolved so your ruleset may need to be updated. I did find that the current version of IPTables will actually convert old rulesets on the fly, at least as far as the syntax of the individual rules is concerned. From there you can simply use iptables-save to save the converted ruleset.
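    For reference, the converted ruleset is written out in the standard iptables-save format; a minimal /etc/sysconfig/iptables might look like this (the rules are chosen purely for illustration):

    ```
    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A INPUT -p tcp --dport 22 -j ACCEPT
    COMMIT
    ```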

  • Err.. what? Even on that wild Fedora thread this did not come up!!!

    I will presume that you understood your information source well and that you actually know what you are referring to … so, could you elaborate more on this? (With some references.) I have been using systemd for some time (and I keep myself informed about it) and I would need to know in time about this kind of change..

    Thanks!
    Adrian

  • Be careful with this, though. A while ago I tried this on a system that also had libvirtd running and ran into the problem that libvirt detected the existence of firewalld and as a result tried to use it even though it was disabled. It took a while to figure this out; once I actually uninstalled firewalld and restarted libvirtd, it started to use iptables. This might have been fixed by now, but you should keep it in mind when you run into firewall trouble. Some software might mistakenly assume that just because firewalld is present it must also be in active use.

    There was a discussion a while ago on fedora-devel that the current handling of firewalld and zones is not ideal and there might be changes in store for the future. This will probably not hit CentOS 7, but you might want to keep an ear out in case some deeper structural changes happen. Always good to be ahead of the curve.

    iptables-restore is atomic. It builds completely new tables and then just tells the kernel to swap the old version for the new version. Depending on the timing, packets are handled either by the complete old rule set or by the complete new rule set. There is never any moment where no rules are applied or only half of the new rules are inserted.

    The problem firewalld tries to solve is that nowadays you often want to insert temporary rules that should only be active while a certain application is running. This collides a bit with the way iptables works. For example, libvirt inserts specific rules dynamically when you define networks for virtualization. If you now do an iptables-save these rules get saved, and on the next boot when the rules are restored they exist again, but now libvirt will add them dynamically a second time.

    Firewalld is simply a framework built around iptables that allows for applications to “register” rules with additional information such as
    “this rule is a static one” or “this rule should only be used dynamically while application X is running”. Then there is of course the handling of zones which is a concept iptables by itself does not know about.

    Regards,
    Dennis

  • Very true. I do remember Adam Williamson of Fedora commenting on their forums that he pictured many of the complainers about various changes, including systemd, to be old white guys, which fit me to a T.

    Systemd is one of the features that I have been looking forward

    That’s not what I said. I said I wondered if it would.
    I suspect it won’t. Whether this is good or not depends upon one’s point of view.

  • There are no plans to “abolish” /etc and /var.

    The idea is that rather than, say, proftpd shipping a default config file /etc/proftpd.conf that you then have to edit for your needs, it will instead ship the default config somewhere in /usr and let the config in /etc override the one in /usr. That way if you want to “factory reset” the system you can basically clear out /etc and you are back to the defaults. The same applies to /var. The idea is that /etc and /var become “site-local” directories that only contain the config you actually changed from the defaults for this system.

    Since you already have experience with systemd you are already familiar with this system where it stores its unit files in /usr/lib/systemd and if you want to change some of them you copy them to /etc/systemd and change them there. Same principle.
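    A minimal sketch of that override principle, using a drop-in file; the unit name and the setting are hypothetical examples only:

    ```
    # /etc/systemd/system/example.service.d/local.conf
    # Overrides only the listed settings of the vendor unit in
    # /usr/lib/systemd/system/example.service; run
    # `systemctl daemon-reload` after editing.
    [Service]
    Nice=5
    ```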

    /etc and /var will stay as valid as ever though and are not being
    “abolished”.

    Regards,
    Dennis

  • My concern is that it is a massive change with a large footprint. How secure is it really? It has arguably become a second kernel, given how many things it touches and handles.

    Maybe on desktops it makes sense – but I fail to see any positives for servers that once started run for months at a time between reboots.

  • Thanks for the info … actually I am (pretty much) up to date with what is happening (I don’t follow systemd-devel but I do follow Lennart Poettering and Kay Sievers on G+).

    My remark was kind of tongue-in-cheek with regard to the word “abolish”..

    And to the OP: the move of defaults to /usr is/was expected, as from the beginning the /usr consolidation was prepared in order to have a common system shared over the network.

    And about the new mechanics of systemd: I can’t wait to see how seamless the use of chef/puppet tools on systemd systems will be (and on any other cloud stacks).

    Adrian

  • I agree, but that is a change that you actively have to opt into. CentOS 6 will receive updates for many years to come, so you don’t have to immediately migrate everything over in a rush. Also, systemd is hardly new at this point. It has been available for years and has had quite some time to mature. Red Hat would not have made it the core of its
    “Enterprise” OS if it didn’t think it would be very reliable.

    The ability to jail services and restrict their resources is one big plus for me. Also, the switch from messy bash scripts to a declarative configuration makes things easier once you get used to the syntax. Then there is the fact that services are actually monitored and can be restarted automatically if they fail/crash, and they run in a sane environment where stdout is redirected into the journal so that all output is caught, which can be useful for debugging.
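    Those points (declarative syntax, automatic restart, jailing, journal capture) can be seen together in a short unit file; the daemon name and path here are hypothetical:

    ```
    [Unit]
    Description=Hypothetical example daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/example-daemon --foreground
    # restart automatically if it crashes
    Restart=on-failure
    # jail: give the service its own private /tmp
    PrivateTmp=yes
    # stdout is captured in the journal
    StandardOutput=journal

    [Install]
    WantedBy=multi-user.target
    ```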

    It’s certainly a change one needs to get used to, but as mentioned above I don’t think it’s a bad change, and you don’t have to jump to it immediately if you don’t want to.

    Regards,
    Dennis

  • That’s not always true.

    Some configs that were under /etc on el6 must now reside under /usr on el7.

    Take modprobe blacklists for example.

    On el5 and el6 they are in /etc/modprobe.d/

    On el7 they need to be in /usr/lib/modprobe.d/

    If you install modprobe blacklists to the old location under el7 they will not work.

    I’m sure there are other examples, this is just one example I’ve happened to run into.
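    For illustration, a blacklist file of the kind being discussed; the module name is just an example:

    ```
    # /usr/lib/modprobe.d/blacklist-example.conf
    # (per the report above, the /usr/lib location is what el7 expects)
    blacklist pcspkr
    ```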

  • And this is indeed the crux of the matter … systemd is NOT just about booting or boot-up time (combining posts here .. but this is the answer to “why use this on a server where fast booting is not important”).

    And this too .. try it, see if it meets your needs; if it doesn’t, you still have 6.5 years of CentOS-6 support until you have to move.

  • Dennis Jacobfeuerborn wrote:

    And so nothing like, say, fail2ban….

    mark

  • Scott Robbins wrote:


    So he’s guilty of ageism, as well as aggressively NIH (Not Invented Here), and a faddist…. Did he actually have any *good*, persuasive reasons for such changes?

    mark

  • For the record, I’m not uncomfortable with change. I’m uncomfortable with stupid, poorly thought out, monolithic change that ignores half a century of the UNIX philosophy. And creating a daemon that tries to handle everything but the kitchen sink and implementing it in such a way as to make it nearly incomprehensible to me certainly qualifies as that type of change.

    Sysvinit may not be perfect, but it’s UNIX. Systemd is… a lot of things, but more of a Windows-like solution than I’m comfortable with. It’s just dumb. Surely there could have been a better way of accomplishing their goals without creating the equivalent of Cartman’s Trapper Keeper.

    And yeah, I’m kind of an old white guy (is 38 old?). The guy who called that out as a negative is not helping his cause with me. This old white guy was doing Linux administration when some people on this list were pulling the hair of girls they liked and eating bugs.

    (and if that was yesterday, I don’t want to hear about it. :))

    –Russell

  • What’s Windows-ish about it? It’s all text files; easily available to look at. And Solaris with SMF went in this direction many years ago.

    Tony Schreiner

  • Generally when people get personal I figure I must have hit a nerve.

    I didn’t say it was Windows-like. I said it was more Windows-like than I was comfortable with. Even with multiple daemons, it’s still not very transparent, somewhat incomprehensible, documented poorly while still managing to have voluminous documentation, dumps stuff everywhere, and is just generally annoying.

    Even its sysv compatibility is incomplete. It runs sysv scripts, but in such a way as to break any but the simplest. I’ve run into situations where I’ve actually had to make a systemd unit because it broke the script, and I couldn’t fix it. The script was fine, ran perfectly if you just ran it, and systemd did… *something*… to it. I still haven’t figured out what. And debugging is an absolute pain.

    And that’s all I’m saying in response to you. Keep this up and my killfile will have one more entry.

    –Russell

  • We don’t care about facts. We just believe. So if you don’t believe in systemd and Poettering, go away. We make religion, not technology.

    Best Regards Oli

  • Oliver Schad wrote:

    Right. Can we get him to stop Peottering around in our gardens, and go play in one that does not affect so many people negatively?

    mark

  • Systemd is emacs for booting without extension capabilities – so at least with no clue.

    The whole idea is to put stuff into a DSL that can’t be formulated in a non-Turing-complete language. To build something which has nice shortcuts for many things: fine. But that has nothing to do with systemd.

    What you could build is a event system with nice hooks to place your code. And you could use that for many things, when it’s damn simple and fast.

    But systemd is anything but stupid-simple. It’s only stupid.

    Best Regards Oli

  • I haven’t looked closely at firewalld yet, but in practice it should probably allow making fail2ban functionality more robust, and fail2ban-like functionality simpler to implement. Especially as I distinctly remember complaints about problems with fail2ban on the Fedora list.
    (Granted, I have had very little time lately to read any mailing lists.)

    -vpk

  • There are many use cases involving servers where such a capability would be highly desirable. Most are cloud-oriented, where you want to spin up an instance rapidly (to deal with increased load, perhaps) and then spin it down, and having dynamically loaded /etc and /var content allows this in a smooth manner. Static servers have their uses, of course, but at least in my data center I find actual server load to be very dynamic, but power load to be rather static; why *shouldn’t* the power used be proportional to the work load?

    The real promise of ‘cloud’ technology for us is in-house servers that can spin up only when needed, saving power and cooling costs in the process. Stateless is not the only way to go, of course, and nowhere in the blog post to which you link is ‘never again honor anything in /etc and /var’ to be found; rather, much like /etc serves as a fall-back for many programs which look first in a dot-file in ~, the content in /usr serves as an OS-default fallback to the per-system (or per-instance) configuration and state in /etc and /var.

    It is a different way of looking at things, for sure, but I can definitely see a server use-case for this sort of thing, especially since there is significant budget pressure to reduce power costs.

    And dynamic spinup of servers to handle increased load is a use case for systemd’s rapid bootup. They go hand-in-hand.

    The Unix philosophy unfortunately sometimes misses the forest for all of the trees. Sometimes tools need to actually be designed to work together, and sometimes a Swiss Army Knife is the right thing to have.

    (And I’m an old Unix hack, too, having used Unix of several flavors since before Linux was even a gleam in Linus’ eyes).

  • Sorry, but I’d recommend that anyone who thinks shell syntax is
    ‘messy’ just stay away from unix-like systems instead of destroying the best parts of them. There is a huge advantage of consistent behavior whether some command is executed interactively on the command line or started automatically by some other means.

    What part of i/o redirection does the shell not handle well for you?

    ‘Immediately’ has different meanings to different people. I’d rather see such things discussed in terms of cost of re-implementations. How much is this going to cost a typical company _just_ to keep their existing programs working the same way over the next decade (which is a relatively short time in terms of business-process changes)? Even if the changes themselves are minor, you have to cover the cost of paying some number of people for that ‘get used to the syntax’ step. Personally I think Red Hat did everyone a disservice by splitting the development side off to fedora and divorcing it from the enterprise users that like the consistency.

  • This is an unfortunate problem in the community today: anyone who disagrees with the status quo is “just an antique”, which is insulting to say the least. It doesn’t matter what our experience is; we’re just “causing trouble” because we “don’t want change”, an excuse that isn’t even remotely true. Eventually, when all these “old guys” leave, all that will be left are the inexperienced kids, and that’s when the real problems will begin to surface. There are a few good reasons to adopt systemd, but the bad outweigh the good in my opinion. Then there’s the problem of giving children the keys to the kingdom (
    http://lists.freedesktop.org/archives/systemd-devel/2014-May/019664.html)
    as they run off the old guard so they can have their toys.

    Personally, we’ve started evaluating migrating from CentOS 6 to FreeBSD
    rather than CentOS 7.

  • Don’t know about your servers, but ours take much, much longer for their boot-time memory and hardware tests and initialization than anything the old style sysvinit scripts do.

  • The people promoting change most likely do not have a large installed base of their own complex programming to maintain or any staff to retrain.

    My opinion is that if a new system is really better, then it should be capable of handling everything the previous standard did transparently. If it can’t, then it’s not really better. It is just different.

  • Well said. Why are the old proven ways somehow so deficient that we absolutely must replace them with something else, no matter how badly thought out. (yes I’m an old fart by some folks figuring, I actually prefer the command line and started with punch cards and paper tape).

    Change isn’t bad… but change for change’s sake is stupid and that’s what this looks like. New is not always better. Based on my observations over the years, new is rarely better! The more rapid the change, the more radical the change, the more likely the change won’t stand the test of time, or rational thinking. Do the analysis, really do it without a bias toward a specific answer and sometimes, yes, sometimes the answer is leave things alone, they work fine the way they are. Just because something is old doesn’t mean it needs to be replaced. Of course, sometimes the answer is to make changes, but keep them small, make them incrementally and give them time to prove themselves before rushing headlong towards the next thing.

    Sadly, poorly thought out change seems to be the trend, and not a surprise given the number of folks with Windows backgrounds making their way into the Linux world. We are definitely losing touch with the UNIX philosophy, and that was what made it a great operating system for doing real work.

    I started my work life as a maintainer and while I’ve done my fair share of development, I’ve always thought like a maintainer. Change almost always breaks things, so do it carefully and slowly and with thought not just about the local impact, but the global impact. Please!

  • Les, this is the wrong question to ask. The question I ask is ‘What will be my return on investment be, in potentially lower costs, to run my programs in a different way?’ If there is no ROI, or a really long ROI, well, I still have C6 to run until 2020 while I invest the time in determining if a new way is better or not. Fact is that all of the major Linux distributions are going this way; do you really think all of them would change if this change were stupid?

    Even the Unix philosophy was new at one point. Just because it works doesn’t mean it’s the best that can be found.

    Consistency is not the only goal. Efficiency should trump consistency, and I for one like being able to see where the direction lies well in advance of EL adopting a feature blind. Or don’t you remember how Red Hat Linux development used to be before Fedora and the openness of that process?

    (Leaving part of my .sig in for a change, as I’m wearing the CIO hat in this post.)

  • Windows used to have a single win.ini file with all configs in it. Then it replaced that with a binary equivalent you access and manipulate using programs or whatnot. I think the point is that that is where systemd is headed, and there are many good arguments for that. But there are also compelling reasons against it.

    Also, you as a developer/manager have to interact with this 2600-lb gorilla (he gained some weight throughout the years) that is the OS, using libraries and programs that are poorly documented and rather buggy. Or hack your way in. All that while hoping the ape will not start flinging poop your way for absolutely no reason (bugs, security flaws, etc.).

    And so did Apple in a certain way (their plists and launchd and so on). Was that an improvement? I honestly cannot answer that.

  • I am also struggling with this and the HIPL code (on F20). If you just start the services, it takes about 5 min to complete. If you just run the programs and tell them to drop into the background, it is a handful of seconds. Strip the comments out of the script and it starts right up with systemctl. Huh? What is going on? I was told that systemctl does seem to try to make ‘sense’ out of comments…

  • Wow. This was my bad in assuming everyone knows who Adam is–a very good natured and helpful person. He was formerly with Mandriva, I think, and came to Fedora where he has made enormous strides in seeing things from the user standpoint, getting bugs filed, making Fedora a much better distribution (and better documented), than it had been. My statement’s implication, though of course, tongue in cheek, was that Even Adam thinks….

    So, for any friends or fans of Adam on this list, I apologize if that came out as a putdown or anything more than a jesting complaint.

  • Physical servers can be told to skip certain parts of their POST, especially the memory test. Memory tests are redundant with ECC. (I
    know; I have an older SuperMicro server here that passes memory testing in POST but throws nearly continuous ECC errors in operation; it does operate, though). If it fails during spinup, flag the failure while spinning up another server.

    Virtual servers have no need of POST (they also don’t save as much power; although dynamic load balancing can do some predictive heuristics and spin up host hypervisors as needed and do live migration of server processes dynamically).

    To detect failures early, spin up every server in a rotating sequence with a testing instance, and skip POST entirely.

    If you have to, spin up the server in a stateless mode and put it to sleep. Then wake it up with dynamic state.

    There are a lot of possibilities here, if you’re willing to think outside the 1970s timesharing-minicomputer box that gave rise to the historical Unix philosophy. And this has nothing to do with Windows; I have been a primarily-Linux user since 1997.

    Long POSTs need to go away, with better fault tolerance after spinup being far more desirable, much like the promise of the old as dirt Tandem NonStop system. (I say the ‘promise’ rather than the
    ‘implementation’ for a reason…..).

  • I should also add that Adam’s comment was very tongue-in-cheek and aimed at people who took it that way. Again, I really apologize for taking that out of context and expecting everyone to somehow magically grasp the context especially as it seems it irked a few people. He was saying it to me and a few others, who he knows, and all fit the description, and it was certainly meant as a joke.

  • It is fun to attend a standards meeting (like IETF and IEEE 802) when some young guys and gals, along with their profs, make a presentation on how things should REALLY be done. Then some grey-headed guy or gal gently leads the Q&A into a critical edge case that completely breaks the proposal. At least in my area, we have the institutional memory of why we do things as we do. Sometimes things evolve to where we CAN do it better now, or even have to do it better. But us old dogs still have the wherewithal to keep it all straight.

    But also we are demanding more of our systems. HSMs make dealing with virtualization a must and this changes a lot of old assumptions.
    Remember what assume can spell.

  • Yes. Look at Microsoft and Windows 8 and a similar attitude of “get over it, and just buy it”. I’m not surprised that the head developer was terminated days after its release. Lemmings think jumping off a cliff is a good idea, too. Several designers thinking it’s a good idea and implementing it across the board does NOT mean it’s a good idea to the end user.

    The Unix philosophy is not new, but blossomed after Windows put a stranglehold on everything else.

    I am darn sick and tired about hearing of “efficiency”. Efficiency does not 100% translate to effective productivity. Furthermore, user satisfaction is not counted into efficiency. I have heard people complain about air conditioners with extremely high efficiencies. The problem is that they don’t put out much cold air. If the product is ineffective, very hard to work with, but efficient…I’d far rather use something much less cumbersome and effective but being less efficient. That translates to higher productivity and satisfaction, which you really want. Effectiveness and satisfaction should go hand in hand with efficiency, every time.

    People will vote with their feet on this. And, that “old white men” are complaining about this is ageist, racist, and demeaning to EVERYONE. I am really disappointed in Red Hat saying this, far more than the whole systemd concerns. As others have stated, change for the sake of change isn’t good. Slapping across the face your primary customer base with deep insults isn’t good, even if the customers are horribly wrong, which is quite the opposite here. And trying to splash perfume on a steaming dogpile is absurd.

    Don’t worry, if this attitude continues with Red Hat, I won’t let my rear hit the exit on the way out. And I’ll do the best sort of advertising for this that I can: tell others the nonsense that is occurring, and to stay far away from it…

    Gilbert

    *******************************************************************************
    Gilbert Sebenste ********
    (My opinions only!) ******
    *******************************************************************************

  • Thanks for clarifying that, Scott. I retract my comments in my previous post suggesting that if he were serious, it was racist and ageist.

    Gilbert


  • I have also had good interaction with Adam. I never found him getting down on my not being a real admin.

  • You aren’t old.

    (Sent from iPhone, so please accept my apologies in advance for any spelling or grammatical errors.)

  • Apple is guilty of that too.

    I think you are making the case for maintainability. Efficiency is, in a certain way, what brought the Y2K bug. I’ll take maintainability over efficiency any day if I can (design constraints permitting), even if I were writing a game.

  • Les Mikesell wrote:
    Ours would, but we disabled the POST-time memory checks on most. When you’ve got upwards of 64GB, it starts to take a while… and for 256G… let’s not go there, unless you’ve got 15 min.

    mark

  • Lamar Owen wrote:

    No, it’s *not* the wrong question. Are you going to figure ROI INCLUDING all the a) reworking, b) retraining (oh, that’s right, almost *no* one pays for training, other than on-the-job or bring-your-own-lunch brown bags) in the costs? And how ’bout how long it’s going to take to recoup those up-front costs (or were you planning on hiring all new people anyway?), and will there be *another* change coming along in five years…?

    May I point to upstart, and that it lasted a few years, before folks decided it was a Bad Idea? How many years of systemd do we have to compare and contrast?

    YES!!!!!!!!! Let fedora duke it out with ubuntu; give us a *work* o/s.

    Wrong. I *STRONGLY* disagree. Efficiency should be a goal of consistency, and consistency should not be highly inefficient. However, as I’ve mentioned before, when I go home after a hard day administering a hundred-plus-many servers and workstations to my own workstation at home, I do *NOT* want to debug my o/s. (And I’m putting off trying to upgrade my router’s DD-WRT, in the hope that I’ll find something less buggy with USB printer support).

    mark

  • But the answer is still the same. It’s sort of the same as asking that about getting a shiny new car with a different door size that won’t carry your old stuff without changes and then still won’t do it any better. Our services take all the hardware can do and a lot of startup initialization on their own. Saving a fraction of a second of system time starting them is never going to be a good tradeoff for needing additional engineer training time on how to port them between two different versions of the same OS.

    So a deferred cost doesn’t matter to you? You aren’t young enough to still think that 6 years is a long time away, are you?

    Yes, Linux distributions do a lot of things I consider stupid. Take the difficulty of maintaining real video drivers as an example.

    Re-using things that work may not be best, but if everyone is continually forced to re-implement them, they will never get a chance to do what is best. In terms of your ROI question, you should be asking if that is the best use of your time.

    But that’s why we are here using an ‘enterprise’ release, not rebuilding gentoo every day.

    Efficiency comes from following standards so components are reusable and can be layered on top of each other. Then you can focus on making the least efficient part better and spend your time where it will make a difference. Adding options to increase efficiency is great – as long as you don’t break backwards compatibility.

    Yes, I remember it worked fantastically well up through at least RH7 –
    which was pretty much compatible with CentOS3. That was back when people actually using the systems contributed their fixes directly. I had a couple of 4+ year uptime runs on a system with RH7 + updates –
    and only shut it down to move it once.

  • Scott Robbins wrote:
    reasons

    The only idea of him I have is from this thread, and I have formed an opinion headed *way* down in a nosedive from “not very high”. And I know how other feminists feel when someone makes a sexist comment that they think is a throwaway line… but question where the line came from in their subconscious, and how it affects their attitude to what they’re doing.

    And to expand on my agreement with Les, sounds like he only talks to other fedora folks… and doesn’t get out of his own little circle of admirers.

    mark

  • When you do that to a certain developer you get banned from a certain G+
    feed for make-believe “personal attacks”, because changing the conversation is much simpler than acknowledging a design flaw.

    Kids these days!

  • Lamar Owen wrote:

    ROTFLMAO! And can you explain the difference between “cloud” and
    “time-sharing on a mainframe”?

    mark

  • They likely don’t; if they did, they would gain the experience to know better. Fortunately, we have CentOS 6, which still has a lot of life left in it.

    I agree, completely. Who would replace the hood of a car with half a hood?
    It might have really awesome flames painted on it, but at the end of the day it’s still half a hood.

  • I don’t think that is generally true. I’ve seen several IBM systems disable memory during POST and come up running with a smaller amount.

    Our services that need scaling need all of the hardware capability and aren’t virtualized. That might change someday…

    Our servers tend to just run till they die. If we didn’t need them we wouldn’t have bought them in the first place. I suppose there are businesses with different processes that come and go, but I’m not sure that is desirable.

    If you need load balancing anyway you just run enough spares to cover the failures.

  • On OpenSUSE, my trick for “disabling” systemd in an init script is to rename systemctl at the start and put it back at the end. There is an included script (functions) that looks for it and, if it is not found, allows good ol’ SysV to do its thing. I expect it’s done the same way for RHEL/CentOS.

    Just sayin’
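
    A minimal sketch of the rename trick described above, exercised against a scratch file rather than the real /usr/bin/systemctl (which would require root); the path and the “legacy script runs here” step are placeholders.

```shell
# Hide a scratch stand-in for systemctl, run the legacy script, put it back.
# Real use would target /usr/bin/systemctl and need root privileges.
SYSTEMCTL="$(mktemp -d)/systemctl"
touch "$SYSTEMCTL"

# With systemctl "gone", the init script's functions include cannot find
# it and falls back to plain SysV handling.
mv "$SYSTEMCTL" "$SYSTEMCTL.hidden"

# ... the legacy SysV init script would run here ...

# Restore it once the script has finished.
mv "$SYSTEMCTL.hidden" "$SYSTEMCTL"
```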

  • Unfortunately, the way systemd has intertwined itself into so much more than just system startup, it could be around for a long time.

  • Yes I can. I did/do both. There are many things I do not like about
    ‘cloud’ and the security ones are going to be tough nuts to work out.
    But it is the way computing is dividing up. I like the isolation of ownership and risks that ‘cloud’ enables.

    I did time-sharing on a MARK-IV. I don’t miss it.

  • 75 baud on a TTY (clank, clank, clank, ding, thud as the printer head returned to the beginning of the line) and an amazingly fast speed of
    300 baud on the up-market Terminet (? spelling).

    Perhaps the speeds were 300 and 1,200 baud? It was a long time ago.

    Those were the days.

  • You might want to report this as a bug. The modprobe and modprobe.d man pages explicitly reference “/etc/modprobe.d/*.conf” for the configuration.

    Regards,
    Dennis

  • Mainframes are housed in vaults with powerful airconditioners; operators walk around with 132-column fanfold printouts. Operators may have goatees.

    Clouds are housed in vaults with powerful airconditioners; operators walk around with little gadgets. Operators may have goatees.

    Dave

  • 110 and 300 Bps. (model 35 and 33)

    75 and 50 Baud was 5 level code (model 15, 19, 28 and 32)

    There was never a 1200 Bps gearset

    I’m an ex teletype mechanic…. Springs, levers, cams and electromagnets. That was my door to telecom and ultimately to computers.

  • Always Learning wrote:

    I was dialing in from work to upload homework at 300 baud, around ’84. When I got my first modem for my first real PC (we’ll skip the CoCo), *I*
    had 1200 baud. *nyah*

    But between 1978, when I went back to college, and ’81 or so, a year into my first programming job, I was on punch cards; then it was the shared three terminals in the hall. Originally, a 370-168 timeshare, then, after we won the lottery to get it nine months before most everyone, we had our 4300. And we had *real* line printers…..

    But I say there *ain’t* no differmenints. You’se is gots your share, we had VM regions…..

  • You were late to 1200 baud. I got one of the first Anderson/Jacobson acoustic boxes (before they invented the acoustic coupler). First at 300 baud, then 1200 baud on a DECwriter II. It was ’83. Then a Bell head said I would never get 300 baud working on a non-conditioned line…

    But again that is the point. We have moved on from there. We got the memories, so we know the edge cases. Like dealing with X.75 gateways…

  • this is insane. traditionally in Unix-like systems, /usr is supposed to be able to be read-only and shareable between multiple systems, for instance in NFS boot scenarios. /var is specifically for host-specific complex configuration and status stuff like /var/log, /var/state, /var/run and so forth.

  • actual Teletype KSR/ASR 33 kind of machines were 110 baud (10 cps, as they used 2 stop bits)

  • Again, let me clarify, it was a tongue-in-cheek comment made among friends, and certainly not a RedHat official quote.

  • I personally used a ‘portable’ 300-baud TI Silent 700 which printed on thermal paper and had an acoustic coupler on the side of it for those old phone handsets with the two circular cups. We dialed in and waited with great anticipation to see the next word coming from the remote machine. You also quickly learned what Ctrl-R was for, because the delete key didn’t work very well once the typed character was printed on thermal paper. Yes, 300 and 1200 baud were slow and taught us something about patience.

    Bruce Ferrell wrote on Tuesday, July 8, 2014:

    110 and 300 Bps. (model 35 and 33)

    75 and 50 Baud was 5 level code (model 15, 19, 28 and 32)

    There was never a 1200 Bps gearset

    I’m an ex teletype mechanic…. Springs, levers, cams and electromagnets. That was my door to telecom and ultimately to computers.

  • And more to the point, /usr isn’t supposed to be needed until you are past the point of mounting all filesystems, so you can boot from something tiny. Doesn’t modprobe need its files earlier than that?

  • But aside from insulting anyone, you should think of that reference realistically as meaning the people who have established systems working well enough to have built businesses worth maintaining. Do you really want to rock that boat in favor of youngsters that don’t know how to make it work?

  • Yeah, tongue in cheek. Uh huh. Sure.

    “Disadvantages: people who dislike change are going to hate this one. Note to people who dislike change: you could still remove NetworkManager post-install if you really hate it. Having it in core doesn’t preclude that. You could also still exclude it with a kickstart. Makes the minimal install somewhat larger.” – Adam Williamson (
    https://bugzilla.redhat.com/show_bug.cgi?idi3602#c31)

  • Mauricio Tavares wrote:

    Right. Illiterate. And they’re not cool enough to wear berets, like the goatee-wearers, back in the day….

    mark

  • in college (early 1970s) my roommate had a GE Terminet 1200 which was a
    120cps printer with plain paper and a ribbon, and an integral acoustic coupler. this was lightyears–er–12X faster than the defacto Teletype stuff most folks had. But, until circa 1980, most of my actual work was with punchcards and/or (later) direct connect VDTs at 9600 baud. I do still have a USR Courier 2400E somewhere in storage, which was a 2400
    baud modem that had data compression and could send plain ascii at about
    9600 bps, along with a couple Racal Vadic 9600-ish modems.


    john r pierce 37N 122W
    somewhere on the middle of the left coast

  • John R Pierce wrote:

    I think I still have my 56k modem. Unfortunately, I also think it’s ISA….

    mark

  • Jonathan Billings wrote:

    Great. And it’s from freedesktop… as opposed to, say, a system user, and which implies to me that it’s for runlevel 5 GUI-only users….

    mark

  • And more to the point, /usr isn’t supposed to be needed until you are past the point of mounting all filesystems, so you can boot from something tiny. Doesn’t modprobe need its files earlier than that?

    perhaps you should read links before you make assumptions. and doubly so before you start following up.

  • This seems to be a common misconception.

    To quote http://0pointer.de/blog/projects/the-biggest-myths.html:

    freedesktop.org hosts a variety of software projects, not all of which are desktop-oriented. Just look at the list here:

    http://www.freedesktop.org/wiki/Software/

    It’s worth noting that much of the software projects hosted there are desktop-oriented, but that doesn’t mean that systemd is only for desktops. It just happens to be the site where the systemd documentation resides, and is a great place to peruse if you are interested in learning more about systemd.

  • Ummm, ‘addressed’ by pointing out that a whole bunch of the changes fedora has made break things that are expected to work in unix-like systems. I fail to see how that helps with the problem.

  • Unless you are offering to do that for me, for free, on all my systems, having to do it certainly does take something away.

    Generally speaking, if a service is broken to the point that it needs something to automatically restart it I’d rather have it die gracefully and not do surprising things until someone fixes it. But then again, doesn’t mysqld manage to accomplish that in a fully-compatible manner on CentOS6?

  • I hate to say it … but all the bloviating we might want to do, in support of or in opposition to systemd, does not matter with respect to CentOS 7.

    RHEL 7 has it, so CentOS 7 has it. Use CentOS 7 or don’t … your choice. If you want to replace systemd and you can figure out how .. do it.

    If it works and you want to get into a SIG, great then start one.

    If you want to discuss the mechanisms for removing systemd and collaborate on doing it via patches and changes to some package(s) in CentOS 7 .. great. Fork the packages from git.CentOS.org and go to github, start coding with your friends .. you can use the CentOS-devel mailing list to discuss the changes.

    But failing that, let’s try to close down the thread a bit unless there is really something constructive to add.

  • I’m not sure if I’m reading the same page as you are. I could sum up the response to your objection with two points:

    1. systemd works fine with /usr on a separate file system that is not pre-mounted at boot.

    2. Modern linux distributions currently don’t work well when /usr is not pre-mounted at boot, and this has been the case for a while.

    Fedora went through a process to try to clean up the mess that Linux has fallen into, by identifying all the executables in /bin, /sbin,
    /etc, /var (etc.) that aren’t needed to boot the system, and migrating them into /usr.

    This migration was put into place for reasons unrelated to systemd. Perhaps you have a valid complaint about this change, but don’t lump it together with your issue with systemd.

  • This work is all about being able to boot a system with just a read-only /usr. Any foo you need to get to a complex filesystems, like NFS or encrypted software RAID needs to be in the initial ramdisk which the boot loader can access before the kernel loads and which tools like Dracut build based on what

  • No, it means our servers run for years,

    We design to handle a whole data center failure in only the time it takes for a new client connection. With/without systemd, nobody is going to wait for a new server to spin up.

  • I expect our systems to still have services running past 2020.

    Then I hope I’m never a customer of that service that doesn’t know/care why it is failing. I consider it a much better approach to let you load balancing shift the connections to predictably working servers.

    Seems awkward, compared to openvpn.

    You don’t have to restart openvpn to have it reconnect itself after network outages.

  • Errr, I thought you only needed stuff on the ramdisk to access the root partition. Can’t you mount /usr from a different disk controller or NFS from modules loaded from /lib/modules? Or was that already broken when user’s home directories were kicked into /home? And if not, how did things get in that mess?


    Les Mikesell
    lesmikesell@gmail.com

  • 110 baud definitely rings a bell. I saw my first Teletype in 1967/1968
    at Scotland’s National Engineering Laboratory (NEL). Chugging away, it seemed to be an exciting example of “real” computing – and it wasn’t a bit like punched cards.

  • Always Learning wrote:

    ‘Ey! What’cho got ‘gainst punch cards?

    mark “except the card punch in the lab that punched *other* than
    what it printed, that once….”

  • Never used the Powers-Samas 36?-column cards, just the plain boring 80s.

    Was an excellent hand puncher. Could easily read cards by holding them up to the light and could fill-in the wrongly punched hole to avoid having to re-punch the entire card.

    H-1250, H-120, H-L61, H-L66, H-L64, DPS 8 etc. + BBC Micro B & B+

  • Always Learning wrote:

    Nope, just the normal 80-column punches. All IBM, y’know.

    Never heard of a hand puncher. All I ever used or saw were desk-sized machines.

    To catch that error, it took the lab assistant, the *only* time in my school career I needed one, to read the actual holes.

    Nasty.

    mark

  • partially because I am the tester, not the developer. I found the problem, though.

    So I will pass this on to the developers.

  • Ah, we will be at CentOS 11 by then :)

    Systemd will be a thing of the past and we will be dealing with systemq.

  • No. It was paper tape. And on a PDP8 I had access to in ’67, it had a high speed paper tape reader to load the ‘OS’ and FORTRAN system. In 4K
    of memory.

    The invention of the 8″ diskette as the boot media for the 360 was a serious step forward. Also right around that time.

  • I am not sure I understand the end result of the bug now. From Fedora 17, modprobe.d moved from /etc to /var/lib? If so, why not just use a symlink from /etc to /var/lib if someone needs it there for any reason whatsoever?

    Eliezer
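
    A sketch of the compatibility-symlink idea above, demonstrated in a scratch directory; the real change would involve /etc/modprobe.d, the distribution’s actual new location, and root privileges (the paths and the dummy option here are illustrative only).

```shell
# Stand-in directories for the old and new modprobe.d locations.
root=$(mktemp -d)
mkdir -p "$root/new/modprobe.d"
echo "options dummy numdummies=2" > "$root/new/modprobe.d/dummy.conf"

# Point the old path at the new one, so anything still reading the
# old path keeps working unchanged.
ln -s "$root/new/modprobe.d" "$root/old-modprobe.d"

# Files are reachable through the old path via the symlink.
cat "$root/old-modprobe.d/dummy.conf"
```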

  • Then read up on Grace Hopper and how she ‘discovered’ an unknown opcode via a mispunch she glued in with nail polish. They used hand punchers a lot on her programming team.

  • The machines could be programmed by punching the programme onto another same-sized punch card which fitted around a small circular drum. Girls punched and verified my coding sheets and managed to insert more errors into my coding than I had written. Never understood how they managed to verify their own punching errors.

  • Not entirely unknown because the opcode must have been known to the technicians or computer designers but not actually documented for the programmers.

    40 used to be NOP (no op) on Honeywells H-200 series.

  • And the text contains several weaknesses. I’ve personally checked the statement about the 23 dependencies from udev rules on /usr in Fedora 15. Maybe my Fedora 15 was something strange, but there was no such dependency inside.

    And for me it’s OK to blame the small environment in / for some special udev rules that might exist somewhere. But a solution would be a staged udev, which would be simple to solve.

    A small problem, a small solution.

    The other “arguments” have the same quality. To have a /usr on a central NFS server doesn’t seem to be a case for such idiots.

    The binary size of systemd is now more than 1.1 MB – for a central service which will kill your system if it’s not rock solid. For a comparison: an apache web server is less than 500 kB.

    Best Regards Oli

  • 370, not 360. the 360’s had microcode in “ROM”, one of the innovations of the System/370 was soft loaded microcode.

  • Yeah. I thought about it a bit and did not feel right with the timeline. It would have to have been just before the 370. Makes more sense it was the 370. I ran with the BUNCH back then. IBM stuff was to be generally ignored…

  • FWIW I accept your apology. I also would be remiss if I didn’t point out that, if this is the case, you’ve done him a huge, huge disservice by spreading, stripped of its context, an insult towards a class of people that he probably did actually say.

    Perhaps now he will be more careful with what he says – even amongst friends. Sometimes things said in confidence can come back to bite you.

    What he said, even if amongst friends, btw, *is* wrong, and stupid, and should not have been said. But I’m guilty of the same thing, so beam, mote, and all that.

    –Russell

  • It’ll be named “kitchensink”, there will be only one process in the process table, and every bit of computation will be handled using kernel threads. All services will have been moved into the kernel for “speed”, and exceptions will be handled by everything being virtualized – when the kernel crashes, the guest will just kill itself and respawn.

    I really wish I was joking or being facetious. I’m not. This is pretty much the logical end result of the abomination that’s systemd, and the appallingly stupid idea of putting dbus into the kernel. There’s a reason for privilege and process separation, and people seem to have forgotten it.

    More facetiously, Poettering will have rejoined a BSD project after effectively having killed off Linux for any production use, and laughing all the way to the bank. :)

    –Russell

  • That is a fundamental worry. Everything, except the kernel, dependent on Poettering’s (employed by Red Hat) windows-style gigantic systemd. Nothing can run without systemd’s prior consent. One tiny bug in systemd and everything crashes. Is that RH’s new “resilience” strategy?

    Have I really got this wrong?

    Remember the old fashioned sayings?

    *** Keep it simple stupid (KISS)

    *** If it ain’t broke, don’t fix it (= If it is not broken, do not attempt to repair it)

    M$-style script kiddies are improving Linux?

    Poettering-kraft ? Nein danke.

  • I agree, and feel badly about it. In fairness to myself, it was made in a public place, but I should have thought about how it would affect people before posting it, and I do regret repeating it, especially since I have the highest esteem for Adam.

  • And IBM assembler

    (Sent from iPhone, so please accept my apologies in advance for any spelling or grammatical errors.)

  • I’ve been not so subtly hinting that I think this kind of thing could and will destroy Linux – at least in any context other than rolling your own distribution and hoping for the best. (At least until they start putting dbus into the kernel – and I cannot say strongly enough how utterly DUMB that is.)

    I don’t mean that it will make it go away and people will stop using it and all that. It’ll be going strong in one form or another for decades. What I mean is that people who don’t know what they’re doing will be the only ones to actually be using it, and those who know better will have long since run off for greener pastures. And then the large distros will start catering to those inexperienced people, and mark my words, we’re going to end up with a catastrophe sooner rather than later. Someone’s going to put the wrong thing into the kernel, open up a security hole, and heartbleed all over again. And no one will learn the lesson.

    We were worried years ago about Microsoft embracing and extending. That turned out to be the wrong worry. Looks like the right worry was people making stupid decisions and killing it from the inside. Congratulations, RedHat and Poettering – you did (or are doing) what Microsoft couldn’t.

    I’ve been toying with the idea of rolling a distribution similar to OpenBSD –
    with a focus on security and doing the right thing – no matter what other stupid crap other people are doing. The problem is that rolling a distro is a lot of work.

    But… it may need to be done. The inmates are running the asylum.

    –Russell

  • > And IBM assembler
    >
    > (Sent from iPhone, so please accept my apologies in advance for any
    > spelling or grammatical errors.)

    Anyone want to borrow my IBM assembly text from college? I will say, though, that if I could have gotten the republication rights, I’d have put all manufacturers of sleeping pills out of business….

    mark

  • Sure.

    “Cloud” is much more dynamic, for better or for worse, than mainframes in ye olde days. Cloud takes advantage of smart clients, and, well, is a bit of a nebulous term covering many things traditional servers do, but with more of an emphasis on dynamic load balancing. Ideally, if no one is using a server, that server should not be running, as it is wasting power. The challenge is to get servers up with low latency. And when I say ‘servers’ that includes physical iron as well as fully virtualized hardware and more fluid virtual containers that just sort of act like a server.

    It’s all about getting the necessary services to the client processes, regardless of whether the client is smart or dumb.

    An ideal application for cloud-based technology is renderfarms; transparent spinup and spindown of render machines, which often contain very power-hungry GPUs, saves lots of money.

    As much as it is going to rub sysadmins the wrong way (because the very comfortable and flexible SA hat is one I wear often, it definitely rubs me a bit wrong), ideally the admin should spend time on setup and implementation more than operation; the operation of this dynamic spinup and spindown of resources, once set up by a competent admin, should be entirely user-driven and automated.

    Ye Olde Mainframes (not the more modern stuff, which *is* more cloud-oriented) never did this and required monstrous opex for personnel. Cloud is about reducing opex, pure and simple. Setup can be capex, and thus a separate budget (at least here, once again wearing the way too stiff CIO hat).

    Robert mentions security, and that is a very very true statement, and is where it is incumbent upon the admin who sets it up to be competent.

  • True enough; but this shouldn’t take five minutes on a server with multiple GB/s memory bandwidth. My Dell 6950’s take a full five minutes to POST, and that’s ridiculous. There’s eight cores; each core has enough bandwidth to its local RAM (NUMA, of course) where it should be able to sustain 2GB/s zeroing without a lot of trouble; that’s a rate of
    16GB/s aggregate, and my 32GB of RAM should be zeroed in 2 seconds or so. Not five minutes.

    It’s still not as bad as our Sun Enterprise 6500 with 18GB, though, which takes about a minute per GB, which is also ridiculous (it’s also NUMA, and the Sun firmware does start up each CPU to test its own local RAM blocks).
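
    The arithmetic in the post above can be checked with a quick shell calculation, using the figures the post itself gives (eight cores, an assumed ~2 GB/s of zeroing bandwidth per core, 32 GB of RAM):

```shell
cores=8
gb_per_sec_each=2       # assumed per-core zeroing rate, from the post
ram_gb=32

# Aggregate zeroing bandwidth across all NUMA nodes.
aggregate=$((cores * gb_per_sec_each))

# Time to zero all of RAM at that rate, versus the observed ~5-minute POST.
zero_seconds=$((ram_gb / aggregate))

echo "aggregate=${aggregate} GB/s, zeroing time=${zero_seconds} s"
```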

  • Yes, and I have a few Dells that do that as well. Unfortunately most OS’s aren’t ‘hotplug/unplug’ for RAM, which would alleviate the need to tag it out during POST. But perhaps some of today’s and yesterday’s hardware just isn’t up to the task of reliable rapid power on. So perhaps I should have written ‘Memory tests should be redundant with ECC.’

    Our load graphs here are very spurty, with the spurts going very high during certain image reduction processes.

    It is to the point where I could probably save money by putting a few of the more power-hungry systems that have spurty loads on a timed sleep basis, with WoL bringing them back up prior to the next day’s batch. But that’s an ad hoc solution, and I really don’t like ad hoc solutions when infrastructure ones are available and better tested.

    And pay the power bill for them

  • i find the biggest part of server POST is all the storage and network adapter bios’s need to get in there, scan the storage busses, enumerate raids, initialize intel boot agents, and so forth.

  • Amortized capex matters to me. I may not have the capex budget this year to do the development, but I do have opex monies to research potential development, and opex monies to do grant writing that can fund the development capex. I’m old enough to remember and to have read an original paper copy of the Misosys Quarterly with a column called ‘Les Information.’

    6 years is a short time, especially for grant funding cycles. But it is enough time for me to have researched, hopefully properly, the means by which to implement hopefully opex-reducing improvements. But these are business decisions, not technical ones.

    My non-development time is opex; my development time is capex (for the purposes of many grants for which we apply). I always ask the question above when figuring ROI. And I got the nickname ‘Mr. Make-Do’ for the very reason that I do tend to heavily reuse ‘ye olde stuffe.’

    I guess you missed the adjective ‘only’ above. Consistency helps reduce opex; reduced opex produces better fiscal efficiency, at least in theory. Each business’s situation will be different, YMMV, of course.

    Everyone who commented thus far on my statement seems to have missed my wearing my CIO hat. I’m talking fiscal efficiency, as in getting the most bang for the buck, especially in terms of opex, which is nearly always a much larger number than capex for us (and, while I know this is not likely to make much technical sense, it is a true statement that $1
    opex is not equal to $1 capex). This is not ‘technical efficiency’
    here, but ‘keep the lights on and the budget in the black’ efficiency.
    If the budget goes red long enough it really doesn’t matter what you do technically. If I need to get a grant to fund $100,000 capex that will save me $10,000 per year opex (and grants almost never fund opex) it is a no-brainer.

    Minor correction:

    RHL 7.2 -> RHAS 2.1. RHL 9 -> RHEL 3.

    I have a client that still has a RHL 5.2 machine running in production.
    It does its job, is not internet-connected and thus security updates are irrelevant, and it will run until it dies. Client has no budget to reengineer properly, and is migrating services one at a time in a pretty slow manner. There’s only two semi-critical services left, and if they just went away the client would go back to a paper system while a newer system is being built. And they’re fully prepared to do that rather than upgrade now.

    I am one of those people who contributed directly; my name can still be found in the changelogs for PostgreSQL 7.4 on CentOS 4 (and I would assume RHEL4, if PostgreSQL is part of the EUS set). I remember the mechanisms, and the gatekeepers, involved, very well. The Fedora way is way more open, with people outside of Red Hat directly managing packages instead of contributing fixes to the ‘official’ Red Hat packager for that package.

  • How is this any different from any other init? Init is the boss, regardless of which flavor of init, full stop.

    SystemV init has many many problems. The worst problem is that it only deals with start and forget and stop and forget, with relatively fragile shell scripts running as root doing the grunt work. A resilient system init should be a bit more hands-on about making sure its children continue to live… (yuck; you can tell I’m a parent (of five)!). Or, in Bill Cosby’s words as Cliff Huxtable to Theo, “I brought you into this world, and I can take you out!” But an init that takes a bit more care to its offspring, making sure they stay alive until such time as they are needed to die (yuck again!) is a vast improvement over ‘start it and forget it.’

    And, of course, CentOS 6 doesn’t use straight SysVInit anyway, but it uses upstart, which lived for quite a while.

    Incidentally, I’m old enough to remember the recursive acronym MUNG and hereby apply that acronym to this thread……

    I’m also familiar with feeping creaturism…..

  • I’ve found that disabling all but the boot device’s BIOS works wonders and makes installs far happier, with the exception of real hardware RAID
    cards. The Linux kernel is quite happy doing any and all fibre-channel enumeration with the HBA’s BIOS turned off. (All my large storage is FC
    and iSCSI SAN). And the ‘Intel boot agent’ only lives long enough to PXE boot if I need that. The 3Ware 9500’s I have typically take a bit longer and require the BIOS, though, but with a small array that’s a few tens of seconds, a minute tops. That’s one advantage of the Linux mdraid……

    But our 6950’s spend five minutes only on the memory test; that’s not counting the Dell PERC boot device enumeration and drive spinup.

    The fastest booting servers I have are our two pfSense firewalls; I’ve trimmed the BIOS setup to the bone and those boxes reboot in a few tens of seconds. (Yeah, I count a firewall as a server, since it’s running on server hardware (Intel 5000X-based quad core dual Xeons with 4GB of RAM each; does wire-speed with > one million pf table entries on a 1Gb/s WAN link) and providing an essential network service to the rest of the hosts on the network).

    But, point taken, since there’s more to a POST than just the memory test.

  • I’m not convinced that being open and receptive to changes from people that aren’t using and appear to not even like the existing, working system is better than having a single community, all running the same system because they already like it, and focusing on improving it while keeping things they like and are currently using. With the latter approach, there was a much better sense of the cost of breaking things that previously worked. With fedora, well, nobody cares –
    they aren’t running large scale production systems on it anyway.

  • So your solution to the problems that happen in complex daemon software is to use even more complex software as a manager for all of them??? Remind me why (a) you think that will be perfect, and (b) why you think an unpredictable daemon should be resurrected to continue its unpredictable behavior.

  • Lamar Owen wrote:

    To me, what takes the most time on POST, far and away, is memory. We had a box – Dell, I *think*, but it might have been a Penguin (Supermicro) – that took close to 20 min before we turned off the memory check (well, yes,
    256GB RAM)….

    What I wish didn’t take so long is the raid-45? driver, which seems to take forever, and always has.

    mark

  • I think you and I remember a different set of lists. I remember lots of griping about changes being forced down throats. Heh, a quick perusal of one of the lists’ archives just a minute ago confirmed my recollection.

    Do you remember the brouhaha over libc5 that ‘just worked’ versus the
    ‘changed for no reason’ glibc2? And don’t get me started on the recollections over the GNOME 1 to 2 upgrade (or fvwm to GNOME, for that matter!), or the various KDE upgrades (and the entire lack of KDE for RHL 5.x due to the odd license for Qt, remember? Mandrake got its start being essentially RHL with KDE…. and of course the ‘stripping’ of KDE
    to ‘cripple it down to the GNOME level’ (otherwise known as the ‘Red Hat Desktop’)) or the various kernel uprevs (2.4 broke my whatzit2000 that nobody else has! You CAN’T upgrade!!!!!). And then there was gcc 2.96.
    (I can feel the tremor in the Source just mentioning that….) And then all the i18n changes for 8.0 (I dealt with that one directly, since the PostgreSQL ANSI C default had to be changed to whatever was now localized…. too bad the Redneck install language has gone away.) And then there was the weed called Kudzu. The bad rep for x.0 releases started somewhere, remember? (Smooge was there, too, and has an extensive page about the differences (this link is from my bookmarks and memory; AFAIK it still works):
    http://www.smoogespace.com/documents/behind_the_names.html ).

    And I’m still waiting for my upgrade to Red Baron. ;-), in case you needed it….

    Sorry Les, but I was there, and I have the e-mails. I guess people prefer being able to just gripe without the chance for real responsibility versus now having a bit of responsibility to help since the ability to actually do something about it is available.

    Not that I necessarily disagree with your observations, by the way. I’m just looking at the brushstrokes of the really big picture and remembering how at the time it seemed like we sometimes were just moving from one kluge to another (if you insist on the alternate spelling
    ‘kludge’ feel free to use it…..). But it was a blast being there and watching this thing called Linux find its wings, no?

    It is still a blast for me, even if I actually do serious work with several versions of Linux. And I’m looking forward to spending some quality time with systemd, of which I know very little, and seeing how I
    can make this new tool, which apparently a lot of really smart people think is a great idea, work for me (and I may find that I despise it;
    time will tell). I kind of feel like I’ve been given a new tool set with tools I’ve never seen, and finding out that a screwdriver and a chisel can actually be separate things! Or finding out what a ‘fence wire’ tool can *really* be used for….. ( see:
    http://www.garrettwade.com/images/330/66A0204.jpg )

    And I have two previous versions of CentOS to fall back on while I learn the new tools; I have both C5 and C6 in production, and have plenty of time in which to do a proper analysis on the best way (‘best way’ of course being subjective; there is no such thing as an entirely objective
    ‘best way’) for me to leverage the new tools. The fact of the matter is that Red Hat would not bet the farm on systemd without substantial buy-in from a large number of people. The further fact the Debian and others have come to the same conclusion speaks volumes, whether any given person thinks it stupid or not. And I don’t have enough data to know whether it’s going to work for me or not; I’m definitely not going to knee-jerk about it, though.

    But the rumors of something ‘killing’ Linux have and will always be exaggerated. Systemd certainly isn’t going to, if gcc 2.96 didn’t. I
    mean, think about it: the first rev out of gcc 2.96 wouldn’t even compile the Linux kernel, IIRC!

  • Sure.

    Nothing is ever perfect, and I didn’t use that word. I think it will be, after some bug-wrangling, an improvement for many use cases, but not all.

    I have had services that would reliably crash under certain reproducible and consistent circumstances but were otherwise relatively harmless. Restarting the process when certain conditions were met was the vendor-documented solution.

    One of those processes was a live audio stream encoder program;
    occasionally the input sound card would hiccup and the encoder would crash. Restarting the encoder process was both harmless and necessary.
    While the solution was eventually found years later (driver problems) in the meantime the process restart was the correct method.

    There are other init packages that do the same thing; it’s a feature that many want.

  • Lamar Owen wrote:


    On the other hand, restarting can be the *wrong* answer for some things. For example, a bunch of our sites use SiteMinder from CA*. I do *not*
    restart httpd; I stop it, and wait half a minute or so to make sure sitenanny has shut down correctly and completely, closed all of its sockets, and released all of its IPC semaphores and shared memory segments, and *then* start it up. Otherwise, no happiness.

    mark

    * And CA appears to have never heard of selinux, and isn’t that great with linux in general….

  • Since I missed most of the story, can you specify that it is ok for this program to restart whenever it crashes, but this one you will stop restarting after N crashes (N>=1) and then report?

  • Automatically restarting services is always a bad idea, especially when they are customer facing services. There is nothing worse than a problem that hides behind an automatic restart, especially while it’s corrupting data since it’s happily starting right back up after dying in the middle of a transaction and potentially creating new transactions that will also terminate when the app crashes again (and it most often will).

    The least important aspect of a service dying is the state of the service itself, the most important is what has happened to the data when it abended. Restarting the service automatically after failure is a recipe for disaster.

  • No, that is exactly my point. Back then the griping by affected active users happened in more or less real time compared to the changes being done. Now fedora goes off on its own merry way for years before its breakage comes back to haunt the people that wanted stability.

    Don’t think people running a bunch of RH5 servers really cared about X
    or desktops at all…

    That one was sort of inevitable. Likewise for grub2 and UEFI…

    Well, that was the equivalent of fedora. You don’t use that in production. The x.2 release mapped pretty well to ‘enterprise” –
    except maybe for 8.x and 9 which never really were very good.

    In these observations you have to take into account just how badly broken the base code was back then. Wade through some old changelogs if you disagree. There were real reasons that things had to change. But by, say, CentOS5 or so we had systems that would run indefinitely with a few security updates now and then. (Actually CentOS3 was pretty solid, but you have to follow the kernel).

    I’m never against adding new options and features. But I am very aware of the cost of not making the new version backwards compatible with anything the old version would have handled. And I’m rarely convinced that someone who doesn’t consider backwards compatibility as a first priority is going to do so later either, so you are likely wasting your time learning to work with today’s version since tomorrow’s will break what you just did.

    Yes, but on the other hand, people still pay large sums of money for other operating systems. And there are some reasons for that.

  • It’s old, but there are still some rumours that FreeBSD will get launchd ported from OS X some day

  • My limited understanding you are actually describing problem which systemd should be answer. It should take care of these things for you. Now you wait minute or two which is wrong way of doing it. Right way would be script that actually checks that nothing of the stuff is left around. It’s same kind of hack solution that restarting dying service is. Sometimes hack solutions are needed and sometimes not.

    In my again limited experience with systemd as running Fedora as “hobby”-server I have gathered that you can decide case by case basis should the process be restarted or not.

    -vpk

  • While I am certainly not an expert with systemd, it appears that you have a more generic mechanism than that in the OnFailure directive in the unit files. So you can basically do any sort of thing on a unit failure, including restart or start a different unit or whatever.
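
    A sketch of what I mean (the service, the notifier unit, and the paths here are hypothetical, and I have not tested this; the directives are the ones the systemd.service and systemd.unit man pages describe):

    ```ini
    # /etc/systemd/system/mydaemon.service -- hypothetical example, sketch only
    [Unit]
    Description=Flaky daemon with a bounded restart policy
    # If the unit ends up in the 'failed' state, start this other
    # (hypothetical) unit -- it could send mail, log, page someone, etc.:
    OnFailure=notify-admin@%n.service

    [Service]
    ExecStart=/usr/local/bin/mydaemon --foreground
    # Restart on crashes, but give up -- which marks the unit 'failed'
    # and so triggers OnFailure -- after 3 failures within 5 minutes:
    Restart=on-failure
    StartLimitInterval=300
    StartLimitBurst=3
    ```

    That would seem to cover the earlier question of restarting this service whenever it crashes, but stopping after N crashes and then reporting.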

  • No, the *correct* answer I cannot begin to push, since I don’t have an account with CA, and so can’t file a bug against *THEIR* commercial $$$
    crap code, and the one time I tried to push the team who actually owns it, they sort of mentioned it to CA (maybe, or maybe they were just lying to me), and it got blown off.

    And no, not when we have this many servers, and my job depends on doing it correctly.

    mark

  • So you actually go trough everytime to make sure that all the things are properly closed and shut down instead of just waiting few minutes? As sometimes something could go wrong and waiting few minutes isn’t enough. I would prefer the software to do it for me. Even more prefer someone else to write it so that I can do all the other things I need to do and not bill customer of busywork of reinventing the wheel. It’s pipedream that broken software is fixed so I am glad of any solutions which help deal with it.

    -vpk

  • Real-time? Since when? The development direction was already pretty much done by the time the public betas were released and the griping began. Even by the time of the private betas the development direction on several of the releases was already pretty much set in stone. I only had a bit of input for PostgreSQL because I was maintaining the upstream RPM package at the time; but I had no pre-beta access to whatever was in the beehive queue at the time.

    With fedora, on the other hand, you already know that what is going in the next version of EL is going to be previewed in Fedora and you are absolutely free to follow the Fedora lists and get involved in the actual process, rather than being fed an already mostly-baked beta every so often. If you don’t follow the Fedora lists and get involved, well, you get what you pay for, I guess. I don’t currently follow the Fedora lists, incidentally, but I do track the features that are being implemented. We already had Upstart, and the move from Upstart to systemd is not that big (at least in my opinion), so it’s not something that got me up in arms. Plain text non-XML configs that can be on a non-executable filesystem and lots of really nice options in the unit configs really change the way you think of system startup. It is a change; I’ve not decided whether I think is a good change or not; most of the big Linux distributions have decided that it is a good change.

    In a quick google, I found what I thought to be a pretty clearly written article (from 2012) on systemd’s strong points from the point of view of a server admin:
    http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdRight?showcomments

    If it can really deliver this, particularly the feature of sysadmin-modified units all being in one place, yeah, looks like a good thing. And there will be plenty of eyes on it. Most of the articles looking at systemd’s weak points (and there are several) aren’t written in nearly as level a fashion as the above. Lots of vitriol to go around, unfortunately.

    You missed my Red Baron comment, didn’t you? I ran Red Hat Linux 4.1 as a desktop, and once Mandrake 5.3 was out I went completely Linux as my primary work and personal desktop. I figured if I was going to run it as a server I needed to ‘dogfood’ things and really rely on it for daily work. And my employer agreed.

    The days StarOffice became OpenOffice.org and then when OO.o 1.0 wound its way into RHL were very good days for this desktop Linux user.

    Many of which are not technical.

  • do *not*

    Trough? I don’t understand. I do a service httpd stop, and then a ps -ef to grep for siteminder still running, and then start it again. If there are problems getting into the website, I shut it down again, then check using ipcs, and ipcrm to manually get rid of their crap, then service httpd start.

    Waiting a few minutes is not appreciated in either a real production or development environment.

    mark

  • So develop your own. I have some scripts around here somewhere that I
    have used in the past to help me make policies for things that tell you to turn off selinux (I think I did it for roundcube).

  • “Go trough everyting” meant all of the checking you just described.
    (Translates more or less directly like that from my native language.) My point is that I would have made script to deal with that. Not necessary automatic. And with systemd it should automatically check for any children left behind httpd automatically. So no need for the script and not really need for the other checking provided things work. Of course if things don’t work then it’s reason to complain.

    So waiting isn’t appreciated except for waiting done for you to log in
    (dragging yourself) to the server and doing the things described above?
    I would prefer it to be automated with message to me what happened. (In this instance it’s not solution to everything)

    -vpk

  • Following the list just makes it more painfully clear that they don’t care about compatibility or breakage of previously working code/assumptions or other people’s work. It’s all about change. I
    tried to use/follow fedora for a while, but gave up when an update between releases pushed a kernel that wouldn’t boot on the fairly mainstream IBM server I was using for testing.

    Backwards compatibility isn’t a big/little thing, it is binary choice yes/no. If you copy stuff over and it doesn’t work, that’s a no, and it is going to cost something to make it work again.

    Did you keep track of the time you spent keeping that working?

    Many aren’t. And many are just a large base of stuff that works and will break if anything underneath changes.

  • c’est la vie.

    The only constant is change. Some changes stick; some don’t. Unix itself was a major change in the early 70’s, and many of the same issues I see being mentioned here are rehashes of the Unix fragmentation grenade back in the 80’s. People reinvent the wheel, and sometimes their wheel is better, and sometimes it isn’t. And always a very vocal group will gripe against the wheel being reinvented at all, regardless of whether it might be better or not.

    This wheel might be better and it might not; we’ll never know if we don’t try it out. Experience and tradition must be tempered with empirical results from actually experimenting with the new. (And do note that ‘tempered’ does not mean to do away with experience and tradition……).

    Have you checked how compatible or not systemd is for the init scripts of the packages about which you care (such as OpenNMS)?

    My employer put a line item on my timesheet for it, so, yes, I kept track of it and got paid for it. Those paper files have long since been tossed, since that was fifteen-plus years ago. My employer was paying me to keep the server up, I had an employer who understood the value of training, and that employer definitely understood the value of dogfooding.

  • Sure, but it is only progress if you stop changing things when they work.

    OpenNMS provides yum repositories. When they add the EL7 repo, I’ll expect it to include something that already works. So that’s not my problem but will likely waste someone else’s time if the existing init script doesn’t drop in. I’ll just need to make things work for the internal programs, some of which are done by developers that would really rather stay on windows.

    Still you must have come up with some bottom line recommendations. Did your employer make all or some large number of staff follow your lead back then on desktop versions/updates after seeing what it costs?
    Personally, I gave up on the hardware aspect of a linux desktop as soon as I saw freenx working from windows/macs where vendor-optimized video drivers come with the distribution. And then having access to both Linux and native desktop programs I’ve tended to ignore the problems with linux desktop apps.

  • We use a large number of Linux desktops extensively and have since 5.2. For our applications and workflows, this was quicker and more feature-rich. We use the same platform in an HPC scenario as well, so whether (desktop) Linux fits is dependent on your needs.

    I can sympathize with you that you are upset about systemd. CentOS, RH, Fedora or Linux is not stopping you from being involved or finding a solution to your problem with systemd. Here are some examples of the way you and others could give back to the community and help others that believe the same as you.
    1. Engage with the community of the early Open Source version of RHEL ie Fedora.
    2. Submit patches, rpms etc as alternatives to systemd for inclusion in Fedora. RH may include it in RHEL as an optional install.
    3. Submit a bugzilla on systemd itemizing and describing the deficiencies.
    4. Purchase Support and or services from RH and create a case with business justification.
    5. Create a fork of CentOS7 that does not have systemd.
    6. Create a fork of CentOS6 that has future features/bugs backported from CentOS7
    7. Financially support someone/people to do any of the above if you can’t do it yourself.

    As you can see there are lots of options with open source software. I am sure there are others I have missed. I hope that helps you find a solution to your problem.

    Grant

  • Sorry, but when I hear that, I think of what my first wife used as the typesetter for an underground newspaper….

    mark

  • Les Mikesell writes:

    Can’t find the original post so replying and agreeing with Les. Have the same ongoing problem with radvd. When my IPv6 tunnel provider burps, the tunnel drops. The tunnel daemon usually reconnects but radvd stays down. Solution:

    */12 * * * * /sbin/service radvd status > /dev/null 2>&1 || /sbin/service radvd start 2>&1

    in crontab. How hard is that? And without all of the systemd nonsense.

    Cheers, Dave

  • I assume it was not a Linotype machine that put out lead letters ready for the printing presses?

    My uncle worked for the Cleveland Press as a typesetter.

  • Robert Moskowitz wrote:

    They just had one office suite. She’d type the tape, and that would be sent off to the printers.

    mark “we won’t talk about the month I punched Addressograph plates….”

  • Or, if you want things to respawn, the original init handled that very nicely via inittab. Also,running a shell as the parent of your daemon as a watchdog that can repair its environment and restart it if it exits doesn’t have much overhead. Programs share the loaded executable code across all instances and you pretty much always have some shells running on a linux/unix box – a few more won’t matter.
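
    For the record, the inittab version is one line – a sketch, with a hypothetical daemon path (the process must stay in the foreground):

    ```
    # /etc/inittab  --  id:runlevels:action:process
    md:2345:respawn:/usr/local/bin/mydaemon --foreground
    ```

    init re-runs the command whenever it exits; no extra machinery involved.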

  • Well, while Linus Torvalds is not everyone, he is certainly someone. And I do not believe, although I could be mistaken on the point, that Linus’s current thoughts on systemd would be considered printable in a public forum. At a guess, I would put it right up there with GNOME 2 in his estimation.

  • Addressograph plates? That is really ancient! But they were incredibly useful in those days.

  • Always Learning wrote:
    Yeah… but did you ever do it, or see it done? Forget the old manual Underwood, this required actual *force* hitting the keys (yes, the machine was electric). No speed, either – the actuator arms had to hit the metal. WHAM-WHAM-WHAM-WHAM

    mark

  • Saw the plates being used to emboss invoices. Power Sumas cards then produced the invoice details. Then along came a Honeywell 1250, a punch room, coding sheets, masses and masses of punched cards, manual hand punches, electric punching machines and verifiers and even an electric portable verifier too. Only had to wait for about 60 to 90 minutes for the results of a Cobol programme, meanwhile nothing else ran.

    Wish I had photographed everything then.

  • Any pics or videos? I am just picturing a room with those thingies and a couple of Jacob’s ladders and a Van de Graaff while in the background a table with a covered body is being slowly raised to the roof.

    “No, not the third switch!”

  • Sorry mate. The programmers were all busy sitting around the table in the engineers’ room playing Bridge (a card game) :-)

  • Les Mikesell writes:

    Just pointing out one of several approaches to respawning a daemon without the overhead of systemd. I went with this approach since the problem is not with radvd or its init script but with my IPv6 tunnel provider. I wanted something that didn’t require modifying any of the installed bits. This approach also means that updates to radvd and friends don’t overwrite my modifications. Just “playing with” the IPv6 stuff so having it down for up to five minutes also isn’t a problem. The source of the problem goes away when my ISP provides IPv6 and I don’t need to tunnel IPv6 in IPv4 anymore.

    I look at systemd as being yet another nuclear fly swatter. Overkill for simple problems that can and should be addressed at the point of the problem without a sweeping, system-level change.

    Cheers, Dave

  • [I wasn’t going to reply; but after thinking about it for quite a while, there are a few points here that deserve just a bit of level-headed attention.]

    Replying to Les’ comment: the original inittab respawn method is completely brain-dead, blindly respawning without any thought for what conditions might need to be checked, etc.

    Replying to David: So you’d prefer the overhead of cron plus shells plus a bit of arcane syntax? When I first replied to this crontab line, I
    honestly thought you were being tongue-in-cheek.

    I have a similar sort of kluge in place, on a CentOS 6 system at a client, that uses the autossh package to hold open SSH reverse tunnels;
    reverse tunnels are great when the client’s machine is behind a known-to-change-frequently dynamic address.

    Sounds like something that systemd’s concept of process dependencies could solve for you with an easier (and non-executable) syntax. Systemd provides an ‘OnFailure’ directive that allows you to do whatever you’d like upon failure of a particular ‘unit.’ That sort of mechanism might allow you to implement the process equivalent of Cisco IOS’ IP SLAs.
    (You could mount /etc (and /var) noexec and have /usr and friends mounted read-only, even.)

    This is why sysadmin configs for systemd are in /etc and the OS-supplied configs are in /usr. Your /etc ‘units’ to systemd will override the OS
    installed ones, and are all collected together in one well-defined and standard place.

    This is why sysadmin configs for systemd are in /etc and the OS-supplied configs are in /usr. Your /etc ‘units’ for systemd will not be touched by the updates to the OS-supplied ones.
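
    For example, overriding a single directive of an OS-supplied unit is just a small drop-in file – a sketch only (the directives are from the systemd.service man page; whether you would actually want radvd handled this way is your call):

    ```ini
    # /etc/systemd/system/radvd.service.d/restart.conf  -- drop-in sketch
    # Merged over the OS-supplied /usr/lib/systemd/system/radvd.service
    # and never touched by package updates.
    [Service]
    Restart=on-failure
    RestartSec=30
    ```

    followed by a ‘systemctl daemon-reload’ to pick it up.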

    If you can figure out IPv6 then systemd should be no sweat.

    I have done all of the various init styles at various times, so I make this statement having 27 years of experience dealing with Unix-like systems (I won’t bore anyone with the list): in my quick perusal of systemd and its documentation, I’m cautiously optimistic that maybe finally we have something that a sysadmin can really make sing. Time will tell, of course, whether systemd actually addresses the core problems or not; we’ve already had one round of an init replacement, Upstart, that didn’t succeed in fully addressing the core problems (but will be with us until 2020 as part of EL6). And I always reserve the right to be wrong.

    But traditional SystemV init last appears in EL5, which, while it is still supported until 2017, is two major versions old at this point. And in case you missed the announcement from Red Hat, EL5.11 is the last minor version update of EL5, with subsequent updates being released as they come and not batched into point releases. (I now know my last targeted version for IA64 rebuilding, which is good…..as long as I can put in some automation to grab updates from then on).

  • Hi Lamar,

    Having been working with UNIX-like systems since 1985, my biggest complaint with systemd is that it is so intrusive; it wants to be everything, which makes it vulnerable to bugs and exploits – umm.. like Windoze!

    My $.02

  • There is an old English saying “Never put all your eggs in the same basket.”

    If systemd has a bug or is exploited, because it does so many different things, the resulting “damage” to a working system is potentially significantly greater than if systemd performed fewer functions. As previously stated by JH, it’s in RH so it’s in CentOS. We have to accept it or go elsewhere.

  • William Woods wrote:

    Well… we know that > 50% of the Web and ‘Net runs on Linux and other unices. Compare and contrast the number of Windows Server vulnerabilities that have been exploited to those of *Nix… and, for extra credit, how fast they were admitted, and fixed…..

    mark

  • William Woods wrote:

    Please stop top posting.

    I suggest you google with the following search criteria: “windows server”
    exploits

    mark

  • Ok, I posted last week a question in this list about configuring two interfaces, one of which being a bridge, to get dhcp and make the bridge one be the primary one as far as which dns and router to pick.

    Using systemd only was my requirement. i.e. using /etc/systemd/system/
    instead of /etc/sysconfig/network{,-scripts}.

    The replies I got were “oh, you can still use
    /etc/sysconfig/network{,-scripts} with systemd *for now*, so you can kludge it together.” What’s the point then? systemd is supposed to handle something as simple as two little interfaces; I am not even talking about vlans and virtual interfaces. I know how to do it using
    /etc/sysconfig/network{,-scripts}, but that is beside the point. If systemd is the Way of the Future, I should be able to do this simple configuration in it in CentOS 7. After all, with CentOS 7, systemd is the de facto way to do things, right?

    I do sound annoyed because I am: I keep hearing about how systemd is
    “going to liberate us from the Old White Man way of doing things,” and I am willing to be the devil’s advocate here for I plan on using Linux in the foreseeable future. But, if it cannot deliver me two little interfaces up and running — not asking for cookies and a massage but would not turn them down if I get my interfaces up and happy — I do have an issue.

    So, for those who know the Way of The systemd, please show me here or in my thread (so we keep this one on-topic) how to do that in a completely systemd-networkd way (and why what I wrote is boink). Is that too much to ask?
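
    Roughly, the all-networkd configuration I am after looks like this – a sketch only: the interface names are made up, the option names are the ones in the systemd.network and systemd.netdev man pages, and I have not gotten this working on C7. Whether the RouteMetric trick really makes that link’s router and DNS win is exactly part of my question:

    ```ini
    # --- /etc/systemd/network/br0.netdev : create the bridge device ---
    [NetDev]
    Name=br0
    Kind=bridge

    # --- /etc/systemd/network/uplink.network : enslave the physical NIC ---
    [Match]
    Name=eth0

    [Network]
    Bridge=br0

    # --- /etc/systemd/network/br0.network : DHCP on the bridge, preferred ---
    [Match]
    Name=br0

    [Network]
    DHCP=yes

    [DHCP]
    RouteMetric=10
    ```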

  • Are you really trying to win the thread with “but omg windows!”? All software is swiss cheese, the only really secure software is turned off. Windows is no more or less secure than anything else out there.

    OpenSSL is sadly an excellent example of that.

  • Steve Clark wrote:
    Replying to this, because I saw a reply from him, but there was no new content, for some reason.

    Anyway, he also seems determined to see it all as black and white, rather than looking at the *much* larger set of bugs and vulnerabilities that Windows Server has had than any version of ‘Nix. Sure, we have some… but a *lot* fewer, and overwhelmingly far less serious.

    mark

  • Andrew Wyatt wrote:
    vulnerabilities that

    This is *pointless*. Point to something *OTHER* than heartbleed. And as this is the CentOS list, please note that 5.x was *not* affected at all.

    Or does your attention span not go back more than a couple of months?

    mark, getting annoyed

  • Openssl doesn’t have much to do with Unix/linux. It is just one of a bazillion application level programs that you might run. Are you going to include all bugs in all possible windows apps in your security comparison?

    But init/upstart/systemd are very special things in the unix/linux ecosystem. They become the parent process of everything else. For everything else, the only way to create a process is fork(), with its forced inheritance of environment and security contexts.

    In any case, giant monolithic programs that try to do everything sometimes become better than a toolbox, but it tends to be rare. First, it takes years to fix the worst of the bugs – but maybe that has already happened in fedora… And after that it is an improvement only if the designers really did anticipate every possible need. Otherwise the old unix philosophy that processes are cheap –
    if you need another one to do something, use it – is still in play. If you need something to track how many times something has been respawned or to check/clean related things at startup/restart you’ll probably still need a shell there anyway.

  • OpenSSL is a library, not an application, but I understand where you’re going with this. No, you wouldn’t include all Windows apps, but you would include anything that’s immediately available to Windows. Same principle here. We wouldn’t measure Oracle, just as we wouldn’t SQL Server, but we would OpenSSL, because it’s available in the repo and not third party.

    Yes, they sure are, you’re right about that. Without an init (of any kind), you only have a kernel. You don’t have mounted filesystems, or anything else.

    It’s very rare. I wasn’t speaking to this though in this instance, I was only speaking to Windows security not being any better or worse than anything else. Security is only as good as your admins and your implementation. It’s also an on-going process on any platform. I was just pointing out that it’s beyond silly to argue “because Windows is less secure!”.

  • And not used unless an application uses it.

    And no other processes….

    Yes, using Windows vs. unix/linux is an overreach as an analogy here –
    and unnecessary. It’s just a matter of ‘big, new, monolithic’ code bases vs. a small set of well-tested reusable tools. We could just run everything under Java if we wanted. But how many years old is Java, and how often are there still mandatory updates of the whole thing because of some recently noticed security bug in some part of it?

  • Not with so many of Windoze world-wide users getting viruses all the time. CentOS is inherently more secure than Windoze.

  • 1/3 of my servers use C 5.10, 2/3 use C 6.5. I use C 5.10 as my individual development server and desktop.

    C 5 works well for me.

    CentOS 5 Fan :-)

  • That is probably the most pointless comment you have made yet. Just because you use something, and you are a fan does not mean anything in the context of the discussion.

  • On the contrary – it means his services start just fine without systemd, and the best systemd is going to do is start them the same way – that is, not be an improvement even after someone wastes the time to rewrite the startup code.

  • On the contrary it means a discerning user like me, never averse to complaining, is satisfied with the quality product C 5 undoubtedly is. And satisfied sufficiently to use it instead of C6 and C7.

    Elsewhere you subsequently mentioned, after your apparently derogatory remark about C5 being “ancient”, that ancient does not mean bad. I concur.

    Have a nice day.

  • I would argue that also has to do with the average Windows user. You know, the human engineering part of being attacked (“Your computer is infected! The red blinking light says so! Click here to install our
    ‘sheep-me’ program, making sure to run it as an admin user!”)

  • perhaps you should change your username from Always Learning, as it appears you’ve decided to stop as of about 5 years ago.

  • William didn’t say that it was ancient, I did. If you think that “5.x is ancient and had its own set of flaws over its lifecycle” is “derogatory”, it should come as no surprise to us that you’ve mixed up who you were talking to.

  • John R Pierce wrote:
    a) This is rude.
    b) We have several 5.x servers here. For one, we kept one or two home directory servers at 5.x because writing to an NFS-mounted home directory from a 6.x server could be a literal order of magnitude slower. It took us over a year to find that if we added nobarrier to the filesystems it was < 10% slower.
    c) We have some production boxes that are 5.10. *YOU* go and tell managers that we’re going to take down their production boxes and upgrade them, or were *you* personally going to assure that their budgets would be upped to provide replacement servers that could be built and tested prior to replacement (and note that the last set just got upgraded just before 6.0 came out in '12)… and this is part of an agency of the US government, and we are *NOT* DOD. Care to talk to your Congresscritters to assure this, if you’re a US resident?

    mark, not sure when I’ll go to 7 at home, what with systemd….

  • do you have plans to replace/upgrade them prior to the end of maintenance updates circa March 2017 ?

    btw, 6.0 came out in july 2011

  • John R Pierce wrote:

    Do I? I’m just a sysadmin. Perhaps you should reread the above… or maybe you’re not familiar with working in an organizational environment.

    mark

  • This is the US gov he is dealing with. He will end up having to do what congress agrees he can do. When you get laws put forth (fortunately shouted down) that want to repeal Pi because it is irrational?

    Look at Detroit for how governments like to kick problems down the road until the mudball is too big to kick anymore.

    Look for an emergency funding request in Feb 2017…

  • Optimistically I will continue learning about a wide range of differing subjects until I die, probably in about 10 years or so.

    I continue to learn new things about C5, and the programmes that run on it, the BSDs, Linux kernel, minor CSS syntaxes. It is fascinating.

    Next month I hope to enrol in German and Polish evening classes. I would have preferred Norwegian (Bokmal) and Dutch (Nederlands) but the local college doesn’t offer them. In November I would like to start a law degree :-)

    I am never complacent and tomorrow I do the first of the compulsory 3
    tests for my motorbike licence (theory and hazard perception, despite riding my bike for the last year as a Learner) – I’m definitely Always Learning and not ashamed to admit it.

    CentOS is clearly a refreshing and invigorating breeze compared to Windoze. Having about 47 years experience as a computer programmer, I am naturally reticent about systemd – but then every clever and thinking person would be too. I’ve experienced too many computer problems to trust everything to script kiddies or their grown-up enthusiastic cousins.

    Have a nice evening.

  • I was preoccupied studying for my exam tomorrow. No harm done and my points are valid.

  • I work in a corporation, supporting software development for manufacturing. unsupported hardware/software is retired per corporate policy. I actually get a fair amount of grief from using CentOS in my development environment, production uses RHEL under contract (or AIX or Solaris or…)

  • Has anyone here actually interacted with systemd, and if so could you perhaps provide a writeup of your experiences? I feel like I haven’t seen any practical information on systemd in this thread, and I’d like to have that before forming an initial opinion (at which point I’d attempt to interact with it myself in order to form a better informed opinion).

    –keith

  • I’ve been using systemd ever since it was introduced in Fedora, and the RHEL7 beta and CentOS7 final since it came out. I could tell you about all the positive and negative experiences I’ve had. There’s been a lot of misinformation posted on this list (such as the Windows Registry reference)[1]. I wouldn’t want to make any decisions about systemd based on what I’ve seen on this list.

    However, I think you should try it out yourself. I suggest giving it a try in a VM, or try one of the CentOS7 LiveCDs. I was quite hesitant about systemd when I started using it, but it was experience that led me to be able to make good judgements about it.

    1. See the systemd myths web page http://0pointer.de/blog/projects/the-biggest-myths.html

  • I think this could be very useful, especially coming from someone who was initially reluctant (as I and clearly others are).

    That’s what I was concerned about. :) I certainly would try it for myself first, but if I were to read a lot of people writing “I tried to actually use systemd, and it was awful/fine/awesome” that would carry some weight.

    In the interest of full disclosure, that page is written by one of the primary authors of systemd, so we shouldn’t expect an unbiased opinion.
    (Not saying it’s wrong, only that it’s important to understand the perspective an author might have.)

    –keith

  • Ok, I’ll give some examples of my experiences. Warning: long post.

    As a systems integrator, I’m less concerned with hand-crafted, artisanal init script hackery and more interested in consistency. I
    rarely start services by running the init script manually, but rely on configuration management tools, which expect a uniform interface. I’m concerned with confining services to use the resources that are expected, and to make sure that they remain secure.

    Prior to the systemd’s release, I had created several custom services to manage the infrastructure and to serve up the apps I supported. They were all written using RHEL and CentOS-specific syntax. I had started looking at replacing them with Upstart-specific services around the time when systemd was announced. Both Upstart and systemd have simpler configuration syntax, their own commands and better support of dependency management.

    When I started using and writing my own systemd units, I found them quite simple, the documentation sufficient, and the features quite useful. Finally getting proper ordering of services, being able to set mountpoints and network activation (for example) as dependencies of services, resource management, these all are huge wins for Linux systems.

    For example, cgroups. I had already started using cgroups in el6, and you get automatic resource management with cgroups with systemd. This is a hugely positive feature that you don’t even realize is possible until you start using it. This also extends to systemd-logind, so you can set up “slices” for resource management of users. Possible in el6
    using pam_cgroup and cgred, but much better implemented with logind, since they’re automatically created and added to a cgroup. Same for services.
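    For instance (a hypothetical sketch – the unit name and limit values are made up, not from this thread), per-service resource controls can be declared right in the unit’s [Service] section, and systemd translates them into cgroup settings for that service’s processes:

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=Example app with cgroup resource limits

[Service]
ExecStart=/usr/local/bin/myapp
# Resource controls -- systemd maps these onto the service's cgroup
CPUShares=512
MemoryLimit=512M

[Install]
WantedBy=multi-user.target
```

    No cgconfig/cgred setup needed; the cgroup is created and torn down with the service.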

    Each service runs in its own unit, contained within whatever resources you define it to have. You can also have containers that make it possible to even give a service its own process namespace, with Docker or systemd-nspawn.

    Being able to just drop simple text files into directories to set up all of these features makes me happy. For example, instead of needing to modify
    /etc/fstab (which I hate doing, since I try to avoid making my CfgMgmt tool edit files), I can drop an NFS mountpoint definition into a
    .mount unit file, which I can then refer to in a .service unit file that requires the mountpoint.
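    A sketch of what that looks like (server and paths hypothetical; note that systemd requires the unit file name to mirror the mount point, with ‘/’ turned into ‘-’):

```ini
# /etc/systemd/system/srv-data.mount  (hypothetical example)
# The file name must match the mount point: /srv/data -> srv-data.mount
[Unit]
Description=NFS mount for application data

[Mount]
What=nfsserver.example.com:/export/data
Where=/srv/data
Type=nfs
Options=ro,noatime

[Install]
WantedBy=multi-user.target
```

    The consuming service then declares `Requires=srv-data.mount` and `After=srv-data.mount` in its [Unit] section, so it only starts once the filesystem is actually mounted.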

    Puppet, Chef and Bcfg2 (the CfgMgmt tools I’ve used) all support systemd, so the hard work has already been done to start using it. I was not particularly enamored with fancy SysVinit scripting, and usually prefer as simple and baseline functionality as possible, so I
    really see no loss switching to systemd. The fear of systemd being monolithic actually makes me chuckle, since systemd actually breaks out many of the functions of the SysVinit rc.sysinit into separate units, to be managed and modified as needed (such as TTY allocation, mounting filesystems, managing runlevels, etc.).

    So, the things that have bothered me so far:
    1.) The order of the ‘service SERVICENAME start|stop|status’ options is reversed for ‘systemctl start|stop|status SERVICENAME.service’. It took me a while to get used to that. You can just keep using
    ‘service’ and get similar results, though. The output of the systemctl status is pretty complex too. Also, I keep forgetting to run ‘systemctl status -l SERVICENAME.service’ and the long lines are chopped off until I remember to use -l.
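    To illustrate the reversed argument order (an illustrative session, not output captured from a real host):

```
$ service sshd status                # SysVinit style: name first, verb second
$ systemctl status sshd.service      # systemd style: verb first, name second
$ systemctl status -l sshd.service   # -l keeps long log lines from being truncated
```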

    2.) Daemons under systemd don’t really need to daemonize anymore. In the past, to start up a daemon process, you’d need to fork (or double-fork) and disconnect STDIN, STDOUT and STDERR file descriptors. This was just accepted knowledge. You don’t need to do that anymore in systemd service units. Heck, write to stdout or stderr, it’ll be recorded in the journal. Check out the openssh-server service unit, you’ll see that sshd is run with -D
    there. Systemd supports daemonized services, it’s just not necessary anymore.
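    A minimal sketch of such a foreground service (names and paths hypothetical; the pattern is the same as the sshd -D example):

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical example)
[Unit]
Description=Example foreground daemon

[Service]
# Type=simple (the default): the process stays in the foreground --
# no fork/double-fork, no detaching of stdin/stdout/stderr.
Type=simple
ExecStart=/usr/local/bin/mydaemon --no-daemon
# Anything the process writes to stdout/stderr lands in the journal.

[Install]
WantedBy=multi-user.target
```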

    3.) Old SysVinit services that did something other than start/stop/restart/status no longer work. While this was nice for consistency, it means that porting to systemd will require alternative methods. This really bugged me at first, but from the perspective of a systems management person, I can see why it was kind of a hack, and not consistently implemented.

    4.) Debugging. Why is my unit not starting when I can start it from the command line? Once I figured out journalctl it was a bit easier, and typically it was SELinux, but no longer being able to just run
    ‘bash -x /etc/rc.d/init.d/foobar’ was frustrating. systemd disables core dumps on services by default (at least it did on Fedora, the documentation now says it’s on by default. Huh. I should test that…)
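    For reference, the journal-based debugging workflow looks roughly like this (illustrative session; the unit name is hypothetical):

```
$ systemctl start foobar.service        # start attempt fails?
$ systemctl status -l foobar.service    # exit code + last few journal lines
$ journalctl -u foobar.service          # full log for just that unit
$ journalctl -xe                        # most recent entries, with explanations
```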

  • Jonathan, thanks for the balanced treatment and for posting actual experience and not just regurgitating tired tropes.

  • Jon as a heads up this isn’t a systemd/el7 thing necessarily…

    Look at the daemon function in /etc/init.d/functions that most standard EL
    init scripts will be using…

    Core files have been disabled on things started with that by default (you need to export a variable in the environment of the script, usually via sysconfig) for the whole of el6 …

  • One thing that bothers me very much when reading that is the several mentions of how you don’t need to learn shell syntax as though that is an advantage or as if the author didn’t already know and use it already. As if he didn’t understand that _every command you type at the command line_ is shell syntax. Or as if he thinks learning a bunch of special-case language quirks is somehow better than one that you can use in many other situations. When you get something that fundamental wrong it is hard to take the rest seriously.

  • Is there a simple generic equivalent to:
        sh -x /etc/rc.d/init.d/program_name start
    to see how configuration options that are abstracted out of the main files are being picked up and expanded?

  • You mean this paragraph?

    “systemd certainly comes with a learning curve. Everything does. However, we like to believe that it is actually simpler to understand systemd than a Shell-based boot for most people. Surprised we say that? Well, as it turns out, Shell is not a pretty language to learn, it’s syntax is arcane and complex. systemd unit files are substantially easier to understand, they do not expose a programming language, but are simple and declarative by nature. That all said, if you are experienced in shell, then yes, adopting systemd will take a bit of learning.”

    I think the point is that systemd unit file syntax is significantly simpler than shell syntax — can we agree on that? It also is significantly less-featureful than a shell programming language. Yes, you’re going to be using shell elsewhere, but in my experience, the structure of most SysVinit scripts is nearly identical, and where it deviates is where things often get confusing to people not as familiar with shell scripting. Many of the helper functions in
    /etc/rc.d/init.d/functions seem to exist to STOP people from writing unique shell code in their init scripts.

  • No. Everything you type on a command line is shell syntax. If you don’t think that is an appropriate way to start programs you probably shouldn’t be using a unix-like system, much less redesigning it. If you don’t think the shell is the best tool, how about fixing it so it will be the best in all situations.

    Yes, reusing common code and knowledge is a good thing. But spending a bit of time learning shell syntax will help you with pretty much everything else you’ll ever do on a unix-like system, where spending that time learning a new way to make your program start at boot will just get you back to what you already could do on previous systems.

  • Jonathan Billings wrote:


    This one does bother me. I may not want to restart a production instance of apache, when all I want it to do is reload the configuration files, so that one site changes while the others are all running happily as clams.

    mark

  • systemctl reload $unit

    Documented in the systemctl(1) man page.

    If the unit(s) you want to reload don’t support that, and you want to reload more than one unit’s configuration in one command, you use systemctl reload-or-restart $unit

    (I’ve wanted that one for a while, and ‘service’ doesn’t do that, nor does it do globbing of the name; ‘systemctl reload-or-restart httpd*’
    (with proper quoting) will restart or reload all running units that match the glob. So on my load-balanced, multiple-frontend Plone installation I can run ‘systemctl reload-or-restart plone-*’ and it will do the right thing, no matter how many frontend instances I have selected for running.) That’s actually pretty cool.

    There are quite a few of the commands that systemctl supports that I
    have wanted for ‘service’ for a long time.

  • Lamar Owen wrote:

    Which contradicts the long post from the guy I was responding to, who said it *only* did start, stop, restart….

    mark

  • What I meant is that it doesn’t support extra action verbs, such as
    ‘service httpd configtest’. I didn’t mean to indicate that it ONLY
    supported start, stop, restart and status.

  • I figured it was a typo on his part, leaving out ‘reload’ like that, as condrestart is also missing, and it’s part of the standard set. I took it more as, for instance, the PostgreSQL initscript’s ‘initdb’ command, which is a unique one, is no longer directly supported as a command line option. I haven’t looked at what PostgreSQL’s unit under C7 does as yet if a database instance doesn’t exist, but I’m sure it’s already been thought through; I’ll cross that bridge when I come to it.

    Read the man page for yourself.

  • Yes, everything you type in a shell uses shell syntax. But systemd doesn’t use a shell to start a program for a service. This has nothing to do with how programs are started from a shell, but rather how the init system is starting the program. Simplified, declarative syntax, no need to write the entire logical sequence for handling the action verb parameter for each script (“Whoops! I forgot that ;; in the case statement!”). That’s simpler.

    If the entirety of the Linux startup process was written in simple shell code, that might be a useful line of argument, but even in CentOS6 there was a non-shell init system (Upstart) which requires understanding of a domain-specific language, plus dozens of other various configurations, like xinetd, cron, anacron, gdm, etc which start processes on boot. Each has their quirks. Not all of them use shell syntax, and even those that did had caveats. systemd just replaces Upstart, and can potentially replace xinetd and cron as well, using the same syntax and bringing in the ability to refer to each other for startup sequence management.
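    As a sketch of the “same syntax” point (hypothetical names; a .timer unit standing in for a cron entry), a scheduled job becomes another declarative unit that activates a matching .service:

```ini
# /etc/systemd/system/nightly-report.timer  (hypothetical example)
# Declarative stand-in for a crontab line; on firing, it starts
# the nightly-report.service unit of the same name.
[Unit]
Description=Run the nightly report at 02:30

[Timer]
OnCalendar=*-*-* 02:30:00

[Install]
WantedBy=timers.target
```

    The timer, the service it triggers, and any ordinary boot-time service all use the same unit-file syntax and can declare dependencies on one another.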

    I’m not arguing that you shouldn’t learn shell programming (and I
    don’t believe that Mr. Poettering is either), just that systemd units flatten the learning curve for creating new unit files.

  • You can still run apache’s configtest

    https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Web_Servers.html

    httpd Service Control

    With the migration away from SysV init scripts, server administrators should switch to using the apachectl and systemctl commands to control the service, in place of the service command. The following examples are specific to the httpd service.

    The command:
        service httpd graceful
    is replaced by:
        apachectl graceful

    The command:
        service httpd configtest
    is replaced by:
        apachectl configtest

    The systemd unit file for httpd has different behavior from the init script as follows:
    A graceful restart is used by default when the service is reloaded.
    A graceful stop is used by default when the service is stopped.

    Thanks, Richard


  • Les, I could re-use your logic to argue that one should never even try to learn bash, and stick to C instead. Every *real* user of UNIX-like systems should be capable of writing C code, which is used in so many more circumstances than bash. C is so much more powerful, more expressive, immensely faster, covers a broader set of use-cases, is being used in so many more circumstances than bash, is far more generic, and in the long run it’s a good investment in programming skills and knowledge.

    Why would you ever want to start your system using some clunky shell-based interpreter like bash, (which cannot even share memory between processes in a native way), when you can simply write a short piece of C code, fork() all your services, compile it, and run?

    All major pieces of any UNIX-like system were traditionally written in C, so what would be the point of ever introducing a less powerful language like bash, and doing the system startup in that?

    And if you really insist on writing commands interactively into a command prompt, you are welcome to use tcsh, and reuse all the syntax and well-earned knowledge of C, rather than invest time to learn yet-another-obscure-scripting-language…

    Right? Or not?

    If not, you may want to reconsider your argument against systemd –

  • So what is the advantage of systemd? I accept we have to use it in C7, but how is systemd going to improve the usability and reliability of CentOS ?

  • That’s easy. I just type: sv httpd reload

    (sv is my system-wide abbreviation for system, ‘cos I’m lazy)

  • So, in C7, how do I do a

    system httpd configtest ?

    Am I going to lose that facility in C7?

  • The big thing with any of these new service managers (I’m more familiar with Solaris SMF than systemd, but I believe it does the same thing) is that it determines whether the service properly starts and tracks service dependencies. SysVinit-style services could only be sequenced
    (start all lower-numbered services before starting this one), and it had to wait for each service to start before going on to the next, while SMF
    and presumably systemd will start multiple services in parallel as long as they aren’t dependent. Also, SMF at least detects when a service fails/stops, and attempts to take corrective action per how that service is configured.
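    In systemd those dependencies are declared in the unit itself (a sketch with hypothetical names): units with no ordering relationship between them are started in parallel, while these directives force the described sequencing.

```ini
# In a hypothetical webapp.service:
[Unit]
Description=Web application
# Only start once the network and the database are up;
# Requires= also pulls the database in if it isn't running yet.
After=network.target postgresql.service
Requires=postgresql.service

[Service]
ExecStart=/usr/local/bin/webapp
```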

  • You could, if every command typed by every user since unix v7 had been parsed with C syntax instead of shell so there would be something they could ‘stick to’. But, that’s not true.

    That might be true, but it is irrelevant.

    If you think bash is ‘clunky’, then why even run an operating system where it is used as the native user interface? Or, if you need to change something, why not fix bash to have the close mapping to system calls that bourne shell had back in the days before sockets?

    Well, Bill Joy thought so. I wouldn’t argue with him about it for his own use, but for everyone else it is just another incompatible waste of human time.

    I’m sure it can work – and will. But I’m equally sure that in my lifetime the cheap computer time it might save for me in infrequent server reboots will never be a win over the expensive human time for the staff training and new documentation that will be needed to deal with it and the differences in the different systems that will be running concurrently for a long time.

    The one place it ‘seems’ like it should be useful would be on a laptop if it handles sleep mode gracefully, but on the laptop where I’ve been testing RHEL7 beta it seems purely random whether it will wake from sleep and continue or if it will have logged me out. And I don’t have a clue how to debug it.

  • Long, but really helpful. Thank you so much for putting your time in!

    Yes, I’ve seen this too with initctl. Grr! But that’s mostly cosmetic, and just something to get used to (as you have).

    How is logging handled if services are printing to stdout? Does systemd separate logs (if told to) so that e.g. my sshd logs are separate from my postfix logs?

    Hmm, this seems most problematic of the issues you’ve described so far. Is journalctl the way to get to stdout/err of a systemd unit? If a service fails, where is that failure logged, and where is the output of the failed executable logged?

    Thanks for your patience in this sometimes frustrating thread. ;-)

    –keith

  • Service stdout logging goes to the journal and is copied to rsyslog, log entries are tagged with the control group they came from, in addition to the process name, so it is easy to see any logs, whether via syslog or stdout, generated by any process in the sshd.service control group, or postfix.service control group.

    $ journalctl --follow _SYSTEMD_UNIT=sshd.service

    man systemd.journal-fields for a list of all the fields you can search on

    When you view the status of a service with systemctl it shows you the command line, when it last tried to start it, what processes are associated with the service if any are running, what the exit code was, and the last few lines from the journal of anything logged by that service, so is kind of a one-stop-shop. For example:

    $ systemctl status sshd
    sshd.service - OpenSSH server daemon
       Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
       Active: active (running) since Thu 2014-07-10 20:55:47 CDT; 4 days ago
      Process: 1149 ExecStartPre=/usr/sbin/sshd-keygen (code=exited, status=0/SUCCESS)
     Main PID: 1164 (sshd)
       CGroup: /system.slice/sshd.service
               └─1164 /usr/sbin/sshd -D

    Jul 10 20:55:47 localhost systemd[1]: Starting OpenSSH server daemon…
    Jul 10 20:55:47 localhost systemd[1]: Started OpenSSH server daemon.
    Jul 10 20:55:48 localhost sshd[1164]: Server listening on 0.0.0.0 port 22.
    Jul 10 20:55:48 localhost sshd[1164]: Server listening on :: port 22.

    $ sudo systemctl stop sshd
    $ systemctl status sshd
    sshd.service - OpenSSH server daemon
       Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
       Active: inactive (dead) since Tue 2014-07-15 17:29:09 CDT; 3s ago
      Process: 1164 ExecStart=/usr/sbin/sshd -D $OPTIONS (code=exited, status=0/SUCCESS)
      Process: 1149 ExecStartPre=/usr/sbin/sshd-keygen (code=exited, status=0/SUCCESS)
     Main PID: 1164 (code=exited, status=0/SUCCESS)

    Jul 10 20:55:47 localhost systemd[1]: Starting OpenSSH server daemon…
    Jul 10 20:55:47 localhost systemd[1]: Started OpenSSH server daemon.
    Jul 10 20:55:48 localhost sshd[1164]: Server listening on 0.0.0.0 port 22.
    Jul 10 20:55:48 localhost sshd[1164]: Server listening on :: port 22.
    Jul 15 17:29:09 localhost systemd[1]: Stopping OpenSSH server daemon…
    Jul 15 17:29:09 localhost sshd[1164]: Received signal 15; terminating.
    Jul 15 17:29:09 localhost systemd[1]: Stopped OpenSSH server daemon.


    Mark Tinberg, System Administrator Division of Information Technology – Network Services University of Wisconsin – Madison mtinberg@wisc.edu

  • Without gainsaying any of the foregoing, let me put the alternative case.

    In my opinion, the era of dedicated sys-admins is passing if not already finished. A good friend of mine, sadly now deceased, began his career working for RCA adjusting colour television sets in owners’ homes. The neighbourhood garage with two or three teen-aged grease-monkeys and an owner-mechanic is gone. So too are chauffeur-mechanics for car owners, elevator operators in buildings, attendants at public toilets (at least in North America), telephone operators (mainly), and telephone booths (mostly).

    It is called the advance of the ages. All technology must, if it is to become truly useful, disappear from conscious consideration when operated. Consider how little user maintenance is even possible with a modern automobile, how pervasive the idea of instant world-wide voice and data communication is, and how absurdly trivial operating the technology necessary to accomplish this is (from the end user’s perspective).

    How many here remember party-line telephone service? Operator assisted Station-to-Station and Person-to-Person long distance calls? Telegrams?
    Telex? Centrex? Analysis pads?

    Residual artifacts of all these things are still to be found but their function has been subsumed and submerged by technology and the skills necessary to operate them are obsolete.

    All successful automation is going down the same path. The *nixes have not won the OS wars but they are, I believe, a significant part of the medium term future of automation. However, to become truly useful to the widest possible audience the arcane user interface commonly encountered in the myriad of disjointed *nix system utilities has to be radically simplified to the point of triviality.

    A shell is, at its root, a programming language. A peculiar form of virtual machine if you will. The vast majority of computer users are not programmers and will never become so. What this means is that the shell, of whatever ilk, must be submerged to the view of most users if *nix is to thrive.

    For the cognoscenti this will ever be a point of discomfort for it puts into question the value of their hard won skills. Nonetheless, things like systemd, gnome3, and no-doubt dumber and more awful things to come are, in my opinion, inevitable. Computing is just too valuable a resource to be left solely in the hands of a self-selected elite whose entrance requirement is mastery of a dozen subtly different ways of telling computer systems to do essentially the same thing.

    There is a reason that things like Webmin already exist and it is not because of MS-Windows or LUsers. It is because the native administrative interface to the standard *nix system is a nightmare of complexity, inconsistency and sheer bloody-mindedness.

    At least, that is how I see it. I am not comfortable with change but I have long ago given up trying to resist it. If systemd presents a common DSL for service management dependencies __AND__ it works, then bring it on. I had to write Upstart stanzas for IAXModem on C6. I suppose it will not be any harder with its successor on C7.