Disappearing Network Manager Config Scripts

Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.

Each time I reboot, it seems like the configuration file I create for Network Manager gets destroyed and replaced with a default file. Nothing in the default file would actually make sense on my network, so I’m not even really sure how this machine is still connected to the network after a reboot destroys my previous configuration.

The only way I seem to be able to keep my proper DNS settings is through Network Manager’s GUI, and I have to re-enter the configuration after every reboot. At the very least, I just want to stop Network Manager from wiping out my perfectly fine /etc/resolv.conf.

There has to be a better way.

106 thoughts on - Disappearing Network Manager Config Scripts

  • service NetworkManager stop
    chkconfig NetworkManager off
    vi /etc/sysconfig/network-scripts/ifcfg-ethX
    vi /etc/resolv.conf
    chkconfig network on
    service network start

    John

  • On 27-04-2014 01:33, Evan Rowley wrote:

    Your report is weird because NM should be able to work with your standard ifcfg- files, as described in https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-User_and_System_Connections.html

    If you think it’s NM, a ‘service NetworkManager restart’ probably would reproduce the issue, and we could troubleshoot from there. If not, then something else is removing it and NM is just putting something where it was blank.

    Nevertheless, it’s going to get much better on CentOS 7. NM has been heavily reworked and now includes an ‘nmcli’ command for managing NM from the console and from scripts.

    Marcelo

  • Also helps to throw a line of “NM_CONTROLLED=NO” into
    /etc/sysconfig/network-scripts/ifcfg-ethX
    just to further tell it to go away.
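
    For reference, a minimal sketch of a static ifcfg file with that line in place (the device name, MAC, and addresses are illustrative, not from this thread):

      DEVICE=eth0
      HWADDR=00:11:22:33:44:55
      ONBOOT=yes
      BOOTPROTO=none
      IPADDR=192.168.1.10
      NETMASK=255.255.255.0
      GATEWAY=192.168.1.1
      NM_CONTROLLED=no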

  • You know, you don’t get NetworkManager on a server if you don’t install the ‘Desktop’ group. The list of packages that actually require NetworkManager is very small.

    I have a development machine that also acts as a server, and it has NetworkManager installed, but it does not act ‘weird’ in networking.

  • Nathan Duehr wrote:

    Is this an impromptu poll? I think we had one for NM (“it’s so much better in fedora, it was reworked…”), and everyone else, if it’s not a laptop, wants it to Go Away.

    But will they listen to us?

    mark

  • The answer is found in the package set for RHEL7. The time to vote is long past; it was in the Fedora train. NM is and will be in EL7, and it will be there for ten years, if RH keeps to its support schedule. They won’t pull it after the RC.

    At least in EL6 you can in fact yum remove NM without it taking your whole system away. I haven’t tried on EL7.

    But, I also haven’t had any issues with NetworkManager in my use cases, which includes much more than just laptops. I also am aware that others have had issues, particularly with bridging and bonding.

    “The NetworkManager daemon attempts to make networking configuration and operation as painless and automatic as possible by managing the primary network connection and other network interfaces, like Ethernet, WiFi, and Mobile Broadband devices. NetworkManager will connect any network device when a connection for that device becomes available, unless that behavior is disabled. Information about networking is exported via a D-Bus interface to any interested application, providing a rich API with which to inspect and control network settings and operation.”

    This may be fine for users that don’t know what they are doing or don’t have a stable networking environment, but I have found for me it causes nothing but heartache. The first thing I do is disable it.

    The sad part is that it makes us not understand what is really happening with our systems and when something doesn’t work we have no idea where to look.

    I have been using UNIX/BSD/Linux since the mid eighties and hate where things appear to be going – looking more and more like Windows.

    my $.02

  • There are two sides to this. On the one hand you want to be able to nail down server configurations – and probably anything that is going to stay wired. On the other, you can’t possibly have liked what you had to do to add a new network (or any other) device to a BSD system in the 80’s, and it is kind of nice to plug in a USB device and have it come up working without a reboot.

    I think the real issue is that the way to nail things down either hasn’t stabilized or isn’t well documented. For example, I think there are ways to tell NM not to mess with a specific interface setting (see the sketch below), and maybe a way to say you don’t want it to screw up your resolv.conf file, but can you tell it that adding a USB device and picking up a DHCP address is OK, but you don’t want to change your default route just because DHCP offers one?
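
    On the “don’t mess with a specific interface” point, one documented mechanism is NetworkManager’s unmanaged-devices option. A hedged sketch based on NetworkManager.conf(5), with an illustrative MAC address:

      # /etc/NetworkManager/NetworkManager.conf
      [main]
      plugins=ifcfg-rh,keyfile

      [keyfile]
      # NM will leave the device with this MAC alone
      unmanaged-devices=mac:00:11:22:33:44:55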

  • Steve Clark wrote:
    to look.

    For one thing, if we’re not active on the fedora lists, then we have no vote, it sounds like. And IMO, a lot of fedora folks are desktop folks, and thinking, perhaps, of competing with ubuntu.

    I think upstream might consider, esp. that we’re now a “partner”, talking to *us*. I mean, this is an ENTERPRISE o/s, and that means, heavily,
    *servers*, and does anyone actually use wireless, or anything other than hardwired, for a server?

    mark

  • Mixed DHCP and static IP configuration is a very useful but often neglected combination. [1]

    Every OS I’ve used requires some hacking around to make it work as desired. The only reason Linux is easiest of the bunch is because it has a history of letting you turn off the automation, so you can prevent it from doing undesired things.

    Windows is far worse than CentOS in this regard, NM or no. [2]

  • Frank Cox wrote:

    A Dell PowerEdge, or an HP DLx80, or a Penguin, or…. Why, what other values of “server” do you have?

    mark

  • Transferring files from one computer to another via SSH or ftp, for one. Backup via rsync for two. Database access for three.

    Need I go on?

  • Frank Cox wrote:

    Two or three? Like, at home? Very small office? My system at home, I’m setting up for backups, and I will get around to samba shares… but I
    call it a workstation.

    “Trains stop at a train station, buses stop at a bus station….”

    mark “I can stop playing solitaire any time I want….”

  • I think you’re setting up false dichotomies here. It isn’t about desktop vs server, or WiFi vs wired.

    First, both CentOS and Ubuntu have server and desktop focused variants.
    RHEL7 will make this separation even clearer[1], though it seems the reason has more to do with keeping the ISOs to single layer DVD size than because they intend for the Workstation/Client and Server editions to functionally diverge.

    Second, as to whether there are servers that use WiFi, of course there are. Print servers, embedded systems, media servers, IP cameras… Lots of Linux servers use WiFi.

    Back in the days when Big Iron Unix was the biggest piece of the market, the very thing being complained about in this thread would have been touted as a great feature over inflexible desktop OSes. Multipath I/O, hot-swap disk controllers, NIC failover, etc. all happened in that world first. Is dynamic networking any different, really?

  • Taxis stop at the train station, cars park at the bus station, busses pull up to the airport…

    The lines aren’t as sharp as you’re trying to draw them.

  • Warren Young wrote:

    Yes. There are a lot of servers that *require* special setups – think of h/a failover systems, or, as someone mentioned, systems with multiple ports, and some of those are on/feed internal subnets. I can’t see how NM
    can do other than mangle that.

    I didn’t see anything about “compute node”, etc. Guests, I would assume, are for kiosk-type setups. Compute node… it automatically detects GPU(s)? It comes with PBS/Torque installed? Fuse? Gluster? Ready to be joined to a cluster? I’d like to see what their definition of “compute node” is….

    But thanks very much for the link – I didn’t know that RC 7 is out this week….

    mark

  • Warren Young wrote:

    You completely missed the joke. I hate explaining jokes, it kills them.

    “…so doesn’t work stop at a workstation?”

    mark

  • Yes, but the configs tend to be tied to the names of the devices. If a new device is going to be added on the fly when you jack in a USB plug, where do you hack to say that device shouldn’t clobber your resolv.conf or default gateway?

  • Les Mikesell wrote:

    Or, for that matter, you reboot, and oops, you left a USB key in there, and /dev/sdc3 ain’t there….

    mark

  • Les Mikesell wrote:

    I don’t mind NM editing resolv.conf if it knows – or even thinks it knows – how to improve on the current settings, but what I don’t understand is why it occasionally deletes the current settings without substituting anything else. I can’t imagine any situation where this would help. Maybe the present settings are defective in some way, but no settings at all cannot possibly be better.

    Having said all that, in my case NM has been working perfectly for over a year now. But I can’t forgive it for the hours I wasted on it in the past.

    However, I would never run NM on a server, by which I mean a machine that offers services like dhcp and http.

  • Steve, first, if this comes off as a rant, that’s not my intention, and it’s not directed to you personally.

    My experience? There is no such thing as a 100% stable networking environment. Systems like Tandem’s NonStop take that a step further, and realize that there’s no such thing as a 100% stable CPU, either.

    This whole discussion reminds me of the SELinux discussions, and the oft-quoted advice to just disable it, it just gets in the way of The Way I’m Used To Doing Things (TM).

    NetworkManager is well-documented. You just have to read the docs and be willing to try something new. It also logs to /var/log/messages in plain text, too. There are more pieces, yes, to trace through. But, unless you install the Desktop group or the anaconda package on your server you won’t get NetworkManager on it. If you install the Desktop package, there’s a bit of an assumption that you want a Desktop, no?

    Looking like Windows is not a capital crime. (No, I am not a Windows freak; I’ve used *nix of various types probably as long as you have, and I haven’t used any Windows as my primary desktop of choice since Windows
    95 was a pup, and have never used a Windows Server as my primary server of choice.)

    NetworkManager’s goal is extremely simple, and is in the README. It’s simply: “NetworkManager attempts to keep an active network connection available at all times.” Networks are unreliable. Period. That’s why we have BGP and OSPF and all the other interior and exterior gateway protocols, because network links are ‘best-effort’ services; QoS depends upon the expectation of unreliability, in fact, since the only way to guarantee any packet a timeslot in a full pipe is to throw a different packet out the door. See the absolutely delightful video ‘Warriors of the .Net’ (www.warriorsofthe.net and elsewhere). We bond interfaces because one could go down, right? (This is one area where NM is weak, incidentally).

    I cannot foresee every failure in any manual configuration. We have dynamic routing protocols for a reason, since nobody can foresee how to weight every possible static route.

    Back in the late 1800’s, people who had used tillers to steer their horseless carriages probably thought the same thing about this new fancy gizmo called a steering wheel. And automatic transmissions? Heresy!

    Much of what I learned with Xenix on the Tandy 6000, Convergent Unix System V Rel 2 on the AT&T 3B1, Apollo DomainOS (using the 4.3BSD
    ‘personality’ for the most part), SunOS and later Solaris on Sun3 and SPARC hardware, and older Linux on PC and non-PC hardware still applies;
    but things move on as requirements change. (At least I can still have my vi! I HAVE used vi since the 80’s, and it is still the same quirky beast it always was, even in Xenix V7 on the T6K.).

    But the GUI on the 3B1? And those ‘pads’ on DomainOS? Not portable, and fallen by the wayside.

    Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren’t, better. Systemd is one of these things, and it will be interesting to see how that all plays out over the next few years.

  • Ok, I’ll bite on this one.

    *Why* do we want a server configuration to be nailed down? Is it due to a real need, or is it due to the inadequacies in the tools to allow fully dynamic and potentially transparently load-balanced dynamic configuration? Or is it due to the perceived need to control things manually instead of using effective automation? I do say ‘effective’
    automation, yes, since ineffective or partially effective automation is worse than no automation. But one of the cornerstones of good sysadmin practice is to automate those things that should be automated.

    Dynamic DNS and/or mDNS with associated addresses deal with the need for a static IP; SRV records in the DNS can deal with the need for a static name, as long as you have a domain; and something like (but different from!) Universal PnP can deal with that.

    NetworkManager (and similar automation) has application in cloud-based things, where the server needs to be as dynamic as the device accessing the server. It also has application in embedded things, where you want to plug in an appliance to a network and have its services available regardless of the network environment (maybe no DHCP, maybe no DNS, maybe dynamic addresses, and maybe static; it really shouldn’t matter).

  • Enterprise != servers. Server != hardwired.

    The enterprise desktop is real, and it is not going away.

    Wirelessly-attached servers are out there, especially in manufacturing.

  • No. Just no. Not if you think that means there is just one Desktop and it is physically attached to the box you are installing. That hasn’t been a reasonable assumption for anything running X, ever, and even less so with freenx/x2go. You want the applications on a stable, stably networked server and the displays out where people work.

  • Lamar Owen wrote:

    Define “stable”, please. I have servers (and I really, REALLY want to reboot them, but they’re home directory or project servers, so it’s really hard to get to do that, since people have jobs that run for days or weeks) that have run flawlessly for > 300 days, with nothing even vaguely resembling a significant problem.

    That’s a complete misrepresentation of the other side of *that* argument.

    WHY? I’m not a huge fan of “if it ain’t broke, don’t fix it”, but fixing something that, 90% of the time, is no big deal to configure and run, with layers of complexity that have created both new issues, and broken things that are set up in a given way for a reason, does not endear it to me.

    Yup. Agreed (’91 for me, though I did try Coherent in the late 80’s….).

    *Looking* like it, in terms of GUI, isn’t a killer (fvwm2, anyone?)… unless you’re talking Lose 8, er, Win8. *Configuring* *Nix that way *is* a Bad Thing.

    I boggle at this. I’ve not had unreliable networks, not any place I’ve worked, nor where I lived, and that goes back to dial-up in the far exurbs of Austin, TX.

    That *does* come off as snide and supercilious, esp. in this specific forum, with the backgrounds of most of us.

    Just you wait: maybe we should all join some fedora list where we can vote, before they try to force us all to … EMACS!
    (alt.religion.editors….)

    Again, newer does not mean better, either. And if you’re going to go on about the heresy of automatic transmissions, I’ll throw back in your face that when I was young, the fabric of dungarees (blue jeans, er, “jeans” to you) had a weight of, I’d guess, about 14 or 16 ounces; these days, if you’re really, really lucky, they might be 9, which is why they wear out so soon. And as for the quality of cell phones (oh, of course that’s worn out, it’s *soooo* old, it must be last year’s model…)

    mark

  • Lamar Owen wrote:

    I’ve got two rooms, with a number of servers in each room behind a firewall, *required* by US law (HIPAA & PII data). I’ve got compute clusters, and all the compute nodes are all 192.168.etc, and they MUST NOT
    CHANGE, EVER!!! All of those setups are behind their own switches.

    Tell me how I need NM to manage them.

    mark

  • I don’t; I’m familiar with LTSP and similar. In these cases a different group could be defined that includes all of the packages of the Desktop group but without NM, and called ‘LTSP Desktop Server’ or ‘Virtual Desktop Server’ or similar. But in X there is no real difference between a local X server and a remote one, other than the display number and the plumbing. Perhaps to make it even clearer the existing Desktop group could be renamed ‘Console Desktop’ but that’s a bit much, since most Desktop users are console users; that’s not to say that there is not a ‘Citrix Terminal Services’-like use case out there. And you can yum remove NetworkManager without major impact, as long as you make sure to re-enable the other network service.

    Interestingly, X turns the whole client/server thing on its head….. and always has. This is more of a ‘VDI’ type thing, though, and is not the common Desktop use case. Apollo had this problem licked for the local network years ago; the X way is a bit of a regression from the very non-standard way DomainOS did things. Vestiges of the DomainOS way still show up in the Andrew Filesystem, though.

    So, pardon the logic, you want the clients running on reliable servers and the servers running on the remote clients. (Yes, I know what I just said….. it’s supposed to be humorous……). But think about cloud desktops for a moment, and think about dynamic cloud desktop service mobility that follows you (network-wise, for lowest latency) to give you the best user experience. (No, VDI is not doing this seamlessly yet).

  • Lamar Owen wrote:

    I agree that WiFi networking is difficult, but ethernet networking, in my experience, is 99.9% stable. I wish NM would just stick to WiFi.

    Where?
    I haven’t come across any documents that explain clearly how NM is meant to be working, or eg what documents it is reading.

    I find the NM messages on /var/log/messages ludicrously verbose;
    and even after wading through these messages it is difficult to determine exactly what is wrong. In my view NM should spend a little time trying to make these messages more helpful.

    WiFi networks are unreliable – in fact if you study the algorithms involved it is almost a miracle (in my view) that they work at all. Ethernet networks exchange packets in a completely different way, and are very reliable, and also easy to understand.

    Personally, I see the advantages of systemd. But not nearly enough trouble has been taken, in my view, to make it simple to use. Just the fact that one has to type more characters to get to the same place
    (eg “systemctl start whatever.service” in place of “service whatever start”)
    shows a lack of consideration for users.

  • You forgot to mention interoperable along with effective and complete. When a network can run perfectly without a human controlling the names and addresses precisely at some level or another, regardless of what you plug into it, I’ll happily agree that automation would be an improvement. Right now I can’t even dream of that as a possibility. And so each component needs to be configured by a human – and stay that way – or it isn’t going to work with the rest of the world.

    Is that secure?

    Is that a standard that is universal?

    You just pushed the management somewhere else – you didn’t eliminate it.

    Your argument makes sense for devices that don’t provide a reasonable interface for their own configuration. But how does that apply to a server with a full Linux distribution?

  • No settings might be better. If I take my laptop from one site to another, keeping my previous resolv.conf intact, and NM doesn’t remove it, then my laptop will try to query the previous site’s DNS. They may not like that; depending on how paranoid they are, they may even take measures to block my traffic. Even if not, I may see some really bizarre DNS behavior which could be difficult to troubleshoot, whereas having no DNS at all will be very obvious very quickly.

    I don’t use NetworkManager, so I don’t know the answer to this question:
    is there a way to tell it not to clobber portions of your network configuration, and/or to provide it with defaults if it can’t determine values for a particular option? That seems like the most logical way to handle this scenario.

    –keith

  • My last test with Network Manager was a couple of years ago. At that time, a client that was set to boot using DHCP and NM would not set its hostname when such was provided with the DHCP response. That was a show stopper for me (none of my 200+ non-wifi clients have any configuration on them that identifies the machine in any way). Is this still the case?

    Steve

  • But freenx/NX/x2go put the big picture back the way it belongs. That is, both ends run proxy/caching stubs that can disconnect and reconnect from each other without breaking things. The host running the desktop (what you think of as the server) also runs a proxy X display server. The host with the physical display (what you think of as a client) runs a proxy client and server.

    If you’ve never used NX or x2go, try it. You really do want that caching/proxy layer to deal with network latency and give you the ability to disconnect and pick up your still-running session from a different client – and I mean client in the logical sense. X2go even has a handy way to set up remote rdp sessions to windows targets over its SSH tunnel and caching layer.

  • I define stable in this context as ‘behaving in a completely consistent and predictable fashion.’

    Truly stable systems allow rolling reboots with no interruption of services. EMC and others have had this licked for years with their storage arrays; Tandem had it solved for CPUs and RAM inside a single system image back in the 80’s (even if it was a bit, ah, interestingly implemented). A truly stable system remains stable even when every one of its constituent parts is inherently unstable. And a truly stable system is hard to make.

    I said it reminds me of it, not that it’s identical to it.

    Reliable and highly available networking using the the traditional Linux networking way is broken for many use cases, not all of which are desktop-oriented. It is broken, and it needs fixing, for those cases.
    And I *am* a fan of ‘if it ain’t broke don’t fix it.’

    A Bad Thing is not a capital crime, and Windows does do some things right, as much as I don’t like saying that.

    I really try hard to not be snide or offend very often, but the idea that something needs to stay a certain way either just because it’s always been that way or because we can’t do it the way someone else who we don’t like has done it deserves a bit of a reality check, really. Or do we want to go back to the Way It Was Done before this pun called Unix launched? I’ve run ITS on an emulated DECsystem 10 in SIMH; I’m glad a better way was developed.

    The perl mantra is and always has been ‘there’s more than one way to do it.’ NetworkManager is a different way to do it, and while far from perfect it is the means Red Hat has decided to use in EL7.

    And, if there were no alternatives I’d use it. It’s not that big of a deal to learn something different, even as busy as I am. Who knows, I
    might even find that I like it.

    Very correct; and in EL6 at least you can use the older way or the newer way. But if the newer way can be fixed to meet Red Hat’s needs, then they’re going to use it. If it can’t, well, the RH distributions’
    histories prove that they’re not afraid to pull the new and go with something else, too, when the need arises.

  • So you only have one network interface active at a time? Our servers typically have at least 6 NICs and it is pretty common to have at least 4 active on different subnets. And bringing up a new interface does _not_ mean I always want to use the DNS servers or default route DHCP might offer.
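
    With the traditional ifcfg files, refusing DHCP’s DNS and default route per interface looks roughly like this (a sketch; the option names are from the initscripts sysconfig.txt documentation, and the device name is illustrative):

      # /etc/sysconfig/network-scripts/ifcfg-eth2
      DEVICE=eth2
      ONBOOT=yes
      BOOTPROTO=dhcp
      # do not let dhclient rewrite /etc/resolv.conf
      PEERDNS=no
      # do not take the default route DHCP offers
      DEFROUTE=no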

  • I deleted my first reply. But you’ve twice used this argument and I’m afraid I can’t let it pass.

    I find this common argument execrable. It seems to suggest that if I don’t accept and embrace the new things that you do, I’m somehow a Luddite or my thinking is backwards. Is all your money in bitcoins yet?

    I run CentOS because I want stability. It works and I know how to work it. When something like this is changed, there is an opportunity cost for having to figure out how to get it back to the way I want it to be (compare to recent issues with Mozilla Chrome, uh, Firefox 29). In the aggregate, how much time will be wasted by admins getting this to work when 7 comes out?

    Cheers, Zube

  • For certain uses I agree with that; for others, not so much. Seamlessly pulling applications from an application server to the display server has its distinct advantages, particularly for certain expensive commercial applications.

    I’ve been using NX (both the commercial version and the free version)
    for remote telescope control use for over five years, acting as a proxy for Windows RDP. Works fine.

  • Not sure what you mean here or how it can be seamless, since there’s no general requirement for the CPU or OS of the display to have anything in common with the system running the application – unless maybe it is java which doesn’t need X for remoting. On the other hand, NX/x2go are running real X servers at the display end, so the same things should be possible with a little variation in the plumbing
    – and probably a loss of the ability to reconnect transparently.

    X2go is approximately the same, just with open source clients and more current development. And if you’ve updated your CentOS systems with the EPEL repo enabled recently, you are already running their version of the nx libs.

  • Yes, in enterprise environments there is a huge development/testing cost for every change that has to be made in configurations or operating procedures. I think it is unfortunate that there is no standard defined for configuration files or tools to stabilize them and make common operations across platforms possible in spite of the bizarre differences each vendor tries to add. Something like POSIX for system management…

  • That’s not what I think, nor is it what I said. Being unwilling to even try something new is being a Luddite; going back to the old because the new isn’t working is not being a Luddite. Being unwilling to try a newer version of something that didn’t work previously is also being a Luddite. Isn’t there a middle ground between ‘love it’ and ‘hate it?’
    I *am* a big fan of ‘if it ain’t broke don’t fix it’ but the old way for some use cases is indeed broken.

    But the simple fact is that NetworkManager is with us for a long time coming. You don’t have to use it if you don’t need its particular strengths, or if its particular weaknesses get in the way, but it is there and will be there for at least ten years. Like any other piece of software it has its advantages and disadvantages; use what fits your situation.

    While this paragraph started life being tagged as a snide remark, perhaps it’s not; it’s certainly not meant to be snide this time. I
    don’t see too many automobiles with tillers these days, nor do I see many first-generation steering wheels. But I see lots of ‘double tillers’ all the time (as handlebars are in essence double tillers).
    The double tiller works marvellously well for the motorcycle use case;
    can you imagine a motorcycle with a steering wheel (they may exist, but I’ve not personally seen one)?

    None of my money is in bitcoin, although I’ve wondered if the EPIC VLIW
    architecture of the IA-64 wouldn’t be ideal for mining purposes.

    As do I, for that particular meaning of ‘stability.’ And I have C5
    machines in production, and they’ll be in production until end of support. Heh, I still have a Red Hat Linux 5.2 machine in (not connected to the Internet) production.

    Is learning a different way of doing things always a waste of time? But then again, I’ve always enjoyed learning new things, and learning new ways to use old things (after all, I’m in the process of rebuilding a TRS-80 Model 4P with a new hard disk interface that uses SD cards simply because I find it to be fun). That is one reason I have the job that I
    do; learning new ways of using old things is part of my official job description, although not in those exact words.

  • But, Les, we’d have to make changes to get things standardized. It would be nice if the standard already existed, but it does not.

  • Quote 1:

    Quote 2:

    I dunno. “Heresy!” “reality check, really.” Sure seems to be the case to me. You certainly aren’t praising people who don’t embrace the change you do. I’ll drop it and let others decide.

    Yes, of course.

    Sure. Given that I have no need of NM, what part is broken that NM
    fixes for me? Or do the “some use cases” not apply to anyone who uses CentOS on static IP desktops?

    [snippity]

    Of course not, but alas, my time is limited. If I had nothing else to occupy my time, changes such as these would not trouble me so. What is very expensive, from an opportunity cost standpoint, is to have to learn to do something in a new way that does not bring me any new benefit. Perhaps I’m mistaken about this (goodness knows
    “mistake maker” is etched on the business cards I don’t have), but whenever new, more complex things replace simple things “for my own good”, I know that I’ll be spending a chunk of time that I could have spent in more fruitful pursuits.

    Cheers, Zube

  • The granularity is pretty poor, but you can choose, as the ‘Method:’ used for a connection, one of several types. One of these is ‘Automatic (DHCP)’ and the next one down is ‘Automatic (DHCP) addresses only’, which should, IIRC, leave resolv.conf alone (useful if you’re using something like OpenDNS).

    I’d personally like to see more configurability here, but that’s a post for another day.

  • Yes, I blame all our economic problems on the wastefulness of duplicated effort in learning to manage computers. That and everyone having to stock a near-infinite number of printer ink cartridges. Imagine what you could accomplish with a more productive use of all those smart person-hours and real estate.

  • *You* don’t, at least not at the moment.

    But others with a different setup might. That’s why we have the choice.

  • What I meant about Windows is that everything seems to be hidden behind some GUI interface, which leads people to not really understand the underpinnings of what is truly happening. NM seems akin to this, at least as of the last time I tried to use it, several years ago.

    I work in a development environment where we are constantly adding and removing systems and connections and for me it just gets in the way. I can quickly type ip a a …, ip r a … and be done with it.
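
    Spelled out, those abbreviations are ip address add and ip route add; a quick sketch with made-up addresses:

      ip addr add 192.168.1.10/24 dev eth0
      ip route add default via 192.168.1.1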

  • Choice is great, surprises not so much. And I find it surprising that NM sometimes runs, sometimes doesn’t, depending on seemingly unrelated things. And I still don’t understand how to control what it would do for, say, a dynamically inserted USB device. Is it possible to make it take ‘addresses only’ from DHCP in that context?

  • You do know that windows servers have a fairly complete set of command line options, don’t you?

  • No, I didn’t forget it.

    Dynamic DNS can be, yes. It depends upon the way the zone file is updated and whether it’s Internet-exposed or not.

    If we’re relying on mDNS we’re probably disconnected.

    But you’ve been around long enough to know that security and convenience are inversely proportional.

    RFC 2782. Becoming more common, and very common for VoIP networks using SIP.

    Why yes, yes I did push the management elsewhere. If you have a hundred thousand cloud nodes, where would you rather manage them; at the individual node level, or in a centralized manner? Go to a cloud panel, select ‘deploy development PostgreSQL server’ and a bit later connect to it and get to work. (Yes, I know you need AAA and all kinds of other things, but for the application developer who needs a clean sandbox to test something, being able to roll a clean temp server out without admin intervention could be very useful).

    Embedded devices, with what I would consider to be full Linux distributions on them, with nothing more than a network device to manage them, already exist. Network device meaning Wi-Fi, too. NAS appliances are but one application; the WD MyBook Live, for instance, has a complete non-GUI Debian on it, and there are repos for various packages (for grins and giggles I installed IRAF on one, and ran it with SSH X forwarding to my laptop). Is a NAS appliance not a server?

  • Those would be bugs, and bugs need fixing. But they can’t be fixed if they’re not reported.

    NetworkManager doesn’t work in terms of interfaces, but in terms of connections. I’ll have to try it with a USB Wi-Fi NIC before I can answer completely, but when you create a connection you get this option, and I would think (and I’m going to try it, since I do have a USB Wi-Fi NIC at home, just not sure if it’s supported by ELRepo or not) that upon insertion a dialog to create a connection will come up, and you select the option from the pulldown in the IPv4 tab. The udev framework allows connections to ‘belong’ to different NICs, as far as I can tell, and is what makes the connection persistent across reboots in that sense. But I reserve the right to be wrong.

  • To the best of my knowledge, DNS is queried for the hostname info if a connection is set to come up at boot, and I believe it’s the first connection that comes up that gets the prize.

    I get a hostname upon boot with my laptops and my desktops, wired and wireless, with CentOS 6.5.

  • A GUI and a registry; and I agree with that assessment.

    There is a CLI to manage (but not edit) connections with NetworkManager in C6. EL7 is supposed to improve the CLI functionality of NM, but that remains to be seen (meaning I’ve not taken the time to try it, as it’s not really high on my list of priorities). The command is nmcli, and it takes a bit of reading to see what it is doing. It also takes a bit of thought, since NM is connection, rather than interface, oriented.
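
    A hedged illustration of the EL7-era nmcli forms (as documented in the RHEL 7 networking guide; the connection name is illustrative):

      nmcli con show                  # list connections
      nmcli dev status                # list devices and their state
      nmcli con mod "System eth0" ipv4.ignore-auto-dns yes
      nmcli con mod "System eth0" ipv4.dns "192.168.1.1"
      nmcli con up id "System eth0"   # re-activate with the new settings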

    Earlier versions do seem to be more opaque than they probably should be, which is probably one of the primary reasons NM is in the Desktop group, and you get the Traditional networking if you don’t install that group, or NM on its own, at least in C6.

    I understand your pain, and your needs. Many days my production environment feels that way.

  • So how can it be dynamic, but controlled at the same time?

    Sort-of. You just have to work out convenient operations over secure channels.

    I’ll take that as a ‘no’ for the general case.

    I’d like to manage things the same way, regardless of the count.

    How is that easier than saying ‘ssh nodename yum -y install PostgreSQL-server’? Something I already know how to do and how to make happen any number of times – and something that works on real hardware and in spite of the differences in VM cloud tools.

    At the expense of being black magic that won’t work outside of that environment. I don’t like magic. I don’t like things that lock you in to only one vendor/tool/OS.

    Actually, I’d like to see a single device do all of that gunk plus have an HDMI out to act as a media player so a typical home would only need one extra ‘thing’ besides the computer/tablet/phone. But it doesn’t matter – you still have to configure it somehow. Do you want things to guess at your firewall rules?

  • Lamar Owen wrote:


    Which leads to a thought: you said that the time to “vote” on NM was long past. My response was that none of *us* saw it, or were solicited; my hope is that now that we’re partnered with upstream, we might be.

    I do have a reason for that hope… remember the thread a month or so ago, where *we* *were* asked about tcp-wrappers? For things that mean major changes – systemd, NM, etc, I’m hoping that, in the future, we *also* get solicited in the same way for our views.

    mark

  • Sure is; but we do bonding for a reason.

    There are other interfaces, like various VPN’s and WWAN cards, where NM’s notion that non-bootup connections belong to users is a useful thing.

    The upstream documentation has some info; the man pages (nm-applet(1), nm-connection-editor(1), nm-online(1), nm-tool(1), nmcli(1), NetworkManager.conf(5), nm-system-settings.conf(5), and NetworkManager(8)) all have useful information. I’m sure it could be improved, but so far it’s been useful to me. I should probably amend my initial sentence to ‘NetworkManager is fairly well documented’, I guess.

    Agreed. They’re almost as opaque as SELinux avc denials.

  • Yes; I don’t recall if I commented or not.

    And it was the Fedora train that solicited the input, not the EL train
    (using a cisco-speak term).

    And maybe this community’s input may have bearing….. on EL8. EL7 is pretty much a done deal, and EL6 is way past a done deal.

  • Well the one and only time I configured an interface on windows from the command line I couldn’t believe I had to type some great big string to identify the interface, of course I had looked up how to do it on the internet so there may have been a shorter way to do it.

    I guess coming from a history of starting out on an IBM 1130 and proceeding thru Burroughs, NCR and Data General OSes and hardware I just got used to understanding at a very low level and doing things without the help of some fancy GUI.

  • Les Mikesell wrote:

    Stocking all those toner cartridges? You’ve seen my basement server, er, sorry, “computer lab”?

    mark “well, they said try to get three years’ worth, with the
    sequester and all….”

  • Les Mikesell wrote:

    That depends on how tight management has them locked down…

    mark “I know you’re in your aa account, and you installed this
    inventory software, but you *can’t* delete that old log
    file in that directory created during testing….”

  • Lamar Owen wrote:

    Great – that many manpages….

    Just like what Lose, I mean, WinDoze, logs… paragraph long “error messages” that are mostly useless and information-free.

    mark

  • No, everyone is in the same boat in terms of the damage from lack of interoperability standards. Makes me wonder why we have cars that are all approximately the correct widths to fit on a road and brake and accelerator pedals in the same relative positions.

  • Les Mikesell wrote:

    a) Human fits to where pedals are. b) I still go with the Roman milspec on main vehicle wheel widths….

    mark

  • So, have you ever had to deal with a CentOS box and multiple NICs. Especially one where you’ve cloned it or moved a disk to a new chassis? Apparently there is just not a good way to identify interfaces.

    Well, you can do it that way on windows if you want. It’s just, ummm, different. Like that thing we were talking about here.

  • Set up a DD-WRT consumer router for use with OpenDNS by way of dns-o-matic and you’ll see how. Now replace OpenDNS and dns-o-matic with your own services.

    How is an RFC quote and an example of a running standardized application using the feature a ‘no?’ Please read https://en.wikipedia.org/wiki/SRV_record and see just how standardized it is.
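
    For reference, the RFC 2782 record format, with an illustrative SIP example (names and values made up):

      ; priority 10, weight 60, port 5060, target sipserver.example.com
      _sip._udp.example.com.  86400 IN SRV 10 60 5060 sipserver.example.com.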

    How do you guarantee a clean sandbox? In the cloud case, every VM
    rolled is as clean as the template that generated it, and gives you a known starting point. And I use PostgreSQL as the example since I
    maintained those RPMs for five years, and I understand the need for a clean sandbox, having learned the hard way what can happen if you don’t take the care to make your sandbox clean (this was pre-mach, and definitely pre-mock, and buildroots had to be carefully regulated since they weren’t cleanly sandboxed by mock and kin).

    OpenStack will do most of what I’m talking about already.

    That last point is exactly what UPNP was supposed to solve.

    Such a device as you want exists; see the GuruPlug Display and descendants. They are definitely tinkering boxen, and they do have their issues (I have a GuruPlug Server Plus with the eSATA port and the infamous overheating problems) but they are available.

  • Model T’s had the throttle on the steering wheel, along with a manual ignition advance; the wheel brake was a hand lever, and the left pedal (where you’d expect a clutch) operated the bands on a planetary transmission. Foot down on the left pedal was low gear, foot up was high gear. The middle pedal was reverse. The rightmost pedal (where you’d expect gas) was a transmission brake.

  • Lamar Owen wrote:

    Um, er… DD-WRT is off-topic, so if anyone wants the *REAL* RANT and howto, contact me offlist. The short version is a) that’s the most amateur, in the worst sense of the word, project I’ve ever seen, and b) it took me about a month, and three or so debrickings, to get a good version….

    mark

  • Yep, do it all the time – the first two things I do are:

      rm -f /etc/udev/rules.d/70-persistent-net.rules
      rm -r /etc/sysconfig/network-scripts/ifcfg-eth*

    and then reboot.

  • So can I expect it to work with ssh? SMTP? SNMP? Or any application I’m likely to use? Who’s going to open the corresponding firewall holes?

    Either clonezilla or a minimal OS install to start. Or if it is a VM, copy/revert an image. But except for development build systems we mostly work with hardware.

    On real hardware?

    Great… Why have a firewall when holes open by magic at an insecure application’s request?

    I’d really like at least a 4-port switch and room for at least a pair of 2.5″ drives in what could still be a relatively tiny case. That is, combine everything in a typical router, nas, and media player. Current CPUs should be able to handle all those tasks at once.

  • As long as there is unique information to google, it will work out. And while I detest them, the Windows hexadecimal codes are very good for google.

    I just want to see BugHlt:SckMud again. (
    https://groups.google.com/forum/#!topic/comp.sys.tandy/rpZRWj9Y0nE ) At least let me laugh when it all comes crashing down. And, yes, you probably do want to read that post, but do it on your break. It’s one of the most classic Usenet posts of all time.

    I used the avc denial messages as an example for a reason; there is a tool that will help you with those. A similar troubleshooting tool for NM messages could (and should) be written.

  • So, now you’ve got 6 NICs connected to 6 different switches. Which name is which? This is a really fun exercise when the box is remote and you are trying to tell someone used to configuring windows systems how to get it to a point where you can SSH in.

  • That is of course not what I wrote. The above is just one example where I might prefer an empty resolv.conf instead of an old (and possibly incorrect) one.

    So in this case you might prefer an old resolv.conf instead of a new one or an empty one. I don’t recall anyone ever writing that any of these scenarios is always preferable over the others.

    At any rate, for CentOS 6 we can still say “if you don’t like NM, don’t use it”.

    –keith

  • Yes, but we are approaching the end of an era. As soon as 7 is out, you won’t be able to get applications for 6 and you’ll be forced to switch. Oh wait, that already happened for flash and chrome, didn’t it?

  • Keith Keller wrote:

    Does this happen?
    I’ve never encountered it. In my case, the probability of my DNS settings in resolv.conf not working at a new site is close to zero, so you are replacing something that might possibly not work with something that is certain not to work.

  • I guess I am confused; you haven’t ever worked with the hardware you are installing the cloned drive in? If that is true, then I guess you have a problem.

  • I haven’t followed this thread too closely, so if this has already been stated, please forgive me. Judging from both recent editions of Fedora and the free beta RHEL7, you don’t HAVE to use NetworkManager. You will have to manually turn it off and turn network on, and judging by later versions of Fedora (though not at all deeply researched by me) you may need to use the system-config-network-tui tool rather than just editing
    /etc/sysconfig/network-scripts/ifcfg-*.

    Unfortunately (and freely admitting much of this may be an old person’s get-off-my-lawn attitude), it does seem that the Fedora developers are working for the single-user laptop, and have little concept of system administration – or, to be fair, little interest in things for the system administrator – and unfortunately, Red Hat just throws these things into their next enterprise version without checking.

    NM is not going to go away. However, at least for RHEL7, it should be fairly easy to remove it and use /etc/init.d/network.

  • “Working with it” doesn’t matter, and NetworkManager or not doesn’t matter. Interfaces get named in a random order unless it is a Dell with the netbios naming scheme, and probably then only for the motherboard NICs. Our servers generally have on-board Broadcom and Intel cards, and the names within a set may stay ordered, but the cards and motherboards will flip randomly if there are not already matching items with the correct MAC addresses in the udev rules file (in 6.x; in 5.x, having a matching MAC in the ifcfg-eth? file was enough to rename the device to match).

    This is just a response to your comment about windows names being difficult to know (a long-standing problem on its own) and doesn’t relate much to the NetworkManager discussion. If you only have one NIC (or maybe even a pair on the motherboard, or a single card) you might always see the same ordering – if so, consider yourself lucky, because that is not the general case. We usually have to run through a drill like ‘ip link ls’ to get a list of interface names, then iterate through them with ‘ifconfig up’ and use ethtool to see which has link up, connecting one cable at a time (sketched below).

    So, in my opinion, there are problems with the old system that need to be fixed, but they aren’t the things that NetworkManager does. A way to restore a backup to an identical machine and have the same NICs in the same positions get the old configurations would be nice. Or at least a way to know the names of the NICs in the same positions. (And if you go back to CentOS 3 they did – detection was single-threaded back then and would always probe in the same order.)
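
    That drill, as a hedged sketch (interface names vary per machine, and you would still connect one cable at a time):

      for i in $(ip -o link show | awk -F': ' '{print $2}'); do
          ip link set "$i" up
          ethtool "$i" | grep -q 'Link detected: yes' && echo "$i has link"
      done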

  • It was not explicitly stated, so I appreciate the succinct summary. Thanks!

    Can you recall what gave you this impression? It’d be frustrating to me to have to keep my hands off of the config files directly. (If not, I understand; if I really want to know that badly I should just check it myself.)

    Could this be a SIG in the future? “CentOS NM-Haters SIG” ;-)

    Does RH really “just throw these things in”? It seems like they would annoy many of their more tech-savvy customers with moves like this one
    (if it were to happen).

    –keith

  • I feel for you then. I guess we have been lucky in the 6 or 7 hardware platforms we have used that the NICs (minimum 3, usually 4 or more) have always kept the same names in the same order.

  • Keith Keller wrote:

    I think I need to check with my manager – we do have a few RH licenses –
    and maybe I, or several of us, should put in an enhancement request for 7:
    DO NOT INSTALL NM by default *EXCEPT* for either a desktop, or, better, a laptop install. DO set network up by default in all other cases.

    Yes?

    mark

  • I think it is really a side effect of splitting the RH enterprise version and Fedora development into two different things. New development now all comes from the Fedora side where they just want change and don’t care about stability. That’s probably a good thing for desktop users since the desktop environment hasn’t historically been that great, but the server side has been relatively complete for ages and businesses really don’t want to have to rewrite all their applications and deployment processes for every new release. Back in the old days it would have been the same group contributing new development and consuming the final product so the direction might have been better aligned.

  • Maybe.

    The wired use case that is most compelling for NetworkManager is that of per-user 802.1x, and enforcing that at login. I could use that myself;
    coupled with something like Packetfence on the backend (with supported switches) you could get attached to any of a number of different VLANs based on who you are rather than what machine you happen to be using.

    At an educational institution (or really anywhere that you might have guests who need certain access and users who need other access or accesses) being able to partition things like this is highly desirable, and that is the great promise of 802.1x authentication. You log in to the desktop, and you get your desktop and only those LAN resources for which you are authorized, and it’s tied to your user ID and not to the machine.

    This sort of thing is ideal for labs or any other public access machine.

    And, yes, this is a desktop use case, not a server one.

  • That’s actually an illusion. If the detection pulls it up in a different order, then by MAC address it will get put in the old order, at least with EL6. Here’s a ‘grep’ excerpt showing the fun:
    ++++++++++
    Apr 21 14:39:25 www kernel: udev: renamed network interface eth0 to rename2
    Apr 21 14:39:25 www kernel: udev: renamed network interface eth1 to rename3
    Apr 21 14:39:25 www kernel: udev: renamed network interface eth2 to eth0
    Apr 21 14:39:25 www kernel: udev: renamed network interface eth3 to eth1
    Apr 21 14:39:25 www kernel: udev: renamed network interface rename3 to eth3
    Apr 21 14:39:25 www kernel: udev: renamed network interface rename2 to eth2
    ++++++++++

  • ‘Not embracing’ and ‘being actively antagonistic to any change’ are two different things. The Luddite is antagonistic to any change; one who is just cautious is careful what one embraces. The tiller versus steering wheel analogy was a bit of hyperbole, and was meant to be. The reality check is that things are moving on, and if one wants one’s skills to stay current one must learn those skills, even if one doesn’t embrace the changes that require those new skills. That’s the middle ground; not actively for or against, just staying up to date on the state of the art. And I’m neither rabidly for NM, nor am I rabidly against NM, but since it’s there I’m going to take the time to learn why it’s there and see if I can use it in those cases where it makes sense to use it, just like any other technology I’m considering.

    Are you sure you will never have need for NM?

    Totally static desktops, no.

  • Lamar Owen wrote:


    Just looked up 802.1x. Having not read the entire wikipedia article, is this an alternative to kerberizing it all?

    mark

  • Yes, the names are nailed down after the first run creates the
    /etc/udev/rules.d/70-persistent-net.rules with the MAC addresses for that box. But the first detection is more or less random. If you pop that disk into a different chassis, if you don’t remove that file you’ll get all new names with higher numbers and if you do remove it you get the same names but random ordering again. And the ifcfg-eth?
    files that also have the MAC address entries will be ignored if the names and MACs don’t match.

  • Lamar Owen wrote:

    At work, we *only* give out IPs to MAC addresses in the dhcpd configuration files. I’m thinking to do this at home, too. (Heh – try a driveby logon to *my* home network….)

    mark

  • Les Mikesell wrote:

    What I do when I upgrade a box via rsync is either rm 70-persistent-net.rules, or look at the MAC addresses beforehand and edit the rules so that they’re correct for this box before the reboot.

    mark

  • Like I said in the part you snipped, after we clone the drive I always do:

        Yep, do it all the time – first two things I do are:
        rm -f /etc/udev/rules.d/70-persistent-net.rules
        rm -r /etc/sysconfig/network-scripts/ifcfg-eth*
        and then reboot.

    The above makes them be rediscovered on the reboot.

    The LAN ports are numbered on the back of the unit and I have never had them not come up in the correct order – in fact, it would cause us untold grief if they did.

  • I forgot to add this is over 750 systems.

  • If it is a box we’ve used before, ocsinventory will have reported the last mac/ip pairing, but what normally happens is that new boxes are shipped to their install locations and racked up by people that know all about configuring windows but not so much about CentOS.

  • They must all appear in the same motherboard/card locations, then. Mine stay in the same order within a card, but the card and motherboard sets jump around. And yes, it causes grief.

  • Two things. One, an extremely knowledgeable person on the Fedora forums has said that there are now various scripts buried in various places; that person has, time and time again, shown themself to be reliable.

    The second is from an install where I manually edited the files, and the network would not start. I’ve been manually editing for years, and I am almost positive that it was not a syntax error on my part. Each time, after the machine booted, I could manually run ifconfig or ip, or dhclient, and it would then come up. Finally, I used system-config-network-tui and it came up. As I said, not a deep investigation – for example, I didn’t do a diff of ifcfg-eth0 before and after using system-config-network-tui.

    On most other installs of F20 prior to that, I, due to the aforementioned posting on Fedora forums, just used system-config-network-tui without trying manual configuration–hence, not deeply tested.

    Well, the ones I can think of off the top of my head are allowing any user to update a signed rpm without authorization, and showing all user names at the login screen. Those are two household laptop or desktop features that make sense for a single user (or in a household) but not in business – to the point that not even Windows does it with their business-class systems.

    Other things, such as NM, are debatable. To me, and apparently many others, they are a home user or workstation at best, feature that shouldn’t be on a server.

    I suspect many people feel the same about systemd as well. It makes things boot faster, but also seems more likely to choke if something doesn’t come up. A recent job change has put me more in the BSD world than Linux these days, so I haven’t been following recent developments as closely as I used to do.

  • I would suggest that it be installed and used by default for a “beginner” install, and specifically asked about in an “expert” install. I don’t see the point in making a distinction between server, desktop, or laptop, because an expert setting up a laptop might prefer not to use NM, and a beginner setting up a server might need NM, or might not even know how to configure a network without it. (I know, beginners probably shouldn’t be installing servers, but they’re going to do it anyway.)

    –keith

  • When I started this thread a week ago, I certainly did not expect this many replies. Without a doubt it seems Network Manager is a controversial topic. I still haven’t worked out my Network Manager woes and just lost an hour troubleshooting a Golang webserver which wouldn’t start.

    Apparently in Golang’s net package, there is a DNS resolver function that’s called whenever a server is started. That function depends on a working
    /etc/resolv.conf – As per usual, the /etc/resolv.conf file turned out to be the blank template NetworkManager always creates. The webserver starts now, but this /etc/resolv.conf will certainly be blown away by NetworkManager the next time the network service restarts.

    I have one idea as to why this problem persists. This file:

    ll /etc/sysconfig/network-scripts/ifcfg-eth0

    -rw-r--r--. 1 root root 44 Apr 26 22:21 ifcfg-eth0

    Is it meant to be executable? Being a configuration file, I’m assuming it doesn’t need to be. Am I wrong?

  • Are you not getting a _correct_ resolv.conf from NetworkManager? Why not?

    This doesn’t seem like it is Go related at all — if you want any DNS to be working at all, pretty much all resolvers need that file.

    You’re not wrong. This is not your problem.

  • That file is ‘sourced’ by other network scripts so doesn’t have to be executable, but the contents set environment variables for other scripts. Or so I believe. No doubt someone will correct me if I am wrong. 8-)

    Cheers,

    Cliff

  • A hack. Set up your resolv.conf as you need it. Then, as root:

      chattr +i /etc/resolv.conf

    Now NM can configure the interfaces but can’t change /etc/resolv.conf.
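
    To confirm the immutable flag took, and to undo it when you need to edit the file again:

      lsattr /etc/resolv.conf
      chattr -i /etc/resolv.conf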

    Linux nogs.tonyshome.ie 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25
    19:59:55 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

  • Indeed, that’s an ugly hack.

    You should rather set

      PEERDNS=no
      DNS1=
      DNS2=

    in /etc/sysconfig/network-scripts/ifcfg-eth0.

    The options in this file are documented in:
    /usr/share/doc/initscripts-*/sysconfig.txt

  • Been there, done that.

    NM creates the opposite problem for places that have “lights out” data-centers without trusted (much) remote-hands support, however… when a vendor goes in and swaps a motherboard out of a flaky server… now it’s looking for specific MAC addresses that don’t exist anymore… and getting the average “on-site tech” from a vendor to give you MAC addresses prior to swapping the hardware that’s 1000 miles away, is pretty hit-or-miss. IMHO.

    Really isn’t NM’s fault, and swapping out Ethernet cards (back when they were actual cards… ha…) never has been safe remotely… but I like picking on NM. :-)