Fedora Change That Will Probably Affect RHEL


This might show up twice, I think I sent it from a bad address previously. If so, please accept my apologies.

In Fedora 22, one developer (and only one) decided that if the password chosen during installation wasn’t of sufficient strength, the install wouldn’t continue. A bug was filed, and there was also a great deal of aggravation about it on the Fedora testing list. So, it was dropped.

However, like a politician (in the US and probably other countries) who quietly reintroduces a bad law after it has been exposed, it seems they are doing it again for F23, judging from a test installation. I’ve filed a bug if anyone wants to chime in and ask them not to do it.

https://bugzilla.redhat.com/show_bug.cgi?id=46771

81 thoughts on - Fedora Change That Will Probably Affect RHEL

  • Kevin Fenzi responded to my post on the Fedora testing list saying that at least it is a FESCo decision this time, not a one-man one, and asked for patience. (My knee-jerk response was to ask why they are even discussing it after last time, but I refrained.) Thank you for the links, Chris.

  • I can certainly see why it can annoy certain people.

    I think a better solution to suit both worlds would be simply to have a boot flag on the installation media, such as
    “passwordcheck=true/false”, to enable/disable the strength-check feature of password entry, and simply show a text box (and confirm) when it is disabled, without any password checking.

    This way those who need the check disabled for quick deployments can do so and put a stronger password in later at their own time and choosing.

    Meanwhile those who wish to have the password checked can also do so.

    Thus, both people happy :-).

    Personally, I am neither against the idea, nor for it. It doesn’t affect me as I usually use strong passwords regardless.

    Kind Regards, Jake Shipton (JakeMS)
    Twitter: @CrazyLinuxNerd GPG Key: 0xE3C31D8F
    GPG Fingerprint: 7515 CC63 19BD 06F9 400A DE8A 1D0B A5CF E3C3 1D8F
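
    For what it’s worth, something close to this already exists as a kickstart knob rather than a boot flag: Anaconda (from roughly the F22 era onward) accepts a pwpolicy command in the %anaconda section. A sketch, with illustrative values (check the kickstart documentation for the exact option names and defaults your release supports):

```
%anaconda
# Warn about a weak root password, but allow it to be accepted anyway:
pwpolicy root --minlen=6 --minquality=1 --notstrict
%end
```

    This keeps the check visible while leaving the final decision with the person doing the install.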

  • https://xkcd.com/1172/

    It’s practically a law that every time someone’s workflow is broken, they request an option to change it. Personally, I’m against it. Putting a weak password into the installer *is* a request for a weak password. There’s no reason to request a weak password twice (with a boot arg and a weak password) when the alternative is to graphically represent the password strength and let the user decide.

    I don’t like the change, but at the same time I do all of my installs with kickstart, and such installs are not affected. Kickstart files can contain a hashed password, and since a hashed password can’t be checked, it can’t be rejected. Thus, any decision FESCO makes won’t affect me at all.
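
    For reference, the hashed-password path looks like the kickstart fragment below. Because the hash is opaque to the installer, no quality check can be applied to it (the salt and hash shown are placeholders, not real values):

```
# Generate a SHA-512 crypt hash with, e.g., `openssl passwd -6`,
# then paste it in verbatim:
rootpw --iscrypted $6$examplesalt$0placeholderhash0
```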

  • Something like this?

    “Sorry, your password has been in use for 30 days and has expired –
    you must register a new one.”
    New password
    roses
    “Sorry, too few characters.”
    pretty roses
    “Sorry, you must use at least one numerical character.”
    1 pretty rose
    “Sorry, you cannot use blank spaces.”
    1prettyrose
    “Sorry, you must use at least 10 different characters.”
    1fuckingprettyrose
    “Sorry, you must use at least one upper case character.”
    1FUCKINGprettyrose
    “Sorry, you cannot use more than one upper case character consecutively.”
    1FuckingPrettyRose
    “Sorry, you must use no fewer than 20 total characters.”
    1FuckingPrettyRoseShovedUpYourAssIfYouDon’tGiveMeAccessRightFuckingNow!
    “Sorry, you cannot use punctuation.”
    1FuckingPrettyRoseShovedUpYourAssIfYouDontGiveMeAccessRightFuckingNow
    “Sorry, that password is already in use.”

    BR, Bob, who thinks the password policy on my machines is my concern.

  • One thing that people don’t understand or don’t want to address is that most KNOWN instances of a Linux machine being hacked/owned/pwned/taken over (substitute your word here) and then rooted happen because of weak passwords.

    It is certainly one’s own right (at least in my country) to be completely and utterly stupid in your decision making … but if you have paying clients whose information lives on machines you manage, and that information gets stolen because of weak passwords, then expect to shell out some cash for your stupid decision making.

    Thank God we are not still using the computer code we did in 1991 when Linux started. Changes impact people, but good for us that the code has changed and moved forward.

    If people want weak passwords, I guess you can let people have them .. but it is an idiotic thing to do. It is also one that makes you liable if you lose someone’s privacy information because of your decision.

    That is just MY opinion .. yours may vary.

    Thanks, Johnny Hughes

  • Gordon, just to make sure you (and others on the list) understand .. I
    have no issue with your specific post .. I probably should have replied to the OP’s mail instead, but yours was the last I read on this thread.

  • The new rules are nowhere near that stringent:

    http://manpages.ubuntu.com/manpages/trusty/man8/pam_pwquality.8.html

    Much of the evil on the Internet today — DDoS armies, spam spewers, phishing botnets — is done on pwned hardware, much of which was compromised by previous botnets banging on weak SSH passwords.

    Your freedom to use any password you like stops at the point where exercising that freedom creates a risk to other people’s machines.

    In the previous thread on this topic, 6 months ago, I likened reasonable password strength minima to state-mandated vaccination. Previously-defeated diseases have started to reappear as the antivax movement has gained momentum. Polio came back in Pakistan, measles in California, and whooping cough in Australia, all within the last year or two.

    https://en.wikipedia.org/wiki/Vaccine_controversies

    So no, your local password quality policy is not purely your own concern.

  • Once upon a time, Warren Young said:

    Since most of that crap comes from Windows hosts, the security of Linux SSH passwords seems hardly relevant.

    Your freedom to dictate terms to me stops at my system, which you cannot access even if I set the password to “12345”. You are making an assumption that every Fedora/CentOS install is on the public Internet, and then applying rules based on that (false) assumption.

    When root can override a password policy after install, forcing a policy during install is nothing but stupid and irritating. Despite what was said on the Fedora list, this was an active change made by the Anaconda developers (removing the “click again to accept anyway” option), so they should expect people to complain to them and be prepared to handle the response.


    Chris Adams

  • Well, you are welcome to your opinion and Warren is welcome to his.

    But in relationship to CentOS Linux, this discussion is completely irrelevant.

    If RHEL releases source code that does not accept weak passwords, then we will rebuild that source code for CentOS Linux. If they later change the source code to add back weak password support, we will rebuild that too.

    Whether we like or dislike the policy doesn’t matter in the slightest .. we don’t make those kind of choices in CentOS Linux .. we rebuild the RHEL source code.

    For what it’s worth, at the Fedora level, we are extremely, extremely unlikely to ship code which does not allow relatively easy site-local configuration of password policy, regardless of whatever defaults we choose. It’s also likely that Red Hat will choose different defaults from Fedora for RHEL. That’s not my department, but I would certainly be surprised if *that* comes out in a way that doesn’t make setting your own policy simple as well, because that’s something people want and need.

  • Your freedom to have sshd enabled by default stops at the point where exercising that freedom creates risk to other people’s machines.

    I can also use that logic with password-based auth by default rather than PKA by default.

    A rather strong argument can be made (much stronger than the one for nudging password quality from very weak to merely weak) for putting sshd on a default 7-day disable timer. That is, by default, after 7 days sshd is stopped and disabled. A recurring item in the autopsies of pwned computers is the quickly provisioned server with a standard, simple in-house password, set up with the idea that after configuration the password will be changed, or more likely sshd will be disabled or firewalled off. In reality, all the bad practices happen because this quickly provisioned machine is forgotten about for one reason or another, and then it gets owned.

    Well, disabling sshd after 7 days would stop all of that and yet doesn’t prevent initial configuration.

    More likely, I think we’ll see either sshd disabled by default or PKA required by default, both being provisioned via Cockpit. And that’s because the minimum password quality under discussion is still rather weak when it comes to putting a system directly on the Internet, or facing it via port forwarding, while taking no other precautions. And yet that password policy is too strong for many legitimate use cases where the environment isn’t high-risk for such passwords.


    Chris Murphy

  • Botnets are terrible, it doesn’t matter how many of them there are or on what platform. The reason why they exist is bad practices. So there needs to be better application of best practices, and best practices need to be easier and default and automatic whenever possible. That applies to all platforms. So I’m not opposed to changes in Fedora, and by extension eventually to CentOS and RHEL, but they have to be balanced out.

    Windows Server has PowerShell disabled by default. The functional equivalent, sshd, is typically enabled on Linux servers. So I think it’s overdue that sshd be disabled on Linux servers by default, especially because the minimum password quality under discussion is still not good enough for forward-facing servers on the Internet with static IPv4 addresses. They will get owned eventually even with the new minimum password quality, and that’s why I see password quality as the wrong emphasis – at least for workstations.

    Exactly. My dad will absolutely stop using his iPad if it ever requires him to use anything more than 4 numeric digits for his password. The iPad never leaves the house.

    Future concern is IPv6 stuff, now that Xfinity has forcibly changed their hardware to include full IPv6 support. I have no idea if this is NAT’d or rolling IPs or what. But the iPad has no remote services enabled. And the Mac has SSH PKA required. So I’m not that concerned about their crappy login passwords. Their online services are another matter, those I’ve made very clear they will be strong or they don’t get to play.


    Chris Murphy

  • All of the routers I’ve seen merely firewall inbound traffic, allowing none. There’s no need for NAT or rolling IPs.

  • The whole idea of IPv6 is that, with proper authentication and encryption, we can access any device anywhere. So firewalling everything centrally would appear to break that.

    OK, but imagine making that the default, and how many workflows that don’t need that level of authentication will be bothered in one form or another: (a) change the workflow, or (b) learn how to revert the behavior.

    It’s one thing to disable sshd by default because pretty much everyone familiar with a particular distribution will be familiar with console/OOB enabling of sshd, or eventually being used to initially accessing a web interface to enable such a service.

    To be pedantic about it, the equivalent of PowerShell is NOT sshd, it’s bash/ksh/csh/zsh/sh … PowerShell does not by itself allow external connections; you’d need to configure a telnetd or sshd server to allow that (or Remote Desktop or VNC or …).

  • WinRM, more likely. Though I understand that MS is working on an SSH
    server for PowerShell for some future release.

  • I think you’re assuming that IPv6 carries with it a policy, when it is merely the mechanism.

    In IPv6, everything should have a unique, routeable address. Whether you can reach an address will be subject to local policy in the future, just as it is now. And just as you cannot currently reach a device in a Comcast/Xfinity residential network under IPv4, you can’t under the default IPv6 rules either. I would call that the principle of least surprise.

    You can open inbound IPv6 traffic for specific hosts on the routers I’ve seen.

  • Cite?

    Not that it’s relevant, since even if the skew were 9:1, that’s no excuse for not trying to clean up our 10%.

    That sounds an awful lot like the old canard, “Your right to swing your fist stops at the tip of my nose.” Go down to the local drinking hole tonight and start swinging your fist to within a millimeter of people’s noses, and see how far that legal defense gets you.

    The only reason we don’t have specific laws that allow the government to force specific password quality policies is that we’ve been trying to self-govern. If you fight our efforts at self-government, you open the door to heavy-handed external government.

    No, I am making the assumption that the vast majority of CentOS installs are racked up in datacenters, VPS hosts, etc. I am further assuming that most of those either have a public IP, or are SSH-accessible once you get past a LAN/WAN border firewall.

    A border gateway doesn’t help you with weak SSH passwords if a box on the LAN gets pwned and turned into an SSH password guesser.

    The effort to get stronger password minima into Fedora goes back at least four years:

    https://fedoraproject.org/wiki/Features/PasswordQualityChecking

    If it’s finally time to get it into Fedora, it’s *long* past time to get it into RHEL/CentOS, since those boxes are statistically far more likely to be directly exposed to the Internet.

    That’s only true if the majority of people will in fact override the default policy. But as I have repeatedly pointed out here, the stock rules really are not that onerous. They basically encode best practices established 20 years ago.

  • Other than DDoS, which is a problem of the engineering design of how the network operates (untrusted anything can talk to untrusted anything), what “risk” does a cracked machine owned by an idiot create for other people’s machines, whose owners have taken appropriate security measures, that isn’t easily handled in minutes, if not seconds, by fail2ban?

    Equating this to “vaccination” is a huge stretch. It’s more like saying the guy who left his front door unlocked all day is a threat to the neighbor’s house. Other than the perennial brokenness of a worldwide untrusted network piped straight into your home or business without an appropriate firewall and/or monitoring of said silly network, there’s almost zero risk at all to the “house next door with a deadbolt and security bars”.

    You can’t “catch the insecure”… hahaha… it’s not a virus.


    Nate Duehr denverpilot@me.com

  • The current behavior in Fedora and CentOS lets you click Done twice and bypass the weak password complaint.

    In order to protect an Internet-facing system with ChallengeResponseAuthentication (rather than PKA), the minimum password quality would need to be at least initially onerous. Whereas if things are properly configured such that SSH is only used internally, all you have to worry about are internal attacks, which are hopefully rather rare.


    Chris Murphy

  • You’re offering a false choice. We do not have to choose between cutting the tree down or leaving the fruit to rot on the twig. Fedora is rightly choosing to pick this low-hanging fruit.

    If you want a Linux distro that doesn’t ship with sshd enabled by default, that is already available. Given that CentOS does ship with sshd enabled by default, it makes sense that it should not allow itself to be so badly misconfigured that it allows trivial exploits.

    That’s more low-hanging fruit; we might get there someday.

    They turned off “PermitRootLogin yes” and “Protocol 1” in EL6 or EL7, the previous low-hanging fruit. Do you think those were bad decisions, too?

    The stock PAM on-fail delay is about 2 seconds. I can’t see that sshd has any rate-limiting built into it, but it does limit the number of unauthenticated connections to 100 by default in EL7. Together, this means a small botnet could try 50 guesses at a single account’s password per second.

    The current non-policy allows abc1 as a password. According to:

    https://www.grc.com/haystack.htm

    …that password can be brute-forced in about half an hour at 1000 guesses per second, or 4 days at 50/sec. Your 7-day window is too short, if you don’t institute *some* kind of password quality minima.

    Also keep in mind that the GRC calculator is assuming you’re brute-forcing it, and not intelligently trying common passwords first, and sensible variations.

    The stock rules currently allow “monkey” as a password, which the GRC calculator considers stronger than “abc1” due to the length, but it’s a top-10 most-used password, so it will be among the first to be tried by any intelligent attacker.

    That wouldn’t break my heart, either.

    The main problem with that is that you need some way to install the client computer’s public key into the authorized_keys file during initial setup. You don’t need password auth to be enabled to do that, but it would make things considerably more difficult.
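
    What that provisioning step amounts to can be sketched in a few lines; install_pubkey is an illustrative name, not an existing tool (in practice ssh-copy-id automates this over a password-authenticated first connection):

```python
import os
from pathlib import Path

def install_pubkey(pubkey_line: str, home: str) -> Path:
    """Sketch of what ssh-copy-id automates: append one public key to
    ~/.ssh/authorized_keys with the permissions sshd insists on."""
    ssh_dir = Path(home) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    with open(auth, "a") as f:
        f.write(pubkey_line.rstrip() + "\n")
    os.chmod(auth, 0o600)  # sshd ignores keys in group/world-writable files
    return auth
```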

    Meanwhile, *this* thread is about using 9+ character passwords that aren’t laughably easy to break, which is not difficult.

    Really? Which of these new rules is onerous?

    http://manpages.ubuntu.com/manpages/trusty/man8/pam_pwquality.8.html
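
    For anyone who doesn’t want to wade through the man page, the flavor of those rules can be sketched in a few lines of Python. The thresholds and word list below are illustrative, not libpwquality’s actual defaults:

```python
def check_password(pw, minlen=9, minclass=3,
                   wordlist=("monkey", "password", "123456")):
    """pwquality-flavored checks; all thresholds here are assumptions."""
    problems = []
    if len(pw) < minlen:
        problems.append("too short")
    # Count character classes used: lower, upper, digit, other.
    classes = sum([any(c.islower() for c in pw),
                   any(c.isupper() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(not c.isalnum() for c in pw)])
    if classes < minclass:
        problems.append("too few character classes")
    if pw.lower() in wordlist:
        problems.append("dictionary word")
    return problems

print(check_password("monkey"))              # three complaints
print(check_password("1FuckingPrettyRose"))  # passes: []
```

    Note that “monkey” fails on length and dictionary grounds long before any entropy argument is needed.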

  • iPads can’t be coopted into a botnet. The rules for iPad passwords must necessarily be different than for CentOS.

    True, but more on-point here is that OS X ships with sshd disabled by default. You have to dig into the pref panes and tick an obscurely-named checkbox to enable it.

    The Apple ID password rules are a fair bit stronger than the libpwquality rules we’ve been discussing here, and have been so for some time:

    https://support.apple.com/en-us/HT201303

    Given that recent OS X releases want to use your Apple ID as the OS login credentials, that effectively makes these the OS password quality rules, too.

    Fedora is late to the party, and CentOS consequently even later.

  • Warren Young wrote:

    Is that true, I wonder?
    For some reason Fedora and CentOS seem reluctant to find out anything about their users (or what their users want).

    Is anything known about the ratio of RHEL to CentOS machines?

  • I can’t speak for CentOS, but for Fedora, at least, this is absolutely not true. It’s just a difficult and expensive thing to do in a meaningful way (and there’s considerable concern that doing it in a non-scientific way does more harm than good). So we do the best we can given the channels we have.

  • I’m not sure how you mean that comment.

    If you’re saying that the Internet is badly designed and that we need to rip it up and replace it before we can address DDoSes, you’re trying to boil the ocean. We have real-world practical solutions available to us that do not require a complete redesign of the Internet. One of those is to tighten down CentOS boxes so they don’t get coopted into botnets.

    If instead you’re saying that DDoSes are solvable with “just” a bit of engineering, then that’s wrong, too. It takes a really big, expensive slice of a CDN or similar to choke down a large DDoS attack. I do not accept that as a necessary cost of doing business. That’s like a 1665 Londoner insisting that city planning can only be done with close-packed wooden buildings.

    I don’t believe that the Internet must go through the equivalent of the Great Fire of 1666 before we can put our critical tech onto a more survivable foundation.

    Resource waste is enough by itself. How many billions of dollars go into extra bandwidth, CDN fees, security personnel, security appliances, etc., all to solve a problem that is not a necessary consequence of the Internet’s design in the first place?

    Back before the commercialization of the Internet, if your box was found to be attempting to DoS another system, you’d be cut off the Internet. No appeal, no mercy. It’s all /dev/null for you.

    Now we have entrenched commercial interests that get paid more when you get DDoS’d. I’ll give you one guess what happens in such a world.

    fail2ban isn’t in the stock package repo for CentOS 7, much less installed and configured by default. Until it is, it’s off-topic for this thread.

    Mind, I’m all for fail2ban. If Fedora/Red Hat want to start turning it on by default, too, that’s great.

    Why? If you are unvaccinated and catch some preventable communicable disease, you begin spreading it around, infecting others. This is exactly analogous to a box getting pwned, joining a botnet, and attempting to pwn other boxes.

    When almost everyone is vaccinated, you get an effect called herd immunity, which means that even those few who cannot be vaccinated for some valid medical reason are highly unlikely to ever contract the disease because it cannot spread properly through the population.

    That’s only true in a world where you have armed gangs running through the streets looking for free fortifications from which to attack neighboring houses. That is the analogous situation to the current botnet problem.

    If that were our physical security situation today, then I would be advocating fortifying our physical dwellings, too.

    Thankfully, that is not the case where I live.

    The difference appears to be one of global society, rather than technology, but obviously we aren’t going to solve any of that here.

    Take an unvaccinated child on a long vacation to some 3rd world cesspit, then report back on how that worked out.

    “Like every other creature on the face of the earth,
    Godfrey was, by birthright, a stupendous badass, albeit
    in the somewhat narrow technical sense that he could
    trace his ancestry back up a long line of slightly less
    highly evolved stupendous badasses to that first self-
    replicating gizmo — which, given the number and variety
    of its descendants, might justifiably be described as
    the most stupendous badass of all time. Everyone and
    everything that wasn’t a stupendous badass was dead.”

    ― Neal Stephenson, Cryptonomicon

    We don’t have time to wait for CentOS to become autonomous and evolve its own badass immune system. We have to give it one ourselves.

  • Disabled by default is in no way cutting the tree down. It happens to be the default on Fedora Workstation. While Fedora Server currently leaves it enabled, there has been some discussion of disabling it by default when Cockpit has a switch for enabling it, and then expect it to be enabled there – that way it’s opt in rather than opt out. And the problem with opt out is a lot of users don’t know this service is running and exposes them to infiltration unless they have a strong passphrase or use PKA.

    Well, it’s rather difficult for the system to know what environment it’s in, which is why there’s such a thing as defaults. Many people have no idea sshd is enabled by default; meanwhile everyone could voluntarily choose stronger passphrases. Opt-out vs. opt-in: opt-in ensures the person doing the setup is making a conscious decision.

    That requires UI work and coordination, seeing as it requires a service made active on one computer, keys made on each computer that will access that server, and then a mechanism to securely transfer the public key to the server. This is non-trivial.

    Disabling Protocol 1 was a good decision because it negatively impacted essentially no one with a sane workflow. The disabling of root logins I don’t disagree with, but I think the comparison is specious.

    It was a number I pulled out of my ass, but it’s still better than infinite. The proper value here is not one that guarantees a particular passphrase can’t be brute-forced, but one high enough that the sysadmin leaves the timer feature intact, while low enough to thwart a non-targeted attack. If you’re targeted, you’re probably screwed short of a rather strong passphrase or PKA.

    Of course, both iptables and firewalld also have rate-limiting options. Maybe fail2ban is better because it can apply restrictions per IP so that legitimate attempts get through. I didn’t realize PAM has a fail delay; that actually matters.

    So now the problem with this type of policy enforcement in a GUI is you have to create instructions for the user to follow in order to successfully pick a minimally acceptable passphrase without iterating. Iteration means failure.

    And no OS does this right now. Everyone is completely permissive because no one wants to replicate a UI across completely different applications: the installer, GNOME Initial Setup, and GNOME Users & Groups.

    I still think informed consent is the way this will probably end up working – meaning the user is informed that their password is common (a dictionary word, a derivative, or among the top 10,000 most common passwords) and should not be used, but is given a way to use it anyway.
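
    That informed-consent flow, together with the click-Done-twice behavior mentioned earlier in the thread, can be sketched like this; the password set is a tiny stand-in for a real top-10,000 list, and the function name is invented for illustration:

```python
# Tiny stand-in for a top-10,000 common-password list (assumption).
COMMON_PASSWORDS = {"123456", "password", "monkey", "qwerty", "abc1"}

def accept_password(pw, confirmed=False):
    """Warn on a common password, but accept it if the user explicitly
    confirms (i.e. presses Done a second time)."""
    if pw in COMMON_PASSWORDS and not confirmed:
        return False, "This password is very common; press Done again to use it anyway."
    return True, None

ok, msg = accept_password("monkey")                  # first attempt: warned
ok, msg = accept_password("monkey", confirmed=True)  # second attempt: accepted
```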

    All of them. Those rules are absurd to require by default on a computer in a low risk environment. I would never accept such a product that required such login rules.


    Chris Murphy

  • In the context of SSH, challenge/response authentication generally means things like OTP fobs and smart cards. It is not a synonym for password auth, it is an alternative to it.

    Some definitions of C/R do include password auth under the same umbrella, but SSH uses the term in a more narrow sense, where it means any system where the actual credentials do not cross the wire, only a trapdoor response from which you cannot reverse-engineer the credentials. In that sense, PKA is also a form of C/R auth, though the OpenSSH docs don’t use the term that way.

    Not true. CentOS 7 limits SSH password guesses to about 50 per second, and then only if you can rope 100 attackers together to go after a single account. A random 9-character password will withstand about a million years of such pounding.

    You only need to go beyond that when you’re trying to fend off offline attacks, such as clusters of GPU number crunchers tearing through /etc/shadow.

    I don’t have the luxury of setting such a boundary. I must access remote systems via SSH all the time to do my job.

    If your alternative is a VPN, all you’ve done is shift the burden, since that is equivalent to PSK and strong passwords in SSH. In fact, properly configured, SSH is a form of VPN.
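
    The arithmetic behind that claim is easy to check. The ~50 guesses/second figure follows from the numbers given earlier in the thread (about 100 unauthenticated connections against a ~2-second PAM fail delay); the 94-character set is my assumption for a “random printable password”:

```python
rate = 100 / 2        # online guesses per second (thread's figures)
charset = 94          # printable ASCII, space excluded (assumption)
length = 9
seconds = charset ** length / rate
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.1e} years to exhaust the space")
```

    That comes out on the order of 10^8 years, comfortably beyond the “about a million years” quoted above.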

  • Chrome OS does, because your OS password is your Google password. Therefore, Chrome OS’s password quality minima are Google’s minima, which are similar to libpwquality’s defaults:

    http://passrequirements.com/passwordrequirements/google

    OS X and iOS offer the option of using your Apple ID as your OS login password, which has similar requirements to Google’s:

    https://support.apple.com/en-us/HT201303

    Windows has also been doing this since Windows 8. Microsoft’s rules are stronger than either Google’s or Apple’s:

    http://www.liveside.net/2012/07/23/microsoft-account-to-enforce-stricter-password-controls/

    Android, Apple, and Microsoft currently allow you to use non-Internet based authentication, but defaults matter.

    You’ll notice that this list is mobile-heavy. These rules exist because these passwords are subject to public pounding over the Internet…just like a great many CentOS boxes.

    We’ve had that at least since EL6 came out, about 5 years ago. (Probably before that in the Fedora line.)

    Apparently those in a position to decide these things see that this has not caused a sufficient shift in the quality of passwords used on Red Hattish boxes, evidenced by lack of a sharp drop in botnet members.

    Yes, well, we’ll see what you’re using in another 2-ish years when CentOS 8 ships. Money, mouth, and all that.

  • Windows has a lower minimum acceptable password quality than CentOS. OS X has a lower minimum still than Windows – as in, a single number is accepted. For an admin. With sshd enabled. And yet the Mac world does not burn.

    That doesn’t mean single digit passwords are good, or should be recommended. It just means Apple doesn’t care to fight that battle, or dump requirements onto the user. Instead they dump requirements onto the OS and onto application developers with better defaults: sshd is disabled, application binaries must be signed, App Store applications run in something like a sandbox, etc.

    So they are building up defenses elsewhere, rather than shifting the responsibility onto the user in the form of weird and confusing password requirements and the commensurate UI.

    Two points of clarity:
    1. the quoted text above is a configuration change I made; OS X does not require PKA out of the box.

    2. Fedora Workstation has sshd disabled by default, and you have to dig into the pref panes to enable the identically named service “Remote Login”; enabling it takes solidly three more clicks on GNOME than on OS X. So in some strange sense it’s less likely to be inadvertently enabled on GNOME.

    No, that’s not true. The user is encouraged to authenticate this way; they are not required to, and it’s very easy to bypass. I don’t use it. Windows has a similar behavior, but rather strongly implies that an Outlook account is the only way to set up a user account; that too can be bypassed.

    What is currently in the Anaconda master branch, which is how Fedora Rawhide has behaved for ~6 months, is that the installation stops dead if you don’t meet the minimum password requirement. And that requirement is not stated or explained. It’s basically “it’s not good enough, try again”.

    Where Fedora and CentOS are late to the party are improving defenses that don’t require the user to do anything differently.


    Chris Murphy

  • This is confusing. I think it’s overwhelmingly, abundantly clear that Fedora cares about its users and is listening. CentOS cares, with a hard and fast upper limit, which is binary compatibility with RHEL. So if you want to change CentOS behavior you’d have to buy into RHEL and convince Red Hat, and then it’d trickle down to CentOS.

  • This is just wrong .. we have started Special Interest Groups where people from other places, who are the users (the community), come in and build the things they want. We have guys from ARM companies (helping do arm64), Citrix (adding Xen support in CentOS-6 and CentOS-7), IBM (building the ppc64 and ppc64le arches), OpenStack (via RDO), Open Nebula, Project Atomic, Storage (via glusterfs and ceph), etc. We have guys from CERN helping run our Koji community build system. The CentOS-Devel list, where all this feedback is occurring, has grown by 10 times since we started the SIG programs. We have several projects in the 2015 Google Summer of Code where the community has input into add-on projects for CentOS (like a 32-bit armv7 image builder).

    It is also true that CentOS Linux, the base, is just a plain rebuild of RHEL source code. That is what it is and what it will always be .. the SIGs (where we are building much community interaction) are optional add-ons to that base.

  • As far as I know, PermitRootLogin has not been set to “no” by default.
    At least, I’ve never seen that on a system I’ve installed. Am I missing something?

  • It’s not just an imperfect analogy; it really doesn’t hold up on closer scrutiny.

    Malware itself is not a good analog to antigens. Vaccinations provide immunity to only certain kinds of antigens, and only specific ones at that. Challenge-response, which is what a login password is, is about user authentication; it is not at all meant or designed to provide immunity from malware. That we’re trying to use it to prevent infections is more like putting ourselves into bubbles; and humans put into bubbles for this reason are called immunocompromised.

    So this push to depend on stronger passwords just exposes how “immune-compromised” we are in these dark ages of computer security. The side effects of password dependency are overwhelmingly worse than those of immunization. The very fact that SSH PKA by default is even on the table in some discussions demonstrates what level of crap passwords are at.

    Software patches, SELinux and AppArmor are closer analogs to certain aspects of human immunity, but even that is an imperfect comparison.

    And also, a large percentage of malware doesn’t even depend on brute-force password attacks. There are all kinds of other ways to compromise computers and create botnets that don’t depend on passwords at all. So vaccinations have something like 95% efficacy, while passwords alone have nothing close to this effectiveness against malware.


    Chris Murphy

  • As the one who started this thread, and has watched it explode, I feel like a troll, and apologize to everyone.

    I’ve seen various decisions made by Fedora, which weren’t even necessarily bad for its apparent target audience, the desktop user, that, while not insurmountable, get put into RHEL, and therefore CentOS.

    Fedora has made several decisions where a developer or developers will ignore popular opinion. I remember when PackageKit would allow any user to update through the GUI without authentication, and it took the story making the front page of Slashdot to get it changed.

    Like any organization, Fedora has some people who are very responsive to user input and others who aren’t. To me the reason to make noise about something in Fedora is to try to keep it from getting into RHEL and hence CentOS.

  • RHEL (and Fedora), unlike FreeBSD and a few other systems, has PermitRootLogin set to yes by default. On a minimal install (I don’t know about Workstation), I’ve always found sshd to be enabled by default.

  • Chris Murphy wrote:

    You (and others) are misunderstanding my off-the-cuff remark. It was purely an observation about the lack of statistics. I rarely if ever see a statement of the kind
    “Among Fedora users 37% use KDE and 42% Gnome”. Or (after the remark I was responding to)
    “83% of CentOS machines are in datacenters, and 7% are home-servers”.
    (Or “x% of Fedora users have turned SELinux to permissive”.)

    I’m not saying that Fedora or CentOS should work on democratic principles. I welcome Johnny Hughes’s unambiguous statement that CentOS follows RHEL. This saves a lot of time arguing about things that cannot be changed.

    But I hold the (old-fashioned?) view that before expressing an opinion one should get the facts.

  • We can’t gather facts about people .. people go bat shit crazy if their machines report stuff back.

    At CentOS, we can’t even tell you how many users we have, because we can’t possibly buy all the mirrors that are required to give out updates to all users.

    Instead, we have a couple hundred mirrors JUST to distribute CentOS to external mirrors run by the community (currently 624 mirrors in 85 countries) when we do a release. We don’t have the ability to gather statistics on servers we don’t own.

    Fedora is in the same boat.

  • I would highly recommend looking into Fedora Server; over the past couple of years, we’ve made a deliberate effort to address Fedora’s cloud computing and traditional server userbases as intentional target audiences. Take a look at

    * https://fedoraproject.org/wiki/Server/Product_Requirements_Document#User_Profiles.2C_Primary_Use_Cases_and_Goals
    * https://fedoraproject.org/wiki/Cloud/Cloud_PRD?rd=Cloud_PRD#User_Profiles.2C_Goals.2C_and_Primary_Use_Cases

    and see if you feel like your uses are better represented.

    It may have seemed that way, but I don’t think Slashdot was a major factor in this decision either.

    I’d like to encourage you to think of this in a different direction. Instead of interacting with Fedora when you want to stop a decision you don’t like, help us build something you *do* like.

    When people just scream “this new password policy is the worst thing ever, quit doing things differently!”, developers who are genuinely trying to make things better get discouraged, and while dissuading someone from contributing in this way may _feel_ like a victory when it was something you didn’t like, it’s a loss long term. So, instead: “I have the use case ABC, which doesn’t seem to fit in. I think it’s an important situation for the target audience, so I propose…”.

    And, as always, triple bonus points when there’s a complete design or an example implementation, because we certainly don’t lack for _ideas_.

  • Yeah, pretty much, although I might be less… direct about the language. :) We are very sensitive to user privacy concerns. And gathering this kind of information accurately in other ways is expensive.

    I can tell you some ad hoc numbers from F21, which come with tons of caveats. This is based on ISO download numbers from the master mirror, which is very imprecise and does not reflect installations — someone might have downloaded the cloud image once and installed a million nodes. Or downloaded it a million times and never actually booted it. But, anyway, from this:

    * About 70% Fedora Workstation (our GNOME-based desktop, primarily targeted at software developers and technical users)
    * About 20% Fedora Server
    * About 5% Fedora Cloud
    * About 2% KDE Desktop Spin
    * About 2% Xfce Desktop Spin
    * About 1% other spins and images


    Matthew Miller

    Fedora Project Leader

  • So many flaws:

    1. It’s just a gloss on a Wired article, which itself is a scare report ahead of publication of a paper that hadn’t been presented at the time of writing. All this pair of articles says is, “This could happen, and Apple is bad because it can happen!” Rational response: “With what likelihood can it happen?” Answer: crickets.

    I finally managed to track down the paper, here:

    https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-wang-tielei.pdf

    tl;dr: You have to hook the iOS device up to a PC that’s already been rooted. Then it can infect the iOS device through the previously-trusted iTunes sync channel.

    If you’re worried enough about that to do something about it, I want you to tell me your experiences either never using SSH and WiFi PSKs, or always using passphrase protection on them.

    I also want you to tell me about how you never download device firmware to a PC, but only direct to the device that needs to be flashed with it, and only from SSL protected hosts. In most cases, this is a far bigger risk than the iOS flaw you point out, because you don’t need to jump through all the hoops the researchers did in order to exploit the iTunes sync process.

    (Oh, and by the way, no, the “23%” value from the paper is not a likelihood. If 23% of rocks can fall from the sky, it doesn’t mean 23% of rocks *will* fall from the sky.)

    2. It’s been a year since that report, during which time Apple have released 8 updates containing security patches for iOS. Apple doesn’t generally say much, if anything, about security flaws they’ve fixed, so any one of them could have closed this door already.

    3. No massive new iOS botnet has appeared in the past year. Meanwhile, CentOS boxes actually exist in botnets today.

  • My mistake. I grepped sshd_config on a fresh EL7 machine here and saw

    #PermitRootLogin yes

    and assumed it meant “no”. It’s just documenting the default.

    I explicitly set it to “no” on systems I am solely in control of, and I’d prefer that upstream changed that default in the precursor(s) to CentOS 8, too. EL7 ships ready to use sudo out of the box, if you tick the “administrative user” checkbox for the non-root user during install. That removes the last good reason to allow remote root logins by default.
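    For anyone wanting that change now, it is a one-line edit to the stock OpenSSH server config (the shipped file only *documents* the default in a comment, so an active line is needed rather than relying on the comment):

```
# /etc/ssh/sshd_config
# The shipped file shows the default as "#PermitRootLogin yes";
# an uncommented line overrides it:
PermitRootLogin no
```

    Reload the daemon afterwards (`systemctl reload sshd` on EL7), and confirm a sudo-capable non-root account works before closing your session.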

  • Every analogy will break down if you look too closely. The question is, is it a *useful* analogy?

    Fine. If you want to be picky, a better analogy to a good password and reasonable limits on SSH logins is a healthy integument and healthy cell walls.

    Has that changed any of the conclusions about bad passwords? No. Therefore we have succeeded in clarifying nothing except our application of biology, which is interesting, but not on topic here.

    Now it is you who are off the rails. The hygiene hypothesis explains a great deal about human disease because we have an active immune system to deal with an evolving set of biological challenges.

    CentOS’s immune system doesn’t get stronger purely by subjecting it to more attacks. It improves only through human intervention.

    While true, that doesn’t tell us that it is a good idea to allow weak passwords.

    If you will allow me to return to biology, it’s like saying that prophylaxis is a bad idea because it points out how imperfect our immune systems are. Stop covering your face when you sneeze, stop using condoms, stop going to the dentist: we need stronger humans, so let’s evolve some!

    That seems like a falsifiable statement, so I expect you will be able to point to a scientific paper that supports that assertion.

    So let’s dial back my previous proposal. We’ll just stop using dental prophylaxis, then, because it doesn’t prevent the contraction of oral STIs.

    Just because one particular method of prophylaxis fails to protect against all threats doesn’t mean we should stop using it, or increase its strength.

  • Disingenuous. It does not REQUIRE you to use your AppleID as the user password, and it’s probably not a good practice anyway.

    Using it as an example is silly, in that it LOWERS security.

    Comparing CentOS (an OS quite often used on servers on well-protected networks) to a consumer-grade OS that wants to integrate your login with “the cloud” is ridiculous. Of COURSE the defaults for a cloud-connected machine are higher.

    Nate

  • Actually, it does. There is no more obvious head-butting than strong passwords vs. usability. Strong login passwords and usability are diametrically opposed.

    The rate of brute-force attack success is exceeding the human ability (and interest) to remember ever longer, more complex passwords. I just fired my ISP because of the asininity of setting a compulsory 180-day expiration on passwords.

    Now I use Google. They offer opt-in MFA. And now I’m more secure than I was with the myopic ISP.

    Apple and Microsoft (and likely others) have been working to deprecate login passwords for years. Obviously they’re not ready to flip the switch yet; it isn’t an easy problem to solve. But part of why they haven’t had more urgency is that they are doing a lot of work on peripheral defenses that obviate, to a pretty good degree, the need for strong passwords, relegating the login password to something like “big sky theory”: it’s safe enough to tolerate very weak passwords in most use cases. The highest risk, by a lot, is from a family member.

    I’m not arguing directly against strong passwords as much as I’m arguing against the already unacceptable usability problems resulting from stronger password policies, because they don’t scale. Making such policies opt-out, let alone compulsory, is unacceptable. Even as the policies get stronger, people’s trust in password efficacy continues to diminish.


    Chris Murphy

  • I don’t see how you got any requirement from my post. I pointed out that it was only a “want” in the post you quoted. I’m not trying to obscure anything, just pointing out that other OSes are in fact already moving toward libpwquality-like restrictions.

    Windows 8+ makes bypassing the cloud login even more difficult than Apple does, and Chrome OS doesn’t even offer the option.

    iOS requires a cloud login now on hard boots. It allows a short PIN for unlocking a device that is only sleeping, but the equivalent of that in CentOS would be a separate password on the X screensaver, which really isn’t on-point here. I assume Android does this now, too. (Haven’t used Android myself since 2.3.)

    The important point is that there’s a clear trend here. The fact that you can currently bypass the cloud login in some of these cases does not invalidate that point.

    Really?

    As others have already pointed out in this thread, the local-only password policy on these OSes is far weaker than the rules proposed for F23. Human nature and the contents of this thread should tell you how many people will use stronger local passwords than these cloud services demand.

    You may point out that the move to a cloud authentication system extends the attack surface out into the public Internet, but when you implement a public login service using strong security — as it appears that Apple, Google, and Microsoft have done — it’s still a net win.

    As I have already pointed out, a 9-character purely-random password can survive a million years of constant pounding with reasonable rate limiting. Given that Microsoft, Apple, and Google all do more than just rate limiting on their cloud login systems, that means that even a relatively short but random password will survive any sustained frontal attack.

    Offline attacks are far more dangerous, but strong mitigations for those have been well-known for decades. I assume that Google, Apple and Microsoft are using these techniques to defeat offline attacks, in case their secure password stores are ever compromised. (Key derivation, salting, hashing, zero-knowledge proofs…)
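    The offline-attack mitigations named in that parenthetical can be sketched with Python’s standard library. This is an illustration of the salting-plus-key-derivation idea, not a hardening recommendation (real systems tune the work factor much higher, and /etc/shadow uses its own crypt-based scheme):

```python
import hashlib
import hmac
import os

ROUNDS = 100_000  # illustrative work factor; an offline attacker pays this per guess


def hash_password(password, salt=None):
    """Derive a slow, salted hash of the password."""
    if salt is None:
        salt = os.urandom(16)  # unique salt per password defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, digest


def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return hmac.compare_digest(candidate, digest)
```

    Because each password gets its own random salt, two users with the same password end up with different digests, so a leaked password store cannot be attacked with one precomputed table.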

    I am not wholeheartedly in favor of these cloud login systems, nor am I arguing that CentOS 8 should have one too. I am only pointing out that the security features they’ve all been designed with are worth emulating in CentOS’s local-only password authentication system.

    CentOS should not require a well-protected network in order to be secure. It should be secure in its own right, from the moment it first boots after installation.

    Anyway, your premise that your CentOS boxes are on networks so well protected that you don’t need strong passwords is quite unsound:

    https://en.wikipedia.org/wiki/Stuxnet
    https://en.wikipedia.org/wiki/Certificate_authority#CA_compromise
    https://en.wikipedia.org/wiki/RSA_SecurID#March_2011_system_compromise

    I doubt your LAN is more secure than that of RSA, Iran’s nuclear program, and several CAs.

    Security professionals do not rely solely on borders to secure individual systems. They rely on defense in depth, a concept at least as old as the ancient Greek phalanx formation:

    https://en.wikipedia.org/wiki/Phalanx

  • Security is *always* opposed to convenience.

    The question is not “security or no security,” it’s “how much security?”

    The correct answer must balance the threats and risks. Given that the threats and risks here are nontrivial, the password quality restrictions should also be nontrivial.

    You must consider offline and online attack scenarios separately.

    Online we have already dealt with: 50 guesses max/sec, allowing a 9-character random password to survive a million years of constant attack.

    Offline is an entirely separate matter, and is already addressed by /etc/shadow salting and hashing in CentOS. We know how to make it even stronger if the threat requires it: move to OTP keys, use a better KDF than SHA512, etc.
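    The “million years” figure is easy to check. The sketch below assumes a 62-character alphanumeric alphabet (the posts above don’t specify one) and counts the average case of searching half the keyspace:

```python
# Expected time to brute-force a 9-character random password online,
# given the 50-guesses-per-second rate limit discussed above.
alphabet_size = 62                  # a-z, A-Z, 0-9 (assumed)
length = 9
guesses_per_sec = 50
keyspace = alphabet_size ** length
seconds_per_year = 365 * 24 * 3600
years = (keyspace / 2) / guesses_per_sec / seconds_per_year
print(round(years))                 # on the order of millions of years
```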

    Good for you. Password expiration is silly. A good strong password should last years under any reasonable threat.

    But we’ve not been talking about password expiration here.

    Of course. It’s why Bruce Schneier wrote only one book on cryptography, but several on human factors.

    That does not tell us that we should be sloppy with our crypto and authentication methods, though.

    I’m still not seeing how it’s difficult to remember, securely record, type, or transcribe a password that will pass the new restrictions. They’re on the mild side, as these things go.

    If you wanted to use the GRC password haystack calculator results to argue for a slight reduction in the defaults, I could get behind that.

    Six random characters pulled only from the unambiguous subset of the alphanumeric set, no uppercase, and one symbol gets you a password that should withstand constant pounding for the life of the machine. I could live with that minimum.

    I have no strong feelings on the new libpwquality rules, exactly. What I do feel strongly about is that there should be *some* reasonable minima that can’t easily be bypassed. Where that level is set is not only a sensible subject for debate, it is one that’s easy to separate from emotion; it’s basically a question of arithmetic.
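    To show it really is just arithmetic, here is the proposed minimum (six unambiguous lowercase/digit characters plus one symbol) under the same 50-guesses-per-second cap; the set sizes are assumptions for illustration only:

```python
# Average online-guessing time for the proposed minimum password.
unambiguous = 31        # assumed: a-z plus 0-9, minus look-alikes (l, o, 0, 1, i)
symbols = 20            # assumed size of the allowed symbol set
guesses_per_sec = 50
keyspace = unambiguous ** 6 * symbols
years = (keyspace / 2) / guesses_per_sec / (365 * 24 * 3600)
print(f"{years:.1f}")   # several years of sustained guessing
```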

    I don’t see why we can’t take some responsibility for this mess and try to build up some herd immunity.

    Passwords are what we have today. Strengthening them to a level that will suffice until something better comes along is reasonable.

  • False. OS X by default runs only signed binaries, and if they come from the App Store they run in a sandbox. Users gain significant security from this, and are completely unaware of it. There is no inconvenience.

    What is the inconvenience of encrypting your device compared to the security? Zero inconvenience versus a ton more security (either when the device is turned off and data is at rest, or via a remote kill that makes it very fast to effectively wipe all data).

    I disagree to the point I’d stop using products based on such restrictions. I will not participate in security theatre, other than to be theatrically irritated.

    I’m guessing you’re not a tester or much of a home user. There are many such people using OS X, Windows, and yes, Fedora and likely CentOS, where environments and use cases preclude compulsory compliance because the risk is managed in other ways.

    And Apple and Microsoft have been working to kill login passwords for a while. Google and Facebook too. No one likes them. And our trust in them is diminishing. They are not long term tenable. Making longer ones compulsory already causes companies who do so grief as people complain vociferously about such policies.

    This idea that opt in is not sufficient demonstrates how archaic and busted computer security is when you have to become coercive to everyone regardless of use case to make it safe.

    In any case, the complaint over on the Fedora proposal has been sufficiently addressed, even though the details are still being worked out. The gist is that the user will have informed consent and will opt in to better-quality passwords. So they will essentially be told: a. the password they’ve proposed sucks, b. fairly clear information on why it sucks, and c. that they have the option to change it or continue anyway.

    Because there is no such thing when it comes to computers. Computers with strong passphrases still sometimes get pwned, and at a much higher rate than vaccines not working. Please stop with this hideously bad analogy. Computers with NO passwords often never get pwned for their entire lifetime, and those computers, a.k.a. mobile devices, are used in public spaces, on public Wi-Fi, on public networks. Anyone without vaccines in such proximity to illness would definitely get sick. That doesn’t happen with computers.

    The environment has changed, and the old architectures and methods aren’t working the way they did. And somehow free open source software has got to do better than it has been with security, because proprietary systems are innovating more in this space right now, and aren’t passing the buck onto the user with this burden in the form of stronger password requirements.

    Besides, it’s FOSS for a reason and people will opt out because ultimately you can’t make them do what you want. Apple and Microsoft could possibly get away with it. I think their customers would become foaming irate, however.


    Chris Murphy

  • You accepted that risk the day you put a public machine on it. He who has the most bandwidth, wins, in a DDoS. It’s the very nature of the network design. Anyone who can fill your pipe with garbage can take you offline until they stop. You can ask for help from the carriers and see how far you get, but the inherent risk was there from day one and you choose to play.

    What happens? Folks have to think harder about connecting stuff to a worldwide untrusted, and generally unfiltered network? One word: “Duh.”

    Didn’t realize that. Brilliant move, removing it… (rolls eyes at RH)…

    It’s not a disease. It’s someone using their machine for them because they’re too dumb to use a decent password. Nothing at all happens to the people who used decent passwords, other than that aforementioned DDoS problem, which is completely unrelated. You’re making it sound like the OS should be responsible for dumb people… the problem with that is, the dumber you let them be, the dumber they stay. And since the “neighbor” who “pre-vaccinated”, in your world, by simply typing in a decent password comes to no harm, what’s the point? Let them lose data, and they’ll learn.

    Global society hasn’t changed, and neither has the network in decades. Why should the OS change to make people dumber?

    No one reading this list is likely to be “unvaccinated”, but they’ll surely be annoyed if they need to install an “unvaccinated” machine on a properly secured network. Leave security to the end-user. The Internet has always been a meritocracy and using a decent password isn’t exactly a high bar to jump. It’s really none of the OS’s business.

    Nate

  • Linux machines make up a significant portion of the ‘botnet’ traffic out there. How do I know this? From a hacked Linux server which was brute-forced and conscripted into being a slow-bruteforcer node back in 2009 or so. The particular payload dropped on that box went into a normal user account with a moderately strong (but obviously not strong enough) password, and the code never even attempted to escalate privileges. It didn’t need to; the slow bruteforcer started and ran as the normal user account and actively attacked other hosts. It did not attempt to install a rootkit, and it ran as a normal user with a program name that was not out of the ordinary. It did not trigger our rootkit detector or file-modification monitors, since normal user directories aren’t normally monitored. Again, the attack vector was a relatively weak password (mixed case, letters and numbers, but less than ten characters long). And it ran slowly enough that neither snort nor fail2ban was triggered.

    While I am not at liberty to share the specifics of the code or the huge password files it contained, nor can I share the log files, given the amount of traffic generated and its patterns it is pretty easy to figure out that it was part of a very large operation. Because of this, we now block outgoing (and incoming) SSH on port 22 by default, opening holes only upon request (and we’re small enough to make that practical). A quick analysis of the code showed some polymorphism in use. The particular slow bruteforcer I found has been adequately documented elsewhere, so I won’t go into more details here. But suffice it to say that the password file included some very long and random-looking passwords, along with a million words and regexes (a mixed letters-and-numbers password with ‘1337’ (leet) spelling should be considered as easy to break as the same password spelled with letters only). And looking through my logs I could see attempts on several user IDs from entirely unique IP addresses; no IP address was used more than once.

    Better enforcement of password policy on that server would have prevented the attack from succeeding and the machine becoming an attacker itself.
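    The point about ‘1337’ spellings can be demonstrated in a few lines: cracking tools normalize common substitutions before the dictionary lookup, so the substitutions add essentially no strength. A minimal sketch (this tiny table is hypothetical; real wordlists cover far more variants):

```python
# Undo common "leet" substitutions before a dictionary lookup.
LEET = str.maketrans({"4": "a", "3": "e", "1": "l", "0": "o", "5": "s", "7": "t"})


def normalize(candidate):
    """Map a leet-spelled candidate back to its plain-word form."""
    return candidate.lower().translate(LEET)


print(normalize("P4ssw0rd"))  # password
```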

  • While I agree with you about the long-term viability of passwords, I’ll disagree with this statement. There is a loss of convenience with signed binaries from a store: the user can no longer install directly from the program vendor’s website but must go through the walled garden of the store, and developers are held hostage to meeting the store’s policy or getting their signing key revoked and/or their app ‘de-stored’ or worse. There is significant inconvenience to users when their app is removed from the store for whatever reason and they can no longer get updates (or reinstall their app, for which they may have paid a fee) because the app is no longer in the store (and that could be for arbitrary reasons, including political ones). This is, of course, the case to a more limited degree with CentOS and signed packages, since packages can be removed from repositories and installation of packages by default requires signed packages (but it’s not as inconvenient, nor as secure, as the OS X model of only allowing signed binaries to run). For that comparison, repository = store.

    Or a hackable remote kill that allows an attacker to wipe your device out from under you. Or the inconvenience of losing access to the encrypted volume because you forgot the exact spelling of that ten-word, seventy-five-character passphrase and you’re locked out, and no data recovery tool out there will get your files back.

    Security and convenience are always at odds with each other; more secure = less convenient in some form or fashion. Even if you have to dig for it, there will be a loss of convenience somewhere for increased security.

  • I don’t think it was removed… I don’t see it in the default repos for RHEL 5, 6, or 7. It’s in EPEL for each, though.

  • “More secure” only to the level one can trust google ;-)

    Just my $0.02

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • In 2009, but I’m not sure how you can be this certain today if no other defense strategy is employed. The only way to be certain a server won’t be attacked is if sshd is disabled; you can be essentially certain if only PKA is allowed, and practically certain with a 7-word passphrase. Below that, it’s a matter of the attacker and time (yes, a six-word passphrase will take a government entity some time, but four- and even five-word passphrases are already within the cracking ability of botnets and targeted attackers).[1]

    “Pretty much anything that can be remembered can be cracked.”
    –Schneier (although I think it’s a bit of hyperbole, of course you can remember a 7 word passphrase, but probably not too many of them).

    [1]
    http://world.std.com/~reinhold/dicewarefaq.html#128-bit
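    The word-count thresholds above line up with the entropy numbers in that FAQ: the standard Diceware list has 7,776 words (6^5), or about 12.9 bits per word:

```python
import math

WORDS = 7776                      # standard Diceware list size (6**5)
bits_per_word = math.log2(WORDS)  # about 12.9 bits

for n in (4, 5, 6, 7):
    print(n, round(n * bits_per_word, 1))  # 4 -> 51.7, 5 -> 64.6, 6 -> 77.5, 7 -> 90.5
```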


    Chris Murphy

  • This is untrue. Apple’s developer program and OS X support two kinds of code-signed applications: App Store programs, which are signed by the developer and Apple and run only in a sandbox limiting their interaction with each other; and developer-only signed applications, which can opt into App Sandbox, and which users can install normally, including from the vendor’s web site. Both types of applications can be installed and executed by default. Unsigned applications will not execute by default. An admin user can change this and permit those applications to run; in fact, the user can grant an exception *per application*.

    I have a litany of criticisms of the Apple developer program, but those aren’t relevant to this discussion. What is relevant is that both developers and users in the OS X “walled garden” have a security advantage with almost no inconvenience.

    Now I see that the “Docker Engine will now automatically verify the provenance and integrity of all Official Repos using digital signatures.” So that’s a good thing on Linux, but it is far from ubiquitous, let alone default behavior.

    These are all implementation criticisms. The idea of code signing is valid and useful. Apple made it quite easy for users and mostly easy for developers, and that’s the part that absolutely should be replicated in free software. But that part is orthogonal to the thick layer added on top that is Apple’s unique process; I’m in no way suggesting that part is a model, nor should it be suggested it’s inextricably attached to the code-signing concept.

    I very much disagree with any sentiment that users, even sysadmins, should be security experts. This shit is too complicated for that. The penalty for getting it wrong is too high.

    This idea that minimally better password quality is going to stop jack shit? I don’t buy it. It will stop Tonka Toy type attacks. It’s not going to stop anything moderately serious or more, to do that means adopting best practices.

    Again, I don’t know who puts computers with sshd enabled and ChallengeResponseAuthentication directly facing the Internet, but that to me sounds like a bad choice. I have to access all clients’ servers through a VPN first; none of them have such services Internet-facing.

    The security increase of even minimally higher-quality passphrases is less than the increase in inconvenience to the end user. And that includes sysadmins. So, coming full circle, I think it’s a bad idea for sysadmins to be coddled into thinking that a GUI installer enforcing 8-character passwords instead of 6 means anything has actually improved. It’s still merely treading water (or maybe sinking to the bottom at the same rate). And in contrast to that, the same system blindly enables sshd by default, also with ChallengeResponseAuthentication. When I called this password quality change concept turd polishing, I meant: leaving sshd enabled by default with ChallengeResponseAuthentication is the turd, and the polishing is pretending that an 8-character password instead of 6 makes the former OK (and shinier). It’s an absurd juxtaposition.

    Had the proposal been a compulsory 16 character passphrase, I merely would have gone and made some popcorn. I’d have nothing to say about that. Because at least *that* would be a meaningful increase in minimum password quality, but holy hell people would have totally f’n flipped out if they had to pick a 16 character password to install an OS.

  • Yes I know, but I put them in approximately the same ballpark as having to trust my proprietary CPU, and proprietary logic board’s proprietary firmware.

  • There is a difference IMHO. Proprietary hardware manufacturers were in the business of making profit on selling hardware (at least in the beginning). Google has always been in the business of making profit on the information [about us] it can collect. But in general you are right. Likelihood-wise, I’ll stick to my opinion ;-)

    Valeri


  • You must not use OS X regularly, else you’d know there is plenty of inconvenience in this policy. There’s a whole lot of good software that is both unsigned and not in the App Store. Examples:

    a. Most open source software. Many of these projects (e.g. KiCad) can barely manage to serve community-provided unsigned binaries on OS X as it is. Signing apps and managing the App Store submission process is out of the question. The next version of OS X will block all the third-party app repositories (e.g. Homebrew) by default, in order to provide better security:

    http://www.imore.com/os-x-el-capitan-faq

    b. Most network monitoring software, because putting en0 into promiscuous mode violates the Gatekeeper rules. (Wireshark, etc.) Some App Store networking software (e.g. RubberNet) manages to get around this by offering a second app download from the author’s web page. You don’t call that inconvenient?

    c. Low-level utilities, such as Karabiner and Scroll Reverser, since they also need to bypass the sandbox guidelines to do their job.

    On top of all that, to bypass Gatekeeper, you need to right-click an app and disable Gatekeeper for it on the first launch. Another inconvenience.

    I’m not saying Gatekeeper and such are bad, only that they are in fact exemplars of the rule: better security always causes greater inconvenience.

    I can’t hook my iPad up to my PC and browse it as just another filesystem, as I can with any other digital camera or MP3 player. Apple must do this in order to prevent sideloading malicious apps.

    Did you see my exchange with James Byrne? His bogus counter to my claim that iPads can’t be turned into botnet conscripts was to point (very indirectly) to a paper where some researchers found a way to jump through a whole bunch of hoops to bypass all the security Apple had placed in the path of app sideloading.

    Android doesn’t bother with most of this, and what security there is is bypassable with a checkbox in the Settings app. Consequence: a whole lot more Android devices are security-compromised than Apple ones.

    So, yet another example where greater security is paid for with greater inconvenience.

    There’s more: Until recently, Android didn’t encrypt the whole device to anywhere near the same extent that iOS has for years. Why? Because it costs either CPU time (hence more battery) or die space for low-energy hardware encryption (hence increased device cost). That’s one of the reasons Apple devices cost more than Android ones. That’s not merely an inconvenience for some, it’s a complete barrier to entry.

    Good security is never free.

    Really? You’re going to lay *that* card in this game?

    When you stretch words and phrases beyond their original meaning, they lose shape and utility.

    6-9 character password limits are *not* “security theatre”.

    I write software for a living. Testing is not my primary job, but I do a fair bit of it.

    I use computers at home far more than is good for me. :)

    My home passwords have passed the new libpwquality rules for *years*.

    My iOS ones do, too, by the way, despite the increased difficulty of typing them. I put too much of my life on them to use 4-digit PINs.

    The Fedora project leader already said in this thread, multiple times, that this new policy will not be compulsory. I’m not asking for that, either. I’m merely agreeing that “double Done” makes the current restrictions so easy to bypass that they’re basically nonexistent.

    I don’t know what Fedora or Red Hat will be doing to allow bypass, but I do know that libpwquality is configurable:

    http://linux.die.net/man/5/pwquality.conf
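
    For reference, those knobs live in that file. A minimal sketch of what an administrator might set (the directive names are real pwquality.conf options; the values are illustrative, not Fedora’s defaults):

```ini
# /etc/security/pwquality.conf -- illustrative values, not Fedora's defaults
minlen = 9        # minimum acceptable password length
minclass = 3      # require characters from at least 3 classes
dcredit = -1      # require at least one digit
ucredit = -1      # require at least one uppercase letter
maxrepeat = 3     # reject more than 3 identical consecutive characters
```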

    No, it demonstrates that, left to their own devices, most people will hang their assets out in the wind for anyone to slap.

    There are numerous laws and insurance restrictions that require locks and safety mechanisms on all sorts of things. How many of those locks would continue to be provided as a matter of course if those laws and provisions did not exist?

    Pay more attention to history.

    Once upon a time, we had the likes of Blaster, Code Red and Nimda, which continuously flooded the internet with traffic intended to find exploitable holes in Microsoft OSes. They kept finding new boxes so frequently that normal efforts consistent with contemporaneous practice entirely failed to stamp them out.

    https://en.wikipedia.org/wiki/Code_Red_(computer_worm)
    https://en.wikipedia.org/wiki/Blaster_(computer_worm)
    https://en.wikipedia.org/wiki/Nimda

    It got so bad that connecting a new Windows box to the internet without either a NAT router or a third-party software firewall would almost guarantee an infection within minutes:

    http://blog.chron.com/techblog/2008/07/average-time-to-infection-4-minutes/

    Then Windows XP SP2 came out, with Microsoft’s first enabled-by-default firewall, and these worms quickly died out. Windows acquired herd immunity to this whole class of attack.

    Yes, herd immunity. There are still a few pre-SP2 XP boxes out there, but NAT routers and low infection rates mean the old 4-minutes-to-infection rule no longer applies.

    We didn’t get the immunity without a cost. I used to be able to “message” a remote Windows computer merely by knowing its IP, and I could browse its registry without jumping through hoops. Can’t do that any more.

    Meanwhile over here in CentOS land, you still see SSH password guessers banging on every public IP that responds to port 22. Why? Because it still occasionally works. Increase the password strength minima, and this class of worm, too, will quickly die out.

    The occasional failure of a prophylactic measure does not tell you that you should discontinue its use.
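
    If the goal is to make the port-22 banging moot rather than merely harder, the usual hardening goes further than longer passwords. A sketch (the directive names are real sshd_config options; the values are one reasonable choice, not a mandate):

```
# /etc/ssh/sshd_config -- a sketch of the usual mitigations
PasswordAuthentication no   # public-key only; guessers have nothing to guess
PermitRootLogin no          # root is the account every worm tries first
MaxAuthTries 3              # fewer attempts per connection
```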

    I thought you threw out a 95% number for vaccine effectiveness above. You are saying that more than 5% of all computers with strong passphrases are currently infected with something? Prove it.

    Not true. Some people have innate immunity, and others can fight off the infection.

    This doesn’t demolish the analogy, though, it just shows that computers are even worse than humans at fighting off attacks. Unlike humans, computers either have a block already in place against the attack, or they do not.

    So your solution is to wait for unspecified innovations to come? All these problems will go away in the indefinite future, so we should do nothing now?

  • +Snip+

    Can someone mod this thread? I’m sure everyone has an opinion about this (I know I do, and obviously so do others), but I think the Fedora mailing list would be better suited to this discussion.

    I think enough points and counterpoints have been made; let’s move on to more relevant CentOS topics.

    Thanks

  • No, what happens is that you call up your ISP to ask them for help blocking off the DDoS attack, and you either get blown off or transferred to their sales department to buy a “solution” to a problem they allow to exist because it brings in extra revenue.

    Your ISP could block this kind of thing at its border. Your ISP could also use their alliances with fellow ISPs to block DDoSes at their source. They do neither.

    It wasn’t removed. fail2ban has *never* been in the stock CentOS package repos. It’s always been a third-party thing.

    Fedora has it, but that’s not the same thing as saying “Red Hat removed it from RHEL.”

    What do you think biological parasites are, then, if not fauna using your body to sustain themselves because your body can’t destroy them fast enough?

    Computer worms, viruses, and trojans are computer diseases.

    Well, I do generally take a libertarian stance on things, but there is a limit on fobbing everything off on personal responsibility. Society should be able to impose a certain level of sensible limits on some things.

    CentOS is our society in this context. It is the group we choose to be a member of, which sets the ground rules and provides the resources we use. It is perfectly legitimate for us to decide it should support us better by default.

    How’s that working out in your personal life? Is Uncle Bob a virus-fighting crusader these days, 20 years after the commercial Internet got started? Surely all of your relatives are fully trained up by now?

    And yet, people continue to not do backups, and fail to test the backups they do make.

    So, Apple came out with Time Machine, and Microsoft cloned it in Windows 8, calling it File History.

    Are these bad features, because people should have known better already?

    Go read “The Better Angels of Our Nature”, by Steven Pinker:

    https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature

    There’s plenty to argue with in his conclusions and data, but the book does at least neatly wrap up a huge serving of “the world is a whole lot different today than it once was.”

    Three decades ago, network security was nonexistent. There were X Window programs that would run an animation from my computer across my screen, then across your screen, and then across all the other screens in the computer lab. All with zero need to lower any security barriers. Then we had rlogin, rcp, and completely-insecure NFS.

    Two decades ago, best security practice was deny-by-configuration. Turn off services you aren’t using, use tcpwrappers to block known bad actors, etc.

    Then we moved to allow-by-default firewalls, and then to deny-by-default firewalls.

    Now we’re moving toward encrypt-everything and 2FA apps in everyone’s pocket.

    No change?!

    Current thinking is that human intelligence hasn’t increased — or decreased! — at all in many thousands of years.

    What *has* changed is that the scope of individual expertise has continually shrunk.

    I no longer have to know how to knap my own stone axes because I can buy a camp hatchet from Amazon to split the wood I buy at the convenience store on the way to the campground, which has paved roads, an enclosed privy, concrete pads for the picnic tables, and enclosed fire pits with cooking grates.

    And we call all of that “primitive living” today!

    This would be pure luxury to a Stone Age person, but Computer Age me probably couldn’t reproduce any of it on my own. I spend my life acquiring expertise in other things.

    I should no longer have to do my own arithmetic to figure out what kind of password I should be using for my computers. The computer is perfectly capable of doing that arithmetic for me.

    You completely missed the Disneyland measles outbreak story, didn’t you?

    The Internet hasn’t been a meritocracy since 1993:

    https://en.wikipedia.org/wiki/Eternal_September
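
    The “the computer is perfectly capable of doing that arithmetic for me” point above is easy to make concrete. A small sketch (Python used for illustration; the 62-symbol alphabet, letters plus digits, is an assumption):

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def random_password(length: int, alphabet: str = ALPHABET) -> str:
    """Pick each character uniformly at random, so the entropy is exact."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string of this shape."""
    return length * math.log2(alphabet_size)

print(random_password(9), round(entropy_bits(len(ALPHABET), 9), 1))
# a 9-character random password over 62 symbols carries about 53.6 bits
```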

  • I got DDoS’d over a stupid email list spat a few years ago (I banned an obnoxious spammer; he picked up my email address from past messages’ ‘received from’ headers). It just about knocked my ISP’s backbone connections (a couple of DS3s) totally offline. The ISP apologized, but said that if it happens again, “we can’t afford to keep you as a customer.”


    john r pierce, recycling bits in santa cruz

  • Really, I must not, even though it’s roughly 80/20 OS X to Fedora…

    Spare me. The fact that it is imperfect is meaningless to the discussion. The original argument was that security increases always cause user inconvenience. That is not true. Millions of users run tens of thousands of applications in an ecosystem they see no problem with, unaware that those applications are code signed, and with no concern at all about the alternatives. Good for them: they’re safer than they would be without code signing, and their lives have not been made inconvenient as a result.

    This needs to be expanded, made easier, and made more open, so that it’s not just customers of proprietary software who benefit from stronger security measures with minimal usability impact.

    OK one of us must have the self control to stop, because your arguments are terrible and I’m losing patience.

    What you just claimed has nothing to do with encryption. It has everything to do with Apple simply not treating their devices as mass storage devices, which they haven’t done since forever, even without encryption.

    And Android is the same. Whether encrypted or not, it’s not a mass storage device, you can’t mount the file system. It supports MTP, whether encrypted or not. JFC….

    Ok well I consider passwords that keep the dog out and probably most family members to be security theater.

    No fail2ban, no firewall rules, sshd enabled by default, ChallengeResponseAuth enabled by default, and a 9 character (even random) passphrase, and that shit is going to get busted into. Against a targeted attack by a botnet, you need something stronger than a 9 character password today, let alone 6 years from now.

    Those other measures need to get better (PKA only, put it behind a VPN). Not the password getting slightly longer.

    ATMs and credit cards in the U.S. The weak link is the magnetic stripe, not the 4 digit PIN. The enhancement for credit cards due this year is not 5 or 6 digit PINs. It’s EMV chips. And the end user will be minimally affected in terms of usability, the security will be vastly better than even if 5 or 6 digit PINs were employed and besides no one would accept that anyway.

    And that’s where we are with computers and passwords.

    No they just get better, like they have been, at an exponential rate compared to our ability to recall login passwords.

    Define strong. Diceware puts the minimum for large botnet protection at 5 word passphrases. 6 word passphrases for protection against a government entity. Your idea of strong thus far is 9 characters which seems to be b.s. today and certainly laughable in 6 years when we do the autopsy on today’s policy successes and failures.

    I did say disable sshd by default, along with several other suggestions, many of which could be done right now. That you gloss over all of this and turn it into a pile of crap leading questions is fairly disqualifying in debate. Each suggestion has greater security efficacy than a 2-3 character increase in password length.


    Chris Murphy
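
    Since Diceware keeps coming up, the arithmetic behind those word counts is easy to check (a sketch; 7776 is the size of the standard Diceware word list):

```python
import math

LIST_SIZE = 7776  # standard Diceware list: 6^5 words

def diceware_bits(n_words: int) -> float:
    """Entropy in bits of an n-word passphrase drawn uniformly from the list."""
    return n_words * math.log2(LIST_SIZE)

for n in (5, 6):
    print(f"{n} words: {diceware_bits(n):.1f} bits")
# 5 words is about 64.6 bits; 6 words, about 77.5
```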

  • If the Windows fix was firewall on by default, why isn’t that the appropriate “fix” for Linux distros? Why mess with the password strength or which daemons are running?

    Seems like it adds the necessary step of “STOP: if you turn this off, you’d better know what you’re doing”, without messing around with default settings of packages and/or password library configuration files.


    Nate

  • if sshd is firewalled by default, why even run it?


    john r pierce, recycling bits in santa cruz

  • ChallengeResponseAuth is not on by default, on Red Hat derived systems.
    I’m pretty sure that was already clarified, much earlier in this thread.

    6 years from now, the maximum speed of guessing passwords against an SSH
    server will be exactly the same as it is today. The server imposes delays on failure and caps on the number of concurrent connections. With those mechanisms, the guess rate is constant.

    I’ve read your references to diceware here and earlier in this thread, and I’m pretty sure you don’t understand it. Their page makes the purpose clear: “Short passwords are OK for logging onto computer systems that are programmed to detect multiple incorrect guesses and protect the stored passwords properly, but they are not safe for use with encryption systems.”

    Diceware is intended to help you generate passphrases that you will use to protect an encryption key, such that an offline attack against that passphrase is unfeasible.

    You appear to be advocating for significantly longer passwords for authentication, but as diceware makes clear, online attacks are already mitigated by rate limits enforced by the server. Offline attacks, such as diceware is intended to thwart, are only possible if the attacker has your password file. In which case they already have root. In which case they don’t really need to crack your passwords.

    So, unless I misread you, can we let this thread die out?
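
    The online/offline distinction above is easy to quantify with back-of-the-envelope arithmetic. A sketch (both rates are assumptions chosen for illustration, not measurements):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumed rates, for illustration only:
online_rate = 10        # guesses/sec past sshd's failure delays -- generous
offline_rate = 10**10   # guesses/sec against a stolen hash on GPU hardware

keyspace = 62 ** 9      # 9 random characters over letters + digits

years_online = keyspace / online_rate / SECONDS_PER_YEAR
years_offline = keyspace / offline_rate / SECONDS_PER_YEAR
print(f"online: {years_online:.1e} years; offline: {years_offline:.3f} years")
# tens of millions of years online, versus a couple of weeks offline
```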

  • Linux users take a lot more care, and pride, in maintaining their systems well and reading the daily logs too. Most increase security on their machines. They have no wish to lose their work, their time and effort investment and their increased Linux-enabled productivity.

    If you want to offer advice and save the world from spam, viruses, Trojans and other crap, please join a Windoze forum and make your positive contribution there ;-)

    Thanks,

  • Oh no they will not if incoming sshd is restricted to a very few IP
    addresses. A properly configured firewall always helps; selinux too. Closing down or moving ports also helps.
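
    One way to express that restriction with firewalld on a CentOS 7 box (a sketch; the subnet is a placeholder, adapt it to your actual management network):

```shell
# Allow SSH only from a trusted management subnet, then drop the open rule.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" service name="ssh" accept'
firewall-cmd --permanent --remove-service=ssh
firewall-cmd --reload
```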

  • Trust and Google are mutually incompatible ;-)

    That’s my €0.02


    Regards,

    Paul. England, EU. England’s place is in the European Union.

  • I think Chris is using “challenge response auth” as a synonym for “everything except public key auth” since CRA can be an umbrella auth method for just about every type of authentication, via PAM.

    At bottom, I blame OpenSSH for this confusion. They should have named the pref something else, like TunneledAuth or RFC4256Auth.

    Then we could use the term “challenge/response” in the narrow way I defined it earlier in the thread.

    I’ve only been talking about the online attack scenario, but Chris keeps wanting to go back to the offline scenario. Basically, he’s assuming attackers will have a copy of /etc/shadow.

    It’s also useful on public web sites, since you don’t know if there might someday be a SQL injection attack that can pull the users table, which may not even be salted, much less run through a KDF.

    Since that is not what this proposed Fedora change is trying to address, I don’t see why we need to even be talking about Diceware in this thread.
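
    For anyone lost in the knob names, these are the three sshd_config switches being conflated in this thread (the directive names are real; the values merely illustrate the distinction):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes             # the "publickey" method (PKA)
PasswordAuthentication yes           # the plain "password" method
ChallengeResponseAuthentication no   # "keyboard-interactive" (RFC 4256), via PAM
```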

    So your motherboards and NICs can ‘call home’ on a regular basis, and you would not mind if they did?

    There is, in my opinion, a fundamental difference between accepting the possibility of vendor installed trojans on hosts that may never be connected to an external network and adopting an infrastructure that depends upon such behaviour.

    One’s risk tolerance varies according to the perceived value of the asset to be protected. The problem that Google, Amazon, NSA, FSB, GCHQ, CCSE and the rest pose to the average person is that the average person has no idea how to value pervasive recording of their private activities. Thus there is no basis upon which they may form a reasonable risk assessment, and therefore no reasonable estimate of the acceptable cost of prevention can be made.

    Consequently this promotes the prevalence of what amount to folk-remedy security measures: mainly virus scanners (most of dubious or no worth); master password protection schemes (many of which require you to reveal all of your passwords to third parties); and of course consumer-grade two-factor authentication schemes that just happen to require revealing your private cell phone number to commercial enterprises. The common elements of all of these are low cost, dubious efficacy, hidden defects, and consumer ignorance.

    I have a router at home that ‘talks’ to both my ISP and its manufacturer on a regular basis, regardless of whether or not there is active traffic on the external circuit. That behaviour is why all of my home traffic, internal and external, goes via an SSH pipe established through a system placed in front of the router.

    But how many consumers, and keep in mind that my ISP is one of the largest telecoms in the world, would even dream that such things happen? Much less take steps to thwart that surveillance? Or even know what steps are possible?

    This sort of stuff should be out and out illegal. But, as the router is the ‘property’ of the telecom, it is up to them what they wish to have it do, and the consumer’s choice is to put up with that or do without.

    We are living in the golden age of snake-oil technology. Which, as the governments of the world have become addicted to surveillance of their subjects (one cannot really call citizens those so treated by their rulers), is unlikely to change for a generation or more. It took more than 100 years of consumer activism to change advertising and product safety laws, and those are yet far from perfect. I am not convinced that effective data security laws will prove any easier to establish, or be accomplished any sooner.

    Which is why I consider discussion of password strength nothing more than a pointless diversion of attention from the real issues of data security and network integrity. It is a discussion truly representative of our ‘security theatre’ industry: expensive and irrelevant. In system design we call this stuff ‘bike-shedding’.
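
    The SSH-pipe-in-front-of-the-router setup described above can be sketched in an OpenSSH client config (the host name, user, and port are placeholders; DynamicForward is the real directive):

```
# ~/.ssh/config -- route local traffic through a SOCKS listener on a trusted box
Host frontbox
    HostName frontbox.example.com
    User me
    DynamicForward 1080    # local SOCKS5 proxy; point browsers/apps at it
```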

    Not always. I bet you are. But I support a big group of scientists, and when choosing Linux for a laptop they usually pick Ubuntu. Guess why?
    Because everything just works; there is no need to invest (waste, actually) their precious time fiddling with the system. They can just do their science. I don’t intend to blame anybody for what they do or how they do it. I just would like everybody to realize the reality.

    Valeri

  • For a couple of distros AFAICT, that IS the default — with some sort of firewall, whether it be iptables or firewalld, installed and activated right out of the box.

    As far as password bangers, well, I always find denyhosts to be an INVALUABLE tool and always make it a part of ANY Linux build that I set up.


    The main lesson of history is that people never learn the lessons of history.
    (I refer to known dictatorships collecting all possible information about everybody; still we, “free people”, don’t care.)

    The ISP will still collect information about your traffic destinations, since they know where the packets from your front box go (their equipment is what sends that traffic there). There are ways to thwart that; the Tor project is the first that comes to my mind.

    This illegal activity is a crime I have never heard of any politician being punished for. 100 years is infinity for me (I will not live that long). But I agree, let’s at least try to do something.

    Valeri

    and where’s that? how could a default rule know the difference between ‘outside’ and ‘inside’ without knowing specifics about your LAN/WAN configuration? many of my linux systems are in colo centers where the LAN is unprotected, with the public internet delivered directly to the server, and SSH is the only way I access those servers to manage them. yet others are on a corporate WAN which has many subnets. in neither of these cases would a default rule on SSH access be appropriate.