Can We Trust RedHat Encryption Tools?


Recently I have been deeply troubled by evidence revealing the degree to which U.S.-based corporations (indeed, corporations resident in any of the so-called Five Eyes countries) appear to have rolled over and assumed the position with respect to NSA-inspired pressure to cripple public-key encryption and facilitate intrusions into their software products. This has engendered in me a significant degree of doubt surrounding the integrity of RHEL, and therefore of CentOS, since it claims to be a bug-for-bug, and therefore an exploit-for-exploit, copy of RHEL.

Reinforcing my doubt is the tale surrounding the long-outstanding bug report respecting OpenSSL (https://bugzilla.redhat.com/show_bug.cgi?id19901), opened in October of 2007. This problem was only recently addressed, and then only after a good deal of pointed public questioning by numerous security commentators. RedHat’s reference to ‘patent’ issues surrounding this ‘bug’ is unsubstantiated by any documented evidence. The only response justifying RedHat’s lack of movement is some hand-waving about corporate legal opinion. Despite suggestive language by some RH employees (https://bugzilla.redhat.com/show_bug.cgi?ida2265#c3), the exact nature of the patent legal problem was never specifically laid out for public comment. Equally troubling to me is the complete lack of any information on what patent issue was finally resolved, and how it was resolved, so that the related bugs could be fixed.

As patents (with very, very few exceptions) are by their very nature not secret, one wonders if the so-called legal problem was of a fundamentally different nature: no less real, but somewhat less savoury from a PR standpoint.

In consequence, after a good deal of agonizing over what was within my means to do, I have spent the weekend rebuilding Apache httpd from Apache sources to obtain TLSv1.2. While I still do not have a working copy (yet), I did learn a great deal about how RH’s back-porting patch policy appears to work. But in the process of researching how to get this package built, I ran across a number of discussions respecting RedHat and OpenSSL, which is the fundamental layer upon which PKI rests
(http://www.linuxadvocates.com/2013/09/is-openssls-cryptography-broken.html). None of them were very comforting.

Where this discourse is leading is to the question of whether or not CentOS
should provide OpenSSL built from clean sources as an extra or plus package, and perhaps httpd, sshd, ssh-client and related PKI-based/reliant packages as well. Similarly, should CentOS.org provide tested spec files that would give individual system admins a simple method of building these packages from source?
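As a concrete illustration of what any “build from clean sources” workflow has to start with: verifying that the downloaded tarball is the one upstream actually published. A minimal sketch in Python; the file name and expected checksum below are placeholders, not real OpenSSL release values, and in practice one would also verify the release signature with gpg:

```python
import hashlib

def sha256_file(path, chunk=65536):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify(path, expected_hex):
    """True only if the file matches the published checksum."""
    return sha256_file(path) == expected_hex

# Hypothetical usage -- names and digest are illustrative only:
# verify("openssl-1.0.1f.tar.gz", "ab12...published-checksum...cd34")
```

This is the same check `sha256sum -c` performs; the point is simply that a trust argument about “clean sources” begins with proving which sources you have.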

I think that CentOS.org probably should provide this, but I am afraid that I
cannot make a strong public case. Suffice it to say that my belief is informed by previous personal experience with federal agencies’ investigative techniques, and by the all-too-frequent willingness of commercial interests to take the road of least resistance when pressured, particularly where the spectres of expensive litigation and targeted regulatory enforcement loom in the background.

I believe that the issue is of pressing interest to the entire community and I
would like to read what others have to say on the matter.

40 thoughts on - Can We Trust RedHat Encryption Tools?

  • James B. Byrne wrote:

    I agree, but I just don’t know how much in the way of man-hours that would be involved.

    However, if you do get it all built, and build packages out of them, there is an extras? contribs? repo, and I’d encourage you to submit it for that.

    mark

  • RHEL nowadays already supports Elliptic Curve in OpenSSL.

  • Eero Volotinen wrote:

    Um, I guess you haven’t read the news lately – the most widely used, NIST-standardised elliptic-curve random number generator is backdoored by the US NSA – when the standards committee was writing the standard, they pushed the backdoored version.

    <https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html>
    As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about. We know this has happened historically: CryptoAG and Lotus Notes are the most public examples, and there is evidence of a back door in Windows. A few people have told me some recent stories about their experiences, and I plan to write about them soon. Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it’s explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program
    — end excerpt –
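    The “making the random number generator less random” sabotage described in the excerpt can be illustrated with a toy sketch. All numbers here are hypothetical, not any real product’s flaw: the keys look like ordinary 128-bit values, but the generator secretly draws them from only 2**16 seeds, a space the attacker can enumerate in moments.

```python
import random

SEED_BITS = 16  # the sabotaged generator's *real* entropy

def sabotaged_key():
    """Looks like a 128-bit key, but only 2**16 values are possible."""
    seed = random.randrange(2 ** SEED_BITS)
    return random.Random(seed).getrandbits(128)

def attacker_recover(key):
    """Anyone who knows the flaw simply enumerates the tiny seed space."""
    for seed in range(2 ** SEED_BITS):
        if random.Random(seed).getrandbits(128) == key:
            return seed
    return None

key = sabotaged_key()
print(attacker_recover(key) is not None)  # True: 2**16 guesses, not 2**128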

  • Which completely misses the point.

    First, the initial settings of the EC are significant in determining the strength of the resulting cipher. There is considerable evidence that suggests that some of these default settings have been proposed by or adopted on behalf of interests that would benefit from having an easily compromised encryption technique. While the algorithm may be strong a carefully crafted initial setting might be all it takes to render it vulnerable.
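    The point about initial settings can be made with a toy discrete-log example. This is not real ECC and every parameter here is hypothetical; the idea is only that the very same algorithm is strong or trivial depending entirely on the constants chosen for it.

```python
def brute_force_dlog(g, h, p):
    """Find x with g**x == h (mod p) by exhaustive search."""
    x, acc = 0, 1
    while acc != h:
        acc = (acc * g) % p
        x += 1
        if x > p:
            raise ValueError("no solution")
    return x

# With a deliberately tiny group (toy parameters), the "hard" problem
# underlying the cipher falls to a trivial loop:
p = 101          # tiny prime, for illustration only
g = 2            # a generator modulo 101
secret = 53
h = pow(g, secret, p)
print(brute_force_dlog(g, h, p))  # → 53: the secret exponent, recovered
```

    Real curve parameters interact in far subtler ways, but the principle stands: whoever chooses the constants can, in the worst case, choose them weak.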

    Second, the delay in providing ECC, taken together with the abrupt and unexplained resolution of this matter following Snowden’s revelations respecting the complicity of commercial entities in furthering illicit surveillance, raises my suspicion that there is more to this than meets the eye.

    We are talking about a matter of trust, and I am afraid to say that my suspicion of the motives of large commercial enterprises in such matters looms large in my thinking. If it turns out to be the case that RH withheld ECC from its users because of the pressure of some external interest, we cannot be certain that this was the only item affected.

    I am really at a loss as to how to proceed. Do I move off CentOS entirely?
    Where to? What other distribution of similar stature exists that is not itself subject to exactly the same forces that may have been brought to bear on RedHat?

  • I am doing a bit of investigative work to see just how hard it is to build openssl for myself. The source from openssl.org is readily available and the spec file provided seems fairly usable. However, I am seeing lots of errors similar to this when I try to build it using mock:

    + /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
    /usr/bin/strip: unable to copy file
    ‘/builddir/build/BUILDROOT/openssl-1.0.1f-1.x86_64/usr/lib64/libcrypto.so.1.0.0’;
    reason: Permission denied

    What am I tripping over?

  • James B. Byrne wrote:

    Looks like it’s trying to install it, not just build it. In the first example, you’re trying to replace the existing /usr/bin/strip, which only root can do. Are you doing make, or make install?

    mark

  • After all the news about backdoors, “planted” bugs or weakened standards in apps, in routers, hardware firmwares, etc… these days, can we trust anything?
    Can we trust the bios?

    Can we trust the compiler not to stealthily inject a backdoor into the compiled version of clean code? Given that most entries in The International Obfuscated C Code Contest (http://www.ioccc.org/) look (at least to me) like magic, any average dev would not (be able to) see evil code in the middle of it… And it is not only an NSA/USA thing, since it seems many countries are cooperating or doing the same…

    For example, in the middle of the Snowden revelations, France just passed a blanket spying law (without judicial supervision)!

    Anyway, I think that having a 100% trustable environment is more and more a utopia.

    What? Pessimistic? Me? Yep!

    JD

  • John Doe wrote:

    One thing on the positive side: the last few months, I think a *lot* of folks have been eyeballing this stuff, specifically looking for issues, and probably some are going back to things about which they said “I dunno… but I’ll come back to look at this someday”. I *suspect* that within about six months, it’ll be as relatively safe as it was maybe 10 years ago.

    Of course, we’ll need some wakeup call to look at it all again in 10
    years. In the meantime, I think things are getting safer, relatively.

    Hmmmm, speaking of BIOS, wonder if this will impact the push for UEFI….

    mark

  • I started out by using the openssl.spec file for openssl-1.0.1f directly from openssl.org. The contents of that file are provided at http://git.openssl.org/gitweb/?p=openssl.git;a=blob_plain;f=openssl.spec;hb=HEAD. What I then do is download the source from openssl.org, put it into
    ~/rpmbuild/SOURCES, and extract it. I then copy
    ~/rpmbuild/SOURCES/openssl-1.0.1f/openssl.spec into ~/rpmbuild/SPECS and run the following commands:

    mock --buildsrpm --sources=./rpmbuild/SOURCES \
      --spec=./rpmbuild/SPECS/openssl.spec

    mock --no-clean --rebuild --root
  • Yeah, didn’t Ken Thompson (in “Reflections on Trusting Trust”) describe modifying the C compiler to insert a backdoor for him whenever the compiler saw login.c being compiled?

  • You are underestimating government agencies. I think they’d go for a backdoor in the CPU itself – harder to find, and only a few companies to corrupt to manage it.

  • I think everyone should assume the entire ecosystem is compromised and shouldn’t trust anything. Code should be reviewed and bugs/weaknesses removed IMMEDIATELY. The problem is obviously not everyone is a programmer and not everyone will have the knowledge to understand how to fix/improve the security issues. Of course, some software is still good, but who’s going to verify that and when? If you don’t use free software, you’re a goner because now you have no ability whatsoever to audit the code!

    We can’t trust the software or the hardware any longer. When the problem runs this deep, what can anyone do? The NSA program has effectively removed my trust in every single U.S.- (actually, Five Eyes-) based tech company.

    I can only imagine what RMS thinks about all of this. If he hadn’t fought for so long for free software, we would all truly be up shits creek.

    Don’t trust proprietary anything. Use free software – it’ll be fixed sooner and properly before anything else.

  • If you start with the assumption that Intel/AMD processors and the gcc compiler are backdoored, what’s left, US or not?

  • I’ve programmed for 40 years, and I don’t understand encryption algorithms nor can I evaluate their strengths and weaknesses. I know very few programmers who can. None personally, in fact.

  • I always just assumed that blowfish was good precisely because it wasn’t the one that was recommended/promoted by the groups likely to be compromised. But, I try to stay out of politics so I don’t worry much about keeping secrets anyway.

  • It might be easier to compromise the security of commercial products, as source code is not available.

  • they seem to have succeeded in compromising STANDARDS and ALGORITHMS, to heck with implementations.

  • I work with real cryptographers. I do not consider myself one. I am a crypto protocol designer; a different breed. You basically trust the math and the arguments put forward by the real cryptographers. There is LOTS of public review and comment. But we recognize that the largest employer of mathematicians is the NSA. If there is an exploitable lever, they will know about it before we will; I had a real experience with this back with IPsec and the implicit-IV ESP proposal.

    So some programmer has to take the math for the crypto algorithms and implement it correctly. In many cases, this ends up being done at least in firmware, and in some cases actual chips (I work mostly, these days, with sensors). Then you have to trust the likes of me to design the crypto protocol right. There are lots of subtle traps here; I have the scars to show it. Then programmers again have to take our crypto protocols and do them right….

    You get the picture.

    If you do not trust the NIST (read NSA) EC curves, you have two choices. Dan Bernstein’s curves (Dan is a long-time anti guy, and Bruce Schneier is a long-time friend of Dan, and me). Or the Brainpool curves; they are published in an RFC (seems good to me, and I have heard some good references on their work).

    But really, the NIST curves have been under extensive review. They are used both by governments and banking; NSA knows that if they can figure out weaknesses, so can other large government-funded math teams. The big event was the RNG that NSA had added, and the public community came down on it almost from the get-go.

    You want to talk about leaky code? Look how corporate mail proxies work to enable them to read encrypted emails. Simple lying about certs.
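    The RNG episode mentioned above can be caricatured in a few lines. This is a deliberately crude toy, nothing like the actual construction; the idea is only that a designer-known constant can turn one observed output into a prediction of the entire future stream.

```python
M = 2**31 - 1          # public modulus (hypothetical)
TRAPDOOR = 123456789   # constant known only to the generator's designer

def step(state):
    """Advance the internal state (an ordinary LCG step)."""
    return (state * 48271) % M

def output(state):
    """The 'backdoor': output leaks the state to a trapdoor holder."""
    return state ^ TRAPDOOR

state = 987654321
observed = output(state)

# Designer-side attack: one output reveals the state, hence everything after.
recovered_state = observed ^ TRAPDOOR
predicted = output(step(recovered_state))
actual = output(step(state))
print(predicted == actual)  # True: the whole future stream is predictable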

  • Robert Moskowitz wrote:

    Ah, thanks, Rob, I was about to post that Bruce had recommended something better than his old Blowfish (and yes, I’ve some small acquaintance with Bruce – via GT).

    mark

  • The only algorithm they compromised was an RNG that got a pretty strong thumbs-down from the real cryptographers. They have not compromised any IETF
    standard; maybe kept quiet about a problem, but have not put holes in any. Most of our problems with TLS are implementations and backwards-compatibility options.

  • But didn’t that come later? With nothing else to go on, that would make me think that it is more likely to have been influenced by whatever means corrupted the others.

  • It was back in the heady days of finding a replacement for DES and
    3DES. Rivest had his RC5 (there are calls again for a streaming cipher, and NIST may well ‘pick’ one this year). Kennedy had SAFER+ (used in Bluetooth, but SAFER+ was eliminated for AES because it was highly dependent on your RNG; ask Bluetooth vendors about their RNG).

    The peer review was brutal. Bruce himself will admit to issues found surrounding Twofish. Some question the changes NSA had made to Rijndael, but again, massive peer review. And you see that review regularly. We wanted the GCM mode of operation for IEEE 802.1AE, and NSA offered some tweaks to tighten it up. Just a bit before, (grumble, what was the prof’s name) had made the same recommendation. The big things they help with. Too much public review, and too many profs looking for research for their students (which is why we are moving away from SHA1, even though further work is showing it to be stronger than we thought).
    It is the subtle things around the use of the algorithms and protocols that they go after.

  • I was shocked and horrified to find out that RHEL (and presumably CentOS)
    and Ubuntu no longer implement the ‘rot13’ program.

    Cheers,

    Cliff

  • tr A-Za-z N-ZA-Mn-za-m <infile >outfile

    example…

    $ echo this is a message | tr A-Za-z N-ZA-Mn-za-m
    guvf vf n zrffntr

    $ echo guvf vf n zrffntr | tr A-Za-z N-ZA-Mn-za-m
    this is a message
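    For what it’s worth, the same rot13 transform as the tr example above is a one-liner in Python via the built-in codec:

```python
import codecs

msg = "this is a message"
enc = codecs.encode(msg, "rot13")
print(enc)                          # guvf vf n zrffntr
print(codecs.encode(enc, "rot13"))  # rot13 is its own inverse
```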

  • Thanks! I got similar suggestions when I mentioned this at work. I was of course joking about rot13.

    Cheers,

    Cliff

  • This is quite common. We were discussing this at IETF in Nov. Right now I forget the law which allows employers complete access to employee emails, but as such, when the client asks for the recipient’s cert, the server retrieves it and creates a fake one that is presented to the client. The client encrypts the email and sends it to the server. The server decrypts, stores the content per corporate policy, then encrypts with the appropriate cert. Well, actually it is a bit more than that, as only the symmetric key is encrypted with the cert’s key. This is old stuff for me; I did secure mail a decade ago, and this workaround was well known then.

    Also works well for web clients through the corporate http proxy. Actually it is easier for web transactions than email.
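    One client-side defence against this kind of interception is certificate pinning: the client remembers the fingerprint of the real server certificate and refuses any substitute, however valid its chain looks. A minimal sketch; the byte strings below are stand-ins for real DER-encoded certificates, not actual cert data:

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a (stand-in) DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

# Pin recorded out of band, before any proxy could interpose:
PINNED = fingerprint(b"real-server-certificate-der")

def accept(presented_der: bytes) -> bool:
    """Accept the connection only if the presented cert matches the pin."""
    return fingerprint(presented_der) == PINNED

print(accept(b"real-server-certificate-der"))   # the genuine cert passes
print(accept(b"proxy-forged-certificate-der"))  # the proxy's fake fails
```

    A pinned client makes the proxy’s forged cert fail closed; the trade-off is that legitimate certificate rotation also breaks the pin.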

  • Well, regardless of my thoughts on the ethics of this situation and my opinion about those who do these sorts of things, I have continued to research this issue. I have discovered that there is a great deal of literature respecting the weakness of the RNG and PRNG processes implemented on headless hosts, in particular headless hosts that are virtualised. Given the essential nature of true random number generation to cryptographically secure key creation this represents a significant weak point on such hosts.

    I am not going to reiterate or summarize any of this here because you can find these discussions easily enough via Google. However, I have developed a small script to alleviate the problem to some degree based on the writings and works of others. This requires the epel repository be enabled:

    #!/bin/bash
    cat /proc/sys/kernel/random/entropy_avail
    yum install dieharder haveged rng-tools -q -y
    cat /etc/sysconfig/rngd
    sed -i 's:EXTRAOPTIONS="":EXTRAOPTIONS="-r /dev/urandom":' /etc/sysconfig/rngd
    cat /etc/sysconfig/rngd
    chkconfig --level 2345 haveged on ; chkconfig --level 2345 rngd on
    service haveged start ; service rngd start
    cat /proc/sys/kernel/random/entropy_avail

    This increased the mean amount of entropy present in /dev/random on the systems I installed these packages on from ~176 bits to ~2048 bits.

    I continue to look into other related matters.

  • That’s news to me. Citation?

    Recently, there was a discussion amongst BSD devs and they concluded that they don’t trust hardware RNG either, deciding instead to add their randomness to other sources before going to /dev/random.

    http://arstechnica.com/security/2013/12/we-cannot-trust-intel-and-vias-chip-based-crypto-freebsd-developers-say/

    Lastly, we should all thank this neckbeard who’s been banging the gong all along, and was right:

    http://schestowitz.com/Weblog/archives/2013/07/15/

    -Ben