Low Random Entropy

I am used to low random entropy on my ARM boards, not on an x86 box.

On my Lenovo x120e,

cat /proc/sys/kernel/random/entropy_avail

reports 3190 bits of entropy.

On my ARMv7 boards with CentOS 7 I would get ~130 unless I installed rng-tools, and then I get ~1300. SSH into one and it drops back to 30 for a few minutes! Sigh.

Anyway, on my new Zotac nano AD12 with a dual-core AMD E-1800, I am seeing 180.

I installed rng-tools and saw no change. Does anyone here know how to improve the random entropy?
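
For reference, this is roughly what I have been checking and trying (CentOS 7 / systemd assumed); the hw_random check is my guess at why rngd makes no difference here, i.e. no usable hardware RNG source on this board:

    cat /proc/sys/kernel/random/entropy_avail
    cat /sys/class/misc/hw_random/rng_available   # hardware sources rngd could feed from, if any
    yum install rng-tools
    systemctl enable rngd
    systemctl start rngd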

thanks

17 thoughts on - Low Random Entropy

  • Another option involves open-source hardware: http://onerng.info/ is a hardware entropy generator. There is lots of discussion there on how it works, and why.

    The one I have doesn’t seem to work on a USB 3.0 port (on my desktop PC), but that may not be its fault.

  • WOW!!!

    installed, enabled, and started.

    Entropy jumped from ~130 bits to ~2000 bits

    thanks

    Note to anyone running a web server or creating certs: you need entropy. Without it your keys are weak and attackable, probably even already known.
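
    For anyone who wants to repeat this, the steps were roughly the following (assuming the haveged package from EPEL on CentOS 7):

        yum install haveged
        systemctl enable haveged
        systemctl start haveged
        cat /proc/sys/kernel/random/entropy_avail   # ~2000 here afterwards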

  • Indeed. Installing haveged is the first thing I do when setting up a new CentOS 7 machine.

    Rebooting and verifying it starts on boot is the second.
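
    Roughly, the post-reboot check looks like this (unit name assuming the stock EPEL haveged package):

        systemctl is-enabled haveged                  # should print "enabled"
        systemctl status haveged --no-pager
        cat /proc/sys/kernel/random/entropy_avail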

  • In article <792718e8-f403-1dea-367d-977b157af82c@htt-consult.com>, Robert Moskowitz wrote:

    Interesting. I just did a quick check of the various servers I support, and noticed that all the CentOS 5 and 6 systems report entropy in the low hundreds of bits, but all the CentOS 4 systems and the one old FC3 system report over 3000 bits.

    Since they were all pretty much stock installs, what difference between the versions might explain what I observed?

    Cheers Tony

  • This is partly why so many certs found in the U of Mich study are weak and factorable. So many systems have inadequate entropy for the generation of the key pairs used in TLS certs. Worst are certs created in the firstboot process, where at times there is no entropy, but firstboot still creates its certs.

  • So there are mitigations. The question really is: why hasn’t Red Hat made these mitigations the default for their enterprise products? Maybe there are other influences we are unaware of. It seems like a huge hole. With SSL/TLS being mandated by Google et al., every device needs access to entropy.

  • Who said upstream hasn’t done something? :-) The artifacts mentioned are the evidence that they are in place and in use by default. BTW, any crypto-sensitive task needs other prerequisites besides entropy. So I recommend checking your own systems to understand what upstream, and the crypto functions/libraries, actually do.

  • The challenge is that this is so system dependent. Some systems are just fine with a stock install. Others need rng-tools. Still others need haveged. If Red Hat were to do anything, it would be to stop making the default cert during firstboot. Rather, spin off a one-time process that would wait until there was enough entropy and then create the default cert (roughly like the sketch at the end of this message). Thing is, I can come up with situations where that can go wrong.

    There are a lot of best practices with certificates and crypto that are not apparent to most admins. I know some things from the crypto work I do (I am the author of the HIP protocol in the IETF). There is just no one-size-fits-all here, and people need to collect clues along with random entropy…
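
    A very rough sketch of that one-shot idea (the threshold, key/cert paths, and subject below are made up for illustration, not what firstboot actually does):

        #!/bin/sh
        # Block until the kernel pool looks reasonably full, then create the
        # default self-signed cert.
        while [ "$(cat /proc/sys/kernel/random/entropy_avail)" -lt 1000 ]; do
            sleep 5
        done
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -subj "/CN=$(hostname)" \
            -keyout /etc/pki/tls/private/localhost.key \
            -out /etc/pki/tls/certs/localhost.crt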

  • OK, that makes sense. I’ve been an admin on Linux servers for about 18 years, understand the basics, and use certificates for web and email servers. This thread has exposed an area that I’m only peripherally aware of: the need to generate keys with sufficient entropy, so that an observer of the traffic crossing the internet cannot reverse engineer the keys used. I still fail to see why every server and workstation is not set up to do this at some minimum level. I guess Linux out of the box does this; the issue is that the minimum from just the basic kernel on most hardware is too little with today’s ability to crack ciphers.

    Is there some practical guideline out there that puts this in terms that don’t require a PhD in mathematics to understand and implement?

    For instance, I have set up and run mail servers for nearly two decades, only in the last 10+ years with certificates and mandated SSL/TLS. The issue of low random entropy is relevant here, yet until this thread I hadn’t taken steps to resolve it.

  • This default cert is not valid anyway, and as a random source they use:

    “/proc/apm:/proc/cpuinfo:/proc/dma:/proc/filesystems:/proc/interrupts:/proc/ioports:/proc/pci:/proc/rtc:/proc/uptime”
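
    Presumably that colon-separated list is just handed to openssl’s -rand option when the key is generated, i.e. something like this (my guess at the invocation, not copied from the actual script):

        openssl genrsa -rand /proc/apm:/proc/cpuinfo:/proc/dma:/proc/filesystems:/proc/interrupts:/proc/ioports:/proc/pci:/proc/rtc:/proc/uptime 2048 > localhost.key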

  • Not valid in what way? Yes, the Subject and Issuer names are dorky, but how is the cert not valid?

  • You raise an important point. Alice Wonder earlier said she installs haveged on all her servers as best practice. It is hard to fault that approach…

    I am one of the people that make your life difficult. I design secure protocols. I co-chaired the original IPsec work. I created HIP which was used as ‘lessons learned’ for IKEv2. I contributed to IEEE 802.11i which gave us AES-CCM and 802.1AE which gave us AES-GCM. And I wrote the attack on WiFi WPA-PSK because implementors were not following the guidelines in the spec.

    When we are designing these protocols, we talk to the REAL
    cryptographers and work out: ‘oh we need a 256 bit random nonce here and a 32 bit random IV there.’ We end up needing lots of randomness in our protocols. Then we toss the spec over the wall to get someone to code it.

    Fortunately for the coders, the cryptographers have recognized that the EEs cannot really create real randomness (when we did 802.11i, appendix H had how to build a ring oscillator as a random source. Don’t get me going about what Bluetooth did wrong.). So the history of Pseudo Random Generators is long and storied. But a PRNG still needs a good random seed. Don’t get me started on failures on this and lessons learned. I
    worked for ICSAlabs for 14 years and we saw many a broken implementation.

    So here we are with ‘modern’ Linux (which Fedora version is CentOS built from?). We know that no board design can feed real random bits as fast as our protocols may need them. Or at least we probably cannot afford such a board. So first you need a good random harvester. Then a good PRNG. How does RH implement the protocols? I have no idea. I just contribute to the problem, not the solution.

    All that said, it looks like there are basic tools like rng-tools to install to work with board-level RNG functions. Then there is haveged, which harvests timing jitter from the processor itself.

    All this said, I should probably write something up and offer it to CentOS-docs. I need to talk to a few people…

    Bob

  • Not valid in terms of PKI validation, which is not the same as verification in the sense of “it works”.

  • It is a self-signed cert with no identity. So you know you are consistently, securely, talking with something you know nothing about. But the cert is valid per X.509 rules. It is valid as a PKIX self-signed cert. Just of limited value. And too many systems never replace it.

    BTW, I use self-signed certs a lot. They have meaningful names, and it is up to various parties as to how they use them.
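
    For example, something along these lines (the names are just placeholders):

        openssl req -x509 -newkey rsa:2048 -nodes -days 730 \
            -subj "/O=Example Lab/CN=mail.example.com" \
            -keyout mail.example.com.key -out mail.example.com.crt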

  • Oh, and in my 8 years with Verizon, I worked with the Cybertrust PKI
    team. And I am the author of the Bridge CA model used in the US Fed PKI, the BioPharm PKI and a few others.

    I despair about what it costs to do decent Identity management. And how meaningless much of it still is.