[https And Self Signed]


But I never mentioned anything about passwords. I quite agree with you with respect to avoiding needless password churn. What I wrote about was specifically user accounts and their expiry dates. These should be short, say six to twelve months or so. When the account expires it can be renewed for another six or twelve months. The password for it is not changed.

One can always write a script to automatically search for and report pending expirations, so there is no real need for active accounts to actually lapse. But even if accounts do expire for active users, it is not much of a hardship to report the fact and have them reactivated. Disused accounts, on the other hand, never get reported and remain deactivated.

Also, when a person leaves our employ and the cancellation of all or some of their accounts somehow gets overlooked in the out-processing, their accounts will shortly be deactivated automatically. A fail-safe mechanism.

Https And Self Signed


I followed the instructions here https://wiki.CentOS.org/HowTos/Https

Checking port 80, I get the file:

curl http://localhost/file.html

Working.

Checking port 443, I get an error:

curl https://localhost/file.html

curl: (60) Peer’s certificate issuer has been marked as not trusted by the user. More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

How do I get past this? I was looking to just self-sign for https.

Yes, I can add --insecure for curl, but my other app doesn't seem to work either; it is perhaps getting the same error message instead of the actual file.

Thanks,

jerry
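A minimal sketch of one way past this without disabling verification, assuming the self-signed certificate from the HowTo ended up at /etc/pki/tls/certs/localhost.crt (the default mod_ssl location on CentOS; adjust the path to wherever your cert actually lives): since a self-signed cert is its own issuer, you can hand it to curl as the CA bundle.

$ curl --cacert /etc/pki/tls/certs/localhost.crt https://localhost/file.html

Note that curl will still insist that the name in the certificate matches the host in the URL, so the cert's CN/subjectAltName needs to cover "localhost" for this exact command to succeed.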

61 thoughts on - Https And Self Signed

  • Today, I would prefer Let’s Encrypt:

    https://letsencrypt.org/

    It is philosophically aligned with the open source software world, rather than acting as bait for a company that would prefer to sell you a cert instead.

    I’m only aware of one case where you absolutely cannot use Let’s Encrypt, but it also affects the other public CAs: you can’t get a publicly-trusted cert for a machine without a publicly-recognized and -visible domain name. For that, you still need to use self-signed certs or certs signed by a private CA.
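    A minimal sketch of what using Let's Encrypt looks like in practice, assuming the certbot client from EPEL and an Apache vhost already answering for example.com on port 80 (the domain, package names and web server are placeholders for your own setup):

    $ sudo yum install certbot python2-certbot-apache
    $ sudo certbot --apache -d example.com
    # test the renewal that is meant to run unattended later
    $ sudo certbot renew --dry-run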

  • Because of all the security holes people have been finding in TLS, libraries implementing the client side of TLS are getting increasingly intolerant of risky configurations.

    It’s too bad, because self-signed certificates are only unusual on the public Internet. I wish the designers of TLS had included a flag in the cert that let it declare that it was only to be trusted on a private intranet by clients of that same intranet.

    For example, instead of declaring that the given server is foo.example.com, it would be nice if you could generate a self-signed cert that declares that it is for 172.16.69.42, and that any host on 172.16.69.0/24 should trust it implicitly.

    Such a cert could not be used to prove identity, prevent spoofing, or prevent MITM attacks, but it would give a way to set up encryption, which is often all you actually want. MITM attacks could be largely prevented with certificate pinning.
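    As a rough sketch of the nearest thing you can do today, you can at least put the private IP address into the certificate's subjectAltName so that clients which already trust the cert don't also trip over a name mismatch. This assumes OpenSSL 1.1.1 or newer for the -addext option; older releases need the SAN spelled out in a config file instead:

    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
          -keyout server.key -out server.crt \
          -subj "/CN=172.16.69.42" \
          -addext "subjectAltName = IP:172.16.69.42"

    There is still no way to express "trust me only from 172.16.69.0/24"; that part remains wishful thinking.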

  • I have a question for the experts. I just opened the settings of Firefox (latest, on FreeBSD), and took a look at the list of Certification Authorities it comes with.

    I do see WoSign there (though I'd prefer to avoid having my US-located servers use certificates signed by an authority located in China, hence sitting sort of behind "the great firewall of China" – call me superstitious).

    I see neither starttls.com nor letsencrypt.org among the Authorities' certificates. This means (correct me if I'm wrong) that the client has to import one of these Certification Authorities' certificates, otherwise a server certificate signed by one of these authorities is on the same page as one signed by my private Certification Authority (which I ran for over 10 years; in my kickstart I had my CA certificate imported into the clients' CA store, but other clients, like laptops, had to download, install and trust my CA certificate). Of course, this is a notch better than "self-signed" server certificates, as you only need to import the CA certificate once, whereas you would need to import a self-signed server certificate for each of the servers…

    Am I missing something?

    Also: with a CA signing a server certificate there is a part that is "verification of identity" of the domain or server owner. Namely, that whoever requested the certificate indeed exists as a physical entity (person, organization or company) accessible at some physical address, etc. This is a costly process, and as I remember, free automatically signed certificates were only available from Certification Authorities whose CA certificates had no chance of being included in the CA bundles shipped with browsers, systems, etc. For that exact reason: there is no identity verification, which apparently is a costly process.

    So, someone, please, set all of us straight: what is the state of the art today?

    Disclaimer: I have a purely academic interest in this myself: my institution makes CA-signed certificates for my servers at no cost to me, and that authority is in the CA cert bundles.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • in my admittedly limited experience with this stuff, you need to create your own rootCA, and use that to sign your certificates, AND you need to take the public key of the rootCA and import it into any trust stores that will be used to verify said certificates.
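    A minimal sketch of that last step on CentOS 6/7, assuming the CA's public certificate is in rootCA.crt (on CentOS 6 you may first need to run 'update-ca-trust enable'):

    $ sudo cp rootCA.crt /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
    # curl and other OpenSSL/NSS clients should now accept certs issued by that CA
    $ curl https://yourhost.example.com/file.html

    Firefox keeps its own trust store, so the CA cert still has to be imported there separately (or pushed out via policy).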

  • If you don’t do this, then there’s no real point using SSL at all, and you
    *should* be forced to override security with arguments:

    wget --no-check-certificate
    curl --insecure

    etc.

    jh

  • For my personal needs I use free StartSSL certs and the authority appears as StartCom, Ltd. in Firefox.

    In my experience it is already a trusted authority in most/all browsers. At least I didn’t have to manually trust it, and I haven’t run into one that complains about it.

  • I’m not an expert by any means, but I use letsencrypt (mostly for testing)
    and it’s always worked for me in FreeBSD with Firefox, without any special effort on my part. You can try https://srobb.net which is using letsencrypt as its cert.


    Scott Robbins PGP keyID EB3467D6
    ( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
    gpg –keyserver pgp.mit.edu –recv-keys EB3467D6

  • That’s a perfectly valid concern. The last I heard, modern browsers trust 1,100 CAs! Surely some of those CAs have interests that do not align with my interests.

    That’s because they are not top-tier CAs.

    You must be unaware of certificate chaining:

    https://en.wikipedia.org/wiki/Intermediate_certificate_authorities

    Even top-tier CAs use certificate chaining. The proper way to run a CA is to keep your private root signing key off-line, using it only to sign some number of intermediate CA signing certs, which are the ones used to generate the certs publicly distributed by that CA.

    Doing so lets a CA abandon an escaped intermediate signing key by issuing a CRL for it. The CA then just generates a new signing key and continues on with that; it doesn't have to get its new signing key into all the TLS clients' trusted signing key stores because the new key's trust chain goes back to the still-private offline root key.

    Without that layer of protection, if their private signing key somehow escapes, the CA is basically out of business until they convince all the major browsers to distribute their replacement public key.
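    A minimal sketch of checking such a chain by hand, assuming the root, intermediate and server certificates have been saved as separate PEM files (the file names are placeholders):

    $ openssl verify -CAfile root.pem -untrusted intermediate.pem server.pem
    # prints "server.pem: OK" when the chain back to the root checks out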

    If those laptops are Windows laptops on an AD domain, there is a way to push CA public keys out to them automatically. (Don’t ask me how, I’m not a Windows admin. I’m just aware that it can be done.)

    I’m not exactly sure what you’re asking here. If you are simply pointing out that the free certificate providers — including Let’s Encrypt — do not do public records background checks, D&B checks, phone calls to phone numbers on your web page and DNS records, etc. to prove that you are who you say you are, that is true.

    Let’s Encrypt is not in competition with EV certificates, for example:

    https://en.wikipedia.org/wiki/Extended_Validation_Certificate

    The term of art for what Let’s Encrypt provides is a domain validation certificate. That is, it only proves that the holder was in control of the domain name at the time the cert was generated.

    The answer could fill books. In a forum like this, you can only expect answers to specific questions for such broad topics.

  • I forgot to mention that letsencrypt.org uses one of its own certificates. You can use your browser's certificate detail view to see the chain of trust. I see two levels here: IdenTrust -> TrustID -> Let's Encrypt.

    As for starttls.com, that doesn’t exist; you’re probably confusing it with the SMTP STARTTLS protocol extension. What you mean is startssl.com, which is the main public face of StartCom. StartCom is a top-tier CA.
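    If you would rather look at a chain from the command line than through the browser UI, a minimal sketch is:

    $ openssl s_client -connect letsencrypt.org:443 -servername letsencrypt.org -showcerts </dev/null
    # each "s:" / "i:" pair in the output shows a subject and the issuer that signed it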

  • For the love of all that is holy, create your own CA and have your own PKI, even for testing.


    It is very easy to create your own CA to sign your own certs. There is no need to support self-signed "leaf nodes" of the PKI.

    I have taken some liberties on this to save time; you will need to change the config values to suit your needs.

    $ mkdir -p CA/{private,certs}
    $ cd CA
    # copy the default openssl config
    $ cp -v "$(openssl ca -verbose 2>&1 | head -n 1 | sed 's/Using configuration from //')" .
    $ sed -i 's/^\(\s*dir\s*=.*\)/#\1\ndir=./' openssl.cnf
    $ sed -i 's|^\(\s*certificate\s*=.*\)|#\1\ncertificate=$dir/CA.crt|' openssl.cnf
    $ sed -i 's|^\(\s*private_key\s*=.*\)|#\1\nprivate_key=$dir/private/CA.key|' openssl.cnf
    # point new_certs_dir at the certs/ directory created above
    $ sed -i 's|^\(\s*new_certs_dir\s*=.*\)|#\1\nnew_certs_dir=$dir/certs|' openssl.cnf
    $ touch index.txt
    # done setting up the file system
    $ openssl req -config openssl.cnf -new -nodes -keyout private/CA.key -out CA.csr
    # answer the questions
    # -extensions v3_ca marks the self-signed cert as a CA (basicConstraints CA:TRUE)
    $ openssl ca -config openssl.cnf -batch -in CA.csr -create_serial -selfsign -extensions v3_ca
    # there should only be one cert, the CA's self-signed cert
    $ cp certs/*.pem CA.crt
    # done creating the CA
    # done creating the CA

    # now you can sign your server certificate signing requests (CSR)

    # make a csr (key and CSR file names are up to you)
    $ openssl req -new -nodes -keyout server.key -out server.csr

    # sign server.csr
    $ openssl ca -config openssl.cnf -batch -in server.csr -out server.crt

    # files at end of email for understanding…

    And reducing the trusted CA set in your enterprise.

    $ cat ./private/CA.key
    -----BEGIN PRIVATE KEY-----

  • Thanks, that means there is no need to install the CA certificate. There is always someone (thanks, Warren!) who has looked deeper into things and can explain them. The only thing here is: I need to look deeper myself into how the identity of the server is ensured in this case (i.e. whether tier 2, tier 3, … CAs really do that). But that is the more fundamental thing: basically, with that in play, can I still trust that the physical entity owning the server cert is indeed who it claims to be?

    I’m sure I did copy and paste, so that should have copied from OP e-mail…

    Thanks again, Warren,

    Valeri
    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Sorry, intermediate authorities just slipped my mind somehow (to make it worse: my server certificates _are_ signed by an intermediate CA – shame on me ;-)

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Thanks, Scott, I made a note, and will use it if there is ever a need (now I get certs signed through the institutional channel, by an intermediate authority as well!). Intermediate CAs somehow slipped my mind today (I probably missed my morning coffee ;-)

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • John Hodrien wrote:

    Or, maybe, you're working in a domain, and one upper-level website is set up with strict HTTPS enforced recursively, so it breaks *everything* below it…. I'd like to be able to say "but not me" in the website configuration page – maybe it just throws up a warning, to remind you to pull it when it goes live, but for dev & test….

    mark, really tired of it breaking our *internal* documentation wiki for me

  • I claimed that the topic fills books. That wasn’t an exaggeration. Back in 1997, I read the first edition of this thick tome:

    http://shop.oreilly.com/product/9780596000455.do

    The second edition is about 50% bigger, and it’s about 15 years old now, so it could probably be 1,000 pages and still not cover everything about the modern Internet PKI.

    I’m not sure I could recommend a book that old in a field that still changes as much as web security does. Perhaps someone else could recommend something more current.

    As I said in a prior email, there are different grades of certificate. I mentioned EV and DV. There’s also OV:

    https://www.globalsign.com/en/ssl-information-center/types-of-ssl-certificate/

    The tier doesn’t affect how the CA does validation. You could have a very meticulous tier 3 EV provider and a sloppy tier 1 provider that only does DV.

    It’s a chain of trust: the browser vendor trusts these 1,100 CAs, and you trust the browser vendor, so you implicitly trust all of the certs signed, directly or indirectly by those CAs.

    If you want to take an active role in this, you need to go into the trust store for the browser(s) you use and remove CAs you do not trust.

  • There is more than one case; just think of trust.

    Let's Encrypt only vouches for 3 months; in an online shop, would you really expect someone to trust that shop? Think of it like this: "when the CA only vouches for 3 months, how should I trust the shop for a longer period, which is important for warranty …"

    A private CA is the same as self-signed;

  • That is right, but think of your potential clients: WoSign has a problem – slow OCSP, … – because their server infrastructure is located in China, and does not have the best bandwidth …

    When validity checks of the SSL certificate in use very probably fail, it is worse than not using SSL at all …

    Walter

  • I doubt that most users check the dates on SSL certificates, unless they are familiar enough with TLS to understand that a shorter validity period is better for security.

  • Could you elaborate on that?

    Thanks. Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Oh, this is what he meant: certificate validity period. Though I agree with you in general (the shorter the period the public key is exposed, the smaller the chance the secret key is discovered by brute force), logistically, as someone who has to handle quite a few certificates, I will only go with certificates valid for a year, or better two years. Given current bandwidths and ciphers these certificates can still provide the necessary security (I exclude here such things as server system compromises, which have nothing to do with how long the server exists or how long the certificate lives on it – do I miss something?).

    Just my $0.02

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Valeri Galtsev wrote:

    There is also the question of what use is being made of it. For internal dev websites, for example, not available to the outside world, I create self-signed certs for one length of time… ten years. By that time the project, if it's still around, will have gone other ways.

    mark

  • Technically there is more to it: it is not the user who needs to check the dates for which an SSL certificate is valid;

    just compare it with real life: which salesman would you trust more – the one who gets a new car every few years, which carries the same advertising and maybe the same colour, or the one who gets a new car nearly every month, which looks totally different, with another colour and other advertising on it?
    (and he's not a car dealer)

    it is the same with SSL certificates; so you have to find the golden middle way: as long as possible without losing security, and not so short that you lose trust;

    Walter

  • I don’t think OCSP is critical for free certificates suitable for small businesses and personal sites.

  • Yes. The tool that creates certificate/key pairs, submits the CSR, and installs the certificate is intended to be fully automated. In production, you should be running it as an automatic job.

    As someone who handles a lot of certificates, I can’t imagine why I’d want any other CA to handle my certs (excluding the EV certs).
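    A minimal sketch of such an automatic job, assuming the certbot client and an Apache front end (the schedule, user and reload command are placeholders; newer packages ship a systemd timer that does the same thing):

    # /etc/cron.d/certbot-renew
    0 3 * * * root certbot renew --quiet --post-hook "systemctl reload httpd"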

  • Your metaphor is extremely strained, and completely unnecessary. It doesn’t relate to the reality of certificates in any way.

    Without using a metaphor, please explain exactly who you think will not trust these certs, because I have never met these people.

  • Should I? Oops. Not this, please. I trust myself installing it manually and testing the results more than I trust my buggy scripts or external tools alike (and their ability to keep up with possible changes on the Certification Authority's interface side).

    And here we are on the same page…

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Then you know now that such people exist … at least the folks whose security software (antivirus, whatever) tells them there is a problem …

  • Walter H. wrote 2016-06-16 22:54:

    Then OCSP stapling is the way to go, but it could be a real PITA to set up for the first time and may not be supported by older browsers anyway.
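    For reference, a minimal sketch of turning stapling on and checking it, assuming Apache 2.4 with mod_ssl (directive placement, and the equivalents for other servers, will differ):

    # in the Apache SSL configuration, roughly:
    #   SSLUseStapling on
    #   SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"
    # then confirm from a client that a stapled response comes back:
    $ openssl s_client -connect example.com:443 -servername example.com -status </dev/null 2>/dev/null | grep -i -A 2 "OCSP response"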

  • On 17.06.2016 16:27, Aleksandr Kirillov wrote:
    > Walter H. wrote 2016-06-16 22:54:
    >> this is philosophy;
    >> I'd say when you do it then do it good, else don't do it;
    > Then OCSP stapling is the way to go but it could be a real PITA to
    > set up for the first time and may not be supported by older browsers
    > anyway.

    not really, because the same server tells the client that the SSL certificate is good, as the SSL certificate itself; these must be independent;

    Walter

  • No it is not. A private CA is as trustworthy as the organisation that operates it. No more and not one bit less.

    We operate a private CA for our domain and have since 2005. We maintain a public CRL strictly in accordance with our CPS and have our own OID assigned. Our CPS and CRL together with our active, expired and revoked certificate inventory is available online at ca.harte-lyne.ca. Our CPS states that we will only issue certificates for our own domain and furthermore we only issue them for equipment and personnel under our direct control.

    In a few years DANE is going to destroy the entire market of 'TRUSTED' root CAs – because really none of them are trust-'worthy' – and that development is long overdue. When we reach that point many domains, if not most, will have their DNS forward zones providing TLSA RRs for their domain CA certificates and signatures. And most of those that do this are going to be running their own private CAs simply to maintain control of their certificates.

    Our DNS TLSA flags tell those that verify using DANE that our private CA is the only authority that can issue a valid certificate for harte-lyne.ca and its sub-domains. Compare that to the present case wherein any ‘trusted’ CA can issue a certificate for any domain whatsoever; whether they are authorised by the domain owner or not[1]. So in a future with DANE it will be possible to detect when an apparently ‘valid’ certificate is issued by a rogue CA.

    The existing CA structure could not have been better designed for exploitation by special interests. It has been and continues to be so exploited.

    Personally I distrust every one of the preloaded root CAs shipped with Firefox by manually removing all of their trust flags. I do the same with any other browser I use. I then add back in those trusts essential for my browser operation as empirical evidence warrants. So I must trust certain DigiCert certificates for GitHub and DuckDuckGo, GeoTrust for Google, COMODO for Wikipedia, and so forth. For these I set the trust flags for web services only. The rest can go pound salt, as we used to say.

    [1]
    https://nakedsecurity.sophos.com/2013/12/09/serious-security-google-finds-fake-but-trusted-ssl-certificates-for-its-domains-made-in-france/
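    A minimal sketch of what publishing such a TLSA record looks like, assuming the DANE-TA usage (2 = trust anchor, 0 = full certificate, 1 = SHA-256) and placeholder names; the zone must be DNSSEC-signed for clients to rely on it:

    # digest of the CA certificate in DER form
    $ openssl x509 -in CA.crt -outform DER | sha256sum
    # published in the forward zone, e.g.:
    #   _443._tcp.www.example.com. IN TLSA 2 0 1 <sha256-digest-from-above>
    # OpenSSL 1.1.0+ clients can check it with s_client's -dane_tlsa_domain / -dane_tlsa_rrdata options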

  • For your understanding: every root CA certificate is self-signed;
    any SSL certificate signed by a CA that is not delivered as a built-in trust anchor in a browser is effectively the same as self-signed;

  • Like many things that appear to be common sense, these assumptions have no empirical basis. A properly generated RSA certificate and key of sufficient strength – say RSA keys of 2048 bits or more – should provide protection from brute force attacks for decades if not centuries. The usual way a private key gets compromised is by theft or by tampering with its generation. Putting yourself on a hamster wheel of constant certificate generation and distribution simply increases the opportunities for key theft and tampering.

    Keys issued to individuals certainly should have short time limits on them, in the same way that user accounts on systems should always have a near-term expiry date set. People are careless. And their motivations are subject to change. So having a guillotine date on a personal certificate makes sense from an administrative standpoint. One wants to fail safe. But modifying certificates on sealed servers? Really, unless one has evidence of penetration and theft of the key store, what possible benefit accrues from changing secured device keys on a frequent basis?

    We mainly use 4096-bit keys, which will be secure from brute force until the advent of quantum computing. At which point brute force attacks will become a pointless worry. Not because the existing RSA certificates and keys will withstand those attacks, but because the encryption process itself will move onto quantum devices. That development, if and when it occurs, will prove more than the code breakers will ever be able to handle. Of course then one must worry about the people who build the devices. But we all have to do that already. Bought any USB devices from China recently?


  • Michael, no offense intended, but I really would suggest doing some reading instead of quoting what the web browser tells you here. James gives excellent explanations, and all of them are extremely instructive. But one really needs to do a bit of reading to follow them. In a nutshell: what James described is exactly how the CA authorities operate, with one slight difference: the propagation of the private CA's trust to clients.

    Again, please do some reading on the subject and then re-read what James posted. Please do not take this as an offense; James' write-up is really instructive, and every one of us who has ever run their own Certification Authority will attest to that.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • > A properly generated RSA certificate and key of sufficient strength should provide protection from brute force attacks for decades if not centuries. The usual way a private key gets compromised is by theft or by tampering with its generation. […] Keys issued to individuals certainly should have short time limits on them. In the same way that user accounts on systems should always have a near term expiry date set. People are careless. And their motivations are subject to change.

    James, though in general one is likely to agree with this, I still consider the conclusion I came to after discussions more than a decade ago valid for myself. Namely: forcing everyone to change passwords often sets careful people back for nothing. Passwords they create and carefully keep can stand for decades, and can only be compromised on some compromised machine. Now, from my (careful person's) point of view, US National Labs forcing me to change my password every 6 months just confirms that they imply their boxes are compromised often. As for me: my passwords (passphrases) are different everywhere, and I only ever connect one way: from a trusted machine (maintained by me, that is) to an untrusted one (maintained by someone else, that is). Never from an untrusted machine elsewhere.

    Now, the simple argument we had: if you force a person to change their password often, something even worse happens: the person will never remember the ever-changing password, and the latest one ends up written on a piece of paper stuck to the back of the screen, or similar. Yes, I know about, and I do use, encrypted storage dedicated to passwords. Does everybody? Things change but people don't (almost don't).

    So, the best bet for a multi-user machine is to run it under the assumption that the bad guys are already inside. Occasionally you see them attempting elevation of privileges; smash them, and make the user whose password was stolen change it, and change all his/her passwords everywhere, banks and other $$$ accounts first. After this sort of exercise that same person is never the one in this same sort of trouble again. Yes, I have had these cases, not many during the last decade and a half. I have also seen the opposite attitude on occasion (a user didn't care that his password was compromised on a machine I administer); then that user got everything [bad] that a sysadmin can give him…

    > One wants to fail safe. But modifying certificates on sealed servers? Really, unless one has evidence of penetration and theft of the key store, what possible benefit accrues from changing secured device keys on a frequent basis?

    My point exactly. Only I usually try to say it in such a short way that my point fails to propagate to readers ;-(

    > Of course then one must worry about the people who build the devices. But we all have to do that already. Bought any USB devices from China recently?

    Well, I started to avoid Lenovo after they shipped laptops with malware preinstalled. It took them some time after they bought the laptop line from IBM. But yes, firmware/microcode malware is something that will bite us soon.

    BTW, the secret known to two people is not a secret…. Who said that?

    Cheers, Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Says who? Yes, the OCSP response comes from the same server, but it's still signed by the issuing CA. OCSP stapling was developed for a number of reasons, including user privacy concerns, and I find those reasons quite convincing. The need to revoke an issued certificate before its expiration date is rare: CA error, transfer of domain ownership, loss of a private key… What else? Yet the original OCSP implementation gives interested third parties the ability to track the browsing habits of unsuspecting visitors of sites which do not implement OCSP stapling. This is not to mention the much higher traffic the CAs will have to shoulder with the proliferation of secure sites.

  • That is the logic: anything you do to reduce or prevent problems connecting to SSL sites because of slow OCSP servers or the like reduces security itself … Yes and no, but faking a valid OCSP response that says "good" instead of "revoked" is also possible … The primary reason was to prevent connection problems – or whatever problems – in connection with OCSP. Maybe; but Heartbleed showed us something different …

    Of course; if there were only one CA, and there were only SSL, this CA would know what hosts you connect to in your browser, because of OCSP …

    but the privacy concern in this connection is smaller than with tracking cookies, where a little bit more information is sent …
    (with OCSP they only know which IP address is verifying which certificate, and at what time)

  • Could you please provide any proof for that statement? If it were true the whole PKI infrastructure should probably be thrown out of the window. )

    Sure. I've never said privacy concerns were the main reason.

    Security concerns can probably be addressed by reducing the update interval of issuer-signed OCSP responses. For my free WoSign certificates it's 4 days, and my understanding is that this interval matches the CRL update policy of the CA.

    Per RFC2560 (see nextUpdate below):

    2.4 Semantics of thisUpdate, nextUpdate and producedAt

    Responses can contain three times in them – thisUpdate, nextUpdate
    and producedAt. The semantics of these fields are:

    – thisUpdate: The time at which the status being indicated is known
    to be correct
    – nextUpdate: The time at or before which newer information will be
    available about the status of the certificate
    – producedAt: The time at which the OCSP responder signed this
    response.

    If nextUpdate is not set, the responder is indicating that newer
    revocation information is available all the time.
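    A minimal sketch of looking at those fields for a live certificate, assuming the server and issuer certificates have already been saved as server.pem and issuer.pem (names and URLs are placeholders):

    # the responder URL is embedded in the certificate
    $ openssl x509 -in server.pem -noout -ocsp_uri
    # query it; -resp_text prints thisUpdate / nextUpdate / producedAt among other fields
    $ openssl ocsp -issuer issuer.pem -cert server.pem -url http://ocsp.example-ca.com -resp_text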

  • Question back: is the SHA-2 discussion a real security impact or just paranoia?

    So provide a proof of the following statement:

    "using OCSP stapling is as secure as not using OCSP stapling"

    Just think of the "parallel universe" called real life …

    Do you believe a car dealer who says a used car is OK, or do you want proof from a third party?
    (here the car dealer is the server and the 3rd party is the OCSP server or the CRL provided by the CA)

    For me, I refuse it; in other words, when there is no OCSP response and I don't get a CRL from the CA, the SSL host is blocked;

  • Forget it, Walter. If you feel it’s more secure that way I’m not going to waste my time to convince you otherwise. )

  • Yes. The primary concern is theft, not brute forcing. I would imagine that those with the resources to brute-force keys have other ways to intercept traffic.

    No it doesn’t. If your key/cert pair exists only on the host where it is used, then access to that host is required to exfiltrate the key. If an attacker has ongoing access to a host, in order to acquire each key as it is generated, then the expiration of the keys is irrelevant with respect to the opportunities for theft. The opportunities are equal for both cases of short certificate lives and long certificate lives.

    There is, however, a difference if an attacker has only brief access.
    If you shut out an attacker who has taken your key, then a short key lifetime returns you to a secure state sooner than a long key lifetime.

    In fact, you have the logic of the situation entirely backward: the exposure created by a theft is proportional to the remaining lifetime of the certificate at the time of the theft, so shorter lifetimes reduce, not increase, what an attacker gains.

    And remember that theft doesn’t necessarily mean the attacker has login access. A recent OpenSSL bug allowed an attacker to read portions of memory, and could be used to acquire key material.

    You aren’t always going to have evidence. Be proactive.

  • Well, one, but I’m hardly going to tailor my security infrastructure to one customer.

    And what security software would report a problem with these certificates? (bearing in mind that ~ 30% of all TLS transactions involve a 90-day certificate, according to telemetry)

  • Your connection is not secure

    The owner of harte-lyne.ca has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.

  • You too, huh? Did you guys read what the owner of that domain wrote? I would suggest going back to his post and reading the whole piece he wrote, not just the paragraph you left quoted here. It is instructive. And he definitely is qualified to run a Certification Authority. And can teach how to do it. That is what he did in his post.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • "29% of TLS transactions use ninety-day certificates."
    Could this statement be a little bit more precise …

    Or another thought: if every website contained an embedded reference to a host 'www.track.org', and that host used a 90-day throw-away certificate, then the statement wouldn't say anything, because nobody said whether it was in connection with explicitly wanted TLS transactions …

  • No!

    I get a similar 'Firefox version dependent' message when a new machine logs on to a secure web site, on a non-standard port, with Internet access restricted to designated individual IPs.

    Instead of paying money for a "proper" certificate to access sensitive restricted applications on the Internet, I make the certificates – that is the beauty of being non-Wondoze and using CentOS. ;-)

  • For your understanding, a self-signed certificate is one that has been signed by itself. Naturally ALL root certificates are self-signed. The self-signed root cert is then used to sign a subordinate CA issuing cert, and that issuing cert is used to sign other subordinate CAs and/or end-user certs, depending upon the permissions given to it by the original signing certificate. This establishes the certificate trust chain.

    If a website presents an actual self-signed cert to Firefox, for example, it will refuse it. I suppose there is a way to circumvent this behaviour but I am not aware of it. If you present a certificate that is not self-signed but is signed by an authority whose root certificate chain is not in the trusted root store, then Firefox gives you a warning – as given in a preceding message: 'net::ERR_CERT_AUTHORITY_INVALID'.
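    A minimal sketch of checking that from the command line: a certificate is self-signed exactly when its issuer and subject are the same entity (the file name is a placeholder).

    $ openssl x509 -in cert.pem -noout -issuer -subject
    # identical issuer and subject lines indicate a self-signed (root or standalone) certificate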

  • I’m not interested in turning this in to a discussion on epistemology.
    This is based on the experience (the evidence) of some of the world’s foremost experts in the field (Akamai, Cisco, EFF, Mozilla, etc).

    You are ignoring the fact that the tool used to acquire letsencrypt certificates automates the entire process. They’re not merely hoping that users will automate the process, they’re automating it on behalf of users. They’ve done everything but schedule it for their users.

    For someone who wants “evidence” you make a lot of unsupported assertions. You do see the irony, don’t you?

    Or, you know, a review of actual security problems in the real world.

    That’s fine. I don’t really need to convince you, personally, of anything. But for the security of the internet community in general, I’ll continue to advocate for secure practices, including pervasive security (which means reducing barriers to the use of encryption at all points along the process of setup).

  • The same Mozilla Foundation that got USD 50 million from Google some years ago and the same Mozilla Foundation that automatically sends URLs to Google (the world’s biggest spying operation) – questionable safety credentials that security conscious administrators might not implicitly trust.

    I support a DNS record solution for certificate authenticity.

  • Which browser do you use? I am still in the process of finding a replacement for Firefox (the closest is Midori; it doesn't fully fit the bill for me, though). With this opinion about the Mozilla Foundation you definitely are not using their Firefox and Thunderbird, right? I have one more constraint: I need to use it under FreeBSD (these are my laptop and workstation), so I probably have to be able to build it myself (and if it is in FreeBSD ports/packages, I have likely already tried it…).

    Thanks. Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • There is a Mozilla fork called Palemoon by some Europeans (Swedes, I think): http://palemoon.org

    I have not tried it. Finding a suitable browser is difficult. I hate the spying and privacy-breaching tactics of once-impressive free browsers.

    To use Firefox with all the spyware/privacy-breaching features disabled takes time and effort, and then the administration of Asus routers does not work because of something bad in their JavaScript which, despite the routers allegedly being Linux-based, serves Wondoze-type .asp web pages.

    Just want an easy life where everything works smoothly.

  • Really? Then why did you forward your reply to a private message to a public mailing list, if not to do exactly what you claim you wish to avoid?

    The difference is that I state this is my opinion and I do not claim it as a fact. Your statement claimed a factual basis. I was naturally curious to see what evidence supported your claim.

    I know, and we put infants on no-fly lists for essentially the same religious beliefs: the benefit of so-called general security for the rest of us, who do not have to bear its individual specific cost. There is no evidence that this sort of stuff works. It is just done so that if anything bad happens the authorities can claim that they did something preventative which they can point to. Regardless of how ineffectual it was.

    Automated security is BS. It has always been BS and it always will be BS. That is my OPINION. It may not be a fact, for I lack empirical evidence to support it. However, it has long been my observation that when people place excessive trust in automation they are eventually and inevitably betrayed by it. Often at enormous cost.

    Let me give you an example of stupidity in action with respect to signed certificates. I have a MacBookPro c. early 2009. There have been five or six major releases of OSX since then. Being a cautious type I download the upgrade installer apps and archive them before installing and upgrading.

    Over this past weekend my MB stopped booting. It would get to the Apple symbol and go black. Much trial, error, and research later I discovered that this sometimes occurs when an MB has been repeatedly upgraded and that a clean install is the recommended cure. Oh, by the way, if you ever have to do this then do not use the Apple Migration Assistant app when you are done. You will be sorry.

    So, I get out my archived Installer app, go to install it and BANG! My MB proclaims that “Somebody has tampered with the application or it is corrupted!”. OH NO!

    This impediment however is strictly an artefact of signing code with short-term certificates. I simply had to reset the date on my MB back to a date when the certificate was still valid and everything worked fine. Of course this took me a great deal of frustrating effort to discover what had happened to all of my archived copies and how to fix it. In the middle of a system recovery, I might add.

    But hey, what is my time worth in comparison to the security those certificates provided? SECURITY that was trivially evaded in the end. Exactly what mindless person or committee of bike-shedders decided that software should be distributed so that copies of it expire? What security issue was addressed by this decision? What benefit to the public was achieved?

    When real people suffer real inconvenience and real loss of productive effort because of mindless adherence to bromide-based cures blandly offered for ills that mostly exist in the imagination of the ignorant, then yes; I require evidence of their efficacy. And lining up a bunch of band-wagon pundits chanting the same vacuous refrains is not evidence.

    And this one is going to the list.

  • Accidents happen. I didn’t intentionally mail you off-list, and when I
    noticed that I had, seconds later, I re-sent the message to the list, expecting that you’d notice and understand that I intended to keep the conversation on the list.

    ..which isn’t relevant to the question of what you consider “evidence”
    of security practice implications.

    Look, go to https://www.google.com/ right now and tell me what you see.
    Do you suddenly distrust the internet’s single largest domain? Do you think they implement poor security practices?

    Citation required.

    Allow me an example. To quote you:
    “The usual way a private key gets compromised is by theft or by tampering with its generation. Putting yourself on a hamster wheel of constant certificate generation and distribution simply increases the opportunities for key theft and tampering.”

    Now, when you asked “what possible benefit accrues from changing secured device keys on a frequent basis?” I pointed you to letsencrypt’s documentation, which describes the benefits of 90-day certificates.

    So, please describe how I am “claiming a factual basis” while you are not.

    This is what I consider “enormous cost”:
    https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal_and_revocation

    After a major security bug which exposed private keys, hundreds of thousands of servers did not take the required action to secure their services, and the vast majority of those that took *some* action did it incorrectly and did not resolve the problem.

    Had those sites been using letsencrypt and renewing automatically, the exposed keys would have been replaced within 90 days (typically 60 max, so 30 days on average). Instead, it is likely that the problem will remain a risk for “months, if not years, to come.”

    And that’s empirical evidence, which you have yet to offer.

    Apple’s intermediate certs have a 10 year lifetime. If you consider that “short term” then I fear that nothing is suitable in your opinion.

    Fixing your clock is not “evading” security.

    Expiration is a fundamental aspect of x509 certificates. Do you understand x509 at all?

  • with all its problems; look just a little bit into the future:
    when I sign a document today, the certificate I sign this document with may be valid till the end of next year (end of 2017);
    let us say this is an important document, and let us say you are a young boy now;
    in case the software still exists in 50 years, determining whether the document has been modified is easy, but … how would you be able to verify that this document wasn't signed with a certificate that had already been revoked, when you are an old man?

  • I would rather look to Bruce Schneier and Noam Chomsky for guidance before I would take security advice from organisations that have already been shown to be compromised in the matters of their clients' security – the EFF being the sole exception in the list provided. Or so I presently believe.

    Except that I get the list as a digest. Which means that your assumptions were wrong. Funny that, think you not?

    A snoop that self-signs its own certificates?

    My distrust of Google developed over many years. There was nothing sudden about it. But it is deep now.

    I assert my opinions, if that is what you are referring to. I do not claim them to be fact. I believe them to be true but I readily admit that I may be wrong. Indeed I most certainly must be wrong in some of them. My difficulty being determining which ones.

    However, I have formed my opinions on the basis of a long term exposure to security matters both pre and post Internet. And I have seen before the same thoughtless enthusiasms for things shiny and different in the security community. Things adopted and put into practice without even the most cursory of trials and evaluations for effectiveness and efficacy — not to mention lawfulness on some occasions –. Sometimes I have had to deal with the consequences of those choices at the pointy end of the stick. Thus if I am to adopt a different point of view then I require something in the way of supporting measurable evidence to show that I am wrong and that others are right.

    Having actual software in the possession of users rendered unusable by a policy decision implemented in the name of security is not beneficial. Referring to others' self-justification of measures they have already implemented is not evidence. It is argument. Which has its place, providing that one accepts the fundamental postulates of the positions being argued. These, in this case, require evidence. Asserting that these measures solve certain perceived flaws without addressing the costs of those measures is a one-sided argument and not very convincing in my opinion.

    Refusing to deal with that is simply ignoring the elephant in the room.

    Again, you miss the point. I am not offering evidence of something that I am claiming as fact. I am seeking evidence in support of what someone else is claiming as fact. Evading the question may be a good rhetorical technique but it is hardly science. And your ‘evidence’ as presented above presupposes a number of unspoken assumptions. Some of which I fear would not stand scrutiny.

    The consequences of Heartbleed are well known to me. I had to tear down and re-establish our entire PKI because of it. However, anecdotal references to specific cases where a particular practice might (and let us remember that HB was out in the wild being exploited for at least two years before being publicly revealed) just might have provided some protection in some cases do not prove anything. In the case cited I do not believe changing certificates on an hourly basis would have made much difference against a technically proficient attacker exploiting the weakness.

    It is well to recall how HB came to be. A misguided attempt at an improvement in protocol exchange which amounted to not much more than gilding-the-lily and which since has been discarded without noticeable loss.

    What prevents similarly motivated 'improvements' to an automated certificate authority from having equally damaging effects on the robustness of the certificates that they provide? We trusted OpenSSL because it was open. How do you trust an organisation's internal practices once they have been automated? Does anyone here actually believe that once in production this automation is, or will be, adequately documented? Or that said documentation will be rigourously maintained up to date? Or that independent and competent audits will be regularly conducted without notice?

    I take particular care with my PKI. I rebuild all the moduli for SSH on all of our servers. Because I trust no one with my security. And that includes RedHat and any other external organisation, volunteer or commercial, open source or proprietary. Automating your certificate replacement and entrusting it to an outside provider is begging to have some state actor or other financially powerful group subvert it.

    Look at what the NSA pulled off with RSA. That was for just money. What about patriotism? What would a true believer do if asked by his state and they were in a position to act? Think of the security nightmare that would result from having the certificate production of an automated 90 day certificate authority tampered with and left undetected for even a modest amount of time, say 180 days.

    At its core security is about risk analysis. And at the core of risk analysis is cost/benefit analysis. If the cost of a security measure exceeds the value of what is being secured then it makes no sense. I
    am seeking that cost-benefit analysis for short term certificates. And I have not seen anything in the way of objective evidence that supports the assertion that short-term certificates, or passwords for that matter, provide any measurable security benefit other than the feel-good sense that “at least its something”.

    For one thing, this position presupposes that the rest of your security is so bad that frequent key compromise is inevitable. For another, it assumes that the cost to users of having to constantly deal with change is negligible. I run a business. Let me assure you that the cost of change is never negligible.

    Apple evidently did not use those certificates to sign their software in this case. So your point is irrelevant even if it may be true.

    Setting a clock backwards in time by two years is ‘fixing it’? A
    curious point of view in my opinion.

    Expiry is a fundamental part of many things including life itself. That does not imply that shorter is better.

    Why sign software with an expiry date when you know that your recovery programme will fail to operate after it expires? Imagine that you are on a hospital operating table and the defibrillator fails to function because the certificate that signed the firmware has expired. And that the immediate fix is to simply reset the computer clock in the defibrillator controller backwards in time. Which of course no one in the OR knows. So, too bad. But you were secure when you expired.

    There is absolutely no sensible reason why the recovery software could not have simply warned me that the signature had expired and then asked me if I wished to proceed regardless. Having made the design decision that it would not do so then it was incumbent on the organisation responsible to allow people to override it. Instead they offer up some weasel-worded warning about “TAMPERING” and
    “CORRUPTION”. Just what a person in the middle of system recovery is waiting to hear.

    Most of what passes for informed opinion about security is folk remedy dressed up with techno-babble and pushed onto an audience mostly too embarrassed to admit that they do not understand what is being talked about. In my opinion of course.

    However, decisions having real consequences are made on the basis of such ignorant acceptance of received wisdom. Therefore I think it important to challenge those that assert such things to produce convincing evidence that what they say is so.

    I understand that you believe that short term certificates and passwords and such provide a measure of security. That very well may be the case. I am not trying to convince you otherwise. All I ask for is a reasonable explanation of how much these practices cost those that employ them, how much benefit they provide to those users and what further risks they introduce.

    Security is a funny sort of thing being mostly based on our fear of the unknown; our too-active imaginations; and our attraction to the spectacular at the cost of dealing adequately with the mundane. At one time an automobile had to be preceded by a pedestrian waving a red flag. This too was done for the general security of all. We do not do it any more so there must have been a reason we stopped. I suggest that it was because the evidence did not support a positive cost/benefit analysis. But it sure sounded reasonable at the time to people that had little or no experience with automobiles.

    YMMV.