Apache/PHP Installation – Opinions

Hey guys,

I tend to work on small production environments for a large enterprise.

Never more than 15 web servers for most sites.

But most are only 3 to 5 web servers; it depends on the needs of the client. I actually like to install Apache and PHP from source, by hand, although I know that’s considered sacrilege in some shops.

I do this because on RH-flavored systems like CentOS, the versions of Apache, PHP, and most other software are a little behind the curve.

And that’s intentional: the versions that go into the various repos are tested and vetted thoroughly before release.

I like to use the latest stable versions of Apache and PHP for my clients without having to create a custom RPM every time a new version comes out.

So what I’d like to know is: in your opinion, is it better as a best practice to install from repos than to install from source? Is it always better to use Puppet, Chef, Ansible, etc. even if the environment is small? I’m sure this is a matter of preference, but I would like to know what your preferences are.

Thanks, Tim

20 thoughts on - Apache/PHP Installation – Opinions

  • I would set up your own private yum repo, with RPMs built from source, ideally built to install under /opt/yourstuff or /usr/local or wherever you prefer, so they don’t collide with any system packages. Once you’ve got the RPM build down, unless there are major architectural changes in the package, it shouldn’t take more than fetching the latest tarball, running your RPM build script, and testing it on a staging platform; when it meets your requirements, post it on your repo and have your sites update via yum.
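
    The publish side of that can be sketched roughly like this, assuming the repo host serves /var/www/repo over HTTP (paths, repo name, and GPG key URL here are made up):

        # On the build host: drop freshly built RPMs into the repo and regenerate metadata
        cp ~/rpmbuild/RPMS/x86_64/*.rpm /var/www/repo/el7/x86_64/
        createrepo --update /var/www/repo/el7/x86_64/

        # On each web server, a one-time /etc/yum.repos.d/internal.repo pointing at it:
        #   [internal]
        #   name=Internal builds
        #   baseurl=http://repo.example.internal/el7/x86_64/
        #   gpgcheck=1
        #   gpgkey=http://repo.example.internal/RPM-GPG-KEY-internal

        # After that, picking up a new build is just:
        yum update httpd php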

    I’ve never gotten into the puppet/chef/etc stuff because every one of the 35 servers and VMs in the development lab at work is a different custom configuration, so I configure them by hand; it’s not that much work in my environment. For CentOS VMs, I generally install from the minimal ISO, then copy-paste a few yum commands to get all my favorite tools on board, and past that it’s a custom configuration of this Java plus that database server and whatever user accounts the app environment needs. It doesn’t take half an hour to build a new system this way, and I don’t have to build them that often (maybe a couple a month at most?).

  • If you need more recent versions, check out softwarecollections.org. It has more recent rebuilds of the big package suites that install under /opt and don’t collide with the system-installed packages. There is a CentOS-specific channel in there somewhere.
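
    A rough sketch of pulling in a collection on CentOS 7 (rh-php56 is just one example collection name):

        # Enable the SCL repos and install a PHP collection under /opt
        yum install -y centos-release-scl
        yum install -y rh-php56 rh-php56-php-fpm

        # Run the collection's PHP without touching the system php
        scl enable rh-php56 -- php -v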

  • Your tools should save you time.

    Building packages should involve three steps: download the source, update the version number in your spec file, and mock build / sign / publish (the last set should be a small shell script). Building in mock means that the package is predictable: every time it builds, it’ll detect the same available libraries during ./configure, so your build is consistent.
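
    A minimal version of that last-step script might look something like this (spec name, mock config, signing setup, and repo path are all assumptions):

        #!/bin/bash
        set -e
        SPEC=httpd-custom.spec        # hypothetical spec; sources in the current dir
        CFG=epel-7-x86_64             # mock chroot config
        REPO=/var/www/repo/el7/x86_64

        # Build the SRPM, then the binary RPMs, inside a clean mock chroot
        mock -r "$CFG" --buildsrpm --spec "$SPEC" --sources . --resultdir ./srpm
        mock -r "$CFG" --rebuild ./srpm/*.src.rpm --resultdir ./result

        # Sign and publish (assumes %_gpg_name is set in ~/.rpmmacros)
        rpm --addsign ./result/*.rpm
        cp ./result/*.rpm "$REPO"
        createrepo --update "$REPO"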

    Again, your tools should save you time.

    If your configuration manager takes more effort than configuring a system by hand, you should probably look for a better tool. Personally, I like bcfg2. And yes, I use it for everything. I use templates extensively so that anything that varies from site to site or host to host is easy to adjust, and I can apply a configuration far more quickly and reliably than I can configure a system manually.

  • I don’t have PHP 7, but I do have 5.6.20 (latest in the 5.6 branch), Apache 2.4.20, etc. at https://librelamp.com/

    The purpose of that repo is a LAMP stack built against LibreSSL as opposed to OpenSSL.

    I prefer LibreSSL over OpenSSL, but I like CentOS, so to use LibreSSL on CentOS I had to make that repo.

    I’ve been told the PHP 7 RPMs maintained by Remi work just fine with it if you really need PHP 7 (PHP 7 breaks some web apps I run, so I stick to the 5.6 branch).

    A lot of the RPMs are tweaked rebuilds of Fedora source RPMs.

  • Unless you are explicitly tracking upstream and religiously providing builds as upstream releases them, taking upstream sources and building from them is a disservice to your customers.

    This goes doubly for just installing from source without making packages, since then it’s impossible to audit the system for what is installed or to properly clean up after it.

    You need to be aware that it’s not only about “vetting”: auditing for a CVE becomes as simple as rpm -q --changelog | grep CVE … Security updates from RH don’t alter functional behaviour, reducing the need for regression testing.
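
    For example, to see whether a given fix has already landed in the installed package (httpd is just an example):

        rpm -q --changelog httpd | grep CVE-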

    Unless you have a very specific requirement for a bleeding-edge feature, it’s fundamentally a terrible idea to move away from the distribution packages in something as exposed as a web server. And when you do, you absolutely need to have the mechanisms in place to efficiently and swiftly build and deploy new versions, and to deal with any fallout yourself.

    Finally, keep in mind that the CentOS project can only viably support what we ship, not $random source. When you do need help and head to #CentOS on IRC or report something on the mailing list, keep that in mind.

    As for CM? It doesn’t take any significant effort or time to knock together a playbook to cover what you did by hand. It doesn’t need to be high quality and distro-agnostic, ready for Galaxy (or Forge, or whatever Chef uses), but it does mean you have “documentation in code” of how that system is set up, without having to maintain info on how to rebuild it anyway. And assume every system may need a rebuild at some point – having CM in place makes that trivial rather than an “oh, what was the special thing on this one” scenario.
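
    A throwaway playbook of the sort described can be as small as this (the hosts group and package list are assumptions):

        # webserver.yml
        - hosts: webservers
          become: true
          tasks:
            - name: Install Apache and PHP
              yum:
                name: [httpd, php]
                state: present
            - name: Start and enable Apache
              service:
                name: httpd
                state: started
                enabled: true

        # apply it
        ansible-playbook -i inventory webserver.yml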

  • *snip*

    I used to believe that.

    However, I no longer do.

    First of all, advancements in TLS happen too quickly.

    The RHEL philosophy of keeping API stability for as long as the release is supported means you end up running old protocols and old cipher suites and don’t have the new protocols and cipher suites available.

    That’s a problem.

    With respect to Apache and PHP –

    There is a lot of benefit to HTTP/2 but you can’t get that with the stock Apache in RHEL / CentOS 7. You just can’t.
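
    For comparison, on an Apache build of 2.4.17 or later that ships mod_http2, turning it on is roughly a two-line config change plus a reload (the conf.d path is an example, not the stock layout):

        # /etc/httpd/conf.d/http2.conf
        LoadModule http2_module modules/mod_http2.so
        Protocols h2 http/1.1

        # check and reload
        apachectl configtest && apachectl graceful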

    The PHP in stock RHEL / CentOS is so old that web application developers are largely not even using it anymore, resulting in some web applications that simply don’t work unless you update PHP to something more modern.

    It’s a nice idealistic philosophy to want to keep the same versions and backport security fixes and keep everything API compatible but in real world practice, it makes your server stale.

  • Another way I choose is to install what I need in /opt: a PHP CLI build, and then configure Apache to use it. What is the difference? I run PHP 5.3 and 5.6 side by side. It always depends on your needs.

    How do I configure this stuff for my virtual hosts? ISPConfig makes it easy for me; it can be a solution for you too. RPM isn’t that bad either, and holding the configuration in a spec file is handy. You can give a package a name like php-7 and it will never be overwritten by an update. There are many ways to track down problems. It’s up to you.

  • Another example outside of LAMP

    Postfix –

    The postfix that ships with CentOS 7 does not have the ability to enforce DANE.

    If you are not sure what that is –

    On my DNS server, I can (and do) post a fingerprint of the TLS keys used by my SMTP server.

    When other mail servers want to send an e-mail to my server, they can do a DNS query, and if I have a DANE record, then they can require that the TLS connection they make to my SMTP server uses a certificate with a fingerprint that matches.

    That is the only reliable way to avoid MITM with SMTP.

    It’s easy to set up in postfix –

    smtp_dns_support_level = dnssec
    smtp_host_lookup = dns
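
    Worth noting, per the Postfix documentation, that the setting which actually switches on enforcement is the TLS security level; a fuller sketch using postconf (needs Postfix 2.11+ and a local validating resolver):

        postconf -e 'smtp_dns_support_level = dnssec'
        postconf -e 'smtp_host_lookup = dns'
        postconf -e 'smtp_tls_security_level = dane'
        systemctl reload postfix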

    But with the postfix that comes with CentOS 7 – it is too old for that, so Postfix with CentOS 7 will never even try to verify the TLS certificate of the servers it connects to.

    It’s a stale version of postfix and people running postfix on CentOS 7 should use a newer version.

  • Except I can just strip STARTTLS and most MTAs will continue to connect.

    Brandon Vincent

  • No you can’t.

    Not with an SMTP server that enforces DANE.

    If my postfix sees that your SMTP publishes a DANE record then it will refuse to connect unless it is a secure connection with a certificate that matches the fingerprint in the TLSA record.

    See RFC 7672

    But the postfix in RHEL / CentOS 7 does not support that.
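
    With a new enough Postfix you can watch this from the command line, since posttls-finger ships with it (the destination here is just a placeholder):

        # Probe a destination at the dane security level; -c limits output to TLS details
        posttls-finger -c -l dane example.com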

  • I’m aware of how DANE works.

    The only problem is no MTA outside of Postfix implements it.

    You can thank the hatred of DNSSEC for that.

    Brandon Vincent

  • Sounds good, but how many domain MX servers have set up these fingerprint keys? 1%, maybe 2%? So how do you code for that? I guess I’m thinking it uses it if available. So even if you do post it in your DNS, how many clients out there are using DANE in their setup? By the time it becomes more than a tiny % and generally useful, it will be in CentOS 8. It also requires certificates to be implemented more ubiquitously than at present – although we do now have affordable solutions, so this one may resolve more quickly.

  • I never understood the hatred for DNSSEC.

    When I first read about it, it was like a beautiful epiphany.

    But DNSSEC adoption is increasing. I keep seeing the green DNSSEC icon in my browser more and more often; when I first started using it, it was rare.

    But the point is, other mail servers may not have implemented it yet, but Postfix has implemented it, and the stock version in RHEL / CentOS is too old. Barely too old, but too old.

    Thus better security is achieved by running a newer version.

    Especially since adoption is in fact increasing.

    I hope my prior comments weren’t too off topic, but a lot of people don’t seem to understand the purpose of an enterprise distribution.

    DANE is a perfect example of this. Go poll the SMTP servers for any company on the S&P 500 and I can almost guarantee that 99.9% of them will not have TLSA records for DANE. It’s a new/emerging technology. The same is true with DNSSEC (which is actually quite old).

    Enterprises are typically behind in the technology they adopt. Stability and reliability are paramount. This is where RHEL and CentOS come in.

    I know of a few companies listed on the S&P 500 who still have SSLv3 turned on to allow customers with old versions of Internet Explorer on Windows XP to connect. You can’t simply assume everyone is using the latest technology.

    This is the reason IBM loves System z.

    Brandon Vincent

  • Comcast is a major ISP that publishes TLSA records for their MX servers.

    It appears the TLSA records for IPv6 are broken, but I was told that was intentional: they can tell which mail servers don’t enforce DANE by which ones continue to connect over IPv6 anyway.

    The IPv4 records are good and valid.
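
    Anyone can check those records with dig; something like this (the exact MX hostname returned will vary):

        # Look up comcast.net's lowest-preference MX, then query its DANE TLSA record
        MX=$(dig +short MX comcast.net | sort -n | awk 'NR==1 {print $2}')
        dig +dnssec TLSA "_25._tcp.${MX%.}"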

    So when any of my mail servers sends e-mail to users at a Comcast address, it is extremely unlikely that a MITM would be successful.

    But only because I updated the postfix from stock.

  • Last poll I saw, 2% of the top 500 did in fact have DNSSEC.

    TLSA is just a record like any other DNS record; it is just meaningless without DNSSEC.

    Stability though should not come at the cost of halting progress.

    Security and Privacy on the Internet are both severely broken.

    If you read the white papers from when the Internet was first being designed, security was rarely even mentioned.

    Look at how many “secure” web servers still use SSLv2 and SSLv3 – this is because the “stable” Enterprise UNIX distributions were slow to progress.

    DNS is a severely insecure system, and so is SMTP.

    Hell – security of SMTP is so sloppy that quite often, the TLS certificate doesn’t even match the hostname.

    Cipher suites that we know to be insecure are often still supported by mail servers because they take the flawed attitude that weak ciphers are better than plaintext, and the opportunistic nature of SMTP allows for plaintext.

    It was that same mindset that resulted in a lot of mail servers supporting SSLv2, resulting in capture of the private key in the DROWN attack.

    When it comes to security, we can’t be stale. We have to progress because what we currently have is not good enough.

    We need to embrace DNSSEC and we need to promote DNSSEC. Trust is easy to exploit; DNSSEC provides a means to verify, so that trust is not needed.

    Using “enterprise” as an excuse to not move forward with security progress is just plain foolish.

    Enterprise or not, DNSSEC should be a top priority to deploy in your DNS zone.

    Enterprise or not, if you run a mail server, you really need to publish an accurate TLSA record for TCP port 25 of your MX mail servers.

    Enterprise or not, your mail servers should look for a TLSA record on port 25 of the receiving server, and if found, only connect to that server if the connection is secure and the TLS certificate matches the TLSA record.
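
    For what it’s worth, a sketch of generating the data for such a record from the server’s certificate (the file path and hostname are examples; the “3 1 1” form pins the SHA-256 hash of the current key’s SPKI):

        # Produce the hash for a "3 1 1" TLSA record (DANE-EE, SPKI, SHA-256)
        openssl x509 -in /etc/pki/tls/certs/mail.example.com.pem -noout -pubkey \
          | openssl pkey -pubin -outform DER \
          | openssl dgst -sha256

        # Publish it in the DNSSEC-signed zone, e.g.:
        #   _25._tcp.mail.example.com. IN TLSA 3 1 1 <hex digest from above>

        # Then confirm what resolvers actually see
        dig +dnssec TLSA _25._tcp.mail.example.com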

    The Internet is broken security-wise, and a big part of the solution is available now and free to deploy.

    If that means upgrading software in an “Enterprise” distribution, then that’s what you do.

    It’s called taking responsibility for the security and privacy of your users. It’s called using intelligence. It’s called doing the job right.

  • Alice Wonder wrote:

    *snip* …when the Internet was first being designed, security was rarely even mentioned.

    Just as a point of information, when those RFCs were written, the Internet was *only* for US gov’t and selected research and educational organizations, and NO ONE else. The open ‘Net only came in in the nineties – so security wasn’t broken and insecure; back then there was physical security and careful selection as to who was allowed on at all.

    mark

  • That is true; they had in mind resilience of the communication net when portions of it are brought down (implying some nasty thing like a nuclear exchange). Real security, though, is not in restricting who can access something (like government only). Security experts often say: if a secret is known to two people, it likely is not a secret anymore ;-(

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • Valeri Galtsev wrote:

    *snip*

    Yup, which drives some governments and companies *nuts*… but the original specs included the idea that “if you can find ANY way for your packets to get through, even if three-quarters of all the computers between me and you are now radioactive dust, you will get those packets through”.

    mark

  • Yes, but that is why we need to focus on fixing it from the ground up – and that means DNS needs to be secured.

    DNSSEC is not perfect, but I don’t think there is anything that is truly perfect. Even “perfect forward secrecy” is not perfect (DHE should not be used with DH groups < 2048 bits).

    But to secure the Internet, one must be able to validate DNS responses, and that requires DNSSEC. To secure TLS, one must be able to validate the certificate, and that requires DANE – we know Certificate Authorities can’t be trusted.

    So “Enterprise” or not, system administrators need to be implementing both of those – and mail servers should be making use of DANE records when they do exist. Even if it means bumping a software version.

    Illusion of security where it doesn’t exist is dangerous, so deprecated protocols and cipher suites should not be supported, even if that means some e-mail messages end up sent in the plain. But TLS libraries and the software that uses them should be updated to support modern cryptography, even on Enterprise distributions, to avoid that. That’s my philosophy.