Systemd, PHP-FPM, And /cgi-bin Scripts
CentOS 7 server and Fedora 29 dev workstation, both with PHP 7.2, Apache 2.4, php-fpm, all updated.
I have a web-based app I’ve been developing for some time, and recently the need to upload large files, e.g. 1 GB or larger, has come up.
So I wrote a /cgi-bin script that works: it takes the input and even runs the same application framework as the main application, which normally runs under php-fpm. (The framework also runs from a shell script, so this wasn’t hard.) So /path/to/webroot/cgi-bin/upload.php works just fine running as a separate process as a CGI executable. Yay!
But… php-fpm has its own “tmp” directory, something like /tmp/systemd-private-RANDOM-php-fpm.service-RANDOM/tmp, which the cgi-bin has no access to. To populate $_FILES in a way compatible with the rest of the framework, it appears I need to run the /cgi-bin script in the same context as the php-fpm environment, so files can be accessed across all the different parts of the web app. This includes related things like access to the $_SESSION data files, and so on.
How do I even begin? Google searches are loaded with pre-systemd material, like Perl CGIs accessing PHP data, and so far I have found no Apache directives that have been helpful.
Any ideas?
6 thoughts on - Systemd, PHP-FPM, And /cgi-bin Scripts
Why not implement this directly as a PHP script that runs via php-fpm instead of via “standard” CGI?
Don’t share data between services with /tmp.
Create a separate directory to share data, make sure the permissions and SELinux attributes allow writing there. Put it in
/run/yourservice/ if you want it to be ephemeral and small.
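One way to have such a directory created automatically at boot is a systemd-tmpfiles fragment. This is a minimal sketch; the file name and the apache user/group are assumptions for illustration, and “yourservice” is the placeholder from above:

```
# /etc/tmpfiles.d/yourservice.conf  (hypothetical file name)
# Type  Path              Mode  User    Group   Age
d       /run/yourservice  0770  apache  apache  -
```

After dropping the file in place, `systemd-tmpfiles --create` (or a reboot) creates the directory with those permissions.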
The reason why the php-fpm service has its own private /tmp directory is because the php-fpm.service has “PrivateTmp=true” in its [Service]
section. This creates a private /tmp namespace for the php-fpm process, which is a good security practice.
If you absolutely must share files via /tmp, you’ll have to create an
/etc/systemd/system/php-fpm.service.d/override.conf with a [Service]
section that sets PrivateTmp=false.
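A minimal sketch of that drop-in (note this disables the private /tmp sandboxing, which weakens the security benefit described above):

```
# /etc/systemd/system/php-fpm.service.d/override.conf
[Service]
PrivateTmp=false
```

Then run `systemctl daemon-reload` and `systemctl restart php-fpm` for the override to take effect.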
Because “normal” PHP processes all of the POST data in memory and is thereby constrained by the limit of available memory, typically in the range of a few MB. This makes it impossible to upload LARGE files, e.g. hundreds of MB or GBs in size.
The cgi-bin workaround works because the CGI script has direct access to stdin and thus can process the input in chunks without using a large amount of memory.
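A minimal sketch of that chunked approach, assuming the CGI script streams the raw request body to a destination file (the spool path and file names are hypothetical):

```php
<?php
// upload.php -- run as a CGI executable. Streams the request body
// (stdin for a CGI process, exposed as php://input) to disk in
// fixed-size chunks, so memory use stays small regardless of size.
$in  = fopen('php://input', 'rb');
$out = fopen('/var/spool/myapp/upload.tmp', 'wb'); // hypothetical path
while (!feof($in)) {
    $chunk = fread($in, 1 << 20); // read 1 MiB at a time
    if ($chunk === false) {
        break;
    }
    fwrite($out, $chunk);
}
fclose($in);
fclose($out);
```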
But… if it can’t maintain session state, then I cannot get the uploaded data in the right place, nor validate the user with session data.
See responses below.
I’m not trying to share data between services with /tmp. I’m trying to let other services share the sandboxed /tmp provided by systemd.
Already done for the instance of php-fpm running. I need access to that security context from within the cgi-bin script that runs as a shell script fork under Apache.
Yep. Not trying to expand the security footprint any more than absolutely necessary.
… which is why I’m not trying to do that. I want to *share* systemd’s security context.
There’s another way to go using a shared socket service, but it’s messy and complicated – never a good idea when security is a primary concern.
…
…
I think it is possible, but has side effects. https://php.net/manual/en/ini.core.php#ini.enable-post-data-reading
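Per the linked manual page, enable_post_data_reading is a PHP_INI_PERDIR setting, so it can be turned off just for the upload endpoint. A sketch, assuming a per-directory .user.ini is honored for the FPM-served script:

```
; .user.ini in the upload script's directory (hypothetical placement)
; With this off, PHP no longer reads the POST body itself, so $_POST
; and $_FILES stay empty (the side effect mentioned above) and the
; script must consume the raw body from php://input on its own.
enable_post_data_reading = Off
```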
On 26.04.2019 at 09:38, Markus Falb wrote:
the application should not use POST, it should use PUT …