
The most recent (yesterday) issue was caused by me testing the waters with the Buster upgrade on one of our masters (Nekomata). The upgrade itself went fine: the server came back up after a reboot with the kernel updated, PHP updated to 7.3 and so on. Previously our servers used Apache HTTPd to power s-ul, but very quickly once we grew we started to see the implications of Apache not being able to keep up with the requests, and as a result we moved away from it. However, we never removed the apache2 package from Nekomata; we simply disabled it from starting up, "just in case". It looks like the Buster upgrade re-enabled it, and it seems a more recent reboot caused Apache to come back up without us noticing, so users were presented with Apache instead of s-ul.

Second most recent was caused by the PHP 7.3 update. Why? Because the cronjob pointed to /usr/bin/php7.1 instead of /usr/bin/php, and 7.1 had been uninstalled for 7.3. Big brain moves right there. This was similar to a previous outage where the s-ul file cache filled up and basically crumbled.

Third most recent was caused by our SSL certificate expiring.
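
On the Apache front, here's a rough sketch of how to make sure the leftover apache2 install can't sneak back on its own. This is standard Debian/systemd tooling, not necessarily exactly what we ran on Nekomata:

    # Check whether apache2 is set to start at boot
    systemctl is-enabled apache2

    # Stop it now and disable it from starting at boot
    systemctl disable --now apache2

    # Mask the unit so package upgrades can't silently re-enable or start it
    systemctl mask apache2

    # Or just remove the package entirely if it's never coming back
    apt purge apache2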
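
For the cronjob, the fix is simply pointing it at the unversioned /usr/bin/php, which Debian's alternatives system keeps pointed at an installed PHP version. The schedule and script path below are placeholders, just to show the idea:

    # Before: dies the moment php7.1 is uninstalled
    # */5 * * * * /usr/bin/php7.1 /path/to/cache-cleanup.php

    # After: survives PHP version bumps (schedule and script path are placeholders)
    */5 * * * * /usr/bin/php /path/to/cache-cleanup.php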
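
And for the certificate expiry, a quick check like this (domain is a placeholder) can be dropped into monitoring so it doesn't happen again:

    # Print the cert's expiry date for a host
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -enddate

    # Exit non-zero if the cert expires within the next 7 days (604800 seconds)
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -checkend 604800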

Apologies for the recent server issues/downtime; this should hopefully address it. Also, sorry for the long-ass post.
