@mrhamel It's funny that you think I tinker with my production systems, or with any system in the enterprise pipeline. The negging attitude isn't necessary.
All of my corporate and enterprise work is handled through the equivalent of ITIL change control, with rigorous stages of testing: dev unit testing, then several layers of automated load testing (hundreds of simulated workloads, batch suites, traffic replay, tens to millions of simulated clients using DPDK and TRex, among several other loadgens) with network traffic running at several Tbit/s of concurrency. Then it's off to staging, where every hardware generation actively in production gets loaded and must pass multi-day to multi-week reliability validation. I don't discuss that on social media because most of it is under strict NDA.
The majority of the load test architectures I've built involve hundreds of machines, thousands to tens of thousands of cores, and nodes costing $8K to $180K depending on role and spec, with substantial budgets for a team of engineers to run the environment while coordinating dev and production teams, all so the org can pass compliance requirements. These architectures have absolutely influenced global infrastructure, and some of them really have had FreeBSD involved, so I'm not sure what your point is about hating on one operating system in particular, but it's unnecessary and I honestly don't care.
The complaints I usually discuss on social media are about test bench systems, ones that don't carry global internet traffic, and most of them are mine - where I usually run rolling releases or test/edge repos. I expect breakage there, but I don't care for the lazy engineering standards present in much of what's changed in OS and service management over the past decade.
systemd is another story, and it's always been garbage wherever it runs. That's not debatable; just look at the CVE list.