I really want more focused rhetorical flair from CISA tbh. Something like "Every security patch is a failure of process and initiative. They should be extremely rare - not on a monthly cadence. A secure by design product does not have a patch cycle."
@alecmuffett @mdfranz @dave_aitel No, "acknowledging" that is an abdication of responsibility to make the scope of one's own work maximally correct and secure, even if other broken components of the system may compromise the system as a whole. "Everything is broken so why try?" is the reason everything is broken.
@dalias @mdfranz @dave_aitel it is really easy to escape charges of murkiness and complexity by narrowing or changing the scope; similarly, we could chop out any vulnerabilities which occurred in linked libraries or (e.g.) due to operating systems' random number generators being weak… but at some point that just becomes cheating.
Easier instead to acknowledge the murkiness and that software is complicated and messy.
@alecmuffett @mdfranz @dave_aitel A large number of those are in integrations with insecure junkware, hardware vulnerabilities unrelated to OpenSSH, tools other than the sshd either distributed with or entirely independent of OpenSSH itself, etc. The double free looks like a really serious one, but it was only briefly introduced and seems to have affected users tracking the latest version rather than long-term releases.
@mdfranz @dave_aitel It's definitely possible. Look at something like OpenSSH, where incidents are once-in-a-decade events and most are minor weakenings rather than catastrophes.
@alecmuffett @mdfranz @dave_aitel Cut down the number of components that are exposed as attack surface until you can count them on one hand, and make sure they're developed with the same level of competence and track record for rarity of severe vulns as something like OpenSSH 😈 rather than something like Chrome 🤡.
@dalias @mdfranz @dave_aitel You're absolutely right, it is an abdication; but in the past 35 years or so I've seen trusted platforms and A1 secure trusted systems and provers… and they all go on for about 5 to 10 years before people get bored and move on to the next thing.
@mdfranz I'm with you re: that observation, although one will never convince the people who are into theorem proving or formal methods or Coq or whatever, because they live in a world of small elegant perfect things. /cc @dalias @dave_aitel
@alecmuffett @dalias @dave_aitel That (well-intentioned) nonsense would never survive in any commercial product company where the bar for delivery is "mostly works most of the time", with a bare minimum of testing and only the happy path covered in CI/CD.
@mdfranz @alecmuffett @dave_aitel Thus commercial product companies' products don't survive against motivated attackers. There's a reason all the near-unbreakable stuff is done by dedicated FOSS volunteers (note: I'm not claiming the converse!) and not by tech companies.
@dalias @alecmuffett @dave_aitel I have a SaaS bias, but many vulnerabilities are cross-component, often because so few security folks understand the end-to-end, full-stack view, or because security functions are delegated to another component.
@dalias @mdfranz @alecmuffett @dave_aitel At the same time, something as critical as OpenSSH ought to pass code audits. You don't need to write things yourself to assert they're good, although you'll probably end up writing patches or submitting bug reports. Meanwhile, the ocean smells even without much inspection.
@mdfranz @alecmuffett @dave_aitel I don't think I wrote OpenSSH. 😁 I'm talking about the "ocean" in the post I was immediately replying to - just the bulk of software that's not designed secure from the ground up.
@dalias @alecmuffett @dave_aitel "ocean of garbage" meaning the code and services you (or your team) didn't write? Or the underlying cloud infrastructure your service depends on but has limited control over? Or upstream/downstream services?
@alecmuffett @mdfranz @dave_aitel 🤷 You just need to be in a position of not depending on the ocean of garbage in ways that you suffer significant harm if it's compromised.