I was reading up on the xz backdoor and found a pretty good rundown on it here:
https://thenewstack.io/linux-xz-backdoor-damage-could-be-greater-than-feared/
A couple of thoughts on this. First, the scary thing about this on the surface was that the malicious code was intentionally introduced by a trusted contributor who had worked on the project for over two years. This was a supply chain attack, but also a bit of social engineering of the OSS community. Prior to this new contributor showing up out of the blue, xz had been languishing somewhat under a single maintainer who appeared to be less and less able to keep up with it. In short, he was looking for someone to pass it on to, and Jia Tan seemed like the perfect candidate—apparently by design. So when we say he was a trusted contributor, we really only mean that he gained the trust of the original maintainer. Con the right person, show you are helpful and competent for a few years, and you are handed the keys to the kingdom. And since the kingdom is a boring compression utility that most people don't think about, there's not as much scrutiny on it as you might think, or more accurately, hope.
But wait, you might say, isn't the whole point of open source that you have many eyes on the actual source code, so that malicious code and vulnerabilities are discovered essentially through crowdsourcing? Yes! That is indeed a huge advantage of OSS. And if the code that sat in the repo for everyone to see had been what the package managers of major Linux distros were actually using, this would never have been a problem. Which brings me to point number two, which is far scarier to me. Apparently most distros prefer using manually built upstream tarballs over pulling git sources directly. Including boring old stable Debian, where the malicious code was first detected. To be clear, this was in Debian sid, and the malicious code never made it to a stable release, but then again it was only found because a software engineer at Microsoft decided to investigate why an ssh login was taking 500ms too long. Which is way too close for comfort in my book.
So why is this so shocking? Well, the malicious code never made it into the git repo where all of those crowdsourced eyeballs would have had a chance to catch it. Instead it was embedded in a build script in the upstream tarball that nobody was looking at. Instead of trusting the collective wisdom of the open source community, distros installing via this tarball were trusting only the person who signed it. In this case that was Jia Tan, and that trust was extended only because the original maintainer trusted him enough to let him create and sign the tarballs. So basically, because one person was conned, the entire infrastructure of the Internet was put at risk. To me, that's what we should really be worrying about.
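To make that concrete, here's a minimal sketch, in Python, of the kind of check that would have surfaced the discrepancy: unpack the release tarball and diff it against the git tag it claims to be built from. The repo URL, tag name, and file names here are just illustrative, and to be fair, autotools releases legitimately ship generated files (configure, Makefile.in, and so on) that aren't in git, so the output needs a human eye. But that generated-file haystack is exactly where the doctored build script was hiding.

```python
import subprocess
import sys
import tarfile
import tempfile
from pathlib import Path

REPO = "https://github.com/tukaani-project/xz.git"  # upstream repo (illustrative)

def extract_tarball(tarball: Path, dest: Path) -> Path:
    """Unpack the release tarball and return its single top-level directory."""
    with tarfile.open(tarball) as tf:
        tf.extractall(dest)
    (top,) = [p for p in dest.iterdir() if p.is_dir()]  # assumes one top-level dir
    return top

def checkout_tag(tag: str, dest: Path) -> Path:
    """Shallow-clone only the tagged revision the tarball claims to match."""
    subprocess.run(
        ["git", "clone", "--quiet", "--depth", "1", "--branch", tag, REPO, str(dest)],
        check=True,
    )
    return dest

def main() -> None:
    # Usage: python check_tarball.py xz-5.6.0.tar.gz v5.6.0  (names illustrative)
    tarball, tag = Path(sys.argv[1]), sys.argv[2]
    with tempfile.TemporaryDirectory() as tmp:
        tmp_path = Path(tmp)
        tar_dir = extract_tarball(tarball, tmp_path / "tarball")
        git_dir = checkout_tag(tag, tmp_path / "git")
        # diff -r --brief lists files that differ or exist on only one side,
        # which is exactly where a tarball-only addition would show up.
        result = subprocess.run(
            ["diff", "-r", "--brief", "--exclude=.git", str(git_dir), str(tar_dir)],
            capture_output=True, text=True,
        )
        print(result.stdout or "Tarball matches the tagged source.")

if __name__ == "__main__":
    main()
```

Nothing here is exotic. The point is that nobody was routinely running even this crude a comparison, because the tarball was signed by a maintainer everyone had decided to trust.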
Time and again, technology has promised to eliminate the need for personal trust. Mechanisms are created so that everything is in the open and can be verified, but those mechanisms only work as long as people understand them and are paying attention. The problem is that's a lot of work, so we fall back on ad-hoc systems of personal trust, which are a lot easier for our primate minds to understand. They feel more real than something as abstract as the collective wisdom of the open source community.
Or, to take another recent example, people want to get into crypto, but they don't want to have to learn about blockchains and public and private keys, so they trust con men like SBF to do it for them because they saw a slick commercial with Larry David in it. Once again we use personal trust as a shortcut to gain access to the shiny new object that is only shiny and new because it's supposed to eliminate the need for that trust in the first place.
This is not to say that person-to-person trust is not valuable. As the admin of a small Mastodon instance, I rely on building and maintaining that trust with my users. However, mediating that trust through technology doesn't make it easier or more secure; it just makes it harder in a different way. By the way, I'm including systems of government and finance in the broad definition of "technology" here. If we develop systems to replace personal trust, we need to understand that they are not a solution in and of themselves. The systems themselves must be maintained and understood, and we need to keep in mind that our brains are poorly suited to innately understanding the abstractions they produce. In short, technology doesn't obviate our need to think critically—it in fact makes it all the more critical for us to do so.