as we wrote at length in our thread yesterday, a lasting fix would address the SOCIAL weakness by aiming to disseminate emotional awareness skills and vocabulary, heal the fracture lines in society, and not leave individuals isolated
you know, it occurs to us also that the social dynamic where somebody with abusive intent shows up and suddenly starts acting like a project owner's new best friend is in no way unique to software
we doubt we could find it again, but we once read about a confidence scam perpetrated on a restaurant owner which worked the same way
It's absolutely true in principle that if you think someone is malicious, you shouldn't use anything they give you, source or binary. However, important things happen on the margins of how you find out there's malice. Information security is one of those mirage games where attacker and defender each try to guess how much effort it's worth to the other party, and then go gratuitous and extra on their own stuff so the other's efforts will be wasted.
We need to keep this one in mind, because for quite some time we've been advocating against downloading binaries from software authors, since there's no way to validate them. People usually seem to react as if that's a purely theoretical concern (surely, we're told, if the author is malicious or their credentials get stolen, the git repo would be corrupt too?), and we're left having to argue that it would be a sensible way for them to be stealthy about the attack...
We haven't yet dug into this ourselves, but if we're understanding the report correctly, the bug is present in the source tarball, and possibly in the binary, but only part of it is in the git repository itself.
Also, not all attackers who have control of a github account or the artifacts published on it are the original author of the software. If the threat actor is hoping to hide their own identity, that may limit their willingness to do things that produce audit logs...
Though, that said, apparently a portion of the attack vector here was in fact checked into the repo. From what we've read (we haven't looked for ourselves yet), it had two components: a compressed test file that was part of the project's test suite, and an m4 macro that causes data from that file to be included in the build. The test file was checked in to git; the macro was not.
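For anyone who wants to see what "compare the tarball to git" actually looks like in practice, here's a rough sketch of the kind of check we mean: list which files in a published release tarball are missing from, or differ from, a checkout of the matching git tag. The file and directory names below are made up for illustration, and this is only a sketch, not a vetted tool.

```python
# Rough sketch (hypothetical file names): report files that exist only in a
# release tarball, or whose contents differ from a checkout of the matching tag.
import hashlib
import pathlib
import tarfile

TARBALL = "project-1.2.3.tar.gz"        # the published release tarball (hypothetical)
CHECKOUT = pathlib.Path("project-git")   # a working tree checked out at the matching tag

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

with tarfile.open(TARBALL) as tar:
    for member in tar.getmembers():
        if not member.isfile():
            continue
        # Archive paths look like "project-1.2.3/src/foo.c"; drop the top directory.
        rel = pathlib.Path(*pathlib.Path(member.name).parts[1:])
        data = tar.extractfile(member).read()
        local = CHECKOUT / rel
        if not local.exists():
            print(f"only in tarball: {rel}")
        elif digest(local.read_bytes()) != digest(data):
            print(f"differs from git: {rel}")
```

Everything this prints still needs a human to look at it, because release tarballs legitimately ship generated files (configure scripts and the like) that never live in git - which is exactly the cover this attack used. The point is just to shrink the pile of files you have to reason about, so a quietly swapped m4 macro stands out instead of hiding among the generated stuff.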
Privacy and security are topics that exist in this really weird state where experts can be certain that some element of their mental model is real and needs serious attention, based on nothing more than their own knowledge that a certain class of attack WOULD make a ton of sense and work really well, IF anyone else has thought of it.
We stand by our original argument, that it was a reasonable precaution because it COULD help in principle. Because that's how threat modeling works: there's always a ton of guesswork, but you just have to keep trying anyway.
We would still be arguing that even if we didn't now have an example where it DID help.
We don't blame anyone for not finding our point about binaries to be persuasive here. (And now we have to amend that to say tarballs, apparently...)
We don't blame you, even now with this concrete example to look at. That's just how it goes. Like we said: Everything in this field is mirages and phantasms. The hard part is staying grounded - don't jump at shadows, don't ignore real threats. Follow your own sense of reality, but still calibrate by talking to your peers.
You'll have these arguments - we've had too many to count - trying to convince stakeholders to put in place the precautions you think are warranted. Stakeholders will say no, that's purely theoretical, it costs too much. You win some of those arguments; you lose most of them.
Five years after you have the conversation, it turns out an attacker did think of it about a year after your boss decided not to implement the precaution, and their attack has been stealing your data for four years.
Just to make this more explicit, because it came up a few times in replies and really it should be on the main thread: in our past discussions of this, we've talked about BINARIES. Today's discovery pertains to both binaries and tarballs, and even a component that was checked into the repo. In those respects our prediction was slightly off: there were details we missed, and those details are important.
Even so, we're happy to have this concrete example to point to in the future, because it will help us convince people to be careful in SOME of those discussions. That has real impact. We will never know the full scope of harm this kind of work prevents, but people have real stuff on the line, often more than they themselves realize.
It's not about us being right or wrong - we try hard to detach from our ego and not indulge our feelings about being "right". It's about keeping people safe.
We reiterate, though, that for RedHat's users - at least on the experimental channel - there's real impact: people's shit got compromised today. Security teams will be doing fire drills (the tongue-in-cheek term for getting everyone together to mitigate something urgent) to figure out what they've lost and whether there's business impact. Users whose distros build from git aren't having to do that today, which is very nice for them.
We still think our fundamental point stands. Again, there are distros that build out of the git repo and those distros are unaffected today. RedHat's decision to use the tarball was defensible, for the same reason that we don't blame anyone who's unconvinced by this thread, as we went into above.
In a field where half the job is trying to perceive the actual terrain you're standing on through all the illusions, the difference between a theoretical concern and a practical concern is smaller than you think - so it makes sense to err on the safe side. That's all we're trying to say. <3
additionally: going forward, now that everyone knows this one worked (briefly), how common should we expect attacks of this nature to be? what portion of them should we expect to detect, and at what stage in their lifecycles?
You are all dreams and we are happy to know you, as you are nice dreams.