@seachanger Anger implicitly requires that its subject had another choice that would have been better. That's a kind of radical optimism, I think? The belief that it didn't have to be this way, and that it doesn't have to keep being this way?
I don't really know how else to put it. There are so many amazing people doing such amazing stuff in tech.
At the same time, an industry that took ethical standards seriously and was honest with its customers would necessarily be one in which AI could not be produced or deployed.
That AI is flourishing as a product says quite a bit. It may be worth listening.
I've criticized AI. A lot. And I stand by those criticisms.
At the same time, my critique of AI feels like it's pulling some very necessary punches, because it's not *just* AI. It's the whole industry that made this travesty possible.
There is something deeply rotten about tech as an industry, and even as bad as AI is, I increasingly think of it as a symptom.
It kind of sucks that "adult toy" is a euphemism for "sex toy." Like sure, those are all well and good, but what about poseable figures that break if you look at them the wrong way, or gunpla, or ESP32-C6 dev boards, or completely impractical computers that beg to be called cyberdecks, or 3D printers, or infrared cameras, or overly fancy drones, or record players and vinyl albums, or anything that still has a floppy drive in 2024?
It's clearly not wrong; people share what they're excited about and what they love. It's more that it points me toward understanding that opsec for people in general comes down more to what's easy and hard than to what's possible and impossible.
It's tricky to work backwards from time of totality to location, but trivial to read a bio. Privacy isn't just about knowledge, but the cost of inference.
Maybe kind of like locks in that way? Most home locks can be trivially bypassed by someone who knows what they're doing and who has appropriate tools. Smart locks may require different tools, but aren't more "secure."
Either way they both clearly communicate an intended boundary and ensure that transgressing that boundary takes at least some work.
Not including location in bio is a statement that one's location isn't intended to be public — that one can work it out isn't permission or consent.
Fuck me, fuck everything, fuck this whole climate-burning labor-killing fraud. I just want to touch computers, not constantly guard against rogue scams taking over every aspect of my online life.
I wish we didn't call LLMs doing what they were designed to do "hallucinations." It's not that LLMs mostly output factually correct info but sometimes get confused; it's that truth value and veracity have *zero* bearing on the output of an LLM.
It's not that there's some epistemic model that's gone wrong, it's that the whole technical problem they solve is how to output *plausible*-sounding text. If they happen to solve that by plagiarizing correct text, that's as much a solution as grinding up a bunch of unrelated texts to make a response.
To put it bluntly, if an LLM tells you that something is true, you have received precisely no evidence either way as to the correctness of that claim.
Given the climate (and many other reasons!), we should be making public transit easier to use, not effectively criminalizing it by turning it even more into a surveillance dystopia.
Seriously, that kind of "scanner" only makes sense if you treat riders as criminals by default, and thus subject to the same levels of inhumanity that our society seems to see fit to direct at prisoners.
Sometimes I write intimate eschatologies or words about technology and math. Sometimes I make things by burning them with light or squeezing them through a small, hot tube. Sometimes I push water with a stick while sitting in a tiny boat. If you're looking for my business account, go check out https://social.dual-space.solutions/@cgranade!