@skinnylatte Wow, and I was looking forward to returning to San Francisco because it's so much more vibrant than the suburb of Portland I've been living in.
@skinnylatte Someone was writing about how in late medieval Venice, workers typically only worked a few days each week, didn't have their own kitchens, and ate out all the time. We've been trained to have skewed assessments of living standards.
There's this moralistic thing about how everyone should cook for themselves. I don't like cooking, and I'm packing to move, so I'm relying on prepared foods more than usual, and they're cheaper than buying the ingredients for the simple meals I can cook.
@dalias @xgranade Yes. The deception of "AI" is central to the problem of it, why it's not just another technology. The premise of "AI" is that human subjectivity is worthless, if not an outright illusion, and something that can and should be eliminated.
It's not just that it's a tool for social control, but an ideological argument that human life is worthless.
That "AI" cannot work reliably is almost irrelevant to the ideological argument.
@whitequark I couldn't handle college and gave up trying after six years. Every so often I hear someone talk about how college was effortless and it just makes me feel stupid.
I eventually did a practical program at a community college, but it's not the same.
Every so often I think about trying to finish my English literature degree (I had about a year and a half left), in part to affirm the importance of the humanities.
Sometimes I get the impression that a lot of people have entirely forgotten it's a field of study.
@cwebber I remember watching some documentary about a startup in the second Internet boom, and thinking, "These people aren't nerds. They may be smart, they may know a lot about software, but they are not nerds. They aren't interested in learning things for the sake of learning things. They want money and status."
@0xabad1dea One of the fundamental issues I have with "prompt engineering" is that it's literally the opposite of engineering, and I'm shocked more engineers don't say that. It's an assault on the sciences and the humanities at the same time.
Management and economics students are, by and large, parasites who deal in pseudoscience.
@mhoye I'd also point out that if hundreds of millions of people get nagged repeatedly to push the red button, then some people will push the red button. That doesn't take strong desire.
@hipsterelectron I took a series of online courses from Google on site reliability engineering. The first few parts were a decent review of the basics, but it broke down in the fourth course on containerization. "So now your Python script is a Docker image and you manage Docker with Kubernetes and you manage Kubernetes with a Google proprietary tool and now you're webscale."
@thomasfuchs The element that most jumps out at me is the idea that as a last resort, you should just skip an unfamiliar word. That's an incredibly bad idea. No, if you can't figure the word out, you stop reading and ask for help, or look the word up in a dictionary.
A single word can change the meaning of a sentence. Writers know to be sparing in their use of unusual words, so if there's an unfamiliar word, it's likely to be the most important word on the page.
@dalias @mirabilos @m0xEE @Ember @zak Friday I was reading an interview with the CEO of Siemens, who was going on and on about how the only possible way to increase productivity was with AI, and that meant they needed to get their hands on customer data; if they didn't, the entire corporation would collapse.
I don't think genAI tools can exfiltrate data yet, but the corporations behind them sure have a keen interest in figuring out how to do that. And I don't think we can trust a developer who doesn't see the threat.
@dalias @mirabilos @m0xEE @Ember @zak Generative "AI" models use exponentially more data for each generation, and nearly all publicly accessible data has already been used to create the existing LLMs. The corporations behind genAI are absolutely desperate to exploit confidential data.
So there is a fundamental conflict of interest in trying to secure confidential user data with tools from corporations that have an extremely strong interest in stealing that data.
I'm going to try migrating from KeePassXC to PasswordSafe. It's a little funny: PasswordSafe was the first such tool I used, years ago, and it looks like it's still being maintained.
KeePassXC starting to accept LLM-generated code provokes a lot of anxiety for me. That's a critical tool I use dozens of times a day. I'd expect the developers to be extraordinarily cautious, but they're clearly not.
I'm already worried about Red Hat using LLM-generated code. We're going to lose the fucking Linux kernel.
@gwynnion I am, honestly, very worried about what is going to happen in Portland in the next few days, given the extreme hostility from the White House.
There's been a protest strategy of wearing costumes, being very obviously not violent. I was talking about Banana Block a few days ago, and then they made an appearance! And then the Feds beat up a marching band in banana costumes.
It doesn't matter if we do anything provocative. The White House will just lie about it either way.
'In the end, however, we must escape from the debris with whatever booty we can rescue, and recast our technics entirely in the light of an ecological ethics whose concept of "good" takes its point of departure from our concepts of diversity, wholeness, and a nature rendered self-conscious -- an ethics whose "evil" is rooted in homogeneity, hierarchy, and a society whose sensibilities have been deadened beyond resurrection.'

The Ecology of Freedom, Murray Bookchin

#Autism #SocialEcology #PDX