I am very much in favor of people who work for big tech "to do good from the inside" or "to change them from within" realizing that you don't change the system: either you leave, or you are changed.
Sometimes I'd love to see a tracker though: "It took me making 1 million dollars to realize that you can't twist capitalism into something good from the inside" would at least help contextualize later fundraising.
When I tell people that I don't really use "AI" assistants, "AI" bros always tell me: "You can't criticize them because you don't use them. $whateverrandom model I use is awesome and does everything perfectly; you just haven't invested the time to find which model, under which conditions and prompts, works well enough for you."
My sweet summer child. If "AI" startups want me to test their products, they can ask me for my daily rates and I'll do it. But I don't work for free to be their PR person. My arguments are structural, and structural reasons don't change just because someone massaged their prompts better or trained their network for some benchmark.
What a ridiculous idea: You don't drink every day? How can you criticize alcoholism? The vodka I drink every day makes me smarter.
A few thoughts on why I don't use writing or coding assistants.
"[AI Assistants] create distance between me and my thinking and my writing, they alienate the visible output of my work from my work. They alienate me from my writing."
The fact that conservative tax-cut plans are not counter-financed is no accident: explicitly underfunding the state in order to then cut social services and investments in the common good out of apparent "necessity" is the core of neoliberal strategy.
The belief that digital technologies and the Internet lean towards democratic values is as naive and as semi-religious as economists' belief in the free market. TBH the two beliefs share many more properties than that.
Microsoft is one of the main companies pushing #AI assistants into business and education contexts.
Let's ask Microsoft Research for the consequences:
"Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving."
So Microsoft Research agrees (and shows in a study) that usage of GenAI assistants has a significant negative impact on cognitive skills, especially when it comes to critical thinking and problem-solving abilities.
How does anyone think that this should have a place in schools or universities, areas where the development of critical thinking and problem-solving abilities is the core task?
With increasing attacks on trans people in many places of the world, collecting and providing information about potential escape routes is becoming more and more important. @transworldexpress is collecting that crucial information. #transrightsarehumanrights
There will probably never be full automation. The goal of the corporation is to automate as much high-level work as possible and use cheap - if possible illegally cheap - labor to serve as a "physical work adapter". People get reframed as "sensors" to make the physical world accessible to machines. Software can't eat the world without human labor.
Just as a small step to stop normalizing current fascists: Do not ever - even ironically - title anything "Make X Y again". Don't give further power to those memes and their (TBH inherently conservative) modes of thinking about the world.
Sociotechnologist, writer and speaker working on tech and its social impact. Communist. Feminist. Antifascist. Luddite. Email: tante@tante.cc | License CC BY-SA-4.0 | #noAI | "One-man counterculture" (SPIEGEL)