“New Junior Developers Can’t Actually Code | N’s Blog”
https://nmn.gl/blog/ai-and-learning
JFC. Reading this just as I'm feeling especially discouraged about staying in coding was probably not a good idea.
“New Junior Developers Can’t Actually Code | N’s Blog”
https://nmn.gl/blog/ai-and-learning
JFC. Reading this just as I'm feeling especially discouraged about staying in coding was probably not a good idea.
“Turns Out the “Killer App” of AI—Summarizing Things—Is, Um, Really Bad at Summarizing Things | The Internet Review”
https://theinternet.review/2025/02/15/bad-summaries/
I was pointing this out over a year ago.
https://www.baldurbjarnason.com/2023/ai-summaries-unreliable/
"AI is Stifling Tech Adoption"
https://vale.rocks/posts/ai-is-stifling-tech-adoption
> the advent and integration of AI models into the workflows of developers has stifled the adoption of new and potentially superior technologies due to training data cutoffs and system prompt influence
“DOGE as a National Cyberattack - Schneier on Security”
https://www.schneier.com/blog/archives/2025/02/doge-as-a-national.html
> By modifying core systems, the attackers have not only compromised current operations, but have also left behind vulnerabilities that could be exploited in future attacks—giving adversaries such as Russia and China an unprecedented opportunity. These countries have long targeted these systems. And they don’t just want to gather intelligence—they also want to understand how to disrupt these systems in a crisis.
Knowledge tech that's subtly wrong is more dangerous than tech that's obviously wrong. (Or, where I disagree with Robin Sloan.)
https://www.baldurbjarnason.com/notes/2025/subtly-wrong-is-more-dangerous/
I failed a saving throw against blogging.
I disagree with pretty much both the core premise and every step of the reasoning of this post
I've long been a fan of Sloan's work and that makes me feel obligated to explain why
Maybe if I scream into a pillow the urge to write a reply blog post will go away?
“New hack uses prompt injection to corrupt Gemini’s long-term memory”
This is why I don't bother commenting on "AI" news much anymore. People have been pointing out that this shit is badly thought out and inherently insecure since before the ChatGPT launch, and the news keeps confirming it, but none of it registers with the bubble crowd.
"Elon Musk’s DOGE Is Working on a Custom Chatbot Called GSAi"
Automated decision-making creates accountability sinks that can be used to excuse horrifying decisions
"‘Things Are Going to Get Intense:’ How a Musk Ally Plans to Push AI on the Government"
Automated decision-making creates accountability sinks for horrifying decisions. This does not bode well
"Musk Allies Discuss Deploying A.I. to Find Budget Savings"
Much of the US budget is literally responsible for keeping people alive or protecting them from harm. Automated decision-making creates accountability sinks that can be used to excuse horrifying decisions
It may feel unfair to many of you, but this is going to be the legacy of anybody who is still working in "AI". This is going to be the inheritance you leave to the future and nothing you do is likely to come close to offsetting it.
“UK orders Apple to implement secret global backdoor for end-to-end encryption – Six Colors”
> This is red alert, five-alarm-fire kind of stuff.
I don't know why people keep saying things like "China could make the same kind of demand" as if that's the biggest worry, when the US administration has literally threatened to Anschluss Greenland and Canada
I’ve often made the point that generative AI is an amazing tech much like asbestos is an amazing material: they have qualities that feel like genuine miracles but at a human cost so high that broad adoption is only possible if human life is devalued beyond what has been acceptable up until now
But much of the adoption of generative models doesn’t come from the few it does well, but is driven by those who do not understand the job they’re replacing.
I turned yesterday's thread on how LLM-based tools could be taken over for propaganda into a blog post:
"Poisoning for propaganda: rising authoritarianism makes LLMs more dangerous"
https://www.baldurbjarnason.com/2025/poisoning-for-propaganda/
I extended it to explain the mechanisms that could be used to do this
“But I’d notice if the LLM started censoring my work!”
Really? Did you notice this? https://github.com/orgs/community/discussions/72603
The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place. That’s the job it does. You won’t notice when the censorship kicks in
I’d like to reiterate what I said a while back: integrating LLM-based tools into all corporate and personal workflows is outright dangerous. Even when run locally, most LLMs in use are trained and tuned by corporations that are now deeply in bed with a lawless authoritarian takeover of the US
People who are removing all references to minorities, women, and equality from your public spheres will not hesitate to ask corporations to tune centrally-controlled LLMs to censor the same from your work
“AI Foreclosure”
https://2ndbreakfast.audreywatters.com/ai-foreclosure/
> In this AI future, there is no accountability. There is no privacy. There is no public education. There is no democracy. AI is the antithesis of all of this.
So, no joke, I’m pretty sure governments in the west are more likely to react strongly, rattling the trade embargo saber and attempt to change legislation, to a Chinese company using public domain output of a piece of software than to the tech industry strip-mining the creative industries of copyrighted work, devastating much of them in the process.
So, the idea seems to be that works by humans—such as pretty much everything in the training data set OpenAI and the rest used without permission—should not have copyright protection? But the autogenerated output of a programs should?
Not that I’m expecting my colleagues in tech to notice how absurd that is, but that is an absurd argument
It’s gonna be hard to maintain the largest, most bloated financial bubble in history when the Chinese just march in and go “fuck your bubble, here’s the same kind of garbage you’re making but at 1/10-1/50 the cost and requires much less hardware.”
Writer, web developer and consultant based in Hveragerði, Iceland. Lapsed Interactive Media Academic. Webby Tech Stuff and webby book stuff.
GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.
All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.