Stop using Postman already and use Bruno https://www.usebruno.com/
(Original title: Postman is logging all your secrets and environment variables)
I think the most tragic aspect of deploying "AI" in teaching and learning situations is how much it pushes people into learned helplessness. This constant feeling of not knowing how to do a thing, of being incapable of actually doing one's own work, is mentally so harmful. How do people under those conditions gain confidence in their abilities? Like ever?
So if you are an EU citizen, would you do me a favor and sign this petition to ban "conversion therapy" throughout Europe? Thanks
Another "AI" coding assistant review from the actual field "But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me."
(Original title: Martin Pitt: Testing sourcery.ai and GitHub Copilot for cockpit PR reviews)
This is exactly what most critics argued: It does not matter how good AI results are, their whole use is as a credible threat to lower labor power.
https://mas.to/@carnage4life/114466698654568229
Well, I wouldn't mind if "Can'tzler" caught on as a nickname for Merz. #cantzler
"Die AfD hat sich selbst entschieden, rechtsextrem zu werden. Die Hochstufung ist folgerichtig. Herzlichen Glückwunsch, der Preis dafür muss lauten: Verbotsverfahren!"
(Original title: AfD gesichert rechtsextrem: Drei Wörter: AfD, Verbot, jetzt)
Microsoft says 30% of their code is now AI generated.
As someone who has to use (and maintain) an Office 365 tenant (and all the Microsoft client software) I believe them 100%. This is not a recommendation.
The whole "AI" thing should have reignited conversations on what "creations" and "creativity" and "authorship rights" mean.
Like it seems to have settled on "we take it all, stochastic parrot goes brrrr" and "copyright über alles". Feels like a missed opportunity.
A team from the University of Zurich manipulated people on a subreddit for months with their AI bots to see "if they could change people's minds". Their bots pretended to be victims of sexual assault, pretended to be Black people opposed to the Black Lives Matter movement, etc. Massively manipulative shit. And they did not inform anyone. Nobody on that subreddit consented to being experimented on.
Their excuse "psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge".
When I say that I don't believe that academia is a huge help in us curbing the negative effects #AI will have, this is what I mean. The thing went through ethics boards and shit. Disgusting.
"Intel admits what we all knew: no one is buying AI PCs"
People would rather buy older processors that aren't that much less powerful but way cheaper. The "AI" benefits obviously aren't worth paying for.
https://www.xda-developers.com/intel-admits-what-we-all-knew-no-one-is-buying-ai-pcs/
The thing is: ChatBots are just a really bad interface for a lot of tasks that they're supposedly the future of.
"AI"==chatbot mostly comes from the fact that this is very easy to build. Especially if - as it is with most modern AI tools - you don't actually know what the real use case is as a developer.
Good interfaces derive their structure from the task the user is trying to solve and the expected knowledge and domain model that user has. This is not how most "AI" solutions' interfaces are built.
It is kinda funny. Terminal applications are always seen as too clunky and unwieldy for average non-nerds to use but that's exactly what chatbots are: Command line apps with unspecified parameters and outcomes.
The response to the predicted crash of the AI sector often is that "every crash leaves something useful behind" and that this time it will be models. I do not think that is the case.
AI models age like milk and the infrastructures left behind won't be ones that I see as helpful for democratic societies.
https://tante.cc/2025/04/15/these-are-not-the-same/
It's so painful to contemplate that Google just shoved their half-baked "AI Overviews" (which nobody asked for) into the search page to juice their "so many people are using our AI" numbers and keep stock market psychopaths happy.
@RangerRick it is a bit of an inverse though: A public benefit corporation says something about its own actions. I am looking for a project that tries to curb certain behaviors in users
A politician here at an opening event: "AI, we don't even know yet what we want to do with it, but we have to do it now, otherwise it'll be too late."
I hardly know where to start. But it is also something of a diagnosis.
"LLM did something bad, then I asked it to clarify/explain itself" is not critical analysis but just an illustration of magic thinking.
Those systems generate tokens. That is all. They don't "know" or "understand" or can "explain" anything. There is no cognitive system at work that could respond meaningfully.
That's the same dumb shit as what was found in Apple Intelligence's system prompt: "Do not hallucinate" does nothing. All the tokens you give it as input just change the part of the word space that was stored in the network. "Explain your work" just leads the network to lean towards training data that has those kinds of phrases in it (like tests and solutions). It points the system at a different part but the system does not understand the command. It can't.
"AI in the enterprise is failing faster than last year
[...]
in 2025, 46% of the surveyed companies have thrown out their AI proofs-of-concept and 42% have abandoned most of their AI initiatives — complete failure. The abandonment rate in 2024 was 17%."
(Original title: AI in the enterprise is failing over twice as fast in 2025 as it was in 2024)
Sociotechnologist, writer and speaker working on tech and its social impact. Communist. Feminist. Antifascist. Luddite. Email: tante@tante.cc | License CC BY-SA-4.0 | "Ein-Mann-Gegenkultur" ("one-man counterculture", SPIEGEL)
GNU social JP is a social network, courtesy of GNU social JP管理人. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.
All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.