"GitHub CEO delivers stark message to developers: Embrace AI or get out."
So, I guess, if one wants to get out, @Codeberg is the place to go.
https://www.businessinsider.com/github-ceo-developers-embrace-ai-or-get-out-2025-8
"GitHub CEO delivers stark message to developers: Embrace AI or get out."
So, I guess, if one wants to get out, @Codeberg is the place to go.
https://www.businessinsider.com/github-ceo-developers-embrace-ai-or-get-out-2025-8
digital transformation - the replacement of a government process that doesn't work with a web application that also doesn't work.
A website getting hacked and losing 13,000 “verification photos and images of government IDs” in the same week the age verification nonsense comes into force for the #OnlineSafetyAct is fitting. Because that’s what we will see a whole lot more of… to protect the children.
This morning, I requested a number of popular LLMs answer a simple legal question of the form "it is illegal in England and Wales to do [X] - please set out the legislation that makes it such".
The response discussed a piece of legislation that was repealed in 2005.
And another piece of legislation that applies in Scotland. The fact "(Scotland)" is in the short title might be some kind of clue.
It completely failed to state the current law.
Still, let's pump it into the nation's veins.
If you asked a programmer whether it is possible to write a really short program that’s stable, fast, deals with all edge cases (foreseeable and not) and is understandable by human beings, they’d say “no, engineering requires trade-offs”.
But when it comes to open source licenses, brevity of license text is seen as the main concern. Hence why people like stuff like the “do WTF you like” license (the WTFPL).
Please grasp that explicit error handling is as valuable in legal drafting as it is in coding.
Wikipedians on the front line of keeping AI nonsense from getting into articles have prepared a list of common catchphrases and signs.
https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup/AI_catchphrases
Good news: GitHub Copilot is scared shitless of the concept of gender.
Instead of foo, bar and baz, use “genderqueer”, “demisexual” and “leather_daddy” and you’ll reduce the likelihood of getting a spammy PR from a sad robot.
AI company CEOs: "we're concerned that our souped-up version of Siri might accidentally turn out to be a deity that might hack into nuclear missile silos and kill everyone. That said, this is definitely the future. And we need to cram $10/month AI feature subscriptions into everything so we can help create the conditions for communing with the reincarnated techno-Jesus and/or get capital from Masayoshi Son."
Commentators: "There's a depressing lack of nuance in discussions around AI."
Nick Clegg says that a law requiring tech companies to ask permission to train AI on copyrighted work would ‘kill’ the industry, according to The Times.
Okay, maybe let it die then. Maybe we can get back to building tech that actually works rather than hyping up hallucinating plagiarism machines in order to fluff up corporate quarterly reports.
If generative AI just went away tomorrow, it would be a net positive for the world.
@paco The other definition of AI in wide use by the political class is: "I dunno what the hell it is, but $OUR_COUNTRY needs to win at it rather than the evil scheming bastards in $OTHER_COUNTRY."
@pdcawley Funnily enough, one of the examples I've been thinking about is AI generated tests for repetitive CRUD stuff.
Needing too many AI generated unit tests may be a sign that the language/framework/codebase doesn't have good enough abstractions or type safety etc.
On a very basic level, if I declare that a function takes Optional[String], and if—big if, mypy!—I trust the type system to enforce it, I don't need an AI to spit out twenty "what if you gave it an int though?" tests.
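To spell that out, here’s a minimal sketch of the point (the `greet` function is invented for illustration; in Python the annotation would be `Optional[str]`):

```python
from typing import Optional

def greet(name: Optional[str]) -> str:
    # The annotation documents the contract: name is a str or None.
    # mypy enforces it statically, so a caller passing an int is
    # rejected before the code ever runs.
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name}"

print(greet(None))   # the None branch is the only edge case left to test
print(greet("Ada"))
```

Calling `greet(42)` would be flagged by mypy ("incompatible type int") without anyone, human or robot, writing a unit test for it.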
The report is long and complicated so people are asking an LLM to summarise it? That sounds like the report author needs to level up their writing ability. Make the executive summary better.
Students are using AI bots to explain dense material? Okay, there’s an opportunity there for a more entry level textbook (e.g. the Cambridge or Routledge Companion series). Or for more group discussion between students to supplement lectures/assigned reading.
Also the classic coding one: using LLMs to generate basic boilerplate code. This is a sign, perhaps, of a lack of maturity of the language ecosystem.
Outside of university assignments/personal study, why are you reimplementing textbook/boilerplate code rather than it being in stdlib or a trusted annex to stdlib (e.g. Java’s Commons-Lang)? That’s a genuine question to ask.
Hard to find? Dependency management sucks? Risk of (supply chain) vulns? These are language/community issues worth fixing!
A thing I’ve been thinking about: when someone says “this is a good use case for generative/agentic AI”, that’s usually a sign that the process could be improved.
Like, people use LLMs to write overly fluffy covering letters for job applications. OK, just have an application form.
Or people use LLMs to understand errors when coding. Okay, that’s a sign to make the error handling more readable/helpful. E.g. the Rust compiler has pretty excellent errors compared to “syntax error on line 37”.
“We need to maintain America’s strategic lead in turning graphics cards into advice on how to glue cheese to pizza. America’s enemies might steal a lead on our making up completely fake legal citations and generating pictures of Garfield fighting Bagpuss outside a Wetherspoons in Milton Keynes”, uttered a very serious general in the White House Situation Room.
@aral you’ll trip when you learn the main reason macOS has case insensitivity is compatibility with Adobe products. </unamused>
“Apple iPhone sales dip despite AI rollout”
Dunno, maybe AI generally and Apple Intelligence specifically are things which people either don’t care about or actively want to turn off.
Microsoft have had to spend millions in advertising to try to convince people they want Copilot in spite of the fact they very much don’t.
"We just let the AI decide—if someone is unhappy with it, they can appeal/request a human review" seems to be one of the ways people are arguing for the use of AIs by public bodies or courts.
But if the protection for those whose case is being decided is the ability to appeal/request human review, you don't need an AI, you can just have a random number generator tossing a proverbial coin.
`return random.choice(["claimant", "defendant"])` uses a lot less energy than an ML model.
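As a runnable sketch of that one-liner (the `decide` function and `case_id` parameter are invented for illustration, not any real system):

```python
import random

def decide(case_id: str) -> str:
    # Stand-in for the "AI decision" being criticised: a coin toss
    # between the two parties. The safeguard (appeal / human review)
    # does all the real work either way.
    return random.choice(["claimant", "defendant"])

print(decide("case-0001"))
```

Same due-process guarantees, a tiny fraction of the compute.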
@zleap there are ways we could do “AI” development sensibly. I wouldn't bet that anyone in the government has a clue how to do so.
Government hyping up AI and tech transformation needs to be contrasted with the long history of gigantic fuck-ups.
Like the Oracle debacle in Birmingham.
Eternally damned techno-priest heretic, curly brace balancer, and pesky citation requester.
GNU social JP is a social network, courtesy of GNU social JP管理人 (the GNU social JP administrator). It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.
All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.