Good news: GitHub Copilot is scared shitless of the concept of gender.
Instead of foo, bar and baz, use “genderqueer”, “demisexual” and “leather_daddy” and you’ll reduce the likelihood of getting a spammy PR from a sad robot.
AI company CEOs: "we're concerned that our souped-up version of Siri might accidentally turn out to be a deity that might hack into nuclear missile silos and kill everyone. That said, this is definitely the future. And we need to cram $10/month AI feature subscriptions into everything so we can help create the conditions for communing with the reincarnated techno-Jesus and/or get capital from Masayoshi Son."
Commentators: "There's a depressing lack of nuance in discussions around AI."
Nick Clegg says that a law requiring tech companies to ask permission to train AI on copyrighted work would ‘kill’ the industry, according to The Times.
Okay, maybe let it die then. Maybe we can get back to building tech that actually works rather than hyping up hallucinating plagiarism machines in order to fluff up corporate quarterly reports.
If generative AI just went away tomorrow, it would be a net positive for the world.
@paco The other definition of AI in wide use by the political class is: "I dunno what the hell it is, but $OUR_COUNTRY needs to win at it rather than the evil scheming bastards in $OTHER_COUNTRY."
@pdcawley Funnily enough, one of the examples I've been thinking about is AI-generated tests for repetitive CRUD stuff.
Needing too many AI-generated unit tests may be a sign that the language/framework/codebase doesn't have good enough abstractions, type safety, etc.
On a very basic level, if I declare that a function takes Optional[str], and if—big if, mypy!—I trust the type system to enforce it, I don't need an AI to spit out twenty "what if you gave it an int though?" tests.
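A minimal sketch of what I mean (a toy function, names invented for illustration):

```python
from typing import Optional

def greet(name: Optional[str]) -> str:
    """Return a greeting, tolerating a missing name."""
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name}"

# mypy rejects greet(42) at check time (incompatible type "int"),
# so nobody needs an AI-generated "what if it's an int?" unit test.
```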
The report is long and complicated, so people are asking an LLM to summarise it? That sounds like the report author needs to level up their writing ability. Make the executive summary better.
Students are using AI bots to explain dense material? Okay, there’s an opportunity there for a more entry level textbook (e.g. the Cambridge or Routledge Companion series). Or for more group discussion between students to supplement lectures/assigned reading.
Also the classic coding one: using LLMs to generate basic boilerplate code. This is a sign, perhaps, of a lack of maturity of the language ecosystem.
Outside of university assignments/personal study, why are you reimplementing textbook/boilerplate code rather than getting it from the stdlib or a trusted annex to the stdlib (e.g. Java’s Commons-Lang)? That’s a genuine question to ask.
Hard to find? Dependency management sucks? Risk of (supply chain) vulns? These are language/community issues worth fixing!
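To make that concrete, here's a toy illustration of a mature stdlib absorbing boilerplate (Python dataclasses standing in for whatever your ecosystem offers):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    email: str

# __init__, __repr__ and __eq__ are generated for you; no LLM required
# to churn out the hand-written versions.
assert User("Ada", "ada@example.com") == User("Ada", "ada@example.com")
```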
A thing I’ve been thinking about: when someone says “this is a good use case for generative/agentic AI”, that’s usually a sign that the process could be improved.
Like, people use LLMs to write overly fluffy covering letters for job applications. OK, just have an application form.
Or people use LLMs to understand errors when coding. Okay, that’s a sign to make the error handling more readable/helpful. E.g. the Rust compiler has pretty excellent errors compared to “syntax error on line 37”.
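A toy sketch of the difference, in Python (hypothetical CLI, made-up command names):

```python
import sys

def load_config(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # The unhelpful version is a bare traceback ending in FileNotFoundError.
        # Instead, say what went wrong and what to do next:
        sys.exit(
            f"error: config file not found: {path}\n"
            "  hint: create one with `myapp init`, or pass --config <path>"
        )
```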
“We need to maintain America’s strategic lead in turning graphics cards into advice on how to glue cheese to pizza. America’s enemies might steal a lead on our making up completely fake legal citations and generating pictures of Garfield fighting Bagpuss outside a Wetherspoons in Milton Keynes”, uttered a very serious general in the White House Situation Room.
@aral you’ll trip when you learn the main reason macOS has case insensitivity is compatibility with Adobe products. </unamused>
“Apple iPhone sales dip despite AI rollout”
Dunno, maybe AI generally and Apple Intelligence specifically are things which people either don’t care about or actively want to turn off.
Microsoft have had to spend millions in advertising to try to convince people they want Copilot in spite of the fact they very much don’t.
"We just let the AI decide—if someone is unhappy with it, they can appeal/request a human review" seems to be one of the ways people are arguing for the use of AIs by public bodies or courts.
But if the protection for those whose case is being decided is the ability to appeal/request human review, you don't need an AI, you can just have a random number generator tossing a proverbial coin.
`return random.choice(["claimant", "defendant"])` uses a lot less energy than an ML model.
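The complete "system", for the avoidance of doubt (a joke sketch, obviously, not a proposal):

```python
import random

def decide_case(_case_file: object) -> str:
    # Ignores the evidence entirely—exactly as accountable as
    # "appeal if you're unhappy" makes the AI version.
    return random.choice(["claimant", "defendant"])
```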
@zleap there are sensible ways we could do “AI” development. I would not bet that anyone in the government has a clue how to do so.
Government hyping up AI and tech transformation needs to be contrasted with the long history of gigantic fuck-ups.
Like the Oracle debacle in Birmingham.
Very glad the British government wants to replace reliable computers with ones that make shit up.
Apparently there are many economic benefits from computers that make shit up.
In 2028, the entirety of Western society took the advice of LinkedIn influencer guys and podcast bros and gave up their jobs and professions to pursue the life of being e-book marketing influencers.
We no longer had doctors, nurses, bin collectors, teachers, shelf stockers, plumbers, bakers and so on—just people selling AI-generated e-books to one another about how you can make money selling AI-generated e-books, with a side hustle in referral codes for undrinkable green vitamin juice.
Excellent American legal advice does not apply without qualification to England. Shutting the fuck up is often good, but ss. 35-37 of the CJPOA 1994 exist, and shutting the fuck up can be a very bad choice.
Good news, you are entitled to talk to someone who is well qualified to help you decide. Exercise that right.
I started testing a popular LLM with multiple choice questions used in a professional qualification exam.
So far it's doing only a tiny smidgen better than chance alone, and way below the pass rate.
Anyone who tells you that a chatbot is gonna replace your doctor, lawyer, teacher or whatever any time soon is selling you an absolute load of baloney. None of this takes away from how good these systems are at writing marketing copy and LinkedIn thought leader BS.
This morning, I asked a popular LLM a question about something that requires a little bit of expertise.
It precisely located the source of the information necessary to answer it... then provided paragraphs of wholly incorrect conclusions based on the correct source.
Here's a bold idea: you could have a system that gives you the source of the answer without the regurgitated incorrectness. Then rely on the human to read the original text and engage their brain.
Maybe call it a search engine.
"Sure, they can't answer questions for you, but Large Language Models will at least be useful for fixing spelling and grammar issues"
Literally just fed an LLM some text I wrote and it's changed the meaning of a number of sentences so they're not correct, invented stuff that wasn't there before, and introduced new spelling and style mistakes, including changing someone's surname from "Dove" to "Doe".
Great work, love it, definitely the future of computing—put it into everything now.