Amusingly, I had prompted "Build a much simpler version of this which is hard coded to California and only works in California and makes that clear with the page title" and then forgot about that detail, so when I tried to load up the page in Minneapolis I got this!
Here are my favorite snippets from my recent appearance on the Accessibility + Generative AI podcast, including notes on using LLMs to help with alt text and the ethics of building accessibility tools on top of inherently unreliable technology https://simonwillison.net/2025/Mar/2/accessibility-and-gen-ai/
Today in AI weirdness: if you fine-tune a model to deliberately produce insecure code it also "asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively" https://www.emergent-misalignment.com
Just found out a blog post I wrote in 2006 and then promptly forgot about was the starting point for a 15-year process that resulted in RegExp.escape() landing in the next ECMAScript standard! https://simonwillison.net/2025/Feb/18/tc39proposal-regex-escaping/
Wrote up my selfish personal argument for releasing code as Open Source: if you solve a problem and then release it under an Open Source license you will never have to solve that problem again for the rest of your career!
Here's a fun clue for people who are obsessing over how much time has passed since S1 (Milchick said 5 months, but why would we believe anything any Lumon employee says about anything?)
@quinn totally - and I'm sympathetic to that, this stuff really is weird and frightening and a lot of the negative implications and harmful applications are very real
I confess to finding it a little frustrating when I'm accused of being an "LLM shill" or "breathlessly proselytizing" while my ai+ethics tag has 121 posts and counting https://simonwillison.net/tags/ai+ethics/
@profdiggity @wordshaper @cwebber so let's pass some regulations such that it's not legal to do that stuff, with penalties with teeth for companies caught doing it
(Throw in some whistleblower rewards too, incentivize reporting!)
@cwebber @zacchiro I also think it's a really harmful conspiracy theory, because it teaches people that everything is lost already, and it trains believers to accept a dystopian surveillance state as long as it lets them keep using their phones
@cwebber @profdiggity it's technically feasible to run on-device transcription models today, in 2025 (see the sketch below this post)
These conspiracy theories have been commonplace since 2017, if not earlier
The issue of whether or not malicious attackers can use zero-day vulnerabilities to spy through a microphone should be discussed separately from whether ad tech companies are using those tricks for targeting IMO
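To illustrate the on-device transcription point: here's a minimal sketch (not from the original post) using the open-source openai-whisper Python package as one example of such a model. The package name, model size, and file path are assumptions for illustration; the weights are downloaded once and inference then runs entirely on the local machine.

```python
# Minimal sketch of on-device transcription, assuming the open-source
# openai-whisper package is installed (`pip install openai-whisper`).
# Model weights are fetched once, then inference runs locally:
# no audio leaves the machine.
import whisper

model = whisper.load_model("base")          # small multilingual Whisper model
result = model.transcribe("recording.mp3")  # placeholder path to a local audio file
print(result["text"])                       # plain-text transcription
```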
I'm fully aware that this is a total waste of my time, but given today's news about an Apple settlement relating to Siri audio recordings I wrote this blog post about why I still don't think that companies are spying on us through our phone's microphones: https://simonwillison.net/2025/Jan/2/they-spy-on-you-but-not-like-that/
@iveyline I really hope not. I like LLMs that augment human abilities - that give us new tools. That's one of the reasons I'm unexcited about the idea of "AGI" - that sounds like a human-replacement play to me, which doesn't interest me at all.
Open source developer building tools to help journalists, archivists, librarians and others analyze, explore and publish their data. https://datasette.io and many other #projects.