@evan @luis_in_brief that's a great rationale, thanks! I was hoping it wasn't going to be "we have decided to never implement this due to reasons X, Y and Z"
"Content like images, videos, text, audio that are created in the old account is not moved to the new account. If the old server goes down, all that content is lost."
@evan @luis_in_brief I'd personally want much better support for migrating accounts between instances before leaning too hard on university/employer-run hosts
I need to be able to migrate my post history, not just my followers
I also have trouble with the fact that I'm interested in a LOT of different things, so picking a host that aligns with just one aspect of my personality feels limiting
I wrote about the AI trust crisis: when companies like Dropbox and OpenAI say "we won't train models on your private data", it's increasingly clear that a lot of people simply don't believe them. https://simonwillison.net/2023/Dec/14/ai-trust-crisis/
I think of it more as taking an /average/ of every example it's seen - still completely ignoring licensing and copyright issues
I often use it to refactor my code - "extract this into a function" for example - where everything it outputs is "copied and pasted" from the input I gave it, just in a very slightly different shape
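To show the kind of transformation I mean, here's a made-up Python example (not from any real session) of the before and after:

```python
# A made-up illustration of the "extract this into a function" refactor -
# the kind where the LLM's output is just my own code in a new shape.

raw_scores = [87.5, None, 92.0, 78.5, None]

# Before: inline logic buried in a script
scores = [s for s in raw_scores if s is not None]
average = sum(scores) / len(scores) if scores else 0.0

# After: the same lines, extracted into a reusable function
def average_score(raw_scores):
    """Average the non-None scores, returning 0.0 if none remain."""
    scores = [s for s in raw_scores if s is not None]
    return sum(scores) / len(scores) if scores else 0.0

average = average_score(raw_scores)
print(average)
```

Nothing in the "after" came from anywhere but the "before" - which is why the copyright framing feels wrong for this usage.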
@luis_in_brief @dalias @matt @danilo @maria the first person pronoun thing is such a huge problem, I really wish that hadn't become the standard for how these tools work
@dalias @matt @danilo @maria this is why I think it's so important to dispel the idea that these things are superintelligences
They're spicy autocomplete... but it turns out spicy autocomplete can be incredibly useful if you take the time to learn how to use it effectively, which isn't nearly as easy as it looks at first
@dalias @matt @danilo @maria one of the biggest challenges of this technology is that it looks easy to use, but that's actually very deceptive - it's extremely hard to use well
Using it to get great results in a responsible way requires a ton of practice and knowledge about how the tech works, which is difficult to teach people because so much of it depends on developing intuition about what works reliably and what doesn't
@dalias @matt @danilo @maria I encourage people who are getting started with it to try and find a situation where it confidently gives them a clearly incorrect result
My hope is that the earlier you see it get something obviously wrong, the quicker you can form a mental model that it's not "intelligent" in the human sense of the word
@matt @danilo "And so the problem with saying “AI is useless,” “AI produces nonsense,” or any of the related lazy critique is that it destroys all credibility with everyone whose lived experience of using the tools disproves the critique, harming the credibility of critiquing AI overall." 💯
@dalias @matt @danilo that's not true. 90% of the output I get from LLMs is genuinely useful to me. Comparing it to a magic 8-ball doesn't work for me, at all.
@dalias @matt @danilo @maria same way I do with random information I find on Google, or stuff that an occasionally confidently wrong teacher might tell me
@dalias @matt @danilo @maria I genuinely think that the idea that "LLMs get things confidently wrong, so they're useless for learning" is misguided
I can learn a TON from an unreliable teacher, because it encourages me to engage more critically with the information and habitually consult additional sources
It's rare to find any single source of information that's truly infallible
@dalias @matt @danilo @maria maybe there are people out there who can't learn from LLMs because they lack the ability to responsibly consume unreliable information, but I would hope that everyone can learn information skills that overcome that - otherwise they're already in trouble from exposure to Google search
Found the prompts for the excellent new @IceCubesApp image description generation feature - it's using GPT-4 vision and sending the image with the prompt:
> What’s in this image? Be brief, it's for image alt description on a social network. Don't write in the first person.
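Here's a rough sketch of what a call like that looks like with OpenAI's Python client - the model name, image handling and token limit here are my assumptions, not code from the app:

```python
# Hypothetical sketch of this kind of GPT-4 Vision call - not Ice Cubes'
# actual code; model name and parameters are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

prompt = (
    "What's in this image? Be brief, it's for image alt description "
    "on a social network. Don't write in the first person."
)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=150,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
        ],
    }],
)

print(response.choices[0].message.content)  # suggested alt text
```

The "don't write in the first person" instruction matters because these models default to replies like "I can see a dog...", which reads very strangely as alt text.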
Barnstormer of an essay by Bruce Schneier about AI and trust. Worth spending some time with - hard to extract the highlights since there are so many of them
A key idea is that we are predisposed to trust AI chat interfaces because they imitate humans, which means we are highly susceptible to profit-seeking biases baked into them
Written news articles often have a secret language to them which hints at the underlying reporting process - language like "according to **two people familiar with** the board’s deliberations".
I don't think these are at all obvious to people who haven't worked in news, so I wrote an article about them: