I feel very vindicated for not making time to answer journalists' queries about papers that "prove" things based on hypothesized graphs and fabricated data.
Next up on Mystery AI Hype Theater 3000: @alex and I will take on the hype around AI and productivity: GPTs are *not* GPTs; "AI" is not here for your job ... but it might make it shittier.
In case anyone was still in doubt, Google is not at all interested in "organizing the world's information" (despite that language still being in their mission statement). Mixing non-information, LLM-extruded sludge in with authentic information is actually the opposite.
Sometimes I get into arguments with people who say "But what about using grammar checkers/spell checkers?" We had both of those before LLMs were used to synthesize text, and they worked well enough, thank you.
This is funny, but also actually a really bad sign for general enshittification of the web. The most alarming detail here is that Amazon is actually promoting the use of LLMs to create fake ad copy.
Ready for some AI hell catharsis? Mystery AI Hype Theater 3000 episode 23 has dropped! In "AI Hell Freezes Over," @alex and I somehow survive a harrowing journey through all the regions of AI Hell before the ice melts...
A quick thread on #AIhype and other issues in yesterday's Gemini release:
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @meg and @timnitGebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017.
They had to bend over so far backwards to do this too. The framing is about "more than a decade of work" and then they include Bill Gates for having been shown a sneak preview of ChatGPT in Aug 2022 & falling for it.
Check it out! Stochastic Parrots is now available in audiopaper form, read by @timnitGebru, @meg, Angelina McMillan-Major, and me, and produced by Christie Taylor.
If you've never read the original and appreciate this format for reading, this is for you!
Next stop: both-sides-ing reporting of "existential risk". OpenAI is deep within the TESCREAList cult. It's staffed by people who actually believe they're creating autonomous thinking machines that humans might one day merge with, live as uploaded simulations, etc. 19/
It is an enormous disservice to the public to report on this as if it were a "debate" rather than a disruption of science by billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading---and philosophers and others feeling important by dressing these same silly ideas up in fancy words. 20, 21/
If TESCREAL as an acronym is unfamiliar, start with this excellent talk by @timnitGebru, reporting on joint work with @xriskology connecting the dots: 22/
To any journalists reading this: It is essential that you bring a heavy dose of skepticism to all claims by people working on "AI". Just because they're using a lot of computer power/understand advanced math/failed up into large amounts of VC money doesn't mean their claims can't and shouldn't be challenged. 24/
There are important stories to report in this space. When automated systems are being used, who is being left without recourse to challenge decisions? Whose data is being stolen? Whose labor is being exploited? How is mass surveillance being extended and normalized? What are the impacts on the natural environment and the information ecosystem? 25/
The article starts with some breathless but vague reporting about an unpublished and completely unsubstantiated "discovery" and "[threat] to humanity". Will the body of the article provide actual evidence? (Of course not.)