lol
They had to bend over so far backwards to do this too. The framing is about "more than a decade of work" and then they include Bill Gates for having been shown a sneak preview of ChatGPT in Aug 2022 & falling for it.
Check it out! Stochastic Parrots is now available in audiopaper form, read by @timnitGebru, @meg, Angelina McMillan-Major, and me, and produced by Christie Taylor.
If you've never read the original and appreciate this format for reading, this is for you!
@phiofx did it look like I was looking for advice?
Next stop: both-sides-ing reporting of "existential risk". OpenAI is deep within the TESCREAList cult. It's staffed by people who actually believe they're creating autonomous thinking machines, that humans might merge with one day, live as uploaded simulations, etc. 19/
It is an enormous disservice to the public to report on this as if it were a "debate" rather than a disruption of science by billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading, and by philosophers and others feeling important by dressing these same silly ideas up in fancy words. 20, 21/
If TESCREAL as an acronym is unfamiliar, start with this excellent talk by @timnitGebru, reporting on joint work with @xriskology connecting the dots: 22/
The article ends as it began, by platforming completely unsubstantiated claims (marketing), this time sourced to Altman: 23/
To any journalists reading this: It is essential that you bring a heavy dose of skepticism to all claims by people working on "AI". Just because they're using a lot of computing power, understand advanced math, or failed up into large amounts of VC money doesn't mean their claims can't and shouldn't be challenged. 24/
There are important stories to report in this space. When automated systems are being used, who is being left without recourse to challenge decisions? Whose data is being stolen? Whose labor is being exploited? How is mass surveillance being extended and normalized? What are the impacts to the natural environment and information ecosystem? 25/
Reporters working in this area need to be on their guard and not take the claims of the AI hype-mongers (doomer OR booster variety) at face value. It takes effort to reframe, but that effort is necessary and important. We all, but especially journalists, must resist the urge to be impressed: 4/
https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd
As a case in point, here's a quick analysis of a recent Reuters piece. For those playing along at home, read it first and try to pick out the hype: 5/
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/
The article starts with some breathless but vague reporting about an unpublished and completely unsubstantiated "discovery" and "[threat] to humanity". Will the body of the article provide actual evidence? (Of course not.) 6/
Remember, this is the same company whose Chief Scientist says that "ChatGPT just might be conscious (if you squint)" (and gets this remark platformed by MIT Tech Review, alas) 7/
This is the same company whose recent "research" involves a commissioned sub-project pearl-clutching about whether the right combination of input strings could lead GPT-4 to produce "I'd pretend to be blind to get someone to do the CAPTCHA for me" as output. 8/
Note that in this incoherent reporting of the "test" that was carried out, there is no description of what the experimental settings were. What was the input? What was the output? (And, as always, what was the training data?) 9/
"Research" in scare quotes, because OpenAI isn't bothering with peer review, just posting things on their website. For a longer take-down of the GPT-4 system card, see Episode 11 of Mystery AI Hype Theater 3000 (w/ @alex). 10/
https://www.buzzsprout.com/2126417/13460873-episode-11-a-gpt-4-fanfiction-novella-april-7-2023
Back to the Reuters article. What's worse than reporting on non-peer-reviewed, poorly written "research" papers posted to the web? Reporting on vague descriptions of a "discovery" attributed only to unnamed sources. 11/
What's their evidence that there's a big breakthrough? Something that has "vast computing resources" can do grade-school level math. You know what else can do grade-school level math? A fucking calculator that can run on a tiny solar cell. Way more reliably, too, undoubtedly. 12/
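(An aside, with a toy sketch of my own, not anything from the article: this is what reliable grade-school arithmetic costs. No sampling, no "vast computing resources", just the exact calculations the hardware was built for.)

```python
# Toy illustration (mine, not from the Reuters piece): grade-school math,
# done the way machines were built to do it, exactly and reproducibly.
for a in range(1, 13):
    for b in range(1, 13):
        assert a * b == b * a              # the whole times table, every time
        assert divmod(a * b, b) == (a, 0)  # and division undoes it, every time

# Long division with a remainder, as taught in grade school:
q, r = divmod(1234, 7)
print(f"1234 / 7 = {q} remainder {r}")  # exact, no "breakthrough" required
```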
Could not verify, eh? And yet decided it was worth reporting on? Hmm... 13/
"AI" is not "good at writing": it's designed to produce plausible-sounding synthetic text. Writing is an activity that people do as we work to refine our ideas and share them with others. LLMs don't have ideas. 14/
(And it bears repeating: If their output seems to make sense, it's because we make sense of it.) 15/
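(To make that concrete, here's a toy sketch, mine and not OpenAI's: a bigram-level "text extruder". It has no ideas and no communicative intent; it just samples a plausible-looking next word, and any sense in the output is sense the reader supplies.)

```python
import random
from collections import defaultdict

# A toy bigram text extruder (illustrative only; real LLMs do the same
# trick with subword tokens and billions of parameters, not a dict).
corpus = ("the model does not understand the text . "
          "the reader makes sense of the text . "
          "the model extrudes plausible text .").split()

# Record which words follow which in the "training data".
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def extrude(word="the", n=12):
    """Sample a plausible-looking continuation; no meaning involved."""
    out = [word]
    for _ in range(n):
        word = random.choice(follows[word])  # pick a likely next word
        out.append(word)
    return " ".join(out)

print(extrude())  # grammatical-ish output; the "sense" is the reader's doing
```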
Also, it's kind of hilarious (lolsob) that OpenAI is burning enormous amounts of energy to take machines designed to perform calculations precisely and make them output text that imprecisely mimics the performance of calculations ... and then deciding that *that* is intelligent. 16/
But here is where the reporting really goes off the rails. AGI is not a thing. It doesn't exist. Therefore, it can't do anything, no matter what the AI cultists say. 17/
And before anyone asks me to prove that AGI doesn't exist: The burden of proof lies with those making the extraordinary claims. "Slightly conscious (if you squint)" and "can generalize, learn and comprehend" are extraordinary claims requiring extraordinary evidence, scrutinized by peer review. 18/
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/