Seeking photo: Does anyone have a good photo of the National Library of Thailand, preferably of the inside (showing books), that they'd be willing to let me use in a blog post?
If it quacks like a fake doc ... it might be scammy approaches to using language models (generative mathy maths) in healthcare. Join me and @alex as we take apart some truly appalling examples in the next episode of Mystery AI Hype Theater 3000 this Friday, Feb 17, 9:30am Pacific.
News outlets that use large language models (e.g., #ChatGPT) to write any of their articles are abandoning journalistic ethics—and showing that they don't respect their readers/don't think their readers deserve actual reporting.
As Gizmodo documents, CNET claimed that they had editors fact-checking everything the text synthesis machine (so-called "AI") wrote, but editors fact-checking synthesized text are being asked to do a different job from editors fact-checking the work of actual journalists.
If the goal here is to save human time/the expense of hiring humans, we'd want to check that the overall effort to fact-check synthetic text is lower than the effort of writing + fact-checking. I doubt it.
But Gizmodo suggests that the goal is SEO for ad clicks, making this another clear example of two bad things:
1) The profit motive again distorting information access systems (see @safiyanoble's _Algorithms of Oppression_)
2) Synthetic text polluting our information ecology.
Now is the time to demand good journalism (not just about "AI") and to support good journalism. And to heap shame on CNET and anyone else who pulls stunts like this, whether quietly (as CNET did) or with fanfare.
If this is true*, OpenAI decided that in order to build their product, they needed to take actions that are illegal within the US, so they outsourced that ... and then took possession of the illegal images.
There's also something especially obscene about the contrast between OpenAI's reported incipient valuation of $29 billion and the sums mentioned in this article ($200,000 for the whole contract).
That throws into stark relief what kind of tech labor is valued and what kind is considered incidental. See also: @maryLgray and Suri's _Ghost Work_, @ubiquity75's _Behind the Screen_, and this piece from @adrienneandgp, @milamiceli & @timnitGebru
It's not surprising that this so-called "AI" product also involves exploited labor (exposing poorly paid people to traumatic content at high velocity), but it is still another thing to have it documented like this. Thank you @perrigo for this reporting.
As noted by @CriticalAI (over on Twitter), this op-ed is weirdly misinformed #AIHype. Cheap text synthesis is definitely a threat, but it is one because *people* could use it to (further) gum up the communication processes in our government.
Q for those who find themselves interested in playing with #ChatGPT: Why is this interesting to you? What value do you find in reading synthetic text? What do you think it's helping you learn about the world, and what are you assuming about the tech to support that idea?
This framing is so gross. To see (human!) generated (ahem: English) text as a "vital resource", you have to be deeply committed to the project of building AI models, and in this particular way.
Surely the lesson here (which is not new; see the work of Strubell et al. 2019, etc.) is that the approach to so-called "AI" that everyone is so excited about these days is simply unsustainable.
h/t @evanmiltenburg who draws an excellent connection to @abebab 's work on values in ML research:
Super frustrated with all the cheerleading over chatbots for search, so here's a thread of presentations of my work with Chirag Shah on why this is a bad idea. Follow threaded replies for:
op-ed
media coverage
original paper
conference presentation
I am a professor of linguistics at the University of Washington, where I run our Master of Science in Computational Linguistics. I work on computational approaches to syntax and the syntax-semantics interface, the role of #linguistics in #NLP, & on the societal impacts of language technology. On social media, I spend a lot of time debunking #AIhype, for which I find linguistics very useful!
Professor, Linguistics, University of Washington
Faculty Director, Professional MS Program in Computational Linguistics (CLMS)
If we don't know each other, I probably won't reply to your DM. For more, see my contacting me page: http://faculty.washington.edu/ebender/contact/