Say it with me as many times as it takes to make the lesson stick: Chatbots are not a suitable replacement for search engines.
https://dair-community.social/@emilymbender/109570353001872254
Here is another case study in how anthropomorphization feeds AI hype --- and now AI doomerism.
The headline starts it off with "Goes Rogue". That's a predicate used to describe people, not tools. (Also, I'm fairly sure no one actually died, but the headline could be clearer about that, too.)
>>
Professor of linguistics here (with a PhD in syntax). Yes, people sometimes use third person expressions to describe themselves. But that doesn't make them first person expressions. The whole point of that article is that machines shouldn't be programmed to use first person expressions (among other anthropomorphizing choices).
@jeffjarvis @aaribaud @kotaro no, that's third person, unambiguously.
This paper, by Abercrombie, Cercas Curry, Dinkar and @zeerak is a delight!
arxiv.org/abs/2305.09800
Some highlights:
Fig 1 is chef's kiss. The initial system response reads like it was made 'safe'. But then read the improved version. I'd love to live in that world.
>>
A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) isn't actually understanding.
Great new profile of @timnitGebru in the Guardian.
“I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
Was how she put it all the way back in 2016.
And in 2023:
“That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can abdicate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”
So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things.
I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.
>>
There's a newish genre of LLM paper I'm starting to see, where the authors put together an enormous suite of benchmarks, test a bunch of models on them, and write a 50-100 page opus talking about how LLMs can now do shiny new thing.
The reader is absolutely buried in some enormous matrix of tests (models x benchmarks, in the simplest case) and each benchmark goes by way too quickly to actually establish construct validity.
>>
Hey #MAIHT3K fans! We're getting ready to launch as a podcast, and we need some real artwork. We're offering a commission for a show logo and maybe some other social media assets. Are any of you talented folks interested in this?
With @alex of course :)
In case you were wondering whether @geoffreyhinton has gone full xrisk doomer, here he is comparing the possibility of rogue "AI" to climate change, and declaring the latter easier to fix.
Hinton later in the clip: "I'm just a scientist who suddenly realized that these things are getting smarter than us and I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us."
Holy lack of power analysis, Batman. "Suddenly realized" that oh noes, we made tech that's TOO POWERFUL! And yet nothing about how companies and people are using the tech to concentrate power, despite years of research showing exactly that.
"Their concerns aren't as existentially serious as the idea of these things getting more intelligent than us and taking over." --Hinton in this clip. Their = @timnitGebru & co.
https://www.cnn.com/videos/tv/2023/05/02/the-lead-geoffrey-hinton.cnn
Algorithmic border wall & other surveillance is existentially serious to its targets.
Synthetic media creating non-consensual porn is existentially serious to its targets.
Automated decision systems denying social benefits are existentially serious to those left without necessary supports.
>>
ShotSpotter and similar technology that sends police in with the idea that they are encountering a "live shooter" situation are existentially serious to the Black & Brown people in the path of the police.
False arrests mediated by face recognition system errors are existentially serious to the people arrested.
And all of these are things that are actually happening!
>>
"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good.
From @daveyalba
>>
“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”
>>
“Google’s leaders decided that as long as it called new products ‘experiments,’ the public might forgive their shortcomings, the employees said.”
➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.
>>
“Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.”
➡️Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$).
“But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.”
➡️False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines.
>>