I just noticed that an upcoming online talk I'm giving is being advertised with the wrong title -- as in the host "edited" it before sending it out. Like he knows better than me what I'm going to talk about. Who does that??
"It is not the case that “AI gathers data from the Web and learns from it.” The reality is that AI companies gather data and then optimize models to reproduce representations of that data for profit."
"The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated."
It's Agurketid (silly season) for Mystery AI Hype Theater 3000 -- @alex and I will be taking on a very silly long-form interview with a CEO whose fantasies about AI have really gotten the better of him.
Join us Monday July 29, 1pm Pacific twitch.tv/dair_institute
Does anyone have info about the context of this slide? I see it credited to an IBM presentation from 1979 -- but a presentation by whom, to whom, about what?
Why can't more journalists reporting on "AI" straightforwardly say "This is bad, actually"? Case in point: In this article juxtaposing the grandiose claims of OpenAI et al with the massive environmental footprint of LLMs, Goldman still has to include this weird AI optimism:
I am a professor of linguistics at the University of Washington, where I run our Master of Science in Computational Linguistics. I work on computational approaches to syntax and the syntax-semantics interface, the role of #linguistics in #NLP, and on the societal impacts of language technology. On social media, I spend a lot of time debunking #AIhype, for which I find linguistics very useful!
I use Mastodon for public scholarship, because that is one of the main benefits I see to public/open social media. It's how I used Twitter and I've definitely benefited from other people using social media this way.
But in my experience, Mastodon is 'splainy AF, which is exhausting. Just this morning, a gentleman decided I would benefit from an explanation of GIGO, FFS.
So, while I never insist on titles, I'm going to include mine in my display name for a while, to see if that helps.
"a potential gold mine for criminal hackers or domestic abusers who may physically access their victim’s device. Images include captures of messages sent on encrypted messaging apps Signal and WhatsApp, and remain in the captures regardless of whether disappearing messages are turned on in the apps."
In 2024 it *still* somehow isn't standard practice to ask in the design process: Are we building the killer app for domestic abusers?
Hey folks, let's have a little chat about construct validity -- the concept that if you're going to use a psychological test, there should be good reason to believe that (a) the thing it's meant to be testing is real and (b) results on the test reflect something about that thing. For example, an IQ test score is only informative if "general intelligence" is a real, measurable thing and the test actually tracks it.
This paper and its predecessor that Chirag Shah and I wrote started off as a reaction to Google's plans to shove "GenAI" into search. It was clear as early as 2021 (when we started writing the first paper) that this was a bad idea, and it's even clearer now.
As folks discuss the plundering of the open internet/sharing economy by the data-hungry LLM trainers, it seems like a good time to remind ourselves to find something other than "the tragedy of the commons" as a metaphor. On the racist, terrible origins of that phrase:
Must-read reporting by +972 on how the IDF are using “AI” in their indiscriminate murder in Gaza. It’s horrific, and we must not look away. And it’s an absolute nightmare of the usual sorts of AI harms cranked up to the extreme: mass surveillance, "we don't have any choice but to automate", AI as pretext for deadly violence.
There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:
It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.
Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
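For anyone who hasn't seen the pattern: here's a minimal, hypothetical Python sketch of RAG (toy retrieve/generate stand-ins, not the city's actual system). What it illustrates: retrieval only edits the *prompt*. The generation step is still a free-form text synthesizer, and nothing in the loop forces the output to match the retrieved passages.

# Minimal, hypothetical sketch of retrieval augmented generation (RAG).
# `retrieve` and `generate` are toy stand-ins, not a real index or LLM.

def retrieve(query, documents, k=2):
    """Toy retrieval: rank documents by word overlap with the query."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def generate(prompt):
    """Stand-in for an LLM call. A real model samples plausible-sounding
    text conditioned on the prompt; nothing here (or in a real system)
    checks the output against the retrieved passages."""
    return "fluent, confident text that may or may not match the sources"

def rag_answer(query, documents):
    passages = retrieve(query, documents)
    # Retrieval only changes the prompt; generation is still free-form.
    prompt = "Context:\n" + "\n".join(passages) + "\nQuestion: " + query
    return generate(prompt)

docs = ["Landlords must accept housing vouchers.",
        "Businesses must post license information."]
print(rag_answer("Can landlords refuse vouchers?", docs))

The links into the NYC pages come from the retrieval step; the answer text comes from the generation step, and nothing ties the two together.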
Professor, Linguistics, University of Washington
Faculty Director, Professional MS Program in Computational Linguistics (CLMS)
If we don't know each other, I probably won't reply to your DM. For more, see my contacting me page: http://faculty.washington.edu/ebender/contact/