Sometimes I flag them as spam, other times as fake accounts, but given that the platforms themselves encourage people to use chatbots to spam all of us, they don't offer an option for reporting this.
I keep seeing synthetic text in my comments, for example, which social media platforms actively shove down people's throats. We used to call these things "coordinated inauthentic behavior" and take them down, but now you have the platforms themselves encouraging everyone to take part in such activity. There was one account I saw that was commenting on posts every minute, in different languages.
There are so many fake quotes attributed to me each week that I don't know what to do about them. Someone just posted a Medium article with another fake quote from me saying, "Generative AI is not just about creating something new, but about capturing what was once impossible to express."
If you know anything about my work, you know that I would never say something like this.
As @emilymbender said, what so-called "Generative AI" has done is basically the equivalent of an oil spill in our information ecosystem.
Great overview and advice in this article by Dia Kayyali for @techpolicypress
Don’t let it become normal
Petra Molnar pointed out in her comments, “Technology is ultimately about power – and reinscribing the power differentials which are inherent in our world generally and in the immigration system specifically.”
I haven't looked through this guide by TechTonic Justice yet, but it looks interesting. Via @alex
"Most of the time, government officials, landlords, employers, educators, and others who use AI to make decisions don’t announce it. This guide is meant to help you figure out if AI is being used and what you can do about it."
I heard about some pronouncements on "AI" "eradicating all disease" within a decade from one of the high priests of AGI on 60 Minutes, the show which regularly puts out press releases masquerading as journalism (at least as it relates to tech).
And below is another pronouncement from another high priest of AGI. In any other world, people like this would be laughed off. But not in the twilight zone we're living in where they get even more resources and air time.
I will never forget the genocidal mania among the Ethiopian intelligentsia. It was a popular genocide, with your “progressive” diaspora falling over themselves to deny and justify it. The least we can do is raise awareness about what has happened and what is about to happen.
We collaborated with content moderators like Fasica to understand the horrific working conditions of moderators.
Imagine your family going through one of the worst sieges in recorded history, with nothing going in or out for 2 years and the longest continuous internet shutdown ever recorded, while you moderate genocidal content targeting your people. One of the moderators saw his cousin being murdered, in a video that was routed to him. He wasn’t even allowed to take a short mental health break.
But contrary to his claims, and Yann LeCun’s lie that “95% of hate speech on facebook is removed by ‘AI’,” all social media is essentially 4chan in these languages and contexts.
We have now spent 3 years documenting these failures. We had no language technology to do this, not even the equivalent of a spell checker. So we partnered with Lesan, founded by Asmelash Teka, to create those tools.
I told people at Google about this, and asked them to build language technology and, at the very least, have basic moderation in so-called “under-resourced languages” like Tigrinya. Although we were at Google, any concerns sent to YouTube went into a void, never even to be acknowledged. The same VP who sent an “anonymous” letter to HR telling me to retract my paper, setting the stage for my firing, nevertheless commented that plenty of work was being done in that arena.
🗣️🗣️🗣️ Announcing new work from DAIR which is very close to my heart, 3 years in the making.
When #TigrayGenocide, the deadliest genocide of the 21st century thus far, started in November 2020, it was 1 month before I got fired from Google. Unlike Tigrayans whose sisters were being raped & parents murdered, I didn’t know exactly what was happening on the ground & who to believe. But I saw the genocidal speech targeting Tigrayans on social media, particularly from Eritreans, in Tigrinya. 🧵
In this op-ed for Scientific American, Asmelash Teka & I discuss one of the many reasons the idea of replacing US federal workers with so-called generative AI systems should terrify us. We dive into the well-defined task of automatic speech recognition (ASR), and describe why OpenAI’s Whisper, which has been integrated into ChatGPT, makes stuff up, or “hallucinates” as it’s called in the industry (bad nomenclature).
OpenAI’s (and Muskrat’s and others’) quest to build one model for everything has resulted in less reliable systems than the ones we had, even in the well-defined task of ASR. Historically, “hallucinations” weren’t problems in ASR systems! So now imagine what will happen if DODGE replaces federal workers with these tools to perform the tasks that expert federal workers perform. (A quick way to see this failure mode for yourself is sketched after the quote below.)
“There is no ‘one weird trick’ that removes experts & creates miracle machines that can do everything that humans can do, but better.”
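A minimal sketch of how anyone can poke at this failure mode themselves, using the open-source openai-whisper package. This is my own toy harness, not code from the op-ed; the model size and the near-silent input are assumptions made for illustration. A faithful ASR system given no speech should return no text.

```python
# Toy check (not from the op-ed): feed near-silent audio to the open-source
# Whisper model and see whether it emits text anyway. Assumes the
# `openai-whisper` and `numpy` packages are installed.
import numpy as np
import whisper

model = whisper.load_model("tiny")  # smallest checkpoint, for a quick test

# 10 seconds of very quiet noise at Whisper's expected 16 kHz sample rate.
# There is no speech here, so a faithful transcription is the empty string.
audio = (np.random.randn(16_000 * 10) * 1e-4).astype(np.float32)

result = model.transcribe(audio, fp16=False)
print(repr(result["text"]))  # any non-empty output was made up by the model
```

If that print shows any text at all, the model fabricated it, which is exactly the kind of unreliability we describe in the piece.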
But I want to take an aside on the extent to which people uncritically use the term "foundation models" and discuss the "reasoning" of these models, when it is very likely that the models literally memorized all these benchmarks. It truly is like the story of the emperor's new clothes. Everyone seems to be in on it and you're the crazy one going: but HE HAS NO CLOTHES. 🧵
There is no difference between the likes of Stanford and any of these companies; they're one and the same. So schools like Stanford make money from the hype and will perpetuate the hype.
The McKinseys and other huge consulting orgs are raking in bank on the hype, akin to all the people who made money during the gold rush: everyone except the ones actually looking for gold.
All the things people call "laws" aren't laws and were never "laws":
-"Scaling laws"? Some people looked at some plots and came up with that.
-"Reasoning"? Let's set aside how they don't even have a definition for this. But literally change some minor thing on the benchmarks, like a number, and you see how these models completely fail: https://arxiv.org/pdf/2410.05229 (a toy version of this perturbation check is sketched below)
-"Understanding"? Just watch this debate to see the rigor with which Emily discusses the topic vs. those who make these wild claims: https://lnkd.in/e6bgM-43
If you come up with a new benchmark, they'll just guzzle it as part of the training data and then claim to do "reasoning" on it.
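Here is that toy perturbation check, a minimal sketch of the idea in the paper linked above. It is my own illustration, not the paper's code; the question template and the query_model call are hypothetical stand-ins.

```python
# Re-template the same word problem with fresh numbers, so a memorized
# benchmark answer can't help. A model that actually "reasons" should get
# every variant right; the paper linked above reports drops instead.
import random

TEMPLATE = ("Sara has {a} apples. She buys {b} bags with {c} apples each. "
            "How many apples does Sara have now?")

def make_variant(seed: int):
    rng = random.Random(seed)
    a, b, c = rng.randint(2, 20), rng.randint(2, 9), rng.randint(2, 12)
    question = TEMPLATE.format(a=a, b=b, c=c)
    answer = a + b * c  # the ground truth is trivially computable
    return question, answer

for seed in range(3):
    question, answer = make_variant(seed)
    print(question, "->", answer)
    # reply = query_model(question)  # hypothetical call to the model under test
    # Compare reply to `answer` across many seeds; accuracy that holds on the
    # original benchmark but drops on these variants points to memorization.
```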
It is so mind-boggling to me that people have to even spend time debunking these claims. Such a waste of resources that could go toward actual science and engineering work.
The only people doing real "responsible AI" work are the ones doing this debunking.
"On Monday, Microsoft reportedly terminated the roles of two software engineers, Ibtihal Aboussad and Vaniya Agrawal, who protested the company’s reported dealings with the Israeli military during Microsoft’s Copilot and 50th anniversary event last week."
Fired from Google for raising issues of discrimination in the workplace and writing about the dangers of large language models: https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/
Founded The Distributed AI Research Institute (https://www.dair-institute.org/) to work on community-rooted AI research.
Author of forthcoming book: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria).