"For all its promised benefits, the artificial-intelligence boom is likely to prove costly to public health and even lead to hundreds of deaths a year in the U.S. alone, and those ill effects are likely to disproportionately hit poorer communities, according to a study by California researchers..."
Don't forget to subscribe to our newsletter for monthly updates. Scroll to the bottom of https://www.dair-institute.org to sign up.
Congratulations to @timnitGebru on being named the 2025 Miles Conrad Awardee!
"The award recognizes her critical work on the dangers of biases in AI as the information community grapples with ethics and rapidly advancing technologies."
Dr. Gebru will deliver the Miles Conrad Lecture at #NISOPlus25.
@milamiceli This is the last event in our Data Workers' Inquiry series.
➡ Catch the recordings of all prior events at data-workers.org/events.
➡ Find the rich repository of works, ranging from in-depth reports to animated shorts and podcasts, at data-workers.org.
Did you use this repository in your class, investigative reporting, or other work? If so, we would love to hear about it.
🗣️ 🗣️ 🗣️ This Monday, we will be joined by our friends at Arte es Ética (Oscar Araya), Guerrilla Media Collective (Alex Minshall), and the Writers Guild of America (John López), along with Rafael Grohmann of the University of Toronto, in a conversation moderated by our very own @milamiceli.
🗯️ They show us the ways in which we are all exploited data workers, and how we can resist this exploitation.
"In the end, the real problem is arguably not that the data labelers shared the images on social media. Rather, it’s that this type of AI training set—specifically, one depicting faces—is far more common than most people understand, notes @milamiceli, a sociologist and computer scientist who has been interviewing distributed workers contracted by data annotation companies for years....The data labelers found this work “really uncomfortable,” she adds.
DAIR & our collaborators call on @facctconference@FAccT to divest from Israeli apartheid, occupation & genocide. Get our zines on the complicity of the conference's sponsors, & a #NoTechForGenocide pin if you haven't done so already.
"When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference...Mr. Altman even insinuated that the similarity was intentional, tweeting a single word 'her' - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human."
"I’ve written about relationships between tech companies and the military before, so I shouldn’t have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body."
Angie Wang's illustrated essay for the New Yorker, which asks whether her child's early attempts at speech work the same way LLMs do, was a finalist for the 2024 Pulitzer Prize. "A toddler has a life, and learns language to describe it. An L.L.M. learns language but has no life of its own to describe," she writes.
"But while authorities generally pitch facial recognition as a tool to capture terrorists or wanted murderers, the technology has also emerged as a critical instrument in a very particular context: punishing protesters." by Darren Loucaides
"Experts who reviewed the policy changes at The Intercept’s request said OpenAI appears to be silently weakening its stance against doing business with militaries. “I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” said Lucy Suchman, ...
"Microsoft increased worldwide water consumption by a whopping 34 percent — up to almost 1.7 billion gallons annually — last year, which outside researchers told the AP is most likely due to increased AI training. That's dwarfed by Google, which used 5.6 billion gallons last year, a 20 percent jump that's also likely attributable to machine learning." Via Tech Won't Save Us.
@alex and @emilymbender will be discussing large language models and AI hype on September 21 at 8am PDT/4pm UK on LinkedIn Live. More details on the event page.
"At the center of the concerns...is a fear shared by members of both the public and private sectors that Congress has become so preoccupied with the long-term, existential threats of AI that it’s skipping right over all of the immediate risks that AI ethicists and civil rights advocates have been warning about for years: Things like algorithmic profiling by law enforcement and AI-enabled bias in school admissions, healthcare, insurance, and other sensitive areas."
We are an interdisciplinary AI research institute consisting of scientists, engineers, organizers, and activists who believe that AI is not inevitable. We DAIR to imagine, build & use AI deliberately.