The fact that they call themselves "AI Safety" and call us "AI Ethics" is very interesting to me.
What makes them "safety" and what makes us "ethics"?
I have never taken an ethics course in my life. I am, however, an electrical engineer and a computer scientist. But the moment I started talking about racism, sexism, colonialism and other things that are threats to the safety of my communities, I was labeled an "ethicist." I have never applied that label to myself.
"Ethics" has a "dilemma" feel to it for me. Do you choose this or that? Well it all depends.
Safety, however, is more definitive. This thing is safe or not. And the people using frameworks directly descended from eugenics decided to call themselves "AI Safety" and us "AI Ethics," when the things I've been warning about ARE the actual safety issues, not your imaginary "superintelligent" machines.
All of a sudden I see so many more people sharing my talk on the TESCREAL bundle and the paper I wrote with @xriskology. I wonder why more people are starting to pay attention to the Silicon Valley eugenicists that many of us got attacked for calling what they are 🤔
Nothing has changed about the dangers of large language models, but that doesn't stop me from enjoying the tech bros' meltdowns over DeepSeek. Drinking their own Kool-Aid and then trying to come up with all sorts of explanations for why no, "the Chinese aren't innovating."
I'm telling you someone literally DM'd me to inform me about my grammar on my post talking about how the mansplaining on this platform was "off the roof."
The problem is that I should have said "off the charts" or "through the roof."
Someone saw my post about mansplaining and then decided that the best use of their time was to construct a direct message kindly informing me about the grammatical issues with that post.
Friends, the mansplaining on this platform is truly off the roof. Like someone just referenced model cards, a framework that I coauthored, to clarify information on a post I wrote about what makes ML models open source.
Maybe assume that I know this info if you’re referencing our framework?
Thank you, Khari Johnson, for writing about this. I'm glad they're finally being investigated. We little people are expected to abide by so many of these rules under so much scrutiny, but this behemoth created its assets as a nonprofit, and the only thing that happens is that it voluntarily becomes a for-profit. No consequences.
"We know PRC based companies...distill the models of leading US AI companies...As the leading builder of AI, we engage in countermeasures to protect our IP,...including a careful process for which frontier capabilities to include in released models...it is...important that we are working closely with the US government to best protect the most capable models from efforts by adversaries & competitors to take US technology.” https://www.404media.co/openai-furious-deepseek-might-have-stolen-all-the-data-openai-stole-from-us/
We've discussed how so-called "AI Safety" is a smoke screen for centralizing power & guzzling resources without accountability. See our paper, where we also discuss how they use the whole "the US has to protect itself from China" line. Predictable, right on cue.
"But DeepSeek & Meta’s recent research suggests that more AI capabilities (& efficiency savings) could be gained by going down a more dangerous path — where AIs develop their own alien language."
The journalists amplifying this garbage will not be held accountable when the hype dies down, because the next crop of journalists will do the same thing during the next hype cycle.
I don't want to amplify the article, so I'm not posting it.
"By connecting the dots between financial actors, privacy-violating technologies, and human rights impacts, we’re making the VC investments behind this abuse traceable and public, helping to drive a new standard of privacy-conscious due diligence in tech investments. As privacy violations in the Global Majority continue to impact human rights, legal liability will become a real concern."
OpenAI has to be the most insufferable company in the world. They can steal from the whole world and guzzle all possible resources. But no one can give them a taste of their own medicine even a little bit.
How long till they use this to say "give us even more resources" and "this is why we can't release anything"?
Friends, now might be a good time to revisit our Stochastic Parrots Day held in March 2023. You can find the recording of the session and additional resources on our website: https://www.dair-institute.org/stochastic-parrots-day/
@Teratogenese@mempko Honestly, I can't be interested in the stuff they're doing, even w.r.t. it being fun and interesting. I was interested in designing circuits because it was like a puzzle. Algorithms were the same. Whatever this is that they're doing is like "what happens if we throw unlimited data and compute at it?"
Fired from Google for raising issues of discrimination in the workplace and writing about the dangers of large language models: https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/. Founded The Distributed AI Research Institute (https://www.dair-institute.org/) to work on community-rooted AI research.