@HistoPol I don't see a discrepancy and haven't said that, only that I can't stand the men who do this. I've written and said elsewhere about how these claims are two sides of the same coin though.
Y'all see why I can't stand these horrific men going off about the so-called "existential risks of AI" while simultaneously telling us everyone should be replaced by "AI"? One of the best grifts of the 21st century.
The Libyan Ministry of Internal Affairs has said OUT LOUD that it's planning a campaign to end "anarchy." The campaign is set to last for 90 days and will take place in all directions: the capital, east, west, south. They say that they intend to end "anarchy" by force—by bullets, by death.
"Anarchy" is code for Black people. We are watching open air slave auctions and hunting of Black people financed by the European Union who pays these murders to deter migration at any cost.
"As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access."
"If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous tthan one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%."
What are your favorite ways to have compute clusters that aren't Amazon/Google/Microsoft etc, whether it is building your own or other organizations you've used?
Mahmoud Khalil is a hero. Excerpts from his letter.
"I have always believed that my duty is not only to liberate myself from the oppressor, but also to liberate my oppressors from their hatred and fear. My unjust detention is indicative of the anti-Palestinian racism that both the Biden & Trump administrations have demonstrated over the past 16 months as the U.S. has continued to supply Israel with weapons to kill Palestinians & prevented international intervention."
"For decades, anti-Palestinian racism has driven efforts to expand U.S. laws and practices that are used to violently repress Palestinians, Arab Americans, and other communities. That is precisely why I am being targeted."
"Columbia targeted me for my activism, creating a new authoritarian disciplinary office to bypass due process & silence students criticizing Israel. Columbia surrendered to federal pressure by disclosing student records to Congress & yielding to the Trump administration's latest threats."
""If Chinese models become more advanced & more widely used by Americans, China could manipulate the models or ignore harms to American users from "illicit & harmful activities such as identity fraud & intellectual property theft," OpenAI alleged."
Lol but OpenAI on the other hand is shielding us from such activities. They're not arguing that THEY should be the one to commit those. Right?
The idea that you shouldn't be expected to know what data you're using to train your systems, and that doing so is "an impossible task," is so normalized that it's hard to remember this was not always the case, even in the field of AI.
Data theft and scraping became completely normalized, along with the exploitation of crowdworkers, with the advent of photo-sharing platforms and services like Amazon Mechanical Turk.
So now NOT exploiting people and stealing data is the anomaly.
"California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power."
Whether you're a small restaurant or not you have to ensure that you're not stealing your ingredients. So why is this any different?
What about the 1-2 person creative startups? Who is protecting their works in a society that devalues artists so much that "starving artist" is an expectation?
""In their policy recommendations, OpenAI made it clear that it thinks funneling as much data as possible to AI companies—regardless of rights holders' concerns—is the only path to global AI leadership."
I've always wondered how these cars ever passed any type of safety test. RIP to the students.
"The Highway Patrol’s investigation into a November Cybertruck crash in Piedmont where three college kids died is finding two very Tesla problems: the vehicle immediately caught fire, and its doors would not open."
On information retrieval and what we need for healthy information access systems, please read Chirag Shah & @emilymbender's paper: "Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web?" https://dl.acm.org/doi/full/10.1145/3649468
Fired from Google for raising issues of discrimination in the workplace and writing about the dangers of large language models: https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/. Founded The Distributed AI Research Institute (https://www.dair-institute.org/) to work on community-rooted AI research. Author of forthcoming book: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria).