Our latest analysis, US Open-Source #AI Governance: Balancing Ideological and Geopolitical Considerations with China Competition, explores the tensions shaping open-source AI policy. A thread 🧵⬇️
1/ #OpenSource AI is at the center of a heated policy debate. Should it be regulated like closed models? Policymakers face two competing views: ⚖️ Geopolitical risks (esp. w/r/t China competition) 💡 Ideological values (transparency, innovation, democracy)
2/ The US has long led in open-source AI, but China is catching up—fast. The rise of DeepSeek, a Chinese startup releasing frontier open models, has policymakers questioning how to maintain US technological advantage while keeping AI open.
3/ We introduce a rubric for assessing open-source AI policies, balancing: ✅ Technological progress ✅ Transparency ✅ Decentralization of power ⚠️ Misuse risks (e.g., China’s military, cyber threats) ⚠️ “Backdoor” risks in AI supply chains ⚠️ Global power dynamics
4/ Using this framework, we analyze four policy proposals—from export controls to independent risk assessments. Our key finding? Blanket restrictions on open models may be counterproductive and could actually weaken US leadership in AI.
5/ Instead, we propose a targeted risk-assessment approach that balances security concerns with the benefits of open innovation. We also highlight the growing need for public AI model audits to enhance trust & safety.
6/ The stakes are high: a full-scale AI arms race could undermine safety and global stability. Thoughtful governance—grounded in nuanced policy, not knee-jerk restrictions—is critical.
🚨 New Ethical Reckoner alert! 🚨 This week, you get two editions—today’s Reckonnaisance and Tuesday’s Extra Reckoning (feat. a super-secret report 👀). For now, here’s what’s in today’s edition: encryption threats, crypto sports gambling, #AI buzzwords, and AI safety moves. 🧵👇
1️⃣ UK’s secret demand for an #iCloud backdoor - A government order under the “Snoopers’ Charter” demands Apple weaken iCloud encryption. - If implemented, users won’t even know the backdoor exists. - A backdoor for one == a backdoor for all.
2️⃣ #Crypto platforms sneak into sports betting - Crypto.com & Kalshi offer “swaps” on the Super Bowl, dodging gambling regs. - “We’re not sportsbooks, we swear”—but regulators aren’t buying it. - Could create an unregulated, highly addictive gambling loophole.
3️⃣ The latest AI buzzword? #Distillation - #OpenSource AI models are rapidly advancing—by copying closed models. - Companies like OpenAI & Google hate it, but they can’t stop it. - The big question: is open-source really catching up to proprietary AI?
4️⃣ AI safety efforts ramp up ahead of the #Paris summit - Meta joins the “we have a framework” club (good!). - China launches an AI Safety Institute (but does it have power?). - But are we moving toward AI regulation by agency action rather than by law?
Huge changes afoot at #Meta. "The company’s most prominent Republican," Joel Kaplan, is replacing Nick Clegg as head of global policy. But this isn’t just a shake-up to get in with the new US administration: Kaplan has influenced Meta policy for years, usually in conservative directions.
Researching the ethics & governance of emerging tech, especially AI and XR. PhD student in Law, Science, and Technology @Unibo & Master of the (social science of the) Internet @Oxford. Ex-SWE.