I've seen a few conversations where someone says something like this:
I've been using an open-source LLM lately -- I'm a huge fan of not depending on OpenAI, Anthropic, or Google. But I'm really sad that the AI safety groups are trying to ban the kind of open-source LLM that I'm using.
Someone then responds:
What! Almost no one actually wants to ban open source AI of the kind that you're using! That's just a recklessly-spread myth! AI Safety orgs just want to ban a tiny handful of future models -- no one has tried to pass laws that would have banned current open-sourced models!
This second claim is false.
Many AI "safety" organizations and people have in the past advocated bans that would have criminalized the open-sourcing of models extant as of this writing, January 2024. Even more organizations have pushed for bans that would cap open source AI capabilities at more or less exactly their current limits.
(I use open-sourcing broadly to refer to making weights generally available, not necessarily to releases under a strictly open-source-compliant license.)
At least a handful of the organizations that have pushed for such bans are well-funded and becoming increasingly well-connected to policy makers.
Note that I think it's entirely understandable that someone would not realize such bans have been the goal of some AI safety orgs!
For understandable reasons -- namely, that many people, including many interested in AI safety, judge such policies to be a horrible idea -- these organizations have often directed the documents explaining their proposed policies at bureaucrats, legislative staffers, and so on, and have not been proactive in communicating their goals to the public.
Note also that not all AI safety organizations or AI-safety concerned people are trying to do this -- although, to be honest, a disturbing number are.
At least a handful of people in some organizations believe -- as do I -- that open source has been increasingly vital for AI safety work. Because past ban proposals would have been harmful, I think many future proposals of the same kind are likely to be harmful as well, especially since the arguments for them look pretty much identical.
Anyhow, a partial list:
1: Center for AI Safety
The Center for AI Safety is a well-funded (i.e., > 9 million USD in funding) 501(c)(3) that focuses mostly on AI safety research and outreach. You've probably heard of them because they gathered signatures for their one-sentence statement on AI risk.
Nevertheless, they are also involved in policy. In response to the National Telecommunications and Information Administration's (NTIA) request for comment, they submitted a set of proposed regulatory rules.
These rules propose defining "powerful AI systems" as any system that meets or exceeds a threshold on any of the following:
Computational resources used to train the system (e.g., 10^23 floating-point operations or "training FLOP"; this is approximately the amount of FLOP required to train GPT-3. Note that this threshold would be updated over time in order to account for algorithmic improvements.) [Note from 1a3orn: this means updated downwards.]
Large parameter count (e.g., 80B parameters)
Benchmark performance (e.g., > 70% on the Massive Multitask Language Understanding benchmark (MMLU))
Systems meeting any of these criteria, according to the proposal, would be subject to a number of requirements that would effectively ban open-sourcing them.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned under this rule. Fine-tunes of Llama 2 also score greater than 70% on the MMLU, and so would have been banned on that count as well.
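As a rough sanity check on the FLOP claim, here is a minimal back-of-envelope sketch (mine, not from the CAIS document) using the commonly cited ~6 x parameters x training-tokens approximation for training compute, with Meta's reported figures for the largest Llama 2 model; the approximation is only order-of-magnitude accurate:

```python
# Back-of-envelope estimate: training FLOPs ~= 6 * parameters * training tokens.
# Figures are Meta's reported numbers for Llama 2 70B; the 6*N*D rule is only
# an order-of-magnitude approximation.

params = 70e9        # Llama 2 70B parameter count
tokens = 2.0e12      # ~2 trillion training tokens
threshold = 1e23     # proposed "powerful AI system" FLOP threshold

flops = 6 * params * tokens
print(f"Estimated training FLOPs: {flops:.1e}")         # ~8.4e+23
print(f"Over the 1e23 threshold? {flops > threshold}")  # True
```

Even with generous error bars, the estimate sits nearly an order of magnitude above the proposed 10^23 line.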
Note that -- even though this would have prevented the release of Llama 2, and thus thousands of fine-tunes and an enormous quantity of safety research -- the document boasts that its proposals "only regulate a small fraction of the overall AI development ecosystem."
2: Center for AI Policy
The Center for AI Policy -- not to be confused with the Center for AI Safety! -- is a DC-based lobbying organization. Their launch announcement made some waves, because the rules they initially proposed would have required the already-released Llama 2 to be regulated by a new agency.
However, in a recent interview they say that they're "trying to use the lightest touch we can -- we're trying to use a scalpel." Does this mean that they have changed their views?
Well, they haven't yet made public any legislation they're proposing. But in the same interview they say that models trained with more than 3x10^24 FLOPs, or scoring > 85% on the MMLU, would fall into their "high risk" category, which according to the interview explicitly means they would be banned from being open-sourced.
This would have outlawed Falcon 180B by the FLOP measure, although -- to be fair -- Falcon 180B was open-sourced by an organization in the United Arab Emirates, so it's not clear the rule would have mattered in practice.
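For what it's worth, the same rough ~6 x parameters x tokens approximation, applied to TII's reported figures for Falcon 180B, also lands above that line:

```python
# Same 6 * N * D back-of-envelope, applied to Falcon 180B.
# Figures are TII's reported numbers: 180B parameters, ~3.5 trillion training tokens.

params = 180e9
tokens = 3.5e12
threshold = 3e24     # the proposed "high risk" FLOP line

flops = 6 * params * tokens
print(f"Estimated training FLOPs: {flops:.1e}")         # ~3.8e+24
print(f"Over the 3e24 threshold? {flops > threshold}")  # True
```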
As for the MMLU measure, no open-source model at this level has yet been released, but GPT-4 scores ~90% on the MMLU. So this amounts to an attempt to permanently cap open-source models below GPT-4's level -- a level I otherwise think open source is reasonably likely to reach in 2024.
(I do not understand why AI safety orgs think that MMLU scores are a good way to measure danger.)
3: Palisade Research
This non-profit, headed by Jeffrey Ladish, has as its stated goal to "create concrete demonstrations of dangerous capabilities to advise policy makers and the public on AI risks." That is, they try to make LLMs do dangerous or scary things so politicians will do particular things for them.
Unsurprisingly, Ladish himself literally called for the government to stop the release of Llama 2, saying "we can prevent the release of a LLaMA 2! We need government action on this asap."
(He also said he thought its release would potentially cause millions of dollars of damage, and that it was more likely to cause more than a billion dollars of damage than less than a million.)
4: The Future Society
The Future Society is a think tank whose goal is to "align artificial intelligence through better governance." They boast 60 partners, such as UNESCO and the Future of Life Institute, and claim to have spoken to over 8,000 "senior decision makers" and taught 4,000 students. They aim to provide guidance to both the EU and the US.
In one of their premier policy documents, "Heavy is the head that wears the crown," they define "Type 2" General Purpose AI (GPAI) as AI trained with > 10^23 FLOPs (but less than 10^26) or scoring > 68% (but less than 88%) on the MMLU. Llama 2, again, falls into this category on both counts.
The document mandates that anyone creating a Type 2 GPAI must -- well, must do many things -- but among them must provide for "Absolute Trustworthiness," which seems to mean that the model must be incapable of doing anything bad whatsoever, and more to the point means that the provider of the model must be able to "retract already deployed models (roll-back & shutdowns)." Open-source models obviously could not meet this requirement.
Similarly, they say that providers would be "required to continuously monitor the model’s capabilities and behaviour, detecting any anomalies and escalating cases of concern to relevant decision makers," which is again impossible to do with an open source model.
Note that, in accordance with their policy recommendations, this group specifically calls out Meta's actions, dubbing the open-sourcing of Llama a "particularly egregious case of misuse." They also seem to believe that Apache licensing is unacceptable, explicitly calling the "no guarantee of fitness of purpose" clause in such a license "abusive."
Don't worry, though! The Future Society says that they believe that "legitimate and sustainable governance requires bringing to the table many different perspectives."
(My guess is that this is one of the groups chiefly responsible for trying to get the EU's rules to ban open-source AI, but the institutional process by which the EU works is completely opaque to me, so I am only left guessing.)
Note that the above is just a partial list of organizations or people who have made their policies or goals extremely explicit.
There are other organizations or people out there whose policies are less legible, but who are ultimately just as opposed to open-sourcing. Consider, for instance, SaferAI, whose CEO says he's fine "with developing and deploying open source up to somewhere around Llama-1"; or the PauseAI people, who think we should require approvals for training runs "above a certain size (e.g. 1 billion parameters)" and who accused Meta of reckless irresponsibility for releasing Llama 2.
Or there is the extremely questionable StopAI group advised by Conjecture, which wishes to eliminate not merely all open source but all AI trained with > 10^23 FLOPs.
Or there are surprisingly numerous people who want to completely change liability law, so that you cannot open-source a model without becoming liable for damage that it causes.
These and similar statements either outright imply, or would be hard to separate from, policies that would have effectively banned currently-extant open-source models.
So, again -- it's just false to say that AI safety groups haven't tried to ban models that already exist. Had they had their way in the past, models that are actively being used today would already be banned, and these groups would have substantially contributed to a corporate monopoly on LLMs.
If you are like me and think the policies proposed above are pretty bad, remember: the stupidity of a law in no way prevents it from being passed! The above groups have not dissolved in the last six months. They still hope to pass something like these measures. They are still operating on the same questionable epistemology.
The open-source AI movement is in general far behind these groups and needs to get its legislative act together if the better-organized "anti-open source" movement is not to obliterate it.
And I think it is better to call it the "anti-open source" movement than the AI safety movement.
The "environmentalist" movement helped get nuclear power plants effectively banned, thereby crippling a safe and low-carbon source of energy, causing immense harm to the environment and to humanity by doing so. They thought they were helping the environment. They were not.
I think that some sectors of the "AI safety" movement are likely on their way to doing a similar thing, by preventing human use of, and research into, an easily-steerable and deeply non-rebellious form of intelligence.