The nonpartisan think tank Brookings this week published a piece decrying the bloc’s regulation of open source AI, arguing that it would create legal liability for general-purpose AI systems while undermining their development. Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company deployed an open source AI system that led to some disastrous outcome, the author argues, it’s not inconceivable that the company could attempt to deflect responsibility by suing the open source developers on whose work it built its product.
“This could further concentrate power over the future of AI in large tech companies and prevent research that is critical to the public’s understanding of AI,” wrote Alex Engler, the Brookings analyst who published the piece. “In the end, [the E.U.’s] attempt to regulate open source could create a convoluted set of requirements that threatens open source AI contributors, likely without improving the use of general-purpose AI.”
In 2021, the European Commission — the EU’s politically independent executive body — published the text of the AI Act, which aims to promote “trustworthy AI” deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to amend the regulations in an attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.
The legislation carves out exceptions for some categories of open source AI, such as those used exclusively for research and with controls in place to prevent misuse. But as Engler notes, it would be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.
In one recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
Oren Etzioni, the CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said the burdens introduced by the rules could have a chilling effect on areas such as the development of open text generation systems, which he says allow developers to “catch up” with big tech companies like Google and Meta.
“The road to regulatory hell is paved with the EU’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on the reproducibility of scientific results.”
Instead of seeking to broadly regulate AI technologies, EU regulators should focus on specific AI applications, Etzioni argued. “There is too much uncertainty and rapid change in AI for a slow regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots or toys should be subject to regulation.”
Not every practitioner believes the AI Act needs further amending, though. Mike Cook, an AI researcher who is part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little tighter” than necessary. Setting any standard can be a way to show global leadership, he posits, hopefully encouraging others to follow suit.
“The fear-mongering about ‘stifling innovation’ comes mostly from people who want to remove all regulation and have free rein, and that’s not usually a view I put much stock in,” Cook said. “I think it’s good to make laws for the sake of a better world instead of worrying about whether your neighbor is going to regulate less than you and somehow profit from it.”
To be fair, as my colleague Natasha Lomas has previously noted, the EU’s risk-based approach lists several prohibited uses of AI (e.g., China-style government social credit scoring) while imposing restrictions on AI systems deemed “high-risk,” such as those related to law enforcement. If regulations were to target product types as opposed to product categories (as Etzioni argues they should), it could require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.
An analysis written by Lilian Edwards, a law professor at Newcastle University and part-time legal adviser at the Ada Lovelace Institute, questions whether the providers of systems such as open source large language models (e.g., GPT-3) might ultimately be liable under the AI Act. The language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily the original developer.
“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act pays some attention to this, but not enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”
At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say they welcome regulations to protect consumer safeguards, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.
“This lack of clarity, coupled with non-alignment with ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Muñoz Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream, you risk hindering incremental innovation, product differentiation and dynamic competition, the latter of which is at the core of emergent technology markets such as AI-related ones … Regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect the core sources of innovation in these markets.”
As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, such as “responsible” AI licenses and model cards that include information like an AI system’s intended use and how it works. Delangue, Muñoz Ferrandis and Solaiman point out that responsible licensing is starting to become common practice for major AI releases, such as Meta’s OPT-175B language model.
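Model cards of this kind are typically plain README files that open with a block of machine-readable metadata, followed by prose describing the model. As a rough, hypothetical sketch in the Hugging Face Hub’s YAML-front-matter style — the field values below are illustrative assumptions, not drawn from any real release — such a card might begin:

```yaml
# README.md front matter for a hypothetical model on the Hugging Face Hub
license: openrail            # an open "responsible AI" license with use-based restrictions
language: en
pipeline_tag: text-generation
tags:
  - text-generation
  - example-sketch           # hypothetical tag, for illustration only
```

The prose body of the card would then document intended uses, out-of-scope or prohibited uses, known limitations and training data — the kind of disclosure the AI Act’s transparency requirements gesture toward.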
“Open innovation and responsible innovation in AI are not mutually exclusive goals, but rather complementary ones,” said Delangue, Muñoz Ferrandis and Solaiman. “The intersection between the two should be a major focus for ongoing regulatory efforts, as it is right now for the AI community.”
Whether that proves achievable remains to be seen. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it will likely be years before AI regulation in the bloc begins to take shape.