The UN proposals reflect strong interest among politicians around the world in regulating AI to mitigate these risks. But they also come as major powers, especially the United States and China, scramble to lead in a technology that promises huge economic, scientific and military benefits, and as those nations lay out their own visions of how it should be used and controlled.
In March, the United States introduced a UN resolution calling on member states to embrace the development of “safe, secure and reliable AI.” In July, China introduced its own resolution, emphasizing cooperation in AI development and making the technology widely available. All UN member states signed on to both resolutions.
“AI is part of the competition between the US and China, so there’s only so much they’re going to agree on,” said Joshua Meltzer, an expert at the Brookings Institution, a think tank in Washington, DC. Key differences, he says, include what norms and values should be embodied by AI and the protections around privacy and personal data.
Differences between rich nations’ views on AI are already causing cracks in the market. The EU has introduced broad AI regulations with controls on data usage, prompting some US companies to limit the availability of their products there.
The hands-off approach taken by the US government has led California to propose its own AI rules. Earlier versions of those regulations were criticized by AI companies based there as too heavy-handed, for example in how they would require firms to report their activities to the government, leading to a loosening of the rules.
Meltzer adds that AI is developing at such a rapid pace that the UN will not be able to manage global cooperation alone. “Obviously there’s an important role for the UN when it comes to managing AI, but it has to be part of a distributed kind of architecture,” with individual nations also working directly on it, he says. “You have rapidly developing technology and the UN is clearly not set up to deal with that.”
The UN report sought to establish common ground among member states by emphasizing the importance of human rights. “Anchoring the analysis in terms of human rights is very fascinating,” says Chris Russell, a professor at the University of Oxford in the United Kingdom who studies international AI governance. “This gives the work a solid foundation in international law, a very broad scope and a focus on specific harms that happen to people.”
Russell adds that there is considerable duplication in the work governments are doing to evaluate AI with a view to regulation. The US and UK governments, for example, have separate bodies researching AI models for bad behaviour. UN-led efforts could help to avoid further duplication. “Working internationally and pooling our efforts makes a lot of sense,” he says.
While governments may see AI as a way to gain a strategic advantage, many academics are keen to cooperate across borders on its risks. Earlier this week, a group of prominent scientists from the West and China issued a joint call for more cooperation on AI safety, after a conference on the topic held in Vienna, Austria.
Nelson, the advisory-body member, says she believes government leaders can work together on important issues as well. But she says much will depend on how the UN and its member states choose to pursue the cooperation plan. “The devil will be in the implementation details,” she says.