Microsoft enters debate over AI regulation, calls for new US agency and executive order | CNN Business
CNN
—
Microsoft on Thursday joined a wide-ranging global debate over the regulation of artificial intelligence, echoing calls for a new federal agency to oversee the technology’s development and urging the Biden administration to impose new rules on how the US government uses AI tools.
In a speech in Washington attended by several members of Congress and civil society groups, Microsoft vice chair and president Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision of the technology that could rival the competing efforts of countries like China.
The comments highlight how one of the biggest companies in the AI industry is hoping to influence the rapid push by governments, particularly in Europe and the United States, to curb AI before it causes major disruptions to society and the economy.
In a roughly hour-long appearance that was equal parts product pitch and policy pitch, Smith compared AI to the printing press and described how it could streamline policymaking and outreach to lawmakers’ constituents, before calling for “the rule of law” to govern AI in every part of its lifecycle and supply chain.
The regulations should apply to everything from data centers that train large language models to end users such as banks, hospitals and others who can apply the technology to make life-altering decisions, Smith said.
For decades, “the rule of law and a commitment to democracy have kept technology in its place,” Smith said. “We’ve done it before; we can do it again.”
In his remarks, Smith joined calls made last week by OpenAI, the company behind ChatGPT and in which Microsoft has invested billions, for the creation of a new government regulator that could oversee a system of licenses for cutting-edge AI development, combined with testing and security rules, as well as government-mandated disclosure requirements.
Whether a new federal regulator is needed to oversee AI is quickly emerging as a central point of debate in Washington; opponents such as IBM have argued, including in an op-ed on Thursday, that AI regulation should instead be handled by existing federal agencies, which already understand the industries they oversee and how each is likely to be transformed by AI.
Smith also called on President Joe Biden to develop and sign an executive order requiring federal agencies that acquire AI tools to implement a risk management framework developed and published this year by the National Institute of Standards and Technology. This framework, which Congress first mandated with legislation in 2020, covers ways that companies can use AI responsibly and ethically.
Such an order would leverage the US government’s immense buying power to shape the AI industry and encourage voluntary adoption of best practices, Smith said.
Microsoft itself plans to implement the NIST framework “across all of our services,” Smith added, a commitment he described as a direct result of a recent White House meeting with AI CEOs in Washington. Smith also pledged to publish an annual AI transparency report.
As part of Microsoft’s proposal, Smith said any new rules for AI should include revamped export controls tailored for the AI age to prevent sanctioned entities from abusing the technology.
And, he said, the government should require redundant AI “circuit breakers” so that AI systems controlling critical infrastructure could be shut down, whether by the infrastructure providers themselves or at the data centers on which the algorithms depend.
Smith’s remarks, and a related policy paper, come a week after Google released its own proposals calling for global cooperation and common standards for artificial intelligence.
“AI is too important not to regulate and too important not to regulate well,” Kent Walker, Google’s president of global affairs, said in a blog post outlining the company’s plan.