White House says AI regulation is 'moral duty,' but power to act is limited without Congress
Washington is eager to make sure that Silicon Valley does not get a free pass on potent new technology
In an attempt to regulate artificial intelligence before the technology evolves beyond regulators’ ability to set meaningful limits, President Biden on Monday signed an executive order intended to head off potential harms such as discrimination, misinformation and the displacement of workers.
“I believe we have a moral, ethical and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Vice President Kamala Harris said at the signing.
The raft of new regulations and guidelines comes as Harris travels to London for a global artificial intelligence summit hosted by British Prime Minister Rishi Sunak.
Since the advent of ChatGPT nearly a year ago, millions of people have experienced the power of artificial intelligence, which can help diagnose cancer and predict wildfires.
It can also increase social isolation, spread false images and audiovisual clips known as “deepfakes,” and replace human workers, deepening inequality.

Senate Majority Leader Chuck Schumer, a New York Democrat who attended Monday’s event, said the new executive order was “outstanding.”
But he and others also acknowledged its limits.
Recommended reading
Yahoo News UK: How AI will have changed the world by 2030
Yahoo Finance: Next: The AI software revolution
What the new executive order does
The new executive order reflects an urgency that the federal government did not feel during earlier waves of new technology, when Washington effectively left Silicon Valley alone. White House chief of staff Jeff Zients told the Associated Press that the president gave him clear direction: “We have to move as fast, if not faster than the technology itself.”
The new executive order includes several key features, White House officials told Yahoo News:
In a creative application of the Defense Production Act, the new executive order will require companies working on an AI model “that poses a serious risk to national security, national economic security, or national public health” to share the results of safety tests with the federal government, in order to ensure those models are not used for malign purposes.
The Department of Commerce will develop “guidance for content authentication and watermarking to clearly label AI-generated content,” which federal agencies will be expected to use.
In an effort to stem “algorithmic discrimination,” the executive order instructs federal agencies to be on the alert for bias in models used in housing, criminal justice and other settings.
“AI’s applications are almost infinitely broad,” Arati Prabhakar, director of the White House science and technology policy office, said on Monday. Its application in fields as wide-ranging as cinema and defense is what makes the technology so difficult to regulate.
Regulators and legislators have usually played catch-up with technological progress; this time appears to be the rare instance when they are keeping pace (or at least trying to) with private industry, which is more open to regulation than it has been in previous years.
“The Biden administration should be commended for acting quickly, decisively, and within the full range of its authority to issue an order that addresses the potential harms that AI poses, especially to vulnerable and marginalized communities,” said Alondra Nelson, a senior fellow at the Center for American Progress, a liberal think tank, in a statement.
Recommended reading
Yahoo Finance: Why tech companies want the government to regulate AI
Fortune: Will U.S. states figure out how to regulate AI before the feds?
Limits to executive power
Because of the separation of powers between the presidency and Congress, an executive order is inherently limited. It can, for example, mandate compliance from federal contractors (as this one does), but not from private industry at large. An incoming president can also undo a predecessor’s executive order, whereas laws passed by Congress are much more difficult to reverse.
“This executive order represents bold action, but we still need Congress to act,” Biden said at Monday’s signing event. Speaking to reporters afterward, Schumer said much the same thing.
Congressional action will be a challenge, given deep partisan divisions in both chambers. “The Congress is deeply polarized and even dysfunctional,” Columbia law professor Anu Bradford told the MIT Technology Review, “to the extent that it is very unlikely to produce any meaningful AI legislation in the near future.”
Recommended reading
Bloomberg: Here’s how executive orders actually work (hint: slowly)
Politico: White House offers a new strategy for AI — and picks new fights
Are the risks overstated?
Despite the raft of new regulations, with more likely to come, some believe the fear of artificial intelligence has been overstated and that too many rules will stymie innovation. That, in turn, could allow China to become the world’s AI leader.
The executive order, said cryptocurrency lobbyist Jake Chervinsky in a social media post, contained “one vague sentence about how AI might be good for something someday,” followed by what he described as “a dramatic list of risks and dangers that justify regulating the whole thing to death. Why are we so afraid of technology?”
Speaking to Yahoo Finance earlier this year, New York University economist Nouriel Roubini predicted that instead of replacing workers, artificial intelligence would boost their creativity. “I’m quite optimistic. I’m a productivity optimist and a technology optimist,” he said.
The White House is trying to balance such optimism — which it does not entirely share — with bleak predictions that AI will destroy the world, an anxiety shared by nearly half of corporate chiefs at a recent Yale summit.