The U.S. Looks to Focus—and Align—Its Efforts to Regulate AI
Prior efforts to regulate artificial intelligence in the U.S. have been fragmented, but the White House executive order released last week aims to shape future efforts.
With 50 states, 535 members of Congress, more than 300 federal agencies, and the many committees and task forces in between, agreeing on a common approach to regulating artificial intelligence (AI) in the United States is no easy feat.
After several states took the lead in passing AI-specific laws, the White House released an executive order on October 30 in hopes of shaping future policies and regulatory efforts by federal agencies and state legislators.
A panel last Thursday at the International Association of Privacy Professionals’ first AI Governance Global conference in Boston, titled “AI Law and Policy in the U.S.: Where We’re Headed,” highlighted some of the efforts underway across the country to regulate AI, how they fit together, and what developments may come next.
White House’s Take: ‘Pull Every Lever We Can’
Last week solidified the Biden administration’s approach of treating AI as a “broad technology with broad applications,” meaning that its risks and benefits cut across many different domains and sectors, noted Nik Marda, chief of staff of the Technology Division for the White House Office of Science and Technology Policy.
In its executive order, the White House cast a wide net, attempting to cover as many areas as possible: immigration, housing, employment, and financial services, among others.
“President Biden was actually very clear to his team that we should be looking to pull every lever we can across the federal government to really tackle that broad range of benefits and risks,” Marda explained. “So that’s sort of the framing that’s been guiding our work.”
To do so, the White House is looking at regulating AI through executive actions, legislation, international collaboration, and discussions with the major AI developers. Tackling each of these buckets requires an all-hands-on-deck approach and significant resources, Marda noted.
To help implement its order, the White House launched a national AI talent search. Specifically, the Biden administration is looking for staff to assist in building and responsibly using AI systems, building regulatory capacity, and establishing a research and development ecosystem.
“How do we make sure the government has the talent to be able to audit, to be able to oversee, to look at some of the information … that we’re going to get from companies and then actually understand what are the real risks and benefits of these systems, and also be able to create the infrastructure needed for society to be able to leverage AI?” Marda added.
CPPA Brings EU AI Act Principles Home
While the European Union (EU) moves its AI Act through the legislative process, efforts at the federal level in the United States are a bit behind. For now, it’s the states that have mostly led the way, with AI bills proposed across the country and laws already enacted in places like Connecticut and New York City.
Another key player at the state level is California, where the California Privacy Protection Agency (CPPA) is actively looking at automated decision-making (ADM) systems under the California Privacy Rights Act (CPRA). More specifically, Vinhcent Le, a board member of the CPPA, noted that the agency is examining how consumers’ personal information is used to train AI and ADM technologies.
“Is the use of this data compatible with the purposes for which it was collected, or compatible with those expectations in which it was collected? And is this use of our data in AI reasonable or proportionate?” Le asked. He added that many of the concepts found in the EU AI Act are “things that we are concerned with in California.”
In fact, the U.S. and EU approaches to mitigating the risks that AI can create are starting to converge, panelists found. “I think there’s a lot of alignment on this risk-based approach. I think the focus on assessments of AI and ADM tools is also something that we’re aligned with,” Le said.
The Role of Soft Law
Alongside these legislative efforts are ongoing inputs from the soft law side of the ecosystem. One example is the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce that advises the president on telecommunications and information policy but has no regulatory authority of its own.
Russell Hanser, NTIA’s associate administrator for policy analysis and development, noted that the agency is actively meeting with stakeholders from across different sectors and reviewing responses from its “AI Accountability Policy Request for Comment.”
“We received over 1,400 unique comments,” Hanser said. “That’s a lot.”
This process highlighted the need for an accountability system that allows for more access to information, independent evaluations, and consequences for system failures. “This is about the plumbing of AI governance … that will make AI governance work,” Hanser said.
Several components of the executive order also fall under that soft law umbrella, as the White House lists a number of recommended and encouraged actions.
“I think broadly, the sort of soft impact of an executive order that spells out priorities and areas of focus starts to have that shaping effect,” Marda noted, referring to its ability to frame upcoming policy. The order will likely have a broader impact on the private sector as well: while it focuses on the government’s use of AI systems, the private-industry providers of those systems will ultimately be the ones that have to meet the new standards.
“There’s a sort of market-shaping effect that comes from that,” Marda said. “And so you see that throughout the various places where we talk about standards and guidance and policies, those sorts of provisions, I think, are gonna have a pretty big impact on shaping private-sector use toward being more responsible.”
What’s Next?
The United States’ move to regulate AI across sectors comes as the country approaches the next presidential election, with concerns about the role that generative AI–powered deepfakes and other deceptive content will play in the months leading up to the next inauguration.
The White House’s order is attempting to get ahead of such risks by emphasizing the need for “labeling and content provenance mechanisms,” concepts that are also found in the current draft of the EU AI Act.
“The guidance is: Yes, you should label this. Is it enforceable yet? No. Is it going to be ready by the 2024 elections? I hope. The developers of these systems are working on it,” Le said.
More broadly, on the legislative side, several AI-related bills have been proposed and discussed in Congress. When asked whether the White House had identified any particular bills it would support, Marda noted that the Biden administration is “looking to advance bipartisan legislation … that both helps promote innovation and manage risk.” He added that President Joe Biden met with several senators this week to discuss AI legislation.
From: Legaltech News