SEC Chair Gensler Warns About Centralization of AI in Financial Sector
“This is a new tool for bad actors to do bad things to the American public.”
The financial sector’s reliance on just one or two dominant artificial intelligence (AI) models could lead to the centralization of the technology and financial instability, U.S. Securities and Exchange Commission (SEC) Chair Gary Gensler warned on Tuesday.
The use of AI presents challenges, including issues around whether its decisions are biased and whether its predictions are accurate, Gensler said during a talk at Yale Law School.
“We’ve seen in our economy how one, or a small number of, tech platforms can come to dominate a field,” the SEC chief said. “We all know it in this room: There’s really one search engine that we mostly use, right? There’s really one leading big retail tech platform. And there’s three leading cloud providers.
“At play, we’re bound to see the same development in AI,” he added. “In fact, we’ve already seen affiliations between the three largest cloud providers and the leading generative AI companies. If you look at the two largest cloud providers, nearly three-quarters of the financial sector already uses either one or two of those two biggest cloud providers. So we already have these dependencies.”
This interconnectedness could lead to individuals making “similar decisions as they get similar signals from the base model and rely on data aggregators,” Gensler said. “Thus, AI may play a central role in the after-action reports of a future financial crisis.”
Gensler also warned of “AI washing,” in which companies make false claims about their use of the technology. Companies should be truthful when describing how they use AI, he said.
“Investment advisers or broker-dealers also should not mislead the public by saying that they’re using AI models when they’re not, nor should they say that an AI model does something it’s not doing,” Gensler said. “Such AI washing, whether by companies raising money, broker-dealers, or investment advisers, may violate the law.”
The SEC has proposed a predictive data analytics rule prohibiting conflicts of interest when firms use AI to interact with investors.
Critics have said the proposal would restrict the use of beneficial technology. Last week, Republican senators introduced a bill to block the proposed rule.
Gensler was asked how the SEC will move forward to build the capacity to evaluate AI models and enforce the law.
“We don’t actually review people’s models,” Gensler said, adding that those who deploy an AI model must comply with the law and have the appropriate safeguards. “We have to use securities law as we know it when somebody, a broker-dealer or investment adviser company, is using this tool.
“At the SEC, we’re not trying to put any sand in the gears, but we are very keenly focused. Fraud is fraud,” he added. “This is a new tool for bad actors to do bad things to the American public. And we want to make sure, even through a speech like this today, to make sure that people understand that they still have responsibilities when using this tool.”
See also:
- What ChatGPT Means for Finance
- Cyberattacks Are Accelerating with AI’s Help
- A Sound AI Policy Mitigates Risk and Eliminates Ambiguity
From: National Law Journal