AI Boom Has Left Employers with Head-Spinning Governance Challenges
Businesses and governments are “grappling with how to set boundaries while staying competitive in the technology transformation race.”
With the artificial intelligence (AI) genie out of the bottle, government and industry now must get their arms around another topic that’s nearly as complicated as the technology itself: AI governance, including the policies, laws, and regulations that will guide the development, use, and deployment of AI systems.
The “AI Governance in Practice Report 2024,” released Tuesday by FTI Consulting and the International Association of Privacy Professionals, lists 31 of what the report calls “some of the most prominent and consequential” AI governance efforts in the world, from the EU AI Act and the White House “Blueprint for an AI Bill of Rights” to the Digital India Act and the OECD Framework for Classifying AI Systems.
The attempts at governance fall into five categories: principles, laws and regulations, AI frameworks, declarations and voluntary commitments, and standards efforts. And the 31 are truly just a sampling. Sprinkled throughout the report are many more examples, such as the laws and rules empowering the U.S. Federal Trade Commission (FTC) to protect individuals’ privacy and shield consumers from unfair business practices.
It all makes for a dizzying array of disparate, seemingly irreconcilable efforts worldwide, pursuing the twin goals of taming AI on the one hand and supporting its potential for innovation on the other.
“AI systems have become powerful engines for disruption across industries and geographies, leaving businesses and governments grappling with how to set boundaries while staying competitive in the technology transformation race,” FTI Technology’s global CEO, Sophie Ross, said in introducing the study.
Yet the report also notes, “Confusion about how the technology works, the introduction and proliferation of bias in algorithms, dissemination of misinformation, and privacy rights violations represent only a sliver of the potential risks.”
The 70-page report states that global private AI investment soared from $4 billion in 2013 to $94 billion in 2021. While that capital infusion likely will usher in a new era of technological innovation, the technology’s voracious appetite for data raises “serious considerations and concerns about the safety of this technology and the potential for it to disrupt the world and negatively impact individuals when left unchecked,” the report says.
At an FTC roundtable in October, representatives of the creative industry begged the agency to intervene, complaining that AI systems devour their works and then produce near-replicas of everything from actors’ images and voices to published writing. John Grisham, Douglas Preston, and other New York Times bestselling authors filed a copyright infringement suit last year against OpenAI in federal court in New York. OpenAI contends its practices constitute fair use under copyright law.
AI’s ability to “scrape” information online also raises the risk that these technologies could access sensitive personal information, enabling data fraud and unwanted direct marketing.
“Though copyright has emerged as one of the first and foremost frontiers between AI and intellectual property [IP], the full gamut of IP rights are engaged by AI, and specifically generative AI—design rights, performers’ rights, patents, and trademarks,” Joe Jones, director of research and insights for the International Association of Privacy Professionals, said in the report. “Anthropocentric approaches to IP will butt up against AI’s learning techniques, its scale, and [the] nature of its outputs, leaving much uncertainty, complexity, and variety in the implementation of AI and IP governance.”
For organizations wanting to harness the power of AI while sidestepping its potential pitfalls, the report suggests a number of best practices, such as defining a corporate strategy for AI and documenting processes and controls to record and demonstrate compliance.
The report emphasizes that ensuring the safe and ethical use of AI is, by necessity, a shared responsibility, one that includes the developers and deployers of AI systems, as well as the various third parties that organizations rely on in their operations and supply chains.
“An effective AI governance model is about collective responsibility and collective business responsibility, which should encompass oversight mechanisms such as privacy, accountability, compliance, among others,” Vishal Parmar, British Airways’ global lead privacy counsel and data protection officer, said in the report.
Andrew Gamino-Cheong, co-founder and CTO of Trustible AI, cautioned in the report that “AI governance is about to get a lot harder. The internal complexity of governing AI is growing as more internal teams adopt AI, new AI features are built, and the systems get [more] complex.”
He added: “At the same time, the external complexity is also set to grow rapidly with new regulations, customer demands, and safety research evolving. The organizations [that] have invested in structured AI governance already have a leg up and will continue to have a competitive advantage.”
From: Corporate Counsel