A Sound AI Policy Mitigates Risk and Eliminates Ambiguity
AI cannot be ignored. But before everyone dives in, companies need to have an internal policy that outlines what, when, how, and why AI should be used.
Artificial intelligence (AI) is no longer the wave of the future; it is the here and now. Wherever they are used—in finance, academia, publishing, or even screenwriting (one reason the Hollywood strike lasted as long as it did)—AI tools affect multiple aspects of business today. And this will only grow as the technology, use cases, and tools evolve.
Many organizations have already started using AI tools. One company I spoke with in the tech space uses AI to generate information for training materials. Rather than relying exclusively on expensive subject matter experts, the company turns to AI as an alternative. The technology has eased the hiring burden, helped generate useful content, and improved delivery times for many projects. Another organization uses AI for research, incorporating it into workflows as a standard check and an additional data point to ensure accuracy. But before everyone dives in and starts using AI tools in their everyday workflows, it is important to have an internal policy that outlines what, when, how, and why AI should be used.
Forbes reports that 75 percent of surveyed companies that use AI have not yet developed an AI policy. Additionally, Forbes notes that having a policy in place is only half the battle. A sound AI policy must dictate how employees will be trained on AI, what human oversight will be implemented, and what uses are prohibited, such as deploying AI to broadcast false or harmful information. It must also spell out the consequences of breaching the policy.
Each company must examine AI and develop policies that suit its industry, company culture, and risk tolerance. The technology may provide many benefits, but its use must not harm a company's brand, reputation, or end product. Privacy concerns are crucial, particularly in finance, where sensitive information is commonplace and should not be placed in online AI tools.
Employees must consider that many large language model AI tools, such as Google Bard and ChatGPT, are cloud-based platforms: any data they enter leaves the controlled corporate environment for the public internet.
Therefore, a corporate AI policy must account for the security level of each AI tool and give specific instructions on what information can and cannot be entered. Similarly, policies should address the potential for malicious activity stemming from AI use: company information placed in AI tools may leak and fuel cyberattacks or data breaches.
The need to put a policy in place applies to nearly every company, though some face more significant risks than others, and each policy must account for those risks. The language used in an AI policy is also critical. Organizations must scrutinize the wording carefully, in consultation with experts. Thomas Kearns, partner at Olshan Frome Wolosky, explains why precise language is crucial when laying out a corporate AI policy.
“AI tools are relatively new, so it’s important that we think about our work and put guidelines in place to help avoid problems later,” Kearns said. “A sound policy helps define appropriate uses of AI and keeps employee and client information safe. It also educates employees and protects the company from misuse of the technology that could have a negative impact on a business.”
Just like a human, computer intelligence is prone to imperfection. Any acceptable use of AI should include human review of the output for clarity, accuracy, and potential copyright issues, and that requirement should be clearly defined in corporate policy.
As the technology grows more powerful, company policies governing its use will need to change. We are in the initial stages of the AI age, and with large investments being made globally, this technology may advance more rapidly than any that has come before. That is exactly why it is critical to have a policy in place now, one that companies can revisit and revise periodically.
Many see AI as the next major advancement in technology. Still, some caution that it must be implemented carefully and can become dangerous if left unchecked. One thing is clear: AI cannot be ignored. Organizations should be taking steps now to educate and guide their people on the responsible use of AI.
Just as acceptable use policies for corporate internet access have existed for years, AI policies are quickly finding a place in policy handbooks. These should reflect the culture of the company, with clear language that does not leave the use of AI open-ended. After all, the brand you save might just be your own.
Ioana Good is the founder of Promova, an international PR and branding agency. She is also the co-founder of Find A Rainmaker, an online assessment that provides behavioral insights to help companies generate revenue. She can be reached at igood@getpromova.com.
From: New York Law Journal