When ChatGPT burst onto the scene last year, in-house lawyers had to scramble to figure out how to govern the use of new generative AI tools, and decide who would take charge of those decisions.
Topping their concerns: protecting confidential business and customer data, and establishing human backstops to safeguard against the technology’s propensity to “hallucinate,” or spit out wrong information.
Artificial intelligence isn’t new. But generative AI—tools trained on oceans of content to produce original text—created ripples of panic among legal departments when ChatGPT debuted, because its full legal implications were both far-reaching and not entirely clear. And because the platforms are public-facing, the tools are easily accessible to employees.
From a company’s perspective, “generative AI is the first thing that can violate all our policies at once,” said Dan Felz, a partner at Alston & Bird in Atlanta.
AI Oversight
As the technology evolves and the legal implications multiply—and with regulation on the horizon in multiple jurisdictions—companies should have a person or team dedicated to AI governance and compliance, said Amber Ezell, policy counsel at the Future of Privacy Forum. The group this summer published a checklist to help companies write their own generative AI policies.
That role often falls to the chief privacy officer, Ezell said. But while AI is privacy-adjacent, it also encompasses other issues.
Toyota Motor North America has established an AI oversight group that includes experts in IP, data privacy, cybersecurity, research and development, and more to evaluate internal requests to use generative AI on a case-by-case basis, said Gunnar Heinisch, managing counsel.
The team is “continually trying to evaluate what the risks look like versus what the benefits are for our business” as new issues and use cases arise, Heinisch said.
“Meanwhile, in the background, we’re trying to establish what our principles and framework look like—so, dealing with the ad hoc questions and then trying to establish what that framework looks like, with a long-term regulatory picture in mind,” he added.
Salesforce, the San Francisco-based enterprise software giant, has been using AI for years, said Paula Goldman, chief ethical and humane use officer at the company. While that meant addressing ethical concerns from the start, she noted, generative AI has raised new questions.
The company recently released a new AI acceptable use policy, Goldman said.
“We know that this is very early days in generative AI, that it’s advancing very quickly, and that things will change,” she said. “We may need to adapt our approach, but we’d rather put a stake in the ground and help our customers understand what we think is the answer to some of these very complicated questions right now.”
The conversation about responsible use of the technology will continue as laws evolve, she added.
Creating Policies
The first appearance of ChatGPT was, “All hands on deck! Fire! We need to put some policy in place immediately,” said Katelyn Canning, head of legal at Ocrolus, a fintech startup with AI products.
In a perfect world, Canning said, she would have stopped internal use of the technology while figuring out its implications and writing a policy.
“It’s such a great tool that you have to balance between the reality of, people are going to use this, so it’s better to get some guidelines out on paper,” she said, “just so nothing absolutely crazy happens.”
Some companies banned internal use of the technology. In February, a group of investment banks prohibited employee use of ChatGPT.
Others have no policies in place at all yet—but that’s a dwindling group, Ezell said.
Many others allow their employees to use generative AI, she said, but they establish safeguards—like tracking its use and requiring approval.
“I think the reason why companies initially didn’t have generative AI policies wasn’t because they were complacent or because they didn’t necessarily want to do anything about it,” Ezell said. “I think that it came up so fast that companies have been trying to play catch-up.”
According to a McKinsey survey, among respondents who said their organizations have adopted AI, only 21% said those organizations had policies governing employee use of generative AI. The survey data was collected in April and included respondents across regions, industries, and company sizes, McKinsey said.
For companies creating new policies from scratch, or updating their policies as the technology evolves, generative AI raises a host of potential legal pitfalls, including security, data privacy, employment, and copyright law concerns.
As companies wait for targeted AI regulation that’s under discussion in the EU, Canada, and other jurisdictions, they’re looking to the questions regulators are asking, said Caitlin Fennessy, vice president and chief knowledge officer at the International Association of Privacy Professionals. Those questions are “serving as the rubric for organizations crafting AI governance policies,” she added.
“At this stage, organizations are leveraging a combination of frameworks and existing rulebooks for privacy and anti-discrimination laws to craft AI governance programs,” Fennessy said.
What’s a ‘Hard No’?
At the top of most corporate counsels’ concerns about the technology is a security or data privacy breach.
If an employee puts sensitive information—such as customer data or confidential business information—into a generative AI platform that isn’t secure, the platform could offer up the information somewhere else. The information could also be incorporated into the training data the platform operator uses to hone its model—the data that “teaches” the model—which could effectively make it public.
But as companies seek to “fine-tune” AI models—train them with firm- and industry-specific data to obtain maximum utility—the thorny question of how to safeguard secrets will remain at the forefront.
Inaccuracy is also a major concern. Generative AI models have a propensity to hallucinate, or produce incorrect answers.
Companies must be careful not to allow unfettered, unreviewed use without checks and balances, said Kyle Fath, a partner at Squire Patton Boggs in Los Angeles who focuses on data privacy and IP.
A “hard no” would be using generative AI without internal governance or safeguards in place, he said, because humans need to check that the information is factually accurate and not biased, and doesn’t infringe on copyrights.
Risks and Guardrails
Using generative AI for HR functions—like sorting job applications or measuring performance—risks violating existing civil rights law, the US Equal Employment Opportunity Commission has warned.
The AI model could discriminate against candidates or employees based on race or sex, if it’s been trained on data that is itself biased.
Recent guidance from the EEOC is consistent with what employment lawyers had been advising their clients, said David Schwartz, global head of the labor and employment law group at Skadden Arps in New York. Some jurisdictions have already enacted their own AI employment laws—such as New York City’s new requirement that employers subject AI hiring tools to an independent audit checking for bias.
Privacy issues have likewise already drawn regulatory attention in the US and EU, Fath said.
Employee use of generative AI also puts companies at risk of intellectual property law violations. Models that pull data from third-party sources to train their algorithms have already sparked lawsuits against AI providers by celebrities and authors.
“It’s probably not outside of the realm of possibility that those suits could start to trickle down to users of those tools,” beyond just targeting the platforms, Fath said.
Companies are looking closely at whether their current privacy and terms of use policies allow them to touch customer or client data with generative AI, he added.