Employee Use of AI Could Make Your Company’s Work Product Worse. What Are the Legal Risks?

Deploying AI at a business costs money, and businesses obviously intend to get something positive from using it. Often, the goal is to enable employees to crank out more work in the same amount of time.

But what happens when AI makes mistakes? What legal risks could arise from those mistakes?

Generative AIs (“GenAIs” in tech lingo), such as ChatGPT, are prone to two kinds of errors – hallucinations and omissions.

“Hallucinating” is a charitable way of describing when GenAI produces erroneous output. Sometimes a GenAI will admit it doesn’t know about an issue, but more often it will give a confidently wrong answer.

GenAI will unavoidably hallucinate sometimes because of how it works. Hallucinations largely happen when a prompt requires knowledge of a subject on which the AI has had little training, or when the prompt strings together subjects the AI has rarely seen paired in its training. When that happens, the GenAI can get wrongly “creative.”

GenAIs also sometimes omit critical material from their outputs. For example, I’ve been querying GenAIs about a hypothetical copyright law scenario for the past few months. Those systems have identified the right statutes and older case law, but they keep failing to mention a May 2023 Supreme Court case that changed the law. Their answers have not been hallucinations – they haven’t stated anything untrue – they just leave out that critical 2023 case.

AI omissions are a big risk for businesses. Checking the sources the AI cites might not reveal that the GenAI omitted something important.

The way to guard against important omissions is to allow employees to use GenAI only on tasks where they have subject matter expertise, so they can spot what’s missing. But restricting GenAI use to subject matter experts probably would make using it less profitable.

So, given that GenAI will sometimes hallucinate or omit important things when producing output, what should a business do? Ideally, the employees using GenAI will vet its output before the company adopts it.

But studies show that humans sometimes over-rely on AI to produce sound output; the human decides that the AI output looks solid and that it isn’t worth the time and effort to check it. Worse yet, studies also show that human overreliance on AI sometimes causes people to produce a worse work product than if they had skipped the AI and done the work themselves.

The elusive holy grail is to achieve what computer scientists call “complementarity,” which is when the human-AI team produces a better output (or an equally good output less expensively) than either the human or AI would working alone.

Computer scientists have conducted social science experiments to figure out how human-AI interactions can be structured to achieve complementarity. These studies have identified strategies for changing the cost-benefit calculus employees face so that they feel it’s personally worthwhile to verify the AI’s output. Many of the strategies are common sense.

Here are some of them:

1. Use monetary incentives. Pay the employees who use AI well; paying more encourages employees to take the work more seriously. Pay bonuses for consistently verifying and fixing AI output. Impose financial penalties on employees who fail to vet outputs, such as withholding raises, bonuses, or promotions and, in the worst case, firing them.

2. Require employees to disclose when and how they used AI and what they did to verify its output.

3. Prohibit employees from using AI to produce company work products in areas where the employee lacks subject matter expertise.

4. Invest in better AI. Use AI that cites its sources, and make it easy for employees to check those sources when verifying output. Also, an AI tailored to the needed task will hallucinate and omit key material less frequently.

5. Make the AI less human. Studies show humans are more likely to assume an AI is accurate when it interacts like a human, so a less humanlike interface encourages more skepticism and verification.

6. Make work more enjoyable. People who like their work are more likely to work hard.

Companies can incur legal liability when employees’ overreliance on AI leads them to put out work product that is wrong or omits essential information.

Defective work may fail to meet the standards required by a contract, resulting in a breach of contract. Allowing the defect to slip through may be negligence, which could subject the company to tort liability. The defect might cause the work product to fall short of a standard set by an applicable law or regulation. And the defect could lead to public embarrassment, hurting the company’s reputation and sales.

In the end, employers must understand how employees’ self-interest will affect how well they use AI, and employers should structure things to incentivize employees to use it responsibly.

Written on September 19, 2024

by John B. Farmer

© 2024 Leading-Edge Law Group, PLC. All rights reserved.