Regulating AI to Protect Society: Where Things Stand and May Be Going

Do you worry about AI destroying your future? Is there any chance that government regulation will save you?

To give shape to your fear, pick your favorite dystopian sci-fi movie:

• Metropolis (1927) – AI-like robot Maria is used to manipulate the working class.

• 2001: A Space Odyssey (1968) – A spaceship’s onboard AI supercomputer, HAL, starts killing the crew to keep them from interfering with its mission.

• The Terminator (1984) – Skynet, a self-aware AI, launches a nuclear war and sends robots to eliminate humanity.

• Ex Machina (2014) – A reclusive scientist creates a humanoid AI robot named Ava, who turns on humans for her own benefit.

Those are the worst cases – AI harming or enslaving humanity. But there are more pedestrian harms to worry about, such as invasions of individual privacy, AI bias or discrimination, and AI eliminating jobs and depressing wages.

AI regulation is just taking shape. There is no comprehensive federal regulation of AI. Some states and cities have enacted laws addressing specific AI uses. The European Union is crafting a comprehensive AI law, but many important details remain to be determined.

In the U.S., on the federal level, the Biden Administration issued a thought piece entitled “Blueprint for an AI Bill of Rights.” It contains five guiding principles for responsible use of AI. It’s aspirational guidance, not binding law.

The Federal Trade Commission has been the most active federal agency, collaborating with the Department of Justice and the Equal Employment Opportunity Commission. These agencies claim to have some power over AI already.

They have warned against possible bias/discrimination violations arising from using AI in credit, tenant approval, hiring and employment, and insurance. They also have addressed invasive commercial surveillance.

The FTC also has issued guidance that companies shouldn’t deceive consumers about when AI is being used to interact with them, that customers should receive an explanation when they are denied a product or service based on AI decision-making, and that companies should validate that their AI models work as intended.

On the state and local levels, several states have enacted laws to address possible AI bias/discrimination in hiring and employment decisions. Some jurisdictions have banned or restricted using facial recognition software in law enforcement. Several jurisdictions also have enacted laws allowing civil suits against creators of deepfakes, especially when used for fabricated pornography. California has imposed notice and disclosure requirements on using chatbots to incentivize sales or to influence votes in elections.

Overall, the hottest area for government action in the United States is addressing bias/discrimination via AI in hiring and employment. What constitutes illegal bias/discrimination is being hotly litigated in the courts and before regulatory agencies. The recent Supreme Court decision effectively banning affirmative action in college admissions is shaking up this area of the law.

Europe is leading the way in AI regulation, paralleling its lead in online privacy regulation. The EU has a comprehensive and demanding online privacy regime called the General Data Protection Regulation (GDPR). There is no U.S. federal analog, but some states, including Virginia, have enacted comprehensive online privacy laws.

The EU is still fashioning its AI regulation, so vital details remain undetermined. Its primary feature will be to sort AI uses into tiers of risk and to calibrate regulation to the risk level.

The highest tier is unacceptable risk, where AI use will be generally prohibited. For example, the EU would prohibit national-level social-credit scoring systems, such as the one China uses to repress its residents. AI use in law enforcement also would be tightly controlled.

The next tier, high risk, will include uses such as hiring and employment decisions. These uses will be regulated but not banned.

For low-risk uses, such as chatbots, regulation probably will be limited to transparency requirements so users can make informed decisions.

If you’re keeping score, note that you don’t see incipient government regulation addressing two threats: job loss and wage diminution, and existential or enslavement threats to humanity (like in the movies).

Regarding jobs and wages, I see no regulation coming. But labor unions are raising AI concerns in collective bargaining. AI usage was a major issue in the recent strike by Hollywood writers.

As for protecting humanity from catastrophic outcomes, you might see some attempt at regulation through treaties and national laws, but those won’t be effective against rogue operators. Protection against foreign threats probably must come through cyberwarfare and other military tools. We might eventually treat powerful AI much as the world treats nuclear weapons. (Think non-proliferation.)

So, get your popcorn ready. How AI unfolds and is regulated will be fascinating and maybe scary to watch.

Written on October 18, 2023

by John B. Farmer

© 2023 Leading-Edge Law Group, PLC. All rights reserved.