The RYTR’s Block: How the FTC Wrote Off an AI Tool and the Consequences for AI Innovation
The tension between combatting illegal technology uses and fostering tech innovation was at issue in a recent controversial FTC decision that banned a generative AI tool used for writing product and service reviews.
The case concerned a generative AI writing tool called RYTR (rytr.me), which generates content for dozens of “use cases,” such as drafting an email or social media post.
The FTC acted against RYTR’s “use case” for generating reviews. The tool allowed users to generate reviews containing claims about the reviewed product or service that didn’t appear in the user’s prompt, and users could specify the number of reviews to be generated.
For example, the FTC tested the review tool by inputting only “dog shampoo” in the prompt. The tool generated this content: “As a dog owner, I am thrilled with this product. My pup has been smelling better than ever, the shedding has been reduced and his coat is shinier than ever. It’s also very easy to use and smells really nice. I recommend that everyone try this out!”
RYTR offers free use of its AI writing tool up to 10,000 characters per month. Premium versions have no character limit and offer advanced features. The FTC alleged that some users of the review-writing tool produced large volumes of reviews. One user allegedly generated over 39,000 reviews for replica designer watches and handbags in one month alone.
The FTC has five commissioners. No more than three can be from one political party. The FTC majority (three Democrat commissioners) held that these AI-generated reviews necessarily contain false information (because the AI makes up details the user didn’t input) and that the review tool’s “likely only use is to facilitate subscribers posting fake reviews with which to deceive consumers.” They concluded that use of the tool “is likely to pollute the marketplace with a glut of fake reviews.”
While the FTC didn’t accuse RYTR of posting fake reviews, it dinged the company for creating the “means and instrumentalities” by which others could do so. Accordingly, the FTC took administrative action against RYTR. RYTR knuckled under and removed its review-generating tool from its suite of AI use cases.
The FTC minority (two Republican commissioners) lambasted the decision as aggressive overreach. They pushed back against the majority’s finding that the tool can be used only for fraud. They observed that legitimate reviewers could use the tool to generate first drafts, which the user could edit for accuracy. They noted that the FTC produced no proof that anyone had used RYTR’s service to create and post fake reviews or proof that RYTR induced users to do so.
They warned that this decision would harm U.S. AI innovation by discouraging R&D into new AI tools, such as by creating fear among AI providers that they could be held liable for the misdeeds of their users.
Unfortunately, it appears the FTC commissioners don’t understand how fake review technology works. One could easily and cheaply generate a high volume of reviews by using a Python script (Python is a programming language) to call ChatGPT’s underlying API and ask it to produce, say, 1,000 reviews.
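To make the point concrete, here is a minimal sketch of what such a script looks like, assuming the openai Python package and an API key set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not drawn from the FTC record:

```python
# Minimal sketch: bulk review generation via the OpenAI API.
# Assumes the "openai" package is installed and OPENAI_API_KEY is set.
# The model name and prompt below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviews = []
for _ in range(1000):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any inexpensive chat model would do
        messages=[{
            "role": "user",
            "content": "Write a short, enthusiastic review of a dog shampoo.",
        }],
    )
    reviews.append(response.choices[0].message.content)

print(f"Generated {len(reviews)} reviews.")
```

A loop like this runs in minutes and costs very little, which is the author’s point: generating the text is the easy part.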
The much harder task is using bot accounts to get the reviews posted in a marketplace (such as Amazon), either through bot accounts you create yourself (a tough programming task) or ones you purchase illicitly. Also, places where fake reviews might have the most impact (such as Amazon) are ones where the platform’s technological defenses are usually good at detecting and removing bot accounts and AI-generated content.
What does this decision mean for the future of AI innovation in the United States?
The FTC majority’s aggressive enforcement stance may be short-lived. The term of the FTC chair, Democrat Lina Khan, recently ended. Incoming President Trump will replace her. The technology titans around Trump are mainly AI accelerationists – people who believe rapid development of AI will produce massive economic and other societal benefits. Thus, it’s likely that there will be a new FTC majority leery of retarding AI innovation.
But there’s no guarantee the restaffed FTC will take a different approach. If the FTC doesn’t change its stance, businesses developing and using AI tools must be careful to construct and present them in ways that make it obvious the tools have significant legitimate uses, meaning uses that don’t violate the interests the FTC is charged with protecting, chiefly protection against false advertising and deceptive trade practices, privacy, and other consumer protections. They must also be careful not to tout or imply possible illegal uses of their AI tools.
But even if AI creators do those things, this decision raises the specter that developers of AI tools could become FTC targets merely because their tools could be misused, even without proof of actual harm or of intent to commit or facilitate illegal activity. Technological progress is hard to stop, but investors will value AI R&D projects less if legal challenges might nix them.
NOTE: A longer, more detailed version of this column is available on John Farmer’s Substack.
Written on November 20, 2024
by John B. Farmer
© 2024 Leading-Edge Law Group, PLC. All rights reserved.