The Legal Problems AI Now Creates Should Pave the Way to a Robust Industry
Is artificial intelligence, or more specifically OpenAI, a risk to public safety? Could ChatGPT be ruining reputations with false statements? The Federal Trade Commission (FTC) sent a 20-page demand for records this week to OpenAI to answer questions and address risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers. The results could set the stage for defining the place artificial intelligence will occupy in the US.
Background
The FTC investigation into OpenAI began in March 2023. It resulted from a complaint by the Center for AI and Digital Policy (CAIDP) alleging that OpenAI’s ChatGPT-4 product violated Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. More specifically, CAIDP argues that ChatGPT-4 is biased, deceptive, and a risk to public safety.
The complaint cited a number of concerns about ChatGPT-4, including:
- The model’s potential to generate harmful or offensive content.
- The model’s tendency to make up facts that are not true.
- The model’s lack of transparency and accountability.
The CAIDP also argued that OpenAI had not done enough to mitigate these risks, and it called on the FTC to investigate the company and take action to ensure that ChatGPT-4 is not used in a harmful way. Neither the FTC nor OpenAI has commented publicly on the investigation.
It is not clear what action, if any, the FTC can or will take.
Negligence?
With few exceptions, companies are responsible for the harm done by their products when used correctly. One of the questions the FTC asked concerns the steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging.” The outcome of this investigation, including any resulting regulation, could set the tone and define where responsibility lies regarding artificial intelligence.
As the race to develop more powerful AI services accelerates, regulatory scrutiny of a technology that could upend the way societies and businesses operate is growing. Complicating matters, computer use generally isn’t confined to a single country; the internet extends far beyond borders. Global regulators are aiming to apply existing rules covering subjects from copyright and data privacy to the issues of data fed into models and the content they produce.
Legal Minefield
In a related story out this week, comedian Sarah Silverman and two authors are suing Meta and OpenAI, alleging the companies’ AI language models were trained on copyrighted material from their books without their knowledge or consent.
The copyright lawsuits against the ChatGPT parent and the Facebook parent were filed in a San Francisco federal court on Friday. Both suits are seeking class action status. Silverman, the author of “The Bedwetter,” is joined in her legal filing by authors Christopher Golden and Richard Kadrey.
Unlike the FTC complaint, the authors’ copyright suits may set a precedent on intelligence aggregation. The AI tools that suddenly emerged with the ability to generate written work in response to user prompts were “taught” using real-life work: the large language models behind these tools are trained on immense quantities of online data. That training practice has raised accusations that the models may be pulling from copyrighted works without permission. Most worrisome to creators, their own works could ultimately be used to train tools that upend their livelihoods.
Take Away
Investing in a promising new technology often means exposing oneself to a not-yet-settled legal framework. As the technology progresses, the early birds investing in relatively young and small companies may find they hold the next mega-cap company. Or, regulation may limit, to the point of stifling, the kind of growth experienced by Amazon and Apple just a few decades ago.
If AI follows the path of other technologies, well-defined boundaries and regulations will give companies the confidence they need to invest capital in the technology’s future, and investors will be more confident in providing that capital.
The playing field is being created while the game is being played. Perhaps if the FTC has a list of 20 questions for OpenAI in ten years, it will just type them into ChatGPT and get a response in 20 seconds.
Managing Editor, Channelchek