The US Federal Trade Commission (FTC) has opened an investigation into OpenAI, probing whether the maker of ChatGPT has harmed consumers by putting reputations and data at risk.
In a letter sent to OpenAI, first reported by the Washington Post and verified by other major outlets, the FTC said the probe will focus on whether the company has "engaged in unfair or deceptive" practices relating to data security, or practices that have resulted in harm to consumers.
Large language models are widely known for their imperfections and tendency to hallucinate, yet tech companies have decided that the appeal of such products outweighs the potential downsides of inaccuracy and misinformation. Because that choice can harm users, as bots such as ChatGPT often produce plausible but incorrect information, governments must step in and regulate these systems.
OpenAI has already acknowledged that generative artificial intelligence can produce untrue content, and it warns users not to trust ChatGPT blindly and to verify any sources the model provides. Meanwhile, its researchers are working to improve the technology's mathematical problem-solving and exploring the impact of process supervision.