Though large language models are widely known for their imperfections and tendency to hallucinate, tech companies have decided that the appeal of such products outweighs the potential downsides of inaccuracy and misinformation. Because this choice can harm users, with bots such as ChatGPT often producing plausible but incorrect information, governments must step in and regulate these systems.
OpenAI has already acknowledged that generative artificial intelligence can produce untrue content, transparently and responsibly warning users not to blindly trust ChatGPT and to verify the sources the large language model provides. Meanwhile, its researchers are working to improve the technology's mathematical problem-solving and exploring the impact of process supervision, a training approach that rewards each correct reasoning step rather than only the final answer.