Nippon Telegraph and Telephone, Japan's largest telecom company, and Yomiuri Shimbun Group Holdings, the country's biggest newspaper, have called for a law to end unrestrained use of artificial intelligence (AI).
They warned that democracy and social order could be in peril in the face of unhindered AI development.
AI systems are prone to hallucinations, confidently generating inaccurate information. Despite guardrails, these errors remain a challenge because they carry real consequences. Fully eradicating the problem may be difficult, and perfect accuracy remains a distant goal; until a fix emerges, trust in AI responses must be sparing.
Some argue that AI errors can be viewed as creative experimentation, embracing the technology's unpredictable nature rather than demanding specific outcomes. While hallucinations are a serious concern in fields such as finance and healthcare, ways to leverage them for creative endeavors are worth exploring. Proper context is key to managing the risk.
Chatbot hallucinations also act as a buffer, requiring human verification before AI-generated content can be fully relied upon. The debate continues over whether hallucinations can be eliminated entirely. For now, they offer a degree of balance, with even some upside: they prevent complete automation and keep humans involved in critical decision-making.