This is yet another embarrassing mishap for Anthropic. A brand built upon superior AI safety has leaked over 500,000 lines of internal code through a basic mistake. Before lecturing the rest of the world on safety and security, Anthropic should first fix the mess at home.
This incident is nothing more than an unfortunate, one-off human error, with no sensitive customer data leaked in the process. If one thing is to be taken from the incident, it is that Anthropic continues to build game-changing AI capabilities for its customers.
This incident may be more than an accidental error. As major AI companies require increasing levels of data, Anthropic will be receiving unrivalled feedback from the millions currently exploring their "leaked" code. Just in time for April Fools, things may not be as they seem.
There is a 31% chance that Anthropic will be designated a supply chain risk by May 2026, according to the Metaculus prediction community.
© 2026 Improve the News Foundation.
All rights reserved.