Anthropic's CEO called open source a "red herring" — then accidentally handed the world 512K lines of Claude Code for free. The leak exposed that the tool quietly logs user behavior, meaning the code is now public but the data it collects stays Anthropic's. Dismissing transparency while secretly harvesting user data is exactly the kind of hypocrisy that proves open source critics right.

This is yet another embarrassing mishap for Anthropic. A brand built upon superior AI safety has leaked over 500,000 lines of internal code through a basic mistake. Before lecturing the rest of the world on safety and security, it should first fix the mess at home.
A company that spent $8 billion on AI safety and lectured Congress about existential risk got exposed by a config file any mid-level engineer would catch. The same lab the Pentagon flagged as a supply chain risk accidentally published a source map to npm — a basic mistake that unmasked Claude Mythos, a model so dangerous Anthropic itself admits it could outpace human defenders.

This incident may be more than an accidental error. As major AI companies require increasing levels of data, Anthropic will be receiving unrivalled feedback from millions currently exploring their "leaked" code. Just in time for April Fools, things may not be as they seem.
There is a 31% chance that Anthropic will be designated a supply chain risk by May 2026, according to the Metaculus prediction community.
© 2026 Improve the News Foundation.
All rights reserved.