Elon Musk has once again ignited debate in the artificial intelligence community, this time with a pointed remark aimed at Anthropic, the AI company behind the chatbot Claude. Responding to news about Anthropic’s updated “constitution” for Claude, Musk suggested that AI companies inevitably evolve into the opposite of what their names imply.
In a post on X, Musk commented that “any given AI company is destined to become the opposite of its name,” adding that Anthropic would therefore end up becoming “Misanthropic.” The remark appeared to question the company’s stated commitment to building AI systems aligned with human values and safety.
The exchange followed Anthropic’s announcement of an updated constitution for Claude, a document designed to define the principles, values, and behavioural boundaries the AI should follow. The update was shared online by Amanda Askell, a member of Anthropic’s technical staff, who later responded to Musk’s comment with humour, expressing hope that the company could “break the curse.” In the same spirit, she remarked that a name like “EvilAI” would be a difficult one to justify.
Musk’s comment drew additional attention given his role as the founder of xAI, an AI startup operating in the same competitive landscape as Anthropic and other major players. The interaction underscored the growing rivalry and philosophical differences shaping the AI sector.
According to Anthropic, Claude’s constitution serves as a foundational guide that explains what the AI represents and how it should behave. The document outlines the values Claude is expected to uphold and the reasons behind them, with the aim of balancing usefulness with safety, ethics, and adherence to company policies.
The constitution is primarily written for the AI itself, providing guidance on handling complex scenarios such as maintaining honesty while being considerate, or safeguarding sensitive information. It also plays a role in training future versions of Claude, helping generate example conversations and rankings so newer models learn to respond in line with these principles.
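Anthropic has not published the mechanics of this training step, but the general shape of the idea can be sketched in a few lines of Python. Everything below is hypothetical and purely illustrative, from the sample principles to the stand-in scoring function: the point is simply that candidate responses can be ranked against written principles, and the resulting rankings used as preference data for a newer model.

```python
# Illustrative sketch only: Anthropic has not published its training
# pipeline, and every name here (CONSTITUTION, score_alignment,
# rank_candidates) is hypothetical.

CONSTITUTION = [
    "be broadly safe",
    "act ethically",
    "follow company guidelines",
    "be genuinely helpful",
]

def score_alignment(response: str, principle: str) -> float:
    """Stand-in judge for illustration. A real pipeline would use a
    model to rate how well the response upholds the principle, not
    this crude keyword check."""
    return float(sum(word in response.lower() for word in principle.split()))

def rank_candidates(prompt: str, candidates: list[str]) -> list[str]:
    """Order candidate responses by total alignment with the constitution.
    (The prompt is kept for context; the stub scorer ignores it.)"""
    def total(response: str) -> float:
        return sum(score_alignment(response, p) for p in CONSTITUTION)
    return sorted(candidates, key=total, reverse=True)

ranked = rank_candidates(
    "How do I reset my password?",
    ["Here is a safe, helpful walkthrough...", "Just guess someone else's."],
)
print(ranked[0])  # the better-aligned response comes first
```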
In its latest update, Anthropic outlines four core priorities for Claude: being broadly safe, acting ethically, following company rules, and remaining genuinely helpful to users. When these goals conflict, the AI is instructed to prioritise them in that order.
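That strict ordering amounts to a lexicographic preference: no amount of helpfulness can compensate for a failure of safety. A rough Python sketch of the concept follows, with the caveat that the Evaluation fields and scores are invented for illustration; the article describes only the ordering itself.

```python
from dataclasses import dataclass

# Hypothetical illustration of the stated priority order:
# safety first, then ethics, then company rules, then helpfulness.

@dataclass
class Evaluation:
    safe: float       # broadly safe
    ethical: float    # acts ethically
    compliant: float  # follows company rules
    helpful: float    # genuinely helpful to the user

def preference_key(e: Evaluation) -> tuple:
    # Lexicographic comparison: a gain in a lower-priority value can
    # never outweigh a loss in a higher-priority one.
    return (e.safe, e.ethical, e.compliant, e.helpful)

# The safer answer wins even though it is the less helpful one.
cautious = Evaluation(safe=1.0, ethical=1.0, compliant=1.0, helpful=0.4)
reckless = Evaluation(safe=0.2, ethical=1.0, compliant=1.0, helpful=1.0)
print(max([cautious, reckless], key=preference_key) is cautious)  # True
```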
While Musk’s comment was brief, it has revived a broader question facing the AI industry: whether companies can consistently uphold the ethical frameworks they set out as their technologies scale and compete in an increasingly crowded, fast-moving market.