Back in 2022, Anthropic CEO Dario Amodei chose not to release Claude, the powerful AI chatbot his company had just finished training, opting instead to focus on further internal safety testing. That move likely cost the company billions: three months later, OpenAI launched ChatGPT.
Still, a reputation for credibility and caution is no bad thing in an industry that appears to have thrown much of both to the wind. Claude is now in its third iteration, and that caution remains: the company has pledged not to release AIs above certain capability levels until it can develop sufficiently robust safety measures.
TIME’s interview with Amodei offers a glimpse of what the AI industry might look like when safety is treated as a core part of the strategy.