OpenAI’s GPT-4 (OpenAI, 2023) reignited public discussion of Artificial Intelligence (AI) and its risks. In a recent open letter (Future of Life Institute, 2023), technology leaders (including Elon Musk and Steve Wozniak), prominent academics (including Yoshua Bengio and Stuart Russell), and many others call for a six-month ‘pause’ on ‘giant AI experiments’, or more precisely, a pause in the training of AI systems more powerful than GPT-4. The letter has sparked much-needed broad public discussion, but it has also led to unhelpful debates on matters such as who did and did not sign, the goals and intentions of the founders of the Future of Life Institute, and speculation about hypothetical artificial general intelligence (AGI) and its capabilities. We welcome the public discussion the letter has generated, but we also see an urgent need to move beyond those debates; it is crucial to address the current problems with the development and use of AI. As has been our position for years, we advocate greater support for, and engagement with, the ongoing comprehensive debate and regulatory actions concerning all aspects of the impact, development, use, and governance of advanced AI systems, whether generative or not. “Efforts towards shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”, as proposed in the open letter, should not be left to AI labs, nor is it s...