Global Tech Giants Embrace New AI Safety Guidelines

Leading AI companies have committed to a new set of voluntary safety guidelines, announced by the UK and South Korean governments ahead of a two-day AI summit in Seoul. Sixteen tech powerhouses, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI, have opted into this groundbreaking framework.

This framework is the first such agreement to span North America, Europe, the Middle East (represented by the UAE’s Technology Innovation Institute), and Asia, with China’s Zhipu AI among the signatories. Among their pledges, the companies have vowed not to develop or deploy any AI model whose severe risks cannot be managed. They have also committed to publishing their risk measurement and mitigation strategies.

New Artificial Intelligence Guidelines

The new guidelines follow recommendations from a paper published in Science by prominent figures including AI pioneers Yoshua Bengio, Geoffrey Hinton, and Andrew Yao, alongside historian Yuval Noah Harari. Their paper, titled “Managing Extreme AI Risks Amid Rapid Progress”, emphasizes the need for transparency, robustness, interpretability, and inclusive development in artificial intelligence systems. It also calls for comprehensive risk assessments and resilience against AI-enabled threats.

Anna Makanju, Vice President of Global Affairs at OpenAI, praised the new recommendations, stating, “The field of artificial intelligence safety is quickly evolving, and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science. We remain committed to collaborating with other research labs, companies, and governments to ensure AI is safe and benefits all of humanity.”

Similarly, Michael Sellitto, Head of Global Affairs at Anthropic, highlighted the importance of these commitments for safe and responsible AI development. He noted, “As a safety-focused organization, we have made it a priority to implement rigorous policies, conduct extensive red teaming, and collaborate with external experts to make sure our models are safe. These commitments are an important step forward in encouraging responsible AI development and deployment.”

A Step Towards Regulation

This new framework mirrors the “voluntary commitments” made at the White House last July by companies such as Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI. Those earlier commitments aimed to ensure the safe, secure, and transparent development of AI technologies. The new commitments require the 16 companies to “provide public transparency” on their safety implementations, though they may withhold information that would increase risks or expose sensitive commercial data.

UK Prime Minister Rishi Sunak hailed the initiative, saying, “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on artificial intelligence safety.” The inclusion of firms from beyond Europe and North America, such as China’s Zhipu AI, underscores the framework’s global reach.

However, some critics argue that these voluntary commitments lack enforcement mechanisms, making them largely symbolic. Dan Hendrycks, safety adviser to Elon Musk’s startup xAI, acknowledged that the commitments could help “lay the foundation for concrete domestic regulation.” Yet, the effectiveness of such commitments remains in question, especially when extreme risks loom large.

International Cooperation on AI Safety

As the AI safety summit begins in Seoul, ten nations and the European Union (EU) have agreed to establish an international network of publicly backed “Artificial Intelligence Safety Institutes.” The “Seoul Statement of Intent toward International Cooperation on Artificial Intelligence Safety Science” was signed by the UK, the United States, Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, and the EU.

While China did not formally join this agreement, it participated in the summit and had its firm Zhipu AI sign the new safety framework, signalling a willingness to cooperate on AI safety alongside its ongoing “secret” talks with the US.

This smaller summit, though less publicized than the previous one held at the UK’s Bletchley Park in November, attracted notable figures from the tech industry, including Elon Musk, former Google CEO Eric Schmidt, and DeepMind co-founder Sir Demis Hassabis. More developments and commitments are expected to emerge from the discussions in the coming days.
