AI Action Paris Summit 2025: Key Takeaways on Global AI Governance


Held on February 10–12, 2025, in Paris, the AI Action Summit brought together a wide-ranging assembly of influential figures to discuss the future of artificial intelligence (AI) governance, risk mitigation and international cooperation. The attendees included government leaders and executives from multinational and emerging companies.

The International AI Safety Report and its Key Findings
Ahead of the Summit, an independent consortium of AI experts, policymakers and industry leaders published the International AI Safety Report. The Report, commissioned by the UK government, provides a comprehensive overview of AI’s evolving risks and governance challenges, reinforcing key themes established in the Bletchley Declaration on AI Safety, published by the attendees of the AI Safety Summit at Bletchley Park in November 2023. (For a more in-depth analysis of the Bletchley Declaration and its implications, see our previous post here.)

The Report states that AI’s transformative potential is undeniable, with the ability to drive economic growth and enhance industries from health care to finance. However, the Report also highlights the urgent need to mitigate associated risks such as bias, misinformation and safety vulnerabilities. The Report emphasizes the importance of addressing frontier AI risks, particularly the challenges posed by highly capable AI models that could have unintended or catastrophic consequences if left unchecked. Global coordination in developing safety measures and ethical frameworks is a recurring theme of the Report, as the risks and opportunities of AI transcend national borders. The Report urges governments, businesses and researchers to prioritize transparency, accountability and ethical AI development to foster public trust and ensure responsible innovation.

Key Takeaways from the AI Action Summit
The Summit concluded with the release of the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.” Aside from the headline themes of inclusivity and sustainability, the joint statement also promotes bridging digital divides, AI safety and security, trustworthiness, and avoiding market concentration.

The Statement was signed by some 60 nations and supranational organizations, including China, India, the EU and the African Union. There were, however, two notable exceptions to the list of signatories: the U.S. and the UK.

According to the UK’s Communities Minister, the UK declined to sign the Statement because it lacked “practical clarity” on the “global governance” of AI, adding that the UK makes decisions “based on what’s best for the British people.”

U.S. Vice President JD Vance, while speaking at the Summit, urged the embrace of a “new frontier of AI with optimism and not trepidation” and called for an “international regulatory regime that fosters the creation of AI technology rather than strangles it.” (For our discussion on AI and the new Trump administration, see our recent alert here.)

The Summit featured discussions led by a broad coalition of stakeholders, covering investment, AI governance, sustainability, and regulatory strategies. One such initiative was the UK-backed Coalition for Sustainable AI, which aims to align AI development with the global community’s environmental goals. Other initiatives announced included the launch of “Current AI”—a public interest foundation aimed at investing in open-source, people-first technologies to make AI more transparent—and the launch of the nonprofit “Robust Open Online Safety Tools” by major tech companies (including OpenAI and Google), which will focus on building a scalable safety infrastructure to help organizations detect and report child sexual abuse material and implement other safety features.

At the Summit, French President Emmanuel Macron seized the opportunity to position France as a leading hub for AI investment, unveiling €109 billion in private-sector commitments to advance AI research and infrastructure. A significant portion of this investment comes from the United Arab Emirates, which has pledged between €30 billion and €50 billion to finance the development of a state-of-the-art 1-gigawatt data centre, aimed at bolstering Europe’s AI computing capabilities and supporting large-scale AI model training. Ursula von der Leyen also announced a total of €200 billion in investment across the EU for “AI-related opportunities,” including AI gigafactories.

The Global AI Landscape and Policy Context
The Summit took place against a backdrop of significant AI policy shifts worldwide, highlighting the stark contrast between regulatory approaches in the U.S. and Europe. The EU has taken a proactive stance with the EU AI Act, imposing strict guidelines on AI safety, transparency, and human oversight, reflecting a regulatory-first approach aimed at minimizing risks before AI systems become deeply embedded in society. (For more insights into the EU AI Act and its implications, see our recent alert here.)

Across the Atlantic, the focus of the U.S. federal government has shifted with the recent change in administration. While President Biden issued a 2023 executive order (EO) focusing on the safety, security and trustworthiness of AI, and requiring regulators to begin setting governance standards, President Trump revoked the EO on his first day in office and subsequently released his own EO, “Removing Barriers to American Leadership in Artificial Intelligence,” which proposes the creation of an AI Action Plan to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

At the state level, a spectrum of approaches has also emerged, with states like California, Colorado and Utah implementing more pointed and stringent AI safety requirements, while other states, like New Jersey, have so far passed only legislation encouraging investment and innovation.

In Asia, China has been actively shaping its AI governance landscape with a mix of regulatory control and industry expansion. While China’s approach remains distinct from both the U.S. and EU models, China’s increasing presence in global AI discussions underscores the necessity of cross-border cooperation in establishing shared principles for AI safety and governance.

What Comes Next for AI Governance?
With the Summit now concluded, attention turns to how governments and organizations will implement the policies and strategies discussed. The coming months will likely see continued negotiations on global AI standards, increased regulatory clarity, and expanded efforts to enhance AI safety research. Industry leaders are expected to play a crucial role in shaping governance frameworks, with companies working alongside policymakers to ensure AI development aligns with ethical and safety considerations.

A key aspect to watch will be the coordination between regulatory bodies and AI developers, particularly as AI capabilities continue to evolve at an unprecedented pace. The ability to strike a balance between fostering innovation and addressing risks will determine the trajectory of AI policy in the years ahead.

The next AI Summit will be held in India.

The authors would like to thank trainee solicitor Samson Verebes for his contributions to this post.
