Articles Posted in Artificial Intelligence

Posted

The AI Action Summit brought together a wide-ranging assembly of influential figures to discuss the future of artificial intelligence (AI) governance, risk mitigation and international cooperation. The attendees included government leaders and executives from multinational and emerging companies. The event was held on February 10–12, 2025, in Paris.

Continue reading

Posted

The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the Act), came into effect on February 2, 2025. In essence, from that date, AI practices that present an unacceptable level of risk are prohibited, and organizations are required to ensure an appropriate level of AI literacy among their staff. For a comprehensive overview of the Act, see our earlier client alert here.

Continue reading

Posted

A recent code leak indicated that OpenAI is set to release its first true AI agent. An AI agent is a system designed to perceive its environment, process information, and autonomously take actions to achieve specific goals. Unlike traditional software, which operates on direct input and predefined instructions, AI agents can analyze situations, make decisions, and sometimes learn or adapt over time to better achieve their goals.

Continue reading

Posted

Imagine you’re an associate at a consulting firm. You’re surprised to see a new “AI Assist” button appear in your email application one morning. Without any training or guidance from your firm’s IT department, you decide to try it out, asking the AI to draft a response to a client’s inquiry about tax implications for a proposed merger. The AI confidently generates a response that looks professional and well-written, which you quickly review and send. Three days later, your managing partner calls you into their office: the AI cited outdated tax regulations and recommended a structure that would create significant liability for the client. The incident triggers an urgent internal review, revealing that dozens of employees have been using the undisclosed AI feature for weeks, potentially exposing the firm to professional liability and damaging client relationships.

Continue reading

Posted

Earlier this year, the UK’s Competition and Markets Authority (CMA) published an update to its initial report on AI foundation models, presenting the CMA’s findings on key changes in the foundation model sector and incorporating stakeholder feedback. (Our thoughts on the initial September 2023 report, which provides a summary of foundation models and the CMA’s initial review, can be found here.) The updated report confirms the CMA’s final competition and consumer protection principles and details how the CMA plans to continue its investigations into the impact of foundation models on digital markets and take enforcement action against unfair competition. It also outlines recent and upcoming initiatives and publications from the CMA, emphasizing the importance of collaboration and cooperation between regulators in the digital markets sector.

Continue reading

Posted

Google recently unveiled its latest AI-integrated search engine, and the internet didn’t hold back, roasting it for suggesting recipes like glue-infused pizza sauce and recommending rocks as a nutritious snack. The tech giant’s AI bot could scrape the web’s boundless resources and serve up answers, but apparently it couldn’t tell sincerity from sarcasm or facts from wild fiction. This fiasco is just another reminder of why data governance really matters.

Continue reading

Posted

On August 27, 2024, the California legislature passed Assembly Bill 2013 (AB 2013), a measure aimed at enhancing transparency in AI training and development. If the bill is signed into law by Governor Gavin Newsom, developers of generative AI systems or services made available to Californians would be required to disclose significant information about the data used to train those systems or services. This, in turn, may create novel compliance burdens for AI providers, as well as unique challenges for customers in interpreting that information.

Continue reading

Posted

In a landmark moment for global AI governance, the United States, European Union and United Kingdom have signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”), the first legally binding international treaty on AI.

Continue reading

Posted

Reliability, security, and legal compliance. These are assurances that customers purchasing technology products expect from their providers, and that are often required as part of the contracts for such products. AI providers, however, are lagging in their willingness to contractually commit to such assurances, let alone deliver them in practice. Thus, as AI products grow in both popularity and technical complexity, robust testing tools become indispensable. Unfortunately, using such tools may unwittingly expose companies to legal risks, particularly where testing breaches the use rights, license restrictions, or allocation of intellectual property rights to which the parties commit in the contract for the AI product.

Continue reading

Posted

Since the release of OpenAI’s ChatGPT, the hype around large language models (LLMs) and complex AI systems has exploded. Organizations have rushed to both try and buy these new tools, and a flood of commentary continues to flow on how to use them productively and responsibly and on the legal issues that might arise from such use. Those topics are certainly novel, but when it comes to procuring AI tools, what if the key to successfully purchasing the products is not?

Continue reading