Articles Posted in Artificial Intelligence

Google recently unveiled its latest AI-integrated search engine—and the internet didn’t hold back, roasting it for suggesting recipes like glue-infused pizza sauce and recommending rocks as a nutritious snack. The tech giant’s AI bot could scrape the web’s boundless resources and serve up answers, but apparently, it couldn’t tell sincerity from sarcasm or facts from wild fiction. This fiasco is just another reminder of why data governance really matters.

On August 27, 2024, the California legislature passed Assembly Bill 2013 (AB 2013), a measure aimed at enhancing transparency in AI training and development. If the bill is signed into law by Governor Gavin Newsom, developers of generative AI systems or services made available to Californians will be required to disclose significant information about the data used to train those systems or services. This, in turn, may create novel compliance burdens for AI providers, as well as unique challenges for customers in interpreting the disclosed information.

In a landmark moment for global AI governance, the United States, European Union and United Kingdom have signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”), the first legally binding international treaty on AI.

Reliability, security, and legal compliance. These are assurances that customers purchasing technology products expect from their providers, and which are often required as part of the contracts for such products. AI providers, however, are lagging in their willingness to contractually commit to such assurances, let alone deliver them in practice. Thus, as AI products grow in both popularity and technical complexity, robust testing tools become indispensable. Unfortunately, using such tools may unwittingly expose companies to legal risk, particularly where testing breaches the use rights, license restrictions, or allocation of intellectual property rights to which the parties committed in the contract for the AI product.

Since the release of OpenAI’s ChatGPT, intense hype around large language models (LLMs) and complex AI systems has exploded. Organizations have rushed to both try and buy these new tools, and a flood of commentary continues to flow regarding how to use them productively and responsibly, along with the legal issues that might arise from such use. Those topics are certainly novel—but when it comes to procuring AI tools, what if the key to successfully purchasing the products is not?

The Council of the European Union and the European Parliament reached a provisional agreement on a new comprehensive regulation governing AI, known as the “AI Act,” late on Friday night (December 8, 2023). While the final agreed text has not yet been published, we have summarized what are understood to be some of the key aspects of the agreement.

The Competition and Markets Authority (CMA), the UK’s competition regulator, announced this month that it plans to publish an update in March 2024 to its initial report on AI foundation models (published in September 2023). The update will be the result of the CMA launching a “significant programme of engagement” in the UK, the United States and elsewhere to seek views on the initial report and its proposed competition and consumer protection principles.

The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1–2 at Bletchley Park, bringing together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss AI risk mitigation through internationally coordinated action.

The use of generative AI tools like ChatGPT is becoming increasingly popular in the workplace. Generative AI tools include artificial intelligence chatbots powered by “large language models” (LLMs), which learn from (and share) a vast amount of accumulated text and interactions (often snapshots of much of the internet). These tools can interact with users in a conversational, iterative way with a human-like personality, performing a wide range of tasks such as generating text, analyzing and solving problems, translating languages, summarizing complex content, and even generating code for software applications. For example, in a matter of seconds they can draft a marketing campaign, generate corresponding website code, or write customer-facing emails.

Innovation has historically been driven by companies in regulated industries—e.g., financial services and health care—and some of the most intriguing use cases for generative AI systems will likely transform these industries.

At the same time, regulatory scrutiny could significantly hamper AI adoption, despite the current absence of explicit regulations prohibiting the use of AI systems. Regulators are likely to focus on confidentiality, security and privacy concerns with generative AI systems, but other issues could arise as well. Companies operating in key regulated industries appear to be anticipating regulatory scrutiny, which is why adoption of the newest generative AI systems will likely be slow and deliberate. In some cases, AI systems are being banned outright.
