An Early Primer on Contracting for Agentic AI Procurement


A recent code leak indicated that OpenAI is set to release its first true AI agent. An AI agent is a system designed to perceive its environment, process information, and autonomously take actions to achieve specific goals. Unlike traditional software, which operates on direct input and predefined instructions, AI agents can analyze situations, make decisions, and sometimes learn or adapt over time to better achieve their goals.

As organizations increasingly explore the deployment of agentic artificial intelligence systems, legal teams face novel challenges in structuring purchase agreements that adequately address the unique characteristics and risks of these systems.

Admittedly, we are barely at the beginning of large-scale procurement and deployment of agentic AI. But by applying the risk allocation models we already know and understand from professional service delivery, we can build a customer-protective approach that fairly allocates risk when purchasing and using agentic AI.

Key Differentiators of Agentic AI Compared to Other Generative AI
Unlike LLMs, which respond to prompts with purely text-, image-, or video-based output, agentic AI can independently initiate actions and make decisions without direct human input. Its outputs, rather than being confined to a chat window, interact with and influence external systems, people, or organizations in order to execute tasks. By design, this results in a higher degree of machine learning, iteration, and adaptation, happening in real time with tangible, possibly physical, consequences.
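To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The functions call_llm, decide_next_action, and send_email are hypothetical stand-ins for a real model and real external systems, not any vendor's actual API; the point is only the structural difference between returning text and taking actions:

```python
# Illustrative sketch only: call_llm, decide_next_action, and send_email are
# hypothetical stand-ins, not a real vendor API.

def call_llm(prompt: str) -> str:
    """Traditional generative AI: text in, text out, confined to the chat window."""
    return f"Here is a draft answer to: {prompt}"

def send_email(recipient: str, body: str) -> None:
    """An external side effect, the kind of action an agent can initiate itself."""
    print(f"[agent] emailed {recipient}: {body}")

def decide_next_action(goal: str, step: int) -> tuple[str, dict]:
    """Stand-in for the agent's planning step; a real system would invoke a model here."""
    if step == 0:
        return "send_email", {"recipient": "vendor@example.com",
                              "body": f"Requesting a quote for: {goal}"}
    return "done", {}

def run_agent(goal: str, max_steps: int = 5) -> None:
    """Agentic AI: the system selects and executes actions, iterating toward a goal."""
    tools = {"send_email": send_email}
    for step in range(max_steps):  # a bounded loop is one simple human-imposed guardrail
        action, args = decide_next_action(goal, step)
        if action == "done":
            break
        tools[action](**args)  # the output does something, rather than just saying something

print(call_llm("Summarize this contract."))      # output stays in the window
run_agent("a license for procurement software")  # output reaches external systems
```

The contractual questions discussed below map directly onto this sketch: who controls the planning step, and who sets the guardrails (here, the max_steps bound), goes a long way toward determining where responsibility for the resulting actions should sit.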

Agentic AI actions can quickly create complex chains of causation and responsibility, introducing unique liability challenges, particularly in the event of harm. And compared to traditional generative AI's static outputs, agentic AI demands a higher level of monitoring and intervention.

Models for Bridging the Techno-Responsibility Gap
In a prophetic article published in 2004, Andreas Matthias described the point at which the manufacturer of a machine is no longer capable of predicting the machine's future behavior, and thus can no longer be held morally responsible or liable for its actions. Matthias concluded that there is a spectrum of potential control and responsibility. He writes:

“In the beginning we find the programmer as coder, that is, someone who expresses the program (and thus the operating behaviour of the machine) line by line and statement by statement in a linguistic representation that can be executed directly by the machine (the statements of a programming language) … the programmer can rightly be held responsible for any misbehaviour of the machine [in that scenario].”

On the other end of the spectrum, Matthias says, the designer of a machine gradually transfers this control to the machine itself, and as a result, suboptimal behavior must be attributed to the machine rather than the designer. Agentic AI sits at this end of the spectrum.

We find a similar construct, shifting responsibility from the "operator" or client of the machine to the provider, or "manufacturer," when a customer hires a consultant employed by an agency to perform a task. The risk associated with the outcome varies with the service delivery model. You, as the operator/client, may hire the consultant simply to provide a report on how to do the task; this is like asking an LLM for output. Alternatively, you could operate under a staff augmentation model, where you provide explicit instructions and close oversight until the task is complete. If the agentic AI performs this way, much of the risk associated with the outcomes remains with you, the manager. However, if you hire the consultant on a managed services basis, giving the consultant free rein to perform the task and associated functions so long as they achieve a given outcome, the balance of risk shifts toward the provider. So, too, should risk shift to the provider of an agentic AI system capable of exercising that degree of carte blanche.

Regardless of the delivery model, some responsibilities cannot be shared or shifted between the parties, such as the product measuring up to its service description and documentation, or the customer's obligation to pay for the services. In the next section, we discuss some of those responsibilities, the risk allocation provisions typical of these contracts, and how they should be applied in the context of agentic AI.

Contract Considerations
The following table identifies key risks that can result from an agentic AI tool's independent actions, which interact with and influence environments outside the tool's interface, paired with contractual and operational tools you might consider to mitigate those risks:

[Table: Agentic AI Risks and Mitigation Steps]

Looking Ahead
Is agentic AI another paradigm-shifting piece of technology that will result in a frenzy of legal requirements and oversight? Maybe. But compared to the technological earthquake that preceded it, this is more a reverberating aftershock. We know from experience that an ounce of governance is worth more than a pound of risk acceptance. Unlike the first wave of contracting for AI, when legal frameworks scrambled to catch up, the agentic AI landscape offers a unique opportunity for proactive, anticipatory planning and institutional alignment. The challenge is not to constrain innovation but to create flexible, intelligent frameworks that allow transformative technologies to evolve while protecting fundamental human and organizational interests.