AI systems seem like an exciting, effective new tool. But, as we have seen with Google’s recent struggles with accuracy and Microsoft’s trouble with seemingly sentient, unhinged chatbots, not all of the kinks in these tools have been worked out.
In our last post, we discussed the legal risks of entering into agreements with AI vendors and the related contractual mitigants, but perhaps a more pressing question is whether one can trust AI systems in the first place.
Bias and Reliability
As some say, you are what you eat. AI systems eat up immeasurable amounts of data, and ultimately, AI outputs are only as good as the inputs they process. Inputs that are unreliable or biased in any manner will invariably produce biased or unreliable outputs.
As mentioned in our introduction to AI systems, an AI system’s decisions are driven by training data overlaid with probability-based decision-making. As a result, errors or idiosyncrasies inherent in the training data accumulate into errors in the results.
For example, an AI system used to screen job applications may inadvertently favor male applicants over female applicants if the system was trained on a dataset that contained more résumés from men than from women. Similarly, a facial recognition system may have higher error rates for people with darker skin tones, as the training data may have included fewer examples of darker-skinned individuals.
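To make the hiring example concrete, the minimal sketch below (hypothetical data and model, using Python and scikit-learn, not drawn from any actual vendor’s system) trains a simple screening classifier on a synthetic, historically skewed résumé dataset and shows that the model recommends equally qualified applicants at different rates.

```python
# Toy illustration (hypothetical data): how a skewed training set can lead a
# screening model to recommend equally qualified applicants at different rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "résumés": one qualification score plus a group label (1 = group A).
# Group A dominates the historical data, and its past hires dominate as well.
n_a, n_b = 900, 100
score = rng.normal(0.6, 0.2, n_a + n_b)
group = np.concatenate([np.ones(n_a), np.zeros(n_b)])

# Historical hiring decisions that (unfairly) favored group A.
hired = ((score > 0.5) & ((group == 1) | (rng.random(n_a + n_b) < 0.3))).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# Evaluate equally qualified applicants from each group.
test_scores = rng.normal(0.6, 0.2, 1000)
rate_a = model.predict(np.column_stack([test_scores, np.ones(1000)])).mean()
rate_b = model.predict(np.column_stack([test_scores, np.zeros(1000)])).mean()
print(f"Recommended-for-interview rate, group A: {rate_a:.2f}")
print(f"Recommended-for-interview rate, group B: {rate_b:.2f}")
# The model reproduces the historical skew even though qualifications are identical.
```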
Another example is an AI system that, when asked about the risks of certain medical procedures, draws on chat pages or message boards offering unqualified or misinformed advice on those procedures.
AI bias and unreliability can have serious consequences, particularly in applications such as medicine, hiring, lending and criminal justice, where biased or faulty decision-making can perpetuate discrimination or misinformation—and could result in running afoul of fraud or discrimination laws.
Mechanisms to avoid receiving AI services that are unreliable or discriminatory can be legal or operational. Operationally, one strategy for mitigating the risk is to have open conversations with the AI provider about its efforts to avoid AI bias and unreliability. A general understanding of how the AI technology functions (including whether inherent bias exists and whether the data inputs are vetted), and of any gaps in the provider’s procedures for avoiding bias or misinformation, allows customers to implement their own compliance mechanisms to close those gaps and to make more informed decisions that reduce or avoid the possibility of bias or unreliability.
Some elements a customer might consider building into its contract with an AI provider are:
- clear descriptions of the AI system specifications including non-discriminatory and fact-checking features and practices;
- representations and warranties that shift to the AI provider the burden of proving that discrimination or fraud did not occur; and
- indemnification obligations requiring the AI provider to cover claims that the AI system caused discrimination or produced factually incorrect results.
We recommend not only operationalizing internal controls, but also developing legal standards that address the above points.
Transparency and Explainability
Many AI systems are considered “black boxes,” meaning their decision-making processes are not transparent. This can make it difficult to understand why the AI system is making certain decisions or predictions, and it can be challenging to identify errors or detect problems.
“Explainability” is becoming increasingly important as AI systems are deployed in high-stakes applications. For example, if a self-driving car causes an accident, the ability to determine why the car made the decision it did, and whether that decision was reasonable, helps prevent future accidents and errors.
In addition to helping with accountability and transparency, explainability can also help developers and researchers to identify errors, biases and other problems with AI systems, and to improve the accuracy and reliability of the models.
To improve explainability, researchers and developers are exploring various techniques, such as creating models that are more transparent and interpretable, developing algorithms that can explain their decisions in natural language, and using visualization tools to help users understand how the AI is working. Two common techniques are feature importance analysis, which identifies which input features matter most to the model’s predictions, and decision rule extraction, which, as the phrase suggests, extracts human-readable decision rules from the model.
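As a purely illustrative sketch, assuming Python with scikit-learn and a hypothetical model and feature set, feature importance analysis might look like the following, where permutation importance ranks which inputs most influence a trained model’s predictions.

```python
# Illustrative feature importance analysis on a hypothetical screening model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in data; in practice this would be the model's real training set.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "age", "zip_code", "tenure"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much performance drops when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name:>10}: {importance:.3f}")
# A surprisingly influential feature (e.g., zip_code) can be a red flag for proxy bias.
```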
Admittedly, measuring the explainability of an AI system can be subjective, as it often requires human interpretation. One approach is to use surveys or user studies to evaluate the interpretability of the model. Another is to use complexity metrics, such as the number of parameters or the size of the model, on the theory that simpler models are generally easier to explain.
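Along the same lines, a minimal sketch of a complexity metric, again assuming Python with scikit-learn and illustrative data, compares a depth-limited decision tree to an unconstrained one by counting nodes and depth, on the premise that simpler structures are easier for a human reviewer to follow.

```python
# Crude complexity metrics as a rough interpretability proxy (illustrative only).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
complex_ = DecisionTreeClassifier(random_state=0).fit(X, y)

for label, tree in [("depth-limited tree", simple), ("unconstrained tree", complex_)]:
    print(f"{label}: {tree.tree_.node_count} nodes, depth {tree.get_depth()}")
# Fewer nodes and a shallower depth generally mean the model's rules are easier
# for a human to walk through, though the count is only a proxy for explainability.
```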
Customers utilizing these tools may have difficulty controlling for AI explainability. To mitigate the risk, customers should consider requesting references from their AI service providers, to learn from other customers whether the AI services function in a clear and transparent manner, and implementing frequent testing of results so that a human is assessing the quality of the output. More broadly, given the burgeoning uses of AI systems, it may not even be apparent when AI systems are actually being used. To avoid unknowingly leveraging products that are subject to the risks and issues we have identified related to AI systems, consider requesting clearer technical descriptions of products and of any machine learning they employ. Also consider preemptively building standard machine learning and artificial intelligence requirements into your technology and professional services master agreements.
While AI systems hold immense promise, their risks and limitations cannot be ignored. Bias, unreliability and lack of transparency are just some of the issues that need to be addressed when considering the use of AI systems. It is important for customers to have open discussions with AI providers about their efforts to mitigate these risks and to understand how the technology functions. Pillsbury’s team of attorneys and consultants familiar with AI systems can assist with due diligence, contracting and consideration of the technology. By taking these steps, customers can reduce the possibility of bias or unreliability, promote accountability and transparency, and ultimately make more informed decisions about the use of AI systems.
Related Articles in the AI Systems and Commercial Contracting Series
AI Systems Adoption: Finding a Balance in Regulated Industries
Artificial Intelligence Systems and Risks in Commercial Contracting