Practically speaking, the above approach to assigning ownership rights is a relatively easy drafting exercise that relies on standard terms used in outsourcing engagements: the customer owns all deliverables created in performance of the services under the agreement, excluding the vendor’s pre-existing IP, which in this case is the AI algorithm in its original state, prior to its introduction into the customer’s environment. The factors that lead customers to push for this allocation of IP ownership rights are more difficult to tackle. Customer considerations in this space are in many ways analogous to contracting around residual knowledge in the minds of vendor personnel who work for a particular customer, learn confidential information about the customer’s business and trade secrets, and then move on to work for others, including competitors of the original customer. When vendor personnel obtain knowledge from working on a customer’s account, they become more skilled at their jobs, and the vendor retains that benefit and passes it on to competing businesses. To counterbalance those vendor gains, in addition to standard confidentiality obligations, contracts may (1) restrict the vendor’s ability to reassign such personnel to competitors of the customer for a set period of time after the individual leaves the customer’s account, and/or (2) stipulate that such residual knowledge may not be used if the individual has intentionally memorized customer information in order to leverage it for the vendor’s or its other customers’ benefit. Would it be reasonable to take a similar approach with machine learning, permitting the vendor to use what its AI learns in the customer’s environment to advance the underlying algorithm, subject to fair compensation to the customer, while restricting any use of that learning for the benefit of the customer’s competitors?
Of course, the analogy to residual knowledge falters when one takes into consideration that an individual’s memory is fallible and does not allow for an exact duplication of the customer’s data or processes within the minds of vendor personnel. Machine learning models, on the other hand, perfectly preserve derivatives of customer data indefinitely, unless intentionally scrubbed or destroyed. In light of that fact, customers may negotiate for ownership of the knowledge gained by proprietary AI, even where the customer is not able to leverage or build on those learnings for any practical benefit in the future, or they may require that customer-specific models be destroyed at the end of the vendor relationship to address data privacy concerns. Are these the best strategies for customers? Or could machine learning, even learning that results from a customer-specific environment, be put to better use if retained by the vendor? Would the benefit of smarter AI truly accrue to the greater good, or would it simply shift to the vendor?
While it is worth considering the potential contracting strategies for ownership of machine learning-enabled knowledge as AI becomes more and more prevalent, the most important question for customers may be: does a single business owe anything to the larger community when contracting in the AI space? A company’s duty is first and foremost to its shareholders and to its clients, whose private data and unique business strategies it must protect above all else. If vendors seek to own the information harvested by machine learning for future use, the AI industry as a whole must establish a track record of addressing privacy concerns specific to the field, including fairly allocating liability for failures to do so and compensating customers for the benefits they contribute to the advancement of the vendor’s AI. Until then, the most rational choice may well be for customers to contract to own the fruits of machine learning; “one for all” ideals may just have to wait.