Though the use of artificial intelligence has grown steadily over the past decade, the recent release of OpenAI’s generative AI system, ChatGPT, has led to a surge of attention and publicity surrounding powerful generative AI systems.
With these systems come mounting issues and concerns around their use by technology service providers.
What is an AI system?
To demonstrate the function of these generative AI systems, we asked ChatGPT to answer the question, “What is an AI system, and how does it work?” Its response:
An AI system is a computer program that can perform tasks that normally require human intelligence, such as understanding natural language, recognizing objects in images, or making decisions. Generative AI is a type of artificial intelligence that involves the use of algorithms to generate new data, such as images, text, or audio, that has not been seen before.
If we submitted this query to a general search engine, we might receive a list of resources containing the term “AI system,” and would then have to review each resource and compile an answer from the information provided. The difference between conducting a general search and using generative AI is that the generative AI compiles the response for you, complete with correct syntax and wording responsive to the question.
How does an AI system work?
AI systems work by analyzing large amounts of data and extracting patterns and insights from them. The system learns to recognize patterns in the data and uses those patterns to make predictions or classifications about new data. In other words, artificial intelligence is basically an incredibly complex and exceedingly large decision tree, powered by statistical probability.
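To make the decision-tree analogy concrete, here is a minimal sketch in Python (assuming the scikit-learn library is available; the dataset is invented for illustration) of a model that learns a pattern from labeled examples and then classifies data it has never seen:

```python
# A minimal sketch of "learning patterns from data": a decision tree
# classifier trained on a toy, invented dataset (assumes scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [hours of daylight, temperature in F] -> season label
features = [[9, 30], [10, 35], [14, 75], [15, 80], [12, 55], [11, 50]]
labels = ["winter", "winter", "summer", "summer", "spring", "spring"]

# "Training" extracts the patterns that separate the labels.
model = DecisionTreeClassifier().fit(features, labels)

# The trained tree can now classify data it has never seen before.
print(model.predict([[13, 70]]))  # likely ['summer']
```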
More technically, these probability-driven predictions often take the form of Markov chains, a type of stochastic process. A Markov chain is a mathematical model that describes a system that transitions between different states over time. Its defining feature, the Markov property, is that the next state of the system is determined by a probability distribution that depends only on the current state of the system and not on any prior states. In natural language processing, Markov models are often used to model the probability distribution of words in a text. For example, a first-order Markov model predicts the probability of a word based on the previous word alone, while a second-order Markov model predicts it based on the previous two words.
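As an illustration, here is a minimal sketch of a first-order Markov model in Python. The transition table is invented for this example (it is not drawn from any real system); the point is that the prediction consults only the current word, never the earlier ones:

```python
# A first-order Markov model over words. The transition probabilities
# here are invented for illustration.
transitions = {
    "the": {"cat": 0.5, "dog": 0.3, "law": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
}

def most_likely_next(word):
    """Predict the next word from the current word alone -- the Markov
    property: earlier words in the sentence are ignored."""
    dist = transitions.get(word, {})
    return max(dist, key=dist.get) if dist else None

print(most_likely_next("the"))  # 'cat' (probability 0.5)
print(most_likely_next("cat"))  # 'sat' (probability 0.7)
```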
Markov chains provide a powerful framework for modeling and understanding sequential data in machine learning and AI applications. But it is important to note that AI systems need a significant amount of data to “train” the algorithm, that is, to derive the probabilities necessary to build the chain. In the case of a language model like ChatGPT, the application has been trained on vast amounts of text data and has learned to generate natural-sounding language by predicting the most likely next word or phrase based on the previous context. ChatGPT touts that it draws its material from a wide variety of sources and domains, including books and literature, webpages and articles, and social media and messaging. Much like the repetition required to train a dog, an AI system must experience repeated patterns in order to “learn” to produce the desired result. The source of this training data is a hot topic, and worth considering as AI becomes increasingly prevalent.
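To sketch what that “training” step looks like in miniature (with a made-up, ten-word corpus standing in for the vast text collections real systems use), the code below derives the chain’s probabilities by simply counting which word follows which, then generates text by repeatedly predicting the most likely next word:

```python
# A toy sketch of "training" a Markov chain: count word transitions in a
# (made-up) corpus, then generate text by repeatedly predicting the most
# likely next word. Real systems train on vastly more data.
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Training: count how often each word follows each other word.
counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

# Generation: from a seed word, repeatedly take the most frequent successor.
word, output = "the", ["the"]
for _ in range(5):
    if word not in counts:
        break
    word = counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the cat and the cat and"
```

Repetition in the corpus is exactly what shapes the probabilities: because “cat” follows “the” twice in this toy corpus, it becomes the chain’s most likely successor.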
What are some examples of an AI System?
We have grown accustomed to AI systems like “Siri” and “Alexa,” each of which can be fed queries and requests and, in turn, select responses and complete tasks from a closed list of possibilities. Other familiar AI uses include the text-completion function in certain email products, image recognition on popular social media sites that suggests location or personal tagging, and autonomous-driving or self-driving mechanisms in cars. And of course, AI chatbots have become familiar in a number of contexts.
Unlike the AI chatbots of yore, ChatGPT remembers the context of an ongoing conversation and can be prompted to follow deeper, second-level instructions, such as providing a response in a certain style or using certain defined terms. These generative capabilities have brought a number of new players onto the AI technology scene, but some more familiar service providers are also developing competing AI systems.
As these AI systems become more common in business settings, the reality is that using this technology is not without risk. In Part 2 of this series, we examine the legal risks of AI systems in technology services.
If you are interested in learning more about the contents of this series, or if you have questions about AI systems, Pillsbury’s team of technology transactions, intellectual property and regulatory counsel is available to discuss and support you on these issues.
Related Articles in the AI Systems and Commercial Contracting Series
Earning Your Trust: The Need for “Explainability” in AI Systems