Implementing a Policy for Employee Use of ChatGPT in the Workplace
The use of generative AI tools, like ChatGPT, is becoming increasingly popular in the workplace. Generative AI tools include artificial intelligence chatbots powered by “large language models” (LLMs) that learn from (and share) vast amounts of accumulated text and interactions (often snapshots of much of the public internet). These tools can interact with users in a conversational, iterative way with a human-like personality, and can perform a wide range of tasks, such as generating text, analyzing and solving problems, translating languages, summarizing complex content or even generating code for software applications. For example, in a matter of seconds they can provide a draft marketing campaign, generate corresponding website code, or write customer-facing emails.
As organizations explore potential use cases for generative AI, and as employees adopt the technology informally to assist with day-to-day tasks, it is important to create and implement a corporate policy governing such use. Most businesses likely already have employees using ChatGPT or other generative AI tools for work purposes, whether with management approval or informally, so this is an issue worth addressing now.
Why Is a Policy Necessary?
There are many reasons why it is prudent to have a corporate policy governing the use of generative AI tools. First, generative AI tools are not necessarily effective at ascertaining the quality of the information they digest, so their output may include inaccurate “facts” (known as “hallucinations”) or misleading, incomplete or biased analysis and conclusions. A clear policy will help ensure employees are aware of this risk and validate or vet all output.
Second, there is a risk that anything shared with publicly available generative AI tools is made available to others for purposes of quality control or debugging or, in some cases, is incorporated into the AI tool’s training dataset, which might mean that an employee’s questions or input could be used by the AI tool in constructing responses to prompts from other users. While some tools offer the ability to opt out of such uses of input data, it is not clear how effective this is in practice. Ensuring employees are aware of this issue and know in what circumstances (if any) personal data or confidential or proprietary information may be shared with generative AI tools will help the organization protect such information and remain compliant with confidentiality obligations and data privacy laws.
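To illustrate the kind of technical safeguard a policy might mandate alongside these rules, below is a minimal sketch of a pre-submission filter that redacts common patterns of personal data before a prompt leaves the organization’s systems. The patterns and names here are hypothetical examples, not a complete or legally vetted solution, and would need to be tailored to the organization’s own data.

```python
import re

# Hypothetical redaction patterns; a real deployment would need patterns
# tailored to the organization's data (client names, account IDs, etc.).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL REDACTED]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD NUMBER REDACTED]"),
]

def redact(prompt: str) -> str:
    """Strip common personal-data patterns from a prompt before it is
    sent to an external generative AI tool."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Please draft a reply to jane.doe@example.com about invoice 4521."
    print(redact(raw))
    # -> "Please draft a reply to [EMAIL REDACTED] about invoice 4521."
```

A filter like this cannot catch everything (free-text confidential information, for example, is much harder to detect), which is why the policy itself, and employee awareness of it, remains the primary safeguard.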
Third, by implementing a corporate policy, an organization can mitigate potential legal risks associated with the misuse of generative AI tools. A well-crafted policy will define the scope of permitted use and help reduce the likelihood of legal issues arising from employee misuse of the generative AI tools. This is similar to the approach often taken with the use of other company-provided IT and communication tools, as well as the use of the internet and social media by employees.
Fourth, a properly constructed policy can educate employees about the intellectual property ownership issues involved in creating content with generative AI tools, and shape use prohibitions and other sections of the policy accordingly. For instance, the U.S. Copyright Office currently takes the view that expressive works generated using an AI tool (which would include software code, images, marketing text, etc.) are not subject to copyright protection, because copyright law protects only human expression, not computer-generated works. This means that output created with generative AI may not be owned by the company under copyright law in the way the same types of works would be owned if generated by a human. Considering the implications of this in light of the different ways that employees or contractors may want to use generative AI tools is an important part of developing and implementing a policy that is consistent with the organization’s goals.
Finally, a corporate policy can help ensure that employees use generative AI tools effectively and responsibly, maximizing the benefits for an organization while minimizing potential distractions and inefficiencies.
Developing and Implementing a Corporate Policy
The following issues should be considered when preparing a corporate policy:
- Scope. A first step is to consider the specific purposes for which employees should be permitted to use generative AI tools. This may include drafting emails, generating reports, conducting research, or developing software code, among other tasks. By specifying the permitted uses, an organization can prohibit employees from using these tools for high-risk activities, such as making investment or employment-related decisions or deciding whether to provide services to individuals (which could run afoul of laws such as the GDPR if such decisions are solely automated without any human involvement). It is also important to ascertain who the target audience is for the corporate policy, and whether sub-policies should apply to specific teams. For example, there may be different concerns and risks associated with the use of generative AI tools by the HR team compared to software engineers.
- Set guidelines for data privacy and confidentiality. Establish clear protocols for employees to follow when handling sensitive information with generative AI tools (such as personal data and confidential or proprietary information). This may include guidelines for sharing such information or a clear rule that it must never be shared. It is also important to set out clear security guidelines that align with other security policies and practices, e.g., relating to storing generated content and deleting sensitive information after use.
- Train employees. Consider how the new policy will be communicated to employees and whether training is required. It may be prudent to provide training to ensure that employees understand the policy and know how to use generative AI tools responsibly. Training could cover the policy’s guidelines, as well as practical tips for using the tools in a way that aligns with the policy.
- Monitor compliance. As with all corporate policies, it is important to monitor employee use of generative AI tools regularly to ensure adherence. Consider implementing an auditing process to review generated content and employee interactions with the tools; a minimal sketch of one such audit log appears after this list. Employees should also be encouraged (or required) to inform colleagues when work product has been generated using generative AI so that the output is properly validated or vetted.
- Update the policy. As generative AI tools evolve, it is essential to review and update the corporate policy periodically to address new developments and potential risks. It may be helpful to identify a team or individual responsible for this review and to set a review cadence (e.g., every three months initially, then at longer intervals once use of the technology is better understood).
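As a concrete illustration of the monitoring point above, the following sketch assumes the organization routes employee prompts through an internal gateway rather than letting employees query the tools directly; the function and field names are hypothetical, and a real system would write to a centrally managed, access-controlled store rather than a local file.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit log; in practice this would feed a managed,
# access-controlled logging system rather than a local file.
logging.basicConfig(filename="genai_audit.log", level=logging.INFO,
                    format="%(message)s")

def audited_query(user_id: str, prompt: str, send_to_model) -> str:
    """Send a prompt to a generative AI tool via the supplied callable,
    recording who asked what, and what came back, for later review."""
    response = send_to_model(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }))
    return response

if __name__ == "__main__":
    # Stand-in for a real model call, so the sketch runs on its own.
    fake_model = lambda p: f"[model output for: {p}]"
    print(audited_query("employee-042", "Summarize Q3 sales notes", fake_model))
```

Recording both the prompt and the response supports the validation requirement discussed above, since reviewers can see exactly what employees submitted and what the tool generated.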
Conclusion
The adoption of generative AI tools like ChatGPT in the workplace presents both opportunities and challenges for organizations. While adherence will, of course, depend on the integrity of the employees to whom it is addressed, a comprehensive corporate policy governing employee use allows organizations to harness the full potential of these tools while seeking to mitigate potential risks and pursue compliance with legal and regulatory requirements.