Crafting Policies to Address Generative AI

Generative AI

This year’s news has been saturated with reports of “generative” AI innovations. These tools can generate code, text, images, and other materials in response to prompts, queries, or other inputs. The technology can accelerate and simplify research, writing, coding, graphic design, and other kinds of content creation and organization. But as with any emerging technology, rapid and widespread adoption of these tools, against an ever-shifting legal and regulatory landscape, carries potential risks.

To manage those risks, companies are implementing acceptable use policies (AUPs) for third-party generative AI tools. These policies educate employees, govern initial use cases, and help ensure the quality, legality, and accuracy of output, particularly when generated content is published or otherwise used publicly.

When creating an AUP for generative AI, care and collaboration among multiple stakeholders are essential. Each policy should be tailored to the organization’s unique needs, goals, culture, and risk tolerance, taking into account its industry and any applicable laws or regulations. These policies should remain adaptable so they can keep pace with this rapidly advancing technology and a changing legal climate.

AUPs: Considerations

For many companies, developing a generative AI AUP should begin with the following considerations.

  1. Tailor the AUP to the organization. An AUP for generative AI should reflect each organization’s specific needs, values, and culture. Consider the organization’s priorities, desired objectives, and risk threshold when setting policy. The nature of the business and its user base will shape what the policy should look like. Involving key stakeholders from multiple departments helps produce a thoughtful policy that incorporates varied perspectives and use cases, and also builds support for the policy.
  2. Organizations may need to adjust their approach to third-party AI tools to keep pace with the rapid development of those tools and of the laws that apply to them. Consider creating subsidiary implementation documents that are easier to update, such as a list of approved third-party applications, preapproved or prohibited use cases, or categories of information that must not be shared with third parties.
  3. Human oversight is crucial for addressing potential errors in AI tools. A well-defined AUP should require human review and oversight of AI-generated output, given the possibility of factual errors or “hallucinations.”
  4. Consider the regulatory climate. AI laws are being proposed and enacted quickly, both in the US and internationally, addressing topics such as automated decision-making, algorithmic bias, and transparency. Laws that are not AI-specific also govern how AI tools may be used and the potential consequences of their use. AUPs can help organizations understand how applicable laws apply to the use of AI, providing guidance and resources to help users comply or directing them to seek legal advice before a use is permitted.
  5. Ensure AI use is ethical and responsible. Companies may want to include provisions in their AUP addressing topics such as transparency, privacy protection, accountability, and bias, even beyond what might be legally required.
  6. Take the use case into account. Depending on the nature of the application, some uses of generative AI are riskier or more issue-prone than others, so the policy may need to be tailored accordingly. For instance, a more lenient approach might suit purely internal uses where the output will not be circulated externally or incorporated into products or creative source material. In contrast, applications involving automated decision-making, or in areas demanding a high level of accuracy, will likely require stricter rules.
  7. Organizations should take any contractual requirements into account when using generative AI and address them in their AUP. Doing so helps avoid unfavorable outcomes around IP and privacy compliance, and helps ensure that a given tool’s terms do not impose restrictions that would be hard to abide by, such as limits on input sources or on how output may be used. To that end, each tool should be approved before it is put into use.
  8. Organizations should include guidance in AUPs to ensure compliance with privacy policies when using AI tools. Employee policies may, for example, require prior approval before personal information is included in text prompts or other inputs.
  9. To mitigate the risk of intellectual property (IP) infringement, organizations should incorporate provisions in their AUPs. Provide tailored, practical guidance that aligns with the specific use cases and deployment methods of the generative AI tools in question, so that employees understand the potential IP risks and have clear guidelines for complying with IP laws and regulations.
  10. There are also IP-protection risks associated with the outputs of generative AI. Courts have held such outputs to be non-patentable, and the U.S. Copyright Office has taken a restrictive stance on copyrightability, meaning that most such outputs may not be protected by copyright law. This lack of protection should be kept in mind when crafting AUPs, and provisions on the use of generative AI in matters involving IP may be warranted. Confidential information and trade secrets should also be addressed in AUPs.
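Several of the considerations above (approved tool lists in item 2, higher-risk use cases in item 6, and restricted data categories in items 8 and 10) can be captured in a machine-readable companion document, so the lists can be updated without redrafting the AUP itself. The sketch below is a minimal, hypothetical illustration in Python; the tool names, categories, and the `check_request` helper are assumptions for illustration, not part of any real policy.

```python
# Hypothetical machine-readable companion to a generative AI AUP.
# All tool names, use cases, and data categories are illustrative only.

APPROVED_TOOLS = {"ExampleChat", "ExampleCoder"}            # item 2: approved third-party apps
PROHIBITED_USES = {"automated_decision_making"}             # item 6: higher-risk use cases
RESTRICTED_DATA = {"personal_information", "trade_secret"}  # items 8 and 10: needs prior approval

def check_request(tool: str, use_case: str, data_categories: set[str]) -> list[str]:
    """Return a list of policy issues for a proposed use; an empty list means allowed."""
    issues = []
    if tool not in APPROVED_TOOLS:
        issues.append(f"tool '{tool}' is not on the approved list")
    if use_case in PROHIBITED_USES:
        issues.append(f"use case '{use_case}' is prohibited")
    for category in data_categories & RESTRICTED_DATA:
        issues.append(f"data category '{category}' requires prior approval")
    return issues
```

For example, a request to use an unapproved tool for automated decision-making with personal information in the prompt would come back with three issues, while an approved tool drafting marketing copy from non-sensitive inputs would come back clean. Keeping these lists separate from the policy text makes routine updates a configuration change rather than a policy revision.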

The takeaways

Generative AI has the potential to be a transformative force for many businesses. Companies should therefore assess their internal requirements and the possible applications of generative AI, including by creating appropriate employee policies regarding its use. No uniform approach is suitable here; tailored strategies are needed to manage risk effectively. Perkins Coie has a skilled team of lawyers available to provide input and advice on drafting such policies.

The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of The World Financial Review.