Managing Legal Risks in AI Implementation

Chinh H. Pham, Samuel S. Stone


September 24, 2024

Integrating artificial intelligence and other emerging technologies has become a strategic imperative for companies aiming to expand their product offerings and remain competitive. Product development and legal teams must carefully navigate legal and regulatory challenges when planning and developing an AI tool to avoid common pitfalls. To implement AI and other emerging technologies seamlessly and effectively in products and internal company tools, company leaders must proactively manage risk by focusing on three overarching principles.

1. Aligning AI with Business Goals and Practices

Organizations deploying new technology should consider putting a team of executives, legal personnel and technology experts in place to align business goals with the management of potential risks.

The first step to achieving alignment is defining how the company will use AI to achieve its business goals. Will workers use AI tools internally to enhance productivity? Will the organization offer AI tools as products to consumers or other businesses? Will the company use AI tools to provide new or existing services? Failure to align an AI roadmap with business objectives and risk management practices often leads to wasted investment in low-value AI projects, missed opportunities to use AI for competitive advantage and unnecessary exposure to additional (sometimes catastrophic) risks.

2. Understanding and Prioritizing the Risk Involved

When integrating AI and emerging technologies into its business model, an organization must balance the benefits with careful consideration of legal and regulatory requirements to avoid potential pitfalls.

Conducting a risk assessment to identify potential legal, ethical and operational risks is good practice. For example, aim to understand the provenance of the data used to train and improve the AI tool, whether the AI tool can or should share data externally, and what guardrails are in place to ensure that the AI tool complies with applicable data security policies.

In addition, a compliance evaluation of the AI tool can safeguard the organization against potential risks. Documenting the evaluation process helps validate any underlying assumptions made and supports the organization in the event of an investigation or audit.

Once an organization identifies risks, prioritizing the mitigation of those risks can help guide the AI model’s development. Organizations must consider certain risks when implementing an AI tool, including:

  • Privacy and Security: Considering the privacy and security of data used with the AI model is necessary to safeguard sensitive information and prevent misuse or unauthorized access.
  • Ethical Implications: AI models can mistakenly learn biases. In regulated industries, basing a business decision (e.g., whether to approve a mortgage for an applicant) on the output of a biased model can result in illegal discrimination or other prohibited practices. Thus, it is important to establish procedures for auditing AI tools to detect and mitigate bias and prevent prohibited practices (a minimal illustration of one such check appears after this list).
  • Liability Concerns: Organizations must consider liability concerns with respect to the data and AI decisions. Clear policies and procedures for handling complaints and inquiries related to AI systems help address potential issues.
  • Intellectual Property: Clear data ownership and licensing agreements are essential to prevent unnecessary losses of IP assets and to limit exposure to IP infringement claims. When using third-party AI solutions, entering into clear licensing agreements clarifies ownership and usage rights.
  • Accuracy and Safety: Determining whether an AI output is accurate and safe is necessary for reliable technology deployment.
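
For the bias-auditing point above, the following sketch shows one simple check an audit procedure might include: comparing a model's approval rates across groups and flagging large disparities. The data, column names and the 0.8 threshold (the "four-fifths" rule of thumb used in some U.S. employment contexts) are illustrative assumptions only; a real audit would be broader and tailored to the applicable legal standard.

```python
# Minimal sketch of one bias check: compare approval rates across a
# protected attribute and flag a large gap. Hypothetical data and
# threshold; not a complete or legally sufficient audit.
import pandas as pd

# Hypothetical model decisions; in practice these come from the AI tool's logs.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate impact ratio

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; legal standards vary
    print("Potential adverse impact -- escalate for legal and technical review.")
```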

3. Establishing Trust in the AI Model

Organizations have to address several issues when establishing trust in an AI tool. The first step is to educate employees about the legal and ethical implications of AI tools through comprehensive training programs. Ensuring that the personnel involved in AI deployment understand their obligations under applicable laws and regulations is key. Similarly, fostering a culture of compliance and ethical behavior within an organization mitigates risks associated with AI misuse or negligence. Organizations that prioritize compliance are better prepared to navigate complex legal landscapes and avoid costly penalties and reputational harm.

Trust in an AI tool is greatly enhanced when it exhibits “explainability”—effectively articulating why specific data inputs produce specific data outputs or recommendations. By enhancing explainability, organizations can address user concerns and foster assurance in AI technology.
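
One common way to support explainability is to report which inputs most influence a model's outputs. The sketch below uses scikit-learn's permutation importance on synthetic data; the feature names and model are illustrative assumptions, and real explainability programs typically combine several techniques with plain-language documentation for users and regulators.

```python
# Minimal sketch of an explainability aid: rank input features by how much
# they influence model predictions, using permutation importance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical inputs
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A ranked, human-readable summary can feed user-facing explanations and audit records.
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]:>15}: importance {result.importances_mean[i]:.3f}")
```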

Organizations should also consider whether they will need insurance to cover unavoidable risks and liabilities, such as an AI system failure. By preparing for such challenges in advance, they can minimize the impact, ensure responsible AI implementation and increase stakeholder trust.

While AI technology can increase efficiency, human supervision often plays an important role in guiding AI behavior, minimizing errors and addressing ethical concerns. By involving humans in critical decisions, organizations can verify that AI systems align with organizational values and ethical considerations. Built-in human oversight of an AI tool may also be necessary for quality control.
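
As a minimal sketch of what built-in human oversight can look like, the routing logic below lets the system act on routine, high-confidence cases while sending low-confidence or high-stakes cases to a human reviewer. The threshold, categories and review path are assumptions for illustration; in practice these values should be set by the organization's governance process.

```python
# Minimal sketch of a human-in-the-loop gate: the model handles routine cases,
# while low-confidence or high-stakes cases are routed to a person.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value set by governance, not developers alone

@dataclass
class Decision:
    outcome: str       # e.g., "approve" / "deny"
    confidence: float  # model's estimated confidence in the outcome
    high_stakes: bool  # flagged by business rules (e.g., credit or hiring decisions)

def route(decision: Decision) -> str:
    """Return who acts on the decision: the automated system or a human reviewer."""
    if decision.high_stakes or decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # queue for a trained reviewer; log for audit
    return "automated"          # proceed, but keep the record for spot checks

# Example: a confident, routine case is automated; a high-stakes one is not.
print(route(Decision("approve", 0.97, high_stakes=False)))  # automated
print(route(Decision("approve", 0.97, high_stakes=True)))   # human_review
```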

Chinh H. Pham is an intellectual property attorney, co-chair of Greenberg Traurig’s venture capital and emerging technology practice and a member of the firm’s innovation and artificial intelligence group.


Samuel S. Stone is an intellectual property attorney and associate in Greenberg Traurig’s Boston office and a member of the firm’s venture capital and emerging technology practice and innovation and artificial intelligence group.