The use of generative AI can create new and complex issues for businesses across all industries. One potentially overlooked area of concern for organizations occurs when acquiring a company that has begun using AI. Proper use of AI may significantly enhance business value, while improper use can be risky and sometimes fatal. Understanding, evaluating and addressing AI risk requires knowing which algorithm the AI uses, what data it accesses, how it is being used and who is using it. Obtaining and verifying this information often involves a deep analysis of the technology and business model in addition to a typical legal review. The following is a high-level approach to understanding and addressing many AI-related risks when acquiring a business.
Understanding the Functionality of the AI
At the outset, a buyer must understand the nature of the AI in question because AI is a broad concept encompassing many forms. It can involve a wide array of technologies, including logic, decision trees, machine learning and deep learning algorithms. Each of these works differently and has its strengths and weaknesses. To assess the risk, a buyer needs to understand the purpose for which the current or previous business owner designed, used and developed the AI system. Additionally, a buyer must know who owns the AI and at least the high-level aspects of its functionality. Is it licensed, unlicensed or open source? Has it been developed or enhanced in-house? How was it trained? Is it prone to hallucinations? Is there transparency as to how it operates and why it reached certain results?
It is also important to understand that, in many cases, the ability to access and use key data (and prevent the competition from doing the same) is what creates value for the target company, not the algorithm. Therefore, to assess value and risk in a transaction involving AI, a buyer must determine:
- Who owns the data that underpins the AI
- If the data is licensed
- If the data is subject to confidentiality or privacy restrictions
- If the data utilizes someone else’s intellectual property (IP)
- If the target can effectively isolate and protect the data to create a competitive advantage
Perhaps the biggest contributing factor to risk is the use case itself. AI can be used for everything from mundane to mission-critical tasks. Moreover, unlike technologies before it, AI can process information itself and sometimes even replace humans in the decision-making process. A buyer should be able to answer the following questions:
- How is the target using AI?
- Are those uses for high-risk purposes?
- Is it making decisions without oversight, or is there human involvement?
- Is it being used in a highly regulated industry?
- Does it impact key customer relationships?
- Is AI necessary to operate key operational or financial systems?
Assessing and Understanding Identified Risks
A buyer must also understand the many aspects and potential impacts of AI risks, starting with the legal and regulatory risks. For example, ensuring that AI models are not trained on proprietary data without permission is critical to avoid IP violations. Companies also need to ensure that the acquisition target has not been overselling its use of AI or the value derived from it in its interactions with investors or customers.
Buyers need to consider the operational risks of AI as well. Ensuring the accuracy and oversight of AI systems is crucial. AI algorithms must be regularly tested and validated to ensure they produce reliable results. Moreover, managing third-party vendors that provide AI solutions requires verifying their security and compliance standards. Failure to do so can result in security risks, operational disruptions and deficiencies in products and services.
Businesses also need to assess technology risks. AI systems must be built on high-quality data to function effectively. Poor data quality can lead to inaccurate outputs. Addressing inherent biases in AI algorithms is crucial to prevent discriminatory outcomes. Ensuring the explainability and transparency of AI decisions is necessary for accountability and trust. Additionally, robust cybersecurity measures are required to protect AI systems from cyber risks.
Once a buyer understands the potential risks associated with the target’s specific use of AI, it must assess what the target has done to mitigate those risks. With AI’s rapid, widespread adoption, the relative ease of access and use, and the lack of industry standards or guidance, the approach and measures adopted will vary tremendously by company. The seriousness with which the target has approached risk mitigation can substantially impact the company’s value. Indeed, a target’s inability to intelligently discuss its AI guardrails is a serious red flag.
A buyer should investigate whether the following measures are employed:
- Comprehensive AI governance framework, including development and deployment policies and procedures
- Stress-testing programs that evaluate AI system performance under various conditions
- Due diligence to ensure third-party vendors meet security and compliance standards
- Ongoing vendor compliance program, including the right to audit and keep appropriate records
A buyer should also examine the target’s compliance controls, including its process to ensure adherence to the rapidly developing regulations and laws governing AI use. Does the target regularly monitor and update its compliance programs? How does it ensure it adheres to contractual obligations and uses AI ethically?
A buyer should further consider the target’s business model and ask the following questions:
- Is AI a critical component of the company’s goods or services?
- Is it used in a highly regulated area or where the consequences of inaccuracy are significant, such as health care decisions?
- Does the target limit who uses the AI and for what purposes?
- Does it limit access to data outside of the company?
- Does it check for hallucinations?
- Does it use multiple methods to verify the accuracy of results?
- Do humans oversee decisions?
- Does the target have insurance for any of its AI-related activities?
Mitigating Transaction Risks
Next, a buyer should consider how to manage AI risks in the transaction process. The basic techniques for identifying and mitigating risks in an acquisition fall into the same broad buckets: due diligence, risk allocation in the deal documents (e.g., representations and warranties, indemnification, payment terms) and insurance. With AI, however, it is important to understand that mitigation may also involve changes to the business model and the technology itself after acquisition.
Careful due diligence of AI usage and capabilities is critical during transactions. Note that this will involve coordination between the legal, business and technology teams. Assessing each risk category can be highly detailed and complex, depending on the AI usage and the size and importance of the transaction. The process may be even harder than traditional due diligence as many companies may not yet fully understand the scope of potential risks their AI use creates and may be unable to answer specific questions. Even seasoned deal and integration teams may not fully appreciate the complexity and risk AI poses.
Representations and warranties are important. “Standard reps” are still developing as lawyers begin to understand the risk better. Too often, representations are overly broad and do not force the review and discussion of specific risks in the context of the transaction. A buyer should be wary of a target that uses AI in any meaningful way and readily agrees to an exceptionally broad representation. Also, a buyer should understand that traditional representations, like those involving product liability, may take on additional meaning and complexity where AI is involved. For example, the risk of an AI-enabled medical device that is constantly learning and evolving may be very different at the time of acquisition than it was a year earlier.
The appropriate allocation of risk and responsibility between parties in AI-related transactions is still developing. A buyer should consider obtaining representation and warranty insurance to cover potential AI-related claims. However, keep in mind that this insurance market is evolving as there has not been enough time for generative AI’s risks to manifest into actual damages subject to coverage.
It is also essential for a buyer to understand whether the target company's AI risks can be mitigated by changing how the AI is used after the acquisition is completed. For example, a buyer may already have robust training, compliance and quality controls to deploy in the new company. A buyer may also want to isolate the newly acquired company as a separate entity until identified risks can be fully assessed and mitigated.
The ultimate mitigation is knowing when to walk away from a deal. This, of course, depends on a full understanding of the risks. While some can be mitigated, a few risks stand out as red flags, such as AI use in high-risk industries without appropriate safeguards, unclear data rights, known privacy violations, significant security risks and failure to address IP issues in AI training and use.
AI offers significant benefits and opportunities and can greatly increase the value of a target company, but it also poses challenges and risks. Understanding the complexities of AI and leveraging a combination of risk mitigation strategies can help a buyer navigate these challenges and enhance the value of an acquisition.