From optimizing revenue to refining strategy and decision-making to safeguarding data, AI promises major benefits for businesses and their leaders. However, businesses and their corporate fiduciaries face a dilemma: Too little reliance on AI can leave a company behind its competitors and breach standards of care. On the other hand, relying too much on AI can damage a company’s operations and reputation, create significant risks and legal challenges, and threaten insurance coverage. To fully capitalize on the power of AI, it is essential to understand critical risks and the vital steps companies can take to mitigate them.
The Risks of AI for Businesses
AI is used in applications ranging from automobiles to health care to internet-of-things devices. Businesses and insurers have already benefited significantly from the speed with which AI generates and processes data. According to a recent report by the National Association of Insurance Commissioners’ Center for Insurance Policy and Research, “Traditional statistical models cannot handle the large quantity of data [that AI can process]. As AI can execute complex analyses and computations at a speed impossible for humans, it generates faster insights.”
While AI may minimize some risks, it also introduces new exposures, partly because AI depends on the massive amounts of data humans feed it. For example, a business training AI with sensitive personal information may face financial damage if the data is improperly used or disclosed. The effectiveness of merger due diligence can be eviscerated if a company blindly uses AI that cannot filter out inaccurate information. AI also introduces new public-facing service endpoints, creating more potential vulnerabilities for cyberattacks. Failing to take appropriate precautions against these attacks can lead to a breach of fiduciary duties and civil liability. Moreover, faster data processing means errors can accumulate more quickly, vulnerabilities become more complex and potential legal violations multiply. AI thus raises questions about legal exposure, risk management, adequate insurance coverage and the discharge of fiduciary duties.
Misuse of AI can also harm the cornerstone of any organization's business model: its reputation. For example, there is ample evidence of bias in AI-driven outcomes. AI trained primarily on data from Caucasian and male subjects may produce skewed or harmful results for people of color and women. This tendency has already led to discrimination in housing, financial lending and hiring.
AI systems can also experience malfunctions and failures resulting from improper maintenance, design defects or human error. These defects can lead to financial loss, property damage or bodily injury. For example, generative AI has a well-documented tendency to respond to prompts with “hallucinations”—plausible-sounding answers that are factually incorrect or misleading. AI could generate a description of a nonexistent product or provide dangerous product instructions. Such incorrect information may make companies liable for deceptive marketing or for injuries caused by defects in the AI components of their products.
Critically, AI developers keep the algorithms that underpin their technology under lock and key. This lack of transparency makes it hard to determine the cause of errors. Insureds, in turn, may not fully understand risks when purchasing AI products. Meanwhile, insurers cannot differentiate between unintended errors that would be covered and intentional acts that would be excluded from coverage. This confusion hinders risk assessment and accurate pricing.
The Impact of AI on Fiduciary Duties
Boards and management owe fiduciary duties to their company and stakeholders. The law generally recognizes that corporate directors and officers often make hard choices and avoids second-guessing decisions made with reasonable information. This core legal principle is called the business judgment rule (BJR). The BJR fundamentally protects decisions made in good faith that, in retrospect, may prove erroneous. This encourages innovation and allows fiduciaries to pursue high-risk, high-return decisions. Acts that fall outside BJR protection are decisions made without proper diligence or good faith, such as fraud, self-dealing and indecision.
If a board overrelies on flawed AI tools to make business decisions, it may lose BJR protection and breach its fiduciary duties. This is especially likely when the goals and values of the AI, the corporate fiduciary, the shareholders and the data subjects are not aligned. When a fiduciary uses a human consultant, the fiduciary can at least perform reasonable due diligence about that consultant’s training, background, experience and level of performance for other corporate clients. However, a corporate fiduciary relying on an AI tool is unlikely to know the background and expertise of the developers or why the technology reaches its conclusions. An AI system could recommend a course of action that leads to negative consequences, and the fiduciary will not be able to explain its rationale. For example, in 2010 and 2015, AI-managed ETFs caused crashes in the stock market, resulting in trillions of dollars in losses for investors, but human fiduciaries were left with the blame.
Finally, AI may adversely impact corporate fiduciaries’ other duties. The duty of supervision is a derivative of the duty of care and requires corporate leaders and the board to ensure that an adequate corporate information and reporting system is in place. A sustained or systematic failure of the board to exercise oversight may constitute bad faith for which fiduciaries may be personally liable. This duty to monitor and ensure adequate information and reporting systems certainly extends to AI systems. Thus, boards should implement reporting, information systems and controls that govern the company’s use of AI technology. Furthermore, passively deferring to an algorithmic decision may violate the duty of loyalty because such abdication constitutes bad faith. Legal experts have even suggested imposing a new fiduciary duty: an affirmative duty to protect sensitive information.
At the same time, failing to incorporate AI into decision-making could also constitute a breach of fiduciary duty. AI’s potential to create value will become an avenue for differentiation as management embeds AI ever deeper within operations. But with this demand comes increased scrutiny of AI practices from regulators and international authorities. For example, the SEC has pursued companies that make false and misleading statements regarding the use, value and risks of their AI systems, while the FTC recently signaled its intent to increase consumer protections for AI-related cybersecurity incidents. These regulatory changes will likely increase demand for AI insurance while simultaneously shaping what acts are excluded from coverage.
Experts have not yet determined how to remedy AI-related harm or apportion liability for it, which compounds the novel risks AI presents. Some have argued that AI is a judgment-proof agent and that the human principal who exercises control over the AI is responsible for the damages its AI agent causes. The question of whether the AI itself or a human is liable for harm, and if so which human, can have a massive impact on insurance coverage. Further, regulatory change in the United States and abroad could demand policy exclusions or affect what constitutes a regulatory violation that leads to a denial of coverage.
How Boards Can Mitigate AI Risk
To avoid the risks AI presents to compliance with their fiduciary duties, boards must supervise all AI-facilitated processes, ensure adequate data security controls and conduct at least annual reviews of AI vulnerabilities. Board oversight of AI risks and their mitigation is vital, so boards should actively engage in overseeing AI strategy and implementation.
A comprehensive AI governance policy should incorporate the following risk management considerations:
- Require regular risk assessments for AI systems to identify potential vulnerabilities
- Develop mitigation strategies and implement controls to safeguard against identified risks
- Monitor AI systems continuously for emerging risks (a brief illustrative sketch of such a check follows this list)
- Establish incident response plans to handle AI-related failures and breaches effectively
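To make the continuous-monitoring item concrete, the following is a minimal sketch of the kind of automated check a monitoring control might run on a schedule, assuming a model that outputs numeric scores. The function name, sample data and threshold are illustrative assumptions, not requirements drawn from any regulation or insurance policy.

```python
# Minimal drift check a monitoring control might run on a schedule.
# All names, sample scores and the threshold are hypothetical.
from statistics import mean, stdev

def drift_alert(baseline_scores, live_scores, threshold=0.25):
    """Flag the AI system for review when the mean of recent model scores
    shifts from the approved baseline by more than `threshold` baseline
    standard deviations."""
    baseline_mean = mean(baseline_scores)
    baseline_sd = stdev(baseline_scores)
    shift = abs(mean(live_scores) - baseline_mean) / baseline_sd
    return shift > threshold, shift

if __name__ == "__main__":
    approved = [0.62, 0.58, 0.65, 0.61, 0.60, 0.59, 0.63, 0.64]  # scores at sign-off
    observed = [0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75, 0.68]  # scores this week
    alert, magnitude = drift_alert(approved, observed)
    print(f"Needs review: {alert} (shift = {magnitude:.2f} baseline std devs)")
```

In practice, the results of checks like this would feed the incident response plans and regular board reporting described below rather than standing alone.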
Furthermore, AI governance policies should clearly define roles and responsibilities for oversight of AI use. Those assigned roles should include an AI ethics officer, data protection officer and AI project managers to ensure accountability and effective management of AI. Moreover, a company’s AI governance policy should establish an AI governance committee responsible for overseeing AI strategy, policy enforcement and risk management. The governance committee, oversight officers and AI project managers should provide regular updates and reviews to the board on the progress, challenges and compliance status of all AI initiatives. Upon receiving any such updates, it is important for boards to ask critical questions about AI’s strategic potential, risks and ethical implications. When overseeing AI use and establishing governance policies, boards should keep in mind the following ethical and legal considerations:
- Bias mitigation: Boards must implement measures to identify and mitigate biases in AI systems to reduce the potential for erroneous, harmful and discriminatory outputs (see the illustrative sketch after this list).
- Transparency: It is important for companies and their boards to maintain transparency and accountability in AI decision-making processes to help consumers understand and trust how AI systems work.
- Compliance: Boards must ensure compliance with data protection regulations and intellectual property rights. This includes establishing liability frameworks for AI-related errors and harms.
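As a concrete illustration of the bias-mitigation item above, the short sketch below applies one common heuristic, the "four-fifths rule" for disparate impact, to hypothetical selection decisions. The group labels, sample data and the 0.8 threshold are assumptions for illustration only; a real check would be developed with counsel and tailored to the applicable legal standard.

```python
# Illustrative disparate-impact check based on the "four-fifths rule."
# Group labels, data and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate, signaling the model needs review."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}, rates

if __name__ == "__main__":
    sample = (
        [("group_a", True)] * 50 + [("group_a", False)] * 50
        + [("group_b", True)] * 30 + [("group_b", False)] * 70
    )
    flags, rates = disparate_impact_flags(sample)
    print("Selection rates:", rates)
    print("Needs review:", flags)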
Using Insurance to Mitigate AI Risks
In addition to establishing a comprehensive AI governance policy, companies should explore insurance products that may provide coverage to further mitigate AI risks. Companies should consider errors and omissions (E&O) and other professional liability policies, which cover claims arising from negligence, errors and omissions in providing professional services. Directors and officers (D&O) policies cover corporate executives’ assets in the event of claims alleging mismanagement or breach of fiduciary duty. Finally, employment practices liability (EPL) policies provide coverage for claims related to discrimination, harassment or wrongful termination.
In these critical areas, businesses must avoid blind overreliance on AI. Yielding too much discretion to AI algorithms is likely to lead to denial of E&O, D&O and EPL coverage and to claims of breach of fiduciary duty. To avoid this pitfall, companies should prioritize AI models that not only aid in risk analysis and liability determination but are also transparent and explainable.
Traditional insurance products may soon become outdated if companies do not review and update them regularly. In the age of AI, substantial gaps in current coverage exist. For example, human error remains a driving cause of cyberrisk, and many policies do not cover incidents, such as fraudulent fund transfers, that improperly trained AI may worsen. Insurers may also exclude from cybersecurity coverage losses arising from AI trained on unlawfully collected data. Intellectual property insurance that protects against infringement claims arising from software or computer hardware may contemplate infringement from fixed code but not from an AI algorithm.
To better address AI threats, businesses need insurance policies that integrate machine learning to more accurately reflect patterns in business, make data-driven decisions and account for unknowns. Businesses may also want to consider specialized AI insurance. Some insurers have already begun offering new products tailored to the unique risks AI poses, and these may provide more comprehensive coverage than traditional policies. Such products can help cover hallucinations, algorithmic bias, regulatory investigations, IP infringement claims and related class action lawsuits. When obtaining coverage for AI use, companies should:
- Evaluate and potentially obtain AI-specific insurance coverage to address unique risks posed by AI
- Consider specialized AI insurance products for comprehensive coverage
- Regularly review and update insurance policies to reflect the evolving AI landscape
- Prepare for renewal discussions by articulating AI strategies, current uses and compliance status
However, these new products are not without their limits. As with cyber insurance before it, AI insurance premiums may be high and coverage limits low until insurers have more certainty about the risks AI presents. For this reason, insureds need to understand the technology that underlies AI. Furthermore, new AI-related insurance exclusions are sure to come. Insurers may soon seek to exclude from coverage:
- Losses stemming from intentional misuses of AI
- Standard software failures
- Cybersecurity breaches caused by vulnerabilities not accounted for by existing cyber insurance
- Non-compliance with data privacy and other regulations, particularly as the regulatory landscape of AI evolves
- Other unforeseen events caused by the unpredictability of AI behavior
These strategies are essential in the context of insurance renewal. Companies should prepare for renewal discussions by articulating their AI strategy, the types of AI they currently use, the credentials of those overseeing and monitoring AI use, and regulatory compliance status.
Getting the Most Out of AI
To make accurate and responsible AI decisions, directors and management must also be adequately trained in how AI works and, by extension, its potential for failure. When corporate fiduciaries understand AI's risks, they can select the AI structures that best fit their business needs and vulnerabilities and minimize the possibility of breaching their fiduciary duties.
To show that their use of AI products aligns with their fiduciary duties, at minimum, the board of any company should incorporate the following best practices into its AI framework:
- Every member of the board and executive team must understand how AI works, its different forms, and how the company uses it. This understanding should cover what data is used to train any AI the company deploys and should be reinforced by regular reports from the IT department.
- Risk-mitigation strategies should be institutionalized by creating an AI-specialized team across the organization. The business, legal, technology and public relations departments should all take responsibility for evaluating and mitigating AI risk.
- The board should include well-rounded perspectives on AI and even establish an AI committee that oversees AI risks and opportunities.
- Executives should understand what regulatory requirements apply to their AI infrastructure. Most importantly, disclosure protocols should account for how the SEC’s, FTC’s and various states’ data breach disclosure laws affect the company’s AI usage.
- Technological and HR mechanisms should be deployed to regularly monitor the performance of any AI controls, assess their impact on business indicators, and address any weaknesses at least annually and after any AI-related incident.
- Clear and consistent communication should be maintained with legal counsel, data security vendors, and internal IT and public relations teams to ensure compliance, minimize AI risk, standardize AI usage and proactively manage incident response.
- All AI protocols and procedures should be written and accessible, address ethical standards and be included in disclosures to regulators as appropriate.
- Corporate leaders should understand how the spectrum of AI risks affects their business profile and insurance needs, ensure the reliability of data used in AI, and implement a data security policy and other governance and oversight mechanisms that account for the threats AI poses.
- AI initiatives should be integrated into the organization’s strategic planning processes. Regular reviews should be conducted to ensure AI projects strategically align with business goals and objectives.
The Evolving AI Landscape
AI is growing rapidly, and AI tools and devices are likely to proliferate, be embedded in existing solutions and become commonplace. Large language models may develop into even more complex multimodal systems, deepfake technology will become more convincing, and more manual functions will be automated. The implication for legal and insurance frameworks is an increased rate of obsolescence. Regulatory change is also inevitable, whether in the form of more jurisdictions adopting comprehensive AI regulations or liability for AI harms being allocated to the humans who control the AI.
Thus, specialized AI risk insurance is vital to promoting growth in the AI field by giving businesses a financial safety net when they adopt AI technologies and facilitating a responsible approach to the risks presented by AI. By proactively addressing AI-specific insurance complexities with brokers and counsel, businesses can ensure adequate protection and customized risk mitigation strategies as they navigate this rapidly evolving technological landscape.