The Impact of AI on Insurance Underwriting

Neil Hodge

March 26, 2024

The increasing acceptance and adoption of artificial intelligence in the insurance industry promises to have a significant impact on insurers and insureds alike. The ability to analyze huge datasets quickly and effectively will allow insurers to understand risk as never before, leading to more accurate risk identification, improved underwriting and claims handling, and better premium pricing.

The technology does not come without risks, however, as important questions remain around the accuracy, fairness and security of AI-driven processes and decision-making. Therefore, insurers and risk professionals need to better understand the potential pitfalls of AI technology and take steps to ensure that the insurance purchasing process does not introduce greater risks than those it is intended to cover.

AI Bias in the Underwriting Process

AI can bring more precision to actuarial models and underwriting, allowing insurers to provide tailored coverage to their client base and bolster risk management. The technology can also improve risk assessment and underwriting by analyzing vast amounts of information from diverse data sources, including internal data such as historical claims and customer behavior, and external data such as litigation trends, market changes, extreme weather events and social media posts. This data enables insurers to establish a more comprehensive understanding of risk factors and thus allows for better and more specific underwriting decisions. Additionally, insurers can use AI algorithms to create more personalized insurance policies based on individual behavior, preferences and risk profiles, resulting in a more bespoke set of coverage options that should better satisfy customers’ needs.

Despite the benefits, experts warn that the insurance industry is not immune to the same problems associated with AI that have impacted every other sector—namely, the risks of bias, data misuse and data insecurity. As a result, risk professionals need to ask for more details about how AI is used when underwriting their company’s policies and what checks and balances are employed to ensure the accuracy of results.

According to Wilson Chan, CEO at AI fintech firm Permutable AI, it is “absolutely critical” to address the repercussions of biased data on AI systems within the insurance industry. “Companies often face inflated premiums and coverage restrictions due to insurers training their underwriting AI on limited or biased data,” he said. “The inherent nature of AI systems means that if the input data is biased, the decisions made by the AI will inevitably reflect those biases. To ensure fair treatment in insurance purchases, companies must engage insurers with crucial questions about the training data, its bias mitigation, and the transparency of AI-driven decision-making.”

Chan said insurers must ensure that AI systems are trained on representative, unbiased data, and that they regularly review and update those systems to eliminate biases. Insurers also need to provide transparency about how the AI systems they use function and which processes AI is used in. “By adhering to these measures, both companies and insurers can contribute to the fair and responsible use of AI systems in the insurance industry,” he said. “This commitment to transparency, unbiased data and ongoing vigilance is fundamental to fostering a trustworthy and equitable insurance landscape.”

To illustrate the risk of biased decision-making, Chan offered an example using flood risk insurance. “In this instance, AI models trained on historical data might unfairly impact companies in areas prone to increased flood risk, overlooking current climate patterns,” he said. “This could result in companies facing higher premiums or coverage limitations, irrespective of the mitigation measures they have implemented, such as building floodwalls or elevating properties above sea level.”
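To see how this plays out mechanically, consider the minimal pricing sketch below. It assumes a toy model keyed only to historical zone loss rates; the zone names, loss rates and loading factor are hypothetical illustrations, not any insurer’s actual model.

```python
# Illustrative sketch only: a toy premium model keyed solely to historical
# zone loss rates. All names and figures are hypothetical.

# Historical average annual flood losses per $1,000 of insured value, by zone.
# The data predates recent mitigation work, so it cannot reflect it.
HISTORICAL_LOSS_RATE = {"zone_a": 12.5, "zone_b": 3.2}

LOADING = 1.10  # expense and profit loading (hypothetical)

def quote_premium(insured_value: float, zone: str) -> float:
    """Price purely off historical zone losses; no mitigation feature exists."""
    return insured_value / 1000 * HISTORICAL_LOSS_RATE[zone] * LOADING

# Two zone A properties: one built a floodwall, one did not. Because the
# model has no input for mitigation, both receive the identical quote.
print(quote_premium(500_000, "zone_a"))  # mitigated property: 6875.0
print(quote_premium(500_000, "zone_a"))  # unmitigated property: 6875.0
```

Because the model has no input for mitigation, a floodwall simply cannot affect the quote, which is exactly the failure mode Chan describes.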

Other common types of business insurance may also be prone to AI bias. Business continuity insurance faces challenges when AI models—limited by data constraints—inaccurately assess a company’s risks based on industry or location. For example, a manufacturing company in a rural setting might encounter higher premiums due to insufficient data that fails to consider its robust supply chain relationships, remote operability or contingency plans for power outages. Similarly, AI bias can impact directors and officers (D&O) insurance because AI models trained on industry-specific lawsuit data could inflate prices and restrict coverage for companies operating in sectors prone to litigation and insurance claims, overlooking these specific companies’ clean legal records and key governance practices.

The historical data used to train AI systems can also be problematic, said Peter Wood, an entrepreneur and chief technology officer at tech recruitment firm Spectrum Search. Historical biases rooted in the data used in AI algorithms can adversely impact companies and lead to “skewed” risk assessments, especially in niche or emerging sectors where historical data may not accurately reflect current realities. “As AI systems learn from past data, they might assign undue risk to certain companies based on outdated or irrelevant criteria, leading to higher premiums and restrictive coverages,” he explained.

To counter AI bias concerns, Ryan Purdy, senior director and consulting actuary at tech and professional services firm Davies Group, said insurers need to understand the nature of any external data sources they intend to use for underwriting, including who provides the information in its root state, how it is updated and how often. “Data ages and can become less important to the assessment of risk or product suitability for a customer over time,” he said. 

Addressing AI Underwriting Concerns

Companies need to adopt proactive approaches when dealing with AI-driven insurance underwriting. The key is to engage in transparent dialogue with insurers. “Companies should inquire about the nature of data sets used for training the AI models,” Wood said. “It is essential to understand whether these datasets encompass a wide range of industries, including the latest trends and developments.”

He added, “Companies should ask insurers about the mechanisms in place to identify and mitigate biases. This includes questioning whether the AI systems are regularly audited for fairness and accuracy. Additionally, they should inquire about the possibility of manual reviews or overrides in cases where AI-driven decisions seem unjustly skewed.”

Due to the potential for flawed outcomes, companies need to ask more questions about how risks evaluated through AI technologies are assessed and priced. While regulators may be keenly watching insurers for possible abuses regarding the treatment of consumers, “there are fewer safeguards for corporate insureds that are viewed as ‘sophisticated purchasers,’” said Tom Davey, co-founder and director of litigation finance and insurance consultancy Factor Risk Management. As such, there is a greater need for companies to raise questions and concerns themselves.

According to Jeremy Stevens, EMEA business unit director at insurance services provider Charles Taylor Group, companies need to ensure that their insurers can guarantee transparency in their AI decision-making processes. To do so, he said, “companies can ask for explanations on how these models arrive at decisions affecting premium pricing, underwriting and claims handling.” Insurers, in turn, “should provide detailed documentation or reports that outline the factors and data inputs considered by AI models as these will help companies understand the rationale behind decisions,” he said.

Companies should make sure that their insurers maintain comprehensive audit trails that trace the decision-making process of AI models to ensure full accountability. “Insurers must comply with industry standards and regulations that govern AI in insurance,” Stevens said. “Companies can request information on how the insurer adheres to ethical AI practices and regulatory guidelines, so insurers must ensure their audit functions do not lag behind regulations.”
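As a rough illustration of what such an audit trail could capture, the sketch below logs a single underwriting decision as a replayable record. The field names and values are assumptions made for illustration, not an industry standard.

```python
# Illustrative sketch of an audit-trail record for one AI underwriting
# decision. Field names are assumptions for illustration, not a standard.
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, decision: str,
                 premium: float, reasons: list) -> str:
    """Serialize one decision so it can be replayed and reviewed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model version decided
        "inputs": inputs,                # the exact features considered
        "decision": decision,            # e.g., "accept", "refer", "decline"
        "premium": premium,
        "reasons": reasons,              # human-readable rationale factors
    }
    return json.dumps(record)

print(log_decision(
    model_version="uw-model-2024-03",
    inputs={"industry": "manufacturing", "region": "rural"},
    decision="refer",
    premium=18400.0,
    reasons=["sparse regional claims data"],
))
```

Persisting records like this lets a reviewer reconstruct exactly which model version saw which inputs if a decision is later challenged.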

Companies should also ask whether the insurer continuously evaluates and monitors the AI algorithms’ performance and how they arrive at specific decisions, and whether it regularly checks for biases, errors or changes in the data that might affect underwriting decisions. Other steps include checking that the insurer’s AI-based underwriting system complies with applicable data protection and AI laws, such as the European Union’s AI Act, the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA) and the U.S. Health Insurance Portability and Accountability Act (HIPAA), as well as various ethical standards. To better address these issues, companies can establish a collaborative relationship with their insurer. “Provide feedback on decisions and discuss how they align with your company’s risk assessment,” Stevens said. One simple form such a periodic bias check could take is sketched below.
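A minimal sketch of such a check, assuming decisions are logged per customer segment and using a hypothetical tolerance:

```python
# Illustrative sketch of a periodic fairness check: compare referral rates
# across customer segments and flag outliers. Data and tolerance are
# hypothetical.
from collections import defaultdict

def referral_rates(decisions):
    """decisions: (segment, outcome) pairs, outcome 'refer' or 'accept'."""
    totals, referred = defaultdict(int), defaultdict(int)
    for segment, outcome in decisions:
        totals[segment] += 1
        referred[segment] += outcome == "refer"
    return {s: referred[s] / totals[s] for s in totals}

def flag_disparities(rates, tolerance=0.10):
    """Flag segments whose referral rate strays from the overall mean."""
    mean = sum(rates.values()) / len(rates)
    return [s for s, r in rates.items() if abs(r - mean) > tolerance]

decisions = ([("urban", "accept")] * 90 + [("urban", "refer")] * 10
             + [("rural", "accept")] * 60 + [("rural", "refer")] * 40)
rates = referral_rates(decisions)
print(rates)                    # {'urban': 0.1, 'rural': 0.4}
print(flag_disparities(rates))  # both stray 0.15 from the 0.25 mean: flagged
```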

It is also important to understand what kind of tech support insurers are getting if they use third-party AI tools. “How often their data is captured is important, but insurers should also work to understand how long it might be until the next update of the technology is available,” Purdy said. “Are these future changes in data collection, data structures or technology versions going to force additional changes from the insurer side to keep making effective use of these technologies? Working to line up these providers’ development timelines to the insurer’s own timelines can alleviate substantial headaches in the future.”

Data security is another area of concern. Experts warn that companies could inadvertently make key risk information publicly available if insurers use or share their data on AI systems—which often retain rights to the intellectual property of any inputted data—when training AI technologies to improve their underwriting. Companies need to actively protect their risk data by maintaining confidentiality, sharing it selectively and enforcing contractual clauses for data protection, Wood said. They also need to vigilantly monitor the use of their data and check what cybersecurity measures the insurer has in place to protect it from potential breaches or misuse.

“Companies should demand clarity on how their data will be used and ensure that their information is anonymized before being incorporated into larger datasets,” Wood said. “This includes negotiating agreements that restrict the use of their data solely for underwriting purposes and not for training AI models. Insurers, for their part, must adhere to stringent data protection regulations and employ advanced encryption and access control mechanisms to prevent unauthorized data usage, too.”

He added, “Furthermore, there should be transparency about data handling practices. Regular audits and compliance checks can help maintain trust and ensure that both parties adhere to the agreed-upon terms regarding data usage and privacy.”
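On Wood’s anonymization point, the sketch below shows one common approach: replacing direct identifiers with a keyed hash before records leave the company. Strictly speaking, this is pseudonymization rather than full anonymization, and the secret key shown is a hypothetical placeholder.

```python
# Illustrative sketch: pseudonymize direct identifiers with a keyed hash
# before records are shared with an insurer. Keyed hashing is
# pseudonymization, not full anonymization; the secret stays with the owner.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-secret-held-by-the-data-owner"  # hypothetical

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"company_id": "ACME-4417", "claims_last_5y": 3, "sector": "logistics"}
shared = {**record, "company_id": pseudonymize(record["company_id"])}
print(shared)  # risk fields survive; the raw identifier never leaves
```

Keeping the key with the data owner means the insurer can still link records consistently without ever holding the raw identifiers.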

Neil Hodge is a U.K.-based freelance journalist and photographer.