RISKWORLD 2024: Risk Professionals Discuss Top AI Concerns

Jennifer Post | May 8, 2024

On May 7, a panel of risk management professionals took the RISKWORLD stage to discuss the World Economic Forum’s Global Risks Report 2024 and how risk managers are handling those risks. On the panel were Carina Klint, chief commercial officer at Marsh McLennan Europe; Reid Sawyer, managing director and head of emerging risks at Marsh Advisory; and two RIMS board members, David Arick, managing director of global risk management at Sedgwick, and Kevin Bates, group head of risk and insurance at Lend Lease.

Starting the presentation with a few key points from the report, Klint explained how the top risks have changed over the years, from 2012, when the top risk was systemic financial failure, to 2024, when the top risk was misinformation and disinformation fueled by AI. “This is very much connected to the fact that AI has hit us hard,” Klint said. “AI has been around for several, several years, but in this past year, it has really become common knowledge but also very accessible.” AI-fueled misinformation and disinformation will impact elections and large economies this year and next, but spreading misinformation is not the panel’s only concern regarding AI.

“I think there are two core dimensions that we can start to tackle this question of AI in our work, and the first is one of those questions I am not sure we are asking enough of ourselves and our leaders, which is: what is the liability assessment that we are deploying as we start to use tools in our environments?” Sawyer said. As an example, Sawyer pointed out that almost all organizations are experimenting with AI, and even though some are working to make AI tools and models safe for use within their organizations, at the end of the day, it is still somebody else’s tool and model. Sawyer even mentioned a client that is using one AI model to run and deploy a different AI model.

Arick raised another issue: how to keep these AI models up to date and prevent falling into a pattern of “garbage in, garbage out.” He used the medical field as an example: if practitioners need the best medical journal research on the existing diagnosis of a particular disease, how do you keep the model properly updated to prevent conflicting information or diagnoses? “I think you can apply that to professional services and other business uses,” he said. “You need to have a governance structure around ‘how do we care for this little animal and make sure that we have controls around the outcomes and how we test those outcomes.’”

Bates added that it comes down to company culture and how leaders communicate risks to employees. There can be processes in place and covenants in the form of “thou shalt not,” but acting in accordance with them ultimately comes down to corporate culture and risk culture. However, deepfakes are real and are duping otherwise intelligent and diligent employees, and policies and a strong company risk culture only get you so far. Driving the point home, Bates reminded the audience, “Your CFO will never tell you to transfer money. That will not happen. Or the Nigerian prince.”

Jennifer Post is an editor at Risk Management.