As artificial intelligence applications continue to proliferate, jurisdictions around the world are increasingly considering new regulations to ensure that organizations use the technology responsibly. The resulting patchwork of laws will make compliance more complicated and, as a result, companies may be at serious risk of fines from multiple regulators over the way they process data in their AI-based systems.
The difficulty often stems from a lack of understanding of the regulations' scope. Many organizations mistakenly assume AI rules only target tech firms and other developers rather than those using the technology. As a result, they could ultimately be sanctioned by data regulators in different jurisdictions, as well as across different industries and areas of regulatory oversight.
Consider, for example, a financial services company's use of personal data. Its data practices must comply with any AI-specific rules and must also be in line with data protection and cybersecurity legislation, financial services legislation, and possibly consumer protection legislation. Ultimately, this means the same processes could be overseen by three or more regulators, depending on the jurisdiction, and there is a risk of parallel fines for the same violations.
A company’s AI use can also raise red flags with other regulators for business practices that may seem to have little in common with the technology. For example, according to Michael Cordeaux, senior associate in the regulatory and compliance team at law firm Walker Morris, businesses must be extremely careful with their marketing communications to ensure they do not make any misleading claims about AI capabilities. Such claims could be subject to sanctions from broadcasting, advertising and marketing watchdogs. Meanwhile, competition and consumer protection authorities may examine how firms use algorithms and AI systems to set prices, target consumers or make personalized offers.
“Businesses often assume that technology developers themselves are solely responsible for any AI-related issues,” said Hilary Wandall, chief ethics and compliance officer at data analysis firm Dun & Bradstreet. “In reality, any company using AI and machine learning in their operations is directly accountable. It is vital that all businesses using AI—whether in-house or through third-party solutions—understand how these overlapping regulations apply so that they can adhere to strict compliance standards.”
Addressing Compliance Challenges
The EU’s AI Act, which came into force on August 1, 2024, is the first major set of rules to explicitly govern AI use and allows regulators to impose fines of up to €35 million or 7% of worldwide group turnover, whichever is higher. While the legislation categorizes the level of risk associated with AI applications, it does not concern itself with every AI system, instead focusing on those deemed “prohibited” or “high-risk.” Companies need to judge what category their systems and AI use fall into and make any necessary revisions to ensure compliance.
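As a rough illustration of the exposure involved, the following hypothetical Python sketch (not legal advice) shows how the fine cap described above scales with group turnover and how a simple triage flag might mark the categories the legislation focuses on. The helper names, triage logic and example turnover figure are assumptions added for illustration.

```python
# Illustrative sketch only: sizing worst-case fine exposure under the cap
# described above (EUR 35 million or 7% of worldwide group turnover,
# whichever is higher) and flagging systems that warrant closer review.
# Category labels and triage logic are simplified assumptions.

MAX_FINE_FLOOR_EUR = 35_000_000
MAX_FINE_TURNOVER_RATE = 0.07

def max_fine_exposure(worldwide_turnover_eur: float) -> float:
    """Upper bound on the fine for the most serious violations."""
    return max(MAX_FINE_FLOOR_EUR, MAX_FINE_TURNOVER_RATE * worldwide_turnover_eur)

def needs_detailed_review(risk_category: str) -> bool:
    """Flag systems in the categories the legislation focuses on."""
    return risk_category in {"prohibited", "high-risk"}

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical group turnover in EUR
    print(f"Worst-case fine exposure: EUR {max_fine_exposure(turnover):,.0f}")
    print("Needs detailed review:", needs_detailed_review("high-risk"))
```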
In general, however, many countries’ existing data laws already have provisions concerning how personal data is collected, used, stored and shared that would apply to AI oversight and enforcement. The problem is that there are different understandings of what constitutes “personal data,” “data protection” and “consent” around the world.
According to Sarah Pearce, partner at law firm Hunton Andrews Kurth, while the concept of what constitutes lawful data processing differs under countries’ data protection laws, a common theme is that processing must be based on an individual’s consent, be necessary for the performance of a contract with the individual, or serve the legitimate interests of the relevant business. However, this can be difficult to demonstrate in the context of AI, “as purposes for processing tend to develop and change over time, meaning the original ground or basis may not always be sufficient for the processing by the AI system,” she said.
Similarly, it may also be difficult to meet typical transparency requirements regarding how personal data will be collected, used and shared. The very nature and technical complexities of AI “means that the purpose of the processing may well change over time, making it challenging to be fully transparent about how people’s personal data will be used,” Pearce explained.
Another problem is that companies will likely need to demonstrate they are not at fault for AI misuse, rather than a complainant having to prove fault, which means it is incumbent upon companies to explain their use of AI and their understanding of compliance. As such, Richard Kerr, senior director at Kroll, advised companies to set up AI functions with cross-disciplinary skills to enable proactive and reactive risk management, and to establish risk mapping, measuring, monitoring and governance policies and systems. “The outcome of a case will depend on the quality and robustness of the evidence, so having a strong and defensible strategy on how the AI was trained and its actions taken will become essential to successful outcomes,” he explained. “It is a question of degree, but, as ever, ignorance of the law is no defense at all.”
Companies should aim to integrate compliance and risk reduction at all stages of the development of products and services, largely by adopting the approach of “privacy by default and by design,” according to Alexander Roussanov, partner at law firm Arnold & Porter. Other steps to reduce AI compliance risks include implementing holistic risk analysis and compliance programs (as opposed to siloed policies and procedures) and putting robust vendor and partner due diligence, audits and template agreements in place.
Organizations should also track legislative developments in relevant jurisdictions because AI regulation is set to develop country by country, and details and compliance requirements may differ significantly, said Adam Penman, an employment lawyer at McGuireWoods. In addition, businesses need to comprehensively audit their existing and anticipated use of AI systems. “It often surprises users of AI just how prevalent the technology is already,” he said. “From recruitment and HR tools to accounting processes and marketing, low-level AI has been integrated into working life for some time now and may not be immediately apparent.”
He added that appointing a “go-to” AI person within the business who is sufficiently empowered to make decisions around navigating AI risk and who will be held accountable for managing those risks will also be increasingly important. Penman likened this to the recent rise in roles like dedicated data protection officers and money laundering reporting officers, similarly fueled by increased regulatory requirements, scrutiny and financial penalties.
As with money laundering and data protection regulation, effective AI risk management will place a premium on codifying processes. “There will never be a bad time to classify, assess and document the associated risk level of all AI to inform a risk mitigation strategy,” Penman said. “Documenting AI use in policy now is important to sit alongside existing data privacy governance policies—the key principle being one of transparency to the end user and customer. A holistic AI policy, setting out clear reporting and responsibility lines, will be key.” Companies that lean heavily on AI will also need to consider the insurance implications of their business model since “the potential liabilities and costs will need to be factored into business strategy,” he said.
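To make this concrete, the sketch below is a hypothetical Python illustration of the kind of AI-use register this advice points toward, in which each system is classified, assessed and documented with a named accountable owner. The field names, risk labels and example values are assumptions, not a prescribed format.

```python
# Hypothetical sketch of an AI-use register entry: each system is
# classified, assessed and documented so that risk levels, ownership and
# mitigations are transparent. Fields and values are illustrative only.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUseRecord:
    system_name: str          # e.g. a recruitment screening tool
    business_function: str    # HR, marketing, accounting, etc.
    vendor: str               # "in-house" or the third-party provider
    purpose: str              # what the system is used for
    personal_data_used: bool  # whether personal data is processed
    risk_level: str           # e.g. "low", "limited", "high"
    accountable_owner: str    # the empowered "go-to" AI person or delegate
    mitigations: list[str] = field(default_factory=list)

if __name__ == "__main__":
    record = AIUseRecord(
        system_name="CV screening assistant",
        business_function="HR / recruitment",
        vendor="third-party SaaS",
        purpose="shortlisting job applicants",
        personal_data_used=True,
        risk_level="high",
        accountable_owner="Head of AI Governance",
        mitigations=["human review of rejections", "annual bias audit"],
    )
    print(json.dumps(asdict(record), indent=2))
```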
Uncertainty Remains
One of the key problems for companies in terms of compliance is that there is little case law or regulatory enforcement history to provide guidance on what not to do.
Thus far, the GDPR and EU AI Act have relatively little to say about emerging risks around generative AI. “We are waiting for judgments on some of the new processes,” said Dr. Clare Walsh, director of education at the Institute of Analytics. “Traditionally, under international data protection laws, judges have not been afraid to use ‘algorithmic disgorgement,’ which requires companies to delete an algorithm that they have trained on illegally obtained data. This is important as a lot of time and money goes into training a machine. This threat of deletion has been both an effective punishment and a deterrent.”
However, algorithmic disgorgement has not yet been enforced against generative AI. “We are in a position where companies openly admit that they have used data illegally because they could not afford to obtain it legally. They argue that the end goal was worth breaking the law as nobody has been harmed,” she explained. “We have not actually had a definitive judgment yet whether algorithmic disgorgement is off the table for generative AI technologies. That is extraordinary considering how many companies have already embedded the technology into their business model.”
There is little doubt that regulators are more closely scrutinizing both AI use and misuse. The threat of multiple, parallel fines means companies need to review how they use the technology and understand the circumstances under which they could be held accountable. Failure to recognize their own responsibilities for safe AI use could pose a significant regulatory and business risk.
“It would be folly to think that only the tech firms are on the hook,” said Lee Ramsay, practice development lawyer at law firm Lewis Silkin. “Ignoring danger signs and failing to carry out usual compliance and risk management checks mean potential exposure to significant legal and operational risk. This, in turn, can lead to regulatory action, reputational damage and loss of consumer trust.”