Getting Smart About Artificial Intelligence

Neil Hodge

September 1, 2021

Artificial intelligence and machine learning have become so prevalent that many businesses now heavily rely on algorithms to make decisions that can impact our daily lives, from security scans and medical diagnostics to mortgage approvals and customer service interactions. Indeed, the technology has become so ingrained into routine operations that there are serious concerns that AI does not just facilitate processes, but often dictates them—without effective oversight, transparency or accountability.

There have been numerous high-profile incidents involving AI failures. In 2018, Amazon scrapped the AI-powered recruiting software it was developing after finding the system vastly preferred male candidates. The machine learning models at the heart of the system were trained on 10 years’ worth of resumes submitted to the company, most of which were from men, a reflection of the gender gap in the tech industry. As a result of that training data, the system learned that male candidates were preferable and started penalizing resumes that included the word “women’s” and even downgraded candidates from all-female colleges. In effect, the technology learned the existing gender bias in hiring and attempted to formally incorporate it into the system.

In a 2019 study in the journal Science, researchers found that a healthcare prediction algorithm used by hospitals and insurance companies throughout the United States to identify patients in need of “high-risk care management” programs was far less likely to select Black patients. The algorithm based its decisions partly on a person’s level of healthcare spending. As people of color in the United States are statistically more likely to have lower incomes and less health insurance coverage, the algorithm’s implicit bias determined that they would receive lower-quality care, or none at all.

Such incidents have drawn concern from regulators worldwide. Many believe increased scrutiny is necessary, and some have warned that rules—and tough enforcement measures—are likely to come soon.

On April 19, the U.S. Federal Trade Commission sent out a reminder to companies that it already has “decades of experience enforcing three laws important to developers and users of AI”—the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and Section 5 of the Federal Trade Commission Act, which focuses on unfair or deceptive acts or practices. The FTC warned that these three pieces of legislation would apply to the sale or use of biased algorithms; certain circumstances where an algorithm denies people employment, housing, credit, insurance or other benefits; or any instance where a person is subject to credit discrimination on the basis of race, religion, national origin, sex, marital status, age or status as a recipient of public assistance.

Just days later, the European Commission, the European Union’s executive body, unveiled a raft of proposals to ensure trustworthy AI that would apply to both the developers and users of artificial intelligence technology. If approved, the rules would have extra-territorial reach, applying if an AI system is used in the EU or if its use affects people in the region. The rules would be enforced through maximum fines of up to €30 million ($35 million) or 6% of total worldwide revenues for the previous financial year, whichever is greater.

In terms of oversight, the European Commission proposed a risk-based approach with four risk levels: unacceptable, high, limited and minimal. Any AI technology that poses an “unacceptable” risk to consumers and/or violates fundamental rights will be automatically banned. Examples include social scoring, technology that exploits the vulnerabilities of children, live remote biometric identification systems used in public places, and subliminal techniques such as behavioral advertising.

AI systems deemed “high-risk” will need to pass a “conformity assessment” prior to launch and follow stricter compliance requirements, including ensuring data set quality, improving technical documentation and record keeping, and providing better user information and transparency.

AI systems with “limited risks” will likely have to comply with specific transparency requirements—for example, companies must inform users that they are engaging with chatbots rather than humans. “Minimal risk” AI systems, which constitute the vast majority, can be developed and used without additional legal obligations.

Alex van der Wolk, partner in the privacy and data security practice at law firm Morrison & Foerster, warned that the threshold for what qualifies as AI is extremely low—so much so that “most data analytics and query tools will be required to meet this regulation.” This means that nearly all developers, users and companies building their own in-house AI tools will be subject to greater compliance requirements. 

There is still a long way to go before the EC’s regulation is approved or comes into force. However, the proposals send a signal from one of the world’s most active enforcement bodies that companies must think about how and why they use AI technologies, and what steps they should take if such systems could cause harm. Further, given that the EC has signaled plans to regulate AI with such strong sanctions and the FTC has been keen to remind companies of its enforcement record in this area, it is very likely that regulators in other countries will soon follow suit.

Lack of Understanding

Experts believe companies are largely ignorant about what they use AI for, how decisions generated by the technology are reached, or what their legal and ethical responsibilities for AI might be. As such, many have not rated AI compliance as a serious risk, and boards are overly reliant on IT taking ownership of what should actually be a management issue.

In May, global analytics software firm FICO’s State of Responsible AI survey found that, despite the increased demand for and use of AI tools, almost two-thirds of companies cannot explain how specific AI model decisions or predictions are made. Additionally, only 20% of respondents actively monitor their models in production for fairness and ethics. The study also found leadership awareness is low, with respondents reporting that only 39% of executive teams and 33% of board members have a complete understanding of AI ethics. Further, 78% of respondents said they have struggled to get executive support for prioritizing AI ethics and responsible AI practices. This lack of boardroom understanding or engagement translates into material ethical, regulatory and reputational risks that demand attention.

Experts warn that it is a mistake to underplay the seriousness and the significance of AI risk—or its potential liability. This applies not only to companies developing AI tools, but also those that simply use them. “The company offering its AI-based services is legally responsible for any bad or harmful decisions generated by the system,” said Tim El-Sheikh, CEO at AI tech firm Nebuli. “It does not matter whether or not the algorithms behind these AI systems were supplied by third-party providers, such as cloud-based AI tools. From an end-user’s point of view, you are liable for any harmful or destructive decisions caused by your AI-powered service.”

According to Andre Franca, director of applied data science at AI-tech company causaLens, one of the main problems is that many existing AI and machine learning technologies are “black-box” solutions where data is put into the system and a result is generated, but without any knowledge of how it was achieved. Consequently, companies using them are unable to “connect the dots” about how the technology reaches certain decisions. “If organizations are not able to understand how the machine reaches a decision, this can lead to disastrous and unfair outcomes,” he said.

Ensuring Trustworthy AI

To find out how AI technologies are supposed to work, companies need to ask a series of simple but critical questions before implementation: What is the algorithm supposed to do? How will it do it? What data will it use to reach an outcome? And is the data truly representative?

“Organizations must define the exact task they want an algorithm to perform,” said Alix Melchy, vice president of AI at online payments company Jumio. “This ensures they can explain the design of the AI system and the parameters that have been taken into account, and creates a solid framework of development and evaluation of an algorithm.”

As a first step, experts advise that companies comprehensively review their current and intended use of AI technologies. They should also conduct a risk assessment to check the quality and type of source data being used and if or where any bias may occur. For example, if a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups. The FTC’s formal guidance recommends that companies think about ways to improve their data set, design their model to account for data gaps and, if there are any shortcomings, limit where or how they use the model.
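
As a simple illustration of this kind of data review, the sketch below (in Python, using pandas) summarizes how well each group is represented in a hypothetical training set and how each group fares on the outcome. The column names and the 20% representation threshold are assumptions made for the example, not requirements drawn from the FTC guidance.

```python
# Minimal sketch of a data-representation check on historical training data.
# The "gender" and "hired" columns and the min_share threshold are illustrative.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, outcome_col: str,
                          min_share: float = 0.2) -> pd.DataFrame:
    """Summarize each group's share of the data and its rate of positive outcomes."""
    report = df.groupby(group_col).agg(
        share=(outcome_col, lambda s: len(s) / len(df)),  # share of all records
        positive_rate=(outcome_col, "mean"),              # e.g., hire/approval rate
    )
    report["under_represented"] = report["share"] < min_share
    return report

# Toy example: a heavily skewed historical data set
df = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,
    "hired":  [1] * 40 + [0] * 40 + [1] * 5 + [0] * 15,
})
print(representation_report(df, "gender", "hired"))
```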

It is also crucial to build trust in AI through fairness and explainability. As companies develop and use AI, they should think about ways to embrace transparency—for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening data or source code to outside inspection. They should also ensure that claims about what the algorithms can do are properly explained and are not exaggerated or misleading. AI solutions that have been built with “explainable AI” at their core can provide better assurance that the technology will not cause harm.
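
One widely used way to make a model less opaque is to measure how much each input feature contributes to its predictions. The sketch below uses scikit-learn’s permutation importance on a small synthetic, credit-style data set; the feature names, model and data are illustrative assumptions rather than a prescription from any of the frameworks mentioned above.

```python
# A minimal explainability sketch: permutation importance measures how much
# model accuracy drops when each feature is shuffled. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "age"]  # assumed
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```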

“Assistive AI” may also be a sensible option for some companies. According to Nick Chorley, EMEA director in the safety and infrastructure practice at AI tech firm Hexagon, retaining human oversight of AI decision-making capabilities may help organizations steer clear of regulatory noncompliance and potential legal troubles. “From a day-to-day standpoint, some AI technologies require that humans monitor the system continuously by reviewing and vetting each recommendation,” he said. “This approach, known as ‘assistive AI,’ keeps people in the driver’s seat as the decision-maker instead of technology. It equips first responders with more information and empowers them to act quicker, but it also gives them the ability to ignore irrelevant or unnecessary recommended actions.”
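
In practice, an assistive-AI workflow can be as simple as routing every model recommendation through a human review step before it is acted on. The sketch below illustrates the idea with a hypothetical review queue; the class names and fields are assumptions for illustration, not Hexagon’s implementation.

```python
# A minimal sketch of an "assistive AI" workflow: model recommendations are
# queued for a human reviewer and only acted on once explicitly approved.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    action: str
    confidence: float
    approved: Optional[bool] = None  # None means "awaiting human review"

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        self.pending.append(rec)

    def review(self, case_id: str, approve: bool) -> None:
        # A person vets each recommendation; the model never acts on its own
        for rec in self.pending:
            if rec.case_id == case_id:
                rec.approved = approve

    def approved_actions(self) -> list:
        return [r for r in self.pending if r.approved]

queue = ReviewQueue()
queue.submit(Recommendation("case-42", "dispatch extra responders", confidence=0.91))
queue.review("case-42", approve=True)   # the person, not the model, decides
print(queue.approved_actions())
```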

Another key concern—and one that is often overlooked or undervalued—is clarity from the company about who in the organization is responsible for its AI plans. Too often, organizations assign AI development, strategy and policies to the IT department. As both the use of such technologies and the resulting regulatory action remain nascent, AI is rarely seen as an organization-wide business or strategic issue that needs the input of other functions, let alone senior management. To succeed, AI strategies require not just in-house IT expertise, but also input from functions with complementary skillsets, such as legal, compliance, risk management and procurement. If the company has them, it is important to involve corporate social responsibility and ethics committees as well.

Companies should also be very wary about leaving algorithm development solely to tech teams because the end result is more likely to suffer from bias as a result of “groupthink.” According to Dr. Janet Bastiman, chief data scientist at compliance tech firm Napier AI and a committee member for the U.K.’s Royal Statistical Society Data Science Section, the biggest contributing factor to algorithm failures is a lack of diversity in the teams building them.

“All humans have inherent bias,” she said. “If you surround yourself with individuals with similar backgrounds and experiences, then you will get agreement on approach, similarity of thinking, and a lack of realization of different points of view. Teams of this type will be unlikely to identify bias issues in applications, models or even the data.”

Bastiman added, “This has been the key reason behind the majority of public AI fails in the past few years. Conversely, a team with diversity of thought and experience has a much higher chance of spotting issues with bias. They will be able to spot examples missing from the training data and provide a more critical analysis on the results as there will not be homogeneity of experience to accept them.”

Another frequent problem for companies is getting overwhelmed by how sophisticated and cutting-edge the technology is and forgetting to focus on simply getting the basics right. Companies should apply many of the “common sense” rules they use to address privacy concerns, said Peter van der Putten, assistant professor of AI at Leiden University in the Netherlands. “Keep it simple,” he said. “For example, from a privacy perspective, do you have consent from the customer to use their data for the objective for which you want to apply the AI? Can customers exercise their rights to information or rights to be forgotten? Are the AI tools explainable?”
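
A purpose-limitation check of the kind van der Putten describes can be expressed in a few lines: before a model runs on a customer’s data, confirm that the customer consented to that specific purpose. The consent records and purpose labels below are hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical consent records mapping customers to purposes they agreed to
CONSENTED_PURPOSES = {
    "cust-001": {"fraud_detection"},
    "cust-002": {"fraud_detection", "marketing"},
}

def may_process(customer_id: str, purpose: str) -> bool:
    """Only run the model if this customer consented to this specific purpose."""
    return purpose in CONSENTED_PURPOSES.get(customer_id, set())

for cid in ("cust-001", "cust-002", "cust-003"):
    if may_process(cid, "marketing"):
        print(f"{cid}: run marketing model")
    else:
        print(f"{cid}: skip - no consent recorded for this purpose")
```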

He also suggested that companies test AI assets, models and decisions before taking them live. They should also check for scope creep to see if the AI-driven systems are still operating within the limits of their original remit, as regulatory investigations often center on concerns that personal data collected for one purpose is then used for another without obtaining informed consent. In addition, companies should continuously test and assess the robustness, accuracy and algorithmic fairness of AI models, and work to minimize algorithmic bias by, for example, simulating predictions and decisions on historical data sets of customers or applications. Companies should also keep a central register of any incidents, provide a root cause analysis and, if applicable, document what measures were put in place to correct the issue. “Corrective action is important, but prevention is even more important,” van der Putten said.
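
The continuous testing and central incident register van der Putten recommends can start small. The sketch below compares approval rates across groups on historical decisions and logs an incident when the disparity crosses a threshold. The four-fifths ratio used here is a common illustrative benchmark rather than a legal standard, and the record format is an assumption for the example.

```python
# A minimal sketch of ongoing fairness monitoring plus an incident register,
# assuming historical decisions are available as (group, approved) records.
from collections import defaultdict
from datetime import datetime, timezone

def approval_rates(records):
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

incident_register = []   # central log of fairness incidents

def check_fairness(records, min_ratio: float = 0.8) -> None:
    rates = approval_rates(records)
    worst, best = min(rates.values()), max(rates.values())
    if best > 0 and worst / best < min_ratio:
        incident_register.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "rates": rates,
            "root_cause": "pending analysis",     # to be filled in after review
            "corrective_action": "pending",
        })

history = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 50 + [("group_b", False)] * 50
check_fairness(history)
print(incident_register)
```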

If an algorithm exhibits problems, companies must quickly take steps to prevent further harm. “If there is an inherent flaw with the AI tool, its further use should be put on hold,” said Ed Hayes, partner at law firm TLT. “If it is because of a problem with the data feed or incorrect use, steps should be taken to remove and replace problematic data to correct the learning and to constrain future use to ordinary parameters.”

If the use of AI has led to harmful decisions affecting individuals, Hayes said it will almost certainly be a data protection breach and potentially require rapid notification to both the regulator and those individuals, particularly in the EU. The duty to mitigate the damage would likely include reviewing other decisions made using the tool to check for any problems with them.

The company will also need to determine the nature of the resulting harm. “If bias in the way the AI works is causing discrimination against individuals based on protected characteristics such as sex, age or religious belief, under equalities legislation, checks will be needed to decide if that is because of an inherent flaw in the AI tool, because of bias in learning data provided, or because of human use of the tool,” Hayes said.

“It will be important to clarify whether a data protection impact assessment (DPIA) was conducted before deployment of the AI tool in question, and if any risk mitigations identified were put into place,” he added. “If a DPIA was not conducted, a new one should be undertaken as soon as possible if there is to be ongoing use of the tool.”

There is no doubt that AI is revolutionizing the way that companies conduct business operations, but like any other technology, it is not infallible. Companies must question the motive for adopting any AI solution and be able to justify why it is needed. They must also be clear with internal and external stakeholders about how it will be used and for what limited purposes.

Once live, the technology must be constantly monitored and tested. “AI models will inevitably make bad decisions,” said Adam Gibson, co-founder of Skymind, a tech firm that enables companies to build their own AI systems. “Treat AI as something that will break and need maintenance just like you would any other technology.”

Neil Hodge is a U.K.-based freelance journalist and photographer.