
In a climate where tech optimism is high, understanding is low, and regulatory oversight lags behind innovation, companies constantly seem to be making bold proclamations about how “next-generational artificial intelligence” has transformed their business and will push them ahead of competitors. Unfortunately, claims that seem too good to be true often are. Indeed, many businesses are misleading their audiences or even outright lying about the extent of their AI use and the true capabilities and benefits of the technology.
For years now, many companies have been “AI washing”—a deceptive marketing tactic of exaggerating or fabricating the level of development or implementation of AI solutions to give the impression the technology is more pivotal than it is. According to Carrie Osman, CEO and founder of business consultancy Cruxy, AI washing is “a symptom of a market gripped by hype and starved of scrutiny” that is exacerbated by boards under pressure to “do something with AI.”
“Every company wants to say they are using AI, but the hype has started to outpace the truth,” said Ravi de Silva, founder of compliance advisory firm De Risk Partners. “We have seen this before with other tech cycles. There are big promises, fast funding and not enough oversight. The difference now is the speed of adoption and how deeply AI is being tied to valuations and business models.”
Companies that engage in AI washing tactics can face serious consequences. By either deliberately or inadvertently misleading customers, investors and regulators, businesses expose themselves to increased risk of litigation and regulatory scrutiny and can needlessly increase their compliance costs as they provide assurance on risks that the technology does not actually pose.
Regulatory Scrutiny Ramps Up
In the United States, investor appetite to pour money into the “next big thing” has led the Securities and Exchange Commission (SEC) to step in to investigate false AI claims. In March 2024, the SEC issued its first enforcement action for AI washing, charging two investment firms with misleading clients and potential investors for years by pretending that their investment strategies were driven by a proprietary deep-learning AI model.
In June 2024, the regulator also charged Ilit Raz, the CEO of now-defunct AI recruitment firm Joonko, with conning investors out of at least $21 million after enticing them with falsified testimonials about what the technology could supposedly do. According to a statement by Gurbir S. Grewal, director of the SEC’s Division of Enforcement, Raz allegedly “engaged in an old-school fraud using new-school buzzwords like ‘artificial intelligence’ and ‘automation.’”
In April 2025, the FBI and the U.S. Attorney for the Southern District of New York charged Albert Saniger, the former CEO of e-commerce firm Nate, with attempting to defraud investors out of $40 million by allegedly inflating claims about the supposed AI capabilities of its shopping app. In pitch materials sent to investors and venture capital firms, Saniger claimed the Nate app was “fully automated based on AI” and “able to transact online without human intervention.” According to the charges, the app’s actual level of automation was effectively zero; it relied on bots and hundreds of workers at a call center in the Philippines who manually processed transactions.
Even major technology companies have been exposed. In 2024, Amazon was lambasted after reports questioned the AI capabilities of the “Just Walk Out” technology installed at many of its Amazon Fresh and Amazon Go grocery stores. The system enables customers to pick up their items and leave, with AI sensors supposedly working out what each shopper chose and automatically billing them. Unfortunately, the cutting-edge tech also relied on around 1,000 workers in India to manually check almost three-quarters of the transactions. In response, Amazon said the reports were “erroneous” and that the staff in India were reviewing the system, not watching video footage from every store.
Meanwhile, Apple, Google, Microsoft and Samsung have been revising or retracting some of their claims about their latest AI products in response to a probe by the National Advertising Division of BBB National Programs, a nonprofit industry self-regulatory organization, into whether marketers are overstating the capabilities or availability of AI features.
The Federal Trade Commission (FTC) has also been monitoring how companies are using AI to mislead and lure consumers or investors into bogus schemes, and how the tech is being used to “turbocharge” fraud. In September 2024, it initiated a crackdown on misleading AI claims called “Operation AI Comply,” launching legal actions against various companies, including one that promoted an AI tool that enabled its customers to create fake reviews, another that claimed to sell “AI lawyer” services, and others that claimed they could use AI to help consumers make money through online storefronts.
The agency made it clear that this kind of AI-driven fraud will not be tolerated. “Using AI tools to trick, mislead or defraud people is illegal,” said then-FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”
Business Risks of AI Washing
While regulators have primarily focused on the risks to consumers and investors thus far, AI washing also poses several risks to the companies that engage in it. The most obvious is the loss of public trust in their products and brands when claims do not stand up, which can depress sales and increase the threat of litigation and regulatory scrutiny.
Even unintentional misstatements can carry major consequences. “The legal risks are significant,” said Maryam Meseha, partner and co-chair of privacy and data protection at law firm Pierson Ferdinand. “If a company’s disclosures, whether in pitch decks, investor reports or public filings, materially misrepresent the role AI plays in their business, they may be liable for securities fraud, consumer deception or even FTC violations. We are already seeing the SEC scrutinize AI-related statements, and I expect enforcement to increase alongside new AI-specific disclosure rules.”
It is important to note that companies are not the only ones that may face legal action—individuals within a company could also be sued, fined or face other regulatory sanctions for AI-related misstatements. “In investor agreements, there will often be warranties and possibly indemnities around disclosed information and breaching those terms could lead to damages claims—not just for the organization, but potentially for individuals too,” said Melissa Hall, senior associate at law firm MFMac.
Another risk is that companies could waste time, money and energy investing in solutions that do not deliver the benefits promised, making them wary about investing in future technologies and strategies that could actually pay off. “By chasing after hyped-up solutions, businesses may overlook genuine AI technologies that could truly drive efficiency and innovation within their operations,” said Chris Carriero, chief technology officer at data tech firm Park Place Technologies. “The focus on superficial AI can blind organizations to the transformative potential of well-implemented AI and ultimately slow down their ability to adapt and thrive in a competitive landscape.”
Another consequence of companies erroneously saying or believing they are AI-enabled is that they could be “unintentionally saddling themselves with a raft of compliance obligations under AI regulations,” adding to their costs for no apparent gain, said James Clark, data protection, AI and digital regulation partner at law firm Spencer West.
For example, the EU’s AI Act, which will be fully implemented by August 2026, states that “customers and regulators may legitimately assume that products referred to as ‘AI’ or ‘AI enabled’ constitute regulated AI systems.” Under the legislation, companies deploying AI need to categorize how the technology is being used and assess the level of risk it could pose to the user. The higher the risk and potential for harm, the greater the company’s compliance requirements will be. Adequately meeting these requirements can involve risk mapping to categorize AI applications according to the act’s risk levels (unacceptable, high, limited and minimal) and conducting impact assessments to evaluate how the AI processes data and whether the outcomes could lead to prohibited uses. To meet transparency obligations, companies will need to document decision-making processes, ensure human oversight where required, and implement audit trails. This could result in significant compliance costs, particularly up front.
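As a purely illustrative sketch of what this kind of risk mapping can look like in practice, the short Python example below inventories a few hypothetical AI systems against the act’s four tiers and lists the controls each would trigger. The system names, tier assignments and control lists are invented for illustration and are not legal guidance.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers named in the EU AI Act (illustrative mapping only)
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical controls a compliance team might attach to each tier
CONTROLS = {
    RiskTier.UNACCEPTABLE: ["discontinue use"],
    RiskTier.HIGH: [
        "impact assessment",
        "human oversight",
        "audit trail",
        "documented development and validation records",
    ],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

@dataclass
class AISystem:
    name: str        # internal system name (invented)
    purpose: str     # what the system is used for
    tier: RiskTier   # assigned tier (requires legal review in practice)

# Invented example inventory
inventory = [
    AISystem("resume-screener", "rank job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answer customer FAQs", RiskTier.LIMITED),
    AISystem("spam-filter", "filter inbound email", RiskTier.MINIMAL),
]

# Print each system's tier and the obligations it triggers
for system in inventory:
    obligations = CONTROLS[system.tier] or ["no specific obligations"]
    print(f"{system.name} ({system.tier.value} risk): " + "; ".join(obligations))
```

The point of the exercise is simply that every system a company labels “AI” maps to a documented tier and an explicit list of obligations, which is also the paper trail customers and regulators are likely to ask for.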
Additionally, if a company uses the term “AI” when describing its products or services, “it may trigger a greater level of scrutiny from its corporate customers and third-party suppliers who will want additional assurances—and potentially stricter contractual terms and liability positions—to address perceived AI-specific risks,” Clark said. As such, companies whose AI capabilities are minimal or nonexistent may be investing in sophisticated—and costly—AI compliance and governance programs unnecessarily.
Clearing Up Confusion and False Promises
At the heart of the problem is the lack of a universally accepted definition of AI and an inconsistent approach to regulation or enforcement. According to Ryan Gracey, technology and AI lawyer and partner at law firm Gordons, this “further fuels ambiguity and allows companies to stretch the term to cover a wide range of technologies, some of which do not meet the generally understood criteria for AI.”
The most basic definition of AI is technology that can learn and evolve from massive amounts of data. But because some AI-associated technologies have been around for many years—semantic search and natural language processing go back at least a decade, for example—the term has become so broad that some businesses are even including technological processes as simple as automation under the banner of AI. Even in the European Union—which has led the way in regulating the technology through the EU AI Act—the definition of what constitutes AI is open-ended. As a result, most corporate claims around AI “have enough of a basis in fact to evade regulation,” said Tim Rosenberger, legal policy fellow at the Manhattan Institute.
Experts believe that the main reason companies tend to embellish their AI capabilities is that they see it as a harmless sales gimmick that is no worse than any other corporate boast. “AI washing is not often malicious—it is marketing teams getting ahead of technical reality,” Meseha said. Joe Davies, CEO at content marketing and SEO agency fatjoe, said companies simply “leverage the term for clout, not capability” without thinking they are causing any real harm.
To prevent misleading claims, companies should play it safe and reserve the label for technology that is at least capable of self-learning. “AI is generally understood to mean a method by which a computer can learn and solve problems further to training based on large datasets,” said Iona Silverman, IP and media partner at law firm Freeths. “Anything less than that should not be labeled AI.” Similarly, Hall said, “If the underlying tech does not genuinely involve AI, they should not say it does. It is that simple.”
The key to avoiding accusations of AI washing is to ensure transparency and accuracy when referencing the technology in marketing, investor materials and contracts. Companies need to ground their claims in demonstrable use cases. “Transparency is key,” Davies said. “Disclose what AI is doing, where it fits in your workflow, and where human oversight still plays a role. Using terms like ‘AI-assisted’ rather than ‘AI-powered’ can strike a more honest balance.”
Companies should make sure any AI claims can be backed up with solid evidence. “If companies are saying their AI product outperforms a non-AI product, they should have the data to prove it,” Gracey said. “They must not make aspirational or hypothetical claims and should only talk about what technology can do right now.”
Experts believe it is important for companies to have proper governance in place so that AI claims are not exaggerated and everyone in the organization—from the board to sales teams—knows the difference between what can be deemed AI and what is just generic, standard information technology. This includes training marketing and communications teams to understand what qualifies as AI and making sure all public statements are reviewed by the legal, compliance and other assurance functions. Companies should also maintain internal documentation showing how AI models were developed, validated and integrated.
According to business consultant Steve Fisher, companies need to build “internal literacy” to help combat the problem. “Executives, product leaders, and risk and compliance teams need a baseline understanding of AI—not to code it, but to question it,” he said. “A company without AI fluency at the leadership level is vulnerable to both exaggeration and exploitation.” Companies should create internal audit trails, ethics reviews and cross-functional reviews to evaluate both capability and risk. “If you would not let your security claims go unchecked, do not do it with AI either,” he said.
Companies should also regularly review and update their claims as technology evolves, and if they are working with suppliers or partners, they should conduct due diligence to verify those companies’ AI claims so that everyone involved is clear about exactly what technology is used in the business and how it should be labeled. Established frameworks like the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework can provide a blueprint to follow, but organizations should also ask themselves a series of questions to avoid using “AI” as a catch-all term for technical investment. These should include: Is what we are building truly autonomous? Are human decisions being disguised as algorithmic ones? Who might be harmed by our claims?
Despite the increased scrutiny, AI washing is likely to continue, exposing more companies to reputational and legal risk from false and misleading claims. Zorina Alliata, principal AI strategist at Amazon and professor of responsible AI at the Open Institute of Technology, believes companies need to treat AI washing like they would any other product risk. “We can ask investors and stakeholders to expect technical due diligence, boards to take AI claims as significant disclosures, and C-suites to educate themselves and understand the opportunity—but also the limitations and hazards—the technology brings.”