How to Manage AI Projects Successfully

Neil Hodge

January 28, 2025

In the rush to compete and survive, companies are pinning their hopes on artificial intelligence and other emerging technologies to offer more products and services and stay viable. But while AI has tremendous business benefits and can unveil potentially massive opportunities, the technology has one key underlying problem: its success depends on the people behind the project knowing what they are doing. 

Many corporations have thrown buckets of cash at AI because there is an expectation that they need to. However, the financial outlay does not guarantee success. In fact, a bad investment can seriously damage the company and cost even more to resolve.

In June 2024, fast-food chain McDonald’s discontinued the AI-based automated order-taking service it was using in over 100 restaurants after the system mistakenly created orders for items like bacon-topped ice cream, hundreds of dollars of chicken nuggets, and butter and ketchup packets, all of which were publicly documented on social media. The company had put its faith in AI early on, announcing in 2019 that it wanted to embrace the technology as part of a drive to cut costs and had been trialing IBM’s voice recognition software for two years. While McDonald’s maintains that such technology remains part of the company’s future, so far it seems that five years of planning and investment have not yielded the desired results and that even more cash is needed to ensure success.

There have been other recent notable examples of companies either poorly implementing AI or neglecting to monitor the results. Last May, facial recognition software used by British retailer Home Bargains misidentified a customer as a known shoplifter, leading staff to search her bag, escort her from the premises, and ban her from the chain. Facewatch, the AI software provider, later acknowledged its error. In February 2024, Air Canada was ordered to pay damages to a passenger after its chatbot incorrectly told him he could apply retroactively for a “bereavement discount” after purchasing a full-price return airfare to attend a relative’s funeral, rather than having to request it at the time of booking. The airline tried to claim it could not be held liable for the misleading information provided by its virtual assistant, but the tribunal agreed with the passenger’s claim, saying Air Canada did not take “reasonable care to ensure its chatbot was accurate.”

Like other technologies, AI can be a wonderful tool. However, many companies struggle to integrate it with existing technology, fail to properly monitor its effectiveness and impact, and lose sight of what it is supposed to do.

Ensuring Return on Investment

Experts warn that the return on investment for AI projects is currently very poor, with around 70% of projects never getting past the proof-of-concept stage, meaning many AI implementations are infeasible, unrealistic or simply doomed from the start. According to Pete Foley, CEO of AI governance software firm ModelOp, more than 50% of leaders cannot measure the return on investment from their AI spend. Furthermore, he said, “less than 2% of CEOs can identify how AI is being used and its risks in their organizations.”

Experts say that part of the problem is that organizations get dazzled by what the technology can theoretically achieve, succumb to scope creep, inflate budgets without adequate controls, and generally get “carried away.” But Sidharth Ramsinghaney, director of strategy and operations at cloud communications company Twilio, said businesses should conduct AI implementation the same way they would any other business project. “Just because the technology may be cutting-edge does not mean you forget about the basics of project management,” he said.

Organizations first need to make sure the technology aligns with business needs and establish clear objectives about what the AI will do and how its success and impact can be measured through key performance indicators (KPIs), Ramsinghaney said. A common mistake is that project teams tend to talk up the benefits of AI implementations (particularly large-scale ones) rather than assess them realistically and objectively. Sometimes, this is because management has already given the budget the green light.

“In the corporate world, management buy-in for any major project means the C-suite thinks it will succeed, deliver a great return on investment, and is aligned to strategy,” Ramsinghaney said. “Who is going to be the first to report to the board that the tech is not going to deliver? Out of fear, people tend to only report good news and bury the bad, which is why it is so important to have a set of clear objectives and KPIs from the start.”

It is also important that AI projects involve cross-functional teams that include data scientists, domain experts and business leaders. This collaborative approach ensures that the project benefits from diverse perspectives and expertise. “It is a common mistake that AI implementations are left to the IT function to lead and manage on their own,” Ramsinghaney said. “IT can put the technology in place, but they are not there to think about the business case or whether the tech is driving the business—that is management’s job.”

To keep the project on track from the beginning, it is vital to have a robust risk management framework in place to identify potential risks, assess their impact and develop mitigation strategies. Regular risk assessments should be conducted throughout the project to ensure that any emerging risks are promptly addressed. Meanwhile, adopting an agile, iterative development approach allows for continuous testing and refinement of AI models. “This helps in identifying and addressing issues early in the project lifecycle, reducing the risk of large-scale failures,” Ramsinghaney said. Companies should also conduct thorough post-mortem analyses of failed AI projects to identify root causes, “whether they stem from technical issues, data quality problems or misalignment with business objectives,” he added. “This helps in developing strategies to avoid similar pitfalls in future projects.”

At the heart of any AI project, there should also always be a focus on data quality. Data fuels AI, and handling it effectively is non-negotiable. Companies must channel resources into solid data infrastructure and governance practices. “Before embarking on an AI project, companies must assess their data quality,” Ramsinghaney said. “The first step should be to establish clean data and ensure transparency of data sources. Poor data quality can significantly hinder the effectiveness of AI models and lead to inaccurate outcomes, as well as significant regulatory fines.”
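Ramsinghaney’s advice to assess data quality before a project begins can be automated with a few basic checks. The sketch below, in Python, profiles missing values and duplicate records; the field names and the 5% threshold are illustrative assumptions, not figures from any of the companies mentioned:

```python
from collections import Counter

def assess_data_quality(records, required_fields, max_missing_rate=0.05):
    """Run basic pre-project quality checks on a list of dict records.

    Returns a report with missing-value and duplicate rates so a team can
    decide whether the data is clean enough to build on. The 5% threshold
    is an illustrative assumption.
    """
    total = len(records)
    missing = Counter()
    for rec in records:
        for field in required_fields:
            if rec.get(field) in (None, ""):
                missing[field] += 1

    # Duplicate detection: identical records usually signal upstream
    # pipeline problems and can bias a model toward repeated examples.
    seen = Counter(tuple(sorted(rec.items())) for rec in records)
    duplicates = sum(count - 1 for count in seen.values())

    report = {
        "missing_rates": {f: missing[f] / total for f in required_fields},
        "duplicate_rate": duplicates / total,
    }
    report["passed"] = (
        all(rate <= max_missing_rate for rate in report["missing_rates"].values())
        and report["duplicate_rate"] <= max_missing_rate
    )
    return report
```

A report like this gives the project team an objective go/no-go signal before any model is trained, rather than discovering the data problems after launch.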

Choosing the Right Tool

The market for AI is growing all the time, and products vary in price, complexity, adaptability and compatibility. Therefore, companies need to have discussions about the key tasks they want the AI to deal with. For example, there is no point in buying an AI product that offers a broad range of capabilities if the company just needs it to perform one specific task.

Sue Williams, managing director at business consultancy Hexagon Consultants, said it is important to choose the right tool from the start. “Once you are sure that AI is the right way to achieve your objectives, evaluate the available AI tools and technologies to ensure you select one that is designed for the defined task,” she said. “Consider scalability, compatibility with existing systems and ease of integration.”

Companies should also balance innovation with practicality. “Ensure your solution can be implemented and is not just a theoretical concept,” she said. “Users are more likely to adopt AI solutions that are practical and immediately beneficial to their workflows.”

Contrary to popular opinion, companies do not need deep pockets to embrace AI. In fact, they should be cautious about what they do spend, said Peter Wood, chief technology officer at Spectrum Search, a recruitment consultancy that uses AI. He advises companies to “start small” and ask how the AI will change processes. For example, will it drive more people to the website, increase sales or cut delivery times? “You do not need to invest heavily from the outset—simply add the AI to what you already have,” he said. “Pick an area where you think you can achieve discernible benefits early on and learn from that process—both good and bad.”

Many experts agree that it is best to avoid overspending from the start and being overly optimistic about what the technology might be able to deliver. “AI is simply another tool that can be leveraged for business problems and not every tool is correct for every project,” warned Harrison Murphy, director of data analytics solutions for tech consultancy firm Altair. It is a better strategy to establish a solid foundation in an organization’s data processing workflow and to integrate some models within the process than to layer AI models atop disparate, unorganized data. “AI models are only going to provide value and be effective based on the data they are given and the questions they are answering,” he said. “If the data is incomplete or the challenges it is trying to solve are too intangible, users and organizations alike will grow frustrated and see their implementation of AI, or AI as a whole, as a failed endeavor.”

Avoiding Common Causes of Failure

Murphy believes half of all AI projects fail and that this is due to three AI “friction points.” “Organizational friction” is one of the most common stumbling blocks with AI projects and happens when departments, teams and individuals are not properly aligned with (or supportive of) the plan and lack the skills, training and experience to make it happen. “Technological friction,” meanwhile, is a byproduct of IT infrastructure like hardware, software, cloud resources and vendors. Together, these elements often act as bottlenecks, constraining project speed, scale and scope. To mitigate technological friction, Murphy said companies need to look for AI solutions that both support the organization’s business approach and are also compatible with its IT infrastructure. Lastly, “financial friction” arises from tight budgets, stretched resources and a rush to see a return on investment. It is often made worse when organizations try to tack AI efforts onto legacy systems that cannot support the technology rather than using more suitable alternatives, such as data analytics, AI technologies with flexible licensing, and solutions that are more scalable and easier to deploy.

A lack of proper testing and a failure to resolve problems uncovered during the testing phase are also major causes of AI project failure. According to Ryan James, managing director of automated software testing firm nFocus Testing, teams often implement AI tools without conducting the necessary tests to make sure they work as they should before they go live. “If gaps have emerged between third-party development companies or system integrators and in-house teams during the initial installation, it can become almost impossible for that technology to deliver its intended benefits,” he said.

Project management must factor in stringent tests throughout the installation period, James said. “Too often, quality assurance principles and practices are left until the end of a project,” he explained. “This can be damaging because if any errors or problems have occurred during the initial installation, those issues can escalate into significant conflicts elsewhere. Failure to test early enough can cause projects to become delayed and overrun because those issues then need to be rectified before any project can go live.”

Companies should also test regularly to uncover problems when they are easier—and cheaper—to fix. “Test, test and test again,” said Erik Severinghaus, CEO of software development firm Bloomfilter. “Release your AI step by step. Check each version carefully and make adjustments before launching widely. This iterative approach helps catch bugs and fine-tune performance. Write down everything, take the lessons from it, and use these to make your future AI projects better and more efficient.”
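Severinghaus’s step-by-step release advice amounts to gating each new version behind an evaluation. A minimal sketch of such a gate is below; the function names, the evaluation format and the 1% improvement threshold are illustrative assumptions, not details from Bloomfilter:

```python
def promote_if_better(current_score, candidate, eval_cases, min_gain=0.01):
    """Gate a new AI model version behind an evaluation step.

    `candidate` is any callable mapping an input to a prediction, and
    `eval_cases` is a list of (input, expected_output) pairs. The new
    version is only promoted if it beats the current score by `min_gain`.
    All names and the 1% threshold are illustrative assumptions.
    """
    correct = sum(1 for x, expected in eval_cases if candidate(x) == expected)
    score = correct / len(eval_cases)

    # Record the outcome either way, so lessons carry into future releases.
    decision = "promote" if score >= current_score + min_gain else "hold"
    print(f"candidate scored {score:.2%} vs current {current_score:.2%}: {decision}")
    return decision == "promote", score
```

Running every candidate release through a gate like this catches regressions before a wide launch and leaves a written record of each version’s performance, which is exactly the “write down everything” discipline Severinghaus describes.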

Achieving Real-World Success

Ultimately, the core of any successful AI project is whether it works for those meant to use it. “People developing tech solutions get very excited about what they are creating,” said Rick Bentley, founder of AI surveillance and remote guarding company Cloudastructure. “The problem is they are not the ones who are ultimately going to be using it, so the tech has to work for those who have no interest in how any of it was put together—it just has to do what it is supposed to do all of the time.”

User experience and user interface (UX and UI) are key. “You have to test your AI on actual users, not your internal staff,” he said. When he was involved in developing toy maker Fisher-Price’s first AI speech recognition-based electronic learning aid, the team brought young mothers and their children to try out the prototype.

“It did not go nearly as well for us as we had hoped,” he said. “The first kid came in to use the toy, but he was not following the voice prompts. The mom and toy company staffer tried to guide him to use the toy correctly, but it just was not happening. Next kid, same thing. Next kid too. And the kid after that. That lesson stuck with me: Users will always use our products ‘wrong.’ It is our job to make the product work for how the user wants to use it, not how we want the user to use it.”

Handling errors is also important. “AI loves to make guesses,” Bentley said. “The good news is that it is happy to tell you how confident it is in its guesses.” A commonly cited example is an AI tagging a picture of an animal as 90% likely to be a dog and 10% likely to be a cat.

“Nothing is 100% accurate,” he said. “Even an AI program that is right 99.9% of the time will be wrong 0.1% of the time. If it has millions or billions of uses, then that is thousands or millions of wrong answers. You have to think ahead of time: What is the worst that can happen when—not if—the AI is wrong? What level of harm can it do? Unless someone has a food allergy, the McDonald’s bacon-topped ice cream will not harm anyone, so the action you take to remedy it can be limited to patching the problem quickly so the gaffe is not repeated rather than pulling the AI altogether. A telemedicine AI doctor that suggests physician-assisted suicide for a cold, on the other hand, is naturally a horrible scenario that demands immediate withdrawal and widespread notification.”
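Bentley’s two questions—how confident is the AI, and how much harm can a wrong answer do—can be combined into a simple routing rule. The sketch below is one illustrative way to express that logic; the harm categories, threshold and action names are assumptions for the example, not anything Bentley or Cloudastructure prescribes:

```python
def route_prediction(confidence, harm_level, confidence_floor=0.9):
    """Decide what to do with an AI prediction based on its confidence
    and the worst-case harm of being wrong.

    harm_level: "low" (e.g., a wrong menu item) or "high" (e.g., a
    medical suggestion). The 0.9 floor and the categories are
    illustrative assumptions.
    """
    if harm_level == "high":
        # High-stakes answers always go through a human, no matter how
        # confident the model claims to be.
        return "human_review"
    if confidence < confidence_floor:
        # Low-confidence guesses are flagged rather than acted on.
        return "flag_for_review"
    return "auto_accept"
```

This is also where the human-in-the-loop fits: everything routed to review becomes the feedback stream used to improve the system.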

In reality, Bentley said, companies “always need to release an AI product to users before it is ever really ready.” The trick is to get user feedback to find errors, log error events and fix them as soon as possible, as well as “incorporate what they learn from them back into the system in as close to real time as possible.”

This means having humans in the loop. “A human has to supervise what the AI is doing,” he said. “Maybe they are taking in customer feedback. Maybe they are examining user edge cases or looking at results when the AI was not terribly confident in any answer but was still spitting an answer back. In any case, as problems are found—and they will be found—the system needs to be rapidly improved.”

AI can help deliver or even drive business success. However, it is only useful if companies know how to incorporate it properly and get the best from it. Otherwise, AI implementations can become a financial burden and a significant business risk. Companies need to seriously consider where the technology can make an immediate impact and establish benchmarks to measure success. If they do not, they could be throwing away not only their money, but their reputations as well.

Neil Hodge is a U.K.-based freelance journalist.