Are AI Projects Different to Software Projects?

Exploring considerations unique to AI projects and how to tackle them


There’s an argument that all projects or change initiatives can be delivered the same way, regardless of the product being created. I thought the same when transitioning away from more linear delivery methods (aka waterfall) to more iterative, Agile delivery approaches.

I believed the best way to deliver any project was by using an agile approach and that was that.

I later realised that whilst an agile mindset should be applied across the board, sometimes there is a place for more waterfall delivery. For example, if a major finance platform needs to be transformed and replaced, there are elements of waterfall stages that can help reduce customer impacts.

Similarly, before I started thinking about AI, I believed that any approach to delivering innovation would work for AI solutions.

This is largely true: Kanban and Scrum, for example, are powerful methods for releasing AI functionality. As I started to dig deeper, I began to realise there can be additional factors to consider when delivering AI, which may require changes to some of our methods.

This post is in no way detailed or complete, and I'll continue to explore this topic in future posts, so I invite you to keep reading to find out what I’m discovering.

Why Most AI Projects Fail

By 2025, over 90% of organisations surveyed by Cognilytica will have implemented some form of AI or machine learning project. In addition, roughly 80% of AI projects fail.

AI in this instance can range from autonomous systems to natural language processing, analytics and more besides.

There will be a host of reasons for this failure and Cognilytica’s research summarises ten:

  1. Applying application development approaches to data-centric AI

  2. Lack of sufficient quantity of data

  3. Lack of sufficient quality of data

  4. ROI Misalignment of AI solution to problem

  5. Lack of planning for continued AI, model, data iteration and lifecycle

  6. Misalignment of real world data and interaction against training data and models

  7. Applying proof of concept thinking to real-world pilots

  8. Underestimating time and cost of the data component of AI projects

  9. Vendor misalignment on promise vs. reality

  10. Overpromising AI capabilities and underdelivering projects

(Source: Cognilytica)

Of these ten causes, most appear to be unique to AI as a subject, although some relate to a failure to manage expectations of what a solution can do, and how long it might take to deliver.

If I were to add #11, it would be the heavy reliance on AI experts, causing misplaced trust in their abilities, and a fear of challenging their knowledge. Whilst AI experts are critical to the success of an AI initiative, AI on the whole is still experimental, therefore, even the experts will have an optimism bias that could result in delays.

This study left me believing there are additional factors to consider when working in a team delivering AI solutions.

The most notable factor appears to be acquiring the right quality and quantity of data to enable the AI to achieve the desired results, whilst demonstrating value early and often, to keep stakeholders on board.
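To make the idea of "right quality and quantity" of data a little more concrete, here is a minimal sketch of a pre-training data readiness check. The thresholds, field names and sample data are purely illustrative assumptions on my part, not figures from the research above; real projects would set their own bars per use case.

```python
# Minimal sketch of a pre-training data readiness gate.
# MIN_ROWS and MAX_MISSING_RATE are hypothetical, illustrative thresholds.

MIN_ROWS = 10_000          # assumed minimum quantity of records
MAX_MISSING_RATE = 0.05    # assumed quality bar: at most 5% missing values

def data_readiness(rows: list[dict]) -> dict:
    """Report whether a dataset meets basic quantity and quality bars."""
    total_cells = sum(len(r) for r in rows)
    missing = sum(1 for r in rows for v in r.values() if v is None)
    missing_rate = missing / total_cells if total_cells else 1.0
    return {
        "enough_rows": len(rows) >= MIN_ROWS,
        "missing_rate": missing_rate,
        "quality_ok": missing_rate <= MAX_MISSING_RATE,
    }

# Example: a tiny, partly missing dataset fails both checks.
sample = [{"age": 34, "income": None}, {"age": 51, "income": 72_000}]
report = data_readiness(sample)
print(report)
```

Running a gate like this early, and repeating it each iteration, is one way to demonstrate data progress to stakeholders before any model exists.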

On reflection, we typically treat AI projects much the same as application development projects, using existing delivery methodologies with little or no adjustments.

Whilst delivery methods such as Scrum can work, the nature of developing code to fulfil a specific function is different to building credible data and training AI to think independently.

This difference produces a greater variety of behaviour in AI than in traditional software, meaning additional measures need to be considered.


What Needs to Change?

Cognilytica, who are part of the Project Management Institute (PMI), suggest an AI-specific delivery approach that includes both Agile development and Agile-powered data methods.

This method is known as Cognitive Project Management for AI (CPMAI), created for developing and iterating AI, machine learning and/or cognitive technologies. CPMAI is based on a combination of standard Agile methods and the Cross-Industry Standard Process for Data Mining (CRISP-DM).

CPMAI focuses on six primary phases, which can iterate between each other both forwards and backwards.

  • Business Understanding – “Mapping the business problem to the AI solution.”

  • Data Understanding – “Getting a hold of the right data to address the problem.”

  • Data Preparation – “Getting the data ready for use in a data-centric AI Project.”

  • Model Development – “Producing an AI solution that addresses the business problem.”

  • Model Evaluation – “Determining whether the AI solution meets the real-world and business needs.”

  • Model Operationalization – “Putting the AI solution to use in the real-world, and iterating to continue its delivery of value.”

(Source: Cognilytica)
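To illustrate how the six phases can iterate "both forwards and backwards", here is a hedged sketch, not an official CPMAI artefact, that models phase progression as a simple function in which a failed evaluation sends the team back to Data Understanding. The choice of fallback phase is my own assumption for illustration.

```python
# Illustrative sketch of CPMAI-style phase iteration (not an official
# CPMAI implementation): evaluation can send work back to an earlier phase.

PHASES = [
    "Business Understanding",
    "Data Understanding",
    "Data Preparation",
    "Model Development",
    "Model Evaluation",
    "Model Operationalization",
]

def next_phase(current: str, evaluation_passed: bool = True) -> str:
    """Advance through the phases, or fall back to Data Understanding
    when Model Evaluation fails (an assumed backward path)."""
    if current == "Model Evaluation" and not evaluation_passed:
        return "Data Understanding"  # iterate backwards to improve the data
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]  # iterate forwards

print(next_phase("Data Preparation"))         # forward step
print(next_phase("Model Evaluation", False))  # backward step
```

The point of the sketch is that, unlike a linear pipeline, the process is a loop: a poor evaluation result is expected, and routes work back to the data rather than signalling failure.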

What are the key features that make this different?

  • A focus on two simultaneous iterative cycles of Agile development and Agile data methods.

  • Breaking down data preparation, management, model consumption and operationalising into their own activities rather than one user story or activity.

  • Continuous iteration of the six AI data management phases either inside a user story or spread across user stories, depending on size, scale or need.

  • Focus on iterating and refining machine learning models as a core focus, to drive the necessary value from the released product.

  • Ability to go back and revisit phases as needed to ensure data accuracy.

The Data Science Process Alliance backs up the view that AI projects should not be treated solely as software projects.

This is because AI product development typically has less clear objectives, requires more experimentation than expected, and needs significantly more data management, among other factors.

As a result, they identified the AI Lifecycle to help the development of AI capability, which includes:

  1. Problem definition

  2. Data acquisition and preparation

  3. Model development and training

  4. Model evaluation and refinement

  5. Deployment

  6. Machine learning operations

(Data Science Process Alliance)

To minimise uncertainty, the Data Science Process Alliance suggests building a minimum viable AI (MVAI), to identify a scaled down yet valuable version of the solution that can test the value and speed up overall development.
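As a rough illustration of the MVAI idea, here is a minimal sketch of the simplest possible "model": a majority-class baseline. The churn labels and class names are hypothetical; the point is that a scaled-down version can test value quickly, and any more sophisticated model must beat this baseline before it has demonstrated anything.

```python
from collections import Counter

# Hedged sketch of an MVAI-style baseline (my illustration, not a
# Data Science Process Alliance artefact): ship the simplest model
# that can demonstrate, or disprove, value.

class MajorityBaseline:
    """Predicts the most common label seen during fitting."""

    def fit(self, labels: list[str]) -> "MajorityBaseline":
        self.majority_ = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, n: int) -> list[str]:
        return [self.majority_] * n

# Hypothetical churn labels for illustration.
labels = ["stay", "stay", "churn", "stay", "churn", "stay"]
baseline = MajorityBaseline().fit(labels)
predictions = baseline.predict(len(labels))
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)
```

A baseline like this takes minutes to build, yet it gives stakeholders an early, honest benchmark and narrows the scope of what the "real" model has to achieve.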

These findings remind me of a project I once led, where the solution was based on machine learning and the need to gather a large amount of data.

In my example, a generous amount of time was allowed to gather the data to feed the model and produce reliable results. However, much more time and data were eventually required to produce the results we needed, even when working with experts.


Considerations

From this initial look at what makes AI projects different, alongside my past experience, here are 5 considerations I would follow when embarking on an AI project:

 Be realistic about the unknowns: Avoid overselling the functionality, and be open about the need to continually iterate the data. Share the improvements in capability and data accuracy with each iteration to build confidence that useful features are being created.

 Prepare for higher volumes of data: AI project delays can often be caused by requiring more data than originally planned. Allow time for more data to be gathered and processed. Include continual data gathering, modelling and application in your plans.

 Accept the need to experiment and iterate: AI is less predictable than traditional software development. Rather than coding an expected behaviour, data is modelled to enable the system to work out some of its own behaviour. This can take more time to come to fruition.

 Embrace the ambiguity: Requirements are more likely to be unclear when delivering AI. Creating an early version, even without substantial data, can help narrow the scope and enable more time to focus on the functionality that matters most.

 Remember AI Ethics: It’s essential to review good practice for managing data and the use of AI, to prevent using them in a way that crosses ethical or legal boundaries.


This investigation into delivering AI is far from over. If you want to know more, join the newsletter and subscribe to the Change Leaders Playbook podcast series on YouTube, Spotify, Apple and Audible.

p.s.

How was this article?

Your feedback helps to make future posts even more relevant and useful.
