Waterfall vs. Agile – Why do projects fail? Is it a knowledge problem or a requirements problem?

Even though the Project Management Institute (PMI) introduced its Agile Certified Practitioner program some years ago, many veteran project managers still disregard agile methods and stress the importance of Waterfall approaches. Their rationale is simple:

Premise: Many projects fail because of poor planning, analysis and design. Hence, more upfront planning and better analysis decrease the risk of failure.

The literature over the last decade clearly shows the reasons for project failure. Whether it’s Forbes, Standish, PMI or Gartner, they all report very similar outcomes for delivery under traditional project management frameworks.

The Project Management Institute identifies six factors that must be met for a project to be successful:


  • Customers are happy
  • Costs don't exceed the budget
  • It works as designed
  • People use it
  • The people who funded the project are happy with it
  • It meets the goals that drove the project

According to many sources over the last decade, the outcomes defined by PMI for success are rarely achieved. So, why do projects fail? 

Why do the experts say projects fail?

IAG Consulting 

  • Companies with poor business analysis capability will have three times as many project failures as successes.
  • 68% of companies are more likely to have a marginal project or outright failure than a success due to the way they approach business analysis. In fact, 50% of this group’s projects were “runaways”, exhibiting any two of: taking over 180% of target time to deliver; consuming in excess of 160% of estimated budget; or delivering under 70% of the target required functionality.
  • Companies pay a premium of as much as 60% on time and budget when they use poor requirements practices on their projects.
  • Over 41% of the IT development budget for software, staff and external professional services will be consumed by poor requirements at the average company using average analysts versus the optimal organization.
  • The vast majority of projects surveyed did not utilize sufficient business analysis skill to consistently bring projects in on time and budget. The level of competency required is higher than that employed within projects for 70% of the companies surveyed.


  • Poorly defined applications (miscommunication between business and IT) contribute to a 66% project failure rate, costing U.S. businesses at least $30 billion every year.

Meta Group

  • 60% – 80% of project failures can be attributed directly to poor requirements gathering, analysis, and management.
  • 50% are rolled back out of production.
  • 40% of problems are found by end users.

Carnegie Mellon

  • 25% – 40% of all spending on projects is wasted as a result of re-work.

Dynamic Markets Limited

  • Up to 80% of budgets are consumed fixing self-inflicted problems.

Is more upfront planning and analysis the solution?


The natural tendency in traditional, 20th-century project management practice is to attempt to reduce risk through more planning, user research, and requirements analysis and definition. Unfortunately, these activities just create a false sense of security that the future solution is truly knowable through extensive analysis, definition and planning. It’s arrogant at best. You can’t predict a hand of poker, so why does anyone think that planning and requirements gathering will accurately predict people’s requirements 6, 12 or 24 months away?

In a complex environment, where more is unknown than known, the only true way to reduce risk is to apply complexity theory. Can we predict the outcome? Can we know what problems will occur during integration? Do stakeholders know what they want before they see it? Can we account today for what might happen tomorrow, next month, or in six months’ time? Complexity theory helps us understand the future by doing the following:

  • Conducting “fail fast” experiments with short timeframes or work cycles. Essentially, don’t do months of work before asking for feedback or assessing alignment to a goal; do a few weeks of work, produce an outcome, and then assess what to do next.
  • Examining the results when the experiment has concluded and providing fast feedback. Do this in two-week cycles (“Sprints”) and produce something that can be inspected. This means don’t just document or collect requirements; produce a working solution. People can comment on a working solution more easily, and give feedback with greater certainty about what they want next, than on requirements in a Word document.
  • Then, choosing which experiment to conduct next. Only with feedback on something tangible can you truly know what to do next: deliver more of the same, or change tack to deliver something different. Ultimately, if you’ve chosen the wrong solution you have only lost two weeks instead of months.

This way of working is what Dave Snowden’s Cynefin framework refers to as “Probe, Sense, Respond”. 

More upfront analysis, requirements documentation, and planning – typified by linear, Waterfall delivery frameworks – will only reduce risk and project failure in a “complicated” or “simple” domain. It won’t help in a complex domain.

Experiments are designed to build knowledge

Good planning, analysis and design are critical to project success, as are communication and a shared vision of what is being delivered. The fallacy is assuming that all this effort must be done entirely up-front, in totality, by specialists and then handed down to developers in the form of documentation for them to interpret. Documented requirements can only capture explicit knowledge. They can’t capture the knowledge gained about the subtleties of the context and its associated needs.

An experiment, on the other hand, creates a shared experience. The team collectively discovers whether an assumed solution creates the desired outcome. 

Unlike Waterfall, agile frameworks like Scrum take a collaborative, cross-functional approach to planning, analysis, design, and delivery that involves the whole team. In fact, the team dedicates up to 10% of each Sprint in Scrum to planning, user research, analysis, and design on upcoming scope (i.e. Backlog Refinement). Scrum’s regular application of empiricism ensures that both tacit and explicit forms of knowledge are shared throughout the team for the life of the product they’re developing and supporting. When a Scrum team creates documentation, it tends to be lightweight: just enough for the team to reach consensus on the issue. Each item the team works on is still planned, designed, tested, and so on, with each item recording the equivalent of requirements for that item.

Agile frameworks are more successful

The CHAOS Report by the Standish Group (2020) shows that agile projects are more successful than Waterfall projects, with fewer challenged projects and fewer failures. This is not to say, though, that agile methods like Scrum are a panacea for all project failures and associated problems. Rigour is still needed in governance, reporting, and risk management to keep a project healthy and assure success.


Many people will still assert that up-front design and requirements definition are required to reduce the risk of project failure. The question is how much is needed in a complex environment. Complex environments require experimentation instead of de-risking outcomes through requirements gathering. Scrum works very well in complex environments by using Sprints to build collective knowledge within the whole team of not only what the requirements are, but what it takes to actually deliver on them. This combined knowledge – lightweight documentation in the form of User Stories, plans formed each Sprint, the experience of delivery, assessment of outcomes each Sprint, and using that total experience as input into what to do next Sprint – is what improves delivery success in complex environments. In a 21st-century product development world, change is the only certainty. A whole team working in short cycles and collaboratively running small experiments improves its ability to be successful. Being more effective at collecting requirements will never achieve this outcome.

References

1. Polanyi, M. (1958). Personal Knowledge.
2. Nonaka, I. & Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation.
