Waterfall vs. Agile – Why do projects fail? Is it a knowledge problem or a requirements problem?

Even though the Project Management Institute (PMI) began delivering its Agile Certified Practitioner (PMI-ACP) program some years ago, many veteran project managers still disregard agile methods and stress the importance of Waterfall approaches. Their rationale is simple:

Premise: many projects fail because of poor planning, analysis, and design. Hence, more upfront planning and better analysis decrease the risk of failure.

The literature over the last decade clearly shows the reasons for project failure. Whether it’s Forbes, Standish, PMI, or Gartner, they all report very similar outcomes for delivery with traditional project management frameworks.

The Project Management Institute identifies six factors that must be met for a project to be successful:


  • Customers are happy
  • Costs don't exceed the budget
  • It works as designed
  • People use it
  • The people who funded the project are happy with it
  • It meets the goals that drove the project

According to many sources over the last decade, the outcomes defined by PMI for success are rarely achieved. So, why do projects fail? 

Why do the experts say projects fail?

IAG Consulting 

  • Companies with poor business analysis capability will have three times as many project failures as successes.
  • 68% of companies are more likely to have a marginal project or outright failure than a success due to the way they approach business analysis. In fact, 50% of this group’s projects were “runaway” projects, meeting any two of: taking over 180% of target time to deliver; consuming in excess of 160% of estimated budget; or delivering under 70% of the target required functionality.
  • Companies pay a premium of as much as 60% on time and budget when they use poor requirements practices on their projects.
  • Over 41% of the IT development budget for software, staff and external professional services will be consumed by poor requirements at the average company using average analysts versus the optimal organization.
  • The vast majority of projects surveyed did not utilize sufficient business analysis skill to consistently bring projects in on time and budget. The level of competency required is higher than that employed within projects for 70% of the companies surveyed.
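IAG’s “runaway” definition quoted above is mechanical enough to express in code. The sketch below is purely illustrative (the function and parameter names are mine, not IAG’s); only the three thresholds come from the figures above. A project is runaway when any two of the three overrun criteria hold.

```python
def is_runaway(time_pct: float, budget_pct: float, functionality_pct: float) -> bool:
    """Classify a project as 'runaway' per the IAG criteria quoted above.

    time_pct / budget_pct: actuals as a percentage of target.
    functionality_pct: delivered functionality as a percentage of required.
    """
    criteria = [
        time_pct > 180,          # took over 180% of target time to deliver
        budget_pct > 160,        # consumed in excess of 160% of estimated budget
        functionality_pct < 70,  # delivered under 70% of required functionality
    ]
    return sum(criteria) >= 2    # runaway = any two of the three

print(is_runaway(190, 150, 80))  # one criterion met -> False
print(is_runaway(190, 165, 80))  # two criteria met -> True
```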


  • Poorly defined applications (miscommunication between business and IT) contribute to a 66% project failure rate, costing U.S. businesses at least $30 billion every year.

Meta Group

  • 60% – 80% of project failures can be attributed directly to poor requirements gathering, analysis, and management.
  • 50% are rolled back out of production.
  • 40% of problems are found by end users.

Carnegie Mellon

  • 25% – 40% of all spending on projects is wasted as a result of re-work.

Dynamic Markets Limited

  • Up to 80% of budgets are consumed fixing self-inflicted problems.

Is more upfront planning and analysis the solution?


The natural tendency in traditional, 20th-century project management practice is to attempt to reduce risk through more planning, user research, requirements analysis, and definition. Unfortunately, these activities just create a false sense of security that the future solution is truly knowable through extensive analysis, definition, and planning. It’s arrogant at best. You can’t predict a hand of poker, so why would anyone think that planning and requirements gathering will accurately predict people’s requirements 6, 12, or 24 months away?

In a complex environment, where more is unknown than known, the only true way to reduce risk is to apply complexity theory. Can we predict the outcome? Can we know what problems will occur during integration? Do stakeholders know what they want before they see it? Can we account today for what might happen tomorrow, next month, or in six months’ time? Complexity theory helps us understand the future by doing the following:

  • Conducting “fail fast” experiments with short timeframes or work cycles. Essentially, don’t do months of work before asking for feedback or assessing alignment to a goal; do a few weeks of work, produce an outcome, and then assess what to do next.
  • Examining the results when the experiment has concluded and providing fast feedback. Do this in two-week cycles (“Sprints”) and produce something that can be inspected. This means don’t just document or collect requirements; produce a working solution. People can comment on a working solution more easily, and give feedback with greater certainty about what they want next, than on requirements in a Word document.
  • Then, choosing what experiment to conduct next. Only with feedback on something tangible can you truly know what to do next: deliver more of the same, or change tack to deliver something different. Ultimately, if you’ve chosen the wrong solution you have only lost two weeks instead of months.
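The cycle described above can be sketched as a loop: build a small increment, inspect it, and use the feedback to choose the next experiment. This is a hypothetical illustration only (the `Feedback` type, function names, and the toy usage are all stand-ins); in practice the “experiment” is a Sprint that produces working software.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    goal_met: bool
    lesson: str

def probe_sense_respond(build, inspect, adapt, plan, max_cycles=6):
    """Run short cycles: build an increment, inspect it, adapt the next plan."""
    for _ in range(max_cycles):
        increment = build(plan)        # probe: a few weeks of work, not months
        feedback = inspect(increment)  # sense: feedback on something tangible
        if feedback.goal_met:
            return increment, feedback
        plan = adapt(feedback)         # respond: choose the next experiment
    return None, feedback

# Toy usage: converge on a target feature set by iterating on feedback.
target = {"search", "login"}
build = lambda plan: set(plan)
inspect = lambda inc: Feedback(inc == target,
                               "add search" if "search" not in inc else "")
adapt = lambda fb: {"login", "search"}

result, fb = probe_sense_respond(build, inspect, adapt, ["login"])
```

The point of the sketch is the shape of the loop: each pass loses at most one short cycle if the chosen solution turns out to be wrong.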

This way of working is what Dave Snowden’s Cynefin framework refers to as “Probe, Sense, Respond”. 

More upfront analysis, requirements documentation, and planning – typified by linear and Waterfall delivery frameworks – will only reduce risk and project failure in a “complicated” or “simple” domain. It won’t help in a complex domain.

Domain: Simple
Actions to take: This is the domain of legal structures, standard operating procedures, defined processes, and practices that are proven to deliver the intended outcome every single time. Here, decision-making lies squarely in the realm of reason: find the proper rule and apply it.
Framework to use: Waterfall works best here because the future can be predicted. Analyse everything, create a plan for execution based on the repeatable process, then execute the plan. Manufacturing typically works this way: work to the plan and you’ll always get the same outcome.

Domain: Complicated
Actions to take: Here, it is possible to work rationally toward a decision, but doing so requires refined judgment and expertise. This is the province of engineers, surgeons, intelligence analysts, lawyers, UX practitioners, software architects, and other experts. They will plan out analysis activities, collect requirements, analyse their findings, and then determine the best course of action to take or solution to apply. Artificial intelligence copes well here: Deep Blue plays chess as if it were a complicated problem, looking at every possible sequence of moves.
Framework to use: Design Thinking.

Domain: Complex
Actions to take: The complex domain represents the “unknown unknowns”. Cause and effect can only be deduced in retrospect. In this domain, an experiment needs to be run and its results examined before the next experiment is chosen.
Framework to use: Scrum, an agile framework, operates best here. A goal is set with a plan for an outcome to be achieved in two weeks. It’s ultimately a small experiment with a hypothesis: this plan will achieve this goal by the end of the two weeks. Progress toward that goal is inspected each day, and the outcome of the experiment is then assessed, with insights and feedback discussed and fed into the next work cycle (Sprint). The goal is to produce working software, not documentation, as the review of the experiment (the “Sprint Review”) asks what was learned from creating and achieving the goal, and the team then collaborates on what to do next. Inspecting a list of requirements won’t tell you if the end result will be successful in a complex domain. Only working software, and the lessons learned from building it, will de-risk a complex project.

Domain: Chaotic
Actions to take: In the chaotic domain, a leader’s immediate job is not to discover patterns but to staunch the bleeding. A leader must first act to establish order, then look to see where stability is present and where it is absent, and then respond by working to transform the situation from chaotic to complex, where the identification of emerging patterns can both help prevent future crises and discern new opportunities. Communication of the most direct top-down or broadcast kind is imperative; there’s simply no time to ask for input.
Framework to use: Command and control.
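The domain guidance above can be condensed into a small lookup. This is an illustrative sketch only (the dictionary and function names are mine, not part of Cynefin); the action sequences are Snowden’s standard ones for each domain, and the framework column follows the table above.

```python
# Cynefin domains mapped to Snowden's decision sequence and the delivery
# framework suggested in the table above. Hypothetical structure for illustration.
CYNEFIN = {
    "simple":      {"actions": ["sense", "categorise", "respond"], "framework": "Waterfall"},
    "complicated": {"actions": ["sense", "analyse", "respond"],    "framework": "Design Thinking"},
    "complex":     {"actions": ["probe", "sense", "respond"],      "framework": "Scrum"},
    "chaotic":     {"actions": ["act", "sense", "respond"],        "framework": "Command and control"},
}

def recommended_framework(domain: str) -> str:
    """Look up the suggested framework for a Cynefin domain."""
    return CYNEFIN[domain.lower()]["framework"]

print(recommended_framework("Complex"))  # Scrum
```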

Experiments are designed to build knowledge

Good planning, analysis, and design are critical to project success, as are communication and a shared vision of what is being delivered. The fallacy is assuming that all this effort must be done entirely up-front, in totality, by specialists, and then handed down to developers in the form of documentation for them to interpret. Documented requirements can only capture explicit knowledge. They can’t capture the tacit knowledge gained about the subtleties of the context and its associated needs.

An experiment, on the other hand, creates a shared experience. The team collectively discovers whether an assumed solution creates the desired outcome. 

Figure: tacit vs. explicit knowledge.
Unlike Waterfall, agile frameworks like Scrum take a collaborative and cross-functional approach to planning, analysis, design, and delivery that involves the whole team. In fact, 10% of each Sprint in Scrum is dedicated by the team to planning, user research, analysis, and design of upcoming scope (i.e. Backlog Refinement). Scrum’s regular application of empiricism ensures that both tacit and explicit forms of knowledge are shared throughout the team for the life of the product they’re developing and supporting. When a Scrum team creates documentation, it tends to be lightweight: just enough that the team has consensus on the issue. Each item the team works on is still planned, designed, tested, etc., with each item recording the equivalent of requirements for that item.

Agile frameworks are more successful

The CHAOS Report by the Standish Group (2020) shows that agile projects are more successful than Waterfall projects, with fewer challenged and fewer failed projects. This is not to say, though, that agile methods like Scrum are a panacea for all project failures and associated problems. Rigour is still needed in the areas of governance, reporting, and risk management in order to increase the project’s health and assure success.


Many people will still assert that up-front design and requirements definition are required to reduce the risk of project failure. The question is how much is needed in a complex environment. Complex environments require experimentation instead of de-risking outcomes through requirements gathering. Scrum works very well in complex environments by using Sprints and building, within the whole team, a collective knowledge of not only what the requirements are, but what it takes to actually deliver on them. This combined knowledge – creating lightweight documentation in the form of User Stories, plans formed each Sprint, the experience of delivery, assessment of outcomes each Sprint, and then using that total experience as input into what to do next Sprint – is what improves delivery success in complex environments.

In a 21st-century product development world, change is the only certainty. A whole team, working collaboratively in short work cycles and running small experiments, improves its ability to be successful. Being more effective at collecting requirements will never achieve this outcome.

References

1. Polanyi, M. (1958). Personal Knowledge.
2. Nonaka, I. & Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation.


Related Posts:

Better together: Agile + Lean UX

How do you make Design Thinking, Lean UX, and Agile work together? Sprint 0? Design Sprints? Upfront design and planning tends to delay the delivery of value, so there must be a better way to use Scrum while also engaging in discovery work, without devolving into parallel design work. Integrating design, user research, and experimentation into Sprints is the key.


How do I run a Sprint Review?

The Sprint Review is one of five events in Scrum. Its purpose is to inspect the Increment of work, get feedback, and then adapt the Product Backlog. And while many people refer to the Sprint Review as the “demo” or “showcase”, this is only one aspect of the Sprint Review.


How do I run a Daily Scrum?

Many people use the Daily Scrum to provide a status report to the Product Owner or Scrum Master, and even to stakeholders, but this event plays a more critical part in ensuring that the team continues to stay focussed on their goal and adapt their work so they improve their chance of achieving it.


Setting up a New Agile Team

When setting up new agile teams, we have found that starting small with the basics and adding patterns as teams start to develop capability has helped us get new teams up and running within 2-3 days and achieve a baseline of agile capability within 3 months.


Measuring agility with Agile IQ

Agile delivers significant benefits over traditional ways of working, but how do you know when you’re agile? How do you use a metrics-driven approach to create repeatability, consistency, and scalability of agile capability across the enterprise? Agile IQ is the key.
