Even though the Project Management Institute (PMI) began delivering its Agile Certified Practitioner (PMI-ACP) program some years ago, many veteran project managers still disregard agile methods and stress the importance of Waterfall approaches. Their rationale is simple:
Premise: many projects fail because of poor planning, analysis and design; hence, more upfront planning and better analysis decrease the risk of failure.
The literature over the last decade clearly shows the reasons for project failure. Whether it’s Forbes, Standish, PMI or Gartner, they all report very similar outcomes for delivery with traditional project management frameworks.
The Project Management Institute identifies six factors that must be met for a project to be successful.
According to many sources over the last decade, the outcomes defined by PMI for success are rarely achieved. So, why do projects fail?
Why do the experts say projects fail?
- Companies with poor business analysis capability will have three times as many project failures as successes.
- 68% of companies are more likely to have a marginal project or outright failure than a success due to the way they approach business analysis. In fact, 50% of this group’s projects were runaways, exhibiting at least two of the following: taking over 180% of target time to deliver; consuming in excess of 160% of estimated budget; or delivering under 70% of the required functionality.
- Companies pay a premium of as much as 60% on time and budget when they use poor requirements practices on their projects.
- Poor requirements will consume over 41% of the IT development budget for software, staff and external professional services at the average company using average analysts, compared with the optimal organisation.
- The vast majority of projects surveyed did not employ sufficient business analysis skill to consistently bring projects in on time and on budget; for 70% of the companies surveyed, the level of competency required is higher than that employed on their projects.
- Poorly defined applications (miscommunication between business and IT) contribute to a 66% project failure rate, costing U.S. businesses at least $30 billion every year.
- 60% – 80% of project failures can be attributed directly to poor requirements gathering, analysis, and management.
- 50% are rolled back out of production.
- 40% of problems are found by end users.
- 25% – 40% of all spending on projects is wasted as a result of re-work.
Dynamic Markets Limited
- Up to 80% of budgets are consumed fixing self-inflicted problems.
Is more upfront planning and analysis the solution?
The natural tendency in traditional, 20th century project management practice is to attempt to reduce risk through more planning, user research, requirements analysis and definition. Unfortunately, these activities just create a false sense of security that the future solution is truly knowable through extensive analysis, definition and planning. It’s arrogant at best. You can’t predict a hand of poker, so why does anyone think that planning and requirements gathering will accurately predict people’s requirements 6, 12 or 24 months away?
In a complex environment, where “more is unknown than known”, the only true way to reduce risk is to apply Complexity Theory. Can we predict the outcome? Can we know what problems will occur during integration? Do stakeholders know what they want before they see it? Can we account today for what might happen tomorrow, or next month, or in six months’ time? Complexity theory helps us understand the future by doing the following:
- Conducting “fail fast” experiments with short timeframes or work cycles. Essentially, don’t do months of work before asking for feedback or assessing alignment to a goal; do a few weeks of work, produce an outcome, and then assess what to do next.
- Examining the results when the experiment has concluded and providing fast feedback. Do this in two-week cycles (“Sprints”). Produce something that can be inspected. This means don’t just document or collect requirements. Produce a working solution. People can comment on a working solution more easily, and give feedback with greater certainty about what they want next, than on requirements in a Word document.
- Then, choosing what experiment to conduct next. Only with feedback on something tangible can you truly know what to do next – deliver more of the same, or change tack to deliver something different. Ultimately, if you’ve chosen the wrong solution you have only lost two weeks instead of months.
This way of working is what Dave Snowden’s Cynefin framework refers to as “Probe, Sense, Respond”.
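The probe–sense–respond cycle described above can be sketched as a simple loop. This is a conceptual illustration only, assuming hypothetical names (`Increment`, `probe`, `sense`, `respond`) and an invented example goal; it is not part of Scrum or any real tooling.

```python
# Conceptual sketch of a probe-sense-respond delivery loop.
# All names and the example goal are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Increment:
    """A small, inspectable outcome produced in one short cycle ('Sprint')."""
    description: str
    feedback: list = field(default_factory=list)

def probe(goal: str, sprint: int) -> Increment:
    """Run a short, time-boxed experiment toward the goal: build something working."""
    return Increment(description=f"Sprint {sprint} increment toward: {goal}")

def sense(increment: Increment) -> list:
    """Inspect the working result with stakeholders and collect fast feedback."""
    increment.feedback.append(f"Feedback on '{increment.description}'")
    return increment.feedback

def respond(feedback: list) -> str:
    """Decide the next experiment: persevere with the plan, or change tack."""
    return "persevere" if feedback else "change tack"

# Two-week cycles: at most two weeks are lost if the chosen solution is wrong.
goal = "Customers can pay online"
for sprint in (1, 2, 3):
    increment = probe(goal, sprint)   # build a working outcome, not a document
    feedback = sense(increment)       # inspect it and gather fast feedback
    decision = respond(feedback)      # feed the insight into the next Sprint
    print(f"Sprint {sprint}: {decision}")
```

The point of the sketch is the shape of the loop: each pass produces something inspectable, and the decision about the next cycle is made only after feedback on that tangible outcome.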
More upfront analysis, requirements documentation, and planning – typified by linear and Waterfall delivery frameworks – will only reduce risk and project failure in a “complicated domain” or “simple domain”. It won’t help in a complex domain.
|Domain|Actions to take|Framework to use|
|---|---|---|
|Simple|This is the domain of legal structures, standard operating procedures, and defined processes and practices that are proven to deliver the intended outcome every single time. Here, decision-making lies squarely in the realm of reason: find the proper rule and apply it.|Waterfall works best here because the future can be predicted. Analyse everything, create a plan for execution based on the repeatable process, then execute the plan. Manufacturing typically works this way: work to the plan and you will always get the same outcome.|
|Complicated|Here it is possible to work rationally toward a decision, but doing so requires refined judgement and expertise. This is the province of engineers, surgeons, intelligence analysts, lawyers, UX practitioners, software architects, and other experts. They plan analysis activities, collect requirements, analyse their findings, and then determine the best course of action to take or solution to apply. Artificial intelligence copes well here: Deep Blue plays chess as if it were a complicated problem, looking at every possible sequence of moves.|Waterfall, supported by expert analysis|
|Complex|The complex domain represents the “unknown unknowns”. Cause and effect can only be deduced in retrospect. In this domain, an experiment needs to be run and its results examined before the next experiment is chosen.|Scrum, an agile framework, operates best here. A goal is set with a plan for an outcome to be achieved in two weeks. It’s ultimately a small experiment with a hypothesis: this plan will achieve this goal by the end of the two weeks. Progress toward the goal is inspected each day, the outcome of the experiment is assessed, and the insights and feedback are fed into the next work cycle (Sprint). The aim is to produce working software, not documentation: the review of the experiment (the “Sprint Review”) asks what was learned from pursuing the goal, and the team then collaborates on what to do next. Inspecting a list of requirements won’t tell you whether the end result will be successful in a complex domain; only working software, and the lessons learned from building it, will de-risk a complex project.|
|Chaotic|In the chaotic domain, a leader’s immediate job is not to discover patterns but to staunch the bleeding. A leader must first act to establish order, then look to see where stability is present and where it is absent, and then respond by working to transform the situation from chaotic to complex, where the identification of emerging patterns can both help prevent future crises and discern new opportunities. Communication of the most direct top-down or broadcast kind is imperative; there’s simply no time to ask for input.|Command and control|
Experiments are designed to build knowledge
Good planning, analysis and design are critical to project success, as are communication and a shared vision of what is being delivered. The fallacy is assuming that all this effort must be done entirely up-front, in totality, by specialists, and then handed down to developers in the form of documentation for them to interpret. Documented requirements can only capture explicit knowledge; they can’t capture the knowledge gained about the subtleties of the context and its associated needs.
An experiment, on the other hand, creates a shared experience. The team collectively discovers whether an assumed solution creates the desired outcome.