
The Problem

Don’t be fooled: despite what you may have heard, been told, or read, the failure rate of projects is not as high as some might want you to believe. I’ll say it again, now more explicitly: there is no reason to believe all these doom-and-gloom articles and expert papers suggesting that a catastrophic proportion of projects have failed to deliver.

What I am referring to are expert reports, published in the last 10-15 years, all of which suggest that the proportion of projects failing to meet some sort of criteria is nearing 70%! Got that? Seven out of every ten projects are a failure. Let’s see what they say:

  1. Chaos Report (1994) – only 16.2% of projects were successful by all measures. The remainder were either partial failures (over 52%) or complete failures (31%).
  2. OASIG Survey (1995) – the quoted IT project success rate revolves around 20-30%, based on its most optimistic interviews.
  3. Chaos Report (1995) – The Standish Group’s research predicts that 31.1% of projects will be cancelled before they ever get completed. Further results indicate that 52.7% of projects will cost over 189% of their original estimates.
  4. KPMG Canada Survey (1997) – 61% of respondents reported details of a failed IT project.
  5. Conference Board Survey (2001) – 40% of the projects failed to achieve their business case within one year of going live.
  6. Robbins-Gioia Survey (2001) – 51% viewed their ERP implementation as unsuccessful.
  7. Dr. Dobb’s Journal (DDJ) Survey (2007) – 72% of Agile projects were successful, compared to only 63% of traditional and data warehouse projects.
  8. Chaos Report (2009) – only 32% of projects were defined as ‘successful’, compared with 35% in 2006.

The Conventional Wisdom

OK, let’s leave these surveys for a moment and attempt to define what ‘success’ actually means. I believe it was John Kenneth Galbraith who, in his 1958 book “The Affluent Society”, coined the term Conventional Wisdom. Conventional Wisdom can be defined as “a belief or set of beliefs that is widely accepted, especially one which may be questionable on close examination”. JKG himself had the following to say about the term: “We associate truth with convenience, with what most closely accords with self-interest and personal well-being or promises best to avoid awkward effort or unwelcome dislocation of life. We also find highly acceptable what contributes most to self-esteem”. Conventional Wisdom, according to JKG, represents a convenient view that may or may not be true. This does not mean that all Conventional Wisdom is false; it does suggest, though, that in some cases its underlying assumptions and truths might collapse on close examination.

So what does it mean to have a ‘successful’ project? Furthermore, once defined, could this definition withstand the rigour of life, i.e. can it actually be achieved in real-life situations?

The strictest definition of project success would probably require that the project be completed on time and on budget while meeting all of its in-scope requirements/specifications. This, by the way, appears to be the definition suggested by The Standish Group, whose Chaos Report seems to be the driver for most future projections and success trends. If we were to adopt this definition, how would we rate the following scenarios? (A short sketch after the list makes the rule concrete.)

  1. The project was delivered on time but was 10% over budget.
  2. The project was delivered on budget but was 10% over time.
  3. The project was delivered on time and on budget but lacked a number of in-scope features.
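
To make the strictness concrete, here is a minimal sketch in Python (the numbers are hypothetical, not taken from any survey) of the Standish-style rule, under which any deviation on any one dimension marks the project as failed:

    def strict_success(cost_overrun_pct, schedule_overrun_pct, missed_features):
        """Standish-style strict rule: any deviation at all means failure."""
        return (cost_overrun_pct <= 0
                and schedule_overrun_pct <= 0
                and missed_features == 0)

    # The three scenarios above, as (cost overrun %, schedule overrun %, missed features).
    # The feature count in scenario 3 is illustrative.
    scenarios = {
        "1. on time, 10% over budget": (10, 0, 0),
        "2. on budget, 10% over time": (0, 10, 0),
        "3. on time and budget, features cut": (0, 0, 3),
    }

    for name, (cost, time, missed) in scenarios.items():
        verdict = "success" if strict_success(cost, time, missed) else "FAILURE"
        print(f"{name}: {verdict}")

All three scenarios print FAILURE: the strict definition leaves no room for partial success.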

The Conventional Wisdom is Wrong

According to the current Conventional Wisdom, projects exhibiting the above attributes will most likely be classified as failed projects. This, however, represents a false reality, as it is based on a false assumption: that project planning is a scientific process that can be executed with a high level of predictability. This is clearly not the case. Project planning and estimation is, despite all claims to the contrary, a process that is largely dependent on subjective human input, and as such it cannot be relied upon to guarantee a 100% success rate. It is not that humans cannot produce close-to-100% successful processes; they most certainly can, and space missions are the best example of this. After all, each space mission is a project that (at least in most cases) ends with the astronauts successfully returned to Earth. But the successful completion of the mission only proves that the scope was achieved; it says nothing about the timeliness or the cost of the project. If we were to adopt the Standish Group’s strict definition, there is a good chance that some (if not all) of the ‘successful’ space missions would be deemed failures, having failed to meet at least one of the ‘cost’, ‘time’ or ‘scope’ criteria.

As I was writing this article I came across a fascinating article in “Scientific American” titled “War Is Peace: Can Science Fight Media Disinformation?”, with the sub-title “In the 24/7 Internet world, people make lots of claims. Science provides a guide for testing them”. The author, Lawrence M. Krauss, states that “The increasingly blatant nature of the nonsense uttered with impunity in public discourse is chilling. Our democratic society is imperilled as much by this as any other single threat, regardless of whether the origins of the nonsense are religious fanaticism, simple ignorance or personal gain.”

I couldn’t agree more. The fast pace at which information is released, and the sheer quantity of it, do not allow us to apply due diligence, use common sense, and challenge the conventional wisdom thrown at us by experts – all claiming to provide us with their processed truth.

So I, for one, choose not to accept this Conventional Wisdom. I do not accept a definition that requires 100%, all-round success for a project to be deemed successful. If I were to accept it, I would probably look for another profession, as it would make me, in most cases, a failed professional – which I don’t believe I am.

What do YOU think?

I value your opinion; if you have any thoughts on the above, please join in and share them with others!


Comments

  1. Pingback: Shim Marom

  2. The fundamental issue with every one of these surveys is that none of them defines the sample space from which it was drawn. This is a common error among those unfamiliar with the guidance of statistical sampling. They send out a survey, collect the responses, and make some kind of adjustment for those not responding.

    In fact, the proper way is first to identify who the surveys should be sent to. This is a population sampling design issue.

    Never believe survey results from consulting firms selling their services.
    Never believe survey results in the absence of raw data that can be confirmed independently of the survey provider.
    Never believe survey results that do not have margins of error associated with the numbers (the sketch below shows what such a margin looks like).
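
    For illustration, here is a minimal Python sketch (assuming a simple random sample, which these surveys almost certainly were not) of the margin of error that should accompany any quoted failure rate:

        import math

        def margin_of_error(p, n, z=1.96):
            """Approximate 95% margin of error for a proportion p
            estimated from a simple random sample of size n."""
            return z * math.sqrt(p * (1 - p) / n)

        # Hypothetical figures: a 70% failure rate quoted from 365 responses
        # (365 is the respondent count commonly cited for the 1994 Chaos Report).
        p, n = 0.70, 365
        print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # -> 70% +/- 4.7%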


  3. Pingback: The Wisdom Of Cornelius. | 7Wins.eu

  4. Pingback: Shim Marom

  5. Pingback: Ron Rosenhead

  6. I present on the topics of Project and Portfolio Management (PPM) and PMOs with great regularity. I ask all of my audiences if they are aware of their project failure rate, and I have yet to have someone say “yes”.

    I use project failure rate statistics to support my contention that something needs to be done about it. When doing so, I always point out the vast differences in how project failure is defined and computed. Then I share my definition of project failure.

    I believe a project should be characterized as a failure when it takes longer than we said, costs more than we said, or does not deliver what we said. I go on to explain that I am allowing for the “acceptable variances defined at the outset of the project” – and that projects are only classified as failures if they are outside of those established thresholds (a sketch of this rule appears at the end of this comment). I also go on to say that just because a project fails, it does not necessarily mean it should not have been undertaken. I have never viewed any of my perceptions as “conventional wisdom”.

    First, I find few organizations that even use the “f-word”, let alone provide their definition of failure when it comes to projects. So in my opinion, the discussion of project failure rates in Enterprises is unconventional in itself.

    Second, few organizations establish reasoned and rational cost, schedule and performance (scope) variance thresholds for projects. I agree with your statement that project planning is “a process that is largely dependent on subjective human input and as such cannot be relied upon to guarantee a 100% success rate”. So if we know that going in, we should ask ourselves what the acceptable level of deviation in each of the categories is for us to approve and sanction the effort. I have found project variance threshold management and administration for the purpose of improving PPM and PM decision-making to be incredibly unconventional.

    Third, a project can “fail” and still be a good business decision. The failure was in the PPM or project execution “process”. If we are outside of our variance thresholds, then something failed somewhere, and calling the project a failure simply alerts us to that fact.

    Lastly, if Enterprises do use the “f-word”, they do so for the purpose of assigning blame. Most have a punitive response to project failure. Until organizations accept the inevitability of project failure and use it as a tool to continually improve their PPM and PM decision-making processes, we won’t see much improvement in the project failure statistics you question in your post.

    Steve Romero, IT Governance Evangelist
    http://community.ca.com/blogs/theitgovernanceevangelist/
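
    A minimal sketch of the threshold-based classification Steve describes (the threshold values below are illustrative assumptions, not ones he prescribes):

        from dataclasses import dataclass

        @dataclass
        class Thresholds:
            """Acceptable variances agreed at the outset (illustrative values)."""
            cost_pct: float = 15.0      # up to 15% over budget is tolerated
            schedule_pct: float = 10.0  # up to 10% over schedule is tolerated
            scope_pct: float = 5.0      # up to 5% of in-scope features may be dropped

        def classify(cost_var, schedule_var, scope_var, t=Thresholds()):
            """Failure only when a variance exceeds its agreed threshold."""
            within = (cost_var <= t.cost_pct
                      and schedule_var <= t.schedule_pct
                      and scope_var <= t.scope_pct)
            return "success" if within else "failure"

        # The 10%-over-budget scenario from the post: a failure under the
        # strict Standish rule, a success under agreed thresholds.
        print(classify(cost_var=10, schedule_var=0, scope_var=0))  # -> success

    Under such thresholds, scenario 1 from the post flips from failure to success, which is exactly the gap between the two definitions.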


    • Hey Steve, thanks for your detailed response.

      Although I don’t fully agree with some aspects of your response, I can’t but fully endorse your last paragraph. At the end of the day it is about awareness of, and preparedness for, the inevitability of project failure (with the understanding that this failure is conceptual only – i.e. based on the definition of failure as the failure to meet a subset of the planned project parameters – cost, schedule, scope, quality, etc.). Using this understanding as a tool for continuous improvement is the key to better performance in the future.

      Thanks again for your response.

      Shim.


  7. Pingback: Steven Romero

  8. Pingback: Steelray Software

  9. Pingback: Crazie Eddie

  10. Pingback: Tony Collins

  11. Pingback: Dr. Jim Steffen

  12. Pingback: Shim Marom

  13. While I look at the results of surveys critically, including the infamous CHAOS Reports, I also treat them as a valuable source of knowledge.

    First, it isn’t a secret that those reports are based on surveys. They ask decision-makers whether the project was on scope, to take the first example. Well, what do you expect as an answer? That the project lacked a few small in-scope features? Come on. Barely anyone on a project team goes deep enough into the details to know that.

    Second, no matter how flawed the method is, as long as the organization keeps it stable throughout the years you can see trends (http://progressblog.com/2010/11/24/organizations-arent-good-at-doing-projects/), and they basically tell you that nothing changes in terms of project success rates.

    While I acknowledge the flaws of the reports you mention, they at least try to be objective, confronting all the data with the same criteria. If we followed the logic of “space flight is a success because we brought people home”, we would have a success rate of 80%+, which would be even more misleading than the statistics we have now.

    I know first-hand a number of projects with significant schedule and budget overruns which ended up with the goal being achieved. The question is: were they successes or not? Was building New Wembley Stadium a success for its main contractor, Multiplex? See http://en.wikipedia.org/wiki/New_Wembley#Litigation as a reference. After all, the second-biggest stadium in Europe was built, and that was the goal…

    I wouldn’t be as quick as Glen to discard these reports as a source of knowledge about the industry, yet I don’t trust statistics uncritically. We are probably not as bad as the CHAOS Reports state, but we aren’t much better either.


    • Hi Pawel, thanks for your thoughtful response, which raises a number of conceptual questions to which I will attempt to respond below.

      The key problem with your assessment is that you acknowledge that the Chaos report is “infamous” while at the same time referring to it as a valuable source of knowledge. From my perspective this is a bit of an oxymoron, as something in that statement has got to give, and based on my assessment, and that of many others in the industry, the credibility of the Chaos report as a source of knowledge is highly suspect.

      The question we need to ask ourselves is what the conditions are for determining whether or not a project has failed. The definition employed by the authors of the Chaos report defines success as “The project is completed on-time and on-budget, with all features and functions as initially specified”. This is a highly contentious definition, as it would objectively render most projects unsuccessful. I have made the observation in other posts that, by definition, due to our physical limitations, it is not possible to achieve a 100% success rate in any human endeavour. Predicating success on a definition according to which all parameters need to be achieved is a recipe for failure that does not sit well with reality.

      I know of only one serious piece of research into the issue of projects’ success and failure, conducted by Professor Bent Flyvbjerg (see http://quantmleap.com/blog/2010/11/bent-flyvbjergs-research-on-cost-overruns-in-transport-infrastructure-projects/). His research is what is so desperately needed in the project management domain, addressing the core issues of projects’ failure. Contrasting his research papers with the ones published by the Standish Group demonstrates more than anything how lacking the Chaos report is and how deficient it is compared with studies based on real and credible information.

      My key concern regarding these ‘expert reports’ is that they are aimed primarily at promoting their authors’ own agendas. This is not dissimilar to ‘expert’ papers about Social Media, Project Management 2.0, the role of Social Networking in Project Management, etc. All are catchy and easy to remember (and subsequently quote) concepts, but their validity, based on the published evidence, is questionable, to say the least.

      Cheers, Shim.


  14. Pingback: Pawel Brodzinski

  15. Pingback: Ty Kiisel

    • Hi Ty, interesting comment, but how would you define it succinctly so that it is easy to ascertain whether the intended value was attained or not?


  16. Pingback: Project Failure: We’re at it Again

  17. Started to leave a comment and…couldn’t…keep…the…word…limit…down.

    So I wrote a post with my rant instead:

    http://pmstudent.com/project-failure/

    -Josh


  18. Pingback: Purple Projects

  19. Hopefully we can all agree that the point of project work in the first place is to provide some kind of value to the org. It might be a positive ROI for some, or better governance for others. Whatever the value, if it was identified and articulated before the project started, whether or not the project succeeded is a measure of how well it met those original objectives. (I think this allows for changes in scope, timeline, and costs while providing a pretty straightforward measure of success.)


    • Hi Ty, thanks for joining in; I know you’ve long been attending to this issue in your own blog.

      While I agree in general terms with your comment, I wonder how many organizations you ACTUALLY know that would be happy to commit themselves to a clear definition, such as the ones you propose. My experience is that software development in commercial organizations lacks that level of articulation, the result of which is that projects can be declared a success or a failure based on the whim of one executive or another. After all, what is the Chaos report if not a collection of subjective views by certain stakeholders, based on their own interpretation and subjective experience?

      Glen Alleman is a harsh critic of IT software development projects, and as sad as it is, he is 100% correct. My experience demonstrates that projects run in that domain lack the robustness that allows the articulation of what success ought to look like, with all the ranges and variances that define its acceptable boundaries. And until such time as this is universally accepted, we will have to discuss the merits of reports such as the Chaos report and others like it.

      Cheers, Shim.


  20. Shim,

    My point wasn’t to defend the Chaos Report. But to answer you: it’s not me who made it infamous (or famous in the first place). It is just probably the most frequently cited report on the state of projects, in both positive and negative contexts. That’s why I label it infamous on occasion.

    And yes, I consider it a valuable source of knowledge despite the fact that it has its flaws. I even wrote about that a few years ago: http://blog.brodzinski.com/2007/06/flaws-of-chaos-report.html.

    I never said the Chaos Report is the best one around or that it used the best method to assess the state of projects. But it shows some patterns very clearly, and the same goes for trends.

    Now, the more interesting thing to discuss than this or that report is how you assess the projects I’ve mentioned. It is an interesting observation that we consider projects like the Apollo 13 mission or Shackleton’s Antarctic adventure successes. Why? Thanks to the simple fact that human lives were saved. Lives which, by the way, wouldn’t have been put in hazard if those projects hadn’t existed. But in terms of pretty much any project evaluation they would both be marked as failures.

    Unfortunately, in every survey we need some static criteria against which we assess the answers. And the last time I checked, gut feeling didn’t count as one.

    So as long as the criteria are clear to everyone, you can draw your own conclusions, even if your definitions are different. The trick is to be critical, but to avoid instant rebuttal of everything which doesn’t resonate well with our point of view.


    • Hi Pawel, thanks again for taking the time to respond.

      I suspect we agree on the concept (i.e. that it is necessary to have clear criteria defining the range of success and failure). Where we seem to differ is on whether or not it is OK to take any criteria (for instance the ones employed by the Standish Group) and use them for the purpose of credible analysis. And I’m happy to leave this point unresolved, given that I simply can’t accept a definition that treats 100% as success and anything less as ‘not’ success.

      In most projects (at least the ones I’ve been ‘lucky’ enough to get involved with) there is a lack of clear articulation of what ‘success’ looks like (what Glen might call ‘done’) and a lack of boundaries, ranges, or thresholds for that ‘done’. From my experience, organizations, IT departments and business stakeholders do not want to engage in a serious discussion regarding what success ought to look like, as a) it is time-consuming and requires some mental effort; and b) leaving it undefined provides an easy way to apportion blame should things go belly up.

      So until such time as I (and, I suspect, you and others) are able to add that level of clarity and accountability, we will be at the mercy of surveys based on executives’ subjective appreciation of reality, and the picture of that subjective reality (as we all damn well know) is far from pretty.

      Cheers, Shim


  21. Shim,

    You’re right – we all wander around the definition of doneness. The problem, which hardly seems solvable to me right now, is a general definition which would suit (at least) most projects.

    I was part of a number of projects which the Chaos Report would classify as challenged, but for the team and (most importantly) for the client they were successes. Most of them did have a ‘done’ definition attached, namely the specifications. The problem was that there wasn’t enough work put into creating good specs, so the definition was half-assed.

    And then, I’ve seen a number of times that the definition of doneness changes over time. Apollo 13 had some mission to accomplish, but no one remembers what it was. It changed to “save people’s lives” after the accident.

    But then again there are projects, like New Wembley, which fulfil the planned goal – building a huge stadium in this case – but can’t be unequivocally assessed. I guess for the client it was a success. After all, the contract was constructed in such a way that they paid what they expected to pay and they got what they expected to get. On the other hand, I don’t believe the main contractor considers it a success, given the huge loss on the project and the lawsuits which followed it.

    To summarize: I don’t think we’re going to find criteria which would be widely accepted by the community. There will always be some doubts and comments.


    • You’re right, mate. A universally agreeable definition is unlikely to emerge, and as project managers we’d better do our best to ensure it is not our stakeholders who report back to the Standish Group that their project has been unsuccessful.

      Cheers, Shim.


  22. Pingback: Pawel Brodzinski

  23. Pingback: Project Management At Work » Blog Archive » Weekly project management news roundup: Projects failure rate – the conventional wisdom is wrong; Definition of done; and other interesting posts

  24. Pingback: What Is Project Success Anyway? | Project Management Competence Assessment Tools

  25. Pingback: Shim Marom

  26. Just one question/request: I am looking for references to the results (‘expert reports, published in the last 10-15 years’) you are presenting on this webpage. I really need them.

    Cheers,
    MAP


    • Hi MAP, I have lost track of the actual references, but if you do a search based on the items mentioned in my post you will find them easily.

      Regards, Shim.


  27. Pingback: Richard House
