Introduction

The third installment in ‘The Case for the Business Case’ series reviews a 2006 paper by Peter Weill and Sinan Aral, published in the MIT Sloan Management Review, titled “Generating Premium Returns on Your IT Investments”.

But first a recap:

In the first part of this series I elaborated on a model suggested by Jeanne W. Ross and Cynthia M. Beath, built around the association between two dimensions: Technology Scope and Strategic Objectives. Technology Scope covered the investment spectrum between Shared Infrastructure and Business Solutions, and Strategic Objectives covered the spectrum between Short Term Profitability and Long Term Profitability, resulting in four investment areas categorized as: Renewal, Transformation, Process Improvement and Experiments. The authors suggested that organizations need to make conscious decisions about what portion of their investment funds will be allocated to each of these categories, and then evaluate any request for funds against that allocation.

In the second part I introduced an approach suggested by Professor John Ward, Professor Elizabeth Daniel and Professor Joe Peppard, who propose a method to help the organization better understand the stated benefits expected from executing its IT investments. This approach was a response to evidence suggesting that many business cases either exaggerate the expected benefits or lack elaboration on the business change required to have them realized.

Generating Premium Returns on Your IT Investments

The paper addresses two points:

  1. Most organizations make IT investment decisions along a range of four categories (and this will be the focus of this post); and
  2. Gaining a better return on IT spend requires organizations to exhibit a certain set of practices and capabilities – the authors refer to organizations that exhibit these practices and capabilities as ‘IT savvy’. This post, however, will not focus on this point as it does not directly contribute to the discussion concerning the way organizations structure their approach to approving IT investment requests.

In an approach somewhat similar to the one we already encountered in Jeanne W. Ross and Cynthia M. Beath’s article, the authors suggest that most organizations make IT investments along four broad classifications:

Considering IT Investments as a Portfolio

Transactional Investments:

These are investments that “are used primarily to cut costs or increase throughput for the same cost”.

Information Investments:

These are used to “provide information for purposes such as accounting, reporting, compliance, communication or analysis“.

Strategic Investments:

These are used to “gain competitive advantage by supporting entry into new markets or by helping to develop new products, services, or business processes“.

Infrastructure Investments:

These are funds allocated to the creation of “shared IT services used by multiple applications…depending on the service, infrastructure investments are typically aimed at providing a flexible base for future business initiatives or reducing long-term IT costs via consolidations”.

The authors acknowledge that, for any single project, the associated investment can spread over more than one investment type. An IT investment will most often serve more than a single purpose, and thus its expected outcomes and benefits are likely to be spread across different investment domains.

They further suggest that this portfolio allocation approach “underscores the importance of how organizations use technology instead of focusing on the technology itself”. This puts the responsibility on senior managers to have a clear vision of what it is they want to achieve from their IT investment. The paper further suggests that the business value derived from each investment classification is different. For example, transactional investments are likely to result in a lower cost of goods sold, while strategic investments are likely to result in greater innovation and provide a platform for future growth.
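
To make the portfolio idea concrete, it can be sketched in a few lines of code: set a target split across the four investment classes and check any proposed spend against it. This is purely illustrative; the target percentages and names below are my own assumptions, not figures from the paper.

```python
# Target split across the four Weill & Aral investment classes.
# The percentages are illustrative only.
TARGETS = {"transactional": 0.25, "information": 0.15,
           "strategic": 0.25, "infrastructure": 0.35}

def allocation_drift(spend):
    """Compare actual spend per class against the target split.

    spend maps class name -> dollars; returns a dict of
    class -> (actual_share, target_share, drift).
    """
    total = sum(spend.values())
    report = {}
    for cls, target in TARGETS.items():
        actual = spend.get(cls, 0) / total if total else 0.0
        report[cls] = (round(actual, 2), target, round(actual - target, 2))
    return report

# Example: a portfolio overweight in transactional spend.
report = allocation_drift({"transactional": 500, "information": 100,
                           "strategic": 200, "infrastructure": 200})
print(report["transactional"])  # (0.5, 0.25, 0.25)
```

A report like this makes the conversation explicit: a request for more transactional funding would have to argue against an already-overweight position rather than stand on its own ROI alone.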

Some final thoughts

The importance of this paper is not so much in the actual model it suggests for classifying IT investments as in the principle it establishes. Given the limited pool of funds available for IT investments, organizations ought to create a strategic view of how they want their funds to be split in order to achieve a balance between their various competing needs – a sentiment firmly planted in the discussion made in the first review in this series. Unlike the first review, however, the classification system proposed here does not explicitly recognize the need to allocate funds to ‘experimentation’ (unless this can be bundled under Innovation).

Introduction

In the first part of this series I elaborated on a model suggested by Jeanne W. Ross and Cynthia M. Beath, built around the association between two dimensions: Technology Scope and Strategic Objectives. Technology Scope covered the investment spectrum between Shared Infrastructure and Business Solutions, and Strategic Objectives covered the spectrum between Short Term Profitability and Long Term Profitability, resulting in four investment areas categorized as: Renewal, Transformation, Process Improvement and Experiments. The authors suggested that organizations need to make conscious decisions about what portion of their investment funds will be allocated to each of these categories, and then evaluate any request for funds against that allocation.

Building Better Business Cases for IT Investments

Today’s literature review will provide an overview of an approach suggested by Professor John Ward, Professor Elizabeth Daniel and Professor Joe Peppard, in a paper titled “Building Better Business Cases for IT Investments” (the paper was originally submitted for publication to the California Management Review in September 2007, though I could not find the published article in the journal itself).

The focus of the paper is on establishing a method that will enable the organization to better understand the stated benefits expected from executing its IT investments. This focus is a result of evidence suggesting that many business cases either exaggerate the expected benefits or lack elaboration on the business change required to have them realized (see also “Bent Flyvbjerg’s Research on Cost Overruns in Transport Infrastructure Projects”, where it could be argued that the affected business cases could have also been ‘inflicted’ with overestimating the expected benefits).

The approach suggested in the paper is based on the following principles:

  • Different types of benefits are recognized – the focus is not solely on financial benefits
  • Measures are identified for all benefits, including subjective or qualitative benefits
  • Evidence is sought for the size or magnitude of the benefits included
  • A benefit owner is identified for each benefit – to ensure commitment and aid benefit delivery
  • Benefits are explicitly linked to both the IT and the business changes that are required to deliver them – this ensures a consideration of how the business case will be realized is also included
  • Owners are also identified for the business changes – in order to ensure they are completed and result in benefit delivery.

The paper suggests a multi-step approach to building an adequate business case:

1. Define Business Drivers and Investment objectives

Business Drivers refer to the issues and problems (internal and external) faced by the organization. Investment Objectives refer to what the proposed investment seeks to achieve. Determining and enumerating the Business Drivers answers ‘Why’ we want to make this investment, while articulating the Investment Objectives elaborates on ‘What’ the proposed investment would seek to resolve.

2. Identify Benefits, Measures and Owners

Having identified the Business Drivers and Investment Objectives “it is then necessary to identify the expected benefits that will arise if the objectives are met. Investment objectives and benefits differ in the following way: investment objectives are the overall goals or aims of the investment, which should be agreed by all relevant stakeholders. In contrast, benefits are advantages provided to specific groups or individuals as a result of meeting the overall objectives. Provided the benefits to different groups or individuals do not give rise to conflict, there is no need to agree them”.

Identifying the benefits, while important, must be supplemented by two additional activities: a clear articulation of how the expected benefits could be measured, and the naming of the person who will own each benefit. The owner’s ‘job’ would be to provide a ‘value’ for the benefit and to ensure there is a plan to have the benefit realized once the investment has been made.

3. Structure the Benefits

Benefits identified are required to undergo another level of categorization and classification along two specific dimensions:

Benefits - Dimensions

1. Organizational changes enabling Benefits

Benefits can arise as a result of different types of business changes, classified by the authors as:

a. Do New Things – where the “organization, its staff or trading partners can do new things, or do things in new ways, that prior to this investment were not possible“; or

b. Do Things Better – where the “organization can improve the performance of activities it must continue to do“; or

c. Stop Doing Things – where the organization can “stop doing things that are no longer needed to operate the business successfully”.

2. Degree of explicitness

The ‘Degree of explicitness’ refers to the degree to which a value assigned to the benefit can be substantiated (perhaps a better name would be ‘the level of substantiation’). The authors identify four levels of ‘explicitness’ that can be ascribed to each benefit:

a. Observable benefits – “these are benefits which can only be measured by opinion or judgement. Such benefits are also often described as subjective, intangible or qualitative benefits“.

b. Measurable benefits – “these are defined as benefits where there is already an identified measure for the benefit or where one can be easily put in place“.

c. Quantifiable benefits – these are benefits “where an existing measure is in place or can be put in place relatively easy. However, in addition to being able to measure performance before the investment is made, the size or magnitude of the benefit can also be reliably estimated“.

d. Financial benefits – “these are benefits that can be expressed in financial terms. A benefit should only be placed in this [category] when sufficient evidence is available to show that the stated value is likely to be achieved”.

It will be easier to understand the way the two dimensions (Organizational Changes and Degree of Explicitness) interact with each other by using a number of examples (taken from the paper – see there for the complete list):

Benefits - Observable

Benefits - Measurable

Benefits - Quantifiable

Benefits - Financial
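
One way to internalize the two dimensions is to treat them as fields on a benefit record. The sketch below is my own illustrative encoding, not something proposed by the authors; it simply enforces that each benefit carries a change type, an explicitness level, a measure and an owner, as the approach requires.

```python
from dataclasses import dataclass

# Hypothetical encoding of the two dimensions; the names are mine.
CHANGE_TYPES = ("do_new_things", "do_things_better", "stop_doing_things")
EXPLICITNESS = ("observable", "measurable", "quantifiable", "financial")

@dataclass
class Benefit:
    description: str
    change_type: str   # one of CHANGE_TYPES
    explicitness: str  # one of EXPLICITNESS
    measure: str       # how the benefit will be measured
    owner: str         # the person accountable for realizing it

    def __post_init__(self):
        if self.change_type not in CHANGE_TYPES:
            raise ValueError(f"unknown change type: {self.change_type}")
        if self.explicitness not in EXPLICITNESS:
            raise ValueError(f"unknown explicitness: {self.explicitness}")

b = Benefit("Reduced cost per invoice processed", "do_things_better",
            "financial", "cost per invoice", "Head of Accounts Payable")
print(b.explicitness)  # financial
```

Forcing every benefit through a structure like this is the point of the method: a benefit with no measure or no owner simply cannot be recorded.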

4. Identify Costs and Risks

“In addition to the benefits, a full business case must obviously include all the costs and an assessment of the associated risks”.

In Summary…

The authors make the observation that “of all the aspects of business case development that differentiated the successful from the unsuccessful groups, evaluation and review of the benefits was where the differences were most pronounced. It would seem reasonable to suggest that the rigor with which an organization appraises the results of its IT investments will significantly affect the quality of the business cases on which investment decisions are made“.

Some final thoughts

The key message of the reviewed paper is that there is more to the Business Case than being a vehicle for obtaining funding. Important as it may be, funds allocation is a consequence of, and secondary to, the ability to clearly articulate, understand, manage and realize benefits. The paper allocates very little space to explaining how costs and risks are to be identified, assuming – I guess – that compared to the task of identifying and classifying the benefits, cost identification is relatively easy or, perhaps, not as important (in relative terms). A better understanding of benefits would enable management to determine how much they are willing to invest to realize those benefits.

Think about it!

Introduction

The Business Case is used in most project management methodologies as a fundamental tool for capturing, articulating and rationalizing the call for taking a particular business action (in our context – approving the initiation or the continuation of a project or a program).

The need to have a Business Case arises from the fact that organizations (or those who ‘own’ the money) need to make investment decisions and select those investments that provide a better value-for-money proposition than other possible investments.

The key for understanding business cases is to understand the notion of ‘decision-making’. Business Cases are a tool used to assist in the process of rational decision making and as such they need to provide the necessary information, and only the necessary information, required to help facilitate that process.

The professional literature is rife with examples of public and private sector projects that have failed to deliver on their expected benefits; and while not all these failed projects represent an invalid or defunct business case, a case can be made (pun not intended) to suggest that some proportion of these business cases have been found, in hindsight, to be – at the very least – incorrect if not completely invalid.

Furthermore, business cases (as pointed out by Andrew McAfee in “The Case Against the Business Case”) are, by their very nature, premised on a notional ROI that is – almost by definition – greater than 100% (a point Andrew used to pose the following question: if so many business cases promise an ROI > 100%, shouldn’t they start them all “and not stop until the marginal return is less than the return from traditional investments like advertising, R&D, capacity expansion, etc?”). With this influx of business cases, all promising a positive ROI, organizations lack a sustainable process for determining which ones ought to go through the investment pipeline and which should be put on hold or outright rejected.
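
McAfee’s logic can be illustrated with a toy funding rule: rank candidate business cases by promised ROI and fund them until the marginal return drops below the hurdle rate offered by traditional investments. All names and figures here are invented for illustration; this is a sketch of the argument, not a real appraisal method.

```python
# Toy funding rule: greedily fund cases in descending-ROI order until
# the next case's ROI falls below the hurdle or the budget runs out.
def fund_until_hurdle(cases, hurdle, budget):
    """cases: list of (name, cost, roi), roi as a fraction (1.2 == 120%)."""
    funded, remaining = [], budget
    for name, cost, roi in sorted(cases, key=lambda c: c[2], reverse=True):
        if roi < hurdle or cost > remaining:
            break
        funded.append(name)
        remaining -= cost
    return funded

# Invented figures: two cases clear the hurdle, one does not.
cases = [("CRM refresh", 300, 1.8), ("Data warehouse", 400, 1.3),
         ("Customer portal", 200, 0.9)]
print(fund_until_hurdle(cases, hurdle=1.0, budget=1000))
# ['CRM refresh', 'Data warehouse']
```

Of course, the whole problem McAfee points to is that the stated ROIs feeding such a rule are rarely trustworthy, which is why a ranking like this cannot substitute for a funding process.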

This post is the first in a series of posts I plan to publish on the topic of IT Investments. Each post will focus on reviewing a single idea / approach published in the professional literature.

Beyond the Business Case: New Approaches to IT Investment

In a 2002 article in the MIT Sloan Management Review, Jeanne W. Ross and Cynthia M. Beath propose a framework for tying business objectives with successful investments. Their framework (see chart below) is built around the association between two dimensions: Technology Scope and Strategic Objectives.

A Framework for IT Investment


The intersection between these two dimensions results in the identification of four types of investment opportunities (see chart below):

Investment in Transformation:

“Transformation investments are necessary when an organization’s core infrastructure limits its ability to develop applications critical to long-term success…Transformation initiatives are often risky, undertaken when companies have determined that not rebuilding infrastructure significantly is even riskier”.

Investment in Renewal:

“The shared or standard technologies introduced when infrastructures are transformed eventually become outdated. To maintain the infrastructure’s functionality and keep it cost-effective, companies engage in renewal”. Renewal investment enables “the same business outcomes, but reduces downtime and maintenance costs”.

Investment in Process Improvement:

“Business applications leverage a company’s infrastructure by delivering short-term profitability through process improvements. Business-process improvements should be low-risk investments because, unlike transformation initiatives, they focus on operational outcomes of existing processes”.

Investment in Experiments:

“New technologies present companies with opportunities or imperatives to adopt new business models. To learn about those opportunities or imperatives and about the capabilities and limitations of new technologies, companies need a steady stream of business and technology experiments. Successful experiments can lead to major organizational change with accompanying infrastructure changes or to more incremental process-improvement initiatives”.

Characterizing IT Investments


The authors acknowledge that while, on the surface, these four types of IT investment are conceptually unique and easily distinguishable, in practice they are much more difficult to categorize. For example, a successful experiment may trigger an investment in process improvement, and a process improvement may result in a transformation. With that in mind, organizations need to be able to deal with the following two challenges:

  1. How much funding should be allocated to each investment type; and
  2. How to select projects within each type.

The authors suggest that “The process of distributing funds across investment types demands a vision for how IT will support its core business processes…most companies constantly compare their core-process-support capability with their desired capability. The comparison usually provides the initial basis for allocating funds to transformation, renewal and process improvement. In contrast, funding for experiments may depend more on perceived opportunities of new technologies and the condition of the infrastructure“.

They acknowledge that “No single technique can guide investment within all four types. Ways of prioritizing differ according to the investment’s technology scope and business objectives” but then go on to offer various considerations and case studies to show how some companies have determined what investments they will initiate within each of these types.

Some final thoughts

The article is a great brain teaser in that it provides a tool that can be easily integrated into the organization’s decision-making process. The parameters introduced are easy to understand and can be absorbed into the organization’s vocabulary without the need for substantial adaptation. The investment spectrum of transformation, renewal, process improvement and experiments is one most will be comfortable with, and one which executives will be able to use without the need to undergo substantial conceptual learning.

Greetings from Melbourne, Australia.

I can’t prove it but I believe that the way we execute the precepts of project management is, at least to some degree, determined and influenced by our cultural surroundings.

Melbourne is, by all accounts, a fantastic cosmopolitan city. It is a place where people of diverse and different backgrounds, cultures and nationalities have come together and call this place ‘home’.

While the ‘hard’ aspects of project management (by which I mean to include the core techniques used by most project management methodologies) are fairly universally implemented, it is in the ‘soft’ aspects that people of differing backgrounds will differ from one another.

Extrapolating a little from my experience in the Southern Hemisphere (both in New Zealand and Australia), I would suggest that the management style exhibited in these countries is far different from the one illustrated in American-based movies and TV series. This is equally manifested in the area of project management, where the need to deliver is complemented by a ‘fair go’ approach.

On a different front, just like the rest of the world, Australia is always quick to adopt current and innovative management ideas. In recent years the ‘migration’ to agile and agility is taking hold in many organizations. And just like other countries, Australia too is undergoing a transition from a bottom-up to a top-down drive for implementing agility across the organization. This, however, is not a seamless or easy journey and many companies are in the process of finding their own way and inventing their own path in that transformation effort.

While I live and work in Melbourne, I have access to the vast knowledge base, experience and stories produced on a regular basis around the world. The ability to listen, read and collaborate with people from all over the world makes the geographical separation that much less important. The use of social media tools enables me to connect with other like-minded individuals at any time and in any place, and I can’t wait to see what other technologies will come our way in years to come to make such interactions and collaborations even easier.

About “#PMFlashBlog – Project Management Around the World”: This post is part of the second round of the #PMFlashBlog where over 50 project management bloggers will release a post about their view of project management in their part of the world. 

Introduction

Despite our best endeavors to codify many aspects of project management, a lot of what we do in project management is still ‘situational’ and ‘variable’. That is, in project management we need to react to various situations, and these situations are the result of any number of variables whose impact cannot be fully predicted, understood or comprehended in advance (and occasionally not even while or after they occur).

In this post I want to elaborate on the applicability of a common framework that project managers can use in order to map various situations in their project landscapes. The framework I will use for this discussion is called Cynefin (pronounced ‘kuh-NEV-in’).

Discussion

Cynefin, the details of which will be elaborated below, provides a conceptual framework for making sense of the different landscapes faced within and by projects. By ‘faced within and by projects’ I mean that this framework can be used by the project manager to understand the various landscapes faced by the project (for example, the stakeholders’ landscape vs the development team landscape – and don’t worry if you can’t follow this now, all will be clear by the time you finish reading this post) or the landscape of the entire project compared with other projects (for example, a web development project vs an R&D project).

The Cynefin framework recognizes that the situations and challenges we face can belong to one of four domains:

The ‘Simple’ (‘known’) domain

The ‘Simple’ domain is characterized by the following attributes:

  • There are known Cause and Effect relationships and these relationships are repeatable, perceivable, predictable and can be determined in advance
  • Challenges can be addressed using Best Practice approaches
  • Activities can be usually codified and documented into Standard Operating Procedures and Work Instructions
  • Best suited for Command and Control style of management

In the context of project management, projects that operate in this space would be ones where the domain of execution is known, regular, predictable and carries very low risk. For example, a software house specializing in the delivery of basic small business web-sites, where these web-sites follow a regular delivery routine and are subject to similar terms and conditions.

The way to deal with ‘simple’ problems (i.e. the way to make decisions) is by applying the sequence of sense-categorize-respond. That is, you start by assessing (or analyzing) the facts of the situation, then categorize them (i.e. determine what best practice is relevant to deal with the situation), and then implement and execute that practice.

The ‘Complicated’ (‘knowable’) domain

The ‘Complicated’ (‘knowable’) domain is characterized by the following attributes:

  • The links between Cause and Effect are less apparent and not self-evident; but are able to be uncovered
  • No clear ‘Best Practices’ but there are known ‘Good Practices’

In the context of project management, projects that operate in this space would be ones where the domain of execution can be determined by utilizing existing expertise and the project’s risk can be assessed and managed. Less than trivial software development projects (i.e. projects where the level of uncertainty is not insurmountable) would fall into this space.

The way to make decisions in the ‘complicated’ domain is by applying the sequence of sense-analyse-respond. That is, you determine what possible practices would be appropriate for dealing with the situation and then, having selected one (based, perhaps, on the availability of experts in that particular domain), implement and execute it.

The ‘Complex’ (‘Unknowable’) domain

The ‘Complex’ (‘unknowable’) domain is characterized by the following attributes:

  • The links between Cause and Effect are only clear in retrospect
  • No obvious ‘best practices’ or even ‘good practices’ but a possible practice can emerge as a result of controlled experimentation where quick learning can be achieved.

In the context of project management, projects that operate in this space would be ones with a high level of uncertainty, but where low-cost-of-failure experiments can be used to narrow down the uncertainty and suggest an acceptable path forward. The types of projects that fall into this space will be innovation or R&D projects.

The way to make decisions in the ‘complex’ domain is by applying the sequence of probe-sense-respond. That is, you start by probing (i.e. trialing various options using experimentation), then sense which methods succeeded, and respond by adopting them as patterns for future operations.

The ‘Chaotic’ (‘Unknowable’) domain

The ‘Chaotic’ (‘unknowable’) domain is characterized by the following attributes:

  • There are no Cause and Effect relationships
  • No point in looking for the right answers (as no right answer exists)

In the context of project management, projects that operate in this space would be ones with high levels of uncertainty through and through. This could include projects lacking agreement on scope, business value or mode of execution, in a technologically shifting environment.

The way to make decisions in the ‘chaotic’ domain is by applying the sequence of act-sense-respond. The first thing that needs to be done is to take some action (which may or may not work) in an attempt to stabilize the environment and reduce the chaotic nature of the project. One example for a possible action would be to simply stop the project but other options are certainly possible.
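
The four decision sequences described above lend themselves to a simple lookup. The function below is my own sketch; the domain names and sequences follow the post.

```python
# The domain -> decision-sequence mapping described in this post.
DECISION_SEQUENCE = {
    "simple":      ["sense", "categorize", "respond"],
    "complicated": ["sense", "analyse", "respond"],
    "complex":     ["probe", "sense", "respond"],
    "chaotic":     ["act", "sense", "respond"],
}

def sequence_for(domain):
    """Return the decision sequence for a Cynefin domain name."""
    try:
        return DECISION_SEQUENCE[domain.lower()]
    except KeyError:
        raise ValueError(f"not a Cynefin domain: {domain}") from None

print(sequence_for("Complex"))  # ['probe', 'sense', 'respond']
```

The value of the table is the contrast it makes visible: only the ordered domains start with ‘sense’, while the unordered ones must first probe or act before any sense-making is possible.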

Some final notes

There is much more to the Cynefin framework than described above and you are encouraged to explore it further here.

Determining your position in the project’s organizational landscape is important not only because it can prompt you to take the appropriate corrective actions but also because it could prevent you from applying the wrong solutions.

In the context of recent #NoEstimates discussions, it seems to me that the applicability (or rather the suitability) of the #NoEstimates argument is really limited to the ‘Unknowable’ domains. There is no valid reason to suggest that within the ‘Known’ and ‘Knowable’ domains estimates cannot be provided (as a matter of course in the ‘Simple’ domain, and with some expert advice in the ‘Complicated’ domain).

Think about it!

Introduction

The raging debate surrounding the #NoEstimates concept has left me with the sense of frustration one usually feels when coming across unfinished business. While a number of individuals on both sides of the debate have attempted to address the key differences of opinion, the question still seems unresolved, with each side sticking to its guns, unable to agree on common ground or a framework of understanding in which all parties can feel vindicated. In this post I will attempt to outline the key arguments for and against #NoEstimates. Where possible and appropriate I will quote these arguments verbatim, sourcing them from a number of blogs where this topic has been discussed (sometimes indirectly) extensively. I will provide links to these quotes and do my utmost to ensure I do not take them out of context.

The format I plan to adopt for this post is as follows. Each argument, in favor of or against the #NoEstimates proposition, will be listed, and a statement in support and a statement against will be provided. Whenever appropriate I will add further commentary where I will attempt to narrow the gap and articulate the assumptions arising from each or both arguments.

By the end of this post I hope you will have a better appreciation of the arguments and will be able to articulate your own answers to the following questions:

  1. Are cost estimates required in order to manage software projects?
  2. Are cost estimates an effective tool for controlling costs?
  3. Do estimates stifle creativity and kill innovation?
  4. Do people need estimates? And if so, why?
  5. Does the very act of estimation result in the creation of uncertainty?
  6. Is estimation a practice still hanging over from the Waterfall era?
  7. Is No Estimation better than Bad Estimation?
  8. Is Estimation really just a form of Guessing?
  9. Are estimates necessary for Governance? Is it reasonable to require estimates for the purpose of pacifying governance needs?
  10. Is there any point in providing estimates when it is known that many projects fail due to lack of credible estimates? And aren’t estimates a tool used to apportion blame afterwards anyway?
  11. Is costing a necessary tool for determining business value?
  12. Does the scope drive the budget or does the budget drive the scope?
  13. If estimates are required in order to support decisions, can decisions be made without estimates?

We have a lot to cover so let’s make a start!

Discussion

1. Making software development decisions based on estimates is counterintuitive

#NE – There is a tendency in software development to, counterintuitively, take the reverse approach to budgeting from the one we employ when making purchasing decisions in our own lives. Instead of deciding our budget based on how much we have available, or are willing to spend – as we would personally do – we decide it using an estimate from the supplier of how much they tell us the software we want will cost. So, while we have a pretty good appreciation of how much we have, want, or are willing to spend, we are unnecessarily engaged in a bidding process with the view to finding the cheapest bidder in order to save money and squeeze as much as we can from our real budget. In such a case, would we not gain a better outcome if we worked iteratively and collaboratively with the supplier based on our budgetary and / or time constraints? (link).

#E – The claim that iterative and collaborative work can be used as a substitute for up-front estimation of cost is challenged by Mike Cottmeyer in “Should You Use Agile to Build Your Next Home?“. The gist of his argument is that most people, when spending their own money, want to have some idea of what they are going to get when the time and money run out. They want some assurance that they’ll have enough of the product they have ordered to make the whole investment worthwhile. Suggesting incremental delivery, while appealing in theory, is not necessarily a practical outcome, as even fully functional software, when lacking the greater context, might be unusable (just as a fully functional kitchen might not be usable unless the bedrooms and the other amenities have also been completed).

The other factor worth mentioning here is the behavioral aspect driven through our understanding of economic theories. The concepts of Scarcity, Opportunity Costs and Cost Benefit Analysis dictate that people, intuitively, would look to determine future value before they make an investment decision. This determination is done, among other methods, by comparing one perceived outcome against other alternatives, and selecting the one that maximizes the perceived value (or Return on Investment).

2. Estimates are seldom correct so why use them?

#NE - A casual search on Google will reveal that faulty or wrong estimation is seen as one of the leading reasons for projects’ failures. Proponents of  the #NoEstimates proposition raise the following arguments regarding the lack of credibility associated with the provision of estimates:

  • Estimates are often gamed, and for various reasons (link)
  • They are often based on zero to low knowledge (link)
  • They are often being plucked out of thin air (link)
  • Time and cost estimates are based on an up-front estimate of scope (i.e. a guess) and as such are guesses at best, leading to obvious dysfunctions like death-marches, low quality, etc. (link)
  • Asking teams to estimate how long their work will take has connotations that their output is being measured by an external party (management), creating an environment of fear and massaging figures to reflect what is desired rather than what i predicted (link)
  • We should’t be defining all our scope up-front, meaning we shouldn’t estimate all our scope up-front, meaning we shouldn’t be defining our delivery date based on our scope (link)
  • It seems like we keep failing at getting good at estimates, even though we are working hard at becoming better at doing so, then maybe we are expecting estimates to do something for us that just can’t be done. (link)
  • Estimates, quite often, are a result of a a wild guess, aimed at providing some numbers that seem plausible and will allow the decision makers to approve the start of a project with a clear conscience. That is, when things sour later on, they can always blame the developers for giving them inaccurate estimates. And the developers can always blame the ‘requirements people’ for giving them unclear, incorrect, or incomplete requirements. And the requirements people can always blame the stakeholders for providing bad direction about what was important. And so on. Regardless of who is part of this process, it is one big happy circle of blame that lets us all do the wrong thing and still keep our jobs… (link)
  • When estimating a date or cost you are creating uncertainty around those things, because you are guessing. You are saying “we’ll deliver somewhere between here and here”. However, if your delivery date and/or cost is set by a real constraint, as advocated by the #NoEstimates approach, you have created certainty around those things. (link)
  • And see also Jens Schauder’s ‘8 Reasons why Estimates are too low‘ and Glen Alleman’s specific response.

#E – The arguments against estimates can be classified into two categories; practical and behavioral.

Lets’ start with the practical considerations first:

‘Doing’ estimates right (though not necessarily correct) requires effort. While we are all accustomed to performing estimates in most facets of our lives (when we drive our car, rush to the station to catch the train, do our shopping at the supermarket, etc) there are instances or circumstances in which making the wrong estimate can have more severe consequences. If we try to cross the street and misjudge the distance and speed of an approaching car we could be involved in a fatal accident. If we misjudge the effort required to develop a software feature we could end up being responsible for a business loss that could affect our very livelihood.

When we make an estimate we always take a risk. When we buy one bottle of milk for our daily morning coffee we base our purchasing decision on past experience coupled with some probability assessment regarding the likelihood of past patterns breaking up in the future. While we don’t explicitly enunciate that level of uncertainty, we subconsciously are aware of it. We know, intuitively, that it is never 100% but are happy to concede that there is X% probability (perhaps 90%) that one bottle of milk would be sufficient to meet our needs for the next few days.  Had the probability been low, we would most likely buy two bottles.

While making decisions (based on estimates) come naturally to us when we cross the road, the reason why we are comfortable making these probability assessments is because we have learned to execute them based on past experience. From an early childhood our parents have taken our hand and instilled in us the need to accumulate experience and knowledge we could utilize at a later age, without giving it much thought. Statisticians and Economists would call this process Reference Class Forecasting (and see Kailash Awati’s elaboration on this topic in Improving Project Forecasts).

In the world of software development we are asked to exhibit the same assessment skills with the appreciation that our reference class might not be obvious up front – i.e we might be having a Reference Class Problem. In a nutshell, this problem “…arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified ” (from The Reference Class Problem is Your Problem Too, in Kailash Awati’s The reference class problem and its implications for project management).

At the practical level, being dismissive of the validity of estimation could be an attempt at avoiding the need to establish a reference class. If you don’t have a reference class on which to base your estimates then (as often argued by Glen Alleman) build one, run a pilot project, execute an experimental iteration, ask others who’ve done something similar. Do something to reduce the uncertainty and increase the certainty. There should be no excuses for not trying to establish some level of reference class baseline and progressing based on that starting point.

And now to the behavioral considerations…

The notions of gaming the estimates, plucking figures out of thin air, perceiving estimates as a mean to establish a management control and associating estimates with future apportioning of blame are all symptoms of a dysfunctional organizations. The business dilemma we need to address is whether the elimination of estimates would turn the organization into a functional one or would that just be a way of dealing with the symptom while ignoring the real problem. A better proposition should be to suggest methods to change the organizational behavior such that these attitudes are transformed into more suitable ones. Implementing the practical practices established for the delivery of estimates can go someway towards ‘fixing’ the organization. If you are a strong and vocal proponent of #NoEstimates you could equally be in a position to strongly support calls for taking the necessary steps needed to change the organizational attitude such that developers would not feel under siege every time an estimate is required (and see also the discussion here re. dysfunctional organizations).

3. Estimation = Waterfall

#NE – Estimation is seemingly one of the few remaining immutable practices hanging over from the waterfall era (link). Waterfall and predictions are harmful (link).

#E – See Mike Cottmeyer in “Should You Use Agile to Build Your Next Home?“. Tying estimates with Waterfall is not a solid argument and it seems to appeal to those with emotional associations tied to the Agile vs Waterfall metaphorical divide.

4. Why do customers need estimates?

#NE – The main reason we do estimates is because people need to make decisions. This means that the key requirement is for people to be able to make decisions. It is true that estimates are often used to help us make decisions. But estimates are not the only way and for many of the decisions we need to make they are likely not the best way to go. One good way to make decisions is by adopting the “lean approach”, by learning from our mistakes and by frequently taking corrective action (link).

#E – There is no disagreement about the conjecture that estimates are required to make decisions. As highlighted earlier, whether or not we are engaged in formal and conscious estimation process – we are still estimating sub-consciously.  And we do it because this is how we intuitively engage in risk management. We are always on the look for signals of danger and for opportunities. And while we are engaged in this type of behavior in every facet of our lives, we should more inclined to exhibit such behavior when spending other people’s money. The type of decisions we need to be able to ethically make are about whether we ‘buy’ this software or ‘make’ it.  Do we invest the money in this product or in any one of a number of other opportunities.

The decision regarding the investment path to be taken cannot be solely based on the perceived business value, as while a number of development opportunities might present a value proposition, the value differentiation might not be so clear as to determining the one we ought to pick for further development. And this is where cost becomes a deciding factor. It is called cost-benefit-analysis and it is a sensible risk management strategy.

Some final notes

The discussion surrounding the #NoEstimates proposition is one we need to have. The issues highlighted here are not raised with the intention to keep people from exploring and challenging established assumptions but rather as a concern against applying judgement without considering the context and landscape within which these judgments ought to be made.

No doubt #NoEstimates could work in environments where such a proposition will be accepted by the stakeholders who’s money is put on the line.  While I have not yet had the opportunity to operate in such an environment does not mean that such environment does not exist. And this is exactly the point where context need to be injected into the discourse around this issue. It is about establishing known boundaries or parameters within which #NoEstimates can (and perhaps also ought to) work. Some mention areas of R&D as suitable for this and perhaps this is correct. Establish a budget you are willing to use for experimentation (based on your risk appetite) and let the team iterate until such time that you either run out of money, run out of risk appetite or hit the bull eye. Other areas could be small developments with low risk. The domain is known and the time taken to produce formal estimates is better get used on actual development. There is, however, a need to avoid broad categorizations that add little to no credibility to the argument and add a sense of religious ideology to a domain that requires no such similarities.

Think about it!

I’ve done a lot of reading lately trying to put my head around the arguments and counter arguments associated with the #NoEstimates idea. I now have a better understanding, and a bit more sympathy, to some of the key propositions raised by the proponents of this concept. I will elaborate on those in a separate post but during my research I’ve come across the below proposition in Glen Alleman’s blog. It is simple yet practical proposition for dealing with the need to offer an estimate. If you can’t answer any of the questions outlined below you could be in the wrong business.

You can get within 15% in 5 questions using a binary search for any software component:

(1) Can you do this in six months? Sure
(2) Can you do it in 2 weeks? No way
(3) Can you do it in 3 months? Likely
(4) Can you do it in 6 weeks? That’ll be tight.
(5) How about 8 weeks? Sure that’s possible.

How possible? OK< maybe an 80% confidence of 8 weeks.

Good, get started, up date me every week on your Estimate to Complete.

Think about it!

Introduction

In a paper (presented as an overview to his completed doctoral thesis) titled An investigation into the prevalence of modern project management by means of an evolutionary framework”, Dr. Jon Whitty deals with the application of evolutionary principles to the domain of project management. He explains at the outset that:

Evolutionary principles can be applied to cultural matters too, in the sense that the practices, artefacts, beliefs, concepts and ideas we find around us today are there because they too have been selected (allowed or enabled to survive and reproduced) by individuals (because by using them or taking them on board benefits the individual is some way) or by organisations (because individuals acting for the organisation select them because of perceived benefits to the organisation).

This provides the foundational justification for comparing a seemingly biological process to a domain that on the surface seems completely disconnected from such discussions (and see my extended review of Jon Whitty’s paper here).

Discussion

Terry McKenna (whose earlier paper was reviewed here some time ago – see here and here) has collaborated with Jon Whitty to produce another thought provoking paper, titled: “Agile is Not the End-Game of Project Management Methodologies“.

The paper sets out to challenge the common perception, held by many (and admittedly by me as well), that the Agile movement represents a revolutionary movement, i.e a movement introduced as a counter measure to its surrounding environment and not as a by product of it (by which means it would be seen as an evolutionary movement).

Proving this point, one way or another, can only be done by exploring methods and ideas used in the past and then evaluating whether these methods and ideas have been replicated into an accepted method and ideas within the area of Agile thinking. To explore this question the paper makes use of the concepts of Memes and Memeplexes.

For those unfamiliar with Richard Dawkins‘ ‘The Selfish Gene‘, a Meme (as define in Wikipedia) is “an idea, behavior, or style that spreads from person to person within a culture. A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena.” In the context of Project Management, concepts and methods like the Gantt Chart, CPM and the WBS could be seen as Memes that over time established themselves as acceptable ideas within the project management community, as summarized in the paper:

For our purposes, a ‘meme’ will likely equate to a specific project management method, tool or artefact which is sufficiently recognisable as representing a discrete ‘idea.

A Memeplex (again, according to Wikipedia) is a “group of Memes that are often found present in the same individual. Applying the theory of Universal Darwinism, Memeplexes group together because memes will copy themselves more successfully when they are ‘teamed up’.” In the context of project management, a project management tool like MS Project could be looked at as a Memeplex as it is a collection of Memes that copy, or replicate, more effectively together. Or, as summarized in the paper:

a ‘memeplex’ can be seen as a means to facilitate conjoining or interacting of these memes to their greater good (i.e. survival and propagation).

The paper, meticulously, demonstrates how the thread of Memes that started with the Taylorism and the Scientific Management movement, through the creation of the Gantt Chart, the establishment of the ’4-step training process’, the creation of the Process Charts and work Simplification (and a few other processes and movements) have culminated, in the post WWII era, with the Toyota Production System (TPS) and the Lean Software Development. And on another evolutionary path, and another thread of Memes – resulting in the Iterative and Incremental Development – is a culmination of processes that started with the American Air-force in the 1950′s, through NASA’s Project Mercury, the Iterative approaches to modelling and Iterative enhancements, all leading to what is currently known as incremental development.

Based on the above analysis the paper concludes that

‘agile’ approaches are by no means ‘new’, but rather are result of a selection process

[...]

the current state is an accumulation of memes and memeplexes, chosen to suit environmental circumstances.

With that in mind the paper posits that the evolutionary nature ascribed to the formulation of the current set of agile methods suggests that this evolution is far from over and that further changes are likely to emerge in the future.

Some concluding notes

The paper provides a solid and compelling argument regarding the evolutionary rather than revolutionary nature of the Agile methods. Indeed, the methods themselves and the memes within which they are wrapped are not new. The paper mentions the fact that these various methods (and memes) have been incorporated into agile memeplexes and while it acknowledges the fact that in some contexts a memeplex can also be seen as being a meme in its own right – as the group of memes making up the memeplex become a unified idea that attract an independent evolutionary cycle – a greater importance could have perhaps been attributed to the possibility that while the memes have been around for quite some time it is the memeplex, i.e. the grouping and collection of these methods and ideas, that should be the center of this investigation and as such its creation is a less obvious or trivial occurrence and thus it does represents a somewhat more revolutionary idea.

The paper is also a timely and important reminder to the fact that a lot of what we see, hear or read around us is a result of or a spin off past actions. This is fantastically illustrated in the Everything is a Remix series and, when taken in context, is a timely reminder for placing our egos and achievements in the appropriate context of those who preceded us.

Think about it!

When it comes to the art of estimating, forecasting and predicting the future there is a certain level of control we inadvertently inject into that process, over inflating our capacity to ‘see’ the future:

Control is an illusion, you infantile egomaniac. Nobody knows what’s gonna happen next: not on a freeway, not in an airplane, not inside our own bodies and certainly not on a racetrack with 40 other infantile egomaniacs.

- Days of Thunder

%d bloggers like this: