It’s 07:38.
 
I had just stepped off the train taking me to work when my phone rang. “Raj is looking for you,” said Darren, my project administrator. “He asked me to urgently let you know that while his off-shore team is scheduled to commence integration testing later today, overnight security incidents in the city will prevent the majority of his team from getting to work. Given the internal political situation, it is not clear when normal work will resume.”
 
The preparation for the integration testing phase had taken a considerable amount of time and required the collaborative effort of quite a few resources, representing the core technologies and support teams involved in the new infrastructure deployment. While planning was carried out both on-site and off-shore, the testing itself was to be executed using a predominantly off-shore resource pool spread across a number of geographical locations, thus allowing testing to proceed almost around the clock.
 
This is not the first time one of my infrastructure projects has been impacted by off-shore events. Heavy rain and flooding in one of our regional data centres in South East Asia caused me major headaches in the past. The transition to the new data centre had taken place only a few months earlier, and the realisation that weather was a genuine risk, and that we needed to prepare for it, had not yet sunk in. That lack of proper risk identification caused my project major delays and cost overruns, as I had to divert some of the project operations to a different site.
 
This time I was much better prepared. Though my risk register did not specifically target security risks, I did investigate the possibility of delays due to regional issues such as weather conditions, terror threats and the like. I had involved the company’s risk department in the project’s risk reviews and assessments and had prepared a mitigation strategy I could invoke should such circumstances arise.
 
The risk was documented in my risk register as follows:
 
“Local circumstances (e.g. unusual weather conditions, political tension or terror activities) at Offshore Site1 could result in major delays to scheduled activities due to the inability of local resources to reach the office and perform their allocated tasks.”
 
Offshore Site1 had not experienced devastating weather conditions since the event mentioned above, and the weather forecasts I consulted indicated that the likelihood of such an event re-occurring during the lifetime of my project was very low.
 
The political tension in the area, however, had been steadily rising over recent months, and security experts consulted on the matter estimated the likelihood of a disturbing incident taking place as likely, or about 60%.
 
From a project perspective, the consequences of not being able to execute the testing on time were quite severe. The project outputs were directly contributing to a business outcome whose needs were based on strict regulatory requirements. Not meeting those requirements could result in harsh financial penalties and loss of public credibility.
 
It was clear from the outset that this risk could not be accepted and that an operative mitigation strategy needed to be put in place in order to keep it controlled.
 
The plan of attack for mitigating and addressing this risk was agreed as follows:
  1. A second, alternative team would be put on stand-by on-site, able to pick up the testing should the off-shore team become unreachable.
  2. The off-shore team would be provided with remote access capability (mobile phones and laptops) so they could execute their test scripts remotely if required and communicate with the team as needed.
  3. Operational instructions and guidelines were distributed to, and reviewed by, all respective teams, on-site and off-shore, so they were aware of the communication and execution plans associated with invoking the alternative mitigation plan.
“Thanks, Darren,” I said. “Please convene the project risk committee so we can discuss and obtain approval to embark on plan B, and execute the communication plan associated with it.”
 
In a bizarre way, our failure to deal with a similar event in the past made us aware of the scenario and allowed us to prepare better for the event we were now facing. A lesson learned, well executed.
 
Think about it!
 
*Disclaimer: This is a fictional story and is meant for educational purposes only. Any resemblance to real persons, living or dead, is purely coincidental.
 
P.S. After publishing this post I discovered that Chris O’Halloran published a post with a similar message (see HERE).

When leading a project, risk management is an essential tool. There are always internal and external variables that can create delay or increase costs, and it’s best to deal with these well in advance. By using standard project risk management techniques, you can offset some of these dangers and keep your project on a steady track. For risk management to be effective, you need to pursue the following steps.

Make Risk Management Part of Company Operations

Though most well-established companies utilize risk management as a matter of course, there are still some businesses in which it is not used at all. If you happen to work in a place where risks are often not analyzed, the first step is to make sure risk management becomes accepted as a regular feature of company business. Make sure that the stakeholders understand how risk management can save time and money. Do your best to get everyone on board.

Determine Risks at an Early Stage

As your project gets started, reserve some time to think over the potential risks. Use your company’s list or chart of personnel and overall hierarchy to carefully think through the process and the people at each stage of the project. Perhaps you can find a weak link here, such as an employee who lacks experience or a place in which the chain of command is unclear. Another basic technique is to carefully think through the communication trail. Which documents are being created for which purpose? Who communicates with whom at each stage? By visualizing the roles of people and the documents to be produced, you may uncover some important risks.

Establish Regular Communication on Risks

Sometimes people worry that an emphasis on risks will create negativity within work teams. This need not be the case if you approach the subject as a leader who wants to create efficiency and confront challenges head-on. In fact, if you do not, the team will feel more negative when risks arise, because the risks will seem surprising and the team will not have an action plan.

A basic and effective approach is to bring up risk management at your regular meetings. Give it a place near the top of the agenda. Give your team members a chance to air their concerns and brainstorm on how important risks might be overcome. Use this as a time to empower your team with competitive, realistic, proactive thinking.

Make Sure People are Held Accountable

Do not allow people to brainstorm on risks in such a way that passing the buck becomes the status quo. When risks are mentioned and analyzed, be sure to conclude by choosing a person who will be held accountable for each risk. Do this in a way that gives your employees a chance to shine. Responsibilities lead to quantifiable work experience, which in turn can facilitate promotions. Show that you believe in your team. Accountability with strong, positive leadership can create excellent results.

Prioritize and Analyze

Not all risks are created equal. As potential risks are discovered, create a list and prioritize the risks by potential impact. This helps you to manage your own time and also lead the project most effectively. After prioritizing, analyze the risks on both a small scale and a large scale. A risk may have temporary impacts and/or it may have an unfortunate ripple effect on the project as a whole. By looking at risks on both the micro and macro level, you ensure that you are analyzing thoroughly, and you may uncover potential issues you would not have realized any other way.

Ongoing Project Risk Management

Now that you have the basics, be sure to keep records of your projects, project risks and outcomes. If your company is slow to accept project risk management as the status quo, your records could help to change that. If your project suffers a serious setback which you foresaw, your records demonstrate your foresight. If your risk management process caused your team to work more efficiently, you will have proof of that as well. Use risk management at your own optimal level and realize success in your projects.

Ryan Sauer is a writer and editor for Bisk Education in association with University Alliance. He actively writes about project management in different industries and strives to help professionals prepare for the PMP certification exam. Through the University Alliance, Ryan writes to help professionals obtain their project management certification.

Expect the best,

plan for the worst and

prepare to be surprised

Denis Waitley (American motivational speaker and author of self-help books, b. 1933)

Think about it!

p.s. Having thought about this quote a bit further, I’m not sure I totally agree with the first proposition. This would work for people who cannot deal with failures or shy away from challenges. Not sure what I would change the first statement to. Perhaps something along the lines of “expect the obvious”, as this suggests accepting that the law of averages does apply, so on average you’ll be correct. The rest follows quite nicely from there.

A 2007 article by Young Hoon Kwak and Lisa Ingall, titled “Exploring Monte Carlo Simulation Applications for Project Management”*, examines the Monte Carlo Simulation method and its uses in the field of Project Management.

Apart from being a good reference document, in which a brief history of the technique is discussed and explained, the article provides a good review of various published studies on the benefits, as well as the potential complexities, associated with implementing this technique in real-life situations.

The article points out that one of the limitations of using this technique is project managers’ discomfort with statistical approaches, as well as a lack of thorough understanding of the method (see also my earlier post discussing this issue in “Some Risk Management Related Thoughts“).

Discussing the process for utilizing Monte Carlo Simulation in the context of Time Management, the article suggests the following steps are commonly used:

  1. Utilize subject matter expertise to assign a probability distribution function of duration to each task or group of tasks in the project network;
  2. Possibly use Three-Point Estimates to simplify this process, where expert knowledge is used to supply the most-likely, worst-case and best-case durations for each task or group of tasks;
  3. Fit the above estimates to a duration probability distribution (such as Normal, Beta, Triangular, etc.) for the task;
  4. Execute the simulation and use the results to formulate the expected completion date and the required schedule reserve for the project.
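
To make these steps concrete, here is a minimal sketch (in Python, using NumPy) of what such a simulation might look like for a toy serial schedule. The task names, the three-point estimates and the choice of a triangular distribution are my own illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Steps 1-2: three-point estimates (best, most likely, worst) in days.
# Tasks and numbers are invented for illustration; in practice they
# would come from subject matter experts.
tasks = {
    "design": (4, 6, 10),
    "build":  (8, 12, 20),
    "test":   (3, 5, 9),
}

n_runs = 10_000
total = np.zeros(n_runs)

# Step 3: fit each estimate to a triangular duration distribution.
# Step 4: run the simulation; the tasks are assumed to be strictly
# serial, so the project duration is the sum of the sampled durations.
for best, likely, worst in tasks.values():
    total += rng.triangular(best, likely, worst, size=n_runs)

p80 = np.percentile(total, 80)
print(f"Mean duration:   {total.mean():.1f} days")
print(f"80th percentile: {p80:.1f} days")
print(f"Schedule reserve for 80% confidence: {p80 - total.mean():.1f} days")
```

The gap between the mean and a chosen percentile is one simple way of reading a schedule reserve off the simulation results.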

Outlining the advantages of utilizing Monte Carlo Simulation applications in Project Management, the article points out that its primary advantage is in being an “extremely useful tool when trying to understand and quantify the potential effects of uncertainty in projects“. Clearly, project managers who do not utilize this technique forgo a powerful tool, and risk failing to meet the project’s schedule and cost targets. Better quantification of the necessary schedule and cost reserves can substantially reduce such risks.

The article highlights the importance of having access to expert knowledge, prior experience and detailed data from previous projects in order to mitigate the inherent issues of estimate uncertainty, which would ultimately affect the quality of the simulation results. This is true not just with respect to the three-point estimates but also with respect to choosing the correct probability distributions with which to model them.

An interesting point is raised when referring to an earlier study published by Grave R. (2001, “Open and Closed: The Monte Carlo Model,” PM Network, vol. 15, no. 2, pp. 48-52), which discusses the merits of using different types of probability distributions for project task duration estimates. Grave suggests using an open-ended distribution (the lognormal distribution) instead of closed-ended distributions (such as the triangular distribution) when performing Monte Carlo Simulations.

His logic is as follows: a closed-ended distribution (e.g. the triangular distribution) does not consider the possibility of a task completing BEFORE the best-case or AFTER the worst-case duration estimate. However, in real-life projects, due to various constraints, it is entirely possible for a task to complete outside those bounds.

When an open-ended distribution is used, the possibility of exceeding the upper limit of the task duration is recognized, thus making the simulation more realistic.
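
To make Grave’s point tangible, the sketch below (my own illustration, not from the article) samples the same three-point estimate from a closed triangular distribution and from an open-ended lognormal one fitted by matching the triangular mean and variance. Moment matching is just one plausible fitting choice; the study does not prescribe a method. Only the lognormal produces durations beyond the worst-case estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
best, likely, worst = 5.0, 8.0, 15.0  # illustrative three-point estimate
n = 100_000

# Closed-ended: triangular samples can never exceed the worst case.
tri = rng.triangular(best, likely, worst, size=n)

# Open-ended: fit a lognormal by matching the triangular distribution's
# mean and variance (one possible fitting choice, assumed here).
mean = (best + likely + worst) / 3
var = (best**2 + likely**2 + worst**2
       - best * likely - best * worst - likely * worst) / 18
sigma2 = np.log(1 + var / mean**2)
mu = np.log(mean) - sigma2 / 2
logn = rng.lognormal(mu, np.sqrt(sigma2), size=n)

print(f"Triangular samples beyond worst case: {(tri > worst).mean():.2%}")
print(f"Lognormal samples beyond worst case:  {(logn > worst).mean():.2%}")
```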

The article touches (although very briefly) on one aspect used within the context of Monte Carlo Simulation: the Criticality Index (I will endeavour to provide a more detailed discussion of this feature in a future post). In a nutshell, the Criticality Index reflects the proportion of simulation iterations in which a given task appears on the project’s critical path.
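
Pending that future post, here is a minimal sketch of the idea (my own toy example, not from the article): tasks A and B run in parallel and both feed into task C, so the critical path runs through whichever of A or B takes longer in a given iteration. The criticality index of A is then simply the fraction of iterations in which the A path is the longer one.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 10_000

# Hypothetical two-path network: A and B run in parallel, both feed C.
# Durations are illustrative three-point estimates sampled from
# triangular distributions (best, most likely, worst).
a = rng.triangular(3, 5, 12, size=n)
b = rng.triangular(4, 6, 8, size=n)

# A is on the critical path whenever its path is the longer of the two.
crit_a = (a >= b).mean()
print(f"Criticality index of A: {crit_a:.2f}")
print(f"Criticality index of B: {1 - crit_a:.2f}")
```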

Overall, an interesting article. If you are already using Monte Carlo Simulation as part of your portfolio of project tools, this article will encourage you to keep doing so. And if you’re not, you’ll need to ask yourself: WHY?

*See http://home.gwu.edu/~kwak/Monte_Carlo_Kwak_Ingall.pdf

Ok, I’m having a bit of fun with this one.

Submit a copy of your project plan (in Microsoft Project format only, at the moment) and I will send back to you a high-level risk assessment of your project schedule based on a Monte Carlo Simulation.

There’s absolutely no catch. The reason I’m doing it is because it seems to me that many don’t yet understand the benefits of using this technique to better understand the risks that lie dormant within their project schedules.

If you are interested in this FREE offer fill in and submit the details below.

And by the way, I give you my guarantee that once I process your project schedule I will not make any further use of it; it will be erased/discarded and will not be used for any purpose other than attempting to provide you with this service.

To learn / read more about this topic check out the Related Posts listed below.

To ensure the successful completion of a project, it is of utmost importance for the project manager to find ways to handle uncertainties that can pose potential risks to it. Risk management is an iterative process. Risks can relate to any aspect of the project – be it the cost, schedule, or quality. The key to managing risks is to identify them early on in the project and develop an appropriate risk response plan.

To develop a Risk Response Plan, you need to quantify the impact of risks on the project. This process is known as quantitative risk analysis, wherein risks are categorized as high- or low-priority depending on the magnitude of their impact on the project. The Project Management Body of Knowledge (PMBOK) advocates the use of Monte Carlo analysis for performing quantitative risk analysis.

What is Monte Carlo Analysis?

Monte Carlo analysis involves determining the impact of the identified risks by running simulations to identify the range of possible outcomes for a number of scenarios. Random sampling of the uncertain risk variable inputs is used to generate a range of outcomes, with a confidence measure for each outcome. This is typically done by establishing a mathematical model and then running simulations with it to estimate the impact of project risks. The technique helps in forecasting the likely outcome of an event and thereby helps in making informed project decisions.

While managing a project, you will have faced numerous situations where you have a list of potential risks for the project but no clue as to their possible impact. To bound the problem, you can construct the worst-case scenario by summing up the maximum expected values for all the variables, and similarly calculate the best-case scenario. You can then use Monte Carlo analysis and run simulations to generate the most likely outcome for the event. In most situations, you will come across a bell-shaped, roughly normal distribution pattern for the possible outcomes.

Let us try to understand this with the help of an example. Suppose you are managing a project involving the creation of an eLearning module. The creation of the module comprises three tasks: writing content, creating graphics, and integrating the multimedia elements. Based on prior experience or other expert knowledge, you determine the best-case, most likely, and worst-case estimates for each of these activities, as given below:

Task                   | Best-case estimate | Most likely estimate | Worst-case estimate
Writing content        | 4 days             | 6 days               | 8 days
Creating graphics      | 5 days             | 7 days               | 9 days
Multimedia integration | 2 days             | 4 days               | 6 days
Total duration         | 11 days            | 17 days              | 23 days

The Monte Carlo simulation randomly selects input values for the different tasks to generate possible outcomes. Let us assume that the simulation is run 500 times. From the above table, we can see that the project can be completed anywhere between 11 and 23 days. When the Monte Carlo simulation runs are performed, we can analyse the percentage of times each total duration between 11 and 23 days is obtained. The following table depicts the outcome of a possible Monte Carlo simulation:

Total Project Duration (days) | Runs with result ≤ duration | % of runs with result ≤ duration
11                            | 5                           | 1%
12                            | 20                          | 4%
13                            | 75                          | 15%
14                            | 90                          | 18%
15                            | 125                         | 25%
16                            | 140                         | 28%
17                            | 165                         | 33%
18                            | 275                         | 55%
19                            | 440                         | 88%
20                            | 475                         | 95%
21                            | 490                         | 98%
22                            | 495                         | 99%
23                            | 500                         | 100%

This can be shown graphically in the following manner:

[Chart: cumulative probability of completing the project within each total duration]

What the above table and chart suggest is, for example, that the likelihood of completing the project in 17 days or less is 33%. Similarly, the likelihood of completing the project in 19 days or less is 88%, and so on. Note in particular the probability of completing the project in 17 days: according to the most likely estimates this is the duration you would expect the project to take, yet the simulation gives it only a 33% chance. Given the above analysis, it looks far more likely that the project will end up taking somewhere between 19 and 20 days.
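
For the curious, here is a short sketch that mirrors the example above: it samples each task from a triangular distribution built from the three-point estimates and tabulates the cumulative percentage of runs finishing within each duration. The triangular distribution is my modelling assumption for the example; the 500-run count follows the text, and the exact percentages will of course differ from run to run and from the illustrative table above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
runs = 500  # the run count used in the example above

# (best, most likely, worst) estimates in days, from the table above.
estimates = {
    "writing content":        (4, 6, 8),
    "creating graphics":      (5, 7, 9),
    "multimedia integration": (2, 4, 6),
}

totals = np.zeros(runs)
for best, likely, worst in estimates.values():
    totals += rng.triangular(best, likely, worst, size=runs)

# Cumulative likelihood of finishing within each whole-day duration.
for days in range(11, 24):
    print(f"{days:2d} days or less: {(totals <= days).mean():6.1%}")
```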

Benefits of Using Monte Carlo Analysis

Whenever you face an estimation or forecasting situation that involves a high degree of complexity and uncertainty, it is best to use Monte Carlo simulation to analyze the likelihood of meeting your objectives given your project risk factors, as reflected in your schedule risk profile. It is very effective because it is based on the numerical evaluation of data, with no guesswork involved. The key benefits of using Monte Carlo analysis are listed below:

  • It is an easy method for arriving at the likely outcome of an uncertain event, together with an associated confidence limit for the outcome. The only prerequisites are that you identify the range limits and the correlations with other variables.
  • It is a useful technique for easing decision-making, providing numerical data to back your decisions.
  • Monte Carlo simulations are particularly useful when analyzing cost and schedule. With the help of Monte Carlo analysis, you can add cost and schedule risk events to your forecasting model with a greater level of confidence.
  • You can also use the Monte Carlo analysis to find the likelihood of meeting your project milestones and intermediate goals.

Now that you are aware of the Monte Carlo analysis and its benefits, let us look at the steps that need to be performed while analysing data using the Monte Carlo simulation.

Monte Carlo Analysis: Steps

The series of steps followed in the Monte Carlo analysis are listed below:

  1. Identify the key project risk variables.
  2. Identify the range limits for these project variables.
  3. Specify probability weights for this range of values.
  4. Establish the relationships for the correlated variables.
  5. Perform simulation runs based on the identified variables and the correlations.
  6. Statistically analyze the results of the simulation run.

Each of the above listed steps of the Monte Carlo simulation is detailed below:

  1. Identification of the key project risk variables: A risk variable is a parameter that is critical to the success of the project, where a slight variation in its outcome might have a negative impact on the project. The project risk variables are typically isolated using sensitivity and uncertainty analysis.

    Sensitivity analysis is used for determining the most critical variables in a project. To identify them, all the variables are subjected to a fixed deviation and the outcome is analysed. The variables with the greatest impact on the outcome of the project are isolated as the key project risk variables. However, sensitivity analysis by itself can give misleading results, as it does not take into consideration how realistic the projected change in a specific variable actually is. It is therefore important to perform uncertainty analysis in conjunction with sensitivity analysis.

    Uncertainty analysis involves establishing the suitability of a result, and it helps in verifying the fitness or validity of a particular variable. A project variable with a high impact on the overall project might still be insignificant if the probability of its occurrence is extremely low. This is why uncertainty analysis matters.

  2. Identification of the range limits for the project variables: This process involves defining the maximum and minimum values for each identified project risk variable. If you have historical data available, this can be an easy task: you simply need to organize the available data into a frequency distribution by grouping the number of occurrences at consecutive value intervals. In situations where you do not have exhaustive historical data, you need to rely on expert judgement to determine the most likely values.
  3. Specification of probability weights for the established range of values: The next step involves allocating the probability of occurrence to each value in the range of a project risk variable. To do so, multi-value probability distributions are deployed. Some commonly used probability distributions for analyzing risks are the normal, uniform, triangular, and step distributions. The normal, uniform, and triangular distributions are symmetric, establishing the probability symmetrically within the defined range, with varying degrees of concentration towards the centre. The commonly used probability distribution shapes are depicted in the diagrams below. [Diagrams: normal, uniform, triangular and step distributions]



  4. Establishing the relationships for the correlated variables: The next step involves defining the correlation between the project risk variables. Correlation is the relationship between two or more variables wherein a change in one variable induces a simultaneous change in the other. In the Monte Carlo simulation, input values for the project risk variables are randomly selected to execute the simulation runs. Therefore, if certain risk variable inputs are generated that violate the correlation between the variables, the output is likely to be off the expected value. It is therefore very important to establish the correlation between variables and then accordingly apply constraints to the simulation runs to ensure that the random selection of the inputs does not violate the defined correlation. This is done by specifying a correlation coefficient that defines the relationship between two or more variables. When the simulation rounds are performed by the computer, the specification of a correlation coefficient ensures that the relationship specified is adhered to without any violations.
  5. Performing Simulation Runs: The next step involves performing the simulation runs. This is typically done using simulation software, and ideally 500 to 1,000 runs constitute a good sample size. While executing the simulation runs, random values of the risk variables are selected according to the specified probability distributions and correlations.
  6. Statistical Analysis of the Simulation Results: Each simulation run represents one possible outcome for the project. A cumulative probability distribution of all the simulation runs is plotted, and it can be used to interpret the probability of the project result being above or below a specific value. This cumulative probability distribution can be used to assess the overall project risk.
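
As a rough illustration of steps 3 to 5, the sketch below imposes a correlation on two triangular risk variables via a Gaussian copula. The copula is one common way of honouring a specified correlation coefficient with non-normal inputs; it is my choice for the example, not something the text above prescribes, and the variable names, estimates and the 0.7 coefficient are all invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
n_runs = 1_000  # within the 500-1,000 range suggested in step 5

# Target correlation between the two risk variables (illustrative).
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]

# Gaussian copula: draw correlated standard normals, map to uniforms...
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_runs)
u = stats.norm.cdf(z)

# ...then push each uniform through its own triangular distribution.
# scipy's triang expresses the mode as a fraction c of [loc, loc+scale].
def triangular_ppf(u, best, likely, worst):
    c = (likely - best) / (worst - best)
    return stats.triang.ppf(u, c, loc=best, scale=worst - best)

cost     = triangular_ppf(u[:, 0], 100, 150, 260)  # illustrative units
duration = triangular_ppf(u[:, 1], 10, 14, 25)     # illustrative days

print(f"Achieved correlation: {np.corrcoef(cost, duration)[0, 1]:.2f}")
```

The achieved correlation will sit close to, though not exactly at, the specified coefficient, since the marginal transformation slightly distorts the Pearson correlation.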

Summary

Monte Carlo simulation is a valuable technique for analyzing risks, specifically those related to cost and schedule. The fact that it is based on numeric data gathered by running multiple simulations adds even greater value to this technique. It also helps remove bias in the selection of alternatives when planning for risks. While running a Monte Carlo simulation, it is advisable to seek the active participation of the key project decision-makers and stakeholders, specifically when agreeing on the range values of the project risk variables and the probability distribution patterns to be used. This will go a long way towards building stakeholder confidence in your overall risk-handling capability for the project. Moreover, it serves as a good opportunity to make them aware of the risk management planning being done for the project.

Though there are numerous benefits to Monte Carlo simulation, the reliability of the outputs depends on the accuracy of the range values and the correlation patterns, if any, that you specify during the simulation. You should therefore exercise extreme caution when identifying the correlations and specifying the range values; otherwise, the entire effort will go to waste and you will not get accurate results.

Although, generally speaking, project managers are not expected to demonstrate complicated mathematical and/or statistical capabilities, there are some aspects of both disciplines where knowledge and understanding of some basic concepts can enhance a project manager’s ability to perform fundamental project management duties – primarily around risk management.

The Project Management Body of Knowledge advocates the use of “Monte Carlo Simulation” within the context of performing quantitative risk analysis. Although in most cases executing such an analysis will require some sort of automated software tool, it is important for the project manager to understand the key principles behind the mathematical and statistical analysis performed by these tools.

Today’s post will focus on three basic concepts (all of which, funnily enough, start with the letter ‘M’):

  • Mean
  • Median
  • Mode

The Mean

The mean (or average) of a set of data values is the sum of all of the data values divided by the number of data values. That is:

Mean = (sum of all data values) ÷ (number of data values)

Using mathematical symbols, the above equation looks as follows:

x̄ = (Σ xi) / n

Where:

  • x̄ is the mean of the set of x values
  • Σ xi is the sum of all x values in the set
  • n is the number of x values in the set

For example:

A project schedule consists of 10 tasks, with the following estimated durations:

Task ID Estimated Duration (days)
1 3
2 4
3 5
4 6
5 3
6 7
7 4
8 5
9 2
10 4

Based on the above:

  • Sum of all data values = 3 + 4 + 5 + 6 + 3 + 7 + 4 + 5 + 2 + 4 = 43 days
  • Number of data values = 10
  • The mean = 43 / 10 = 4.3 days

The Median

The median of a set of data values is the middle value of the data set after it has been arranged in ascending order.

Median = the ½(n + 1)th value in the ordered set, where n is the number of data values in the set

Note: If the number of values in the set is even, the median is calculated as the average of the two middle values.

For example:

The above task list, sorted in ascending order of duration, looks as follows:

Task ID Estimated Duration (days)
9 2
1 3
5 3
2 4
7 4
10 4
3 5
8 5
4 6
6 7

Given that there are 10 tasks in this list, the median position is ½(10 + 1) = 5.5.

Since n = 10 is even, the median is calculated as the average of the two middle (5th and 6th) values, belonging to tasks 7 and 10: (4 + 4) / 2 = 4 days.

The Mode

The mode is the data value that appears most frequently within a set of values. If two or more values appear with exactly the same highest frequency, all such values are considered modes of the set.

For example:

Given the following set of numbers: 1, 2, 3, 2, 3, 4, 1, 3; the number 3 appears the most times and is therefore the Mode.

To Summarize:

  • Mean = average value
  • Median = middle value
  • Mode = most often occurring value
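
If you would rather let code check the arithmetic, Python’s standard statistics module covers all three. A minimal sketch, using the ten task durations from the example above:

```python
import statistics

# The ten estimated task durations (in days) from the example above.
durations = [3, 4, 5, 6, 3, 7, 4, 5, 2, 4]

print("Mean:  ", statistics.mean(durations))    # 4.3
print("Median:", statistics.median(durations))  # 4.0 (average of the two middle values)
print("Mode:  ", statistics.mode(durations))    # 4 (appears most frequently)

# For multimodal sets, statistics.multimode returns every mode:
print(statistics.multimode([1, 2, 3, 2, 3, 4, 1, 3]))  # [3]
```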

Easy?

Stay tuned for the next installment as things will get slightly spicier.

Craig Brown reminded me in a recent post (titled “The costs and risks of decision making“) that there are some hidden aspects related to decision making, and that behind the complex process of decision making lie some fundamental behavioral and psychological concepts.

The question raised by Craig was about the differences between operational managers and project managers. The obvious (though simplistic) answer could be that operational managers need to make operational decisions while project managers need to make project-related decisions.

Another dimension to this question could be: how do people generally make decisions, and how can one set the scene for making better ones?

At the outset I should point out the (obvious, I hope) observation that decision making is just another facet of risk management. After all, a decision is nothing but a commitment to take (or avoid) a particular course of action based on the positive or negative risks associated with the outcome. It is fairly well understood and widely accepted that very few (if any) decisions are risk free: if they were, no real decision would be required.

So what does science have to say about the process of decision making? There is a vast body of scientific literature dealing with this issue. The outline below is but a small and brief introduction to the topic and certainly does not cover it in its entirety:

To deliberate or not to deliberate?

Scientists are at odds regarding the question of deliberation vs. gut feel. A study published in Science magazine in February 2006 (titled “On Making the Right Choice: The Deliberation-Without-Attention Effect“) argues that thorough deliberations do not necessarily result in better outcomes. The study goes further, suggesting that in certain circumstances, for both simple and complex choices, ‘less’ deliberation was found to produce better decisions than ‘more’.

The above study was further analyzed in an article published in Scientific American in February 2007 (titled “Big Decision: Head or Gut? Hmm…“). The article observes that, on the one hand, there is a growing body of evidence suggesting that in many circumstances ‘snap’ (or what we might call gut-feel) decisions will result in better outcomes than more elaborate ones. This, however, is contrasted with other evidence suggesting that the above cannot be taken as a blank cheque, and that in some cases thinking things through results in better outcomes over the long run.

It is interesting to note that one of the arguments in support of a consultative and deliberative approach is that people who are involved in a deliberative process are more likely to abide by its decisions. It doesn’t suggest the decision will be a better one, only that once a decision is made (for better or worse), those who were involved in making it are more likely to follow it through.

And while on this topic, recent research by the Maastricht University School of Business and Economics (see “Making a Decision? Take Your Time” – Scientific American, April 2010) concludes that delaying a choice can, in general, help us make better decisions. The research further observes that delaying a decision allows us to ‘chill out’, with the outcome that we are able to make a better choice.

The Executive Function – the law of diminishing returns

The Encyclopaedia of Mental Disorders defines the Executive Function as “a set of cognitive abilities that control and regulate other abilities and behaviors. Executive functions are necessary for goal-directed behavior. They include the ability to initiate and stop actions, to monitor and change behavior as needed, and to plan future behavior when faced with novel tasks and situations. Executive functions allow us to anticipate outcomes and adapt to changing situations. The ability to form concepts and think abstractly are often considered components of executive function.”

What does it all mean?

The human brain has limited processing capacity which can, under certain conditions, deteriorate due to overuse. Decision making requires cognitive resources, and when these are used to make complex decisions they become increasingly strained, to the point where the quality of our decision making is affected. This is a clear case of the law of diminishing returns in action: an incremental demand for cognitive resources can result in decisions of lower quality than the ones achieved previously (see further details in “Tough Choices: How Making Decisions Tires Your Brain” – Scientific American, July 2008; and “Mindless Collectives Better at Rational Decision Making Than Brainy Individuals” – Scientific American, July 2009).

Being mindful about the way we make decisions

The human brain is a sophisticated yet unpredictable organ. Using our heuristic thinking capabilities we are able to make wonderfully quick predictions and decisions, some of which turn out to be inaccurate and completely disastrous.

Knowing our ‘built-in’ inefficiencies, we can fine-tune our decision-making process and mitigate the risk of the deteriorating quality built into our very consciousness.

In part 1 of this article I raised a number of risk-related observations, particularly around the validity of Murphy’s Law as well as the reality behind the Law of Averages.

Another series of Scientific American articles (sorry, but I’m a real Scientific American fan), titled “Why Our Brains Do Not Intuitively Grasp Probabilities” and “How Randomness Rules Our World and Why We Cannot See It”, describes the concept of “Folk Numeracy”, which is “our natural tendency to misperceive and miscalculate probabilities, to think anecdotally, instead of statistically, and to focus on and remember short-term trends and small-number runs”. In a nutshell, we have evolved to clearly notice short-term trends but are predisposed to forget or ignore long-term ones. The author of these articles goes on to suggest that while our intuition has evolved in a manner that serves us well in the context of social interactions and relationships (meaning it plays an important role in our ability to form alliances and identify socially useful paths), we are nevertheless ill-equipped to use it when it comes to probabilistic problems.

In “Knowing Your Chances” (Scientific American Mind – April/May 2009), the authors refer to a book published in 1938 by the English writer H. G. Wells, who predicted in his “World Brain” that statistical thinking would become an indispensable trait, much like reading and writing. This prediction, however, has not materialized, and the authors observe that “At the beginning of the 21st century, nearly everyone living in an industrial society has been taught reading and writing but not statistical thinking – how to understand information about risks and uncertainties in our technological world. That lack of understanding is shared by many physicians, journalists and politicians…and as a result, spread misconceptions to the public.”

So what does it all mean?

We are all naturally predisposed to a certain level of Risk Attitude. Risk Attitude (as defined by David Hillson & Ruth Murray-Webster) is a “chosen state of mind with regard to those uncertainties that could have a positive or negative effect on objectives, or more simply a chosen response to perception of significant uncertainty”.

Josh Nankivel, based on a podcast by Cornelius Fichtner (which I thoroughly enjoyed while preparing for my PMP) gives a good summary of the commonly referenced Risk Attitudes (a complete copy of which is given below):

  1. Risk Seeker – enjoys and seeks uncertainty in search of greater opportunities, can be overly optimistic and not take possible negative consequences seriously.
  2. Risk Averse – uncomfortable with uncertainty; doesn’t like risk.
  3. Risk Tolerant – reasonably comfortable with uncertainty, but usually sticks their head in the sand and ignores risks.
  4. Risk Neutral – analyzes risks and weighs negative/positive possible outcomes and probabilities objectively.

Josh makes the observation, which I tend to agree with, that most project managers tend to be Risk Tolerant. They will conduct a basic Risk Identification process early in the piece but then rely on gut feel and a ‘let’s hope for the best’ approach when faced with reality. Josh goes on to suggest that Risk Neutral is the goal, and he is probably (excuse the pun) correct. The problem, as indicated above, is that for most of us this will require conscious effort and elaborate attention to details we are not naturally inclined to adopt.

Formal adherence to Risk Management processes can cut through the complexity and the PMBOK is certainly a good place to start as it refers to the basic tools and techniques required to ensure you manage your risks adequately.

I’ve had some interesting professional challenges lately, all of which can be traced back to issues associated with risk management. This is not surprising. In my view, the biggest challenge in any project is properly managing risks. It’s not that all other areas of project management are a walk in the park. It’s more that when it comes to identifying and managing risks, some tend to be swayed by subjective arguments, wishful thinking and gut feel.

Most people subscribe to the reality of Murphy’s Law, namely that “if something can go wrong, it will”. Despite the common wisdom hidden in this simple, yet powerful, statement, some people tend to dismiss it on the grounds that statistically speaking our chances of hitting a bad run are equal to our chances of hitting a good run. So no reason for overwhelming concern as the Law of Averages will sort things out.

This notion is not quite correct, as demonstrated in an article published in the April 1997 edition of Scientific American under the heading “The Science of Murphy’s Law”. The article’s conclusion is that “life’s little annoyances are not as random as they seem; the awful truth is that the universe is against you”. So in that respect, when we say that “if something can go wrong, it will”, we actually mean it. Not that things will go wrong 100% of the time, but there is a good chance they will go wrong more than 50% of the time.

This puts the Law of Averages in question. Well, things are not quite straightforward there either. Another Scientific American article (this time from April 1988, titled “Repealing the Law of Averages”) tackles the common wisdom according to which, when tossing a fair coin and maintaining a running count of how many times each side turns up, after a large number of tosses we will get a relatively even number of heads and tails. This assertion is mathematically correct, but only for VERY large numbers (can you count to infinity?). In real-life situations, where the sample is limited, the Law of Averages cannot be invoked, at least not as a serious planning tool.
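
A quick simulation (my own illustration, not from the article) makes the distinction vivid: as the number of tosses grows, the proportion of heads drifts towards 50%, yet the absolute gap between the head and tail counts typically keeps growing rather than cancelling out.

```python
import random

random.seed(11)  # fixed seed so the run is repeatable

# One million simulated fair-coin tosses (True = heads).
tosses = [random.random() < 0.5 for _ in range(1_000_000)]

for n in (100, 10_000, 1_000_000):
    heads = sum(tosses[:n])
    tails = n - heads
    # The fraction converges towards 0.5, but the raw gap tends to
    # grow roughly like the square root of n.
    print(f"n={n:>9,}: heads fraction = {heads / n:.4f}, "
          f"absolute heads-tails gap = {abs(heads - tails):,}")
```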

To be continued…
