A 2007 article by Young Hoon Kwak and Lisa Ingall, titled “Exploring Monte Carlo Simulation Applications for Project Management”*, examines the Monte Carlo Simulation method and its uses in the field of Project Management.

Apart from being a good reference document, in which a brief history of the technique is discussed and explained, the article provides a good review of the various studies published on both the benefits and the potential complexities of implementing this technique in real-life situations.

The article points out that one of the limitations of this technique is project managers’ discomfort with statistical approaches, as well as a lack of thorough understanding of the method (see also my earlier post discussing this issue, “Some Risk Management Related Thoughts”).

Discussing the process for utilizing Monte Carlo Simulation in the context of Time Management, the article suggests the following steps are commonly used:

  1. Utilize subject matter expertise to assign a probability distribution function of duration to each task or group of tasks in the project network;
  2. Possibly use Three-Point Estimates to simplify this process, where expert knowledge is used to supply the most-likely, worst-case and best-case durations for each task or group of tasks;
  3. Fit the above estimates to a duration probability distribution (such as Normal, Beta, Triangular, etc.) for the task;
  4. Execute the simulation and use the results to formulate the expected completion date and the required schedule reserve for the project (a minimal sketch of these steps is given below).
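To make these steps concrete, here is a minimal sketch in Python (my own illustration, not from the article), assuming a simple serial chain of three hypothetical tasks. It fits a triangular distribution to each three-point estimate, runs the simulation, and reads off an expected completion figure and a schedule reserve at the 80% confidence level:

```python
import random

# Steps 1-2: hypothetical three-point estimates (best, most-likely, worst),
# in days, for a simple serial chain of tasks.
tasks = {
    "design": (3, 5, 9),
    "build":  (8, 12, 20),
    "test":   (4, 6, 11),
}

N = 10_000
totals = sorted(
    # Steps 3-4: sample each task from its triangular distribution and sum.
    sum(random.triangular(best, worst, likely)
        for best, likely, worst in tasks.values())
    for _ in range(N)
)

deterministic = sum(likely for _, likely, _ in tasks.values())
p80 = totals[int(0.8 * N)]   # 80th-percentile total duration
print(f"Deterministic (most-likely) total: {deterministic} days")
print(f"80% confidence total:              {p80:.1f} days")
print(f"Suggested schedule reserve:        {p80 - deterministic:.1f} days")
```

A real project network has parallel paths rather than a single chain, so a production tool simulates the whole network and recomputes the critical path on every iteration; the serial sum above is the simplest possible case.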

Outlining the advantages of utilizing Monte Carlo Simulation applications in Project Management, the article points out that its primary advantage is in being an “extremely useful tool when trying to understand and quantify the potential effects of uncertainty in projects”. Project managers who do not utilize this technique forgo a powerful tool, and the likely consequence is missed schedule and cost targets; better quantification of the necessary schedule and cost reserves can substantially reduce such risks.

The article highlights the importance of having access to expert knowledge, prior experience and detailed data from previous projects in order to mitigate the inherent uncertainty of estimates, which would otherwise affect the quality of the simulation results. This is true not only of the three-point estimates themselves but also of the choice of the probability distributions with which to model them.

An interesting point is raised in reference to an earlier study by Graves (Graves, R. 2001. “Open and Closed: The Monte Carlo Model,” PM Network, vol. 15, no. 2, pp. 48-52), which discusses the merits of using different types of probability distributions for project task duration estimates. Graves suggests using an open-ended distribution (such as the lognormal) instead of a closed-ended distribution (such as the triangular) when performing Monte Carlo Simulations.

His logic is as follows: a closed-ended distribution (e.g. the triangular distribution) does not allow for the possibility of a task finishing BEFORE the best-case or AFTER the worst-case duration estimate. In real-life projects, however, due to various constraints, it is entirely possible for a task to finish outside either of those bounds.

When an open-ended distribution is used, the possibility of exceeding the upper limit of the task duration is recognized, making the simulation more realistic.
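To see the difference in practice, here is a small sketch (my own, not from the article or from Graves) that samples the same hypothetical three-point estimate from a closed triangular distribution and from an open-ended lognormal. The lognormal calibration is one simple assumption among many possible: its median is placed at the most-likely value and the worst case at roughly its 95th percentile:

```python
import math
import random

best, likely, worst = 4.0, 6.0, 8.0   # hypothetical three-point estimate, days

# Closed-ended: triangular samples can never fall outside [best, worst].
tri = [random.triangular(best, worst, likely) for _ in range(100_000)]

# Open-ended: median at the most-likely value, worst case near the
# 95th percentile (z = 1.645), so about 5% of samples exceed the "worst case".
mu = math.log(likely)
sigma = math.log(worst / likely) / 1.645
logn = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

print(f"Triangular maximum:    {max(tri):.2f} (never exceeds {worst})")
print(f"Lognormal share > {worst}: {sum(d > worst for d in logn) / len(logn):.1%}")
```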

The article touches (albeit very briefly) on one of the features used within the context of Monte Carlo Simulation, the Criticality Index (I will endeavor to provide a more detailed discussion of this feature in a future post). In a nutshell, the Criticality Index reflects how often a task appears on the project’s critical path across the simulation iterations.
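As a toy illustration of that definition (again my own sketch, with a hypothetical two-branch network), the Criticality Index of a branch is simply the fraction of iterations in which it turns out to be the longer, and hence critical, one:

```python
import random

# Hypothetical network: branch "A->B" is two tasks in sequence, branch "C"
# is a single parallel task; the project finishes when both branches do.
N = 10_000
critical_hits = {"A->B": 0, "C": 0}

for _ in range(N):
    a_b = random.triangular(8, 16, 10) + random.triangular(3, 9, 5)
    c = random.triangular(12, 22, 15)
    critical_hits["A->B" if a_b >= c else "C"] += 1

for branch, hits in critical_hits.items():
    print(f"Criticality Index of {branch}: {hits / N:.0%}")
```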

Overall an interesting article. If you are already using Monte Carlo Simulation as part of your portfolio of project tools, this article will encourage you to keep doing so. And if you’re not – you’ll need to ask yourself – WHY?

* See http://home.gwu.edu/~kwak/Monte_Carlo_Kwak_Ingall.pdf

Ok, I’m having a bit of fun with this one.

Submit a copy of your project plan (in Microsoft Project format only, at the moment) and I will send back a high-level risk assessment of your project schedule based on a Monte Carlo Simulation.

There’s absolutely no catch. The reason I’m doing this is that it seems to me many don’t yet understand the benefits of using this technique to uncover the risks that lie dormant within their project schedules.

If you are interested in this FREE offer, fill in and submit the details below.

And by the way, you have my guarantee that once I have processed your project schedule it will be erased and will not be used for any purpose other than providing you with this service.

To learn / read more about this topic check out the Related Posts listed below.

Over a week ago I made an offer providing readers with the unique opportunity to run their project plans through a Monte Carlo Simulation and receive a high-level summary of the results.

The response exceeded my expectations: so far, 30 readers have approached me for an assessment of their plans.

Having gone through this experience, I thought it would be appropriate to summarize my findings from the exercise without, obviously, revealing the individual details of the plans that passed through my hands.

All plans provided were in Microsoft Project format and none was larger than 500 lines. To my surprise, none of them had the basic scheduling distribution populated (Most Likely, Optimistic and Pessimistic estimates for each task). To overcome this, I applied an across-the-board triangular distribution rule of 75%, 100%, 125% (i.e. the Optimistic estimate is always taken as 75% of the Most Likely estimate, and the Pessimistic estimate as 125% of it).
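For the curious, the rule amounts to something like the following sketch (task durations hypothetical; a real plan would also preserve the network’s dependencies rather than summing a simple chain):

```python
import random

# The across-the-board rule: optimistic = 75% and pessimistic = 125%
# of each task's most-likely duration.
most_likely = [10, 25, 5, 40, 15]   # days, one hypothetical entry per task

N = 10_000
totals = sorted(
    sum(random.triangular(0.75 * m, 1.25 * m, m) for m in most_likely)
    for _ in range(N)
)

print(f"Deterministic total: {sum(most_likely)} days")
print(f"80% confidence:      {totals[int(0.8 * N)]:.1f} days")
```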

Not surprisingly, in all cases the date carrying an 80% likelihood of finishing on or before fell well after the plan’s deterministic date (i.e. the completion date predicted by the scheduling software).

Not all schedules included costing information, but in those that did, the deterministic cost likewise sat well below the 80% likelihood mark.

As I’m excited by what I’ve seen, I intend to run with this activity for a few more days, so the opportunity to take up the offer is still open (but not for much longer).

To ensure the successful completion of a project, it is of utmost importance that the project manager find ways to handle the uncertainties that pose potential risks to it. Risk management is an iterative process, and risks can relate to any aspect of the project – be it cost, schedule, or quality. The key to managing risks is to identify them early in the project and develop an appropriate risk response plan.

To develop a Risk Response Plan, you need to quantify the impact of risks on the project. This process is known as quantitative risk analysis, wherein risks are categorized as high or low priority depending on the magnitude of their impact on the project. The Project Management Body of Knowledge (PMBOK) advocates the use of Monte Carlo analysis for performing quantitative risk analysis.

What is Monte Carlo Analysis?

Monte Carlo analysis involves determining the impact of the identified risks by running simulations to identify the range of possible outcomes across a number of scenarios. Random samples are drawn from the uncertain risk variables to generate the range of outcomes, with a confidence measure attached to each outcome. This is typically done by establishing a mathematical model of the project and then running simulations on that model to estimate the impact of project risks. The technique helps forecast the likely outcome of an event and thereby supports informed project decisions.

While managing a project, you have probably faced numerous situations where you had a list of potential risks but no clue as to their possible impact on the project. One crude solution is to sum up the maximum expected values of all the variables to obtain the worst-case scenario, and similarly to calculate the best-case scenario. Monte Carlo analysis goes further: by running simulations it generates the full range of possible outcomes and their likelihoods, and in most situations you will see a bell-shaped, roughly normal distribution of the possible outcomes.

Let us try to understand this with the help of an example. Suppose you are managing a project involving the creation of an eLearning module. The work comprises three tasks: writing content, creating graphics, and integrating the multimedia elements. Based on prior experience or other expert knowledge, you determine the best-case, most-likely, and worst-case estimates for each of these activities, as given below:

Tasks                     Best-case estimate   Most likely estimate   Worst-case estimate
Writing content           4 days               6 days                 8 days
Creating graphics         5 days               7 days                 9 days
Multimedia integration    2 days               4 days                 6 days
Total duration            11 days              17 days                23 days

The Monte Carlo simulation randomly selects input values for the different tasks to generate possible outcomes. Let us assume the simulation is run 500 times. From the table above, we can see that the project can be completed in anywhere between 11 and 23 days. When the simulation runs are performed, we can analyze the percentage of times each total duration between 11 and 23 days is obtained. The following table depicts the outcome of one possible set of simulation runs:

Total project duration (days)   Runs finishing on or before   Cumulative % of runs
11                              5                             1%
12                              20                            4%
13                              75                            15%
14                              90                            18%
15                              125                           25%
16                              140                           28%
17                              165                           33%
18                              275                           55%
19                              440                           88%
20                              475                           95%
21                              490                           98%
22                              495                           99%
23                              500                           100%

This can be shown graphically in the following manner:

What the above table and chart tell us, for example, is that the likelihood of completing the project in 17 days or less is 33%. Similarly, the likelihood of completing it in 19 days or less is 88%, and so on. Note the significance of the 17-day mark: according to the Most Likely estimates, 17 days is how long you would expect the project to take, yet the simulation assigns it only a 33% chance. Given the above analysis, it looks far more likely that the project will end up taking somewhere between 19 and 20 days.
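Here is a minimal sketch of how such a cumulative table can be produced, assuming (as one reasonable choice) a triangular distribution over each three-point estimate; since the table above is illustrative, the percentages this prints will differ somewhat from it:

```python
import random
from collections import Counter

tasks = [(4, 6, 8), (5, 7, 9), (2, 4, 6)]   # (best, likely, worst), in days

N = 500
totals = [
    round(sum(random.triangular(best, worst, likely)
              for best, likely, worst in tasks))
    for _ in range(N)
]

counts = Counter(totals)
cumulative = 0
for days in range(min(totals), max(totals) + 1):
    cumulative += counts[days]
    print(f"{days:2d} days: {cumulative / N:6.1%} of runs finished on or before")
```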

Benefits of Using Monte Carlo Analysis

Whenever you face an estimation or forecasting situation involving a high degree of complexity and uncertainty, it is best to use Monte Carlo simulation to analyze the likelihood of meeting your objectives, given your project risk factors as captured in your schedule risk profile. The technique is effective because it is based on numerical evaluation of the data rather than guesswork. The key benefits of using Monte Carlo analysis are listed below:

  • It is an easy method for arriving at the likely outcome of an uncertain event, together with an associated confidence limit for that outcome. The only prerequisites are identifying the range limits of your variables and their correlations with one another.
  • It eases decision-making by providing numerical data to back your decisions.
  • Monte Carlo simulations are particularly useful when analyzing cost and schedule. With their help, you can add cost and schedule risk events to your forecasting model with a greater level of confidence.
  • You can also use the Monte Carlo analysis to find the likelihood of meeting your project milestones and intermediate goals.

Now that you are aware of Monte Carlo analysis and its benefits, let us look at the steps to be performed when analyzing data using a Monte Carlo simulation.

Monte Carlo Analysis: Steps

A Monte Carlo analysis follows the series of steps listed below:

  1. Identify the key project risk variables.
  2. Identify the range limits for these project variables.
  3. Specify probability weights for this range of values.
  4. Establish the relationships for the correlated variables.
  5. Perform simulation runs based on the identified variables and the correlations.
  6. Statistically analyze the results of the simulation run.

Each of the above listed steps of the Monte Carlo simulation is detailed below:

  1. Identification of the key project risk variables: A risk variable is a parameter that is critical to the success of the project, where a slight variation in its outcome might have a negative impact on the project. Project risk variables are typically isolated using sensitivity and uncertainty analysis.

    Sensitivity analysis is used to determine the most critical variables in a project: all variables are subjected to a fixed deviation and the outcome is analyzed, and those with the greatest impact on the project outcome are isolated as the key project risk variables. Sensitivity analysis on its own, however, might give misleading results, as it does not take into consideration how realistic the projected change in a specific variable is. It is therefore important to perform uncertainty analysis in conjunction with sensitivity analysis.

    Uncertainty analysis establishes the plausibility of a result and helps verify the fitness or validity of a particular variable. A variable with a high impact on the overall project might still be insignificant if the probability of its occurrence is extremely low, which is why this second analysis matters.

  2. Identification of the range limits for the project variables: This process involves defining the maximum and minimum values for each identified project risk variable. If historical data is available, this is an easier task: simply organize the available data into a frequency distribution by grouping the number of occurrences at consecutive value intervals. Where exhaustive historical data is not available, rely on expert judgement to determine the most likely values.
  3. Specification of probability weights for the established range of values: The next step is to allocate a probability of occurrence to the values in each range. To do so, multi-value probability distributions are deployed. Commonly used distributions for analyzing risks are the normal, uniform, triangular, and step distributions. The normal and triangular distributions place probability symmetrically within the defined range, with concentration increasing towards the centre, while the uniform distribution spreads it evenly across the range; a short sketch of sampling from each type is given after this list.



  4. Establishing the relationships for the correlated variables: The next step is to define the correlations between the project risk variables. Correlation is the relationship between two or more variables in which a change in one induces a simultaneous change in the other. In a Monte Carlo simulation, input values for the risk variables are selected at random, so if inputs are generated that violate the correlation between variables, the output is likely to drift from the expected value. It is therefore important to establish the correlations and apply corresponding constraints to the simulation runs, which is done by specifying a correlation coefficient defining the relationship between the variables; when the simulation runs are performed, the software ensures the specified relationship is adhered to (a sketch of one way to do this is given after this list).
  5. Performing simulation runs: The next step is to perform the simulation runs, typically using simulation software; 500 – 1000 runs ideally constitute a good sample size. During each run, random values of the risk variables are drawn according to the specified probability distributions and correlations.
  6. Statistical analysis of the simulation results: Each simulation run represents one possible outcome for the project. A cumulative probability distribution of all the runs is plotted, from which the probability of the project result being above or below any specific value can be read. This cumulative probability distribution is used to assess the overall project risk.
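As referenced in step 3, here is a short sketch of drawing samples from each of the named distribution types (all parameters hypothetical; the step distribution is modelled as piecewise-constant weights over intervals):

```python
import random

def sample(kind):
    if kind == "normal":
        return random.gauss(10, 1.5)          # mean 10, standard deviation 1.5
    if kind == "uniform":
        return random.uniform(7, 13)          # equal weight across the range
    if kind == "triangular":
        return random.triangular(7, 13, 10)   # low, high, mode
    if kind == "step":
        # Choose an interval by weight, then a point uniformly within it.
        low, high = random.choices(
            [(7, 9), (9, 11), (11, 13)], weights=[0.2, 0.5, 0.3])[0]
        return random.uniform(low, high)

for kind in ("normal", "uniform", "triangular", "step"):
    draws = [sample(kind) for _ in range(10_000)]
    print(f"{kind:10s} mean ≈ {sum(draws) / len(draws):.2f}")
```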
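And for step 4, one common way to honour a specified correlation coefficient (assumed here for illustration; copula-based methods are another option) is to draw the correlated variables from a multivariate normal distribution with the matching covariance:

```python
import numpy as np

rng = np.random.default_rng(42)

mean = [10.0, 20.0]   # expected durations of two correlated tasks, days
sd = [2.0, 4.0]       # their standard deviations (hypothetical)
rho = 0.7             # the specified correlation coefficient
cov = [[sd[0] ** 2,          rho * sd[0] * sd[1]],
       [rho * sd[0] * sd[1], sd[1] ** 2]]

draws = rng.multivariate_normal(mean, cov, size=10_000)
totals = draws.sum(axis=1)

print(f"Observed correlation:  {np.corrcoef(draws.T)[0, 1]:.2f}")
print(f"P80 of total duration: {np.percentile(totals, 80):.1f} days")
# With rho = 0 the P80 would be lower: positively correlated risks widen
# the spread of the total, so ignoring correlation understates the risk.
```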

Summary

Monte Carlo simulation is a valuable technique for analyzing risks, specifically those related to cost and schedule. The fact that it is based on numeric data gathered from multiple simulation runs adds even greater value to the technique, and it helps remove project bias in the selection of alternatives when planning for risks. While running the simulation, it is advisable to seek the active participation of the key project decision-makers and stakeholders, specifically when agreeing on the range values of the project risk variables and the probability distributions to be used. This will go a long way toward building stakeholder confidence in your overall risk-handling capability for the project, and it serves as a good opportunity to make them aware of the risk management planning being done.

Though the Monte Carlo simulation has numerous benefits, the reliability of its outputs depends on the accuracy of the range values and the correlation patterns, if any, that you specify. You should therefore exercise extreme caution when identifying correlations and specifying range values; otherwise the entire effort will go to waste and the results will not be accurate.

In part 1 of this article I raised a number of risk-related observations, particularly around the validity of Murphy’s Law and the reality behind the Law of Averages.

Two more Scientific American articles (sorry, but I’m a real Scientific American fan), “Why Our Brains Do Not Intuitively Grasp Probabilities” and “How Randomness Rules Our World and Why We Cannot See It”, describe the concept of “Folk Numeracy”: “our natural tendency to misperceive and miscalculate probabilities, to think anecdotally, instead of statistically, and to focus on and remember short-term trends and small-number runs”. In a nutshell, we have evolved to clearly notice short-term trends but are predisposed to forget or ignore long-term ones. The author goes on to suggest that while our intuition evolved to serve social interactions and relationships (it plays an important role in our ability to form alliances and identify socially useful paths), we are nevertheless ill-equipped to apply it to probabilistic problems.

In “Knowing Your Chances” (Scientific American Mind, April/May 2009), the authors refer to “World Brain”, a 1938 book by the English writer H. G. Wells, in which he predicted that statistical thinking would become an indispensable trait, much like reading and writing. This prediction has not materialized, and the authors observe that “At the beginning of the 21st century, nearly everyone living in an industrial society has been taught reading and writing but not statistical thinking – how to understand information about risks and uncertainties in our technological world. That lack of understanding is shared by many physicians, journalists and politicians…and as a result, spread misconceptions to the public.”

So what does it all mean?

We are all naturally predisposed to a certain level of Risk Attitude. Risk Attitude (as defined by David Hillson & Ruth Murray-Webster) is a “chosen state of mind with regard to those uncertainties that could have a positive or negative effect on objectives, or more simply a chosen response to perception of significant uncertainty”.

Josh Nankivel, based on a podcast by Cornelius Fichtner (which I thoroughly enjoyed while preparing for my PMP) gives a good summary of the commonly referenced Risk Attitudes (a complete copy of which is given below):

  1. Risk Seeker – enjoys and seeks uncertainty in search of greater opportunities; can be overly optimistic and fail to take possible negative consequences seriously.
  2. Risk Averse – uncomfortable with uncertainty; doesn’t like risk.
  3. Risk Tolerant – reasonably comfortable with uncertainty, but tends to stick their head in the sand and ignore risks.
  4. Risk Neutral – analyzes risks and weighs possible negative/positive outcomes and probabilities objectively.

Josh makes the observation, which I tend to agree with, that most project managers tend to be Risk Tolerant: they will conduct a basic Risk Identification process early in the piece but then rely on gut feel and a ‘let’s hope for the best’ approach when faced with reality. Josh goes on to suggest that Risk Neutral is the goal, and he is probably (excuse the pun) correct. The problem, as indicated above, is that for most of us this requires conscious effort and careful attention to details we are not naturally inclined to give.

Formal adherence to Risk Management processes can cut through the complexity and the PMBOK is certainly a good place to start as it refers to the basic tools and techniques required to ensure you manage your risks adequately.

I’ve had some interesting professional challenges lately, all of which can be traced back to issues associated with risk management. This is not surprising. In my view, the biggest challenge in any project is properly managing risks. It’s not that all other areas of project management are a walk in the park; it’s more that when it comes to identifying and managing risks, some tend to be swayed by subjective arguments, wishful thinking and gut feel.

Most people subscribe to the reality of Murphy’s Law, namely that “if something can go wrong, it will”. Despite the common wisdom hidden in this simple, yet powerful, statement, some people tend to dismiss it on the grounds that statistically speaking our chances of hitting a bad run are equal to our chances of hitting a good run. So no reason for overwhelming concern as the Law of Averages will sort things out.

This notion is not quite correct, as demonstrated in an article published in the April 1997 edition of Scientific American under the heading “The Science of Murphy’s Law”. The article’s conclusion is that “life’s little annoyances are not as random as they seem; the awful truth is that the universe is against you”. So when we say that “if something can go wrong, it will”, we actually mean it: not that things will go wrong 100% of the time, but that there is a good chance they will go wrong more than 50% of the time.

Which puts the Law of Averages in question. Well, things are not quite straightforward there either. Another Scientific American article (this time from April 1988, titled “Repealing the Law of Averages”) tackles the common wisdom according to which, if you toss a fair coin and keep a running count of how many times each side turns up, a large number of tosses will produce a relatively even number of heads and tails. This assertion is mathematically correct, but only for VERY large numbers (can you count to infinity?). In real-life situations, where the sample is limited, the Law of Averages cannot be invoked, at least not as a serious planning tool.

To be continued…

Cursory discussions with young project managers reveal a simple yet concerning fact. Most project managers are aware of the need to identify and manage project risks, and most will be aware of the need to establish and publish a project risk register. That’s the good news. Where most inexperienced project managers fail is in understanding the need to rigorously manage the risks arising inherently from their project schedule.

A short example illustrates the issue quite clearly. Assume your project has 5 tasks, each estimated with a confidence level of 90%. Would you say that your overall chance of meeting the project’s target delivery date is 90%? You might intuitively say ‘Yes’, but you would be wrong. Assuming the tasks are independent and all must finish on time, the correct answer is actually less than 60% (0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.59). So, in this example, if you were confidently managing your project expecting a very high chance of meeting your deadline, you could be in for a surprise when some of the odds start playing against you.
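The compounding effect grows quickly with project size, as this tiny sketch shows (still assuming independent tasks that must all finish on time):

```python
for n in (5, 10, 20, 50):
    print(f"{n:2d} tasks at 90% confidence each -> {0.9 ** n:5.1%} overall")
```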

The initial lesson from this example is that, as a project manager, you need to understand and manage the risks inherently built into your schedule. These risks require the least brainstorming or group discussion, as they jump directly and explicitly out of your project schedule and should therefore be the most apparent ones.

The reality of most projects is that they will have substantially more tasks than the example above. Not only that, but each task is likely to carry a different confidence level, so establishing an overall project scheduling risk factor is far more complicated.

A simplistic approach to taking scheduling confidence into account is the PERT (also known as Three-Point Estimate) technique. This technique uses three estimates to define an approximation of each activity’s duration (and cost). It works as follows: determine the Optimistic (O), Pessimistic (P) and Most Likely (M) estimates for each activity, then calculate the expected activity duration as a weighted average of the three parameters, using the following formula:

Expected activity duration = (O + 4M + P) / 6.
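For example, with hypothetical estimates of O = 4, M = 6 and P = 8 days:

```python
O, M, P = 4, 6, 8               # optimistic, most likely, pessimistic (days)
print((O + 4 * M + P) / 6)      # PERT weighted average -> 6.0 days
```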

The reason I referred to this approach as “simplistic” is that, despite its elegant look, it is statistically incorrect, primarily because it assumes that the duration of each activity can be determined independently of all other activities, which is hardly ever the case.

This is where Monte Carlo Analysis comes in handy. Rather than the simplistic weighted average suggested by PERT, the Monte Carlo technique uses the three estimates to repeatedly simulate the project’s completion date, taking into account the statistical likelihood that each activity’s duration will fall somewhere on the continuum between the three estimates. The result of this analysis is not a definitive answer of the form “based on the individual activity duration estimates, the project is expected to finish on date X”. Rather, it takes the form “based on the individual activity duration estimates, there is an X% chance that the project will be complete on or before date Y”.

The chart below was produced using Monte Carlo Simulation software and highlights the type of output such a tool produces. Let’s examine the chart and understand its content. Its overall purpose is to present the likelihood (or, a better term, the probability) of the project completing by any particular date. The left axis (Hits) shows the number of times during the simulation that a particular date was identified as a potential completion date. The right axis (Cumulative Frequency) shows the accumulated number of runs in which the project completed on or before a particular date. The bottom axis (Distribution) shows the identified potential completion dates, while the height of the bar associated with each date reflects the number of times the simulation identified that date as the project’s completion date.

In this example, having analyzed the project activity durations, the following statements can now be made:

  • Based on the individual activity duration estimates, there is an 80% chance that the project will be complete on or before 21/05/02.
  • Based on the individual activity duration estimates, there is a 50% chance that the project will be complete on or before 15/05/02.

By the way, notice the yellow arrow pointing at 08/05/02: this is the date shown as the project completion date on the project plan. Now that we’ve performed the risk analysis, we can determine that our chance of actually finishing the project on or before that date is just 15%!

As the name of this post suggests, there is much more to Monte Carlo Simulation software than what I’ve presented here. The above, however, highlights the fundamental need to consider project scheduling risks, because if you don’t, they WILL come back to haunt you.

Can you see how this type of analysis can help you better manage your project risks?

I value your comments, if you have any thoughts on the above please join in and share with others!
