New research from the University of Minnesota provides further insight into the way ‘rational’ decisions are made. The research set out to explore the apparent lack of applied rationality in binary choice tasks, where one option has a higher probability of being correct than the other.

Let’s take, for example, a coin known to be biased such that, on average, it will land on heads 70% of the time and on tails 30% of the time. The logical thing to do with such a coin is to always call the option with the higher probability of occurring, i.e. heads, irrespective of how many times that option has occurred in the immediate past.

Empirical trials suggest that this is not quite what happens. The human tendency is to take information gathered from immediate past experience and fold it into an overall understanding of the world around us. So, in the case of the biased coin, instinct dictates that the decision about the next throw depends not only on our knowledge of the coin itself (i.e. the 70/30 bias) but also on the results of previous throws.
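This tendency is often called ‘probability matching’, and a quick simulation shows why it costs us. The 70/30 figures come from the example above; everything else in the sketch (sample size, seed) is an arbitrary illustration, not part of the research itself:

```python
import random

random.seed(42)
N = 100_000
p_heads = 0.7  # the coin's known bias

flips = [random.random() < p_heads for _ in range(N)]  # True = heads

# Strategy 1: always call heads, the more likely outcome.
always_heads = sum(flips)  # correct on every heads flip -> ~70% accuracy

# Strategy 2: "probability matching" - call heads only 70% of the time,
# mimicking the tendency to fold recent history into each guess.
matching = sum(flip == (random.random() < p_heads) for flip in flips)

print(f"always call heads:    {always_heads / N:.2f}")  # ~0.70
print(f"probability matching: {matching / N:.2f}")       # ~0.58 (= 0.7^2 + 0.3^2)
```

The matching strategy is right only about 58% of the time, yet it is the one we instinctively reach for.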

The research interprets this tendency as an attempt to see order where none is readily apparent. We know that the coin is biased, but we just can’t ignore the information we’ve gathered from previous throws, assuming that discarding this additional, newly available information would be the illogical thing to do.

We human beings are structure-seeking animals, and when obvious structure and order are not readily available, our mind simply creates them for us – nice and simple!

Another interesting observation published in the research is that education can mitigate the illogical behaviour: subjects who were led to understand that the outcomes were completely independent (implying that past observations are no reason to change future behaviour) did change their behaviour, choosing the options that reflect how the world really is rather than how they wanted it to work.

This last observation integrates nicely with the Dunning-Kruger effect, where education was found to be key to altering and ‘correcting’ people’s attitudes: people with limited understanding of a subject area were found to exhibit higher levels of (mistaken) confidence, while those with a better understanding exhibited lower levels of confidence.

Knowledge acquired through education seems to enable us to make better decisions while somehow also instilling a level of humility – an acknowledgement that we still don’t know it all and there’s more to learn.

This is fascinating stuff. Think about it! 

Related Post

Letter to a Young Project Manager Dear L.J. We have barely met and had only the brief and passing opportunity to exchange a mere few words before a daunting and sombre thought enter...
The First Ever PM FlashBlog is Coming to a Blog Ne... Over the past couple of weeks I have been in touch with dozens of project management related bloggers to organize the first ever coordinated blogging ...
The Ten Commandments of Project Management Over the years I've seen many attempts to construct the "10 commandments of project management". I believe there is an element of cheekiness in this a...
The Secret to Clearing the PMP Certification Exam ... The Project Management Body of Knowledge (PMBOK) The PMBOK, published by the PMI, is a compilation of the project management guidelines to be adopted...

A colleague and friend (thanks Leon) has introduced me to the Dunning-Kruger effect.

The Dunning-Kruger effect (a scientific elaboration on the famous quote attributed to Charles Darwin, that “Ignorance more frequently begets confidence than does knowledge”) states simply that ignorance fairly frequently results in overconfidence and self-certainty, even in the face of evidence and a body of knowledge suggesting otherwise.

In David Dunning’s own words:

“There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.”

The first thing that came to my mind when investigating this phenomenon was a number of previous discussions I’ve been party to regarding projects’ failure rates and the various studies and publications attempting to persuade us that a large portion of IT projects end in utter failure.

Are we witnessing a live example of the Dunning-Kruger effect, where people without the proper scientific or methodological experience make use of unsubstantiated data to prove a point they had already formed in their minds, and were simply looking for a ‘study’ to confirm?

Think about it!


I got intrigued by an article in LiveScience, titled “Angry Boss Can Bring Out Workers’ Creativity“.

The gist of the article is that, contrary to intuition, an angry boss telling his or her employees off about their performance may, as a consequence, bring about better performance and creativity. The key, however, is the level of engagement and desire to understand exhibited by the employee – a trait called epistemic motivation.

Epistemic motivation, in simple words, relates to the level of involvement and dedication one has towards a particular area. It is less about the way you apply yourself physically to a task and more about your level of intellectual and emotional involvement, and your motivation to see that task done in the best possible way.

So here’s what the study shows. Participants with a high degree of epistemic motivation reacted positively to angry feedback, producing more ideas, greater originality and breadth, and a higher level of engagement. Participants with a low degree of epistemic motivation showed the opposite.

This and previous studies by the same research team conclude that for some employees (those exhibiting a high degree of epistemic motivation), working for an angry boss yields better results than working for a non-angry, happy one.

I personally have some methodological issues with the above study: although it demonstrates increased performance as a result of an angry reaction, it does not necessarily imply that an angry boss is a better boss. I would suggest – and would await further research to confirm it – that a motivational, happy and easy-going boss is likely to achieve the same increased performance from his or her team while, obviously, also appealing to those team members who, for whatever reason, do not show the same high level of epistemic motivation.

So, watch this space.


Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses (source: http://en.wikipedia.org/wiki/Scientific_method).

If you are a regular reader of this blog you will have noticed that I take great care in substantiating my arguments (for better or worse) with corroborating evidence based on a scientific approach. Applying a scientific approach means that arguments can be independently verified, and supporting sources can be checked and their authenticity confirmed.

Last week I read two articles which made me further concerned about the need to recognise and advocate a more scientific approach in project management blogs, some of which are evidently based on gut feel to a degree that falls below any reasonable threshold.

Lawrence M. Krauss, in a Scientific American article (Dec 2009 – titled “War Is Peace: Can Science Fight Media Disinformation?”) makes the observation that “The increasingly blatant nature of the nonsense uttered with impunity in public discourse is chilling. Our democratic society is imperiled as much by this as any other single threat, regardless of whether the origins of the nonsense are religious fanaticism, simple ignorance or personal gain.”

A similar note is raised by Jim Giles in “Living in Denial: Unleashing a lie” (NewScientist.com, 21/05/2010). I encourage you to read Jim’s article, as it is a fascinating tale of information (or, more precisely, disinformation) management in the modern era. The key point arising from his article is the ease with which false data can propagate to the point where fiction and reality are no longer distinguishable.

The second article I read last week was published by Geoff Crane of Papercut Edge. In his article (titled “Annual Cost of Project Failure“) he references a paper published by Roger Sessions in Nov 2009, titled “The IT Complexity Crisis: Danger and Opportunity“. Again, I encourage you to read Roger Sessions’ document, as it is crucial for understanding the logical flaws in his approach. In a nutshell, Roger Sessions makes the following assertions:

  • A = 66% = share of all Federal IT dollars invested in projects that are “at risk”;
  • B = 65% = assumed proportion of those at-risk projects that will fail;
  • C = 2.75% = proportion of GDP spent on IT;
  • D = 7.5 = a multiplier representing the total dollar impact of a failed project on the economy;
  • E = $69,800 billion (USD) = worldwide GDP;
  • Cost of failure = A × B × C × D × E ≈ $6,180 billion (USD).
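For the record, the arithmetic itself does multiply out roughly as claimed; here is a quick check using the figures from the list above (my critique, below, is about the inputs, not the multiplication):

```python
# Reproducing Roger Sessions' cost-of-failure calculation.
A = 0.66      # share of Federal IT dollars in "at risk" projects
B = 0.65      # assumed failure rate of those at-risk projects
C = 0.0275    # proportion of GDP spent on IT
D = 7.5       # assumed economy-wide impact multiplier per failed project
E = 69_800    # worldwide GDP, in USD billions

cost = A * B * C * D * E
print(f"Estimated cost of failure: ${cost:,.0f} billion")  # ~$6,176 billion
```

The product comes to roughly $6,176 billion, which the paper rounds to $6,180 billion. The whole estimate is, of course, only as good as the five numbers fed into it.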

I’ve read Roger Sessions’ paper and realised it relies heavily on data provided in the Budget of the United States Government, Fiscal Year 2009, Analytical Perspectives. Before you rush to read this document, go straight to chapter 9, titled “Integrating Services with Information Technology”, as this is where the intellectual challenges associated with the above can be found.

While researching this topic I came across an excellent analysis by Bruce Webster (see his article titled “The Sessions paper: an analytical critique“), outlining the methodological issues arising from Roger Sessions’ paper. I’m not going to repeat it here because I’d like to encourage you to read Bruce’s article.

The bottom line is that there is an ongoing state of confusion and misinformation regarding the current rate of IT project failures. I’ve addressed my reservations regarding the Standish report and its prolific interpretation in a number of previous posts (see Related Posts below). Now both Roger Sessions and Geoff Crane make the point that it is not the numbers that matter as much as the magnitude. Both fail to see that if the numbers are questionable, the magnitude is of no value whatsoever.

I am seriously concerned, professionally, that further analysis and interpretation is carried out on such shaky foundations, to the point where claims are taken as facts and those facts are then used to prove unsubstantiated assumptions. I suspect that some of the claims are propelled by consultants who want to advance their services, and others by those who believe they have simply come across a good idea to write an article about. At the end of the day, regardless of the motives, it is the readers who need to make up their minds. The only way to allow our audience to make a proper judgement call is by providing properly presented, fairly substantiated information. Without such due diligence, the words we write are not worth the 0s and 1s they are written on.

Related Posts

Projects failure rate – the conventional wisdom is... The Problem Don’t be fooled, as despite what you might have heard, told or read, projects’ failure rate is not as high as some might want you to beli...
Projects Failure Rate – the Threequel A quick recap Over the past week I've had an interesting discussion with Steven Romero about certain aspects associated with the use of the Standish C...
A quote of the day – re. the Standish Report “Our research shows that the Standish definitions of successful and challenged projects have four major problems: they’re misleading, one-sided, ...

I love science. Statistically speaking (i.e. in the vast majority of cases), science is reliable and straightforward, and it provides the tools and techniques required to understand or explain various human endeavours.

I was thinking about this in the context of the effort that we humans need to expend just to keep things running smoothly. More specifically, I was pondering the amount of human capital and emotion required to keep projects on track, ensure that deliverables are produced on time, keep people’s well-being and attitudes in check, and so on.

The reality is that substantial effort needs to be spent just to ensure that things progress OK – in other words, to keep the lights on.

Ensuring optimal performance under tight and pressured conditions requires even more effort and management talent, without which meeting objectives would be a definite challenge.

So, why is that?

Well, brace yourselves, because it is all about the Second Law of Thermodynamics.

The Second Law of Thermodynamics states simply that:

S(t+1) >= S(t)
S(t) = k*ln(w)
where S is entropy, t is time, ln is the natural log operator, k is Boltzmann’s constant (1.38E-23 J/K), and w is the number of quantum states in the isolated system (see http://www.talkorigins.org/origins/jargon/jargonfile_s.html).

OK, OK, just kidding.

The Second Law of Thermodynamics states simply that systems have a universal tendency to gravitate towards disorder (see a more detailed definition at http://en.wikipedia.org/wiki/Second_law_of_thermodynamics).

This law has profound implications for managing projects, as it implies that unless effort (i.e. energy) is applied to the various aspects of project activities, there is a high degree of certainty that processes will fail to deliver.

We all know this from our own experience. Things don’t just happen. Business cases, business requirements, functional specifications, design documentation, code, test plans and test executions all need to be constantly monitored, controlled, coordinated and fine-tuned. Even with the best up-front planning and a collaborative approach, ensuring that things progress as planned towards successful completion is most often far from guaranteed.

This, by the way, links very tightly with a concept discussed here earlier: Murphy’s Law. Understanding that “if something can go wrong, it will” is but a logical extension of the Second Law of Thermodynamics.

Think about it, as it is this law that is more likely than anything else to drive your plans.


Craig Brown reminded me in a recent post (titled “The costs and risks of decision making“) that there are some hidden aspects to decision making, and that behind the complex process of making decisions lie some fundamental behavioural and psychological concepts.

The question raised by Craig was about the differences between operational managers and project managers. The obvious (though simplistic) answer could be that operational managers need to make operational decisions while project managers need to make project-related decisions.

Another dimension to this question could be: how do people generally make decisions, and how can one aspire to, or set the scene for, making better decisions?

At the outset I should point out the (obvious, I hope) observation that decision making is just another facet of risk management. After all, a decision is nothing but a commitment to take (or avoid) a particular course of action based on the positive or negative risks associated with the outcome. It is fairly well understood and widely accepted that very few (if any) decisions are risk-free, as in that case no decision would be required.

So what does science have to say about the process of decision making? There is a vast body of scientific literature dealing with this question. The outline below is but a small and brief introduction and certainly does not cover the topic in its entirety:

To deliberate or not to deliberate?

Scientists are at odds over the question of deliberation vs. gut feel. A study published in Science in Feb 2006 (titled “On Making the Right Choice: The Deliberation-Without-Attention Effect“) argues that thorough deliberation does not necessarily result in a better outcome. The study goes further, suggesting that in certain circumstances, for both simple and complex choices, ‘less’ (deliberation) was found to produce better decisions than ‘more’.

The above study was further analysed in an article published in Scientific American in Feb 2007 (titled “Big Decision: Head or Gut? Hmm…“). The article observes that, on the one hand, there is a growing body of evidence suggesting that in many circumstances ‘snap’ (or what we might call gut-feel) decisions will result in better outcomes than more elaborate ones. This, however, is contrasted with other evidence suggesting that the above cannot be taken as a blank cheque, and that in some cases thinking things through results in a better outcome over the long run.

It is interesting to note that one of the arguments in support of a consultative and deliberative approach is that people who are involved in a deliberative process will be more likely to abide by its decisions. This doesn’t suggest that the decision will be a better one, only that once a decision is made (for better or worse), those who were involved in making it are more likely to follow it through.

And while on this topic, recent research from the Maastricht University School of Business and Economics (see “Making a Decision? Take Your Time”, Scientific American, April 2010) concludes that delaying a choice can, in general, help us make better decisions. The research further observes that delaying a decision allows us to ‘chill out’, with the result that we are able to make a better choice.

The Executive Function – the law of diminishing returns

The Encyclopaedia of Mental Disorders defines the Executive Function as “a set of cognitive abilities that control and regulate other abilities and behaviors. Executive functions are necessary for goal-directed behavior. They include the ability to initiate and stop actions, to monitor and change behavior as needed, and to plan future behavior when faced with novel tasks and situations. Executive functions allow us to anticipate outcomes and adapt to changing situations. The ability to form concepts and think abstractly are often considered components of executive function.”

What does it all mean?

The human brain has limited processing capacity that can, under certain conditions, deteriorate through overuse. Decision making requires cognitive resources. These resources, when used to make complex decisions, get increasingly strained to the point where the quality of our decision making suffers. This is a clear case of the law of diminishing returns in action: an incremental demand for cognitive resources can result in decisions of lower quality than the ones achieved previously (see further details in “Tough Choices: How Making Decisions Tires Your Brain”, Scientific American, July 2008; and “Mindless Collectives Better at Rational Decision Making Than Brainy Individuals”, Scientific American, July 2009).

Being mindful about the way we make decisions

The human brain is a sophisticated yet unpredictable organ. Using our heuristic thinking capabilities we are able to make wonderful predictions and decisions – as well as inaccurate and completely disastrous ones.

By knowing our ‘built-in’ inefficiencies, we are able to fine-tune our decision-making process, mitigating the risk of the deteriorating quality built into our very consciousness.


I’ve referred in an earlier post to the impact that multitasking is having on project deliveries.

A recent article in Scientific American challenges the conventional wisdom and suggests, based on a study conducted by the cognitive neuroscience laboratory at the French National Institute for Health and Medical Research (Inserm) in Paris, that our mind is better suited to multitasking than previously thought.

This, however, is predicated on a specific set of circumstances: the two tasks must each carry a sufficiently high level of incentive (i.e. carrying out each task carries the promise of a reward), and neither of the competing tasks may generate too many unrelated thoughts (in which case our brain loses the capacity to keep track of the other task and greater inefficiencies are introduced into the multitasking process).

The above study cannot (yet) be used as a justification for encouraging multitasking in the workplace, but it does provide some scientific evidence that in certain (admittedly restricted) circumstances, multitasking will not necessarily result in reduced productivity.
