The Magical “All Possibilities”:

When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth. – Holmes

Imagine that you have a coin in your hand, and you are throwing it up in the air. How would you assign probabilities for the outcome? Generally, we are taught that a coin flip has a 50% chance of tails and 50% chance of heads, assuming that we are using a fair coin. The reasoning is that there are only two possible outcomes (heads, tails). Therefore, the probability of either one happening is 50%.

I have written about Bayesian epistemology before. If we evaluate the coin flip example, there is more going on here than meets the eye. The basis of all of this is the question: from whose perspective? In Bayesian epistemology, probability is not a feature of the phenomenon, such as the coin flip. The coin is not aware of the probabilities with which it should fall. The probabilities that we assign are a feature of our uncertainty, and they have nothing to do with the coin. In the example, only two outcomes were considered. Depending on the observer, this could be expanded. For example, we can consider the coin falling on its edge. Or perhaps the coin may not land at all: we can imagine a bird catching it in midair and swallowing it, or the coin being thrown in space. Based on our experience, we may conclude that the last two scenarios are unlikely. But the key points here are:

  1. Every description requires a describer. Every observation requires an observer. In science and in general language, we ignore the describer/observer. We engage in conversations or studies as if we have access to objectivity. The science we have is a human science in the sense that it is a version we have generated based on what our human interpretative framework affords.
  2. We need to be aware of how we made our observation, and be open to modifying it. Whatever we say or do is based on the current state of our knowledge/belief system. This needs to be updated based on the feedback from the environment.
  3. Any attempt at an experiment or study is an attempt to reduce our uncertainty about something. Going back to Bayesian epistemology, any expression of probability is an expression of our uncertainty. The phenomena that we are studying are not following any rules. They do not have a mind of their own. We are projecting our “certainties” as rules onto them. A great example is the often-quoted scenario of birds flocking together to explain complexity. The birds do not know these rules. They exhibit a behavior that got reinforced through natural selection. The rules are merely a projection of what we think is going on. In other words, the complexity of the flight of birds emerging from simple rules is just our construction.
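
The observer-dependence and belief-updating described above can be sketched in a few lines of Python. This is a minimal illustration, not from the original text: the flip sequence and the uniform prior are hypothetical, and the Beta-Bernoulli update is one standard way to revise a belief based on feedback from the environment.

```python
# A minimal sketch (flips and prior are illustrative): an observer starts with
# a belief about a coin's bias and revises it on each observed flip.

def update(heads_count, tails_count, flip):
    """Beta-Bernoulli update: increment the count matching the observed flip."""
    if flip == "heads":
        return heads_count + 1, tails_count
    return heads_count, tails_count + 1

a, b = 1, 1                      # uniform prior: no reason to favor either side
for flip in ["heads", "heads", "tails", "heads"]:
    a, b = update(a, b, flip)

posterior_p_heads = a / (a + b)  # posterior mean of the coin's bias
print(posterior_p_heads)         # 0.666..., after 3 heads in 4 flips
```

The probability printed at the end belongs to the observer’s model, not to the coin; a second observer with a different prior or a different outcome set would report a different number.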

The idea of “all the possibilities” is made quite clear in the Arthur Conan Doyle quote at the start of this post. This quote is often touted in TV shows and movies alike. However, the quote represents a fallacious idea, the root of which stems from an incorrect assumption. The assumption here is that one can eliminate ALL which is impossible. Similar to the coin toss example, this depends on the observer and their ability to know ALL that can happen, which requires omniscience. Additionally, one has to disprove every one of those possible outcomes. Only after this can one truly look at whatever remains. Aptly, this fallacy is termed the “Holmesian Fallacy”. We simply do not have access to ALL possibilities.

In Cybernetics, a key idea that is relevant here is variety. Variety is the number of possible states. This was put forward by one of the pioneers of Cybernetics, Ross Ashby. For example, we could say that a coin has a variety of 2 – heads or tails. Or we could say that a coin has a variety of 3 – heads, tails or its edge. As we can see, the variety is dependent upon the observer. Being aware of this dependency is part of second-order cybernetics. If we were to restate the definition of variety in second-order cybernetics, it would be – variety is the number of possible states as perceived by an observer. Variety is tightly linked to the concept of entropy.
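
Under the simplifying assumption of equally likely states, the link between variety and entropy can be shown directly: entropy in bits is the base-2 logarithm of the variety. The snippet below is an illustrative sketch, not something from Ashby's text.

```python
import math

# Variety is the number of states an observer distinguishes; with equiprobable
# states, entropy in bits is log2 of that variety.
def entropy_bits(variety):
    return math.log2(variety)

observer_1 = {"heads", "tails"}          # variety 2
observer_2 = {"heads", "tails", "edge"}  # variety 3

print(entropy_bits(len(observer_1)))     # 1.0 bit
print(entropy_bits(len(observer_2)))     # ~1.585 bits
```

The two observers report different entropies for the same coin, which restates the point that variety is observer-dependent.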

Ashby noted that the initial variety that we have perceived will tend to decay over time if nothing changes. A great example that Ashby gives is that of a wife visiting a prisoner. Let’s say that the wife wishes to convey a message to the prisoner using a cup of coffee that she can send to him. The warden is smart, and he warns the wife in advance that he will add cream and sweetener to the coffee, and will also remove the spoon from the coffee. In addition, the coffee will always be filled to the brim. The warden has removed a lot of variety from the cup of coffee. The wife realizes that the only variety now available to her is how hot the coffee is. She perceives the variety as 3 – HOT, TEPID or COLD. However, the warden is able to block this with time. If the warden delays giving the coffee to the prisoner, then this variety is also lost. As Ashby put it, as time progresses the variety in the set cannot increase and will usually diminish.
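
Ashby's point that variety cannot increase under a deterministic transformation can be sketched with the coffee example. The state names and the cooling map below are illustrative.

```python
# A deterministic transformation maps each state to exactly one successor,
# so the set of occupied states can never grow, and it usually shrinks.
cool_down = {"HOT": "TEPID", "TEPID": "COLD", "COLD": "COLD"}

states = {"HOT", "TEPID", "COLD"}  # the wife's initial variety: 3
for step in range(3):
    states = {cool_down[s] for s in states}
    print(step + 1, sorted(states))

# With enough delay only COLD remains: variety has decayed from 3 to 1.
```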

On a similar note, Ashby also spoke of the law of experience. He noted that when we impose a change in a ‘system’, we tend to reduce its knowledge of its initial state or variety. The example he gave is that of a group of boys who have been to the same school – it is found that a number of boys of marked individuality, having all been through the same school, develop ways that are more characteristic of the school they attended than of their original individualities.

If we include the idea of the observer here, we see the “system” as one that also includes the observer. This brings a self-referential nature to it. If nothing changes, our useful information regarding a phenomenon will either stay the same or decay over time. The useful variety that we have perceived will remain constant or will decay over time. In addition, as observers, we ourselves tend to fall in line or conform to whichever tribe or community we belong to. We lose our original variety with time. The first step in overcoming these tendencies is to be aware. Be aware of our blindness; be aware of our limitations and biases; be aware of our shortcomings. We have to be aware that we do not have knowledge of “ALL possibilities”. We have to be open to challenging our worldviews. We have to evaluate and error-correct our beliefs on a regular basis. We do not perform error-correction on a continuous basis, but on a discontinuous basis.

I will finish with an anecdote on the apparent randomness of quantum mechanics that prompted Einstein to say that God does not play dice. As noted Italian physicist Carlo Rovelli wrote:

When Einstein objected to quantum mechanics by remarking that “God does not play dice,” Bohr responded by admonishing him, “Stop telling God what to do.” Which means: Nature is richer than our metaphysical prejudices. It has more imagination than we do.

Einstein was worried about the uncertainties he faced with quantum mechanics and he noted that the metaphorical God does not play dice like that. In a similar way the late Stephen Hawking noted:

So God does play dice with the universe. All the evidence points to him being an inveterate gambler, who throws the dice on every possible occasion… Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can’t be seen. 

Stay safe and always keep on learning… In case you missed it, my last post was The “Mind Projection Fallacy” in Systems Thinking.


Information at the Gemba:


Uncertainty is all around us. A lean leader’s main purpose is to develop people to tackle uncertainty. There are two ways to tackle it: one is Genchi Genbutsu (go and see) and the other is the scientific method of PDCA. Claude Shannon, the father of Information Theory, viewed information as the possible reduction in uncertainty in a system. In other words, larger uncertainty presents a larger potential for new information. This can be expressed as the following equation:

New Information gain = Reduction in Uncertainty
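
The equation above can be made concrete with Shannon's entropy formula (the negative sum of p·log2(p) over the possible states). The probabilities below are illustrative, not from Shannon:

```python
import math

# Entropy measures uncertainty; the information gained from an observation
# is the drop in entropy. Probabilities here are illustrative.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [0.25, 0.25, 0.25, 0.25]   # four equally likely causes: 2 bits
posterior = [0.5, 0.5, 0.0, 0.0]   # an observation rules out two: 1 bit

information_gain = entropy(prior) - entropy(posterior)
print(information_gain)            # 1.0 bit of new information
```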

Shannon called the uncertainty entropy based on the advice of his friend John Von Neumann, a mathematical genius and polymath. The entropy in information theory is not exactly the same as the entropy in Thermodynamics. They are similar in that entropy is a measure of a system’s degree of disorganization. In this regard, information can be viewed as a measure of a system’s degree of organization. Shannon recalled his conversation with Von Neumann as below:

“My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’”

I loved the encouragement from Von Neumann that Shannon would have an advantage in a debate since “nobody knows what entropy really is”.

In this post, I am not going into the mathematics of Information Theory. In fact, I am not even going to discuss Information Theory itself, but rather the philosophical lessons from it. From a philosophical standpoint, Information Theory presents a different perspective on problems and failures at the gemba. When you are planning an experiment, and things go well and the results confirm your hypothesis, you do not learn any new information. However, when the results do not match your hypothesis, there is new information available to you. Thus, failures and similar challenges are opportunities to gain new information about your process.
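
The asymmetry in the paragraph above can be quantified with Shannon's surprisal, -log2(p): the less probable an outcome was under your hypothesis, the more information its occurrence carries. The probabilities below are hypothetical.

```python
import math

# Surprisal, -log2(p), measures the information an outcome carries.
# An expected result (high p) carries little; a surprising one carries a lot.
def surprisal_bits(p):
    return -math.log2(p)

print(surprisal_bits(0.95))  # result confirmed the hypothesis: ~0.07 bits
print(surprisal_bits(0.05))  # result contradicted it: ~4.32 bits
```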

There are seven lessons that I have, and they are as follows:

  • Information Gain ≠ Knowledge Gain:

One of the important aspects of the information available at the Gemba is that information does not automatically translate to knowledge. Information is objective in nature and consists of facts. This information gets translated to knowledge when we apply our available mental models to it. This means that there is potentially a severe loss depending on the receiver. A good analogy is Sherlock Holmes and Dr. Watson at the crime scene – they are both looking at the same information, but Holmes is able to deduce more.

  • Be Open:

When you assume full knowledge about a process, you are unwilling to gain knowledge from any new information available. You should be open to possibilities in order to welcome new information and thus a chance to learn something new. Sometimes, by being open to others’ viewpoints, you can learn new things. They may have a lot more experience and more opportunities for information than you have.

  • Go to the Gemba:

Most of the time, the source of information is the gemba. When you do not go to the source, the information you get will not be as pure as it was. The information you get has been contaminated with the subjective perspectives of the informer. You should go to the gemba as often as you can. The process is giving out information at all times.

  • Exercise Your Observation Skills:

As I mentioned above in the Holmes and Watson analogy, what you can gain from the information presented depends on your ability to identify information. There is a lot of noise in the information you might get, and you have to weed out the noise and look at the core information available. One of my favorite definitions of information is by the famous Cybernetician Gregory Bateson. He defined information as “the difference that makes the difference.” The ability to spot that difference in the information given depends mostly on your skill set. Go to the Gemba more often and sharpen your observation skills. Ask “for what purpose” and “what is the cause” more often.

  • Go Outside Your Comfort Zone:

One of the lessons in lean that does not get a lot of attention is – “go outside your comfort zone”. This is the essence of Challenge in the Continuous Improvement Pillar of the Toyota Way. When you stay inside your comfort zone, you are not willing to gather new information. You get stuck in your ways and trust your degrading mental model rather than challenging and nourishing it so that you are able to develop yourself. Failure is a good thing when you understand that it represents new information that can help you understand the uncertainties in your process. You will not try new things unless you go outside your comfort zone.

  • Experiment Frequently:

You learn more by exposing yourself to more chances of gaining new information. And you do this by experimenting more often. The scientific process is not a single loop of PDCA (Plan-Do-Check-Act). It is an iterative process, and you need to experiment frequently and learn from the feedback.

  • Challenge Your Own Perspective:

The Achilles’ heel for a lean leader is his confirmation bias. He may go to the gemba more often, and he may experiment frequently. Unless he challenges his own perspective, his actions may not be fruitful. My favorite question to challenge my perspective is “What is the evidence I need to invalidate my viewpoint right now, and does the information I have hint at it?” Similar questions ensure that the interpretation of the information you are getting is less tainted.

I will finish off with a funny story I heard about Sherlock Holmes and Watson;

Sherlock Holmes and Dr. Watson decide to go on a camping trip. All the way to the campsite, Holmes was giving observation lessons to Dr. Watson and challenging him. After dinner and a bottle of wine, they lay down for the night, and go to sleep.

Some hours later, Holmes awoke and nudged his faithful friend.

“Watson, look up at the sky and tell me what you see.”

Watson replied, “I see millions of stars.”

“What does that tell you?” Holmes asked.

Watson pondered for a minute.

“Astronomically, it tells me that there are millions of galaxies and potentially billions of planets.”
“Astrologically, I observe that Saturn is in Leo.”
“Horologically, I deduce that the time is approximately a quarter past three.”
“Theologically, I can see that God is all powerful and that we are small and insignificant.”
“Meteorologically, I suspect that we will have a beautiful day tomorrow.”
“What does it tell you, Holmes?” Watson asked.

Holmes was silent for a minute, then spoke: “Watson, you idiot. Someone has stolen our tent!”

Always keep on learning…

In case you missed it, my last post was The Pursuit of Quality – A Lesser Known Lesson from Ohno.

Qualities of a Lean Leader:


In today’s post I will look at the qualities of a lean leader. I have been using the term “lean leader” in my posts. This is not an official title, and this does not mean “supervisor” or “manager”. A lean leader is someone who takes initiative in improving one’s process and in developing those around them.

I have wondered which qualities a lean leader needs. I believe that the best source for this is Michael J. Gelb’s 1998 book, “How to Think Like Leonardo Da Vinci.” Michael researched Leonardo’s life and identified seven attributes to help one think like Leonardo Da Vinci. Michael listed them as Italian words to pay homage to the master. These are as follows:

  • Curiosità – An insatiable quest for knowledge and continuous improvement
  • Dimostrazione – Learning from experience
  • Sensazione – Sharpening the senses
  • Sfumato – Managing ambiguity and change
  • Arte/Scienza – Whole-brain thinking
  • Corporalità – Body-mind fitness
  • Connessione – Systems thinking

1) Curiosità:


Being curious is an essential attribute a lean leader should have. Being curious forces you to ask questions. Asking questions allows the other party to be involved. This leads to continuous improvement and discoveries. Michael defined this as “an insatiably curious approach to life and an unrelenting quest for continuous learning.”

2) Dimostrazione:


This can be described as a willingness to fail in order to learn from mistakes. Michael described this as “a commitment to test knowledge through experience, persistence, and a willingness to learn from mistakes.” The example I have here is Soichiro Honda. Soichiro did not have any formal education, yet he went on to build Honda Motor Co.

3) Sensazione:


Taiichi Ohno would be proud of this attribute. Michael described this as “the continual refinement of the senses, especially sight, as the means to enliven experience.” As lean learners know, Ohno was famous for his “Ohno circle”. Ohno used to teach supervisors, managers and engineers alike to observe the wastes by making them stand inside a hand-drawn chalk circle. They had to stay inside it until they started seeing the wastes like Ohno did.

4) Sfumato:

Sfumato refers to the style of painting Leonardo used. Sfumato is the technique of allowing tones and colors to shade gradually into one another, producing softened outlines or hazy forms. Michael described this as “a willingness to embrace ambiguity, paradox and uncertainty.” The Toyota Production System has many paradoxes and counter-intuitive principles. Most of this is because of the trial-and-error methods that Ohno utilized. All of the manufacturing norms were challenged and broken.

5) Arte/Scienza:


This attribute represents the synergy between art and science, logic and intuition. The classic TV show Star Trek played with this theme, since the two main characters, Spock and Kirk, represented logic and intuition respectively. A lean leader needs both logic and intuition in order to develop oneself. Michael described this as “the development of balance between science and art, logic and imagination”.

6) Corporalità:


In the Book of Five Rings, Miyamoto Musashi talked about fluidity: “Really skilful people never get out of time, and are always deliberate, and never appear busy.” To me, this is the essence of Corporalità. Michael described this as “the cultivation of grace, ambidexterity, fitness and poise.” The quality of Corporalità is achieved only through constant practice as one strives towards their ideal state.

7) Connessione:


Dr. Deming and Eliyahu Goldratt would be proud to see this attribute on the list. This attribute is about “systems thinking”. Michael described this as “a recognition and appreciation for the interconnections of all things and phenomena.” A lean leader should be able to see everything from a big-picture as well as a small-picture viewpoint. My favorite meme about Systems Thinking is the Never Miss A Leg Day meme. Local optimization of exercising just the upper body leads to poor system optimization (a muscular upper body and disproportionately skinny legs).

Leonardo, the Writer:

Leonardo da Vinci was also a writer. In his notebooks, he wrote numerous “jests” and fables. I will finish this post with a jest and a fable from the great mind of Leonardo Da Vinci:

A Jest:

It was asked of a painter why, since he made such beautiful figures, his children were so ugly; to which the painter replied that he made his pictures by day, and his children by night.

 The Tree & the Pole, A Fable:

 A tree which grew luxuriantly, lifting to heaven its plume of green leaves, objected to the presence of a straight, dry old pole beside it.

“Pole, you are too close to me. Can you not move further away?”

The pole pretended not to hear and made no reply.

Then the tree turned to the thorn hedge surrounding it.

“Hedge, can you not go somewhere else? You irritate me.”

The hedge pretended not to hear, and made no reply.

“Beautiful tree,” said a lizard, raising his wise little head to look up at the tree, “do you not see that the pole is holding you up straight? Do you not realize that the hedge is protecting you from bad company?”

Always keep on learning…

In case you missed it, my last post was Dorothy’s Red Shoes and Toyota.

The Mystery of Missing Advent Calendar Chocolates:


It is Christmas time, which means it is advent calendar time for the kids and for those of us who are kids at heart. My wife bought our kids chocolate advent calendars from Trader Joe’s. For those who do not know advent calendars, these are countdown calendars to Christmas starting on December 1st. Each day has a window which you can open to reveal a chocolate. Each day has a uniquely shaped chocolate, a Christmas tree, a stocking etc. The kids love this.

We keep the advent calendars on top of our refrigerator to ensure the kids are not tempted to eat all of the chocolate at once. This morning, I found the advent calendars on the table and a crying Annie. Annie is our youngest daughter. She was very upset.

“I did not get any chocolate today from my calendar”, she said while crying.

“You must have eaten it already”, was my response. Of course, the kids eat chocolate and sometimes they are impatient and eat more than one day’s worth. In my mind, it was a reasonable assumption to make.

Annie explained that she opened the window with 6 on it and did not find any chocolate. I looked at the calendar, and sure enough, the window for day 6 on it was open. My initial hypothesis stayed the same – Annie ate the chocolate, and she is not telling me the entire truth.

My wife suggested she open the window for day 7 and eat that chocolate. Annie then proceeded to open the window with 7 on it, in front of me. Lo and behold, it did not have any chocolate. Annie looked at me with sad eyes. I realized, I was wrong to have assumed that Annie had eaten the chocolate!

“This is a mystery”, said Audrey, her twin sister.

Now I had a second hypothesis – those darn calendar makers; they do not know what they are doing. They had obviously missed filling all the spots with chocolate. As a Quality Engineer, I have seen operator errors. I had now jumped to my second hypothesis.

Having thought about it for a bit, I looked at the available information. Based on what Annie told me, the chocolate was not in its spot for two consecutive days. These calendars did not have the numbers in consecutive order; they were placed in random order. It did not strike me that chocolates would be missing from two different locations. She had opened a spot between 6 and 7 on an earlier day, and it had the candy.

I had a reasonable hypothesis – the operator/equipment missed the spots in the calendar. I have seen it happen before in different environments. But still, something was not right.

I proceeded to put the advent calendar back on top of the refrigerator. Then I thought of something. I wanted to test the calendar more. I carefully opened the calendar from the base. It was a cardboard box with a plastic tray inside.

Just then I found out what happened! In multiple places, the chocolate was missing. The chocolates had been displaced from their cavities. They were all gathered at the bottom of the box. It could be from the transportation. It could be the end user, i.e., my excited young daughter who shook the calendar. It could be the design of the calendar that allows extra space between the tray and the cardboard.

The most important thing was that Annie was now happy that she got her candies. Audrey was happy that we indeed had a mystery that we could solve. My wife and I were happy that our kids were happy.

Final Words:

This personal story has made me realize again that we should not jump to conclusions. Listen to that tiny little voice that says “there is something more to this”…

Always keep on learning…

In case you missed it, my last post was about “Lady Tasting Tea”.

The Mysterious No Fault Found:


As a Quality Engineer working in the Medical Device field, I find there is nothing more frustrating than a “no-fault-found” condition on a product complaint. The product is returned by the customer due to a problem while in use, and the manufacturer cannot replicate the problem. This is commonly referred to as no-fault-found (NFF). I could not find a definite rate of NFF for medical devices. However, I did find that in the avionics industry it accounts for 40–60% of all complaints.

NFF can also be described as “cannot duplicate”, “trouble not identified”, “met all specifications”, “no trouble found”, or “retest OK”. This menacing condition can be quite bothersome for the customer as well as the manufacturer. In this post, I will try to define some red flags that one should watch out for, and a list of root causes that might explain the reasons behind the NFF condition. I will finish off with a great story from the field.

Red flags:

The following list contains some of the red flags that one should watch out for if no fault was found with the product that was returned. This list is by no means meant to be exhaustive, but it might provide some guidance.

  • Major incident associated with the complaint – If the return was associated with a major incident such as a serious injury or even worse, death, one should test the unit exhaustively to identify the root cause.
  • Unit was returned more than once – If the unit was returned for the same problem, it is an indicator of an inherent root cause creating the problem. Sometimes, an existing condition can act as an enabling condition and can create more than one effect. In this case, the problem may not be the same for the second or third return. Alternatively, the enabling condition can be present at the customer’s site.
  • Nonstandard Rework(s) performed on the unit during production – I am a skeptic of reworks. A rework is a deviation from normal production, and sometimes fixing one thing can cause another thing to fail.
  • The product is part of the first lots produced after a major design change – If the product validation process is not adequate or if proper stress tests were not performed, the unit can be produced with latent issues/bugs.
  • More than one customer reporting the same problem – If there is more than one source reporting the problem, it is a clear indication of an inherent issue.

Potential root causes for NFF condition:

The following list contains some of the root causes that might be associated with a no-fault condition. This list is of course by no means meant to be an exhaustive list.

  • Adequacy of test methods – If the test method is susceptible to variations, it may not catch failures. This cause is self-explanatory.
  • Excess stress during use – Reliability Engineering will tell you that if the stress during use exceeds the inherent strength of the product, the product will fail. This stress can be environmental or can be due to use beyond the intended use of the product. An example is if the product is used at a wrong voltage.
  • New user or lack of training – If the end user is not familiar with the product, he/she can induce the failure that might not occur otherwise. This is not an easy root cause to figure out. Sometimes this is evident by the appearance of the product in the form of visible damages (dents, burn marks etc.)
  • High number of features – Sometimes, the higher the number of features, the greater the complexity of the product and the worse its ease of use. If the product is not easy to use, it can create double or triple fault conditions more easily. A double or triple fault condition occurs when two or three conditions must be met for the fault to happen. Such faults are considered to be isolated in nature.
  • Latent bugs/issues – No matter how much a new product design is tested, all the issues cannot be identified. Some of the issues are left unidentified and thus unknown. These are referred to as latent issues/bugs. This is the reason why your mobile phone or your computer requires updates or why some cars are recalled. These bugs will result in failures that are truly random and not easy to replicate.
  • Problem caused by an external accessory or another component – The product is sometimes used as part of a system of devices. Sometimes, the fault may lie with another component, and when the product is returned, it may not accompany all the accessories, and it will be quite hard to replicate the complaint.
  • Lack of proper validation methods – Not all of the problems may be caught if the validation methods are not adequate. This cause is similar but not the same as latent bugs/issues. Here, if there was no stress testing performed like transportation or environmental, obvious failures may not be caught.
  • Customer performed repairs – Sometimes, the failure was induced by something that the customer did on the product. This may not always be evident unless revealed by the customer.
  • Customer bias – This is most likely the hardest cause to identify on this list. Sometimes, the customer may “feel” that the product is not functioning as intended. This could be because they experienced a failure of the same brand at another time, and the customer feels that the entire product brand is defective.
  • Other unknown isolated event – Murphy’s Law states that “whatever can go wrong will go wrong.” The Law of Large Numbers loosely suggests that “with a large enough number of samples, even the most isolated events can happen.” Combined, these mean you can have an isolated incident that happened at the customer site and may never happen at the manufacturer’s site.
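
The last bullet can be put into rough numbers. The fault probability and usage count below are hypothetical, chosen only to show how a one-in-a-million event becomes likely across a large installed base:

```python
# Probability that an isolated fault occurs at least once across many uses.
p_fault = 1e-6        # hypothetical chance a single use triggers the fault
n_uses = 1_000_000    # hypothetical uses across the installed base

p_at_least_once = 1 - (1 - p_fault) ** n_uses
print(round(p_at_least_once, 3))   # ~0.632: more likely than not
```

Meanwhile the manufacturer, retesting one returned unit a handful of times, has almost no chance of reproducing the same fault.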

The mystery of diathermy burns:

I got this story from the great book “Medical Devices: Use and Safety” by Bertil Jacobson, MD, PhD, and Alan Murray, PhD. Sometimes, a surgery that uses a device like an RF Generator can cause burns on the patient from the heat induced by the device. This is referred to as “diathermy burns”.

A famous neurosurgeon retired and started working at a private hospital. Curiously, after a certain date, five of his patients reported that they had contracted ugly, non-healing ulcers. These were interpreted as diathermy burns. The burns were present on the cheekbones of patients who were placed face-down for surgery, and on the neck region of patients who were operated on in the supine position (face-up). The surgeon had had a long, uneventful and successful career with no similar injuries ever reported.

No issues were found with the generator used for the surgery. A new generator was purchased, and the problem persisted. The manufacturer of the generator advised replacing the wall outlet. The problem still persisted. The surgery routines were updated and rigorous routines involving specific placement of electrodes were put in place. The problem still persisted.

A clinical engineer was consulted. He also could not find any fault with any of the equipment. At that point, he asked to witness the next operation. During the operation, it was discovered that the new assistant surgeon was placing his hands heavily on the patient’s head. Thus, the “diathermy burns” were actually pressure necroses caused by the assistant surgeon. These apparently can be misinterpreted as diathermy burns!

This story, in a curious way, implies the need to go to the gemba as well! Always keep on learning…

The greatest barrier to scientific thinking:


If one were to ask me, what I am afraid of as an Engineer, I will unequivocally declare “Confirmation Bias”.

“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”

– Francis Bacon, Novum Organum, 1620

Confirmation bias is part of everybody’s thinking process. When confronted with a problem, one has to determine how to solve it. The first step is to analyze the problem, and this requires looking inward and finding the mental model that might explain the problem at hand. If one such pattern is available, then he tries to fit the problem into the model, as if it is a suit tailored to fit the body of the problem. This is a form of deductive thinking.

In the absence of a pattern, one tries to gather further information to form a mental model. The newly created model may fit the problem much better. This is a form of inductive thinking.

Sometimes, in the absence of a pattern, one might try to find multiple mental models and see which model fits the problem the best. This is a form of abductive thinking.

No matter which form of thinking is used, trouble occurs when one looks only for evidence that proves the model, and ignores any evidence that might prove it wrong. This is the curse of confirmation bias. It can create a blind spot that is sometimes large enough to hide an elephant!

“When men wish to construct or support a theory, how they torture facts into their service!”

– Charles Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852

This creates quite a challenge for any activity involving brain functioning, such as problem solving or decision making. I have attempted to create a list of steps that one can use to minimize the impact of confirmation bias. I will be the first person to tell you that this is a daily struggle for me.

  • Be aware that confirmation bias exists:

The first step is to be aware that confirmation bias is part of what we are. Just being aware of this can help us in asking the right questions.

  • Walk the process:

Walking the process allows us to understand the big picture, and helps us see the problem from other people’s perspective. If a problem is identified on the floor during assembly, it helps to walk the process with the component, starting at the receiving dock and going all the way to the assembly on the floor. This slows us down, and we may notice things, counter to our initial hypothesis, that we would otherwise have missed.

  • Can you turn the problem on and off?:

When a problem occurs, either in the field or on the production floor, I always try to see if I can turn the problem on and off. This helps to clarify my mental model and validate my thinking. The cause alone does not result in the effect; the cause, in the presence of enabling conditions, creates the effect. Understanding the enabling conditions helps us turn the problem on and off.

  • Challenge yourself to disprove your model:

Challenging yourself to disprove your own model is probably the most challenging yet most effective way to combat confirmation bias. It is, after all, easier to disprove a theory than to prove it. This helps to purify one’s thinking.

In a recent conversation, my brother-in-law talked about the “tenth man” scene from the movie “World War Z”. In the movie, the whole world is under attack from a zombie virus. Israel had built a wall around the nation that held off the outbreak until a certain point in the movie. This was credited to a doctrine referred to as the “tenth man”: if nine out of ten people on a council agree on something, the tenth person has to take the opposite side, no matter how improbable it might seem.

  • Understanding the void:

My first blog post here was about understanding the void. This is similar to the idea of negative space in design. The information that is not there, or not obvious, can sometimes be valuable. Looking for the negative space again helps us see the big picture.

In the short story “Silver Blaze”, Sherlock Holmes points to the “curious incident of the dog in the night-time.” From the dog’s silence, Holmes was able to deduce that the crime was committed by somebody the dog knew.

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

I will finish this post off with a Zen story.

There was a great Zen teacher. Some of his disciples came to him one day to complain about the villagers.

They told him that the villagers were spreading rumors that the teacher was immoral, and that his followers were not good people. They asked him what they should do.

“First, my friends,” he responded, “you should definitely consider whether what they say is true or not.”

Always keep on learning…

8 Things I learned from Spock


Spock (Leonard Nimoy) is no more. The character of Spock from Star Trek has had a huge impact on many people’s lives. Leonard Nimoy will be deeply missed.

Here are 8 things that I learned from Spock.

1) If you do not have enough information, say so, or state that your hypothesis is based on limited information. As the Science Officer, he knew very well that he had to give the best possible opinion at all times. But he was open about lacking the information needed to form an effective hypothesis. For example, Spock would respond “I simply do not have enough data to form an opinion” to Kirk’s “Opinion, Mr. Spock?” question. Kirk would then follow up with “Speculation, Mr. Spock.”

2) Do not mix emotions with your hypothesis. In other words, try to eliminate or minimize confirmation bias. This was what separated Spock from Bones in the show.

3) Always have an open mind. Spock always remarked “Fascinating” anytime he came across something new. This also tells us to minimize our confirmation bias.

4) Look for patterns to form your hypothesis. After all, that is the role of a Science Officer.

5) Try to think rationally. Spock put a lot of emphasis on logic.

6) Always keep abreast of the latest developments in your field. This was essential for Spock as a Science Officer. Always keep on learning.

7) Things are not always black and white. Spock learned this from Kirk. Kirk was always willing to challenge the status quo.

8) Improbable things can happen. As Spock said “It would be illogical to assume all conditions remain stable.” With enough iterations, even highly unlikely events can happen.
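Point 8 can be made concrete with a short simulation (a sketch of my own, not from the show; the one-in-a-thousand probability is an arbitrary assumption). An event that is highly unlikely in any single trial becomes almost certain across enough trials, since the chance of seeing it at least once is 1 − (1 − p)^n.

```python
import random

def prob_at_least_once(p, n):
    """Analytical probability of seeing an event with per-trial
    probability p at least once in n independent trials."""
    return 1 - (1 - p) ** n

def simulate(p, n, runs=1000, seed=42):
    """Fraction of simulated runs in which the rare event
    occurred at least once within n trials."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(runs)
        if any(rng.random() < p for _ in range(n))
    )
    return hits / runs

p = 0.001  # a "highly unlikely" one-in-a-thousand event
for n in (100, 1000, 10000):
    print(n, round(prob_at_least_once(p, n), 3), simulate(p, n))
```

With p = 0.001, the chance of at least one occurrence climbs from under 10% at 100 trials to well over 99% at 10,000 trials, which is all Spock's remark requires.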

Thank you and Good Bye, dear Leonard Nimoy.

Keep on learning…

Understanding the void

As a data scientist or a quality professional, one should understand the whole picture. Sometimes this means that you have to gain information from what is there as well as what is not there. I like to call this the void.

A great story that comes to mind regarding this is from a talk by Jeffrey S. Rosenthal. He also included it in a great article called “I am biased, You are biased”.

“During World War II, the U.S. Air Force wanted to strategically reinforce the hull plating of its fighter planes to better withstand enemy fire — but which parts of the plane should be reinforced? Charts and graphs were carefully constructed, showing the location of bullet holes on returning aircraft. The military then decided to consult a statistician — always a clever move. Professor Abraham Wald immediately realised that those graphs were based on a biased sample: they only included data for the planes which actually returned from battle. The real issue was the location of bullet holes on the planes which were shot down and never made it home. The military wisely followed Wald’s advice, to reinforce those parts of the hull that came back clean and bullet-free — those were the places where any shots would be fatal.”
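Wald's insight can be reproduced with a toy simulation (my own sketch, not from Rosenthal's article; the section names, hit counts, and fatality rates are invented assumptions). Bullets strike the plane's sections uniformly, but an engine hit is usually fatal, so the survivors we get to inspect show conspicuously few engine holes — the void that matters.

```python
import random

# Invented model: four sections, hit uniformly, with different
# chances that a hit in that section brings the plane down.
SECTIONS = ["engine", "fuselage", "wings", "tail"]
FATALITY = {"engine": 0.8, "fuselage": 0.1, "wings": 0.05, "tail": 0.05}

def fly_mission(rng, shots=5):
    """Return the list of hit sections if the plane survives, else None."""
    hits = [rng.choice(SECTIONS) for _ in range(shots)]
    for section in hits:
        if rng.random() < FATALITY[section]:
            return None  # shot down: its holes are never observed
    return hits

def observed_hole_counts(planes=20_000, seed=1):
    """Tally bullet holes, but only on the planes that made it home."""
    rng = random.Random(seed)
    counts = {s: 0 for s in SECTIONS}
    for _ in range(planes):
        hits = fly_mission(rng)
        if hits is not None:  # the biased sample: survivors only
            for s in hits:
                counts[s] += 1
    return counts

print(observed_hole_counts())
```

Even though every section is hit equally often, the survivors' charts show far fewer engine holes than wing or tail holes — and the naive reading ("the wings need armour") is exactly backwards.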

Keep on learning…