Information at the Gemba:


Uncertainty is all around us. A lean leader’s main purpose is to develop people to tackle uncertainty. There are two ways to tackle uncertainty: one is Genchi Genbutsu (go and see), and the other is the scientific method of PDCA. Claude Shannon, the father of Information Theory, viewed information as the possible reduction in uncertainty in a system. In other words, larger uncertainty presents a larger potential for new information. This can be shown as the following equation:

New Information gain = Reduction in Uncertainty
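To make the equation concrete, here is a minimal sketch in Python using Shannon’s entropy formula, H = -Σ p·log2(p). The failure-mode scenario is an assumed example, not something from Shannon:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Before an experiment: four equally likely failure modes (assumed example)
h_before = entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits of uncertainty

# After the experiment rules out two modes, two remain equally likely
h_after = entropy([0.5, 0.5])                  # 1.0 bit of uncertainty

information_gain = h_before - h_after          # 1.0 bit of new information
print(h_before, h_after, information_gain)
```

The experiment that narrowed four possibilities down to two yielded exactly one bit of new information – the reduction in uncertainty.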

Shannon called this uncertainty entropy on the advice of his friend John von Neumann, a mathematical genius and polymath. The entropy in information theory is not exactly the same as the entropy in thermodynamics; they are similar in that both measure a system’s degree of disorganization. In this regard, information can be viewed as a measure of a system’s degree of organization. Shannon recalled his conversation with von Neumann as follows:

“My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.”

I loved von Neumann’s encouragement that Shannon would always have the advantage in a debate since “nobody knows what entropy really is”.

In this post, I am not going into the mathematics of Information Theory. In fact, I am not even going to discuss Information Theory itself, but rather the philosophical lessons from it. From a philosophical standpoint, Information Theory presents a different perspective on problems and failures at the gemba. When you run an experiment and the results confirm your hypothesis, you do not gain any new information. However, when the results do not match your hypothesis, there is new information available to you. Thus, failures and similar challenges are opportunities to gain new information about your process.

There are seven lessons that I have drawn from this, and they are as follows:

  • Information Gain ≠ Knowledge Gain:

One of the important aspects of the information available at the gemba is that information does not automatically translate to knowledge. Information is objective in nature and consists of facts. Information gets translated into knowledge when we apply our available mental models to it. This means there is potentially a severe loss in that translation, depending on the receiver. A good analogy is Sherlock Holmes and Dr. Watson at a crime scene – they are both looking at the same information, but Holmes is able to deduce more.

  • Be Open:

When you assume full knowledge about a process, you are unwilling to gain knowledge from any new information available. You should be open to possibilities in order to welcome new information and thus a chance to learn something new. Sometimes, by being open to others’ viewpoints, you can learn new things. They may have a lot more experience and more opportunities for information than you have.

  • Go to the Gemba:

Most of the time, the source of information is the gemba. When you do not go to the source, the information you get will not be as pure as it was at its origin; it has been contaminated with the subjective perspectives of the informer. You should go to the gemba as often as you can. The process is giving out information at all times.

  • Exercise Your Observation Skills:

As I mentioned above in the Holmes and Watson analogy, what you can gain from the information presented depends on your ability to identify information. There is a lot of noise in the information you might get, and you have to weed out the noise and look at the core information available. One of my favorite definitions of information is by the famous cybernetician Gregory Bateson, who defined information as “a difference that makes a difference.” The ability to spot that difference in the information given depends mostly on your skill set. Go to the gemba more often and sharpen your observation skills. Ask “for what purpose” and “what is the cause” more often.

  • Go Outside Your Comfort Zone:

One of the lessons in lean that does not get a lot of attention is “go outside your comfort zone”. This is the essence of Challenge in the Continuous Improvement pillar of the Toyota Way. When you stay inside your comfort zone, you are not willing to gather new information. You get stuck in your ways and trust your degrading mental model rather than challenging and nourishing it so that you can develop yourself. Failure is a good thing when you understand that it represents new information that can help you understand the uncertainties in your process. You will not want to try new things unless you go outside your comfort zone.

  • Experiment Frequently:

You learn more by exposing yourself to more chances of gaining new information. And you do this by experimenting more often. The scientific process is not a single loop of PDCA (Plan-Do-Check-Act). It is an iterative process, and you need to experiment frequently and learn from the feedback.

  • Challenge Your Own Perspective:

The Achilles’ heel for a lean leader is his confirmation bias. He may go to the gemba often, and he may experiment frequently; but unless he challenges his own perspective, his actions may not be fruitful. My favorite question to challenge my perspective is “What evidence would invalidate my viewpoint right now, and does the information I have hint at it?” Questions like this ensure that your interpretation of the information you are getting is less tainted.

I will finish off with a funny story I heard about Sherlock Holmes and Watson;

Sherlock Holmes and Dr. Watson decided to go on a camping trip. All the way to the campsite, Holmes gave observation lessons to Dr. Watson and challenged him. After dinner and a bottle of wine, they lay down for the night and went to sleep.

Some hours later, Holmes awoke and nudged his faithful friend.

“Watson, look up at the sky and tell me what you see.”

Watson replied, “I see millions of stars.”

“What does that tell you?” Holmes asked.

Watson pondered for a minute.

“Astronomically, it tells me that there are millions of galaxies and potentially billions of planets.”
“Astrologically, I observe that Saturn is in Leo.”
“Horologically, I deduce that the time is approximately a quarter past three.”
“Theologically, I can see that God is all powerful and that we are small and insignificant.”
“Meteorologically, I suspect that we will have a beautiful day tomorrow.”
“What does it tell you, Holmes?” Watson asked.

Holmes was silent for a minute, then spoke: “Watson, you idiot. Someone has stolen our tent!”

Always keep on learning…

In case you missed it, my last post was The Pursuit of Quality – A Lesser Known Lesson from Ohno.


PDCA and the Roads to Rome:

Different roads to take, decision to make

In today’s post, I will look at the concept of equifinality in relation to the scientific method of PDCA. In Systems Theory, equifinality is defined as reaching the same end, no matter what the starting point was. This is applicable only in an open system – a system that interacts with its (external) environment, whether in the form of information, material, or energy.

I wanted to look at the repeatability of the PDCA process. PDCA stands for the Plan-Do-Check-Act cycle, and it is the framework for the scientific method. If three different people, with different ways of thinking, face the same problem, can all three reach the same end goal using the PDCA process? If so, equifinality is possible. Imagine point A as the initial condition and point B as the final desired condition, with three different colored lines depicting three different thinking styles (the different thinking styles indicate the different starting points).


Iterative Nature of PDCA:

The most important point about PDCA is the iterative nature of the cycle. Each cycle of PDCA leads to a new cycle that is more refined. The practitioner learns from each step of the PDCA cycle and observes the effect of each step on the problem. Every action is an opportunity to observe the system more. The results of his experiments lead to more experiments, and yield a better understanding of multiple cause-effect chains in the system.

If the scientific method is followed properly, it is highly likely that the three different practitioners can ultimately reach the same destination. The number of iterations would vary from person to person due to the different thinking styles. However, the iterative nature of the scientific method ensures that each step corrects itself based on the feedback. This type of steering mechanism based on feedback loops is the basis of the PDCA process. This idea of multiple ways or methods reaching the same final result is equifinality – akin to the saying “all roads lead to Rome”. This idea of “steering” is a fundamental concept of Cybernetics, a fascinating field I will be writing about in the future.
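To make this concrete, here is a minimal sketch in Python. It treats each PDCA cycle as a feedback correction that closes a fraction of the remaining gap; the starting points, target, and gain are made-up numbers, a toy model rather than a literal representation of PDCA:

```python
def pdca_cycles(start, target, gain=0.5, tolerance=0.01):
    """Treat each PDCA cycle as a feedback correction: every iteration
    closes a fraction of the remaining gap between state and target."""
    state, cycles = start, 0
    while abs(target - state) > tolerance:   # Check: is the gap closed?
        state += gain * (target - state)     # Plan/Do: act on the observed gap
        cycles += 1                          # Act: carry the learning forward
    return cycles

# Three practitioners, three different starting points (thinking styles)
for start in (0.0, 3.0, 9.5):
    print(f"start={start}: reached the target in {pdca_cycles(start, 10.0)} cycles")
```

All three starting points converge on the same target; only the number of cycles differs – which is exactly the equifinality idea.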

Final Words:

This post was inspired by the following thought – can a lean purist and a Six Sigma purist reach the same final answer to a problem if they pursue the iterative nature of the scientific method? There has been a lot of discussion about which method is better. The solution, in my opinion, lies in being open and learning from the feedback loops from the problem at hand.

I will finish this post with a neat mathematical card trick that explains the idea of equifinality further. This trick is based on a principle called Kruskal Count.

The Effect:

The spectator is asked to shuffle the deck of cards to his heart’s content. Once the spectator is convinced that the deck is thoroughly shuffled, the magician explains the rules. The Ace is counted as 1, and all the face cards (Jack, Queen and King) are counted as 5. The number cards have the same values as the number on the card.

The spectator is asked to think of any number from 1 to 10. He is then directed to hold the cards face down and deal cards face up into a pile, dealing a number of cards equal to the number he chose in his mind. The spectator takes note of the value of the final card dealt. He is then directed to deal that many more cards face up onto the already dealt pile.

This is repeated until the spectator reaches a card whose value is greater than the number of cards remaining to deal. For example, the card is the 8 of Hearts, and only six cards remain. This card is his selected card. He then puts the face-up cards on the table on top of the cards in his hand. They do all of this while you have your back turned, yet you easily find the selected card.

The Secret:

All roads lead to Rome. The trick succeeds over 80% of the time.

The secret is to repeat exactly what the spectator did. You also choose a random number between 1 and 10, and start dealing as described above. Just like the concept of equifinality, no matter which number you chose as your starting position, as you go through the process your path will almost always merge with the spectator’s, ending on the same selected card! Try it for yourself. Here is a link to a good paper on this.
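If you would rather not wear out a deck of cards, here is a minimal Python simulation of the trick; the 100,000-trial count is arbitrary:

```python
import random

def shuffled_deck():
    """52 card values: Ace = 1, 2-10 at face value, J/Q/K = 5."""
    values = [v if v <= 10 else 5 for v in range(1, 14)] * 4
    random.shuffle(values)
    return values

def final_card(deck, start):
    """Follow the dealing rule from starting number `start` (1-10):
    jump forward by the value of each key card until there are not
    enough cards left to deal. Returns the index of the selected card."""
    i = start - 1
    while i + deck[i] < len(deck):
        i += deck[i]
    return i

def trick_succeeds():
    deck = shuffled_deck()
    spectator = final_card(deck, random.randint(1, 10))
    magician = final_card(deck, random.randint(1, 10))
    return spectator == magician

trials = 100_000
rate = sum(trick_succeeds() for _ in range(trials)) / trials
print(f"success rate = {rate:.2f}")
```

The reported rate should land in the mid-80% range, consistent with the “over 80%” claim above.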

Always keep on learning…

In case you missed it, my last post was If the Learner Has Not Learned, Point at the Moon.

The Anatomy of an Isolated Incident:


I read about the death of Bob Ebeling today. He was an engineer at Morton Thiokol, a NASA contractor, who tried to stop the launch of the space shuttle Challenger in 1986. On January 28, 1986, soon after the launch, the Challenger was engulfed in flames. All seven crew members lost their lives in this terrible accident. The famous Nobel laureate Richard Feynman was part of the Rogers Commission, which investigated the Challenger accident. Feynman wrote about this investigation in depth in his 1988 book “What Do You Care What Other People Think?”

In today’s post, I will be looking at isolated incidents. There have been times in my career when I was taken aback by isolated events. These events happen very rarely, and thus it is not easy to understand their root causes. I will use the Challenger accident as the primary example. There were 135 NASA space shuttle missions between 1981 and 2011. Of the 135 missions, 133 flights went as planned, with two ending in disaster.

The O-Ring Fiasco:

The Rogers Commission identified that the Challenger accident was caused by a failure of the O-rings used to seal a joint on the right solid rocket booster. Bob Ebeling was among the group of engineers who had warned NASA against the launch based on concerns about the seals. The O-rings were not proven to work under cold conditions, and it was noted that the temperature was below freezing on the day of the launch. Feynman famously demonstrated this by immersing an O-ring in a glass of ice water, showing that at low temperature the material was less resilient and stayed compressed instead of springing back to shape. This lack of resilience caused the failure of the seals, leading to the Challenger catastrophe.


The Rogers Commission indicated that the following issues led to the Challenger accident:

  • Improper material used for the O-ring.
  • Lack of robust testing – the O-ring material was never proven by NASA to function as intended. Even though the O-ring manufacturer provided data showing the lack of functionality at low temperatures, NASA management did not heed it.
  • Lack of understanding of risk from NASA management.
  • Potential push from management to launch the space shuttle to meet a rush deadline.

Feynman also wrote about the great disparity between the views of risk held by NASA management and by the engineers. NASA management assigned a probability of 1 in 100,000 to a failure with loss of vehicle. However, when Feynman asked the engineers, he got values as low as 1 in 100. Feynman reviewed the NASA document that discussed the risk analysis of the space shuttle and was surprised to see extremely low probability values for failures. In his words:

The whole paper was quantifying everything. Just about every nut and bolt was in there. “The chance that a HPHTP pipe will burst is 10⁻⁷.” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you put everything together you get 1 in 100,000.

Feynman also talked about an engineer who was candid with him about his own estimate of 1 in 300 – but who did not want to tell Feynman how he got the number!

The Anatomy of an Isolated Incident:

I have come to view the Isolated Incident cause-effect relationship as an equation. This is shown below.

Isolated Incident = Cause(s) + System weak points + Enabling Conditions

The Challenger accident can be summarized as:

Challenger Accident = Material limitation of the O-ring + NASA Management Policies + Cold conditions

The system weak point(s) are internal in nature. The enabling conditions, on the other hand, are external in nature. When all three factors combine in a perfect storm, you get an isolated incident. If we do not know all three factors, we cannot solve the isolated incident. None of the factors alone may cause the problem.
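A small simulation may make the equation concrete. The probabilities below are purely illustrative assumptions, not data from any real process; the point is that three individually unremarkable factors rarely line up:

```python
import random

# Illustrative probabilities only - not from any real data set
P_CAUSE = 0.01        # the cause is present
P_WEAK_POINT = 0.05   # the system weak point is exposed
P_CONDITION = 0.02    # the enabling condition (e.g., cold weather) occurs

def count_incidents(events=1_000_000):
    """Count events where all three factors line up at once."""
    return sum(
        random.random() < P_CAUSE
        and random.random() < P_WEAK_POINT
        and random.random() < P_CONDITION
        for _ in range(events)
    )

print(count_incidents())   # about 10 in a million: each factor is fairly
                           # common, but the 'perfect storm' is rare
```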

Another example: demand goes up, and production doubles. If the process is not robust enough to handle the spike in production, isolated events can happen.

Pontiac’s Allergy to Vanilla Ice Cream:

I will finish this post with a fantastic story I read from Snopes:

The Pontiac division of General Motors received a complaint in the form of a letter. The letter was from a frustrated customer. He had been trying to contact the company for a while.

He wrote in the letter that he and his family were in the habit of buying ice cream after dinner; the flavor purchased depended on the mood of the family. He had recently purchased a new Pontiac car, and he had been having issues on his ice cream trips. He had figured out that the new car was allergic to vanilla ice cream.

If he purchased any other flavor, his car would start with no problem. However, if he purchased vanilla ice cream, his car would not start.

“What is there about a Pontiac that makes it not start when I get vanilla ice cream, and easy to start whenever I get any other kind?”, he asked in the letter.

The letter was delivered to the Pontiac president, who was very amused by it. He sent an engineer to investigate the fantastic problem. The engineer went with the family three nights in a row to get ice cream in the new car. The first night, the family got chocolate ice cream, and the car started with no problem. The second night, they got strawberry; the car again was fine. On the third night, the family got vanilla ice cream, and lo and behold, the car would not start.

This was repeated on multiple days, and the results were always the same. The engineer was a logical man, and this stumped him. He took notes of everything, and the only variable he could see was time: the family always took the shortest amount of time when they purchased vanilla ice cream. This was because of the store layout – the vanilla ice cream was quite popular and was kept at the front of the store. Suddenly, the engineer identified why the isolated incident happened. “Vapor lock,” he exclaimed. For all the other flavors, the longer trip allowed the engine to cool down sufficiently to start without any issues. When vanilla ice cream was bought, the engine was still too hot for the vapor lock to dissipate.

Always keep on learning…

In case you missed it, my last post was Kintsukuroi and Kaizen.

Stop Asking Why:


We have been trained to ask “why” a lot in lean. Today’s post is about asking “why”.

My friend was doing data analysis of ERP transactions, and he noticed that the material handler was creating transactions in two different programs for dock-to-stock components. This process created double entries and did not seem to add value. He asked the question “why” and the material handler reported that she was doing it because it was the way she was trained, and because it was the way they had always done it.

I was always curious about the “5 Why” method. English is my second language, and in my native language (Malayalam), I cannot ask the “why” question the same way, because “why” means more than one thing.

For example, let’s look at the following question:

Why are you doing double transactions?

The same question has two different layers. You can get very different answers depending upon how the “why” question is perceived:

  • What causes you to do the double transactions?

The answer could be that the operator was trained to do that or that it is in the procedure.

  • What is the purpose of doing the double transactions?

The answer to this question now makes the waste visible. There is no need for doing the double transaction.

In the Malayalam language, I have to ask each question the way it is written above. The question cannot be perceived in a different manner; it is very direct. I believe this is why the “5 Why” method in lean does not get the same results for everybody: the “why” question has more than one meaning, as explained above.

First Question (What Caused):

The first question (what caused) is extrinsic in nature, and it is valuable in a root cause investigation. We start from a phenomenon -> cause -> effect view: the effect happened due to the presence of a cause. Here, the “why” question is really a “what caused” question, and it helps to ask it that way. This type of thinking is also evident in the P-M Analysis method at Toyota. I will discuss this more in a future post.

As an example, let’s look at a problem where the operator was missing a step. There is a big difference between “why did the operator miss the step?” and “what caused the operator to miss the step?” The first question might lead down a rabbit hole that puts the blame on the operator (needs more training, operator is lazy, etc.). The second question focuses the spotlight on the process or the system (needs error proofing, needs more defined structure etc.). Jon Miller from Gemba Academy has talked about using “what caused” in place of “why” as part of the Practical Problem Solving process.

Second Question (What is the purpose):

The second question (what is the purpose) is intrinsic in nature and this is valuable in a continuous improvement activity or during gemba walks. We start from an “operation yields value” viewpoint.

We should train the employees to ask this question on their processes. This is how we can develop our employees.

As a leader in your organization, you should ask the right question to properly develop your employees.

The Story of the Ham and the Story of the Can of Beans:

The reader may be aware of the story of the ham. It goes something like this:


The newlywed wife was making her first major dinner for her husband. She was cooking a ham, and the husband was helping her in the kitchen. He noticed that she was cutting the ends off the ham.

“Why are you cutting the ends off the ham?” asked the husband curiously.

“This is how I learned watching my mother” answered the wife.

Now the husband was more curious. He asked his wife to call up her mother to verify the answer. The wife called her mother inquiring about the cutting.

“Hmmm, that is how I learned watching my mother” answered her mother.

Now the wife was also curious, so she called up her grandmother and probed her about the curious cutting of the ham.

The grandmother started laughing.

“Back in the day, we could only afford a small stove. Our roasting pan was small and we cooked the ends separately.”

There is a similar story about opening cans:


In this story, the husband notices that the wife opens cans upside down. The wife tells him that she learned it by watching her mother. A short phone call solved the mystery: her mother used to store the cans in a dusty cellar. Instead of cleaning the tops of the cans, she found it easier to just turn them upside down and open them.

Next time, instead of asking “why”, ask “what caused” or “what is the purpose”.

Always keep on learning…

In case you missed it, my last post was Who is Right?

 

The Mystery of Missing Advent Calendar Chocolates:


It is Christmas time, which means it is advent calendar time for the kids and for those of us who are kids at heart. My wife bought our kids chocolate advent calendars from Trader Joe’s. For those who do not know advent calendars, they are countdown calendars to Christmas starting on December 1st. Each day has a window which you can open to reveal a uniquely shaped chocolate – a Christmas tree, a stocking, etc. The kids love this.

We keep the advent calendars on top of our refrigerator to ensure the kids are not tempted to eat all of the chocolate at once. This morning, I found the advent calendars on the table and our youngest daughter, Annie, crying. She was very upset.

“I did not get any chocolate today from my calendar”, she said while crying.

“You must have eaten it already”, was my response. Of course, the kids eat chocolate and sometimes they are impatient and eat more than one day’s worth. In my mind, it was a reasonable assumption to make.

Annie explained that she opened the window with 6 on it and did not find any chocolate. I looked at the calendar, and sure enough, the window for day 6 was open. My initial hypothesis stayed the same – Annie ate the chocolate, and she was not telling me the entire truth.

My wife suggested she open the window for day 7 and eat that chocolate. Annie proceeded to open the window with 7 on it, in front of me. Lo and behold, it did not have any chocolate either. Annie looked at me with sad eyes. I realized I was wrong to have assumed that she had eaten the chocolate!

“This is a mystery”, said Audrey, her twin sister.

Now I had a second hypothesis – those darn calendar makers; they do not know what they are doing. They obviously missed filling all the spots with chocolate. As a Quality Engineer, I have seen operator errors. I had now jumped to my second hypothesis.

Having thought about it for a bit, I looked at the available information. Based on what Annie told me, the chocolate was missing from its spot on two consecutive days. These calendars do not have the numbers in consecutive order; they are placed in random order. It had not struck me that two windows at different locations would both be missing candy. She had opened a spot between 6 and 7 on an earlier day, and it had its candy.

I had a reasonable hypothesis – the operator/equipment missed the spots in the calendar. I have seen it happen before in different environments. But still, something was not right.

I proceeded to put the advent calendar back on top of the refrigerator. Then I thought of something: I wanted to test the calendar more. I carefully opened the calendar from the base. It was a cardboard box with a plastic tray inside.

Just then, I found out what had happened! In multiple places, the chocolate was missing; the chocolates had been dislodged from their cavities and were all gathered at the bottom of the box. It could have been the transportation. It could have been the end user, i.e., my excited young daughter shaking the calendar. It could have been the design of the calendar, which allows extra space between the tray and the cardboard.

The most important thing was that Annie was now happy that she got her candies. Audrey was happy that we indeed had a mystery that we could solve. My wife and I were happy that our kids were happy.

Final Words:

This personal story has made me realize again that we should not jump to conclusions. Listen to that tiny little voice that says “there is something more to this”…

Always keep on learning…

In case you missed it, my last post was about “Lady Tasting Tea”.

The Mysterious No Fault Found:


As a Quality Engineer working in the medical device field, there is nothing more frustrating than a “no-fault-found” condition on a product complaint. The product is returned by the customer due to a problem while in use, and the manufacturer cannot replicate the problem. This is commonly referred to as no-fault-found (NFF). I could not find a definitive NFF rate for medical devices. However, I did find that for the avionics industry it is 40-60% of all complaints.

The NFF condition can also be described as “cannot duplicate”, “trouble not identified”, “met all specifications”, “no trouble found”, or “retest ok”. This menacing condition can be quite bothersome for the customer as well as the manufacturer. In this post, I will try to define some red flags that one should watch out for, and list some root causes that might explain the reasons behind an NFF condition. I will finish off with a great story from the field.

Red flags:

The following list contains some of the red flags one should watch out for if no fault was found with a returned product. This list is by no means exhaustive, but it might provide some guidance.

  • Major incident associated with the complaint – If the return was associated with a major incident such as a serious injury or even worse, death, one should test the unit exhaustively to identify the root cause.
  • Unit was returned more than once – If the unit was returned for the same problem, it is an indicator of an inherent root cause creating the problem. Sometimes, an existing condition can act as an enabling condition and can create more than one effect. In this case, the problem may not be the same for the second or third return. Alternatively, the enabling condition can be present at the customer’s site.
  • Nonstandard rework(s) performed on the unit during production – I am a skeptic of reworks. A rework is a deviation from normal production, and sometimes fixing one thing can cause another thing to fail.
  • The product is part of the first lots produced after a major design change – If the product validation process is not adequate or if proper stress tests were not performed, the unit can be produced with latent issues/bugs.
  • More than one customer reporting the same problem – If there is more than one source reporting the problem, it is a clear indication of an inherent issue.

Potential root causes for NFF condition:

The following list contains some of the root causes that might be associated with a no-fault condition. This list is of course by no means meant to be an exhaustive list.

  • Adequacy of test methods – If the test method is susceptible to variations, it may not catch failures. This cause is self-explanatory.
  • Excess stress during use – Reliability Engineering will tell you that if the stress during use exceeds the inherent strength of the product, the product will fail. This stress can be environmental or can be due to use beyond the intended use of the product. An example is if the product is used at a wrong voltage.
  • New user or lack of training – If the end user is not familiar with the product, he/she can induce the failure that might not occur otherwise. This is not an easy root cause to figure out. Sometimes this is evident by the appearance of the product in the form of visible damages (dents, burn marks etc.)
  • High number of features – Sometimes, the higher the number of features, the greater the complexity of the product and the worse its ease of use. If the product is not easy to use, it can create double or triple fault conditions more easily. A double or triple fault condition occurs when two or three conditions must be met for the fault to happen. Such faults are isolated in nature.
  • Latent bugs/issues – No matter how much a new product design is tested, all the issues cannot be identified. Some of the issues are left unidentified and thus unknown. These are referred to as latent issues/bugs. This is the reason why your mobile phone or your computer requires updates or why some cars are recalled. These bugs will result in failures that are truly random and not easy to replicate.
  • Problem caused by an external accessory or another component – The product is sometimes used as part of a system of devices. Sometimes, the fault may lie with another component, and when the product is returned, it may not accompany all the accessories, and it will be quite hard to replicate the complaint.
  • Lack of proper validation methods – Not all of the problems may be caught if the validation methods are not adequate. This cause is similar but not the same as latent bugs/issues. Here, if there was no stress testing performed like transportation or environmental, obvious failures may not be caught.
  • Customer performed repairs – Sometimes, the failure was induced by something that the customer did on the product. This may not always be evident unless revealed by the customer.
  • Customer bias – This is most likely the hardest cause to identify on this list. Sometimes, the customer may “feel” that the product is not functioning as intended. This could be because they experienced a failure of the same brand at another time, and the customer feels that the entire product brand is defective.
  • Other unknown isolated event – Murphy’s Law states that “whatever can go wrong will go wrong.” The Law of Large Numbers loosely states that, with a large enough number of samples, even the most isolated events will happen. Combined, you can have an isolated incident that happened at the customer site and may never happen at the manufacturer site; the quick calculation below illustrates this.
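Here is that back-of-the-envelope calculation in Python. The per-use fault probability and the size of the installed base are assumed numbers for illustration, not field data:

```python
# Assumed numbers for illustration - not field data
p_per_use = 1e-6        # a 'one in a million' fault per use
total_uses = 5_000_000  # uses across the entire installed base

# Probability the fault shows up at least once somewhere
p_seen_somewhere = 1 - (1 - p_per_use) ** total_uses
print(f"{p_seen_somewhere:.3f}")   # ~0.993: the 'isolated' event is almost
                                   # certain to occur at some customer site
```

A fault the manufacturer may never see in testing is nearly certain to surface somewhere in the field.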

The mystery of diathermy burns:

I got this story from the great book “Medical Devices: Use and Safety” by Bertil Jacobson, MD, PhD, and Alan Murray, PhD. Sometimes, a surgery that uses a device like an RF generator can cause burns on the patient from the heat induced by the device. These are referred to as “diathermy burns”.

A famous neurosurgeon retired and started working at a private hospital. Curiously, after a certain date, five of his patients reported that they had contracted ugly, non-healing ulcers. These were interpreted as diathermy burns. The burns were present on the cheekbones of patients who were placed face-down for surgery, and on the neck region of patients who were operated on in the supine (face-up) position. The surgeon had had a long, uneventful, and successful career with no similar injuries ever reported.

No issues were found with the generator used for the surgery. A new generator was purchased, and the problem persisted. The manufacturer of the generator advised replacing the wall outlet. The problem still persisted. The surgery routines were updated and rigorous routines involving specific placement of electrodes were put in place. The problem still persisted.

A clinical engineer was consulted. He also could not find any fault with any of the equipment, so he requested to witness the next operation. During the operation, it was discovered that the new assistant surgeon was placing his hands heavily on the patient’s head. Thus, the diathermy burns were actually pressure necroses caused by the assistant surgeon. These apparently can be misinterpreted as diathermy burns!

This story, in a curious way, implies the need to go to the gemba as well! Always keep on learning…

Confirmation Bias – Colbert and Sagan Edition:


I discussed confirmation bias in an earlier post here. In this post, I hope to bring Astrophysicist Carl Sagan and Comedian Stephen Colbert together and end with a Zen story.

Wikipedia defines confirmation bias, also called myside bias, as “the tendency to search for, interpret, favor, and recall information in a way that confirms one’s beliefs or hypotheses while giving disproportionately less attention to information that contradicts it.”

Confirmation bias can put brakes on your scientific thinking, and it is a daily struggle to avoid your biases.

The Colbert Report Edition:

I recently came across a study performed by LaMarre, Landreville, and Beam from Ohio State University, investigating the biased message processing of political satire in the famous TV show “The Colbert Report”. For those who do not know the show, “The Colbert Report” was a political satire show hosted by Stephen Colbert. Colbert referred to his fictional character as a “well-intentioned, poorly informed, high-status idiot” – a caricature of televised political pundits.

The researchers examined the influence of political ideology on perceptions of Stephen Colbert, and described his style of comedy as “ambiguous deadpan satire”. The study revealed the following:

  • No significant difference existed between conservatives and liberals regarding Stephen Colbert being funny.
  • Conservatives reported that Colbert only pretends to be funny, and genuinely meant what he said; supporting their conservative ideology. Liberals on the other hand reported that Colbert used satire and was not serious; supporting their liberal ideology.

In other words, both liberals and conservatives with extreme viewpoints watched the exact same show and came away with exactly opposite opinions. This is a classical case of confirmation bias!

Carl Sagan and the Fine Art of Baloney Detection:

Carl Sagan was a very famous American astrophysicist and a great scientific thinker. In his book “The Demon-Haunted World: Science as a Candle in the Dark”, Sagan provides us a thinking toolkit that will assist us in detecting baloney, as he puts it. He refers to it as a means to construct, and to understand, a reasoned argument and – especially important – to recognize a fallacious or fraudulent argument. The tools are as follows:

  • Wherever possible there must be independent confirmation of the “facts.”
  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
  • Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.
  • Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.
  • If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.
  • If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.
  • Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.
  • Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle — an electron, say — in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

Surprisingly, the list above is also applicable to detecting and reducing confirmation bias.

A cup of Tea – a Zen story:

There once lived a great Zen master, Nan-in. The reputation of his wisdom spread, and a university professor decided to visit Nan-in to inquire about Zen.

The professor was welcomed into Nan-in’s room. Nan-in served the professor tea.

The professor’s cup was soon full, and yet Nan-in kept on pouring, causing the cup to overflow. Nan-in still kept on pouring.

“Master, please stop. The cup is full. There is no more room for more tea.”

“Like this cup,” Nan-in said, “your brain is full of your opinions and biases. There is no more room for Zen unless you first empty it.”

Final Words:

I will finish off with a great piece of wisdom I heard on Quora. Unfortunately, I do not know the source.

“My opinions are not me. My opinions are just pieces of data that I carry in a box with me. I can and should change them based on the information available. If I marry myself to my opinions, I will cling to them regardless of what the information says. If I want to be right, I need to be prepared to change my mind.”

Always keep on learning…

Photo credit – Paul H. Byerly

It’s Complicated:



PDCA, the four-letter acronym made famous by Dr. Deming, stands for Plan – Do – Check – Act. It is a continuous cycle.

PDCA is said to be the framework for scientific thinking and continuous improvement. I have always thought PDCA had something missing. It looked so simplistic. Can it really be that simple?

I have come to realize that what was missing was context – the context behind PDCA. It cannot be that everything you see is a nail just because you only have a hammer. What happens before PDCA – the moment before you decide, “Hey, let’s do PDCA”? What makes you decide the “scope” for PDCA? How do you know if PDCA is even appropriate?

This post is an ode to the Cynefin framework. For those who do not know it, the Cynefin framework is a brainchild of Dave Snowden, and it is a sense-making framework. Dave Snowden has stated that in a sense-making framework the data precedes the framework; the Cynefin framework is not a categorization framework, where the framework precedes the data.

The idea behind the Cynefin framework is that when you encounter a problem or a new project, your first step is to understand what domain you are in. This provides us a framework to proceed. As a learning organization, it is essential that our efforts and our methodologies match the type of change that we are planning. The Cynefin framework lays the groundwork for this exact intent.

The Cynefin framework has five domains and is dynamic. No problem with high complexity or chaos stays in the same domain at all times. The problem we had last year may have appeared complex, but now it may be in the Complicated domain, or even the Simple domain. Even a situation in the Simple domain can collapse into the Chaotic domain if there is complacency.


The following definitions are taken from the Cognitive Edge website:

The Cynefin framework has five domains. The first four domains are:

Simple (also called Obvious), in which the relationship between cause and effect is obvious to all. The approach is to Sense – Categorize – Respond and we can apply best practice.


Complicated, in which the relationship between cause and effect requires analysis or some other form of investigation and/or the application of expert knowledge. The approach is to Sense – Analyze – Respond and we can apply good practice.


Complex, in which the relationship between cause and effect can only be perceived in retrospect, but not in advance. The approach is to Probe – Sense – Respond and we can sense emergent practice.


Chaotic, in which there is no relationship between cause and effect at systems level. The approach is to Act – Sense – Respond and we can discover novel practice.


The fifth domain is Disorder, which is the state of not knowing what type of causality exists; in this state, people revert to their own comfort zone in making a decision. In full use, the Cynefin framework has sub-domains, and the boundary between Simple and Chaotic is seen as a catastrophic one: complacency leads to failure. In conclusion, chaos is always transitory, and dynamics are a key aspect.
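The definitions above condense into a small lookup table. Here is a minimal Python sketch; the wording is transcribed from the quoted definitions, and the Disorder entry is my own shorthand, since the quoted text does not assign it an approach:

```python
# Domain -> (decision approach, type of practice)
CYNEFIN = {
    "Simple":      ("Sense - Categorize - Respond", "best practice"),
    "Complicated": ("Sense - Analyze - Respond",    "good practice"),
    "Complex":     ("Probe - Sense - Respond",      "emergent practice"),
    "Chaotic":     ("Act - Sense - Respond",        "novel practice"),
    # My own shorthand: in Disorder, first identify the real domain
    "Disorder":    ("Identify the real domain first", "no stable practice"),
}

for domain, (approach, practice) in CYNEFIN.items():
    print(f"{domain}: {approach} -> {practice}")
```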


The need for the Cynefin Framework:

Most methodologies, including PDCA, assume some form of order. Sometimes this leads to the misapplication of a methodology, which in turn leads to failure. Only the Simple and Complicated domains assume some form of order. The Cynefin framework helps us be efficient and, at the same time, effective.

Minimal resources are needed for a situation in the Simple domain. The answer is fairly obvious, and the best practice is already known in the form of SOPs (Standard Operating Procedures) or work instructions. For example: the light bulb burned out – replace the light bulb. Project management is certainly not needed in this domain, and there is no true need for a PDCA methodology. The Cynefin framework recommends sense-categorize-respond for this domain. The assumption is that a known best practice is available or that the best practice is fairly straightforward.

The Complicated domain needs guidance from experts. Multiple solutions can exist, and we need the experts’ help to identify the optimal one. For example, if the light bulb keeps going out, the fix may not be as easy as replacing the bulb. This is a domain that works well with PDCA. One should not blindly imitate and apply a “best practice” in this domain. Dave Snowden refers to a phenomenon called “premature convergence”, where we stop exploring how to make ideas better because we think we have found the answer. The Cynefin framework recommends sense-analyze-respond, which is similar to a PDCA approach.

The Complex domain does not have order; it is an unordered domain. We need patience for patterns to emerge in this domain, since cause-and-effect relations are not directly visible. The recommended practice is probe-sense-respond. Multiple, different PDCA loops might be required in this domain to let the patterns emerge. Think of any root cause project you completed where you did not see the solution in the beginning, but in hindsight it made sense. Dave Snowden gives the example of the “Houston, we have a problem” scene from the movie “Apollo 13”.

As the name suggests, the Chaotic domain is full of turbulence. This is not a domain where you search for answers; it is a domain for rapid decisions to regain control and stabilize the turbulence. The recommended approach is act-sense-respond, where the act phase is an attempt to stabilize the situation. As you can see, this is not an ideal candidate for the PDCA approach; if PDCA is used, the Plan phase will need to be quite short. The goal in this domain is to move to the Complex domain as soon as possible. Dave Snowden’s example for this domain is the unfortunate 9/11 incident.

Final words:

In the business world, there is no one-size-fits-all solution. Context is everything! Each domain of the Cynefin framework comes with its own burden. Being too complacent in the Simple domain can push you into the Chaotic domain. Trying to imitate what worked for one company can cause you to fail (the Complicated domain). Not waiting for patterns to emerge in the Complex domain and trying to push for best practices can push you over to the Chaotic domain. The Cynefin framework provides a thinking framework to understand the scope of your situation, and it helps you be efficient and effective with your PDCA approach. This post was written based on my thoughts from my learning of the Cynefin framework. I encourage the reader to read more about it at Cognitive-Edge.com. The HBR article “A Leader’s Framework for Decision Making” is also an excellent place to start.

Always keep on learning…

What do you mean by “No problem is a problem”:


When I first heard of “No problem is a problem”, I thought that it was a pretty deep philosophical statement. I could understand what it meant, but I realized at the time that there was something more to the statement – deeper layers that still needed to be understood.

Taiichi Ohno, the father of the Toyota Production System, is behind this quote. His original version is “Having no problems is the biggest problem of all.” This idea was ingrained in the TPS senseis by their senseis.

Three interpretations come to the surface when you look at the quote “Having no problems is the biggest problem of all.”

  • We are always surrounded by problems.
  • We are not looking hard enough.
  • By saying “there is no problem”, we are trying to hide problems.

Actually, there is more to this basic idea. How would you define the concept of a “problem”? Merriam-Webster defines problem as:

  • something that is difficult to deal with : something that is a source of trouble, worry, etc.
  • difficulty in understanding something
  • a feeling of not liking or wanting to do something

The book Kaizen Teian 2 defines “problem” as the gap between the ideal state and the current state. This is the gold nugget that provides the deeper meaning of the statement “no problem is a problem”.


At Toyota, you are trained to think of a problem as the gap between the current state and the ideal state. This way, you can start proposing countermeasures to reach the ideal state and thus address the problem. The thought process can be summarized as follows:

  • What is your ideal state (goal)?
  • What is your current state?
  • Define the problem as the gap.
  • Suggest countermeasures with an understanding of the cause.
  • Implement and study the new current state.
  • If you have not reached your ideal state, go back to step 4.

As you can see, this is the scientific thinking of PDCA (Plan – Do – Check – Act). In this light, and with the new definition of a problem as the gap, saying there is no problem would mean that you have reached your ideal state – which is never the case.
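Here is a minimal sketch of that thought process in Python, using an assumed metric (first-pass yield, as a made-up example) so the loop has something to measure:

```python
# Assumed metric for illustration: first-pass yield, in percent
ideal_state = 99.5      # step 1: the goal
current_state = 92.0    # step 2: the measured current state

problem = ideal_state - current_state         # step 3: the problem is the gap

while problem > 0.5:                          # step 6: ideal state not reached?
    # steps 4-5: apply a countermeasure based on an understood cause,
    # then implement and study the new current state
    # (a 50%-effective countermeasure is assumed here)
    current_state += 0.5 * problem
    problem = ideal_state - current_state
    print(f"current state = {current_state:.2f}, remaining gap = {problem:.2f}")
```

The loop never declares “no problem”; it only shrinks the gap and measures again.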

One can thus see kaizen (continuous improvement) as a problem-solving methodology. Kaizen is the engine that chugs along towards the ideal state, representing slow and incremental progress. The reader should be aware that kaizen does not equate to fixing things. Fixing things is firefighting, and firefighting is associated with maintaining the status quo. It does not let you move towards your ideal state.

The traditional thinking views problems as fires that need to be put out. There is no continuous improvement thinking there; putting out fires just means that we are back where we started. This is the essence of “no problem is a problem”. By saying “I have no problems”, one is giving up on continuous improvement. Viewing a “problem” as the gap gives motivation for continuous improvement. Think of this as pull, and putting out fires as push. Thus, you have a better flow towards your ideal state.

The scientific method detailed above is also taught as genchi genbutsu at Toyota. This roughly translates to “go to the actual place of activity, and grasp the facts”. Interestingly, Honda uses a similar theme under san genshugi, which roughly translates to “the three actualities”. Honda requires its employees to go to the actual place of activity to gain firsthand information, look at the actual situation, and decide on countermeasures based on actual facts. The “gen” component of these Japanese words means real or actual. Sometimes it is pronounced “gem”, as in gemba, the actual place of action.

Final words:

I am at fault for not always using this thinking process myself. Looking at problems as “what should be” versus “what is right now” helps us understand the problem better. Being at the actual location where the problem happened – talking to the operator, looking at the equipment or the raw materials, and grasping the facts – helps us move towards addressing the problem. View problems as the gap between the ideal state and the current state, and understand that your purpose is to move towards the ideal state. Under this idea of a “problem”, you will always have opportunities to move towards the ideal state.

Always keep on learning…

The greatest barrier to scientific thinking:


If one were to ask me what I am afraid of as an engineer, I would unequivocally declare “Confirmation Bias”.

“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”

– Francis Bacon, Novum Organum, 1620

Confirmation bias is part of everybody’s thinking process. When confronted with a problem, one has to determine how to solve it. The first step is to analyze the problem, and this requires looking inward and finding a mental model that might explain the problem at hand. If such a pattern is available, he tries to fit the problem into the model, as if it were a suit tailored to fit the body of the problem. This is a form of deductive thinking.

In the absence of a pattern, he tries to gather further information to form a mental model. The newly created model may fit the problem much better. This is a form of inductive thinking.

Sometimes, in the absence of a pattern, one might try to find multiple mental models and see which model fits the problem the best. This is a form of abductive thinking.

No matter what form of thinking is used, the trouble starts when one tries to find evidence to prove the model and ignores any evidence that might prove it wrong. This is the curse of confirmation bias. It can create a blind spot that is sometimes large enough to hide an elephant!

“When men wish to construct or support a theory, how they torture facts into their service!”

– Charles Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852

This creates quite a challenge for any activity involving brain functioning, like problem solving or decision making. I have attempted to create a list of steps one can use to minimize the impact of confirmation bias. I will be the first person to tell you that this is a daily struggle for me.

  • Be aware that confirmation bias exists:

The first step is to be aware that confirmation bias is part of what we are. Just being aware of this can help us in asking the right questions.

  • Walk the process:

Walking the process allows us to understand the big picture, and helps us in seeing the problem from other people’s perspective. If a problem is identified on the floor during assembly, it helps to walk the process with the component starting at the receiving dock all the way to the assembly on the floor. This helps to slow us down, and we may see things counter to our initial hypothesis that we may have missed otherwise.

  • Can you turn the problem on and off?:

When a problem occurs, either in the field or on the production floor, I always try to see if I can turn the problem on and off. This helps clarify my mental model and validate my thinking. The cause alone does not result in the effect; the cause, in the presence of enabling conditions, creates the effect. Understanding the enabling conditions helps us turn the problem on and off.

  • Challenge yourself to disprove your model:

Challenging yourself to disprove your own model is probably the most difficult yet most effective way to combat confirmation bias. It is, after all, easier to disprove a theory than to prove it. This helps purify one’s thinking.

In a recent conversation, my brother-in-law talked about the “tenth man” scene from the movie “World War Z”. In the movie, the whole world is under attack from a zombie virus. Israel had built a wall around the nation that prevented the outbreak, up to a certain point in the movie. This was achieved through a policy referred to as the “tenth man”: if nine out of ten people in a council agree on something, the tenth person has to take the opposite side, no matter how improbable it might seem.

  • Understanding the void:

My first blog post here was about understanding the void. This is similar to the negative space idea in design. The information that is not there or not obvious can sometimes be valuable. Looking for the negative space again helps us look at the big picture.

In the short story “Silver Blaze”, Sherlock Holmes talks about the “curious incident of the dog”. Holmes was able to solve the mystery by deducing that the crime was committed by somebody the dog knew.

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

I will finish this post off with a Zen story.

There was a great Zen teacher. Some of his disciples came to him one day to complain about the villagers.

They told him that the villagers were spreading rumors that the teacher was immoral, and that his followers were not good people. They asked him what they should do.

“First, my friends,” he responded, “you should definitely consider whether what they say is true or not.”

Always keep on learning…