The Purpose of Visualization:


“Many men go fishing all of their lives without knowing that it is not the fish they are after.” – a quote misattributed to Henry David Thoreau.

What is the purpose of visualization? Before answering this, let’s look at what visualization is. Visualization is making information visible at the gemba. The information could be in the form of daily production boards, or it could be non-conforming components or other artifacts placed on a table on the production floor. Another phrase used in place of visualization is “visibilization”. I talked about this in the post – Visibilization: Crime Fighting, Magic and Mieruka. The purpose of visualization or visibilization is to make waste visible so that appropriate action can be pursued. Or is it?

I recently came across the paper “Defining Insight for Visual Analytics” by Chang, Ziemkiewicz et al. I enjoyed the several insights I was able to gain from this paper. The purpose of visualization is to enable the discovery of insight. This may seem fairly logical and straightforward. Chang et al. detail two types of insight – knowledge-building insight and spontaneous insight. Knowledge-building insight comes from a linear, continuous process in which the operator uses established problem-solving methods and heuristics to solve a problem and gain insight into the process. Spontaneous insight does not come from gradually learned heuristics or problem-solving methods; it results in “aha!” moments and usually in new knowledge. Spontaneous insight often occurs when the operator has tried the normal problem-solving routines without success – it happens, after several frustrating attempts, when the mind breaks away from the normal routines. Researchers are able to study the two types of insight by recording participants’ brain activity with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI).

Chang et al. note that – In normal problem solving, the activity in the temporal lobe is continuous and mostly localized in the left hemisphere, which is thought to encode more detailed information in tightly related semantic networks. This indicates that normal problem solving involves a narrow but continuous focus on information that is highly relevant to the problem at hand. In contrast, when participants solve a problem with spontaneous insight, the right temporal lobe shows a sharp burst of activity, specifically in an area known as the superior temporal gyrus. Unlike the left temporal lobe, the right temporal lobe is thought to encode information in coarse, loosely associated semantic networks. This suggests that spontaneous insight occurs through sudden activation of less clearly relevant information through weak semantic networks, which corresponds to a participant’s paradigm shift following an impasse.

The findings indicate that spontaneous insight is qualitatively different from knowledge-building insight. Knowledge-building insight uses the normal routines and adds to existing knowledge, while spontaneous insight breaks away from the normal routines and creates new knowledge. Spontaneous insight is a form of problem solving used to find solutions to difficult and seemingly incomprehensible problems. Knowledge-building insight, on the other hand, is a form of learning that builds a relationally semantic knowledge base through a variety of problem-solving and reasoning heuristics.

In light of the two types of insight, which one is better? The point is not to identify which is better, but to understand that both types of insight are important and related to one another. Chang et al. theorize that one can gain spontaneous insights only through routine knowledge-building insights. In their words – Einstein didn’t come up with the Theory of Relativity out of thin air but rather based it on experiments inconsistent with existing theories and previous mathematical work. The existence of deep, complex knowledge about a subject increases the likelihood that a novel connection can be made within that knowledge. Likewise, each major spontaneous insight opens up the possibility of new directions for knowledge-building. Together, the two types of insights support each other in a loop that allows human learning to be both flexible and scalable.

Chang et al. hypothesize that there is a positive, non-linear relationship between gaining insight and the knowledge that the operator already possesses: the more knowledge the operator has, the greater the likelihood that the operator will gain further insights from visualization. In this light, the purpose of visualization is to develop your employees, and in some regards it demonstrates respect for people. Making the problems and waste visible allows them to engage in daily or frequent problem-solving routines that build knowledge-building insights, which then lead to the spontaneous insights that improve their processes. In other words, it is about building the continuous improvement muscle! The problems on the floor vary in their complexity. There are routine problems with known linear relationships (simple to complicated problems), and there are problems that have no known solutions and are intricately woven with non-linear relationships (complicated to complex problems). Solving the routine problems can help with gaining the valuable spontaneous insights needed to tackle the complex problems.
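To make the idea of a positive, non-linear relationship a little more concrete, here is a toy sketch of my own (only an illustration, not a calculation from Chang et al.): if insights come from novel connections between pieces of knowledge the operator already has, then the number of possible pairwise connections grows roughly quadratically with the amount of knowledge.

```python
# A toy illustration (my own, not from Chang et al.): if insight comes from
# making novel connections between things the operator already knows, the
# number of possible pairwise connections grows roughly quadratically with
# the number of knowledge items.

def possible_connections(n_items: int) -> int:
    """Number of distinct pairs that can be formed from n_items pieces of knowledge."""
    return n_items * (n_items - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n} knowledge items -> {possible_connections(n)} possible pairwise connections")
```

Doubling the knowledge items roughly quadruples the number of possible connections, which is one simple way a positive, non-linear relationship could arise.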

I will finish off with a quote from the great Carl Sagan when he went on The Tonight Show with Johnny Carson:

For most of history of life on this planet, almost all the information they had to deal with was in their (organisms’) genes.  Then about 100 million years ago or a little longer than that, there came to be a reptile that for the first time in the history of life had more information in its brains than in its genes. That was a major step symbolically in the evolution of life on this planet. Well, now we have an organism – us, which can store more information outside the body altogether than inside the body – that is in books and computers and television and video cassettes. And that extraordinarily expands our abilities to understand what is happening and to manipulate and control our environment, if we do it wisely, for human benefit.

Always keep on learning…

In case you missed it, my last post was Looking at Kaizen and Kaikaku:


Hammurabi, Hawaii and Icarus:


In today’s post, I will be looking at Human Error. In November 2017, the US state of Hawaii reinstated its Cold War-era nuclear attack warning sirens due to growing fears of a nuclear attack from North Korea. On January 13, 2018, an employee of the Hawaii Emergency Management Agency sent out an alert through the communication system – “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” The employee was supposed to take part in a drill in which the emergency missile warning system is tested; the alert message was not supposed to go to the general public. The cause of the mishap was soon determined to be human error. The employee in the spotlight and a few others left the agency soon afterwards. Even the Hawaiian governor, David Ige, came under scrutiny because he had forgotten his Twitter password and could not update his Twitter feed about the false alarm. I do not have all of the facts for this event, and it would not be right of me to determine what went wrong. Instead, I will focus on the topic of human error.

One of the first proponents of the concept of human error in modern times was the American industrial safety pioneer Herbert William Heinrich. In his seminal 1931 book, Industrial Accident Prevention, he proposed the Domino Theory to explain industrial accidents. Heinrich reviewed several industrial accidents of his time and came up with the following percentages for proximate causes:

  • 88% are from unsafe acts of persons (human error),
  • 10% are from unsafe mechanical or physical conditions, and
  • 2% are “acts of God” and unpreventable.

The reader may find it interesting to learn that Heinrich was working as the Assistant Superintendent of the Engineering and Inspection Division of Travelers Insurance Company when he wrote the book in 1931. The data that Heinrich collected was somehow lost after the book was published. Heinrich’s Domino Theory explains an injury from an accident as a linear sequence of events associated with five factors – ancestry and social environment, fault of person, unsafe act and/or mechanical or physical hazard, accident, and injury.


He hypothesized that taking away one domino from the chain can prevent the industrial injury from happening. He wrote – “If one single factor of the entire sequence is to be selected as the most important, it would undoubtedly be the one indicated by the unsafe act of the person or the existing mechanical hazard.” I was taken aback by the example he gave to illustrate his point. He described an operator fracturing his skull as the result of a fall from a ladder. The investigation revealed that the operator descended the ladder with his back to it and caught his heel on one of the upper rungs. Heinrich noted that the effort to train and instruct him and to supervise his work was not effective enough to prevent this unsafe practice. “Further inquiry also indicated that his social environment was conducive to the forming of unsafe habits and that his family record was such as to justify the belief that reckless tendencies had been inherited.”

One of the main criticisms of Heinrich’s Domino model is that it explains a complex phenomenon in an overly simplistic way. The Domino model reflects the mechanistic view prevalent at that time. The modern view of “human error” is based on cognitive psychology and systems thinking. In this view, accidents are seen as a by-product of the normal functioning of the sociotechnical system. Human error is seen as a symptom and not a cause. This new view takes a “no-view” approach to human error, meaning that human error should not be its own category of root cause. The process is not perfectly built, and the human variability that might result in a failure is the same variability that produces the ongoing success of the process. The operator has to adapt to meet the unexpected challenges, pressures and demands that arise on a day-to-day basis. Using human error as a root cause is a fundamental attribution error – focusing on the traits of the operator, as being reckless or careless, rather than on the situation the operator was in.

One concept that may help explain this further is Local Rationality. Local Rationality starts with the basic assumption that everybody wants to do a good job, and that we try to do the best we can (be rational) with the information available to us at a given time. If a decision led to an error, instead of looking at where the operator went wrong, we need to look at why the decision made sense to him at that point in time. The operator is at the “sharp end” of the system. James Reason, Professor Emeritus of Psychology at the University of Manchester in England, came up with the concept of the Sharp End and the Blunt End. The sharp end is similar to the concept of gemba in Lean – it is where the actual action takes place. This is mainly where the accident happens, and it is thus in the spotlight during an investigation. The blunt end, on the other hand, is removed and away in space and time. The blunt end is responsible for the policies and constraints that shape the situation at the sharp end; it consists of top management, regulators, administrators and so on. Professor Reason noted that the blunt end of the system controls the resources and constraints that confront the practitioner at the sharp end, shaping and presenting sometimes conflicting incentives and demands. The operators at the sharp end of the sociotechnical system inherit the defects in the system created by the actions and policies of the blunt end, and they can be the last line of defense rather than the main instigators of accidents. Professor Reason also noted that, rather than being the main instigators of an accident, operators tend to be the inheritors of system defects: “Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.” I encourage the reader to research the works of Jens Rasmussen, James Reason, Erik Hollnagel and Sidney Dekker, since I have only scratched the surface here.

Final Words:

Perhaps the oldest source of attributing causation to human error is the Code of Hammurabi, the set of ancient Mesopotamian laws dating back to 1754 BC. The Code of Hammurabi consists of 282 laws. Some examples that deal with human error are given below.

  • If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.
  • If a man rents his boat to a sailor, and the sailor is careless, and the boat is wrecked or goes aground, the sailor shall give the owner of the boat another boat as compensation.
  • If a man lets in water and the water overflows the plantation of his neighbor, he shall pay ten gur of corn for every ten gan of land.

I will finish off with the story of Icarus. In Greek mythology, Icarus’ father was the master craftsman Daedalus, the creator of the labyrinth on Crete for King Minos. King Minos imprisoned Daedalus and Icarus on the island. The ingenious Daedalus observed the birds flying and invented a set of wings made from bird feathers and candle wax. He tested the wings out and made a pair for his son Icarus. Daedalus and Icarus planned their escape. Daedalus was a good engineer – he studied the failure modes of his design and identified its limits. He instructed Icarus to follow him closely, not to fly too close to the sea, since the moisture could dampen the wings, and not to fly too close to the sun, since the heat could melt the wax. As the story goes, Icarus was so excited by his ability to fly that he got carried away (maybe he was reckless). He flew too close to the sun, the wax melted from his wings, and he fell to his untimely death.

Perhaps the death of Icarus could be viewed as a human error, since he was reckless and did not follow directions. However, Stephen Barlay, in his 1969 book Aircrash Detective: International Report on the Quest for Air Safety, looked at this story closely. At the high altitude at which Icarus was flying, the temperature would actually be cold rather than warm. The failure would therefore have come from the cold making the wax brittle and the wings breaking, rather than from the wax melting as the story indicates. If this were true, the wings would have broken in cold weather, and Icarus would have died at another time even if he had followed his father’s advice.

Always keep on learning…

In case you missed it, my last post was A Fuzzy 2018 Wish

The Socratic Method:


In today’s post, I am looking at the Socratic Method. Socrates was one of the early founders of Western philosophy. Marcus Tullius Cicero (106–43 BCE), a Roman politician, wrote that it was Socrates who brought philosophy down from heaven to earth:

“Socrates however (was the) first (who) called philosophy down from heaven, and placed it in cities, and introduced it even in homes, and drove (it) to inquire about life and customs and things good and evil.”

I have always been curious about the Socratic Method. I have heard it mentioned in many books as a method of teaching by asking questions. In my mind, I drew the analogy of guiding a horse to the pond so that it can drink the water itself. The “guiding” is done through questions, so that the teacher does not give the answer to the student directly; instead, the student comes up with the answer. This is not the same as the normal teaching in schools (“lecturing”), where the teacher gives the answers while the students remain passive. Socrates used the analogy of a midwife who helps others deliver their thoughts in a clear and meaningful manner.

There are three terms commonly used to describe the Socratic Method:

  • Elenchos
  • Dialectic
  • Aporia

Elenchos is a Greek term that can be translated as “cross-examination”. There is a negative connotation to the term, and Socrates’ method has been described as an Elenctic method. The negative connotation comes from pointing out to the interlocutor that he does not have the knowledge he thought he did – puncturing the conceit of wisdom. Socrates would start out by saying that he does not know about something, for example the concept of virtue. He would then ask the person of interest for help in defining what virtue is. From that point onwards, once the person of interest commits to a definition, Socrates continues to ask questions, each one pointing out a weakness that refutes the definition. After a round of questions, the person of interest becomes very confused, recognizes that he did not understand the subject as well as he thought he did, and feels embarrassed.

Dialectic is another Greek term, which can be translated as “discussion”. Dialectic does not have the negative connotation that Elenchos has. Any complex idea contains contradictions, inconsistencies and even portions of ignorance. The Dialectic method is a way to reveal these contradictions and inconsistencies – to go back and forth between contrasting ideas in order to refine the topic at hand.

What Socrates is trying to achieve with his questions is “aporia”. Aporia is another Greek term, often translated as “impasse” or “puzzlement” – a state of cognitive discomfort. Once the interlocutor realizes that he does not know as much as he thought he did, he reaches aporia. He feels the discomfort cognitively because he was sure that he knew the subject; now he is outside his comfort zone, and Socrates has been able to find fault with his knowledge. Aporia is the starting point for the interlocutor to examine himself and reflect so that new knowledge can be gained.

Combining the three ideas above, we can loosely explain the Socratic Method as follows:

  1. Put the person of interest (POI) at ease, and ask a question in the form of “What is X?”
  2. If the POI defines “X” as “Y”, find examples where “X” is not “Y”.
  3. Ask questions to further define “X” in light of the new information. Repeat (2) and (3).
  4. Each round of questions moves the POI further away from their first definition.
  5. The POI reaches aporia.
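Loosely speaking, these steps form a loop of define, refute, refine. Purely as an illustration of that structure (the helper names and the idea of a programmable “counterexample finder” are my own hypothetical stand-ins, not anything Socrates or Plato describe), the loop might be sketched like this:

```python
# An illustrative sketch of the elenctic loop described above. The helper
# names and data are hypothetical stand-ins; the point is only the structure:
# define, refute, refine, repeat until no stable definition survives (aporia).

def find_counterexample(definition, cases):
    """Return the first case that the current definition gets wrong, if any."""
    for thing, truly_is_x in cases:
        if definition(thing) != truly_is_x:
            return thing, truly_is_x
    return None

def socratic_loop(definition, refine, cases, max_rounds=10):
    for _ in range(max_rounds):
        counterexample = find_counterexample(definition, cases)
        if counterexample is None:
            return definition            # no refutation found (rare in practice)
        # Each refutation forces the POI to redraw the boundary of the concept.
        definition = refine(definition, counterexample)
    return None                          # aporia: no stable definition reached
```

The chair example below follows exactly this pattern: each counterexample (a stair step, a bench, a bean bag chair) forces the definition to be redrawn.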

Socrates would plead ignorance and ask for specific definitions. The questions can also take other forms, such as “What is the purpose of X?” or “How does one obtain X?”. The first question forces the POI to define the boundary of his conception of the idea – think of it as a box. With each refutation, the POI realizes that the boundary he first drew is not adequate, and that he has to redefine it – perhaps make it larger or smaller, or draw it in a whole other area.

One of the best examples I have seen to explain this is that of a chair. How would one define a chair? One possible definition is that a chair is something for a person to sit upon.


However, there are many other things that people sit on – for example, a step on a staircase.

With this refutation, the definition may now be changed to “a chair is something designed for a person to sit on.”


The new refutation might be that a bench is also something designed for a person to sit on, and so is a stool – yet these are not chairs.

Perhaps the chair can now be defined as “a piece of furniture designed for only one person, with a back and four legs”. This is similar to the definition in the Merriam-Webster dictionary.

Even with the new definition, there are still inconsistencies. There are chairs, such as purely decorative chairs, that are not supposed to be sat on. There are chairs, like a bean bag chair, that have neither a back nor legs.


Compared to defining a chair, it is much harder to define ideas that are not tangible. There are many phrases in Lean, like “Respect for People” and “flow”, that get thrown around. How would you define “Respect for People”? Would you define it as being nice to your workers? How would you define “flow”? Would you define it as producing one piece at a time?

On a side note, you can use the Socratic Method on yourself. This can be compared to hansei in the Toyota Production System. What are your beliefs and worldviews? Can you identify any contradictions or inconsistencies that might refute them? Actively seeking to disprove your belief system helps you in your pursuit of wisdom. Seek out aporia!

Final words:

Socrates did not write any books. His disciple Plato wrote about Socrates a great deal, and most of what we know about Socrates comes from Plato’s books. Socrates never defined or explained his method, nor did Plato write it down as a method. What we have come to know as the Socratic Method comes from reading Plato’s books and noting the patterns of the dialogues Socrates engaged in. In Plato’s book “Apology”, Socrates talks about his reason for going around asking questions. Socrates’ friend Chaerephon went to Delphi and asked the Pythian priestess, “Is there anybody wiser than Socrates?” The Pythian priestess responded that there was no one wiser. This really confused Socrates, and he took it to mean that the gods were commanding him to examine himself as well as others. He came to the realization that while others pretended to possess knowledge, he knew that he knew nothing, and this knowledge is what set him apart from others. Socrates said that the unexamined life is not worth living. The pursuit of knowledge starts with questions.

I will finish with a story of Diogenes and Plato. Diogenes was one of the founders of Cynic Philosophy. Diogenes asked Plato for a definition of man. Knowing Diogenes’ cynical nature, Plato gave the tongue-in-cheek definition from Socrates – “Man is a featherless biped.” Diogenes went outside, and bought a chicken. He then plucked all of its feathers, brought it to Plato, and said, “Behold. Here is a man.”

Plato then ordered his academy to add “with broad flat nails” to the definition.

Always keep on learning…

In case you missed it, my last post was Which Way You Should Go Depends on Where You Are:

Clause for Santa – A Look at Bounded Rationality:


It is Christmas time in 2016. My kids, ages 6 and 9, believe in Santa Claus. It bothers me that they believe in Santa Claus, mainly because it is not logical to believe in a magical being bringing materialistic presents, and also because we, their parents, do not get credit for the presents they receive.

From my children’s perspective, though, Santa does make sense. Think of it as a black box: they write what they want in a list, believe in Santa, and on Christmas day they find their toys under the Christmas tree. The output matches the input, repeatedly, year after year. This passes their evidence-based sniff test. They also find additional evidence in the form of stories, movies, songs and so on about Santa Claus and his magical flying reindeer. From their standpoint, they have empirical evidence for their decision to believe in Santa.

This line of thinking led me to reflect on “Bounded Rationality”. Bounded Rationality is a concept that was created by the great American thinker Herbert Simon. Herbert Simon won the Nobel Prize for Economics in 1978 for his contributions.

According to the famous German psychologist Gerd Gigerenzer, in the 1950s and 60s the Enlightenment notion of reasonableness reemerged in the form of the concept of “rationality”. Rationality refers to the optimization of some function; the optimization can be maximization or minimization. Simon determined that there is a limit to the “rationality” of humans, and his views went against the idea of the fully rational man of neoclassical economics. Simon believed that we cannot be fully rational while making decisions, and that our rationality is bounded by our mental capabilities and mental models. In his words:

Bounded rationality refers to the individual collective rational choice that takes into account “the limits of human capability to calculate, the severe deficiencies of human knowledge about the consequences of choice, and the limits of human ability to adjudicate among multiple goals”.

 Bounded rationality does not, therefore, argue that decisions and the people taking them are inherently irrational, but that there are realistic limits on the ability of people to weigh complex options in a fully logical and objective way. Bounded rationality therefore concerns itself with the interaction between the human mind (with its prior knowledge, competing value systems and finite cognitive resources) and the social environment – the processes by which decisions are made and how these processes are shaped by the individual and their wider circumstances.


Thus, we do not make the best choices because we do not have all the information, we do not understand the consequences of all the options, or we do not take the time to evaluate all the alternatives. Furthermore, we do not always understand that our decision was based on an imperfect model. This leads to the next idea that Herbert Simon created – “satisficing”. Satisficing is a word formed from two words – satisfy and suffice. In other words, satisficing is our tendency to latch on to “good enough for now” solutions. Simon introduced a “stop rule” as part of the satisficing criterion: “Stop searching as soon as you have found an alternative that meets your aspiration level.” He later modified it into a dynamic rule, in which the aspiration level, or current criterion, is raised or lowered based on previous successes or failures. Gerd Gigerenzer strongly reminds us that bounded rationality means neither optimization under constraints (finding the best option under the constraints set by the situation) nor irrationality (a total absence of reasonableness).
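As a rough sketch of how the stop rule and a dynamic aspiration level might look – the option values, the patience threshold and the adjustment step below are made-up parameters, and only the “lowering” side of Simon’s dynamic rule is shown – satisficing can be written as a simple search loop:

```python
import random

# A minimal sketch of satisficing: examine alternatives one at a time and
# stop at the first one that meets the aspiration level ("good enough for
# now"). Repeated failures lower the aspiration level. All parameters are
# invented for illustration; this is not Simon's own formalization.

def satisfice(options, aspiration, adjustment=0.05, patience=5):
    failures = 0
    for option in options:
        if option >= aspiration:
            return option, aspiration      # stop rule: good enough for now
        failures += 1
        if failures >= patience:
            aspiration -= adjustment       # repeated failure lowers the bar
            failures = 0
    return None, aspiration                # nothing ever met the aspiration

# Example: a stream of imperfect alternatives evaluated in the order seen.
random.seed(1)
stream = [random.random() for _ in range(50)]
choice, final_aspiration = satisfice(stream, aspiration=0.95)
print(choice, final_aspiration)
```

Note that the search never looks for the best option in the stream; it stops at the first one that clears the (possibly lowered) bar, which is exactly the difference between satisficing and optimizing.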

In the 2001 book “Bounded Rationality: The Adaptive Toolbox”, edited by Gerd Gigerenzer and Reinhard Selten, there is a chapter dedicated to the role of culture in bounded rationality. This chapter discusses how sociocultural processes produce boundedly rational algorithms. Both ethnographic data and computer modeling suggest that innate, individually adaptive processes, such as prestige-biased transmission and conformist transmission, will accumulate and stabilize cultural-evolutionary products that act as effective decision-making algorithms, without the individual participants understanding how or why the particular system works. Systems of divination provide interesting examples of how culture provides adaptive solutions.

One of the examples they cite is the complex system of bird omens amongst the Kantu swidden farmers of Kalimantan (Indonesian Borneo). Swidden agriculture is a technique of rotational farming. Each Kantu farmer relies on the type of bird and the type of call the bird makes to choose his agricultural site. This effectively randomizes the distribution of agricultural sites, which ultimately helps the Kantu farmers and keeps the tradition alive. As a quick and thrifty heuristic, this cultural system suppresses errors that farmers make in judging the chances of a flood, and it substitutes an operationally simple means for individuals to randomize their garden site selection. In addition, by randomizing each farmer’s decision independently, this belief system reduces the chance of catastrophic failure across the entire group – it decreases the probability that many farmers will fail at the same time. All of this only works because the Kantu believe that the birds actually supply supernatural information that foretells the future and that they would be punished for not listening to it. How many such cultural traditions do we still carry on in our work lives?
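The claim that independent randomization reduces the chance of group-wide failure is easy to check with a toy simulation. The flood probability, the number of farmers and the coin-flip “omen” below are all invented for illustration; the only point is that correlated choices fail together, while independently randomized choices almost never all fail at once.

```python
import random

# A toy simulation of the point above. If every farmer follows the same
# (possibly wrong) judgment, one bad year can wipe out the whole group; if
# each farmer's site choice is randomized independently, total group failure
# is rare. All probabilities and counts are invented for illustration.

def total_failure(n_farmers, randomize, p_lowland_flood=0.3):
    lowland_floods = random.random() < p_lowland_flood
    for _ in range(n_farmers):
        if randomize:
            picks_lowland = random.random() < 0.5   # bird omen ~ coin flip
        else:
            picks_lowland = True                    # shared judgment: all pick the lowland
        if not (picks_lowland and lowland_floods):
            return False                            # at least one farmer succeeded
    return True                                     # every farmer failed this year

def failure_rate(randomize, trials=10_000, n_farmers=20):
    return sum(total_failure(n_farmers, randomize) for _ in range(trials)) / trials

random.seed(0)
print("All follow the same judgment:", failure_rate(randomize=False))
print("Independently randomized:    ", failure_rate(randomize=True))
```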

I found this quite interesting, and maybe because it is Christmas time, I could not help but draw comparisons to how we try to keep the idea of Santa alive for our kids. I thought I would dig deeper into this with my kids. I wanted to push them to go beyond their biases and heuristics, and to give them an opportunity to look for more information regarding their belief in Santa. I started asking them questions in the hope that it would make them reevaluate their current decision to believe in Santa. With enough probing questions, surely they would be able to reevaluate their thinking.

I first asked them “Why do you believe in Santa?”

My youngest responded, “Believing in Santa makes him real”.

My middle child responded, “We saw him at the shopping mall parking lot loading presents in his car.”

My oldest responded with the following facts, “We get presents every year from him. We put out cookies and milk, and they are gone by Christmas day.”

Not giving up, I pushed, “If Santa gives presents to all the kids in the world, I never got any presents when I was a kid in India. Why is that?”

“You were a naughty child”, my youngest responded giggling.

“It takes a long time to get to India”, my middle child also gave her reasoning.

I thought I would give some stats with my questions, “There are about 1.9 billion kids in the world. How can Santa have toys for all of them?”

“That’s easy. Santa is super rich and can buy all the toys he wants” was the response.

“OK. How can he go around world giving toys to all the kids?”, I asked.

“He has magical reindeers” was the response.

Finally, I gave up. My attempts to crack their belief in Santa were failing. I then realized that perhaps it is not bad after all, and that my kids being kids is the most important thing of all. And it makes Christmas more magical for them.

There is always next year to try again!

Always keep on learning…

In case you missed it, my last post was What is the Sound of One Hand Clapping in Systems?