Hammurabi, Hawaii and Icarus:


In today’s post, I will be looking at human error. In November 2017, the US state of Hawaii reinstated its Cold War era nuclear warning sirens due to growing fears of a nuclear attack from North Korea. On January 13, 2018, an employee of the Hawaii Emergency Management Agency sent out an alert through the communication system – “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” The employee was supposed to take part in a drill in which the emergency missile warning system is tested; the alert message was never supposed to go to the general public. The cause of the mishap was soon determined to be human error. The employee in the spotlight and a few others left the agency soon afterwards. Even the Hawaiian governor, David Ige, came under scrutiny because he had forgotten his Twitter password and could not update his Twitter feed about the false alarm. I do not have all of the facts for this event, and it would not be right of me to determine what went wrong. Instead, I will focus on the topic of human error.

One of the first proponents of the concept of human error in modern times was the American industrial safety pioneer Herbert William Heinrich. In his seminal 1931 book, Industrial Accident Prevention, he proposed the Domino Theory to explain industrial accidents. Heinrich reviewed several industrial accidents of his time and came up with the following percentages for proximate causes:

  • 88% are from unsafe acts of persons (human error),
  • 10% are from unsafe mechanical or physical conditions, and
  • 2% are “acts of God” and unpreventable.

The reader may find it interesting to learn that Heinrich was working as the Assistant Superintendent of the Engineering and Inspection Division of Travelers Insurance Company when he wrote the book in 1931. The data that Heinrich collected was somehow lost after the book was published. Heinrich’s Domino Theory explains an injury from an accident as a linear sequence of events associated with five factors – ancestry and social environment, fault of person, unsafe act and/or mechanical or physical hazard, accident, and injury.


He hypothesized that taking away one domino from the chain can prevent the industrial injury from happening. He wrote – “If one single factor of the entire sequence is to be selected as the most important, it would undoubtedly be the one indicated by the unsafe act of the person or the existing mechanical hazard.” I was taken aback by the example he gave to illustrate his point. He described an operator fracturing his skull as the result of a fall from a ladder. The investigation revealed that the operator descended the ladder with his back to it and caught his heel on one of the upper rungs. Heinrich noted that the effort to train and instruct him and to supervise his work was not effective enough to prevent this unsafe practice. “Further inquiry also indicated that his social environment was conducive to the forming of unsafe habits and that his family record was such as to justify the belief that reckless tendencies had been inherited.”

One of the main criticisms of Heinrich’s Domino model is that it explains a complex phenomenon simplistically. The Domino model reflects the mechanistic view prevalent at that time. The modern view of “human error” is based on cognitive psychology and systems thinking. In this view, accidents are seen as a by-product of the normal functioning of the sociotechnical system. Human error is seen as a symptom and not a cause. This new view takes a “no-view” approach to human error, meaning that human error should not be its own category of root cause. The process is not perfectly built, and the human variability that might result in a failure is the same variability that produces the ongoing success of the process. The operator has to adapt to meet the unexpected challenges, pressures and demands that arise on a day-to-day basis. The use of human error as a root cause is a fundamental attribution error – focusing on the traits of the operator, such as recklessness or carelessness, rather than on the situation that the operator was in.

One concept that may help explain this further is Local Rationality. Local Rationality starts with the basic assumption that everybody wants to do a good job, and that we try to do the best we can (be rational) with the information available to us at a given time. If a decision led to an error, instead of looking at where the operator went wrong, we need to look at why the decision made sense to him at that point in time. The operator is at the “sharp end” of the system.

James Reason, Professor Emeritus of Psychology at the University of Manchester in England, came up with the concept of the Sharp End and the Blunt End. The sharp end is similar to the concept of Gemba in Lean – where the actual action takes place. This is where the accident happens, and it is thus in the spotlight during an investigation. The blunt end, on the other hand, is removed in space and time. The blunt end is responsible for the policies and constraints that shape the situation at the sharp end; it consists of top management, regulators, administrators, etc. Professor Reason noted that the blunt end of the system controls the resources and constraints that confront the practitioner at the sharp end, shaping and presenting sometimes conflicting incentives and demands. The operators at the sharp end of the sociotechnical system inherit the defects in the system created by the actions and policies of the blunt end, and they can be the last line of defense rather than the instigators of accidents. As Professor Reason put it – “rather than being the main instigators of an accident, operators tend to be the inheritors of system defects. Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.” I encourage the reader to research the works of Jens Rasmussen, James Reason, Erik Hollnagel and Sidney Dekker, since I have only scratched the surface.

Final Words:

Perhaps the oldest source of human error causation is the Code of Hammurabi, the code of ancient Mesopotamian laws dating back to 1754 BC. The Code of Hammurabi consisted of 282 laws. Some examples of human error are given below.

  • If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.
  • If a man rents his boat to a sailor, and the sailor is careless, and the boat is wrecked or goes aground, the sailor shall give the owner of the boat another boat as compensation.
  • If a man lets in water and the water overflows the plantation of his neighbor, he shall pay ten gur of corn for every ten gan of land.

I will finish off with the story of Icarus. In Greek mythology, Icarus was the son of the master craftsman Daedalus, the creator of the labyrinth for King Minos of Crete. King Minos imprisoned Daedalus and Icarus in Crete. The ingenious Daedalus observed the birds flying and invented a set of wings made from bird feathers and candle wax. He tested the wings out and then made a pair for his son Icarus. Daedalus and Icarus planned their escape. Daedalus was a good engineer, since he studied the failure modes of his design and identified its limits. He instructed Icarus to follow him closely: not to fly too close to the sea, since the moisture could dampen the wings, and not to fly too close to the sun, since the heat could melt the wings. As the story goes, Icarus was so excited with his ability to fly that he got carried away (maybe recklessly). He flew too close to the sun, and the wax melted from his wings, causing him to fall to his untimely death.

Perhaps the death of Icarus could be viewed as a human error, since he was reckless and did not follow directions. However, Stephen Barlay, in his 1969 book Aircrash Detective: International Report on the Quest for Air Safety, looked at this story closely. At the high altitude at which Icarus was flying, the temperature would actually be cold rather than warm. The failure would thus come from the cold making the wax brittle so that it broke, rather than from the wax melting as the story indicates. If this is true, then in cold weather the wings would have broken anyway, and Icarus would have died at some other time even if he had followed his father’s advice.

Always keep on learning…

In case you missed it, my last post was A Fuzzy 2018 Wish


The Information Model for Poka Yoke:


In today’s post, I will be looking at poka yoke, or error proofing, using an information model. My inspirations for this post are Takahiro Fujimoto, who wrote the wonderful book “The Evolution of a Manufacturing System at Toyota” (1999), and a discussion I had with my brother last weekend.

I will start with an interesting question – “where do you see information at your gemba, your production floor?” A common answer might be the procedures or the work instructions, or the visual aids readily available on the floor. Yet another answer might be the production boards where the running total is recorded along with reject information. All of these are correct. A general definition of information is something that carries content, which is related to data. I am not going into Claude Shannon’s work on information in this post. Fujimoto’s brilliant view is that every artifact on the production floor – in fact, every material thing – carries information. Fujimoto defines an information asset as the basic unit of an information system. Information cannot exist without the materials or energy in which it is embodied – its medium.


This information model indicates that the manufactured product carries information. The information it carries came from the design of the product. The information is transferred and transformed from the fixtures, dies, prints, etc., onto the physical product. Any loss of information during this process results in a defective product. To take this concept further, even if the loss of information is low, the end-user’s interaction with the product brings in a different dimension. The end-user gains information when he interacts with the product. If this information matches his expectations, he is satisfied. Even if there is minimal loss of information from design to manufacturing, if the end product’s information does not match the user’s expectations, the user is dissatisfied.

Let’s look at a simple example of a door. A door with a handle is a poor design when the information of whether to push or pull is not clearly transferred to the user. The user might expect to pull on the handle instead of pushing it. The information carried by the door handle is “open the door using the handle”; it does not convey whether to push or pull.


Perhaps, one can add a note on the door that says, “Push”. A better solution to avoid the confusion is to eliminate the handle altogether so that the only option is to push. The removal of the handle with a note indicating “push” conveys the information that to open the door, one has to push. The information gets conveyed to the user and there is no dissatisfaction.

This example brings up an important point – a defect is created only when an operator or machine interacts with imperfect information. The imperfect information could be in the form of a worn-out die, or an imperfect work instruction that leads to loss of the original information being transferred to the product. When you are trying to solve a problem on the production floor, you are updating the information available in the medium so that the user’s interaction is modified to achieve the optimum result. This brings us to poka yoke, or error-proofing.

If you think about it, you could say that the root cause of any problem is that the current process allows the problem to occur due to imperfect information. This is what poka yoke tries to address. Toyota utilizes Jidoka and poka yoke to ensure product quality. Jidoka, or autonomation, is the idea that when a defect is identified, the process is stopped – either by the machine in an automated process, or by the operator on an assembly line. The line is stopped so that the quality problem can be addressed. In the case of Jidoka, the problem has already occurred. In contrast, poka yoke eliminates the problem by preventing it from occurring in the first place. Poka yoke is the brainchild of probably one of the best Industrial Engineers ever, Shigeo Shingo. The best error-proofing is one where the operator cannot create a specific defect, knowingly or unknowingly. In this type of error-proofing, the information is embedded in the medium such that it conveys the proper method to the operator, and if that method is not followed, the action cannot be completed. The information that there is only one proper way is physically embedded in the medium.

Information in the form of work instructions may not always be effective, because of limited interaction with the user. Information in the form of visual aids can be effective, since it interacts with the user and provides useful information; however, the user can ignore it or get used to it. Information in the form of alarms can also be useful, but this too may get ignored and may not prevent the error from occurring. The user cannot ignore information in the form of a contact poka yoke, however, since he has to interact with it. The proper assembly information is physically embedded in the material. A good example is a USB cable, which can be inserted in only one way; the USB icon indicates which side is the top. Apple took this approach further by eliminating the need for orientation altogether with its Lightning cables. The socket on the Apple product prevents any other cable from being inserted due to its unique shape.
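As a sketch of this idea, the difference between instruction-based information and a contact poka yoke can be modeled in a few lines of code. The class names and the “icon-side up” rule below are my own invention for illustration, not anything from Fujimoto:

```python
# Hedged sketch: instruction-based information can be ignored, while a contact
# poka yoke embeds the information in the medium so a wrong action cannot complete.
class InstructionOnlyPort:
    """Relies on the operator reading 'insert icon-side up' -- errors slip through."""
    def insert(self, orientation):
        return f"inserted {orientation}"  # a wrong orientation becomes a defect

class ContactPokaYokePort:
    """The shape itself refuses a wrong orientation, like a USB socket."""
    def insert(self, orientation):
        if orientation != "icon-side up":
            raise ValueError("connector does not fit: wrong orientation")
        return "inserted icon-side up"

legacy = InstructionOnlyPort()
print(legacy.insert("icon-side down"))  # the defect goes through unnoticed

usb = ContactPokaYokePort()
try:
    usb.insert("icon-side down")
except ValueError as err:
    print(err)                          # the error is stopped at the source
print(usb.insert("icon-side up"))
```

The contact poka yoke does not rely on the operator noticing anything; the wrong action simply cannot be completed.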

Final Words:

The concept of physical artifacts carrying information is enlightening for me as a Quality Engineer. You can update the process information by updating a fixture to have a contact feature, so that a part can be inserted in only one way. The information of proper orientation is embedded in the fixture. This is much better than updating the work instruction to properly orient the part, because the physical interaction ensures that the proper information is transferred to the operator.

As I was researching for this post, I came across James Gleick, who wrote the book “The Information: A History, a Theory, a Flood”. I will finish off with a story I heard from James Gleick regarding information: when Gleick started working at the New York Times, a wise old editor told him that the reader is not paying for all the news that they put in; the reader is paying for all the news that they leave out.

Always keep on learning…

In case you missed it, my last post was Divine Wisdom and Paradigm Shifts:

Divine Wisdom and Paradigm Shifts:


One of the best books I have read in recent times is The Emperor of All Maladies by the talented Siddhartha Mukherjee, who won the 2011 Pulitzer Prize for it. The book is a detailed history of cancer and humanity’s battle with it. Among the many things that piqued my interest was a quote I had heard attributed to Dr. Deming – In God we trust, all others must bring data.

To tell this story, I must first talk about William S. Halsted. Halsted was a very famous surgeon from Johns Hopkins who came up with the surgical procedure known as the “Radical Mastectomy” in the 1880s. This is a procedure to remove the breast, the underlying muscles and the attached lymph nodes to treat breast cancer. He hypothesized that breast cancer spreads centrifugally from the breast to other areas. Thus, the removal of the breast, underlying muscles and lymph nodes would prevent the spread of cancer. He called this the “centrifugal theory”. Halsted called the procedure “radical” to indicate that the roots of the cancer were removed. Mukherjee wrote that the intent of radical mastectomy was to arrest the centrifugal spread by cutting every piece of it out of the body. Physicians all across America identified the Radical Mastectomy as the best way to treat breast cancer, and the centrifugal theory became the paradigm for breast cancer treatment for almost a century.

There were skeptics of this theory. Its strongest critics were Geoffrey Keynes, a London-based surgeon working in the 1920s, and George Barney Crile, an American surgeon who started his career in the 1950s. They noted that even with the procedures that Halsted had performed, many patients died within four or five years from metastasis (cancer spreading to different organs). The surgeons overlooked these flaws, as they were firm believers in the Radical Mastectomy. Daniel Dennett, the famous American philosopher, talks about the concept of Occam’s Broom, which might explain the thinking process behind ignoring the flaws in a hypothesis. When there is strong acceptance of a hypothesis, any contradicting information may get swept under the rug with Occam’s Broom. The contradictory information gets ignored rather than confronted.

Keynes was even able to perform a local surgery of the breast and, together with radiation treatment, achieve some success. But Halsted’s followers in America ridiculed this approach, coining the name “lumpectomy” for the local surgery. In their minds, the surgeon was removing “just” a lump, and this did not make much sense. They were aligning themselves with the paradigm of the Radical Mastectomy. In fact, some surgeons went further and came up with “superradical” and “ultraradical” procedures – morbidly disfiguring operations in which the breast, underlying muscles, axillary nodes, the chest wall, and occasionally the ribs, part of the sternum, the clavicle and the lymph nodes inside the chest were removed. The idea that “more is better” became prevalent.

Another paradigm in clinical studies of that time was looking only for positive results – is treatment A better than treatment B? This approach could not demonstrate that treatment A was no better than treatment B. Two statisticians, Jerzy Neyman and Egon Pearson, changed the approach with their idea of statistical power. The sample size for a study should be based on the calculated power. Loosely stated, more independent samples mean higher power. Thus, with a large sample size of randomized trials, one can make a claim of “lack of benefit” from a treatment. The Halsted procedure did not get challenged for a long time because the surgeons were not willing to take part in a large-sample-size study.
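The effect of sample size on power can be sketched numerically. The survival percentages and sample sizes below are invented for illustration (they are not trial data), and the calculation uses a loose normal approximation for a two-sided z-test comparing two proportions:

```python
# Hedged illustration of Neyman and Pearson's point: power grows with sample
# size, so a credible "lack of benefit" claim needs a large enough study.
import math
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power to detect a true difference p1 vs p2 with n per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return NormalDist().cdf(abs(p1 - p2) / se - z_crit)

# A small true difference (say 55% vs 50%) is nearly invisible with 100
# patients per arm, but well-powered with 2000 per arm.
for n in (100, 500, 2000):
    print(n, round(power_two_proportions(0.55, 0.50, n), 2))
```

With 100 patients per arm the study has little chance of detecting the difference; with 2000 per arm the power rises well above the conventional 80% threshold.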

A Philadelphia surgeon named Dr. Bernard Fisher was finally able to shift this paradigm in the 1980s. Fisher found no reason to believe in the centrifugal theory. He studied the cases put forth by Keynes and Crile, and concluded that he needed to perform a controlled clinical trial to test the Radical Mastectomy against Simple Mastectomy and Lumpectomy with radiation. The opposition from the surgeons slowly shifted with the strong advocacy of women who wanted a less invasive treatment. Mukherjee cites the Thalidomide tragedy, the Roe v. Wade case, the strong exhortation from Crile to women to refuse to submit to a Radical Mastectomy, and the public attention swirling around breast cancer as reasons for the slow shift in the paradigm. Fisher was finally able to complete the study after ten long years. Fisher stated that he was willing to have faith in divine wisdom, but not in Halsted as divine wisdom. He brusquely told a journalist – “In God we trust. All others must have data.”

The results of the study showed that all three treatments were statistically identical. The group treated with Radical Mastectomy, however, paid heavily for the procedure with no real benefits in survival, recurrence or mortality. The paradigm of Radical Mastectomy shifted and made way for better approaches and theories.

While I was researching this further, I found that the quote “In God we trust…” was attributed to another Dr. Fisher – Dr. Edwin Fisher, brother of Dr. Bernard Fisher – when he appeared before the Subcommittee on Tobacco of the Committee on Agriculture, House of Representatives, Ninety-fifth Congress, Second Session, on September 7, 1978. As part of his presentation, Dr. Fisher said – “I should like to close by citing a well-recognized cliche in scientific circles. The cliche is, ‘In God we trust, others must provide data.’” This is recorded in “Effect of Smoking on Nonsmokers. Hearing Before the Subcommittee on Tobacco of the Committee on Agriculture, House of Representatives. Ninety-fifth Congress, Second Session, September 7, 1978. Serial Number 95-000”. Dr. Edwin Fisher, unfortunately, was not a supporter of the hypothesis that smoking is bad for a non-smoker. He even claimed that people traveling on an airplane are more bothered by crying babies than by the smoke from smokers.


Final Words:

This past year, I was personally affected by a family member suffering from the scourge of breast cancer. During this period of Thanksgiving in America, I am thankful for the doctors and staff who facilitated her recovery. I am thankful for the doctors and experts in the medical field who were courageous enough to challenge the “norms” of the day for treating breast cancer. I am thankful for the paradigm shift(s) that brought better and more effective treatments for breast cancer. More is not always better! I am thankful that they did not accept a hypothesis based on rationalism alone – an intuition about how things might work. I am thankful for all the wonderful doctors and staff out there who take great care in treating cancer patients.

I am also intrigued to find the quote of “In God we trust…” used with the statement that smoking may not have a negative impact on non-smokers.

I will finish with a story of another paradigm shift from Joel Barker in The Business of Paradigms.

A couple of Swiss watchmakers at the Centre Electronique Horloger (CEH) in Neuchâtel, Switzerland, developed the first quartz-based watch. They went to different Swiss watchmakers with the technology that would later revolutionize the watch industry. However, the paradigm at that time was the intricate Swiss watchmaking process with gears and springs. No Swiss watch company was interested in this new technology, which did not rely on gears or springs for keeping time. The inventors then went to a watch convention and set up a booth to demonstrate their new idea. Again, no Swiss watch company was interested in what they had to offer. Two representatives, one from the Japanese company Seiko and the other from Texas Instruments, took notice of the new technology. They purchased the patents, and as they say – the rest is history. The new paradigm became quartz watches. The Swiss, who were at the top of watchmaking with over 50% of the watch market in the 1970s, stepped aside for the quartz watch revolution, marking the decline of their industry. This was later termed the Quartz Revolution.

Always keep on learning…

In case you missed it, my last post was The Best Attribute to Have at the Gemba:

Which Way You Should Go Depends on Where You Are:


I recently read the wonderful book “How Not to Be Wrong: The Power of Mathematical Thinking” by Jordan Ellenberg. I found the book to be enlightening and a great read. Jordan Ellenberg has the unique combination of being knowledgeable and capable of teaching in a humorous and engaging way. One of the gems in the book is – “Which way you should go depends on where you are”. This lesson is about the dangers of misapplying linearity. When we are thinking in terms of abstract concepts, the path from point A to point B may appear to be linear. After all, the shortest path between two points is a straight line. This is linear thinking.

To illustrate this, let’s take the example of poor quality issues on the line. The first instinct to improve quality is to increase inspection. In this case, point A = poor quality and point B = higher quality. If we plot this relationship between quality and inspection, we may incorrectly assume it is linear – increasing inspection results in better quality.

[Figure: assumed linear relationship between inspection and quality]

However, increasing inspection will not result in better quality in the long run and will result in higher costs of production. We must build quality in as part of the normal process at the source and not rely on inspection. In TPS, there are several ways to do this including Poka Yoke and Jidoka.

In a similar fashion, we may look at increasing the number of operators in the hope of increasing productivity. This may work initially. However, increasing production at the wrong points in the assembly chain can hinder the overall flow and decrease overall productivity. Taiichi Ohno, the father of the Toyota Production System, always asked to reduce the number of operators to improve the flow. The Toyota Production System relies on the thinking of its people to improve the overall system.

The two cases discussed above are nonlinear in nature. Increasing one factor may increase the response initially; however, continuing to increase the factor can yield negative results. One example of a nonlinear relationship is shown below:

[Figure: nonlinear relationship between number of operators and productivity, with a red star and a yellow star marking two positions on the curve]

The actual curve may of course vary depending on the particularities of the example. In nonlinear relationships, which way you should go depends on where you are. In the productivity example, if you are at the Yellow star location on the curve, increasing the operators will only decrease productivity. You should reduce the number of operators to increase productivity. However, if you are at the Red star, you should look into increasing the operators. This will increase productivity up to a point, after which the productivity will decrease. Which Way You Should Go Depends on Where You Are!
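This idea can be sketched in a few lines of code. The curve below is entirely made up (a parabola peaking at 20 operators, standing in for the figure above); the point is only that the sign of the local slope, not the goal alone, tells you which way to move:

```python
# Illustrative sketch: "which way you should go depends on where you are".
def productivity(operators):
    return operators * (40 - operators)  # hypothetical concave curve, peak at 20

def which_way(operators, step=1):
    # central difference: compare one step in each direction
    slope = productivity(operators + step) - productivity(operators - step)
    if slope > 0:
        return "add operators"
    if slope < 0:
        return "reduce operators"
    return "at the optimum"

print(which_way(10))   # left of the peak (like the red star): add operators
print(which_way(30))   # right of the peak (like the yellow star): reduce operators
```

The same intervention (“add operators”) helps on one side of the peak and hurts on the other, which is exactly why linear thinking fails here.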

In order to know where you are, you need to understand your process. As part of this, you need to understand the significant factors in the process. You also need to understand the boundaries of the process, where things will start to break down. The only way you can truly learn your process is through experimentation and constant monitoring. It is likely that you did not consider all of the factors or their interactions. Everything is in flux, and the only constant is change. You should be open to input from the operators and allow improvements to happen from the bottom up.

I will finish off with the anecdote of the “Laffer curve” that Jordan Ellenberg used to illustrate the concept of nonlinearity. One political party in America has been pushing for lowering taxes on the wealthy. The conservatives made this concept popular using the Laffer curve. Arthur Laffer was an economics professor at the University of Chicago. The story goes that Arthur Laffer drew the curve on the back of a napkin during a 1974 dinner with senior members of then-President Gerald Ford’s administration. The Laffer Curve is shown below:

[Figure: the Laffer curve – tax revenue plotted against tax rate]

The horizontal axis shows the tax rate, and the vertical axis shows the revenue generated from taxation. If there is no taxation, there is no revenue. If there is 100% taxation, there is also no revenue, because nobody would want to work and make money if they cannot hold on to it. The argument raised was that America was on the right-hand side of the curve, and thus reducing taxation would increase revenue. Whether this assumption was correct has been challenged. Jordan Ellenberg quotes the following passage from Greg Mankiw, a Harvard economist and a Republican who chaired the Council of Economic Advisers under the second President Bush:

Subsequent history failed to confirm Laffer’s conjecture that lower tax rates would raise tax revenue. When Reagan cut taxes after he was elected, the result was less tax revenue, not more. Revenue from personal income taxes fell by 9 percent from 1980 to 1984, even though average income grew by 4 percent over this period. Yet once the policy was in place, it was hard to reverse.

The Laffer curve may not be symmetric as shown above. The curve may not be smooth and even as shown above and could be a completely different curve altogether. Jordan states in the book – All the Laffer curve says is that lower taxes could, under some circumstances, increase tax revenue; but figuring out what those circumstances are requires deep, difficult, empirical work, the kind of work that doesn’t fit on a napkin.

Always keep on learning…

In case you missed it, my last post was Epistemology at the Gemba:

Concept of Constraints in Facing Problems:


In today’s post, I will be looking at the concept of constraints in facing problems. Please note that I did not say “solving problems.” This is because not all problems are solvable. Certain problems, referred to as “wicked problems” or complex problems, are not solvable. These problems have different possible approaches, and none of the approaches can solve the problem completely. Some of the alternatives are better than others, but they may have their own unintended consequences. Examples include global warming and poverty.

My post is related to the manufacturing world. Generally, in the manufacturing world, most problems are solvable. These problems have clear cause-and-effect relationships. They can be solved by using a best practice or a good practice. A best practice is used for obvious problems, where the cause-and-effect relationship is very clear and there is truly one real solution. A good practice is employed where the cause-and-effect relationship is evident only with the help of subject-matter experts; these are called “complicated problems.” There are also complex problems, where the cause-and-effect relationships are not evident and may be understood only after the fact. An example is launching a new product and ensuring a successful launch. Most of the time, the failures are studied and the reasons for the failure are “determined” after the fact.

The first step in tackling these problems is to understand what type of problem it is. Sometimes, the method to solve a problem is prescribed before the problem is understood. Some of the methods assume that the problem has a linear cause-and-effect relationship. An example is 5 Whys. 5 Whys assumes that there is a linear relationship between cause and effect. This is evident in the question – “why did x happen?” This works fine for obvious problems. It may not work as well for complicated problems, and it will not work for complex problems. One key thing to understand is that problems can be composite: some aspects may be obvious, while other aspects may be complicated. Using a prescribed method can be ineffective in these cases.
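The linearity assumption in 5 Whys can be made concrete in code: the method walks a single cause-to-effect chain. The chain below is invented for illustration (loosely in the spirit of Taiichi Ohno's well-known machine-stoppage example):

```python
# A minimal sketch of why 5 Whys assumes linearity: the data structure itself
# forces exactly one cause per effect.
causes = {
    "machine stopped": "fuse blew",
    "fuse blew": "bearing overloaded",
    "bearing overloaded": "insufficient lubrication",
    "insufficient lubrication": "pump intake clogged",
    "pump intake clogged": "no strainer on the intake",
}

def five_whys(problem, cause_of, depth=5):
    chain = [problem]
    for _ in range(depth):
        if chain[-1] not in cause_of:  # no single known cause: the chain breaks
            break
        chain.append(cause_of[chain[-1]])
    return chain

print(" -> ".join(five_whys("machine stopped", causes)))
```

For a complex problem, where an effect emerges from many interacting causes, no such one-to-one mapping exists, which is precisely the limitation described above.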

The concept of constraints is tightly related to the concept of variety. The best resource for this is Ross Ashby’s “An Introduction to Cybernetics” [1]. Ashby defined variety as the number of distinct elements in a set of distinguishable elements, or as the logarithm to base 2 of that number. Thus, we can say that the variety of genders is 2 (male or female), or 1 bit (based on the logarithm calculation). Ashby defined constraint as a relation between two sets. A constraint exists only when one set’s variety is lower than the other set’s variety. Ashby gives the example of a school that admits only boys. Compared to the set of genders (boys and girls), the school’s variety is lower (only boys). Thus, the school has a constraint imposed on it.
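Ashby's two definitions of variety are simple enough to state in code. This sketch just restates the school example from above:

```python
# Variety per Ashby: the count of distinct elements in a set, or the base-2
# logarithm of that count, measured in bits.
import math

def variety(elements):
    return len(set(elements))

def variety_bits(elements):
    return math.log2(variety(elements))

genders = ["boy", "girl"]
school_admissions = ["boy", "boy", "boy"]  # the school admits only boys

print(variety(genders), variety_bits(genders))                      # 2 elements, 1.0 bit
print(variety(school_admissions), variety_bits(school_admissions))  # 1 element, 0.0 bits
```

The school's variety (1, or 0 bits) is lower than the variety of the gender set (2, or 1 bit), which is exactly Ashby's condition for a constraint.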

A great resource for this is Dave Snowden and his excellent Cynefin framework [2]. Snowden says that ontology precedes epistemology, or in other words, data precedes framework. The fundamental properties of the problem must be understood before choosing a “tool” to address it. Prescribing a standard tool for all situations constrains oneself and leads to ineffective attempts at finding a solution. When a leader says we need to use lean or six sigma, this is an attempt to add constraints by removing variety. Toyota’s methodologies, collectively referred to as the Toyota Production System, were developed for Toyota’s own problems. They identified the problems and then proceeded to find ways to address them. They did not have a framework to go by; they created the framework based on decades of experience and tweaking. Thus, blindly copying their methodologies is applying constraints on yourself that may be unnecessary. As the size or scope of a project increases, so does its complexity. Thus, enterprise-wide applications of “prescribed solutions” are generally not effective, since the cause-effect relationships cannot be completely predicted, leading to unintended consequences, inefficiency and ineffectiveness. On the other hand, Ashby advises us to take note of any existing constraints in a system, and to take advantage of those constraints to improve efficiency and effectiveness.

A leader should thus first understand the problem in order to determine the approach to proceed. Sometimes, one may have to use a composite of tools. One needs to be open to modification by having closed loops with feedback mechanisms, so that the approach can be adjusted as needed. It is also advisable to use heuristics like genchi genbutsu, since they are general guidelines or rules of thumb and do not pose a constraint. Once a methodology is chosen, a constraint is applied, since the number of available tools (the variety) has diminished. Thinking in terms of constraints prevents the urge to treat everything as a nail when your preferred tool is a hammer.

I will finish with a great story from the great Zen master Huangbo Xiyun:

Huangbo once addressed the assembly of gathered Zen students and said: “You are all partakers of brewer’s grain. If you go on studying Zen like that, you will never finish it. Do you know that in all the land of T’ang there is no Zen teacher?”
Then a monk came forward and said, “But surely there are those who teach disciples and preside over the assemblies. What about that?”
Huangbo said, “I do not say that there is no Zen, but that there is no Zen teacher…”

Always keep on learning…

In case you missed it, my last post was Jidoka, the Governing Principle for Built-in-Quality:

[1] https://www.amazon.com/Introduction-Cybernetics-W-Ross-Ashby/dp/1614277656

[2] http://cognitive-edge.com/blog/part-two-origins-of-cynefin/

Process Validation and the Problem of Induction:

Non-uniformity of Nature clock drawing [5]

From “The Simpsons”

Marge: I smell beer. Did you go to Moe’s?

Homer: Every time I have beer on my breath, you assume I’ve been drinking.[1]

In today’s post, I will be looking at process validation and the problem of induction.  I have looked at process validation through another philosophical angle by using the lesson of the Ship of Theseus [4] in an earlier post.

US FDA defines process validation [2] as:

“The collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product.”

My emphases in FDA’s definition are on two words: “capable” and “consistently”. One of the misconceptions about process validation is that once the process is validated, it achieves an almost immaculate status. One of the horror stories I have heard from my friends in the Medical Devices field is of a manufacturer that stopped inspecting the product because the process was validated. The problem with validation is the problem of induction. Induction is a process in philosophy – a means to obtain knowledge by looking for patterns in observations and coming to a conclusion. For example: the swans that I have seen so far are white, thus I conclude that ALL swans are white. This is a famous example of the problem of induction, because black swans do exist. However, the data I collected showed that all of the swans in my sample were white. My process of collection and evaluation of the data appears capable, and the output consistent.
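The swan trap can even be put in numbers: the probability of drawing an all-white sample shrinks only slowly with sample size when the exception is rare. A sketch (the 1-in-1,000 rate is purely illustrative):

```python
def chance_of_all_white(p_black, n):
    """Probability that n independently sampled swans are all white,
    when a fraction p_black of the population is black."""
    return (1 - p_black) ** n

# Even if 1 in 1,000 swans were black, a sample of 100 swans would
# come back all white roughly 90% of the time.
print(round(chance_of_all_white(0.001, 100), 2))  # 0.9
```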

The misconception that the manufacturer had in the example above was the assumption that the process is going to remain the same, and thus the output will also remain the same. This is the assumption that the future and the present are going to resemble the past. This type of thinking is termed the assumption of the “uniformity of nature” in philosophy. The problem of induction was first thoroughly examined by the great Scottish philosopher David Hume (1711-1776). He was an empiricist who believed that knowledge should be based on one’s sense-based experience.

One way of looking at process validation is to view it as a means to develop a process that is optimized to withstand the variations of its inputs. Validation is strictly based on the inputs at the time of validation. The six inputs – man, machine, method, material, measurement (the inspection process) and the environment – can all suffer variation as time goes on. These variations reveal the problem of induction: the results are not going to stay the same. There is no uniformity of nature. The uniformities observed in the past are not going to hold for the present and the future.

In general, when we are doing induction, we should try to meet five conditions:

  1. Use a large sample size that is statistically valid
  2. Make observations under different and extreme circumstances
  3. Ensure that none of the observations/data points contradict one another
  4. Try to make predictions based on your model
  5. Look for ways to make your model fail (test it to failure)

The use of statistics is considered a must for process validation. The use of a statistically valid sample size ensures that we make meaningful inferences from the data. The use of different and extreme circumstances is the gist of operational qualification, or OQ, the second qualification phase of process validation. Above all, we should understand how the model works. This helps us predict how the process behaves, so any contradicting data point must be evaluated. This helps us to listen to the process when it is talking. We should keep looking for ways to see where it fails in order to understand the boundary conditions. Ultimately, the more you try to make your model fail, the better and more refined it becomes.
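As a worked example of a statistically valid sample size, one widely used zero-failure (success-run) calculation for attribute data is n = ln(1 − C) / ln(R), where C is the confidence level and R is the reliability to be demonstrated. The guidance documents do not prescribe this particular formula; it is a common industry approach, sketched below:

```python
import math

def success_run_sample_size(confidence, reliability):
    """Smallest n such that n consecutive passing units demonstrate the
    given reliability at the given confidence level (zero failures allowed)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 95% reliability with 95% confidence takes 59 passing units.
print(success_run_sample_size(0.95, 0.95))  # 59
```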

The FDA’s guidance on process validation [2] and the GHTF (Global Harmonization Task Force) guidance on process validation [3] both try to address the problem of induction through “Continued Process Verification” and “Maintaining a State of Validation”. We should continue monitoring the process to ensure that it remains in a state of validation. Anytime any of the inputs change, or if the outputs show a declining trend, we should evaluate the possibility of revalidation as a remedy for the problem of induction. This brings to mind the quote “Trust, but verify”. It is said that Ronald Reagan got this quote from Suzanne Massie, an American writer on Russia. The original Russian proverb is “Doveryai, no proveryai”.

I will finish off with a story from the great Indian epic Mahabharata, which points to the lack of uniformity in nature.

Once a beggar asked for some help from Yudhishthir, the eldest of the Pandavas. Yudhishthir told him to come on the next day. The beggar went away. At the time of this conversation, Yudhishthir’s younger brother Bhima was present. He took one big drum and started walking towards the city, beating the drum furiously. Yudhishthir was surprised.

He asked the reason for this. Bhima told him:
“I want to declare that our revered Yudhishthir has won the battle against time (Kaala). You told that beggar to come the next day. How do you know that you will be there tomorrow? How do you know that beggar would still be alive tomorrow? Even if you both are alive, you might not be in a position to give anything. Or, the beggar might not even need anything tomorrow. How did you know that you both can even meet tomorrow? You are the first person in this world who has won the time. I want to tell the people of Indraprastha about this.”

Yudhishthir got the message behind this talk and called that beggar right away to give the necessary help.

Always keep on learning…

In case you missed it, my last post was If a Lion Could Talk:

[1] The Simpsons – Season 27; Episode 575; Every Man’s Dream

[2] https://www.fda.gov/downloads/drugs/guidances/ucm070336.pdf

[3] https://www.fda.gov/OHRMS/DOCKETS/98fr/04d-0001-bkg0001-10-sg3_n99-10_edition2.pdf

[4] https://harishsnotebook.wordpress.com/2015/03/08/ship-of-theseus-and-process-validation/

[5] Non-uniformity of Nature Clock drawing by Annie Jose

In-the-Customer’s-Shoes Quality:

shoes

I had a conversation recently with a Quality professional from another organization. The topic somehow drifted to the strict Quality standards in Japan. The person talked about how his product gets rejected by his Japanese counterparts for small blemishes, debris, etc. The “defects” met the corporate standards, yet the product was rejected at their Japanese warehouse. This conversation led me to write this post. My response was that the Japanese were looking at the product through the eyes of the customer. Small blemishes and debris impact the perception of quality, and can create distaste as the product is being used.

In Japanese, the term for quality is Hinshitsu (hin = goods, and shitsu = quality). With the advent of TQM (Total Quality Management), the idea of two “Qualities” was made more visible by Professor Noriaki Kano. He termed these:

  1. Miryokuteki Hinshitsu, or Attractive Quality
  2. Atarimae Hinshitsu, or Must-Be Quality

These concepts were not exactly new, but Prof. Kano was able to put more focus on them. “Attractive Quality” refers to something that fascinates or excites the customer, while “Must-Be Quality” refers to everything the customer expects from the item. For example, a new phone in the market is expected to function out of the box. It should be able to make calls, connect to the internet, take pictures, play games, etc. But if the phone came with a case, or with the name of the owner etched on the back, then that particular attribute is exciting for the customer. It was not something the customer was expecting, and thus it brings “joy”. The interesting thing about Attractive Quality is that today’s Attractive Quality becomes tomorrow’s Must-Be Quality. Would you purchase a phone today without the ability to browse the internet or take pictures? These features were added as Attractive Quality features in the past, and they have become Must-Be Quality features today.

The Japanese Quality guru Kaoru Ishikawa called these “forward-looking qualities” and “backward-looking qualities”. He called special features like “easy to use” and “feels good to use” forward-looking qualities. In contrast, “absence of defects” was called backward-looking. The father of Statistical Quality Control, Walter Shewhart, called these Objective and Subjective qualities.

Sometimes Miryokuteki Hinshitsu also refers to the “Aesthetic Quality” of the product. Apple products are famous for this. A lot of attention is paid by Apple’s designers to the Aesthetic Quality of their products. The iPhone should feel and look good. Even the package it comes in should say that it contains a “quality product”. In Japanese culture, the concept of aesthetics is rooted in “Shibui” and “Mono no aware”. Shibui can be defined as a quality associated with physical beauty “that has a tranquil effect on the viewer”. It brings to attention naturalness, simplicity and subdued tone. Mono no aware, on the other hand, refers to the merging of one’s identity with that of an object. (Source: The Global Business by Ronnie Lessem, 1987.)

Total Quality Management (or Total Quality Control, as it is often referred to in the Japanese books) was taken quite seriously by the Japanese manufacturers. The following concepts were identified as essential:

  1. Customer orientation
  2. The “Quality first” approach
  3. Quality is everyone’s responsibility – from top management down
  4. Continual improvement of Quality
  5. Quality assurance is the responsibility of the producer, not of the purchaser or the inspection department
  6. Quality should be extended from the hardware (i.e., the product) to the software (i.e., services, work, personnel, departments, management, corporations, groups, society and the environment)

Source: Kaoru Ishikawa

Rather than relying on inspection, the Japanese manufacturers, including Toyota and Nissan, believed in building in quality throughout the entire process. Awareness of quality was seen as essential for the operator involved in making the product. It became a matter of owning the process and taking pride in one’s work. Kenichi Yamamoto, a former chairman of Mazda, was quoted by BusinessWeek as saying, “any manufacturer can produce according to statistics.” Yamamoto’s remark is about not focusing simply on quantities. Even when we are focusing on quality, we should focus on both the objective and the subjective quality. This reflects how our company culture views the ownership of quality.

Final Words:

I have always wondered why the windows in an airplane are not aligned with the seats. It appears that the plane’s body is built to a standard, and the seats are added later based on what the carriers want. There is not always a focus on what the customer wants, which explains why the seats are not aligned with the windows. I refer to this idea of product quality as “in-the-customer’s-shoes quality”. If you were the customer, how would you like the product?

I will finish off with a story I heard from one of the episodes of the delightful TV show, “Japanology Plus”. This story perfectly and literally captures the concept of in-the-customer’s-shoes quality.

The episode featured an interview with a “Japanophile” who had been living in Japan for quite a long time. He talked about one incident that truly changed his view of Japan. He went to a small tea house in Japan, where he was requested to remove his shoes before entering the room. After the tea, when he came out, he was pleasantly surprised to see that his shoes had been turned to face away from the room. This way, he did not have to turn around and fumble to put his shoes on; he could simply slip them on on his way out. He was taken aback by the thoughtfulness of the host.

Always keep on learning…

In case you missed it, my last post was “Four Approaches to Problem Solving”.

Four Approaches to Problem Solving:

dc

As a Quality professional, I am always interested in learning about problem solving. In today’s post I will be looking at the four approaches to problem solving as taught by the late, great Systems Thinker Russell Ackoff. He called these “Problem Treatments” – the ways one deals with problems. They are:

  1. Absolution – This is a common reaction to a problem. It means to ignore the problem with the hope that it will solve itself or go away of its own accord.
  2. Resolution – This means to do something that yields an outcome that is “good enough”, in other words, that “satisfices”. This involves a clinical approach to problems that relies heavily on past experience, trial and error, qualitative judgment, and so-called common sense.
  3. Solution – This means to do something that yields the best outcome that “optimizes”. This involves a research approach to problems, one that often relies on experimentation, quantitative analysis, and uncommon sense. This is the realm of effective counterintuitive solutions.
  4. Dissolution – This means to redesign either the entity that has the problem or its environment in such a way as to eliminate the problem and enable the entity involved to do better in the future than the best it can do today – in a word, to “idealize”.

I see these also as the progression of our reaction to a big problem. At first, we try to ignore it. Then we try to put band-aids on it. Then we try to make the process better, and finally we change a portion of the process so that the problem cannot exist in the new process. Ackoff told a story in his book, “The Democratic Corporation”, to further explain these ideas. Ackoff was called in by a consultant to help with a problem in a large city in Europe. The city used double-decker buses for public transportation, each with a driver and a conductor. The driver got paid extra based on how efficiently he could keep up with the schedule, and the conductor got paid extra based on how efficiently he could collect fares and keep track of receipts. The conductor was also in charge of letting the driver know when the bus was ready to move, by signaling from the rear entrance of the bus. During peak hours, problems arose. To handle the high volume of passengers, conductors started to let passengers in without collecting fares, with the thought that the fares could be collected between stops. The conductors could not always get back to the entrance to signal to the driver that they were ready to move. The drivers started to determine for themselves when they could move, by checking that no one was getting off or onto the bus. All this caused delays that were costly to the drivers. The result was great hostility between the drivers and the conductors: each group was trying to do what was best for itself.

The management at first tried to “absolve” itself by pretending that the problem would go away on its own. When things got worse, the management tried to “resolve” the problem by proposing to retract the incentives. This was not well received by either the drivers or the conductors, and the management was not willing to increase their wages to offset the incentives. Next, the management tried to “solve” the problem by proposing that the driver and the conductor share the total sum of the incentives. This also was not well received by the drivers and the conductors, because of a lack of trust and an unwillingness to increase their interdependence.

Finally, Ackoff proposed a modification to the process: during peak hours, the conductors should be taken off the bus and placed at the stops. This way, a conductor could collect fares from the people already waiting at the stop, verify the receipts of the people getting off the bus, and easily signal the bus driver. The problem was “dissolved” by this modification to the process.

Final Words:

One of the best teachings from Ackoff for Management is that to manage a system effectively, you must focus on the interactions of the parts rather than their behaviors (actions) taken separately. The next time you are facing a problem, think and understand if you are trying to absolve, resolve, solve or dissolve the problem. I will finish with a great story from Osho about the butcher who never had to sharpen his knife.

There was a great butcher in Japan and he was said to be a Zen master. After hearing about him, the emperor came to see him at his work. The emperor asked only one thing, about the knife that he used to kill the animals. The knife looked so shiny, as if it had just been sharpened.

The emperor asked, “Do you sharpen your knife every day?”

He said, “No, this is the knife my father used, and his father used, and it has never been sharpened. But we know exactly the points where it has to cut the animal so there is a minimum of pain possible — through the joints where two bones meet. The knife has to go through the joint, and those two bones that meet there go on sharpening the knife. And that is the point where the animal is going to feel the minimum pain. I am aware of the interactions.”

“For three generations we have not sharpened the knife. A butcher sharpening a knife simply means he does not know his art.”

Always keep on learning…

In case you missed it, my last post was Respect for People in light of Systems Thinking.

The Pursuit of Quality – A Lesser Known Lesson from Ohno:

Ohno

In today’s post, I will be looking at a lesser known lesson from Taiichi Ohno regarding the pursuit of Quality.

“The pursuit of quantity cultivates waste while the pursuit of quality yields value.”

Ohno was talking about using andons and the importance of resisting mass-production thinking. Andon means “lantern” in Japanese, and it is a form of visual control on the floor. Toyota requires and requests the operators to pull the andon cord to stop the line if a defect is found, and to alert the lead about the issue. Ohno said the following about andons:

“Correcting defects is necessary to reach our goal of totally eliminating waste.”

In Japan prior to the oil crisis of the early 1970s, all the other companies were buying high-volume machines to increase output. They reasoned that they could store the surplus in the warehouse and sell it when the time was right. Toyota, on the other hand, resisted this and built only what was needed. According to Ohno, the companies following mass-production thinking got a rude awakening in the wake of the oil crisis, since they could not dispose of their high inventory. Meanwhile, Toyota thrived and its profits increased. The other companies started taking notice of the Toyota Production System.

Ohno’s lesson of the pursuit of quality to yield value struck a chord with me. This concept is similar to Dr. Deming’s chain reaction model. Dr. Deming taught us that improvement of quality begets the natural and inevitable improvement of productivity. His entire model is shown below (from his book “Out of the Crisis”).

Deming Chain reaction

Dr. Deming taught the Japanese workers that the defects and faults that get into the hands of the customer lose the market and cost him his job. Dr. Deming taught the Japanese management that everyone should work towards a common aim – quality.

Steve Jobs Story:

I will finish with a story I heard from Tony Fadell, who worked as a consultant for Apple and helped with the creation of the iPod. Tony said that Steve Jobs did not like the “Charge Before Use” sticker on all of the electronic gadgets available at that time. Jobs argued that the customer had paid money anticipating using the gadget immediately, and that the delay from charging takes away from customer satisfaction. The normal burn-in period for the iPod used to be 30 minutes. Burn-in is part of the Quality/Reliability inspection, where the electronic equipment runs certain cycles for a period of time with the intent of stressing the components to weed out any defective or “weak” parts. Jobs changed the burn-in time to two hours, so that when the customer got the iPod, it was fully charged and ready to use right away. This was a 300% increase in the inspection time and would have impacted the lead time. Traditional thinking would argue that this was not a good decision. However, this counterintuitive approach was welcomed by the customers, and nowadays it is the norm that electronic devices come charged so that the end user can start using them immediately.

Always keep on learning…

In case you missed it, my last post was Challenge and Kaizen.

Eight Lessons from Programming – At the Gemba:

At the gemba - coding

In today’s post, I will be writing about the eight lessons I learned from Programming. I enjoy programming, and developing customer centric programs. I have not pursued a formal education in programming, although I did learn FORTRAN and BASIC as part of my Engineering curriculum. Whatever I have learned, I learned with an attitude of “let’s wing it and see”.

  • Be Very Dissatisfied with Repetitive Activities:

Our everyday life is riddled with repetition. This is the operative model of a business: design a product, and then make it again and again. This repetitive way of doing things can sometimes be very inefficient. The programmer should have a keen eye for recognizing the repetitive non-value-adding activities that can be easily automated. If you have to generate a report every week, automate it so that it is generated every week with minimal effort from you.
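As a minimal sketch of what such automation could look like (the file layout and the “defect_type” column are hypothetical):

```python
import csv
from collections import Counter

def weekly_defect_summary(path):
    """Build the weekly report from a defect-log CSV instead of retyping it.
    Expects a header row containing a 'defect_type' column."""
    with open(path, newline="") as f:
        counts = Counter(row["defect_type"] for row in csv.DictReader(f))
    return "\n".join(f"{defect}: {n}" for defect, n in counts.most_common())
```

Scheduled with cron or Task Scheduler, the report then generates itself every week.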

  • There is Always a Better Way of Doing Things:

Along the same lines as the first lesson, you must realize that there is always a better way of doing things. The best is not here yet, nor will it ever be. This is the spirit of kaizen. Even when a process has been automated, there is still big room left for improvement. The biggest room certainly is the room for improvement.

  • Never Forget Making Models:

When a Lean Practitioner is looking at a system, creating a model is the first step. This model could be a mental model, a mathematical model or even a small scale physical model. This model can even be a basic flowchart. This is part of the Plan phase of PDCA. How do the components work with each other? How does the system interact with the environment? What happens when step A is followed by Step B? A good programmer should understand the system first before proceeding with creating programs. A good programmer is also a good Systems Thinker.

  • Keep Memory in Mind:

A good programmer knows that using up a lot of memory and not freeing up memory can cause the program to hang and sometimes crash. Memory Management is an important lesson. This is very much akin to the concept of Muri in Lean. Overburdening the resources has an adverse impact on productivity and quality, and it is not a sustainable model in the long run.

  • Walk in Their Shoes:

A good programmer should look at the program from the end user’s viewpoint. Put yourself in their shoes, and see whether your program is easy to use. Programmers are sometimes very focused on adding as many features as possible, when the end user requires only a few. There is some similarity with the use of lean or six sigma tools at the Gemba: if it is not easy to use, the end users will find a way around it. This brings us to the next lesson.

  • Listen to the Gemba:

One of the lessons I learned early in my career is that I am not the owner of the program I write. The person using the program is the owner. If I do not listen to the end user then my program is not going to be used. I do not make the program for me; I make it for the end user. Less can be more and more can be less. The probability of a program being successful is inversely proportional to the distance of gemba from the source of program creation.

  • Documentation:

I wrote at the beginning that I learned programming from a “winging it” attitude. However, I soon learned the importance of documentation. A good programmer relies on good documentation. The documentation should explain the logic of the program, the flow of the program, how it will be tested and qualified, how the program changes will be documented and how the bugs will be tracked. The simplest tool for documentation can be a checklist. My favorite view on using checklists is – not using a checklist for a project is like shopping without a shopping list. You buy several things that are not needed, and do not buy the things that you actually need.

  • Keep a Bugs List – Learn from Mistakes:

Bugs to a programmer are like problems on a factory floor to a lean practitioner – it depends on how you view them. For a lean practitioner, problems are a gold mine: they are all opportunities to improve. In the same line of thinking, bugs are also a programmer’s friends. You learn the most from making mistakes. No program is 100% bug free. Each bug is unique and provides a great lesson. The goal is to learn from them so that you do not repeat them.

Another important lesson is – ensure that fixing a problem does not cause new problems. A programmer is prone to the law of unintended consequences. Any change to a program should be tested from a system standpoint.

Final Words:

I will finish off with my favorite anecdote about programming:

When Apple introduced the iPod, they were very proud of its “shuffle” feature. A computer has no way of producing a truly random order, but there are several algorithms that can generate a pretty good approximation. Apple utilized such an algorithm. It was so good that users started complaining, because sometimes the same song was repeated, or the same artist was played repeatedly. That is not how random should be, the end users argued. Steve Jobs then asked his programmers to change the algorithm to make it less random, so that it would feel more random.

The digital music service Spotify faced the same problem. As they explained on their blog:

“If you just heard a song from a particular artist, that doesn’t mean that the next song will be more likely from a different artist in a perfectly random order. However, the old saying says that the user is always right, so we decided to look into ways of changing our shuffling algorithm so that the users are happier. We learned that they don’t like perfect randomness.”

For the end user, the perception of randomness meant that similar songs were spaced roughly evenly from one another. The end user did not want randomness in a theoretical sense; they wanted randomness in a human, practical sense.
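That human sense of random can be approximated by spreading each artist’s songs roughly evenly across the playlist, with a little jitter so the spacing does not look mechanical. The sketch below illustrates the idea; it is not Spotify’s actual algorithm:

```python
import random

def spread_shuffle(songs):
    """Shuffle a list of {'artist': ..., 'title': ...} dicts so that songs by
    the same artist land roughly evenly spaced, rarely back to back."""
    by_artist = {}
    for song in songs:
        by_artist.setdefault(song["artist"], []).append(song)

    positioned = []
    for tracks in by_artist.values():
        random.shuffle(tracks)                  # vary the order within each artist
        offset = random.random() / len(tracks)  # random starting slot per artist
        for i, song in enumerate(tracks):
            # Evenly spaced slots plus jitter, so different artists interleave.
            pos = offset + i / len(tracks) + random.uniform(-0.05, 0.05)
            positioned.append((pos, song))
    return [song for _, song in sorted(positioned, key=lambda pair: pair[0])]
```

A plain Fisher–Yates shuffle is “more random” in the mathematical sense; this version deliberately trades some of that randomness for the feel the listeners asked for.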

Spotify changed their algorithm in 2014. “Last year, we updated it with a new algorithm that is intended to feel more random to a human.”

Always keep on learning…

In case you missed it, my last post was Be Like Coal At the Gemba.