The Cybernetic View of Quality Control:


My last post was a review of Mark Graban’s wonderful book, Measures of Success. After reading Graban’s book, I started rereading Walter Shewhart’s books, Statistical Method from the Viewpoint of Quality Control (edited by Dr. Deming) and Economic Control of Quality of Manufactured Product. Both are excellent books for any Quality professional. One of the themes that stood out for me while reading the two books was the concept of cybernetics. Today’s post is a result of studying Shewhart’s books and Paul Pangaro’s articles on cybernetics.

The term “cybernetics” has its origins in the Greek word κυβερνήτης, meaning “steersman”, and is generally translated as “the art of steering”. Norbert Wiener, the great American mathematician, made the term famous with his 1948 book, Cybernetics: Or Control and Communication in the Animal and the Machine. Wiener adapted the Greek word to evoke the rich interaction of goals, predictions, actions, feedback, and response in systems of all kinds.

Loosely put, cybernetics is about having a goal and a self-correcting system that adjusts to perturbations in the environment so that the system can keep moving towards the goal. This is referred to as “first-order cybernetics”. An example, remaining true to the Greek origin of the word, is a ship sailing towards a destination. When there are perturbations in the form of wind, the steersman adjusts the path accordingly and maintains the course. Another common example is a thermostat. The thermostat is able to maintain the required temperature inside the house by adjusting to the external temperature: it “kicks on” when a specified temperature limit is tripped and cools or heats the house. An important concept in cybernetics is Ross Ashby’s “law of requisite variety”, which states that only variety can absorb variety. If the wind is extreme, the steersman may not be able to steer the ship properly; in other words, the steersman lacks the requisite variety to handle or absorb the external variety. The main mechanism of cybernetics is the closed feedback loop that helps the steersman adjust and maintain the course. This is the essence of a regulation loop: sense, compare, and act.
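To make the loop concrete, here is a minimal, made-up Python sketch (not from Shewhart or Pangaro) of a first-order cybernetic system: a thermostat that senses the room temperature, compares it with a goal, and acts by switching a heater on or off while the outside temperature keeps perturbing the room. All numbers are illustrative.

```python
import random

GOAL = 21.0        # desired room temperature (degrees C)
DEADBAND = 0.5     # hysteresis so the heater does not chatter

def simulate(hours=12, outside=5.0):
    """Minimal sense-compare-act loop: a thermostat steering toward a goal."""
    temp = 18.0
    heater_on = False
    for hour in range(hours):
        # Sense: measure the current state (with a little noise)
        measured = temp + random.uniform(-0.2, 0.2)
        # Compare: check the measurement against the goal
        if measured < GOAL - DEADBAND:
            heater_on = True
        elif measured > GOAL + DEADBAND:
            heater_on = False
        # Act: the action (heat) and the environment (heat loss) change the state
        heat_gain = 2.0 if heater_on else 0.0
        heat_loss = 0.1 * (temp - outside)     # perturbation from outside
        temp += heat_gain - heat_loss
        print(f"hour {hour:2d}: temp={temp:5.1f}  heater={'on' if heater_on else 'off'}")

if __name__ == "__main__":
    simulate()
```

The negative feedback is the key point: the output (heat) reduces the very deviation that triggered it, which is exactly what McCulloch describes next.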

Warren McCulloch, the American cybernetician, explained cybernetics as follows:

Narrowly defined it (cybernetics) is but the art of the helmsman, to hold a course by swinging the rudder so as to offset any deviation from that course. For this the helmsman must be so informed of the consequences of his previous acts that he corrects them – communication engineers call this ‘negative feedback’ – for the output of the helmsman decreases the input to the helmsman. The intrinsic governance of nervous activity, our reflexes, and our appetites exemplify this process. In all of them, as in the steering of the ship, what must return is not energy but information. Hence, in an extended sense, cybernetics may be said to include the timeliest applications of the quantitative theory of information.

Walter Shewhart’s ideas of statistical control work well with these cybernetic ideas. Shewhart purposefully used the term “control” for his field; control, or regulation, is a key concept in cybernetics, as explained above. Shewhart defined control as:

A phenomenon is said to be controlled when, through the use of past experience, we can predict at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.

Shewhart expanded further:

The idea of control involves action for the purpose of achieving a desired end. Control in this sense involves both action and a specified end.

…We should keep in mind that the state of statistical control is something presumably to be desired, something to which one may hope to attain; in other words, it is an ideal goal.

Shewhart’s view of control aligns very well with the teleological aspects of cybernetics. From here, Shewhart develops his famous Shewhart cycle as a means to maintain statistical control. Shewhart wrote:

Three steps in quality control. Three senses of statistical control. Broadly speaking, there are three steps in a quality control process: the specification of what is wanted, the production of things to satisfy the specification, and the inspection of things produced to see if they satisfy the specification.

The three steps (making a hypothesis, carrying out an experiment, and testing the hypothesis) constitute a dynamic scientific process of acquiring knowledge. From this viewpoint, it is better to show them as forming a sort of spiral gradually approaching a circular path, which would represent the idealized case where no evidence found in the testing of the hypothesis indicates a need for changing the hypothesis. Mass production viewed in this way constitutes a continuing and self-corrective method for making the most efficient use of raw and fabricated materials.

The Shewhart cycle as he proposed is shown below:

[Figure: Shewhart’s cycle as originally proposed]

One of the criteria Shewhart developed for his model was that it should be as simple as possible and adaptable in a continuing and self-corrective operation of control. The idea of self-correction, as part of maintaining the course, is a key point of cybernetics.

The brilliance of Shewhart was in providing guidance on when we should react and when we should not react to the variations in the data. He stated that a necessary and sufficient condition for statistical control is to have a constant system of chance causes… It is necessary that differences in the qualities of a number of pieces of a product appear to be consistent with the assumption that they arose from a constant system of chance causes… If a cause system is not constant, we shall say that an assignable cause is present.

Shewhart continued:

My own experience has been that in the early stages of any attempt at control of a quality characteristic, assignable causes are always present even though the production operation has been repeated under presumably the same essential conditions. As these assignable causes are found and eliminated, the variation in quality gradually approaches a state of statistical control as indicated by the statistics of successive samples falling within their control limits, except in rare instances.

We are engaging in a continuing, self-corrective operation designed for the purpose of attaining a state of statistical control.

The successful quality control engineer, like the successful research worker, is not a pure reason machine but instead is a biological unit reacting to and acting upon an everchanging environment.

James Wilk defined cybernetics as:

Cybernetics is the study of justified intervention.

This is an apt definition when we look at quality control, as viewed by Shewhart. We have three options when it comes to quality control:

  1. If we have an unpredictable system, then we work to eliminate the causes of signals, with the aim of creating a predictable system.
  2. If we have a predictable system that is not always capable of meeting the target, then we work to improve the system in a systematic way, aiming to create a new system whose results now fluctuate around a better average.
  3. When the range of predictable performance is always better than the target, then there’s less of a need for improvement. We could, however, choose to change the target and then continue improving in a systematic way.

Source: Measures of Success (Mark Graban, 2019)

Final Words:

Shewhart wrote “Statistical Method from the Viewpoint of Quality Control” in 1939, nine years before Wiener’s Cybernetics book. The use of statistical control allows us to have a conversation with a process. The process tells us what the limits are, and as long as the data points plot randomly within the two limits, we can assume that whatever we are seeing is due to chance or natural variation. The data should be random and without any order. When we see some manner of order, such as a trend or a point outside a limit, we should look for an assignable cause; the data points are not necessarily due to chance anymore. As we keep plotting, we should improve our process and recalculate the limits.
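To make that conversation concrete, here is a hedged sketch (my own, with made-up data) of how the natural process limits for an individuals (XmR) chart, the chart behind Process Behavior Charts, are commonly calculated: the average plus or minus 2.66 times the average moving range.

```python
def xmr_limits(values):
    """Compute the average and natural process limits for an XmR chart."""
    n = len(values)
    mean = sum(values) / n
    # Average moving range between consecutive points
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # Natural process limits: mean +/- 2.66 * average moving range
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Made-up weekly metric values
data = [52, 48, 50, 55, 47, 53, 49, 51, 46, 54]
mean, lower, upper = xmr_limits(data)
print(f"average={mean:.1f}  lower limit={lower:.1f}  upper limit={upper:.1f}")
```

Points falling randomly between the two limits are the voice of chance causes; points outside them are the process telling us an assignable cause is likely present.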

I will finish off with Dr. Deming’s enhancement of Shewhart’s cycle, taken from a presentation by Clifford L. Norman. This was part of the evolution of the PDSA (Plan-Do-Study-Act) cycle, which later became famous as the PDCA (Plan-Do-Check-Act) cycle. This version showed only three steps, with a decision point after step 3.

[Figure: Deming’s enhancement of the Shewhart cycle]

The updated cycle has lots of nuggets in it, such as experimenting on a small scale and reflecting on what we learned.

Always keep on learning…

In case you missed it, my last entry was My Recent Tweets:

Note: The updated Shewhart cycle was added to the post after a discussion with Benjamin Taylor (Syscoi.com).

Book Review – Measures of Success:

[Image: Measures of Success book cover]

In today’s post, I am reviewing the book “Measures of Success”, written by Mark Graban. Graban is a Lean thinker and practitioner who has written several books on Lean, including Lean Hospitals and Healthcare Kaizen. He was kind enough to send me a preview copy of his latest book, Measures of Success. As Graban writes in the Preface, his goal is to help managers, executives, business owners, and improvement specialists in any industry use the limited time available more effectively.

The book is about Process Behavior Charts, or PBCs (a form of Statistical Process Control, or SPC). Graban teaches in an easy way how to use Process Behavior Charts to understand a process, and to truly see and listen to it. The use of PBCs is a strategy of prevention, not a strategy of detection alone. PBCs help us see whether a process is in control and whether what we see is simply the normal noise present in a controlled process. Walter Shewhart, who created and pioneered SPC, defined control as:

A phenomenon is said to be controlled when, through the use of past experience, we can predict at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.

 Shewhart proceeded to state that a necessary and sufficient condition for statistical control is to have a constant system of chance causes… It is necessary that differences in the qualities of a number of pieces of a product appear to be consistent with the assumption that they arose from a constant system of chance causes… If a cause system is not constant, we shall say that an assignable cause is present.

Graban has written a great book to help us decide what is noise and what is meaningful data. By understanding how the process is speaking to us, we can stop overreacting and use the saved time to actually make meaningful improvements to the process. Graban has a style of writing that makes a somewhat hard statistical subject easy to read. I enjoyed the narrative in the third chapter of the CEO looking at the Bowling Chart and reacting to it. The CEO follows the red and green data points, reacting by either pontificating as a means of encouragement or yelling “just do things right” at her team. And worst of all, she thinks that she is making a difference by doing it: just try harder and get to the green data point! Graban also goes into detail on Deming’s Red Bead experiment, a fun exercise that demonstrates how little impact a worker has on the normal variation of a process.

Similar to Deming’s line of questions regarding process improvement (“How are you going to improve? By what method? And how will you know?”), Graban provides three insightful core questions:

  1. Are we achieving our target or goal?
  2. Are we improving?
  3. How do we improve?

We should be asking these questions when we are looking at a Process Behavior Chart. These questions will guide our continual improvement initiatives. Graban has identified 10 key points that help us reflect on our learning of PBCs; they are available at his website. They help us focus on truly understanding what the process is saying – where are we, and should we make a change?

Graban provides numerous examples of current events depicted as PBCs. Some of the examples include San Antonio homicide rates and Oscar ratings. Did the homicide rate go down significantly recently? Did the Oscar ratings go down significantly in recent years? These are refreshing because they help solidify our understanding. They also provide a framework for us to do our own analysis of current events we see in the news. Graban also provides an in-depth analysis of his blog data. In addition, there are several workplace cases and examples included.
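To try that kind of analysis ourselves, the detection rules commonly used with XmR-style Process Behavior Charts (my own summary, not a quote from the book) can be sketched in a few lines of Python. The data and limits below are made up; the rules are a point outside the limits, eight consecutive points on one side of the average, and three out of four consecutive points closer to a limit than to the average.

```python
def find_signals(values, mean, lower, upper):
    """Flag signals using three rules commonly applied to Process Behavior Charts."""
    signals = []

    # Rule 1: any point outside the natural process limits
    for i, v in enumerate(values):
        if v > upper or v < lower:
            signals.append((i, "outside the limits"))

    # Rule 2: eight or more consecutive points on the same side of the average
    run, side = 0, 0
    for i, v in enumerate(values):
        s = 1 if v > mean else (-1 if v < mean else 0)
        if s != 0 and s == side:
            run += 1
        else:
            run = 1 if s != 0 else 0
            side = s
        if run >= 8:
            signals.append((i, "run of 8 on one side of the average"))

    # Rule 3: three out of four consecutive points closer to a limit than to the average
    for i in range(3, len(values)):
        window = values[i - 3:i + 1]
        if sum(1 for v in window if v > (mean + upper) / 2) >= 3:
            signals.append((i, "3 of 4 points nearer the upper limit"))
        if sum(1 for v in window if v < (mean + lower) / 2) >= 3:
            signals.append((i, "3 of 4 points nearer the lower limit"))

    return signals

# Made-up monthly metric with limits calculated from an earlier baseline period
values = [50, 52, 48, 47, 53, 49, 65, 51, 50, 52]
print(find_signals(values, mean=50.5, lower=37.5, upper=63.5))
```

Anything not flagged is noise; reacting to it point by point is exactly the overreaction the book warns against.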

The list of chapters is as follows:

  • Chapter 1: Improving the Way We Improve
  • Chapter 2: Using Process Behavior Charts for Metrics
  • Chapter 3: Action Metrics, not Overreaction Metrics
  • Chapter 4: Linking Charts to Improvement
  • Chapter 5: Learning From “The Red Bead Game”
  • Chapter 6: Looking Beyond the Headlines
  • Chapter 7: Linear Trend Lines and Other Cautionary Tales
  • Chapter 8: Workplace Cases and Examples
  • Chapter 9: Getting Started With Process Behavior Charts

The process of improvement can be summarized by the following points identified in the book:

  • If we have an unpredictable system, then we work to eliminate the causes of signals, with the aim of creating a predictable system.
  • If we have a predictable system that is not always capable of meeting the target, then we work to improve the system in a systematic way, aiming to create a new system whose results now fluctuate around a better average.
  • When the range of predictable performance is always better than the target, then there’s less of a need for improvement. We could, however, choose to change the target and then continue improving in a systematic way.

It is clear that Graban has written this book with the reader in mind. There are lots of examples and additional resources provided by Graban to start digging into PBCs and make it interesting. The book is not at all dry and has managed to retain the main technical concepts in SPC.

The next time you see a metrics dashboard, whether at the Gemba or in the news, you will know to ask the right questions. Graban also provides a list of resources to further improve our learning of PBCs. I encourage readers to check out Mark Graban’s blog at LeanBlog.org and also to buy the book, Measures of Success.

Always keep on learning…

In case you missed it, my last post was Ubuntu At the Gemba:

Bootstrap Kaizen:


I am writing today about “bootstrap kaizen”, something I have been thinking about for a while. Wikipedia describes bootstrapping as “a self-starting process that is supposed to proceed without external input.” The term developed from a 19th-century adynaton – “pull oneself over a fence by one’s bootstraps.” Another description is to start with something small that over time turns into something bigger – a compounding effect from something small and simple. Part of the output is fed back into the input so as to generate the compounding effect. This is the same concept as booting a computer, where on startup the computer runs a small piece of code from the BIOS, which then loads the full operating system. I liked the idea of bootstrapping when viewed alongside the concept of kaizen, or “change for the better”, in Lean. Think about how improvement can start small and, with iterations and feedback loops, eventually make the entire organization better.
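Here is a tiny, made-up numerical illustration of that compounding effect: a one-off 2% improvement stays a 2% improvement, but if each cycle’s gain is fed back into the baseline for the next cycle, the gains multiply.

```python
def compound(cycles=20, gain_per_cycle=0.02):
    """Each cycle's small gain is fed back into the baseline for the next cycle."""
    capability = 1.0
    for _ in range(cycles):
        capability *= 1 + gain_per_cycle   # the output becomes the next input
    return capability

# A one-off 2% gain leaves us at 1.02x; fed back over 20 cycles it compounds to ~1.49x
print(f"After 20 cycles of 2% gains: {compound():.2f}x the starting capability")
```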

As I was researching along these lines, I came across Doug Engelbart. Doug Engelbart was an American genius who gave us the computer mouse and was part of the early work that led to the internet. He was way ahead of his time, and is also famous for the Mother of All Demos, which he gave in 1968 (long before Windows or Apple events). Engelbart’s goal in life was to help create truly high-performance human organizations. He understood that while population and gross product were increasing at a significant rate, the complexity of man’s problems was growing still faster. On top of this, the urgency with which solutions must be found became steadily greater. The product of complexity and urgency had surpassed man’s ability to deal with it. He vowed to increase the effectiveness with which individuals and organizations work at intelligent tasks. He wanted better and faster solutions to tackle the “more-complex” problems. Engelbart came up with the idea of “bootstrapping our collective IQ.”

He explained:

Any high-level capability needed by an organization rests atop a broad and deep capability infrastructure, comprised of many layers of composite capabilities. At the lower levels lie two categories of capabilities – Human-based and Tools-based. Doug Engelbart called this the Augmentation System.

[Figure: Engelbart’s Augmentation System]

The human-based capability infrastructure is boosted by the tool-based capability infrastructure. As we pursue significant capability improvement, we should treat improvement as a multi-element co-evolution of the Tool System and the Human System. Engelbart called this a bootstrapping strategy, where multi-disciplinary research teams would explore the new tools and work processes, which they would all use immediately themselves to boost their own collective capabilities in their lab(s).

Doug Engelbart’s brilliance was that he identified the link between the human system and the tool system. He understood that developing new tools improves our ability to develop even more new tools. He came up with the idea of “improving the improvement process.” I was enthralled when I read this because I was already thinking about “bootstrap kaizen.” He also gave us the “ABC model of Organizational Improvement.” In his words:

    A Activity: ‘Business as Usual’. The organization’s day to day core business activity, such as customer engagement and support, product development, R&D, marketing, sales, accounting, legal, manufacturing (if any), etc. Examples: Aerospace – all the activities involved in producing a plane; Congress – passing legislation; Medicine – researching a cure for disease; Education – teaching and mentoring students; Professional Societies – advancing a field or discipline; Initiatives or Nonprofits – advancing a cause.

    B Activity: Improving how we do that. Improving how A work is done, asking ‘How can we do this better?’ Examples: adopting a new tool(s) or technique(s) for how we go about working together, pursuing leads, conducting research, designing, planning, understanding the customer, coordinating efforts, tracking issues, managing budgets, delivering internal services. Could be an individual introducing a new technique gleaned from reading, conferences, or networking with peers, or an internal initiative tasked with improving core capability within or across various A Activities.

    C Activity: Improving how we improve. Improving how B work is done, asking ‘How can we improve the way we improve?’ Examples: improving effectiveness of B Activity teams in how they foster relations with their A Activity customers, collaborate to identify needs and opportunities, research, innovate, and implement available solutions, incorporate input, feedback, and lessons learned, run pilot projects, etc. Could be a B Activity individual learning about new techniques for innovation teams (reading, conferences, networking), or an initiative, innovation team or improvement community engaging with B Activity and other key stakeholders to implement new/improved capability for one or more B activities.

This approach can be viewed as a nested set of feedback loops as below:

[Figure: The ABC model as nested feedback loops]

Engelbart points out that bootstrapping has multiple immediate benefits:

1) Providers grow increasingly faster and smarter at:

  • Developing what they use – providers become their own most aggressive and vocal customer, giving themselves immediate feedback, which creates a faster evolutionary learning curve and more useful results
  • Integrating results – providers are increasingly adept at incorporating experimental practices and tools of their own making, and/or from external sources, co-evolving their own work products accordingly, further optimizing usefulness as well as downstream integratability
  • Compounding ROI – if the work product provides significant customer value, providers will start seeing measurable results in raising their own Collective IQ, thus getting faster and smarter at creating and deploying what they’re creating and deploying – results will build like compounding interest
  • Engaging stakeholders – providers experience first-hand the value of deep involvement by early adopters and contributors, blurring the distinction between internal and external participants, increasing their capacity to network beneficial stakeholders into the R&D cycle (i.e. outside innovation is built in to the bootstrapping strategy)
  • Deploying what they develop – as experienced users of their own work product, providers are their own best customers engaging kindred external customers early on, deployment/feedback becomes a natural two-way flow between them

2) Customers benefit commensurately:

  • End users benefit in all the ways customers benefit through outside innovation
  • Additionally, end users can visit provider’s work environment to get a taste and even experience firsthand how they’ve seriously innovated the way they work, not in a demo room, but in their actual work environment
  • Resulting end products and services, designed by stakeholders, and rigorously co-evolved, shaken down and refined by stakeholders, should be easier and more cost-effective to implement, while yielding greater results sooner than conventionally developed products and services

Final Notes:

I love that Engelbart’s Augmentation System points out that tools are to be used to augment human capability, and that this should ultimately be about system-level development. His idea of bootstrapping explains how “kaizen” thinking should work in Lean.

Interestingly, Engelbart understood that the human side of the Augmentation System can be challenging. A special note on the Human System: of the two, Engelbart saw the Human System as a much larger challenge than the Tool System, much more unwieldy and staunchly resistant to change, and all the more critical to change because, on the whole, the Human System tended to be self-limiting and the biggest gating factor in the whole equation. It is hard for people to step outside their comfort zone and think outside the box, and harder still to think outside whatever paradigm or world view they occupy. Those who think that the world is flat, and that science and inquiry are blasphemous, will not consider exploring beyond the edges, and will silence great thinkers like Socrates and Galileo.

As I was researching for this post, I also came across the phrase “eating your own dog food.” This is an idea made famous by the software companies. The idea behind the phrase is that we should use our own products in our day-to-day business operations (Deploying what they develop). In a similar vein, we should engage in improvement activities with tools that we can make internally. This will improve our improvement muscles so that we may be able to tweak off-the-shelf equipment to make it work for us. This is the true spirit of the Augmentation System.

When you are thinking about getting new tools or equipment for automation, make sure that it is strictly to augment the human system. Unless we think in these terms, we will not be able to improve the system as a whole. We should focus more on the C activities. I highly encourage the reader to learn more about Doug Engelbart (http://www.dougengelbart.org/).

Always keep on learning…

In case you missed it, my last post was A “Complex” View of Quality:

A “Complex” View of Quality:


I am a Quality Manager by profession, so I think about quality a lot. How would one define “quality”? A simple view of quality is “conformance to requirements.” This simplistic view lacks the complexity that quality should have: it assumes that everything is static, that the customer will always have the same requirements, and that the customer will be happy if the specifications/requirements are met. Customer satisfaction is a complex thing. Customers are external to the plant manufacturing the widget, so the plant will always lack the variety that the external world imposes on it. For example, let’s look at a simple thing like a cellphone. Originally, the purpose of a cellphone was simply to allow the end user to make a phone call. Think of all the variety of requirements that the end user has for a cellphone these days – internet, camera, the ability to play games, productivity apps, a stopwatch, alarms, affordability, and so on. Additionally, the competition is always coming out with a newer, faster, and maybe cheaper cellphone. To paraphrase the Red Queen from Lewis Carroll’s Through the Looking-Glass, the manufacturer has to do a lot of running just to stay in the same place – to maintain its share of the market.


In this line of thinking, quality can be viewed as matching the complexity imposed by the consumer. There are two approaches in Quality that differ from the concept of just meeting the requirements.

1) Taguchi’s idea of quality:

Genichi Taguchi, a Japanese engineer and statistician, came up with the idea of a “loss function”. The main idea is that any time a product deviates from the target specification, the customer experiences a loss. Every product dimensional specification has a tolerance for manufacturability. When all of the dimensions are near the target specification, the loss is minimal, resulting in a better customer experience. One of the best examples to explain this is from Sony. The story goes that Sony had two television manufacturing facilities, one in Japan and one in the USA, both using the same design specifications. Interestingly, the televisions manufactured in the USA facility had a lower satisfaction rating than the televisions manufactured in Japan. It was later found that the difference was in how the two facilities approached quality for color density. The paradigm at the USA facility was that as long as the color density was within the specified range, the product was acceptable, whereas the Japanese facility made a point of hitting the nominal value for color density. Thus, the Japanese Sony televisions were deemed superior to the American Sony televisions.
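Taguchi’s loss function is commonly written as a quadratic, L(y) = k(y - m)^2, where m is the target and k is a cost constant. Here is a small, hedged Python sketch contrasting that view with the “goalpost” (in-spec/out-of-spec) view; the numbers and the value of k are made up for illustration.

```python
def taguchi_loss(y, target, k=1.0):
    """Quadratic loss: any deviation from target costs something, growing with distance."""
    return k * (y - target) ** 2

def goalpost_loss(y, target, tolerance, cost_of_reject=100.0):
    """Conformance-to-spec view: zero loss inside the tolerance, full loss outside."""
    return 0.0 if abs(y - target) <= tolerance else cost_of_reject

target, tolerance = 10.0, 0.5
for y in (10.0, 10.3, 10.49, 10.6):
    print(f"y={y:5.2f}  taguchi={taguchi_loss(y, target, k=400):6.1f}"
          f"  goalpost={goalpost_loss(y, target, tolerance):6.1f}")
```

In the goalpost view, a unit just inside the tolerance is “perfect” and one just outside is “scrap”; in Taguchi’s view their losses are nearly identical, which is the point the Sony color-density story makes.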


2) Kano’s idea of quality:

Noriaki Kano is another Japanese quality management pioneer, who came up with the Kano model. The Kano model is a great way of looking at a characteristic from the point of view of the customer. The model has two axes – customer satisfaction and feature implementation. The customer-satisfaction axis goes from dissatisfied to satisfied, and the feature-implementation axis goes from insufficient to sufficient. This two-dimensional arrangement leads to various categories of “quality”, such as Attractive quality, One-dimensional quality, Must-be quality and Indifferent quality. Although Kano identified more categories, I am looking only at the four identified above.

  • Attractive quality – this is something the customer would find attractive if present, and be indifferent to if absent. For example, let’s say that you went to get a car wash, and the store gave you a free beverage and snack. You were not expecting this, and getting it made the experience pleasant. If there were no free beverage and snack, you would not be dissatisfied, because you were never expecting them.
  • One-dimensional quality – this is something the customer evaluates on a single dimension: the more of it there is, the happier the customer; the less of it, the less happy. For example, consider the speed of your internet connection at home. The faster the internet, the happier you are, and the slower the internet, the sadder you are.
  • Must-be quality – this is something that the customer views as an absolute must-have. If you go into a store to buy eggs, you expect the carton to have eggs in it. If the eggs are not there, you are not happy.
  • Indifferent – this is something that a particular customer truly does not care about. The example Kano gave to explain this in his 2001 paper was the “i-mode” feature on some Japanese cellphones, which allowed the user to connect to the internet. When a survey was conducted, most middle-aged people viewed this feature indifferently. They could not care less that the cellphone could be used to connect to the internet.

[Figure: The Kano model]

The brilliant insight from the Kano model is that the perception of quality is not linear or static. The perception of quality is non-linear and it evolves with time. Kano hypothesizes that a successful quality element goes through a lifecycle detailed below:

Indifferent => Attractive => One-Dimensional => Must-Be.

A feature that began as indifferent could become an attractive feature, which would then evolve into a one-dimensional feature and finally into a must-be feature. Take, for example, the ability to take pictures on your cellphone. This was treated indifferently at the beginning, and then it became an attractive feature. The better the resolution of the pictures, the happier you became. Finally, the ability to take sharp pictures became a must-have on a cellphone.

The customer is not always aware of what the attractive feature on a product could be. This is akin to the remark often attributed to Henry Ford – “If I had asked people what they wanted, they would have said faster horses.” Steve Jobs added on to this and said – “People don’t know what they want until you show it to them. That’s why I never rely on market research. Our task is to read things that are not yet on the page.”

Kano had a brilliant insight regarding this as well. In the 2001 paper, “Life Cycle and Creation of Attractive Quality”, he gave the Konica example. Kano talked about the camera that Konica came out with in the 1970s, with a built-in flash and auto focus. At that time, the camera was treated as a mature product, and to survive the competition Konica decided to come up with a new camera. Konica engaged in a large survey of its customers, expecting to come out with a completely new camera. The R&D team was disappointed with the survey results, which only suggested minor changes to the existing designs. The team then decided to visit a photo processing lab to examine the prints and negative films taken by consumers and to evaluate their quality. This is the spirit of genchi genbutsu in Lean (go and see to grasp the situation). The team learned that the two main issues users had were underexposure due to lack of flash and out-of-focus shots.

Kano notes that:

To solve these problems, Konica developed and released cameras with auto focus and a built-in flash as well as auto film loading and winding functions from the middle to the end of 1970s. This prompted consumers to buy a second and even a third camera. Thereafter, Konica’s business considerably grew and completely changed the history of camera development in the world.

As long as customers are around, quality should be viewed as non-linear, complex and evolving.

Always keep on learning…

In case you missed it, my last post was Lessons from Genkan:

MTTF Reliability, Cricket and Baseball:


I originally hail from India, which means that I was eating, drinking and sleeping Cricket for at least a good part of my childhood. Growing up, I used to “get sick” and stay home when the one TV channel we had broadcast Cricket matches. One thing I never truly understood then was how the batting average was calculated in Cricket. The formula is straightforward:

Batting average = Total Number of Runs Scored/ Total Number of Outs

Here “out” indicates that the batsman had to stop his play because he was unable to keep his wicket. In Baseball terms, this is similar to a strikeout or a catch, where the player has to leave the field. The part I could not understand was what happens when the Cricket batsman does not get out: the runs he scored are added to the numerator, but no change is made to the denominator. I could not see this as a true indicator of the player’s batting average.

When I started learning about Reliability Engineering, I finally understood why the batting average calculation was bothering me. The way the batting average in Cricket is calculated is very similar to the MTTF (Mean Time To Failure) calculation. MTTF is calculated as follows:

MTTF = Total time on testing/Number of failures

For a simple example, if we were testing 10 motors for 100 hours and three of them failed at 50, 60 and 70 hours respectively, we can calculate the MTTF as 293.33 hours. The problem with this is that the data are right-censored: we still have samples where the failure has not occurred when we stop the testing. This is similar to not counting the innings where the batsman did not get out. A key concept to grasp here is that the MTTF or MTBF (Mean Time Between Failures) metric is not for a single unit. There is more to this than saying that, on average, a motor is going to last 293.33 hours.

When we do reliability calculations, we should be aware of whether censored data are being used, and use appropriate survival analysis to make a “reliability specific statement” – for example, that we can expect 95% of the motor population to survive x hours. Another good approach is to calculate lower-bound confidence intervals on the MTBF. A good resource is https://www.itl.nist.gov/div898/handbook/apr/section4/apr451.htm.
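As a rough sketch of the motor example above, here is how a one-sided lower confidence bound on the MTBF can be computed with the chi-square method described in the NIST handbook, assuming an exponential failure model and a time-terminated test. The code is illustrative, not a full survival analysis.

```python
from scipy.stats import chi2

# The motor example from above: 10 motors tested for 100 hours,
# three failures at 50, 60 and 70 hours; the other seven are right-censored.
failure_times = [50, 60, 70]
survivors, test_length = 7, 100

total_time = sum(failure_times) + survivors * test_length   # 880 hours on test
r = len(failure_times)
mttf_point = total_time / r                                  # ~293.3 hours

# One-sided lower confidence bound on MTBF for a time-terminated test,
# assuming an exponential failure model (chi-square method, NIST handbook)
confidence = 0.95
mtbf_lower = 2 * total_time / chi2.ppf(confidence, 2 * r + 2)

print(f"Point estimate of MTTF: {mttf_point:.1f} hours")
print(f"{confidence:.0%} lower confidence bound on MTBF: {mtbf_lower:.1f} hours")
```

The point estimate of about 293 hours shrinks to a lower bound of roughly 113 hours, which shows how misleading the point estimate alone can be with so few failures.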

Ty Cobb, Don Bradman and Sachin Tendulkar:

We can compare the batting averages in Cricket to Baseball. My understanding is that the batting average in Baseball is calculated as follows:

Batting Average = Number of Hits/Number of At Bats

Here the hit can be in the form of a single, a home run, etc. Apparently, this statistic was introduced by the English-born statistician Henry Chadwick, who was a keen Cricket fan.

I want to now look at the greats of Baseball and Cricket, and take a different approach to their batting capabilities. I have chosen Ty Cobb, Don Bradman and Sachin Tendulkar for my analysis. Ty Cobb has the highest career batting average in Major League Baseball. Don Bradman, an Australian Cricketer often called the best Cricket player ever, has the highest batting average in Test Cricket. Sachin Tendulkar, an Indian Cricketer and one of the best Cricket players of recent times, has scored the most runs in Test Cricket. The batting averages of the three players are shown below:

[Table: Career batting averages of Ty Cobb, Don Bradman and Sachin Tendulkar]

As we discussed in the last post regarding calculating reliability with a Bayesian approach, we can make reliability statements in place of batting averages. Based on 4,191 hits in 11,420 at bats, we could state that, with 95% confidence, Ty Cobb is 36% likely to make a hit in his next at bat. We can apply the Baseball batting average concept to Cricket, where hitting fifty runs is the sign of a good batsman. Bradman hit fifty or more runs on 56 occasions in 80 innings (70%). Similarly, Tendulkar hit fifty or more runs on 125 occasions in 329 innings (38%).

We could state that, with 95% confidence, Bradman was 61% likely to score fifty or more runs in his next innings. Similarly, at the 95% confidence level, Sachin was 34% likely to score fifty runs or more in his next innings.
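As a hedged sketch of one way to reproduce figures like these (in the spirit of the Bayesian reliability post referenced above, though not necessarily the exact calculation used), the 5th percentile of a Beta posterior with a uniform prior gives a 95% lower bound on the underlying probability:

```python
from scipy.stats import beta

def lower_bound(successes, trials, confidence=0.95):
    """Lower bound on the success probability from a Beta posterior (uniform prior)."""
    failures = trials - successes
    return beta.ppf(1 - confidence, successes + 1, failures + 1)

# Ty Cobb: hits per at bat; Bradman and Tendulkar: innings with fifty or more runs
records = [
    ("Ty Cobb, hit in the next at bat", 4191, 11420),
    ("Don Bradman, 50+ in the next innings", 56, 80),
    ("Sachin Tendulkar, 50+ in the next innings", 125, 329),
]
for name, s, n in records:
    print(f"{name}: at least {lower_bound(s, n):.0%} with 95% confidence")
```

Running this gives roughly 36%, 61% and 34%, consistent with the statements above.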

Final Words:

As we discussed earlier, similar to MTTF, the batting average is not a good estimate for a single innings. It is a point estimate of reliability, and it needs additional information around it; it should not be looked at as a single metric in isolation. We cannot expect Don Bradman to score 99.94 runs every innings. In fact, in the very last innings that Bradman played, all he had to do was score four runs to achieve the immaculate batting average of 100. He had been out only 69 times, and he needed just four measly runs to complete 7,000 runs; even if he got out in that innings, he would have achieved the spectacular batting average of 100. He was one of the best players ever. His highest score was 334 – a “triple century” in Cricket, and a rare achievement. As indicated earlier, he was 61% likely to score fifty runs or more in his next innings, and he had scored more than four runs 69 times in 79 innings.


Everyone expected Bradman to cross the 100 mark easily. As fate would have it, Bradman scored zero runs: he was bowled (the batsman misses and the ball hits the wicket) by the English bowler Eric Hollies off the second ball he faced. He had hit 635 fours in his career – a four is where the batsman scores four runs by hitting the ball so that it rolls over the boundary of the field. All Bradman needed was one four to achieve the “100”. Bradman proved that to be human is to be fallible. He still remains the best there ever was, and his record is far from being broken. At this time, the second-best batting average stands at 61.87.

Always keep on learning…

In case you missed it, my last post was Reliability/Sample Size Calculation Based on Bayesian Inference:

Hammurabi, Hawaii and Icarus:


In today’s post, I will be looking at human error. In November 2017, the US state of Hawaii reinstated its Cold War era nuclear warning sirens due to growing fears of a nuclear attack from North Korea. On January 13, 2018, an employee of the Hawaii Emergency Management Agency sent out an alert through the communication system – “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” The employee was supposed to take part in a drill in which the emergency missile warning system is tested; the alert message was not supposed to go to the general public. The cause of the mishap was soon determined to be human error. The employee in the spotlight and a few others left the agency soon afterwards. Even the Hawaiian governor, David Ige, came under scrutiny because he had forgotten his Twitter password and could not update his Twitter feed about the false alarm. I do not have all of the facts for this event, and it would not be right of me to determine what went wrong. Instead, I will focus on the topic of human error.

One of the first proponents of the concept of human error in modern times was the American industrial safety pioneer Herbert William Heinrich. In his seminal 1931 book, Industrial Accident Prevention, he proposed the Domino theory to explain industrial accidents. Heinrich reviewed several industrial accidents of his time, and came up with the following percentages for proximate causes:

  • 88% are from unsafe acts of persons (human error),
  • 10% are from unsafe mechanical or physical conditions, and
  • 2% are “acts of God” and unpreventable.

The reader may find it interesting that Heinrich was working as the Assistant Superintendent of the Engineering and Inspection Division of Travelers Insurance Company when he wrote the book in 1931. The data that Heinrich collected was somehow lost after the book was published. Heinrich’s Domino theory explains an injury from an accident as a linear sequence of events associated with five factors – ancestry and social environment, fault of person, unsafe act and/or mechanical or physical hazard, accident, and injury.

[Figure: Heinrich’s Domino model]

He hypothesized that taking away one domino from the chain can prevent the industrial injury from happening. He wrote – If one single factor of the entire sequence is to be selected as the most important, it would undoubtedly be the one indicated by the unsafe act of the person or the existing mechanical hazard. I was taken aback by the example he gave to illustrate his point: an operator fracturing his skull as the result of a fall from a ladder. The investigation revealed that the operator descended the ladder with his back to it and caught his heel on one of the upper rungs. Heinrich noted that the effort to train and instruct him and to supervise his work was not effective enough to prevent this unsafe practice. “Further inquiry also indicated that his social environment was conducive to the forming of unsafe habits and that his family record was such as to justify the belief that reckless tendencies had been inherited.”

One of the main criticisms of Heinrich’s Domino model is that it explains a complex phenomenon too simplistically. The Domino model reflects the mechanistic view prevalent at that time. The modern view of “human error” is based on cognitive psychology and systems thinking. In this view, accidents are seen as a by-product of the normal functioning of the sociotechnical system. Human error is seen as a symptom and not a cause. This new view takes a “no-view” approach to human error, meaning that human error should not be its own category for a root cause. The process is not perfectly built, and the human variability that might result in a failure is the same variability that produces the ongoing success of the process. The operator has to adapt to meet the unexpected challenges, pressures and demands that arise on a day-to-day basis. Using human error as a root cause is a fundamental attribution error – focusing on the operator’s traits as reckless or careless, rather than on the situation the operator was in.

One concept that may help explain this further is local rationality. Local rationality starts with the basic assumption that everybody wants to do a good job and tries to do the best (be rational) with the information available to them at a given time. If a decision led to an error, instead of looking at where the operator went wrong, we need to look at why that decision made sense to the operator at that point in time. The operator is at the “sharp end” of the system. James Reason, Professor Emeritus of Psychology at the University of Manchester in England, came up with the concepts of the sharp end and the blunt end. The sharp end is similar to the concept of Gemba in Lean, where the actual action takes place; this is mainly where the accident happens and is thus in the spotlight during an investigation. The blunt end, on the other hand, is removed and away in space and time. The blunt end is responsible for the policies and constraints that shape the situation at the sharp end, and consists of top management, regulators, administrators, etc. Professor Reason noted that the blunt end of the system controls the resources and constraints that confront the practitioner at the sharp end, shaping and presenting sometimes conflicting incentives and demands. The operators at the sharp end of the sociotechnical system inherit the defects in the system due to the actions and policies set by the blunt end, and can be the last line of defense instead of being the main instigators of accidents. Professor Reason also noted that – rather than being the main instigators of an accident, operators tend to be the inheritors of system defects. Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking. I encourage the reader to research the works of Jens Rasmussen, James Reason, Erik Hollnagel and Sidney Dekker, since I have only scratched the surface here.

Final Words:

Perhaps the oldest source of human error causation is the Code of Hammurabi, the code of ancient Mesopotamian laws dating back to 1754 BC. The Code of Hammurabi consisted of 282 laws. Some examples dealing with human error are given below.

  • If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death.
  • If a man rents his boat to a sailor, and the sailor is careless, and the boat is wrecked or goes aground, the sailor shall give the owner of the boat another boat as compensation.
  • If a man lets in water and the water overflows the plantation of his neighbor, he shall pay ten gur of corn for every ten gan of land.

I will finish off with the story of Icarus. In Greek mythology, Icarus was the son of Daedalus, the master craftsman who created the labyrinth for King Minos of Crete. King Minos imprisoned Daedalus and Icarus in Crete. The ingenious Daedalus observed the birds flying and invented a set of wings made from bird feathers and candle wax. He tested the wings out and made a pair for his son Icarus, and the two planned their escape. Daedalus was a good Engineer, since he studied the failure modes of his design and identified its limits. He instructed Icarus to follow him closely, to not fly too close to the sea since the moisture could dampen the wings, and to not fly too close to the sun since the heat could melt the wax. As the story goes, Icarus was so excited by his ability to fly that he got carried away (maybe recklessly). He flew too close to the sun, the wax melted from his wings, and he fell to his untimely death.

Perhaps the death of Icarus could be viewed as a human error, since he was reckless and did not follow directions. However, Stephen Barlay, in his 1969 book Aircrash Detective: International Report on the Quest for Air Safety, looked at this story closely. At the high altitude at which Icarus was flying, the temperature would actually be cold rather than warm. Thus, the failure would actually come from the cold making the wax brittle so that it broke, rather than from the wax melting as the story tells it. If this were true, the wings would have failed in cold weather at some other time, and Icarus would have died anyway even if he had followed his father’s advice.

Always keep on learning…

In case you missed it, my last post was A Fuzzy 2018 Wish

The Information Model for Poka Yoke:


In today’s post, I will be looking at poka yoke, or error proofing, using an information model. My inspirations for this post are Takahiro Fujimoto’s wonderful book “The Evolution of a Manufacturing System at Toyota” (1999) and a discussion I had with my brother last weekend.

I will start with an interesting question – “where do you see information at your gemba, your production floor?” A common answer might be the procedures or the work instructions, or the visual aids readily available on the floor. Yet another answer might be the production boards where the running total along with reject information is recorded. All of these are correct. A general definition of information is something that carries content, which is related to data (I am not going into Claude Shannon’s work on information in this post). Fujimoto’s brilliant view is that every artifact on the production floor, and in fact every material thing, carries information. Fujimoto defines an information asset as the basic unit of an information system. Information cannot exist without the materials or energy in which it is embodied – its medium.

[Figure: Fujimoto’s information asset model]

This information model indicates that the manufactured product carries information. The information it carries came from the design of the product. The information is transferred and transformed from the fixtures/dies/prints etc onto the physical product. Any loss of information during this process results in a defective product. To take this concept further, even if the loss of information is low, the end-user interaction with the product brings in a different dimension. The end-user gains information when he interacts with the product. If this information matches his expectations, he is satisfied. Even if there is minimal loss of information from design to manufacturing, if the end product information does not match the user’s expectations, the user gets dissatisfied.

Let’s look at a simple example of a door. A door with a handle is a poor design because the information of whether to push or pull is not clearly transferred to the user. The user might expect to pull on the handle instead of pushing on it. The information carried by the door handle is “open the door using the handle”; it does not convey whether to push or pull.


Perhaps one can add a note on the door that says “Push”. A better solution is to eliminate the handle altogether, so that the only option is to push. Removing the handle, with a note indicating “push”, conveys the information that to open the door one has to push. The information gets conveyed to the user, and there is no dissatisfaction.

This example brings up an important point – a defect is created only when an operator or machine interacts with imperfect information. The imperfect information could be in the form of a worn-out die or an imperfect work instruction that contributes to the loss of the original information being transferred to the product. When you are trying to solve a problem on the production floor, you are updating the information available on the medium so that the user’s interaction is modified to achieve the optimum result. This brings us to poka yoke, or error-proofing.

If you think about it, you could say that the root cause for any problem is that the current process allows that problem to occur due to imperfect information.  This is what poka yoke tries to address. Toyota utilizes Jidoka and poka yoke to ensure product quality. Jidoka or autonomation is the idea that when a defect is identified, the process is stopped either by the machine in an automated process, or by the operator in an assembly line. The line is stopped so that the quality problem can be addressed. In the case of Jidoka, the problem has already occurred. In contrast, poka yoke eliminates the problem by preventing the problem from occurring in the first place. Poka yoke is the brainchild of probably one of the best Industrial Engineers ever, Shigeo Shingo. The best error-proofing is one where the operator cannot create a specific defect, knowingly or unknowingly. In this type of error-proofing, the information is embedded in the medium such that it conveys the proper method to the operator and if that method is not followed, the action cannot be completed. This information of only one proper way is physically embedded onto the medium.

Information in the form of work instructions may not always be effective because of limited interaction with the user. Information in the form of visual aids can be more effective, since it interacts with the user and provides useful information; however, the user can ignore it or get used to it. Information in the form of alarms can also be useful, but this too may be ignored and may not prevent the error from occurring. The user cannot, however, ignore the information in a contact poka yoke, since he has to interact with it: the proper assembly information is physically embedded in the material. A good example is a USB connector, which can be inserted in only one way – the USB icon indicates which side is the top. Apple took this approach further by eliminating the need for orientation altogether with its Lightning cable, and the socket on the Apple product prevents any other cable from being inserted due to its unique shape.
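As a loose software analogy (my own, not from Fujimoto or Shingo), the same idea of embedding the “one right way” into the medium shows up when an interface accepts only a type that cannot express the wrong choice; the part and orientation names below are hypothetical.

```python
from enum import Enum

class Orientation(Enum):
    """The only orientation the hypothetical keyed part physically allows."""
    KEYED_SLOT_UP = "keyed slot up"

def insert_part(orientation: Orientation) -> str:
    # The correct-assembly information lives in the type itself, not in a work
    # instruction: anything other than an Orientation value is rejected outright.
    if not isinstance(orientation, Orientation):
        raise TypeError("orientation must be an Orientation value")
    return f"part inserted with the {orientation.value}"

print(insert_part(Orientation.KEYED_SLOT_UP))   # the only way that completes
# insert_part("upside down")  # raises TypeError, like a keyed connector refusing to seat
```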

Final Words:

The concept of physical artifacts carrying information is enlightening for me as a Quality Engineer. You can update the process information by updating a fixture to have a contact feature so that a part can be inserted in only one way. This information of proper orientation is embedded in the fixture, which is much better than updating the work instruction to properly orient the part. The physical interaction ensures that the proper information is transferred to the operator to properly orient the part.

As I was researching for this post, I came across James Gleick, who wrote the book “The Information: A History, a Theory, a Flood”. I will finish off with a story I heard from James Gleick regarding information: when Gleick started working at the New York Times, a wise old editor told him that the reader is not paying for all the news that they put in to be printed; what the reader is paying for is all the news that they left out.

Always keep on learning…

In case you missed it, my last post was Divine Wisdom and Paradigm Shifts:

Divine Wisdom and Paradigm Shifts:


One of the best books I have read in recent times is The Emperor of All Maladies by the talented Siddhartha Mukherjee. Mukherjee won the 2011 Pulitzer Prize for this book. The book is a detailed history of cancer and humanity’s battle with it. Amongst the many things that piqued my interest was a quote I had heard attributed to Dr. Deming – “In God we trust, all others must bring data.”

To tell this story, I must first talk about William S. Halsted. Halsted was a very famous surgeon at Johns Hopkins who came up with the surgical procedure known as the “Radical Mastectomy” in the 1880s. This is a procedure to remove the breast, the underlying muscles and the attached lymph nodes to treat breast cancer. He hypothesized that breast cancer spreads centrifugally from the breast to other areas; thus, removal of the breast, underlying muscles and lymph nodes would prevent the spread of cancer. He called this the “centrifugal theory”. Halsted called the procedure “radical” to denote that the roots of the cancer were removed. Mukherjee writes that the intent of radical mastectomy was to arrest the centrifugal spread by cutting every piece of it out of the body. Physicians all across America identified the Radical Mastectomy as the best way to treat breast cancer, and the centrifugal theory became the paradigm for breast cancer treatment for almost a century.

There were skeptics of this theory. The strongest critics of this theory were Geoffrey Keynes, a London based surgeon in the 1920s, and George Barney Crile, an American surgeon who started his career in the 1950s. They noted that even with the procedures that Halsted had performed, many patients died within four or five years from metastasis (cancer spreading to different organs). The surgeons overlooked these flaws, as they were firm believers in the Radical Mastectomy. Daniel Dennett, the famous American Philosopher, talks about the concept of Occam’s Broom, which might explain the thinking process for ignoring the flaws in a hypothesis. When there is a strong acceptance of a hypothesis, any contradicting information may get swept under the rug with Occam’s Broom. The contradictory information gets ignored and not confronted.

Keynes was even able to perform local surgery of the breast and, together with radiation treatment, achieve some success. But Halsted’s followers in America ridiculed this approach, and came up with the name “lumpectomy” for the local surgery. In their minds, the surgeon was simply removing “just” a lump, and this did not make much sense. They were aligning themselves with the paradigm of Radical Mastectomy. In fact, some of the surgeons went further and came up with “superradical” and “ultraradical” procedures – morbidly disfiguring procedures in which the breast, underlying muscles, axillary nodes, the chest wall, and occasionally the ribs, part of the sternum, the clavicle and the lymph nodes inside the chest were removed. The idea that “more was better” became prevalent.

Another paradigm with clinical studies during that time was looking only for positive results – is treatment A better than treatment B? This approach could not show that treatment A was no better than treatment B. Two statisticians, Jerzy Neyman and Egon Pearson, changed the approach with the statistical concept of power: the sample size for a study should be based on the power required. Loosely stated, more independent samples mean higher power. Thus, with a large, randomized trial, one can credibly make a claim of “lack of benefit” from a treatment. The Halsted procedure did not get challenged for a long time because the surgeons were not willing to take part in a large study.
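As a rough illustration of the Neyman-Pearson idea (my own sketch, not a calculation from Mukherjee’s book), here is the usual normal-approximation formula for the sample size per arm needed to compare two proportions with a given power; the survival rates are made up.

```python
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Made-up example: detecting a difference between 50% and 60% five-year survival
print(f"About {n_per_arm(0.50, 0.60):.0f} patients per arm for 80% power")
```

Hundreds of patients per arm are needed even for a fairly large difference, which hints at why surgeons reluctant to enroll patients could keep the Halsted procedure unchallenged for so long.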

A Pittsburgh surgeon named Dr. Bernard Fisher was finally able to shift this paradigm in the 1980s. Fisher found no reason to believe in the centrifugal theory. He studied the cases put forth by Keynes and Crile and concluded that he needed to perform a controlled clinical trial to test the Radical Mastectomy against the Simple Mastectomy and the Lumpectomy with radiation. The opposition from the surgeons slowly eroded thanks to strong advocacy from women who wanted a less invasive treatment. Mukherjee cites the Thalidomide tragedy, the Roe v. Wade case, Crile’s strong exhortation to women to refuse to submit to a Radical Mastectomy, and the public attention swirling around breast cancer as reasons for the slow shift in the paradigm. Fisher was finally able to complete the study after ten long years. He stated that he was willing to have faith in divine wisdom, but not in Halsted as divine wisdom. Fisher brusquely told a journalist – “In God we trust. All others must have data.”

The results of the study showed that all three groups were statistically identical. The group treated with the Radical Mastectomy, however, paid heavily for the procedure while gaining no real benefit in survival, recurrence, or mortality. The paradigm of the Radical Mastectomy shifted and made way for better approaches and theories.

While researching this further, I found the quote “In God we trust…” attributed to another Dr. Fisher: Dr. Edwin Fisher, brother of Dr. Bernard Fisher, who appeared before the Subcommittee on Tobacco of the Committee on Agriculture, House of Representatives, Ninety-fifth Congress, Second Session, on September 7, 1978. As part of his presentation, Dr. Fisher said, “I should like to close by citing a well-recognized cliche in scientific circles. The cliche is, ‘In God we trust, others must provide data.’” This is recorded in “Effect of Smoking on Nonsmokers. Hearing Before the Subcommittee on Tobacco of the Committee on Agriculture, House of Representatives, Ninety-fifth Congress, Second Session, September 7, 1978. Serial Number 95-000”. Dr. Edwin Fisher, unfortunately, was not a supporter of the hypothesis that smoking is bad for non-smokers. He even claimed that people traveling on an airplane are more bothered by crying babies than by smoke from the smokers.

fisher

Final Words:

This past year, I was personally affected by a family member suffering from the scourge of breast cancer. During this period of Thanksgiving in America, I am thankful for the doctors and staff who facilitated her recovery. I am thankful for the doctors and experts in the medical field who were courageous enough to challenge the “norms” of the day for treating breast cancer. I am thankful for the paradigm shift(s) that brought better and more effective treatments for breast cancer. More is not always better! I am thankful that they did not accept a hypothesis based on rationalism alone, an intuition about how things might be working. I am thankful for all the wonderful doctors and staff out there who take great care in treating cancer patients.

I am also intrigued to find the quote “In God we trust…” used alongside the claim that smoking may not have a negative impact on non-smokers.

I will finish with a story of another paradigm shift from Joel Barker in The Business of Paradigms.

A couple of Swiss watchmakers at the Centre Electronique Horloger (CEH) in Neuchâtel, Switzerland, developed the first Quartz-based watch. They went to different Swiss watchmakers with the technology that would later revolutionize the watch industry. However, the paradigm at that time was the intricate Swiss watchmaking process built on gears and springs, and no Swiss watch company was interested in a new technology that did not rely on gears or springs to keep time. The watchmakers with the new idea then set up a booth at a clock convention to demonstrate it. Again, no Swiss watch company was interested in what they had to offer. Two representatives, one from the Japanese company Seiko and the other from Texas Instruments, took notice of the new technology. They purchased the patents and, as they say, the rest is history. The new paradigm became the Quartz watch. The Swiss, who were at the top of watchmaking with over 50% of the watch market in the 1970s, stepped aside for the Quartz watch revolution, marking the decline of their industry. This was later termed the Quartz Revolution.

Always keep on learning…

In case you missed it, my last post was The Best Attribute to Have at the Gemba:

Which Way You Should Go Depends on Where You Are:

compass

I recently read the wonderful book “How Not To Be Wrong: The Power of Mathematical Thinking” by Jordan Ellenberg. I found the book enlightening and a great read. Jordan Ellenberg has the rare combination of being knowledgeable and being able to teach in a humorous and engaging way. One of the gems in the book is – “Which way you should go depends on where you are”. This lesson is about the dangers of misapplying linearity. When we think in terms of abstract concepts, the path from point A to point B may appear to be linear; after all, the shortest path between two points is a straight line. This is linear thinking.

To illustrate this, let’s take the example of poor quality on the line. The first instinct to improve quality is to increase inspection. In this case, point A = poor quality and point B = higher quality. If we plot this incorrect relationship between Quality and Inspection, we may assume it to be linear – increasing inspection results in better quality.

Inspection and Quality

However, increasing inspection will not result in better quality in the long run and will result in higher costs of production. We must build quality in at the source, as part of the normal process, rather than rely on inspection. In TPS, there are several ways to do this, including Poka Yoke and Jidoka.

In a similar fashion, we may look at increasing the number of operators in the hope of increasing productivity. This may work initially. However, adding operators at the wrong points in the assembly chain can hinder flow and decrease overall productivity. Taiichi Ohno, the father of the Toyota Production System, always asked to reduce the number of operators in order to improve flow. The Toyota Production System relies on the thinking of its people to improve the overall system.

The two cases discussed above are nonlinear in nature. Increasing one factor may increase the response initially; however, continuing to increase that factor can yield negative results. One example of a nonlinear relationship is shown below:

productivity

The actual curve will of course vary depending on the particulars of the example. In nonlinear relationships, which way you should go depends on where you are. In the productivity example, if you are at the Yellow star location on the curve, increasing the number of operators will only decrease productivity; you should reduce the number of operators to increase productivity. However, if you are at the Red star, you should look into increasing the number of operators. This will increase productivity up to a point, after which productivity will decrease. Which Way You Should Go Depends on Where You Are!
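As a rough sketch of this point (the curve and operator counts below are made up for illustration, not data from any real line), we can model productivity as a simple concave function of the number of operators and let the local slope tell us which way to move:

```python
def productivity(operators, optimum=12):
    """A made-up concave curve: output rises, peaks at `optimum` operators, then falls."""
    return operators * (2 * optimum - operators)

def which_way(operators, optimum=12, step=1):
    """Compare productivity one step up and one step down from where you are now."""
    up = productivity(operators + step, optimum)
    down = productivity(operators - step, optimum)
    if up > down:
        return "add operators"
    if down > up:
        return "remove operators"
    return "hold steady - you are near the peak"

# Starting points on either side of the peak give opposite recommendations.
print(which_way(5))   # understaffed side -> add operators
print(which_way(18))  # overstaffed side  -> remove operators
```

The same question (“should we add operators?”) gets opposite answers depending on which side of the peak you start from, which is exactly the point of the lesson.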

In order to know where you are, you need to understand your process. As part of this, you need to understand the significant factors in the process, as well as the boundaries beyond which things start to break down. The only way to truly learn your process is through experimentation and constant monitoring. It is likely that you did not consider all of the factors or their interactions. Everything is in flux, and the only constant is change. You should be open to input from the operators and allow improvements to happen from the bottom up.

I will finish off with the anecdote of the “Laffer curve” that Jordan Ellenberg used to illustrate the concept of nonlinearity. One political party in America has been pushing to lower taxes on the wealthy. The conservatives made this concept popular using the Laffer curve. Arthur Laffer was an economics professor at the University of Chicago. The story goes that Laffer drew the curve on the back of a napkin during a 1974 dinner with senior members of then President Gerald Ford’s administration. The Laffer curve is shown below:

Laffer curve

The horizontal axis shows the tax rate and the vertical axis shows the revenue generated from taxation. If there is no taxation, there is no revenue. If there is 100% taxation, there is also no revenue, because nobody would want to work and make money they cannot hold on to. The argument raised was that America was on the right-hand side of the curve, and thus reducing taxation would increase revenue. Whether this assumption was correct has been challenged. Jordan used the following passage from Greg Mankiw, a Harvard economist and a Republican who chaired the Council of Economic Advisers under the second President Bush:

Subsequent history failed to confirm Laffer’s conjecture that lower tax rates would raise tax revenue. When Reagan cut taxes after he was elected, the result was less tax revenue, not more. Revenue from personal income taxes fell by 9 percent from 1980 to 1984, even though average income grew by 4 percent over this period. Yet once the policy was in place, it was hard to reverse.

The Laffer curve may not be symmetric as shown above. It may not be smooth and even, and it could be a completely different curve altogether. Jordan states in the book – All the Laffer curve says is that lower taxes could, under some circumstances, increase tax revenue; but figuring out what those circumstances are requires deep, difficult, empirical work, the kind of work that doesn’t fit on a napkin.
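As a toy illustration of that caveat (the curve below is invented; it is not an estimate of any real economy), here is a revenue curve that is zero at both 0% and 100% taxation but is not symmetric, so its peak is nowhere near a 50% rate:

```python
def revenue(rate, skew=3):
    """A made-up, skewed Laffer-type curve: zero at 0% and 100% tax rates."""
    return rate * (1 - rate) ** skew

# Scan tax rates from 0% to 100% and find where this particular toy curve peaks.
rates = [i / 100 for i in range(101)]
peak = max(rates, key=revenue)
print(f"This toy curve peaks at a tax rate of about {peak:.0%}")  # roughly 25%
```

Change the made-up skew parameter and the peak moves, which is Ellenberg’s point: knowing that a peak exists tells you nothing about which side of it you are on without the empirical work.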

Always keep on learning…

In case you missed it, my last post was Epistemology at the Gemba:

Concept of Constraints in Facing Problems:

220px-Atlas_Santiago_Toural_GFDL

In today’s post, I will be looking at the concept of constraints in facing problems. Please note that I did not say “solving problems”. This is because not all problems are solvable. Certain problems, referred to as “wicked problems” or complex problems, are not solvable. These problems admit different approaches, and none of the approaches can solve the problem completely. Some of the alternatives are better than others, but they may have their own unintended consequences. Examples include global warming and poverty.

My post is related to the manufacturing world. Generally, in the manufacturing world, most problems are solvable. These problems have clear cause-and-effect relationships. They can be solved by using a best practice or a good practice. A best practice is used for obvious problems, where the cause-and-effect relationship is very clear and there is truly one real solution. A good practice is employed where the cause-and-effect relationship is evident only with the help of subject-matter experts; these are called “complicated problems”. There are also complex problems, where the cause-and-effect relationships are not evident and may be understood only after the fact. An example of this is launching a new product and ensuring a successful launch. Most of the time, the failures are studied and the reasons for the failure are “determined” after the fact.

The first step in tackling these problems is to understand what type of problem it is. Sometimes the method for solving a problem is prescribed before the problem is understood. Some of these methods assume that the problem has a linear cause-and-effect relationship. An example is 5 Why, which assumes that there is a linear relationship between cause and effect. This is evident in the question, “Why did x happen?” This works fine for obvious problems, may not work as well for complicated problems, and will not work for complex problems. One key thing to understand is that problems can be composite: some aspects may be obvious while other aspects may be complicated. Using a single prescribed method can be ineffective in these cases.

The concept of constraints is tightly related to the concept of variety. The best resource for this is Ross Ashby’s “An Introduction to Cybernetics” [1]. Ashby defined variety as the number of distinct elements in a set of distinguishable elements, or as the logarithm to base 2 of that number. Thus, we can say that the variety of gender is 2 (male or female), or 1 bit (based on the logarithm calculation). Ashby defined constraint as a relation between two sets: a constraint exists only when one set’s variety is lower than the other set’s variety. Ashby gives the example of a school that admits only boys. Compared to the set of genders (boys and girls), the school’s variety is smaller (only boys); thus, the school has a constraint imposed on itself.
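To make Ashby’s definitions concrete, here is a small sketch (a toy example of my own, reusing his boys-only school illustration) that computes variety both as a count of distinct elements and in bits, and flags a constraint when one set shows less variety than the other:

```python
import math

def variety(elements):
    """Ashby's variety: the number of distinct elements in a set."""
    return len(set(elements))

def variety_bits(elements):
    """Variety expressed as the logarithm to base 2 of the number of distinct elements."""
    return math.log2(variety(elements))

genders = {"boy", "girl"}   # variety 2, i.e. 1 bit
school = {"boy"}            # a boys-only school: variety 1, i.e. 0 bits

print(variety(genders), variety_bits(genders))  # 2 1.0
print(variety(school), variety_bits(school))    # 1 0.0

# A constraint exists when one set's variety is lower than the other's.
if variety(school) < variety(genders):
    print("The school has a constraint imposed on itself.")
```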

A great resource for this is Dave Snowden and his excellent Cynefin framework [2]. Snowden says that ontology precedes epistemology, or in other words, data precedes framework. The fundamental properties of the problem must be understood before choosing a “tool” to address it. Prescribing a standard tool for all situations constrains you and leads to ineffective attempts at finding a solution. When a leader says we need to use lean or six sigma, this is an attempt to add constraints by removing variety. Toyota’s methodology, referred to as the Toyota Production System, was developed for Toyota’s problems. They identified the problems and then proceeded to find ways to address them. They did not have a framework to go by; they created the framework based on decades of experience and tweaking. Thus, blindly copying their methodologies imposes constraints on yourself that may be unnecessary. As the size or scope of a project increases, so does its complexity. Thus, enterprise-wide applications of “prescribed solutions” are generally not effective, since the cause-and-effect relationships cannot be completely predicted, leading to unintended consequences, inefficiency, and ineffectiveness. On the other hand, Ashby advises taking note of any existing constraints in a system and taking advantage of them to improve efficiency and effectiveness.

A leader should thus first understand the problem in order to determine the approach. Sometimes one may have to use a composite of tools. One needs to remain open to modifications by having closed loops with a feedback mechanism, so that the approach can be modified as needed. It is also advisable to use heuristics like genchi genbutsu, since they are general guidelines or rules of thumb and do not pose a constraint. Once a methodology is chosen, a constraint is applied, since the available number of tools (the variety) has diminished. Thinking in terms of constraints prevents the urge to treat everything as a nail when your preferred tool is a hammer.

I will finish with a story from the great Zen master Huangbo Xiyun:

Huangbo once addressed the assembly of gathered Zen students and said: “You are all partakers of brewer’s grain. If you go on studying Zen like that, you will never finish it. Do you know that in all the land of T’ang there is no Zen teacher?”
Then a monk came forward and said, “But surely there are those who teach disciples and preside over the assemblies. What about that?”
Huangbo said, “I do not say that there is no Zen, but that there is no Zen teacher…”

Always keep on learning…

In case you missed it, my last post was Jidoka, the Governing Principle for Built-in-Quality:

[1] https://www.amazon.com/Introduction-Cybernetics-W-Ross-Ashby/dp/1614277656

[2] http://cognitive-edge.com/blog/part-two-origins-of-cynefin/