The Truths of Complexity:

The Covid-19 pandemic has given me an opportunity to observe, meditate on, and learn about complexity in action. In today’s post, I am looking at “truths” in complexity. Humans, more than any other species, have the ability to change their environment at a rapid pace. They are also able to maintain belief systems over time and act on them autonomously. These are good reasons to call all “human systems” complex systems.

The Theories of Truth:

Generally, there are three theories of truth in philosophy. They are as follows:

  1. Correspondence theory of truth – very simply put, what you hold internally in your mind corresponds one-to-one with the external world. A statement such as “the cat is on the mat” is true if there really are a cat and a mat, and if that cat is on that mat. The main objection to this theory is that we don’t have direct access to an objective reality. What we have is a sensemaking organ, our brain, trying to make sense of the data provided by the various sensory organs. Over time, the brain generates stable correlations that allow it to abstract meaning from the filtered sensory information. The correspondence theory is viewed as a “static” picture of truth, and it fails to explain the dynamic and complex nature of reality.
  2. Coherence theory of truth – in this approach, a statement is true if it coheres with a specified set of beliefs and propositions. The idea here is more about fit and harmony with existing beliefs; the coherence theory is about consistency. One objection to this theory is that the subjective reading of a statement can “bend” to match strong existing belief systems. Perhaps a good example of this is the recent poll that found that the majority of Democrats fear that the worst of the Covid-19 pandemic is yet to come, while the majority of Republicans believe that the worst is over. Another criticism is that we can be inconsistent in our beliefs, as cognitive dissonance indicates.
  3. Pragmatic theory of truth – the pragmatic theory of truth was put forth as an alternative to the static correspondence theory of truth. In this theory, the value of truth depends on the utility it brings. Pragmatic theories of truth have the effect of shifting attention away from what makes a statement true and toward what people mean or do in describing a statement as true. As one of the proponents of the pragmatic theory, William James, put it – true beliefs are useful and dependable in ways that false beliefs are not: ‘You can say of it then either that “it is useful because it is true” or that “it is true because it is useful”. Both these phrases mean exactly the same thing.’ One of my favorite explanations of the pragmatic theory comes from Richard Rorty, who viewed it as coping with reality rather than copying reality. One of the criticisms against the pragmatic theory of truth is that it explains truth in terms of utility. As John Capps notes, utility, long-term durability, and assertibility (etc.) should be viewed not as definitions but rather as criteria of truth, as yardsticks for distinguishing true beliefs from false ones.

Sensemaking Complexity:

From the discussion of truth, we can see that seeking truth is not an easy task, especially when we deal with the complexity of human systems. We naturally find order pleasing and reassuring. We try to find order wherever we can, and we try our best to maintain it as long as we can. In this attempt, we often neglect the actual complexity we are dealing with. A common way to characterize a phenomenon is as ordered, complicated, or complex. A square peg in a square hole is an ordered phenomenon. The correspondence theory of truth is quite apt here because we have a one-to-one relationship and a very good working knowledge of cause and effect. As complexity increases, we get to complicated phenomena, where there is still a reasonably clear cause-and-effect relationship. A car can be viewed as a complicated phenomenon, and the correspondence theory is still apt here. Once we add a human to the mix, we get to complexity. Imagine the driver of a car. Now imagine thousands of drivers all at once. The correspondence theory of truth falls apart fast here.

The main source of complexity in the example discussed above comes from humans. We are autonomous, and we are able to justify our own actions. We may go faster than the speed limit because we are already late for an appointment. We may overtake on the wrong side because the other driver is driving slowly. We assign meanings, and we also assign purposes to others. We do not always realize that other humans have the same power.

We have seen varying responses and behavior in this pandemic. We have seen the different justifications and hypotheses. We agree with some of them and strongly disagree with others depending on how they cohere with our own belief systems. The actual transmission of the virus is fairly constrained. It transmits mainly from person to person. The transmission occurs mainly through respiratory droplets. Every human interaction carries some risk of becoming infected if the other person is a carrier of the virus. However, the actual course of the pandemic has been complex.

Philosophical Insights into Sensemaking Complexity:

I will use the ideas of Friedrich Nietzsche and Willard V. O. Quine to look further at truth and how we come to know it. Nietzsche had a multidimensional view of truth. He viewed truth as:

A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.

He emphasized the abstract nature of truth. One comes to view the abstractions/metaphors as stand-ins for reality, and eventually falsely equates them with reality.

Every word immediately becomes a concept, in as much as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates through our equating what is unequal.

Nietzsche advised us against using a cause-effect, correspondence-type viewpoint in sensemaking complexity:

It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically. 

As Maureen Finnigan notes in her wonderful essay, Nietzsche’s Perspective: Beyond Truth as an Ideal:

As truth is not objective, in like manner, it is not subjective. Since thinking is not wholly rational, disconnected from the body, or independent of the world, the subjective perception, or conception, of truth through the intellect alone is impossible. “The ‘pure spirit’ is pure stupidity: if we subtract the nervous system and the senses—the ‘mortal shroud’—then we miscalculate—that is all!” Inasmuch as the individual is not independent from the world, one can neither subjectively nor objectively explain the world as if detached, but must interpret the world from within. Subjective and objective, like True and apparent, soul and body, thinking thing and material thing, intellect and sense, noumena and phenomena, are dualities that Nietzsche aspires to overcome. Thus, although Nietzsche is not a rationalist, this does not mean he falls into the irrationalist camp. He does not abolish reason but instead situates it within life, as an instrument, not as an absolute.

With complexity, we should not look for correspondence but coherence. Correspondence forces categorization, while coherence forces connections. This leads nicely into Quine’s web of belief. Quine’s idea is a holistic approach: we make meanings in a holistic fashion. When we observe a phenomenon, our sensory experience and the belief it generates do not stand alone within our entire belief system. Instead, Quine postulates that we make sense holistically with a web of belief, in which every belief is connected to other beliefs like a web.

For example, we can say Experience 1 (E1) led to Belief 1 (B1), Experience 2 (E2) led to Belief 2 (B2), and so on. This has the correspondence nature we discussed earlier, and it reflects the ordered, static approach to sensemaking. In Quine’s view, however, the picture is more dynamic, interconnected, and complex; this has the coherence nature we discussed earlier. The schematic below, inspired by a lecture note from Bryan W. Van Norden, shows this in detail.

The idea of the web of belief is clearly explained by Thomas Kelly:

Quine famously suggests that we can picture everything that we take to be true as constituting a single, seamless “web of belief.” The nodes of the web represent individual beliefs, and the connections between nodes represent the logical relations between beliefs. Although there are important epistemic differences among the beliefs in the web, these differences are matters of degree as opposed to kind. From the perspective of the epistemologist, the most important dimension along which beliefs can vary is their centrality within the web: the centrality of a belief corresponds to how fundamental it is to our overall view of the world, or how deeply implicated it is with the rest of what we think. The metaphor of the web of belief thus represents the relevant kind of fundamentality in spatial terms: the more a particular belief is implicated in our overall view of the world, the nearer it is to the center, while less fundamental beliefs are located nearer the periphery of the web. Experience first impinges upon the web at the periphery, but no belief within the web is wholly cut off from experience, inasmuch as even those beliefs at the very center stand in logical relations to beliefs nearer the periphery.

The idea of degrees, rather than a sharp distinction between kinds of beliefs, is very important to note here. Additionally, Quine proposes that when we encounter an experience contradicting our beliefs, we seek to restore consistency/coherence in the web by giving up beliefs located near the periphery rather than the ones near the center.
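To make the web metaphor concrete, here is a minimal sketch in Python. It is entirely my own illustration, not Quine’s formalism: beliefs are nodes, logical connections are edges, a crude “centrality” just counts connections, and revision gives up the most peripheral belief that conflicts with a new experience.

```python
# Illustrative toy model of Quine's "web of belief" (the beliefs and links
# below are invented for the example).
beliefs = {
    "B1: objects persist over time":        {"B2", "B3", "B4"},
    "B2: the sun will rise tomorrow":       {"B1", "B3"},
    "B3: unsupported objects fall":         {"B1", "B2", "B4"},
    "B4: my keys are on the kitchen table": {"B3"},
}

def centrality(name):
    """Crude centrality: how many other beliefs this belief is linked to."""
    return len(beliefs[name])

def revise(contradicted):
    """Restore coherence by giving up the most peripheral contradicted belief."""
    most_peripheral = min(contradicted, key=centrality)
    for other in beliefs[most_peripheral]:
        beliefs[other].discard(most_peripheral)
    del beliefs[most_peripheral]
    return most_peripheral

# New experience: the keys are not where B4 says they are. The peripheral
# belief B4 is given up long before anything as central as B3.
print(revise({"B3: unsupported objects fall",
              "B4: my keys are on the kitchen table"}))
```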

Final Words:

The dynamic nature of complexity is not just applicable to a pandemic but also to scientific paradigms. This is beautifully explained in the quote from Jacob Bronowski below:

“There is no permanence to scientific concepts because they are only our interpretations of natural phenomena … We merely make a temporary invention which covers that part of the world accessible to us at the moment”

Our beliefs shape our experiences as much as our experiences shape our beliefs, in a recursive manner. The web gets more complex as time goes on; some of the nodes become more distinct and others get hazier. We are prone to perpetual frustration if we try to apply a static framework to the dynamic, ever-changing domain of complexity. It gets more frustrating because patterns emerge on a continuous basis, providing an illusion of order. Static and rigid frameworks break because of their inability to absorb the variety thrown at them.

With this in mind, we should come to realize that we do not have a means to know the external world as-is. All we can know is how it appears to us, based on our web of belief. The pragmatic tradition of truth advises us to keep going in our search for truth, and tells us that this search is self-corrective. The correspondence theory fails us because the meaning we create is not independent of us, but very much a product of our web of belief. At the same time, if we don’t seek to understand others, the coherence theory will fail us, because we would lack the requisite variety needed to make sense of a complex phenomenon. I will finish with an excellent quote from Maureen Finnigan:

Human beings impose their own truth on life instead of seeking truth within life.

Stay safe and Always keep on learning… In case you missed it, my last post was Korzybski at the Gemba:

The Free Energy Principle at the Gemba:


In today’s post, I am looking at the Free Energy Principle (FEP) proposed by the British neuroscientist Karl Friston. The FEP basically states that in order to resist the natural tendency toward disorder, adaptive agents must minimize surprise. A good example is that successful fish typically find themselves surrounded by water, and only very atypically find themselves out of water, since being out of water for an extended time leads to a breakdown of homoeostatic (autopoietic) relations.[1]

Here the free energy refers to an information-theoretic construct:

Because the distribution of ‘surprising’ events is in general unknown and unknowable, organisms must instead minimize a tractable proxy, which according to the FEP turns out to be ‘free energy’. Free energy in this context is an information-theoretic construct that (i) provides an upper bound on the extent to which sensory data is atypical (‘surprising’) and (ii) can be evaluated by an organism, because it depends eventually only on sensory input and an internal model of the environmental causes of sensory input.[1]
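For readers who want to see the bound written out, here is the standard variational form; the notation is mine, not a quotation from [1]: o stands for sensory data, s for their hidden causes, p(o, s) for the organism’s generative model, and q(s) for its internal recognition density.

```latex
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s \mid o)\big] \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Because the KL divergence is never negative, F is an upper bound on the surprise -ln p(o), and it can be evaluated from the sensory input and the internal model alone, which is exactly points (i) and (ii) in the passage above.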

In FEP, our brains are viewed as predictive engines, or Bayesian inference engines. This idea builds on predictive coding/processing, which goes back to the 19th-century German physician and physicist Hermann von Helmholtz. The main idea is that a hierarchical structure in our brain tries to predict what is going to happen based on previously received sensory data. As philosopher Andy Clark explains, our brain is not a cognitive couch potato waiting for sensory input to make sense of what is going on; it is actively predicting what is going to happen next. This is why minimizing surprise is important. For example, when we lift a closed container, we predict that it will have a certain weight based on our previous experiences and the visual signal of the container. We are surprised if the container is light and can be lifted easily. We have a similar experience when we miss a step on the staircase.

From a mathematical standpoint, when our internal model matches the sensory input, we are not surprised. This fit can be quantified with the KL divergence from information theory: the lower the divergence, the better the fit between the model and the sensory input, and the lower the surprise. The hierarchical model is top-down. Predictions flow top-down, while sensory data flow bottom-up. If the model matches the sensory data, little goes up the chain. However, when there is a significant difference between the top-down prediction and the bottom-up incoming sensory data, the difference (the prediction error) is passed up the chain.

One of my favorite examples to explain this further is to imagine that you are in the shower with your radio playing. You can only faintly hear the radio over the water. When your favorite song plays, you feel like you can hear it better than when an unfamiliar song plays. This is because your brain is better able to predict what is coming, and the prediction helps smooth out the incoming auditory signals. British neuroscientist Anil Seth has a great quote regarding the predictive processing idea: “perception is controlled hallucination.”
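As a toy illustration of “lower divergence, better fit, lower surprise,” here is a small calculation; the distributions and numbers are invented for the example, not taken from any model of the brain:

```python
# Comparing a good and a poor prediction against the same sensory evidence,
# using KL divergence as the measure of mismatch ("surprise").
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Distributions over three weight categories for the container: light, medium, heavy.
sensory_evidence = [0.80, 0.15, 0.05]   # the container turns out to be light
good_prediction  = [0.70, 0.20, 0.10]   # prior experience said "probably light"
poor_prediction  = [0.05, 0.15, 0.80]   # prior experience said "probably heavy"

print(kl_divergence(sensory_evidence, good_prediction))  # small -> little surprise
print(kl_divergence(sensory_evidence, poor_prediction))  # large -> big surprise
```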

Andy Clark explains this further:

Perception itself is a kind of controlled hallucination… [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them.

(T)o perceive the world is to successfully predict our own sensory states. The brain uses stored knowledge about the structure of the world and the probabilities of one state or event following another to generate a prediction of what the current state is likely to be, given the previous one and this body of knowledge. Mismatches between the prediction and the received signal generate error signals that nuance the prediction or (in more extreme cases) drive learning and plasticity.

Predictive coding models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context — time and task allowing — to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.

What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive).

(T)he task of the perceiving brain is to account for (to accommodate or ‘explain away’) the incoming or ‘driving’ sensory signal by means of a matching top-down prediction. The better the match, the less prediction error then propagates up the hierarchy. The higher level guesses are thus acting as priors for the lower level processing, in the fashion (as remarked earlier) of so-called ‘empirical Bayes’.

The question of what happens when the prediction does not match is best explained by Friston:

“The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimize free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system’s state and structure encode an implicit and probabilistic model of the environment.”

Our brains are continuously sampling the data coming in and making predictions. When there is a mismatch between the prediction and the data, we have three options.

  • Update our model to match the incoming data.
  • Attempt to change the environment so that the incoming data match our model, or try resampling the data coming in.
  • Ignore and do nothing.

Option 3 does not always yield positive results. Option 1 is a learning process where we update our internal models based on the new evidence. Option 2 shows strong confidence in our internal model and in our ability to change the environment; or perhaps there is something wrong with the incoming data, and we have to gather more data to proceed. The toy sketch below illustrates the first two options.
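This sketch is a deliberately simplified caricature, not Friston’s mathematics: a single number stands in for the internal model and another for the state of the environment. “Perception” (Option 1) nudges the model toward the data, and “action” (Option 2) nudges the environment toward the model.

```python
# Toy illustration of shrinking prediction error by perception and by action.
def prediction_error(belief, observation):
    return observation - belief

def update_model(belief, observation, learning_rate=0.3):
    """Option 1 (perception/learning): move the internal estimate toward the data."""
    return belief + learning_rate * prediction_error(belief, observation)

def act_on_environment(environment, belief, step=0.3):
    """Option 2 (action): change the world so the data move toward the model."""
    return environment + step * (belief - environment)

belief, environment = 20.0, 25.0   # e.g. preferred vs. actual room temperature
for _ in range(5):
    observation = environment                               # sense the world
    belief = update_model(belief, observation)              # perception
    environment = act_on_environment(environment, belief)   # action
    print(round(belief, 2), round(environment, 2))
# The gap between model and environment (a stand-in for surprise) shrinks each cycle.
```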

The ideas from FEP can also further our understanding of how we balance maintaining the status quo (exploit) against going outside our comfort zones (explore). To paraphrase the English polymath George Spencer-Brown, the first act of cognition is to differentiate (the act of distinction). We start by differentiating: me/everything else. We experience and “bring forth” the world around us by constructing it inside our minds. This construction has to be a simpler version because of the very high complexity of the world around us. We only care about correlations that matter to us in our local environment; these matter the most for our survival and sustenance. This leads to a tension. We want to look for things that confirm our hypotheses and maintain the status quo; this is the short-term vision. However, it doesn’t help our sustenance in the long run. We also need to explore, to look for things that we don’t know about; this is the long-term vision, and it helps us prepare to adapt to the ever-changing environment. There is a balance to strike between the two.

The idea of FEP can go from “I model the world” to “we model the world” to “we model ourselves modelling the world.” As part of a larger human system, we can cocreate a shared model of our environment and collaborate to minimize the free energy leading to our sustenance as a society.

Final Words:

FEP is a fascinating field, and I encourage readers to check out the works of Karl Friston, Andy Clark, and others. I will finish with the insight from Friston that minimizing free energy is also a way of recognizing one’s own existence.

Avoiding surprises means that one has to model and anticipate a changing and itinerant world. This implies that the models used to quantify surprise must themselves embody itinerant wandering through sensory states (because they have been selected by exposure to an inconstant world): Under the free-energy principle, the agent will become an optimal (if approximate) model of its environment. This is because, mathematically, surprise is also the negative log-evidence for the model entailed by the agent. This means minimizing surprise maximizes the evidence for the agent (model). Put simply, the agent becomes a model of the environment in which it is immersed. This is exactly consistent with the Good Regulator theorem of Conant and Ashby (1970). This theorem, which is central to cybernetics, states that “every Good Regulator of a system must be a model of that system.” .. Like adaptive fitness, the free-energy formulation is not a mechanism or magic recipe for life; it is just a characterization of biological systems that exist. In fact, adaptive fitness and (negative) free energy are considered by some to be the same thing.

Always keep on learning…

In case you missed it, my last post was The Whole is ________ than the sum of its parts:

[1] The free energy principle for action and perception: A mathematical review. Christopher L. Buckley, Chang Sub Kim, Simon McGregor, Anil K. Seth (2017)

Clausewitz at the Gemba:


In today’s post, I will be looking at Clausewitz’s concept of “friction”. Carl von Clausewitz (1780-1831) was a Prussian general and military philosopher. Clausewitz is considered to be one of the best classical strategy thinkers and is well known for his unfinished work, “On War.” The book was published posthumously by his wife Marie von Brühl in 1832.

War is never a pleasant business, and it takes a terrible toll on people. The accumulated effect of factors such as danger, physical exertion, intelligence (or the lack of it), and the influence of environment and weather, all subject to chance and probability, is what distinguishes real war from war on paper. Friction, Clausewitz noted, was what separated war in reality from war on paper. Friction, as the name implies, hinders the proper and smooth execution of strategy and clouds the rational thinking of agents. He wrote:

War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.

Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war.

Friction is the only conception which, in a general way, corresponds to that which distinguishes real war from war on paper. The military machine, the army and all belonging to it, is in fact simple; and appears, on this account, easy to manage. But let us reflect that no part of it is in one piece, that it is composed entirely of individuals, each of which keeps up its own friction in all directions.

Clausewitz viewed friction as impeding our rational abilities to make decisions. He cleverly stated, “the light of reason is refracted in a manner quite different from that which is normal in academic speculation… the ordinary man can never achieve a state of perfect unconcern in which his mind can work with normal flexibility.” In a tense situation, as is most often the case in combat, the “freshness” or usefulness of the available information decays quickly, and its reliability is also in question.

Friction is what happens when reality differs from your model. Although Clausewitz’s concept of friction contains other elements, what I am interested in here is the friction coming from ambiguous information. Uncertainty and information are related to each other; in fact, one is the absence of the other. The only way to reduce uncertainty (to be certain) is to acquire the information that counters it. To quote Wikipedia, “Uncertainty refers to epistemic situations involving imperfect or unknown information.” If we have full information, we have no uncertainty: every piece of relevant information we gain removes a corresponding amount of uncertainty. The short calculation below makes this relationship concrete.
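This is a made-up example of my own, not Clausewitz’s or Wikipedia’s; Shannon entropy gives one way to see information as exactly the thing that removes uncertainty:

```python
# Uncertainty (entropy) before and after a piece of information arrives.
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before any report: four equally likely enemy positions.
prior = [0.25, 0.25, 0.25, 0.25]
# A reliable scouting report rules out two of them.
posterior = [0.5, 0.5, 0.0, 0.0]

print(entropy(prior))      # 2.0 bits of uncertainty
print(entropy(posterior))  # 1.0 bit of uncertainty
# The report carried 1 bit of information, and uncertainty fell by exactly 1 bit.
```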

We have two options to deal with the uncertainty due to informational friction:

  1. Reduce uncertainty by making useful information readily available to the required agents when and where it is needed.
  2. Come up with ways to tolerate uncertainty when we are not able to reduce it further.

As Moshe Rubinstein points out in his wonderful book, Tools for Thinking and Problem Solving, uncertainty is reduced only by acquisition of information and you need to ask three questions, in the order specified, when acquiring information.

  1. Is the information relevant? (is it current, and is the context applicable?)
  2. Is the information credible? (is it accurate?)
  3. Is the information worth the cost?

How should we proceed to minimize the friction?

  1. We should try to get the total picture, an understanding of the forest before we get lost in the trees. This helps us in realizing where our epistemic boundaries might be, and where we need to improve our learning.
  2. We should have the courage to ask questions and cast doubts on our world views. Even with our belief system, we can ask whether it is relevant and credible. We should try to ask – what is wrong with this picture? What am I missing?
  3. We should always keep on learning. We should not shy away from “hard projects.” We should see the challenges as learning experiences.
  4. We should expect, and be ready for, our plan to fail. We should understand what the “levers” in our plan are. What happens when we push on one lever versus pulling on another? We should use models with the understanding that they are not perfect, but that they help us understand things better. We should rely on heuristics and flexible rules of thumb; they bend rather than break when things go wrong.
  5. We should reframe our understanding from a different perspective. We can try to draw things out, write about them, or even talk them over with our spouse or family. Different viewpoints should be welcomed. We should generate multiple analogies and stories to help tell our side of the story. These will only help in furthering our understanding.
  6. When we make decisions under uncertainty and risk, each action can result in multiple outcomes, and most of the time these are unpredictable and can have large-scale consequences. We should engage in fast and safe-to-fail experiments and have strong feedback loops so we can change course and adapt as needed.
  7. We should have stable substructures when things fail. This allows us to go back to a previous “safe point” rather than go back all the way to the start.
  8. We should go to gemba to grasp the actual conditions and understand the context. Our ability to solve a problem is inversely proportional to the distance from the gemba.
  9. We should take time, as permissible, to detail out our plan, but we should be ready to implement it fast. Plan like a tortoise and run like a hare.
  10. We should go to the top to take a wide perspective, and then come down to have boots on the ground. We should take time to reflect on what went wrong and what went right, and on what our impact was on ourselves and others. This is the spirit of Hansei in the Toyota Production System.

Final Words:

Although not all of us are engaged in a war at the gemba, we can learn from Clausewitz about the friction from uncertainty, which impedes us on a daily basis. Clausewitz first used the term “friction” in a letter he wrote to his future wife, Marie von Brühl, in 1806. He described friction as the effect that reality has on ideas and intentions in war. Clausewitz was a man ahead of his time, and from his works we can see elements of systems thinking and complexity science.

We propose to consider first the single elements of our subject, then each branch or part, and, last of all, the whole, in all its relations—therefore to advance from the simple to the complex. But it is necessary for us to commence with a glance at the nature of the whole, because it is particularly necessary that in the consideration of any of the parts the whole should be kept constantly in view. The parts can only be studied in the context of the whole, as a “gestalt.”

Clausewitz realized that each war is unique and thus what may have worked in the past may not work this time. He said:

Further, every war is rich in particular facts; while, at the same time, each is an unexplored sea, full of rocks, which the general may have a suspicion of, but which he has never seen with his eye, and round which, moreover, he must steer in the night. If a contrary wind also springs up, that is, if any great accidental event declares itself adverse to him, then the most consummate skill, presence of mind and energy, are required; whilst to those who only look on from a distance, all seems to proceed with the utmost ease.

Clausewitz encourages us to get out of our comfort zone and gain as much variety of experience as we can. The variety of states in the environment is always larger than the variety of states we can hold. He continues to advise the following to reduce the impact of friction:

The knowledge of this friction is a chief part of that so often talked of, experience in war, which is required in a good general. Certainly, he is not the best general in whose mind it assumes the greatest dimensions, who is the most overawed by it (this includes that class of over-anxious generals, of whom there are so many amongst the experienced); but a general must be aware of it that he may overcome it, where that is possible; and that he may not expect a degree of precision in results which is impossible on account of this very friction. Besides, it can never be learnt theoretically; and if it could, there would still be wanting that experience of judgment which is called tact, and which is always more necessary in a field full of innumerable small and diversified objects, than in great and decisive cases, when one’s own judgment may be aided by consultation with others. Just as the man of the world, through tact of judgment which has become habit, speaks, acts, and moves only as suits the occasion, so the officer, experienced in war, will always, in great and small matters, at every pulsation of war as we may say, decide and determine suitably to the occasion. Through this experience and practice, the idea comes to his mind of itself, that so and so will not suit. And thus, he will not easily place himself in a position by which he is compromised, which, if it often occurs in war, shakes all the foundations of confidence, and becomes extremely dangerous.

US President Dwight Eisenhower said, “In preparing for battle I have always found that plans are useless, but planning is indispensable.” The act of planning helps us to conceptualize our future state. We should strive to minimize the internal friction, and we should be open to keep learning, experimenting, and adapting as needed to reach our future state. We should keep on keeping on:

“Perseverance in the chosen course is the essential counter-weight, provided that no compelling reasons intervene to the contrary. Moreover, there is hardly a worthwhile enterprise in war whose execution does not call for infinite effort, trouble, and privation; and as man under pressure tends to give in to physical and intellectual weakness, only great strength of will can lead to the objective. It is steadfastness that will earn the admiration of the world and of posterity.”

Always keep on learning…

In case you missed it, my last post was Exploring The Ashby Space:

Solving a Lean Problem versus a Six Sigma Problem:


I must confess upfront that the title of this post is misleading. Like the spoon boy in the movie The Matrix, I will say: there is no Lean problem, nor a Six Sigma problem. All these problems are our mental constructs of a perceived phenomenon. A problem statement is a model of the actual phenomenon that we believe is the problem. The problem statement is never the problem! It is a representation of the problem. We form the problem statement based on our vantage point, our mental models, and our biases. Such a constructed problem statement is thus incomplete and sometimes incorrect. We do not always ask for the problem statement to be reframed from the stakeholder’s viewpoint. A problem statement is an abstraction based on our understanding, and its usefulness lies in the abstraction. A good abstraction ignores and omits unwanted details, while a poor abstraction retains them or, worse, omits valid details. Our own cognitive background hinders our ability to frame the true nature of the problem. To give an analogy, a problem statement is like choosing a slice of cake: the slice represents the cake, but you picked the slice you wanted, a large portion of the cake is still left on the table, and nobody wants your slice once you have taken a bite out of it.

When we have to solve a problem, it puts tremendous cognitive stress on us. Our first instinct is to use what we know and what we feel comfortable with. Both Lean and Six Sigma use a structured framework that we feel might suit the purpose. However, depending upon what type of “problem” we are trying to solve, these frameworks may lack the variety they need to “solve” the problem. I have used the quotation marks on purpose. For example, Six Sigma relies on a strong cause-effect relationship and is quite useful for addressing a simple or complicated problem. A simple problem is one where the cause-effect relationship is obvious, whereas a complicated problem may require an expert’s perspective and experience to analyze and understand the cause-effect relationship. However, when you are dealing with a complex problem, which is non-linear, the cause-effect relationship is not entirely evident, and the use of a hard-structured framework like Six Sigma can actually cause more harm than benefit. All human-centered “systems” are complex systems. In fact, some might say that such systems do not even exist. To quote Peter Checkland, “In a certain sense, human activity systems do not exist; only perceptions of them exist, perceptions which are associated with specific worldviews.”

We all want and ask for simple solutions. However, simple solutions do not work for complex problems. The solution must match the variety of the problem being resolved. This can sometimes be confusing, since complex problems may have some ordered aspects that give the illusion of simplicity. Complex problems do not stay static. They evolve with time, and thus we should not assume that the problem we are trying to address still has the same characteristics it had when it was identified.

How should one go from here to tackle complex problems?

  • Take time to understand the context. In the complex domain, context is the key. We need to take our time and have due diligence to understand the context. We should slow down to feel our way through the landscape in the complex domain. We should break our existing frameworks and create new ones.
  • Embrace diversity. Complex problems require multidisciplinary solutions. We need multiple perspectives and worldviews to improve our general comprehension of the problem. This also calls for challenging our assumptions. We should make our assumptions and agendas as explicit as possible. The different perspectives allow for synthesizing a better understanding.
  • Similar to the second suggestion, learn from fields of study different from yours. Learn philosophy. Other fields give you additional variety that might come in handy.
  • Understand that our version of the problem statement is lacking, but still could be useful. It helps us to understand the problem better.
  • There is no one right answer to complex problems. Most solutions are good-enough for now. What worked yesterday may not work today since complex problems are dynamic.
  • Gain consensus and use scaffolding while working on the problem structure. Scaffolds are temporary structures that are removed once the actual construction is complete. Gaining consensus early on helps in aligning everybody.
  • Go to the source to gain a truer understanding. Genchi Genbutsu.
  • Have the stakeholders reframe the problem statement in their own words, and look for contradictions. Allow for further synthesis to resolve contradictions. The tension arising from the contradictions sometimes leads us to improve and refine our mental models.
  • Aim for common good and don’t pursue personal gains while tackling complex problems.
  • Establish communication lines and pay attention to feedback. Allow for local context while interpreting any new information.

Final Words:

I have written similar posts before. I invite the reader to check them out:

Lean, Six Sigma, Theory of Constraints and the Mountain

Herd Structures in ‘The Walking Dead’ – CAS Lessons

A successful framework relies on a mechanism of feedback-induced iteration and keenness to learn. The iteration function is imperative because the problem structure itself is often incomplete and inadequate. We should resist the urge to solve a Six Sigma or a Lean problem. I will finish with a great paraphrased quote from the Systems Thinker, Michael Jackson (not the famous singer):

To deal with a significant problem, you have to analyze and structure it. This means, analyzing and structuring the problem itself, not the system that will solve it. Too often we push the problem into the background because we are in a hurry to proceed to a solution. If you read most texts thoughtfully, you will see that almost everything is about the solution; almost nothing is about the problem.

Always keep on learning…

In case you missed it, my last post was Maurice Merleau-Ponty’s Lean Lessons: