Exploring The Ashby Space:


Today’s post is a follow-up to an earlier post, Solving a Lean Problem versus a Six Sigma Problem:

In today’s post, I am looking at “The Ashby Space.” The post is based on the works of Ross Ashby, Max Boisot, Bill McKelvey and Karl Weick. Ross Ashby was a prominent cybernetician, famous for his “Law of Requisite Variety,” which can be stated as “Only variety can destroy/absorb variety.” Ashby defined variety as the number of distinguishable states of a system. Stafford Beer used variety as a measure of complexity: the more variety a system has, the more complex it is. An important point to grasp is that the number of distinguishable states (and thus the variety) depends upon the ability of the observer; in this regard, the variety of a system is observer-dependent.
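To make the counting behind the law concrete, here is a minimal Python sketch of my own; the game-table framing follows the way Ashby discusses regulation in An Introduction to Cybernetics, but the code and numbers are purely illustrative. With six disturbances, only two responses, and a table in which no single response maps two disturbances to the same outcome, no regulator strategy can confine the outcome to fewer than 6/2 = 3 distinct values, so forcing a single desired outcome requires at least as many responses as disturbances.

```python
import math
import random
from itertools import product

def min_outcome_variety(table, disturbances, responses):
    """Try every regulator strategy (a choice of response for each disturbance)
    and return the smallest achievable set of distinct outcomes."""
    best = None
    for strategy in product(responses, repeat=len(disturbances)):
        outcomes = {table[(d, r)] for d, r in zip(disturbances, strategy)}
        if best is None or len(outcomes) < len(best):
            best = outcomes
    return best

if __name__ == "__main__":
    D = list(range(6))   # six disturbances (illustrative)
    R = list(range(2))   # only two responses available
    O = list(range(6))   # possible outcomes
    # Build a table in which no single response maps two disturbances to the
    # same outcome, so variety cannot be absorbed "for free".
    table = {}
    for r in R:
        column = random.sample(O, len(O))
        for d in D:
            table[(d, r)] = column[d]
    best = min_outcome_variety(table, D, R)
    print("best achievable outcome variety:", len(best))
    print("lower bound ceil(|D|/|R|)      :", math.ceil(len(D) / len(R)))
```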

Max Boisot and Bill McKelvey expanded upon the Law of Requisite Variety and stated that only complexity can destroy complexity. In other words, only internal complexity can destroy external complexity. If the regulatory agency of a system does not have the requisite variety to match the variety of its environment, it will not be able to adapt and survive. Ashby explained this using the example of a fencer:

If a fencer faces an opponent who has various modes of attack available, the fencer must be provided with at least an equal number of modes of defense if the outcome is to have the single value: attack parried.

Boisot and McKelvey restated Ashby’s law as: the range of responses that a living system must be able to marshal in its attempt to adapt to the world must match the range of situations—threats and opportunities—that it confronts. They explained this further using a graphical depiction they termed “the Ashby Space.” The Ashby Space has two axes: the horizontal axis represents the variety of responses, and the vertical axis represents the variety of stimuli. Ashby’s law can be represented by the 45˚ diagonal line, which marks the requisite variety where the stimuli variety matches the response variety. To adapt and survive, we should be on the diagonal line or below it. If we are above the diagonal line, the external variety surpasses the internal variety and we perish. Using Ashby’s fencer example, the fencer is able to defend against the opponent only if his defense variety matches or exceeds the opponent’s offense variety. This is shown below.

[Figure: the Ashby Space, with the variety of responses on the horizontal axis, the variety of stimuli on the vertical axis, and the 45˚ requisite-variety diagonal]

Boisot and McKelvey also depicted the Ordered, Complex and Chaotic regimes in the Ashby Space. In the ordered regime, the cause-effect relationships are distinguishable and the variety is generally low. The complex regime has a higher variety of stimuli and requires a higher variety of responses; its cause-effect relationships are non-linear and may make sense only in hindsight. The chaotic regime has the highest variety of stimuli. This is depicted in the schematic below. Although the three regimes appear equally sized in the schematic, this is just for representational purposes.

[Figure: the ordered, complex and chaotic regimes within the Ashby Space]

The next idea that we will explore in the Ashby Space is the Adaptive Frontier. Ashby proposed a strong need for reducing the amount of variety coming in from the external environment; he viewed this as the role of regulation. He also pointed out that the amount of regulation that can be achieved is limited by the amount of information that can be transmitted and processed by the system. This idea is depicted by the Adaptive Frontier curve. Any variety that lies outside this curve is outside the “adaptation budget” of the system: the system has neither the resources nor the capacity to process all the incoming variety, nor to allocate resources to choose appropriate responses. The adaptive frontier is shown in the schematic below as the red dotted curve.

[Figure: the adaptive frontier drawn as a red dotted curve in the Ashby Space]

Combining all the ideas above, the Ashby Space can be depicted as below.

[Figure: the Ashby Space combining the diagonal, the three regimes and the adaptive frontier]
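As a rough way to read the combined picture, here is a small sketch of my own. The diagonal test comes straight from the figure, but the shape of the adaptive frontier is an assumption for illustration (a simple cap on the total variety the system can afford to process), since the frontier is drawn only qualitatively.

```python
def classify(stimuli_variety: float, response_variety: float,
             adaptation_budget: float = 100.0) -> str:
    """Place a point in the Ashby Space relative to the diagonal and the frontier."""
    # Assumed frontier shape: a cap on the total variety the system can afford
    # to process; the actual shape of the curve is not specified in the post.
    if stimuli_variety + response_variety > adaptation_budget:
        return "outside the adaptive frontier (beyond the adaptation budget)"
    if stimuli_variety > response_variety:
        return "above the diagonal (stimuli variety exceeds response variety)"
    return "on or below the diagonal (requisite variety is met)"

if __name__ == "__main__":
    for point in [(10, 20), (40, 30), (70, 60)]:
        print(point, "->", classify(*point))
```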

Boisot and McKelvey detail three types of responses that a living system might follow in the presence of external stimuli. Consider the schematic below, where the agent is located at point “Q” in the Ashby Space, facing a stimuli variety of X.

  1. The Behaviorist – This is also referred to as the “headless chicken response.” When presented with the stimuli variety X, the agent tries to match the high variety in a haphazard fashion, soon finds itself outside the adaptive frontier, and perishes. The agent fails to filter out unwanted stimuli and fails to extract meaningful information from the incoming data.
  2. The Routinizer – The routinizer interprets the incoming stimuli as “seen it all before.” They filter out too much of the incoming data and either fail to recognize patterns or mis-categorize them. The routinizer relies on the schemas they already have, and their success lies in how well those schemas match the real-world variety-reducing regularities confronting them.
  3. The Strategist – An intelligent agent has to correctly interpret the data first and extract valid information about relevant regularities from the incoming stimuli. The agent then has to use existing schemas and match against existing patterns; if the patterns do not match, the agent has to develop new ones. As you go up in the Ashby Space, the complexity increases, and as you go down, it decreases. The schemas should have the required complexity to match the incoming stimuli. The agent should also be aware of the adaptive frontier and stay within the resource budget constraints. The strategist will try to filter out noise, use or develop appropriate schemas, and generate effectively complex responses (a rough sketch of this loop follows below).

[Figure: the behaviorist, routinizer and strategist responses from point Q in the Ashby Space]
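To tie the three styles together, here is a toy sketch of my own of the strategist’s loop; the names is_signal, schemas and budget are hypothetical illustrations, not Boisot and McKelvey’s terms. Filter out noise first, stay within the adaptation budget, reuse an existing schema when one matches, and develop a new one when none does. The behaviorist skips the filtering and the budget; the routinizer skips the schema development.

```python
def strategist_response(stimuli, schemas, is_signal, budget):
    """Filter noise, match signals against known schemas, extend schemas if needed."""
    signals = [s for s in stimuli if is_signal(s)]   # extract information from the data
    signals = signals[:budget]                       # stay inside the adaptive frontier
    responses = []
    for signal in signals:
        schema = next((sc for sc in schemas if sc["matches"](signal)), None)
        if schema is None:
            # No existing pattern fits: develop a new schema for this regularity.
            schema = {"matches": (lambda s, sig=signal: s == sig),
                      "respond": (lambda s: f"new response to {s}")}
            schemas.append(schema)
        responses.append(schema["respond"](signal))
    return responses

if __name__ == "__main__":
    schemas = [{"matches": str.isdigit, "respond": lambda s: f"routine response to {s}"}]
    stimuli = ["42", "noise", "fire", "7", "noise", "fire"]
    print(strategist_response(stimuli, schemas, is_signal=lambda s: s != "noise", budget=4))
```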

Final Words:

The Ashby Space is a great representation to keep in mind while coping with complexity. The ability of a system to discern what is meaningful and what is noise depends on the system’s past experiences, world views, biases and what it construes as morals and values. Boisot and McKelvey note that:

Not everything in a living system’s environment is relevant or meaningful for it, however. If it is not to waste its energy responding to every will-o’-the-wisp, a system must distinguish schema based on meaningful information (signals about real-world regularities judged important) from noise (meaningless signals). Note that what constitutes information or noise for a system is partly a function of the organism’s own expectations, judgments, and sensory abilities about what is important—as well as of its motivations—and hence, of its models of the world. Valid and timely representations (schema) economize on the organism’s scarce energy resources.

This also points to the role of sensemaking. As Karl Weick notes, “an increase in complexity can increase perceived uncertainty… Complexity affects what people notice and ignore… The variety in a firm’s repertory of beliefs should affect the amount of time it spends consciously struggling to make sense. The greater the variety of beliefs in a repertoire, the more fully should any situation be seen, the more solutions identified, and the more likely it should be that someone knows a great deal about what is happening.”

The models or representations we construct to represent a phenomenon do not have to be as complex as the phenomenon itself; the usefulness of a map lies in its abstraction. If the map were as complex as the city it represented, it would become identical to the city, an exact replica with the roads, buildings and so on. The system, however, should have the requisite variety. To achieve this, the system should be able to filter out unwanted variety and amplify its meaningful variety. The agent must wait for “meaningful” patterns to emerge, and keep learning.

The agent must also be careful not to claim victory or declare “Mission Accomplished” when dealing with complexity. Some portion of the stimuli variety may be met with the existing schema as part of routinizing. However, this does not mean that the requisite variety has been achieved. A broken clock tells the time correctly twice a day, but that does not mean the clock is functional.

I will finish off with a great insight from Max Boisot:

Note that we do not necessarily require an exact match between the complexity of the environment and the complexity of the system. After all, the complexity of the environment might turn out to be either irrelevant to the survival of the system or amenable to important simplifications. Here, the distinction between complexity as subjectively experienced and complexity as objectively given is useful. For it is only where complexity is in fact refractory to cognitive efforts at interpretation and structuring that it will resist simplification and have to be dealt with on its own terms. In short, only where complexity and variety cannot be meaningfully reduced do they have to be absorbed. So an interesting way of reformulating the issue that we shall be dealing with in this article is to ask whether the increase in complexity that confronts firms today has not, in effect, become irreducible or “algorithmically incompressible”? And if it has, what are the implications for the way that firms strategize?

Always keep on learning…

In case you missed it, my last post was Nietzsche’s Overman at the Gemba:

I welcome the reader to read further upon the ideas of Ross Ashby. Some of the references I used are:

  1. An Introduction to Cybernetics, Ross Ashby (1957)
  2. Requisite variety and its implications for the control of complex systems, Cybernetica 1:2, p. 83-99, Ross Ashby (1958)
  3. Complexity and Organization–Environment Relations: Revisiting Ashby’s Law of Requisite Variety, Max Boisot and Bill McKelvey (2011)
  4. Knowledge, Organization, and Management: Building on the Work of Max Boisot, Edited by John Child and Martin Ihrig (2013)
  5. Connectivity, Extremes, and Adaptation: A Power-Law Perspective of Organizational Effectiveness, Max Boisot and Bill McKelvey (2011)
  6. Counter-Terrorism as Neighborhood Watch: A Socio/Computational Approach for Getting Patterns from Dots, Max Boisot and Bill McKelvey (2004)
  7. Sensemaking in Organizations (Foundations for Organizational Science), Karl Weick (1995)

Know Your Edges:


In today’s post I will start with a question: “Do you know your edges?”

Edges are boundaries where a system or a process (depending upon your construction) breaks down or changes structure. Our preference, as the manager or the owner, is to stay in our comfort zone: a place where we know how things work, a place where we can predict how things will go, a place where we have the most certainty. Take a simple example, your daily commute to work: chances are high that you always take the same route. This is what you know, and you have high certainty about how long it will take you to get to work. Counterintuitively, the more certainty you have about something, the less information you have to gain from it. Our natural tendency is to seek more certainty, and we hate uncertainty. We think of uncertainty as a bad thing. If I can use a metaphor, uncertainty is like medicine – you need it to stay healthy!

To discuss this further, I will look at the concept of variety from Cybernetics. Variety is a concept that was put forth by William Ross Ashby, a giant in the world of Cybernetics. Simply speaking, variety is the number of states. If you look at a stop light, it generally has three states (Red, Yellow and Green). In other words, the stop light’s variety is three (ignoring flashing red and no light). With this, it is able to control traffic. When the stop light is able to match the ongoing traffic, everything is smooth. But when the volume of traffic increases, the stop light is not able to keep up, and the system reacts by slowing down the traffic. This shows that the variety in the environment is always greater than the variety available internally. The external variety also equates with uncertainty. Scaling up, let’s look at a manufacturing plant. The uncertainty comes in the form of the 6Ms (Man, Machine, Method, Material, Measurement and Mother Nature). The manager’s job is to reduce the uncertainty. This is done by filtering the variety imposed from the outside, magnifying the variety that is available internally, or looking at ways to improve the requisite variety. Ashby’s Law of Requisite Variety can be stated as “only variety can absorb variety.”
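As a small illustration of the stop light not keeping up, here is a toy queue simulation of my own, with made-up numbers: a fixed cycle can only release so many cars per green, so once arrivals per cycle exceed that capacity, the queue (and the delay) keeps growing.

```python
def simulate(arrivals_per_cycle: int, served_per_green: int, cycles: int = 50) -> int:
    """Return how many cars are still waiting after a number of signal cycles."""
    queue = 0
    for _ in range(cycles):
        queue += arrivals_per_cycle              # cars arriving during the cycle
        queue -= min(queue, served_per_green)    # cars released on the green phase
    return queue

if __name__ == "__main__":
    print("light traffic :", simulate(arrivals_per_cycle=8,  served_per_green=10))
    print("heavy traffic :", simulate(arrivals_per_cycle=14, served_per_green=10))
```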

All organizations are sociotechnical systems. This also means that in order to sustain themselves, they need to be complex adaptive systems. To improve adaptability, the system needs to keep learning. It may be counterintuitive, but uncertainty is required for a complex adaptive system to keep learning and to maintain the requisite variety to sustain itself. Thus, the push to stay away from uncertainty, or to stay in the comfort zone, could actually be detrimental. Metaphorically, staying in the comfort zone is staying away from the edges, where there is more uncertainty. After a basic level of stability is achieved, there is not much information available in the center (away from the edges). Since the environment is always changing, the organization has to keep updating its information to adapt and survive. This means that the organization should engage in safe-to-fail experiments and move away from its comfort zone to keep updating its information. The organization has to know where the edges are, and where the structures break down. Safe-to-fail experiments increase the solution space of the organization, making it better suited for challenges. These experiments are fast, small and reversible, and are meant to increase the experience of the organization without major risk. The organization cannot remain static and has to change with time. Experimentation away from the comfort zone provides direction for growth. It also shows where things can get catastrophic, so that the organization can be better prepared and move away from that direction.

This leads me to the concept of the “fundamental regulator paradox,” developed by Gerald Weinberg, an American computer scientist. When a system gets really good at what it does and nothing ever goes wrong, it becomes impossible to tell how well it is working. When strict rules and regulations are put in place to maintain “perfect order,” they can actually result in the opposite of what they were originally meant for. The paradox is stated as:

The task of a regulator is to eliminate variation, but this variation is the ultimate source of information about the quality of its work. Therefore, the better job a regulator does, the less information it gets about how to improve.
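The paradox can be put in information terms with a toy sketch of my own; the disturbance distribution and the idea of a regulator that fully corrects a fixed fraction of disturbances are assumptions for illustration only. As the regulator corrects more of the variation, the entropy of what remains observable shrinks toward zero, and that residual entropy is exactly the information it would need to keep improving.

```python
import math
import random
from collections import Counter

def residual_information(p_corrected: float, trials: int = 10_000) -> float:
    """Shannon entropy (bits) of the outcome when a fraction p_corrected of
    disturbances is fully corrected and the rest leak through unchanged."""
    outcomes = []
    for _ in range(trials):
        disturbance = random.choice([-2, -1, 0, 1, 2])
        outcomes.append(0 if random.random() < p_corrected else disturbance)
    counts = Counter(outcomes)
    return -sum((c / trials) * math.log2(c / trials) for c in counts.values())

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9, 0.99):
        print(f"corrects {p:4.0%} of disturbances -> "
              f"{residual_information(p):.2f} bits left to learn from")
```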

This concept also tells us that trying to stay in the comfort zone is never good and that we should not shy away from uncertainty. Exploring away from the comfort zone is how we can develop the adaptability and experience needed to survive.

Final Words:

This post is a further expansion of my recent tweet: https://twitter.com/harish_josev/status/1055977583261769728?s=11

Information is richest at the edges and lowest in the center. Equilibrium also lies away from the edges.

The two questions, “How good are you at something?” and “How bad are you at something?” may be logically equivalent. However, there is more opportunity to gain information from the second question since it leads us away from the comfort zone.

I will finish with a lesson from one of my favorite TV detectives, D.I. Richard Poole from Death in Paradise.

Poole noted that solving a murder is like solving a jigsaw puzzle. One has to work from the corners, then the edges, and then move towards the middle. Then everything will fall into place and start to make sense.

Always keep on learning…

In case you missed it, my last post was Bootstrap Kaizen: