Exploring The Ashby Space:


Today’s post is a follow-up to an earlier post, Solving a Lean Problem versus a Six Sigma Problem:

In today’s post, I am looking at “The Ashby Space.” The post is based on the works of Ross Ashby, Max Boisot, Bill McKelvey and Karl Weick. Ross Ashby was a prominent cybernetician who is famous for his “Law of Requisite Variety.” The Law of Requisite Variety can be stated as “only variety can destroy/absorb variety.” Ashby defined variety as the number of distinguishable states of a system. Stafford Beer used variety as a measure of complexity: the more variety a system has, the more complex it is. An important point to grasp here is that the number of distinguishable states (and thus the variety) depends on the ability of the observer; in this regard, the variety of a system is observer-dependent.
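Ashby also gave the law a quantitative form. As a sketch in his log-measure notation (my paraphrase of the argument in An Introduction to Cybernetics, not a quotation from the sources below), if V(D) is the variety of disturbances, V(R) the variety of responses, and V(O) the variety of outcomes, then:

```latex
% Law of Requisite Variety, sketched in log-measure form (a paraphrase, not a quote).
% V(D): variety of disturbances (stimuli), V(R): variety of responses,
% V(O): variety of outcomes that still gets through to the system.
\[
  V(O) \;\geq\; V(D) - V(R)
\]
% Holding the outcome to a single acceptable state therefore requires the
% regulator to supply at least as much variety as the disturbances:
\[
  V(R) \;\geq\; V(D)
\]
```

Every mode of disturbance that the responses cannot match shows up, unabsorbed, in the outcomes.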

Max Boisot and Bill McKelvey expanded upon the Law of Requisite Variety and stated that only complexity can destroy complexity. In other words, only internal complexity can destroy external complexity. If the regulating part of a system does not have the requisite variety to match the variety of its environment, the system will not be able to adapt and survive. Ashby explained this using the example of a fencer:

If a fencer faces an opponent who has various modes of attack available, the fencer must be provided with at least an equal number of modes of defense if the outcome is to have the single value: attack parried.

Boisot and McKelvey restated Ashby’s law as follows: the range of responses that a living system must be able to marshal in its attempt to adapt to the world must match the range of situations—threats and opportunities—that it confronts. They explained this further using a graphical depiction they termed “the Ashby Space.” The Ashby Space has two axes: the horizontal axis represents the Variety of Responses, and the vertical axis represents the Variety of Stimuli. Ashby’s law can be represented by the 45˚ diagonal line, which marks the requisite variety where the stimuli variety matches the response variety. To adapt and survive, we should be on the diagonal line or below it. If we are above the diagonal line, the external variety surpasses the internal variety and we perish. Using Ashby’s fencer example, the fencer is able to defend against the opponent only if his defense variety matches or exceeds the opponent’s offense variety. This is shown below.

[Figure: the Ashby Space, with the 45˚ requisite variety diagonal separating adaptation (on or below the line) from failure (above it)]
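To make the diagonal concrete, here is a minimal Python sketch (my own illustration, not code from Boisot and McKelvey; the counts for the fencer are made up) that checks which side of the requisite-variety line an agent falls on:

```python
def ashby_position(stimuli_variety: int, response_variety: int) -> str:
    """Locate an agent in the Ashby Space relative to the 45-degree diagonal.

    Above the diagonal: external (stimuli) variety exceeds internal (response)
    variety, so the agent cannot hold the outcome to a single acceptable value.
    On or below the diagonal: requisite variety is met.
    """
    if stimuli_variety > response_variety:
        return "above the diagonal: insufficient response variety, the agent perishes"
    return "on or below the diagonal: requisite variety met, the agent adapts"


# The fencer example: the opponent has five modes of attack.
print(ashby_position(stimuli_variety=5, response_variety=3))  # too few parries
print(ashby_position(stimuli_variety=5, response_variety=5))  # attack parried
```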

Boisot and McKelvey also depicted the Ordered, Complex and Chaotic regimes in the Ashby Space. In the ordered regime, the cause-effect relationships are distinguishable, and the variety is generally low. The complex regime has a higher variety of stimuli present and requires a higher variety of responses; here the cause-effect relationships are non-linear and may make sense only in hindsight. The chaotic regime has the highest variety of stimuli. This is depicted in the schematic below. Although the three regimes may appear equally sized in the schematic, this is only for representational purposes.

[Figure: the ordered, complex, and chaotic regimes in the Ashby Space]

The next idea that we will explore in the Ashby Space is the Adaptive Frontier. Ashby argued for a strong need to reduce the amount of variety coming in from the external environment; he viewed this as the role of regulation. Ashby pointed out that the amount of regulation that can be achieved is limited by the amount of information that can be transmitted and processed by the system. This idea is depicted by the Adaptive Frontier curve. Any variety that lies outside this curve is outside the “adaptation budget” of the system: the system has neither the resources nor the capacity to process all the incoming variety, nor to allocate resources toward choosing appropriate responses. The adaptive frontier is shown in the schematic below as the red dotted curve.

[Figure: the adaptive frontier shown as a red dotted curve in the Ashby Space]
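One hypothetical way to make the frontier concrete in the same toy sketch is to give the agent a fixed information-processing budget and treat only the points whose combined demand stays within that budget as reachable. The one-unit-per-state cost model below is my own illustrative assumption, not a formula from Ashby or Boisot:

```python
def within_adaptive_frontier(stimuli_variety: int,
                             response_variety: int,
                             processing_budget: int) -> bool:
    """Toy check: can the agent afford to register the stimuli and marshal the responses?

    The cost model (one unit of processing per distinguishable state on each axis)
    is an illustrative assumption; Ashby only states that regulation is limited by
    the information the system can transmit and process.
    """
    processing_demand = stimuli_variety + response_variety
    return processing_demand <= processing_budget


# Matching 5 stimuli states with 5 responses fits a budget of 12 units...
print(within_adaptive_frontier(5, 5, processing_budget=12))    # True
# ...but matching 20 with 20 lies outside the adaptive frontier.
print(within_adaptive_frontier(20, 20, processing_budget=12))  # False
```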

Combining all the ideas above, the Ashby Space can be depicted as below.

[Figure: the combined Ashby Space, with the requisite variety diagonal, the three regimes, and the adaptive frontier]

Boisot and McKelvey detail three types of responses that a living system might pursue in the presence of external stimuli. Consider the schematic below, where the agent is located at point “Q” in the Ashby Space, facing a stimuli variety of X.

  1. The Behaviorist – This is also referred to as the “headless chicken” response. When presented with the stimuli variety X, the agent pursues the headless chicken response of trying to match the high variety in a haphazard fashion and soon finds itself outside the adaptive frontier and perishes. The agent fails to filter out unwanted stimuli and fails to extract meaningful information from the incoming data.
  2. The Routinizer – The routinizer interprets the incoming stimuli as “seen it all before.” It filters out too much of the incoming data and either fails to recognize patterns or mis-categorizes them. The routinizer relies on the schemas it already has, and its success depends on how well those schemas match the real-world, variety-reducing regularities confronting the agent.
  3. The Strategist – An intelligent agent has to correctly interpret the data first and extract valid information about relevant regularities from the incoming stimuli. The agent then has to use its existing schemas and match against existing patterns; if the patterns do not match, the agent will have to develop new ones. As you go up in the Ashby Space the complexity increases, and as you go down it decreases, so the schemas should have the required complexity to match the incoming stimuli. The agent should also be aware of the adaptive frontier and stay within its resource budget. The strategist tries to filter out noise, uses or develops appropriate schemas, and generates effectively complex responses. A toy sketch contrasting the three stances follows the schematic below.

[Figure: the three types of responses for an agent at point Q in the Ashby Space]
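Here is that toy Python sketch (my own construction; the function names, stimuli, and schema sets are hypothetical illustrations, not anything from Boisot and McKelvey):

```python
from typing import List, Set


def behaviorist(stimuli: List[str]) -> List[str]:
    """Headless chicken: no filtering; tries to answer every stimulus one-for-one
    and quickly burns through the adaptation budget."""
    return [f"ad-hoc response to {s}" for s in stimuli]


def routinizer(stimuli: List[str], known_schemas: Set[str]) -> List[str]:
    """'Seen it all before': keeps only stimuli that fit an existing schema and
    silently drops everything else, including genuinely novel patterns."""
    return [f"routine response to {s}" for s in stimuli if s in known_schemas]


def strategist(stimuli: List[str], known_schemas: Set[str], noise: Set[str]) -> List[str]:
    """Filters out noise, reuses schemas where they fit, and flags residual novelty
    as something for which a new schema must be developed."""
    responses = []
    for s in stimuli:
        if s in noise:
            continue  # discard meaningless signals
        if s in known_schemas:
            responses.append(f"routine response to {s}")
        else:
            responses.append(f"develop new schema for {s}")
    return responses


stimuli = ["price shock", "rumor", "new competitor", "seasonal dip"]
schemas = {"price shock", "seasonal dip"}
noise = {"rumor"}
print(behaviorist(stimuli))                 # responds to everything, noise included
print(routinizer(stimuli, schemas))         # misses the new competitor entirely
print(strategist(stimuli, schemas, noise))  # filters noise, flags the novelty
```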

Final Words:

The Ashby Space is a great representation to keep in mind while coping with complexity. The ability of a system to discern what is meaningful and what is noise depends on the system’s past experiences, world views, biases, and what it construes as its morals and values. Boisot and McKelvey note that:

Not everything in a living system’s environment is relevant or meaningful for it, however. If it is not to waste its energy responding to every will-o’-the-wisp, a system must distinguish schema based on meaningful information (signals about real-world regularities judged important) from noise (meaningless signals). Note that what constitutes information or noise for a system is partly a function of the organism’s own expectations, judgments, and sensory abilities about what is important—as well as of its motivations—and hence, of its models of the world. Valid and timely representations (schema) economize on the organism’s scarce energy resources.

This also points to the role of sensemaking. As Karl Weick notes, “an increase in complexity can increase perceived uncertainty… Complexity affects what people notice and ignore… The variety in a firm’s repertory of beliefs should affect the amount of time it spends consciously struggling to make sense. The greater the variety of beliefs in a repertoire, the more fully should any situation be seen, the more solutions identified, and the more likely it should be that someone knows a great deal about what is happening.”

The models or representations we construct for a phenomenon do not have to be as complex as the phenomenon itself, just as the usefulness of a map lies in its abstraction. If the map were as complex as the city it represented, it would become identical to the city: an exact replica with all the roads, buildings and so on. The system, however, should have the requisite variety. To achieve this, the system should be able to filter out unwanted variety and amplify the meaningful variety. The agent must wait for “meaningful” patterns to emerge, and keep learning.

The agent must also be careful not to claim victory or “Mission Accomplished” when dealing with complexity. Some portion of the stimuli variety may be met with the existing schemas as part of routinizing. However, this does not mean that the requisite variety has been achieved. A broken clock tells the time correctly twice a day, but that does not mean the clock is functional.

I will finish off with a great insight from Max Boisot:

Note that we do not necessarily require an exact match between the complexity of the environment and the complexity of the system. After all, the complexity of the environment might turn out to be either irrelevant to the survival of the system or amenable to important simplifications. Here, the distinction between complexity as subjectively experienced and complexity as objectively given is useful. For it is only where complexity is in fact refractory to cognitive efforts at interpretation and structuring that it will resist simplification and have to be dealt with on its own terms. In short, only where complexity and variety cannot be meaningfully reduced do they have to be absorbed. So an interesting way of reformulating the issue that we shall be dealing with in this article is to ask whether the increase in complexity that confronts firms today has not, in effect, become irreducible or “algorithmically incompressible”? And if it has, what are the implications for the way that firms strategize?

Always keep on learning…

In case you missed it, my last post was Nietzsche’s Overman at the Gemba:

I welcome the reader to read further upon the ideas of Ross Ashby. Some of the references I used are:

  1. An Introduction to Cybernetics, Ross Ashby (1957)
  2. Requisite Variety and its Implications for the Control of Complex Systems, Cybernetica 1:2, pp. 83-99, Ross Ashby (1958)
  3. Complexity and Organization–Environment Relations: Revisiting Ashby’s Law of Requisite Variety, Max Boisot and Bill McKelvey (2011)
  4. Knowledge, Organization, and Management. Building on the Work of Max Boisot, Edited by John Child and Martin Ihrig (2013)
  5. Connectivity, Extremes, and Adaptation: A Power-Law Perspective of Organizational Effectiveness, Max Boisot and Bill McKelvey (2011)
  6. Counter-Terrorism as Neighborhood Watch: A Socio/Computational Approach for Getting Patterns from Dots, Max Boisot and Bill McKelvey (2004)
  7. Sensemaking in Organizations (Foundations for Organizational Science), Karl Weick (1995)