Direct and Indirect Constraints:

In today’s post, I am following on the theme of Lila Gatlin’s work on constraints and tying it in with cybernetics. Please refer to my previous posts here and here for additional background. As I discussed in the last post, Lila Gatlin used the analogy of language to explain the emergence of complexity in evolution. She postulated that less complex organisms such as invertebrates relied on D1 constraints to ensure that genetic material is passed on accurately over generations, while vertebrates maintained a constant level of D1 constraints and utilized D2 constraints to introduce novelty, leading to the complexification of the species. Gatlin noted that this is similar to Shannon’s second theorem, which shows that if a message is encoded properly, it can be sent over a noisy medium reliably. As Jeremy Campbell notes:

In Shannon’s theory, the essence of successful communication is that the message must be properly encoded before it is sent, so that it arrives at its destination just as it left the transmitter, intact and free from errors caused by the randomizing effects of noise. This means that a certain amount of redundancy must be built into the message at the source… In Gatlin’s new kind of natural selection, “second-theorem selection,” fitness is defined in terms very different from, and more abstract than, those of the classical theory of evolution. Fitness here is not a matter of strong bodies and prolific reproduction, but of genetic information coded according to Shannon’s principles.

The codes that made possible the so-called higher organisms, Gatlin suggests, were redundant enough to ensure transmission along the channel from DNA to protein without error, yet at the same time they possessed an entropy, in Shannon’s sense of “amount of potential information,” high enough to generate a large variety of possible messages.

Gatlin’s view was that complexity arose from the ability to introduce more variety while maintaining accuracy, in an optimal mix, much like human language, where new ideas constantly emerge while the underlying grammar, syntax, etc. are maintained. As Campbell continues:

In the course of evolution, certain living organisms acquired DNA messages which were coded in this optimum way, giving them a highly successful balance between variety and accuracy, a property also displayed by human languages. These winning creatures were the vertebrates, immensely innovative and versatile forms of life, whose arrival led to a speeding-up of evolution.

As Campbell puts it, vertebrates were agents of novelty. They were able to revolutionize their anatomy and body chemistry, to evolve more rapidly, and to adapt to their surroundings. The first known vertebrates were bottom-dwelling fish that lived over 350 million years ago. They had heavy external skeletons that anchored them to the floor of the water body. Over time, some of the spiny parts of the skeleton evolved into fins, and they developed skulls with openings for sense organs such as the eyes, nose, and ears. Later, some of them developed limbs from the bony supports of fins, leading to the rise of amphibians.

What kind of error-correcting redundancy did the DNA of these evolutionary prize winners, the vertebrates, possess? It had to give them the freedom to be creative, to become something markedly different, for their emergence was made possible not merely by changes in the shape of a common skeleton, but rather by developing whole new parts and organs of the body. Yet this redundancy also had to provide them with the constraints needed to keep their genetic messages undistorted.

Gatlin defined the first type of redundancy, the one that allows deviation from equiprobability, as the ‘D1 constraint’. This is also referred to as a ‘governing constraint’. The second type of redundancy, the one that allows deviation from independence, was termed by Gatlin the ‘D2 constraint’, and this is also referred to as an ‘enabling constraint’. Gatlin’s speculation was that vertebrates were able to use both D1 and D2 constraints to increase their complexification, ultimately leading to a highly cognitive species such as our own, Homo sapiens.
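
In Gatlin’s information-theoretic terms, D1 and D2 can be estimated directly from symbol frequencies: D1 is the shortfall of the single-symbol entropy from its maximum, and D2 is the further drop when digram dependence is taken into account. Below is a minimal Python sketch of that calculation; the function names and the toy DNA string are my own, and a real analysis would use much longer sequences.

```python
from collections import Counter
from math import log2

def shannon_entropy(counts):
    """Shannon entropy (bits) of a frequency table."""
    total = sum(counts.values())
    return -sum(n / total * log2(n / total) for n in counts.values() if n)

def gatlin_constraints(seq, alphabet="ACGT"):
    """Estimate Gatlin's redundancies for a symbol sequence.

    D1 = Hmax - H1  (deviation from equiprobability)
    D2 = H1 - Hm    (deviation from independence, first-order digrams)
    """
    h_max = log2(len(alphabet))
    h1 = shannon_entropy(Counter(seq))
    digrams = Counter(zip(seq, seq[1:]))
    # Chain rule: H(next | current) = H(current, next) - H(current)
    h_m = shannon_entropy(digrams) - shannon_entropy(Counter(seq[:-1]))
    return h_max - h1, h1 - h_m

d1, d2 = gatlin_constraints("ATGCGATACGATTACAGGCT")
print(f"D1 = {d1:.3f} bits, D2 = {d2:.3f} bits")
```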

One of the pioneers of Cybernetics, Ross Ashby, also looked at a similar question. He was looking at the biological learning mechanisms of “advanced” organisms. Ashby identified that for less complex organisms, the main source of regulation is their gene pattern. For Ashby, regulation is linked to viability or survival. He noted that less complex organisms can rely on their gene pattern alone to continue to survive in their environment; they are adapted because their conditions have been constant over many generations. In other words, a simple organism such as a hunting wasp can hunt and survive based purely on its genetic information. It does not need to learn in order to adapt; it can adapt with what it has. Ashby referred to this as direct regulation. With direct regulation, there is a limit to adaptation: if the regularities of the environment change, the hunting wasp will not survive, because it relies on those regularities for its survival. Ashby contrasted this with indirect regulation, through which adaptation can be amplified. Indirect regulation is the learning mechanism that allows the organism to adapt. A great example of this is a kitten. As Ashby notes:

This (indirect regulation) is the learning mechanism. Its peculiarity is that the gene-pattern delegates part of its control over the organism to the environment. Thus, it does not specify in detail how a kitten shall catch a mouse, but provides a learning mechanism and a tendency to play, so that it is the mouse which teaches the kitten the finer points of how to catch mice.

The learning mechanism in its gene pattern does not directly teach the kitten to hunt mice. However, chasing mice and interacting with them trains the kitten to catch them. As Ashby notes, the gene pattern is supplemented by information supplied by the environment; part of the regulation is delegated to the environment.

In the same way the gene-pattern, when it determines the growth of a learning animal, expends part of its resources in forming a brain that is adapted not only by details in the gene-pattern but also by details in the environment. The environment acts as the dictionary: while the hunting wasp, as it attacks its prey, is guided in detail by its genetic inheritance, the kitten is taught how to catch mice by the mice themselves. Thus, in the learning organism the information that comes to it by the gene-pattern is much supplemented by information supplied by the environment; so, the total adaptation possible, after learning, can exceed the quantity transmitted directly through the gene-pattern.

Ashby further notes:

As a channel of communication, it has a definite, finite capacity, Q say. If this capacity is used directly, then, by the law of requisite variety, the amount of regulation that the organism can use as defense against the environment cannot exceed Q.  To this limit, the non-learning organisms must conform. If, however, the regulation is done indirectly, then the quantity Q, used appropriately, may enable the organism to achieve, against its environment, an amount of regulation much greater than Q. Thus, the learning organisms are no longer restricted by the limit.

As I look at Ashby’s ideas, I cannot help but see similarities between the D1/D2 constraints and direct/indirect regulation, respectively. Indirect regulation, similar to enabling constraints, helps the organism adapt to its environment by connecting things together. Indirect regulation has a second-order nature to it, such as learning how to learn. It works by being open to possibilities when interacting with the environment. It brings novelty into the situation. Similar to governing constraints, direct regulation focuses only on the accuracy of the ‘message’; nothing additional is possible, and no amplification can occur. Direct regulation is hardwired, whereas indirect regulation is enabling. Direct regulation is context-free, whereas indirect regulation is context-sensitive. What the hunting wasp does is entirely determined by its gene pattern, no matter the situation, whereas what a kitten does depends on the context of the situation.
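
To make the contrast concrete, here is a toy simulation of my own construction (all names and numbers invented): a “direct” regulator carries a fixed cue-to-action table, its gene pattern, while an “indirect” regulator lets the environment’s feedback fill the table in. When the environment’s regularities change midway, only the learner keeps regulating.

```python
import random

random.seed(0)

def run(regulator, generations=200):
    """Score a regulator in an environment whose regularities shift midway."""
    mapping = {"cue_a": "act_1", "cue_b": "act_2"}   # the environment's regularities
    score = 0
    for t in range(generations):
        if t == generations // 2:                    # the regularities change
            mapping = {"cue_a": "act_2", "cue_b": "act_1"}
        cue = random.choice(sorted(mapping))
        action = regulator.act(cue)
        success = action == mapping[cue]
        regulator.learn(cue, action, success)
        score += success
    return score

class Direct:
    """Hardwired regulation: a fixed cue-to-action table (the 'gene pattern')."""
    table = {"cue_a": "act_1", "cue_b": "act_2"}
    def act(self, cue):
        return self.table[cue]
    def learn(self, cue, action, success):
        pass                                         # cannot learn

class Indirect:
    """Learning regulation: the environment's feedback fills in the table."""
    def __init__(self):
        self.table = {}
    def act(self, cue):
        return self.table.get(cue, random.choice(["act_1", "act_2"]))
    def learn(self, cue, action, success):
        if success:
            self.table[cue] = action                 # keep what works
        else:
            self.table.pop(cue, None)                # "try something else"

print("direct regulation:  ", run(Direct()))        # fails after the change
print("indirect regulation:", run(Indirect()))      # recovers after the change
```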

Final Words:

Cybernetics can be looked at as the study of possibilities, especially the question of why, out of all the possibilities, only certain outcomes occur. There are strong undercurrents of information theory in Cybernetics. For example, in information theory, entropy is a measure of how many messages might have been sent but were not. If there are many possible messages available and only one is selected, the selection eliminates a great deal of uncertainty; this represents a high-information scenario. Indirect regulation allows us to look at the different possibilities and adapt as needed. Additionally, indirect regulation allows us to retain our successes and failures, and the lessons learned from them.

I will finish with a great lesson from Ashby to explain the idea of the indirect regulation:

If a child wanted to discover the meanings of English words, and his father had only ten minutes available for instruction, the father would have two possible modes of action. One is to use the ten minutes in telling the child the meanings of as many words as can be described in that time. Clearly there is a limit to the number of words that can be so explained. This is the direct method. The indirect method is for the father to spend the ten minutes showing the child how to use a dictionary. At the end of the ten minutes the child is, in one sense, no better off; for not a single word has been added to his vocabulary. Nevertheless, the second method has a fundamental advantage; for in the future the number of words that the child can understand is no longer bounded by the limit imposed by the ten minutes. The reason is that if the information about meanings has to come through the father directly, it is limited to ten-minutes’ worth; in the indirect method the information comes partly through the father and partly through another channel (the dictionary) that the father’s ten-minute act has made available.

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was D1 and D2 Constraints:

More Notes on Constraints in Cybernetics:

In today’s post, I am looking further at constraints. Please see here for my previous post on this. Ross Ashby is one of the main pioneers of Cybernetics, and his book “An Introduction to Cybernetics” still remains an essential read for a cybernetician. Alicia Juarrero is a Professor Emerita of Philosophy at Prince George’s Community College (MD), and is well known for her book, “Dynamics in Action: Intentional Behavior as a Complex System”.

I will start off with the basic idea of a system and then proceed to complexity from a Cybernetics standpoint. A system is essentially a collection of variables that an observer has chosen in order to make sense of something. Thus, a system is a mental construct rather than an objective reality; from this standpoint, it is entirely contingent upon the observer. Ashby’s view of complexity was in terms of variety, the number of possible states of a system. A good example is a light switch. It has two states, ON and OFF, so we can say a light switch has a variety of 2. Complexity is expressed in terms of variety: the more variety a system has, the more possibilities it possesses. A light switch and a person combined have indefinite variety, since the person can communicate messages simply by turning the switch ON and OFF in a logical sequence such as Morse code.

Now let’s look at constraints. A constraint exists when the variety of a system has diminished. Ashby gives the example of a boys-only school. The variety for sex in humans is 2. If a school has a policy that only boys are allowed in that school, the variety has decreased from 2 to 1. We can say that a constraint exists at the school.
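
As a tiny illustration of this counting (the example values are my own), variety multiplies across independent components, and a constraint shows up as a drop in the number of countable states:

```python
from itertools import product

switch = ["ON", "OFF"]                        # variety 2
stoplight = ["RED", "YELLOW", "GREEN"]        # variety 3

# Independent components combined: varieties multiply.
print(len(list(product(switch, stoplight))))  # 2 x 3 = 6 joint states

# A constraint diminishes variety: the boys-only admission policy.
sexes = {"boy", "girl"}                       # variety 2
admitted = {s for s in sexes if s == "boy"}   # variety 1
print(f"variety {len(sexes)} -> {len(admitted)} under the school policy")
```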

Ashby indicated that we should be looking at all possibilities when we are trying to manage a situation. Our main job is to influence the outcomes so that certain outcomes are more likely than others. We do this through constraints. Ashby noted:

The fundamental questions in regulation and control can be answered only when we are able to consider the broader set of what it (system) might do, when ‘might’ is given some exact specification.

We can describe what we have been talking about so far with a simple schematic. We try to imagine the possible outcomes of the system when we interact with it, and we utilize constraints so that certain outcomes, say P2 and P4, are more likely to occur. There may be other outcomes that we do not know of or cannot imagine. Ashby advises that cybernetics is not about trying to understand what a system is, but what a system does. We have to imagine the set of all possible outcomes so that we can guide or influence the system by managing variety. The external variety is always more than the internal variety. Therefore, to manage a situation, we have to at least match the variety of the system. We do this by attenuating the unwanted variety and by amplifying our internal variety so that we can match the variety thrown at us by the system. This is also expressed as Ashby’s Law of Requisite Variety: only variety can absorb variety. Ashby stated:

Cybernetics looks at the totality, in all its possible richness, and then asks why the actualities should be restricted to some portion of the total possibilities.

Ashby talked about several versions of constraints, among them slight and severe constraints. He gave the example of a squad of soldiers. If the soldiers are asked to line up without any instructions, they have maximum freedom, or minimum constraint, in doing so. If the order were given that no man may stand next to a man whose birthday falls on the same day, the constraint would be slight, for of all the possible arrangements few would be excluded. If, however, the order were given that no man was to stand at the left of a man who was taller than himself, the constraint would be severe; for it would, in fact, allow only one order of standing (unless two men were of exactly the same height). The intensity of the constraint is thus shown by the reduction it causes in the number of possible arrangements.
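
Ashby’s squad example can be checked by brute force. In this sketch (a hypothetical five-man squad with invented heights and birthdays), the slight rule excludes only some of the 120 possible line-ups, while the severe rule leaves exactly one:

```python
from itertools import permutations

heights   = [180, 175, 172, 168, 165]            # distinct heights, in cm
birthdays = ["Mon", "Tue", "Mon", "Wed", "Tue"]  # some shared birthdays

soldiers = range(len(heights))
total = slight = severe = 0
for order in permutations(soldiers):
    total += 1
    # Slight constraint: no man next to a man with the same birthday.
    if all(birthdays[a] != birthdays[b] for a, b in zip(order, order[1:])):
        slight += 1
    # Severe constraint: no man stands at the left of a taller man.
    if all(heights[a] >= heights[b] for a, b in zip(order, order[1:])):
        severe += 1

print(f"{total} arrangements; {slight} satisfy the slight rule; "
      f"{severe} satisfy the severe rule")
```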

Another way Ashby talked about constraints was by identifying constraints in vectors. Here, multiple factors are combined into a vector and the resultant constraint is considered. The example Ashby gave was that of an automobile, with the vector shown below:

(Age of car, Horse-power, Color)

He noted that each component has a variety that may or may not be dependent on the other components. If the components are independent, the variety of the whole vector (in logarithmic measure) is the sum of the varieties of the individual components. If the components are dependent on each other, the variety of the whole is less than that sum; the dependence itself acts as a constraint. This is an interesting point to look at further. Imagine that we are looking at a team of, say, Persons A, B, and C. Since each person can come up with indefinite possibilities, the resultant variety of the team is also indefinite. If we allow these indefinite possibilities to emerge, as in innovation or the invention of new ideas or products, constraints can play a role. When we introduce thinking agents into the mix, the number of possibilities goes up.
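
In logarithmic measure this is easy to verify. The sketch below uses an invented set of observed (age, horse-power, colour) combinations: if the components were independent, every combination of component values could occur; dependence means fewer joint states, and the shortfall measures the constraint:

```python
from math import log2

# Invented set of car types actually on the market: (age, horse-power, colour).
observed = {
    ("old", "low", "black"), ("old", "low", "grey"),
    ("new", "high", "red"), ("new", "high", "black"),
    ("new", "low", "grey"),
}

ages    = {a for a, _, _ in observed}   # component varieties
powers  = {p for _, p, _ in observed}
colours = {c for _, _, c in observed}

sum_of_parts = log2(len(ages)) + log2(len(powers)) + log2(len(colours))
whole_vector = log2(len(observed))

print(f"sum of component varieties:   {sum_of_parts:.2f} bits")
print(f"variety of the whole vector:  {whole_vector:.2f} bits")
print(f"constraint due to dependence: {sum_of_parts - whole_vector:.2f} bits")
```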

Complexity is about managing variety, about allowing room for possibilities in order to tackle complexity. Ashby famously noted that a world without constraints would be totally chaotic; his point is that if a constraint exists, it can be used to tackle complexity. Allowing parts to depend upon each other introduces constraints that can cut down on unwanted variety while still allowing innovative possibilities to emerge. The controller’s goal is to manage variety and make certain possible outcomes more likely than others. For this, the first step is to imagine the total set of possible outcomes to the best of one’s abilities. This means that the controller has to have a good imagination and a creative mind, which points to the role of the observer in seeing and identifying the possibilities. Ashby referred to the set of possibilities as the “product space.” He noted that its chief peculiarity is that it contains more than actually exists in the real physical world, for it is the latter that gives us the actual, constrained subset.

The real world gives the subset of what is; the product space represents the uncertainty of the observer. The product space may therefore change if the observer changes; and two observers may legitimately use different product spaces within which to record the same subset of actual events in some actual thing. The “constraint” is thus a relation between observer and thing; the properties of any particular constraint will depend on both the real thing and on the observer. It follows that a substantial part of the theory of organization will be concerned with properties that are not intrinsic to the thing but are relational between the observer and thing.

A keen reader might be wondering how the ideas of constraints stack up against Alicia Juarrero’s versions of constraints. More on this in a future post.  I will finish with a wonderful tribute to Ross Ashby from John Casti:

The striking fact is that Ashby’s idea of the variety of a system is amazingly close to many of the ideas that masquerade today under the rubric “complexity.”

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning… In case you missed it, my last post was Towards or Away – Which Way to Go?

The Extended Form of the Law of Requisite Variety:

This is a follow-up to last week’s post, Notes on Regulation. In today’s post, I am looking at Arvid Aulin-Ahmavaara’s extended form of the law of requisite variety (using Francis Heylighen’s version). As I have noted previously, Ross Ashby, the great mind and pioneer of Cybernetics, came up with the law of requisite variety (LRV). The law can be stated as: only variety can absorb variety. Here variety is the number of possible states available to a system. This is equivalent to statistical entropy. For example, a coin can be shown to have a variety of two, Heads and Tails. Thus, if a user wants a way to randomly choose one of two outcomes, the coin can be used: the user can toss the coin to randomly choose one of the two options. However, if the user has 6 choices, they cannot use the coin to randomly choose one of six outcomes efficiently. In this case, a six-sided die can be used. A six-sided die has a variety of six. This is a simple explanation of variety absorbing variety.

The controller can find ways to amplify variety to still meet the external variety thrown upon the system. Let’s take the example of the coin and six choices again. It is possible for the user to toss the coin three times, or use three coins, and use the three coin-toss results to make a choice (the variety of three coin tosses is 8). This is a means of amplifying variety in order to acquire requisite variety. From a cybernetics standpoint, the goal of regulation is to ensure that external disturbances do not reach the essential variables. The essential variables are important for a system’s viability. If we take the example of an animal, some of the essential variables are blood pressure, body temperature, etc. The essential variables must be kept within a specific range to ensure that the animal continues to survive. The external disturbances are denoted by D, the essential variables by E, and the actions available to the regulator by A. As noted, variety is expressed as the statistical entropy of the variable. As Aulin-Ahmavaara notes: if A is a variable of any kind, the entropy H(A) is a measure of its variety.
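
A sketch of that amplification (my own construction, not from the post’s sources): three coin flips give eight equally likely outcomes, six of which map onto die faces; the two surplus outcomes are simply retried so the die stays fair.

```python
import random

def d6_from_coin(flip=lambda: random.randint(0, 1)):
    """Roll a fair six-sided die using only fair coin flips.

    Three flips give 2**3 = 8 equally likely outcomes -- enough variety
    to cover 6 choices; the 2 surplus outcomes are rejected and retried.
    """
    while True:
        value = 4 * flip() + 2 * flip() + flip()  # 0..7, uniform
        if value < 6:
            return value + 1                      # 1..6, uniform

print([d6_from_coin() for _ in range(10)])
```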

With this background, we can note the extended form of the Law of Requisite Variety as:

H(E) ≥ H(D) – H(A) + H(A|D) – B

Each H term represents the statistical entropy of the corresponding variable. For example, H(E) is the statistical entropy of the essential variables. The larger the value of H, the more uncertainty there is around the variable. The goal for the controller is to keep H(E) as low as possible, since a larger entropy for the essential variables indicates a larger range of values for them. If the essential variables are not kept within a small range of values, the viability of the organism is compromised. We can now look at the other terms of the inequality and see how the value of H(E) can be kept low.

Heylighen notes:

This means that H(E) should preferably be kept as small as possible. In other words, any deviations from the ideal values must be efficiently suppressed by the control mechanism. The inequality expresses a lower bound for H(E): it cannot be smaller than the sum on the right-hand side. That means that if we want to make H(E) smaller, we must try to make the right-hand side of the inequality smaller. This side consists of four terms, expressing respectively the variety of disturbances H(D), the variety of compensatory actions H(A), the lack of requisite knowledge H(A|D) and the buffering capability B.

As noted, D represents the external disturbances, and H(D) is the variety of disturbances coming in. If H(D) is large, it generally drives H(E) up as well. Thus, an organism in a complex environment is more likely to face adversities that might push the essential variables outside the safe range. For example, you are less likely to die while sitting in your armchair than while trekking through the Amazonian rain forest or wandering through the concrete jungle of a megacity. A good rule of thumb for survivability is to avoid environments that have a larger variety of disturbances.

The term H(A) represents the variety of actions available to counter the disturbances. The more variety you have in your actions, the more likely it is that at least one of them will solve the problem, escape the danger, or restore you to a safe, healthy state. Thus, the Amazonian jungle may not be so dangerous for an explorer who has a gun to shoot dangerous animals, medicines to treat disease or snakebite, filters to purify water, and the physical condition to run fast or climb trees if threatened. The term H(A) enters the inequality with a minus (–) sign, because a wider range of actions allows you to maintain a smaller range of deviations in the essential variables H(E).

The term H(A|D) represents a conditional state. It is also called the lack of requisite knowledge. It has a plus sign since it indicates a “lack”. It is not enough that you have a wide range of actions, you have to know which action will be effective. If you have minimal knowledge, then your best strategy is to try out each action at random, and this is highly inefficient and ineffective if time is not on your side. For example, there is little use in having a variety of antidotes for different types of snakebites if you do not know which snake bit you. H(A|D) expresses your uncertainty about performing an action A (e.g., taking a particular antidote) for a given disturbance D (e.g., being bitten by a particular snake). The larger your uncertainty, the larger the probability that you would choose a wrong action, and thus fail to reduce the deviation H(E). Therefore, this term has a “+” sign in the inequality: more uncertainty (= less knowledge) produces more potentially lethal variation in your essential variables.

The final term B stands for buffering (passive regulation). It expresses your amount of protective reserves or buffering capacity. Better even than applying the right antidote after a snake bite is to wear protective clothing thick enough to stop any snake poison from entering your blood stream. The term is negative because higher capacity means less deviation in the essential variables.

The law of requisite variety expresses in an abstract form what is needed for an organism to prevent or repair the damage caused by disturbances. If this regulation is insufficient, damage will accumulate, including damage to the regulation mechanisms themselves. This produces an acceleration in the accumulation of damage, because more damage implies less prevention or repair of further damage, and therefore a higher rate of additional damage.

The optimal form of the Law of Requisite Variety occurs when the minimum value of H(E) is achieved and there is no lack of requisite knowledge. The essence of regulation is that disturbances happen all the time, but their effects are neutralized before they have irreparably damaged the organism. This optimal result of regulation is represented as:

H(E)min = H(D) – H(A) – B
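
As a toy numerical check of the inequality (all counts, labels, and the buffering value B are invented for illustration), the entropies can be computed from a joint frequency table of disturbances and the actions selected against them, using the chain rule H(A|D) = H(A,D) - H(D):

```python
from collections import Counter
from math import log2

def H(counts):
    """Shannon entropy (bits) of a frequency table."""
    total = sum(counts.values())
    return -sum(n / total * log2(n / total) for n in counts.values() if n)

# Invented joint counts of (disturbance, selected action).
joint = Counter({("snakebite", "antidote"): 40, ("snakebite", "flee"): 10,
                 ("predator", "flee"): 45, ("predator", "antidote"): 5})

marg_D, marg_A = Counter(), Counter()
for (d, a), n in joint.items():
    marg_D[d] += n
    marg_A[a] += n

H_D, H_A = H(marg_D), H(marg_A)
H_A_given_D = H(joint) - H_D       # chain rule: H(A|D) = H(A,D) - H(D)
B = 0.2                            # assumed buffering capacity, in bits

bound = H_D - H_A + H_A_given_D - B
print(f"H(D)={H_D:.2f}  H(A)={H_A:.2f}  H(A|D)={H_A_given_D:.2f}")
print(f"H(E) can be no smaller than {bound:.2f} bits")
```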

I encourage the reader to check out my previous posts on the LRV.

Getting Out of the Dark Room – Staying Curious:

Notes on Regulation:

Storytelling at the Gemba:

Exploring The Ashby Space:

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Notes on Regulation:

References:

[1] Cybernetic Principles of Aging and Rejuvenation: the buffering-challenging strategy for life extension – Francis Heylighen

[2] The Law of Requisite Hierarchy – A. Y. Aulin-Ahmavaara

Notes on Regulation:

In today’s post, I am looking at the idea of regulation. I talked about direct and indirect regulation in my previous post. In today’s post, I will look at passive and active regulation.

Ashby viewed a system as a selection of variables chosen by an observer for the purpose of sensemaking and control. The observer is looking not at what the system is (what the variables are), but at what the system does. In other words, the observer is interested in the behavior of the system, and in influencing that behavior so that the system is maintained in certain desirable states. Of all the possible states the system can be in, the observer would like to keep the system in a chosen few. To achieve this, the observer has to model the behavior of the system. As J. Achterbergh and D. Vriens note:

we should “model” the behavior of this entity (system) in such a way that we can understand how it behaves in the first place, and how this behavior reacts to “influences.” One could say that (at least) two kinds of influences on behavior (input) can be discerned: “disturbances” – causing the concrete entity to behave “improperly,” and “regulatory actions” – causing “proper” behavior (by preventing or dealing with disturbances).

The general understanding is that environmental disturbances cause the system to behave improperly. The role of the regulator is to prevent the disturbances from reaching the essential variables of the system. The controller sets the target for the system, while the regulator acts to realize the target. An easy example to distinguish the controller from the regulator is a thermostat. The homeowner is the controller, while the thermostat is the regulator. The homeowner decides the set range, and all the thermostat can do is turn the heating on or off depending upon the temperature inside the house. The regulator is not able to change the target; only the controller can change the target.

The goal of the regulator, as noted above, is to ensure that the disturbances from outside do not impact the essential variables of the system. Ashby noted:

An essential feature of the good regulator is that it blocks the flow of variety from disturbances to essential variables.

J. Achterbergh and D. Vriens expand this further:

Regulators block variety flowing from disturbances to essential variables. The more variety they block, the more effective the regulator. The most effective regulator is the one that blocks all the variety flowing from disturbances to essential variables… We can now also define regulation as the activity performed by the regulator. That is, regulation is “blocking the flow of variety from disturbances to essential variables.” If the more general description of essential variables is used (i.e., those variables that must be kept within limits to achieve some goal) then the purpose of regulation is that it enables the realization of this goal. If the goal is the survival of some concrete system, then the purpose of regulation is trying to keep the values of its essential variables within the limits needed for survival, in spite of the values of disturbances.

This is a good place to introduce the main law of Cybernetics – the law of requisite variety (LRV). LRV is the brainchild of Ross Ashby, the most prolific thinker and pioneer of Cybernetics. LRV states that only variety can absorb variety. Here variety is the number of possible states. For example, a light switch has a variety of two – ON or OFF. If the user just wants the light to be turned on or off, then the light switch can meet that variety. However, if the user wants the light to be dimmed down or up, then the situation calls for a lot more variety than two. Here, a light switch with a variety of two cannot absorb the variety “thrown” at it. However, a dimmer switch with an indefinite amount of variety can achieve this.

Ashby was inspired by Claude Shannon’s tenth theorem: there is an upper limit to the amount of variety a regulator can absorb. The controller will need to find ways to attenuate variety (filter out unwanted variety thrown at the system) and amplify internal variety so that requisite variety is achieved. A simple example of attenuating variety is the big sign on the front of a fast-food place: the customer will not go into the fast-food place and ask to buy a car. Since there is an upper limit for a single regulator, the controller has to link multiple regulators to achieve amplification of variety. The fast-food place can use more employees during rush hour to meet the extra variety thrown at it. This is an example of amplifying variety.

Ashby talked about two types of regulation. This has been explained as Passive and Active regulation by J. Achterbergh and D. Vriens. Passive regulation does not make any selection. We can state that passive regulation is always working. An easy example to explain this is the shell of a turtle. It does not make any selections. J. Achterbergh and D. Vriens explained this as follows:

In the case of passive regulation there exists a passive block between the disturbances and the essential variables. This passive block, for instance the shell of a turtle, separates the essential variables from a variety of disturbances. It is characteristic of passive regulation that it does not involve selection… the regulator does not select a regulatory move dependent on the occurrence of a possibly disturbing event, for the block is given independent of disturbances. Because no selection is involved, the passive “regulator” does not need information about changes in the state of the essential variable or about disturbances causing such changes to perform its regulatory activity.

Francis Heylighen explained passive regulation as buffering:

 Buffering—at least in the cybernetic sense—is a passive form of regulation: the system dampens or absorbs the disturbances through its sheer bulk of protective material. Examples of buffers are shock absorbers in a car, water reservoirs or holding basins dampening fluctuations in rainfall, and the fur that protects a warm-blooded animal from variations in outside temperature. The advantage is that no energy, knowledge or information is needed for active intervention. The disadvantage is that buffering is not sufficient for reaching goals that are not equilibrium states in themselves, because moving away from equilibrium requires active intervention. For example, while a reservoir makes the flow of water more even, it cannot provide water in regions where it never rains. Similarly, fur alone cannot maintain a mammal body at a temperature higher than the average temperature of the surroundings: that requires active heat production.

Active regulation requires a selection of activity and requires information. J. Achterbergh and D. Vriens explained active regulation as follows:

In the case of active regulation, the regulator needs to select a regulatory move. Dependent on either the occurrence of a change of the state of the essential variable or of a disturbance, the regulator selects the regulatory move to block the flow of variety to the essential variables. Because it has to select a regulatory move, the active regulator either needs information about changes in the state of the essential variable or about the disturbances causing such changes in order to perform its regulatory function.

There are two forms of active regulation: feedforward (cause-controlled) and feedback (error-controlled). In feedforward regulation, the regulator anticipates, sensing the disturbances and acting before they have any impact on the essential variable. Heylighen explains this as:

In feedforward regulation, the system acts on the disturbance before it has affected the system, in order to prevent a deviation from happening. For example, if you perceive a sudden movement in the vicinity of your face, you will close your eyelids before any projectile can hit it, so as to prevent potential damage to your eyeball. The disadvantage is that it is not always possible to act in time, and that the anticipation may turn out to be incorrect, so that the action does not have the desired result. For example, the projectile may not have been directed at your eyes, but at a different part of your face. By shutting your eyes, you make it more difficult to avoid the actual impact.

In feedback regulation, the regulator acts only after the essential variable is impacted.

In feedback regulation, the system neutralizes or compensates the deviation after the disturbance has pushed the system away from its goal, by performing an appropriate repair action. For example, a thermostat compensates for a fall in temperature by switching on the heating, but only after it detected a lower than desired temperature. For effective regulation, it suffices that the feedback is negative—i.e. reducing the deviation—because a sustained sequence of corrections will eventually suppress any deviation. The advantage is that there is no need to rely on a complex, error-prone process of anticipation on the basis of imperfect perceptions: only the direction of the actual deviation has to be sensed. The disadvantage is that the counteraction may come too late, allowing the deviation to cause irreversible damage before it was effectively suppressed.

Ashby viewed feedforward as reacting to threat, and feedback as reacting to disaster. Feedforward control (cause-controlled) generally grows out of feedback control (error-controlled): we need reasonably good knowledge of the situation’s behavior, and this comes from previous feedback experience.
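
A minimal sketch of the two modes (all constants invented): the feedback rule waits for the room temperature to deviate and then corrects the error, while the feedforward rule acts on the forecast outside temperature, the cause, before any deviation appears.

```python
def simulate(feedforward=False, hours=12):
    """Toy room-temperature regulation: feedback alone vs. added feedforward."""
    temp, target = 20.0, 20.0
    # Forecast outside temperatures (known in advance): the disturbances.
    outside = [10, 8, 5, 0, -5, -5, -5, 0, 5, 8, 10, 12]
    log = []
    for hour in range(hours):
        heat = 0.0
        if feedforward:
            # Cause-controlled: act on the disturbance before it moves temp.
            heat += (target - outside[hour]) * 0.1
        if temp < target:
            # Error-controlled: act only after a deviation is sensed.
            heat += (target - temp) * 0.8
        temp += heat + (outside[hour] - temp) * 0.1   # heat loss to outside
        log.append(round(temp, 2))
    return log

print("feedback only:   ", simulate(feedforward=False))
print("with feedforward:", simulate(feedforward=True))
```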

In next week’s post, I will look at the extended form of the Law of Requisite Variety. I will finish this post with an example from J. Achterbergh and D. Vriens to further explain the three forms of regulation with an example of a medieval knight:

To illustrate these different modes of regulation, imagine a medieval knight on a battlefield. One of the essential variables might be “pain,” with the norm value “none.” In combat, the knight will encounter many opponents with different weapons all potentially threatening this essential variable. To deal with these disturbances, he might wear suitable armor: a passive block. If a sword hits him nevertheless (e.g., somewhere, not covered by the armor), he might withdraw from the fight, treat his wounds and try to recover: an error-controlled regulatory activity, directed at dealing with the pain. A cause-controlled regulatory activity might be to actively parry the attacks of an opponent, with the effect that these attacks cannot harm him.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Getting Out of the Dark Room – Staying Curious:

References:

[1] Cybernetic Principles of Aging and Rejuvenation: the buffering-challenging strategy for life extension – Francis Heylighen

[2] Social Systems Conducting Experiments – Jan Achterbergh, Dirk Vriens

The Cybernetics of Ohno’s Production System:

In today’s post, I am looking at the cybernetics of Ohno’s Production System. For this I will start with the ideas of ultrastability from one of the pioneers of Cybernetics, Ross Ashby. It should be noted that I am definitely inspired by Ashby’s ideas and thus may take some liberty with them.

Ashby defined a system as a collection of variables chosen by an observer. “Ultrastability” can be defined as the ability of a system to change its internal organization or structure in response to environmental conditions that threaten to disturb a desired behavior or value of an essential variable (Klaus Krippendorff). Ashby identified that when a system in a state of stability (equilibrium) is disturbed by the environment, it is able to get back to the state of equilibrium. This is the feature of an ultrastable system. Let’s look at the example of an organism and its environment. The organism is able to survive, or stay viable, by making sure that certain variables, such as internal temperature, blood pressure, etc., stay within a specific range. Ashby referred to these variables as essential variables. When the essential variables go outside a specific range, the viability of the organism is compromised. Ashby noted:

That an animal should remain ‘alive’, certain variables must remain within certain ‘physiological’ limits. What these variables are, and what the limits, are fixed when the species is fixed. In practice one does not experiment on animals in general, one experiments on one of a particular species. In each species the many physiological variables differ widely in their relevance to survival. Thus, if a man’s hair is shortened from 4 inches to 1 inch, the change is trivial; if his systolic blood pressure drops from 120 mm. of mercury to 30, the change will quickly be fatal.

Ashby noted that the organism affects the environment, and the environment affects the organism: such a system is said to have a feedback. Here the environment does not simply mean the space around the organism. Ashby had a specific definition for environment. Given an organism, its environment is defined as those variables whose changes affect the organism, and those variables which are then changed by the organism’s behavior. It is thus defined in a purely functional, not a material sense. The reactionary part is the sensory-motor framework of the organism. The feedback between the reactionary part (R) of an organism (Orgm) and the environment (Envt.) is depicted below:

Ashby explains this using the example of a kitten resting near a fire. The kitten settles at a safe distance from the fire. If a lump of hot coal falls near the kitten, the environment is threatening to have a direct effect on the essential variables. If the kitten’s brain does nothing, the kitten will get burned. The kitten, being an ultrastable system, is able to use the correct mechanism: move away from the hot coal and keep its essential variables in check. Ashby proposed that an ultrastable system has two feedbacks: one feedback that operates frequently, and another that operates infrequently, when the essential variables are threatened. The two feedback loops are needed for a system to get back into equilibrium. This is also how the system can learn and adapt. Paul Pangaro and Michael C. Geoghegan note:

What are the minimum conditions of possibility that must exist such that a system can learn and adapt for the better, that is, to increase its chance of survival? Ashby concludes via rigorous argument that the system must have minimally two feedback loops, or double feedback… The first feedback loop, shown on the left side and indicated via up/down arrows, ‘plays its part within each reaction/behavior.’ As Ashby describes, this loop is about the sensory and motor channels between the system and the environment, such as a kitten that adjusts its distance from a fire to maintain warmth but not burn up. The second feedback loop encompasses both the left and right sides of the diagram, and is indicated via long black arrows. Feedback from the environment is shown coming into an icon for a meter in the form of a round dial, signifying that this feedback is measurable insofar as it impinges on the ‘essential variables.’

Ashby depicted his ultrastable system as below:

The first feedback loop can be thought of as a mechanism that cannot change itself. It is static, while the second feedback loop is able to operate on some parameters so that the structure can change, resulting in a new behavior. The second feedback loop acts only when the essential variables are challenged or when the system is not in equilibrium. It must be noted that no decisions are being made within the first feedback loop; it is simply an action mechanism. It keeps doing what was working before, while the second feedback loop alters the action mechanism to produce a new behavior. If the new behavior is successful in maintaining the essential variables, the new action is continued until it is no longer effective. When the system is able to counter the threatening situation posed by the environment, it is said to have requisite variety. The law of requisite variety was proposed by Ashby as: only variety can absorb variety. The system must have the requisite variety (in terms of available actions) to counter the variety thrown upon it by the environment. The environment always possesses far more variety than the system. The system must find ways to attenuate the variety coming in, and amplify its own variety, to maintain the essential variables.

Let’s look at this with the easy example of a baby. When the baby experiences any sort of discomfort, it starts crying. The crying is the behavior that helps put it back into equilibrium (removal of discomfort), since it gets the attention of its mother or other family members. As the baby grows, its desired variables also get more specific (food, water, love, etc.). The action of crying does not always get it what it is looking for. Here the second feedback loop comes in: it tries a new behavior and sees if it results in a better outcome. This behavior could be pointing at something, or even learning and using words. The new action is kept and used as long as it remains successful. The baby learns and adapts as needed to meet its own wants and desires.

Pangaro and Geoghegan note that the idea of an ultrastable system is applicable in social realms also. As they write: “To evoke the social arena, we call the parameters ‘behavior fields.’ When learning by trial-and-error, a behavior field is selected at random by the system, actions are taken by the system that result in observable behaviors, and the consequences of these actions in the environment are in turn registered by the second feedback loop. If the system is approaching the danger zone, and the essential variables begin to go outside their acceptable limits, the step function says, ‘try something else’—repeatedly, if necessary—until the essential variables are stabilized and equilibrium is reached. This new equilibrium is the learned state, the adapted state, and the system locks-in.”
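
Here is a toy numerical analogue of the double feedback loop, in the spirit of Ashby’s homeostat but with all parameters invented: the first loop applies the current reaction to every disturbance, and the second loop fires only when the essential variable leaves its limits, randomly reselecting the reaction (“try something else”) until it finds one that keeps the variable in bounds.

```python
import random

random.seed(1)

limit = 3.0                                # bounds on the essential variable
essential = 0.0
gain = random.uniform(-1.0, 1.0)           # current reaction (behavior field)
resets = 0

for step in range(1000):
    d = random.uniform(-2.0, 2.0)          # environmental disturbance
    essential += d + gain * d              # first loop: fixed reaction, every step
    if abs(essential) > limit:             # second loop: variable out of bounds
        gain = random.uniform(-1.0, 1.0)   # step function: "try something else"
        essential = 0.0                    # re-equilibrate (a toy-model shortcut)
        resets += 1

# A gain near -1 cancels disturbances, so the system tends to lock it in.
print(f"settled gain {gain:+.2f} after {resets} structural changes")
```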

It is important to note that the first feedback loop is the overt behavior that is locked in. The system cannot change this unless the second feedback loop is engaged. Stuart Umpleby cites Ashby’s example of an autopilot to explain this further:

In his theory of adaptation two feedback loops are required for a machine to be considered adaptive (Ashby 1960).  The first feedback loop operates frequently and makes small corrections.  The second feedback loop operates infrequently and changes the structure of the system, when the “essential variables” go outside the bounds required for survival.  As an example, Ashby proposed an autopilot.  The usual autopilot simply maintains the stability of an aircraft.  But what if a mechanic miswires the autopilot?  This could cause the plane to crash.  An “ultrastable” autopilot, on the other hand, would detect that essential variables had gone outside their limits and would begin to rewire itself until stability returned, or the plane crashed, depending on which occurred first. The first feedback loop enables an organism or organization to learn a pattern of behavior that is appropriate for a particular environment.  The second feedback loop enables the organism to perceive that the environment has changed and that learning a new pattern of behavior is required.

Ohno’s Production System:

Once I saw that the idea of an ultrastable system may be applied to the social realm, I wanted to see how it can be applied to Ohno’s Production System. Taiichi Ohno is regarded as the father of the famous Toyota Production System. Before it was the “Toyota Production System”, it was Ohno’s Production System. Taiichi Ohno was inspired by the challenge issued by Kiichiro Toyoda, the founder of Toyota Motor Corporation: catch up with America in three years in order to survive. Ohno built his ideas with inspiration from Sakichi Toyoda, Kiichiro Toyoda, Henry Ford, and the supermarket system. Ohno did a lot of trial and error, and he made sure that the ideas he implemented were followed. Ohno was called “Mr. Mustache”. The operators thought of Ohno as an eccentric. They used to joke that military men wore mustaches during World War II, and that it was rare to see a Japanese man with facial hair afterward. “What’s Mustache up to now?” became a common refrain at the plant as Ohno carried out his studies. (Source: Against All Odds, Togo and Wartman)

His ideas were not easily understood by others. He had to tell others that he would take responsibility for the outcomes in order to convince them to follow his ideas. Ohno could not completely make others understand his vision, since his ideas were novel and not always the norm. Ohno was persistent, and he made improvements slowly and steadily. He would later talk about the idea of Toyota being slow and steady like the tortoise. Ohno loved what he did, and he had tremendous passion pushing him forward with his vision. As noted, his ideas were based on trial and error, and were thus perceived as counter-intuitive by others.

Ohno can be viewed as part of the second feedback loop and the assembly line as part of the first feedback loop, while the survivability of the company via the metrics of cost, quality, productivity etc. can be viewed as the “essential variables”. Ohno implemented the ideas of kanban, jidoka etc. on the line, and they were followed. The assembly line could not change the mechanisms established as part of Ohno’s production system. Ohno’s production system can be viewed as a closed system in that the framework is static. Ohno watched how the interactions with the environment went, and how the essential variables were being impacted. Based on this, the existing behaviors were either changed slightly, or changed out all the way until the desired equilibrium was achieved.

Here the production system framework is static because it cannot change itself. The assembly line where it is implemented is closed to changes at a given time. It is “action oriented” without decision powers to make changes to itself. There is no point in copying the framework unless you have the same problems that Ohno faced.

Umpleby also describes the idea of the double feedback loop in terms of quality improvement similar to what we have discussed:

The basic idea of quality improvement is that an organization can be thought of as a collection of processes. The people who work IN each process should also work ON the process, in order to improve it. That is, their day-to-day work involves working IN the process (the first, frequent feedback loop). And about once a week they meet as a quality improvement team to consider suggestions and to design experiments on how to improve the process itself. This is the second, less frequent feedback loop that leads to structural changes in the process. Hence, process improvement methods, which have been so influential in business, are an illustration of Ashby’s theory of adaptation.

This follows the idea of kairyo and kaizen in the Toyota Production System.

Final Words:

It is important to note that Ohno’s Production System is not Toyota Production System is not Toyota’s Production System is not Lean. Ohno’s Production System evolved into the Toyota Production System. Toyota’s production system is emergent, while Toyota Production System is not. The Toyota Production System’s framework can be viewed as a closed system, in the sense that the framework is static. At the same time, the different plants implementing the framework are dynamic, due to the simple fact that they exist in an ever-changing environment. For an organization to adapt to an ever-changing environment, it needs to be ultrastable. An organization can have several ultrastable systems connected with each other, resulting in homeostasis. I will finish with an excellent quote from Mike Jackson:

The organization should have the best possible model of the environment relevant to its purposes… the organization’s structure and information flows should reflect the nature of that environment so that the organization is responsive to it.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Cybernetics of a Society:

When is a Model Not a Model?

Ross Ashby, one of the pioneers of Cybernetics, started an essay with the following question:

I would like to start not at: How can we make a model?, but at the even more primitive question: Why make a model at all?

He came up with the following answer:

I would like then to start from the basic fact that every model of a real system is in one sense second-rate. Nothing can exceed, or even equal, the truth and accuracy of the real system itself. Every model is inferior, a distortion, a lie. Why then do we bother with models? Ultimately, I propose, we make models for their convenience.

To go further on this idea, we make models to come up with a way to describe how things work. This also lets us answer the question: what happens when…? If a model has no predictive or explanatory power, it has no use. From a cybernetics standpoint, we are not interested in “What is this thing?”, but in “What does this thing do?” We never try to completely understand a “system”. We understand it in chunks, the chunks that we are interested in. We construct a model in our heads that we call a “system” to make sense of how we think things work out in the world. We only care about certain specific interactions and their outcomes.

One of the main ideas that Ashby proposed was the idea of variety. Loosely put, variety is the number of available states a system has. For example, a switch has a variety of two – ON or OFF. A stop light has a variety of three (generally) – Red, Yellow or Green. As we increase the complexity, the variety also increases. The variety is dependent on the ability of the observer to discern them. A keen-eyed observer can discern a higher number of states for a phenomenon than another observer. Take the example of the great fictional characters, Sherlock Holmes and John Watson. Holmes is able to discern more variety than Watson, when they come upon a stranger. Holmes is able to tell the most amazing details about the stranger that Watson cannot. When we construct a model, the model lacks the original variety of the phenomenon we are modeling. This is important to keep in mind. The external variety is always much larger than the internal variety of the observer. The observer simply lacks the ability to tackle the extremely high amount of variety. To address this, the observer removes or attenuates the unwanted variety of the phenomenon and constructs a simpler model. For example, when we talk about a healthcare system, the model in our mind is pretty simple. One hospital, some doctors and patients etc. It does not include the millions of patients, the computer system, the cafeteria, the janitorial service etc. We only look at the variables that we are interested in.

Ashby explained this very well:

Another common aim that will have to be given up is that of attempting to “understand” the complex system; for if “understanding” a system means having available a model that is isomorphic with it, perhaps in one’s head, then when the complexity of the system exceeds the finite capacity of the scientist, the scientist can no longer understand the system—not in the sense in which he understands, say, the plumbing of his house, or some of the simple models that used to be described in elementary economics.

A crude depiction of model-making is shown below. The observer has chosen certain variables that are of interest, and created a similar “looking” version as the model.

Ashby elaborated on this idea as:

We transfer from system to model to lose information. When the quantity of information is small, we usually try to conserve it; but when faced with the excessively large quantities so readily offered by complex systems, we have to learn how to be skillful in shedding it. Here, of course, model-makers are only following in the footsteps of the statisticians, who developed their techniques precisely to make comprehensible the vast quantities of information that might be provided by, say, a national census. “The object of statistical methods,” said R. A. Fisher, “is the reduction of data.”

There is an important saying from Alfred Korzybski: the map is not the territory. His point was that we should not take the map to be the real thing. An important corollary to this, as a model-maker, is:

If the model is the same as the phenomenon it models, it fails to serve its purpose. 

The usefulness of the model is in it being an abstraction. This is mainly due to the observer not being able to handle the excess variety thrown at them. This also answers one part of the question posed in the title of this post – A model ceases to be a model when it is the same as the phenomenon it models. The second part of the answer is that the model has to have some similarities to the phenomenon, and this is entirely dependent on the observer and what they want.

This brings me to the next important point: we can only manage models. We don’t manage the actual phenomenon; we only manage the models of the phenomenon in our heads. The reason, again, is that we lack the ability to manage the variety thrown at us.

The eminent management cybernetician, Stafford Beer, has the following words of wisdom for us:

Instead of trying to specify it in full detail, you specify it only somewhat. You then ride on the dynamics of the system in the direction you want to go.

To paraphrase Ashby, we need not collect more information than is necessary for the job. We do not need to attempt to trace the whole chain of causes and effects in all its richness, but attempt only to relate controllable causes with ultimate effects.

The final aspect of model-making is to take into consideration the temporary nature of the model. Again, paraphrasing Ashby – We should not assume the system to be absolutely unchanging. We should accept frankly that our models are valid merely until such time as they become obsolete.

Final Words:

We need a model of the phenomenon to manage the phenomenon. And how we model the phenomenon depends upon our ability as the observer to manage variety. We need to choose only the specific variables we want. Perhaps I can explain this further with the deep philosophical question – if a tree falls in a forest and no one is around to hear it, does it make a sound? The answer, to a cybernetician, should be obvious at this point. Whether there is a sound or not depends on the model you have, and on whether your model places any value on the falling tree making a sound.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Maximum Entropy Principle:

Destruction of Information/The Performance Paradox:

Ross Ashby was one of the pioneers of Cybernetics. His 1956 book, An Introduction to Cybernetics, is still one of the best introductions to Cybernetics. As I was researching his journals, I came across an interesting phrase – “destruction of information.” Ashby noted:

I am not sure whether I have stated before my thesis – that the business of living things is the destruction of information.

Ashby gave several examples to explain what he meant by this:

Consider a thermostat controlling a room’s temperature. If it is working well, we can get no idea, from the temperature of the room, whether it is hot or cold outside. The thermostat’s job is to stop this information from reaching the occupant.

He also gave the example of an antiaircraft gun and its predictor. Suppose we observe only the error made by each shell in succession. If the predictor is perfect, we shall get the sequence 0, 0, 0, 0, etc. By examining this sequence, we can get no information about how the aircraft maneuvered. Contrast this with the record of a poor predictor: 2, 1, 2, 3… -3, 0, 3, etc. By examining this, we can get quite a good idea of how the pilot maneuvered. In general, the better the predictor, the less the maneuvers show in the errors. The predictor’s job is to destroy this information.
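One way to make this concrete is to compute the Shannon entropy of the two error records; this sketch is my own illustration, not a calculation Ashby performs. The perfect predictor’s record has zero entropy and therefore carries no information about the maneuvers:

```python
# Shannon entropy of a predictor's error record: the better the predictor,
# the less information the record carries about the pilot's maneuvers.

from collections import Counter
from math import log2

def entropy(sequence):
    """Shannon entropy (bits per symbol) of the empirical symbol distribution."""
    counts = Counter(sequence)
    total = len(sequence)
    return 0.0 - sum((n / total) * log2(n / total) for n in counts.values())

perfect = [0, 0, 0, 0, 0, 0, 0]
poor = [2, 1, 2, 3, -3, 0, 3]

print(entropy(perfect))  # 0.0 bits: the information has been destroyed
print(entropy(poor))     # ~2.24 bits: the maneuvers still show in the errors
```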

As an observer, we learn about a living system or a phenomenon by the variety it displays. Here, variety can be loosely expressed as the number of distinct states a system has. Interestingly, the variety depends upon the system demonstrating it, as well as upon the observer’s ability to distinguish the different states. If the observer is not able to make the needed number of distinctions, then less information is generated. On the other hand, if the system of interest is able to hide its different states, it minimizes the amount of information available to the observer. In this post, we are interested in the latter category. Ashby gives an interesting example to further this idea:

An insect whose coloration makes it invisible will not show, by its survival or disappearance, whether a predator has or has not seen it. An imperfectly colored one will reveal this fact by whether it has survived or not.

Another example Ashby gives is that of an expert boxer:

An expert boxer, when he comes home, will show no signs of whether he had a fight in the street or not. An imperfect boxer will carry the information.

Ashby’s idea can be further looked at from an adaptation standpoint. When you adapt very well to your ever-changing surroundings, you are destroying information – you are not demonstrating any information. Ashby also noted that adaptation means “destroying information.” In this manner, you know that you are adapting well when you don’t break a sweat. A master swordsman moves effortlessly while defeating an opponent. A good runner is not out of breath after a quick sprint.

The Performance Paradox:

My take on this idea from Ashby is to express it as a form of performance paradox – when something works really well, you will not notice it, or worse, you will think that it’s wasteful. The most effective and highly efficient components stay the quietest. The best spy is the one you have never heard of. When you try to monitor a highly performing component, you may rarely get evidence of its performance; it is almost as if it is wasteful. Another way to view this is that the imperfect components lend themselves to being monitored, while the perfect components do not. The danger in not understanding regulation from a cybernetics standpoint is to completely misread the interactions and assume that the perfect component has no value.

I encourage the reader to read further upon these ideas here:

Edit (12/1/2020): Adding more clarity on “destruction of information”.

The phrase “destruction of information” was used by Ashby in a Shannon entropy sense. He is indicating that the agent is purposefully reducing the information entropy that would otherwise have been available. Another example is that of a good poker player, who is difficult to read.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Locard’s Exchange Principle at the Gemba:

Talking about Constraints in Cybernetics:

In today’s post, I am looking at constraints with respect to Cybernetics. I am looking mainly at the ideas of Ross Ashby, one of the pioneers of Cybernetics. Ashby wrote one of the best introductions to Cybernetics, aptly titled An Introduction to Cybernetics. Ashby described constraints in terms of variety. Variety is the number of distinct elements that an observer is capable of distinguishing. For example, consider the following set of elements:

{a, b, b, B, c, C}

Someone could say that the variety of this set is 3, since there are three distinct letters. Someone else could say that the variety is actually 5, if the lower and upper cases are distinguished. A very common example to explain variety is a traffic stop light. Generally, the stop light in the US has 3 states (Red, Yellow and Green). Sometimes additional states are possible, such as blinking Red (indicating a STOP sign) or no light at all. Thus, depending on the observer, the variety of a stop light can vary from 3 to 4 to 5.
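A quick sketch of this in code – assuming the two observers differ only in whether they distinguish letter case:

```python
# The variety of the same set under two observers' distinctions.

elements = ["a", "b", "b", "B", "c", "C"]

case_blind = len({e.lower() for e in elements})  # 3 states: a, b, c
case_aware = len(set(elements))                  # 5 states: a, b, B, c, C

print(case_blind, case_aware)  # 3 5
```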

Ashby explained constraints as follows – when there are two related sets and one set has less variety than the other, we can determine that a constraint is present in the set with less variety. Let’s consider the stop light again. If all three lights were independent, each could be ON or OFF, giving 2 × 2 × 2 = 8 possible states. This is shown below, where “X” means OFF and “O” means ON.

Figure 1 – The Eight States of a Stop Light

Per our discussion above, we utilize mainly 3 of these states to control traffic (ignoring the blinking states). These are identified in the blue shaded cells {2, 6, 7}. Thus, we can say that a constraint is applied on the stop light, since the variety the stop light actually possesses is 3 instead of 8. Ashby also distinguishes between slight and severe constraints. The example Ashby gives is applying a constraint on a squad of soldiers standing in a single rank. The soldiers can be made to stand in numerous ways. For example, if the constraint applied is that no soldier may stand next to another soldier who shares the same birthday, the variety achieved is still high. Since it is highly unlikely that two soldiers in a small group share the same birthday, this constraint rules out very few arrangements; it is an example of a slight constraint. However, if the constraint applied is that the soldiers must arrange themselves in the order of their height, the variety is highly reduced – essentially only one arrangement remains. This is an example of a severe constraint.
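As a sketch (my own; the exact numbering of the states depends on how Figure 1 lays them out), we can enumerate the eight independent states in code and apply the constraint that exactly one light is ON at a time:

```python
# Enumerate the stop light's unconstrained states, then apply a constraint.

from itertools import product

lights = ["Red", "Yellow", "Green"]
all_states = list(product([True, False], repeat=len(lights)))  # 2**3 = 8 states

# The constraint: exactly one light ON at a time.
allowed = [s for s in all_states if sum(s) == 1]

print(len(all_states), len(allowed))  # 8 3 -> a constraint is present
```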

Another example that Ashby gives is that of a chair. A chair taken as a whole has six degrees of freedom for movement. However, when the chair is disassembled into its parts, the freedom for movement increases. Ashby said:

A chair is a thing because it has coherence, because we can put it on this side of a table or that, because we can carry it around or sit on it. The chair is also a collection of parts. Now any free object in our three-dimensional world has six degrees of freedom for movement. Were the parts of the chair unconnected each would have its own six degrees of freedom; and this is in fact the amount of mobility available to the parts in the workshop before they are assembled. Thus, the four legs, when separate, have 24 degrees of freedom. After they are joined, however, they have only the six degrees of freedom of the single object. That there is a constraint is obvious when one realizes that if the positions of three legs of an assembled chair are known, then that of the fourth follows necessarily—it has no freedom.

Thus, the change from four separate and free legs to one chair corresponds precisely to the change from the set’s having 24 degrees of freedom to its having only 6. Thus, the essence of the chair’s being a “thing”, a unity, rather than a collection of independent parts corresponds to the presence of the constraint.

Ashby continued:

Seen from this point of view, the world around us is extremely rich in constraints. We are so familiar with them that we take most of them for granted, and are often not even aware that they exist. To see what the world would be like without its usual constraints we have to turn to fairy tales or to a “crazy” film, and even these remove only a fraction of all the constraints.

There are several takeaways we can have from Ashby’s explanation of constraints.

  1. The effect of the observer: The observer is king when it comes to cybernetics. The variety of an observed system is dependent on the observer. This means that the observation is subject to the constraints that the observer applies, knowingly or unknowingly, in the form of biases, beliefs, etc. The observer brings and applies internal constraints on the external world. Taking this a step further, our experiential reality is the result of our limited perceptual network. For example, we can see only a small section of the light spectrum, and we can hear only a small section of the sound spectrum. We have cognitive blind-spots that we are not aware of. And yet we claim access to an objective reality and are surprised when people don’t understand our point of view. We should not force our own views on others to the point of creating false dichotomies. This is sadly all too prevalent in today’s politics, where almost every matter has been turned into a political viewpoint.
  2. Constraints are not a bad thing: Ashby’s great insight was that when a constraint exists, we can take advantage of it. We can make reasonably good predictions when constraints exist. Constraints help us to understand how things work. Ashby said that every law of nature is a constraint. We are able to estimate the variety that would exist if total independence occurred, and we can reduce this variety by understanding the existing constraints and adding further constraints as needed to produce the results we want. Adding constraints is about reducing unwanted variety; design engineering makes full use of this. On a similar note, Ashby also pointed out that learning is possible only to the extent that a sequence shows constraint. If we are to learn a language, we learn it by learning the constraints that exist in the language in the form of syntax, meanings of the words, grammar, etc.
  3. Law of Requisite Variety: Ross Ashby came up with the Law of Requisite Variety. This law can be explained simply as: variety destroys (compensates for) variety. For example, a good swordsman is able to fend off an opponent if they are able to block and counter-attack every move of the opponent. The swordsman has to match the variety of the opponent (the set of attacks and blocks). To take our previous example, the stop light has to have the requisite variety to control traffic. If the 3 states identified in Figure 1 are not enough, the “system” will absorb the variety in the form of a traffic jam. When we think in terms of constraints, the requisite variety should be aligned with the identified constraints. We should minimize bringing in our internal constraints, and watch for the external constraints that exist. The variety that we need to match must be aligned to the constraints already existing. (See the short code sketch following this list.)
  4. Constraints do not need to be Objects: Similar to point 1, the narratives and stories we tell ourselves are also constraints. We are Homo Narrans – storytellers. We make sense of the world through the stories we share and tell ourselves and others. We control ourselves and others with the stories we tell. We limit ourselves with what we believe. If we can understand the stories we tell ourselves, and the stories others are telling us, we can better ourselves.
  5. Adaptation or Fit: Ashby realized that an organism can adapt just so far as the real world is constrained, and no further. Evolution is about fit. It is about supporting those factors that allow the organism to match the constraint in order to survive. The organism evolves to match the changing constraints present in the changing environment. This often happens through finding a use for what already exists. There is a great example that the cybernetician and radical constructivist Ernst von Glasersfeld gives – the way a key fits a lock that it is able to open:

The fit describes a capacity of the key, not a property of the lock. When we face a novel problem, we are in much the same position as the burglar who wishes to enter a house. The “key” with which he successfully opens the door might be a paper clip, a bobby pin, a credit card, or a skillfully crafted skeleton key. All that matters is that it fits within the constraints of the particular lock and allows the burglar to get in.
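Here is a minimal sketch in code of the Law of Requisite Variety from point 3 above; the numeric disturbances and the regulator’s response tables are my own illustration, not from Ashby. Only when the regulator has a distinct compensating response for every disturbance does the outcome collapse to a single, regulated state:

```python
# Variety destroys variety: a regulator holds the outcome steady only if
# its repertoire of responses matches the variety of the disturbances.
# (Hypothetical numbers for illustration.)

def outcome_variety(disturbances, responses):
    """Variety remaining in the outcome after the regulator acts."""
    outcomes = {d + responses.get(d, 0) for d in disturbances}
    return len(outcomes)

disturbances = [-2, -1, 1, 2]                    # four distinct perturbations

full_repertoire = {-2: 2, -1: 1, 1: -1, 2: -2}   # a response for every move
poor_repertoire = {-2: 2, -1: 1}                 # variety falls short

print(outcome_variety(disturbances, full_repertoire))  # 1 -> fully regulated
print(outcome_variety(disturbances, poor_repertoire))  # 3 -> variety leaks through
```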

I will finish with Ernst von Glasersfeld’s description of Cybernetics in terms of constraints:

Cybernetics is not interested in causality but constraints. Cybernetics is the art of maintaining equilibrium in a world of constraints and possibilities.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Deconstructing Systems – There is Nothing Outside the Text:

When a Machine Breaks…:

In today’s post, I am looking with more depth at the ideas of Cybernetics with relation to Ross Ashby, one of the pioneers of Cybernetics.

In particular, I am looking at one of the Ashby aphorisms:

When a machine breaks, it changes its mind.

This is a very interesting observation from a Cybernetics standpoint. Ashby defined a machine as follows:

It is a collection of parts which (a) alter in time, and (b) which interact with one another in some determinate and known manner.

A designer designs the machine specific to an environment. This means that the designer has encoded a model of the environment into the machine so that when certain perturbations are encountered, the machine reacts in a certain manner. The variety that is estimated to be “thrown” at the machine is captured by the designer, and appropriate responses are encoded into the parts or the circuitry of the machine. The external variety is attenuated to a successful degree by the information conveyed by the machine in terms of affordances and signs on the machine. For example, a vending machine has signs on it along with pushable buttons that convey information to the user.

Ashby viewed this as the machine being successfully adapted to its environment. Ashby spoke of adaptation as being in a state of equilibrium. He referred to the stable state of equilibrium as “normal” equilibrium.  

Normal equilibrium has some special properties which we must notice. Firstly, the system tends to the configuration C; so, if it is disturbed slightly from C, it will automatically develop internal actions or tendencies bringing it back to C. In other words, it opposes any disturbance from C. Further, if we disturb it in various ways, it will develop different tendencies with different disturbances, the tendencies being always adjusted to the disturbances so as to oppose them.

It must be noted that an equilibrium configuration is a property of the organization… The equilibrium states of a machine are defined by the organization only.

From this point on, Ashby explains what the “break” means with regards to the machine.

Let us imagine a machine has “broken.” The first observation is that no matter how chaotic the result, it is, by our definition, still a machine. But it is a different machine. A break is a change of organization.

The specific organization entails what the machine can do when it is perturbed. The machine has only its initial information to deal with perturbations. When a new scenario arises, it cannot deal with it, because it cannot generate new information. The difference with us humans is that we can generate new information as needed to deal with the new perturbation. Sometimes this takes the form of the basic fight-or-flight response. The reaction is indeed an effort to reach an equilibrium. As Ashby put it:

The drive to equilibrium forces the emergence of intelligence.

Information is described as the reduction in uncertainty. When the environment is dynamic and constantly changing, we can say that there is a usefulness quotient for the freshness of the information on hand – something like the “best by” date on a carton of milk. As Ashby put it – any system that achieves appropriate selection (to a degree better than chance) does so as a consequence of information received. From a second-order Cybernetics standpoint, information is generated by the autopoietic being. It is not something that can be transmitted as a physical commodity from one person to another. We should work on improving our ability to generate new information as needed when new perturbations arise. This provides us the requisite variety to deal with the new variety thrown at us. What worked in the past, or what worked at another organization, may not be meaningful with the new perturbations. The generation of new information requires updating the model of the environment to some degree. This updating corresponds to isomorphism – the idea that there is a one-to-one correspondence between the various states of the model and those of the environment. The better this correspondence, the better the model.

Another aspect of the statement that the machine changes its mind is that the “mind” is embodied in the physical body. There is a famous debate in philosophy about how far the mind is separate from the body – is the mind embodied in the body, or is it separate? One view holds that the mind is part of the body as much as the body is part of the mind, and that there is no use trying to separate the two. Ashby may be giving a gentle nod to this idea that the mind should not be separated from the body. When a machine breaks, it changes its mind!

Ashby’s approach of tying adaptation/intelligence to the idea of stable equilibrium is unique. I will finish off with his explanation regarding this:

Finally, there is one point of fundamental importance which must be grasped. It is that stable equilibrium is necessary for existence, and that systems in unstable equilibrium inevitably destroy themselves. Consequently, if we find that a system persists, in spite of the usual small disturbances which affect every physical body, then we may draw the conclusion with absolute certainty that the system must be in stable equilibrium. This may sound dogmatic, but I can see no escape from this deduction.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Cybernetics Ideas from a Thermostat:

Cybernetics Ideas from a Thermostat:

The thermostat is a simple device that is often used to describe the basic ideas of Cybernetics. Cybernetics is the art of steering. Simply put, a goal is identified and the “system” acts to get closer to the goal. In the example of the thermostat, the user specifies a setpoint such that when the temperature goes below the setpoint, the thermostat kicks on the furnace, and stops it when the internal temperature of the house meets the desired temperature. In a similar fashion, when the temperature goes above the setpoint, the thermostat kicks on the air conditioner to bring down the internal temperature. The thermostat acts as a medium for achieving a constant temperature inside the house. This is also the idea of homeostasis. In order to achieve what it does, the thermostat needs a closed loop. It needs to read the internal temperature at a specified frequency, and act as needed depending upon this information. If it were an open loop, no information would be fed back into the system, and thus no homeostasis would be achieved. An example of an open loop is a campfire without anyone to manage it. The fire continues to burn until it goes out.
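A bare-bones sketch of this closed loop in code; the temperatures, heating rate, and leak rate are assumed values for illustration, not taken from any real thermostat:

```python
# Closed-loop control: the thermostat reads the indoor temperature each
# step and switches the furnace based on that feedback.
# (Made-up numbers for illustration.)

def step(temp, furnace_on, outside=40.0):
    """One time step: the furnace adds heat; heat always leaks outside."""
    gain = 3.0 if furnace_on else 0.0
    leak = 0.1 * (temp - outside)   # the leak grows with the indoor/outdoor gap
    return temp + gain - leak

setpoint, temp = 70.0, 60.0
for _ in range(30):
    temp = step(temp, furnace_on=(temp < setpoint))  # read, then act
print(round(temp, 1))  # ~69.6: held near the setpoint
```

If the loop were open – say, furnace_on were always False – the temperature would simply drift toward the outside temperature, like the unattended campfire burning out.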

Ernst von Glasersfeld, the father of radical constructivism, talked about these ideas in his short paper, Reflections on Cybernetics (2000):

The good old thermostat, the favorite example in the early literature of cybernetics, is still a useful explanatory tool. In it a temperature is set as the goal-state the user desires for the room. The thermostat knows nothing of the room or of desirable temperatures. It is designed to eliminate any discrepancy between a set reference value and the feedback it receives from its sensory organ, namely the value indicated by its thermometer. If the sensed value is too low, it switches on the heater, if it is too high, it switches on the cooling system. Employing Gordon Pask’s clever distinction (Pask, 1969, p.23–24): from the user’s point of view, the thermostat has a purpose for, i.e. to maintain a desired temperature, whereas the purpose in the device is to eliminate a difference.

The idea that the thermostat’s purpose is simply to eliminate a difference is most important here. I have written about this here.

Von Glasersfeld continues:

This example may also help to clarify a second cybernetic feature that is rarely stressed. Imagine a thermostat that has an extremely sensitive thermometer. If it senses a temperature that is a fraction below the reference value, it switches on the heater. The moment the temperature begins to rise above the reference, it switches on the cooling system –and thus it enters into an interminable oscillation. This would hardly be desirable. Therefore, it is important to design the device so that it has an area of inaction around the reference value where neither the one nor the other response is triggered. In other words, rather than a single switching point, there have to be two, with some space for equilibrium in between.

Homeostasis does not refer to a fine line that needs to be maintained; it is often a band or a range. The wider the band, the easier it is to maintain homeostasis. It is more efficient to define the “stable conditions” as lying within a range of values. A good example of this is a bicycle lane. It is difficult, if not impossible, to ride a bicycle along a perfectly straight line. However, it is easy to ride a bicycle within a somewhat wider lane. With the thermostat, this region is sometimes referred to as a “deadband.” This is the range of temperatures within which the thermostat does not act (stays OFF). Below the lower limit, the thermostat will kick on the furnace, and above the upper limit, the thermostat will kick on the air conditioner.
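The deadband logic can be sketched in a few lines; the two limits below are assumptions for illustration:

```python
# A thermostat with a deadband: no action inside the band, so the endless
# heat/cool oscillation described in the quote above cannot occur.
# (The limits are assumed values.)

HEAT_BELOW = 68.0   # furnace kicks on below this lower limit
COOL_ABOVE = 74.0   # air conditioner kicks on above this upper limit

def thermostat_action(temp):
    """Return the thermostat's action for the current indoor temperature."""
    if temp < HEAT_BELOW:
        return "heat"
    if temp > COOL_ABOVE:
        return "cool"
    return "off"    # inside the deadband: stay OFF

for t in (65.0, 70.0, 76.0):
    print(t, thermostat_action(t))  # heat, off, cool
```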

Another important lesson from a thermostat is that if you want to change the room temperature, there is no point in moving the thermostat value to an extreme setpoint. Let’s say that you want to cool the room down. It is of no use if you put the thermostat value at 40 degrees F (4.44 degrees C). The house will not get colder faster with this approach. The thermostat controls the temperature inside the house, but not the speed with which it achieves this.  

To be economically efficient, the thermostat setpoints must be aligned with the external temperature. For example, in colder weather the heat setpoint should be reduced (for example, to 67 degrees F or 19.4 degrees C), and similarly, in warmer weather the cool setpoint should be raised. Even though the thermostat is the regulator, the user determines how this regulation is achieved. The thermostat as a regulator must also follow the Good Regulator Theorem: every good regulator of a system must be a model of the system that it regulates. The model of how to maintain the internal temperature constant (within the deadband) is programmed into the thermostat. The thermostat also follows the Law of Requisite Variety. It must have the requisite variety to adjust the internal temperature based on the external perturbations; it must be able to differentiate the states “below the setpoint temperature” and “above the setpoint temperature” to achieve the requisite variety and maintain the internal temperature. Both the Good Regulator Theorem and the Law of Requisite Variety are of utmost importance in Cybernetics, and both are contributions of one of the pioneers of Cybernetics, Ross Ashby.

I will finish this with some great aphorisms from Ross Ashby:

The drive to equilibrium forces the emergence of intelligence.

That the brain matches its environment is no more surprising than the matching of the two ends of a broken stick.

Every piece of wisdom is the worst folly in the opposite environment. Change the environment to its opposite and every piece of wisdom becomes the worst of folly.

The rule for decision is: Use what you know to narrow the field as far as possible: after that, do as you please.

Any system that achieves appropriate selection (to a degree better than chance) does so as a consequence of information received.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Toyota House – Why Jidoka and JIT?