Round and Round We Go:

In today’s post, I am looking at a simple idea – loops – and will follow it up with Heinz von Foerster’s ideas on second order Cybernetics. A famous example of a loop is “PDCA”. PDCA is generally represented as an iterative loop – Plan-Do-Check-Act-Plan-Do… – that goes on and on. To me, this is a misnomer and a misrepresentation. These cycles should be viewed as recursions. First, I will briefly explain the difference between iteration and recursion, using the definitions of Klaus Krippendorff:

Iteration – A process for computing something by repeating a cycle of operations.

Recursion – The attribute of a program or rule which can be applied on its results indefinitely often.

In other words, iteration is simply repetition. In a program, I can instruct it to print the word “Iteration” five times. There is no feedback here, other than keeping count of the number of times the word was printed on screen. In recursion, on the other hand, the value of the first cycle is fed back into the second cycle, whose output is fed into the third cycle, and so on. Here, circular feedback is at work. A great example of a recursive function is the Fibonacci sequence, which is expressed as follows:

F(n) = F(n−1) + F(n−2), for n > 1

F(n) = 1, for n = 0 or 1

Here, we can see that the previous values are fed back into the equation to create a new value; this is an example of recursion.
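To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not Krippendorff’s): the first function merely repeats an action with no feedback, while the second feeds earlier results back into each new cycle.

```python
def iterate(n):
    # Iteration: plain repetition; no feedback between cycles
    for _ in range(n):
        print("Iteration")

def fib(n):
    # Recursion: each value is built from previous results,
    # matching F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)

iterate(5)
print([fib(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```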

From the complexity science standpoint, recursions lead to interesting phenomena. This is no longer an iterative, non-feedback loop where you come back to the same point again and again. With recursion, you get circular causality: with each loop, you enter a new state altogether. Each loop is directly shaped by the previous loop. Anything that simply leads back to its original starting point does not lead to emergence and can actually lead to a paradox. A great example is the liar paradox. In one version of this, a card has a statement written on each side. They are as follows:

  1. The statement on the other side of this card is FALSE.
  2. The statement on the other side of this card is TRUE.  

This obviously leads to a paradox when you follow it along the loop. You do not get to a new state with each iteration. Douglas Hofstadter wonderfully explained this as a mirror mirroring itself. With recursion, however, a remarkable emergence can happen, as we see in complexity science. Circular causality and recursion are ideas that have a strong footing in Second Order Cybernetics. A great example of this is to look at the question – how do we make sense of the world around us? Heinz von Foerster, the Socrates of Cybernetics, has a lot to say about this. As Bernard Scott notes:

For Heinz von Foerster, the goal of second-order cybernetics is to explain the observer to himself, that is, it is the cybernetics of the cybernetician. The Greek root of cybernetics, kubernetes, means governor or steersman. The questions asked are: who or what steers the steersman, how is the steersman steered and, ethically, how does it behoove the steersman to steer himself? Von Foerster begins his epistemology, in traditional manner, by asking, “How do we know?” The answers he provides – and the further questions he raises – have consequences for the other great question of epistemology, “What may be known?” He reveals the creative, open-ended nature of the observer’s knowledge of himself and his world.

Scott uses von Foerster’s idea of undifferentiated coding to explore this further. I have written about this before here.

Undifferentiated coding is explained as follows:

The response of a nerve cell encodes only the magnitude of its perturbation and not the physical nature of the perturbing agent.

Scott continues:

Put more specifically, there is no difference between the type of signal transmitted from eye to brain or from ear to brain. This raises the question of how it is we come to experience a world that is differentiated, that has “qualia”, sights, sounds, smells. The answer is that our experience is the product of a process of computation: encodings or “representations” are interpreted as being meaningful or conveying information in the context of the actions that give rise to them. What differentiates sight from hearing is the proprioceptive information that locates the source of the signal and places it in a particular action context.

Von Foerster explained the circular relationship between sense data and experiences as follows:

The motorium (M) provides the interpretation for the sensorium (S) and the sensorium provides the interpretation for the motorium.

How we make sense depends on how we experience, and how we experience depends upon how we make sense. As Scott notes, we can explain the above relationship as follows:

S = F(M). Sensorium, S, is a function of motorium, M.

M = G(S). Motorium, M, is a function of sensorium, S.

Von Foerster pointed out that this is an open recursive loop, since we can replace M with G(S).

S = F(G(S))

With more replacements for the “S”, this equation becomes an indefinitely nested recursive expression:

S = F(G(F(G(F(G(…F(G(S))…))))))

Scott continues:

Fortunately, the circularity is not vicious, as in the statement “I am a liar”. Rather, it is virtuous or, as von Foerster calls it, it is a creative circle, which allows us to “transcend into another domain”. The indefinite series is a description of processes taking place in sequence, in “time”, with steps t, t+1, t+2 and so on. (I put “time” in quotes as a forward marker for discussion to come). In such indefinite recursive expressions, solutions are those values of the expression which, when entered into the expression as a base, produce themselves. These are known as Eigen values (self-values). Here we have the emergence of stabilities, invariances. The “objects” that we experience are “tokens” for the behaviors that give rise to those experiences. There is an “ultimate” base to these recursions: once upon a “time”, the observer came into being. As von Foerster neatly puts it, “an observer is his own ultimate object”.

The computations that give rise to the experience of a stable world of “objects” are adaptations to constraints on possible behaviors. Whatever else, the organism, qua system, must continue to compute itself, as a product. “Objects” are anything else it may compute (and recompute) as a unitary aspect of experience: things, events, all kinds of abstraction. The possible set of “objects” it may come to know are limited only by the organism’s current anatomy and the culture into which she is born.
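Scott’s notion of Eigen values can be made tangible with a toy computation. In the sketch below (my own illustration – the particular functions F and G are arbitrary stand-ins, not von Foerster’s), repeatedly applying the composed loop F(G(·)) from any starting point converges to a value that reproduces itself: a stable Eigen value of the recursion.

```python
def F(m):
    # Sensorium as a function of motorium (an arbitrary toy choice)
    return m / 2 + 1

def G(s):
    # Motorium as a function of sensorium (an arbitrary toy choice)
    return s / 2 + 1

s = 10.0              # any starting value will do
for _ in range(30):   # S = F(G(F(G(...(S)...))))
    s = F(G(s))

print(s)  # converges to 2.0: the Eigen value, since F(G(2)) == 2
```

Whatever value we start from, the recursion settles on a stability that produces itself – the “object” of this little loop.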

I have written about this further here – Consistency over Completeness.

Heinz von Foerster said – The environment contains no information; it is as it is. We are informationally closed entities, which means that information cannot come from outside to inside. We make meanings out of the perturbations and we construct a reality that our interpretative framework can afford.

I will finish with a great observation from the philosopher Yuk Hui:

Recursivity is a general term for looping. This is not mere repetition, but rather more like a spiral, where every loop is different as the process moves generally towards an end, whether a closed one or an open one.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Observing with Your Hands:

References:

  1. M. C. Escher Spiral
  2. Second Order Cybernetics as Cognitive Methodology. Bernard Scott
  3. A Dictionary of Cybernetics. Klaus Krippendorff

Observing with Your Hands:

In today’s post, I am looking at ideas inspired by mirror neurons. Mirror neurons are a class of neurons that activate both when someone engages in an activity and when they observe the same activity being performed by someone else. They were first identified by a group of Italian neurophysiologists led by Giacomo Rizzolatti in the 1980s. They were studying macaque monkeys. As part of their research, they placed electrodes in the monkeys’ brains to study hand and mouth motions. The story goes that the electrodes sent signals when the monkeys observed the scientists eating peanuts. The same neurons that fired when the monkeys were eating peanuts fired when they merely observed the same action. Several additional studies indicate that mirror neurons are activated in response to goal-oriented actions. For example, when the scientist covered the peanut bowl and performed the action of picking up a peanut and eating it, the mirror neurons still fired even though the monkeys could not see the peanut bowl. However, when the scientist simply mimicked the action of taking a peanut without a peanut bowl, the neurons did not fire. There have been several hypotheses regarding mirror neurons, such as that they facilitate learning by copying, and that they are the source of empathy.

The most profound idea about mirror neurons is that action execution and action observation are tightly coupled. Our ability to interpret or comprehend others’ actions involves our own motor system. When we observe someone performing an action, whether we have performed that action ourselves adds depth to how we observe it. If I am watching a ballet and the ballerina performs a difficult move, I may not fully grasp what I have seen, since I do not know ballet and have never performed it. However, if I watch a spin bowler in cricket bowling an off-spinner, I will be able to grasp it better and possibly tell how the ball is going to spin. This is because I played a lot of cricket in my youth. The same goes for a magician performing a sleight of hand.

The idea of mirror neurons brings extra depth to the meaning of going to the gemba. Going to the gemba is a key tenet of the Toyota Production System. We go to the gemba, where the action is, to grasp the current situation. We go there to observe. The gemba, it is said, is our best teacher. When we go there to observe the work being performed, we may have a different experience depending upon whether we ourselves have performed the work or not. Heinz von Foerster, the Socrates of Cybernetics, said – if you want to see, learn how to act. He was talking about the circular loop of sensorium and motorium. In order to see, there has to be interaction between the sensorium and the motorium.

In a similar way, Kiichiro Toyoda, the founder of Toyota Motor Corporation, is said to have remarked that engineers would never amount to anything unless they had to wash their hands at least three times a day – the evidence that they were getting their hands dirty from real work.

I will finish with great advice from Taiichi Ohno:

Don’t look with your eyes, look with your feet. Don’t think with your head, think with your hands.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Extended Form of the Law of Requisite Variety:

Image Reference – Now You See It. Now You Don’t (Bill Tarr)

The Extended Form of the Law of Requisite Variety:

This is a follow-up to my last week’s post – Notes on Regulation. In today’s post, I am looking at Arvid Aulin-Ahmavaara’s extended form of the law of requisite variety (using Francis Heylighen’s version). As I have noted previously, Ross Ashby, the great mind and pioneer of Cybernetics, came up with the law of requisite variety (LRV). The law can be stated as: only variety can absorb variety. Here, variety is the number of possible states available to a system. This is equivalent to statistical entropy. For example, a coin can be shown to have a variety of two – heads and tails. Thus, if a user wants a way to randomly choose one of two outcomes, the coin can be used: the user can toss the coin. However, if the user has six choices, they cannot use the coin to randomly choose one of six outcomes efficiently. In this case, a six-sided die can be used. A six-sided die has a variety of six. This is a simple explanation of variety absorbing variety.

The controller can find ways to amplify variety to still meet the external variety thrown upon the system. Let’s take the example of the coin and six choices again. It is possible for the user to toss the coin three times (or use three coins) and use the three coin-toss results to make a choice, since the variety of three coin-tosses is 2³ = 8. This is a means of amplifying variety in order to acquire requisite variety. From a cybernetics standpoint, the goal of regulation is to ensure that external disturbances do not reach the essential variables. The essential variables are important for a system’s viability. If we take the example of an animal, some of the essential variables are blood pressure, body temperature, etc. The essential variables must be kept within a specific range to ensure that the animal continues to survive. The external disturbances are denoted by D, the essential variables by E, and the actions available to the regulator by A. As noted, variety is expressed as the statistical entropy of a variable. As Aulin-Ahmavaara notes – If A is a variable of any kind, the entropy H(A) is a measure of its variety.
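Here is a minimal sketch of this amplification (my own illustration): three coin tosses yield 2³ = 8 distinguishable states, which is enough to absorb the variety of six choices once the two surplus states are retossed.

```python
import random

def coin():
    # A variety of two: 0 or 1
    return random.randint(0, 1)

def die_from_coins():
    # Three tosses amplify variety to 2**3 = 8 states; 8 >= 6, so six
    # choices can be absorbed (the two surplus states are retossed)
    while True:
        n = coin() * 4 + coin() * 2 + coin()
        if n < 6:
            return n + 1  # a fair outcome from 1 to 6

print([die_from_coins() for _ in range(10)])
```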

With this background, we can note the extended form of the Law of Requisite Variety as:

H(E) ≥ H(D) – H(A) + H(A|D) – B

Each H term represents the statistical entropy of the corresponding variable. For example, H(E) is the statistical entropy of the essential variables. The larger the value of H, the more uncertainty around the variable. The goal for the controller is to keep H(E) as low as possible, since a larger entropy for the essential variables indicates a larger range of values for them. If the essential variables are not kept to a small range of values, the viability of the organism is compromised. We can now look at the other terms of the inequality and see how the value of H(E) can be kept low.

Heylighen notes:

This means that H(E) should preferably be kept as small as possible. In other words, any deviations from the ideal values must be efficiently suppressed by the control mechanism. The inequality expresses a lower bound for H(E): it cannot be smaller than the sum on the right-hand side. That means that if we want to make H(E) smaller, we must try to make the right-hand side of the inequality smaller. This side consists of four terms, expressing respectively the variety of disturbances H(D), the variety of compensatory actions H(A), the lack of requisite knowledge H(A|D) and the buffering capability B.

As noted, D represents the external disturbances, and H(D) is the variety of disturbances coming in. If H(D) is large, it generally increases H(E) as well. Thus, an organism in a complex environment is more likely to face adversities that might drive the essential variables outside the safe range. For example, you are less likely to die while sitting in your armchair than while trekking through the Amazonian rain forest or wandering through the concrete jungle of a megacity. A good rule of thumb for survivability is to avoid environments that have a large variety of disturbances.

The term H(A) represents the variety of actions available to counter the disturbances. The more variety you have in your actions, the more likely it is that at least one of them will be able to solve the problem, escape the danger, or restore you to a safe, healthy state. Thus, the Amazonian jungle may not be so dangerous for an explorer who has a gun to shoot dangerous animals, medicines to treat disease or snakebite, filters to purify water, and the physical condition to run fast or climb trees if threatened. The term H(A) enters the inequality with a minus (–) sign, because a wider range of actions allows you to maintain a smaller range of deviations in the essential variables H(E).

The term H(A|D) represents a conditional state. It is also called the lack of requisite knowledge. It has a plus sign since it indicates a “lack”. It is not enough that you have a wide range of actions; you have to know which action will be effective. If you have minimal knowledge, then your best strategy is to try out each action at random, which is highly inefficient and ineffective if time is not on your side. For example, there is little use in having a variety of antidotes for different types of snakebites if you do not know which snake bit you. H(A|D) expresses your uncertainty about performing an action A (e.g., taking a particular antidote) for a given disturbance D (e.g., being bitten by a particular snake). The larger your uncertainty, the larger the probability that you would choose a wrong action, and thus fail to reduce the deviation H(E). Therefore, this term has a “+” sign in the inequality: more uncertainty (= less knowledge) produces more potentially lethal variation in your essential variables.

The final term B stands for buffering (passive regulation). It expresses your amount of protective reserves or buffering capacity. Better even than applying the right antidote after a snake bite is to wear protective clothing thick enough to stop any snake poison from entering your blood stream. The term is negative because higher capacity means less deviation in the essential variables.
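To see how the four terms interact, here is a toy numerical reading of the inequality (the distributions and values are invented for illustration; only the entropy formula is standard):

```python
from math import log2

def H(p):
    # Shannon entropy (in bits) of a probability distribution
    return -sum(q * log2(q) for q in p if q > 0)

H_D = H([1/8] * 8)   # 8 equally likely disturbances -> 3.0 bits
H_A = H([1/4] * 4)   # 4 equally likely actions      -> 2.0 bits
H_A_given_D = 0.5    # lack of requisite knowledge (invented value)
B = 1.0              # buffering capacity (invented value)

bound = H_D - H_A + H_A_given_D - B
print(f"H(E) >= {bound} bits")  # 3.0 - 2.0 + 0.5 - 1.0 = 0.5 bits
```

More actions, better knowledge of which action fits which disturbance, or more buffering all lower the floor under H(E).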

The law of requisite variety expresses in an abstract form what is needed for an organism to prevent or repair the damage caused by disturbances. If this regulation is insufficient, damage will accumulate, including damage to the regulation mechanisms themselves. This produces an acceleration in the accumulation of damage, because more damage implies less prevention or repair of further damage, and therefore a higher rate of additional damage.

The optimal form of the Law of Requisite Variety occurs when the minimum value of H(E) is achieved and there is no lack of requisite knowledge (H(A|D) = 0). The essence of regulation is that disturbances happen all the time, but their effects are neutralized before they have irreparably damaged the organism. This optimal result of regulation is represented as:

H(E)min = H(D) – H(A) – B

I encourage the reader to check out my previous posts on the LRV.

Getting Out of the Dark Room – Staying Curious:

Notes on Regulation:

Storytelling at the Gemba:

Exploring The Ashby Space:

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Notes on Regulation:

References:

[1] Cybernetic Principles of Aging and Rejuvenation: the buffering-challenging strategy for life extension – Francis Heylighen

[2] The Law of Requisite Hierarchy – A. Y. Aulin-Ahmavaara

Notes on Regulation:

In today’s post, I am looking at the idea of regulation. I talked about direct and indirect regulation in my previous post. In today’s post, I will look at passive and active regulation.

Ashby viewed a system as a selection of variables chosen by an observer for the purpose of sensemaking and control. The observer is looking not at what the system is (what the variables are), but at what the system does. In other words, the observer is interested in the behavior of the system, and in influencing that behavior so that the system is maintained in certain desirable states. Of all the possible states the system can be in, the observer would like to keep the system in a chosen few. To achieve this, the observer has to model the behavior of the system. As J. Achterbergh and D. Vriens note:

we should “model” the behavior of this entity (system) in such a way that we can understand how it behaves in the first place, and how this behavior reacts to “influences.” One could say that (at least) two kinds of influences on behavior (input) can be discerned: “disturbances” – causing the concrete entity to behave “improperly,” and “regulatory actions” – causing “proper” behavior (by preventing or dealing with disturbances).

The general understanding is that the environmental disturbances cause the system to behave improperly. The role of the regulator is to prevent the disturbances from reaching the essential variables of the system. The controller sets the target for the system, while the regulator acts on realizing the target. An easy example to distinguish the controller and the regulator is with a thermostat. The homeowner in this case is the controller, while the thermostat is the regulator. The homeowner decides the range for the thermostat, and all the thermostat can do is turn on or off depending upon the temperature inside the house. The regulator is not able to change the target; only the controller can change the target.

The goal of the regulator, as noted above, is to ensure that the disturbances from outside do not impact the essential variables of the system. Ashby noted:

An essential feature of the good regulator is that it blocks the flow of variety from disturbances to essential variables.

J. Achterbergh and D. Vriens expand this further:

Regulators block variety flowing from disturbances to essential variables. The more variety they block, the more effective the regulator. The most effective regulator is the one that blocks all the variety flowing from disturbances to essential variables… We can now also define regulation as the activity performed by the regulator. That is, regulation is “blocking the flow of variety from disturbances to essential variables.” If the more general description of essential variables is used (i.e., those variables that must be kept within limits to achieve some goal) then the purpose of regulation is that it enables the realization of this goal. If the goal is the survival of some concrete system, then the purpose of regulation is trying to keep the values of its essential variables within the limits needed for survival, in spite of the values of disturbances.

This is a good place to introduce the main law of Cybernetics – the law of requisite variety (LRV). LRV is the brainchild of Ross Ashby, the most prolific thinker and pioneer of Cybernetics. LRV states that only variety can absorb variety. Here variety is the number of possible states. For example, a light switch has a variety of two – ON or OFF. If the user just wants the light turned on or off, then the light switch can meet that variety. However, if the user wants the light dimmed down or up, then the situation calls for much more variety than two. Here, a light switch with a variety of two cannot absorb the variety “thrown” at it. However, a dimmer switch, with its much larger variety, can.

Ashby was inspired by Claude Shannon’s tenth theorem: there is an upper limit to the amount of variety a regulator can absorb. The controller will need to find ways to attenuate variety (filter out unwanted variety thrown at the system) and amplify internal variety so that requisite variety is achieved. A simple example of attenuating variety is the big sign on the front of a fast-food place: the customer does not go into the fast-food place and ask to buy a car. Since there is an upper limit to what a single regulator can absorb, the controller has to use multiple linked regulators to achieve amplification of variety. The fast-food place can use more employees during rush hour to meet the extra variety thrown at it. This is an example of amplifying variety.

Ashby talked about two types of regulation, which J. Achterbergh and D. Vriens describe as passive and active regulation. Passive regulation does not involve any selection; we can say that it is always working. An easy example is the shell of a turtle, which makes no selections at all. J. Achterbergh and D. Vriens explained this as follows:

In the case of passive regulation there exists a passive block between the disturbances and the essential variables. This passive block, for instance the shell of a turtle, separates the essential variables from a variety of disturbances. It is characteristic of passive regulation that it does not involve selection… the regulator does not select a regulatory move dependent on the occurrence of a possibly disturbing event, for the block is given independent of disturbances. Because no selection is involved, the passive “regulator” does not need information about changes in the state of the essential variable or about disturbances causing such changes to perform its regulatory activity.

Francis Heylighen explained passive regulation as buffering:

 Buffering—at least in the cybernetic sense—is a passive form of regulation: the system dampens or absorbs the disturbances through its sheer bulk of protective material. Examples of buffers are shock absorbers in a car, water reservoirs or holding basins dampening fluctuations in rainfall, and the fur that protects a warm-blooded animal from variations in outside temperature. The advantage is that no energy, knowledge or information is needed for active intervention. The disadvantage is that buffering is not sufficient for reaching goals that are not equilibrium states in themselves, because moving away from equilibrium requires active intervention. For example, while a reservoir makes the flow of water more even, it cannot provide water in regions where it never rains. Similarly, fur alone cannot maintain a mammal body at a temperature higher than the average temperature of the surroundings: that requires active heat production.

Active regulation requires a selection of activity and requires information. J. Achterbergh and D. Vriens explained active regulation as follows:

In the case of active regulation, the regulator needs to select a regulatory move. Dependent on either the occurrence of a change of the state of the essential variable or of a disturbance, the regulator selects the regulatory move to block the flow of variety to the essential variables. Because it has to select a regulatory move, the active regulator either needs information about changes in the state of the essential variable or about the disturbances causing such changes in order to perform its regulatory function.

There are two forms of active regulation – feedforward (cause-controlled) and feedback (error-controlled). In feedforward regulation, the regulator anticipates and acts when it senses disturbances, before they have any impact on the essential variables. Heylighen explained this as:

In feedforward regulation, the system acts on the disturbance before it has affected the system, in order to prevent a deviation from happening. For example, if you perceive a sudden movement in the vicinity of your face, you will close your eyelids before any projectile can hit it, so as to prevent potential damage to your eyeball. The disadvantage is that it is not always possible to act in time, and that the anticipation may turn out to be incorrect, so that the action does not have the desired result. For example, the projectile may not have been directed at your eyes, but at a different part of your face. By shutting your eyes, you make it more difficult to avoid the actual impact.

In feedback regulation, the regulator acts only after the essential variable is impacted.

In feedback regulation, the system neutralizes or compensates the deviation after the disturbance has pushed the system away from its goal, by performing an appropriate repair action. For example, a thermostat compensates for a fall in temperature by switching on the heating, but only after it detected a lower than desired temperature. For effective regulation, it suffices that the feedback is negative—i.e. reducing the deviation—because a sustained sequence of corrections will eventually suppress any deviation. The advantage is that there is no need to rely on a complex, error-prone process of anticipation on the basis of imperfect perceptions: only the direction of the actual deviation has to be sensed. The disadvantage is that the counteraction may come too late, allowing the deviation to cause irreversible damage before it was effectively suppressed.

Ashby viewed feedforward as reacting to threat, and feedback as reacting to disaster. Feedforward (cause-controlled) regulation generally develops out of feedback (error-controlled) regulation: we need reasonably good knowledge of the situation’s behavior, and that knowledge comes from previous feedback experiences.
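A toy simulation may help contrast the two forms (my own sketch; the room model, the numbers, and the correction rule are all invented). The error-controlled regulator lets the deviation appear and then suppresses it through a sequence of corrections, while the cause-controlled regulator compensates for the sensed disturbance before it reaches the essential variable.

```python
def simulate(feedforward, steps=5):
    temp, target = 20.0, 20.0
    for t in range(steps):
        draft = -3.0 if t == 2 else 0.0    # a cold draft hits at t = 2
        if feedforward and draft != 0.0:
            temp -= draft                  # cause-controlled: compensate before impact
        temp += draft
        print(f"t={t}: temp={temp:.2f}")
        if not feedforward and temp != target:
            temp += 0.5 * (target - temp)  # error-controlled: correct the sensed deviation

simulate(feedforward=False)  # the deviation appears at t=2, then shrinks step by step
simulate(feedforward=True)   # the deviation never reaches the essential variable
```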

In next week’s post, I will look at the extended form of the Law of Requisite Variety. I will finish this post with an example from J. Achterbergh and D. Vriens that further explains the three forms of regulation using a medieval knight:

To illustrate these different modes of regulation, imagine a medieval knight on a battlefield. One of the essential variables might be “pain,” with the norm value “none.” In combat, the knight will encounter many opponents with different weapons all potentially threatening this essential variable. To deal with these disturbances, he might wear suitable armor: a passive block. If a sword hits him nevertheless (e.g., somewhere, not covered by the armor), he might withdraw from the fight, treat his wounds and try to recover: an error-controlled regulatory activity, directed at dealing with the pain. A cause-controlled regulatory activity might be to actively parry the attacks of an opponent, with the effect that these attacks cannot harm him.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Getting Out of the Dark Room – Staying Curious:

References:

[1] Cybernetic Principles of Aging and Rejuvenation: the buffering-challenging strategy for life extension – Francis Heylighen

[2] Social Systems Conducting Experiments – Jan Achterbergh, Dirk Vriens

Getting Out of the Dark Room – Staying Curious:

In today’s post I am looking at the importance of staying curious in the light of Karl Friston’s “Free Energy Principle” (FEP) and Ross Ashby’s ideas on indirect regulation. I have discussed the Free Energy Principle here. The FEP basically states that in order to resist the natural tendency to disorder, adaptive agents must minimize surprise.

Karl Friston, the brilliant mind behind FEP noted:

the whole point of the free-energy principle is to unify all adaptive autopoietic and self-organizing behavior under one simple imperative; avoid surprises and you will last longer.

Avoiding surprises means that one has to model and anticipate a changing and itinerant world. This implies that the models used to quantify surprise must themselves embody itinerant wandering through sensory states (because they have been selected by exposure to an inconstant world): Under the free-energy principle, the agent will become an optimal (if approximate) model of its environment. This is because, mathematically, surprise is also the negative log-evidence for the model entailed by the agent. This means minimizing surprise maximizes the evidence for the agent (model). Put simply, the agent becomes a model of the environment in which it is immersed. This is exactly consistent with the Good Regulator theorem of Conant and Ashby (1970). This theorem, which is central to cybernetics, states that “every Good Regulator of a system must be a model of that system.” … Like adaptive fitness, the free-energy formulation is not a mechanism or magic recipe for life; it is just a characterization of biological systems that exist. In fact, adaptive fitness and (negative) free energy are considered by some to be the same thing.

This idea of the agent having a model of its environment is quite important in Cybernetics. In fact, the idea of FEP can be traced back to Ashby’s ideas on Cybernetics. For an organism to survive, it needs to keep certain internal variables, such as blood pressure, internal temperature, etc., in a certain range. Ashby called these essential variables, depicted by “E”. Ashby noted that the goal of regulation is to keep these essential variables in range, in the light of disturbances coming from the environment. In other words, the goal of regulation is to minimize the effect of the disturbances coming in. Perfect regulation would result in no disturbances reaching the essential variables; the organism would be completely ignorant of what is going on outside in this case. When the regulation succeeds, we say that the regulator has requisite variety: it is able to counter the variety coming in from the environment. Ashby called this “the law of Requisite Variety”, and explained it succinctly as “only variety can absorb variety.” Ashby explained direct and indirect regulation as follows:

Direct and indirect regulation occur as follows. Suppose an essential variable X has to be kept between limits x’ and x”. Whatever acts directly on X to keep it within the limits is regulating directly. It may happen, however, that there is a mechanism M available that affects X, and that will act as a regulator to keep X within the limits x’ and x” provided that a certain parameter P (parameter to M) is kept within the limits p’ and p”. If, now, any selective agent acts on P so as to keep it between p’ and p”, the end result, after M has acted, will be that X is kept between x’ and x”.

Now, in general, the quantities of regulation required to keep P in p’ and p” and to keep X in x’ to x” are independent. The law of requisite variety does not link them. Thus, it may happen that a small amount of regulation supplied to P may result in a much larger amount of regulation being shown by X.

When the regulation is direct, the amount of regulation that can be shown by X is absolutely limited to what can be supplied to it (by the law of requisite variety); when it is indirect, however, more regulation may be shown by X than is supplied to P. Indirect regulation thus permits the possibility of amplifying the amount of regulation; hence its importance.

Ashby explained the direct and indirect regulation with the following example:

Living organisms came across this possibility eons ago, for the gene-pattern is a channel of communication from parent to offspring: ‘Grow a pair of eyes,’ it says, ‘they’ll probably come in useful; and better put hemoglobin into your veins — carbon monoxide is rare and oxygen common.’ As a channel of communication, it has a definite, finite capacity, Q say. If this capacity is used directly, then, by the law of requisite variety, the amount of regulation that the organism can use as defense against the environment cannot exceed Q. To this limit, the non-learning organisms must conform. If, however, the regulation is done indirectly, then the quantity Q, used appropriately, may enable the organism to achieve, against its environment, an amount of regulation much greater than Q. Thus, the learning organisms are no longer restricted by the limit.

A lower cognitive capacity organism may be able to survive by relying just on its gene-pattern, while a higher cognitive capacity organism has to supplement the basic gene-pattern with learning behavior. In order to do this, it has to learn from its environment. Ashby continued:

In the same way the gene-pattern, when it determines the growth of a learning animal, expends part of its resources in forming a brain that is adapted not only by details in the gene-pattern but also by details in the environment… While the hunting wasp, as it attacks its prey, is guided in detail by its genetic inheritance, the kitten is taught how to catch mice by the mice themselves. Thus, in the learning organism the information that comes to it by the gene-pattern is much supplemented by information supplied by the environment; so, the total adaptation possible, after learning, can exceed the quantity transmitted directly through the gene-pattern.

It is important to note that the environment does not input information into the organism. Instead, the organism perceives the environment through its action on the environment. The environment also acts on the organism, just like the organism acts on the environment. Perception is possible only through this circular causal cycle. As Ashby noted, the gene pattern for learning allows for the organism to model its environment, and this allows for the indirect regulation. Ashby explains this point further:

This is the learning mechanism. Its peculiarity is that the gene-pattern delegates part of its control over the organism to the environment. Thus, it does not specify in detail how a kitten shall catch a mouse, but provides a learning mechanism and a tendency to play, so that it is the mouse which teaches the kitten the finer points of how to catch mice. This is regulation, or adaptation, by the indirect method. The gene-pattern does not, as it were, dictate, but puts the kitten into the way of being able to form its own adaptation, guided in detail by the environment.

The Dark Room:

At this point, we can look at the idea of the dark room. This is a thought experiment in FEP, and we can also explain it using Ashby’s ideas. If the goal of the regulator is to minimize the impact of disturbances on the essential variables, then one strategy is to go to an environment with minimal disturbances. In FEP, this thought experiment is posed similarly – if the goal of the agent is to minimize surprise, why wouldn’t the agent find a dark room and stay in it indefinitely?

A recurrent puzzle raised by critics of these models (FEP) is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the “Dark-Room Problem.” 

Karl Friston offers an answer to this question:

Technically, the resolution of the Dark-Room Problem rests on the fact that average surprise or entropy H(s|m) is a function of sensations and the agent (model) predicting them. Conversely, the entropy H(s) minimized in dark rooms is only a function of sensory information. The distinction is crucial and reflects the fact that surprise only exists in relation to model-based expectations. The free-energy principle says that we harvest sensory signals that we can predict (cf., emulation theory; Grush, 2004); ensuring we keep to well-trodden paths in the space of all the physical and physiological variables that underwrite our existence. In this sense, every organism (from viruses to vegans) can be regarded as a model of its econiche, which has been optimized to predict and sample from that econiche. Interestingly, free energy is used explicitly for model optimization in statistics (e.g., Yedidia et al., 2005) using exactly the same principles.

This means that a dark room will afford low levels of surprise if, and only if, the agent has been optimized by evolution (or neurodevelopment) to predict and inhabit it. Agents that predict rich stimulating environments will find the “dark room” surprising and will leave at the earliest opportunity. This would be a bit like arriving at the football match and finding the ground empty. Although the ambient sensory signals will have low entropy in the absence of any expectations (model), you will be surprised until you find a rational explanation or a new model (like turning up a day early). Notice that average surprise depends on, and only on, sensations and the model used to explain them. This means an agent can compare the surprise under different models and select the best model; thereby eluding any “circular explanation” for the sensations at hand.
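Friston’s point that surprise exists only in relation to a model can be shown with a few lines of arithmetic (a minimal sketch; the two “models” and their probabilities are invented): the same sensation carries very different surprise under different expectations.

```python
from math import log2

def surprise(p):
    # Surprise (self-information) of a sensation under a model, in bits
    return -log2(p)

# Invented numbers: the probability each model assigns to the sensation "darkness"
dark_room_dweller  = {"darkness": 0.95, "light": 0.05}
stimulation_seeker = {"darkness": 0.05, "light": 0.95}

s = "darkness"
print(surprise(dark_room_dweller[s]))   # ~0.07 bits: the dark room is unsurprising
print(surprise(stimulation_seeker[s]))  # ~4.32 bits: the same room is very surprising
```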

We are born with a gene-pattern that allows for learning. The basic pattern is to learn, and our survival mainly comes from this. We are able to get out of the dark room because of it. We are born curious, and this allows us to keep on learning. We have an inner ability to keep looking for answers and not be satisfied with the status quo.

I am sure there is an important lesson for us all here in the idea of the dark room and indirect regulation. I could simply say – stay curious and keep on learning. Or I can have you come to that conclusion on your own. As the famous Spanish philosopher José Ortega y Gasset noted – He who wants to teach a truth should place us in the position to discover it ourselves.

I will finish with a great lesson from Ashby to explain the idea of indirect regulation:

If a child wanted to discover the meanings of English words, and his father had only ten minutes available for instruction, the father would have two possible modes of action. One is to use the ten minutes in telling the child the meanings of as many words as can be described in that time. Clearly there is a limit to the number of words that can be so explained. This is the direct method. The indirect method is for the father to spend the ten minutes showing the child how to use a dictionary. At the end of the ten minutes the child is, in one sense, no better off; for not a single word has been added to his vocabulary. Nevertheless, the second method has a fundamental advantage; for in the future the number of words that the child can understand is no longer bounded by the limit imposed by the ten minutes. The reason is that if the information about meanings has to come through the father directly, it is limited to ten-minutes’ worth; in the indirect method the information comes partly through the father and partly through another channel (the dictionary) that the father’s ten-minute act has made available.
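Here is a toy rendering of Ashby’s dictionary lesson (my own illustration; the tiny dictionary and the function names are invented): direct teaching hands over at most a fixed number of meanings, while indirect teaching hands over a channel through which any entry becomes reachable.

```python
DICTIONARY = {  # a stand-in dictionary, invented for illustration
    "cat": "a small domesticated feline",
    "loop": "a closed circuit",
    "gemba": "the place where the work happens",
}

def teach_directly(capacity):
    # Direct regulation: the child leaves with at most `capacity` meanings
    return dict(list(DICTIONARY.items())[:capacity])

def teach_indirectly():
    # Indirect regulation: the lesson buys a channel (a lookup skill), so the
    # words the child can learn are no longer bounded by the lesson itself
    return lambda word: DICTIONARY.get(word, "look it up!")

known = teach_directly(capacity=2)  # bounded by the father's ten minutes
look_up = teach_indirectly()
print(known)
print(look_up("gemba"))             # any entry is now reachable
```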

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Cybernetics of Ohno’s Production System:

The Cybernetics of Ohno’s Production System:

In today’s post, I am looking at the cybernetics of Ohno’s Production System. For this I will start with the ideas of ultrastability from one of the pioneers of Cybernetics, Ross Ashby. It should be noted that I am definitely inspired by Ashby’s ideas and thus may take some liberty with them.

Ashby defined a system as a collection of variables chosen by an observer. “Ultrastability” can be defined as the ability of a system to change its internal organization or structure in response to environmental conditions that threaten to disturb a desired behavior or value of an essential variable (Klaus Krippendorff). Ashby identified that when a system in a state of stability (equilibrium) is disturbed by the environment, it is able to get back to the state of equilibrium. This is the feature of an ultrastable system. Let’s look at the example of an organism and its environment. The organism is able to survive, or stay viable, by making sure that certain variables, such as internal temperature, blood pressure, etc., stay within a specific range. Ashby referred to these variables as essential variables. When the essential variables go outside a specific range, the viability of the organism is compromised. Ashby noted:

That an animal should remain ‘alive’, certain variables must remain within certain ‘physiological’ limits. What these variables are, and what the limits, are fixed when the species is fixed. In practice one does not experiment on animals in general, one experiments on one of a particular species. In each species the many physiological variables differ widely in their relevance to survival. Thus, if a man’s hair is shortened from 4 inches to 1 inch, the change is trivial; if his systolic blood pressure drops from 120 mm. of mercury to 30, the change will quickly be fatal.

Ashby noted that the organism affects the environment, and the environment affects the organism: such a system is said to have a feedback. Here the environment does not simply mean the space around the organism. Ashby had a specific definition for environment. Given an organism, its environment is defined as those variables whose changes affect the organism, and those variables which are then changed by the organism’s behavior. It is thus defined in a purely functional, not a material sense. The reactionary part is the sensory-motor framework of the organism. The feedback between the reactionary part (R) of an organism (Orgm) and the environment (Envt.) is depicted below:

Ashby explains this using the example of a kitten resting near a fire. The kitten settles at a safe distance from the fire. If a lump of hot coal falls near the kitten, the environment is threatening to have a direct effect on the essential variables. If the kitten’s brain does nothing, the kitten will get burned. The kitten, being an ultrastable system, is able to use the correct mechanism – move away from the hot coal – and keep its essential variables in check. Ashby proposed that an ultrastable system has two feedbacks: one feedback loop that operates frequently, and another that operates infrequently, when the essential variables are threatened. The two feedback loops are needed for a system to get back into equilibrium. This is also how the system can learn and adapt. Paul Pangaro and Michael C. Geoghegan note:

What are the minimum conditions of possibility that must exist such that a system can learn and adapt for the better, that is, to increase its chance of survival? Ashby concludes via rigorous argument that the system must have minimally two feedback loops, or double feedback… The first feedback loop, shown on the left side and indicated via up/down arrows, ‘plays its part within each reaction/behavior.’ As Ashby describes, this loop is about the sensory and motor channels between the system and the environment, such as a kitten that adjusts its distance from a fire to maintain warmth but not burn up. The second feedback loop encompasses both the left and right sides of the diagram, and is indicated via long black arrows. Feedback from the environment is shown coming into an icon for a meter in the form of a round dial, signifying that this feedback is measurable insofar as it impinges on the ‘essential variables.’

Ashby depicted his ultrastable system as below:

The first feedback loop can be thought of as a mechanism that cannot change itself. It is static, while the second feedback loop is able to operate on some parameters so that the structure can change, resulting in a new behavior. The second feedback loop acts only when the essential variables are challenged or when the system is not in equilibrium. It must be noted that no decisions are being made in the first feedback loop. It is simply an action mechanism: it keeps doing what was working before, while the second feedback loop alters the action mechanism to produce a new behavior. If the new behavior is successful in maintaining the essential variables, the new action is continued until it is no longer effective. When the system is able to counter the threatening situation posed by the environment, it is said to have requisite variety. The law of requisite variety was proposed by Ashby as – only variety can absorb variety. The system must have the requisite variety (in terms of available actions) to counter the variety thrown upon it by the environment. The environment always possesses far more variety than the system. The system must find ways to attenuate the variety coming in, and amplify its own variety, to maintain the essential variables.

Let’s look at this with the easy example of a baby. When the baby experiences any sort of discomfort, it starts crying. The crying is the behavior that helps put it back into equilibrium (removal of discomfort), since it gets the attention of its mother or other family members. As the baby grows, its desired variables also get more specific (food, water, love, etc.). The action of crying does not always get it what it is looking for. Here the second feedback loop comes in: it tries a new behavior to see if it results in a better outcome. This behavior could be pointing at something, or even learning and using words. The new action is kept and used as long as it remains successful. The baby/child learns and adapts as needed to meet its own wants and desires.

Pangaro and Geoghegan note that the idea of an ultrastable system is applicable in social realms also:

To evoke the social arena, we call the parameters ‘behavior fields.’ When learning by trial-and-error, a behavior field is selected at random by the system, actions are taken by the system that result in observable behaviors, and the consequences of these actions in the environment are in turn registered by the second feedback loop. If the system is approaching the danger zone, and the essential variables begin to go outside their acceptable limits, the step function says, ‘try something else’—repeatedly, if necessary—until the essential variables are stabilized and equilibrium is reached. This new equilibrium is the learned state, the adapted state, and the system locks-in.

It is important to note that the first feedback loop is the overt behavior that is locked in. The system cannot change this unless the second feedback loop is engaged. Stuart Umpleby cites Ashby’s example of an autopilot to explain this further:

In his theory of adaptation two feedback loops are required for a machine to be considered adaptive (Ashby 1960).  The first feedback loop operates frequently and makes small corrections.  The second feedback loop operates infrequently and changes the structure of the system, when the “essential variables” go outside the bounds required for survival.  As an example, Ashby proposed an autopilot.  The usual autopilot simply maintains the stability of an aircraft.  But what if a mechanic miswires the autopilot?  This could cause the plane to crash.  An “ultrastable” autopilot, on the other hand, would detect that essential variables had gone outside their limits and would begin to rewire itself until stability returned, or the plane crashed, depending on which occurred first. The first feedback loop enables an organism or organization to learn a pattern of behavior that is appropriate for a particular environment.  The second feedback loop enables the organism to perceive that the environment has changed and that learning a new pattern of behavior is required.
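Ashby’s double feedback loop can be sketched as a small simulation (my own toy model, not Ashby’s homeostat; the dynamics and numbers are invented). The first loop acts on every step with the currently locked-in behavior; the second loop fires only when the essential variable leaves its limits, randomly re-selecting a behavior until the system settles into an adapted state.

```python
import random

random.seed(1)
behaviors = [-2.0, -1.0, 0.0, 1.0, 2.0]  # candidate "behavior fields" (invented)
behavior = random.choice(behaviors)
essential, lo, hi = 0.0, -5.0, 5.0       # essential variable and its survival limits

for t in range(100):
    disturbance = random.uniform(-1.0, 1.0)
    # First feedback loop: ordinary action under the currently locked-in behavior
    essential = 0.8 * essential + disturbance + behavior
    # Second feedback loop: when the essential variable leaves its limits,
    # the step function "tries something else" until equilibrium returns
    if not (lo <= essential <= hi):
        behavior = random.choice(behaviors)
        print(f"t={t}: essential={essential:.1f} out of limits; trying {behavior}")
```

Behaviors that push the essential variable out of range keep getting replaced; a behavior that keeps it within limits persists – the learned, adapted state.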

Ohno’s Production System:

Once I saw that the idea of an ultrastable system may be applied to the social realm, I wanted to see how it can be applied to Ohno’s Production System. Taiichi Ohno is regarded as the father of the famous Toyota Production System. Before it was “Toyota Production System”, it was Ohno’s Production System. Taiichi Ohno was inspired by the challenge issued by Kiichiro Toyoda, the founder of Toyota Motor Corporation. The challenge was to catch up with America in 3 years in order to survive.  Ohno built his ideas with inspirations from Sakichi Toyoda, Kiichiro Toyoda, Henry Ford and the supermarket system. Ohno did a lot of trial and error. And the ideas he implemented, he made sure were followed. Ohno was called “Mr. Mustache”. The operators thought of Ohno as an eccentric. They used to joke that military men used to wear mustaches during World War II, and that it was rare to see a Japanese man with facial hair afterward. “What’s Mustache up to now?” became a common refrain at the plant as Ohno carried out his studies. (Source: Against All Odds, Togo and Wartman)

His ideas were not easily understood by others. He had to tell others that he would take responsibility for the outcomes in order to convince them to follow his ideas. Ohno could not completely make others understand his vision, since his ideas were novel and not always the norm. Ohno was persistent, and he made improvements slowly and steadily. He would later talk about the idea of Toyota being slow and steady like the tortoise. Ohno loved what he did, and he had tremendous passion pushing him forward with his vision. As noted, his ideas were based on trial and error, and were thus perceived as counter-intuitive by others.

Ohno can be viewed as part of the second feedback loop and the assembly line as part of the first feedback loop, while the survivability of the company via the metrics of cost, quality, productivity etc. can be viewed as the “essential variables”. Ohno implemented the ideas of kanban, jidoka etc. on the line, and they were followed. The assembly line could not change the mechanisms established as part of Ohno’s production system. Ohno’s production system can be viewed as a closed system in that the framework is static. Ohno watched how the interactions with the environment went, and how the essential variables were being impacted. Based on this, the existing behaviors were either changed slightly, or changed out all the way until the desired equilibrium was achieved.

Here the production system framework is static because it cannot change itself. The assembly line where it is implemented is closed to changes at a given time. It is “action oriented” without decision powers to make changes to itself. There is no point in copying the framework unless you have the same problems that Ohno faced.

Umpleby also describes the idea of the double feedback loop in terms of quality improvement similar to what we have discussed:

The basic idea of quality improvement is that an organization can be thought of as a collection of processes. The people who work IN each process should also work ON the process, in order to improve it. That is, their day-to-day work involves working IN the process (the first, frequent feedback loop). And about once a week they meet as a quality improvement team to consider suggestions and to design experiments on how to improve the process itself. This is the second, less frequent feedback loop that leads to structural changes in the process. Hence, process improvement methods, which have been so influential in business, are an illustration of Ashby’s theory of adaptation.

This follows the idea of kairyo and kaizen in the Toyota Production System.

Final Words:

It is important to note that Ohno’s Production System is not Toyota Production System is not Toyota’s Production System is not Lean. Ohno’s Production System evolved into the Toyota Production System. Toyota’s production system is emergent, while the Toyota Production System is not. The Toyota Production System’s framework can be viewed as a closed system, in the sense that the framework is static. At the same time, the different plants implementing the framework are dynamic, due to the simple fact that they exist in an ever-changing environment. For an organization to adapt to an ever-changing environment, it needs to be ultrastable. An organization can have several ultrastable systems connected with each other, resulting in homeostasis. I will finish with an excellent quote from Mike Jackson:

The organization should have the best possible model of the environment relevant to its purposes… the organization’s structure and information flows should reflect the nature of that environment so that the organization is responsive to it.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Cybernetics of a Society:

The Cybernetics of a Society:

In today’s post, I will be following the thoughts from my previous post, Consistency over Completeness. We were looking at each one of us being informationally closed, and computing a stable reality. The stability comes from the recursive computations of what is being observed. I hope to expand the idea of stability from an individual to a society in today’s post.

Humberto Maturana, the cybernetician biologist (or biologist cybernetician) said – anything said is said by an observer. Heinz von Foerster, one of my heroes in cybernetics, expanded this and said – everything said is said to an observer. Von Foerster’s thinking was that language is not monologic but always dialogic. He noted:

The observer as a strange singularity in the universe does not attract me… I am fascinated by images of duality, by binary metaphors like dance and dialogue where only a duality creates a unity. Therefore, the statement – “Anything said is said by an observer” – is floating freely, in a sense. It exists in a vacuum as long as it is not embedded in a social structure because speaking is meaningless, and dialogue is impossible, if no one is listening. So, I have added a corollary to that theorem, which I named with all due modesty Heinz von Foerster’s Corollary Nr. 1: “Everything said is said to an observer.” Language is not monologic but always dialogic. Whenever I say or describe something, I am after all not doing it for myself but to make someone else know and understand what I am thinking of or intending to do.

Heinz von Foerster’s great insight was perhaps inspired by the works of his distant relative and the brilliant philosopher, Ludwig Wittgenstein. Wittgenstein proposed that language is a very public matter, and that a private language is not possible. The meaning of a word, such as “apple” does not inherently come from the word “apple”. The meaning of the word comes from how it is used. The meaning comes from repeat usage of the word in a public setting. Thus, even though the experience of an apple may be private to the individual, how we can describe it is by using a public language. Von Foerster continues:

When other observers are involved… we get a triad consisting of the observers, the languages, and the relations constituting a social unit. The addition produces the nucleus and the core structure of society, which consists of two people using language. Due to the recursive nature of their interactions, stabilities arise, they generate observers and their worlds, who recursively create other stable worlds through interacting in language. Therefore, we can call a funny experience apple because other people also call it apple. Nobody knows, however, whether the green color of the apple you perceive, is the same experience as the one I am referring to with the word green. In other words, observers, languages, and societies are constituted through recursive linguistic interaction, although it is impossible to say which of these components came first and which were last – remember the comparable case of hen, egg and cock – we need all three in order to have all three.

Klaus Krippendorff defined closure as follows – a system is closed if it provides its own explanation and no references to an input are required. With closure, recursion is a good, and perhaps the only, way to interact. As organizationally closed entities, we are able to stay viable only as part of a social realm. When we are part of a social realm, we have to construct reality with an external point of reference. Understanding is still generated internally, but it is anchored to that external reference, and this adds to the reality of the social realm as a collective. If the society is to have an identity that is sustained over time, its viability must come from its members. Like a set of nested dolls, society’s structure comes from participating individuals, who are themselves embedded recursively in the societal realm. The structure of the societal or social realm is not designed, but emergent from the interactions, desires, goals etc. of the individuals. The society is able to live on while the individuals come and go.
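
This emergence of stable, shared words from recursive interaction can even be simulated. Below is a hedged sketch of a minimal “naming game” (in the spirit of Luc Steels’ language-game experiments – my illustration, not von Foerster’s): agents start with no shared vocabulary, and repeated pairwise interactions typically drive the whole population toward a single shared word.

```python
import random

# A hedged sketch of a minimal "naming game" (in the spirit of Luc Steels'
# language-game experiments, not taken from von Foerster): agents start
# with no shared word for an object; recursive pairwise interactions
# typically drive the population toward a single shared word.

random.seed(1)
agents = [set() for _ in range(20)]   # each agent's candidate words

def interact(speaker, hearer):
    if not speaker:                                  # no word yet: invent one
        speaker.add(f"word-{random.randint(0, 9999)}")
    word = random.choice(sorted(speaker))
    if word in hearer:                               # success: both converge
        speaker.intersection_update({word})
        hearer.intersection_update({word})
    else:                                            # failure: hearer learns it
        hearer.add(word)

for _ in range(5000):
    a, b = random.sample(range(len(agents)), 2)
    interact(agents[a], agents[b])

print({frozenset(a) for a in agents})   # usually a single shared vocabulary
```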

I am part of someone else’s environment, and I add to the variety of their environment with my decisions and actions (sometimes inactions). This is an important reminder for us to hold onto in light of recent world events including a devastating pandemic. I will finish with some wise words from Heinz von Foerster:

A human being is a human being together with another human being; this is what a human being is. I exist through another “I”, I see myself through the eyes of the Other, and I shall not tolerate that this relationship is destroyed by the idea of the objective knowledge of an independent reality, which tears us apart and makes the Other an object which is distinct from me. This world of ideas has nothing to do with proof, it is a world one must experience, see, or simply be. When one suddenly experiences this sort of communality, one begins to dance together, one senses the next common step and one’s movements fuse with those of the other into one and the same person, into a being that can see with four eyes. Reality becomes communality and community. When the partners are in harmony, twoness flows like oneness, and the distinction between leading and being led has become meaningless.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Consistency over Completeness:

Source – The Certainty of Uncertainty: Dialogues Introducing Constructivism by Bernhard Poerksen

Consistency over Completeness:

Today’s post is almost a follow-up to my earlier post – The Truth about True Models. In that post, I talked about Dr. Donald Hoffman’s idea of the Fitness-Beats-Truth or FBT Theorem. Loosely put, the idea behind the FBT Theorem is that we have evolved not to have “true” perceptions of reality. We survived because we had “fitness”-based models, not “true” models. In today’s post, I am continuing this idea using the ideas of Heinz von Foerster, one of my cybernetics heroes.

Heinz von Foerster came up with “the postulate of epistemic homeostasis”. This postulate states:

The nervous system as a whole is organized in such a way (organizes itself in such a way) that it computes a stable reality.

It is important to note here that we are speaking about computing “a” reality and not “the” reality. Our nervous system is informationally closed (to follow up from the previous post). This means that we do not have direct access to the reality outside; all we have is what we can perceive through our perception framework. The famous philosopher Immanuel Kant referred to this as the noumena (the reality that we don’t have direct access to) and the phenomena (the perceived representation of the external reality). All we can do is compute a reality based on our interpretive framework. This is just a version of reality, and each one of us computes a version that is unique to us.

The other concept to make note of is the “stable” part of the stable reality. In Gödelian* terms, our nervous system cares more about consistency than completeness. When we encounter a phenomenon, our nervous system looks at stable correlations from the past and present, and computes a sensation that confirms the perceived representation of the phenomenon. Von Foerster gives the example of a table. We can see the table, we can touch it, and maybe bang on it. With each of these confirmations and correlations between the different sensory inputs, the table becomes more and more a “table” to us.

*Kurt Gödel, one of the most famous logicians of the last century, showed that any formal system able to do elementary arithmetic cannot be both complete and consistent; it is either incomplete or inconsistent.
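
Returning to the table example, here is a crude sketch of those “confirmations and correlations” (the numbers are invented for illustration): each agreeing sense is treated as evidence that multiplies the odds of the hypothesis “this is a table”.

```python
# A crude sketch of "confirmations and correlations" (made-up numbers):
# each sensory channel that agrees with the hypothesis "this is a table"
# multiplies the odds in its favor, so the table becomes more and more
# a "table" to us with every correlated confirmation.

belief = 0.5               # initial belief that this is a table
likelihood_ratio = 4.0     # agreement is 4x likelier if it really is a table

for sense in ["sight", "touch", "sound of a knock"]:
    odds = belief / (1.0 - belief) * likelihood_ratio
    belief = odds / (1.0 + odds)
    print(f"after {sense}: belief = {belief:.3f}")   # 0.800, 0.941, 0.985
```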

From the cybernetics standpoint, we are talking about an observer and the observed. The interaction between the observer and the observed is an act of computing a reality. The first step in computing a reality is making distinctions. If there are no distinctions, everything about the observed will be uniform, and no information can be processed by the observer. The distinctions refer to the variety of the observed: the more distinctions there are, the more variety the observed has. From a second order cybernetics standpoint, the variety of the observed depends upon the variety of the observer. This goes back to the earlier point about each of us computing a unique stable reality. Each one of us is unique in how we perceive things; this is our variety as the observer. The observed, that which is external to us, always has more potential variety than we do. We cut down or attenuate this high variety by choosing certain attributes that interest us. Once the distinctions are made, we find relations between them to make sense of it all. This corresponds to the confirmations and correlations that we noted above in the example of a table.
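
Here is a small sketch of this attenuation (the object and its attributes are invented for illustration): the observed offers many potential distinctions, and the observer keeps only the ones that interest them.

```python
# A small sketch of attenuation (the object and attributes are invented):
# the observed offers many potential distinctions; the observer keeps
# only the attributes that interest them.

observed = {
    "color": "green", "mass_g": 182, "cultivar": "Granny Smith",
    "has_stem": True, "wax_coating_um": 2.1, "orchard_row": 14,
}

attributes_of_interest = {"color", "mass_g"}   # the observer's selection
model = {k: v for k, v in observed.items() if k in attributes_of_interest}
print(model)   # {'color': 'green', 'mass_g': 182} - a much simpler model
```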

We are able to survive in our environment because we are able to continuously compute a stable reality. The stability comes from the recursive computations of what is being observed. For example, let’s go back to the example of the table. Our eyes receive the sensory input of the image of the table; this is a first computation. This sensory image then goes up the “neurochain,” where it is computed again. This happens again and again, as the input gets “decoded” at each level, until it gets satisfactorily decoded by our nervous system. The final result is a computation of a computation of a computation of a computation, and so on. The stability is achieved from this recursion.
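
Von Foerster connected this stability-through-recursion to what he called “eigenvalues” of recursively applied operations. Here is a minimal sketch in that spirit (the choice of cosine as the operation is mine): no matter where we start, feeding the operation its own output settles on the same stable value.

```python
import math

# A minimal sketch in the spirit of von Foerster's "eigenvalues": feed an
# operation its own output until it stabilizes. The choice of cosine is
# mine; the point is that the stable value reflects the recursion, not
# the starting input - a computation of a computation of a computation...

def stabilize(op, x, tol=1e-10, max_steps=10_000):
    for step in range(max_steps):
        nxt = op(x)
        if abs(nxt - x) < tol:    # no longer changing: a stable "reality"
            return nxt, step
        x = nxt
    return x, max_steps

for start in (0.0, 1.0, 42.0):
    value, steps = stabilize(math.cos, start)
    print(f"start {start:5.1f} -> stable value {value:.6f} in {steps} steps")
```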

The idea of consistency over completeness is quite fascinating. It follows mainly from the limitation of our nervous system: it cannot give us a true representation of reality. We live with uncertainty, yet our nervous system strives to provide us a stable version of reality, one that is devoid of uncertainties. We are able to think about this only from a second order standpoint. We are able to ponder our cognitive blind spots because we are able to do second order cybernetics. We are able to think about thinking; we are able to put ourselves into the observed. Second order cybernetics is the study of observing systems, where the observers themselves are part of the observed system.

I will leave the reader with a final thought – the act of observing oneself is also a computation of “a” stable reality.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Wittgenstein and Autopoiesis:

Wittgenstein and Autopoiesis:

In Tractatus Logico-Philosophicus, Wittgenstein wrote the following:

“The world of the happy man is a different one from that of the unhappy man.”

He also noted that, if a lion could talk, we would not understand him.

As a person very interested in cybernetics, I am looking at what Wittgenstein said in the light of autopoiesis. Autopoiesis is the brainchild of two Chilean biologist cyberneticians, Humberto Maturana and Francisco Varela. Autopoiesis was put forth as the joining of two Greek words, “auto” meaning self, and “poiesis” meaning creating. I have talked about autopoiesis here. For this post, I am most interested in autopoiesis’ idea of “organizational closure”. An entity is organizationally closed when it is informationally tight. In other words, autopoietic entities maintain their identities by remaining informationally closed to their surroundings. We human beings are autopoietic entities. We cannot take in information as a commodity. We generate meaning within ourselves based on experiencing external perturbations. Information does not enter our brain from outside.

Let’s take the example of me looking at a blue light bulb. I interpret the presence of the blue light as being blue when my eyes are hit with the light. The light does not inform my brain; rather, my brain interprets the light as blue based on all the similar interactions I have had before. There is no qualitative information coming to my brain saying that it is a blue light. The light is “informative” rather than being a commodity piece of information. As cybernetician Bernard Scott noted:

…an organism does not receive “information” as something transmitted to it, rather, as a circularly organized system it interprets perturbations as being informative.

All of my previous interactions/perturbations with the light, and others explaining those interactions as being “blue light” generated a structural coupling so that my brain perceives a new similar perturbation as being “blue light”. This also brings up another interesting idea from Wittgenstein. We cannot have a private language. One person alone cannot invent a private language. All we have is public language, one that is reinterpreted and reinforced with repeat interactions. The sensation that we call “blue light” is a unique experience that is 100% unique to me as the interpreter. This supports the concept of autopoiesis as well. We cannot “open” ourselves to others so that they can see what is going on inside our head/mind.
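
A toy sketch may help here (the color boundaries and names below are invented): two systems with different interpretive frameworks compute different labels from the very same wavelength, because the label is generated inside rather than carried by the light.

```python
# A toy sketch of the same perturbation being "informative" rather than
# informational (boundaries and names are invented): two systems with
# different interpretive frameworks compute different labels from the
# very same wavelength. The label is generated inside, not by the light.

def make_interpreter(bands):
    """bands: list of (label, upper_bound_nm) pairs, in ascending order."""
    def interpret(wavelength_nm):
        for label, upper in bands:
            if wavelength_nm <= upper:
                return label
        return "beyond my distinctions"
    return interpret

me = make_interpreter([("blue", 495), ("green", 570), ("red", 750)])
other = make_interpreter([("short", 550), ("long", 750)])

print(me(470), "|", other(470))   # blue | short - same light, two "worlds"
```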

Our interpretive framework, which we use to make sense of perturbations hitting us, is a result of all our past experiences and reinforcements. Our interpretive framework is unique to us Homo sapiens. We share a similar interpretive framework, but the actual results from our interpretive frameworks are unique to each one of us. It is because of this that even if a lion could talk to us, we would not be able to understand it, at least not at the start. We lack the interpretive framework to understand it. The uniqueness of our interpretive frameworks is also the reason we feel differently about the same experiences. This is the reason a happy person cannot understand the world of a sad person, and vice versa.

Our brain makes sense of a sensory perturbation based on the interpretive framework it already has. A good example to think about is the images that fall on our retina. The images are upside down, but we are able to “see” right side up. This is possible due to our structural coupling. What happens if there is a brand-new sensory perturbation? We can make sense of it only in terms of what we already know. The more we know, the more we are able to know further. As we face the same perturbation repeatedly, we are able to experience it “better”, and describe it to ourselves in a richer manner. With enough repeat interactions, we are finally able to experience it in our own unique manner. From this standpoint, there is no mind-body separation. The “mind” and “body” are both part of the same interpretive framework.

I will leave with another thought experiment to spark these ideas in the reader’s mind. There has always been talk about aliens. From what Wittgenstein taught us, when we meet the aliens, will we be able to understand each other?

I recommend the following posts to the reader to expand upon this post:

If a Lion Could Talk:

The System in the Box:

A Study of “Organizational Closure” and Autopoiesis:

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was When is a Model Not a Model?

When is a Model Not a Model?

Ross Ashby, one of the pioneers of Cybernetics, started an essay with the following question:

I would like to start not at: How can we make a model?, but at the even more primitive question: Why make a model at all?

He came up with the following answer:

I would like then to start from the basic fact that every model of a real system is in one sense second-rate. Nothing can exceed, or even equal, the truth and accuracy of the real system itself. Every model is inferior, a distortion, a lie. Why then do we bother with models? Ultimately, I propose, we make models for their convenience.

To go further with this idea: we make models to come up with a way to describe how things work. We do this so that we can also answer the question – what happens when… If a model has no predictive or explanatory power, there is no use for it. From a cybernetics standpoint, we are not interested in “What is this thing?”, but in “What does this thing do?” We never try to completely understand a “system”. We understand it in chunks, the chunks that we are interested in. We construct a model in our heads that we call a “system” to make sense of how we think things work out in the world. We only care about certain specific interactions and their outcomes.

One of the main ideas that Ashby proposed was the idea of variety. Loosely put, variety is the number of available states a system has. For example, a switch has a variety of two – ON or OFF. A stop light (generally) has a variety of three – Red, Yellow or Green. As complexity increases, the variety also increases. Variety is dependent on the ability of the observer to discern the states: a keen-eyed observer can discern a higher number of states for a phenomenon than another observer. Take the example of the great fictional characters Sherlock Holmes and John Watson. When they come upon a stranger, Holmes is able to discern more variety than Watson; he is able to tell the most amazing details about the stranger that Watson cannot. When we construct a model, the model lacks the original variety of the phenomenon we are modeling. This is important to keep in mind. The external variety is always much larger than the internal variety of the observer; the observer simply lacks the ability to tackle the extremely high amount of variety. To address this, the observer removes or attenuates the unwanted variety of the phenomenon and constructs a simpler model. For example, when we talk about a healthcare system, the model in our mind is pretty simple – one hospital, some doctors, some patients, and so on. It does not include the millions of patients, the computer system, the cafeteria, the janitorial service etc. We only look at the variables that we are interested in.
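
Here is a rough sketch of variety as a count of states, using the switch and stop light examples above (with a toy numeric stand-in for the Holmes/Watson contrast):

```python
from itertools import product

# A rough sketch of variety as a count of distinguishable states, using
# the switch and stop light examples above.

switch = ["ON", "OFF"]                  # variety 2
stoplight = ["Red", "Yellow", "Green"]  # variety 3

# Combining systems multiplies variety: 2 x 3 = 6 joint states.
print(len(list(product(switch, stoplight))))   # 6

# A keener observer discerns more states in the same phenomenon
# (a toy numeric stand-in for the Holmes/Watson contrast):
reading = 0.63
watson = "high" if reading > 0.5 else "low"    # discerns only 2 states
holmes = round(reading, 2)                     # discerns ~100 states
print(watson, "|", holmes)                     # high | 0.63
```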

Ashby explained this very well:

Another common aim that will have to be given up is that of attempting to “understand” the complex system; for if “understanding” a system means having available a model that is isomorphic with it, perhaps in one’s head, then when the complexity of the system exceeds the finite capacity of the scientist, the scientist can no longer understand the system—not in the sense in which he understands, say, the plumbing of his house, or some of the simple models that used to be described in elementary economics.

A crude depiction of model-making: the observer chooses certain variables that are of interest, and creates a similar-“looking” version as the model.

Ashby elaborated on this idea as:

We transfer from system to model to lose information. When the quantity of information is small, we usually try to conserve it; but when faced with the excessively large quantities so readily offered by complex systems, we have to learn how to be skillful in shedding it. Here, of course, model-makers are only following in the footsteps of the statisticians, who developed their techniques precisely to make comprehensible the vast quantities of information that might be provided by, say, a national census. “The object of statistical methods,” said R. A. Fisher, “is the reduction of data.”

There is an important saying from Alfred Korzybski – the map is not the territory. His point was that we should not take the map to be the real thing. An important corollary to this, for a model-maker, is:

If the model is the same as the phenomenon it models, it fails to serve its purpose. 

The usefulness of a model lies in its being an abstraction. This is mainly due to the observer not being able to handle the excess variety thrown at them. This also answers one part of the question posed in the title of this post – a model ceases to be a model when it is the same as the phenomenon it models. The second part of the answer is that the model has to have some similarities to the phenomenon, and this is entirely dependent on the observer and what they want.

This brings me to the next important point – we can only manage models. We don’t manage the actual phenomenon; we only manage the models of the phenomenon in our heads. The reason, again, is that we lack the ability to manage the variety thrown at us.

The eminent management cybernetician, Stafford Beer, has the following words of wisdom for us:

Instead of trying to specify it in full detail, you specify it only somewhat. You then ride on the dynamics of the system in the direction you want to go.

To paraphrase Ashby, we need not collect more information than is necessary for the job. We do not need to attempt to trace the whole chain of causes and effects in all its richness, but attempt only to relate controllable causes with ultimate effects.

The final aspect of model-making is to take into consideration the temporary nature of the model. Again, paraphrasing Ashby – We should not assume the system to be absolutely unchanging. We should accept frankly that our models are valid merely until such time as they become obsolete.

Final Words:

We need a model of the phenomenon to manage the phenomenon. And how we model the phenomenon depends upon our ability, as the observer, to manage variety. We only need to choose the specific variables that we want. Perhaps I can explain this further with the deep philosophical question – if a tree falls in a forest and no one is around to hear it, does it make a sound? The answer, to a cybernetician, should be obvious at this point: whether there is a sound or not depends on the model you have, and on whether the tree falling making a sound holds any value for you.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Maximum Entropy Principle: