Ashby’s Trowel:

In today’s post, I am looking at a concept that I am referring to as Ashby’s trowel. The premise of this idea is very simple – context matters! I will start off the discussion using the very well-known heuristic of Occam’s razor. Occam’s razor is named after the 14th century English Franciscan friar, William of Occam. It is commonly described as – entities should not be multiplied beyond necessity. In other words, explanations should only have the necessary number of assumptions. Very often this is incorrectly presented as a call to seek simplicity. As a cybernetician, I can tell you that simplicity is overrated.

The idea of a philosophical razor is that it can be used to remove the unwanted things by slicing the unwanted assumptions away from the model. Occam’s razor is the most famous of the many philosophical razors. In medicine, Occam’s razor is often contrasted with Hickam’s dictum. Hickam’s dictum is named after the twentieth century American physician, John Hickam. It is described as – patients can have as many diseases as they damn (or darn) well please. So, if an elderly patient complains of several ailments, Hickam would advise that we trust the patient and try to treat several likely diseases instead of assuming that the different ailments are resulting from one single disease. This heuristic is meaningful when the patient is elderly, is on multiple medications, and if the ailments started at different times. In other words, simplicity is overrated when dealing with a situation as complex as the human body, especially when compounded by age and the side effects of many different medications.

A trowel is a tool used by a mason to add and to remove mortar as needed so that a clean level surface is achieved. With Ashby’s trowel, I am putting forth the reminder that the solution that you are seeking should have enough complexity to match the complexity of the problem that you are seeking to solve. Ashby presented this as his law of Requisite Variety – only variety can absorb variety. Here variety refers to the number of possible states of a “system” conjured up (constructed) by an observer. If we take the example of a light switch, it generally has two states – ON and OFF. Thus, its variety is 2. The external variety is always more than the internal variety. In the case of a light switch, the user’s variety of needing something ON and OFF when they want can be easily met by the light switch. But now consider if the user wants to dim the lighting with the switch. The variety of ON and OFF cannot meet this new demand that is added by the user. The engineer now has to come up with a dimmer switch that offers a continuous range of variety between its LOW and HIGH settings.
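
To make the variety arithmetic concrete, here is a minimal sketch in Python. The state names and counts are my own illustrative assumptions, not Ashby’s; the point is only that a device can satisfy a demand when its variety is at least as large as the variety of the demand.

```python
# A minimal sketch of variety matching, with assumed states and demands.
# Ashby's law of Requisite Variety: only variety can absorb variety.

def has_requisite_variety(demands: set, device_states: set) -> bool:
    """A device can meet the user's demands only if it has at least
    as many distinguishable states as there are distinct demands."""
    return len(device_states) >= len(demands)

switch = {"ON", "OFF"}                                   # variety = 2
print(has_requisite_variety({"light", "dark"}, switch))  # True

# The user now also wants dimmed lighting: demand variety = 5.
demands = {"dark", "low", "medium", "high", "full"}
print(has_requisite_variety(demands, switch))            # False: 2 < 5

dimmer = {f"level_{i}" for i in range(101)}              # variety = 101
print(has_requisite_variety(demands, dimmer))            # True
```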

When we have a problem, we are often reminded to go for simple solutions. This may be a good heuristic to hold on to; however, it should not be treated as a law. One of the problems with seeking simple solutions is that we stop searching for more solutions once we get to a “simple” solution. This is referred to as “satisfaction of search.”

From the cybernetics standpoint, simplicity and complexity also depend upon who is doing the observing. What is simple to you may be complicated for me, and vice versa. The more meaningful heuristic to have is Ashby’s trowel – context matters, so we have to match the complexity.

I welcome the reader to look into this more –

I will finish with a wise quote that is very much aligned with Ashby’s trowel, from one of my favorite philosophers, David Hume – If the cause, assigned for any effect, be not sufficient to produce it, we must either reject that cause, or add to it such qualities as will give it a just proportion to the effect.

Stay safe and always keep on learning…

In case you missed it, my last post was The Purpose of Purposeful Entities in Purposive Systems:

Maturana’s Aesthetic Seduction:

In today’s post, I am looking at the great cybernetician Humberto Maturana’s idea of “aesthetic seduction”. Maturana was an important biologist who was one of the creators of the concept of autopoiesis. I have written about it previously. He challenged the prevalent notion at that time that our nervous system takes in information from the environment. He proposed that our nervous system is closed. This means that there is no input of information coming in from the environment. Instead, the nervous system is reading itself. When the nervous system is perturbed by the environment, it goes through a structural change based on its current state, and this transformation is what is read by the nervous system. The perception or experience of the color red of a rose, for example, is a result of our closed nervous system rather than of the rose’s petals. The information is generated within itself. We are not information processing machines, and there is no input-output business going on. As Raf Vanderstraeten notes:

the central premise of Autopoiesis and Cognition is that systems are informationally closed. Thus, no information crosses the boundary separating the system from its environment. We do not see a world “out there” that exists apart from us. Rather, we see only what our systemic organization allows us to see. The world merely irritates; it triggers changes determined by the system’s own organization. The world cannot instruct an observing system; the world rather is constructed by the observing system. Only a closed system is able to know (the world).

As one can imagine, such an idea may seem rather strange or “out there”. Maturana spoke of aesthetic seduction with regards to convincing others of his ideas. His stand was that he should not try to convince anyone. He wanted his ideas to speak for themselves and he wanted the beauty of his ideas to invite the readers. This is the beauty of aesthetic seduction (no pun intended). He noted:

The idea of aesthetic seduction is based on the insight that people enjoy beauty. We call something beautiful when the circumstances we find ourselves in make us feel good. Judging something as ugly and unpleasant, on the other hand, indicates displeasure because we are aware of the difference to our views of what is agreeable and pleasant. The aesthetic is harmony and pleasure, the enjoyment of what is given to us. An attractive view transforms us. A beautiful picture makes us look at it again and again, enjoy its color scheme, photograph it, perhaps even buy it. The relationship with a picture may transform the life of people because the picture has become a source of aesthetic experience.

He pointed out that there is no manipulation involved here. He really wanted the readers to enjoy the presented ideas.

I certainly never intend to seduce or persuade people in a manipulative way. Beauty would vanish if I tried to seduce in this way. Any attempt to persuade applies pressure and destroys the possibility of listening. Pressure creates resentment. Wanting to manipulate people stimulates resistance. Manipulation means exploiting our relation with other people in such a way as to give them the impression that whatever happens is beneficial and advantageous for them. But the resulting actions of the manipulated person are, in fact, useful for the manipulator. Manipulation, therefore, really means cheating people.

Maturana advises us to be respectful and engage in open conversations. Our nervous systems may be closed, but that does not mean that our minds should be too.

The only thing left to me in the way of aesthetic seduction is just to be what I am, wholly and entirely, and to admit no discrepancy whatsoever between what I am saying and what I am doing. Of course, this does not at all exclude some jumping about and playacting during a lecture. But not in order to persuade or to seduce but in order to generate the experiences that produce and make manifest what I am talking about. The persons becoming acquainted with me in this way can then decide for themselves whether they want to accept what they see before them. Only when there is no discrepancy between what is said and what is done, when there is no pretense and no pressure, aesthetic seduction may unfold. In such a situation, the people listening and debating will feel accepted to such an extent as to be able to present themselves in an uninhibited and pleasurable manner. They are not attacked, they are not forced to do things, and they can show themselves as they are, because someone else is presenting himself naked and unprotected. Such behavior is always seductive in a respectful way because all questions and fears suddenly become legitimate and completely new possibilities of encountering one another emerge.

Maturana’s words are so beautiful that I am not going to add further to them. I will leave you with his words on not wanting to convince others of his ideas:

I never attempt to convince anyone. Some people become annoyed when they are confronted with my considerations. That is perfectly okay. I would never try to correct their views and then force my own ideas upon them.

Stay safe and always keep on learning…

In case you missed it, my last post was A Constructivist’s View of POSIWID:

A Constructivist’s View of POSIWID:

POSIWID or “Purpose Of a System Is What It Does” is a famous dictum in Cybernetics. This is attributed to the Management Cybernetician Stafford Beer. Beer noted:

A good observer will impute the purpose of the system from its actions and thus from the resultant state.

Hence the key aphorism:

The purpose of a system is what it does.

There is, after all, no point in claiming that the purpose of a system is to do what it consistently fails to do.

I have written about this before here – https://harishsnotebook.wordpress.com/2019/02/18/purpose-of-a-system-in-light-of-vsm/ and here – https://harishsnotebook.wordpress.com/2020/06/14/hegel-dialectics-and-posiwid/

In cybernetics, the emphasis is on what a “system” does, and not especially what a “system” is, or what the designer or management of the “system” claims the “system” is doing. Thus, we can see that POSIWID has a special place in every cybernetician’s mind. A “system” is a collection of variables that an observer purposefully selects to make sense of the world around them. The boundaries, parts etc. of the “system” vary according to who is doing the observing, and the purpose also is assigned by the observer. Beer explains this clearly:

The point that I find that I am most anxious to add is that this System has a PURPOSE. The trouble is: WHO SAYS SO?

So where does the idea that Systems in general have a purpose come from? IT COMES FROM YOU!

 It is you the observer of the System who recognizes its purpose. Come to think of it, then, is it not just YOU — the observer — who recognizes that there is a System in the first place?

Another key point to mention is that an observer may impute several purposes for the “system”. Beer continues:

Consider the System called a tiger…

The purpose of a tiger is:

  • to be itself
  • to be its own part of the Jungle System
  • to be a link in animal evolution
  • to eat whatever it eats, for Ecology’s sake
  • to provide tiger-skins
  • to perpetuate the genes of which it is the host

For the moment, I am prepared to say that the purpose of a tiger is to demonstrate that the recognition of a System and of its purpose is a highly subjective affair.

Understanding the purpose of a “system” helps us in understanding how we construct the “systems” themselves:

All of this turns out to mean that we simply cannot attribute purposes, or even boundaries, to systems as if these were objective facts of nature. The facts about the system are in the eye of the beholder. This sounds like an unproductive conclusion, but we can make something of it. It means that both the nature and the purpose of a System are recognized by an observer within his perception of WHAT THE SYSTEM DOES.

From Beer’s writing, it is clear that the POSIWID is dependent upon the observer. This is also the basis of constructivism. In constructivism, the observer is the king or queen. The “system” is a selection of variables chosen by the observer to improve their understanding of a phenomenon. The boundaries drawn by the observer are entirely arbitrary and contingent on the mood of the observer. A “system” is thus a mental construct of the observer. For example, an educational “system” may have physical artifacts in the world such as buildings, books, chalk boards etc. However, depending upon the observer, what the “system” entails will change. For a student, it is a “system” for education, or it is a “system” to get away from their hometown. For a teacher, it is a “system” to provide meaning to their lives or it is a “system” to spend time while doing another job on the side. There can be as many “systems” involving the same collection of parts as there are observers. Beer continues:

The definition of the purpose of a System as being what it does lays the onus not on ‘nature’ but on the particular observer concerned. It immediately accounts for UNRESOLVABLE disagreements about systems too. For two people may well disagree about anything at all, and never become reconciled. They say that they will be convinced, and give way, if the FACTS show that they were mistaken. But the facts about the nature and purpose of a System are not objective realities. Once you have declared, as an observer, what the facts are, the nature and purpose of the System observed are ENTAILED.

As a constructivist, this is an important concept to grasp. If there are two observers and each is constructing the “system”, they each will come up with their own “systems” and varying POSIWIDs. Our first step in Systems Thinking then is to understand how the other participants view the “system”, the purposes they assign, and what they see as the POSIWIDs. Even if they assign a purpose for the “system”, the outcome that they perceive may not match what they expect. I have come to take away some important points from our discussion so far:

  1. There are always multiple participants in the social realm. It is very important to understand what the “system” means for each stakeholder. This includes the parts, the whole, the assigned purposes and the POSIWIDs. There is no POSIWID(s) without an observer.
  2. It is important to understand that there is always a gap between what we believe the purpose(s) of a “system” should be, and what it actually is doing. It is tempting to assign an objective reality to the “IT” here. We should resist this temptation and understand that the “IT” or the “system” is an “as-if” model or abstraction that we employ to make sense.
  3. To carry on from the previous point, in order to understand the gap, we need good comparators in place to allow us to measure what the gap between the expected and actual is. POSIWIDs are entirely dependent upon the variety of the observer to distinguish what is happening. A good example to point this out further will be to take the cliché fictional example of Sherlock Holmes and Inspector Lestrade. Holmes, the master observer, is able to distinguish many more attributes than Inspector Lestrade, which would correlate to more POSIWIDs.
  4. On a similar note, what we perceive as the “system” is doing could be faulty. This means that we need an ongoing error correction step to improve our ability to manage the “system”. We need to interact with the “system” as much as possible, and also welcome input from other participants and their perspectives. We cannot manage a “system” unless we are a part of the “system”. We should embrace and own our epistemic humility.
  5. The POSIWID(s) should be reinterpreted as often as possible, with input from others. They help us understand the dynamics of the various parts and how they interact with each other.
  6. We should focus on only a few POSIWIDs at a time. Since we lack the variety to manage all the external variety thrown at us, we should attenuate and filter out the unwanted POSIWIDs.
  7. We cannot predict what the POSIWID(s) will be beforehand. Due to the complexity of connections between the parts, and the nonlinear relations between them, POSIWIDs are more likely to be unpredictable. This is another reason we should resist the temptation to treat “systems” as objective realities in the world.

One of the main struggles I had when I started my journey into constructivism was this – how can we manage a “system” if it is entirely “subjective”? I have put the term subjective in quotes because there is no subject/object distinction in constructivism. I will write more on this later. For the moment, I will carry on with the use of the term “subjective”. Beer explained this well:

‘How is it that systems are subjective, while some of them can be singled out and declared to be viable?’

‘Once you have defined them, you can tell whether they are viable or not.’

‘And those criteria are suddenly supposed to be objective?’

‘Well, it’s all about necessity and sufficiency within a stated frame of reference.’

if systems are subjective phenomena, then we are going to have trouble in determining a measure. The whole idea of measures is to be objective… Yet the problem we face is not unique. In fact, the measures that we are accustomed to call objective work only because we accept a set of conventions about how they are to be employed. For example, if we quote the height of Mount Everest, we do not mean that this is the distance you would travel from the base camp to climb it; nor do we mean that if we look at Mount Everest while holding a ruler at arm’s length, we can read off its height. We might have agreed on either of these conventions: they would both work, given certain other stateable conditions. It seems that objective measures, like objective systems, exist only as conventional crystallizations of one out of a virtually infinite number of subjective possibilities.

Stay safe and always keep on learning…

In case you missed it, my last post was Systems in Quotes vs. Systems Without Quotes:

Source: The Heart of Enterprise (Stafford Beer, 1979)

Systems in Quotes vs. Systems Without Quotes:

Humberto Maturana is one of my favorite authors who has helped me further my learning of cybernetics. Sadly, he passed away recently. In today’s post, I am inspired by Maturana’s ideas. One of Maturana’s famous ideas is “autopoiesis.” I have written about this here. A closely related idea from Maturana is the difference between objectivity without parentheses and objectivity in parentheses. He explains this as follows:

There are two distinct attitudes, two paths of thinking and explaining. The first path I call objectivity without parentheses. It takes for granted the observer-independent existence of objects that – it is claimed – can be known; it believes in the possibility of an external validation of statements. Such a validation would lend authority and unconditional legitimacy to what is claimed and would, therefore, aim at subjection. It entails the negation of all those who are not prepared to agree with the “objective” facts. One does not want to listen to them or try to understand them. The fundamental emotion reigning here is powered by the authority of universally valid knowledge. One lives in the domain of mutually exclusive transcendental ontologies: each ontology supposedly grasps objective reality; what exists seems independent from one’s personality and one’s actions.

The other attitude I call objectivity in parentheses; its emotional basis is the enjoyment of the company of other human beings. The question of the observer is accepted fully, and every attempt is made to answer it. The distinction between objects and the experience of existence is, according to this path, not denied but the reference to objects is not the basis of explanations, it is the coherence of experiences with other experiences that constitutes the foundation of all explanation. In this view, the observer becomes the origin of all realities; all realities are created through the observer’s operations of distinction. We have entered the domain of constitutive ontologies: all Being is constituted through the Doing of observers. If we follow this path of explanation, we become aware that we can in no way claim to be in possession of the truth but that there are numerous possible realities. Each of them is fully legitimate and valid although, of course, not equally desirable. If we follow this path of explanation, we cannot demand the subjection of our fellow human beings but will listen to them, seek cooperation and communication, and will try to find out under what circumstances we would consider to be valid what they are saying. Consequently, some claim will be true if it satisfies the criteria of validation of the relevant domain of reality.

Maturana is a proponent of objectivity in parentheses. Maturana teaches us that it is impossible to establish an observer-independent point of reference. Everything said is said by an observer. He agrees that there seem to be objects independent of us. The use of parentheses is to acknowledge this – to signal a certain state of awareness. In other words, we do not discover reality, but we invent a reality. We construct an experiential version of reality that is accessible to our interpretative framework. This is a version that is built through a circular causal loop between us and the environment in which we are embedded. We are embodied minds embedded in our world, and not bodies with minds separated from the world. The latter view represents objectivity without parentheses.

Our version of reality becomes stable from our history of interactions with our environment. The environment contains everything outside our closed interpretative framework. This includes other beings also. The history of interactions provides us with an opportunity to generate correlations that we can assign meanings to. For example, as a child, we learn that crying generally leads to situations where we can find comfort in the form of food, attention etc. However, as we grow older, most of us have to relearn that crying does not lead to comfort. We have to try other means to get what we need – learning to speak a common language. There is an error correction that goes on in the social realm where we can find commonalities in the realities that we construct. However, this can also lead to clans and tribes, where, as a group, we isolate ourselves from other clans and tribes with opposing ideas. An important point to be made at this juncture is that the success of the constructed reality is based simply on the viability of the construction. If the constructed reality continues to stay viable over time, then it has merit. There is no external point of reference utilized here. There is no external authority who decrees what is right and wrong, or what is moral or immoral. The only way we would be willing to change the construction is if we realize that it is no longer viable, based on either an internal reference point or on something happening in our environment that challenges our survival altogether. The first case is where we have to change our internal structure. This could be based on a perturbation from outside, such as conversing with a person with an opposing view or reading a book that presents a powerful argument that challenges our paradigm. The second case is where our organization itself gets changed, and we cease to exist.

Systems in Quotes:

The more I have learned about cybernetics, especially second order cybernetics and the works of thinkers such as Heinz von Foerster and Humberto Maturana, the more I start to question the use of “systems”. The word “system” is used in many ways to represent many things. Sometimes it could be the biological system (our body); sometimes it could be the education system; sometimes it could be the network system; on and on. To use a quote from Jean-Paul Sartre, “This word has been so stretched and has taken on so broad a meaning that it no longer means anything at all.” Sartre was talking about existentialism. But I think it is quite suitable here. My statement might come across as quite irrational to some of the readers. Please bear with me as I try to explain my view. There is after all nothing rational about the complexity of what we try to represent with the word “system”. The “system” could mean different things to different people. It all depends on who is doing the description. Let’s take the example of an organization. It is quite common for management consultants to say we need to learn to change the “system” or fix the “system”. Or we should not blame the “system”. The emphasis here is that the “system” is something that we can change or it is something real that we can fix. As I have pointed out often here on the blog, my view is that “systems” are mental constructs used to make sense of the world around us. It is a construction of the observer, and they decide which parts go within the boundary, and where the boundary of the “system” is drawn. There is nothing objective about a “system”. “Systems” are part of the experiential reality of the observer. Since we are informationally closed, we cannot share this experiential reality.

When I talk about “systems”, in the spirit of Maturana, I am differentiating between Systems in quotes, and systems without quotes. If we replace the word “objectivity” with “system”, and “parentheses” with “quotes” in Maturana’s explanation, perhaps my position will become clearer. My concern with not using quotes is that we are removing the observer from the observation; the describer from the description. To put it in other words – a cat doesn’t know that it is a cat. The distinguishing characteristics come from the distinguisher rather than the distinguished. In the case of an organization, if we are going to blame the “system”, where will we start? The assumption is that we all know what we mean by the “system” here. The first step in systems thinking is to try to view the world from the other person’s viewpoint. This is part of understanding the boundaries and how the other person views the world. In other words, we are actively looking to perturb our closed interpretative framework. We are looking to actively engage and change our minds. How often do we do this? Is this what the consultants look to do when they talk about fixing the “system”? Maturana follows up on his objectivity in parentheses idea in a way that I find quite apt here:

They might – possibly – follow the path of objectivity in parentheses and, therefore, be capable of reflection: They would respect differences, would not claim to be the sole possessors of truth, and would enjoy the company of others. In the process of living together, they would produce different cultures. Consequently, the number of possible realities may seem potentially infinite but their diversity is constrained by communal living, by cultures and histories created together, by shared interests and predilections. Every human being is certainly different but not entirely different.

When we hear the word “system” being thrown around, our first reaction should be – can you please elaborate on what you mean by “system”?

Stay safe and always keep on learning…

In case you missed it, my last post was Being-In-the-Ohno-Circle:

Cybernetics of Kindness:

In today’s post, I am looking at the Socrates of Cybernetics, Heinz von Foerster’s ethical imperative:

“Always act so as to increase the number of choices.”

I see this as the recursive humanist commandment. This is very much applicable to ethics, and how we should treat each other. Von Foerster said the following about ethics:

Whenever we speak about something that has to do with ethics, the other is involved. If I live alone in the jungle or in the desert, the problem of ethics does not exist. It only comes to exist through our being together. Only our togetherness, our being together, gives rise to the question, How do I behave toward the other so that we can really always be one?

Von Foerster’s views align with that of constructivism, the idea that we construct our knowledge about our reality. We construct our knowledge to “re-cognize” a reality through the intercorrelation of the activities of the various sense organs. It is through these computed correlations that we recognize a reality. No findings exist independently of observers. Observing systems can only correlate their sense experiences with themselves and each other.

Paul Pangaro reminded me that von Foerster did not mean “options” or “possibilities”. Von Foerster specifically chose the word “choices”. By choices, he meant those selections among options that you might “actually take” depending on who “you are” right now. Here choices narrow down to the few that apply most to what you are now in this moment and in this context, down to a decision that makes you who you are. As von Foerster said, “Don’t make the decision, let the decision make you.” You and the choice you take are indistinguishable.

Since we are the ones doing the construction, we are also ultimately responsible for what we construct. No one should take this away from us. Ernst von Glasersfeld, the father of radical constructivism, explained this well:

The moment you begin to think that you are the author of your knowledge, you have to consider that you are responsible for it. You are responsible for what you are thinking, because it’s you who’s doing the thinking and you are responsible for what you have put together because it’s you who’s putting it all together. It’s a disagreeable idea and it has serious consequences, because it makes you truly responsible for everything you do. You can no longer say “well, that’s how the world is”, or “sono così”; you know, that’s not good enough.

Cybernetics is about communication and control in the animal and machine, as Norbert Wiener viewed it. When we view control in terms of von Foerster’s ethical imperative, interesting thoughts come about. Control is about reducing the number of choices so that only certain pre-selected activities are available for the one being controlled. For example, a steersman has to control their ship such that it maintains a specific course, and here the ship’s “available options” to move are drastically reduced. When we use this view of control and apply it to human beings, we should do so in light of von Foerster’s ethical imperative.

Von Foerster also said – A is better off when B is better off. This also provides further clarity on the recursiveness. If I am to make sure that I act so as to increase the number of choices for B, then B also in turn does the same. How I act impacts how others (re)act, which in turn impacts how I act back… on and on. This might remind the reader of the golden rule – treat others as you would like others to treat you. However, this misses the point about constructivism and the ongoing interaction that leads to the construction of a social reality. I see this as part of a social contract. As Jean-Jacques Rousseau noted, man is born free, but everywhere he is in chains. The social contract comes about from the ongoing interactions and the contexts we are in with our fellow human beings as part of being in a society or social groups. This also means that this is dynamic and contingent in nature. What was “good” before may not be “good” today. This requires an ongoing framing and reframing through interactions.

John Boyd, father of OODA loop, shed more light on this:

Studies of human behavior reveal that the actions we undertake as individuals are closely related to survival, more importantly, survival on our own terms. Naturally, such a notion implies that we should be able to act relatively free or independent of any debilitating external influences — otherwise that very survival might be in jeopardy. In viewing the instinct for survival in this manner we imply that a basic aim or goal, as individuals, is to improve our capacity for independent action. The degree to which we cooperate, or compete, with others is driven by the need to satisfy this basic goal. If we believe that it is not possible to satisfy it alone, without help from others, history shows us that we will agree to constraints upon our independent action — in order to collectively pool skills and talents in the form of nations, corporations, labor unions, mafias, etc — so that obstacles standing in the way of the basic goal can either be removed or overcome. On the other hand, if the group cannot or does not attempt to overcome obstacles deemed important to many (or possibly any) of its individual members, the group must risk losing these alienated members. Under these circumstances, the alienated members may dissolve their relationship and remain independent, form a group of their own, or join another collective body in order to improve their capacity for independent action.

In a similar fashion, Dirk Baecker also noted the following:

Control means to establish causality ensured by communication. Control consists in reducing degrees of freedom in the self-selection of events. This is why the notion of “conditionality” is certainly one of the most important notions in the field of systems theory. Conditionality exists as soon as we introduce a distinction which separates subsets of possibilities and an observer who is forced to choose, yet who can only choose depending on the “product space” he is able to see. If we assume observers on both sides of the control relationship, we end up with subsets of possibilities selecting each other and thereby experiencing, and solving, the problem of “double contingency” so much cherished by sociologists. In other words, communication is needed to entice observers into a self-selection and into the reduction of degrees of freedom that goes with it. This means there must be a certain gain in the reduction of degrees of freedom, which for instance may be a greater certainty in the expectation of specific things happening or not happening.

Ultimately, this is all about what we value for ourselves and for the society we are part of. Our personal freedom makes sense only in light of others’ personal freedoms. That is the context – in relation to another human being, one who may be less fortunate than us. Making the world easier for those less fortunate than us makes the world better for every one of us. I will finish with a great quote from one of my favorite science fiction characters, Doctor Who:

“Human progress isn’t measured by industry. It’s measured by the value you place on a life. An unimportant life. A life without privilege. The boy who died on the river, that boy’s value is your value. That’s what defines an age. That’s what defines a species.”

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was The Constraint of Custom:

The Cybernetics of “Here & Now” and “There & Then”:

In today’s post, I am looking at difference. Difference is a big concept in Cybernetics. As Ross Ashby noted:

The most fundamental concept in cybernetics is that of “difference”, either that two things are recognizably different or that one thing has changed with time.

In Cybernetics, the goal is to eliminate the difference. If we take the example of a steersman on a boat, they are continuously trying to correct their course so that they can reach their destination. The set course defines the path, and any deviation due to environmental conditions or other factors will need to be corrected. This is a negative feedback cycle, where the current value is compared against a set value, and any difference will trigger an action from the steersman. If the steersman has enough variety, in terms of experience or technology, they can easily correct the difference.

We can see from the example that there has to be a set value so that the current value can be compared against it. This comparison has to be either continuous (if possible) or as frequent as possible to allow the steersman to control the external variety. If the steersman is not able to steer the boat to be in a “zone of safety”, they will lose control of the boat. If the feedback is received at long intervals, the steersman will not be effective in steering the boat. This basic idea can be applied to all sorts of situations. Basically, we identify a goal value, and then have processes in place to ensure that the “system” of interest is kept within an allowable range of the goal value. From this standpoint, we can identify a problem as the difference between the goal value and the current value. When this difference is beyond an allowable value, we have to initiate an action that will bring the system back into the tolerable range.
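
As a rough illustration of this comparator loop, here is a toy sketch in Python. The set value, gain, and disturbance sizes are made-up numbers, not from any source; the sketch only shows that the difference stays small when the comparison is frequent, and grows when feedback arrives at long intervals.

```python
# A toy negative-feedback loop: compare the current value against a set
# value and act on the difference. All numbers are illustrative.
import random

def steer(set_value: float, steps: int = 100, gain: float = 0.5,
          correct_every: int = 1) -> float:
    """Return the final absolute difference from the set value."""
    heading = 0.0
    for t in range(steps):
        heading += random.uniform(-2.0, 2.0)   # environmental disturbance
        if t % correct_every == 0:             # the comparator runs only this often
            error = set_value - heading        # difference: set value minus current value
            heading += gain * error            # corrective action shrinks the difference
    return abs(set_value - heading)

random.seed(0)
print(steer(90.0, correct_every=1))    # frequent feedback: stays near the set course
print(steer(90.0, correct_every=25))   # infrequent feedback: the difference accumulates
```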

This discussion points to the importance of maintaining the system within the viable range for selected essential variables. These could be the number of sales or the rate of employee retention for an organization. This is about ongoing survival by keeping the organization viable. We can see that this is a homeostatic type loop about the “here and now” for the organization, where selected essential variables are kept within a tolerance range. As noted before, this loop has to be either continuous if possible, or as frequent as possible.

What we have discussed does not address how an organization can grow. Our discussion has been about how to keep the organization surviving. Now we will look at the cybernetics of growth, which is also an important aspect of the viability of an organization. For the growth part, similar to the first loop, we need a second loop where the goal value is an ideal state. This ideal state is the “there and then” for the organization. This is a long-term goal for the organization, and unlike the homeostatic loop, this second loop does not have to be continuous or frequent. This second loop utilizes more infrequent comparisons. The emphasis is still on keeping the essential variables in check by frequently keeping an eye on what is going on here and now, while at the same time looking out into the near future (“there and then”) infrequently. I encourage the reader to look into Stafford Beer’s VSM model that looks at the “here and now” and “there and then” ideas to ensure viability of an organization. I have written an introduction to VSM here.

For some of the readers, this might remind you of the Roman god Janus. Janus is depicted with two faces looking in opposite directions. He is viewed as the god of change or transitions, with one face looking into the past and present, while the other looks into the future.

This may be paradoxical for some readers. In order to be adaptive, maintaining the status quo is very important. A smaller frequent feedback loop for the status quo, and a larger infrequent loop for adjusting the course into the future, are needed for viability. The idea of the two self-correcting loops goes back to Ross Ashby. I have written about it here.

A keen reader might see traces of Chris Argyris and Donald Schon’s “double loop” learning here. That is the case because Argyris and Schon were inspired by Ashby. They note the following in Organizational Learning II:

We borrow the distinction between single- and double-loop learning from W. Ross Ashby’s ‘Design for a Brain’. Ashby formulates his distinction in terms of (a) the adaptive behavior of a stable system, “the region of stability being the region of the phase space in which all the essential variables lie within their normal limits,” and (b) a change in the value of an effective parameter, which changes the field within which the system seeks to maintain its stability. One of Ashby’s examples is the behavior of a heating or cooling system governed by a thermostat. In an analogy to single-loop learning, the system changes the values of certain variables (for example, the opening or closing of an air valve) in order to keep temperature within the limits of a setting. Double-loop learning is analogous to the process by which a change in the setting induces the system to maintain temperature within the range specified by a new setting.
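
A rough rendering of that thermostat analogy in code may help. This is my own toy version of the quoted distinction, not Ashby’s or Argyris and Schon’s formalism; the function names and numbers are assumptions.

```python
# Single loop: adjust the variable (temperature) toward a fixed setting.
# Double loop: revise the setting itself. Names and numbers are assumed.

def single_loop_step(temp: float, setting: float, tolerance: float = 0.5) -> float:
    """Inner, frequent 'here and now' loop: nudge temperature toward the setting."""
    if temp > setting + tolerance:
        return temp - 0.5   # e.g., open an air valve to cool
    if temp < setting - tolerance:
        return temp + 0.5   # e.g., turn on the heater
    return temp

def double_loop_step(setting: float, goal_still_adequate: bool) -> float:
    """Outer, infrequent 'there and then' loop: change the setting itself."""
    return setting if goal_still_adequate else setting + 2.0

setting, temp = 20.0, 26.0
for _ in range(20):                            # frequent comparisons
    temp = single_loop_step(temp, setting)
print(temp)                                    # held within the limits of the setting

setting = double_loop_step(setting, goal_still_adequate=False)  # rare review
print(setting)                                 # the field of stability itself has moved
```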

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was The Cybernetics of Bayesian Epistemology:

Direct and Indirect Constraints:

In today’s post, I am following on the theme of Lila Gatlin’s work on constraints and tying it up with cybernetics. Please refer to my previous posts here and here for additional background. As I discussed in the last post, Lila Gatlin used the analogy of language to explain the emergence of complexity in evolution. She postulated that less complex organisms such as invertebrates focused on D1 constraints to ensure that the genetic material is passed on accurately over generations, while vertebrates maintained a constant level of D1 constraints and utilized D2 constraints to introduce novelty, leading to complexification of the species. Gatlin noted that this is similar to Shannon’s second theorem, which points out that if a message is encoded properly, then it can be sent over a noisy medium in a reliable manner. As Jeremy Campbell notes:

In Shannon’s theory, the essence of successful communication is that the message must be properly encoded before it is sent, so that it arrives at its destination just as it left the transmitter, intact and free from errors caused by the randomizing effects of noise. This means that a certain amount of redundancy must be built into the message at the source… In Gatlin’s new kind of natural selection, “second-theorem selection,” fitness is defined in terms very different and abstract than in classical theory of evolution. Fitness here is not a matter of strong bodies and prolific reproduction, but of genetic information coded according to Shannon’s principles.

The codes that made possible the so-called higher organisms, Gatlin suggests, were redundant enough to ensure transmission along the channel from DNA to protein without error, yet at the same time they possessed an entropy, in Shannon’s sense of “amount of potential information,” high enough to generate a large variety of possible messages.

Gatlin viewed that complexity arose from the ability to introduce more variety while at the same time maintaining accuracy in an optimal mix, similar to human language, where new ideas constantly emerge while the main grammar, syntax etc. are maintained. As Campbell continues:

In the course of evolution, certain living organisms acquired DNA messages which were coded in this optimum way, giving them a highly successful balance between variety and accuracy, a property also displayed by human languages. These winning creatures were the vertebrates, immensely innovative and versatile forms of life, whose arrival led to a speeding-up of evolution.

As Campbell puts it, vertebrates were agents of novelty. They were able to revolutionize their anatomy and body chemistry. They were able to evolve more rapidly and adapt to their surroundings. The first known vertebrate is a bottom-dwelling fish that lived over 350 million years ago. They had a heavy external skeleton that anchored them to the floor of the water-body. They evolved such that some of the spiny parts of the skeleton grew into fins. They also evolved such that they developed skulls with openings for sense organs such as eyes, nose, ears etc. Later on, some of them developed limbs from the bony supports of fins, leading to the rise of amphibians.

What kind of error-correcting redundancy did the DNA of these evolutionary prize winners, the vertebrates, possess? It had to give them the freedom to be creative, to become something markedly different, for their emergence was made possible not merely by changes in the shape of a common skeleton, but rather by developing whole new parts and organs of the body. Yet this redundancy also had to provide them with the constraints needed to keep their genetic messages undistorted.

Gatlin defined the first type of redundancy, one that allows deviation from equiprobability as ‘D1 constraint’. This is also referred to as ‘governing constraint’. The second type of redundancy, one that allows deviation from independence was termed by Gatlin as ‘D2 constraint’, and this is also referred to as ‘enabling constraint’. Gatlin’s speculation was that vertebrates were able to use both D1 and D2 constraints to increase their complexification, ultimately leading to a high cognitive being such as our species, homo sapiens.

One of the pioneers in Cybernetics, Ross Ashby, also looked at a similar question. He was looking at the biological learning mechanisms of “advanced” organisms. Ashby identified that for less complex organisms, the main source of regulation is their gene pattern. For Ashby, regulation is linked to their viability or survival. He noted that the less complex organisms can rely just on their gene pattern to continue to survive in their environment. Ashby noted that they are adapted because their conditions have been constant over many generations. In other words, an organism of low complexity such as a hunting wasp can hunt and survive simply based on its genetic information. It does not need to learn to adapt; it can adapt with what it has. Ashby referred to this as direct regulation. With direct regulation, there is a limit to the adaptation. If the regularities of the environment change, the hunting wasp will not be able to survive. It relies on the regularities of the environment for its survival. Ashby contrasted this with indirect regulation. With indirect regulation, one is able to amplify adaptation. Indirect regulation is the learning mechanism that allows the organism to adapt. A great example for this is a kitten. As Ashby notes:

This (indirect regulation) is the learning mechanism. Its peculiarity is that the gene-pattern delegates part of its control over the organism to the environment. Thus, it does not specify in detail how a kitten shall catch a mouse, but provides a learning mechanism and a tendency to play, so that it is the mouse which teaches the kitten the finer points of how to catch mice.

The learning mechanism in its gene pattern does not directly teach the kitten to hunt for mice. However, chasing mice and interacting with them trains the kitten how to catch mice. As Ashby notes, the gene pattern is supplemented by the information supplied by the environment. Part of the regulation is delegated to the environment.

In the same way the gene-pattern, when it determines the growth of a learning animal, expends part of its resources in forming a brain that is adapted not only by details in the gene-pattern but also by details in the environment. The environment acts as the dictionary, while the hunting wasp, as it attacks its prey, is guided in detail by its genetic inheritance, the kitten is taught how to catch mice by the mice themselves. Thus, in the learning organism the information that comes to it by the gene-pattern is much supplemented by information supplied by the environment; so, the total adaptation possible, after learning, can exceed the quantity transmitted directly through the gene-pattern.

Ashby further notes:

As a channel of communication, it has a definite, finite capacity, Q say. If this capacity is used directly, then, by the law of requisite variety, the amount of regulation that the organism can use as defense against the environment cannot exceed Q.  To this limit, the non-learning organisms must conform. If, however, the regulation is done indirectly, then the quantity Q, used appropriately, may enable the organism to achieve, against its environment, an amount of regulation much greater than Q. Thus, the learning organisms are no longer restricted by the limit.

As I look at Ashby’s ideas, I cannot help but see similarities between the D1/D2 constraints and direct/indirect regulation respectively. Indirect regulation, similar to enabling constraints, helps the organism adapt to its environment by connecting things together. Indirect regulation has a second order nature to it, such as learning how to learn. It works on being open to possibilities when interacting with the environment. It brings novelty into the situation. Similar to governing constraints, direct regulation focuses only on the accuracy of the ‘message’. Nothing additional is possible; there is no amplification. Direct regulation is hardwired, whereas indirect regulation is enabling. Direct regulation is context-free, whereas indirect regulation is context-sensitive. What the hunting wasp does is entirely reliant on its gene pattern, no matter the situation, whereas what a kitten does is entirely dependent on the context of the situation.

Final Words:

Cybernetics can be looked at as the study of possibilities, especially why, out of all the possibilities, only certain outcomes occur. There are strong undercurrents of information theory in Cybernetics. For example, in information theory, entropy is a measure of how many messages might have been sent, but were not. In other words, if there are a lot of possible messages available, and only one message is selected, then it eliminates a lot of uncertainty. Therefore, this represents a high information scenario. Indirect regulation allows us to look at the different possibilities and adapt as needed. Additionally, indirect regulation allows retaining the successes and failures and the lessons learned from them.
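
As a small illustration of “messages that might have been sent, but were not,” here is Shannon’s entropy formula in Python; the example distributions are made up.

```python
# Shannon entropy: the average uncertainty removed when one message is
# selected from a set of possibilities. Example distributions are made up.
from math import log2

def entropy(probabilities) -> float:
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([1.0]))                 # 0.0 bits: only one message was ever possible
print(entropy([0.5, 0.5]))            # 1.0 bit: one of two equally likely messages
print(entropy([1/26] * 26))           # ~4.70 bits: one of 26 equiprobable letters
print(entropy([0.7, 0.1, 0.1, 0.1]))  # ~1.36 bits: a skewed source is more predictable
```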

I will finish with a great lesson from Ashby to explain the idea of the indirect regulation:

If a child wanted to discover the meanings of English words, and his father had only ten minutes available for instruction, the father would have two possible modes of action. One is to use the ten minutes in telling the child the meanings of as many words as can be described in that time. Clearly there is a limit to the number of words that can be so explained. This is the direct method. The indirect method is for the father to spend the ten minutes showing the child how to use a dictionary. At the end of the ten minutes the child is, in one sense, no better off; for not a single word has been added to his vocabulary. Nevertheless, the second method has a fundamental advantage; for in the future the number of words that the child can understand is no longer bounded by the limit imposed by the ten minutes. The reason is that if the information about meanings has to come through the father directly, it is limited to ten-minutes’ worth; in the indirect method the information comes partly through the father and partly through another channel (the dictionary) that the father’s ten-minute act has made available.

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was D1 and D2 Constraints:

D1 and D2 Constraints:

In today’s post, I am following up from my last post and looking further at the idea of constraints as proposed by Dr. Lila Gatlin. Gatlin was an American biophysicist who used ideas from information theory to propose an information-processing view of life. In information theory, the ‘constraints’ are the ‘redundancies’ utilized for the transmission of the message. Gatlin’s use of this idea from an evolutionary standpoint is quite remarkable. I will explain the idea of redundancies in language using an example I have used before here. This is the famous idea that if a monkey had infinite time on its hands and a typewriter, it will, at some point, type out the entire works of Shakespeare, just by randomly striking the typewriter keys. It is obviously highly unlikely that a monkey can actually do this. In fact, this was investigated further by William R. Bennett, Jr., a Yale professor of Engineering. As Jeremy Campbell, in his wonderful book, Grammatical Man, notes:

Bennett… using computers, has calculated that if a trillion monkeys were to type ten keys a second at random, it would take more than a trillion times as long as the universe has been in existence merely to produce the sentence “To be, or not to be: that is the question.”

This is mainly because the keyboard of a typewriter does not truly reflect how the letters of the alphabet are actually used in English. The typewriter keyboard has only one key for each letter. This means that every letter has the same chance of being struck. From an information theory standpoint, this represents a maximum entropy scenario. Any letter can come next since they all have the same probability of being struck. In English, however, the distribution of letters is not the same. Some letters such as “E” are more likely to occur than, say, “Q”. This is a form of “redundancy” in language. Here redundancy refers to regularities, something that occurs on a regular basis. Gatlin referred to this redundancy as “D1”, which she described as divergence from equiprobability. Bennett used this redundancy next in his experiment. This would be like saying that some letters now had a lot more keys on the typewriter, so that they are more likely to be struck. Campbell continues:

Bennett has shown that by applying certain quite simple rules of probability, so that the typewriter keys were not struck completely at random, imaginary monkeys could, in a matter of minutes, turn out passages which contain striking resemblances to lines from Shakespeare’s plays. He supplied his computers with the twenty-six letters of the alphabet, a space and an apostrophe. Then, using Act Three of Hamlet as his statistical model, Bennett wrote a program arranging for certain letters to appear more frequently than others, on the average, just as they do in the play, where the four most common letters are e, o, t, and a, and the four least common letters are j, n, q, and z. Given these instructions, the computer monkeys still wrote gibberish, but now it had a slight hint of structure.

The next type of redundancy in English is the divergence from independence. In English, we know that certain letters are more likely to come together. For example, “ing” or “qu” or “ion”. If we see an “i” and “o”, then there is a high chance that the next letter is going to be an “n”. If we see a “q”, we can be fairly sure that the next letter is going to be a “u”. The occurrence of one letter makes the occurrence of another letter highly likely. In other words, this type of redundancy makes the letters interdependent rather than independent. Gatlin referred to this as “D2”. Bennett utilized this redundancy for his experiment:

Next, Bennett programmed in some statistical rules about which letters are likely to appear at the beginning and end of words, and which pairs of letters, such as th, he, qu, and ex, are used most often. This improved the monkey’s copy somewhat, although it still fell short of the Bard’s standards. At this second stage of programming, a large number of indelicate words and expletives appeared, leading Bennett to suspect that one-syllable obscenities are among the most probable sequences of letters used in normal language. Swearing has a low information content! When Bennett then programmed the computer to take into account triplets of letters, in which the probability of one letter is affected by the two letters which come before it, half the words were correct English ones and the proportion of obscenities increased. At a fourth level of programming, where groups of four letters were considered, only 10 percent of the words produced were gibberish and one sentence, the fruit of an all-night computer run, bore a certain ghostly resemblance to Hamlet’s soliloquy:

TO DEA NOW NAT TO BE WILL AND THEM BE DOES

DOESORNS CALAWROUTOULD

We can see that as Bennett’s experiment started using more and more redundancies found in English, a certain structure seems to emerge. With the use of redundancies, even though it might appear that the monkeys were free to choose any key, the program made it such that certain events were more likely to happen than others. This is the basic premise of constraints. Constraints make certain things more likely to happen than others. This is different from a cause-and-effect phenomenon like a billiard ball hitting another billiard ball. Gatlin’s brilliance was to use this analogy with evolution. She pondered why some species were able to evolve to be more complex than others. She concluded that this has to do with the two types of redundancies, D1 and D2. She considered the transmission of genetic material to be similar to how a message is transmitted from the source to the receiver. She determined that some species were able to evolve differently because they were able to use the two redundancies in an optimal fashion.
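
A toy version of Bennett’s experiment can be sketched with a simple Markov chain. This is not Bennett’s actual program or corpus; the training text and settings here are assumptions, but the progression is the same: order 0 is the equiprobable typewriter, order 1 adds D1 (letter frequencies), and order 2 and above add D2 (dependence on the preceding letters).

```python
# A toy "computer monkey". Order 0 strikes keys equiprobably; order 1
# weights letters by frequency (D1); order >= 2 conditions each letter
# on the preceding context (D2). The training text is a stand-in.
import random
from collections import Counter, defaultdict

text = "to be or not to be that is the question " * 50  # assumed corpus

def monkey(text: str, order: int, length: int = 60) -> str:
    if order == 0:                   # equiprobable keys: maximum entropy
        keys = sorted(set(text))
        return "".join(random.choice(keys) for _ in range(length))
    if order == 1:                   # D1: divergence from equiprobability
        letters, weights = zip(*Counter(text).items())
        return "".join(random.choices(letters, weights=weights, k=length))
    k = order - 1                    # order >= 2 -- D2: divergence from independence
    table = defaultdict(Counter)
    for i in range(len(text) - k):
        table[text[i:i + k]][text[i + k]] += 1
    out = text[:k]
    for _ in range(length):
        successors = table[out[-k:]]
        if not successors:           # context never seen with a successor
            break
        letters, weights = zip(*successors.items())
        out += random.choices(letters, weights=weights)[0]
    return out

random.seed(1)
for n in (0, 1, 2, 3):
    print(n, repr(monkey(text, n)))  # structure increases with the order
```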

If we come back to the analogy with language, and if we were to use only D1 redundancy, then we would have a very high rate of repeating certain letters again and again. Eventually, the strings we generate would become monotonous, without any variety. It would be something like EEEAAEEEAAAEEEO. Novelty is introduced when we utilize the second type of redundancy, D2. Using D2 introduces a greater likelihood of emergence, since there are more connections present. As Campbell explains the two redundancies further:

Both kinds lower the entropy, but not in the same way, and the distinction is a critical one. The first kind of redundancy, which she calls D1, is the statistical rule that some letters are likely to appear more often than others, on the average, in a passage of text. D1, which is context-free, measures the extent to which a sequence of symbols generated by a message source departs from the completely random state where each symbol is just as likely to appear as any other symbol. The second kind of redundancy, D2, which is context-sensitive, measures the extent to which the individual symbols have departed from a state of perfect independence from one another, departed from a state in which context does not exist. These two types of redundancy apply as much to a sequence of chemical bases strung out along a molecule of DNA as to the letters and words of a language.
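To make the two measures concrete, here is a rough sketch of how D1 and D2 could be estimated from a text sample, following Gatlin’s definitions as I understand them: D1 as the gap between the maximum entropy and the actual single-letter entropy H1, and D2 as the drop from H1 to the conditional entropy of a letter given its predecessor. The function names and the tiny sample are mine, and the sketch looks only one letter back:

import math
from collections import Counter

def entropy(counts):
    """Shannon entropy, in bits, of a frequency table."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def d1_d2(text):
    """Estimate D1 (divergence from equiprobability) and D2 (divergence
    from independence, one letter back) for a text sample."""
    h_max = math.log2(len(set(text)))          # all symbols equally likely
    h1 = entropy(Counter(text))                # actual single-letter entropy
    # conditional entropy: H(next | prev) = H(prev, next) - H(prev)
    h2 = entropy(Counter(zip(text, text[1:]))) - entropy(Counter(text[:-1]))
    return h_max - h1, h1 - h2

d1, d2 = d1_d2("to be or not to be that is the question")
print(f"D1 = {d1:.2f} bits, D2 = {d2:.2f} bits")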

Campbell suggests that D2 is a richer version of redundancy because it permits greater variety while at the same time controlling errors. Campbell also notes that Bennett held the D1 constraint constant, while increasing the D2 constraints to the limit of his equipment, until he saw something roughly similar to sensible English. Using this analogy to evolution, Gatlin notes:

Let us assume that the first DNA molecules assembled in the primordial soup were random sequences, that is, D2 was zero, and possibly also D1. One of the primary requisites of a living system is that it reproduces itself accurately. If this reproduction is highly inaccurate, the system has not survived. Therefore, any device for increasing the fidelity of information processing would be extremely valuable in the emergence of living forms, particularly higher forms… Lower organisms first attempted to increase the fidelity of the genetic message by increasing redundancy primarily by increasing D1, the divergence from equiprobability of the symbols. This is a very unsuccessful and naive technique because as D1 increases, the potential message variety, the number of different words that can be formed per unit message length, declines.

Gatlin determined that this was the reason why invertebrates remained “lower organisms”. She continues:

A much more sophisticated technique for increasing the accuracy of the genetic message without paying such a high price for it was first achieved by vertebrates. First, they fixed D1. This is a fundamental prerequisite to the formulation of any language, particularly more complex languages… The vertebrates were the first living organisms to achieve the stabilization of D1, thus laying the foundation for the formulation of a genetic language. Then they increased D2 at relatively constant D1. Hence, they increased the reliability of the genetic message without loss of potential message variety. They achieved a reduction in error probability without paying too great a price for it… It is possible, within limits, to increase the fidelity of the genetic message without loss of potential message variety provided that the entropy variables change in just the right way, namely, by increasing D2 at relatively constant D1. This is what the vertebrates have done. This is why we are “higher” organisms.
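Gatlin’s point that raising D1 shrinks the “potential message variety” can be checked with a little arithmetic: the effective number of distinct messages of length L is roughly 2^(H1·L), so skewing the symbol distribution lowers H1 and shrinks the vocabulary. A small sketch with a four-letter alphabet, like DNA’s (the probability values are invented for illustration):

import math

def h1(probs):
    """Single-symbol entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

L = 10                                   # message length in symbols
uniform = [0.25, 0.25, 0.25, 0.25]       # D1 = 0 for a four-letter alphabet
skewed  = [0.70, 0.10, 0.10, 0.10]       # high D1: one symbol dominates

for name, dist in [("uniform", uniform), ("skewed", skewed)]:
    variety = 2 ** (h1(dist) * L)        # effective number of distinct messages
    print(f"{name}: H1 = {h1(dist):.2f} bits, ~{variety:,.0f} messages of length {L}")

The uniform alphabet supports about a million effective messages of length ten; the skewed one supports only around twelve thousand, which is exactly the price Gatlin says the invertebrates paid.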

Final Words:

I have always wondered about the exponential advancement of technology and how we as a species were able to achieve it. Gatlin’s ideas made me wonder if they are applicable to our species’ tremendous technological advancement. We started off with stone tools and now we are on the brink of visiting Mars. It is quite likely that we first came across a sharp stone, cut ourselves on it, and then thought of using it for cutting things. From there, we realized that we could sharpen certain stones to get the same result. Gatlin puts forth that during the initial stages, it is extremely important that errors are kept to a minimum. We had to first get better at making stone tools before we could proceed to higher and more complex tools. The complexification happened when we were able to make connections – by increasing D2 redundancy. As Gatlin states – D2 endows the structure. The more tools and ideas we could connect, the faster and better we could invent new technologies. The exponential growth came about only when we were able to connect more things to each other.

I was introduced to Gatlin’s ideas through Campbell and Alicia Juarrero. As far as I can tell, Gatlin did not use the terms “context-free” or “context-sensitive”; they seem to have been coined by Campbell. Juarrero refers to “context-free constraints” as “governing constraints” and “context-sensitive constraints” as “enabling constraints”. I will be writing about these in a future post. I will finish with a neat observation about the ever-present redundancies in the English language from Claude Shannon, the father of Information Theory:

The redundancy of ordinary English, not considering statistical structure over greater distances than about eight letters, is roughly 50%. This means that when we write English half of what we write is determined by the structure of the language and half is chosen freely.

In other words, if you follow the basic rules of the English language, you could make sense of at least 50% of what you have written, as long as you use short words!
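As a rough check of Shannon’s figure, single-letter statistics alone already capture a chunk of this redundancy. The sketch below computes R = 1 − H1/Hmax for a small sample of my choosing; it will underestimate Shannon’s 50%, since his figure also counts structure over spans of up to about eight letters:

import math
from collections import Counter

def single_letter_redundancy(text):
    """R = 1 - H1/Hmax, using single-letter statistics only."""
    counts = Counter(text)
    total = sum(counts.values())
    h1 = -sum(c / total * math.log2(c / total) for c in counts.values())
    return 1 - h1 / math.log2(len(counts))

print(f"{single_letter_redundancy('to be or not to be that is the question'):.0%}")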

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning… In case you missed it, my last post was More Notes on Constraints in Cybernetics:

More Notes on Constraints in Cybernetics:

In today’s post, I am looking further at constraints. Please see here for my previous post on this. Ross Ashby is one of the main pioneers of Cybernetics, and his book “Introduction to Cybernetics” still remains an essential read for a cybernetician. Alicia Juarrero is a Professor Emerita of Philosophy at Prince George’s Community College (MD), and is well known for her book, “Dynamics in Action: Intentional Behavior as a Complex System”.

I will start off with the basic idea of a system and then proceed to complexity from a Cybernetics standpoint. A system is essentially a collection of variables that an observer has chosen in order to make sense of something. Thus, a system is a mental construct and not an objective reality. A system from this standpoint is entirely contingent upon the observer. Ashby’s view on complexity was in terms of variety. Variety is the number of possible states of a system. A good example of this is a light switch. It has two states – ON or OFF. Thus, we can state that a light switch has a variety of 2. Complexity is expressed in terms of variety. The higher the variety a system has, the more possibilities it possesses. A light switch combined with a person has indefinite variety. The person is able to communicate via messages simply by turning the light switch ON and OFF in a logical sequence such as Morse code.

Now let’s look at constraints. A constraint can be said to exist when the variety of a system has diminished. Ashby gives the example of a boys-only school. The variety for sex in humans is 2. If a school has a policy that only boys are allowed in the school, the variety has decreased from 2 to 1. We can say that a constraint exists at the school.
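In code, the idea is almost trivially simple, which is part of its appeal. A minimal sketch of variety as a count of states, and a constraint as a reduction of that count, using Ashby’s school example:

# Variety is the number of distinct states an observer distinguishes.
light_switch = {"ON", "OFF"}
print(len(light_switch))      # variety of 2

# A constraint exists when variety is reduced: Ashby's boys-only school.
population = {"boy", "girl"}  # variety of 2 for sex in humans
school = {s for s in population if s == "boy"}
print(len(school))            # variety of 1: a constraint exists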

Ashby indicated that we should be looking at all possibilities when we are trying to manage a situation. Our main job is to influence the outcomes so that certain outcomes are more likely than others. We do this through constraints. Ashby noted:

The fundamental questions in regulation and control can be answered only when we are able to consider the broader set of what it (system) might do, when ‘might’ is given some exact specification.

We can describe what we have been talking about so far with a simple schematic: we imagine the possible outcomes of the system when we interact with it, and we utilize constraints so that certain outcomes, say P2 and P4, are more likely to occur. There may be other outcomes that we do not know of or cannot imagine. Ashby advises that cybernetics is not about trying to understand what a system is, but what a system does. We have to imagine the set of all possible outcomes so that we can guide or influence the system by managing variety. The external variety is always more than the internal variety. Therefore, to manage a situation, we have to at least match the variety of the system. We do this by attenuating the unwanted variety and by amplifying our internal variety so that we can match the variety thrown at us by the system. This is also represented as Ashby’s Law of Requisite Variety – only variety can absorb variety. Ashby stated:

Cybernetics looks at the totality, in all its possible richness, and then asks why the actualities should be restricted to some portion of the total possibilities.
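A toy sketch may help make the matching idea of the Law of Requisite Variety concrete. Suppose the environment can throw three distinct disturbances at us, while our regulator has only two responses, each of which absorbs exactly one disturbance (the matching rule here is invented for illustration). Then some disturbance must always escape regulation:

# External variety: three disturbances. Regulator's variety: two responses.
disturbances = ["D1", "D2", "D3"]
responses = ["R1", "R2"]

def outcome(d, r):
    """Invented rule: a response absorbs only its matching disturbance."""
    return "on-target" if d[1] == r[1] else "off-target"

for d in disturbances:
    print(d, [outcome(d, r) for r in responses])
# D3 is never brought on target: only variety can absorb variety.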

Ashby talked about several versions of constraints. He talked about slight and severe constraints. He gave an example of a squad of soldiers. If the soldiers are asked to line up without any instructions, they have maximum freedom or minimum constraints to do so. If the order was given that no man may stand next to a man whose birthday falls on the same day, the constraint would be slight, for of all the possible arrangements few would be excluded. If, however, the order was given that no man was to stand at the left of a man who was taller than himself, the constraint would be severe; for it would, in fact, allow only one order of standing (unless two men were of exactly the same height). The intensity of the constraint is thus shown by the reduction it causes in the number of possible arrangements.
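We can verify Ashby’s soldier arithmetic by brute force. The sketch below enumerates all line-ups of a hypothetical squad of five (the heights and birth months are invented) and counts how many survive each constraint; the birthday rule excludes only some arrangements, while the height rule leaves exactly one:

from itertools import permutations

# A hypothetical squad of five soldiers (invented heights and birth months).
heights   = [170, 172, 175, 180, 183]
birthdays = ["Jan", "Feb", "Mar", "Apr", "Jan"]

def count(rule):
    """Count the line-ups whose adjacent pairs all satisfy the rule."""
    return sum(
        all(rule(a, b) for a, b in zip(order, order[1:]))
        for order in permutations(range(5))
    )

print(count(lambda a, b: True))                          # 120: no constraint
print(count(lambda a, b: birthdays[a] != birthdays[b]))  # slight: 72 remain
print(count(lambda a, b: heights[a] >= heights[b]))      # severe: exactly 1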

Another way that Ashby talked about constraints was by identifying constraints in vectors. Here, multiple factors are combined in a vector such that the resultant constraint can be considered. The example Ashby gave was that of an automobile, described by the vector shown below:

(Age of car, Horse-power, Color)

He noted that each component has a variety that may or may not be dependent on the other components. Measured logarithmically, if the components are all independent, the resultant variety is the sum of the individual component varieties; if the components are dependent on each other, the variety of the whole will be less than that sum, and a constraint exists. This is an interesting point to look at further, as the sketch after this paragraph shows. Imagine that we are looking at a team of, say, Persons A, B and C. Since each person is able to come up with indefinite possibilities, the resultant variety of the team would also be indefinite. If we want to allow indefinite possibilities to emerge, as in the innovation or invention of new ideas or products, constraints play a role. When we introduce thinking agents to the mix, the number of possibilities goes up.
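A small sketch of Ashby’s vector idea, with invented toy domains for each component. When the components are independent the varieties multiply (their logarithms add); a dependence between components acts as a constraint and lowers the variety of the whole:

from itertools import product

# Toy domains for Ashby's automobile vector (invented values).
ages    = ["new", "old"]
powers  = ["low", "high"]
colours = ["red", "blue", "green"]

independent = list(product(ages, powers, colours))
print(len(independent))   # 12 = 2 x 2 x 3: log-varieties add when independent

# A dependence acts as a constraint: say high-powered cars are never green.
dependent = [(a, p, c) for (a, p, c) in independent
             if not (p == "high" and c == "green")]
print(len(dependent))     # 10 < 12: the variety of the whole is reduced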

Complexity is tackled by managing variety – by allowing room for the right possibilities. Ashby famously noted that a world without constraints would be totally chaotic. His point is that if a constraint exists, it can be used to tackle complexity. Allowing parts to depend upon each other introduces constraints that can cut down on unwanted variety and at the same time allow innovative possibilities to emerge. The controller’s goal is to manage variety and to make certain possible outcomes more likely than others. For this, the first step is to imagine the total set of possible outcomes to the best of one’s ability. This means that the controller also has to have a good imagination and a creative mind. This points to the role of the observer when it comes to seeing and identifying the possibilities. Ashby referred to the set of possibilities as the “product space.” He noted that its chief peculiarity is that it contains more than actually exists in the real physical world, for it is the latter that gives us the actual, constrained subset.

The real world gives the subset of what is; the product space represents the uncertainty of the observer. The product space may therefore change if the observer changes; and two observers may legitimately use different product spaces within which to record the same subset of actual events in some actual thing. The “constraint” is thus a relation between observer and thing; the properties of any particular constraint will depend on both the real thing and on the observer. It follows that a substantial part of the theory of organization will be concerned with properties that are not intrinsic to the thing but are relational between the observer and thing.

A keen reader might be wondering how the ideas of constraints stack up against Alicia Juarrero’s versions of constraints. More on this in a future post.  I will finish with a wonderful tribute to Ross Ashby from John Casti:

The striking fact is that Ashby’s idea of the variety of a system is amazingly close to many of the ideas that masquerade today under the rubric “complexity.”

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning… In case you missed it, my last post was Towards or Away – Which Way to Go?

Towards or Away – Which Way to Go?

In today’s post I am pondering the question – as a regulator, should you be going towards or away from a target? Are the two things the same? I will use Erik Hollnagel’s ideas here. Hollnagel is a Professor Emeritus at Linköping University who has done extensive work in Safety Management. Hollnagel challenges the main theme of safety management as getting to zero accidents. He notes:

The goal of safety management is obviously to improve safety. But for this to be attainable it must be expressed in operational terms, i.e., there must be a set of criteria that can be used to determine when the goal has been reached… the purpose of an SMS is to bring about a significant reduction – or even the absence – of risk, which means that the goal is to avoid or get away from something. An increase in safety will therefore correspond to a decrease in the measured output, i.e., there will be fewer events to count. From a control point of view that presents a problem, since the absence of measurements means that the process becomes uncontrollable.

He identifies this as a problem from a cybernetics standpoint. Cybernetics is the art of steersmanship. The controller identifies a target and the regulator works on getting to the target. There is a feedback loop so that when the difference between the actual condition and the target is higher than a preset value, the regulator tries to bring the difference down. Take the example of a steersman of a boat – the steersman propels the boat to the required destination by steering the boat. If there is a strong wind, the steersman adjusts accordingly so that the boat is always moving towards the destination. The steersman is continuously measuring the difference from the expected path and adjusting accordingly.
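Here is a minimal sketch of that feedback loop as a proportional regulator (the gain, the wind model, and all the numbers are invented for illustration). Each step, the steersman observes the error between heading and target and corrects a fraction of it, while the wind keeps disturbing the boat:

import random

def steer(target=0.0, gain=0.5, steps=20, seed=1):
    """A steersman as a proportional regulator: each step, correct a
    fraction (gain) of the observed error while the wind disturbs the boat."""
    random.seed(seed)
    heading = 10.0                        # start well off-course
    for _ in range(steps):
        error = heading - target          # feedback: the measured deviation
        heading -= gain * error           # corrective steering action
        heading += random.uniform(-1, 1)  # disturbance: a gust of wind
    return heading

print(steer())  # hovers near the target, never exactly on it

Notice that the error measurements are exactly what drive the correction; this is the thread Hollnagel picks up next.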

Hollnagel continues with this idea:

Quantifying safety by measuring what goes wrong will inevitably lead to a paradoxical situation. The paradox is that the safer something (an activity or a system) is, the less there will be to measure. In the end, when the system is perfectly safe – assuming that this is either meaningful or possible – there will be nothing to measure. In control theory, this situation is known as the ‘fundamental regulator paradox’. In plain terms, the fundamental regulator paradox means that if something happens rarely or never, then it is impossible to know how well it works. We may, for instance, in a literal or metaphorical sense, be on the right track but also precariously close to the limits. Yet since there is no indication of how close, it is impossible to improve performance.

The idea of the fundamental regulator paradox was put forward by Gerald Weinberg. He described it as:

The task of a regulator is to eliminate variation, but this variation is the ultimate source of information about the quality of its work. Therefore, the better job a regulator does, the less information it gets about how to improve.

Weinberg noted that the better the regulator gets at what it is doing, the more difficult it becomes for it to improve. If we go back to the case of the steersman, perfect regulation is when the steersman is able to make adjustments at superhuman speed so that the boat travels in a straight line from start to end. Weinberg points out that this is not possible. When 100% regulation is achieved, we have also cut off contact with the external world, which is the very source of information that the regulator needs to do its job.

Coming back to the original question of “away from” or “towards”, Hollnagel states:

From a control perspective it would make more sense to use a definition of safety such that the output increases when safety improves. In other words, the goal should not be to avoid or get away from something, but rather to achieve or get closer to something.

While pragmatically it seems very reasonable that the number of accidents should be reduced as far as possible, the regulator paradox shows that such a goal is counterproductive in the sense that it makes it increasingly difficult to manage safety… The essence of regulation is that a regulator makes an intervention in order to steer or direct the process in a certain direction. But if there is no response to the intervention, if there is no feedback from the process, then we have no way of knowing whether the intervention had the intended effect.

Hollnagel advises that we should see safety in terms of resilience – not as the absence of something (accidents, missed days, etc.) but rather as the presence of something.

Based on this discussion, we can see that “moving towards” is a better approach for a regulator than “moving away” from something. From a management standpoint, we should refrain from enforcing policies that are too strict in the hope of perfect regulation. Such policies would lack the variety needed to tackle the external variety thrown at us. We should allow room for some noise in our processes. As the variety of the situation increases, we should stop setting hard targets and instead provide a direction to move towards. A hard target is again an attempt at perfect regulation, one that can stress the various elements within the organization.

I will finish with some wise words from Weinberg:

The fundamental regulator paradox carries an ominous message for any system that gets too comfortable with its surroundings. It suggests, for instance, that a society that wants to survive for a long time had better consider giving up some of the maximum comfort it can achieve in return for some chance of failure or discomfort.

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning…

In case you missed it, my last post was The Cybernetics of the Two Wittgensteins:

References:

  1. The Trappers’ Return, 1851. George Caleb Bingham
  2. Safety management – looking back or looking forward – Erik Hollnagel, 2008
  3. On the design of stable systems – Gerald Weinberg, 1979