Error Correction of Error Correction:

If I were asked to explain cybernetics, the first thing that would come to my mind would be – error correction. The example that is often used to explain cybernetics is that of the steersman. You have a steersman on a boat moving from point A to point B. Ideally, the boat should move from point A to B in a straight line. However, the wind can change the direction of the boat, and the steersman has to adjust accordingly to stay on course. This negative feedback loop requires a target so that the deviation from the target can be compensated. In technical terms, there is a comparator (something that can measure) that checks on a periodic or continuous basis what the difference is, and provides this information so that adjustments can be made accordingly. Let’s call this framework first order cybernetics. In this framework, we need a closed loop so that we have feedback. This allows information to be fed back so that we can compare it against a goal and make adjustments accordingly. This approach was made famous by one of the main pioneers of Cybernetics, Norbert Wiener. He used it for guided missile technology, where the missile could change its course as needed, much like the steersman on the boat. First order cybernetics is obviously quite useful. But it is based on the assumption that there is a target that we can all agree upon. It also assumes that the comparator is able to work effectively and efficiently.
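A minimal sketch of this first-order loop might look like the following. The comparator measures the deviation from the target, and the correction feeds back into the system. The wind term, the gain, and the number of steps are illustrative assumptions, not part of any particular cybernetic model.

```python
# A minimal sketch of a first-order error-correcting (negative feedback) loop,
# in the spirit of the steersman example. All numbers are illustrative assumptions.
import random

target_heading = 0.0   # the agreed-upon goal: degrees off the straight A-to-B line
heading = 5.0          # the boat starts off course
gain = 0.5             # how strongly the steersman corrects each time

for step in range(20):
    wind = random.uniform(-1.0, 1.0)    # disturbance pushing the boat off course
    heading += wind
    error = heading - target_heading    # the comparator: measure the difference
    heading -= gain * error             # negative feedback: correct toward the target
    print(f"step {step:2d}: heading error = {heading:+.2f}")
```

With the correction in place, the heading error stays bounded near the target; remove the feedback line and the error simply wanders with the wind.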

With this background, I would now like to look at second order cybernetics. One of the main pioneers of second order cybernetics was Heinz von Foerster. He wanted to go beyond the idea of just error correction. He wanted to look at error correction of error correction. As I noted earlier, the error correction mechanism assumes that the target is clear and available, and also that the comparator and the correcting mechanism are working appropriately. Von Foerster challenged the notion of an objective reality and introduced the notion of the observer being part of what is observed. The general tendency is to keep the observer out of what is being observed, with the underlying belief that the observation is readily available to all those who are interested. Von Foerster pushed back on this idea and said that the observer is included in the observation. One of my favorite aphorisms from von Foerster is – only when you realize you are blind, can you see. We all have cognitive blind spots. Realizing this and being aware of it allows us to improve how we look at things. There is a circularity that we have to respect and understand better here. What we see impacts what we understand, and what we understand impacts what we see. It is an ongoing self-correcting cycle. If first order error correction is a correction applied to a specific problem, then second order error correction is the error correction of the error correction.

There is a great example that von Foerster gives that might explain this idea better. He talked about the Turing test. The Turing test, or the Imitation Game as the great Alan Turing originally called it, is a test given to an “intelligent machine” to see if its intelligence is comparable to, or indistinguishable from, that of a human. Von Foerster turned this on its head by bringing up the second order implications. He noted:

The way I see it, the potential intelligence of a machine is not being tested. In actual fact, the scholars are testing themselves (when they give the Turing test). Yes, they are testing themselves to determine whether or not they can tell a human being from a machine. And if they don’t manage to do this, they will have failed. The way I see it, the examiners are examining themselves, not the entity that is meekly sitting behind the curtain and providing answers for their questions. As I said, “Tests test tests.”

One of the main implications from this is that the observer is responsible for their own construction of what they are observing. We are all informationally closed entities that construct our version of a stable paradigm that we call a reality (not THE reality). We are responsible for our construction, and we are ethically bound to allow others to construct their versions. We come to an eigenvalue for this “reality” when we continue to interact with each other. The more we stay away from each other in our own echo chambers, the harder it becomes to reconcile the different realities. The term “informationally closed” means that information does not enter us from the outside. We generate meaning based on how we are perturbed by the affordances of the environment we are interacting with. The main criticism of this approach is that it leads to relativism, the notion that every viewpoint is equally valid. I reject this notion and affirmatively state that we should support pluralism. By saying that we do not have access to an objective reality, I am saying that we need epistemic humility. We need to realize that we do not have the Truth; that there is no Truth out there. As the wonderful Systems Thinker, Charles West Churchman said, “The systems approach begins when first you see the world through the eyes of another.” We should beware of those who claim that they have access to the Truth.

When we understand the second order implications, we realize that although the map is not the territory, the map is all we have. Thus, we have to keep working on getting better at making maps. We have to work on error correction of our error corrections. I will finish with some wise words from von Foerster:

The consciousness of consciousness is self-consciousness. The understanding of understanding is self-understanding. And the organization of organization is self-organization. I propose that whenever this self crops up we emphasize this moment of circularity. The result is this: The self does not appear as something static or firm but instead becomes fluid and is constantly being produced. It starts moving. I would plead that we also maintain the dynamics of this word when we speak of self-organization. The way I see it, the self changes every moment, each and every second.

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning… In case you missed it, my last post was The Open Concept of Systems:

The Cybernetics of Complexity:

In today’s post, I am looking at the second order view of complexity. While I was thinking of a good title for this post, I went from “A constructivist walks into a Complexity bar” to “The Chernobyl model of Complexity”. Finally, I settled on “The Cybernetics of Complexity.” What I am looking at is not new by any means. I am inspired by the ideas of Ross Ashby, Stafford Beer, Heinz von Foerster, Haridimos Tsoukas, Mary Jo Hatch and Ralph Stacey.

I start from the basic premise that complexity is a description rather than a property of a phenomenon. A description needs someone doing the describing, so complexity is dependent on the observer. The same thing can be perceived as complex by one person and as merely complicated by another. Tsoukas and Hatch explain this further:

in order for cognitive beings to be able to act effectively in the world we must organize our thinking… one way of viewing organizations as complex systems is to explore complex ways of thinking about organizations-as complex systems; which we call second order complexity. We further note that entering the domain of second-order complexity – the domain of the thinker thinking about complexity – raises issues of interpretation (and, we argue, narration) that have heretofore been ignored by complexity theorists.

What is complexity? It is our contention that the puzzle of defining the complexity of a system leads directly to concern with description and interpretation and therefore to the issue of second-order complexity.

Tsoukas and Hatch reference Jim Casti to explain this further:

complexity is, in effect, in the eye of the beholder: ‘system complexity is a contingent property arising out of the interaction I between a system S and an observer/decision-maker O’. To put it more formally, the complexity of a system, as seen by an observer, is directly proportional to the number of inequivalent descriptions of the system that the observer can generate. The more inequivalent descriptions an observer can produce, the more complex the system will be taken to be.

Casti’s definition of complexity is an interesting one for it admits that the complexity of a system is not an intrinsic property of that system; it is observer-dependent, that is, it depends upon how the system is described and interpreted. Consequently, if an observer’s language is complex enough (namely, the more inequivalent descriptions it can contain) the system at hand will be described in a complex way and thus will be interpreted as a complex system. What complexity science has done is to draw our attention to certain features of systems’ behaviors which were hitherto unremarked, such as nonlinearity, scale-dependence, recursiveness, sensitivity to initial conditions, emergence. It is not that those features could not have been described before, but that they now have been brought into focus and given meaning. To put it another way, physics has discovered complexity by complicating its own language of description.

Here, the language of description comes from the observer. One of the best examples that I have to provide some clarity is a scene from HBO’s wonderful show Chernobyl, adapted from the Chernobyl tragedy. In the scene, Anatoly Dyatlov, the deputy chief engineer, was alerted to things going wrong by the other engineers taking part in a test. Dyatlov stubbornly refused to acknowledge that anything was wrong. He asked the engineer, “What does the dosimeter say?” The response was, “3.6 roentgen, but that’s as high as the meter…” Dyatlov, in the show, cut him off midsentence and famously stated, “3.6. Not great, not terrible.”

Dyatlov firmly believed that the reactor could not explode. Even though he was informed that the meter could go only as high as 3.6 roentgen, he found the situation manageable. Later, it is revealed, using a different gage with a higher range, that the actual rate was 15,000 roentgen per hour. This scene is truly remarkable because different people were looking at the same phenomenon and coming to different conclusions, with terrible consequences.

In philosophy, we talk about ontology and epistemology. Ontology is about what exists and epistemology is about how you come to know things. We are all born with a set of “gages” (loosely put). But each of our gages has a different range. The set of gages is unique to our species. For example, we can see only a small part of the light spectrum. We can hear only a small part of the sound spectrum. We are informationally closed. This means that we generate meaning within a closed interpretative framework/mechanism. Information cannot come in directly. Rather, we are perturbed by the environment and we generate meaning from it. It might make things easier if we can come up with a way to quantify complexity.

A loose way to do this is to view complexity in terms of the number of possible interactions the phenomenon can have. This in turn is related to the number of states of the phenomenon. In cybernetics, complexity is viewed in terms of variety. Variety is the number of states of a phenomenon. I have discussed this concept at length before. To explain it loosely with an example, the variety of a simple light switch is two, the two states being ON and OFF. A variable light switch, on the other hand, has a whole lot more variety. The other insight regarding variety is that it is dependent on the observer, since the observer is the one describing the number of “possible” states. Now this is where the possible rub comes in for some people. I see complexity as dependent upon the observer. Am I saying that there is nothing out there beyond what is in my head? That is a question about ontology. I am not very keen on just looking at ontology. I am looking at this from an epistemological viewpoint. Going back to the Chernobyl show, if my gage is inadequate, then that determines my reality, which determines my action. If I have a better gage, then I can better understand what is going on. If I have others around me with more gages, then I can do a comparison and come to a general consensus on what is going on so that our general viability is maintained.
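To put loose numbers on the light switch example, here is a small sketch of variety as a count of distinguishable states. The dimmer’s resolution is my own assumption for illustration; the point is only that variety is counted by the observer and multiplies as parts are combined.

```python
# A rough sketch of variety as the number of distinguishable states.
# The 5% dimmer resolution is an assumption made only for illustration.
import itertools

simple_switch = ["ON", "OFF"]        # variety = 2
dimmer = list(range(0, 101, 5))      # 21 levels this observer can distinguish

print(len(simple_switch))            # 2
print(len(dimmer))                   # 21

# Variety multiplies when we describe parts together:
room = list(itertools.product(simple_switch, dimmer))
print(len(room))                     # 42 possible joint states
```

An observer with a finer gage would distinguish more dimmer levels, and would therefore describe the same room as having more variety.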

We have learned through evolution as a species to cut down on the high variety thrown at us so that we can remain viable. As noted earlier, we have evolved to see only a narrow band of the light spectrum, and the same goes for sound and other natural phenomena. This has led to us having a set of “gages” unique to our species so that we can continue being viable. When these gages become inadequate, our viability is in question. The purpose of gages is to make sense of what is happening so that we can act or not act. We don’t register everything that is coming in because we don’t need to. Our genetic makeup has become tuned to just what we need.

When I say complexity is in the eye of the beholder, I mean that our ranges of gages differ from observer to observer. What we sense directly impacts how we act. Some of us can manage situations better because we are able to make sense of them better. Whether a situation is complex or complicated changes based on who is doing the observing. The term observer here means the person interacting with the situation. You can call them an actor or an agent, if needed.

Tsoukas and Hatch expand on this:

If practitioners are to increase their effectiveness in managing paradoxical social systems, they should, as Weick recommends, ‘complicate’ themselves. But complicate themselves in what way? By generating and accommodating multiple inequivalent descriptions, practitioners will increase the complexity of their understanding and, therefore, will be more likely, in logico-scientific terms, to match the complexity of the situation they attempt to manage, or, in narrative terms, to enact it.

In Cybernetics, this aligns with Ross Ashby’s law of requisite variety. This law states that only variety can absorb variety. To put it simply, we have to cut down the excess external variety coming in and find ways to amplify our internal variety so that the internal variety matches the external variety. A good way to cut down the external variety is to focus only on what matters to us or what we value. A good way to amplify our internal variety is to keep on learning and to be open to other perspectives. Of course, there are a lot of other ways to do this. A specific procedure cannot be prescribed because everything depends upon the context. The phenomenon itself is changing with time, and so are we as the observers.
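A loose sketch of “only variety can absorb variety” is shown below. The disturbance and response sets are made up for illustration; this is not a formal statement of Ashby’s law, only a picture of a regulator whose internal variety does or does not match the variety coming at it.

```python
# A loose illustration of requisite variety: a regulator can only absorb the
# disturbances it has a distinct response for. All names are invented examples.
disturbances = {"gust_left", "gust_right", "current_left", "current_right", "engine_stall"}

# Low internal variety: the steersman knows only two responses.
responses_low = {"gust_left": "steer_right", "gust_right": "steer_left"}

# Amplified internal variety: a response for every kind of disturbance.
responses_high = {
    "gust_left": "steer_right",
    "gust_right": "steer_left",
    "current_left": "steer_right",
    "current_right": "steer_left",
    "engine_stall": "drop_anchor",
}

def unabsorbed(disturbances, responses):
    """Disturbances for which the regulator has no answer."""
    return disturbances - responses.keys()

print(unabsorbed(disturbances, responses_low))   # three disturbances leak through
print(unabsorbed(disturbances, responses_high))  # empty set: variety matches variety
```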

We have to welcome how other practitioners describe the phenomenon. We have to engage with them so that we can come to a stable narrative of the phenomenon. This is not possible if we see ourselves as external to the phenomenon and if we believe that we all experience a single objective phenomenon. As Tsoukas and Hatch note – Expanding the focus from the system itself (first-order complexity) to also include those who describe the system as complex (second-order complexity) exposes the interpretive-cum-narrative dimensions of complexity. A complex system is said to have many specific characteristics including non-linearity, feedback loops, etc. But these are all descriptions made by an observer describing the phenomenon. As Tsoukas and Hatch further note:

Although you may call non-linearity, scale dependence, recursiveness, sensitivity to initial conditions and emergence properties of the system, they are actually your descriptive terms – they are part of a vocabulary, a way of talking about a system. Why use such a vocabulary? Is it because it corresponds to how the system really is? Not quite. Because the system cannot speak for itself, you do not know what the system really is. Rather, you use such a vocabulary because of its suspected utility – it may enable you to do certain things with it. A new vocabulary, notes Rorty, ‘is a tool for doing something which could not have been envisaged prior to the development of a particular set of descriptions, those which it itself helps to provide’.

What we then have to do is understand that seeing complexity as a description of a phenomenon helps us understand how we understand the phenomenon. This is a second-order act, a cognitive act. We need to be aware of our blind spots (the realization that we have inadequate gages). We need to improve our vocabulary so that we can better describe what we experience. Some models of complexity recommend bringing in experts for complicated phenomena. Complicated phenomena are cases where the complexity is slightly higher, but a cause-and-effect relationship still exists. The reason for bringing in the experts is that they are able to describe the phenomenon differently than a layperson. This again shows that complexity is dependent on the observer. It also indicates that we can improve our sensemaking capabilities by improving our vocabulary and by continuing to learn.

I will try to loosely explain my ideas based on a one-dimensional depiction of complexity. I am not saying that this is a correct model. I am providing this only for clarity’s sake. The chart below shows the complexity in terms of variety. The green line starts at 0 and ends at 100 to show complexity on a spectrum. Depending upon the capability of the observer to distinguish possible varieties, two observers perceive and understand complexity as shown below. Observer 2 in this case is able to manage complexity better than observer 1. Please note that to manage complexity means to cut down on the excess external variety and amplify internal variety. The other point to keep in mind is that the observer is informationally closed, so the observer is able to generate meaning of only those characteristics that perturb the observer. In other words, the observer can distinguish only those characteristics that the observer’s interpretative mechanism can afford.
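To make the expert-versus-layperson point a little more concrete, here is a toy sketch in the spirit of Casti’s definition quoted earlier: the complexity attributed to a system is proportional to the number of inequivalent descriptions the observer can generate. The vocabularies below are invented purely for illustration.

```python
# A toy sketch of observer-dependent complexity: more inequivalent descriptions,
# more complexity attributed to the same phenomenon. The descriptions are invented.
phenomenon = "a production line that keeps stalling"

layperson_descriptions = {"it is broken"}

expert_descriptions = {
    "a queueing problem with a shifting bottleneck",
    "a feedback delay between stations",
    "an operator scheduling issue",
    "a nonlinear interaction between maintenance and demand",
}

def perceived_complexity(descriptions):
    # Casti's idea, loosely: complexity ~ number of inequivalent descriptions
    return len(descriptions)

print(perceived_complexity(layperson_descriptions))   # 1
print(perceived_complexity(expert_descriptions))      # 4
```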

When we look at a phenomenon and try to make sense of it, we try to do it in terms of a whole narrative, one that makes sense to us. This adds a uniqueness to how each practitioner approaches the phenomenon. The same complex phenomenon can have different contexts for different people. For example, the same Covid pandemic can be a health crisis for one person, while for another it could be about freedom and government regulation. A stable social reality is achieved through continuous interaction. The environment changes, so we have to continuously interact with each other and the environment and continue to reframe reality. This social stability is an ongoing activity.

Final Words:

I had indicated that this post is about a second order view of complexity. In order to improve our understanding of complexity, we have to understand how we understand – how we come to know about things that we can describe. I do not propose that there is an objective reality out there that we all experience equally. All we can say is that we each experience a reality and that through ongoing interaction we come to a stable version of reality. One of the criticisms of this approach is that it leads to solipsism. The main version of solipsism is that others may not really exist and that only one’s own mind is sure to exist. This is a no-win argument that I find no appeal in. I am happy that other, smarter people exist because my life is better because of them. Another criticism of this approach is that it supports relativism. Relativism is the idea that all perspectives are equally valid. This also is a terrible idea in my view. I support the idea of pluralism. I have written about this before here. Pluralism does not hold that all belief systems are equally valid. In a cybernetic explanatory manner, a pluralist believes that what matters more is to be less wrong. At the same time, a pluralist is open to understanding other people’s belief systems.

What I am hoping to achieve from this constructivist view is epistemic humility. This is the stance that what we know is incomplete, and that it may also be inadequate. We have to keep on learning, and be open to other viewpoints.

I will finish with a wonderful quote from Heinz von Foerster:

properties that apparently are associated with things are indeed properties that belong to the observer. So, that means the properties which are thought to reside in things turn out to be properties of the observer. I’ll give you immediately an example. A good example, for instance, is obscenity. You know that there is a tremendous effort even going up to the Supreme Court which is almost a comedy worthy to be written by Aristophanes. Who wants to establish what is obscene? Now it’s perfectly clear that “obscene” is, of course, a property which resides in the observer, because if you take a picture and show it to Mr. X, and Mr. X says, “This picture is obscene,” you know something about Mr. X, but nothing about the picture.

This post is also available as a podcast – https://anchor.fm/harish-jose/episodes/The-Cybernetics-of-Complexity-e15v5v9

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning… In case you missed it, my last post was Observations on Observing, The Case Continues:

The Case of the Distinguished Observer:

In today’s post, I am looking at observation. This will be a general overview and I will follow up with more posts in the future. I am inspired by the ideas of George Spencer-Brown (GSB), Niklas Luhmann, Dirk Baecker and Heinz von Foerster. In Cybernetics, observation does not mean just to utilize your eyes and look at something. It has a deeper “sensemaking” type of meaning. Observation in Cybernetics does not follow the rigid subject-object relationship. Toth Benedek explains this:

Heinz von Foerster tried to develop a point of view that replaces the linear and rigid structure of the object-subject (observer-observed) distinction. According to von Foerster, the observer is really constructed by the observed and vice versa: ‘observation’ is nothing else but the circular relation between them. Observation as a relation defines the observer and the observed, so the observer refers not only to the observed, but also to himself by the act of observation.

Observation is an operation of distinction. The role of an observer is to generate information. If no information is being generated, then no observation has been made. An observation is an act of cognition. GSB, in his seminal work Laws of Form, noted:

A universe comes into being when a space is severed or taken apart. The skin of a living organism cuts off an outside from an inside. So does the circumference of a circle in the plane. By tracing the way we represent such a severance, we can begin to reconstruct, with an accuracy and coverage that appear almost uncanny, the basic forms underlying linguistic, mathematical, physical, and biological science, and can begin to see how the familiar laws of our own experience follow inexorably from the original act of severance.

GSB advises us to draw a distinction. He proposed a notation called the “mark” to do this. A basic explanation of a mark is shown below. It separates a space into two sections; one part that is observed and the other that is not observed. We can look at a space and identify a difference, a distinction that allows us to identify a part of the space as something and the remainder of that space as NOT that something. For example, we can distinguish a part of a house as the kitchen and everything else as “not kitchen”. At that point in time, we are looking only at the kitchen, and ignoring or not paying attention to anything else. What is being observed is in relation to what is not being observed. A kitchen is identified as “kitchen” only in the context of the rest of the house.
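A minimal sketch of “drawing a distinction” might look like the following. The house example follows the text; the data structure is my own illustration, not GSB’s notation.

```python
# A minimal sketch of a distinction severing a space into a marked side and an
# unmarked side. The sets are invented for illustration only.
house = {"kitchen", "bedroom", "bathroom", "hallway", "garden"}

def draw_distinction(space, indication):
    """Sever the space: the indicated (marked) side, and everything else (unmarked)."""
    marked = {x for x in space if x == indication}
    unmarked = space - marked
    return marked, unmarked

kitchen, not_kitchen = draw_distinction(house, "kitchen")
print(kitchen)       # {'kitchen'}      <- what we attend to
print(not_kitchen)   # everything else  <- the side we ignore, our blind spot
```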

Dirk Baecker explains this:

Spencer-Brown’s first propositions about his calculus is the distinction being drawn itself, considered to be “perfect continence”, that is to contain everything. A distinction can only contain everything when one assumes that it indeed contains (a) its two sides, that is the marked state and the unmarked state, (b) the operation of the distinction, that is the separation of the two sides by marking one of them, and (c) the space in which all this occurs and which is brought forth by this occurrence.

From the context of GSB, we can view a distinction as a first order observation. We can only see what is inside the box, and not what is outside the box. What is outside the box is our “blind spot.”

Hans-Georg Moeller explains this very well:

A first-order observation can simply observe something and, on the basis of this, establish that thing’s factuality: I see that this book is black—thus the book is black. Second-order observation observes how the eye of an observer constructs the color of this book as black. Thus, the simple “is” of the expression “the book is black” becomes more complex—it is not black in itself but as seen by the eyes of its observer. The ontological simplicity is lost and the notion of “being” becomes more complex. What is lost is the certainty about the “essential” color of this book.

The first order observer is confident about the observation he makes. He may view his observation as necessary and not contingent. However, a second order observer is able to also see what the first order observer is not. The second order observer is able to understand to an extent how the first order observer is making his distinctions. The second order observer thus comes to the conclusion that the distinction made by the first order observer is in fact contingent and somewhat arbitrary.

The most important point about the first order observation is that the first order observer cannot see that he does not see what he does not see. In other words, the first order observer is unaware that he has a blind spot. A second order observer observing a first order observer is able to see what the first order observer is not able to see, and he is also able to see that the first order observer has a blind spot. This is depicted in the schematic below:

As the schematic depicts, the second order observer is also making a distinction. In other words, what he is doing is also a first order observation! This means that the second order observer also has a blind spot, and he is not aware that he has a blind spot! As Benedek further notes:

the first order of observation (our eye’s direct observation) is unable to get a coherent and complete image about the world out there. What we can see is something we learnt to see: the image we “see” is a result of computing processes.

The second order observation can also be carried out as a self-observation, where the observer doing the first order observation is also the observer doing the second order observation. This may appear paradoxical. GSB talked about an idea called “reentry” in Laws of Form. Reentry is the idea of reentering the form again. In other words, we are re-introducing the distinction we used onto the form again. The reentry is depicted in the schematic below:

Dirk Baecker explains:

Spencer-Brown’s calculus of form consists in developing arithmetic with distinctions from a first instruction—”draw a distinction”—to the re-introduction (“re-entry”) of the distinction into the form of distinction, in order to be able to show in this way that the apparently simple, but actually already complex beginning involved in making a distinction can only take place in a space in which the distinction is for its part introduced again. The observer who makes this distinction through it becomes aware of the distinction, to which he is himself indebted.

Self-observation requires a reentry. In order to become aware that we have cognitive blind spots, we have to perform reentry. The re-entry includes what was not part of the original distinction. This allows us to understand (to a point) how we make and utilize distinctions. To paraphrase Heinz von Foerster, we come to see when we realize that we cannot see.

The reentry is a continuous operation that is self-correcting in nature. There is no end point to this per se, and it oscillates between the inside and the outside. This leads to an emergent stability as an eigenform. As noted before, the second order observation is still a form of first order observation even with reentry. There are still cognitive blind spots, and we are still subject to our biases and the limitations of our interpretative framework. We are affected by what we observe, and we can only observe what our interpretative framework can afford. As noted at the start of the post, the role of the observer is to generate information. If the observer is not able to make a distinction, then no information can be generated. This has the same effect as us being in a closed system where the entropy keeps on increasing. Borrowing a phrase from Stafford Beer, this means that observers are negentropic pumps. We engage in making dynamic distinctions, which allows us to gather the requisite information/knowledge to remain viable in an ever-changing environment.

The discussion about first order and second order observations may bring up the question – is it possible to have a third order observation? Heinz von Foerster pointed out that there is no need for a third order observation. He noted that a reflection of a reflection is still a reflection. Hans-Georg Moeller explains this further:

While second-order observation arrives at more complex notions of reality or being, it still only observes—it is a second-order observation, because it observes as a first-order observation another first-order observation. It is, so to speak, the result of two simultaneous first-order observations. A third-order observation cannot transcend this pattern—for it is still the first-order observation of a first-order observation of a first-order observation… No higher-order observation—not even a third-order observation—can observe more “essentially” than a lower-order observation. A third-order observation is still an observation of an observation and thus nothing more than a second-order observation. There is no Platonic climb towards higher and higher realities—no observation brings us closer to the single light of truth.

I will finish with some wise words from Dirk Baecker:

Draw a distinction.

Watch its form.

Work its unrest.

Know your ignorance.

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning… In case you missed it, my last post was The Cybernetics of Magic and the Magic of Cybernetics:

The Cybernetics of Magic and the Magic of Cybernetics:

In today’s post, I am looking at magic and cybernetics. From a young age, I have been a fan of magic. I have talked about magic here before. I see magic as the art of paradoxes.  The word paradox stems from the Greek words – “para” and “dokein”, and taken together it means contrary to expectation.

Take for example a simple magic trick where the magician shows you an open empty hand. The magician closes the hand, does a gentle wiggle and then opens the hand to reveal a coin. He again closes the hand, does another gentle wiggle and then opens the hand to show that it is empty. The magic happens from a self-referential operation. The spectator (or the observer) sees an empty hand and describes it to themselves as an empty hand. Later, when the magician shows the hand again, the hand now contains a coin. The spectator has to reference back to the previous state of the empty hand, and face the moment of paradox. The hand that was empty now has a coin. The moment of magic comes only when the spectator can reference back to the empty hand. If we denote the empty hand as A, the value of the hand now is !A, or in other words, not an empty hand. If the spectator cannot reference back to their original observation, they will not see the magic. From the magician’s standpoint, he should take care to make this experience as strong as possible. For example, he should take care to keep the image of the hand the same with and without the coin. This means that the position of the fingers, the gap between them, the gesture, etc. are all maintained the same for the two states – one where the hand has no coin, and the second where the hand has a coin. This reinforces the “magic” for the spectator.

The idea of self-reference is of great importance in cybernetics. In logic, the idea of self-reference is shunned because it normally leads to paradoxes. A great example of a paradox is the liar paradox. One of the oldest forms of the liar paradox is the statement that Epimenides the Cretan made. He said, “All Cretans are liars.” Since he himself was a Cretan, that would mean that he is also a liar, but that would mean that what he is saying is true, which means that he must be a liar… and so on. This goes into a paradox from the self-reference. There have been many solutions suggested for this conundrum. One of the ways to resolve the apparent paradox is to introduce temporality into the sentence. We can do this by making the statement slightly ambiguous and adding the word “sometimes”. So, the sentence becomes, “All Cretans are liars sometimes.” The temporality suggests that the value of the statement and of the person uttering it changes with time, and this dissolves the paradox.

Paradoxes don’t exist in the “real world.” The reasonable conclusion is that they have something to do with our stubborn and rigid thinking. When we are unwilling to add temporality or ambiguity, we get stuck with our thinking. Another way to look at this is from a programmer’s standpoint. The statement a = a + 1 is valid from a computer program standpoint. Here the variable “a” does not stand for a constant value. It is a placeholder for a value at a given point in time. Thus, although the equation a = a + 1 is self-referential, it does not crash the computer, because we introduce temporality to it and we do not see “a” as having one unique value at all times.
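A tiny sketch of this point: in a program the assignment is not a timeless self-referential equation but an update in time, so no paradox arises. The loop below is only an illustration.

```python
# a = a + 1 read as an update in time: the "a" on the right is the old value,
# the "a" on the left is the new one. Temporality dissolves the apparent paradox.
a = 0
for t in range(5):
    a = a + 1
    print(f"time step {t}: a = {a}")
```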

In Cybernetics, self-reference is accepted as a normal operation. Cyberneticians talk about second order concepts such as “understanding understanding” and “observing observing”. One of my favorite descriptions of Cybernetics comes from Larry Richards. He describes cybernetics as a way of thinking about ways of thinking (of which it – cybernetics – is one). This is a form of self-reference.

In Cybernetics, self-reference does not lead to paradox. Instead, it leads to a stable outcome. As cognizing agents, we build a stable reality based on self-reference. We can do activities such as thinking about thinking or learning about learning from this approach. Louis Kauffman talks about this:

Heinz von Foerster in his essays has suggested the enticing notion that “objects are tokens for eigen behaviors.” … The short form of this meaning is that there is a behavior between the perceiver and the object perceived and a stability or repetition that “arises between them.” It is this stability that constitutes the object (and the perceiver). In this view, one does not really have any separate objects, objects are always “objects perceived,” and the perceiver and the perceived arise together in the condition of observation.

We identify the world in terms of how we shape it. We shape the world in response to how it changes us. We change the world and the world changes us. Objects arise as tokens of behavior that leads to seemingly unchanging forms. Forms are seen to be unchanging through their invariance under our attempts to change, to shape them.

My post was inspired by the ideas of Spencer-Brown, Francisco Varela and Heinz von Foerster. I will finish with another gem from Heinz von Foerster:

I am the observed relation between myself and observing myself.

This post is also available as a podcast here – https://anchor.fm/harish-jose/episodes/The-Cybernetics-of-Magic-and-the-Magic-of-Cybernetics-e14a257

Please maintain social distance and wear masks. Please take vaccination, if able. Stay safe and Always keep on learning… In case you missed it, my last post was TPS’s Operation Paradox:

Note: The point about a = a + 1 was also made by Elena Esposito (Kalkul der Form).

Round and Round We Go:

In today’s post, I am looking at a simple idea – loops – and will follow it up with Heinz von Foerster’s ideas on second order Cybernetics. A famous example of a loop is “PDCA”. The PDCA loop is generally represented as Plan-Do-Check-Act-Plan-Do…, an iterative process that goes on and on. To me, this is a misnomer and a misrepresentation. Such loops should be viewed as recursions. First, I will briefly explain the difference between iteration and recursion. I am using the definitions of Klaus Krippendorff:

Iteration – A process for computing something by repeating a cycle of operations.

Recursion – The attribute of a program or rule which can be applied on its results indefinitely often.

In other words, iteration is simply repetition. In a program, I can say to print the word “Iteration” 5 times. There is no feedback here, other than keeping count of the times the word was printed on screen. On the other hand, in recursion, the value of the first cycle is fed back into the second cycle, the output of which is fed into the third cycle and so on. Here, circular feedback is going on. A great example of a recursive function is the Fibonacci sequence. The Fibonacci sequence is expressed as follows:

F(n) = F(n-1) + F(n-2), for n > 1

F(n) = 1, for n = 0 or 1

Here, we can see that the previous value is fed into the equation to create a new value, and this is an example of recursion.
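A small sketch of the two ideas, as described above: plain iteration with no feedback beyond a counter, and recursion where each value is fed back to produce the next.

```python
# Iteration: simply repeat an operation a fixed number of times.
for i in range(5):
    print("Iteration")

# Recursion: the Fibonacci sequence as defined above,
# F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1.
def fib(n):
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(10)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```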

From the complexity science standpoint, recursions lead to interesting phenomena. This is no longer an iterative, non-feedback loop where you come back to the same point again and again. With recursion, you get circular causality with each loop, and you enter a new state altogether. Each loop is directly impacted by the previous loop. Anything that leads back to its original starting point doesn’t lead to emergence and can actually lead to a paradox. A great example is the liar paradox. In a version of this, a card has a statement written on each side. They are as follows:

  1. The statement on the other side of this card is FALSE.
  2. The statement on the other side of this card is TRUE.  

This obviously leads to a paradox when you follow it along a loop. You do not get to a new state with each iteration. Douglas Hofstadter wonderfully explained this as a mirror mirroring itself. However, with recursion, a wonderful emergence can happen, as we see in complexity science. Circular causality and recursion are ideas that have strong footing in Second Order Cybernetics. A great example of this is to look at the question – how do we make sense of the world around us? Heinz von Foerster, the Socrates of Cybernetics, has a lot to say about this. As Bernard Scott notes:

For Heinz von Foerster, the goal of second-order cybernetics is to explain the observer to himself, that is, it is the cybernetics of the cybernetician. The Greek root of cybernetics, kubernetes, means governor or steersman. The questions asked are: who or what steers the steersman, how is the steersman steered and, ethically, how does it behoove the steersman to steer himself? Von Foerster begins his epistemology, in traditional manner, by asking, “How do we know?” The answers he provides – and the further questions he raises – have consequences for the other great question of epistemology, “What may be known?” He reveals the creative, open-ended nature of the observer’s knowledge of himself and his world.

Scott uses von Foerster’s idea of undifferentiated coding to explore this further. I have written about this before here.

Undifferentiated coding is explained as below:

The response of a nerve cell encodes only the magnitude of its perturbation and not the physical nature of the perturbing agent.

Scott continues:

Put more specifically, there is no difference between the type of signal transmitted from eye to brain or from ear to brain. This raises the question of how it is we come to experience a world that is differentiated, that has “qualia”, sights, sounds, smells. The answer is that our experience is the product of a process of computation: encodings or “representations” are interpreted as being meaningful or conveying information in the context of the actions that give rise to them. What differentiates sight from hearing is the proprioceptive information that locates the source of the signal and places it in a particular action context.

Von Foerster explained the circular relationship between sense data and experiences as below:

The motorium (M) provides the interpretation for the sensorium (S) and the sensorium provides the interpretation for the motorium.

How we make sense depends on how we experience, and how we experience depends upon how we make sense. As Scott notes, we can explain the above relationship as follows:

S = F(M). Sensorium, S, is a function of motorium, M.

M = G(S). Motorium, M, is a function of sensorium, S.

Von Foerster pointed out that this is an open recursive loop, since we can replace M with G(S).

S=F(G(S))

With more replacements for the “S”, this equation becomes an open recursive loop as follows:

S = F(G(F(G(F(G(… G(S) …))))))

Scott continues:

Fortunately, the circularity is not vicious, as in the statement “I am a liar”. Rather, it is virtuous or, as von Foerster calls it, it is a creative circle, which allows us to “transcend into another domain”. The indefinite series is a description of processes taking place in sequence, in “time”, with steps t, t+1, t+2 and so on. (I put “time” in quotes as a forward marker for discussion to come). In such indefinite recursive expressions, solutions are those values of the expression which, when entered into the expression as a base, produce themselves. These are known as Eigen values (self-values). Here we have the emergence of stabilities, invariances. The “objects” that we experience are “tokens” for the behaviors that give rise to those experiences. There is an “ultimate” base to these recursions: once upon a “time”, the observer came into being. As von Foerster neatly puts it, “an observer is his own ultimate object”.

The computations that give rise to the experience of a stable world of “objects” are adaptations to constraints on possible behaviors. Whatever else, the organism, qua system, must continue to compute itself, as a product. “Objects” are anything else it may compute (and recompute) as a unitary aspect of experience: things, events, all kinds of abstraction. The possible set of “objects” it may come to know are limited only by the organism’s current anatomy and the culture into which she is born.
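A minimal sketch of an eigenvalue emerging from recursion is shown below. The function here is an arbitrary contracting example, not von Foerster’s actual sensorium-motorium model; it only illustrates how repeatedly feeding a result back into an expression can stabilize on a value that reproduces itself.

```python
# Repeated application of a function settling on a fixed point (an "eigenvalue"):
# a value x for which f(x) == x. The cosine function is an illustrative stand-in
# for the composed F(G(...)) operation in the text.
import math

def f(x):
    return math.cos(x)

x = 1.0
for _ in range(50):
    x = f(x)            # feed the result back into the expression

print(x)                # ~0.739..., which reproduces itself under f
```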

I have written about this further here – Consistency over Completeness.

Heinz von Foerster said – The environment contains no information; it is as it is. We are informationally closed entities, which means that information cannot come from outside to inside. We make meanings out of the perturbations and we construct a reality that our interpretative framework can afford.

I will finish with a great observation from the Cybernetist philosopher Yuk Hui:

Recursivity is a general term for looping. This is not mere repetition, but rather more like a spiral, where every loop is different as the process moves generally towards an end, whether a closed one or an open one.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Observing with Your Hands:

References:

  1. Image – M. C. Escher, Spiral
  2. Bernard Scott, Second Order Cybernetics as Cognitive Methodology
  3. Klaus Krippendorff, A Dictionary of Cybernetics

Observing with Your Hands:

In today’s post, I am looking at ideas inspired by mirror neurons. Mirror neurons are a class of neurons that activate when someone engages in an activity or when they observe the same activity being performed by someone else. They were first identified by a group of Italian neurophysiologists led by Giacomo Rizzolatti in the 1980s. They were studying macaque monkeys. As part of their research, they placed electrodes in the monkeys’ brains to study hand and mouth motions. The story goes that the electrodes sent signals when the monkeys observed the scientists eating peanuts. The same neurons that fired when the monkeys were eating peanuts fired when they merely observed the same action. Several additional studies indicate that mirror neurons are activated in response to goal-oriented actions. For example, when the scientist covered the peanut bowl and performed the action of picking up a peanut and eating it, the mirror neurons still fired even though the monkeys could not see the peanut bowl. However, when the scientist simply mimicked the action of taking a peanut without a peanut bowl, the neurons did not fire. There have been several hypotheses regarding mirror neurons, such as that they facilitate learning by copying and that they are a source of empathy.

The most profound idea about mirror neurons is that action execution and action observation are tightly coupled. Our ability to interpret or comprehend others’ actions involves our own motor system. For example, when we observe someone performing an action, whether we ourselves have performed that action adds depth to how we observe it. If I am watching a ballet and the ballerina performs a difficult move, I may not fully grasp what I have seen, since I do not know ballet and have never performed it. However, if I watch a spin bowler in cricket bowling an off-spin delivery, I will be able to grasp it better and possibly tell how the ball is going to spin. This is because I played a lot of cricket as a youth. The same goes for a magician performing a sleight of hand.

The idea of mirror neurons brings an extra depth to the meaning of going to the gemba. Going to the gemba is a key tenet of the Toyota Production System. We go to the gemba, where the action is, to grasp the current situation. We go there to observe. Gemba, it is said, is our best teacher. When we go there to observe the work being performed, we may get a different experience depending upon whether we ourselves have performed the work or not. Heinz von Foerster, the Socrates of Cybernetics, said – if you want to see, learn how to act. He was talking about the circular loop of sensorium and motorium. In order to see, there has to be interaction between the sensorium and the motorium.

In a similar way, Kiichiro Toyoda, the founder of Toyota Motor Corporation, is said to have remarked that engineers would never amount to anything unless they had to wash their hands at least three times a day – the evidence that they were getting their hands dirty from real work.

I will finish with some great advice from Taiichi Ohno:

Don’t look with your eyes, look with your feet. Don’t think with your head, think with your hands.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Extended Form of the Law of Requisite Variety:

Image Reference – Now You See It. Now You Don’t (Bill Tarr)

The Cybernetics of a Society:

In today’s post, I will be following the thoughts from my previous post, Consistency over Completeness. We were looking at each one of us being informationally closed, and computing a stable reality. The stability comes from the recursive computations of what is being observed. I hope to expand the idea of stability from an individual to a society in today’s post.

Humberto Maturana, the cybernetician biologist (or biologist cybernetician) said – anything said is said by an observer. Heinz von Foerster, one of my heroes in cybernetics, expanded this and said – everything said is said to an observer. Von Foerster’s thinking was that language is not monologic but always dialogic. He noted:

The observer as a strange singularity in the universe does not attract me… I am fascinated by images of duality, by binary metaphors like dance and dialogue where only a duality creates a unity. Therefore, the statement… – “Anything said is said by an observer” – is floating freely, in a sense. It exists in a vacuum as long as it is not embedded in a social structure because speaking is meaningless, and dialogue is impossible, if no one is listening. So, I have added a corollary to that theorem, which I named with all due modesty Heinz von Foerster’s Corollary Nr. 1: “Everything said is said to an observer.” Language is not monologic but always dialogic. Whenever I say or describe something, I am after all not doing it for myself but to make someone else know and understand what I am thinking of intending to do.

Heinz von Foerster’s great insight was perhaps inspired by the works of his distant relative, the brilliant philosopher Ludwig Wittgenstein. Wittgenstein proposed that language is a very public matter, and that a private language is not possible. The meaning of a word such as “apple” does not inherently come from the word “apple”. The meaning of the word comes from how it is used. The meaning comes from repeated usage of the word in a public setting. Thus, even though the experience of an apple may be private to the individual, the way we describe it is by using a public language. Von Foerster continues:

When other observers are involved… we get a triad consisting of the observers, the languages, and the relations constituting a social unit. The addition produces the nucleus and the core structure of society, which consists of two people using language. Due to the recursive nature of their interactions, stabilities arise, they generate observers and their worlds, who recursively create other stable worlds through interacting in language. Therefore, we can call a funny experience apple because other people also call it apple. Nobody knows, however, whether the green color of the apple you perceive, is the same experience as the one I am referring to with the word green. In other words, observers, languages, and societies are constituted through recursive linguistic interaction, although it is impossible to say which of these components came first and which were last – remember the comparable case of hen, egg and cock – we need all three in order to have all three.

Klaus Krippendorff defined closure as follows – A system is closed if it provides its own explanation and no references to an input are required. With closure, recursion is a good and perhaps the only way to interact. As organizationally closed entities, we are able to stay viable only as part of a social realm. When we are part of a social realm, we have to construct reality against an external reference. Understanding is still generated internally, but with an external point of reference. This adds to the reality of the social realm as a collective. If society is to have an identity that is sustained over time, its viability must come from its members. Like a set of nested dolls, society’s structure comes from participating individuals who are themselves embedded recursively in the societal realm. The structure of the societal or social realm is not designed, but emergent from the interactions, desires, goals, etc. of the individuals. The society is able to live on while the individuals come and go.

I am part of someone else’s environment, and I add to the variety of their environment with my decisions and actions (sometimes inactions). This is an important reminder for us to hold onto in light of recent world events including a devastating pandemic. I will finish with some wise words from Heinz von Foerster:

A human being is a human being together with another human being; this is what a human being is. I exist through another “I”, I see myself through the eyes of the Other, and I shall not tolerate that this relationship is destroyed by the idea of the objective knowledge of an independent reality, which tears us apart and makes the Other as object which is distinct from me. This world of ideas has nothing to do with proof, it is a world one must experience, see, or simply be. When one suddenly experiences this sort of communality, one begins to dance together, one senses the next common step and one’s movements fuse with those of the other into one and the same person, into a being that can see with four eyes. Reality becomes communality and community. When the partners are in harmony, twoness flows like oneness, and the distinction between leading and being led has become meaningless.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Consistency over Completeness:

Source – The Certainty of Uncertainty: Dialogues Introducing Constructivism By Bernhard Poerksen

Consistency over Completeness:

Today’s post is almost a follow-up to my earlier post – The Truth about True Models. In that post, I talked about Dr. Donald Hoffman’s idea of the Fitness-Beats-Truth or FBT Theorem. Loosely put, the idea behind the FBT Theorem is that we have evolved not to have “true” perceptions of reality. We survived because we had “fitness”-based models and not “true” models. In today’s post, I am continuing this idea using ideas from Heinz von Foerster, one of my Cybernetics heroes.

Heinz von Foerster came up with “the postulate of epistemic homeostasis”. This postulate states:

The nervous system as a whole is organized in such a way (organizes itself in such a way) that it computes a stable reality.

It is important to note here that we are speaking about computing “a” reality and not “the” reality. Our nervous system is informationally closed (to follow up from the previous post). This means that we do not have direct access to the reality outside. All we have is what we can perceive through our perception framework. The famous philosopher Immanuel Kant referred to this as the noumena (the reality that we don’t have direct access to) and the phenomena (the perceived representation of the external reality). All we can do is compute a reality based on our interpretive framework. This is just a version of reality, and each one of us computes a reality that is unique to us.

The other concept to make note of is the “stable” part of the stable reality. In Godelian* speak, our nervous system cares more about consistency than completeness. When we encounter a phenomenon, our nervous system looks at stable correlations from the past and present, and computes a sensation that confirms the perceived representation of the phenomenon. Von Foerster gives the example of a table. We can see the table, and we can touch it, and maybe bang on it. With each of these confirmations and correlations between the different sensory inputs, the table becomes more and more a “table” to us.

*Kurt Godel, one of the famous logicians of the last century, came up with the idea that any formal system able to do elementary arithmetic cannot be both complete and consistent; it is either incomplete or inconsistent.

From the cybernetics standpoint, we are talking about an observer and the observed. The interaction between the observer and the observed is an act of computing a reality. The first step in computing a reality is making distinctions. If there are no distinctions, everything about the observed will be uniform, and no information can be processed by the observer. The distinctions refer to the variety of the observed. The more distinctions there are, the more variety the observed has. From a second order cybernetics standpoint, the variety of the observed depends upon the variety of the observer. This goes back to the earlier point about each of us computing a unique stable reality. Each one of us is unique in how we perceive things. This is our variety as the observer. The observed, that which is external to us, always has more potential variety than we do. We cut down or attenuate this high variety by choosing certain attributes that interest us. Once the distinctions are made, we find relations between these distinctions to make sense of it all. This corresponds to the confirmations and correlations that we noted above in the example of the table.

We are able to survive in our environment because we are able to continuously compute a stable reality. The stability comes from the recursive computations of what is being observed. For example, let’s go back to the example of the table. Our eyes receive the sensory input of the image of the table. This is a first computation. This sensory image then goes up the “neurochain”, where it is computed again. This happens again and again, as the input gets “decoded” at each level, until it gets satisfactorily decoded by our nervous system. The final result is a computation of a computation of a computation of a computation, and so on. The stability is achieved from this recursion.
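One way to get a feel for stability arising from recursion is a small numerical sketch. This is only an analogy, built around a made-up stabilize helper: a value is recomputed from its own previous output until it stops changing, and the stable result hardly depends on where we started.

```python
import math

def stabilize(f, x0, tol=1e-9, max_iter=1000):
    """Apply f to its own output repeatedly until the result stops changing."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Repeatedly applying cos converges to the same stable value (about 0.739)
# from very different starting points: the stability comes from the recursion itself.
print(stabilize(math.cos, 0.0))
print(stabilize(math.cos, 2.5))
```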

The idea of consistency over completeness is quite fascinating. It stems mainly from the limitation of our nervous system: it cannot hold a true representation of reality. There is a common belief that we live with uncertainty, yet our nervous system strives to provide us a stable version of reality, one that is devoid of uncertainties. This is a fascinating idea. We are able to think about this only from a second order standpoint. We are able to ponder our cognitive blind spots because we are able to do second order cybernetics. We are able to think about thinking. We are able to put ourselves into the observed. Second order cybernetics is the study of observing systems, where the observers themselves are part of the observed system.

I will leave the reader with a final thought: the act of observing oneself is also a computation of “a” stable reality.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Wittgenstein and Autopoiesis:

Cybernetic Explanation, Purpose and AI:

In today’s post, I am following the theme of cybernetic explanation that I talked about in my last post – The Monkey’s Prose – Cybernetic Explanation. I recently listened to the talks given as part of the Tenth International Conference on Complex Systems. I really enjoyed the keynote speech by the Herbert A. Simon Award winner, Melanie Mitchell. She told the story of a project in which her student’s AI was able to recognize, with good accuracy, whether or not there was an animal in a picture. Her student then dug deep into the AI’s model. An AI is taught to identify a characteristic by being shown a large dataset (in this case, pictures with and without animals). The AI is told which pictures have an animal and which do not, and it builds an algorithm from this large dataset. Correct answers reinforce the algorithm, and wrong answers tweak it, with weights assigned to the “incorrectness”. This is very much like how we learn. What Mitchell’s student found was that the AI was assigning probabilities based on whether the background was blurry or not. When the background is blurry, it is more likely that there is an animal in the picture. In other words, the AI is not looking for an animal; it is just looking to see whether the background is blurry. Depending upon the statistical probability, the AI will answer that there is or there is not an animal in the picture.
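As a rough sketch of how such a shortcut can arise, consider the toy example below. To be clear, this is not the model from Mitchell’s talk; the single “blurriness” feature and the data are invented. A simple classifier trained only on background blur can answer the animal/no-animal question with high accuracy without ever representing an animal.

```python
# A minimal, hypothetical sketch of "shortcut learning": the animal / no-animal labels
# happen to correlate with a background-blur feature, so a classifier trained only on
# blurriness scores well without ever "seeing" an animal. Data and feature are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
animal = rng.integers(0, 2, n)  # ground truth: is there an animal in the picture?
# Photographers tend to blur the background for animal shots (the spurious correlation).
blur = animal * rng.normal(0.8, 0.1, n) + (1 - animal) * rng.normal(0.2, 0.1, n)
X = blur.reshape(-1, 1)         # the model only ever sees "blurriness"

model = LogisticRegression().fit(X, animal)
print("accuracy from blur alone:", model.score(X, animal))  # high, yet no animal detector exists
```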

We humans assign meaning to the AI’s output, and believe that the AI is able to differentiate whether there is an animal in the picture or not. In actuality, the AI is merely using the statistical probability that the background is blurry. We cannot help but assign meanings to things. We say that nature has a purpose, or that evolution has a purpose. We assign causality to phenomena. It is interesting to think about whether it truly matters that the AI is not really identifying the animal in the picture. The outcome still has the appearance that the AI is able to tell whether or not there is an animal in the picture. We are able to bring in more concepts than the AI can. Mitchell discusses the difference between concepts and perceptual categories. What the AI is doing is constructing perceptual categories that are limited in nature, whereas what we construct are concepts that may be linked to other concepts. The example that Mitchell provided was that of a bridge. For us, a bridge can mean many things based on the linguistic application. We can say that a person is able to “bridge the gap” or that our nose has a bridge. The capacity of AI, at this time at least, is to stick to the bridge being a perceptual category based on the context of the data it has. We can talk in metaphors that the AI cannot understand. A bridge can be a concept or an actual physical thing for us. A simple task, such as the question of an animal in the picture, carries no risk. However, as we up the ante to a task such as autonomous driving, we can no longer rely on the appearance that the AI is able to carry out the task. This is demonstrated in the morality or ethics debate with regards to AI, and how it should carry out probability calculations in the event of a hazard. This involves questions such as the ones in the trolley problem.

This also leads to another idea that has the cybernetic explanation embedded in it: the idea of “do no harm”. The requirement is not specifically to do good deeds, but to not do things that will cause harm to others. As the English philosopher John Stuart Mill put it:

That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

This is also what Isaac Asimov referred to as the first of the three laws of robotics in his 1942 short story, Runaround:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The other two laws are circularly referenced to the first law:

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The idea of cybernetic explanation gives us another perspective on purpose and meaning. Our natural disposition is to assign meaning and purpose, as I indicated earlier. We tend to believe that Truth is out there or that there is an objective reality. As the great Cybernetician Heinz von Foerster put it – “The environment contains no information; the environment is as it is”. Truth, or a description of reality, is our creation, made with our vocabulary. And most importantly, there are other beings describing realities with their vocabularies as well. I will finish with some wise words from Friedrich Nietzsche.

“It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically.”

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Monkey’s Prose – Cybernetic Explanation:

Complexity – Only When You Realize You Are Blind, Can You See:

In today’s post, I am looking at the idea of complexity from a second order Cybernetics standpoint. The phrase “only when you realize you are blind, can you see” is a paraphrase of a statement from the great Heinz von Foerster. I have talked about von Foerster in many of my posts, and he is one of my heroes in Cybernetics. There is no one universally accepted definition of complexity. Haridimos Tsoukas and Mary Jo Hatch wrote a very insightful paper called “Complex Thinking, Complex Practice”. In the paper, they try to address how to explain complexity. They refer to the works of John Casti and C. H. Waddington to further their ideas:

Waddington notes that complexity has something to do with the number of components of a system as well as with the number of ways in which they can be related… Casti defines complexity as being ‘directly proportional to the length of the shortest possible description of [a system]’.
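Casti’s “shortest possible description” can be made a bit more tangible with a rough proxy: compressed size. The sketch below is my own illustration, not Casti’s measure; true shortest-description complexity cannot be computed exactly, so compression only stands in for it.

```python
# An illustrative proxy for "length of the shortest possible description": compressed size.
# A highly regular string admits a short description; a random one does not. This is only
# a sketch, since compression is a stand-in for true shortest-description complexity.
import zlib, random

regular = "ab" * 500                                    # simple rule: repeat "ab"
random.seed(0)
irregular = "".join(random.choice("ab") for _ in range(1000))

print(len(zlib.compress(regular.encode())))             # small: a short description suffices
print(len(zlib.compress(irregular.encode())))           # larger: no shorter description found
```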

Casti’s view of complexity is particularly interesting because complexity is not viewed as being intrinsic to the phenomenon. This is a common idea in Cybernetics, mainly second order cybernetics. There are two ‘classifications’ of cybernetics – first order and second order cybernetics. As von Foerster explained it, first order cybernetics is the study of observed systems, where the basic assumption is that the system is objectively knowable. Second order cybernetics is the study of observing systems, where the basic assumption is that the observer is included in the act of observing, and thus the observer is part of the observed system. This leads to second order thinking such as understanding understanding or observing observing. It is interesting because, as I am typing, Microsoft Word is telling me that “understanding understanding” is syntactically incorrect. That, obviously, would be a first order viewpoint. Second order cybernetics is a meta-discipline, and one that generates wisdom.

When we take the observer into consideration, we realize that complexity is in the eye of the beholder. Complexity is observer-dependent; that is, it depends upon how the system is described and interpreted. If the observer is able to make more varied distinctions in their description, we can say that the phenomenon or the system is being interpreted as complex. In their paper, Tsoukas and Hatch bring up the role of language in describing, and thus interpreting, complexity. They note that:

Chaos and complexity are metaphors that posit new connections, draw our attention to new phenomena, and help us see what we could not see before (Rorty).

This is quite interesting. When we learn the language of complexity, we are able to understand complexity better, and we become better at describing it, in a reflexive manner.

What complexity science has done is to draw our attention to certain features of systems’ behaviors which were hitherto unremarked, such as non-linearity, scale-dependence, recursiveness, sensitivity to initial conditions, emergence (etc.)

From this standpoint, we can say that complexity lies in the interactions we have with the system, and that depending on our perspective (vantage point) and the interaction, we can come away with a different interpretation of complexity.

Heinz von Foerster remarked that complexity is not in the world but rather in the language we use to describe the world. Paraphrasing von Foerster, cognition is the computation of descriptions of reality. Managing complexity then becomes a cognitive task. How well you can interact or manage interactions depends on how effective your description is and how well it aligns with others’ descriptions. The complexity of a system lies in the description of that system, which rests entirely on the observer/sensemaker. The idea that complexity is in the eye of the beholder points to the importance of second order cybernetics/thinking. The world is as it is; it gets meaning only when we assign meaning to it through how we describe and interpret it. To put it differently, “the logic of the world is the logic of the descriptions of the world” (Heinz von Foerster).
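A small, made-up illustration of this point: the same observation gets a long or a short description depending on the vocabulary the describer brings to it.

```python
# A toy illustration (my own construction, not von Foerster's) of how the "length" of a
# description depends on the describer's vocabulary: an observer whose language includes
# a notion of repetition describes the same observation far more briefly.
observation = "ab" * 500

description_without_repetition = observation              # spell out every character
description_with_repetition = "repeat 'ab' 500 times"     # a vocabulary that includes "repeat"

print(len(description_without_repetition))  # 1000
print(len(description_with_repetition))     # 21
```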

The idea of complexity not being intrinsic to a system is also echoed by one of the pioneers of cybernetics, Ross Ashby. He noted – “a system’s complexity is purely relative to a given observer; I reject the attempt to measure an absolute, or intrinsic, complexity; but this acceptance of complexity as something in the eye of the beholder is, in my opinion, the only workable way of measuring complexity”.

The ideas of second order cybernetics emphasize the importance of observers. The “system” is a mental construct by an observer to make sense of a phenomenon. The observer, based on their needs, draws boundaries to separate a “system” from its environment. This allows the observer to understand the system in the context of its environment. At the same time, the observer has to understand that there are other observers in the same social realm who may draw different boundaries and arrive at different understandings based on their own needs, biases, perspectives, etc.

A phenomenon can have multiple identities or meanings depending on who is describing it. Let’s use the Covid-19 pandemic as an example. For some people, it has become a problem of economics rather than a healthcare problem, while for others it has become a problem of freedom or ethics. If we are to attempt tackling the complexity of such an issue, the worst thing we can do is to attempt first order thinking: the idea that the phenomenon can be observed objectively. Issues requiring a second order approach are made worse by the application of first order methodologies. The danger here is that we can fall prey to believing that our own narrative is the whole Truth.

As the pragmatic philosopher Richard Rorty points out:

The world does not speak. Only we do. The world can, once we have programmed ourselves with a language, cause us to hold beliefs. But it cannot propose a language for us to speak. Only other human beings can do that.

If we are to understand the complexity of a phenomenon, we need to start by realizing that our version of its complexity is only one of many. Our ability to understand complexity depends on our ability to describe it, and we lack the ability to completely describe a phenomenon. The different descriptions that come from the different participants may be contradictory and can point out apparent paradoxes in our social realm.

If we are to tackle complexity, we need coherence across multiple interpretations. As Karl Weick points out, we need to complicate ourselves. By generating and accommodating multiple inequivalent descriptions, practitioners will increase the complexity of their understanding and, therefore, will be more likely to match the complexity of the situation they attempt to manage. In complexity, coherence, the idea of connecting ideas together, is important since it helps create a clearer picture and helps us avoid blind spots. This co-constructed description is itself an emergent phenomenon.

In second order Cybernetics, there are two statements that might shed more light on everything we have discussed so far:

Anything said is said BY an observer. (Maturana)

Anything said is said TO an observer. (von Foerster)

A lot can be said between these two statements. The first points out the importance of the observer, and the second points out that there are other observers, and that we co-construct our social reality.

Our descriptions are abstractions since we are limited by our languages. All our biases, fears, misunderstandings, ignorance, etc. lie hidden in the “systems” we construct. We get into trouble when we assume that these abstractions are real things. This is the first order approach, where we are not aware that we do not see, due to our cognitive blind spots. When we realize that all we have are abstractions, we get to the second order approach. We include ourselves in our observation, and we start looking at how we make these abstractions. We also become aware of other autonomous participants in our social reality engaging in similar constructions of narratives. As we seek their understanding, we become aware of our cognitive blind spots. We realize that everything is on a spectrum, and that our “either/or” thinking is actually a false dichotomy.

At this point, Heinz von Foerster would say that we start to see when we realize that we are blind.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Causality and Purpose in Systems: