Sunday 15 July 2018

Uncertainty in Counting and Symmetry-Breaking in an Evolutionary Process

Keynes, in his seminal "Treatise on Probability" of 1921 (little known to today's statisticians, who really ought to read it), identified a principle he called "negative analogy": the principle by which some new difference is codified and confirmed through repeated experimental sampling.


"The object of increasing the number of instances arises out of the fact that we are nearly always aware of some difference between the instances, and that even where the known difference is insignificant we may suspect, especially when our knowledge of the instances is very incomplete, that there may be more. Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy. For this reason, and for this reason only, new instances are valuable." (p. 233)
This principle should be compared to Ashby's approach to a "cybernetic science": the cybernetician "observes what might have happened but did not". The cybernetician can only do this by examining many descriptions of a thing, noticing the "unessential resemblances" and introducing "a new difference". What both Keynes and Ashby are saying is that the observation of likeness is essentially uncertain.

The issue is central to Shannon information theory. Information theory counts likenesses. It determines the surprisingness of events by treating each event as an element in an alphabet: it can then calculate the probability of each event and so establish a metric of "average surprisingness" for a sequence of events. Although in the light of Keynes's thoughts on probability this seems naïve, Shannon's equation has been extremely useful - we owe the internet to it - so one shouldn't throw the baby out with the bathwater.
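As a concrete illustration of what this counting amounts to (a minimal sketch in Python, with an invented message; nothing here is from Shannon's own notation), the "average surprisingness" of a sequence is just the entropy of the frequencies of its symbols, once an alphabet has been decided:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Average surprisingness (in bits) of a sequence, given that each
    distinct symbol counts as one element of the alphabet."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy("ABABABAB"))  # 1.0 bit: two equally likely symbols
print(shannon_entropy("AAAAAAAB"))  # ~0.54 bits: one symbol dominates, less surprise
```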

But the Shannon index, which identifies the elements of the alphabet, is actually a means by which uncertainty is managed in the process of calculating the "surprisingness" of a message. This can be shown in the diagram below, derived from Beer's diagrams in Platform for Change:


The beauty of this diagram is that it makes explicit that the Shannon index is a "creative production" of the process of uncertainty management. It is a codification or categorisation, which means that, essentially, it only has meaning because it is social. In turn, we have to consider an environment of other people categorising events, an environment which produces many examples of messages that might be analysed. Two people will differ in the ways they categorise their events, which means that the uncertainty dynamic in counting elements is fluid, not fixed.
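A hedged sketch of this fluidity (the event stream and both coding schemes below are invented for illustration): the same run of events yields different entropies depending on how an observer categorises them, so the "surprisingness" belongs as much to the index as to the events:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    counts = Counter(seq)
    return -sum((n / len(seq)) * math.log2(n / len(seq)) for n in counts.values())

# Hypothetical event stream and two invented categorisation schemes.
events = ["robin", "sparrow", "robin", "cat", "sparrow", "robin", "dog", "sparrow"]

# Observer 1 indexes every species separately.
scheme_1 = {e: e for e in events}

# Observer 2 only distinguishes birds from non-birds.
scheme_2 = {"robin": "bird", "sparrow": "bird", "cat": "other", "dog": "other"}

for name, scheme in [("fine-grained", scheme_1), ("coarse", scheme_2)]:
    coded = [scheme[e] for e in events]
    print(name, round(shannon_entropy(coded), 3))

# fine-grained ~1.81 bits, coarse ~0.81 bits: the chosen index changes the measure.
```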

There is a tipping-point in the identification of indexes where some current scheme for identifying differences is called into question, and a new scheme comes into being. New schemes are not arbitrary, however. Some difference in the examples provided gradually gets identified (as Keynes says: "Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy"), but the way this happens is somehow coherent and consistent with what has gone before.

I wonder if this suggests an underlying principle of evolutionary logic by which the most fundamental principles of difference, from the very earliest beginnings, are encoded in the emergent higher-level differences that appear further on in history. A new difference identified by "negative analogy" is not really "new", but an echo of something more primitive or fundamental. Shannon, of course, treats only the surface. What we might need is a historicisation of information theory.

Let's say, for the sake of argument, that the foundational environment is a de Broglie-Bohm "pilot wave": a simple oscillation. From differences between codifications of this simple oscillation, higher-level features might be codified, which might then give way to the identification of new features by drawing back down to these fundamental origins. The symmetry-breaking of this process is tied to the originating principle - which could be a pilot wave, or some other fundamental property of nature.



So what might this mean for Shannon information? When the relative entropy between different features approaches zero, the distinction between those features collapses. This may be a sign that some new identification of a feature is about to take place: it is a collapse into the originating state.
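A possible way to see this numerically (the feature distributions below are invented): the Kullback-Leibler relative entropy between two feature distributions shrinks towards zero as their profiles become indistinguishable:

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(P || Q) in bits; assumes matching supports."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

feature_a = [0.7, 0.2, 0.1]
feature_b_distinct = [0.1, 0.2, 0.7]     # clearly a different feature
feature_b_converged = [0.69, 0.21, 0.1]  # nearly the same profile

print(relative_entropy(feature_a, feature_b_distinct))   # ~1.68 bits: distinguishable
print(relative_entropy(feature_a, feature_b_converged))  # ~0.0005 bits: distinction collapsed
```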

Each level of this diagram might be redrawn as a graph of the shifting entropies of its features. A basic-level diagram can plot the entropy of the shifting entropies; a further level can plot the entropy of the relations between those entropies, and so on.
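One rough way such a hierarchy might be computed (the sliding windows and the discretisation into bins are my assumptions, not anything specified here): take the entropy of successive windows of events, turn those entropy values back into a countable alphabet, and then take the entropy of that, and so on:

```python
import math
from collections import Counter

def entropy(seq):
    counts = Counter(seq)
    return -sum((n / len(seq)) * math.log2(n / len(seq)) for n in counts.values())

def window_entropies(seq, window):
    """Shifting entropies: the entropy of each successive window of events."""
    return [entropy(seq[i:i + window]) for i in range(0, len(seq) - window + 1, window)]

def discretise(values, bins=3):
    """Turn continuous entropy values back into a countable alphabet of levels."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    return [min(int((v - lo) / width), bins - 1) for v in values]

stream = list("ABABABAB" "AAAAAAAB" "ABCDABCD" "AAAABBBB" "ABABCDCD")  # invented events
level_1 = window_entropies(stream, 8)   # shifting entropies of the raw events
level_2 = entropy(discretise(level_1))  # entropy of the shifting entropies
print([round(e, 2) for e in level_1], round(level_2, 2))
```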

We may not be able to see exactly how the negative analogy is drawn. But we might be able to see the effects of it having been drawn in the evolutionary development of countable features. Surprise has an evolutionary hierarchy. 
