Sunday, 15 July 2018

Uncertainty in Counting and Symmetry-Breaking in an Evolutionary Process

Keynes, in his seminal "Treatise on Probability" of 1921 (little known to today's statisticians, who really ought to read it), identified a principle he called "negative analogy": the principle by which some new difference is codified and confirmed by repeated experimental sampling.


"The object of increasing the number of instances arises out of the fact that we are nearly always aware of some difference between the instances, and that even where the known difference is insignificant we may suspect, especially when our knowledge of the instances is very incomplete, that there may be more. Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy. For this reason, and for this reason only, new instances are valuable." (p. 233)
This principle should be compared to Ashby's approach to a "cybernetic science": the cybernetician "observes what might have happened but did not". The cybernetician can only do this by examining many descriptions of a thing, noting the "unessential resemblances" and introducing "a new difference". What both Keynes and Ashby are saying is that the observation of likeness is essentially uncertain.

The issue is central to Shannon information theory. Information theory counts likenesses. It determines the surprisingness of events because it treats each event as an element in an alphabet. It can then calculate the probability of that event and thus establish some metric of "average surprisingness" in a sequence of events. Although in the light of Keynes's thoughts on probability this seems naïve, Shannon's equation has been extremely useful - we owe the internet to it - so one shouldn't throw the baby out with the bathwater.
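A minimal sketch of that counting, with an invented alphabet and invented probabilities, might look like this in Python:

```python
# A minimal sketch of the idea above: each event in an alphabet has a
# probability, its "surprisingness" is -log2(p), and Shannon's measure is the
# probability-weighted average of those surprisals. The alphabet and
# probabilities here are invented purely for illustration.
import math

alphabet = {"a": 0.5, "b": 0.25, "c": 0.25}   # hypothetical event probabilities

surprisal = {event: -math.log2(p) for event, p in alphabet.items()}
average_surprise = sum(p * surprisal[event] for event, p in alphabet.items())

print(surprisal)          # {'a': 1.0, 'b': 2.0, 'c': 2.0}
print(average_surprise)   # 1.5 bits
```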

But the Shannon index, which identifies the elements of the alphabet, is actually a means by which uncertainty is managed in the process of calculating the "surprisingness" of a message. This can be shown in the diagram below, derived from Beer's diagrams in Platform for Change:


The beauty of this diagram is that it makes explicit that the Shannon index is a "creative production" of the process of uncertainty management. It is a codification or categorisation. That means that, essentially, it only has meaning because it is social. That means in turn that we have to consider an environment of other people categorising events, an environment which produces many examples of messages that might be analysed. Two people will differ in the ways they categorise their events, which means that the uncertainty dynamic in counting elements is fluid, not fixed:

There is a tipping-point in the identification of indexes where some current scheme for identifying differences is called into question, and a new scheme comes into being. New schemes are not arbitrary, however. Some difference in the examples that are provided gradually gets identified (as Keynes says: "Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy") but the way this happens is somehow coherent and consistent with what has gone before. 

I wonder if this suggests that there is an underlying principle of evolutionary logic by which the most fundamental principles of difference from the very earliest beginnings are encoded in emergent higher-level differences further on in history. A new difference which is identified by "negative analogy" is not really "new", but an echo of something more primitive or fundamental. Shannon, of course, only treats the surface. But actually, what we might need is a historicisation of information theory.

Let's say, for the sake of argument, that the foundational environment is a de Broglie-Bohm "pilot wave": a simple oscillation. From the differences between the codification of a simple oscillation, higher level features might be codified, which might then give way to the identification of new features by drawing back down to fundamental origins. The symmetry-breaking of this process is tied to the originating principle - which could be a pilot wave, or some other fundamental property of nature.  



So what might this mean for Shannon information? When the relative entropy between different features approaches zero, the distinction between those features collapses. This may be a sign that some new identification of a feature is about to take place: it is a collapse into the originating state.
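To make "relative entropy approaching zero" slightly more concrete, here is a minimal sketch, under the assumption that each feature can be summarised as a probability distribution over the same set of events (the distributions are invented):

```python
# Relative entropy (Kullback-Leibler divergence) between two feature
# distributions. As D(p || q) approaches zero the two features become
# indistinguishable - the "collapse" described above. Distributions are invented.
import math

def relative_entropy(p, q):
    """D(p || q) = sum(p_i * log2(p_i / q_i)), assuming q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

feature_a = [0.5, 0.3, 0.2]
feature_b = [0.48, 0.32, 0.20]
print(relative_entropy(feature_a, feature_b))  # close to zero: the distinction is collapsing
```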

Each level of this diagram might be redrawn as a graph of the shifting entropies of its features. A basic-level diagram can plot the entropy of those shifting entropies. A further level can plot the entropy of the relations between those entropies, and so on.
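A speculative sketch of what that recursion might look like computationally, assuming we already have a series of entropy values per time segment (the series and the bin count are arbitrary choices):

```python
# "Entropy of shifting entropies": bin a series of entropy values so that the
# Shannon measure can be applied to them again, one level up. The input series
# and the number of bins are arbitrary illustrations.
import math
from collections import Counter

def shannon(symbols):
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_of_entropies(entropy_series, bins=8):
    lo, hi = min(entropy_series), max(entropy_series)
    width = (hi - lo) / bins or 1.0           # guard against a constant series
    binned = [min(int((h - lo) / width), bins - 1) for h in entropy_series]
    return shannon(binned)

print(entropy_of_entropies([0.2, 0.3, 0.3, 0.9, 1.1, 0.2]))
```

The same construction could then be repeated on the relations between such series to produce the further levels.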

We may not be able to see exactly how the negative analogy is drawn. But we might be able to see the effects of it having been drawn in the evolutionary development of countable features. Surprise has an evolutionary hierarchy. 

Thursday, 5 July 2018

Seven Problems of Pointing to the Future of Education (with our hands tied behind our back) and Seven Suggestions for Addressing Them

The theme of "research connectedness" is common in today's universities. It's a well-intentioned attempt to say "University isn't school". Unfortunately, due to a combination of factors including marketisation, modularisation and learning outcomes, university has become increasingly like school in recent years. "Research connectedness" attempts to remedy the trend by introducing more inquiry-based learning and personalised curricula. All of this stuff is good, and it's something which I have been involved with for a long time. It's also at the heart of what I'm doing at the Far Eastern Federal University in Russia on "Global Scientific Dialogue". But I can't help thinking that we're still missing the point with all these new initiatives. There are (at least) seven problems:

Problem 1: Universities see the curriculum as their product. However, the product of learning is the student's understanding which is arrived at through conversation. The university sells courses and certificates; it does not sell "the potential for arriving at an understanding".

Problem 2: Learning outcomes do not measure student understanding of a subject. They establish the validity of a student's claim to meet a set of criteria written by a teacher (or a module author). What they really measure is the student's understanding of the assessment process.

Problem 3: Learning outcomes codify the expectations of teachers with regard to the way that student performance will be assessed in a subject. By definition, they demand that the teacher knows what they are doing. In research, it is often the case that nobody quite knows what they are doing (Einstein: "If we knew what we were doing, we wouldn't call it research!")

Problem 4: Modules are aggregates of learning outcomes; that is, sets of expectations. Students study many modules, and have to negotiate many different sets of expectations from many different teachers. There is no space for considering how different teachers' understandings and expectations differ, whether they cohere, or how incoherence might lead to fundamental problems in the student's understanding.

Problem 5: Inevitably, the only way to cope is to behave strategically. "How do I pass?" becomes more important than "What do I understand?". Suppressing "What do I understand?" in some cases may lead to mental breakdown.

Problem 6: From the perspective of strategic learners, an inquiry-based module appears to be the worst of all possible worlds: "basically, they don't teach you anything, and you have to find your own way". Since even inquiry-based modules will have learning outcomes, a strategic approach may work, but result in deeply unsatisfying learning experiences ("I didn't learn anything!")

Problem 7: Students are customers in an education market. Whilst learning outcomes codify teacher expectations, learner expectations are increasingly codified by the financial transaction they have with the university, and "student satisfaction" becomes a weapon that can be used against teachers to force them to align their expectations with the learners'.

What can we do?  Here are seven suggestions:

1. The product of the university must be student understanding. Certificates, modules and timetables are epiphenomena.

2. Understanding is produced by conversation. The fundamental function of the university is to find ways of best coordinating rich conversations between students and staff.

3. The curriculum is an outmoded means of coordinating conversation. It is a rigid inflexible object in a fast-changing, uncertain world. The means of coordinating conversation needs to become a much more flexible technology (indeed, this is where investment in technology should be placed, not in VLEs or e-Portfolio, which merely uphold the ailing curriculum)

4. Traditional assessment relies on experts, which necessitates hierarchy within the institution. This hierarchy can be transformed into a "heterarchy" - a flat structure of peer-based coordination. Technologies like Adaptive Comparative Judgement, machine learning and other tools for collaboration and judgement-making can be of great significance here (see the sketch after this list).

5. Transformation of institutional hierarchy can produce far greater flexibility in the way that learners engage with the institution. The "long transactions" of assessment (i.e. the 14-week period of "I've done my assessment, you give me a mark") can be broken up into tiny chunks, students can genuinely roll on and roll off courses, and new funding models, including educational subscription and assessment services, can be explored.

6. The university needs to investigate and understand its real environment (it is not a market - it is society!). The environment is uncertain, and the best way to understand uncertainty is through making exploratory interventions in society from which the university can learn how to coordinate itself. Generosity towards society should be a strategic mission: free courses, learning opportunities and community engagement should be pursued not to "sell courses", but as a strategic search for future viability.

7. To put student understanding at the heart of what the university is, is also to place shared scientific inquiry as the underpinning guide. The scientific discourse is hampered by ancient practices of publication and status which ill-suit an inherently uncertain science. The university should free itself from this, and embrace the rich panoply of technology we have at our disposal for encouraging scientists to communicate their uncertainty about science in an open and dialogic way.
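To make suggestion 4 slightly more concrete: Adaptive Comparative Judgement typically rests on a Bradley-Terry model fitted to pairwise "which of these two pieces of work is better?" judgements. The sketch below is a minimal illustration of that model only - the judgement data and essay names are hypothetical, and real ACJ systems add adaptive pairing of work and reliability estimation on top.

```python
# Minimal Bradley-Terry scoring from pairwise judgements - the statistical model
# underlying Adaptive Comparative Judgement. Data and names are hypothetical.
import math

def bradley_terry(judgements, scripts, iterations=200, lr=0.1):
    """judgements: list of (winner, loser) pairs. Returns a quality score per script."""
    theta = {s: 0.0 for s in scripts}
    for _ in range(iterations):
        for winner, loser in judgements:
            p_win = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
            # gradient ascent on the log-likelihood of the observed judgement
            theta[winner] += lr * (1.0 - p_win)
            theta[loser] -= lr * (1.0 - p_win)
    return theta

judgements = [("essay_a", "essay_b"), ("essay_a", "essay_c"), ("essay_b", "essay_c")]
print(bradley_terry(judgements, ["essay_a", "essay_b", "essay_c"]))
```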


Saturday, 30 June 2018

Ground State

I haven't done any improvisation for ages. It is really a kind of spiritual practice for me - one that I am understanding more deeply now, as I am looking at work on "Communicative Musicality". I like Trevarthen's idea of "synrhythmia" and "amphoteronomy". It seems plausible to me that the vibrations of sound resonate with the circadian rhythms of physiology.

John Torday suggests that the physiological mechanisms and vibrations which underpin consciousness are themselves reflected in the action of cellular calcium pumps, which may well unlock the primal origins of cells in pre-history and the "implicate order" of Bohm's quantum mechanics. Music affects us profoundly - might it be because the whole universe and the wholeness of history are collapsed within the cellular organisation of our physiology? What a thought that is! I would like it to be true...


Tuesday, 26 June 2018

Tara McPherson on UNIX, technical componentisation and Feminism

I came back from Berkeley with a haul of books, partly thanks to the wonderful Moe’s Bookstore, and to a recommendation from a Japanese biology professor for a book of essays about current work on "communicative musicality", which I didn’t know about. As to the question posed by my Berkeley paper, “Do cells sing to each other?”, the answer is yes, and we are only just beginning to understand the aesthetic dimensions of communication which underpin biological self-organisation - knowledge which will, I believe, transform the way we think about human communication and learning. I also picked up a copy of Tara McPherson’s “Feminist in a Software Lab”, which my wife drew my attention to in another bookshop window.

It’s a beautiful book. McPherson is one of the lead figures in the digital humanities, and her book concerns underpinning critical issues within the most basic technologies we use. Unlike a lot of critical work about technology, which tends to be written by people who are not so comfortable with a command prompt, McPherson understands the world of UNIX kernels, cron jobs and bash scripting from the perspective of a practitioner. She also understands the technical rationale behind people like Eric Raymond, mention of whom caused such uproar among feminist critics at this year’s OER conference. But because she understands the technics, she can see beyond the surface to deeper problems in the way we think about technology, and to where Raymond’s deeply unpleasant politics is connected to a rationale for software development which very few dispute. She cites Nabeel Siddiqui who, on a SIGCIS listserv exchange about “Is UNIX racist?”, says:
“Certain programming practices reflect broader cultural ideas about modularity and standardization. These ideas also manifest in ideas about race during the Civil Rights movement and beyond… Computation is not simply about the technology itself but has broad implications for how we conceive of and think about the world around us… The sort of thinking that manifests itself in ‘color-blind’ policies and civil rights backlash have parallels with the sort of rhetoric expressed in Unix Programming manuals.” 
McPherson adds “this thinking also resonates with structures within UNIX, including its turn to modularity and the pipe.” With regard to education, she comments:
“Many now lament the overspecialization of the university; in effect, this tendency is a result of the additive logic of the lenticular or of the pipeline, where “content areas” or “fields” are tacked together without any sense of intersection, context or relation.” 
She quotes Zahid Chaudhury as saying:
“hegemonic structures tend to produce difference through the very mechanisms that guarantee equivalence. Laundry lists of unique differences, therefore, are indexes of an interpretive and political desire, a desire that often requires recapitulation to the familiar binarisms of subordination/subversion, homogeneity/heterogeneity, and increasingly, immoral/moral” 
This connection urgently needs to be made. The lack of diversity in tech is a problem – but it is underpinned by an approach to rationalist thinking which has gone unchallenged and which frames the way we think about science and software, pedagogy and the organisation of education – and, most importantly, diversity itself. Misogyny and racism are built into the genotype of techno-rationalism. This helps to explain how simply increasing diverse representation doesn’t really seem to change anything. Something deeper has to happen, and McPherson points to where that might be.

It is right to focus critique on the component orientation of modern software. We rationalise our software constructions as recursive aggregations of functional components; we replace one system with another which is deemed to have “functional equivalence”, all the time obliterating the difference between the village post office and an online service. Having said this, component orientation seems to help with the management of large-scale software projects (although maybe it doesn’t!), and facilitates the process of recombination which is an important part of many technical innovations. Yet McPherson also points to the fact that this separation and otherness is also a creation of boundary and distinction, and those distinctions tend to accompany distinctions of race and gender.
Through her Vectors project, McPherson has been probing at all this. She has enlisted the support of some powerful fellow travellers, including Katherine Hayles, whose work on cybernetics and the post-human is equally brilliant.

It’s rather easy these days to adopt a critical stance on technology – from the perspective of race, gender, sexuality, and so on. That’s because, I think, there’s so much injustice in the world and many people are hurting and angry. But critical intelligence demands more than the expression of outrage – that, after all, will be componentised by the system and used to maintain itself whilst it pretends to embrace diversity. Critical intelligence demands a deeper understanding of more fundamental biological, ecological, physical and social mechanisms which find expression in our technology.

McPherson is an advocate for making things – not just talking about them. If we all need to learn to code (and I am very sceptical about the motivation for government initiatives to do this), it is not because we all need to become workers for Apple or Microsoft. It is because we need a deep understanding of a way of thinking which has overtaken us in the 20th and 21st centuries. It’s about mucking in and talking to each other.




Saturday, 23 June 2018

The Presence of the Past and the Future of Education

I've been in Berkeley for the last few days at the Biosemiotics gathering (http://biosemiotics.life). It's a long story as to how an educational technologist becomes interested in cell to cell signalling, but basically it involves cybernetics, philosophy, music and technology. In fact, all the things that this blog is about.

In addition to the biosemiotics conference, I went to Los Angeles to meet with Prof. John Torday of UCLA whose work on cell signalling (see http://www.thethirdwayofevolution.com/people/view/john-s.-torday) follows a different path to that of the biosemioticians like Terry Deacon. Deacon's work on the role of constraint in epigenesis (see https://www.amazon.co.uk/Incomplete-Nature-Mind-Emerged-Matter/dp/0393049914/ref=sr_1_1?ie=UTF8&qid=1529740533&sr=8-1&keywords=incomplete+nature) impressed me hugely because he was basically saying something that philosophers had been arguing for a long time: absence is causal. Actually, Bateson got there first (also at the conference was Bateson scholar Peter Harries-Jones, whose work on bio-entropy is very important), but Bateson made a fundamental distinction between the Jungian Pleroma and Creatura - between the non-living inanimate world which obeys the 2nd law of thermodynamics, and the living world, which works against entropy, producing "neg-entropy", or "information".

Torday goes beyond Bateson, and suggests a deep connection between pleroma and creatura, between matter and consciousness. To do this, he cites the quantum mechanics of David Bohm, whose hidden variables, or pilot waves, present a fundamental originating mechanism for what Bohm calls an "implicate order". Going beyond Bateson is no mean feat. I'm convinced that this is right.

Torday has been steadily producing empirical evidence in his work on the physiology of the lung and the treatment of asthma. There have been significant medical breakthroughs which can only be explained through his new perspective on cell signalling.

Put most basically, cells organise themselves according to the ambiguity in their environment. Since the environment is central to cellular organisation, changes to the environment become imprinted in cell structures, where environmental stress causes fundamental functional changes to organisms. This helps to explain Stephen Jay Gould's exaptation, or pre-adaptation, by which the swim bladder of the fish evolves into a lung.

But it's not just lungs and swim bladders. Consciousness itself may also be the result of a similar process. The primeval past of evolutionary development, from the big bang (or whatever was at the beginning) to the present, is enfolded in our being.

Torday and I talked a lot about education. The pursuit of truth is really a pursuit of the fundamental ground state - of what Bohm calls the implicate order. The truth resonates, and Bohm himself argued that through music we could glimpse something of the implicate order which we lose sight of in other aspects of intellectual life. But we also see it in love, justice, and mathematics.

I'm optimistic because I think that in the end we have the truth on a spring. Right now, it's stretched almost to breaking point... I'm experiencing some of the direct consequences of this myself at the moment. But truth will return - although springs, like cells, have hysteresis, so everything which has been remains present in everything that comes after. This should serve as a warning to those who pursue self-interest, greed and oppression.

At the biosemiotics conference at Berkeley I got everyone to make music. They loved it, partly because they had to engage with each other in making sounds and listening to each other. The implicate order is in our communicative practice - it can't be abstracted away. In the end, when things right themselves again, we will teach our students differently, and we will use our technology to transform the ways we organise deep conversations into what Bohm called "dialogue". Fundamentally, we will dance again, because conversation is dancing - it is, as I mentioned at Anthony Seldon's excellent HE Festival the other week, "con-versare"... to "turn together". 

Sunday, 10 June 2018

Information theoretical approaches to Ecology in Music

Information theory provides an index of "surprise" in a system. It concerns the difference between what is anticipated and what is unexpected. So in a string of events which might read A, A, A, A, A there is a strong anticipation that the next event will be A. The Shannon formula reveals this because the probability of A in this string is 1, and log 1 is 0, thus making the degree of surprise 0. But if the string read A, A, A, A, Z then we get a much higher number: the probability of A is 4/5, and the probability of Z is 1/5, and their logs (base 2) are -0.32192809488 and -2.32192809489. Multiplying these by the probabilities, we get:

(4/5 * -0.32192809488) + (1/5 * -2.32192809489) =

-0.25754247591 + -0.464385618978 = -0.721928094888

Negating this sum, as the Shannon formula H = -Σ p·log2(p) requires, gives an entropy of roughly 0.72 bits, compared with 0 for the all-A string.
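A quick way to check this kind of calculation is to let Python do the counting (a minimal sketch, nothing more):

```python
# Check of the worked example: Shannon entropy of "AAAAA" and "AAAAZ".
import math
from collections import Counter

def entropy(sequence):
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy("AAAAA"))  # 0.0
print(entropy("AAAAZ"))  # roughly 0.7219 bits
```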

The problem with this approach is that it sees a sequence of characters as a single description of a phenomenon that can be treated independently of any other phenomenon. But nothing has only a single description; there are always multiple descriptions of anything. This means that there are multiple dimensions of "surprise" which must be considered together when doing any kind of analysis - and each dimension of surprise constrains the surprising nature of other dimensions.

A musical equivalent to the A, A, A, A, A might be a string of repeated Cs. But is this right? Simply calculating the entropy of the note C would give an entropy of 0 - and so would a longer string of the same note. What if the Cs continued for hours (rather like the B-flats in Stockhausen's "Stimmung") - is that the same? No. 

A better way to think about this is to think about the interacting entropies of multiple descriptions of the notes. How many descriptions are there of the note C? Well, there are descriptions of the timbre, the rhythm, the volume, and so on. And these will vary over time, both from note to note, and from time t1 to time t2.

I've written a little routine in Python to pull apart the different dimensions in a MIDI file and analyse it in time segments for the interactions between the entropies of the different kinds of description (I'll put the code on GitHub once I've ironed-out the bugs!). 
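Since that code isn't published yet, here is a minimal sketch of the kind of routine described - not the actual one. It assumes the third-party mido library for MIDI parsing, and it only treats three dimensions (pitch, velocity and inter-onset interval), chunked into 2-second segments:

```python
# Sketch of a per-chunk entropy analysis of a MIDI file. Assumes the "mido"
# library (pip install mido); the choice of dimensions is illustrative only.
import math
from collections import Counter
import mido

def shannon_entropy(symbols):
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def midi_entropies(path, chunk_seconds=2.0):
    """Per chunk, the entropy of pitch, velocity and (rounded) inter-onset interval."""
    elapsed, last_onset = 0.0, None
    chunks = {}  # chunk index -> {dimension: list of observed symbols}
    for msg in mido.MidiFile(path):              # iteration yields delta times in seconds
        elapsed += msg.time
        if msg.type == 'note_on' and msg.velocity > 0:
            idx = int(elapsed // chunk_seconds)
            chunk = chunks.setdefault(idx, {'pitch': [], 'velocity': [], 'ioi': []})
            chunk['pitch'].append(msg.note)
            chunk['velocity'].append(msg.velocity)
            if last_onset is not None:
                chunk['ioi'].append(round(elapsed - last_onset, 2))
            last_onset = elapsed
    return {idx: {dim: shannon_entropy(vals) for dim, vals in dims.items() if vals}
            for idx, dims in sorted(chunks.items())}

if __name__ == '__main__':
    # 'example.mid' is a placeholder filename
    for idx, entropies in midi_entropies('example.mid').items():
        print(idx, entropies)
```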

Analysing the MIDI data produces entropies over time sections (I used 2-second chunks).
These values for entropy for each of the dimensions can be plotted against one another (one of the beauties of entropy is that it normalises the "surprise" in anything - so sound can be compared to vision, for example). Then we can do more with the resulting comparative plots. For example, we can spot where the entropies move together - i.e. where it seems that one entropy is tied to another. Such behaviour might suggest that a new variable could be identified which combines the coupled values, and the occurrence of that new variable can then be searched for and its entropy calculated. This overcomes the fundamental problem with Shannon's approach, namely that it seems tied to a predefined set of variables.
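As a sketch of what "spotting where the entropies move together" could look like (building on the hypothetical midi_entropies routine above), one crude approach is to correlate the per-chunk entropy series of each pair of dimensions:

```python
# Flag dimensions whose per-chunk entropies move together, using Pearson
# correlation (statistics.correlation requires Python 3.10+). The threshold
# is an arbitrary illustration.
from itertools import combinations
from statistics import correlation, StatisticsError

def coupled_dimensions(entropies_by_chunk, threshold=0.8):
    """entropies_by_chunk: {chunk index: {dimension: entropy}}. Returns strongly coupled pairs."""
    dims = sorted({d for chunk in entropies_by_chunk.values() for d in chunk})
    series = {d: [chunk.get(d, 0.0) for chunk in entropies_by_chunk.values()] for d in dims}
    pairs = []
    for a, b in combinations(dims, 2):
        try:
            r = correlation(series[a], series[b])
        except StatisticsError:          # constant series have no defined correlation
            continue
        if r >= threshold:
            pairs.append((a, b, round(r, 2)))
    return pairs
```

Coupled pairs would then be candidates for the "new variable" mentioned above.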

Comparing the interaction of entropies in music can be a process of autonomous pattern recognition - rather like deep learning algorithms. But rather than explore patterns in a particular feature, it explores patterns in surprisal between different features: the principal value of Shannon's equations is that they are relational. 

The point of pursuing this in music is that there is something in music which is profoundly like social life: its continuous emergence, the ebb and flow of emotional tension, the emergence of new structure, the articulation of a profound coherence, and so on. David Bohm's comment that music allows us to apprehend an "implicate order" is striking. I realised only recently that Bohm's thought and the cosmological thought of the composer Michael Tippett might be connected (Tippett only became aware of Bohm very late in his life, but expressed some interest in it). That's my own process of seeking cosmological order at work!

Tuesday, 5 June 2018

Screens, Print and the Ever-changing lifeworld

I'm writing some music at the moment. I'm using the self-publishing book tools provided by Blurb (http://blurb.co.uk) to help me focus on the always laborious process of studying and playing the written notes and gradually improving them. It seems to be working. My initial not-very-good notes sit on my piano in a beautiful book. I play at them, cross things out, make adjustments, which I then feed back into the score, and will get to the stage where I produce the next printed version.

This process isn't the same as simply printing-off pages. The Blurb book arrives a few days after ordering and it looks beautiful. The pages are bound together, which means that the ordering of the flow of the music is tied into the form of the book. In other words, the form of the book constrains the form of the music as I originally wrote it. The constraint is useful because it means that I have to work with what's there, chipping away bits and pieces.

On a computer screen anything is possible. Any mistake can be made, erased, remade, re-erased, etc. The computer presents unlimited possibilities. And that can be a problem in creative work. Unlimited possibilities = increased uncertainty in making decisions about what to do. The computer presents an ever-changing lifeworld.

As human beings (and indeed, as animals), we desire a manageably stable relationship with our environment. It is this primal force which sits behind Lorenz's 'imprinting' and Bowlby's 'attachment'. It starts with proximity to the parent, and transforms into proximal relationships to objects such as toys and teddy bears, and later, I think, into attachments to ideas - where some of those ideas are our own creations. This primal force is something which is destabilised by computers - and particularly by the AI-driven social media which is ever-changing.

I noticed that Dave Elder-Vass wrote about our 'attachment' to online services (although he never mentions Bowlby) in his recent "Profit and Gift in the Digital Economy". His instinct is right, but what he calls attachment is, I suspect, a clinging-on to some kind of stability. Facebook is like Harlow's wire-frame "mother": as it changes, we are compelled to follow. But as we do so, we are taken back to that primal stage of imprinting when we were babies. In adult life, however, we learn to create our own environment with concepts, artefacts and tools. Higher learning is an important stage of development in enabling us to do this.

The important point is that the adult life of declaring new concepts and ideas entails acts of communication which connect something inside us (a psychodynamic process) to something in our environment (a communicative process). The balance between the inner process and the outer process is a sign of health in the individual's relation to the world. So what if the communicative dimension is replaced with a constant stream of visual disruptions which demand the maintenance of proximity towards them? How do these inner world phenomena get expressed? How is the balance between inside and outside maintained?

I think the answer is, it isn't. There's something stupid about the way that the continually shifting phenomena of the online world mean that the outer world stability which is necessary for personal growth is never allowed to form. The reason is partly to do with the corporate business models of the social media companies: they need an ever increasing range of transactions with their customers in order to justify their existence and maintain their value. This corporate model necessitates damaging the mental health of users by destabilising their lifeworld. The obsession with social media may be a kind of PTSD: might we see lawsuits in the future???

So what of print and my music writing? My book of  notes arrives a few days after I ordered it, and it stays with me. I continually glance through it, thinking about changes and improvements, and scribbling all over it. But the book is stable. It becomes my attachment object, and since it is stable, I can coordinate the flow between my inner processes and my outer processes.

I encouraged a friend who is currently writing up their PhD thesis to send their draft document to Blurb to get it printed: "You need multiple descriptions of the thing you are working on in order to focus and develop your ideas". He did it, and it seems that his experience is very similar to my own.

There's something important about print. As the internet becomes ever more controlled by governments and corporations, I wouldn't be surprised to see what is effectively the 3D printing of books become a major activity in the near future. People often talk about Stewart Brand's "Whole Earth Catalogue" of the 1960s as being a proto-internet. But maybe the book itself is about to find a new lease of life for the sake of everyone's sanity!