Monday, 19 August 2019

Emerging Coherence of a New View of Physics at the Alternative Natural Philosophy Association

The Alternative Natural Philosophy Association met at Liverpool University last week, following a highly successful conference on Spencer-Brown's Laws of Form. There is a profound connection between Spencer-Brown and the physics/natural science community of ANPA, not least in the fact that Louis Kauffman is a major contributor both to the development of Spencer-Brown's calculus and to the application of these ideas in physics.

Of central importance throughout ANPA was the concept of "nothing", which in Spencer-Brown maps on to what he calls the "unmarked state". At ANPA, four speakers, all of them eminent physicists, gave presentations referencing each other, each arguing that the totality of the universe must be zero, and that "we must take nothing seriously".

The most important figure in this is Peter Rowlands. Rowlands's theory of nature has been in development for 30 years, and over that time he has made predictions about empirical findings which were dismissed when he made them, but were subsequently discovered to be true (for example, the acceleration of the universe, and the ongoing failure to discover supersymmetric particles). If these were just lucky guesses, that would be one thing, but for Rowlands they were the logical consequence of a thoroughgoing theory which took zero as its starting point.

Rowlands articulates a view of nature which unfolds nothing at progressively more complex orders. He argues that the dynamic relationships between the most basic elements of the universe (mass, space, time and charge) arrange themselves at each level of complexity in ways which effectively cancel each other out, through a mathematical device called a nilpotent: an expression which, multiplied by itself, yields zero.

This brilliant idea cuts through a range of philosophical problems like a knife. It is hardly surprising that, as John Hyatt pointed out in a brilliant presentation, Shakespeare had an intuition that this might be how nature worked:
Our revels now are ended.
These our actors,
As I foretold you,
were all spirits, and
Are melted into air, into thin air:
And like the baseless fabric of this vision,
The cloud-capp'd tow'rs, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve,
And, like this insubstantial pageant faded,
Leave not a rack behind.
We are such stuff
As dreams are made on;
and our little life
Is rounded with a sleep.
But Rowlands needs a mechanism, or an "engine", to drive his "nothing-creating" show. He uses group theory in mathematics, and William Rowan Hamilton's quaternions: a four-dimensional extension of the complex numbers with three imaginary units i, j, k, where i*i = j*j = k*k = i*j*k = -1. Mapping these quaternion units on to the basic components of physical systems (plus the scalar unit which makes up the four), he sees mass, time, charge and space represented in a dynamic numerical system which is continually producing nilpotent expressions. This provides an ingenious way of re-expressing Einstein's mass-energy-momentum relation, but most importantly it allows the Einstein equation to be situated as entirely consistent with Dirac's equation of quantum mechanics. Rowlands is able to re-express Dirac's equation in simpler terms, using his quaternions as operators in a similar and commensurable way to his treatment of Einstein's equation.
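The flavour of this can be sketched numerically. What follows is my own illustration, not Rowlands's actual formalism (which uses a richer algebra combining quaternions with multivariate vectors): quaternions with complex coefficients are enough to show an object built from energy, momentum and mass squaring to zero exactly when Einstein's relation E^2 = p^2 + m^2 holds.

```python
# Sketch: quaternions as 4-tuples (a, b, c, d) = a + b*i + c*j + d*k,
# with complex numbers allowed as coefficients.

def qmul(q1, q2):
    """Hamilton product of two quaternions."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The defining identities i*i = j*j = k*k = i*j*k = -1:
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)

# A nilpotent built from momentum p, mass m and energy E = sqrt(p^2 + m^2):
# q = p*i + m*j + (1j*E)*k, where 1j is the ordinary complex unit.
p, m = 3.0, 4.0
E = (p**2 + m**2) ** 0.5
q = (0, p, m, 1j * E)
print(qmul(q, q))   # every component vanishes: q is nilpotent
```

The square's scalar part is -(p^2 + m^2 + (iE)^2) = E^2 - p^2 - m^2, so it vanishes precisely when the Einstein relation is satisfied: "nothing" as the product of a dynamic balance.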

As Mike Houlden argued at the conference, this way of thinking helps to unpick some fundamental assumptions about the nature of the universe and the beginning of time. For example, the concept held by most physicists that there is a fixed amount of dark matter in the universe, created instantly at the big bang, is challenged by Rowlands's system, which articulates a continual creation process: a recursive process of symmetry-breaking throughout nature, from quantum phenomena through to biology and, by extension, consciousness.

Rowlands articulates a picture similar to that of Bohm - particularly in upholding the view of nature as a "hologram" - but his thoroughgoing mathematics produces what Bohm was arguing for: an algebra for the universe.

Empirical justification for these ideas may not be far off. As Mike Houlden argued, the discovery of dark energy (presumed to be the driver for the acceleration of the universe) and the assumption that the proportion of dark matter in the universe was fixed at the big bang (whatever that is) are likely to be questioned in the future. Rowlands's theory helps to explain the creation of dark matter and dark energy as balancing processes which are the result of the creation of mass, and which serve to maintain the nilpotency of the universe.

From an educational perspective this is not only extremely exciting, but also relevant. The fundamental coherence of the universe and the fundamental coherence of our understanding of the universe are likely to be connected as different expressions of the same broken symmetry. Learning, like living, as Shakespeare observed, is also much ado about nothing. It's not only the cloud-capp'd towers which disappear.

Sunday, 4 August 2019

China's experiments with AI and education

At the end of Norbert Wiener's "The Human Use of Human Beings", he identified a "new industrial revolution" afoot, which would be dominated by machines replacing, or at least assisting, human judgement (this was in 1950). Wiener, having invented cybernetics, feared for the future of the world: he understood the potential of what he and his colleagues had unleashed, which included computers (John von Neumann), information theory (Claude Shannon) and neural networks (Warren McCulloch). He wrote:
"The new industrial revolution is a two-edged sword. It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction." (p.162)
The destructive power of technology would result, Wiener argued, from our "burning incense before the technology God". Well, this is what's going on in China's education system right now.

There has, unsurprisingly, been much protest by teachers online to this story. However, sight must not be lost of the fact that there are indeed benefits that the technology brings to these students, autonomy being not the least of them. But we are missing a coherent theoretical strand that connects good face-to-face teaching to Horrible Histories, Khan academy and this AI (and many steps in-between). There is most probably a thread that connects them and we should seek to articulate it as precisely as we can, otherwise we will be beholden to the rough instinct of human beings unaware of their own desire to maintain their existence within their current context, in the face of a new technology which will transform that context beyond recognition.

AI gives us a new powerful God in front of which we (and particularly our politicians) will need to resist the temptation to light the incense. But many will burn incense, and this will fundamentally be about using this technology to maintain the status quo in education in an uncertain environment. So this is AI to get the kids through "the test" more quickly. And (worse) the tests they are concerned with are STEM. Where's the AI that teaches poetry, drama or music?

It's the STEM thing which is the real problem here, and ironically, it is the thing which is most challenged by the AI/machine learning revolution (actually, I think the best way to describe the really transformative technology is to call it an "artificial anticipatory system", but I won't go into that now). This is because in the world that's going to unfold around us - the world that we're meant to be preparing our kids for - machine learning will provide new "filters" through which we can make sense of things. This is a new kind of technology which clearly works - within limits, but well beyond expectations. Most importantly, while the machine learning technology works, nobody knows exactly how these filters work (although there are some interesting theories).

Machine learning is created through a process of "training" - where multiple redundant descriptions of phenomena are fed into a machine for it to understand the underlying patterns behind them. Technical problems in the future will be dealt with through this "training" process, in the way that our current technical problems demand "coding" - the writing of specific algorithms. It is also likely that many professionals in many domains will be involved in training machines. Indeed, training machines will become as important as training humans.

This dominance of machine training and partnership between humans and machines in the workplace means that the future of education is going to have to become more interdisciplinary. It won't be enough for doctors to know about the physiological systems of the body; professionally they will have to be deeply informed about the ways that the AI diagnostic devices are behaving around them, and take an active role in refining and configuring them. Moreover, such training processes will involve not only the functional logic of medical conditions, but the aesthetics of images, the nuances of judgement, and the social dynamics of machines and human/organisational decision-making. So how do we prepare our kids for this world?

The fundamental problems of education have little to do with learning stuff to pass the test: that is a symptom of the problem we have. They have instead to do with organising the contexts for conversations about important things, usually between the generations. So the Chinese initiative basically exacerbates a problem produced by our existing institutional technologies (I think of Wiener's friend Heinz von Foerster: "we must not allow technology to create problems it can solve"). So AI is dragged out of what Cohen and March famously called the "garbage can" of institutional decision-making, when the real problem (which is avoided) is, "how do we reorganise education so as to prepare our kids for the interdisciplinary world as it will become?"

This is where we should be putting our efforts. Our new anticipatory technology provides new means for organising people and conversations. It actually may give us a way in which we might organise ourselves such that "many brains can think as one brain", which was Stafford Beer's aim in his "management cybernetics" (Beer was another friend of Wiener). My prediction is that eventually we will see that this is the way to go: it is crucial to local and planetary viability that we do.

Will China and others see that what they are currently doing is not a good idea? I suspect it really depends not on their attitude to technology (which will take them further down the "test" route), but their attitude to freedom and democracy. Amartya Sen may well have been right in "Development as Freedom" in arguing that democracy was the fundamental element for economic and social development. We shall see. But this is an important moment.

Wednesday, 31 July 2019

Fractals of Learning

I've been doing some data analysis on responses of students to a comparative judgement exercise I did with them last year. Basically, they were presented with pairs of documents on various topics in science, technology and society, and asked "Which do you find more interesting and why?"

The responses collected over two weeks from about 150 students were surprisingly rich, and I've become interested in drawing distinctions between them. Some students clearly are transformed by many of the things which they read about (and this was in the context of a face-to-face course which also gravitated around these topics), and their answers reflect an emerging understanding. Other students, while they might also appear to engage with the process, are a bit more shallow in their response. 

To look at this, I've looked at a number of dimensions of their engagement and plotted the shifts in entropy in each dimension. So, we can look at the variety of documents or topics they talk about: some students stick to the same topic (so there is continually low entropy), while others choose a wide variety (so entropy jumps around). The amount of text they write also has an entropy over time, as does the entropy of the text itself. This last one is interesting because it can reveal key words in the same way that a word cloud might: key concepts get repeated, so the entropy gets reduced. 
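A minimal sketch of the entropy calculation described above (with made-up responses; the real data was student comments collected over two weeks): Shannon entropy of word choice per time slot, where repeated key concepts pull the entropy down and varied vocabulary pushes it up.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """H = -sum(p * log2(p)) over the frequency distribution of tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical student responses in successive time slots:
slots = [
    "robots are interesting because robots can learn from examples",
    "entropy entropy entropy is the key concept here",
    "I keep returning to the idea of entropy and coherence in learning",
]
for t, text in enumerate(slots):
    print(t, round(shannon_entropy(text.lower().split()), 3))
```

Plotting the shifts in these values over time, for each dimension of engagement (topic choice, amount of text, text content), gives the kind of graph discussed below.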

What then would we expect to see of a student gradually discovering some new concept which helps them connect many topics? Perhaps an initial phase of high entropy in document choice, high entropy in concepts used and low entropy in the amount of text (responses might be a similar length). As time goes on, a concept might assert itself as dominant in a number of responses. The concept entropy goes down, while the document entropy might continue to oscillate. 

The overall pattern is counterpoint, rather like this graph below:

The graphical figure above is a representation of the positive and negative shifts in entropy of the main variables (going across the top), followed by the positive and negative shifts in the relative entropy of variables to one another. The further over to the right when patterns change is an indication of increasing "counterpoint" between the different variables. The further to the left is a sign of particular change in particular variables. From top to bottom is time, measured in slots where responses were made.

Not all the graphs are so rich in their counterpoint. This one (admittedly with fewer comparisons) is much more synchronous. There's a "wobble" in the middle where things are shifted in different directions, while at the end, the comments on the documents, the type of documents, and the type of topics all vary at once. If a common concept had been found here, one would expect the entropy of the comments to be lower. But the graph and the diagram provide a frame for asking questions about it.
This one is richer. It has a definite structure of entropies shifting up and down, and at the end a kind of unity is produced. Looking at the student comments, it was quite apparent that a number of concepts had had an impact.

It doesn't always work as a technique, but there does appear to be a correlation between the shape of these graphs and the ways in which the students developed their ideas in their writing which merits further study.

More interestingly, this one (below) produced a richly contrapuntal picture, but when I looked at the data, it was collected over a very short period of time, meaning that it was the result of a one-off concentrated effort, rather than a longitudinal process. But that is interesting too, because there is a fractal structure to this stuff. A small sample can be observed to display a pattern which can then be contextualised within a larger context where that pattern might be repeated (for example, with a different set of concepts), or it might be shown to be an isolated island within a larger pattern which is in fact quite different.
Either way, the potential is there to use these graphs as a way of getting students to reflect on their own activities. I'm not sure I would go so far as to say "your graph should look like this", but awareness of the correlations between intellectual engagement and patterns of entropy is an interesting way of engaging learners in thinking about their own learning processes. Actually, it also might be possible to produce a 3d landscape from these diagrams, and from that a "google map" of personal learning: now that is interesting, isn't it?

Monday, 29 July 2019

Recursive Pedagogy, Systems thinking and Personal Learning Environments

Most of us are learning most of what we know, what we can do, what we use on an everyday basis, what we talk about to friends and colleagues, online. Not sat in lectures, gaining certificates, or sitting exams. Those things (the formal stuff) can provide 'passports' for doing new things, gaining trust in professional colleagues, getting a new job. But it is not where the learning is really happening any more. The extent to which this is a dramatic change in the way society organises its internal conversations is remarkably underestimated. Instead, institutions have sought to establish the realm of 'online learning' as a kind of niche - commodifying it, declaring scarcity around it, creating a market. This isn't true of just educational institutions of course. Social media corporations saw a different kind of marketing opportunity: to harness the desire to learn online into a kind of game which would continually manipulate and disorient individuals in the hope that they might buy stuff they didn't want, or vote for people who weren't good for them. But the basic fact remains: most of us are learning most of what we know online.

That means machines are shaping us. One senses that our sense of self is increasingly constituted by machines. I wonder if the slightly paranoid reactionaries who worry about the power of digital 'platforms' are really anxious about an assault on what they see as 'agency' and 'self' by corporations. But are we so sure about the nature of self or agency in the first place? Are we being naive to suppose autonomous agents acting in an environment of machines? Wasn't the constitution of self always trans-personal? Wasn't it always trans-personal-mechanical? The deeper soul-searching that needs to be done is a search for the individual in a world of machines. Some might say this is Latour's project - but seeing 'agency' everywhere is not helpful (what does it mean, exactly?). Rather more, we should look to Gilbert Simondon, Luhmann, Kittler, and a few others. There's also a biological side to the argument which situates 'self' and consciousness with cells and evolutionary history, not brains. That too is important. It's a perspective which also carries a warning: that the assertion of agency, autonomy and self against the machine is an error in thinking which produces in its wake bad decisions, ecological catastrophe and the kind of corporate madness which our platform reactionaries complain about in the first place!

Having said this, we then need to think about 'personal' learning in a context where the 'personal' is constituted by its mechanical and social environment. Machine learning gives us an insight into a way of thinking about 'personal' learning. Deep down, it means 'system awareness': to see ourselves as part of a system which constitutes us being aware of a system. It's recursive.

Some people object to the word 'system', thinking that it (again) denies 'agency'. Ask them to define what they mean by agency, and we end up confused. So it's useful to be a bit clearer about 'system'. Here's my definition:

To think of 'systems' is a thought that accepts that the world is produced by thought.

This is why I'm a cybernetician. I think this is critically important. To deny that thought produces the world is to set thought against those things which constitute it. When thought is set against that which constitutes it, it becomes destructive of those things it denies: the planet, society, love.

So what of learning? What of learning online? What of personal learning?

It's about seeing our learning as a recursive process too. To study something is to study the machines through which we learn something. It may be that the machine learning revolution will make this more apparent, for the machines increasingly operate in the same kind of way that our consciousness operates in learning the stuff that is taught by the machines. It's about closing the reflexive loop.

So what about all that stuff about certificates, trust, passports, etc? It seems likely to me that closing the reflexive loop will produce new ways of codifying what we know: a kind of meta-codification of knowledge and skill. Against this, the institutional stamp of authority will look as old-fashioned as the wax seal. 

Monday, 15 July 2019

Interdisciplinary Creativity in Marseille

Last week I was lucky enough to go to this year's Social Ontology conference in Marseille. I've been going to southern France for a few years now to sit with economists and management theorists (no cyberneticians apart from me!) and talk about everything. Academic "authority" was provided by Tony Lawson (whose Cambridge social ontology group was the model for the meeting) and Hugh Willmott, whose interdisciplinarity helped establish Critical Management Studies. Three years ago, I hosted the event in Liverpool, and more and more it feels like a meeting of friends - a bit like the Alternative Natural Philosophy Association, which I'm hosting in Liverpool in August, but with management studies instead of physics.

This year, Tony didn't come, but instead we had David Knights from Lancaster University. It's always been an intimate event - and usually better for that, where the discussion has been of a very high level. Gradually we have eschewed papers, and focused entirely on dialogue for two days on a topic. This year's topic was Creativity.

If I'd read David Bohm before I'd started coming to these conferences, I would have known exactly what this was and why it was so good. Now I know Bohm, and I know he would have absolutely understood what we were doing. And with a topic like creativity, understanding what we were doing, where we were going, or where we would end up, was often unclear. Dialogue is a bit scary - it's like finding your way through the fog. Sometimes people get frustrated, and it is intense. But it is important to have faith that what we manage to achieve collectively is greater than what could be achieved by any individual.

So what conclusions did we reach? Well, I think I can sum up my own conclusions:
  • Creativity is not confined to human beings. It is a principle of nature. It may be the case that creative artists tune-in to natural processes, since this would explain how it is that their labours can result in something eternal. 
  • Creativity is connected to coherence. It is an expression of fundamental underlying patterns. In an uncertain environment, the necessity for the creative act is a necessity to maintain coherence of perception.
  • Creativity can be destructive. However (my view) I think that "creative destruction" needs unpicking. Creativity may always create something new which is additional to what was there before. This creates an increase in complexity and a selection problem. The "destruction" is done in response to this increase in complexity - often by institutions ("from now on, we are going to do it like this!")
  • The difference between creativity with regard to technical problems and creativity in human problems was discussed. Technical creativity is also driven by the drive for individual coherence - particularly in addressing ways of managing complexity - but it loses sight of the institutional destructive processes that may follow in its wake. 
  • The conversion of everything to money is, I think, such a "technical" innovation. On the one hand, money codifies expectations and facilitates the management of complexity. However, it prepares the way for the destruction of richness in the environment. 
  • The idea of "origin-ality" was explored. "Original" need not be new, but rather, connected to deeper "origins" in some way. This relates directly to the idea of creativity as a search for coherence.
  • Time is an important factor in creativity - it too may feature as a fundamental dimension in the coherence of the universe to which artists respond (particularly musicians, dancers, actors). Time raises issues about the nature of anticipation in aesthetic experience, and the perception of "new-ness"
  • A genealogy of creativity may be necessary - a process of exploring through dialogue how our notions of creativity have come to be. 
  • The genealogical issue is important when considering the role of human creativity in failures of collective decision-making and the manifest destruction of our environment. I'm inclined to see the issue of genealogy as a kind of laying-out of the levels of recursion in the topics and discourses of creativity, and this laying out may be necessary to provide sufficient flexibility for humankind to address its deepest problems.
  • Psychoanalytic approaches to creativity are useful, as are metaphors of psychodynamics. Michael Tippett's discussion of his own creative process had a powerful effect on everyone. However, the value of psychodynamics may lie in the fact that similar mechanisms are at work at different levels of nature (for example, cellular communication).
Michael Tippett from Directors Cut Films on Vimeo.

I took my Roli Seaboard with me, which inspired people to make weird noises. Music is so powerful to illustrate this stuff, and I invited people to contribute to a sound collage of the conference... which you can hear here. Actually, it's the first time I've heard a reflexology technique being used on the Seaboard!

Tuesday, 9 July 2019

Creativity and Novelty in Education and Life

A number of things have happened this week which have led me to think about the intellectual efforts that academics engage in to make utterances which they claim to be insightful, new or distinct in some other way. The pursuit of scholarship seems to result from some underlying drive to uncover things, the communication of which brings recognition by others that what one says is in some way "important" or "original", and basically confers status. Educational research is particularly interesting in this regard, since very little that is uttered by anyone is new, yet it is often presented as being new. I don't want to criticise this kind of fakery in educational research (but it is fakery), of which we are all guilty, but rather to ask why it is we are driven to do it. Fundamentally, I want to ask "Why are we driven to reclaim ideas from the past as new and rediscovered in the present?" Additionally, I think we should ask about the role of technology in facilitating this rediscovery and repackaging of the past.

Two related questions accompany this. The first is about "tradition". At a time when we see many of the tropes of statehood, politics and institutional life becoming distorted in weird ways (by the Trumps, Farages and co), what is interesting is to observe what is retained in these distortions and what is changed. Generally it seems that surface appearance is preserved, but underlying structure is transformed from the structures that were once distributed, engaging the whole community in the reproduction of rituals and beliefs, to structures which leave a single centre of power responsible for the reproduction of rituals and beliefs.  This is, in a certain sense, a creative act on the part of the individual who manages to subvert traditions to bend to their own will.

Central to this distortion process is the control of the media. Technology has transformed our communication networks which, before the internet, were characterised by personal conversations occurring within the context of global "objects" such as TV and newspapers. Now the personal conversations are occurring within the frame of the media itself. The media technologies determine the way the communication game is played, and increasingly intrude on personal conversations where personal uncertainties could be resolved. The intrusion of media technologies increasingly serves to sway conversation in the direction of those who control the media, leaving personal uncertainties either unresolved, or deliberately obfuscated. The result is both a breakdown in mental health and an increasing lack of coherence, and increased control by media-controlling powers.

Where does creativity and novelty sit in all of this? Well, it too is a kind of trope. We think we are rehearsing being Goethe or Beethoven, but while the surface may bear some similarity, the deep structure has been rewired. More importantly, the university has become wired into this mechanism too. Is being creative mere appearance in a way that it wasn't in a pre-internet age?

At the same time, there's something about biology which is driven to growth and development to overcome restriction. Our media bubble is restriction on growth, and right now it looks menacing. The biological move is always to a meta-level re-description. Epochs are made when the world is redescribed. But we cannot redescribe in terms of "creativity" or "innovation" because those things are tropes wired into the media machine. Seeing the media machine for what it is may present us with some hope - but that is very different from our conventional notions of creativity.

Sunday, 7 July 2019

The Preservation of Context in Machine learning

I'm creating a short online course to introduce staff in my faculty to machine learning. It's partly about awareness-raising (what's machine learning going to do to medicine, dentistry, veterinary science, psychology, biology, etc?), and partly about introducing people to the tools which are increasingly accessible and available for experimentation.

As I've pointed out before, these tools are becoming increasingly standardised, with the predominance of python-based frameworks for creating machine learning models. Of course, python presents a bit of a barrier - it's so much better if you can do it in the web, and indeed, if you could do it in a desktop app based on web technologies like Electron.js. So that is what I'm working on.

Putting machine learning tools in the hands of ordinary people is important. The big internet corporations want to present a message that only they have the "big data" and expertise sufficient to really handle the immense power of AI. I'm not convinced. First of all, personal data is much "bigger" than we think, and secondly, our machine learning tools are hungry for data partly because we don't fully understand how they work. The real breakthrough will come when we do understand how they work. I think this challenge is connected to appreciating the "bigness" of personal data. For example, you could think of your 20 favourite films, and then rank them in an order. How much information is there?

Well (without identifying my favourites), we have a ranked list: the favourite in position 1, the next in position 2, and so on down to position 20.

Now if we consider that every item in the ranking is a relation to every other item, then the amount of data is actually the number of ordered pairs of items: one row for each pair, recording which of the two ranks higher, and so on. That's 20!/(20-2)!, or 380 rows of data from a rank list of 20 items.
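The expansion can be sketched in a few lines of Python (the item names here are placeholders, not my actual list):

```python
from itertools import permutations

# A ranked list of 20 items, rank 1 (favourite) down to rank 20:
ranked = [f"film_{n}" for n in range(1, 21)]

# Every ordered pair (a, b) becomes one training row, labelled with
# whether a is ranked above b:
rank_of = {item: pos for pos, item in enumerate(ranked)}
rows = [(a, b, rank_of[a] < rank_of[b]) for a, b in permutations(ranked, 2)]

print(len(rows))   # 380 = 20 * 19 = 20!/(20-2)!
```

Each row is exactly the kind of pairwise judgement ("this one over that one") that the comparative exercise elicits, which is why a short ranked list trains better than it looks.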

So could you train an algorithm to learn our preferences? Why not?
Given a new item, can it have a guess as to which rank that item might be? Well, it seems it can have a pretty good stab at it.

This is interesting because if the machine learning can estimate how "important" we think a thing is, and we can then refine this judgement in some way (by adjusting its position), then something is happening between the human and the machine: the machine is preserving the context of the human judgement which is used to train it.

The predominant way machine learning is currently used is to give an "answer": to identify the category of thing a item is. Yet the algorithm that has done this has been trained by human experts whose judgements of categories is highly contextual. By giving an answer, the machine strips out the context. In the end, information is lost. Using ranking, it may be possible to calculate how much information is lost, and from there to gain a deeper understanding of what is actually happening in the machine.
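As a rough back-of-envelope illustration of that information loss (my own, not a claim about any particular system): if all outcomes are equally likely, a position in a rank list of 20 carries log2(20) bits, while a bare yes/no category carries at most 1 bit.

```python
import math

# Information carried by an answer, assuming equally likely outcomes:
rank_bits = math.log2(20)      # which of 20 rank positions: ~4.32 bits
category_bits = math.log2(2)   # a bare yes/no category: 1 bit

print(round(rank_bits - category_bits, 2), "bits lost")
```

Real judgements are not uniformly distributed, so the true figure would come from the entropies of the actual distributions, but the gap makes the point: an "answer" discards most of the context of the judgement that produced it.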

Losing information is a problem in social systems. Where context is ignored, tyranny begins. I think this is why everyone needs to know about (and actively engage with) machine learning.

Saturday, 6 July 2019

Communication's Illusion

There was a Nostradamus prediction revealed at the beginning of this year that 2019 was the year we became closer to animals (see for one of the many click-bait references to this). One interpretation is that we might learn to talk to animals...

The question interests me because it invites a comparison between the noises we make when we talk and the noises that animals make. Because we are largely obsessed with processing "information" and "meaning" in our communication (that is, attenuating the richness of sounds to codes), we tend to be oblivious to the amount of redundancy our communication entails, and we also assume that because animal communication has so much redundancy, it carries less meaning. The redundancy of animal communication is much more obvious: why doesn't a crow "caw" only once? Wouldn't one "caw" do the job? Why does it do it regularly 4 or 5 times (or more)? Why with the same rhythmic regularity?

Understanding of information and meaning in human communication is far from complete, and certainly for the latter, the scientific consensus seems to be pointing to the fundamental importance of redundancy, or constraint, in the establishment of meaning in human communication. Animal communication is likely to be just as meaningful to animals as our communication is to us. Indeed, our perception of "consciousness" among animals seems to be dependent on our observing animals operating within a lifeworld which we ourselves recognise. Like this ape who was filmed using a smartphone:
One problem we have in appreciating this is the belief that human consciousness is exceptional. This single belief could turn out to be the greatest scientific error, from which the destruction of our environment stems. It may be as naive as believing the earth to be the centre of the universe. In biology, many believe DNA is the centre of the universe of life and consciousness, and human DNA is special. I'm with John Torday who argues this view is ripe for a similar Copernican transformation.

I'm making a lot of weird music at the moment using a combination of the piano and the Roli Seaboard. The Seaboard can create disturbed and confused "environments". The piano tries to create redundancies in the patterns of its notes, harmonies, rhythms, and so on. As living things, all us animals inhabit a confusing environment. The creation of redundancy in communication seems fundamental to the battle to maintain coherence of understanding and effectiveness of coordination. So birds tweet their rhythmic patterns... and I blog! (and others Tweet!). Even when we talk of deep things like philosophy, we are usually repeating what's gone before. But somehow, we need to do it. We need the redundancy.

Do we only think we are saying "something" - some key "new" information? Do we only think this because we believe our consciousness is "special"? This is an uncomfortable thought. But I can't help wondering whether "talking to animals" is less about being Dr Doolittle, and more about realising how much more like birds our human communication really is.

Friday, 28 June 2019

Choosing a new VLE: Technological Uncertainty points towards Personal Machine Learning

I sat in a presentation from a company wanting to replace my university's Virtual Learning Environment yesterday. It was a slick presentation (well practiced) and people generally liked it because the software wasn't too clunky. Lack of clunkiness is a sign of quality these days. New educational technology's functionality is presented as a solution to problems which have been created by other technology, whether it is the management of video, the coordination of marks, mobile apps, management of threaded discussion, integration of external tools, and so on. A solution to these problems creates new options for doing the same kinds of things: "use our video service to make the management of video easier", "use our PDF annotation tool to integrate with our analytics tools", etc. Redundancy of functionality is increased in the name of simplification of technological complexity in the institution. But in the end, it can't keep up: what we end up with is another option to choose from, an increase in the uncertainty of learners and teachers, which inevitably necessitates a managerial diktat to insist on the use of tool x rather than tool y. Technology that promises freedom produces restriction, and an increasingly wide stand-off between the technology of the institution and the technology of the outside world.

The basic thesis of my book "Uncertain Education" is that technology always creates new options for humans to act. Through this, the basic human problem of choosing the manner and means of acting becomes less certain. Institutions react to rising uncertainty in their environment often by co-opting technologies to reinforce their existing structures: so "institutional" tools, rather than personal tools, dominate. Hence we have corporate learning platforms in universities, and the dominance of corporate online platforms everywhere else. This is shown in the diagram below: the "institution's assistance" operates at a higher-level "metasystem", which tries to attenuate the uncertainty of learners and teachers in the primary system (the circle in the middle). Institutional technology like this seeks to ease the burden of choice of technology for workers, but the co-opting institutional process can't keep up with the pace of change in the outside world - indeed, it feeds that change. This situation is inherently unstable, and will, I think, eventually lead to transformation of organisational structures. New kinds of tools may drive this process. I am wondering whether personal AI, or more specifically, personal machine learning, might provide a key to transformation.

Machine learning appears to be a tool which also generates many new options for acting. Therefore it too should exacerbate uncertainty. But is there a point at which the tools which generate new options for acting create new ways in which options might be chosen by an individual? Is there a point at which this kind of technology is able to assist in the creation of a coherent understanding of the world in the face of the explosive complexification produced by technology? One of the ways this might work is if machine learning tools could assist in stimulating and coordinating conversations directly between teachers and learners. Rather than an institutional metasystem, machine learning could operate at the level of the human system in the middle, helping to mitigate the uncertainty that would otherwise have to be managed by the higher-level system:

Without wanting to sound too posthuman, machine learning may not be so much a tool as an evolutionary "moment" in the relationship between humans and machines. It is the moment when the separation between humans and machines, which humans have defended since the industrial revolution in what philosopher Gilbert Simondon calls "facile humanism", becomes indefensible. Perhaps people like Friedrich Kittler and Erich Hörl are right: we are no longer human selves constituted of cells and psychology existing in a techno-social system; now the technical system constitutes the human "I" in a process intermediated by our cells and our consciousness.

I wonder if the point is driven home when we appreciate machine learning tools as an anticipatory system. Biological life drives an anticipatory process in modelling and adapting to the environment. We don't know how anticipation occurs, but we do possess models of what it might be like. One way of thinking about anticipation is to imagine it as a kind of fractal - something which approximates to David Bohm's 'implicate order' - an underlying and repeated symmetry. We see it in nature, in trees, in art, music, and in biological developmental processes. Biological processes also appear to be endosymbiotic - they absorb elements of the environment within their internal structure, repeating them at higher levels of organisation. So cells absorbed the mitochondria which once lived independently, and the whole reproduces itself at a higher order. This is a fractal.

Nobody quite knows how machine learning works. But the suspicion is that it too is a fractal. Machine learning anticipates the properties of an object it is presented with by mapping features which are detected through progressive layers of analysis, each focusing on smaller and smaller chunks of an image. The fractal is created by recursively exploring the relationship between images and labels across different levels of analysis. Human judgements which feed the "training" of this system eventually become encoded as a set of "fixed points" in a relational pattern in the machine's model.

I don't think we've yet grasped what this means. At the moment we see machine learning as another "tool". The problem with machine learning as a "tool" is that it is then used to provide an "answer": that is, it is used to filter-out information which does not relate to this answer. Most of our "information tools", which provide us with increased options for doing things, actually operate like this: they discard information, removing context. This adds to the uncertainty they produce: tool x and tool y both do similar jobs, but they filter out different information. Choosing which tool to use is to decide which information we don't need, which requires human anticipation of an unknowable future. Fundamentally, this is the problem that any university wanting to invest in new technology is faced with. Context is everything, and identifying the context requires anticipation.

Humans are "black boxes": we don't really know how any of us work. But as black boxes who converse, we gradually tune in to each other, understanding the behaviour of each of us, and in the process, understanding more about our own "black box". In the process we manage the uncertainty of our own existence. Machine learning is also a black box. So might the same thing work? If you put two black boxes together, do they begin to "understand" each other? If you put a human black box together with a machine black box, does the human gain insight into the machine, and insight into themselves, through exploring the operation of the anticipatory system in the machine? If you put a number of human black boxes together with a machine black box, does it stimulate conversation between the humans as well as engagement with the machine? It is important to note that in each of these scenarios, information is preserved: context is maintained with the increase in insight, and can be further encoded by the machine to enrich human conversation.

I wonder if these questions point to a new kind of organisational setup in institutions between humans and technology. I cannot see how the institutional platform can really be a viable option for the future: discarding information is not a way forward. But we need to understand the nature of machine learning, and the ways in which information can be preserved in the human-machine relationship.

Tuesday, 18 June 2019

Machine Learning as a Personal Anticipatory System

Can a living system survive without anticipation? As humans we take anticipation for granted as a function of consciousness: without an ability to make sense of the world around us, and to preempt changes, we would not be able to survive. We attribute this ability to high-level functions like language and communication. At the same time, all living things show an evident ability to adapt to their environments without anything resembling human language, although many scientists are reluctant to attribute consciousness to bacteria or cells. Ironically, this reluctance probably has more to do with our human language for describing consciousness than with the nature of any "language" or "communication" of cells or bacteria!

We believe human consciousness is special, or exceptional, partly because we have developed a language for making distinctions about consciousness which reinforces a separation between human thought and other features of the natural world. In philosophy, the distinction boils down to "mind" and "body". We have now reached a stage of development where continuing to think like this will most likely destroy our environment, and us with it.

Human technology is a product of human thought. We might believe our computers and big data to be somehow "objective" and separate from us, but we are looking at the manifestations of consciousness. Like other manifestations of consciousness such as art, music, mathematics and science, our technologies tell us something about how consciousness works: they carry an imprint of consciousness in their structure. This is perhaps easiest to see in the artifice of mathematics, which whilst being an abstraction, appears to reveal fundamental patterns which are reproduced throughout nature. Fractals, and the imaginary numbers upon which they sit, are good examples of this.

It is also apparent in our technologies of machine learning. Behind the excitement about AI and machine learning lies a fundamental problem of perception: these tools display remarkable properties in their ability to record patterns of human judgement and reproduce them, but we have little understanding of how they work. Of course, we can describe the architecture of a convolutional neural network (for example), but in terms of what is encoded in the network, how it is encoded, and how results are produced, we have little understanding. Work with these algorithms is predominantly empirical, not theoretical. Computer programmers have developed "tricks" for training networks, such as taking a network already trained on existing public domain image sets (using, for example, the VGG16 model), and then retraining only the final layers for the specific images that they want identified (for example, images of diabetic retinopathy, or faces). This works better than training the whole network from scratch on the specific images. Why? We don't know - it just does.
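To make the "trick" concrete, here is a minimal numpy sketch of the idea (my own toy stand-in, not real VGG16 code): a frozen feature extractor plays the role of the pretrained layers, and only a final logistic layer is trained on the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random projection stands in for the frozen, pretrained layers
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """The 'pretrained' part of the network: never retrained."""
    return np.tanh(x @ W_frozen)

def train_head(X, y, lr=0.5, steps=500):
    """Retrain only the final logistic layer on the new task."""
    w = np.zeros(16)
    F = features(X)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))       # logistic prediction
        w -= lr * F.T @ (p - y) / len(y)       # gradient step on cross-entropy
    return w

# Toy "new task": labels that happen to be linearly separable in feature space
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
y = (features(X) @ w_true > 0).astype(float)

w = train_head(X, y)
accuracy = ((features(X) @ w > 0) == (y > 0.5)).mean()
print(accuracy)  # training accuracy of the retrained head
```

The point of the sketch is structural: most of the "knowledge" sits in the frozen part, and only a small, cheap-to-train piece is adapted to the specific task.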

It seems likely that whatever is happening in a neural network is some kind of fractal. The training process of back-propagation involves recursive processing which seeks fixed points in the production of results across a vast range of variables from one layer of the network to the next. The fractal nature of the network means that retraining the network cannot be achieved by tweaking a single variable: the whole network must be retrained. Neural networks are very dissimilar from human brains in this way. But the fractal nature of neural networks does raise a question as to whether the structure of human consciousness is also fractal.

There is an important reason for thinking that it might be. Fractals are by definition self-similar, and self-similarity means that a pattern perceived at one level with one set of variables can be reproduced at another level, with a different set of variables. In other words, a fractal representation of one set of events can have the same structure as the fractal pattern of a different set of events: perception of the first set can anticipate the second set.

I've been fascinated by the work of Daniel Dubois on Anticipatory Systems recently, partly because it is closely related to fractals, and partly because it seems to have a strong correlation to the way that neural networks work. Dubois makes the point that an anticipatory system processes events over time by developing models that anticipate them, whilst also generating multiple possible models and selecting the best fit. Each of these models is a differently-generated fractal.
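Dubois's notion of "incursion" can be illustrated with his incursive version of the Pearl-Verhulst (logistic) map, in which the next state appears on both sides of the equation - the system anticipates its own future value. A small sketch (the parameter value is my choice):

```python
# Recursive (classic) logistic map: x(t+1) = a*x(t)*(1 - x(t))   -- chaotic at a = 4
# Incursive form (Dubois):          x(t+1) = a*x(t)*(1 - x(t+1))
# Solving the incursive form for x(t+1) gives a closed expression:
def incursive_step(x, a=4.0):
    return a * x / (1.0 + a * x)

def recursive_step(x, a=4.0):
    return a * x * (1.0 - x)

x = 0.2
for _ in range(100):
    x = incursive_step(x)
print(round(x, 6))  # settles on the fixed point (a-1)/a = 0.75
```

Where the recursive map at a = 4 bounces chaotically, the anticipatory (incursive) version is stable: building the future state into the present computation tames the dynamics.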

If we want to understand what AI and machine learning really mean for society, we need to think about what use an artificial anticipatory system might be. One dystopian view is that it means the "Minority Report" - total anticipatory surveillance. I am sceptical about this, because an artificial anticipatory system is not a human system: its fractals are rigid and inflexible. Human anticipation and machine anticipation need to work together. But a personal artificial anticipatory system is something that is much more interesting. This is a system which processes the immediate information flows of experience and detects patterns. Could such a system help individuals establish deeper coherence in their understanding and action? It might. Indeed, it might counter the deep dislocation produced by overwhelming information that we are currently immersed in, and provide a context for a deeper conversation about understanding.

Sunday, 16 June 2019

Machine Learning and the Future of Work: Why eventually we will all create our own AIs

I'm on my way to Russia again. I've had an amazing couple of days with a Chinese delegation from Xiamen Eye Hospital and the leading experts in retinal disease in China, who are collaborating with us on a big EPSRC project. There was a very special atmosphere: despite the language differences, we were all conscious of staring at the future of medical diagnostics where AI and humans work in partnership.

There's a lot of critical dystopian stuff about technology in society and education in the e-learning discourse at the moment. I think history will see this critical reaction more as a response to desperately nasty things going on in our universities, rather than an accurate prediction of the future. I am also subject to these institutional pathologies, but I suspect both the dystopian critiques and the institutional self-harm are symptoms of more profound changes which are going to hit us. Eventually we will rediscover a sane way of organising human thought and creativity once more, which is what our universities used to do for society.

So this is what I'm going to say to the students in Vladivostok:

Machine Learning, Scientific Dialogue and the Future of Work
It is not unusual today to hear people say how the next wave of the technological revolution will be Artificial Intelligence. Sometimes this is called the "4th industrial revolution": there will be robots everywhere - robot teachers, robot doctors, robot lawyers, etc. In this imagined future, machines are envisaged to take the place of humans. But this is misleading. The future will however involve a deeper partnership between humans and intelligent machines. In order to understand this, it is important to understand how our technologies of AI work, how the processes of creating AIs and machine learning are becoming available to you and me, and how human work is likely to change in the face of technologies which have remarkable new capabilities. 
In this presentation, I will explain how it will become increasingly easy to create our own AIs. Even now, the technologies of Machine Learning are widely available, increasingly standardised and accessible to people with a bit of computer programming knowledge. The situation at the moment is very much like the early web in the 1990s, when to create a website, people needed a bit of knowledge of HTML. As with the web, creating our own AIs will become something everyone can do.  
Drawing on my work, I will explain how in a world of networked services, there is one feature about Artificial Intelligence which is largely ignored by those not informed of its technical nature: AI does not need to be centralised. A machine learning algorithm is essentially a single (and often not very large) file, which can be embedded in any individual device (this is how, for example, the facial recognition works on your phone). The world of AI will be increasingly distributed. 
Finally, I will consider what this future means for human work. One of the important distinctions between human decision-making and AI is that humans make judgements in a context; AI, however, ignores context. In other words, AI, like much information technology, actually discards information, and this has many negative consequences on the organisation of institutions, stable society and the economy. The most potentially powerful feature of AI in partnership with humans is that it can preserve information by preserving the context of human judgement. I will discuss ways in which this can be done, and why it means that those things which humans do best – empathy, criticality, creativity and conversation – will become the essence of the work we do in the future.

Tuesday, 4 June 2019

German Media Theory and Education

I'm discovering a branch of media studies which I was unaware of before Steve Watson pointed me to Erich Hörl's "Sacred Channels: The Archaic illusion of Communication". Hörl's book is amazing: cybernetics, Luhmann, Bataille, Simondon & co all spiralling around a principal thesis that communication is an illusion, and that many of our current problems arise from the fact that we don't think it is. The "illusion" of communication is very similar to David Bohm's assertion that "everything is produced by thought, but thought says it didn't do it". This is not "media studies" as we know it in UK universities. But it is how the Germans do it, and have been doing it for some time.

Just as Luhmann has been a staple of the German sociology landscape for undergraduate sociologists for 20 years now, so Luhmann's thinking informed a radical view of media which Hörl has inherited. He got it from Friedrich Kittler. Kittler died in 2011, leaving behind a body of work which teased apart the boundaries between media and human being. Most importantly, he overturned the hypothesis of Marshall McLuhan that media "extend" the human. Echoing Luhmann, Kittler says that media make humans. Just as Luhmann pokes the distinction between psychology and sociology (he really doesn't believe in psychology), Kittler dissolves the "interface" between the human and the media.

The result is that practically everything counts as media. Wagner's Bayreuth was media (Kittler wrote extensively about music, culminating with a four volume work he never finished, "Music and Mathematics"), AI is media, the city is media. So is education media? Not just the media that education uses to teach (which educational technologists know all about). But education itself - the systemic enveloping of conversations between students and teachers - is that media?

As Erich Hörl has pointed out, these ideas are very similar to those of another voice in technology studies who is gaining an increasingly dominant following after his death, Gilbert Simondon. Like Kittler, Simondon starts with systems and cybernetics. Simondon's relevance to the question of education and technology is quite fundamental. Kittler, I don't think, knew his work well, and Hörl acknowledges that he has further to go in his own absorption of the work. Simondon made a fundamental connection between media, or machine, and human beings as distinction-making, individuating entities. The individuation process - that process which Jung saw as the fundamental process of personal growth - was tied-up with the process of accommodating ourselves to the media which comprise us. This accommodation was achieved through levels of "transduction" - the multiple processes which produce multiple levels of distinctions, from the distinctions between our cells, to the distinctions in our language, and the distinctions with our environment. What happens in education, basically, is that the media which make us us are transformed through changes in the ways the transductions are organised at different levels.

I described a lot of this in my book, albeit not in the elegant fashion that Kittler, Hörl  (or Simondon) would have done. Kittler, Simondon and Hörl have got me thinking in a new way about how we think about education. There's much more to say about this however, because Kittler and Hörl's approach opens the way for a more empirical approach to understanding education as media. I was privileged to have learnt about Luhmann through one of his best disciples, Loet Leydesdorff. Leydesdorff's work has been dedicated to making Luhmann's theory empirically useful, which he has done by relating it to Shannon (which Luhmann did in the first place), and to the mathematics of anticipation by the Belgian mathematician, Daniel Dubois.

Here, we may yet have a science of education which straddles the boundaries between technology, critique, pedagogy and phenomenology whilst maintaining an empirical focus and theoretical coherence. That is the best way of getting better education. This science of education may well turn out to be exactly the same as the empirical and coherent science of media that Kittler and Hörl are aiming for, which transcends the sociological critique of media (seeing that as simply more media!), by providing a meta-methodology for making meaningful distinctions about our distinction-making processes in our media-immersed state.

Sunday, 2 June 2019

Two kinds of information in music and media

My recent music has been exploring the idea that there are two kinds of information in the world. I am following the theory of my colleague Peter Rowlands, who had this to say (in the video below) on the subject of how nature is a kind of information system, but very different from the information systems of our digital computers. Peter summarises the difference by saying that digital information is made from 1s and 0s, but the significant thing is the 1. Nature, he contends, operates with multiple levels of zero. His reasons for thinking this are a thoroughly worked-through mathematical account of quantum mechanics, and particularly the Dirac equation (the only equation in Westminster Abbey!). Nature is all "Much ado about nothing".

I've been fascinated by "nothing" for a long time. Nothing is "absence" as opposed to "presence", and absence is (according to philosophers like Roy Bhaskar, cyberneticians like Gregory Bateson, and biologists like Terry Deacon) constraint. Constraint is important in digital information because it is represented by Shannon's concept of "redundancy". So there is a connection between nothing and redundancy. This resonates with me with something like music, because it is so full of redundancy, and music does appear to be "much ado about nothing".
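Shannon's redundancy can be stated simply: R = 1 − H/H_max, where H is the observed entropy and H_max the maximum possible entropy over the same alphabet. A quick first-order illustration (it only counts symbol frequencies, so it misses sequential pattern, but it shows the idea):

```python
import math
from collections import Counter

def redundancy(seq):
    """First-order Shannon redundancy: R = 1 - H / H_max."""
    counts = Counter(seq)
    n = len(seq)
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    H_max = math.log2(len(counts))
    return 1.0 - H / H_max if H_max > 0 else 1.0

print(round(redundancy("abcdefgh"), 2))  # every symbol once: no redundancy, 0.0
print(round(redundancy("aaaaaaab"), 2))  # mostly repetition: high redundancy, 0.46
```

In this sense, the repetitions of music (or of a crow's "caw") are not wasted signal: they are the constraint against which anything surprising can stand out.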

There is something we do when we make music which somehow makes sense. The patterns we create create the conditions for richer patterns which eventually define a structure. Musicians create redundancy in the form of repetition which brings coherence to the music. There are different kinds of redundancy: pitch, rhythm, timbre, intervals, etc. Much of this patterning occurs in the context of an external nature which is always shifting the context in which the music is made. It might be the sound of the wind, or water, or traffic, computer sounds, or elevator music - our sonic environment is moving around us all the time. The musical sense may be the natural pattern-making response to this which seeks to produce coherence. If this is the case, then birdsong and the noises of all animals, and maybe even language itself, can be seen as a process of maintaining coherence of perception within an environment. This is a radical view when applied to language - it means that we don't communicate. We don't transfer "information" between us. As Niklas Luhmann says in his most famous quote,
"Humans cannot communicate; not even their brains can communicate; not even their conscious minds can communicate. Only communication can communicate."
He could be right. It's also quoted in Erich Hörl's new book "Sacred Channels: The archaic illusion of communication". Hörl follows a line of inquiry from Friedrich Kittler (who is also new to me) who argued that "media studies" needs to reject Marshall McLuhan's view that media extend the human; media make the human. Gilbert Simondon said the same thing in connecting technology with human individuation. If there is a new theoretical way forwards for our thinking about technology, media and education, it rests with these people. Cybernetics is at the heart of it.

My music works with this idea where the electronic component of this piece represents the unstable shifting lifeworld of nature. Because this is "digital", we might think about it being not only the noise of the wind, but the noise of computers - digital information. The piano represents the musician's attempt to create pattern and maintain coherence in the whole. It is engaged in much ado about nothing.

Saturday, 1 June 2019

Augar's Intergenerational Conversation

"Education" as a topic is very complex and hard to define. We might think of schools, classrooms, teachers, but whatever we choose to include as "education" inevitably excludes something. This is the problem of making a distinction about anything - but it is exacerbated when we think of education. The exclusion/inclusion problem creates uncertainty, and this uncertainty has to be managed through a process which usually involves talking to each other. Since talking to each other about important things is something we do in education, the topic of "education" is uniquely caught in a web of conversation. At the beginning of my book, I quoted Everett Hughes, who I think gets it about right when he says that education is a "complex of arts" where:
"the manner of practicing them is the very stuff of the clash of wills and interests; thus, the stuff of politics."
This is the same confrontation of wills and interests that parents face with their children, that the younger generation faces with the older. But all the way through, it is conversation which is the process of negotiation.

Philip Augar's review of post-18 education funding has been fairly warmly received - partly because of the thoughtful tone it sets, and its modest reprimands against some of the more outrageous excesses of marketised higher education. However, as many commentators have pointed out, in cutting the headline fee for students but increasing the repayment period, it appears more socially regressive than the current system. The message hasn't changed: it is the job of students (the young) to pay for their education (pay for their elders to teach them) over the course of their lives, although it is recommended that the loan funding to pay for education may be available for more flexible study options. The rationale is that the young benefit from education financially.

This week I've been involved in two separate discussions about the future of work. That Artificial Intelligence and global data is going to transform the workplace is barely beyond doubt. Exactly what kind of impact it will have on opportunities for the young is as yet unclear. Will every automated service create an equivalent number of jobs in other areas? Will the growth of profits of large corporations which benefit from a falling salary bill trickle-down to those left behind in the rush to reduce expensive human labour? Or are we heading for a data-coordinated future of globalised gig-work at globalised rock-bottom wages? If this is the future for the young, who could blame them for questioning the fairness of the financial burden they bear for an education which turns out to fall short of the promises made by their universities?

This is how we depress the future. As Stafford Beer said (in an unpublished notebook):
"In a hundred years from any `now', everyone alive will be dead: it would therefore be possible for the human race to run its affairs quite differently - in a wise and benevolent fashion. Education exists to make sure this does not happen."
What is AI? What is the Web? Are they technologies "for organising our affairs quite differently"? They could be. "In a wise and benevolent fashion"? Not currently, according to Tim Berners-Lee and many others, but they could be. Then we come to education. Beer is making a point about education's role in reproducing social class divisions, which Bourdieu famously explained. But education is conversation, and more importantly, an intergenerational conversation. Our technologies are tools which both afford the coordination of conversation, and create new kinds of remarkable artefacts for us to talk about. And these conversations are intergenerational: to be able to summon-up movies, videos or documents on demand and watch/read them together, whether online or together in the living room with our kids, is profound and powerful. Something very special happens in those conversations.

In these kinds of simple things - of the elders sharing resources and talking with the young - there is something very important that we've missed in our educational market. Teaching involves the revealing of one's understanding, and the existential need to teach may lie with the elders, not the young. The gains for the young to participate are not always obvious to them (or anyone else). Promises made by the elders to the young about future riches are not always believable, but behind them lies the desire of the elders to encourage the young and preserve humanity after the elders are dead. Successful companies understand the importance of supporting the next generation, and they don't do it for the future financial benefit of the young. They do it to preserve the viability of the business.

If the existential need is to teach, not for the young to learn for future financial gain, then the elders should pay the young to be taught, for them to reveal their understanding to the next generation before the elders die. Only seeing it this way round makes any sense looking into the future: the young will have their own children, they will become the elders, they will have an existential need to teach, and they will pay their young to learn. The spirit of encouragement drives one generation to the next.

Now look at what Augar has tweaked but otherwise left untouched. Despite some florid prose extolling the virtues of education, the underlying existential issue is financial gain for the young through the acquisition of knowledge and certificates. The elders (of whom Augar is one) are merely functionaries in delivering knowledge and certificates. The promise of financial gain will be broken amidst employment insecurity, rents, lifelong debt and inequality. They will look at the elders and see their big houses and long lifespans (damn it, they won't even die quickly and leave an inheritance!), and ask how it is that their hopes for the future were diminished. Their only respite will be to inflict a similar injustice on their own children as they mutter "there is no alternative". This is positive feedback: the spirit of despair infects one generation to the next.

Augar's report is thoughtful though, so I don't want to dismiss it. One of his targets is the breaking down of the monolith of the 3-year degree course, and reconfiguring the way the institution's transactions with its students work. This is good. But Corbyn was right about the financing of education and who should pay. It's not just an argument about one generation of students. It's an argument about a viable society. 

Thursday, 23 May 2019

Polythetic Analysis: Can we get beyond ethnography in education?

I had a great visit to Cambridge to see Steve Watson the other day - an opportunity to talk about cybernetics and education, machine learning, and possible projects. He also shared with me a great new book on cybernetics and communication about which I will write later - it looks brilliant.

One thing came up in conversation that resonated with me very strongly. It was about empirically exploring the moment-to-moment experience of education - the dynamics of the learning conversation, or of media engagement, in the flow of time. What's the best thing we can do? Well, probably ethnography. And yet, there's something which makes me feel a bit deflated by this answer. While there are some great ethnographic accounts out there, it all becomes very wordy: that momentary flow of experience which is beyond words becomes pages of (sometimes) elegant description. I've been asking myself if we can do better: to take experiences that are beyond words, and to re-represent them in other ways which allow for a meta-discussion, but which are also beyond words in a certain sense.

Of course, artists do this. But then we are left with the same problem as people try to describe what the artist does - in pages of elegant description!

This is partly why Alfred Schutz's work on musical communication really interests me. Schutz wanted to understand the essence of music as communication. In the process, he wanted to understand something about communication itself as being "beyond words". Schutz's descriptions are also a bit wordy, but there are some core concepts: "tuning-in to one another", "a spectrum of vividness of sense impressions", and most interestingly, "polythetic" experience. Polythetic is an interesting word - which has led me to think that polythetic analysis is something we could do more with.

If you google "polythetic analysis", you get an approach to data clustering where things are grouped without having any core classifiers which separate one group from another. This is done over an entire dataset. Schutz's use of polythetic is slightly different, because he is interested in the relations of events over time, where there is never any core classifier which connects one event to another, and yet they belong together because subsequent events are shaped by former events. I suppose if I want to distinguish Schutz from the more conventional use of polythetic, then it might be called "temporal polythetic" analysis.
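Schutz's temporal sense of "polythetic" can be illustrated with a tiny sketch (the feature names and events here are hypothetical, purely for illustration): a chain of events in which each event shares something with its neighbour, yet no single feature runs through the whole chain - there is no core classifier.

```python
# Toy illustration of the polythetic idea: consecutive events overlap,
# but nothing is common to all of them.
events = [
    {"loud", "fast"},          # event 1
    {"fast", "rising"},        # event 2 shares "fast" with event 1
    {"rising", "staccato"},    # event 3 shares "rising" with event 2
    {"staccato", "quiet"},     # event 4 shares "staccato" with event 3
]

# Every adjacent pair of events overlaps...
adjacent_overlap = all(a & b for a, b in zip(events, events[1:]))

# ...but no feature is shared by all events: no core classifier.
core = set.intersection(*events)

print(adjacent_overlap)  # True
print(core)              # set()
```

The events belong together only through the chaining of overlaps over time - which is exactly what distinguishes Schutz's temporal usage from the clustering usage.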

While there are no core classifiers which distinguish events as belonging to one another, there is a kind of "dance" or "counterpoint" between variables. Schutz is interested in this dance. I've been working on a paper where the dance is analysed as a set of fluctuations in entropy of different variables. When we look at the fluctuations, patterns can be generated, much like the patterns below (which are from a Bach 3-part invention). The interesting question is whether one person's pattern becomes tuned-in to another person's. If it is possible to compare the patterns of different individuals over time then it is possible to have a meta-conversation about what might be going on, to compare different experiences and different situations. In this way, a polythetic comparison of online experience versus face-to-face might be possible, for example, or a comparison of watching different videos.
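As a sketch of what analysing such fluctuations might involve (the function names and the toy pitch data are mine, not from the paper), here is a minimal way of computing the entropy fluctuation of a single variable over a sliding window:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_trace(sequence, window=8):
    """Entropy over a sliding window: the 'fluctuation' of one variable in time."""
    return [shannon_entropy(sequence[i:i + window])
            for i in range(len(sequence) - window + 1)]

# Hypothetical pitch sequence (e.g. MIDI note numbers from one voice)
pitches = [60, 62, 64, 62, 60, 62, 64, 65, 67, 65, 64, 62, 60, 72, 71, 69]
trace = entropy_trace(pitches)
print(trace)
```

Run over several parallel variables (pitch, rhythm, intervals), traces like this give the "counterpoint" of fluctuations that two people's patterns could then be compared against.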

So in communication, or conversation, there are multiple events which occur over time: Schutz's "spectrum of vividness" of sense impressions. As these events occur, and simultaneously with them, there is a reflective process whereby a model which anticipates future events is constructed. This model might be a bit like the fractal-like pattern shown above. In addition to this level of reflection, there is a further process whereby there are many possible models, many possible fractals, that might be constructed: a higher-level process requires that the most appropriate model, or the best fit, is selected.

Overall this means that Schutz's tuning-in process might be represented graphically in this way:

This diagram labels the "flow of experience" as "Shannon redundancy" (the repetitive nature of experience), the reflexive modelling process as "incursive", and the selection between possible models as "hyperincursive" (this follows Daniel Dubois's work on anticipatory systems).
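Dubois's distinction between recursion and incursion can be sketched concretely. His best-known example is the incursive form of the logistic map, where the future state appears on both sides of the equation and can be solved algebraically; a minimal comparison with the ordinary recursive map (my own variable names):

```python
# Recursive logistic map: the future is computed from the past alone.
def recursive_step(x, a=4.0):
    return a * x * (1 - x)

# Incursive logistic map (Dubois): the future state x(t+1) appears on
# both sides, x(t+1) = a*x(t)*(1 - x(t+1)), which solves in closed form
# to x(t+1) = a*x(t) / (1 + a*x(t)).
def incursive_step(x, a=4.0):
    return a * x / (1 + a * x)

x_rec = x_inc = 0.3
for _ in range(5):
    x_rec = recursive_step(x_rec)   # chaotic for a=4
    x_inc = incursive_step(x_inc)   # converges toward (a-1)/a = 0.75

print(round(x_rec, 4), round(x_inc, 4))
```

The incursive map anticipates its own next state and is self-stabilising, where the recursive one wanders chaotically - a toy image of how anticipation "selects" a coherent future.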

Imagine if we analysed data from a conversation: everything can have an entropy over time - the words used, the pitch of the voice, the rhythm of words, the emphasis of words, and so on. Or imagine we examine educational media, we can examine the use of camera shots, or slides changing, or words on the screen, and spoken words. Our experience of education and media is all contrapuntal in this way.

Polythetic analysis presents a way in which the counterpoint might be represented and compared in a way that acts as a kind of "imprint" of meaning-making. While ethnography tries to articulate the meaning (often using more words than were used in the initial situation), analysing the imprint of the meaning may enable us to create representations of the dynamic process, to make richer and more powerful comparisons between different kinds of experience.

Wednesday, 8 May 2019

Bach as an anticipatory fractal - and thoughts on computer visualisation

I've got to check that I've got this right, but it seems that an algorithmic analysis I've written of a Bach 3-part invention reveals a fractal. It's based on a table of entropies for different basic variables (pitch, rhythm, intervals, etc). An increase in entropy is a value for a variable "x", while a decrease in entropy is a value for "not-x". Taking the variables as A, B, C, D, etc, there are also values for the combined entropies of AB (and not-AB), AC, BC, etc. And also for ABC, ABD, BCD, and so on.
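A rough sketch of how such a table might be computed (the naming and the toy data are mine, not the actual analysis, which presumably works on variables extracted from the score):

```python
import math
from itertools import combinations
from collections import Counter

def H(*streams):
    """Joint Shannon entropy (bits) of one or more parallel symbol streams."""
    joint = list(zip(*streams))
    counts = Counter(joint)
    n = len(joint)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_table(variables, window=8):
    """For each window in time and each combination of variables, record
    whether the (joint) entropy rose ('x') or fell ('not-x') since the
    previous window."""
    names = sorted(variables)
    length = len(next(iter(variables.values())))
    prev, rows = {}, []
    for t in range(0, length - window + 1, window):
        row = {}
        for r in range(1, len(names) + 1):
            for combo in combinations(names, r):
                streams = [variables[name][t:t + window] for name in combo]
                h = H(*streams)
                key = "".join(combo)
                if key in prev:
                    row[key] = "x" if h > prev[key] else "not-x"
                prev[key] = h
        if row:  # the first window has nothing to compare against
            rows.append(row)
    return rows

# Hypothetical toy data: three musical variables sampled at the same points
variables = {
    "A": [1, 2, 3, 1, 2, 3, 1, 2] * 4,   # pitch class
    "B": [1, 1, 2, 2, 1, 1, 2, 2] * 4,   # rhythm value
    "C": [0, 1, 0, 1, 1, 0, 1, 0] * 4,   # interval direction
}
for row in entropy_table(variables):
    print(row)
```

Each row is one time-step of the plot: columns run from the single variables (A, B, C) out to the full combination (ABC), which matches the left-to-right "increasing complexity" reading described below.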

The raw table looks a bit like this:
But plotting this looks something like this:

What a fascinating thing that is! It should be read from left to right as an index of increasing complexity of the variables (i.e. more combined variables), with those at the far left the simplest basic variables. From top to bottom is the progress in time of the music. 

My theory is that music continually creates an anticipatory fractal, whose coherence emerges over time. The fractal is a selection mechanism for how the music should continue. As the selection mechanism comes into focus, so the music eventually selects that it should stop - that it has attained a coherence within itself. 

Need to think more. But the power of the computer to visualise things like this is simply amazing. What does it do to my own anticipatory fractal? Well, I guess it is supporting my process of defining my own selection mechanism for a theory!

Tuesday, 7 May 2019

"Tensoring" Education: Machine Learning, Metasystem and Tension

I've been thinking a lot about Buckminster Fuller recently, after I gave a talk to architecture students about methods in research (why does research need a method?). One of the students is doing an interesting research project on whether tall buildings can be created in hot environments without requiring artificial cooling systems. The tall building is a particular facet of modern society which is overtly unsustainable: we seem only to be able to build these monoliths and make them work by pumping a huge amount of technology into their management systems. Inevitably, the technology will break down, or become too expensive to run or maintain. One way of looking at this is to see the tall building as a "system", which makes its distinction between itself and its environment, but whose distinction raises a whole load of undecidable questions. Technologies make up the "metasystem" - the thing that mops up the uncertainty of the building and keeps the basic distinction it made intact. Overbearing metasystems are the harbinger of doom - whether they are in a passenger plane (the Boeing 737 Max story is precisely a story about multiple levels of overbearing metasystems), in society (universal credit, surveillance), or in an institution (bureaucracy).

Buckminster Fuller made the distinction between "compression" and "tension" in architecture. We usually think of building in terms of compression: that means "stuff" - compressed piles of bricks on the land. His insight was that tension appeared to be the operative principle of the universe - it is the tension of gravity, for example, that keeps planets in their orbit. Fuller's approach to design was one of interacting and overlapping constraints. This is, of course, very cybernetic, and the geodesic dome was an inspiration to many cyberneticians - most notably, Stafford Beer, who devised a conversational framework around Fuller's geometrical ideas called "syntegrity".

In education too, we tend to think of compressed "stuff": first there are the buildings of education - lecture halls, libraries, labs and so on. Today our "stuff"-focused lens is falling on virtual things - digital "platforms" - MOOCs, data harvesting, and so on, as well as the corporate behemoths like Facebook and Twitter. But it's still stuff. The biggest "stuff" of all in education is the curriculum - the "mass" of knowledge that is somehow (and nobody knows exactly how) transferred from one generation to the next. Fuller (and Beer) would point out that this focus on "stuff" misses the role of "tension" in our intergenerational conversation system.

Tension lies in conversation. Designing education around conversation is very different from designing it around stuff. Conversation is the closest analogue to gravity: it is the "force" which keeps us bound to one another. As anyone who's been in a relationship breakdown knows - as soon as the conversation stops, things fall apart, expectations are no longer coordinated, and the elements that were once held in a dynamic balance go off in their different directions. Of course, often this is necessary - it is part of learning. But the point is that there is a dynamic: one conversation breaks and another begins. The whole of society maintains its coherence. But our understanding of how this works is very limited.

Beer's approach was to make interventions in the "metasystems" of individuals. He understood that the barriers to conversation lay in the "technologies" and "categories" which each of us has built up within us as a way of dealing with the world. Using Buckminster Fuller's ideas, he devised a way of disrupting the metasystem, and in the process, open up individuals to their raw uncertainty. This then necessitated conversation as individuals had to find a new way to balance their inner uncertainty with the uncertainty of their environment.

The design aspect of tensored education focuses on the metasystem. Technology is very powerful in providing a context for people to talk to each other. However, there is another aspect of "tensoring" which is becoming increasingly important in technology: machine learning. Machine learning's importance lies in the fact that it is a tensored technology: it is the product of multiple constraints - much like Buckminster Fuller's geodesic dome. The human intelligence that machine learning feeds on is itself "tensored" - our thoughts are, to varying extents, ordered. Expert knowledge is more ordered in its tensored structure than that of novices. Machine learning is able to record the tensoring of expert knowledge.

When devising new ways of organising a tensored education, this tool for coordinating tension in the ordering of human understanding, and avoiding "compression" may be extremely useful.

Sunday, 28 April 2019

How the Roli Seaboard is changing the way I think about music

I am making very weird noises at the moment. Partly encouraged by a richly rewarding collaboration with John Hyatt and Mimoids, a digital musical instrument - the Roli Seaboard - is becoming my favoured mode of musical expression. A year ago, I would have thought that highly improbable. For me, nothing could touch the sensitivity, breadth of expression and sophistication that is possible with an acoustic piano - if you have the technique to do it. Having said that, I do wonder if we've run out of ideas within that medium.

Part of the problem with contemporary music is that the only way forwards is towards greater complexity. And with greater complexity sometimes comes a barrier with people: music becomes "clever" or "difficult" and we lose something of what matters about the whole thing in the first place.

While I've been thinking about this, I've also been thinking about what music really is in the first place. Why do I have some kind of "soundtrack" running in my head all the time? What's going on? Is it connected to the way I make sense of the world?

Music's profound quality arises from redundancy. That's interesting because it raises the question as to why my cognitive system has to continually generate redundancy. The interesting thing is that redundancy can create coherence. So maybe that continual soundtrack is simply my consciousness making sense of the chaos around me. I'm beginning to wonder about this with regard to all communicative musicality - even learning conversations: they seem to arise from some profound need to make sense of things - and not just by learners, but by teachers too.

This also helps to explain why class music lessons in school are often terrible. Attempting to rationally codify the very thing that we use all the time to make sense of the world is likely to result in some kind of adverse reaction.

In a complex world, simplicity is important. Which brings me back to contemporary music. Not that we want to create simple music and put it on the pedestal of high art. But we need to express something of what music does to us, and perhaps to understand how it works better. The piano is a sophisticated and delicate instrument which can make simple things sound interesting. But the Roli Seaboard is an instrument which expresses ambiguity, complexity and variety in a way that the piano cannot. To me, the Seaboard sounds like the world around us - the noisy world of loudspeakers, garish colours, and distraction. The Seaboard is context, and it creates a frame for our simpler and more traditional forms of music to reveal what they really do for us: to create coherence, and (in terms of collective singing) conviviality.

Saturday, 27 April 2019

Tradition, Redundancy and Losing the Way

This week there was a rare opportunity to hear Michael Tippett's piano concerto in Manchester (it's rare anywhere) with Steven Osborne playing (who was a fellow student with me at Manchester University in the late 80s). I hadn't heard the Tippett for years - it's incredibly radiant and warm music. Another composer, John McCabe, said something fascinating about him: "I find Tippett's music tends to make me feel better". I agree, and Tippett was very conscious that he was attempting to do something physiological with sound (he got this from Vincent d'Indy). This, in his mind, was deeply connected to social concerns and emancipation, as well as to depth psychology. Jung and T.S. Eliot were profound influences.

Both these issues have been on my mind. On the day of the concert I had had a job interview (the first for a long time), which although I didn't get the job, prompted a fascinating discussion about individuation, both from a Jungian perspective and from that of Simondon. But during the concert I was thinking about the ritual of playing music, and returning to music from many years ago, and thinking about Eliot's famous essay "Tradition and the Individual Talent", which I had first got to know at Manchester with Tippett's biographer.

The whole arts world is a kind of ritual, seeming to preserve an elite social order. When that order is challenged - for example, by an 850 year old cathedral burning down - the human reaction seems irrational - but its elite nature is clear for all to see. The irony is that great art - and Tippett was a visionary artist - is made in the spirit of challenging the social order (he was also a Marxist). His piano concerto is a superb case-in-point: unlike any other concerto, it is anti-heroic. Few pianists would take it on because it doesn't put them in the spotlight. Audiences are disoriented because their expectations are frustrated by a fiendishly difficult piano part which causes the soloist to work very hard, but which remains veiled behind a collective radiant wall of sound. For most of it, the soloist is an accompaniment, or a catalyst. Tippett was making a statement: one that is echoed in Eliot's essay -
The emotion of art is impersonal. And the poet cannot reach this impersonality without surrendering himself wholly to the work to be done. And he is not likely to know what is to be done unless he lives in what is not merely the present, but the present moment of the past, unless he is conscious, not of what is dead, but of what is already living.
Steven Osborne and Andrew Davies take this on because they understand this and believe in it. But there are contradictions (even in upholding them as "champions"!). Even in the wonderful performance in Manchester, I wondered if the point was lost on most of the audience. How do we get the point across about accompaniment or catalysis in a world which fetishises individual achievement? Another way of asking this is to say "How do we see relations and the conversation as more important than the individual?" This was really what I talked about in my interview. And I have reflected on it more as I have thought that most of what I have done - in academia and in music - was catalysis.

But there are deeper questions about ritualised tradition. If one were to compress the years since the composition of Beethoven's 5th symphony, and examine the many millions of performances, then the ritualised repetition whereby people gather together and re-perform a set of instructions looks full of redundancy. Is redundancy the basis of tradition?

Redundancy is the basis of so much communication, from the crying of a baby, to the squawks of crows, or music itself. Teaching depends on the redundancy of saying the same thing many different ways. Like playing Beethoven's 5th in different ways (but rarely that different - apart from this...). What is it? What's going on?

My speculation is that the world is a confusing place. All living things struggle to bring coherence to it - and they do this through conversation. We are thrown into conversation from birth. Through conversation, living things negotiate the differences between the different distinctions they make. Although we see those agreed distinctions - like words in a language - as "information", the really important thing is the redundancy that sits in the background of the process that makes it. It's the redundancy that brings coherence - just as the redundancy of Beethoven's motifs gives form to his symphony.
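The notion of redundancy here is Shannon's: the gap between a sequence's actual entropy and the maximum entropy its alphabet allows. A small illustrative calculation (toy sequences of my own, sketching the idea rather than any real analysis):

```python
import math
from collections import Counter

def redundancy(symbols):
    """Shannon redundancy: 1 - H/Hmax, where Hmax = log2(alphabet size).
    0 means every symbol is equally surprising; values nearer 1 mean
    repetitive, pattern-rich material."""
    counts = Counter(symbols)
    n = len(symbols)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1 - h / h_max

# A motif-saturated line (the famous short-short-short-long shape)
motif = ["G", "G", "G", "Eb"] * 8
# A line with no repetition at the symbol level
varied = ["C", "D", "E", "F", "G", "A", "B", "Db"] * 4

print(redundancy(motif))   # ≈ 0.19: the repeated motif builds in redundancy
print(redundancy(varied))  # 0.0: uniform symbols, no redundancy
```

This first-order measure only sees symbol frequencies, not the sequential repetition of the motif itself, so it understates the redundancy a listener hears; but it shows the direction of the idea: form emerges from what is repeated, not from what is new.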

To accompany, to catalyse, we have to see the redundancy that needs to be added to bring coherence. I think this is really what teachers do. It's actually the opposite of "information". What Eliot describes as "surrendering to the work to be done" is the process of identifying the redundancy that needs to be created. In Gregory Bateson's terms, it is identifying the "pattern that connects". The ritual of teaching and the ritual of performance of tradition are all about the coherence of our civilisation. There's something profoundly necessary about it, and yet within it are dangers which can produce incoherence.

To lose one's way is to lose sight of the process of creating redundancy, of catalysing ongoing conversations. This can happen if we codify the products of a previous age to the point that we believe that merely repeating these "products" - the information - will maintain civilisation. It will instead do the opposite. That's why Tippett's message - and his example - is important. It's not the figure; it's the ground - the earth - our shared context.