Thursday, 9 March 2023

The Maximum Entropy of Work

I'm very doubtful that the current trajectory of AI will make our lives easier. Indeed, the impressive progress of AI has led me to reflect on the fact that despite huge technological advances over the last 50 years, the lives of the majority of people have got harder and more uncertain. If I compare my own career with that of my dad, he was able to jog along in a job he didn't much like, but basically survived without too much threat, and retired at 58 with a very generous pension. My journey, by comparison, has been a rollercoaster (indeed, a rollercoaster with some bits of the track missing!) and I am seeing people (particularly young academics) in their 30s faring even less well. So what's going on? And - before I delve into that further - it's too easy and lazy simply to blame "capitalism": we need to be more precise. 

I suspect the common denominator in the work equation is technological advancement alongside rigid institutional structures. This is not to denigrate technology - it is amazing - but it is to ask deeper questions about our institutions. I think there is a systems explanation for what is happening.

When a new technology arrives in a social system (a society, a business, an institution) it increases the possibilities for surprise in that system. Quite simply, new things become possible which people haven't seen before. Since information entropy is a measure of surprise, we can say that the "maximum entropy" of the social system increases, where this maximum is a measure of what is possible - not necessarily what is observed. 

What is observed in a social system with a new technology is a degree of surprise (some degree of innovation is observable), but nowhere near the maximum amount of possible surprise. So observable entropy increases, but the maximum entropy increases more. What does this mean for work and workers?

A bit like voltage in electronics, the difference between the maximum potential and the observed reality creates a space in which activity is stimulated. The bigger the space between observed entropy and maximum entropy, the greater the stimulation for activity. This activity is what we do in work. More precisely, work becomes a process of exploring the many ways in which the possible new configurations of practice and technology can be realised. Some of that work is called "research", other aspects of this work might be called "operations", other aspects of it might be called "management", but whatever kind of activity it is, it increasingly involves the exploration of new options.
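To make the gap concrete in the usual information-theoretic terms: for a system with N possible configurations, the maximum entropy is log2(N) bits, while the observed entropy is the Shannon entropy of the configurations actually seen. A minimal Python sketch (the categories and numbers are purely illustrative) shows how the two come apart:

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Observed (Shannon) entropy, in bits, of a list of observed configurations."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def maximum_entropy(num_possible):
    """Maximum entropy, in bits: every possible configuration equally likely."""
    return math.log2(num_possible)

# Illustrative numbers only: suppose a technology makes 64 configurations of
# practice possible, but the institution is observed using just a few of them.
observed = ["lecture", "lecture", "lecture", "essay", "essay", "seminar"]
print(f"observed entropy: {shannon_entropy(observed):.2f} bits")
print(f"maximum entropy:  {maximum_entropy(64):.2f} bits")
# The gap between the two numbers is the space in which the work described
# above is stimulated.
```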

This "work space" between the maximum entropy and the observed entropy is, as David Graeber famously said in his "bullshit jobs", mostly pointless. The work is basically doing things that have been, or can be, done in many different ways: it is effectively "redundant". But that's the point - redundancy generation in the space between the maximum entropy and the observed entropy is what must go on in that space. And it is exhausting and dispiriting, particularly if it increases. 

This is a bleak outlook because of all recent technologies to increase the maximum entropy, AI is in a league of its own. It will accelerate the growth of maximum entropy beyond anything we have yet seen. So what will happen with the observed entropy and the work in the space between?

The problem is the increasing gap between observed entropy and maximum entropy. What keeps the observed entropy so much lower is the structure of institutions. The deepest risk is that the maximum entropy goes off the scale, while the observed entropy - the visible interface to existing institutions - doesn't change very much at all. That will create a pressure-cooker atmosphere within the work system. There will be work, and indeed more of it than ever before, but work will become increasingly febrile and pointless. It will make us sick: the mental health of workers, students and everyone else will suffer. 

It would be better if the redundancy-generating space was kept stable rather than allowed to increase. This might be achieved if we consider the drivers for increasing maximum entropy through technology. One of the drivers is noise. It is the noise generated by an existing technology (for example, an AI) which drives the innovation to the next iteration of the technology. If human labour were seen as the effective management of noise, rather than the generation of redundancy, then society might be steered in a way which doesn't cause internal collapse. 

Another way of saying this is that uncertainty is the variable to manipulate collectively, and only humans can manipulate this variable. One of the problems with increasing maximum entropy is that labour is directed to tasks that can be clearly defined. We see this with chatGPT at the moment: thousands of academics saying "we can use it to do <insert name of well-defined task>". This is looking for your keys where the light is, not where you lost them. 

One of the things the technology might be able to do is to direct human labour to where the uncertainty is greatest. Focused in this way, the work is really about exploring differences in understanding between people about things which nobody is yet clear about. This is high-variety, convivial, high-level work for the many. Part of this work is the exploration of the possibilities of new technology - the "redundancy work" in the space between observed and maximum entropy. But the other part of the work is to coordinate intellectual effort in exploring the noise of uncertainty, and the result of that work can help manage the gap between maximum entropy and observed entropy. 

What does this look like practically? I think, given that uncertainty is experienced physiologically, and exploring uncertainty together is deeply convivial, this looks like work with a focus on wellness, maybe using technology to identify where wellness might be threatened. 

Creating a "wellness system" is a possibility. The consequences of not doing this look far more dire than anyone can yet imagine. 

Wednesday, 8 March 2023

Birtwistle's Seriousness

I attended the commemorative concert for Harrison Birtwistle on Sunday. It was a powerful occasion which has led me to think about the abandonment of seriousness in art which seems to have occurred in the last 20 years or so. Birtwistle was a serious artist - by which I mean that he never sought popularity. He was committed to his project, crystal clear about its direction and what he was doing, and uncompromising in his attitude towards whether anybody else liked it or not. 

He was lucky in the sense that his formative years coincided with a post-war spirit that supported experimental music that was often hard on the ears, but which allowed for the exploration of deeper meaning. This supportive spirit has pretty much gone with late capitalism's demand that a market must exist for whatever the artist produces. Birtwistle now has a niche because it was able to grow in better times. How could such a niche be constructed now? What do we lose if we lose our ability to do this?

Part of the problem in answering this is that art is not always for the present or a present audience - it is for a future where things that may not resonate in the present find resonance decades after the artist is dead. Birtwistle's music will make more sense and convey its power and meaning more overtly in future worlds. How do we know which art will produce this effect? This is where some kind of deeper knowledge of what matters is important. Some people can tune into this and know what matters, what needs to be preserved. Those people too are now threatened in an anti-intellectual climate which even (or maybe particularly) in universities favours work that delivers immediate gain. 

Universities are part of society's mechanism for selecting what matters. They are now failing to do this. The decline of the professoriate, both in quality and in its power to steer institutions, is a signal of what has gone wrong. It is difficult to see a way back, although I would guess that any way back would likely feature technology. I'm not sure how, though. 

If we have no mechanism for selecting what matters, the future state of knowledge is threatened. It is an analogue of the current ecological crisis - the decline in diversity of species. 

The Birtwistle piece that opened the concert was a short duet called "The message". This took inspiration from an artwork by Bob Law containing the words: "The purpose of life is to pass the message on". Birtwistle's seriousness lies in the fact that he understood this. 



All seriousness is about understanding this message.

And we can hope that the best of his music should be a sufficient transducer - like this: Harrison Birtwistle - Earth Dances - YouTube

Monday, 30 January 2023

AI, Technical Architecture and the Future of Education

I gave a presentation to the leaders of Learning Support at the University of Copenhagen this morning. I will write a paper about this, but in the meantime this is a blogpost to summarise the key points.

I began by saying that I would say nothing about "stopping the students cheating". I said that, basically, as leaders in learning technology in universities, there is no time to worry about this. The technology is improving so fast that what really matters is to think ahead about how things are going to change, and about the strategies that are required to adapt. 

I said that basically, we are in "Singin' in the Rain". The movie is a good guide to the tech-rush that's about to unfold. 

I also referred to the 2001 Spielberg movie AI, which I didn't understand when I first saw it. I think we will look back on it as a prescient masterpiece. 

My own credentials for talking about AI are that I have been involved in an AI diagnostic project in Diabetic Retinopathy for 7 years at the University of Liverpool, and after £1.1m of project funding and then £2m of VC support, this has now been spun out. When the project started I was an AI sceptic (despite being the co-inventor of the novel approach that has led to its success!). I'm not sceptical now. 

I said that what is really important to understand is how the technology represents a new kind of technical architecture. I represented this with a diagram:
 
As a term, AI is a silly description. "Artificial Anticipation" is much better. The technology is new. It is not a database; it consists of a document called a model (which is a file) that can be thought of as being like a "sieve". The configuration of the structure of the sieve is produced through a process called "training", which requires lots of data, and lots and lots of time. This process uses huge amounts of data from the internet. Training requires "data redundancy" - lots of representations of the same thing. 

Since academics have been busy writing papers which are very similar to each other for the last 30 years, chatGPT has had rich pickings on which to train. 

If you want to understand the training process, I recommend looking at google's "teachable machine" (see http://teachablemachine.withgoogle.com). This allows you to not only train a machine learning model (to recognise images or objects), but to download the model file and write your own programs with it. It's designed for children - which is how simple all of this stuff will be quite soon...
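Once you have downloaded the Keras export, using the model file in your own program really is only a few lines. Here is a minimal sketch, assuming the usual export files (keras_model.h5 and labels.txt - check what your own download contains) and an image model trained at the default 224x224 input size:

```python
# A minimal sketch of using a model file downloaded from Teachable Machine.
# Assumes the Keras export (typically "keras_model.h5" plus "labels.txt") and
# a model trained on 224x224 images - adjust to match your own export.
import numpy as np
from tensorflow.keras.models import load_model
from PIL import Image

model = load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

# Prepare one image: resize to the model's input size and scale pixels to [-1, 1].
image = Image.open("test.jpg").convert("RGB").resize((224, 224))
data = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0
data = np.expand_dims(data, axis=0)  # batch of one

# The model returns a probability for each label it was trained on.
probabilities = model.predict(data)[0]
for label, p in zip(labels, probabilities):
    print(f"{label}: {p:.3f}")
```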


Once trained, the "model" does not need to be connected to the internet (chatGPT isn't, despite being accessed online). The model can make predictions about the likely categories of data it hasn't seen before (unlike a database which gives back what was put into it in response to a query). The better the training, the better the predictions. 

All predictions are probabilities. In chatGPT, every word is chosen according to the probabilities generated by the model. The basic architecture looks like the diagram above. Notice how the output text is fed back into the model as input. Also notice the statistical layer which does something called "autoregression" to refine the selection process from the options presented by the model. 
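The loop itself can be sketched very simply. This is not chatGPT's actual code, just a toy illustration of the architecture: a hypothetical model function returns a probability for every candidate next token, a statistical selection step reweights and samples from those probabilities, and the chosen token is fed back in as input for the next pass:

```python
import random

def generate(model, prompt_tokens, steps=20, temperature=1.0):
    """Toy autoregressive loop: the model's output is fed back as its input.

    `model(tokens)` is assumed to return a dict mapping each candidate next
    token to a probability - a stand-in for the real thing, not a real API.
    """
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probabilities = model(tokens)
        # The 'statistical layer': reweight the model's probabilities before
        # selecting. Higher temperature flattens them, lower sharpens them.
        weights = {t: p ** (1.0 / temperature) for t, p in probabilities.items()}
        total = sum(weights.values())
        candidates, probs = zip(*[(t, w / total) for t, w in weights.items()])
        next_token = random.choices(candidates, weights=probs, k=1)[0]
        tokens.append(next_token)  # feed the output back in as input
    return tokens
```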

This architecture is where the clues are to how profound the impact of the technology is going to be. 

Models are not connected to the internet. That means they can stand alone and do everything that chatGPT does. We can have conversations with a file on our device as if we were on the internet. Spielberg got this spot-on in AI. 

Another implication of this is, as I (carefully) pointed out to some Chinese students I gave a presentation to a few months back (at Beijing Normal University), the conversations you have can be entirely private. There need not be any internet traffic. Think about the implications of that. 

We are going to see AI models on personal devices doing all kinds of things everywhere.

I made a couple of cybernetic references: one to Ashby's homeostat - because the homeostat's autonomous units coordinated their behaviour with each other in the way that AIs are likely to provide data on which other AIs train. This is likely to be a tipping point. I strongly suggested that people read Andy Pickering's "The Cybernetic Brain".

There's something biological about this architecture. In most machine learning applications the model itself does not change: chatGPT's model does not retrain itself in use, because retraining takes huge amounts of resource and time. What happens is that the statistical layer which refines the selection does adapt. Biologically, it's similar to the model being the genotype (DNA) and the statistical layer being the phenotype (the adaptive organism). 

This also ties in with AI being seen as an anticipatory system because the academic work on anticipatory systems originally comes from biology: an anticipatory system is a system which contains a model of itself in its environment (Robert Rosen). Loet Leydesdorff, with whom I have worked for nearly 15 years, has developed a model of this (building on Rosen's work) to explain communication in the context of economics, innovation and academic discourse (the Triple Helix). I have found Loet's thinking very powerful to explain this current phase of AI.


Of course, there are limitations to the technology. But some of these - particularly around uncertainty and inspectability - will be overcome, I think (some of my own work concerns this).


But perhaps the biggest question concerns the nature of the technical architecture. AI - or Artificial Anticipatory Technology - is basically a document which is also a medium. What does that mean for us? Why does it matter in education?


The real question behind this is "What is education for?". Again, Spielberg gets something deeply correct here: one of the principal reasons why we have education at all is the ongoing survival of the species - which means that those who will die first must pass on the ability to make good judgements about the world to those who are younger. 

The education system is our technology for doing this. It's rather crude and introduces all kinds of problems. It combines documents (books, papers, videos, etc.) containing knowledge which requires interpretation and communication by teachers and students in order to fulfil this "cultural transmission" (someone objected to the word "transmission", and I agree it's an awkward shorthand for the complexity of what really happens).

AI is a document which is also a medium of interpretation and communication. It is a new kind of cultural artefact. What kind of education system do we build around this? Do we even need an education system that looks remotely like what we have now?

I said I think this is what we should be thinking about. It's going to come for us much faster than most senior managers in universities can imagine. 

So we simply haven't got time to worry about stopping the kids cheating!

Friday, 13 January 2023

Triad Chords as a "nice noise" (From Plankton to Puccini)

20 years ago, when the Lindsay string quartet retired from Manchester University, Ian Kemp - who had been an inspirational musical figure for me and so many others - came out of retirement to conduct a last "Lindsay session", playing Beethoven and Tippett (which was the favourite diet). Although Ian complained that he was "bad at hearing", his musical intellect remained sharp as a tack. 

There was a passage in the music (I think it must have been Tippett) which was very unusual. So he asked, in his typical way, "what's going on here?". By this time, university academics of Kemp's temperament were very rare, and they had been replaced with younger people who were eager to please and were full of "musical analysis terminology". So Ian's question prompted much impressive-sounding jargon. "Perhaps," he said on hearing this, "but maybe it's just a nice noise". 

So what is a nice noise? We hear, with Western ears at least, the major triad as the epitome of musical consonance - a nice noise. It is a resting place, and the tonal geometric relations that form around the triad provide us not only with the "nice noise" of the chord itself, but an unfolding diachronic (and diatonic) space with which we can engineer a sense of arrival and homecoming in tonal music. 


When we learn about triads, we are introduced to the notation, and young pianists are taught how to shape their hands. But something gets added in both these cases. The triad is never "just" the notes. It is never "just the hand-shape". If it was "just the notes", then playing a triad with sine waves would be as satisfying as playing it on the piano. But it isn't - and this is my point: the triad's beauty lies in what occurs outside the notes. It lies in the noise that surrounds it. 

So much of music analysis manages to miss the music. I strongly suspect that Kemp's "nice noise" comment hit the music on the nose. Part of the key to understanding this (pardon the pun) lies in inspecting the relationship between a triad and a note.

Marina Frolova-Walker's fascinating lecture on the triad (see Triads, Major and Minor - YouTube) includes a nice demonstration of the overtone series and how this relates to the triad. But if we play a note and analyse its harmonics, we see the different harmonics a couple of octaves above the fundamental note. If we add another note a third above the original note, what actually happens is that the overall frequencies become "noisier" - there is a tussle between two fundamental notes which are nevertheless connected. 
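You can see this tussle in the arithmetic of the overtone series. Taking a fundamental at roughly middle C and a note a pure major third above it (frequency ratio 5:4), the first few harmonics of each partly coincide and partly collide. A quick sketch (just the arithmetic - not a model of perception):

```python
# The first few harmonics of two notes a pure major third apart (ratio 5:4).
# Illustrative arithmetic only - real instruments add far more than this.
fundamental_c = 261.63                   # approx. middle C, in Hz
fundamental_e = fundamental_c * 5 / 4    # a just-intoned major third above

harmonics_c = [fundamental_c * n for n in range(1, 7)]
harmonics_e = [fundamental_e * n for n in range(1, 7)]

print("C harmonics:", [round(f, 1) for f in harmonics_c])
print("E harmonics:", [round(f, 1) for f in harmonics_e])

# Some harmonics coincide exactly (the 5th harmonic of C and the 4th of E are
# both ~1308 Hz); others sit close enough to beat against each other. That
# mixture of agreement and interference is the "tussle" described above.
```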

Marina does say something about the experience of early musicians in hearing the consonance between two notes. This must have been fascinating and puzzling, because perception struggles to piece together the coherence of sounds which on the one hand interfere with each other, and on the other, agree with each other. The recursive operations of consciousness in the face of this oscillation are possibly comparable to the way that early art features recursive geometric tiling patterns (in many different cultures across the world).

Just as with the oscillations of perception with a tiling pattern, the oscillations of perception with a triad create a dynamic dance between noise and consonance. As Marina illustrates at the beginning of her talk, Wagner completely understands and demonstrates this dance at the beginning of The Ring. 

The consonance of the triad is not static - it moves. But it moves in a way in which perception becomes fascinated. Understanding this also helps to explain why not everybody in the world has the same music. The issue is not about consonance and dissonance - it is about the relationship between stability, order and noise. Western harmony is one way of managing a dance between these factors, but it depends on particular kinds of social relation which reflect the society that favours that way of doing things. There are many others, just as there are many other kinds of society. 

The role of noise in creating order is much overlooked. Kemp's "nice noise", and the triad itself, is a dynamic relation between noise and order. An energy imbalance is inherent in the first note connecting the physiology of perception and action with the physics of sound. The noise around music is essential in driving forwards the process of unfolding immanent structures in the sound as more energy is produced, and the physiology of expectation adapts. 

I thought a while ago that there was a clear distinction between the synchronic aspects of music and the diachronic aspects. (I wrote about this here: Redundancies in the communication of music: An operationalization of Schutz's ‘Making Music Together’ - Johnson - 2021 - Systems Research and Behavioral Science - Wiley Online Library and here: Communicative Musicality, Learning and Energy: A Holographic Analysis of Sound Online and in the Classroom | SpringerLink). Now I think the synchronic aspects are much more dynamic than I realised. The ancient and medieval theorists who spoke of the divisions of the string and the harmonics ignored the role that perception plays in appreciating the beauty of "real" music, as opposed to mere mathematical relations. But now I see (and hear) that what happens to perception in the experience of the structure of sound is just as dynamic as what happens over time as sound develops. 

There is also something to say here about evolution, and the evolution of music. Michael Spitzer, with whom I've had the privilege of some detailed conversations recently alongside the biologist John Torday, has suggested that music is fundamentally connected to the ocean. He asked me a few weeks ago, after I'd given a talk on "music and epigenetics", how the primeval ocean connects to Beethoven. It's a great question. Now, I think I would say that the ocean is a noisy environment (Michael says it is the most sonically rich environment on earth). The developmental process of life concerns the continual generation of order (negentropy). What do we need for this order-producing process? Information - in the form of selection - is one thing. Constraint is the flip-side of information, and this is also required (technically, this is known as redundancy). But noise is critical. It's only with noise that the latent structures of organisms - from cells upwards - can be "shaken" into finding new ordered configurations. It's the same process - from plankton to Puccini!