Tuesday, 31 October 2023

Iconicity and Epidemiology: Lessons for AI and Education

The essence of cybernetics is iconicity. It is partly, but not only, about thinking pictorially. More deeply it is about playing with representations which open up a dance between mind and nature. This is distinct from approaches to thought which are essentially "symbolic". Mathematics is the obvious example, but actually, most of the concepts one learns in school are symbols that stand in relation to one another, and whose relation to the world outside has to be "learnt". This process can be difficult because the symbols themselves are shrouded in rules which are often obscure and sometimes contradictory.

Iconic approaches make the symbols as simple as possible: a distinction, a game, a process - onto which we are invited to project our experience of a particular subject or problem. Iconicity was first considered by C.S. Peirce, who developed his own approaches to iconic logic (see, for example, Peirce.pdf (uic.edu)). Cybernetics followed in Peirce's footsteps, and the iconicity of its diagrams and technical creativity makes its subject matter transdisciplinary. It also makes cybernetics a difficult thing for education to deal with, because education organises itself around subjects and their symbols, not icons and games.

But thinking iconically changes things.

I am currently teaching epidemiology, which has been quite fun. But I'm struck by how the symbols of epidemiology - not just the equations, but the classifications of study types, the problematisation of things like bias and confounding, and so on - put barriers in the way of understanding something that is basically about counting. So I have been thinking about ways of doing this more iconically.

To do this is to invite people into the dance between mind and nature, and to do that, we need new kinds of invitations. I'm grateful to Lou Kauffman who recommended Lancelot Hogben's famous "Mathematics for the Million" as a starting point. 

Hogben's book teaches the context and history of mathematical inquiry first, and then delves into the specifics of its symbolism. That is a good approach, and one that needs updating for today (I don't know of anything quite like it). Having said that, there are some great online tools to do iconic things: The "Seeing theory" project from Brown university is wonderful (and open source): https://seeing-theory.brown.edu/  (again, thanks to Lou for that)

Then of course, we have games and simulations - and now we have AI. Here's a combination of those things I've been playing with, inspired by Mary Flanagan's "Grow a Game": Grow a Game - Mary Flanagan

My AI version is here: http://13.40.150.219:9995/

Basically, enter a topic, select a game, and chatGPT will produce prompts suggesting rule changes to the game to reflect the topic. Of course, whatever the AI comes up with can be tweaked by humans - but it's a powerful way of stimulating new ideas and thought in epidemiology.
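
Under the hood, very little is needed. Here is a minimal sketch of the idea (not the app's actual code), assuming the pre-1.0 openai Python library with an API key in the environment; the function name, prompt wording and model choice are all illustrative:

    # Hypothetical sketch of a "Grow a Game" prompt - not the app's real code.
    import openai  # assumes OPENAI_API_KEY is set in the environment

    def grow_a_game(topic, game):
        prompt = (f"Suggest three rule changes to the game '{game}' so that "
                  f"playing it would teach something about {topic}.")
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message["content"]

    print(grow_a_game("confounding in epidemiology", "Snakes and Ladders"))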

There's more to do here.

Friday, 27 October 2023

Computer Metaphors and Human Understanding

One of the most serious accusations levelled against cognitivism is that it imposed a computer metaphor over natural processes of consciousness. At the heart of the approach is the concept of information as conceived by engineers of electronic systems in the 1950s (particularly Shannon). The problem with this is that there is no coherent definition of information that applies to all the different domains in which one might speak of information: from electronics to biology, psychology, philosophy, theology and physics.

Shannon information is a particularly special case, unique in the sense that it provides a method of quantification. Shannon himself, however, made no pretence of applying this to phenomena other than the engineering situation he focused on. But the quantified definition contains concepts other than information - most notably, redundancy (which Shannon, following cyberneticians including Ashby, identified as constraint on transmission) and noise. Noise is the reason why the redundancy is there - Shannon's whole engineering problem concerned the distinguishing of signal from noise on a communication channel (i.e. a wire).
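
To make these quantities concrete, here is the standard textbook formulation in a few lines of Python (my illustration, not Shannon's notation): entropy H measures average surprise, and redundancy is the fraction of the source's maximum capacity that the message does not use.

    from math import log2

    def entropy(probs):
        # average surprise, in bits, of a source with these symbol probabilities
        return -sum(p * log2(p) for p in probs if p > 0)

    probs = [0.5, 0.25, 0.125, 0.125]   # an illustrative four-symbol source
    H = entropy(probs)                  # 1.75 bits
    H_max = log2(len(probs))            # 2.0 bits: all symbols equally likely
    R = 1 - H / H_max                   # Shannon's redundancy: 0.125
    print(H, H_max, R)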

Shannon was involved with the establishment of cybernetics as a science. He was one of the participants at the later "Macy conferences" where the term "cybernetics" was defined by Norbert Wiener (actually, it may have been the young Heinz von Foerster who is really responsible for this). Shannon would have been aware that other cyberneticians saw redundancy rather than information as the key concept of natural systems: most notably, Gregory Bateson saw redundancy as an index of "meaning" - something which was also alluded to by Shannon's co-author, Warren Weaver.

But in the years that followed the cybernetic revolution, it was information that was the key concept. Underpinned by the technical architecture that was first established by John von Neumann (another attendee of the Macy conferences), computers were constructed from a principle that separated processing from storage. This gave rise to the cognitivist separation of "memory" from "intelligence". 

There were of course many critiques and revisions: Ulric Neisser, for example, among the early cognitivists, came to challenge the cognitivist orthodoxy. Karl Pribram wrote a wonderful paper on the importance of redundancy in cognition and memory ("The Four Rs of Remembering": see karlpribram.com/wp-content/uploads/pdf/theory/T-039.pdf). But the information processing model prevailed, inspiring the first wave of Artificial Intelligence and the expert systems of the late 80s to early 90s.

So what have we got now with our AI? 

What is really important is that our current AI is NOT "information" technology. It produces information in the form of predictions, but the means by which those predictions are formed is the analysis and processing of redundancy. This is unlike early AI. The other thing to say is that the technology is inherently noisy. Probabilities are generated for multiple options, and somehow a selection must be made between those probabilities: statistical analysis becomes really important in this selection process. Indeed, within my own involvement with AI development in medical diagnostics, the development of models (for making predictions about images) was far less important than the statistical post-processing that cleaned the noise from the data, and increased the sensitivity and specificity of the AI judgement. It will be the same with chatGPT: there the statistics must ensure that the chatBot doesn't say anything that will upset OpenAI's investors!
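
To illustrate the kind of statistical post-processing involved (a toy example of my own, not the diagnostic project's code): a model emits noisy scores, and moving a decision threshold trades sensitivity against specificity.

    import numpy as np

    def sens_spec(scores, labels, threshold):
        preds = scores >= threshold
        tp = np.sum(preds & (labels == 1)); fn = np.sum(~preds & (labels == 1))
        tn = np.sum(~preds & (labels == 0)); fp = np.sum(preds & (labels == 0))
        return tp / (tp + fn), tn / (tn + fp)

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 1000)                    # ground truth
    scores = labels * 0.3 + rng.normal(0.35, 0.2, 1000)  # noisy model outputs
    for t in (0.3, 0.5, 0.7):
        s, p = sens_spec(scores, labels, t)
        print(f"threshold {t}: sensitivity {s:.2f}, specificity {p:.2f}")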

Information and redundancy are two sides of the same coin. But redundancy is much more powerful and important in natural systems, as has been obvious to researchers in ecology and the life sciences for many years (notably the statistical ecologist Robert Ulanowicz, the communication scientist Loet Leydesdorff, Bateson, Terry Deacon and others). It is also fundamental to education - but few educationalists recognise this.

The best example is in the Vygotskian Zone of Proximal Development. I described a year or so ago how the ZPD was basically a zone of "mutual redundancy" (here: "Reconceiving the Digital Network: From Cells to Selves" (researchgate.net)), drawing on Leydesdorff's description. ChatGPT emphasises this: Leydesdorff's work is of seminal importance in understanding where we really are in our current phase of socio-technical development.

Nature computes with redundancy, not information - and this is computation unlike how we think of computation with information. This is not to leave Shannon behind though: in Shannon, what happens is selection. Symbols are selected by a sender, and interpretations are selected by a receiver. The key to the ability to communicate is that the complexity of the sending machine is equivalent to the complexity of the receiving machine (which is a restatement of Ashby's Law of Requisite Variety - Variety (cybernetics) - Wikipedia). If the receiver doesn't have the complexity of the sender, there will be challenges in communication. With such challenges - whether because of noise on the channel or insufficient complexity on the part of the receiver - it is necessary for the sender to create more redundancy in the communication: sufficient redundancy can overcome a deficiency in the complexity of the receiver to interpret the message.
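
In Shannon's own terms (a standard textbook formulation, not a quotation from him), the information that survives a noisy channel is the mutual information I(X;Y) = H(X) - H(X|Y), where the "equivocation" H(X|Y) is what noise destroys. The noisy-channel coding theorem then says that a sender can communicate reliably at any rate below the channel capacity, provided enough redundancy is added through coding - which is exactly the compensating move described above.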

One of the most remarkable features of AI generally is that it is both created with redundancy, and it is capable of generating large amounts of redundancy. If it didn't, its capacity to appear meaningful would be diminished. 

For many years I have been fascinated (working with Leydesdorff) by the nature of redundancy in the construction of meaning and communication. Music provides a classic example of redundancy in communication - there is so much repetition, which we analysed here: onlinelibrary.wiley.com/doi/full/10.1002/sres.2738. I've just written a new paper on music and biology, to be published soon, which develops these ideas, drawing on the importance of what might be called a "topology of information" with reference to evolutionary biology.

It's not just that the computer metaphor doesn't work. The metaphor that does work is probably musical.

Monday, 4 September 2023

Wittgenstein on AI

Struck by what appears to be a very high degree of conceptual confusion about AI, I've been drawn back to the basic premise of Wittgenstein that the problems of philosophy (or here, "making sense of AI") stem from lack of clarity in the way language is used. Wittgenstein's thoughts on aesthetics come closest to articulating something that might be adapted to the way people react to AI:

"When we make an aesthetic judgement about a thing, we do not just gape at it and say: "Oh! How marvellous!" We distinguish between a person who knows what he is talking about and a person who doesn't. If a person is to admire English poetry, he must know English. Suppose that a Russian who doesn't know English is overwhelmed by a sonnet admitted to be good. We would say that he does not know what is in it. In music this is more pronounced. Suppose there is a person who admires and enjoys what is admitted to be good but can't remember the simplest tunes, doesn't know when the bass comes in, etc. We say he hasn't seen what's in it. We use the phrase 'A man is musical' not so as to call a man musical if he says "Ah!" when a piece of music is played, any more than we call a dog musical if it wags its tail when music is played."

Wittgenstein says that expressions of aesthetic appreciation have their origins as interjections in response to aesthetic phenomena. The same is true of our judgements of writing produced by AI: we said (perhaps when we first saw it) "Wow!" or "that's amazing". Even after more experience with it, we can laugh at an AI-generated poem or say "Ah!" to a picture. But these interjections are not indicators of understanding. They are more like expressions of surprise at what appears to be "understanding" by a machine.

In reality, such interjections are a response to what might be described as "noise that appears to make sense". But there is a difference between the interjection of someone who has a deeper understanding of what is going on behind the scenes, and that of someone who doesn't. One of the problems of our efforts to establish conceptual clarity is that it is very difficult to distinguish the signal "Wow!" from its provenance in the understanding, or lack of it, of the person making the signal.

Aesthetic judgement is not simply about saying "lovely" to a particular piece of art. It is about understanding the repertoire of interjections that are possible in response to a vast range of different stimuli. Moreover, it is about having an understanding of the constraints of reaction alongside an understanding of the mechanisms that produce the stimuli in the first place. It is about appreciating a performance of Beethoven when we also have some appreciation of what it is like to try to play Beethoven.

Finally, whatever repertoire one has for making judgements, one can find others in the social world with whom to communicate the structure of one's repertoire of reactions to AI. This is about sharing the selection mechanism for your utterances, and in so doing articulating a deeper comprehension of the technology between you.

I'm doing some work at the moment on the dimensionality of these different positions. It seems that this may hold the key to a more rational understanding of the technology and help us to carve a coherent path towards adapting our institutions to it. But in appreciating the dimensionality of these positions, the problem is that the interconnections between the different dimensions break.

It is easy to fake expertise in AI because few understand it deeply. That means it is possible to learn a repertoire of communications about AI without the utterances being grounded in the actual "noise" of the real technology. 

It is also easy to construct new kinds of language game about AI which are divorced from practice, but manage to co-opt existing discourses so as to give those existing discourses some veneer of "relevance". "AI ethics" is probably the worst offender here, but many words are also spent discussing the sociology of "meaning" in AI.

Equally, it is possible to be deeply grounded in the noise of the technology, but to find that the concepts arising from this engagement have no resonance with people who have no contact with the technics, or indeed are in some cases almost impossible to express as signals.

It is in understanding the dynamics of these problems that the dimensionality can help. It is also where experiments to probe the relationship between human communications about the technology and the technology itself can be situated.

Sunday, 23 July 2023

Exploring the Dark with AI

One of the consequences of a changing landscape of technology is that everyone is in the dark. What we need to do when everyone is in the dark is talk to those people who are most familiar with the dark to show us around their uncertainties. This is when interdisciplinary engagement can be most powerful and productive. 

In 1968 Arthur Koestler organised a symposium at Alpbach, Austria, which gave rise to a book of essays by the leading scientists of the day. The book is called "Beyond Reductionism: New Perspectives in the Life Sciences". The attendance list included Ludwig von Bertalanffy, Jerome Bruner, Viktor Frankl, Friedrich Hayek, Jean Piaget, Conrad Waddington and Paul Weiss. (The gender bias is unfortunately a sign of the times.)

If we were to create a similar meeting, who would we invite? Who has been shining lights into the darkness for some time, who might show us a way forwards? Whose conversations might benefit from deeper interdisciplinary connection? I think my list would include (in no particular order): Isabelle Stengers (philosophy), Mark Solms (neurobiology), Maxine Sheets-Johnstone (dance, philosophy), Peter Rowlands (physics), Antonio Damasio (psychology), Karen Barad (physics), John Torday (evolutionary biology), Sabine Hossenfelder (physics), Louis Kauffman (mathematics), Katherine Hayles (cybernetics), Lee Smolin (physics), Elizabet Sahtouris (evolutionary biology), Rupert Wegerif (education), Mariana Mazzucato (economics).

Most of those people won't see this message - but I think we should do something like this. Academia today is much changed from the world of 1968. Today we don't seem to believe in the dark much - everything is brightly lit with learning outcomes and assessment criteria and universities as businesses. Dark things happening - disease, war - put us into oscillation which is more dangerous than the initial triggers. 

Holistic thinking is, I suspect, much less easy today than it was in 1968. I have been talking to friends about the difficulty of getting young people involved in the Alternative Natural Philosophy Association (http://anpa.onl). Only those with well-established careers can afford to think holistically, or those hiding under the radar. Everyone else seems to just need to survive. But none of us will survive if we don't encourage holism among the young and discourage the managerial nonsense that education has become.

We all begin in the dark. Showing each other around is an important thing to do. 

Friday, 19 May 2023

The Digitalised Imagination

Just over 2 years ago I decided I wanted a bit of adventure in the tail-end of Covid, and gave up a slightly depressing management position at the University of Liverpool, and became a post-doc on a project on curriculum digitalisation at the University of Copenhagen. I thought at the time that digitalisation was the most important undercurrent in education, and I knew that it was a difficult thing to move towards. My best achievement had been at the Far Eastern Federal University in Russia, which I wrote about here: Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum (springer.com). The Copenhagen experience was nowhere near as good as the Russian experience, and I left Copenhagen for Manchester with a much deeper appreciation of what I had done in Russia. I just wished I'd done it in Switzerland!

During this time, and for seven years previously, I had been deeply involved in a medical diagnostic AI project whose core innovation I co-invented. It was obvious that AI was a tidal wave that was about to hit education, and much of my frustration in Copenhagen was that very few people were really interested. They are now, like everyone else.

There is a risk that AI sweeps the digitalisation agenda away. After all, why teach the kids to code when the computer will do it for you? This kind of statement underpins errors in the ways that digitalisation was conceived - particularly in Copenhagen and many other European universities. It also underpins the difference between the institutional approach of Copenhagen and the approach I took in Russia. 

Digitalisation is not about skill or competency. It is not about "digital literacy" (whatever that means!). It is about imagination. This was understood by the Russians, and dogmatically avoided in Copenhagen. The deep problem is the sanctifying of "competency" within European education, and the EU has been particularly pernicious in pushing this. Despite the sheer lack of insight as to what "competency" is (ask anyone to define it!), it is continually asserted that this is the thing education must deliver.

Now in the new AI world that is opening up in front of us, the biggest threat is not technology, but poverty of the imagination. And imagination today means (partly) the "technical imagination". It is about understanding the realm of possibility under the surface, behind the interface - it is the Freudian unconscious of the technical world which through the working of creativity can find expression in the interfaces we produce. 

With an imaginative collapse, humanity becomes enslaved. While the demands of the technical imagination are going to encompass a huge range of disciplines, skills, ideas and relationships, we will need our new tools to oil the wheels of our discourse and knowledge and to find new ways of organising ourselves. It is to steering this process that education needs to direct itself. But ironically, the university as it is currently constituted is geared up for imaginative collapse and corporate takeover.

Digitalisation is about changing this. It's not going to be easy. 

Tuesday, 16 May 2023

The Glass Bead Game of Human Relations

I attended an interesting session today on burnout and stress at work. There are many conflicting analyses of these problems. On the one hand, there are those studies which focus on the individual, seeing stress as an attribute of individuals, and "stressors" as independent variables producing the experienced symptoms of stress. There are clearly epistemological problems with this, not least that stress is rather like a headache - something that is subjectively experienced, but cannot be directly witnessed by others (only its effects). Searle calls this a "subjective epistemological" phenomenon (to be contrasted with "objective epistemological" things like historical dates, or "subjective ontological" things like money or universities, or "objective ontological" things like the motion of planets, or light). The notion of the "self" that is stressed is the biological/psychological entity bounded by its skin. Let's call this Stress1.

The alternative view of stress is that it is a manifestation of social relations and communication. This entails a different conception of the self as something that is constructed within communication, particularly the communication of the first person "I". The self in this sense is more like Searle's "ontological subjective" category: the reality of a self is construed by the expectations which arise as a result of social engagement and "positioning". This is the self as it is seen by others. It is also the self which can be oppressed by others directly, or by situations which result from others taking insufficient care of environmental factors that can negatively impact on the expression of the self. This is what can happen in situations where people become stressed. Communicative theories which examine stress in these circumstances include things like the "double bind", which is unfortunately extremely common in many workplaces. This is Stress2.

Both perspectives on the stressed self - the ontological-subjective self and the epistemological-subjective self - are important. However, in terms of practical steps to eliminate stress, the two perspectives imply different approaches. Stress1 is addressed through treatment of the individual - rather like giving someone with a headache paracetamol: mindfulness, etc. Stress2 is addressed through changing the structures of communication. This is much harder to do, and so Stress1 dominates the discourse, and its (rather hare-brained) remedies go relatively unchallenged.

Stress2 is difficult because it basically requires the making of better decisions at the top of an organisation. Bad decisions will cause stress. Good decisions ought not to; instead they should create synergy, wellness and productivity. Decisions are the result of the skill of decision-makers, so the question really is how we create good decision-makers. Here we see that the incentives for people to climb the ranks of decision-making encourage behaviour which is anathema to the making of "good decisions". People are rewarded instead for hitting targets, increasing profits, and driving down costs. All of which comes at a human cost.

Even if better criteria could be defined to encourage and recruit better decision-makers, it will always be possible to "fake" criteria if they are in the form of new targets or KPIs. This won't work.

This has led me to wonder about what Hermann Hesse's "Glass Bead Game" might actually have been (or might one day be in the future). Why do the elites of 25th-century Castalia take this game, which is a bit like music (as Hesse describes it), so seriously? There is something important about it being a game.

A game is not a set of criteria. It is a practice which requires the learning of skill to play well. As one learns to play well, one deepens in insight. As one deepens in insight, one might become more aware and able to act in the world in a way where the making of good decisions becomes more probable. Importantly, to play the glass bead game is not to "hit targets". It is not a KPI. It is an art. Only those who are more experienced in the game can judge those who are less experienced, but gradual mastery equips one with the skill to make good judgements oneself. Of course, Joseph Knecht decides the game is not for him, and a different spiritual path takes him elsewhere. But it is still a spiritual path - perhaps a different kind of game.

What if one's progression up the ranks of decision-making powers was organised like this? Would we have fewer psychopaths and more enlightened individuals at the top of our organisations? I think this is what Hesse was driving at. After all, he had seen the worst kind of management psychopaths in history in the Nazis. He must have asked himself what novel kind of arrangements might make the making of Nazis less probable. 

The other interesting thing about this though is that the Glass Bead Game is technological. Is there a way in which we could organise our technologies to produce a radically different kind of incentive scheme for those who aspire to become custodians of society? We clearly have some very powerful and novel technologies in front of us which should cause us to reflect on a better world that we might be able to build with them. 

Sunday, 14 May 2023

Positioning AI

I've been creating a simple app for my Occupational Health students to help them navigate and inquire after their learning content in flexible ways. It's the kind of thing that the chatGPT API makes particularly easy, and it seems worth playing with, since chatGPT won't be the only API that does this kind of thing for long (Vicuna and other open source offerings are probably the future...)

As with any tool development, the key is whether the people for whom the tool is made find it useful. This is always a tricky moment, because others either do or don't see that the vision of what they do (manifested in the technics of what is made) actually aligns with what they perceive their needs to be. Like a bad doctor, I risk (like so many technical people) positioning students as recipients of techno-pedagogical "treatment". (Bad teachers do this too.)

We've seen so many iterations of tools where mouse clicks and menus have to be negotiated which seem far removed from real wants and needs. The VLE is the classic example. I wrote a paper about this many years ago with regard to Learning Design technology, which I am reflecting on again in the light of this new technology (see Microsoft Word - 07.doc (researchgate.net)). I used Rom Harré's Positioning Theory as a guide. I still think it is useful, and it makes me wonder how chatGPT might be any different in terms of positioning.

Harré's Positioning Theory presents a way of talking about the constraints within which the Self is constructed in language and practice. There are three fundamental areas of constraint:

  1. The speech acts that can be selected by an individual in their practice
  2. The positions they occupy in their social settings (for example, as a student, a teacher, a worker, etc)
  3. The "storyline" in their head which attempts to rationalise their situation and present themselves as heroic. 

With positioning through the use of tools, learners and teachers are often seen as recipients of the tool designer's judgement about what their needs are. This is always a problem in any kind of implementation - a constant theme in the adoption of technology. Of course, the storyline for the tool designer is always heroic!

But chatGPT doesn't seem to have had any adoption problems. It appears to most people who experience it that this is astonishing technology which can do things we have long wanted easy solutions for: "please give me the answer to my question without all the ads, and without the need to drill through multiple websites! (and then write me a limerick about it)". But in many cases, our needs and desires have been framed by the tedium of the previous generation of technology. It could have been much better - but it wasn't, for reasons which are not technical but commercial.

However, could chatGPT have positioning problems? This is an interesting question, because chatGPT is a linguistic tool. It, like us, selects utterances. Its grasp of context is crude by comparison with our awareness of positions, but it does display some contextual (positioning) awareness - not least in its ability to mimic different genres of discourse. Clearly, however, it doesn't have a storyline. Yet because of the naturalness of the interface, and its ability to gain information from us, it is perfectly capable of learning our storylines.

In a world of online AI like chatGPT or Bard, the ability to learn individuals' storylines would be deeply disturbing. However, this is unlikely to be where the technology is heading. AI is a decentralising technology - so we are really talking about a technology which is under the direct control of users, and which has the capacity to learn about its user. That could be a good thing.

I might create a tool for my students to use and say "here is something that I think you might find useful". Ultimately, whether they find it useful or not depends on whether what they perceive as meaningful matches what I perceive as meaningful to them. But what is "meaningful" in the first place?

What students and teachers and technologists are all doing is looking for ways in which they (we) can anticipate our environment. Indeed, this simple fact may be the basic driving force behind the information revolution of the last 40 years. A speech act is a selection of an utterance whose effects are anticipated. If a speech act doesn't produce the expected effects, then we are likely to learn from the unexpected consequences, and choose a different speech act next time. Positioning depends on anticipation, and anticipation depends on having a good model of the world, and particularly, having a storyline which situates the self in that model of the world. 

Anticipations form in social contexts, in the networks of positionings in which we find ourselves across our different social roles. ChatGPT will no doubt find its way into all walks of life and different positions. Its ability to create difference in many different ways can be a stimulus to revealing ourselves to one another in different social situations. But there are good and bad positionings. The danger is that we allow ourselves to be positioned by the technology as recipients of information, art, AI-generated video, instruction, jokes, etc. The danger is that we lose sight of what drives our curiosity in the first place. That is going to be the key question for education in the future.

This is where the guts of judgement lie. What is in a position is not merely a set of expectations about the world around us; it is deeply rooted in our physiology. If we are not to be passively positioned by powerful technology, it will become necessary for us to look inwards on our physiology in our deepest exercise of judgement. This is what we are going to need to teach succeeding generations. Information hypnosis, from which we have been suffering for many years on the web, cannot be the way of the future.

Sunday, 7 May 2023

The Endosymbiotic Moment

It's become increasingly obvious that there is something quasi-biological about current AI approaches. It's not just that there is a strong genotype-phenotype homology in the way that relatively fixed machine learning models work in partnership with adaptive statistics (see Improvisation Blog: AI, Technical Architecture and the Future of Education (dailyimprovisation.blogspot.com)). More importantly, the unfolding evolutionary dynamics of machine learning also appear to confirm some profound theories about cellular evolution. In my book about the future of education, written four years ago now, I said that there would come an "endosymbiotic moment" between education and technology. Events seem to be playing that out, but now I think it's not just education that is in for an endosymbiotic moment, but the whole of society.

This may be why people like Elon Musk, who have had a big stake in AI research, are calling for a "pause". Why? Is it wishful thinking to suggest that it may be because the people most threatened by what is happening are people like him? But it may be.

The essence of biological evolution, and specifically cellular evolution, is that a boundary (e.g. the cell wall) must be maintained. The cell wall defines the relationship between its inside and its outside. Given that the environment of the cell is constantly changing, the cell must somehow adapt to threats to its existence. The principal strategy is what Lynn Margulis called "endosymbiosis". This is basically where the cell absorbs aspects of its environment which would otherwise threaten it. For example, it leads to the presence of mitochondria within the cell which, Margulis argued, were once independent simple organisms like bacteria. Endosymbiosis is the means by which the cell becomes more like its environment, and through this process, is able to anticipate any likely threats and opportunities that the environment might throw at it. It is also the way in which cells acquire "memory" of their evolutionary history - a kind of inner story which helps to coordinate future adaptations and coordinations with other cells. From this perspective, DNA is not the "blueprint" for life, but rather the accreted result of ongoing R&D in the cell's existence.

What's this got to do with technology? The clue is in a leaked memo from Google (Google "We Have No Moat, And Neither Does OpenAI" (semianalysis.com)), which highlighted the threat to the company's AI efforts not from competitor companies, but from open source developments. All corporate entities, whether companies, universities or even governments maintain their viability and identity (and in the case of companies, profits) by maintaining the scarcity of what they do. That means maintaining a boundary. Often we see corporate entities doing this by "swallowing up" aspects of their environment which threaten them. The big tech giants have made a habit of this. 

The Google memo suggests something is happening in the environment which the corporation can't swallow. This is the open source development of AI. Of course, there is nothing new about open source, but corporations were always able to maintain an advantage (and maintain scarcity) in their adoption of the technology, often by packaging products and services together to offer them to corporations and individuals. Microsoft has had the biggest success here. So why is open source AI so much more of a problem than OpenOffice or Ubuntu?

The answer to this question lies in the nature of AI itself. It is, fundamentally, an endosymbiotic technology: a method whereby the vast networked environment of the internet can be absorbed into a single technological device (an individual computer or phone). That device, which then doesn't need to be connected to the internet, can reproduce the variety of the internet. This provides individuals equipped with the technology with a vastly increased power to anticipate their environment. Up until this point, the tech industry has aimed to empower individuals with some anticipatory capability, but to maintain control of the tools which provide this. It is that control of the anticipatory tools which is likely to be lost by corporations. And it will not just be chatbots - it will be all forms of AI. It is what might be called a "radical decentralisation moment".

This has huge implications. Intellectual property, for example, depends on scarcity creation. But what happens if innovation is now performed by (or in conjunction with) machines which are ubiquitous and decentralised? New developments in technology will quickly find their way to the open source world, not just because of some desire to be "open" but because that is the place where they can most effectively develop. Moreover, open source AI is much simpler than open source office applications. It has far fewer components: a training algorithm + data + statistics is just about all that's needed. Who would invest in a new corporate innovation in a world where any innovation is likely to be reproduced by the open source community within a matter of months? (I wonder if the Silicon Valley Bank collapse carried some forewarning of this problem.)

But it's not just the identities of tech businesses which are under threat. What about education? What about government? Are we now really so sure that the scarcity of the educational certificate, underpinned by the authority of the institution, is safe from an open source challenge? (Blockchain hasn't gone away, for example.) I'm not now - and the way that universities have responded to chatGPT has highlighted their priority to "protect the certificate!" like the queen in the hive. If the certificate goes, what else does education have? (I'm not suggesting "nothing", but the certificate is the current business model and has been for decades.)

Then there is government and the legal frameworks which protect the declaration of scarcity in commerce through IP legislation and contracts. The model of this was the East India Company, where protecting territories and trade routes with the use of force underpinned imperial wealth. What if you can't protect anything? What kind of chaos does that produce? AI regulation is not going to be a shopping list of do's and don'ts, because it's going to be difficult to stop people doing things. China is perhaps the most interesting case. No government can control a self-installed, non-networked chatbot: it's like kids in the Soviet Union listening to rock and roll on x-ray film turned into records. Then of course there'll be terrorist cells arming themselves with bomb-making experts. We are going to need to think deeper than the ridiculously bureaucratic nonsense of GDPR.

Our priority in education, industry and government is going to need to be to restabilise relations between entities with identities which will be very different from the identities they have now. In the Reformation, it was the Catholic church which underwent significant changes, underpinned by major changes in government. The English civil war and the restoration produced fundamental changes to government, while the industrial revolution produced deep changes to commerce. But this is a dangerous time. Historical precedent shows that changes on this level are rarely unaccompanied by war. 

Monday, 10 April 2023

Quantum Ears

It seems obvious to say that music starts at time a and finishes at time b, and in between goes on a journey. But I'm beginning to hear it differently. I don't think there is a time a and a time b: they are constructed as part of our sense-making about what happens to us when we listen or play. Importantly, our sense-making must omit certain key aspects of making music. The principal dimension that I think is omitted is noise, or the energy that is continually shaking our senses and causing our physiology to find new ways of organising itself. 

If we consider what noise does, then the journey of music over what is perceived as time is entirely co-present at any "now". Music is more like a space to explore than a path to follow. From the very moment that we both make and don't make a sound, the whole space is there, existing in the dynamic between physiology and the universe.

Harrison Birtwistle seems to have heard music like this, and his thought has had a big influence on me. I was particularly struck by Birtwistle's appreciation of Paul Klee - particularly Klee's pedagogical sketchbooks. Birtwistle says:

Like Paul Klee, I'm taking a line for a walk. But the lines Klee draws are pure continuum, they look like a map of a walk or a journey. And this is how we usually think of journeys - fluid things which are uninterrupted. But when you're in the process of journeying, you perceive them differently. You don't look straight ahead, you look to the right and then to the left. And when you turn to the left you fail to take in the events on the right and vice-versa. In retrospect you think of the journey as being a logical progression from one thing to another, but in actual fact it consists of a series of unrelated things, which means that you're simply making choices all the time, choices about where to look. It's to do with discontinuity. You have a continuum, but you're cutting things out of it while you look the other way.

Music is discontinuous in essence. The "continuity" is something that perception imposes on us, making us ignorant of the dynamics that drive its discontinuities. Deep down, what we perceive in Mozart or Bach (and in Birtwistle) is coherence, which is not the same thing as continuity. 

Coherence does not need time as we understand it. It represents the deep symmetry of nature, in which what we call time is a parameter. In quantum mechanics, this deep symmetry is what balances out local (physically proximate) phenomena with non-local (physically distant) phenomena. For there to be "spooky action at a distance" (which there appears to be), then there must be some underlying balancing that goes on between what happens locally and something happening non-locally. All matter, including our physiology - and our ears - will partake in this universal symmetry.

Because of this complex symmetrical mechanism, the energy of the quantum world is always buzzing and interfering with our physiological substrates. To deal with this, all life needs to construct niches. The space of music is its niche. To be entranced by music is to be drawn into its niche, and then (in the case of Western classical music) to be convinced of music's "journeying". But the journey is an illusion. Music immediately presents a multiplicity of the same thing. Heterophony is the closest we get to this kind of thing. 

Taking time and continuity out of the music equation carries important lessons for other aspects of life. Learning, like music, is discontinuous, but learners and teachers are forced to deny this by the expedience of institutions which must regiment educational practice. Equally, the climate emergency is often portrayed as a "race against time" - but rather like the pathology of education, the more we impose a linear model on what is essentially a discontinuous system, the more denatured (despite the good intentions of activists) we become, not less. The same is true in politics: our only understanding of a regulatory system is one which works in a linear, continuous fashion, and which in operation creates more alienation.

Thursday, 6 April 2023

Universal Uncertainty

Measuring the "speed" of change is tricky - speed is relational. There does however seem to be a lot more uncertainty around: anticipating the future means grappling with very high degrees of contingency. When we say "things" change, what we mean by "things" is not so much "stuff happening in the world", but rather our relation to "stuff happening in the world". It's not the stuff which is uncertain. It is the relationship between our context and perception and "stuff" which is generating more contingency in our decision-making.

Uncertainty means disorder in relations. We can measure "maximum disorder" of relations as the entropy of the stuff in the world (particularly when new technologies increase the number of options we have, or a new virus radically restricts our capacity to adapt to the world) in relation to the entropy of our capacity to deal with it. If the equations don't balance, then there will be uncertainty.  At some point in the future, these equations will balance out again - and on it goes. This appears to be an evolutionary principle. 
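
One way to make this precise (my gloss in Ashby's terms, not a formula from the post): if H(E) is the entropy of the disturbances the environment can produce, and H(R) the entropy of the responses we can muster, unresolved uncertainty grows with the gap H(E) - H(R). Ashby's Law of Requisite Variety says that only variety in the regulator can absorb variety in the environment, so balance is restored either by expanding H(R) or by constraining H(E).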

COVID was a good example of this explosive relative uncertainty. A disruption at a biological level of organisation impacted on the normal institutional mechanisms for dealing with uncertainty (see here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7518093/). As a result, it became very difficult to coordinate expectations across society with normal regulatory mechanisms. This necessitated an authoritarian doctrine of "follow the science", backed up with the threat of force, as a way of radically changing the way people lived. The irony of this was that science is the business of exploring uncertainty, while the COVID authoritarian science (rather like "school science") excluded uncertainty from its official pronouncements, leaving doubt and inquiry in the hands of conspiracy theorists. "Following the science" is not the same as "being scientific".
 
I've been thinking about this diagram, presented by Jerry Ravetz to explain Post-Normal Science. All science displays degrees of uncertainty. In a presentation I gave the other week, I contrasted images from the Hubble telescope and images from the James Webb telescope. I said that while the technology improves, and we get more information (in fact, the maximum entropy of information increases), there is still a relation between those things which we are certain about, and those things which we are not certain about. The relation between certainty about craters on the moon, and certainty about planets in other galaxies is constant. 

In the context of COVID, this is useful, because there were things which we knew were high risk for transmission, and other things about which there was much argument. With COVID, there were also high decision stakes alongside high scientific uncertainty. The difficulty was that government not only failed to convey the systems uncertainty, but in fact attenuated it.
 
This diagram is also interesting because it reveals that there is a gradation of causal relationships in the "systems uncertainty" direction. Attributions of causation between factors become more contingent the further one goes from left to right. It is perhaps no surprise that contingency in decisions also rises, and perhaps this is related to the "stakes" of those decisions. How might we think about this gradation of causal relationships?
 
These must be related to the communication dynamics that are established in the light of experience. Hume argued that causes were the outcome of communication dynamics between scientists in the light of their experiments. I think he was right (although lots of people don't). Regularity of events was the key ingredient for producing scientific consensus. The problem is that with higher systems uncertainties, the likelihood of regularity in events becomes less. Systems become more complex, more contingent, their mechanisms harder to agree on. This lack of social agreement can impact the decision-stakes: failure to agree scientifically can produce political chaos and social disorder.

With COVID, the fundamental disruptive mechanism was a bio-techno-social dynamic, where technology took the forms of apps, masks, vaccines, etc. It's actually very similar with AI at the moment. That is also a bio-techno-social disruption, where it's not a disease that represents the "bio" bit, but our cognition and emotions. The challenge for institutions is to find a way of renormalising relations. That requires finding new perspectives from which to view the dynamics we are in.

In some ways, COVID presented an easier challenge because it (sort of) went away, and life could get "back to normal". AI is much more serious because the institutional discourse relations cannot grasp what is happening in the bio-techno-social mechanism, and are constantly blind-sided by "the next cool thing". I wonder if these are the conditions within which Copernicus and Galileo paved the way to a social gestalt-switch which restabilised European institutions.

In order to get on top of what is happening to technology, we are going to need a similar gestalt-switch.

Thursday, 9 March 2023

The Maximum Entropy of Work

I'm very doubtful that the current trajectory of AI will make our lives easier. Indeed, the impressive progress of AI has led me to reflect on the fact that despite huge technological advances over the last 50 years, the lives of the majority of people have got harder and more uncertain. If I compare my own career with that of my dad, he was able to jog along in a job he didn't much like, but basically survived without too much threat, and retired at 58 with a very generous pension. My journey, by comparison, has been a rollercoaster (indeed, a rollercoaster with some bits of the track missing!) and I am seeing people (particularly young academics) in their 30s faring even less well. So what's going on? And - before I delve into that further - it's too easy and lazy simply to blame "capitalism": we need to be more precise.

I suspect the common denominator in the work equation is technological advancement alongside rigid institutional structures. This is not to denigrate technology - it is amazing - but it is to ask deeper questions about our institutions. I think there is a systems explanation for what is happening.

When a new technology arrives in a social system (a society, a business, an institution) it increases the possibilities for surprise in that system. Quite simply, new things become possible which people haven't seen before. Since information entropy is a measure of surprise, we can say that the "maximum entropy" of the social system increases, where the maximum represents what is possible - not necessarily what is observed.

What is observed in a social system with a new technology is a degree of surprise (some degree of innovation is observable), but nowhere near the maximum amount of possible surprise. So observable entropy increases, but the maximum entropy increases more. What does this mean for work and workers?

A bit like voltage in electronics, the difference between the maximum potential and the observed reality creates a space in which activity is stimulated. The bigger the space between observed entropy and maximum entropy, the greater the stimulation for activity. This activity is what we do in work. More precisely, work becomes a process of exploring the many ways in which the possible new configurations of practice and technology can be realised. Some of that work is called "research", other aspects of this work might be called "operations", other aspects of it might be called "management", but whatever kind of activity it is, it increasingly involves the exploration of new options.

This "work space" between the maximum entropy and the observed entropy is, as David Graeber famously observed in "Bullshit Jobs", mostly pointless. The work basically means doing things that have been, or can be, done in many different ways: it is effectively "redundant". But that's the point - redundancy generation is what must go on in the space between the maximum entropy and the observed entropy. And it is exhausting and dispiriting, particularly as it increases.
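
A toy calculation makes the dynamic visible (my illustration, with made-up numbers): a new technology multiplies the number of possible ways of working, but observed practice barely changes, so the gap - the redundancy-generating "work space" - balloons.

    from math import log2

    def entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    before = [0.4, 0.3, 0.2, 0.1]                             # 4 ways of working, all in use
    after = [0.38, 0.28, 0.18, 0.08] + [0.01] * 8 + [0.0] * 4 # 16 possible ways, most barely used

    gap_before = log2(4) - entropy(before)    # ~0.15 bits
    gap_after = log2(16) - entropy(after)     # ~1.69 bits: a tenfold larger "work space"
    print(gap_before, gap_after)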

This is a bleak outlook because of all recent technologies to increase the maximum entropy, AI is in a league of its own. It will accelerate the growth of maximum entropy beyond anything we have yet seen. So what will happen with the observed entropy and the work in the space between?

The problem is the increasing gap between observed entropy and maximum entropy. What keeps the observed entropy so much lower is the structure of institutions. The deepest risk is that the maximum entropy goes off the scale, while the observed entropy - the visible interface of existing institutions - doesn't change very much at all. That will create a pressure-cooker atmosphere within the work system. There will be work, and indeed more of it than ever before, but work will become increasingly febrile and pointless. It will make us sick: the mental health of workers, students and everyone else will suffer.

It would be better if the redundancy-generating space were kept stable rather than increasing. This might be achieved if we consider the drivers for increasing maximum entropy through technology. One of the drivers is noise. It is the noise generated by an existing technology (for example, an AI) which drives the innovation to the next iteration of the technology. If human labour were seen as the effective management of noise, rather than the generation of redundancy, then society might be steered in a way which doesn't cause internal collapse.

Another way of saying this is to say that uncertainty is the variable to manipulate collectively, and only humans can manipulate this variable. One of the problems with increasing maximum entropy is that labour is directed to do tasks that can be clearly defined. We see this with chatGPT at the moment: thousands of academics saying "we can use it to do <insert name of well-defined task>". This is looking for your keys where the light is, not where you lost them.

One of the things the technology might be able to do is to direct human labour to where the uncertainty is greatest. Focused in this way, the work is really about exploring differences in understanding between different people of things which nobody is clear about. This is high variety, convivial, high level work for the many. Part of this work is work to explore the possibilities of new technology - the "redundancy work" in the space between observed and maximum entropy. But the other part of the work is to coordinate intellectual effort in exploring the noise of uncertainty, and the result of that work can help manage the gap between maximum entropy and observed entropy. 

What does this look like practically? I think, given that uncertainty is experienced physiologically, and exploring uncertainty together is deeply convivial, this looks like work with a focus on wellness, maybe using technology to identify where wellness might be threatened. 

Creating a "wellness system" is a possibility. The consequences of not doing this look far more dire than anyone can yet imagine. 

Wednesday, 8 March 2023

Birtwistle's Seriousness

I attended the commemorative concert for Harrison Birtwistle on Sunday. It was a powerful occasion, and it has led me to think about the abandonment of seriousness in art which seems to have occurred in the last 20 years or so. Birtwistle was a serious artist - by which I mean that he never sought popularity. He was committed to his project, crystal clear in its direction and what he was doing, and uncompromising in his attitude towards whether anybody else liked it or not.

He was lucky in the sense that his formative years coincided with a post-war spirit that supported experimental music that was often hard on the ears, but which allowed for the exploration of deeper meaning. This supportive spirit has pretty much gone with late capitalism's demand that a market must exist for whatever the artist produces. Birtwistle now has a niche because it was able to grow in better times. How could such a niche be constructed now? What do we lose if we lose our ability to do this?

Part of the problem in answering this is that art is not always for the present or a present audience - it is for a future where things that may not resonate in the present find resonance decades after the artist is dead. Birtwistle's music will make more sense and convey its power and meaning more overtly in future worlds. How do we know which art will produce this effect? This is where some kind of deeper knowledge of what matters is important. Some people can tune into this and know what matters, what needs to be preserved. Those people too are now threatened in an anti-intellectual climate which even (or maybe particularly) in universities favours work that delivers immediate gain. 

Universities are part of society's mechanism for selecting what matters. They are now failing to do this. The decline of the professoriate both in quality and power in steering institutions is a signal of what has gone wrong. It is difficult to see a way back, although it would likely feature technology I would guess. I'm not sure how though. 

If we have no mechanism for selecting what matters, the future state of knowledge is threatened. It is an analogue of the current ecological crisis - the decline in diversity of species. 

The Birtwistle piece that opened the concert was a short duet called "The message". This took inspiration from an artwork by Bob Law containing the words: "The purpose of life is to pass the message on". Birtwistle's seriousness lies in the fact that he understood this. 

All seriousness is about understanding this message.

And we can hope that the best of his music should be a sufficient transducer - like this: Harrison Birtwistle - Earth Dances - YouTube

Monday, 30 January 2023

AI, Technical Architecture and the Future of Education

I gave a presentation to the leaders of Learning Support at the University of Copenhagen this morning. I will write a paper about this, but in the meantime this is a blogpost to summarise the key points.

I began by saying that I would say nothing about "stopping the students cheating". I said basically, as leaders in learning technology in universities, there is no time to worry about this. The technology is improving so fast, what really matters is to think ahead about how things are going to change, and the strategies that are required to adapt. 

I said that we are in "Singin' in the Rain" - the movie is a good guide to the tech-rush that's about to unfold. 

I also referred to the 2001 Spielberg movie AI, which I didn't understand when I first saw it. I think we will look back on it as a prescient masterpiece. 

My own credentials for talking about AI are that I have been involved in an AI diagnostic project in Diabetic Retinopathy for 7 years at the University of Liverpool; after £1.1m of project funding and then £2m of VC support, this has now been spun out. When the project started I was an AI sceptic (despite being the co-inventor of the novel approach that has led to its success!). I'm not sceptical now. 

I said that what is really important to understand is how the technology represents a new kind of technical architecture. I represented this with a diagram:
[Diagram: the basic architecture - text output from the model fed back as input, with a statistical layer refining the selection]
As a term, "AI" is a silly description; "Artificial Anticipation" is much better. The technology is new. It is not a database: it consists of a document called a model (which is a file) that can be thought of as a kind of "sieve". The structure of the sieve is produced through a process called "training", which requires lots of data and lots and lots of time, drawing on huge amounts of material from the internet. Training requires "data redundancy" - many representations of the same thing. 

Since academics have spent the last 30 years busily writing papers which are very similar to one another, chatGPT has had rich pickings on which to train. 

If you want to understand the training process, I recommend looking at Google's "Teachable Machine" (see http://teachablemachine.withgoogle.com). This allows you not only to train a machine learning model (to recognise images or objects), but also to download the model file and write your own programs with it. It's designed for children - which is how simple all of this stuff will be quite soon...
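As an illustration of what "downloading the model file and writing your own programs with it" can look like, here is a minimal Python sketch. It assumes you have exported Teachable Machine's Keras (.h5) image model and installed tensorflow and Pillow; the file names are illustrative, and the 224x224, [-1, 1] input convention follows Teachable Machine's own sample code.

```python
# A sketch of using a model file exported from Teachable Machine.
# "keras_model.h5" and "test.jpg" are illustrative names.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5")  # the downloaded "sieve"

# Teachable Machine's image models expect 224x224 RGB input scaled to [-1, 1]
img = Image.open("test.jpg").convert("RGB").resize((224, 224))
x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0
x = x[np.newaxis, ...]  # batch of one

probs = model.predict(x)[0]  # predictions are probabilities, not lookups
print("most likely class:", int(np.argmax(probs)), "with p =", float(probs.max()))
```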


Once trained, the "model" does not need to be connected to the internet (chatGPT's model doesn't browse the web, even though the service is accessed online). The model can make predictions about the likely categories of data it hasn't seen before (unlike a database, which gives back what was put into it in response to a query). The better the training, the better the predictions. 

All predictions are probabilities. In chatGPT, every word is chosen on the basis of the probabilities generated by the model. The basic architecture looks like the diagram above. Notice how the output text is fed back into the model as input. Notice also the statistical layer, which performs something called "autoregression" to refine the selection from the options the model presents. 
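For readers who want the architecture in miniature, here is a toy Python sketch of the loop: a stand-in "model" that returns scores for the next word, a statistical layer that turns the scores into probabilities and samples from them, and the output fed back in as input. Nothing here is chatGPT's actual code; it only illustrates the shape of the process.

```python
# A toy autoregressive loop: model -> statistical layer -> output -> input.
import numpy as np

VOCAB = ["the", "rain", "sings", "in", "spain", "."]

def toy_model(context):
    """Stand-in for the trained model: raw scores for the next token."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(VOCAB))

def sample(logits, temperature=0.8):
    """The 'statistical layer': turn scores into probabilities, then choose."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return np.random.choice(len(VOCAB), p=probs)

context = ["the"]
for _ in range(8):
    next_id = sample(toy_model(context))
    context.append(VOCAB[next_id])   # output fed back as input
print(" ".join(context))
```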

This architecture is where the clues are to how profound the impact of the technology is going to be. 

Models are not connected to the internet. That means they can stand alone and do everything that chatGPT does. We can have conversations with a file on our device as if we were on the internet. Spielberg got this spot-on in AI. 

Another implication of this is, as I (carefully) pointed out to some Chinese students I gave a presentation to a few months back (at Beijing Normal University), the conversations you have can be entirely private. There need not be any internet traffic. Think about the implications of that. 

We are going to see AI models on personal devices doing all kinds of things everywhere.

I made a couple of cybernetic references: one to Ashby's homeostat, because the homeostat's autonomous units coordinated their behaviour with each other much as AIs are likely to provide data on which other AIs train. This is likely to be a tipping point. I strongly suggested that people read Andy Pickering's "The Cybernetic Brain".

There's something biological about this architecture. In most machine learning applications the model itself does not change: chatGPT's model does not retrain itself, because retraining takes huge amounts of resource and time. What happens instead is that the statistical layer which refines the selection adapts. Biologically, it is as if the model were the genotype (DNA) and the statistical layer the phenotype (the adaptive organism). 
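A toy sketch of that separation, reusing the sample() function from the sketch above - the adaptation rule is invented purely for illustration and is not how chatGPT works:

```python
# Genotype/phenotype split: the model is loaded once and never changes,
# while the statistical selection layer adapts during use.
class AnticipatorySystem:
    def __init__(self, model):
        self.model = model        # fixed "genotype": weights never retrained
        self.temperature = 1.0    # mutable "phenotype": selection behaviour

    def respond(self, prompt):
        logits = self.model(prompt)              # model is assumed callable
        return sample(logits, self.temperature)  # sample() as sketched earlier

    def adapt(self, feedback_score):
        # nudge the selection layer: poor feedback -> cooler sampling
        if feedback_score < 0:
            self.temperature = max(0.2, self.temperature * 0.9)
```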

This also ties in with AI being seen as an anticipatory system because the academic work on anticipatory systems originally comes from biology: an anticipatory system is a system which contains a model of itself in its environment (Robert Rosen). Loet Leydesdorff, with whom I have worked for nearly 15 years, has developed a model of this (building on Rosen's work) to explain communication in the context of economics, innovation and academic discourse (the Triple Helix). I have found Loet's thinking very powerful to explain this current phase of AI.


Of course, there are limitations to the technology. But some of these - particularly around uncertainty and inspectability - will, I think, be overcome (some of my own work concerns this).


But perhaps the biggest question concerns the nature of the technical architecture. AI - or Artificial Anticipatory Technology - is basically a document which is also a medium. What does that mean for us? Why does it matter in education?


The real question behind this is "What is education for?". Again, Spielberg gets something deeply correct here: one of the principal reasons why we have education at all is the ongoing survival of the species - which means that those who will die first must pass on the ability to make good judgements about the world to those who are younger. 

The education system is our technology for doing this. It's rather crude and introduces all kinds of problems. It combines documents (books, papers, videos, etc.) containing knowledge that requires interpretation and communication by teachers and students in order to fulfil this "cultural transmission" (someone objected to the word "transmission", and I agree it's an awkward shorthand for the complexity of what really happens).

AI is a document which is also a medium of interpretation and communication. It is a new kind of cultural artefact. What kind of education system do we build around this? Do we even need an education system that looks remotely like what we have now?

I said I think this is what we should be thinking about. It's going to come for us much faster than most senior managers in universities can imagine. 

So we simply haven't got time to worry about stopping the kids cheating!

Friday, 13 January 2023

Triad Chords as a "nice noise" (From Plankton to Puccini)

20 years ago, when the Lindsay String Quartet retired from Manchester University, Ian Kemp - who had been an inspirational musical figure for me and so many others - came out of retirement to conduct a last "Lindsay session", playing Beethoven and Tippett (the favourite diet). Although Ian complained that he was "bad at hearing", his musical intellect remained as sharp as a tack. 

There was a passage in the music (I think it must have been Tippett) which was very unusual. So he asked, in his typical way, "what's going on here?". By this time, university academics of Kemp's temperament were very rare; they had been replaced by younger people who were eager to please and full of "musical analysis terminology". So Ian's question prompted much impressive-sounding jargon. "Perhaps," he said on hearing this, "but maybe it's just a nice noise". 

So what is a nice noise? We hear, with Western ears at least, the major triad as the epitome of musical consonance - a nice noise. It is a resting place, and the tonal geometric relations that form around the triad provide us not only with the "nice noise" of the chord itself, but an unfolding diachronic (and diatonic) space with which we can engineer a sense of arrival and homecoming in tonal music. 


When we learn about triads, we are introduced to the notation, and young pianists are taught how to shape their hands. But something gets added in both these cases. The triad is never "just" the notes. It is never "just the hand-shape". If it was "just the notes", then playing a triad with sine waves would be as satisfying as playing it on the piano. But it isn't - and this is my point: the triad's beauty lies in what occurs outside the notes. It lies in the noise that surrounds it. 

So much of music analysis manages to miss the music. I strongly suspect that Kemp's "nice noise" comment hit the music on the nose. Part of the key to understanding this (pardon the pun) lies in inspecting the relationship between a triad and a note.

Marina Frolova-Walker's fascinating lecture on the triad (see Triads, Major and Minor - YouTube) includes a nice demonstration of the overtone series and how it relates to the triad. If we play a note and analyse its harmonics, we see them spread across the octaves above the fundamental. If we add another note a third above the original, what actually happens is that the overall sound becomes "noisier" - there is a tussle between two fundamentals which are nevertheless connected. 
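Here is a small numpy sketch of that "tussle": synthesise a crude C with eight harmonics, add an equal-tempered E a major third above, and compare how busy the two spectra are. The peak count is only a rough, invented proxy for "noisiness".

```python
# Compare the spectrum of a single note with that of a major third dyad.
import numpy as np

SR = 44100
t = np.linspace(0, 1.0, SR, endpoint=False)

def note(f0, n_harmonics=8):
    """A crude tone: fundamental plus decaying harmonics."""
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harmonics + 1))

c4 = note(261.63)            # C
e4 = note(329.63)            # E, a major third above
spectrum_single = np.abs(np.fft.rfft(c4))
spectrum_third = np.abs(np.fft.rfft(c4 + e4))

# The dyad's spectrum has more prominent peaks - "noisier" - even though
# the two harmonic series partially coincide.
peaks = lambda s: np.sum(s > 0.1 * s.max())
print("peaks alone:", peaks(spectrum_single), "| peaks with third:", peaks(spectrum_third))
```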

Marina does say something about the experience of early musicians in hearing the consonance between two notes. This must have been fascinating and puzzling, because perception struggles to piece together the coherence of sounds which, on the one hand, interfere with each other and, on the other, agree with each other. The recursive operations of consciousness in the face of this oscillation are possibly comparable to the way that early art features recursive geometric tiling patterns (across many different cultures around the world).

Just as with the oscillations of perception with a tiling pattern, the oscillations of perception with a triad create a dynamic dance between noise and consonance. As Marina illustrates at the beginning of her talk, Wagner completely understands and demonstrates this dance at the opening of The Ring. 

The consonance of the triad is not static - it moves. But it moves in a way in which perception becomes fascinated. Understanding this also helps to explain why not everybody in the world has the same music. The issue is not about consonance and dissonance - it is about the relationship between stability, order and noise. Western harmony is one way of managing a dance between these factors, but it depends on particular kinds of social relation which reflect the society that favours that way of doing things. There are many others, just as there are many other kinds of society. 

The role of noise in creating order is much overlooked. Kemp's "nice noise", and the triad itself, is a dynamic relation between noise and order. An energy imbalance is inherent from the first note, connecting the physiology of perception and action with the physics of sound. The noise around music is essential in driving forward the process of unfolding immanent structures in the sound as more energy is produced and the physiology of expectation adapts. 

I thought a while ago that there was a clear distinction between the synchronic aspects of music and the diachronic aspects. (I wrote about this here: Redundancies in the communication of music: An operationalization of Schutz's ‘Making Music Together’ - Johnson - 2021 - Systems Research and Behavioral Science - Wiley Online Library and here: Communicative Musicality, Learning and Energy: A Holographic Analysis of Sound Online and in the Classroom | SpringerLink). Now I think the synchronic aspects are much more dynamic than I realised. The ancient and medieval theorists who spoke of the divisions of the string and the harmonics ignored the role that perception plays in appreciating the beauty of "real" music, as opposed to mere mathematical relations. But now I see (and hear) that what happens to perception in the experience of the structure of sound is just as dynamic as what happens over time as sound develops. 

There is also something to say here about evolution, and the evolution of music. Michael Spitzer, with whom I've had the privilege of some detailed conversations recently alongside the biologist John Torday, has suggested that music is fundamentally connected to the ocean. He asked me a few weeks ago, after I'd given a talk on "music and epigenetics", how the primeval ocean connects to Beethoven. It's a great question. Now, I think I would say that the ocean is a noisy environment (Michael says it is the most sonically rich environment on earth). The developmental process of life concerns the continual generation of order (negentropy). What do we need for this order-producing process? Information - in the form of selection - is one thing. Constraint is the flip-side of information, and it is also required (technically, this is known as redundancy). But noise is critical. It is only with noise that the latent structures of organisms - from cells upwards - can be "shaken" into finding new ordered configurations. It's the same process - from plankton to Puccini!