Friday, 30 January 2026

Creativity tomorrow

I find myself in a transition phase at the moment. I've spent the last few years trying to wake academics up to what was once the "impending wave of AI", much of that time frustrated by the fact that most academics thought it was irrelevant. Now everyone's a sodding expert! Well, that's academia for you...

So I think I can back away from this AI mania for a bit and let people get on with it. My university has bought 60,000 licences for Microsoft Copilot. Personally, I would have bought nothing, and instead worked hard with staff to accept that our educational environment has changed and we must change with it - but buying an AI platform is not the way to do that! The locus of control of technology must sit with the individual student (and academic) - even if that now amounts to the locus of control of the cost of technology.

Management anxiety has led the institution to take this decision. They want to be seen to be doing something. Others will watch and see what happens... "For God's sake, don't just do something - stand there!" would have been more sensible advice.

Of course, what the institution won't be able to do is keep up. I attended a fascinating "creatives" session at the RNCM the other day. There were film-makers who are making a name for themselves with AI video. They liken the present situation to the rise of cheap sequencers and production software in the 90s, which meant that kids in their bedrooms could make music that only professionals could have made a few years before. That's now happening with movies. No need for actors, props, lighting, sound, etc. Artistic input is still there, but it coordinates activity by "worker bots", rather in the way conceptual artists produce art with assistants.

That's here to stay. We're going somewhere different now. 

Most fascinating is the ability to generate analytical dashboards based on real-time discussions. I've got quite good at this. I was in a meeting with a public health academic this morning talking about cooperative housing. While he and another friend were talking, I made an app to distil the parameters of the conversation and demonstrate the interrelations between them. Quite amazing - and they were a bit gobsmacked. 

Of course, AI is not making any money; there will be a crash; there is the energy problem, and so on. But the energy problem will be solved (as many other issues will be) - it's like early steam engines.

So on the one hand I'm backing out of institutional AI. But I'm moving forwards with AI. The institution will get left behind. But that may need to happen...


Friday, 9 January 2026

The Reality of Charles Ives

Charles Ives once famously joked, "Are my ears on wrong?" His music has always confirmed to me that if his ears were on wrong, so were mine. I was introduced to Ives by my dad, who once led a production of Waiting for Godot and had found the perfect music to accompany it - The Unanswered Question. Curiously, a few days after my dad died (many years ago now), I went to a concert (trying to clear my head), and there was a remarkable performance of The Unanswered Question in Manchester. Strange how these things happen - it was a very meaningful experience - we imagine some kind of divine blessing at such moments.

Great artists tune in to some fundamental principle of the universe. They often struggle to articulate exactly what that is, but it is clearly evident in their work. Academics are often arrogant enough to believe that they can unpick these fundamental principles - and make themselves look foolish in the process. Between the artistic expression and the academic "sense-making" there is a loss of information. The loss is more pronounced in the academic work that is "easier" to digest in the academy - work which divides things up into structural and formal relations, or carves them up on a spreadsheet. It's like how dissecting a frog destroys the living essence of frogness.

So if we have to academicise great art, there should be something as impenetrable in our scholarship as there is in the art. For me, the best theoretical work is like this: the theory has a similar structure to the thing it theorises. We tend not to think like this. We look to theory to "explain", when it could instead be something that accompanies us on a journey. It is a bit like how I described Gordon Pask's thinking about information the other day: not as a calculation, but as a physical process.

Music is a physical process. It is a process of physiological adaptation to perturbation, driven by nature's tendency towards homeostasis. What drives that process we don't know, although there are emerging theories as to how it might happen. Much more difficult is thinking about how we might measure such a process, or even what measurement actually means.

Today we are used to using computers to measure things - often with statistical formulae. But what is a computer? What is a machine? So, here is Gordon Pask again on that with a statement I find very profound:

"The word 'machine' means a piece of hardware constrained, algebraically, to act as a computer" 

Friday, 2 January 2026

AI and Epistemological Correction

The capability of AI is likely to lead us to bad decision-making. But it needn't. What is remarkable in the statistical amalgamation of training data (particularly computer code) is the capacity to represent old thinking, which was probably never properly understood, in new ways which can be more readily understood. Some of this old thinking is deeply cybernetic, which also challenges present analytical techniques. 

One of those analytical techniques which straddles the cybernetic approach and present analytical approaches is Shannon information theory. Shannon entropy is a probabilistic calculation performed on a digital computer, but Gordon Pask's take on entropy was quite different from Shannon's. He saw entropy as a property enacted by a physical machine, not as a calculation of a number of bits of information. The difference between high and low entropy was the amount of work that needed to be done for a system to establish stability following a perturbation. This is modellable as Shannon information, but it has a number of advantages over the Shannon equations. Primarily, it places the emphasis on the whole-system relationship, which is not measurable as such, but which is enacted in the system's behaviour.

This is a much more biological way of approaching the whole issue of surprise. Surprise isn't a metric. It is an enacted systemic relationship. 
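The contrast can be made concrete with a toy sketch - entirely my own illustration, not anything Pask built, with the gain, tolerance, and "work" metric all invented for the example. Shannon entropy is a number computed from a probability distribution; an "enacted" measure might instead be the work a simple proportional-feedback homeostat expends in returning to its set-point after a perturbation:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: a number calculated from a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def enacted_entropy(perturbation, gain=0.3, tolerance=0.01, max_steps=10_000):
    """A toy 'enacted' measure: the total corrective effort a simple
    homeostat expends returning to its set-point (0.0) after being
    displaced by a perturbation."""
    state = perturbation
    work = 0.0
    for _ in range(max_steps):
        if abs(state) < tolerance:
            break
        correction = -gain * state   # proportional feedback
        work += abs(correction)      # effort expended this step
        state += correction
    return work

# A flat distribution has maximal Shannon entropy:
print(shannon_entropy([0.25] * 4))                   # 2.0 bits
# The enacted measure grows with the size of the disturbance:
print(enacted_entropy(1.0) < enacted_entropy(5.0))   # True
```

The point of the sketch is that the first function returns a statistic about a description of the system, while the second only has a value because a feedback process actually ran: no perturbation, no work, no "entropy".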

How can our present AI help with this? Partly because it makes it much easier to build "maverick machines" - those devices through which Pask explored his understanding (and, in the process, lost a lot of other people!). Most importantly, those devices were analogue, not digital. But of course, present digital AI is quite good at creating analogue simulators on demand.
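As a small illustration of what an on-demand "analogue simulator" might look like (a hedged sketch - the function and its parameters are my invention): an analogue computer solves differential equations by patching integrators together, and the same continuous dynamics can be approximated digitally with simple integration steps.

```python
def analogue_oscillator(steps=5000, dt=0.01, damping=0.5, stiffness=4.0):
    """Digitally simulate a damped oscillator, x'' = -stiffness*x - damping*x',
    the kind of continuous system an analogue computer patches together
    from integrator circuits. Uses semi-implicit Euler integration."""
    x, v = 1.0, 0.0   # initial displacement and velocity
    trace = []
    for _ in range(steps):
        a = -stiffness * x - damping * v
        v += a * dt   # integrate acceleration into velocity
        x += v * dt   # integrate velocity into displacement
        trace.append(x)
    return trace

trace = analogue_oscillator()
# The oscillation decays towards equilibrium, as a physical device would:
print(abs(trace[-1]) < abs(trace[0]))   # True
```

The interesting move, in Pask's terms, is that the behaviour here is a trajectory unfolding in (simulated) time, not a formula evaluated once.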

This makes getting up to speed with Pask's brain much quicker. I wonder if, in fact, very few of the other cyberneticians really understood him. Even his students became somehow dogmatic about his ideas rather than really thinking in the way that he did. This was probably the fault of the pedagogical relationship between him and them (his airy Victorian-engineer persona probably didn't help - and set a bad example to the succeeding generation). What if he'd been able to playfully create analogue computers on demand to explain what he was talking about? (Indeed, this process is directly modelled in his conversation theory - the shared modelling environment at the bottom of his diagram.) But there's more to it than merely the concepts. The analogue machine had to be coupled with a brain that operates with it.

There's something in these analogue machines that is missing in other cybernetic approaches. For example, Spencer-Brown misses something with the binary division of the mark, even if self-reference produces some interesting results. Shannon misses it by focusing on numbers for operationalisation. Maturana and Varela miss it because their "embodied cognition" (perhaps more Varela), while also embracing self-reference, isn't really embodied at all but metaphorical. Beer perhaps is the closest. For him, the organisation is the enacting device. But it is still difficult to envisage without simply falling back to Beer's demarcation of system 1, 2, 3 etc. and the march of his disciples.

With Pask we get the organic recursive system which must work on itself to establish homeostasis. The degree of complexity relates directly to the amount of work expended. This is why Pask's Musicolour is particularly powerful. The organic system is us - and I think that is a clue as to how this enacted entropy might be operationalised. The key to an organisation's health lies in the physiological mechanisms of each of its workers. To understand those mechanisms we need to look to both cybernetics and biology.

Putting the details of that aside (pending a new paper!), this means that occupational health is deeply connected to organisational viability. Pask's enacted approach to entropy could be a very powerful organisational tool, focusing on the adaptation of each individual. Could this approach identify risks of pathological organisation or corruption? Maybe it could.