Monday, 4 September 2023

Wittgenstein on AI

Struck by what appears to be a very high degree of conceptual confusion about AI, I've been drawn back to the basic premise of Wittgenstein that the problems of philosophy (or here, "making sense of AI") stem from lack of clarity in the way language is used. Wittgenstein's thoughts on aesthetics come closest to articulating something that might be adapted to the way people react to AI:

"When we make an aesthetic judgement about a thing, we do not just gape at it and say: "Oh! How marvellous!" We distinguish between a person who knows what he is talking about and a person who doesn't. If a person is to admire English poetry, he must know English. Suppose that a Russian who doesn't know English is overwhelmed by a sonnet admitted to be good. We would say that he does not know what is in it. In music this is more pronounced. Suppose there is a person who admires and enjoys what is admitted to be good but can't remember the simplest tunes, doesn't know when the bass comes in, etc. We say he hasn't seen what's in it. We use the phrase 'A man is musical' not so as to call a man musical if he says "Ah!" when a piece of music is played, any more than we call a dog musical if it wags its tail when music is played."

Wittgenstein says that expressions of aesthetic appreciation have their origins as interjections in response to aesthetic phenomena. The same is true of our judgements of writing produced by AI: we said (perhaps when we first saw it) "Wow!" or "that's amazing". Even after more experience with it, we can laugh at an AI-generated poem or say "Ah!" to a picture. But these interjections are not indicators of understanding. They are more like expressions of surprise at what appears to be "understanding" by a machine. 

In reality, such interjections are a response to what might be described as "noise that appears to make sense". But there is a difference between the judgement of someone who interjects at an AI's result while having a deeper understanding of what is going on behind the scenes, and the judgement of someone who doesn't have that understanding. One of the problems of our efforts to establish conceptual clarity is that it is very difficult to distinguish the signal "Wow!" from its provenance in the understanding, or lack of it, of the person making the signal. 

Aesthetic judgement is not simply about saying "lovely" to a particular piece of art. It is about understanding the repertoire of interjections that are possible in response to a vast range of different stimuli. Moreover, it is about having an understanding of the constraints of reaction alongside an understanding of the mechanisms for production of the stimuli in the first place. It is about appreciating  a performance of Beethoven when we also have some appreciation of what it is like to try to play Beethoven. 

Finally, whatever repertoire one has for making judgements, one can find others in the social world with whom to communicate the structure of one's repertoire of reactions to AI. This is about sharing the selection mechanism for your utterances and in so doing articulating a deeper comprehension of the technology between you. 

I'm doing some work at the moment on the dimensionality of these different positions. It seems that this may hold the key for a more rational understanding of the technology and help us to carve a coherent path towards adapting our institutions to it. But in appreciating the dimensionality of these positions, the problem is that the interconnections between the different dimensions break. 

It is easy to fake expertise in AI because few understand it deeply. That means it is possible to learn a repertoire of communications about AI without the utterances being grounded in the actual "noise" of the real technology. 

It is also easy to construct new kinds of language game about AI which are divorced from practice, but manage to co-opt existing discourses so as to give those existing discourses some veneer of "relevance". "AI ethics" is probably the worst offender here, but there are also many words spent discussing the sociology of "meaning" in AI. 

Equally it is possible to be deeply grounded in the noise of the technology but to find that the concepts arising from this engagement find no resonance with people who have no contact with the technics, or indeed, are in some cases almost impossible to express as signals. 

It is in understanding the dynamics of these problems that the dimensionality can help. It is also where experiments to probe the relationship between human communications about the technology and the technology itself can be situated. 

Sunday, 23 July 2023

Exploring the Dark with AI

One of the consequences of a changing landscape of technology is that everyone is in the dark. What we need to do when everyone is in the dark is talk to those people who are most familiar with the dark to show us around their uncertainties. This is when interdisciplinary engagement can be most powerful and productive. 

In 1968 Arthur Koestler organised a symposium at Alpbach, Austria which gave rise to a book of essays by the leading scientists of the day. The book is called "Beyond Reductionism: New perspectives in the life sciences". The attendance list included: Ludwig von Bertalanffy, Jerome Bruner, Viktor Frankl, Friedrich Hayek, Jean Piaget, Conrad Waddington and Paul Weiss. (The gender bias is, unfortunately, a sign of the times.)

If we were to create a similar meeting, who would we invite? Who has been shining lights into the darkness for some time, who might show us a way forwards? Whose conversations might benefit from deeper interdisciplinary connection? I think my list would include (in no particular order): Isabelle Stengers (philosophy), Mark Solms (neurobiology), Maxine Sheets-Johnstone (dance, philosophy), Peter Rowlands (physics), Antonio Damasio (psychology), Karen Barad (physics), John Torday (evolutionary biology), Sabine Hossenfelder (physics), Louis Kauffman (mathematics), Katherine Hayles (cybernetics), Lee Smolin (physics), Elisabet Sahtouris (evolutionary biology), Rupert Wegerif (education), Mariana Mazzucato (economics).

Most of those people won't see this message - but I think we should do something like this. Academia today is much changed from the world of 1968. Today we don't seem to believe in the dark much - everything is brightly lit with learning outcomes and assessment criteria and universities as businesses. Dark things happening - disease, war - put us into oscillation which is more dangerous than the initial triggers. 

Holistic thinking is, I suspect, much less easy today than it was in 1968. I have been talking to friends about the difficulty of getting young people involved in the Alternative Natural Philosophy Association. Only those with well-established careers can afford to think holistically, or people hiding under the radar. Everyone else seems to just need to survive. But none of us will survive if we don't encourage holism among the young and discourage the managerial nonsense that has become education. 

We all begin in the dark. Showing each other around is an important thing to do. 

Friday, 19 May 2023

The Digitalised Imagination

Just over 2 years ago I decided I wanted a bit of adventure in the tail-end of Covid, gave up a slightly depressing management position at the University of Liverpool, and became a post-doc on a project on curriculum digitalisation at the University of Copenhagen. I thought at the time that digitalisation was the most important undercurrent in education, and I knew that it was a difficult thing to move towards. My best achievement had been at the Far Eastern Federal University in Russia, which I wrote about in "Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum". The Copenhagen experience was nowhere near as good as the Russian experience, and I left Copenhagen for Manchester with a much deeper appreciation of what I had done in Russia. I just wished I'd done it in Switzerland!

During this time, and for seven years previously, I had been deeply involved in a medical diagnostic AI project whose innovation I co-invented. It was obvious that AI was a tidal wave that was about to hit education, and much of my frustration in Copenhagen was that very few people were really interested. They are now, like everyone else. 

There is a risk that AI sweeps the digitalisation agenda away. After all, why teach the kids to code when the computer will do it for you? This kind of statement underpins errors in the ways that digitalisation was conceived - particularly in Copenhagen and many other European universities. It also underpins the difference between the institutional approach of Copenhagen and the approach I took in Russia. 

Digitalisation is not about skill or competency. It is not about "digital literacy" (whatever that means!). It is about imagination. This was understood by the Russians, and dogmatically avoided in Copenhagen. The deep problem is the sanctifying of "competency" within European education, and the EU has been particularly pernicious in pushing this. Despite the sheer lack of insight as to what "competency" is (ask anyone to define it!), it is continually asserted that this is the thing education must do. 

Now in the new AI world that is opening up in front of us, the biggest threat is not technology, but poverty of the imagination. And imagination today means (partly) the "technical imagination". It is about understanding the realm of possibility under the surface, behind the interface - it is the Freudian unconscious of the technical world which through the working of creativity can find expression in the interfaces we produce. 

With an imaginative collapse, humanity becomes enslaved. While the demands of the technical imagination are going to encompass a huge range of disciplines, skills, ideas and relationships, we will need our new tools to oil the wheels of our discourse and knowledge and find new ways of organising ourselves. It is to steering this process that education needs to direct itself. But ironically, the university as it is currently constituted is geared-up for imaginative collapse and corporate takeover. 

Digitalisation is about changing this. It's not going to be easy. 

Tuesday, 16 May 2023

The Glass Bead Game of Human Relations

I attended an interesting session today on burnout and stress at work. There are many conflicting analyses of these problems. On the one hand, there are those studies which focus on the individual, seeing stress as an attribute of individuals, and "stressors" as independent variables producing the experienced symptoms of stress. There are clearly epistemological problems with this, not least that stress is rather like a headache - something that is subjectively experienced, but cannot be directly witnessed by others (only its effects). Searle calls this a "subjective epistemological" phenomenon (to be contrasted with "objective epistemological" things like historical dates, or "subjective ontological" things like money or universities, or "objective ontological" things like the motion of planets, or light). The notion of the "self" that is stressed is the biological/psychological entity bounded by their skin. Let's call this Stress1.

The alternative view of stress is that it is a manifestation of social relations and communication. This entails a different conception of the self as something that is constructed within communication, particularly the communication of the first person "I". The self in this sense is more like Searle's "ontological subjective" category: the reality of a self is construed by the expectations which arise as a result of social engagement and "positioning". This is the self as it is seen by others. It is also the self which can be oppressed by others directly, or by situations which result from others taking insufficient care of environmental factors that can negatively impact on the expression of the self. This is what can happen in situations where people become stressed. Communicative theories which examine stress in these circumstances include things like the "double bind", which is unfortunately extremely common in many workplaces. This is Stress2. 

Both perspectives on the stressed self - the ontological-subjective self and the epistemological-subjective self - are important. However, in terms of practical steps to eliminate stress, the two perspectives have different approaches. Stress1 is addressed through treatment to the individual - rather like giving someone with a headache paracetamol: mindfulness, etc. Stress2 is addressed through changing the structures of communication. This is much harder to do, and so Stress1 dominates the discourse, and its (rather hare-brained) remedies go relatively unchallenged. 

Stress2 is difficult because it basically requires the making of better decisions at the top of an organisation. Bad decisions will cause stress. Good decisions ought not to; instead they ought to create synergy, wellness and productivity. Decisions are the result of the skill of decision-makers, so the question really is how we create good decision-makers. Here we see that the incentives for people to climb the ranks of decision-making encourage behaviour which is anathema to the making of "good decisions". People are rewarded instead for hitting targets, increasing profits, and driving down costs. All of which comes at a human cost. 

Even if better criteria could be defined to encourage and recruit better decision-makers, it will always be possible to "fake" criteria if they are in the form of new targets or KPIs. This won't work.

This has led me to wonder about what Hermann Hesse's "Glass Bead Game" might actually have been (or might one day be in the future). Why do the elites of 25th-century Castalia take this game, which is a bit like music (as Hesse describes it), so seriously? There is something important about it being a game. 

A game is not a set of criteria. It is a practice which requires the learning of skill to play well. As one learns to play well, one deepens in insight. As one deepens in insight, one might become more aware and able to act in the world in a way where the making of good decisions becomes more probable. Importantly, to play the glass bead game is not to "hit targets". It is not a KPI. It is an art. Only those who are more experienced in the game can judge those who are less experienced, but gradual mastery equips one with the skill to make good judgements oneself. Of course, Joseph Knecht decides the game is not for him, and a different spiritual path takes him elsewhere. But it is still a spiritual path - perhaps a different kind of game.

What if one's progression up the ranks of decision-making powers was organised like this? Would we have fewer psychopaths and more enlightened individuals at the top of our organisations? I think this is what Hesse was driving at. After all, he had seen the worst kind of management psychopaths in history in the Nazis. He must have asked himself what novel kind of arrangements might make the making of Nazis less probable. 

The other interesting thing about this though is that the Glass Bead Game is technological. Is there a way in which we could organise our technologies to produce a radically different kind of incentive scheme for those who aspire to become custodians of society? We clearly have some very powerful and novel technologies in front of us which should cause us to reflect on a better world that we might be able to build with them. 

Sunday, 14 May 2023

Positioning AI

I've been creating a simple app for my Occupational Health students to help them navigate and inquire after their learning content in flexible ways. It's the kind of thing that the chatGPT API makes particularly easy, and it seems worth playing with since chatGPT won't be the only API that does this kind of thing for long (Vicuna and other open source offerings are probably the future...)
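The plumbing for such an app is simple enough to sketch. Below is a minimal, hypothetical version of the idea: the function name, prompt wording and stub are my illustrative inventions, and the model call is passed in as a parameter so that the chatGPT API, Vicuna, or any other chat-style model could be swapped in behind it.

```python
def ask_learning_content(question, content, complete):
    """Answer a student's question against a piece of course content.

    `complete` is any function that takes a list of chat messages and
    returns the model's reply as a string -- e.g. a thin wrapper around
    the chatGPT API, or a locally-run open source model.
    """
    messages = [
        {"role": "system",
         "content": ("Answer the student's question using only the "
                     "course content below.\n\n" + content)},
        {"role": "user", "content": question},
    ]
    return complete(messages)


# A stub standing in for a real model, just to show the shape of the call:
def fake_complete(messages):
    return "Stub answer, grounded in: " + messages[0]["content"][:50]


print(ask_learning_content("What is a hazard?",
                           "A hazard is anything with the potential to cause harm.",
                           fake_complete))
```

The point of injecting `complete` rather than hard-coding one vendor's client is exactly the point made above: the chat-completion pattern is generic, and the provider behind it is replaceable.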

As with any tool development, the key is whether the people for whom the tool is made find it useful. This is always a tricky moment because others either do or don't see that a vision of what they do (manifested in the technics of what is made) actually aligns with what they perceive their needs to be. Like a bad doctor, I risk (like so many technical people) positioning students as recipients of techno-pedagogical "treatment". (Bad teachers do this too)

We've seen so many iterations of tools where mouse clicks and menus have to be negotiated which seem far removed from real wants and needs. The VLE is the classic example. I wrote a paper about this many years ago with regard to Learning Design technology, which I am reflecting on again in the light of this new technology. I used Rom Harre's Positioning Theory as a guide. I still think it is useful, and it makes me wonder how chatGPT might be any different in terms of positioning. 

Harre's Positioning Theory presents a way of talking about the constraints within which the Self is constructed in language and practice. There are three fundamental areas of constraint: 

  1. The speech acts that can be selected by an individual in their practice
  2. The positions they occupy in their social settings (for example, as a student, a teacher, a worker, etc)
  3. The "storyline" in their head which attempts to rationalise their situation and present themselves as heroic. 

With positioning through the use of tools, learners and teachers are often seen as recipients of the tool designer's judgement about what their needs are. This is always a problem in any kind of implementation - a constant theme in the adoption of technology. Of course, the storyline for the tool designer is always heroic!

But chatGPT doesn't seem to have had any adoption problems. It appears to most people who experience it that this is astonishing technology which can do things which we have been longing for easy solutions to: "please give me the answer to my question without all the ads, and the need to drill through multiple websites! (and then write me a limerick about it)" But in many cases, our needs and desires have been framed by the tedium of the previous generation of technology. It could have been much better - but it wasn't for reasons which are not technical, but commercial. 

However, could chatGPT have positioning problems? This is an interesting question because chatGPT is a linguistic tool. It, like us, selects utterances. Its grasp of context is crude by comparison to our awareness of positions, but it does display some contextual (positioning) awareness - not least in its ability to mimic different genres of discourse. Clearly, though, it doesn't have a storyline. Yet because of the naturalness of the interface, and its ability to gain information from us, it is perfectly capable of learning our storylines. 

In a world of online AI like chatGPT or BARD, the ability to learn individuals' storylines would be deeply disturbing. However, this is unlikely to be where the technology is heading. AI is a decentralising technology - so we are really talking about a technology which is under the direct control of users, and which has the capacity to learn about its user. That could be a good thing. 

I might create a tool for my students to use and say "here is something that I think you might find useful". Ultimately, whether they find it useful or not depends on whether what they perceive as meaningful matches what I perceive as meaningful to them. But what is "meaningful" in the first place?

What students and teachers and technologists are all doing is looking for ways in which they (we) can anticipate our environment. Indeed, this simple fact may be the basic driving force behind the information revolution of the last 40 years. A speech act is a selection of an utterance whose effects are anticipated. If a speech act doesn't produce the expected effects, then we are likely to learn from the unexpected consequences, and choose a different speech act next time. Positioning depends on anticipation, and anticipation depends on having a good model of the world, and particularly, having a storyline which situates the self in that model of the world. 

Anticipations form in social contexts, in the networks of positionings that we find ourselves in across our different social roles. ChatGPT will no doubt find its way into all walks of life and different positions. Its ability to create difference in many different ways can be a stimulus to revealing ourselves to one another in different social situations. But there are good and bad positionings. The danger is that we allow ourselves to be positioned by the technology as recipients of information, art, AI-generated video, instruction, jokes, etc. The danger is that we lose sight of what drives our curiosity in the first place. That is going to be the key question for education in the future. 

This is where the guts of judgement lie. What is in a position is not merely a set of expectations about the world around us. It is deeply rooted in our physiology. If we are not to become passively positioned by powerful technology, then it will become necessary for us to look inwards on our physiology in our deepest exercise of judgement. This is what we are going to need to teach succeeding generations. Information hypnosis, from which we have been suffering for many years of the web, cannot be the way of the future.

Sunday, 7 May 2023

The Endosymbiotic Moment

It's become increasingly obvious that there is something quasi-biological about current AI approaches. It's not just that there is a strong genotype-phenotype homology in the way that relatively fixed machine learning models work in partnership with adaptive statistics (see my earlier post, "AI, Technical Architecture and the Future of Education"). More importantly, the unfolding evolutionary dynamics of machine learning also appears to confirm some profound theories about cellular evolution. In my book about the future of education, written four years ago now, I said that there would come an "endosymbiotic moment" between education and technology. Events seem to be playing that out, but now I think it's not just education that is in for an endosymbiotic moment, but the whole of society. 

This may be why people like Elon Musk, who have had a big stake in AI research, are calling for a "pause". Why? Is it wishful thinking to suggest that it may be because the people who are most threatened by what is happening are people like him? But it may be. 

The essence of biological evolution, and specifically cellular evolution, is that a boundary (e.g. the cell wall) must be maintained. The cell wall defines the relationship between its inside and its outside. Given that the environment of the cell is constantly changing, the cell must somehow adapt to threats to its existence. The principal strategy is what Lynn Margulis called "endosymbiosis". This is basically where the cell absorbs aspects of its environment which would otherwise threaten it. For example, it leads to the presence of mitochondria within the cell which, Margulis argued, were once independent simple organisms like bacteria. Endosymbiosis is the means by which the cell becomes more like its environment, and through this process, is able to anticipate any likely threats and opportunities that the environment might throw at it. It is also the way in which cells acquire "memory" of their evolutionary history - a kind of inner story which helps to coordinate future adaptations and coordinations with other cells. From this perspective, DNA is not the "blueprint" for life, but rather the accreted result of ongoing R&D in the cell's existence. 

What's this got to do with technology? The clue is in a leaked memo from Google ("We Have No Moat, And Neither Does OpenAI"), which highlighted the threat to the company's AI efforts not from competitor companies, but from open source developments. All corporate entities, whether companies, universities or even governments maintain their viability and identity (and in the case of companies, profits) by maintaining the scarcity of what they do. That means maintaining a boundary. Often we see corporate entities doing this by "swallowing up" aspects of their environment which threaten them. The big tech giants have made a habit of this. 

The Google memo suggests something is happening in the environment which the corporation can't swallow. This is open source development of AI. Of course, there is nothing new about open source, but corporations were always able to maintain an advantage (and maintain scarcity) in their adoption of the technology, often by packaging products and services together to offer them to corporations and individuals. Microsoft has had the biggest success here. So why is open source AI so much more of a problem than OpenOffice or Ubuntu?

The answer to this question lies in the nature of AI itself. It is, fundamentally, an endosymbiotic technology: a method whereby the vast networked environment of the internet can be absorbed into a single technological device (an individual computer/phone). That device, which then doesn't need to be connected to the internet, can reproduce the variety of the internet. This provides individuals equipped with the technology with a vastly increased power to anticipate their environment. Up until this point, the tech industry has aimed to empower individuals with some anticipatory capability, but to maintain control of the tools which provide this. It is that control of the anticipatory tools which is likely to be lost by corporations. And it will not just be chatbots - it will be all forms of AI. It is what might be called a "radical decentralisation moment".

This has huge implications. Intellectual property, for example, depends on scarcity creation. But what happens if innovation is now performed by (or in conjunction with) machines which are ubiquitous and decentralised? New developments in technology will quickly find their way to the open source world, not just because of some desire to be "open" but because that is the place where it can most effectively develop. Moreover, open source AI is much simpler than open source office applications. It has far fewer components: a training algorithm + data + statistics is just about all that's needed. Who would invest in a new corporate innovation in a world where any innovation is likely to be reproduced by the open source community within a matter of months? (I wonder if the Silicon Valley Bank collapse carried some forewarning of this problem)
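The "training algorithm + data + statistics" claim can be made concrete with a toy. The sketch below is not how a language model is built, of course, but it contains all three components in a few lines of plain Python: some data, a training algorithm (gradient descent fitting a line), and the statistics of the fit. The data values are invented for illustration.

```python
# Data: a handful of (x, y) points, roughly following y = 2x + 1.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]

# Training algorithm: gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.02
for _ in range(2000):
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

# Statistics: how well does the fitted model reproduce its environment?
mse = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
print(f"w={w:.2f}, b={b:.2f}, mse={mse:.3f}")
```

Everything here is inspectable and reproducible by anyone; scale the data and the model up and the same three ingredients are what the open source community now has in its hands.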

But it's not just the identities of tech businesses which are under threat. What about education? What about government? Are we now really so sure that the scarcity of the educational certificate, underpinned by the authority of the institution, is safe from an open source challenge? (Blockchain hasn't gone away, for example) I'm not, and the way that universities have responded to chatGPT has highlighted the priority for them to "protect the certificate!" like the queen in the hive. If the certificate goes, what else does education have? (I'm not suggesting "nothing", but the certificate is the current business model and has been for decades)

Then there is government and the legal frameworks which protect the declaration of scarcity in commerce through IP legislation and contracts. The model of this was the East India Company, where protecting territories and trade routes with the use of force underpinned imperial wealth. What if you can't protect anything? What kind of chaos does that produce? AI regulation is not going to be a shopping list of dos and don'ts because it's going to be difficult to stop people doing things. China is perhaps the most interesting case. No government can control a self-installed, non-networked chatbot: it's like kids in the Soviet Union listening to rock and roll on x-ray film turned into records. Then of course there'll be terrorist cells arming themselves with bomb-making experts. We are going to need to think deeper than the ridiculously bureaucratic nonsense of GDPR. 

Our priority in education, industry and government is going to need to be to restabilise relations between entities with identities which will be very different from the identities they have now. In the Reformation, it was the Catholic church which underwent significant changes, underpinned by major changes in government. The English civil war and the restoration produced fundamental changes to government, while the industrial revolution produced deep changes to commerce. But this is a dangerous time. Historical precedent shows that changes on this level are rarely unaccompanied by war. 

Monday, 10 April 2023

Quantum Ears

It seems obvious to say that music starts at time a and finishes at time b, and in between goes on a journey. But I'm beginning to hear it differently. I don't think there is a time a and a time b: they are constructed as part of our sense-making about what happens to us when we listen or play. Importantly, our sense-making must omit certain key aspects of making music. The principal dimension that I think is omitted is noise, or the energy that is continually shaking our senses and causing our physiology to find new ways of organising itself. 

If we consider what noise does, then the journey of music over what is perceived as time is entirely co-present at any "now". Music is more like a space to explore than a path to follow. From the very moment that we both make and don't make a sound, the whole space is there, existing in the dynamic between physiology and the universe. 

Harrison Birtwistle seems to have heard music like this, and his thought has had a big influence on me. I was particularly struck by Birtwistle's appreciation of Paul Klee - particularly Klee's pedagogical sketchbooks. Birtwistle says:

Like Paul Klee, I'm taking a line for a walk. But the lines Klee draws are pure continuum, they look like a map of a walk or a journey. And this is how we usually think of journeys - fluid things which are uninterrupted. But when you're in the process of journeying, you perceive them differently. You don't look straight ahead, you look to the right and then to the left. And when you turn to the left you fail to take in the events on the right and vice-versa. In retrospect you think of the journey as being a logical progression from one thing to another, but in actual fact it consists of a series of unrelated things, which means that you're simply making choices all the time, choices about where to look. It's to do with discontinuity. You have a continuum, but you're cutting things out of it while you look the other way.

Music is discontinuous in essence. The "continuity" is something that perception imposes on us, making us ignorant of the dynamics that drive its discontinuities. Deep down, what we perceive in Mozart or Bach (and in Birtwistle) is coherence, which is not the same thing as continuity. 

Coherence does not need time as we understand it. It represents the deep symmetry of nature, in which what we call time is a parameter. In quantum mechanics, this deep symmetry is what balances out local (physically proximate) phenomena with non-local (physically distant) phenomena. For there to be "spooky action at a distance" (which there appears to be), then there must be some underlying balancing that goes on between what happens locally and something happening non-locally. All matter, including our physiology - and our ears - will partake in this universal symmetry.

Because of this complex symmetrical mechanism, the energy of the quantum world is always buzzing and interfering with our physiological substrates. To deal with this, all life needs to construct niches. The space of music is its niche. To be entranced by music is to be drawn into its niche, and then (in the case of Western classical music) to be convinced of music's "journeying". But the journey is an illusion. Music immediately presents a multiplicity of the same thing. Heterophony is the closest we get to this kind of thing. 

Taking time and continuity out of the music equation carries important lessons for other aspects of life. Learning, like music, is discontinuous, but learners and teachers are forced to deny this by the expedience of institutions which must regiment educational practice. Equally, the climate emergency is often portrayed as a "race against time" - but rather like the pathology of education, the more we impose a linear model on what is essentially a discontinuous system, the more denatured we become (despite the good intentions of activists), not less. The same is true in politics: our only understanding of a regulatory system is one which works in a linear continuous fashion, and which in operation creates more alienation.