As I'm turning my attention towards the topic of "work" (and turning away from "education"), I'm concerned that the technological changes in work are going to serve the interests of what Michael Sandel calls the "credentialed class" and further exacerbate inequality. This comes at a time when, even for those getting their degree certificates, work is going to get harder to find, since we are seeing the automation of what were once graduate entry-level jobs. That means it won't really be credentials that count, but privilege, yet we'll pretend that it is credentials in the interests of the education industry.
Sandel's attack on the ideology of meritocracy is well-placed. He's said it in various forums over the last year or so, and co-authored a book with Thomas Piketty, Equality: What It Means and Why It Matters. But as he himself acknowledges, he's not saying anything new: it is basically the same argument that Michael Young put forward in the 1950s in the book that coined the term "meritocracy", The Rise of the Meritocracy. What's new in Sandel is that he's able to flesh out the pathology that Young predicted with concrete evidence from populism: his talk about the history of how we've got here is really worth listening to:
It's the twists of irony in this story which make it so compelling: the denigration of big government by Thatcher and Reagan in favour of the invisible hand of the market (this was from Hayek - I wonder if he would change his mind if he saw what became of his ideas!); then, after the failure of those right-wing governments, the emergence of soft-left politicians like Blair and Clinton, who saw nothing wrong with the markets, but argued that to "make it" you had to get educated. So there was a massive increase in Higher Education, which is where I, and many others, found work.
Thanks at least for that... but it might have been a mistake societally. The sting in the meritocratic tail was, as Sandel says, an implication that if you were struggling to get on, it was "your fault". From there stemmed a deep discontent among those who couldn't get on, who would eventually turn to undemocratic demagogues who gave voice to their discontent, but whose agenda served only their own interests, merely using the discontent as a vehicle for propelling their own rise to power.
As Adam Curtis has eloquently expressed in his masterly recent "Shifty" (see episode 1, "The Land of Make Believe", Adam Curtis, 2025), the story is one of governments and technocrats unleashing technological forces which eventually spin out of their control. The interesting thing with the Trumps and Starmers (and Putins, Xis and Orbans) is that this is really out of everybody's control, and nobody knows what to do about it. Unfortunately, under those conditions, war is the button humans tend to reach for.
The story from Thatcher to Blair to Trump is a story of using technology to make uncomplicated human systems unnecessarily complicated, and ultimately chaotic. This is always the danger with technology: it tends to increase complexity. I did a presentation on cybernetics and public health last week, and I included a clip from John Seddon, who gave the Mike Jackson Annual Lecture in Hull a few months ago. In his Michael Caine style, Seddon said:
"I hear it so many times people talk about service organisations as complex systems. They are not. They are unnecessarily complicated systems. They're man-made. Man can't make a complex system, but they sure can make an unnecessarily complicated system"
With AI, this is going to get worse. Not because of any inherent malevolence in the technology itself (intrinsically, it is a remarkable scientific discovery), but because of our inability to really think about what we are doing, why we are doing it, and who we are doing it for. It ought to be education's job to think about these things, but instead, education focuses on its own inherited operational complexity, while seeing the pathological growth of techno-operational complexity everywhere else as a business opportunity for selling more "education".
The inspiration for thinking more holistically about this may come from indigenous communities. The role of knowledge in these communities is not as something which is acquired under special institutional conditions, but as something which is woven into the fabric of community life. The community has a good working model of itself which is enacted in daily living. It is a very different way of thinking about knowledge - and there is an excellent exhibition about it at Manchester's Whitworth Gallery at the moment, in the work of the Peruvian artist Santiago Yahuarcani (Santiago Yahuarcani: The Beginning of Knowledge, Whitworth Art Gallery). (I think there are modern equivalents to this indigenous approach to knowledge - maybe it's not that different from the way Nelson organised his fleet!)
I would like to think that when Sandel appeals for "dignity at work", and Seddon appeals for "system knowledge" and for awareness of the 'failure demand' which puts huge strain on organisations, they are talking about the same thing. They may not see it like this. Seddon might say that Sandel only talks about dignity, preaching to the credentialed not to be condescending, whereas for Seddon the actual work of all workers in the organisation is to study their work and its demands, to challenge assumptions, and to increase the self-knowledge of the organisation. That is work for everyone, and it is the job of management/organisation to coordinate it.
I think the route to Sandel's "dignity at work" is the path Seddon charts. If we took that path with AI, for example, we would not be eyeing up ways in which AI can make our existing operations more efficient. We would be asking how AI can allow us to perceive aspects of our work which we couldn't see before. That would be to use it as a scientific instrument, not a new pair of roller skates - an instrument of knowledge, not an accelerant of current operations.
Then the thought about education itself: what if we taught people how to do this? Then education's value would no longer need to lie in a certificate, but in the actual tangible benefits that the "work of thinking" performs on all organisations.
This is a follow-on from my previous post about the fate of transdisciplinary scholarship in the present academy. That perhaps sounded like a personal complaint. Partly, it was - but there is more to the process, which I would call "discipline capture", than anything one individual might feel. What I described was the process by which a transdiscipline like cybernetics gets 'torn apart' by discipline-based academics who seek to appropriate parts of the transdiscipline for career gains and attention from their disciplinary colleagues. By this process, the transdiscipline's fundamental nature is destroyed. Even the advocates of the transdiscipline become agents of its destruction.
Cybernetics provides an excellent example. The original cybernetics thinkers were highly detailed and mathematical in their thinking, and somewhat difficult to understand. The papers of Wiener, McCulloch, Von Foerster or Pask can be challenging not just in their mathematical and logical elegance, but in their deviation from academic disciplinary norms. Pask's papers on learning are particularly notable for this. Once those original thinkers die, their disciples want to keep the conversation going, but recognise the need to communicate to a wider audience (otherwise, who is going to go to the conferences?). So a gradual process of dumbing-down occurs. This also occurs through discipline capture - Pask's dilution into Laurillard's work is a case in point. Nobody has the time or the inclination to read the original work, and they are too busy trying to drum up an audience for their own interpretation, or to self-aggrandise on the back of transdisciplinary scholarship. But the dumbing-down entails a real loss in our ability to harness the original insights.
This is a social dynamic, and one that Von Foerster (particularly) predicted ("The more profound the problem ignored, the greater the chances for fame and success!"). It raises the question as to why disciplines and academics working in universities today are so destructive to transdisciplinary thinking - despite their "championing" of it (with champions like that, who needs enemies!). It's not just ego, ambition and the need to maintain a hold within the academy, although all of those play a part; that doesn't really explain anything. We need to look at what a discipline is in the first place.
Disciplines represent themselves through discourse which becomes codified within institutional structures and publications. Luhmann pointed out long ago the connection between discursive dynamics and institutional structure (his examples include economics, art, law, education, etc.). Leydesdorff later produced powerful metrics for analysing these dynamics (see his brilliant "Evolutionary Dynamics of Discursive Knowledge", to which I gave a video introduction here: Mark William Johnson: Chapter 1 - The Evolutionary Dynamics of Discursive Knowledge). I was lucky to have been part of that. But what this work didn't consider so much was the hegemonic power of a discourse backed by institutional authority. Luhmann and Leydesdorff's high-level "codes" of communication - the fundamental organising principles which distinguish art from economics, or law from love - represent constraints on utterances. Institutional structures amplify and reinforce those constraints, alongside metrics for academic performance.
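To make the "metrics" a little less abstract, here is a toy sketch - my own illustration with invented data, not Leydesdorff's published code - of the kind of entropy-based quantity this work builds on: the three-way interaction term among discourses, whose negative values are read (as I understand Leydesdorff) as a sign of redundancy or synergy among the discourses rather than simple pairwise overlap.

```python
# A toy, hand-rolled illustration (my own, with invented data - not Leydesdorff's
# published code) of an entropy-based measure of discursive alignment.
# Three "discourses" are represented as categorical labels assigned to the same
# ten documents; the three-way interaction term
#   T123 = H1 + H2 + H3 - H12 - H13 - H23 + H123
# can become negative, which is read as redundancy/synergy among the discourses
# rather than mere pairwise information transfer.

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of hashable labels."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def interaction_information(x, y, z):
    """T123 computed from the marginal, pairwise and triple entropies."""
    h1, h2, h3 = entropy(x), entropy(y), entropy(z)
    h12 = entropy(list(zip(x, y)))
    h13 = entropy(list(zip(x, z)))
    h23 = entropy(list(zip(y, z)))
    h123 = entropy(list(zip(x, y, z)))
    return h1 + h2 + h3 - h12 - h13 - h23 + h123

# Hypothetical topic labels that three disciplines might assign to ten papers.
cybernetics = ["feedback", "feedback", "control", "learning", "control",
               "learning", "feedback", "control", "learning", "feedback"]
education   = ["pedagogy", "pedagogy", "policy", "pedagogy", "policy",
               "pedagogy", "policy", "policy", "pedagogy", "pedagogy"]
economics   = ["markets", "markets", "markets", "labour", "labour",
               "labour", "markets", "markets", "labour", "labour"]

print(f"T123 = {interaction_information(cybernetics, education, economics):.3f} bits")
```

Nothing hangs on the toy numbers; the point is only that the alignment of discourses is, in principle, something that can be measured rather than merely asserted.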
Of course, disciplines develop and change - often by appropriating new ideas from other disciplines (biochemistry, for example). This development arises through what Leydesdorff calls "mutual redundancy" - a process of aligning the dynamics of one discourse with another. The transdiscipline is different in this process, because it presents mutual redundancy to all other disciplines. Cybernetics in particular presents new fundamental concepts which resonate with all levels of organisation, knowledge, subjects, etc. I wrote about this with Leydesdorff many years ago (see "Beer's Viable System Model and Luhmann's Communication Theory: 'Organizations' from the Perspective of Meta-Games", Johnson & Leydesdorff, Systems Research and Behavioral Science, 2015). From the perspective of that paper (which was our first collaboration, and quite dense), discipline capture is a meta-game. If we (I) see it as destructive of transdisciplinarity, then the meta-game approach is to play a different game.
I think our emerging technologies might provide a way to do this. Some of my friends have been very interested in creating a "glass bead game", and I am very sympathetic to this, although trying to realise what Hermann Hesse was really getting at is difficult, to say the least. I do think that there are many ways to do something that breaks the rules of the existing academic games. One way may be the course I set up at the Far Eastern Federal University 8 years ago. It's still going, despite the obvious constraints on my participation. The guiding principle of that course was to see the learning journey as a process of construction through a syncretistic world of indistinct encounters with multiple fields of knowledge. Now AI and VR and heaven knows what else could do this even more powerfully.
A few weeks ago I gave a lecture-performance at Manchester's wonderful transdisciplinary space, Bound and Infinity, on "music and cybernetics" (or musicocybernetics). It's a small space, but with a projection, a piano and a synthesizer, video and sound, I did something which (in the words of one attendee) invited the audience to think in new ways. There was nothing deterministic about this - it was improvisatory. But it had the desired effect. Much like what I had aimed for in Russia.
There is a new kind of syncretistic art form that is possible. We need it - because what happens in education at the moment is just so dreary in comparison to what is possible with the new technologies we are surrounded by. It is a time for experiment and play. This will be too threatening to the established educational elite to support, so they are likely to get left behind in this "game-changer".
The humanities exhibit various patterns of academic practice in today's university, but the most irritating is the "hunt for explanatory principles". Basically this is a practice where little original work is done, but where academics seek to sound clever by attempting to fit the work of a neglected transdisciplinary intellectual figure to a manifest (and usually intangible) phenomenon within a specific discipline. As a transdisciplinary person myself, I find some colleagues who are securely grounded in a discipline always on the hunt for some clue for their latest "conquest". Whatever clue I or others like me might provide becomes material for speech acts of the kind "I've discovered x and applied it to fashion/art/music/business/society/etc". Perhaps we shouldn't tell them about x in the first place. I can't get angry about it, other than to be disappointed that it's so intellectually lazy: "x" is usually barely known within a particular academic community, so there is little authority which can be brought to bear to criticise the new explanatory principle, while the academic parades fake erudition and often misconceived interpretations of what "x" was getting at in the first place.
What this often represents is, once again, the disciplinary colonisation of transdisciplinary concepts. It is the Procrustean move of the institution, whose academic reward structures favour codifiable disciplinary appropriation, which in turn encourages expedient academics to own things that weren't intended to be owned - and certainly not by them.
A deeper problem with this is that nothing fundamentally new gets done because the brains of academics are focused on their constant attention-grabbing practices in pursuing explanatory principles, rather than actually making any intellectual progress at all. Then there is the problem of explanatory principles in the first place.
To say "I can explain q" or "with the theory which I have discovered by dead philosopher x, I can explain this (and I shall bask in x's reflected glory!)", is an epistemological error. Gregory Bateson (another "x"!) long ago pointed out the misapprehensions around "explanatory principles". An explanatory principle can explain anything we want it to explain. It is a speech act designed to satisfy (or perhaps dull) curiosity. Bateson's favourite example of an explanatory principle is the "dormitive principle" to explain why ether puts us to sleep, as described by Moliere. I'm finding it a bit depressing at the moment that cybernetics is being used in a similar "dormitive principle" kind of way. It's great for making people sound clever - but what's new? Where's the progress?
It's as if we've got the scientific method the wrong way round. For Hume, explanation was part of the dialogue between scientists seeking to articulate causal explanations for the phenomena produced by experiments. Increasingly, in the arts and humanities, and in Business Schools, we see precious few experiments. Of course, in the light of a candidate causal explanation, one would then seek further experiments. But we don't see this. Often all we see is self-congratulation. It's perhaps not a million miles away from how the scholastic university must have been just before everything was discredited and overturned in the 17th century. I'm not convinced that our new form of pseudo-scholasticism won't meet the same fate.
Explanatory principles can explain anything we want them to explain, or nothing at all. It is the conversation - the coordination among scientists - where the real progress is made, and that requires experiment. We now have new means of doing experiments. Perhaps we should use them and do away with this performative nonsense!
I'm grateful to Diana Wu David (future of work consultant and coach), whose work on the Future of Work is very motivating and visionary, for pointing me to the Harvard study. As I've been thinking about this stuff, I've also been sharing my enthusiasm for Gary Stevenson, whose videos on economics have been a real eye-opener for me over the last two years or so.
Flourishing is a complex phenomenon, but the lack of resources among the poor must inevitably play a key role. Gary's analysis of the Covid lockdown as a wealth transfer to the rich is a very compelling narrative, and his criticism of the academic establishment is spot-on: what anachronistic nonsense!
It is interesting to consider whether human beings have any kind of "innate" capacity to overcome adversity. Is it easier if you have the emotional support of a loving family than if you are estranged from your family and have been abused for the whole of your life? Surely these situations are different. So it really does matter "who your parents are", as Stevenson says - not just because of the financial resources available to the middle classes, but because emotional support becomes more probable (though obviously not certain) under circumstances of material family comfort.
As human beings we find ourselves caught between self-care in local communities - care which prioritises autonomy and personal choice - and the care that is provided by social institutions: health services, social services, education, etc. These latter entities are heteronomous, to use Ivan Illich's borrowing of Kant's distinction between autonomy and heteronomy. Illich's argument was that if the balance between autonomy and heteronomy gets out of whack, then we are in trouble. He further said that social systems and technologies start from a position of empowering autonomy, but end up as heteronomous behemoths (church, transport, energy, health service, education, etc).
The less wealth we have, the more the mechanisms of self-care become skewed towards subsistence rather than sustainability, while the subsistence mode is increasingly reinforced by the relationship between individuals and heteronomous public services. This is partly because the heteronomous side has no interest in the qualitative aspects of existence, but rather sees its role in terms of statistics and average outcomes. So it becomes a vicious circle. The heteronomous side will also seek to maintain itself by selecting those people it serves for whom its interventions stand the best chance of working.
I'm in Seoul at the moment, on an "AI Tour" of East Asia taking in Hong Kong, Zhuhai (China), Seoul and Taiwan. Having spent a couple of years trying to get academic staff up to speed with what's happening with the technology, I've come here with a slightly different message from the usual "look at what you can do with GenAI" stuff. The message is about how we think ahead of where the technology is "at", towards where it (and we) are going. To put it simply, this technology is going to change the way we perceive the world. In some ways, this is what technology has always done (it's striking to think that Chinese society is now unthinkable without the mobile phone), but AI is going to present fundamental perceptual shifts to us, and this will have a huge impact on how we learn and coordinate ourselves in the world.
If we assume that the fundamental scientific breakthrough with AI has been made (although I think more discoveries are on the way which will bridge the biological/technological divide), then what is going to happen next is a predictable increase in speed and scale. In terms of speed, the things that we are accustomed to taking a few minutes or seconds, like image and video generation, will become almost real-time. A change of speed is a fundamental change in the nature of the technology: images appearing as we speak will change the way we communicate. Video appearing as we speak will be even more profound. At some point we may even have images appearing as we think. I would have been sceptical about this stuff a few years ago, but some of it is already practicable, and the rest is coming into view. We simply aren't ready for what it will do to us, and to a large extent we are worrying about the wrong things.
Interactive video will also soon be with us - so not only will AI generate video from prompts, it will enable us to interact with that video virtually. In Seoul I showed a demo of a game world that was generated by AI. It's already pretty astonishing. Computer games are going to become increasingly important - potentially as a means of communication. It's making me think that my scepticism about VR was misplaced. I had based this on the fact that content for VR is so time-consuming to create. But with AI, content generation will become as much of a non-issue as it has rapidly become with creating text.
I can't say what this will all mean. But I can say that this is likely to happen.
There is a mystery as to why the most transdisciplinary science, cybernetics, never really took hold in the university. Yes, there were weird outposts of cybernetic activity like Von Foerster's Biological Computer Lab at the University of Illinois, but it turned out not to be very sustainable. The most significant UK centre was at Hull University, and that has pretty much been disbanded. I believe what we are seeing with the impact of AI in the university is telling us why this happened, and why a similar pathology is happening again.
A university is a set of disciplinary fiefdoms - elegantly described years ago in Tony Becher's "Academic Tribes and Territories". Academic tribes or fiefdoms tend to want to defend themselves from each other. When disciplinary boundaries are clear, this works pretty well - and has done since the trivium and quadrivium of the Middle Ages...
When a truly transdisciplinary subject comes along - and cybernetics was just that - it puts disciplines in a bit of a panic. It's not that they want to defend themselves from the transdiscipline, but rather they each seek to own it, and therefore look to acquire and colonise bits of it. We are seeing exactly this process unfolding around AI at the moment: every discipline is staking its claim to AI. The consequence of this is that the transdiscipline becomes divided and absorbed into disciplines. Its intrinsic transdisciplinary nature is dissolved. This is truly crazy behaviour, but it is determined by the structure of institutions.
The only hope, I feel, is for universities (or some other institution for scientific inquiry and intellectual growth) to construct themselves not around the codifications of curriculum and disciplines, but around tacit knowledge, shared experience and creative expression. AI could help here. It could be a huge amplification of the creative imagination. It could create shared experiences in ways we have not conceived of before. Unfortunately, despite their many virtues, universities are unlikely to be the home for these new kinds of innovations and experiences. It will happen somewhere else.
The issue has to do with the roots of institutions of knowledge and science in monasticism. Whatever caused human beings to retreat from ordinary daily life to live in small ascetic communities in the desert, it answered a deep physiological need. Many scholars feel this same need in the wake of the modern academy's transformation into a business. Physical retreat is unlikely. But a spiritual retreat to new kinds of shared experience and ways of communicating is likely to become more feasible. Students and industry may follow, and the universities may have to play catch-up.
I went to a concert at the Barbican a couple of weeks ago where the BBC SO played Tristan Murail's Gondwana. There were a number of other hybrid electronic/acoustic works. Murail is now an elder exponent of musical spectralism (where audio analysis of the sound waves themselves informs dynamics, instrumentation and form), and I found his piece particularly fascinating. Perhaps more impressive than Gondwana is his piano music - particularly this piece, "Territoires de l'oubli":
Perhaps not the sort of thing one would want to listen to all the time, but it is fascinating because of the way the thumps and bangs resonate with the higher compass of the piano. I find it like listening to nature - rather in the same way as Messiaen (who of course is the composer who springs to mind in listening to this). But this isn't about birdsong or God. It's fundamentally about physics.
I also think it's about difference. The art of Murail is an art of identifying small differences in the detail of a waveform, and reproducing those differences in instructions to players, such that the reproduced differences are almost as imperceptible as they are in our auditory experience.
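To give a flavour of what that analysis might involve - this is my own toy sketch with invented parameters (a 110 Hz fundamental, a small inharmonicity coefficient), not Murail's actual working method - here is a short Python fragment that synthesises a slightly inharmonic, piano-like tone and lists the strongest peaks in its spectrum: the kind of measurement a spectral composer might orchestrate from.

```python
# A rough sketch of the spectral composer's analysis step (my own toy example
# with invented parameters, not Murail's actual method): synthesise a slightly
# inharmonic, piano-like tone, take its spectrum, and list the strongest bins.
import numpy as np

SR = 44100                       # sample rate (Hz)
t = np.linspace(0, 1.0, SR, endpoint=False)
f0 = 110.0                       # hypothetical fundamental (A2)
inharmonicity = 0.0004           # stiff strings stretch the upper partials

signal = np.zeros_like(t)
for n in range(1, 13):
    fn = n * f0 * np.sqrt(1 + inharmonicity * n**2)      # stretched partial
    signal += (1.0 / n) * np.sin(2 * np.pi * fn * t) * np.exp(-3 * n * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / SR)

# The ten strongest frequency bins: raw material for orchestration, and the
# place where the small deviations from a pure harmonic series live.
for idx in np.argsort(spectrum)[-10:][::-1]:
    print(f"{freqs[idx]:8.2f} Hz   amplitude {spectrum[idx]:10.1f}")
```

The interest lies less in the peaks themselves than in the tiny deviations between them and an idealised harmonic series.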
What is a "small difference"? It is clearly related to what Ernst Weber called a "just noticeable difference" - an idea which became the foundation of psychophysics. It subsequently became embedded as a principle in an AI innovation I was responsible for, so this is a bit personal for me. There are big philosophical questions around psychophysical principles because they appear to run against the Kantian notion that there is an unknowable thing-in-itself about which we can have no knowledge. Weber, and later Fechner, suggested that there were indeed empirical things we can do - and Murail is clearly experimenting with these.
A small difference is liminal - existing in the space between presence and absence. It is where what Graham Priest calls paraconsistency (where true and false co-exist simultaneously) begins to arise (see Graham Priest's "What is paraconsistent logic?").
Increasingly it seems that paraconsistency is the only useful logical position to take in our incredibly tortured world at the moment. Warren McCulloch articulated something like this, prefiguring a paraconsistent logic in his discussion of the circularity of the nervous system ("A Heterarchy of Values Determined by the Topology of Nervous Nets", Bulletin of Mathematical Biology, 1945). A "heterarchy of values" is a paraconsistent logic.
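To see what a paraconsistent logic actually does, here is a minimal sketch - my own toy, using Priest's LP ("Logic of Paradox") as a standard example rather than McCulloch's own formalism - of how a contradiction can be accepted without everything following from it:

```python
# A minimal sketch of Priest's LP ("Logic of Paradox") - my own toy encoding,
# not McCulloch's formalism. Three values ordered F < B < T, where B is a
# truth-value "glut" (both true and false). Conjunction is the minimum,
# disjunction the maximum; negation swaps T and F and leaves B fixed.
# A sentence is accepted if its value is designated (T or B).

F, B, T = 0, 1, 2
DESIGNATED = {B, T}

def neg(a):
    return {F: T, B: B, T: F}[a]

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

# Explosion ("p and not-p, therefore q") fails: give p the glut value B and
# q the value F. The contradictory premise is accepted, but q is not.
p, q = B, F
print(conj(p, neg(p)) in DESIGNATED)   # True  - the contradiction is accepted
print(q in DESIGNATED)                 # False - but it does not license everything
print(disj(p, neg(p)) in DESIGNATED)   # True  - excluded middle still holds
```

The glut value B is precisely the liminal co-existence of true and false described above.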
This isn't a new idea. It goes back to the idea of "synchronic contingency", which was a key feature of the philosophy of John Duns Scotus. That it has acquired new resonance with quantum mechanics is perhaps an indication that the medieval insights were correct. I'm revisiting some of this in an earlier post: Improvisation Blog: Information and Syncretism: from Floridi to Piaget.
Our distinction-making is never objective, yet it results in selections from possibilities, where we take what we select to constitute "reality". The logic of selection is not Aristotelian, as McCulloch argued. Yet we take it to be so - particularly from an organisational perspective. This is what is producing the aporia we see unfolding in the Trump/Russia/Ukraine pathology. If we examined this from a paraconsistent perspective, how would it look different?
Rather like Murail's music seeks to unite the form of music with the physics of music, what if we united the form of decision with the biology of decision? Could AI, which is also a paraconsistent technology, help us?