Friday, 8 August 2025

Lines, Boxes and Spaces

There's a wonderful event happening in Manchester at the moment. The Bound and Infinity bookshop in Tib Street is hosting a number of friends talking about physics (Peter Rowlands), mathematics and Laws of Form (Louis Kauffman), and architecture (Andrew Crompton), with interjections from art, music, philosophy and more. I brought one of my PhD students from public health, whose reaction was "where has this been all my life?!". Particularly wonderful was the fact that many attendees are quite young and thinking the kinds of ambitious thoughts one has at 18 (and which academia is very good at knocking out of people). This event serves as encouragement to youth not to give in to the deadly institution.

I only encountered cybernetic thinking in my mid-30s and had the same reaction. I slightly kick myself that I might have got there sooner if I'd had the courage to speak to Stafford Beer in Manchester University's music department when I was a student (he visited regularly to attend the Lindsay String Quartet's concerts). I wish someone had dragged me to a cybernetics conference at that age. But maybe it's best that that didn't happen.

The problem is that ambitious thinking doesn't have an easy ride in the university. This is really because of the pathology of disciplines that I wrote about recently. Disciplines are fiefdoms, as Tony Becher pointed out (I discovered in a conversation with Ron Barnett a few weeks ago that Becher was responsible for helping Ron onto the academic ladder from his admin role in the university). People like Becher and Barnett know what universities are - and what they should be. If Barnett's example is anything to go by, there are likely to be more brilliant and original minds among the admin staff of the university than among the credentialed academics. That's a problem we should do something about.

The real difficulty is that the career path for those who think in an interdisciplinary way just isn't there. Universities continually talk about interdisciplinarity, but they don't do it - and very often they don't know what it really is, which is revealed when universities try to create "departments" for interdisciplinarity.

The real problem is that institutions organise themselves into disciplinary boxes - departments with budgets, teaching loads, journals, etc. An interdisciplinary box is no better than a disciplinary box. Each box wants to maintain its viability and compete with other boxes in the process. But interdisciplinarity doesn't belong in a box.

Cyberneticians often draw diagrams of organisations with boxes and wires/lines connecting them. Interdisciplinarity really belongs in the wires (the lines) not the boxes. But there is no career in being in the lines.
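To make the point concrete, here's a toy sketch (my own construction, nothing more) of the boxes-and-wires picture. Model the institution as a graph: everything the institution can count - budgets, posts, careers - attaches to the nodes, while the edges, where the interdisciplinary flow actually happens, carry nothing at all.

```python
# Toy model of an institution as boxes (nodes) and wires (edges).
# The figures are invented for illustration; the point is structural:
# all countable resources sit in the boxes, none in the wires.
boxes = {
    "history": {"budget_m": 1.2, "posts": 14},
    "physics": {"budget_m": 3.5, "posts": 40},
    "public_health": {"budget_m": 2.1, "posts": 25},
}
wires = [("history", "physics"), ("physics", "public_health")]

posts_in_boxes = sum(b["posts"] for b in boxes.values())
posts_in_wires = 0  # no department, no budget line, no career path lives here
print(f"careers in boxes: {posts_in_boxes}; careers in the wires: {posts_in_wires}")
```

Any metric the institution runs over this structure will only ever see the boxes.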

The interdisciplinary scholar's role is to flow through the institution between the boxes. It is a completely different kind of existence from life in a box. This is not to say that boxes aren't important: there should be people in a history box or a physics box... Rigour does count for something. But boxes without people flowing through the wires stifle imagination.

This has been apparent to cyberneticians for decades. Gregory Bateson wrote this in an address to the Regents of the University of California in 1978:

"While much that universities teach today is new and up-to-date, the presupposition or premises of thought upon which all our teaching is based are ancient and, I assert, obsolete. I refer to such notions as:

a. The Cartesian dualism separating "mind" and "matter"

b. The strange physicalism of the metaphors which we use to describe and explain mental phenomena - "power", "tension", "energy", "social forces", etc

c. Our anti-aesthetic assumption, borrowed from the emphasis which Bacon, Locke and Newton long ago gave to the physical sciences, viz that all phenomena (including the mental) can and shall be studied and evaluated in quantitative terms. 

The view of the world - the latent and partly unconscious epistemology - which such ideas together generate is out of date in three different ways:

a. Pragmatically, it is clear that these premises and their corollaries lead to greed, monstrous over-growth, war, tyranny, and pollution. In this sense, our premises are daily demonstrated false, and the students are half aware of this.

b. Intellectually, the premises are obsolete in that systems theory, cybernetics, holistic medicine, and gestalt psychology offer demonstrably better ways of understanding the world of biology and behaviour.

c. As a base for religion, such premises as I have mentioned became clearly intolerable and therefore obsolete about 100 years ago. In the aftermath of Darwinian evolution, this was stated rather clearly by such thinkers as Samuel Butler and Peter Kropotkin. But already in the eighteenth century, William Blake saw that the philosophy of Locke and Newton could only generate "dark Satanic mills"."

But this leads to his key message, where he says, sarcastically, that by 1979

"we shall know a little more by dint of rigour and imagination, the two great contraries of mental process, either of which by itself is lethal. Rigour alone is paralytic death, but imagination alone is insanity."

Thursday, 17 July 2025

Michael Sandel and John Seddon on Work

As I'm turning my attention towards the topic of "work" (and turning away from "education"), I'm concerned that the technological changes in work are going to serve the interests of what Michael Sandel calls the "credentialed class" and further exacerbate inequality. This comes at a time when, even for those getting their degree certificates, work is going to get harder to find, since we are seeing the automation of what were graduate entry-level jobs. That means it won't really be credentials that count, but privilege - yet we'll pretend that it is credentials, in the interests of the education industry.

Sandel's attack on the ideology of meritocracy is well-placed. He has made it in various forums over the last year or so, and has co-authored a book with Thomas Piketty (Equality: What It Means and Why It Matters). But as he himself acknowledges, he's not saying anything new: it is basically the same argument that Michael Young put forward in the 1950s in the book that coined the term "meritocracy", The Rise of the Meritocracy. What's new in Sandel is that he is able to flesh out the pathology Young predicted with concrete evidence from populism: his talk about the history of how we got here is really worth listening to:


It's the twists of irony in this story which make it so compelling: the denigration of big government by Thatcher and Reagan in favour of the invisible hand of the market (an idea from Hayek - I wonder if he would change his mind if he saw what became of it!); then, after the failure of those right-wing governments, the emergence of soft-left politicians like Blair and Clinton, who saw nothing wrong with markets, but argued that to "make it" you had to get educated. So there was a massive increase in Higher Education, which is where I, and many others, found work.

Thanks at least for that... but societally it might have been a mistake. The sting in the meritocratic tail was, as Sandel says, the implication that if you were struggling to get on, it was "your fault". From there stemmed a deep discontent among those who couldn't get on, who would eventually turn to undemocratic demagogues - demagogues who gave voice to their discontent, but whose agenda served only themselves, using that discontent as a vehicle for their own rise to power.

As Adam Curtis has eloquently expressed in his masterly recent "Shifty" (see 1. Shifty: The Land of Make Believe Adam Curtis 2025), the story is one of governments and technocrats unleashing technological forces which eventually spin out of their control. The interesting thing with the Trumps and Starmers (and Putins, Xis and Orbans) is that this is now beyond anybody's control, and nobody knows what to do about it. Unfortunately, under those conditions, war is the button humans tend to reach for.

The story from Thatcher to Blair to Trump is a story of using technology to make uncomplicated human systems unnecessarily complicated, and ultimately chaotic. This is always the danger with technology: it tends to increase complexity. I gave a presentation on cybernetics and public health last week, in which I included a clip of John Seddon speaking a few months ago at the Mike Jackson Annual Lecture in Hull. In his Michael Caine style, Seddon said:
"I hear it so many times people talk about service organisations as complex systems. They are not. They are unnecessarily complicated systems. They're man-made. Man can't make a complex system, but they sure can make an unnecessarily complicated system"

With AI, this is going to get worse. Not because of any inherent malevolence in the technology itself (intrinsically, it is a remarkable scientific discovery), but because of our inability to really think about what we are doing, why we are doing it, and who we are doing it for. It ought to be education's job to think about these things, but instead, education focuses on its own inherited operational complexity, while seeing the pathological growth of techno-operational complexity everywhere else as a business opportunity for selling more "education". 

The inspiration for thinking more holistically about this may come from indigenous communities. The role of knowledge in these communities is not that of something acquired under special institutional conditions, but of something woven into the fabric of community life. The community has a good working model of itself which is enacted in daily living. It is a very different way of thinking about knowledge - and there is an excellent exhibition about it in Manchester's Whitworth Gallery at the moment, in the work of the Peruvian artist Santiago Yahuarcani - Santiago Yahuarcani: The Beginning of Knowledge | Whitworth Art Gallery. (I think there are modern equivalents to this indigenous approach to knowledge - maybe it's not that different from the way Nelson organised his fleet!)

I would like to think that when Sandel appeals for "dignity at work", and Seddon appeals for "system knowledge" and awareness of the 'failure demand' which puts huge strain on organisations, they are talking about the same thing. They may not see it like this. Seddon might say that Sandel only talks about dignity, preaching to the credentialed not to be condescending, whereas for Seddon the actual work of everyone in the organisation is to study their work and their demands, to challenge assumptions, and to increase the self-knowledge of the organisation. That is work for everyone, and it is the job of management/organisation to coordinate it.

I think the route to Sandel's "dignity at work" is the path Seddon charts. If we took that path with AI, for example, we would not be eyeing up ways in which AI can make our existing operations more efficient. We would be asking how AI can allow us to perceive aspects of our work which we couldn't see before. That would be to use it as a scientific instrument, not a new pair of roller skates - an instrument of knowledge, not an accelerant of current operations.
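As a small illustration of the difference, here is a minimal sketch - entirely my own assumption about where one might start, not Seddon's method - of using computation as an instrument to see demand rather than to process it faster: tagging incoming service requests as value demand or as Seddon's "failure demand" (demand caused by a failure to do something, or do it right, for the customer).

```python
from collections import Counter

# Hypothetical tell-tale phrases; a real instrument might use an LLM to
# classify each request, but this keyword stub keeps the sketch self-contained.
FAILURE_CUES = ("still waiting", "chasing", "no response", "wrong", "third time")

def classify_demand(request: str) -> str:
    """Tag a service request as 'value' (a fresh request) or 'failure'
    (a request we caused by failing to act, or to act properly)."""
    text = request.lower()
    return "failure" if any(cue in text for cue in FAILURE_CUES) else "value"

requests = [
    "I'd like to open a new account",
    "I'm still waiting for the form you promised last week",
    "Chasing my refund - this is the third time I've called",
]
print(Counter(classify_demand(r) for r in requests))
# Counter({'failure': 2, 'value': 1}) - a crude lens on where the strain comes from
```

The output isn't the point; the point is that the organisation learns something about itself that its efficiency metrics would never show.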

Then a thought about education itself: what if we taught people how to do this? Education's value would then no longer need to lie in a certificate, but in the actual, tangible benefits that the "work of thinking" delivers to organisations.


Saturday, 31 May 2025

Discipline Capture and Escape

This is a follow-on from my previous post about the fate of transdisciplinary scholarship in the present academy. That perhaps sounded like a personal complaint. Partly, it was - but there is more to the process, which I would call "discipline capture", than any individual's grievance. What I described was the process by which a transdiscipline like cybernetics gets 'torn apart' by discipline-based academics who seek to appropriate parts of it for career gains and the attention of their disciplinary colleagues. By this process, the transdiscipline's fundamental nature is destroyed. Even the advocates of the transdiscipline become agents of its destruction.

Cybernetics provides an excellent example. The original cybernetic thinkers were highly detailed and mathematical in their thinking, and somewhat difficult to understand. The papers of Wiener, McCulloch, Von Foerster or Pask can be challenging not just in their mathematical and logical elegance, but in their deviation from academic disciplinary norms - Pask's papers on learning are particularly notable for this. Once those original thinkers die, their disciples want to keep the conversation going, but recognise the need to communicate to a wider audience (otherwise, who is going to go to the conferences?). So a gradual process of dumbing-down occurs. This also happens through discipline capture - Pask's dilution into Laurillard's work is a case in point. Nobody has the time or the inclination to read the original work; they are too busy trying to drum up an audience for their own interpretation, or to self-aggrandise on the back of transdisciplinary scholarship. But the dumbing-down entails a real loss in our ability to harness the original insights.

This is a social dynamic, and one that Von Foerster (particularly) predicted ("The more profound the problem ignored, the greater the chances for fame and success!"). It raises the question of why disciplines and academics working in universities today are so destructive of transdisciplinary thinking - despite their "championing" of it (with champions like that, who needs enemies!). It's not just ego, ambition and the need to maintain a hold within the academy, although all of those play a part; they don't really explain anything. We need to look at what a discipline is in the first place.

Disciplines represent themselves through discourse which becomes codified within institutional structures and publications. Luhmann pointed out long ago the connection between discursive dynamics and institutional structure (his examples include economics, art, law and education). Leydesdorff later produced powerful metrics for analysing these dynamics (see his brilliant "Evolutionary Dynamics of Discursive Knowledge", to which I gave a video introduction here: Mark William Johnson: Chapter 1 - The Evolutionary Dynamics of Discursive Knowledge). I was lucky to have been part of that. But what this work didn't consider so much was the hegemonic power of a discourse backed by institutional authority. Luhmann and Leydesdorff's high-level "codes" of communication - the fundamental organising principles which distinguish art from economics, or law from love - represent constraints on utterances. Institutional structures amplify and reinforce those constraints, alongside metrics for academic performance.

Of course, disciplines develop and change - often by appropriating new ideas from other disciplines (biochemistry, for example). This development arises through what Leydesdorff calls "mutual redundancy" - a process of aligning the dynamics of one discourse with another. The transdiscipline is different in this process, because it presents mutual redundancy to all other disciplines. Cybernetics particularly presents new fundamental concepts which resonate with all levels of organisation, knowledge, subjects, etc. I wrote about this with Leydesdorff many years ago (see "Beer's Viable System Model and Luhmann's Communication Theory: 'Organizations' from the Perspective of Meta-Games", Systems Research and Behavioral Science, 2015). From the perspective of this paper (which was our first collaboration, and quite dense), discipline capture is a meta-game. If we (I) see it as destructive of transdisciplinarity, then the meta-game approach is to play a different game.
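For readers who want the mechanics, here is a toy sketch - mine, not Leydesdorff's software, and with an invented joint distribution - of the signed information measure that sits behind mutual redundancy: the mutual information among three distributions, whose negative values Leydesdorff reads as redundancy generated between the discourses.

```python
import numpy as np

def H(p):
    """Shannon entropy in bits; zero-probability cells are ignored."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information_3(pxyz):
    """Signed three-way mutual information T(x:y:z) in bits:
    T = H(x)+H(y)+H(z) - H(xy) - H(xz) - H(yz) + H(xyz).
    Unlike the two-dimensional case, this can go negative."""
    px, py, pz = pxyz.sum(axis=(1, 2)), pxyz.sum(axis=(0, 2)), pxyz.sum(axis=(0, 1))
    pxy, pxz, pyz = pxyz.sum(axis=2), pxyz.sum(axis=1), pxyz.sum(axis=0)
    return H(px) + H(py) + H(pz) - H(pxy) - H(pxz) - H(pyz) + H(pxyz)

# An invented joint distribution over binary "codes" in three discourses
rng = np.random.default_rng(0)
pxyz = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
print(mutual_information_3(pxyz))  # negative values ~ redundancy, on this reading
```

The measure describes the dynamics; playing a different game is another matter.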

I think our emerging technologies might provide a way to do this. Some of my friends have been very interested in creating a "glass bead game", and I am very sympathetic to this, although trying to realise what Hermann Hesse was really getting at is difficult, to say the least. I do think there are many ways to do something that breaks the rules of the existing academic games. One way may be the course I set up at the Far Eastern Federal University eight years ago. It's still going, despite the obvious constraints on my participation. The guiding principle of that course was to see the learning journey as a process of construction through a syncretistic world of indistinct encounters with multiple fields of knowledge. Now AI and VR and heaven knows what else could do this even more powerfully.

A few weeks ago I gave a lecture-performance at Manchester's wonderful transdisciplinary space, Bound and Infinity, on "music and cybernetics" (or musicocybernetics). It's a small space, but with a projection, a piano and a synthesizer, video and sound, I did something which (in the words of one attendee) invited the audience to think in new ways. There was nothing deterministic about this - it was improvisatory. But it had the desired effect. Much like what I had aimed for in Russia.

There is a new kind of syncretistic art form that is possible. We need it, because what happens in education at the moment is so dreary compared to what is possible with the new technologies we are surrounded by. It is a time for experiment and play. This will be too threatening for the established educational elite to support, so they are likely to be left behind by this "game-changer".



Tuesday, 6 May 2025

The Hunt for Explanatory Principles

The humanities exhibit various patterns of academic practice in today's university, but the most irritating is the "hunt for explanatory principles". Basically, this is a practice in which little original work is done, but academics seek to sound clever by fitting the work of a neglected transdisciplinary intellectual figure to some manifest (and usually intangible) phenomenon within a specific discipline. As a transdisciplinary person myself, I find some colleagues who are securely grounded in a discipline always on the hunt for some clue for their latest "conquest". Whatever clue I or others like me provide becomes the basis for speech acts of the kind "I've discovered x and applied it to fashion/art/music/business/society/etc". Perhaps we shouldn't tell them about x in the first place. I can't get angry about it, beyond being disappointed that it's so intellectually lazy: "x" is usually barely known within the particular academic community, so there is little authority which can be brought to bear to criticise the new explanatory principle, while the academic parades fake erudition and often misconceived interpretations of what "x" was going on about in the first place.

What this often represents is, once again, the disciplinary colonisation of transdisciplinary concepts. It is the Procrustean move of the institution, whose academic reward structures favour codifiable disciplinary appropriation, which in turn encourages expedient academics to own things that weren't intended to be owned - and certainly not by them. 

A deeper problem with this is that nothing fundamentally new gets done, because academics' brains are focused on the constant attention-grabbing pursuit of explanatory principles rather than on making any intellectual progress at all. Then there is the problem of explanatory principles in the first place.

To say "I can explain q" or "with the theory which I have discovered by dead philosopher x, I can explain this (and I shall bask in x's reflected glory!)", is an epistemological error. Gregory Bateson (another "x"!) long ago pointed out the misapprehensions around "explanatory principles". An explanatory principle can explain anything we want it to explain. It is a speech act designed to satisfy (or perhaps dull) curiosity. Bateson's favourite example of an explanatory principle is the "dormitive principle" to explain why ether puts us to sleep, as described by Moliere. I'm finding it a bit depressing at the moment that cybernetics is being used in a similar "dormitive principle" kind of way. It's great for making people sound clever - but what's new? Where's the progress?

It's as if we've got the scientific method the wrong way round. In Hume, explanation was part of the dialogue between scientists seeking to articulate causal explanations for the phenomena produced by experiments. Increasingly, in the arts and humanities and in Business Schools, we see precious few experiments. In the light of a candidate causal explanation, one would then seek further experiments; but we don't see this. Often all we see is self-congratulation. It's perhaps not a million miles away from how the scholastic university must have been just before everything was discredited and overturned in the 17th century. I'm not convinced that our new form of pseudo-scholasticism won't meet the same fate.

Explanatory principles can explain anything we want them to explain, or nothing at all. It is in the conversation - the coordination among scientists - that the real progress is made, and that requires experiment. We now have new means of doing experiments. Perhaps we should use them and do away with this performative nonsense!

Thursday, 1 May 2025

World Flourishing and Gary Stevenson

I've been very interested to watch the launch of Harvard's Global Flourishing Study report yesterday. The Guardian picked up one of the headlines, concerning the UK's low ranking in terms of flourishing (see UK among lowest-ranked countries for ‘human flourishing’ in wellbeing study | Science | The Guardian). The launch is here: https://www.youtube.com/live/iKTeNiEn9gU?feature=shared

I'm grateful to Diana Wu David (see Diana Wu David | future of work consultant & coach), whose work on the Future of Work is very motivating and visionary, for pointing me to the Harvard study. As I've been thinking about this stuff, I've also been sharing my enthusiasm for Gary Stevenson, whose videos on economics have been a real eye-opener for me over the last two years or so. 

Flourishing is a complex phenomenon, but the lack of resources among the poor must play a key role. Gary's analysis of the Covid lockdown as a wealth transfer to the rich is a very compelling narrative, and his criticism of the academic establishment is spot-on: what anachronistic nonsense!


It is interesting to consider whether human beings have any kind of "innate" capacity to overcome adversity. Is it easier if you have the emotional support of a loving family than if you are estranged from your family and have been abused for the whole of your life? Surely these situations are different. So it really does matter "who your parents are", as Stevenson says - not just because of the financial resources available to the middle classes, but because emotional support becomes more probable (though obviously not certain) under circumstances of material family comfort.

As human beings we find ourselves caught between self-care in local communities - care which prioritises autonomy and personal choice - and the care that is provided by social institutions: health services, social services, education, etc. These latter entities are heteronomous, to use Ivan Illich's borrowing of Kant's distinction between autonomy and heteronomy. Illich's argument was that if the balance between autonomy and heteronomy gets out of whack, then we are in trouble. He further said that social systems and technologies start from a position of empowering autonomy, but end up as heteronomous behemoths (church, transport, energy, health service, education, etc.).

The less wealth we have, the more the mechanisms of self-care become skewed towards subsistence rather than sustainability, while the subsistence mode is increasingly reinforced by the relationship between individuals and heteronomous public services. This is partly because the heteronomous side has no interest in the qualitative aspects of existence, but rather sees its role in terms of statistics and average outcomes. So it becomes a vicious circle. The heteronomous side will also seek to maintain itself by selecting those people it serves for whom its interventions stand the best chance of working.

Friday, 11 April 2025

Rethinking VR and Perception

I'm in Seoul at the moment, on an "AI Tour" of East Asia taking in Hong Kong, Zhuhai (China), Seoul and Taiwan. Having spent a couple of years trying to get academic staff up to speed with what's happening with the technology, I've come here with a slightly different message from the usual "look at what you can do with GenAI" stuff. The message is about thinking ahead of where the technology is "at", to where it (and we) are going. To put it simply, this technology is going to change the way we perceive the world. In some ways, this is what technology has always done (it's striking to think that Chinese society is now unthinkable without the mobile phone), but AI is going to present fundamental perceptual shifts to us, and this will have a huge impact on how we learn and coordinate ourselves in the world.

If we assume that the fundamental scientific breakthrough with AI has been made (although I think more discoveries are on the way which will bridge the biological/technological divide), then what happens next is a predictable increase in speed and scale. In terms of speed, the things we are accustomed to taking a few minutes or seconds, like image and video generation, will become almost real-time. A change of speed is a fundamental change in the nature of the technology: images appearing as we speak will change the way we communicate. Video appearing as we speak will be even more profound. At some point we may even have images appearing as we think. I would have been sceptical about this a few years ago, but some of it is already practicable, and the rest is coming into view. We simply aren't ready for what it will do to us, and to a large extent we are worrying about the wrong things.

Interactive video will also soon be with us - so not only will AI generate video from prompts, it will enable us to interact with that video virtually. I showed this demo of an AI-generated game world in Seoul. It's already pretty astonishing. Computer games are going to become increasingly important - potentially as a means of communication. It's making me think that my scepticism about VR was misplaced. I had based that scepticism on the fact that content for VR is so time-consuming to create. But with AI, content generation will become as much of a non-issue as it has rapidly become for text.


I can't say what this will all mean. But I can say that this is likely to happen. 

Friday, 28 March 2025

AI in the Academy: Unfolding dynamics, and a history lesson

There is a mystery as to why the most transdisciplinary science, cybernetics, never really got a hold in the university. Yes, there were weird outposts of cybernetic activity like Von Foerster's Biological Computer Lab at the University of Illinois, but it turned out not to be very sustainable. The most significant UK centre was at Hull University, and that has pretty much been disbanded. I believe what we are seeing with the impact of AI on the university tells us why this happened, and why a similar pathology is happening again.

A university is a set of disciplinary fiefdoms - elegantly described years ago in Tony Becher's "Academic Tribes and Territories". Academic tribes or fiefdoms tend to want to defend themselves from each other. When disciplinary boundaries are clear, this works pretty well - and has done since the trivium and quadrivium of the Middle Ages...

When a truly transdisciplinary subject comes along - and cybernetics was just that - it puts disciplines in a bit of a panic. It's not that they want to defend themselves from the transdiscipline, but rather that each seeks to own it, and therefore looks to acquire and colonise bits of it. We are seeing exactly this process unfolding around AI at the moment: every discipline is staking its claim to it. The consequence is that the transdiscipline becomes divided and absorbed into disciplines; its intrinsic transdisciplinary nature is dissolved. This is truly crazy behaviour, but it is determined by the structure of institutions.

The only hope, I feel, is for universities (or some other institution for scientific inquiry and intellectual growth) to construct themselves not around the codifications of curriculum and disciplines, but around tacit knowledge, shared experience and creative expression. AI could help here. It could be a huge amplification of the creative imagination. It could create shared experiences in ways we have not conceived of before. Unfortunately, despite their many virtues, universities are unlikely to be the home for these new kinds of innovation and experience. It will happen somewhere else.

The issue has to do with the roots of institutions of knowledge and science in monasticism. Whatever caused human beings to retreat from ordinary daily life to live in small ascetic communities in the desert, it was a deep physiological need. Many scholars feel this same need in the wake of the modern academy's transformation into a business. Physical retreat is unlikely. But a spiritual retreat into new kinds of shared experience and ways of communicating is likely to become more feasible. Students and industry may follow, and the universities may have to play catch-up.