
Monday, 11 August 2025

Autonomy and Heteronomy - teaching students to make their own tools for learning

I've recently run sessions on public health for summer school students from China and the US. Occasionally you get gasps from students who suddenly see that they can do something they had never imagined possible. AI is great for this. To tell public health students (who, by and large, are not the most technical) "by the end of this session, you will be creating code for a public health app", and then to deliver on that in a way that surprised even me, was a great experience for everyone.

I did an activity with them which asked them to design an app, focusing on how it would balance "what individuals can do for themselves autonomously" against "what is done for them". The biggest challenge in the exercise was actually getting them to think away from technology doing everything for users. When I asked one group, who wanted to do something around vaccination, what individuals could do for themselves, they said "follow the rules". We talked a lot about this!

The upshot of the exercise was flipchart paper filled with handwritten designs for their tool: what the system would do and what individuals could do. Taking a photo of this, putting it into an AI (we tried a number: Copilot, ChatGPT, DeepSeek), and asking the AI to write the interface code in HTML performed miracles - hence the gasps. Interestingly, DeepSeek was by a long way the best at this. Its code was always correct (which Copilot's wasn't), and far more detailed in its interpretation of the designs than ChatGPT (4o).
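To give a sense of what comes back from such a prompt, here is a minimal sketch of the kind of HTML an AI might return for the vaccination group's design. This is my own illustrative reconstruction, not the students' actual output: the record fields, the element ids, and the use of localStorage to keep the record on the user's own device are all assumptions made for the sake of the example.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>My Vaccination Record</title>
  <style>
    body { font-family: sans-serif; max-width: 480px; margin: 2em auto; }
    section { border: 1px solid #ccc; border-radius: 8px; padding: 1em; margin-bottom: 1em; }
  </style>
</head>
<body>
  <h1>My Vaccination Record</h1>

  <!-- Autonomous side: things the individual records and decides for themselves -->
  <section>
    <h2>What I do for myself</h2>
    <label>Vaccine received: <input type="text" id="vaccine"></label><br>
    <label>Date: <input type="date" id="date"></label><br>
    <label>My questions for the clinic: <textarea id="questions"></textarea></label><br>
    <button onclick="save()">Save to my record</button>
  </section>

  <!-- Heteronomous side: things the system does for the individual -->
  <section>
    <h2>What the system does for me</h2>
    <p id="status">No record saved yet.</p>
  </section>

  <script>
    // Store the entry locally so the record belongs to the user, not a server
    function save() {
      const entry = {
        vaccine: document.getElementById("vaccine").value,
        date: document.getElementById("date").value,
        questions: document.getElementById("questions").value
      };
      localStorage.setItem("vaccinationRecord", JSON.stringify(entry));
      document.getElementById("status").textContent =
        "Recorded " + entry.vaccine + " on " + entry.date + ".";
    }
  </script>
</body>
</html>
```

Opened directly in a browser, even a fragment like this is enough for students to see their flipchart design become a working screen - which is where the gasps come from.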

But the discussion was more important than the tech. Behind my "what people can do for themselves" and "what is done for them" was the dichotomy between heteronomy and autonomy which Illich borrowed from Kant, applying it to technology and arguing that technology reinforces the heteronomous side at the expense of the autonomous side. My little exercise may have done something to push things back towards the autonomous side... albeit by way of large, heteronomous AI models (although these could be run offline). All this is fascinating.

But broadly speaking, Illich was right about the pressure to absorb everything into the heteronomous side, and he attacked most public services, from education to the health service, for doing this. It is, I suspect, this heteronomous absorption which lies at the root of our current political, environmental and economic crisis. This challenge is consuming me at the moment: the real issue with AI is not AI - it is our approach to organisation, which does exactly what Illich complained about. From Silicon Valley to our universities, it is evident everywhere.

Forty years ago in Manchester, Enid Mumford was designing technical systems with users (for her, it was nurses). She insisted that systems should be made with people, not done to them. This was also a call for balancing autonomy with heteronomy. Frankly, it didn't really catch on. Increasingly - and perhaps out of necessity - computer system design became professionalised. But if people can create their own code with AI, perhaps there is a new opportunity to revisit this. Some of the technical foundations for a shift are already there, particularly in Service-Oriented Architecture initiatives (my early work on Personal Learning Environments was based on users constructing their own tools on top of a Service-Oriented Architecture). AI gives us a new angle on this idea.

So one simple thing we could do to address the heteronomy/autonomy balance is to teach students to make their own tools for learning. This would be, I think, far more effective as pedagogy than any attempt at "AI literacy", which seems to be where things are going. All that does is hand more power to the heteronomous side: to educators (who are often not experts in this stuff), who will attenuate the creative forces which the technology has unleashed. Yet it is these creative forces which will be essential for the survival of graduates in the world as it is unfolding.

When the gen-AI thing broke, I said to university audiences (in Denmark, China, Manchester) that what people should worry about is what bright 13-year-olds would be doing with the tech. By the time they come to university, they are going to be hugely advanced, not only technically but also in their knowledge elsewhere. The world is a weird and confusing place in which to be a child, and those children with the space and technology to ask questions will be exploring amazing things right now. I had a reminder of this when a friend whom I invited to the Laws of Form conference last week told her 8-year-old son that she was going to meet a world expert in topology. "I know about topology - that's about knots and space," he said. "I've been watching loads of videos on YouTube. A circle is an unknot - it's amazing." He then went on to tell her about the square root of -1.

This may be one particular child. But I doubt it. Children have fresh brains, and the world has lots of big, worrying questions alongside incredibly powerful means of finding things out and producing technologies. They are likely to make the most creative use of these tools. But they are then likely to encounter the heteronomous education system. The heteronomous side may win (again). But I'm not sure. And I'm pretty certain we should push against it.

The problem is that rebalancing heteronomy and autonomy is a real challenge to the way things are organised, particularly in the wake of a commodified education system (for which read: heteronomy-dominated). It is also a challenge to those who gatekeep the heteronomous side, because it is in their interests to do so. But it is almost certainly not in the interests of the next generation.


Friday, 8 August 2025

Lines, Boxes and Spaces

There's a wonderful event happening in Manchester at the moment. The Bound and Infinity bookshop in Tib Street is hosting a number of friends talking about physics (Peter Rowlands), mathematics and Laws of Form (Louis Kauffman), and architecture (Andrew Crompton), as well as interjections from art, music, philosophy, etc. I brought along one of my PhD students from public health, whose reaction was "where has this been all my life?!". Particularly wonderful was the fact that many attendees are quite young and thinking the kinds of ambitious thoughts one has at 18 (and which academia is very good at knocking out of people). The event serves as encouragement to youth not to give in to the deadly institution.

I only encountered cybernetic thinking in my mid-30s and had the same reaction. I slightly kick myself that I might have got there sooner if I'd had the courage to speak to Stafford Beer in Manchester University's music department when I was a student (he visited regularly to attend the Lindsay String Quartet's concerts). I wish someone had dragged me to a cybernetics conference at that age. But maybe it's best that that didn't happen.

The problem is that ambitious thinking doesn't have an easy ride in the university. This is really because of the pathology of disciplines that I wrote about recently. Disciplines are fiefdoms, as Tony Becher pointed out (I discovered in a conversation with Ron Barnett a few weeks ago that Becher was responsible for helping Ron get onto the academic ladder from his admin role in the university). People like Becher and Barnett know what universities are - and what they should be. If Barnett's example is anything to go by, there are likely to be more brilliant and original minds among the admin of the university than among the credentialed academics. That's a problem we should do something about.

The real difficulty is that the career path for those who think in an interdisciplinary way just isn't there. Universities continually talk about interdisciplinarity, but they don't do it - and very often they don't know what it really is, which is revealed whenever they try to create "departments" for interdisciplinarity.

The real problem is that institutions organise themselves into disciplinary boxes: departments with budgets, teaching loads, journals, etc. An interdisciplinary box is no better than a disciplinary box. Each box wants to maintain its viability and compete with other boxes in the process. But interdisciplinarity doesn't belong in a box.

Cyberneticians often draw diagrams of organisations as boxes with wires (lines) connecting them. Interdisciplinarity really belongs in the wires, not the boxes. But there is no career in being in the lines.

The interdisciplinary scholar's role is to flow through the institution, between the boxes. It is a completely different kind of existence from existing in a box. This is not to say that boxes aren't important: there should be people in a history box or a physics box... Rigour does count for something. But boxes without people flowing through the wires stifle imagination.

This has been apparent to cyberneticians for decades. Gregory Bateson wrote this in an address to the Regents of the University of California in 1978:

"While much that universities teach today is new and up-to-date, the presupposition or premises of thought upon which all our teaching is based are ancient and, I assert, obsolete. I refer to such notions as:

a. The Cartesian dualism separating "mind" and "matter"

b. The strange physicalism of the metaphors which we use to describe and explain mental phenomena - "power", "tension", "energy", "social forces", etc

c. Our anti-aesthetic assumption, borrowed from the emphasis which Bacon, Locke and Newton long ago gave to the physical sciences, viz that all phenomena (including the mental) can and shall be studied and evaluated in quantitative terms. 

The view of the world - the latent and partly unconscious epistemology - which such ideas together generate is out of date in three different ways:

a. pragmatically, it is clear that these premises and their corollaries lead to greed, monstrous over-growth, war, tyranny, and pollution. In this sense, our premises are daily demonstrated false, and the students are half aware of this.

b. Intellectually, the premises are obsolete in that systems theory, cybernetics, holistic medicine, and gestalt psychology offer demonstrably better ways of understanding the world of biology and behaviour.

c. As a base for religion, such premises as I have mentioned became clearly intolerable and therefore obsolete about 100 years ago. In the aftermath of Darwinian evolution, this was stated rather clearly by such thinkers as Samuel Butler and Peter Kropotkin. But already in the eighteenth century, William Blake saw that the philosophy of Locke and Newton could only generate "dark Satanic mills"."

But this leads to his key message, where he says, sarcastically, that by 1979:

"we shall know a little more by dint of rigour and imagination, the two great contraries of mental process, either of which by itself is lethal. Rigour alone is paralytic death, but imagination alone is insanity."