I've recently run public health sessions for summer school students from China and the US. Occasionally you get gasps from students who suddenly see that they can do something they had never imagined doing before. AI is great for this. To tell public health students (who, by and large, are not the most technical) "by the end of this session, you will be creating code for a public health app", and then to deliver on that in a way that surprised even me, was a great experience for everyone.
I did an activity with them which asked them to design an app, with a focus on how it would balance "what individuals can do for themselves autonomously" against "what is done for them". The biggest challenge in the exercise was actually getting them to think away from technology doing everything for users. When I asked one group, who wanted to do something around vaccination, what individuals could do for themselves, they said "follow the rules". We talked a lot about this!
The upshot of the exercise was flipchart paper filled with handwritten designs for their tool: what the system would do and what individuals could do. Taking a photo of this, putting it into an AI (we tried a number - Copilot, ChatGPT, DeepSeek) and asking it to write the interface code in HTML performed miracles - hence the gasps. Interestingly, DeepSeek was best at this by a long way. Its code was always correct (which Copilot's wasn't), and far more detailed in its interpretation of the design than ChatGPT's (4o).
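To give a sense of what the students saw, here is a minimal, hypothetical sketch of the kind of single-file interface these tools can produce from a photographed flipchart design. The vaccination example, the labels and the field names are my own illustration, not the students' actual design or any model's actual output; the point is simply that one page of HTML is enough to make a paper design feel real, and that the two sides of the design can be laid out explicitly.

```html
<!-- Hypothetical sketch only: my illustration of the kind of single-file
     interface generated in the session, not the students' design or any
     model's actual output. Open the file directly in a browser to try it. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Vaccination Companion (sketch)</title>
</head>
<body>
  <h1>Vaccination Companion</h1>

  <!-- The heteronomous side: what the system does for the user -->
  <section>
    <h2>What the app does for you</h2>
    <ul>
      <li>Reminds you when a dose is due</li>
      <li>Shows local clinic opening times</li>
    </ul>
  </section>

  <!-- The autonomous side: what the user records and decides for themselves -->
  <section>
    <h2>What you do for yourself</h2>
    <form onsubmit="addNote(event)">
      <label>Questions to ask at your next appointment:
        <input id="note" type="text">
      </label>
      <button type="submit">Save</button>
    </form>
    <ul id="notes"></ul>
  </section>

  <script>
    // Keep the user's own notes in the page; nothing is sent anywhere.
    function addNote(event) {
      event.preventDefault();
      const input = document.getElementById("note");
      if (!input.value.trim()) return;
      const item = document.createElement("li");
      item.textContent = input.value;
      document.getElementById("notes").appendChild(item);
      input.value = "";
    }
  </script>
</body>
</html>
```

No toolchain, no deployment: a single file opened in a browser is all it takes, which is precisely why non-technical students could get from flipchart to working screen within one session.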
But the discussion was more important than the tech. Behind my "what people can do for themselves" and "what is done for them" was the dichotomy between heteronomy and autonomy which Illich borrowed from Kant, applying it to technology and arguing that technology reinforces the heteronomous side at the expense of the autonomous side. My little exercise may have done something to push things back towards the autonomous side... albeit with large heteronomous AI models (although these could be accessed offline). All this is fascinating.
But broadly speaking, Illich was right about the pressure to absorb everything into the heteronomous side, and he attacked most public services for doing this, from education to the health service. It is, I suspect, this heteronomous absorption which lies at the root of our current political, environmental and economic crisis. This challenge is consuming me at the moment - the real issue with AI is not AI, it is our approach to organisation, which does exactly what Illich complained about. From Silicon Valley to our universities, it is evident everywhere.
Forty years ago in Manchester, Enid Mumford was designing technical systems with users (for her, it was nurses). She insisted that systems should be made with people, not done to them. This was also a call for balancing autonomy with heteronomy. Frankly, it didn't really catch on. Increasingly - and perhaps out of necessity - computer system design became professionalised. But if people can create their own code with AI, perhaps there is a new opportunity to revisit this. Some of the technical foundations for a shift are already there - particularly in Service-Oriented Architecture initiatives (my early work on Personal Learning Environments was based on users constructing their own tools around a Service-Oriented Architecture). AI gives us a new angle on this idea.
So one simple thing we could do to address the heteronomy/autonomy balance is to teach students to make their own tools for learning. This would, I think, be far more effective as pedagogy than any attempt at "AI literacy", which seems to be where things are going. All that does is hand more heteronomous power to educators (who often are not experts in this stuff) to attenuate the creative forces which the technology has unleashed. Yet it is these creative forces which will be essential for the survival of graduates in the world as it is unfolding.
When the gen-AI thing broke, I said to university audiences (in Denmark, China, Manchester) that what people should worry about is what bright 13-year-olds would be doing with the tech. By the time they come to university, they are going to be hugely advanced, not only technically but also in their knowledge elsewhere. The world is such a weird and confusing place to be a child, and those children with the space and the technology to ask questions will be exploring amazing things right now. I had a reminder of this when a friend I had invited to the Laws of Form conference last week told her 8-year-old son that she was going to meet a world expert in topology. "I know about topology - that's about knots and space," he said. "I've been watching loads of videos on YouTube. A circle is an un-knot - it's amazing." He then went on to tell her about the square root of -1.
This may be one particular child. But I doubt it. Children have fresh brains, the world has lots of big, worrying questions, and there are now incredibly powerful means of finding things out and producing technologies. They are likely to make the most creative use of these tools. But they are then likely to encounter the heteronomous education system. The heteronomous side may win (again). But I'm not sure. And I'm pretty certain we should push against it.
The problem is that rebalancing heteronomy and autonomy is a real challenge to the way things are organised - particularly in the wake of a commodified education system (for which read, heteronomy-dominated). It is also a challenge to those who gatekeep the heteronomous side, because it is in their interests to do so. But it is almost certainly not in the interests of the next generation.