There's a lot of stuff about technology in education on the internet at the moment. A lot of it is increasingly paranoid: worries about the "platform" university, surveillance, brain scanning, boredom scanning, omniscient AIs, idiot AIs, and big corporations making huge sums of money out of the hopes and dreams of our kids. Whitehead once noted that if one wants to see where new ideas will arise, one should look at what people are not talking about. So, most likely, the future is going to be "none of the above". But what are we not talking about?
The Golem-like AI is, and always has been, a chimera. What is real in all this stuff? Well, follow the money. Educational institutions are enormous financial concerns, boosted by outrageous student fees, burgeoning student numbers, increasingly ruthless managerial policies of employment, and an increasing disregard for the pursuit of truth in favour of the pursuit of marketing-oriented "advances" and "high-ranking" publication - bring on the graphene! (Condoms, lightbulbs, and water purification, here we come. Perhaps.) Of course, tech companies are massive financial concerns too, but while we are all on Facebook and Twitter, we are not taking out loans of thousands of pounds to feed our Facebook habit. Naturally, Facebook would like to change that. But it seems a long shot.
So we come back to the institution of education. Why has it become such a dreadful thing? When I think about my own time in the music department at Manchester University in the late 80s, I think of how my best professors would probably not be able to get a job in the metrics-obsessed University now. This is a disaster.
Ross Ashby (another genius who would have struggled) noted that any system that distinguishes categories effectively "throws away information". The profundity of that observation is worth reflecting on. All our educational IT systems, all our bibliometric systems, the NSS, REF, TEF, etc. are category-making systems. They all throw away information. The result of the impact of technology - information technology, no less - on education has been the loss of information by its institutions.
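Ashby's observation can be made concrete in Shannon's terms: assigning items to categories collapses distinctions, and the entropy of the data drops. A minimal sketch - the marks and grade boundaries below are invented purely for illustration:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical raw marks for a small cohort
marks = [34, 41, 48, 52, 55, 58, 61, 64, 67, 72, 78, 85]

# A categorising system: collapse the marks into three grade bands
def grade(m):
    return "distinction" if m >= 70 else "merit" if m >= 50 else "pass"

grades = [grade(m) for m in marks]

print(entropy(marks))   # log2(12) ≈ 3.58 bits: every mark is distinct
print(entropy(grades))  # 1.5 bits: the categories have thrown information away
```

The categorised data can never be un-collapsed: once the institution stores only the grade, the difference between a 52 and a 67 is gone for good.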
What happens to institutions when they lose information? They lose the capacity to adapt their operations in an increasingly complex environment. As a result, they become more rigid and conservative. Information systems create problems that information systems can solve, and each new wave of problems loses more information than the previous wave. We are left with a potentially irrelevant (although still popular) operation which has little bearing on the real world. This is where we are, I think.
Let's be a bit clearer about institutions: institutions which lose information become unviable. So, a viable institution is an entity which conserves information. Traditionally - before technology - this was done by balancing the institution's operational (information-losing) function with a reflexive (information-gaining) function that probed the environment: academics had space to think about how the world was changing and to make informed interventions both in the world and in the institution. When technology entered the institution, the operational function - which was always a "categorising" function - was amplified, and for many, excited by the apparent possibilities of new techno-coordinating powers, the loss of information was welcomed; at the same time, the reflexive function was dismissed as wasteful or irrelevant. Basically, everything became "operations", and thought went out of the window.
Many AI enthusiasts see AI as a further step towards the information-losing side of things, and welcome it. AI can lose information better than anything: it is, basically, a technology for soaking up a large amount of redundancy for the sake of producing a single "answer", saving human beings the labour of having to talk to each other and work out the nuances of things.
But in the end, this will not work.
Yet AI - or rather, deep learning - is a different kind of technology. In working with redundancy rather than information, it is something of a counterbalance to information systems: redundancy is the opposite of information. Where information systems amplified the "categorising" operational processes, might deep learning amplify the reflexive processes? I am particularly interested in this question because it presents what might be a "feasible utopia" for technologically-enhanced institutions of education in the future. Or rather, it presents the possibility of using technology to conserve, not destroy, information.
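"Redundancy is the opposite of information" has a precise reading in Shannon's framework: redundancy is the fraction by which a signal's entropy falls short of its maximum possible entropy. A minimal sketch (the example strings are arbitrary):

```python
import math
from collections import Counter

def shannon_redundancy(text):
    """Redundancy in Shannon's sense: 1 - H/H_max, where H is the per-symbol
    entropy of the text and H_max = log2(alphabet size). 0 means every symbol
    is maximally informative; values near 1 mean the signal is predictable."""
    counts = Counter(text)
    n = len(text)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts))
    return 1 - h / h_max

# A repetitive signal is highly redundant: little new information per symbol
print(shannon_redundancy("aaaaaaaaab"))  # ≈ 0.53: mostly predictable

# A signal with every symbol equally likely has zero redundancy
print(shannon_redundancy("abcdefghij"))  # 0.0: maximally informative
```

It is this predictable, patterned surplus - not the informative residue - that deep learning feeds on, which is why it sits on the opposite side of the ledger from the category-making systems.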
The key to being able to do this is to understand how deep learning might work alongside human judgement, and particularly the ordering of human judgement. If deep learning can operate to preserve human order, coordinate effective human-based correction of machine error, whilst supporting human judgement-making, then a virtuous circle might be possible. This, it seems to me, is something worth aiming for in technologically embedded education.
Conserving information is the heart of any viable institution. States and churches have survived precisely because their operations serve to preserve their information content, although like everything, they have been challenged by technology in recent years.
In Stafford Beer's archive, there is a diagram he drew of the health system in Toronto. At the centre of the diagram is a circle representing a "population of healthy people". This is a system for preserving health, not treating illness. And, more importantly, it is a system for preserving information about health.
We need the same for education. In the centre of a similar diagram for education, perhaps there should be a box representing a population of wise people: everyone from children to spiritual leaders. What is the system that preserves this wisdom in society? It is not our current "education" system, which seeks to identify the deficit of ignorance and fill it with lectures and certificates. Those things make wise people go mad! It is instead a set of social coordination functions which together preserve information about wisdom and create the conditions for its propagation from one generation to the next. We don't have this, because of the information loss in education. Can we use technology to correct that loss and turn it into conservation? I think we might be able to. We should try.
My concern with the paranoia about technology in education at the moment is that it is entirely framed around the perspective of the traditional institution in its current information-losing form. It is effectively a realisation that the path of information loss leads to the abyss. It does indeed. But it is not technology that takes us there. It is a particular approach to technology - one that turns its hand to categorisation and information loss - that takes us there. Other technologies are possible. Another technological world is possible. Better education, which preserves information and maintains a context for the preservation of wisdom between the generations, is possible. But only (now) with technology.