At the end of Norbert Wiener's "The Human Use of Human Beings", written in 1950, he identified a "new industrial revolution" afoot, one that would be dominated by machines replacing, or at least assisting, human judgement. Wiener, having invented cybernetics, feared for the future of the world: he understood the potential of what he and his colleagues had unleashed, which included computers (John von Neumann), information theory (Claude Shannon) and neural networks (Warren McCulloch). He wrote:
"The new industrial revolution is a two-edged sword. It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction." (p.162)The destructive power of technology would result, Wiener argues, from our "burning incense before the technology God". Well, this is what's going on in China in their education system right now (see https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/)
There has, unsurprisingly, been much protest from teachers online about this story. However, we must not lose sight of the fact that the technology does bring real benefits to these students, autonomy not the least of them. But we are missing a coherent theoretical strand that connects good face-to-face teaching to Horrible Histories, Khan Academy and this AI (and many steps in between). There is most probably a thread that connects them, and we should seek to articulate it as precisely as we can. Otherwise we will be beholden to the rough instinct of human beings unaware of their own desire to maintain their existence within their current context, in the face of a new technology which will transform that context beyond recognition.
AI gives us a new and powerful God before which we (and particularly our politicians) will need to resist the temptation to light the incense. But many will burn incense, and this will fundamentally be about using the technology to maintain the status quo in education in an uncertain environment. So this is AI to get the kids through "the test" more quickly. And (worse) the tests they are concerned with are in STEM subjects. Where's the AI that teaches poetry, drama or music?
It's the STEM focus which is the real problem here, and ironically, it is the thing most challenged by the AI/machine learning revolution (actually, I think the best way to describe the really transformative technology is to call it an "artificial anticipatory system", but I won't go into that now). This is because in the world that's going to unfold around us - the world we're meant to be preparing our kids for - machine learning will provide new "filters" through which we can make sense of things. This is a new kind of technology which clearly works - within limits, but well beyond expectations. Most importantly, while the machine learning technology works, nobody knows exactly how these filters work (although there are some interesting theories: https://medium.com/intuitionmachine/the-holographic-principle-and-deep-learning-52c2d6da8d9).
Machine learning is created through a process of "training", where multiple redundant descriptions of phenomena are fed into a machine for it to learn the underlying patterns behind them. Technical problems in the future will be dealt with through this "training" process, in the way that our current technical problems demand "coding": the writing of explicit algorithms. It is also likely that many professionals in many domains will be involved in training machines. Indeed, training machines will become as important as training humans.
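To make the contrast concrete, here is a minimal sketch in Python (my choice of language; the post names no particular toolkit, and the scikit-learn library and the toy weather data below are purely illustrative assumptions). It shows a hand-coded rule beside a rule learned from redundant labelled examples:

```python
# A sketch of "coding" versus "training", assuming scikit-learn is available.
from sklearn.tree import DecisionTreeClassifier

# Redundant descriptions of a phenomenon: [temperature, humidity] readings,
# each labelled 1 ("rain") or 0 ("no rain"). Toy data, purely illustrative.
examples = [[18, 90], [17, 85], [25, 40], [30, 30], [16, 95], [28, 35]]
labels   = [1, 1, 0, 0, 1, 0]

# "Coding": we write the rule ourselves as an explicit algorithm.
def coded_rule(temp, humidity):
    return 1 if humidity > 60 else 0

# "Training": we let the machine infer a rule from the examples.
model = DecisionTreeClassifier().fit(examples, labels)

print(coded_rule(20, 70))           # prediction from the hand-written rule
print(model.predict([[20, 70]]))    # prediction from the learned rule
```

The point of the sketch is not the toy model itself, but that nobody wrote the second rule: it was induced from the examples, and improving it means supplying better examples rather than better code.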
This dominance of machine training, and of partnership between humans and machines in the workplace, means that the future of education will have to become more interdisciplinary. It won't be enough for doctors to know about the physiological systems of the body; professionally, they will have to be deeply informed about the ways the AI diagnostic devices around them behave, and take an active role in refining and configuring them. Moreover, such training processes will involve not only the functional logic of medical conditions, but also the aesthetics of images, the nuances of judgement, and the social dynamics of machine and human/organisational decision-making. So how do we prepare our kids for this world?
The fundamental problems of education have little to do with learning stuff to pass the test: that is a symptom of the problem we have. They have instead to do with organising the contexts for conversations about important things, usually between the generations. So the Chinese initiative basically exacerbates a problem produced by our existing institutional technologies (I think of Wiener's friend Heinz von Foerster: "we must not allow technology to create problems it can solve"). AI is dragged out of what Cohen and March famously called the "garbage can" of institutional decision-making (see https://en.wikipedia.org/wiki/Garbage_can_model), while the real problem, which is avoided, is: "how do we reorganise education so as to prepare our kids for the interdisciplinary world as it will become?"
This is where we should be putting our efforts. Our new anticipatory technology provides new means for organising people and conversations. It may actually give us a way to organise ourselves such that "many brains can think as one brain", which was Stafford Beer's aim in his "management cybernetics" (Beer was another friend of Wiener). My prediction is that eventually we will see that this is the way to go: it is crucial to local and planetary viability that we do.
Will China and others see that what they are currently doing is not a good idea? I suspect it really depends not on their attitude to technology (which will take them further down the "test" route), but on their attitude to freedom and democracy. Amartya Sen may well have been right in "Development as Freedom" in arguing that democracy is the fundamental element of economic and social development. We shall see. But this is an important moment.