Friday 28 June 2019

Choosing a new VLE: Technological Uncertainty points towards Personal Machine Learning

I sat in a presentation yesterday from a company wanting to replace my university's Virtual Learning Environment. It was a slick, well-practised presentation, and people generally liked it because the software wasn't too clunky. Lack of clunkiness is a sign of quality these days. The functionality of new educational technology is presented as a solution to problems which have been created by other technology: the management of video, the coordination of marks, mobile apps, threaded discussion, the integration of external tools, and so on. A solution to these problems creates new options for doing the same kinds of things: "use our video service to make the management of video easier", "use our PDF annotation tool to integrate with our analytics tools", and so on. Redundancy of functionality is increased in the name of simplifying the institution's technological complexity. But in the end it can't keep up: what we end up with is another option to choose from, an increase in the uncertainty of learners and teachers, which inevitably necessitates a managerial diktat insisting on the use of tool x rather than tool y. Technology that promises freedom produces restriction, and an ever-widening stand-off between the technology of the institution and the technology of the outside world.

The basic thesis of my book "Uncertain Education" is that technology always creates new options for humans to act. Through this, the basic human problem of choosing the manner and means of acting becomes less certain. Institutions often react to rising uncertainty in their environment by co-opting technologies to reinforce their existing structures: so "institutional" tools, rather than personal tools, dominate. Hence we have corporate learning platforms in universities, and the dominance of corporate online platforms everywhere else. This is shown in the diagram below: the "institution's assistance" operates at a higher-level "metasystem", which tries to attenuate the uncertainty of learners and teachers in the primary system (the circle in the middle). Institutional technology like this seeks to ease workers' burden of choosing technology, but the co-opting institutional process can't keep up with the pace of change in the outside world - indeed, it feeds that change. This situation is inherently unstable, and will, I think, eventually lead to the transformation of organisational structures. New kinds of tools may drive this process. I am wondering whether personal AI, or more specifically personal machine learning, might provide a key to that transformation.

Machine learning appears to be a tool which also generates many new options for acting. Therefore it, too, should exacerbate uncertainty. But is there a point at which the tools which generate new options for acting also create new ways in which options might be chosen by an individual? Is there a point at which this kind of technology is able to assist in the creation of a coherent understanding of the world in the face of the explosive complexification produced by technology? One way this might work is if machine learning tools could assist in stimulating and coordinating conversations directly between teachers and learners. Rather than sitting in an institutional metasystem, machine learning could operate at the level of the human system in the middle, helping to mitigate the uncertainty that would otherwise have to be managed by the higher-level system.

Without wanting to sound too posthuman, machine learning may not be so much a tool as an evolutionary "moment" in the relationship between humans and machines. It is the moment when the separation between humans and machines, which humans have defended since the industrial revolution in what the philosopher Gilbert Simondon calls "facile humanism", becomes indefensible. Perhaps people like Friedrich Kittler and Erich Hörl are right: we are no longer human selves constituted of cells and psychology existing in a techno-social system; now the technical system constitutes the human "I" in a process intermediated by our cells and our consciousness.

I wonder if the point is driven home when we appreciate machine learning tools as an anticipatory system. Biological life drives an anticipatory process in modelling and adapting to the environment. We don't know how anticipation occurs, but we do possess models of what it might be like. One way of thinking about anticipation is to imagine it as a kind of fractal - something which approximates to David Bohm's 'implicate order' - an underlying and repeated symmetry. We see it in nature, in trees, in art, music, and in biological developmental processes. Biological processes also appear to be endosymbiotic - they absorb elements of the environment within their internal structure, repeating them at higher levels of organisation. So cells absorbed the mitochondria which once lived independently, and the whole reproduces itself at a higher order. This is a fractal.
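As a toy illustration of the self-similarity being described (nothing more than that - the branching rule and the depth are invented), here is a recursive structure in which the whole is built from copies of its own lower levels:

def self_similar(depth):
    # base case: the smallest unit
    if depth == 0:
        return "·"
    inner = self_similar(depth - 1)
    # the whole reproduces the lower level within itself, at a higher order
    return f"({inner} {inner})"

print(self_similar(3))   # (((· ·) (· ·)) ((· ·) (· ·)))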

Nobody quite knows how machine learning works. But the suspicion is that it, too, is a fractal. Machine learning anticipates the properties of an object it is presented with by mapping its features, which are detected through progressive layers of analysis focusing on smaller and smaller chunks of an image. The fractal is created by recursively exploring the relationship between images and labels across different levels of analysis. Human judgements, which feed the "training" of this system, eventually become encoded as a set of "fixed points" in a relational pattern in the machine's model.
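As a minimal sketch of this kind of layered analysis (assuming PyTorch; the network, shapes and labels here are invented for illustration, not a description of any particular system), successive convolutional layers re-describe the image at different scales, and the human-assigned labels act as the "fixed points" that training pushes the model towards:

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # progressive layers of analysis: each stage re-describes the image at another scale
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),    # local patterns in small patches
            nn.ReLU(),
            nn.MaxPool2d(2),                              # coarser re-description of the image
            nn.Conv2d(8, 16, kernel_size=3, padding=1),   # patterns of patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# human judgements (labels) are the fixed points the training gradually encodes
model = TinyClassifier()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 1, 28, 28)        # a stand-in batch of 28x28 greyscale images
labels = torch.tensor([0, 1, 2, 3])       # human-assigned labels for those images
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()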

I don't think we've yet grasped what this means. At the moment we see machine learning as just another "tool". The problem with machine learning as a "tool" is that it is then used to provide an "answer": that is, it is used to filter out information which does not relate to that answer. Most of our "information tools", which provide us with increased options for doing things, actually operate like this: they discard information, removing context. This adds to the uncertainty they produce: tool x and tool y both do similar jobs, but they filter out different information. Choosing which tool to use is to decide which information we don't need, and that requires human anticipation of an unknowable future. Fundamentally, this is the problem faced by any university wanting to invest in new technology. Context is everything, and identifying the context requires anticipation.
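To make the filtering concrete, here is a small sketch (the scores are invented, and the softmax/argmax step is written out in plain Python purely for illustration) of how a tool that returns only an "answer" throws away most of what it computed:

import math

scores = {"essay": 2.1, "report": 1.9, "reflection": 0.4}   # hypothetical model outputs

# the full picture: a distribution over possibilities, which preserves the ambiguity (the context)
total = sum(math.exp(s) for s in scores.values())
distribution = {k: round(math.exp(s) / total, 2) for k, s in scores.items()}

# the "tool" behaviour: collapse everything to a single answer, discarding the rest
answer = max(scores, key=scores.get)

print(distribution)   # {'essay': 0.5, 'report': 0.41, 'reflection': 0.09}
print(answer)         # 'essay' - the half of the distribution that disagreed is lost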

Humans are "black boxes": we don't really know how any of us work. But as black boxes who converse, we gradually tune in to each other, understanding one another's behaviour, and in the process understanding more about our own "black box" and managing the uncertainty of our own existence. Machine learning is also a black box. So might the same thing work? If you put two black boxes together, do they begin to "understand" each other? If you put a human black box together with a machine black box, does the human gain insight into the machine, and insight into themselves, through exploring the operation of the anticipatory system in the machine? If you put a number of human black boxes together with a machine black box, does it stimulate conversation between the humans as well as engagement with the machine? It is important to note that in each of these scenarios information is preserved: context is maintained as insight increases, and can be further encoded by the machine to enrich human conversation.
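As a toy of this "tuning in" between two black boxes (entirely illustrative - the dispositions, weighting and learning rate are invented), each box keeps a running model of the other's responses and adjusts its own behaviour accordingly, and the two models settle towards a stable, mutually consistent pattern:

def converse(rounds=20, rate=0.3):
    model_a_of_b, model_b_of_a = 0.0, 1.0   # each box's initial guess about the other
    for _ in range(rounds):
        # each box acts partly on its own disposition, partly on its model of the other
        a_says = 0.5 * 0.8 + 0.5 * model_a_of_b
        b_says = 0.5 * 0.2 + 0.5 * model_b_of_a
        # each box updates its model of the other from what it has just heard
        model_a_of_b += rate * (b_says - model_a_of_b)
        model_b_of_a += rate * (a_says - model_b_of_a)
    return model_a_of_b, model_b_of_a

print(converse())   # both models converge towards fixed points (roughly 0.4 and 0.6)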

I wonder if these questions point to a new kind of organisational arrangement between humans and technology in institutions. I cannot see how the institutional platform can really be a viable option for the future: discarding information is not a way forward. But we need to understand the nature of machine learning, and the ways in which information can be preserved in the human-machine relationship.
