Tuesday, 18 June 2019

Machine Learning as a Personal Anticipatory System

Can a living system survive without anticipation? As humans we take anticipation for granted as a function of consciousness: without the ability to make sense of the world around us, and to anticipate changes in it, we would not be able to survive. We attribute this ability to high-level functions like language and communication. At the same time, it is apparent that all living things adapt to their environments without necessarily showing any skill with language, although many scientists are reluctant to attribute consciousness to bacteria or cells. Ironically, this reluctance probably has more to do with our human language for describing consciousness than it does with the nature of any "language" or "communication" of cells or bacteria!

We believe human consciousness is special, or exceptional, partly because we have developed a language for making distinctions about consciousness which reinforces a separation between human thought and other features of the natural world. In philosophy, the distinction boils down to "mind" and "body". We have now reached a stage of development where continuing to think like this will most likely destroy our environment, and us with it.

Human technology is a product of human thought. We might believe our computers and big data to be somehow "objective" and separate from us, but we are looking at the manifestations of consciousness. Like other manifestations of consciousness such as art, music, mathematics and science, our technologies tell us something about how consciousness works: they carry an imprint of consciousness in their structure. This is perhaps easiest to see in the artifice of mathematics, which, whilst being an abstraction, appears to reveal fundamental patterns that are reproduced throughout nature. Fractals, and the complex (imaginary) numbers upon which they sit, are good examples of this.
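As a concrete illustration (my example, not the post's), the best-known fractal of all arises from iterating a very simple rule over complex numbers:

```python
# The Mandelbrot set: points c in the complex plane whose orbit under
# z -> z*z + c stays bounded. The boundary of this set is a fractal.

def in_mandelbrot(c, max_iter=100, bound=2.0):
    """Return True if the orbit of 0 under z -> z*z + c stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False
    return True

print(in_mandelbrot(0 + 0j))  # True: the orbit never leaves the origin
print(in_mandelbrot(2 + 2j))  # False: the orbit escapes immediately
```

A rule this small producing endlessly detailed structure is exactly the sense in which an abstraction can "reveal fundamental patterns reproduced throughout nature".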

It is also apparent in our technologies of machine learning. Behind the excitement about AI and machine learning lies a fundamental problem of perception: these tools display remarkable properties in their ability to record patterns of human judgement and reproduce them, yet we understand little of how they work. Of course, we can describe the architecture of a convolutional neural network (for example), but in terms of what is encoded in the network, how it is encoded, and how results are produced, our understanding is thin. Work with these algorithms is predominantly empirical, not theoretical. Computer programmers have developed "tricks" for training networks, such as taking a network already trained on existing public domain image sets (using, for example, the VGG16 model) and retraining only its final layer for the specific images that they want identified (for example, images of diabetic retinopathy, or faces). This works better than training the whole network from scratch on the specific images. Why? We don't know - it just does.
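The retraining trick can be sketched in miniature. This is a hypothetical toy, not the actual VGG16 workflow: a frozen random projection stands in for the pretrained layers, and only a small "head" is trained on the task at hand.

```python
import numpy as np

# Toy sketch of "retrain only part of the network": a frozen feature
# extractor (standing in for pretrained layers) feeds a trainable
# logistic-regression head. Only the head's weights are updated.
# The task and all names here are illustrative assumptions.

rng = np.random.default_rng(0)

W_frozen = rng.normal(size=(8, 16))       # "pretrained" layer, never updated

def features(x):
    return np.tanh(x @ W_frozen)

# A toy binary task on 8-dimensional inputs.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(16)                          # the only trainable weights

def loss(w):
    p = 1 / (1 + np.exp(-(features(X) @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial = loss(w)
for _ in range(300):                      # gradient descent on the head alone
    p = 1 / (1 + np.exp(-(features(X) @ w)))
    w -= 0.1 * features(X).T @ (p - y) / len(y)
final = loss(w)

print(final < initial)                    # retraining the head reduces the loss
```

Even though most of the "network" is never touched, the head learns to use the frozen features - a small-scale analogue of why the trick is so effective in practice, without explaining it.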

It seems likely that whatever is happening in a neural network is some kind of fractal. The training process of back-propagation involves recursive processing which seeks fixed points in the production of results across a vast range of variables from one layer of the network to the next. The fractal nature of the network means that its behaviour cannot be adjusted by tweaking a single variable: the whole network must be retrained. Neural networks are very different from human brains in this respect. But the fractal nature of neural networks does raise the question of whether the structure of human consciousness is also fractal.
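The recursive, layer-to-layer character of back-propagation can be seen in a minimal sketch (a two-layer toy network of my own, not any production architecture): the output error is passed backwards through each layer in turn, and every weight in the network moves on every pass.

```python
import numpy as np

# Minimal back-propagation sketch: the error computed at the output is
# propagated recursively back through each layer, so every weight in
# the network is adjusted together - there is no single variable to
# tweak in isolation. The task and sizes are illustrative.

rng = np.random.default_rng(1)

X = rng.normal(size=(32, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)  # XOR-like task

W1 = rng.normal(scale=0.5, size=(2, 8))   # first-layer weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # second-layer weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
for _ in range(500):
    # forward pass, layer by layer
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    losses.append(float(np.mean((p - y) ** 2)))
    # backward pass: the output error is pushed back through W2 to
    # obtain the hidden-layer error, so both layers change together
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out / len(X)
    W1 -= 0.5 * X.T @ d_hid / len(X)

print(losses[-1] < losses[0])             # training reduces the error
```

Note how nothing in the loop updates a weight in isolation: changing the head changes the error signal that reaches the first layer on the very next pass.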

There is an important reason for thinking that it might be. Fractals are by definition self-similar, and self-similarity means that a pattern perceived at one level with one set of variables can be reproduced at another level, with a different set of variables. In other words, a fractal representation of one set of events can have the same structure as the fractal pattern of a different set of events: perception of the first set can anticipate the second set.
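Self-similarity can be made exact with a classic fractal (the Cantor set, my example rather than the post's): rescaling one part of the construction reproduces the whole pattern one level up - the same structure, carried by a different set of variables.

```python
from fractions import Fraction

# Self-similarity in the Cantor set: each construction step removes
# the middle third of every interval. Rescaling the left third of the
# depth-d construction by 3 reproduces the entire depth-(d-1)
# construction - the same pattern at a different scale.

def cantor(depth):
    """Intervals of the Cantor set construction after `depth` steps."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))          # keep the left third
            nxt.append((b - third, b))          # keep the right third
        intervals = nxt
    return intervals

# Take the left third of the depth-3 set and blow it up by a factor of 3.
left_third = [(3 * a, 3 * b) for a, b in cantor(3) if b <= Fraction(1, 3)]

print(left_third == cantor(2))  # the part, rescaled, equals the whole
```

This is the sense in which a pattern perceived at one level can stand for - and so anticipate - a pattern at another.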

I've been fascinated recently by the work of Daniel Dubois on Anticipatory Systems, partly because it is closely related to fractals, and partly because it seems to correspond strongly to the way that neural networks work. Dubois makes the point that an anticipatory system processes events over time by developing models that anticipate them, whilst also generating multiple possible models and selecting the best fit. Each of these models is a differently-generated fractal.
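Dubois's signature idea is "incursion": the next state of a system appears inside its own update rule. A standard illustration is his incursive logistic map - reconstructed here from memory, so treat the exact equation as an assumption rather than a citation.

```python
# Sketch of Dubois-style incursion (assumed form of his incursive
# logistic map): x(t+1) = a * x(t) * (1 - x(t+1)).
# Because x(t+1) appears on both sides, we solve for it algebraically:
# x(t+1) = a * x(t) / (1 + a * x(t)).

def incursive_step(x, a):
    return a * x / (1 + a * x)

def classic_step(x, a):
    return a * x * (1 - x)   # the ordinary (recursive) logistic map

a = 4.0                      # at a = 4 the classic map is chaotic
x_inc = x_rec = 0.2
for _ in range(50):
    x_inc = incursive_step(x_inc, a)
    x_rec = classic_step(x_rec, a)

# The incursive form settles to a stable fixed point near 0.75,
# where the classic form with the same parameter wanders chaotically.
print(round(x_inc, 6))
```

The contrast is the point: building the anticipated state into the rule itself tames dynamics that are otherwise unpredictable - a small formal picture of what "anticipation" buys a system.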

If we want to understand what AI and machine learning really mean for society, we need to think about what use an artificial anticipatory system might be. One dystopian view is that it means the "Minority Report" - total anticipatory surveillance. I am sceptical about this, because an artificial anticipatory system is not a human system: its fractals are rigid and inflexible. Human anticipation and machine anticipation need to work together. A personal artificial anticipatory system, however, is something much more interesting: a system which processes the immediate information flows of experience and detects patterns within them. Could such a system help individuals establish deeper coherence in their understanding and action? It might. Indeed, it might counter the deep dislocation produced by the overwhelming information in which we are currently immersed, and provide a context for a deeper conversation about understanding.
