Monday, 16 September 2019

Topology and Technology: A new way of thinking

I'm back in Russia for the Global Scientific Dialogue course. We had the first day today with 250 first-year students; another 250 second-year students follow on Wednesday. They seemed to enjoy what we did with them. It began with getting them to sing. I used a sound spectrum analyzer and discussed the multiplicity of frequencies produced by any single note that we might sing. With the spectrum analyzer it is possible almost to "paint" with sound, which is itself another instance of multiple description. A very unusual way to begin a course on management and economics!
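Here is a minimal sketch in Python of what the spectrum analyzer shows (not the tool I used in class, just an illustration, with a synthesised note standing in for a real voice): a single "note" turns out to be a whole stack of frequencies.

import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION = 1.0               # one second of sound
FUNDAMENTAL = 220.0          # an arbitrary pitch (A3) for the example

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# A voice never produces a pure sine wave: add harmonics with falling amplitude.
note = sum((1.0 / n) * np.sin(2 * np.pi * FUNDAMENTAL * n * t) for n in range(1, 6))

# The spectrum: magnitude of the Fourier transform at each frequency.
spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), d=1.0 / SAMPLE_RATE)

# The strongest frequencies - the multiple descriptions of a single note.
peaks = freqs[np.argsort(spectrum)[-5:]]
print(sorted(round(float(f), 1) for f in peaks))   # ~[220.0, 440.0, 660.0, 880.0, 1100.0]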

The message is really about conversation, learning and systems thinking. Conversation too is characterised by multiple descriptions - or rather by the "counterpoint" between multiple descriptions of things. This year, largely due to my work on diabetic retinopathy, the course is spending a lot of time looking at machine learning. Conversations with machines are going to become an important part of life, and thanks to the massive advances in the standardisation of ML tools (particularly the JavaScript version of TensorFlow, which means you can do all of this in the web browser), you don't have to look far to find really cool examples of conversational machine learning. I showed the Magic Sketchpad from Google's fantastic Magenta project (a subset of their TensorFlow developments): https://magic-sketchpad.glitch.me/. This is clearly a conversation.

It feels like everything is converging. Over the summer we had two important conferences in Liverpool. One was on topology - George Spencer-Brown's Laws of Form. The other was on physics (the Alternative Natural Philosophy Association), which ended up revolving around Peter Rowlands's work. The astonishing thing was that they were fundamentally about the same two key concepts: symmetry and nothing. At these conferences there were some experts on machine learning and others on consciousness. They too were saying the same thing: symmetry and nothing. And it is important to note that the enormous advances in deep learning are happening as a result of trial and error; there is no clear theoretical account of why they work. That they work this well ought to be an indication that there is indeed some fundamental similarity between the functioning of the machine and the functioning of consciousness.

My work on diabetic retinopathy has basically been about putting these two together. Potentially, that is powerful for medical diagnostics. But it is much more important for our understanding of ourselves in the light of our understanding of machines. It means that to think about "whole systems" we must see our consciousness and the mechanical products of our consciousness (e.g. AI) as entwined. But the key is not in the technology. It is in the topology.

Any whole is unstable. The reasons why it is unstable can be thought of in many ways. We might say that a whole is never a whole because something exists outside it. Or we might say that a whole is the result of self-reference, which causes a kind of oscillation. Lou Kauffman, who came to both Liverpool conferences, draws it like this (from a recent paper):


Kauffman's point is that any distinction is self-reference, and any distinction creates time (a point also made by Niklas Luhmann). So you might look at the beginning of time as the interaction of self-referential processes:
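The gist of the re-entry Kauffman draws can be sketched very simply: take a distinction whose value is defined as the crossing (negation) of itself. As a static equation it is a contradiction; iterated, it becomes an oscillation - a sequence of moments. Here is a toy version in Python (my simplification, not Kauffman's notation):

# A form that re-enters itself: its next value is the negation of its current value.
def reenter(state: bool) -> bool:
    return not state

state = True                  # start "marked" (inside the distinction)
history = []
for moment in range(8):       # each iteration of the self-reference is a moment
    history.append(state)
    state = reenter(state)

print(history)                # [True, False, True, False, ...] - an oscillation, i.e. a clock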
But there's more: once you create time, you create conversation. Once the instability of a whole distinction is made, that instability has to be stabilised through interaction with other instabilities. Today I used the idea of the Trivial Machine, proposed by Heinz von Foerster. Von Foerster contrasted a trivial machine, which always produces the same output for the same input, with a non-trivial machine, whose response depends on an inner state that the response itself changes. Education, he argues, turns non-trivial machines into trivial machines. But really we need to organise non-trivial machines into networks where each of them can coordinate its uncertainty with the others.
I think this is an interesting alternative representation of Lou's swirling self-referential interactions. It is basically a model of conversation.
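To make von Foerster's contrast concrete, here is a toy sketch (my own illustration, not von Foerster's formalism). The trivial machine gives the same answer to the same question forever; the non-trivial machine's answer depends on a state which the answering itself changes.

def trivial_machine(x: int) -> int:
    """Same input, same output, every time - fully predictable."""
    return 2 * x

class NonTrivialMachine:
    """The response depends on an inner state, and responding changes that state."""
    def __init__(self) -> None:
        self.state = 0

    def respond(self, x: int) -> int:
        out = x + self.state
        self.state = (self.state + out) % 7   # the machine rewrites itself
        return out

m = NonTrivialMachine()
print([trivial_machine(3) for _ in range(4)])   # [6, 6, 6, 6]
print([m.respond(3) for _ in range(4)])         # [3, 6, 5, 3] - same question, shifting answers

A conversation, in these terms, is a network of non-trivial machines taking one another's outputs as inputs, each constraining - and being constrained by - the unpredictability of the others.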

But this topology opens out further. Stafford Beer's Viable System Model begins with a distinction between the "system" and the "environment", but it unfolds a necessary topology which also suggests that conversation is fundamental. Every distinction (the "process language" box) has uncertainty. This necessitates something outside the system to deal with that uncertainty. If this thing outside is to deal with the uncertainty, it must address both the uncertainty within the system and the uncertainty outside it. Since it cannot know the outside world, it must probe that world as a necessary part of absorbing uncertainty. Quickly we see that the part of the system which "mops up" its uncertainty develops its own structure, and must itself be in conversation with other, similar systems...
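That chain of reasoning can be caricatured in code. This is only a caricature - not Beer's Viable System Model itself, and all the names below are my own inventions:

import random

class Operation:
    """A distinction (the "process language" box) that cannot resolve all of its own uncertainty."""
    def step(self) -> float:
        return random.random()                    # residual uncertainty leaking out

class Metasystem:
    """The "something outside" that mops up: it watches the inside and probes the outside."""
    def __init__(self, operation: Operation) -> None:
        self.operation = operation

    def absorb(self) -> float:
        internal = self.operation.step()          # uncertainty arising within the system
        external = random.random()                # a probe of the unknowable environment
        return abs(internal - external)           # whatever it still cannot absorb

class Conversation:
    """No metasystem absorbs everything; the residues have to be handled between systems."""
    def __init__(self, metasystems: list) -> None:
        self.metasystems = metasystems

    def stabilise(self) -> float:
        residues = [m.absorb() for m in self.metasystems]
        return sum(residues) / len(residues)      # mutually shared, mutually damped

print(round(Conversation([Metasystem(Operation()) for _ in range(3)]).stabilise(), 3))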


What does this mean?

Beer's work is about organisation, and organisation is the principal challenge we will face as our technology throws out phenomena which will be completely new to us. It will confuse us. It is likely that the uncertainty it produces will, in the short run, cause our institutions to behave badly - becoming more conservative. We have obvious signs right now that this is the case.

But look at Beer's model. Look at the middle part of the upper box: "Anticipation". Whole distinctions make time, and create past and future. But to remain whole, they must anticipate. No living system can do without anticipation.

The rapid development of computers over the last 80 years has given us deterministic systems. They are not very good at anticipation, but they are good at synergy and coordination (the lower part of the upper box). So we have lacked anticipation, having to rely on our human senses, which have themselves been diminished by the dominance of deterministic technology.

I could be wrong on this. But our Deep Learning looks like it can anticipate. It's more than just a "new thing". It's a fundamental missing piece of a topological jigsaw puzzle.
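Here "anticipation" can be read, minimally, as learning to predict the next state of an environment from its past states. A toy sketch, with ordinary least-squares autoregression standing in for deep learning (the point is the learning, not the particular model):

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
signal = np.sin(0.3 * t) + 0.1 * rng.standard_normal(len(t))   # an environment with a pattern

ORDER = 4                                             # predict from the last four observations
X = np.stack([signal[i:i + ORDER] for i in range(len(signal) - ORDER)])
y = signal[ORDER:]

weights, *_ = np.linalg.lstsq(X, y, rcond=None)       # "training": fit the predictor to the past

recent = signal[-ORDER:]
print("anticipated next value:", float(recent @ weights))
print("next value of the underlying pattern:", float(np.sin(0.3 * len(t))))

A fixed rule can only coordinate what it already knows; a system that learns its predictor from experience can anticipate what it has not yet been told.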





