Monday, 16 September 2019

Topology and Technology: A new way of thinking

I'm back in Russia for the Global Scientific Dialogue course. We had the first day today with 250 first year students. Another 250 second year students follow on Wednesday. They seemed to enjoy what we did with them. It began with getting them to sing. I used a sound spectrum analyzer, and discussed the multiplicity of frequencies which are produced by any single note that we might sing. With the spectrum analyzer, it is possible almost to "paint" with sound - which is itself another instance of multiple description. A very unusual way to begin a course on management and economics!
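For anyone who wants to see what the spectrum analyzer reveals, here is a rough sketch of my own (Python/numpy, with a synthesised note standing in for a real voice - the frequencies and amplitudes are invented):

```python
import numpy as np

# A toy "sung note": a 220 Hz fundamental plus weaker harmonics,
# standing in for a real microphone signal.
sample_rate = 8000                        # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)
note = sum(amp * np.sin(2 * np.pi * 220 * k * t)
           for k, amp in enumerate([1.0, 0.5, 0.3, 0.2], start=1))

# The spectrum shows that one note contains many frequencies at once.
spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), 1.0 / sample_rate)

# The strongest peaks - multiple descriptions of a single note.
peaks = freqs[np.argsort(spectrum)[-4:]]
print(sorted(peaks))                      # [220.0, 440.0, 660.0, 880.0]
```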

The message is really about conversation, learning and systems thinking. Conversation too is characterised by multiple descriptions - or rather, by the "counterpoint" between multiple descriptions of things. This year, largely due to my work on diabetic retinopathy, the course is spending a lot of time looking at machine learning. Conversations with machines are going to become an important part of life, and thanks to the massive advances in the standardisation of ML tools (particularly the JavaScript version of TensorFlow, which means you can do almost anything in the browser), you don't have to look far to find really cool examples of conversational machine learning. I showed the Magic Sketchpad from Google's fantastic Magenta project (a subset of their TensorFlow developments): https://magic-sketchpad.glitch.me/. This is clearly a conversation.

It feels like everything is converging. Over the summer we had two important conferences in Liverpool. One was on topology - George Spencer-Brown's Laws of Form. The other was on physics (the Alternative Natural Philosophy Association), and it ended up revolving around Peter Rowlands's work. The astonishing thing was that they were fundamentally about the same two key concepts: symmetry and nothing. At these conferences there were some experts on machine learning, and other experts on consciousness. They too were saying the same thing: symmetry and nothing. And it is important to note that the enormous advances in deep learning are happening as a result of trial and error; there is no clear theoretical account of why they work. That they work so well ought to be an indication that there is indeed some fundamental similarity between the functioning of the machine and the functioning of consciousness.

My work on diabetic retinopathy has basically been about putting these two together. Potentially, that is powerful for medical diagnostics. But it is much more important for our understanding of ourselves in the light of our understanding of machines. It means that to think about "whole systems" we must see our consciousness and the mechanical products of our consciousness (e.g. AI) as entwined. But the key is not in the technology. It is in the topology.

Any whole is unstable. The reasons why it is unstable can be thought of in many ways. We might say that a whole is never a whole because something exists outside it. Or we might say that a whole is the result of self-reference, which causes a kind of oscillation. Lou Kauffman, who came to both Liverpool conferences, draws it like this (from a recent paper):

[Figure: Kauffman's drawing of self-reference as re-entry]
Kauffman's point is that any distinction is self-reference, and any distinction creates time (a point also made by Niklas Luhmann). So you might look at the beginning of time as the interaction of self-referential processes:

[Figure: interacting self-referential processes]

But there's more. Because once you create time, you create conversation. Once the distinction of a whole is made, its instability has to be stabilised through interactions with other instabilities. Today I used the idea of the "trivial machine", proposed by Heinz von Foerster. Von Foerster contrasted the trivial machine - a fixed, predictable input-output device - with the non-trivial machine, whose output depends on an inner state which its inputs also change. Education, he argues, turns non-trivial machines into trivial machines. But really we need to organise non-trivial machines into networks where each can coordinate its uncertainty with the others.
I think this is an interesting alternative representation of Lou's swirling self-referential interactions. It is basically a model of conversation.
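To make von Foerster's distinction concrete, here is a minimal sketch of my own (Python, invented numbers): a trivial machine computes a fixed input-output function, while a non-trivial machine carries an inner state which its inputs also change, so its history matters.

```python
def trivial_machine(x):
    """A trivial machine: same input, same output, every time."""
    return x * 2

class NonTrivialMachine:
    """A non-trivial machine: the output depends on an inner state,
    and every input also changes that state, so history matters."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        y = x + self.state                   # output depends on the state...
        self.state = (self.state + x) % 5    # ...and the input changes the state
        return y

m = NonTrivialMachine()
print([trivial_machine(3) for _ in range(4)])  # [6, 6, 6, 6] - fully predictable
print([m.step(3) for _ in range(4)])           # [3, 6, 4, 7] - same input, different outputs
```

A classroom is a network of such machines; the interesting question is how their uncertainties coordinate, not how to make their outputs predictable.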

But this topology opens out further. Stafford Beer's Viable System Model begins with a distinction between the "system" and the "environment". But it unfolds a necessary topology which also suggests that conversation is fundamental. Every distinction (the "process language" box) has uncertainty. This necessitates something outside the system to deal with that uncertainty. If this thing outside is to deal with the uncertainty, it must address both the uncertainty within the system and the uncertainty outside it. Since it cannot know the outside world, it must probe that world as a necessary function of absorbing uncertainty. Quickly we see that the part of the system which "mops up" the system's uncertainty develops its own structure, and must be in conversation with other similar systems...

[Diagram: Beer's Viable System Model - note "Anticipation" in the upper box]
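As a toy sketch of this unfolding (my own gloss in Python - nothing like Beer's full model), imagine an operational unit which absorbs the events it already knows, and a metasystem which grows around it to mop up the residue, extending the unit's repertoire as its way of probing the world:

```python
import random

class Operation:
    """The "process language" box: absorbs the variety it already knows."""
    def __init__(self, repertoire):
        self.repertoire = set(repertoire)

    def absorb(self, event):
        return event in self.repertoire

class Metasystem:
    """Mops up residual uncertainty: one face looks inward at the
    operation, the other probes an outside world it cannot fully know."""
    def __init__(self, operation):
        self.operation = operation
        self.surprises = []

    def regulate(self, event):
        if self.operation.absorb(event):
            return "operation"
        self.surprises.append(event)          # inward: record what overwhelmed us
        self.operation.repertoire.add(event)  # outward: probe and learn the new event
        return "metasystem"

environment = [random.choice("abcdex") for _ in range(10)]
meta = Metasystem(Operation("abc"))
print([meta.regulate(e) for e in environment])
```

Even in this cartoon, the metasystem immediately develops its own structure (a memory of surprises, a probing function) - and two such systems facing each other would have to converse.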


What does this mean?

Beer's work is about organisation, and organisation is the principal challenge we will face as our technology throws out phenomena which will be completely new to us. It will confuse us. It is likely that the uncertainty it produces will, in the short run, cause our institutions to behave badly - becoming more conservative. We have obvious signs right now that this is the case.

But look at Beer's model. Look at the middle part of the upper box: "Anticipation". Whole distinctions make time, and create past and future. But to remain whole, they must anticipate. There is no living system which does not anticipate.

With the rapid development of computers over the last 80 years, we have had deterministic systems. They are not very good at anticipation, but they are good at synergy and coordination (the lower part of the upper box). So we have lacked anticipation, having to rely on our human senses, which have been diminished by the dominance of deterministic technology.
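The contrast is easy to see in miniature (my own sketch; the series is invented): a deterministic lookup can only replay what it has stored, while even the simplest learned model extrapolates - that is, anticipates:

```python
import numpy as np

history = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # an observed series

# Deterministic system: a lookup table. Ask about the future and it fails.
table = dict(enumerate(history))
print(table.get(5, "no entry"))                   # no entry

# A (very) simple learned model: fit log2(y) = a*t + b, then extrapolate.
t = np.arange(len(history))
a, b = np.polyfit(t, np.log2(history), 1)
print(round(2 ** (a * 5 + b), 1))                 # 32.0 - an anticipation
```

Deep learning does this over vastly richer structures, but the principle - a model that projects beyond its data - is the same.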

I could be wrong on this. But our Deep Learning looks like it can anticipate. It's more than just a "new thing". It's a fundamental missing piece of a topological jigsaw puzzle.






Monday, 9 September 2019

Organisation and Play in Education

"Play" in learning has become a dominant theme among pedagogical innovators in recent years. Far from the stuffy lecture halls, the enthusiasts of play will bring out the Lego and Plasticine as a way of motivating engagement from staff and students. Sometimes, the "play" objects are online. Often it is staff who are exhorted to play with their students, and I've done a fair bit of this myself in the past - most recently on the Global Scientific Dialogue (GSD) course at the Far Eastern Federal University in Russia.

I first encountered playful learning approaches in three brilliant conferences of the American Society for Cybernetics which were organised by the late Ranulph Glanville. I was sceptical at first, but on reflection I found that these conferences deeply influenced my thinking about what happens not only in scientific conferences, but also in educational experience. The last of these ASC conferences, which took place in Bolton, UK, was the subject of a film, and the concept of the conference led to a book. There was lots of music (participants had to bring a home-made musical instrument). The previous conference featured the great American composer Pauline Oliveros, who had a bunch of engineers and cyberneticians singing every morning around the swimming pool of the hotel we stayed at in the Midwest!


In 2018 I organised the Metaphorum conference in Liverpool, and attempted to bring a more playful approach, encouraging delegates not to "talk at" each other when presenting their ideas, but to organise an activity. This conference was attended by two academics from Russia, and the experience of it led directly to the design of the Global Scientific Dialogue module: a course like a conference - a conference with a set of activities and lots of discussion, focused on science and technology.

The important point about this approach to pedagogy (and to conferences) is that "play" is not an end in itself. As an end in itself, play is empty - and there is nothing worse than being forced to play when you either don't want to, or can't see the point. Games only work when people want to play - and overwhelmed academics are sometimes understandably sceptical about the pedagogical exhortation to "get out the Lego".

So what is play about?

Fundamentally, it is about organising conversations. More specifically, it concerns creating the conditions for conversations which would not otherwise occur within the normal contexts of education. This matters because the "normal contexts" create barriers between people and ideas which shouldn't be there, or at least should be challenged. Play does this by introducing uncertainty into the educational process. In an environment where everyone - teachers and learners together - is uncertain, they have to find new ways of organising themselves to express their uncertainty, and to coordinate their tenuous understanding with others.

The organisational reasons for introducing play are to break down barriers and to create the conditions for new conversations. On the Global Scientific Dialogue module, this is precisely how it works, and the elements of uncertainty which are amplified are contained not just in the activities, but in the content, which draws on current science and technology about which nobody is certain. Inevitably, everyone - learners and teachers - is in the same boat, and what happens is a kind of social reconfiguration.

However, if play is imposed on the unwilling, then it reinforces barriers between the pedagogical idealists and exhausted teachers struggling to manage their workload. This raises the question as to how an organisational intervention might serve the purpose of reorganising relationships between exhausted academics in such a way that the underlying causes of exhaustion might be reconceived and addressed together.

In the final analysis, effective play is the introduction of a particular set of constraints within which the reorganisation that we call "learning" occurs. But every teacher knows they can get their constraints wrong, and that this can have an oppressive effect. Play in itself cannot be the thing to aim for. As with all teaching, what matters is the effective manipulation of constraints - the effective organisation of contexts for learning conversations. The magic is that in coordinating this, teachers reveal their understanding of the world, their students and themselves.

Saturday, 7 September 2019

Information Loss and Conservation

One of the ironies of any "information system" is that it discards information. Quite simply, anything which processes large amounts of data to produce an "answer", which is then acted on by humans, is attenuating those large amounts of data in various ways. Often this attenuation follows latent biases: bias in the humans requesting the information, bias in the datasets that are processed, or bias in the algorithms themselves. Bias is itself a form of attenuation, and the biases which have recently been exposed around racial prejudice in machine learning highlight the fundamentally dangerous problem of loss of information in organisations and society.
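A trivial worked example (my own, with invented numbers) shows how an attenuated "answer" destroys information: two very different situations can produce the same summary, and the difference between them - the information - is simply gone.

```python
import numpy as np

stable = np.array([50, 50, 50, 50])      # everyone scores 50
volatile = np.array([0, 100, 0, 100])    # wild swings between 0 and 100

# An "information system" that reports only the average:
print(stable.mean(), volatile.mean())    # 50.0 50.0 - indistinguishable

# The discarded information reappears if we keep a second description:
print(stable.std(), volatile.std())      # 0.0 50.0 - very different worlds
```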

In his book "The Human Use of Human Beings", Norbert Wiener worried that our use of technology sat on a knife-edge: it might be used to destroy us, or to save us from ourselves. I want to be more specific about this knife-edge. It is whether we learn how to conserve information within our society and institutions, and avoid using technology to accelerate the process of information destruction. With the information technologies we have had for the last 50 years, with their latency (which means all news is old news) and their emphasis on databases and information processing, loss of information has appeared inevitable.

This apparently inevitable loss of information is tacitly accepted by all institutions from government downwards. Given the hierarchical structures of our institutions, we can only deal with "averages" and "approximations" of what is happening on the ground, and we have little capacity for assessing whether we are attenuating out the right information, or whether our models of the world are right. To think this is not inevitable is to think that our organisations are badly organised - and that remains an unthinkable thought, even today. Beyond this, few organisations run experiments to see if the world they think they are operating in is the world they actually operate in. Consequently, we see catastrophe involving the destruction of environments, whether the corporate environment (the banking crisis), the social environment (Trump, Brexit), the scientific environment (university marketisation), the climate, or the economy.

Of course, attenuation is necessary: individuals are less complex than institutions, and institutions are less complex than societies. Somehow, a selection of what is important among the available information must be made. But selection must be made alongside a process of checking that whatever model of the world is created through these selections is correct. So if information is attenuated from environment to individual, the individual must amplify their model of the world and themselves in the environment. This "amplification" can be thought of as a process of generating alternative descriptions of the information they have absorbed. Many descriptions of the same thing are effectively "redundant" - they are not strictly necessary, but at the same time, the capacity to generate multiple descriptions of the world creates options and flexibility to manage the complexity of the environment. Redundancy creates opportunities to make connections with the environment - like creating a niche, or a nest - rather in the same way that a spider spins a web (that is a classic example of amplification).
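Shannon's information theory gives one way to put a number on this redundancy: R = 1 - H/H_max, where H is the entropy of the actual distribution of descriptions and H_max is the entropy if all descriptions were equally likely. A minimal sketch (my own, with invented distributions):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # ignore zero-probability symbols
    return -(p * np.log2(p)).sum()

def redundancy(p):
    """Shannon redundancy: 1 - H/H_max over n possible descriptions."""
    return 1 - entropy(p) / np.log2(len(p))

print(redundancy([0.25, 0.25, 0.25, 0.25]))  # 0.0  - no repetition, no slack
print(redundancy([0.7, 0.1, 0.1, 0.1]))      # ~0.32 - repetition creates slack
```

The "slack" is exactly the options and flexibility described above: descriptions that are not strictly necessary, but which make a niche - or a web - possible.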

The problem we have in society (and, I believe, the root cause of most of our problems) is that the capacity to produce more and more information has exploded. This has produced enormous, unmanageable uncertainty, and existing institutions have only been able to mop up this uncertainty by asserting increasingly rigid categories for dealing with the world. This is why we see "strong men" (usually men) in charge around the world. They are rigid, category-enforcing, uncertainty-mops. Unfortunately (as we see in the UK at the moment) they exacerbate the problem: it is a positive-feedback loop which will collapse.

One of the casualties of this increasing conservatism is the capacity to speculate on whether the model of the world we have is correct or not. Austerity is essentially a redundancy-removal process in the name of "social responsibility". Nothing could be further from the truth. More than ever, we need to generate and inspect multiple descriptions of the world that we think we are living in. It is not happening, and so information is being lost, and as the information is lost, the conditions for extremism are enhanced.

I say all this because I wonder if our machine learning technology might provide a corrective. Machine learning can, of course, be used as an attenuative technology: it simplifies judgement by providing an answer. But if we use it like this, then the worst nightmares of Wiener will be realised.

But machine learning need not be like this. It might actually be used to help generate the redundant descriptions of reality which we have become incapable of generating ourselves. This is because machine learning is a technology which works with redundancy - multiple descriptions of the world - from which it determines an ordering of judgements about the things it has been trained on. While it can be used to produce an "answer", it can also be used to preserve and refine this ordering - particularly if it is closely coupled with human judgement.
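In miniature (my own sketch - the labels echo retinopathy grading, but the numbers are invented), the difference between the two uses is the difference between taking a classifier's argmax and keeping its whole ranked distribution:

```python
import numpy as np

labels = ["no retinopathy", "mild", "moderate", "severe"]
logits = np.array([1.2, 2.8, 2.5, -0.4])       # invented model outputs

probs = np.exp(logits) / np.exp(logits).sum()  # softmax

# Attenuative use: one answer, everything else discarded.
print(labels[int(np.argmax(probs))])           # mild

# Information-conserving use: the full ordering, for human judgement.
for i in np.argsort(probs)[::-1]:
    print(f"{labels[i]:16s} {probs[i]:.2f}")   # mild 0.50, moderate 0.37, ...
```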

The critical issue here is that the structures within a convolutional neural network are a kind of fractal (produced through recursively seeking fixed points in the convolutional process between different levels of analysis), and these fractals can serve the function of what appears to be an "anticipatory system". Machine learning systems "predict" the likely categories of data they haven't seen. The important thing is that, whatever we think "intelligence" might be, we can be confident that we too have some kind of "anticipatory system" built through redundancy of information. Indeed, as Robert Rosen pointed out, the whole of the natural world appears to operate with "anticipatory systems".

We think we operate in "real time", but in the context of anticipatory systems, "real time" actually means "ahead of time". An anticipatory system is a necessary correlate of any attenuative process: without it, no natural system would be viable. Without it, information would be lost. With it, information is preserved.

So have we got an artificial anticipatory system? Are we approaching a state where we might preserve information in our society? I'm increasingly convinced the answer is "yes". If it is "yes", then the good news is that Trump, Brexit, the bureaucratic hierarchy of the EU, are all the last stages of a way of life that is about to be supplanted with a very different way of thinking about technology and information. Echoing Wiener, IF we don't destroy ourselves, our technology promises a better and fairer world beyond any expectations that we might allow ourselves to entertain right now.