Monday, 16 September 2019

Topology and Technology: A new way of thinking

I'm back in Russia for the Global Scientific Dialogue course. We had the first day today with 250 first-year students. Another 250 second-year students follow on Wednesday. They seemed to enjoy what we did with them. It began with getting them to sing. I used a sound spectrum analyzer, and discussed the multiplicity of frequencies which are produced by any single note that we might sing. With the spectrum analyzer, it is almost possible to "paint" with sound: which in itself is another instance of multiple description. A very unusual way to begin a course on management and economics!
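
For anyone who wants to see this at home, here is a minimal sketch (a synthetic signal, not the classroom software) of what the spectrum analyzer reveals: a single sung "note" is really a stack of harmonics, which a Fourier transform recovers.

```python
# A minimal sketch (synthetic signal, not the classroom software): one
# "note" is really a stack of harmonics, which a Fourier transform reveals.
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate           # one second of sound
f0 = 220.0                                         # fundamental pitch (A3)

# A crudely voice-like tone: fundamental plus weaker upper harmonics.
signal = sum(amp * np.sin(2 * np.pi * f0 * n * t)
             for n, amp in enumerate([1.0, 0.5, 0.3, 0.2], start=1))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
print(freqs[spectrum > 0.1 * spectrum.max()])      # [220. 440. 660. 880.]
```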

The message is really about conversation, learning and systems thinking. Conversation too is characterised by multiple descriptions - or rather the "counterpoint" between multiple descriptions of things. This year, largely due to my work on diabetic retinopathy, the course is spending a lot of time looking at machine learning. Conversations with machines are going to become an important part of life, and thanks to the massive advances in the standardisation of ML tools (particularly TensorFlow.js, which means you can do almost anything in the browser), you don't have to look far to find really cool examples of conversational machine learning. I showed the Magic Sketchpad from Google's fantastic Magenta project (a subset of their TensorFlow developments): https://magic-sketchpad.glitch.me/. This is clearly a conversation.

It feels like everything is converging. Over the summer we had two important conferences in Liverpool. One was on topology - George Spencer-Brown's Laws of Form. The other was on physics (the Alternative Natural Philosophy Association) - and it ended up revolving around Peter Rowlands's work. The astonishing thing was that they were fundamentally about the same two key concepts: symmetry and nothing. At these conferences there were some experts on machine learning, and other experts on consciousness. They too were saying the same thing: symmetry and nothing. And it is important to note that the enormous advances in deep learning are happening as a result of trial and error; there is no clear theoretical account of why they work. That they work this well ought to be an indication that there is indeed some fundamental similarity between the functioning of the machine and the functioning of consciousness.

My work on diabetic retinopathy has basically been about putting these two together. Potentially, that is powerful for medical diagnostics. But it is much more important for our understanding of ourselves in the light of our understanding of machines. It means that to think about "whole systems" we must see our consciousness and the mechanical products of our consciousness (e.g. AI) as entwined. But the key is not in the technology. It is in the topology.

Any whole is unstable. The reasons why it is unstable can be thought of in many ways. We might say that a whole is never a whole because something exists outside it. Or we might say that a whole is the result of self-reference, which causes a kind of oscillation. Lou Kauffman, who came to both Liverpool conferences, draws it like this (from a recent paper):

[Kauffman's diagram of re-entry omitted.]

Kauffman's point is that any distinction is self-reference, and any distinction creates time (a point also made by Niklas Luhmann). So you might look at the beginning of time as the interaction of self-referential processes:

[Diagram of interacting self-referential processes omitted.]

But there's more. Once you create time, you create conversation. Once the instability of a whole distinction is made, that instability has to be stabilised through interactions with other instabilities. Today I used the idea of the "trivial machine", proposed by Heinz von Foerster, who contrasted trivial machines with non-trivial machines. Education, he argues, turns non-trivial machines into trivial machines. But really we need to organise non-trivial machines into networks where each can coordinate its uncertainty with the others.

[Diagram of networked non-trivial machines omitted.]

I think this is an interesting alternative representation of Lou's swirling self-referential interactions. It is basically a model of conversation.
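
A toy rendering of von Foerster's distinction (my own sketch, not his notation) makes the point: a trivial machine always answers the same way, while a non-trivial machine is changed by the conversation it is in.

```python
# A toy sketch (mine, not von Foerster's notation) of trivial vs
# non-trivial machines.

def trivial_machine(x):
    """A fixed input-output rule: same question, same answer, forever."""
    return x * 2

class NonTrivialMachine:
    """An inner state changes with every interaction, so the same
    question can receive different answers over time."""
    def __init__(self):
        self.state = 1

    def respond(self, x):
        out = x * self.state
        self.state += out          # the interaction changes the machine
        return out

m = NonTrivialMachine()
print([trivial_machine(3) for _ in range(3)])   # [6, 6, 6]
print([m.respond(3) for _ in range(3)])         # [3, 12, 48] - history matters
```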

But this topology opens out further. Stafford Beer's Viable System Model begins with a distinction between the "system" and the "environment". But it unfolds a necessary topology which also suggests that conversation is fundamental. Every distinction (the "process language" box) has uncertainty. This necessitates something outside the system to deal with that uncertainty. If this thing outside deals with the system's uncertainty, then it must address uncertainty both within the system and beyond it. Since it cannot know the outside world, it must probe that world as a necessary function of absorbing uncertainty. Quickly we see that the part of the system which "mops up" its uncertainty develops its own structure, and must be in conversation with other similar systems...

[Diagram of Beer's Viable System Model omitted.]

What does this mean?

Beer's work is about organisation, and organisation is the principal challenge we will face as our technology throws out phenomena which will be completely new to us. It will confuse us. It is likely that the uncertainty it produces will, in the short run, cause our institutions to behave badly - becoming more conservative. We have obvious signs right now that this is the case.

But look at Beer's model. Look at the middle part of the upper box: "Anticipation". Whole distinctions make time, and create past and future. But to remain whole, they must anticipate. No living system can survive without anticipating.

With the rapid development of computers over the last 80 years, we have had deterministic systems. They are not very good at anticipation, but they are good at synergy and coordination (the lower part of the upper box). But we've lacked anticipation - having to rely on our human senses which have been diminished by the dominance of deterministic technology.

I could be wrong on this. But our Deep Learning looks like it can anticipate. It's more than just a "new thing". It's a fundamental missing piece of a topological jigsaw puzzle.


Monday, 9 September 2019

Organisation and Play in Education

"Play" in learning has become a dominant theme among pedagogical innovators in recent years. Far from the stuffy lecture halls, the enthusiasts of play will bring out the Lego and Plasticine as a way of motivating engagement from staff and students. Sometimes, the "play" objects are online. Often it is staff who are exhorted to play with their students, and I've done a fair bit of this myself in the past - most recently on the Global Scientific Dialogue (GSD) course at the Far Eastern Federal University in Russia.

I first encountered playful learning approaches at three brilliant conferences of the American Society for Cybernetics organised by the late Ranulph Glanville. I was sceptical at first, but on reflection I found that these conferences deeply influenced my thinking about what happens not only in scientific conferences, but also in educational experiences. The last of these ASC conferences, which took place in Bolton, UK, was the subject of a film, and the concept of the conference led to a book. There was lots of music (participants had to bring a home-made musical instrument). The previous conference featured the great American composer Pauline Oliveros, who had a bunch of engineers and cyberneticians singing every morning around the swimming pool of the hotel where we stayed in the Midwest!


In 2018 I organised the Metaphorum conference in Liverpool, and attempted to bring a more playful approach, encouraging delegates not to "talk at" each other when presenting their ideas, but to organise an activity. This conference was attended by two academics from Russia, and the experience of it led directly to the design of the Global Scientific Dialogue module: a course like a conference; a conference with a set of activities and lots of discussion, focused on science and technology.

The important point about this approach to pedagogy (and to conferences) is that "play" is not an end in itself. As an end in itself, play is empty - and there is nothing worse than being forced to play when you either don't want to, or can't see the point. Games only work when people want to play - and overwhelmed academics are sometimes understandably sceptical about the pedagogical exhortation to "get out the Lego".

So what is play about?

Fundamentally, it is about organising conversations. More specifically, it concerns creating the conditions for conversations which would not otherwise occur within the normal contexts of education. This matters because the "normal contexts" create barriers between people and ideas which shouldn't be there, or which at least should be challenged. Play does this by introducing uncertainty into the educational process. In an environment where everyone - teachers and learners together - is uncertain, people have to find new ways of organising themselves to express their uncertainty, and to coordinate their tenuous understanding with others.

The organisational reasons for introducing play are to break down barriers and to create the conditions for new conversations. On the Global Scientific Dialogue module, this is precisely how it works, and the elements of uncertainty which are amplified are contained not just in the activities, but in the content, which draws on current science and technology about which nobody is certain. Inevitably, everyone - learners and teachers - is in the same boat, and what happens is a kind of social reconfiguration.

However, if play is imposed on the unwilling, it reinforces barriers between pedagogical idealists and exhausted teachers struggling to manage their workload. This raises the question of how an organisational intervention might serve to reorganise relationships between exhausted academics in such a way that the underlying causes of exhaustion might be reconceived and addressed together.

In the final analysis, effective play is the introduction of a particular set of constraints within which the reorganisation that we call "learning" occurs. But every teacher knows that they can get their constraints wrong, with an oppressive effect. Play in itself cannot be the thing to aim for. Like all teaching, what matters is the effective manipulation of constraints - the effective organisation of contexts for learning conversations. The magic of this is that in coordinating it, teachers reveal their understanding of the world, their students and themselves.

Saturday, 7 September 2019

Information Loss and Conservation

One of the ironies of any "information system" is that it discards information. Quite simply, anything which processes large amounts of data to produce an "answer", which is then acted on by humans, is attenuating that data in various ways. Often this is done according to latent biases: in the humans requesting the information, in the datasets being processed, or in the algorithms themselves. Bias is itself a form of attenuation, and the biases around racial prejudice recently exposed in machine learning highlight the fundamentally dangerous problem of loss of information in organisations and society.

In his book "The Human Use of Human Beings", Norbert Wiener worried that our use of technology sat on a knife-edge: it could either destroy us or save us from ourselves. I want to be more specific about this knife-edge. It is whether we learn to conserve information within our society and institutions, and avoid using technology to accelerate the process of information destruction. With the information technologies we have had for the last 50 years, with their latency (which means all news is old news) and their emphasis on databases and information processing, loss of information has appeared inevitable.

This apparently "inevitable" loss of information is tacitly accepted by all institutions from government downwards. Given the hierarchical structures of our institutions, we can only deal with "averages" and "approximations" of what is happening on the ground, and we have little capacity for assessing whether we are attenuating out the right information, or whether our models of the world are right. To think this is not inevitable is to think that our organisations are badly organised - and that remains an unthinkable thought, even today. Beyond this, few organisations run experiments to see if the world they think they are operating in is the world they actually operate in. Consequently, we see catastrophes involving the destruction of environments: corporate (the banking crisis), social (Trump, Brexit), scientific (university marketisation), natural (global warming), and economic.

Of course, attenuation is necessary: individuals are less complex than institutions, and institutions are less complex than societies. Somehow, a selection of what is important among the available information must be made. But selection must be made alongside a process of checking that whatever model of the world is created through these selections is correct. So if information is attenuated from environment to individual, the individual must amplify their model of the world and themselves in the environment. This "amplification" can be thought of as a process of generating alternative descriptions of the information they have absorbed. Many descriptions of the same thing are effectively "redundant" - they are not strictly necessary, but at the same time, the capacity to generate multiple descriptions of the world creates options and flexibility to manage the complexity of the environment. Redundancy creates opportunities to make connections with the environment - like creating a niche, or a nest - rather in the same way that a spider spins a web (that is a classic example of amplification).
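
In Shannon's terms the attenuation half of this is easy to see. A back-of-envelope sketch (my own example with illustrative numbers, not a model of any particular institution): reporting only the average of what happens "on the ground" destroys all the variety that distinguished its states.

```python
# A back-of-envelope sketch of attenuation (illustrative numbers only).
import math
from collections import Counter

def entropy(observations):
    """Shannon entropy (bits) of the frequency distribution of observations."""
    counts = Counter(observations)
    n = sum(counts.values())
    return sum((c / n) * math.log2(n / c) for c in counts.values())

ground_truth = [1, 9, 2, 8, 5, 5, 1, 9]        # what happens on the ground
average = sum(ground_truth) / len(ground_truth)
report = [average] * len(ground_truth)          # what the hierarchy sees

print(entropy(ground_truth))   # 2.25 bits of variety
print(entropy(report))         # 0.0 bits - the attenuation is total
```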

The problem we have in society (and, I believe, the root cause of most of our problems) is that the capacity to produce more and more information has exploded. This has produced enormous, unmanageable uncertainty, and existing institutions have only been able to mop up this uncertainty by asserting increasingly rigid categories for dealing with the world. This is why we see "strong men" (usually men) in charge in the world. They are rigid, category-enforcing, uncertainty-mops. Unfortunately (as we see in the UK at the moment) they exacerbate the problem: it is a positive-feedback loop which will collapse.

One of the casualties of this increasing conservatism is the capacity to speculate on whether the model of the world we have is correct or not. Austerity is essentially a redundancy-removal process in the name of "social responsibility". Nothing could be further from the truth. More than ever, we need to generate and inspect multiple descriptions of the world that we think we are living in. It is not happening, and so information is being lost, and as the information is lost, the conditions for extremism are enhanced.

I say all this because I wonder if our machine learning technology might provide a corrective. Machine learning can, of course, be used as an attenuative technology: it simplifies judgement by providing an answer. But if we use it like this, then the worst nightmares of Wiener will be realised.

But machine learning need not be like this. It might actually be used to help generate the redundant descriptions of reality which we have become incapable of doing ourselves. This is because machine learning is a technology which works with redundancy - multiple descriptions of the world - which determine an ordering of judgements about the things it has been trained with. While it can be used to produce an "answer", it can also be used to preserve and refine this ordering - particularly if it is closely coupled with human judgement.
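
A hedged sketch of the difference (the numbers and labels here are hypothetical, loosely echoing the retinopathy example, and stand in for any softmax classifier): the same model output can be collapsed into an "answer", or kept as an ordering of judgements for a human to weigh.

```python
# A hypothetical classifier output (stand-in numbers, not a real model).
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

labels = ["no retinopathy", "mild", "moderate", "severe"]
probs = softmax(np.array([2.1, 1.9, -0.5, 0.2]))

# Attenuation: collapse the judgement into one "answer".
print(labels[int(probs.argmax())])

# Conservation: keep the whole ordering available for human judgement.
for p, label in sorted(zip(probs, labels), reverse=True):
    print(f"{label}: {p:.2f}")
```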

The critical issue here is that the structures within a convolutional neural network are a kind of fractal (produced through recursively seeking fixed points in the convolutional process between different levels of analysis), and these fractals can serve the function of what appears to be an "anticipatory system". Machine learning systems "predict" the likely categories of data they don't know about. The important thing about this is, whatever we think "intelligence" might be, we can be confident that we too have some kind of "anticipatory system" built through redundancy of information. Indeed, as Robert Rosen pointed out, the whole of the natural world appears to operate with "anticipatory systems".

We think we operate in "real time", but in the context of anticipatory systems, "real-time" actually means "ahead of time". An anticipatory system is a necessary correlate of any attenuative process: without it, no natural system would be viable. Without it, information would be lost. With it, information is preserved.

So have we got an artificial anticipatory system? Are we approaching a state where we might preserve information in our society? I'm increasingly convinced the answer is "yes". If it is "yes", then the good news is that Trump, Brexit, the bureaucratic hierarchy of the EU, are all the last stages of a way of life that is about to be supplanted with a very different way of thinking about technology and information. Echoing Wiener, IF we don't destroy ourselves, our technology promises a better and fairer world beyond any expectations that we might allow ourselves to entertain right now.


Thursday, 22 August 2019

Luhmann on Time and the Ethical Reaction to New Technology

In the wake of some remarkable technical developments in predictive and adaptive technologies, there has been a powerful - and sometimes well-funded - ethical reaction. The most prominent developments are Oxford's Digital Ethics Lab (led by Luciano Floridi) and the Schwarzman Institute, specifically looking at AI ethics, benefiting from "the largest single donation to the university since the Renaissance". I wonder how Oxford's Renaissance academics sold the previous largest donation! And there are a lot of other initiatives which Google will list. But there is an obvious question here: what exactly are these people going to do? How many ethicists does it take to figure out AI?

What they will do is write lots of papers in peer-reviewed journals which will be submitted to the REF for approval (a big data analytical exercise!), compete with each other to become the uber-AI-ethicist (judged partly by citation counts and other metrics), compete for grants (which, after this initial funding, will probably become scarcer as the focus of investment shifts to making the technology work), and get invited to parliamentary review panels when the next Cambridge Analytica strikes. Great. It's as if society's culture of surveillance and automation can be safely held at bay within a university department focusing on the rights and wrongs of it all. Yet this misses the obvious point that Cambridge Analytica itself had very strong ties to the university! Are these "Lady Macbeth" departments wringing their hands at the thought of complicity? And what is it with ethics anyway?

In a remarkable late paper called "The Control of Intransparency", Niklas Luhmann observed the "ethical reaction" phenomenon in 1997. There are very few papers which really are worth spending a long time with. This is one. Most abstractly, Luhmann shows how time and anticipation lie implicit in the making of a distinction - something which had been prefigured in the work of Heinz von Foerster and something that Louis Kauffman, who elaborated much of the maths with von Foerster (see my previous post), had been saying. I suspect Luhmann got some of this from Kauffman.

It all rests on understanding that social systems are self-referential, and as such produce "unresolvable indeterminacy". Time is a necessary construct to resolve this indeterminacy: the system imagines possible futures, distinguishes between past and future, and chooses which possible futures meet the goals of the system and which don't. This raises the question: what are the selection criteria for choosing desired futures, and how are they constructed?

"One may guess that at the end of the twentieth century this symphony of intransparency reflects a widespread mood. One may think of the difficulties of a development policy in the direction of modernizing, as it was conceived after the Second World War. [...] One may think of the demotivating experiences with reform politics, e.g. in education.[...] The question is, to what degree may we accommodate our cognitive instruments and especially our epistemologies to this?
As we know, public opinion reacts with ethics and scandals. That certainly is a well-balanced duality, which meets the needs of the mass media, but for the rest promises little help. Religious fundamentalists may make their own distinctions. What was once the venerable, limiting mystery of God is ever more replaced by polemic: one knows what one is opposed to, and that suffices. In comparison, the specifically scientific scheme of idealization and deviation has many advantages. It should, however, be noticed that this is also a distinction, just like that of ethics and scandals or of local and global, or of orthodox and opponents. Further, one may ask: why is one distinction preferred over the other?"
The scope of Luhmann's thinking here demands attention. Our ethical reactions to new technologies are inherent in the distinctions we make about those technologies. The AI ethics institutes are institutions of self-reference attempting to balance out the indeterminacy of the distinctions that society (and the university) is making about technology. Luhmann is trying to get deeper - to a proper understanding of the circular dynamics of self-referential systems and their relation to time. This, I would suggest, is a much more important and productive goal - particularly with regard to AI, which is itself self-referential.

Luhmann considers the distinction between cause and constraint (something which my book on "Uncertain Education" is also about). Technologies constrain practices, but we cannot determine the interference between different constraints among the different technologies operating in the world. Luhmann says:

"The system then disposes  of a latent potentiality which is not always but only incidentally utilized. This already destroys the simple, causal-technical system models with their linear concept and which presuppose the possibility of hierarchical steering. With reflective conditioning the role of time changes. The operations are no longer ordered as successions, but depend on situations in which multiple conditionings come together. Decisions then have to be made according to the actual state of the system and take into account that further decisions will be required which are not foreseeable from the present point in time. Especially noteworthy is that preciseley complex technical systems have a tendency in this direction. Although technology intends a tight coupling of causal factors, the system becomes intransparent to itself, because it cannot foresee at what time which factors will be blocks, respectively released. Unpredictabilities are not prevented but precisely fostered by increased precision in detail."
So technology creates uncertainty. It does so because the simple causal-technical system produces new options (latent potentialities) which exist alongside other existing options carrying their own constraints. All of these constraints interfere with one another. Indeterminacy increases. Something must mop up the indeterminacy.

But as Luhmann says, the ethical distinctions which attempt to address the uncertainty behave in a similar way: uncertainty proliferates despite and because of attempts to manage it. This may keep the AI ethics institutes busy for a long time!

Yet it may not. AI is itself an anticipatory technology. It relies on the same processes of distinction-making and self-reference that Luhmann is talking about. Indeed, the relationship of re-entry between human distinction-making and machine distinction-making may lead to new forms of systemic stability which we cannot yet conceive of. Having said this, such a situation is unlikely to operate within the existing hierarchical structures of our present institutions: it will demand new forms of human organisation.

This is leading me to think that we need to study the ethics institutes as a specific form of late-stage development within our traditional universities. Benign as they might appear, they might have a similar institutional and historical structure to an earlier attempt to maintain traditional orthodoxy in the wake of technological development and radical ideas: the Spanish Inquisition.

Monday, 19 August 2019

Emerging Coherence of a New View of Physics at the Alternative Natural Philosophy Association

The Alternative Natural Philosophy Association met at the University of Liverpool last week, following a highly successful conference on Spencer-Brown's Laws of Form (see http://lof50.com). There is a profound connection between Spencer-Brown and the physics/natural science community of ANPA, not least in the fact that Louis Kauffman is a major contributor both to the development of Spencer-Brown's calculus and to the application of these ideas in physics.

Of central importance throughout ANPA was the concept of "nothing", which in Spencer-Brown maps onto what he calls the "unmarked state". At ANPA, four speakers, all of them eminent physicists, gave presentations referencing each other, with each of them saying that the totality of the universe must be zero, and that "we must take nothing seriously".

The most important figure in this is Peter Rowlands. Rowlands's theory of nature has been in development for 30 years, and over that time he has made predictions about empirical findings which were dismissed when he made them, but subsequently discovered to be true (for example, the acceleration of the universe, and the ongoing failure to discover supersymmetric particles). If this were just a lucky guess, that would be one thing, but for Rowlands it was the logical consequence of a thoroughgoing theory which took zero as its starting point.

Rowlands articulates a view of nature which unfolds nothing at progressively more complex orders. He argues that the most basic elements of the universe (mass, space, time and charge) arrange themselves at each level of complexity in orders which effectively cancel each other out, through a mathematical device called a nilpotent: an expression which yields zero when multiplied by itself.
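
The standard algebraic example of a nilpotent (far simpler than Rowlands's physical algebra, but it makes the idea concrete) is a nonzero matrix whose square is zero:

```python
# A nonzero object whose square is zero - the textbook nilpotent.
import numpy as np

N = np.array([[0, 1],
              [0, 0]])

print(N @ N)   # [[0 0], [0 0]] - something, multiplied by itself, is nothing
```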

This brilliant idea cuts through a range of philosophical problems like a knife. It is hardly surprising that, as John Hyatt pointed out in a brilliant presentation, Shakespeare had an intuition that this might be how nature worked:
Our revels now are ended. These our actors,
As I foretold you, were all spirits and
Are melted into air, into thin air:
And, like the baseless fabric of this vision,
The cloud-capp'd tow'rs, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.
But Rowlands needs a mechanism, or an "engine", to drive his "nothing-creating" show. He uses group theory, and William Rowan Hamilton's quaternions: an extension of the complex numbers with three imaginary units, notated i, j, k, where i*i = j*j = k*k = i*j*k = -1. Mapping these quaternion units onto the basic components of physical systems (plus a unitary value which makes up the four), he sees mass, time, charge and space represented in a dynamic numerical system which is continually producing nilpotent expressions. This provides an ingenious way of re-expressing Einstein's mass-energy-momentum equation, but most importantly it allows Einstein's equation to be situated as entirely consistent with Dirac's equation of quantum mechanics. Rowlands is able to re-express Dirac's equation in simpler terms, using his quaternions as operators, in a similar and commensurable way to his treatment of Einstein's equation.
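
Hamilton's relations are easy to check computationally. This sketch verifies only the quaternion rules themselves; Rowlands's nilpotents live in a larger algebra combining quaternion and complex units, so nothing here should be read as his actual formalism.

```python
# Verifying Hamilton's quaternion relations: i*i = j*j = k*k = i*j*k = -1.
# Represent a quaternion as (w, x, y, z), meaning w + x*i + y*j + z*k.

def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
MINUS_ONE = (-1, 0, 0, 0)

assert qmul(I, I) == qmul(J, J) == qmul(K, K) == MINUS_ONE
assert qmul(qmul(I, J), K) == MINUS_ONE
print("i*i = j*j = k*k = i*j*k = -1 confirmed")
```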

As Mike Houlden argued at the conference, this way of thinking helps to unpick some fundamental assumptions about the nature of the universe and the beginning of time. For example, the concept held by most physicists that there is a fixed amount of dark matter in the universe, created instantly at the big bang, is challenged by Rowlands's system. It articulates a continual creation process: a recursive process of symmetry-breaking throughout nature, from quantum phenomena through to biology, and by extension consciousness.

Rowlands articulates a picture similar to that of Bohm - particularly in upholding the view of nature as a "hologram" - but his thoroughgoing mathematics produces what Bohm was arguing for: an algebra for the universe.

Empirical justification for these ideas may not be far off. As Mike Houlden argued, the discovery of dark energy (presumed to be the driver for the acceleration of the universe) and the assumption that the proportion of dark matter in the universe was fixed at the big bang (whatever that is) are likely to be questioned in the future. Rowlands's theory helps to explain the creation of dark matter and dark energy as balancing processes which are the result of the creation of mass, and which serve to maintain the nilpotency of the universe.

From an educational perspective this is not only extremely exciting, but also relevant. The fundamental coherence of the universe and the fundamental coherence of our understanding of the universe are likely to be connected as different expressions of the same broken symmetry. Learning, like living, as Shakespeare observed, is also much ado about nothing. It's not only the cloud-capp'd tow'rs which disappear.

Sunday, 4 August 2019

China's experiments with AI and education

At the end of Norbert Wiener's "The Human Use of Human Beings", he identified a "new industrial revolution" afoot, which would be dominated by machines replacing, or at least assisting, human judgement (this was in 1950). Wiener, having invented cybernetics, feared for the future of the world: he understood the potential of what he and his colleagues had unleashed, which included computers (John von Neumann), information theory (Claude Shannon) and neural networks (Warren McCulloch). He wrote:
"The new industrial revolution is a two-edged sword. It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction." (p.162)
The destructive power of technology would result, Wiener argued, from our "burning incense before the technology God". Well, this is what's going on in China's education system right now (see https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/)

There has, unsurprisingly, been much online protest from teachers in response to this story. However, we must not lose sight of the fact that there are indeed benefits that the technology brings to these students, autonomy being not the least of them. But we are missing a coherent theoretical strand that connects good face-to-face teaching to Horrible Histories, Khan Academy and this AI (and the many steps in between). There is most probably a thread that connects them, and we should seek to articulate it as precisely as we can; otherwise we will be beholden to the rough instinct of human beings unaware of their own desire to maintain their existence within their current context, in the face of a new technology which will transform that context beyond recognition.

AI gives us a powerful new God before which we (and particularly our politicians) will need to resist the temptation to light the incense. But many will burn incense, and this will fundamentally be about using the technology to maintain the status quo in education in an uncertain environment. So this is AI to get the kids through "the test" more quickly. And (worse) the tests it is concerned with are STEM. Where's the AI that teaches poetry, drama or music?

It's the STEM thing which is the real problem here, and ironically, it is the thing which is most challenged by the AI/Machine learning revolution (actually, I think the best way to describe the really transformative technology is to call it an "artificial anticipatory system", but I won't go into that now). This is because in the world that's going to unfold around us - the world that we're meant to be preparing our kids for - machine learning will provide new "filters" through which we can make sense of things. This is a new kind of technology which clearly works - within limits, but well beyond expectations. Most importantly, while the machine learning technology works, nobody knows exactly how these filters work (although there are some interesting theories: https://medium.com/intuitionmachine/the-holographic-principle-and-deep-learning-52c2d6da8d9)

Machine learning is created through a process of "training", where multiple redundant descriptions of phenomena are fed into a machine for it to learn the underlying patterns behind them. Technical problems in the future will be dealt with through this "training" process, in the way that our current technical problems demand "coding": the writing of specific algorithms. It is likely that professionals in many domains will be involved in training machines. Indeed, training machines will become as important as training humans.
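
A toy contrast makes the shift visible (my example; real machine learning fits far richer models than a straight line, but the move from writing rules to fitting them from examples is the same):

```python
# "Coding" vs "training" - a deliberately tiny example.
import numpy as np

# Coding: a programmer writes the rule down explicitly.
def coded_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Training: the same rule is recovered from redundant examples.
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = coded_fahrenheit(celsius)             # stand-in training data

A = np.stack([celsius, np.ones_like(celsius)], axis=1)
slope, intercept = np.linalg.lstsq(A, fahrenheit, rcond=None)[0]
print(slope, intercept)    # ~1.8, ~32.0 - learned rather than written
```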

This dominance of machine training and partnership between humans and machines in the workplace means that the future of education is going to have to become more interdisciplinary. It won't be enough for doctors to know about the physiological systems of the body; professionally they will have to be deeply informed about the ways that the AI diagnostic devices are behaving around them, and take an active role in refining and configuring them. Moreover, such training processes will involve not only the functional logic of medical conditions, but the aesthetics of images, the nuances of judgement, and the social dynamics of machines and human/organisational decision-making. So how do we prepare our kids for this world?

The fundamental problems of education have little to do with learning stuff to pass the test: that is a symptom of the problem we have. They have instead to do with organising the contexts for conversations about important things, usually between the generations. So the Chinese initiative basically exacerbates a problem produced by our existing institutional technologies (I think of Wiener's friend Heinz von Foerster: "we must not allow technology to create problems it can solve"). So AI is dragged out of what Cohen and March famously called the "garbage can of institutional decision-making" (see https://en.wikipedia.org/wiki/Garbage_can_model), when the real problem (which is avoided) is, "how do we reorganise education so as to prepare our kids for the interdisciplinary world as it will become?"

This is where we should be putting our efforts. Our new anticipatory technology provides new means for organising people and conversations. It actually may give us a way in which we might organise ourselves such that "many brains can think as one brain", which was Stafford Beer's aim in his "management cybernetics" (Beer was another friend of Wiener). My prediction is that eventually we will see that this is the way to go: it is crucial to local and planetary viability that we do.

Will China and others see that what they are currently doing is not a good idea? I suspect it really depends not on their attitude to technology (which will take them further down the "test" route), but their attitude to freedom and democracy. Amartya Sen may well have been right in "Development as Freedom" in arguing that democracy was the fundamental element for economic and social development. We shall see. But this is an important moment.

Wednesday, 31 July 2019

Fractals of Learning

I've been doing some data analysis on responses of students to a comparative judgement exercise I did with them last year. Basically, they were presented with pairs of documents on various topics in science, technology and society, and asked "Which do you find more interesting and why?"

The responses collected over two weeks from about 150 students were surprisingly rich, and I've become interested in drawing distinctions between them. Some students clearly are transformed by many of the things which they read about (and this was in the context of a face-to-face course which also gravitated around these topics), and their answers reflect an emerging understanding. Other students, while they might also appear to engage with the process, are a bit more shallow in their response. 

To explore this, I've measured a number of dimensions of their engagement and plotted the shifts in entropy in each dimension. So we can look at the variety of documents or topics they talk about: some students stick to the same topic (continually low entropy), while others choose a wide variety (entropy jumps around). The amount of text they write also has an entropy over time, as does the text itself. This last one is interesting because it can reveal key words in the same way that a word cloud might: key concepts get repeated, so the entropy is reduced.
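
For the curious, the measurements themselves are straightforward. A minimal sketch with made-up data (not the actual study), computing the entropy of topic choices and of vocabulary for each time slot:

```python
# A minimal sketch of the entropy measurements (made-up data, not the study).
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the frequency distribution of items."""
    counts = Counter(items)
    n = sum(counts.values())
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# One list of (topic, response text) per time slot.
slots = [
    [("AI", "machines that learn"), ("AI", "learning machines learn")],
    [("physics", "nothing and symmetry"), ("AI", "symmetry in learning")],
    [("biology", "symmetry in cells"), ("music", "symmetry in counterpoint")],
]

for t, slot in enumerate(slots):
    topics = [topic for topic, _ in slot]
    words = [w for _, text in slot for w in text.split()]
    print(f"slot {t}: topic H={shannon_entropy(topics):.2f}, "
          f"word H={shannon_entropy(words):.2f}")
```

Repeated key concepts (like "symmetry" above) pull the word entropy down, which is exactly the signature described here.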

What then would we expect to see of a student gradually discovering some new concept which helps them connect many topics? Perhaps an initial phase of high entropy in document choice, high entropy in concepts used and low entropy in the amount of text (responses might be a similar length). As time goes on, a concept might assert itself as dominant in a number of responses. The concept entropy goes down, while the document entropy might continue to oscillate. 

The overall pattern is counterpoint, rather like the graph below:

[Graph omitted: positive and negative entropy shifts across variables over time.]

The figure above represents the positive and negative shifts in entropy of the main variables (going across the top), followed by the positive and negative shifts in the relative entropy of the variables to one another. Pattern changes further to the right indicate increasing "counterpoint" between the different variables; changes further to the left indicate change in particular variables. Time runs from top to bottom, measured in the slots in which responses were made.


[A second graph omitted: a more synchronous pattern of entropy shifts.]

Not all the graphs are so rich in their counterpoint. This one (admittedly with fewer comparisons) is much more synchronous. There's a "wobble" in the middle where things shift in different directions, while at the end the comments on the documents, the type of documents, and the type of topics all vary at once. If a common concept had been found here, one would expect the entropy of the comments to be lower. But the graph and the diagram provide a frame for asking questions about it.

[A third graph omitted: a richer structure of entropy shifts.]

This one is richer. It has a definite structure of entropies shifting up and down, and at the end a kind of unity is produced. Looking at the student comments, it was quite apparent that a number of concepts had made an impact.

It doesn't always work as a technique, but there does appear to be a correlation between the shape of these graphs and the ways in which the students developed their ideas in their writing, and this merits further study.

More interestingly, the graph below produced a richly contrapuntal picture, but when I looked at the data, it had been collected over a very short period of time, meaning that it was the result of a one-off concentrated effort rather than a longitudinal process. But that is interesting too, because there is a fractal structure to this stuff. A small sample can be observed to display a pattern which can then be contextualised within a larger context where that pattern might be repeated (for example, with a different set of concepts), or it might be shown to be an isolated island within a larger pattern which is in fact quite different.

[Graph omitted: a richly contrapuntal pattern produced over a short burst of activity.]

Either way, the potential is there to use these graphs as a way of getting students to reflect on their own activities. I'm not sure I would go so far as to say "your graph should look like this", but awareness of the correlations between intellectual engagement and patterns of entropy is an interesting way of engaging learners in thinking about their own learning processes. Actually, it might also be possible to produce a 3D landscape from these diagrams, and from that a "Google map" of personal learning: now that is interesting, isn't it?