Thursday 22 August 2019

Luhmann on Time and the Ethical Reaction to New Technology

In the wake of some remarkable technical developments in predictive and adaptive technologies, there has been a powerful - and sometimes well-funded - ethical reaction. The most prominent examples are Oxford's Digital Ethics Lab (led by Luciano Floridi) and the Schwarzman Institute, which looks specifically at AI ethics and benefits from "the largest single donation to the university since the Renaissance". I wonder how Oxford's Renaissance academics sold the previous largest donation! There are many other initiatives which Google will list. But there is an obvious question here: what exactly are these people going to do? How many ethicists does it take to figure out AI?

What they will do is write lots of papers in peer-reviewed journals which will be submitted to the REF for approval (a big-data analytical exercise!), compete with each other to become the uber-AI-ethicist (judged partly by citation counts and other metrics), compete for grants (which, after this initial funding, will probably become scarcer as the focus of investment shifts to making the technology work), and get invited to parliamentary review panels when the next Cambridge Analytica strikes. Great. It's as if society's culture of surveillance and automation can be held safely at bay within a university department focusing on the rights and wrongs of it all. And yet this misses the obvious point that Cambridge Analytica itself had very strong ties to the university! Are these "Lady Macbeth" departments wringing their hands at the thought of complicity? And what is it with ethics anyway?

In a remarkable late paper called "The Control of Intransparency" (1997), Niklas Luhmann observed this "ethical reaction" phenomenon. Very few papers really are worth spending a long time with; this is one of them. Most abstractly, Luhmann shows how time and anticipation lie implicit in the making of a distinction - something prefigured in the work of Heinz von Foerster, and something that Louis Kauffman, who elaborated much of the maths with von Foerster (see my previous post), had been saying for some time. I suspect Luhmann got some of this from Kauffman.

It all rests on understanding that social systems are self-referential, and as such produce "unresolvable indeterminacy". Time is a necessary construct to resolve this indeterminacy: the system imagines possible futures, distinguishing between past and future, and chooses which possible futures meet its goals and which don't. This raises the question: what are the selection criteria for choosing desired futures, and how are they constructed?
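A toy sketch may make this concrete (the sketch is mine, not Luhmann's, though it follows von Foerster and Kauffman's treatment of re-entrant forms). The self-referential equation x = not-x has no consistent value; but run as a process, it resolves into an oscillation, and the indeterminacy is absorbed into time:

    x = True
    history = []
    for step in range(6):
        history.append(x)
        x = not x  # re-entry: the result of the distinction is fed back into itself

    print(history)  # [True, False, True, False, True, False]

The paradoxical form has no fixed point among its values, but it has a perfectly stable behaviour in time: a clock. This is the sense in which time "resolves" self-referential indeterminacy.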

"One may guess that at the end of the twentieth century this symphony of intransparency reflects a widespread mood. One may think of the difficulties of a development policy in the direction of modernizing, as it was conceived after the Second World War. [...] One may think of the demotivating experiences with reform politics, e.g. in education.[...] The question is, to what degree may we accommodate our cognitive instruments and especially our epistemologies to this?
As we know, public opinion reacts with ethics and scandals. That certainly is a well-balanced duality, which meets the needs of the mass media, but for the rest promises little help. Religious fundamentalists may make their own distinctions. What was once the venerable, limiting mystery of God is ever more replaced by polemic: one knows what one is opposed to, and that suffices. In comparison, the specifically scientific scheme of idealization and deviation has many advantages. It should, however, be noticed that this is also a distinction, just like that of ethics and scandals or of local and global, or of orthodox and opponents. Further, one may ask: why is one distinction preferred over the other?"
The scope of Luhmann's thinking here demands attention. Our ethical reactions to new technologies are inherent in the distinctions we make about those technologies. The AI ethics institutes are institutions of self-reference attempting to balance out the indeterminacy of the distinctions that society (and the university) is making about technology. Luhmann is trying to get deeper - to a proper understanding of the circular dynamics of self-referential systems and their relation to time. This, I would suggest, is a much more important and productive goal - particularly with regard to AI, which is itself self-referential.

Luhmann considers the distinction between cause and constraint (something which my book on "Uncertain Education" is also about). Technologies constrain practices, but we cannot determine the interference between the constraints of the different technologies operating in the world. Luhmann says:

"The system then disposes  of a latent potentiality which is not always but only incidentally utilized. This already destroys the simple, causal-technical system models with their linear concept and which presuppose the possibility of hierarchical steering. With reflective conditioning the role of time changes. The operations are no longer ordered as successions, but depend on situations in which multiple conditionings come together. Decisions then have to be made according to the actual state of the system and take into account that further decisions will be required which are not foreseeable from the present point in time. Especially noteworthy is that preciseley complex technical systems have a tendency in this direction. Although technology intends a tight coupling of causal factors, the system becomes intransparent to itself, because it cannot foresee at what time which factors will be blocks, respectively released. Unpredictabilities are not prevented but precisely fostered by increased precision in detail."
So technology creates uncertainty. It does so because the simple causal-technical system produces new options (latent potentialities) which exist alongside other existing options carrying their own constraints. All of these constraints interfere with one another. Indeterminacy increases. Something must mop up the indeterminacy.

But as Luhmann says, the ethical distinction which attempts to address this uncertainty behaves in the same way: uncertainty proliferates despite and because of attempts to manage it. This may keep the AI ethics institutes busy for a long time!

Yet it may not. AI is itself an anticipatory technology. It relies on the same processes of distinction-making and self-reference that Luhmann is talking about. Indeed, the relationship of re-entry between human distinction-making and machine distinction-making may lead to new forms of systemic stability which we cannot yet conceive of. Having said this, such a situation is unlikely to operate within the existing hierarchical structures of our present institutions: it will demand new forms of human organisation.

This is leading me to think that we need to study the ethics institutes as a specific form of late-stage development within our traditional universities. Benign as they might appear, they might have a similar institutional and historical structure to an earlier attempt to maintain traditional orthodoxy in the wake of technological development and radical ideas: the Spanish Inquisition.

Monday 19 August 2019

Emerging Coherence of a New View of Physics at the Alternative Natural Philosophy Association

The Alternative Natural Philosophy Association met at the University of Liverpool last week, following a highly successful conference on Spencer-Brown's Laws of Form (see http://lof50.com). There is a profound connection between Spencer-Brown and the physics/natural science community of ANPA, not least in the fact that Louis Kauffman is a major contributor both to the development of Spencer-Brown's calculus and to the application of these ideas in physics.

Of central importance throughout ANPA was the concept of "nothing", which in Spencer-Brown maps on to what he calls the "unmarked state". At ANPA, four speakers, all of them eminent physicists, gave presentations referencing each other, each saying that the totality of the universe must be zero, and that "we must take nothing seriously".

The most important figure in this is Peter Rowlands. Rowlands's theory of nature has been in development for 30 years, and over that time he has made predictions which were dismissed when he made them but subsequently discovered to be true (for example, the acceleration of the universe, and the ongoing failure to discover supersymmetric particles). If this were just a lucky guess, that would be one thing; but for Rowlands it was the logical consequence of a thoroughgoing theory which took zero as its starting point.

Rowlands articulates a view of nature which unfolds nothing at progressively more complex orders. He argues that the most basic elements of the universe (mass, space, time and charge) arrange themselves at each level of complexity in ways which effectively cancel each other out, through a mathematical device called a nilpotent: an expression which, multiplied by itself, yields zero.
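In algebra, a nilpotent is a non-zero object whose square is zero. A minimal illustration in code (a toy of the algebraic idea only, not Rowlands's physical construction):

    import numpy as np

    # A non-zero matrix N...
    N = np.array([[0, 1],
                  [0, 0]])

    print(N @ N)  # ...whose square is the zero matrix: [[0 0], [0 0]]

Rowlands's claim, as I understand it, is that nature's fundamental descriptions combine in just this way: non-trivial individually, zero in total.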

This brilliant idea cuts through a range of philosophical problems like a knife. It is hardly surprising that, as John Hyatt pointed out in a brilliant presentation, Shakespeare had an intuition that this might be how nature worked:
Our revels now are ended. These our actors,
As I foretold you, were all spirits, and
Are melted into air, into thin air:
And like the baseless fabric of this vision,
The cloud-capp'd tow'rs, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve,
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.
But Rowlands needs a mechanism, or an "engine", to drive his "nothing-creating" show. He uses group theory, and William Rowan Hamilton's quaternions: a four-dimensional extension of the complex numbers with three imaginary units i, j, k, where i*i = j*j = k*k = i*j*k = -1. Mapping these quaternion units on to the basic components of physical systems (plus a unit scalar which makes up the four), he sees mass, time, charge and space represented in a dynamic numerical system which continually produces nilpotent expressions. This provides an ingenious way of re-expressing Einstein's mass-energy-momentum equation, but most importantly it allows the Einstein equation to be situated as entirely consistent with Dirac's equation of quantum mechanics: Rowlands re-expresses Dirac's equation in simpler terms, using his quaternions as operators, in a way commensurable with his treatment of Einstein's equation.
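Hamilton's defining relations are easy to check computationally. Here is a minimal sketch (my own illustration of the quaternion rules, not Rowlands's full algebra, which combines quaternions with complex numbers and multivariate vectors):

    # Quaternions as 4-tuples (w, x, y, z) = w + x*i + y*j + z*k,
    # multiplied according to Hamilton's rules.
    def qmul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
    MINUS_ONE = (-1, 0, 0, 0)

    assert qmul(I, I) == MINUS_ONE            # i*i = -1
    assert qmul(J, J) == MINUS_ONE            # j*j = -1
    assert qmul(K, K) == MINUS_ONE            # k*k = -1
    assert qmul(qmul(I, J), K) == MINUS_ONE   # i*j*k = -1

The payoff in Rowlands's scheme, as I understand it, is that an operator of the form (ikE + ip + jm) squares to E^2 - p^2 - m^2, which is zero precisely when Einstein's energy-momentum relation holds - so every physically allowed state is itself a nilpotent.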

As Mike Houlden argued at the conference, this way of thinking helps to unpick some fundamental assumptions about the nature of the universe and the beginning of time. For example, the view held by most physicists that there is a fixed amount of dark matter in the universe, created instantly at the big bang, is challenged by Rowlands's system, which articulates continual creation: a recursive process of symmetry-breaking throughout nature, from quantum phenomena through to biology and, by extension, consciousness.

Rowlands articulates a picture similar to that of Bohm - particularly in upholding the view of nature as a "hologram" - but his thoroughgoing mathematics produces what Bohm was arguing for: an algebra for the universe.

Empirical justification for these ideas may not be far off. As Mike Houlden argued, the standard assumptions about dark energy (presumed to be the driver for the acceleration of the universe) and about the proportion of dark matter having been fixed at the big bang (whatever that is) are likely to be questioned in the future. Rowlands's theory helps to explain the creation of dark matter and dark energy as balancing processes resulting from the creation of mass, which serve to maintain the nilpotency of the universe.

From an educational perspective this is not only extremely exciting, but also relevant. The fundamental coherence of the universe and the fundamental coherence of our understanding of the universe are likely to be connected, as different expressions of the same broken symmetry. Learning, like living, as Shakespeare observed, is also much ado about nothing. It's not only the cloud-capp'd towers which disappear.

Sunday 4 August 2019

China's experiments with AI and education

At the end of Norbert Wiener's "The Human Use of Human Beings", he identified a "new industrial revolution" afoot, which would be dominated by machines replacing, or at least assisting, human judgement (this was 1950). Wiener, having invented cybernetics, feared for the future of the world: he understood the potential of what he and his colleagues had unleashed, which included computers (John von Neumann), information theory (Claude Shannon) and neural networks (Warren McCulloch). He wrote:
"The new industrial revolution is a two-edged sword. It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction." (p.162)
The destructive power of technology would result, Wiener argued, from our "burning incense before the technology God". Well, this is what's going on in China's education system right now (see https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/)

There has, unsurprisingly, been much online protest from teachers in response to this story. However, we should not lose sight of the fact that the technology does bring benefits to these students, autonomy being not the least of them. But we are missing a coherent theoretical thread that connects good face-to-face teaching to Horrible Histories, Khan Academy and this AI (and many steps in between). There most probably is such a thread, and we should seek to articulate it as precisely as we can; otherwise we will be beholden to the rough instinct of human beings unaware of their own desire to maintain their existence within their current context, in the face of a new technology which will transform that context beyond recognition.

AI gives us a powerful new God, before which we (and particularly our politicians) will need to resist the temptation to light the incense. But many will burn incense, and this will fundamentally be about using the technology to maintain the status quo in education in an uncertain environment. So this is AI to get the kids through "the test" more quickly. And (worse) the tests they are concerned with are STEM tests. Where's the AI that teaches poetry, drama or music?

It's the STEM thing which is the real problem here, and ironically it is the thing most challenged by the AI/machine-learning revolution (actually, I think the best way to describe the really transformative technology is to call it an "artificial anticipatory system", but I won't go into that now). This is because in the world that's going to unfold around us - the world we're meant to be preparing our kids for - machine learning will provide new "filters" through which we can make sense of things. This is a new kind of technology which clearly works - within limits, but well beyond expectations. Most importantly, although the technology works, nobody knows exactly how its filters work (though there are some interesting theories: https://medium.com/intuitionmachine/the-holographic-principle-and-deep-learning-52c2d6da8d9).

Machine learning is created through a process of "training", in which multiple redundant descriptions of phenomena are fed into a machine so that it can learn the underlying patterns behind them. Technical problems in the future will be dealt with through this "training" process, in the way that our current technical problems demand "coding" - the writing of specific algorithms. It is likely that professionals in many domains will be involved in training machines. Indeed, training machines will become as important as training humans.
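The contrast between coding and training can be made concrete with a small sketch (my illustrative example - the data and the rule are invented). Rather than writing the rule explicitly, we hand the machine redundant examples and let it induce the pattern:

    from sklearn.linear_model import LogisticRegression

    # Redundant descriptions of a simple phenomenon: the label is 1
    # whenever the second number exceeds the first.
    X = [[0, 1], [1, 0], [2, 5], [5, 2], [1, 3], [3, 1], [0, 4], [4, 0]]
    y = [1, 0, 1, 0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)   # "training", not "coding"
    print(model.predict([[2, 7], [7, 2]]))   # expected: [1 0]

The rule is never written down; it is recovered from the redundancy of the examples. That is why the professional skill shifts from specifying algorithms to curating examples and judging the machine's behaviour.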

This dominance of machine training and partnership between humans and machines in the workplace means that the future of education is going to have to become more interdisciplinary. It won't be enough for doctors to know about the physiological systems of the body; professionally they will have to be deeply informed about the ways that the AI diagnostic devices are behaving around them, and take an active role in refining and configuring them. Moreover, such training processes will involve not only the functional logic of medical conditions, but the aesthetics of images, the nuances of judgement, and the social dynamics of machines and human/organisational decision-making. So how do we prepare our kids for this world?

The fundamental problems of education have little to do with learning stuff to pass the test: that is a symptom of the problem we have. They have instead to do with organising the contexts for conversations about important things, usually between the generations. So the Chinese initiative basically exacerbates a problem produced by our existing institutional technologies (I think of Wiener's friend Heinz von Foerster: "we must not allow technology to create problems it can solve"). AI is dragged out of what Cohen and March famously called the "garbage can" of institutional decision-making (see https://en.wikipedia.org/wiki/Garbage_can_model), while the real problem is avoided: how do we reorganise education so as to prepare our kids for the interdisciplinary world as it will become?

This is where we should be putting our efforts. Our new anticipatory technology provides new means for organising people and conversations. It actually may give us a way in which we might organise ourselves such that "many brains can think as one brain", which was Stafford Beer's aim in his "management cybernetics" (Beer was another friend of Wiener). My prediction is that eventually we will see that this is the way to go: it is crucial to local and planetary viability that we do.

Will China and others see that what they are currently doing is not a good idea? I suspect it really depends not on their attitude to technology (which will take them further down the "test" route), but their attitude to freedom and democracy. Amartya Sen may well have been right in "Development as Freedom" in arguing that democracy was the fundamental element for economic and social development. We shall see. But this is an important moment.