Thursday, 22 August 2019

Luhmann on Time and the Ethical Reaction to New Technology

In the wake of some remarkable developments in predictive and adaptive technologies, there has been a powerful - and sometimes well-funded - ethical reaction. The most prominent developments are Oxford's Digital Ethics Lab (led by Luciano Floridi) and the Schwarzman Institute, which looks specifically at AI ethics and benefits from "the largest single donation to the university since the Renaissance". I wonder how Oxford's Renaissance academics sold the previous largest donation! There are plenty of other initiatives which Google will list. But there is an obvious question here: what exactly are these people going to do? How many ethicists does it take to figure out AI?

What they will do is write lots of papers in peer-reviewed journals which will be submitted to the REF for approval (a big data analytical exercise!), compete with each other to become the uber-AI-ethicist (judged partly by citation counts and other metrics), compete for grants (which, after this initial funding, will probably become scarcer as the focus of investment shifts to making the technology work), and get invited to parliamentary review panels when the next Cambridge Analytica strikes. Great. It's as if society's culture of surveillance and automation can be safely held at bay within a university department focused on the rights and wrongs of it all. Yet this misses the obvious point that Cambridge Analytica itself had very strong ties to the university! Are these "Lady Macbeth" departments wringing their hands at the thought of complicity? And what is it with ethics anyway?

In a remarkable late paper called "The Control of Intransparency", written in 1997, Niklas Luhmann observed this "ethical reaction" phenomenon. There are very few papers which really are worth spending a long time with. This is one of them. Most abstractly, Luhmann shows how time and anticipation lie implicit in the making of a distinction - something prefigured in the work of Heinz von Foerster, and something that Louis Kauffman, who elaborated much of the mathematics with von Foerster (see my previous post), had also been saying. I suspect Luhmann got some of this from Kauffman.

It all rests on understanding that social systems are self-referential, and as such produce "unresolvable indeterminacy". Time is a necessary construct to resolve this indeterminacy: the system imagines possible futures, distinguishing between past and future, and chooses which possible futures meet its goals and which do not. This raises the question: what are the selection criteria for choosing desired futures, and how are they constructed?
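
As a loose illustration of this circularity (a minimal sketch of my own, not Luhmann's formalism: the state representation, the actions, and the toy goal criterion are all invented here), an anticipatory system can be caricatured as something that enumerates possible futures from its present state and then applies a selection criterion which the enumeration itself cannot supply:

```python
from itertools import product

def imagine_futures(state, actions, horizon):
    """Enumerate possible futures as sequences of actions applied to the present state."""
    futures = []
    for plan in product(actions, repeat=horizon):
        s = state
        for act in plan:
            s = act(s)            # each action is a function: state -> state
        futures.append((plan, s))
    return futures

def select_future(state, actions, horizon, meets_goal):
    """Distinguish acceptable futures from unacceptable ones and pick one.

    The criterion 'meets_goal' is supplied from outside the enumeration -
    which is exactly the question raised above: where do such criteria come from?
    """
    acceptable = [(plan, end) for plan, end in imagine_futures(state, actions, horizon)
                  if meets_goal(end)]
    return acceptable[0] if acceptable else None

# Toy example: the 'system state' is a number, the actions grow or shrink it,
# and the (arbitrary) goal is to end up below a threshold.
grow, shrink = (lambda s: s + 2), (lambda s: s - 1)
print(select_future(state=5, actions=[grow, shrink], horizon=3, meets_goal=lambda s: s < 5))
```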

"One may guess that at the end of the twentieth century this symphony of intransparency reflects a widespread mood. One may think of the difficulties of a development policy in the direction of modernizing, as it was conceived after the Second World War. [...] One may think of the demotivating experiences with reform politics, e.g. in education.[...] The question is, to what degree may we accommodate our cognitive instruments and especially our epistemologies to this?
As we know, public opinion reacts with ethics and scandals. That certainly is a well-balanced duality, which meets the needs of the mass media, but for the rest promises little help. Religious fundamentalists may make their own distinctions. What was once the venerable, limiting mystery of God is ever more replaced by polemic: one knows what one is opposed to, and that suffices. In comparison, the specifically scientific scheme of idealization and deviation has many advantages. It should, however, be noticed that this is also a distinction, just like that of ethics and scandals or of local and global, or of orthodox and opponents. Further, one may ask: why is one distinction preferred over the other?"
The scope of Luhmann's thinking here demands attention. Our ethical reactions to new technologies are inherent in the distinctions we make about those technologies. The AI ethics institutes are institutions of self-reference attempting to balance out the indeterminacy of the distinctions that society (and the university) is making about technology. Luhmann is trying to get deeper - to a proper understanding of the circular dynamics of self-referential systems and their relation to time. This, I would suggest, is a much more important and productive goal - particularly with regard to AI, which is itself self-referential.

Luhmann considers the distinction between cause and constraint (something which my book on "Uncertain Education" is also about). Technologies constrain practices, but we cannot determine how the constraints of the many different technologies operating in the world interfere with one another. Luhmann says:

"The system then disposes  of a latent potentiality which is not always but only incidentally utilized. This already destroys the simple, causal-technical system models with their linear concept and which presuppose the possibility of hierarchical steering. With reflective conditioning the role of time changes. The operations are no longer ordered as successions, but depend on situations in which multiple conditionings come together. Decisions then have to be made according to the actual state of the system and take into account that further decisions will be required which are not foreseeable from the present point in time. Especially noteworthy is that preciseley complex technical systems have a tendency in this direction. Although technology intends a tight coupling of causal factors, the system becomes intransparent to itself, because it cannot foresee at what time which factors will be blocks, respectively released. Unpredictabilities are not prevented but precisely fostered by increased precision in detail."
So technology creates uncertainty. It does so because the simple causal-technical system produces new options (latent potentialities) which exist alongside other existing options carrying their own constraints. All of these constraints interfere with one another. Indeterminacy increases. Something must mop up the indeterminacy.
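
A crude way to picture this interference (again a sketch of my own, with invented technologies and constraints, not anything from Luhmann's text): each technology contributes options and conditions of its own, but whether a combination is admissible can only be judged over the joint configuration, and the space of joint configurations grows multiplicatively as options are added:

```python
from itertools import product

# Each (invented) technology contributes options and constraints; the constraints may
# refer to choices made for *other* technologies, so they interfere with one another.
technologies = {
    "messaging": {"options": ["email", "chat"],
                  "constraints": [lambda cfg: not (cfg["messaging"] == "chat"
                                                   and cfg["storage"] == "local")]},
    "storage":   {"options": ["local", "cloud"],
                  "constraints": [lambda cfg: cfg["storage"] != "cloud"
                                              or cfg["analytics"] != "none"]},
    "analytics": {"options": ["none", "full"], "constraints": []},
}

names = list(technologies)
configs = [dict(zip(names, choice))
           for choice in product(*(technologies[n]["options"] for n in names))]

admissible = [cfg for cfg in configs
              if all(c(cfg) for t in technologies.values() for c in t["constraints"])]

print(len(configs), "joint configurations;", len(admissible), "admissible")
# Adding a fourth technology with two options doubles the configuration space,
# and any new cross-cutting constraint has to be checked against all of it.
```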

But as Luhmann says, the ethical distinctions which attempt to address this uncertainty behave in a similar way: uncertainty proliferates despite, and because of, attempts to manage it. This may keep the AI ethics institutes busy for a long time!

Yet it may not. AI is itself an anticipatory technology. It relies on the same processes of distinction-making and self-reference that Luhmann is talking about. Indeed, the relationship of re-entry between human distinction-making and machine distinction-making may lead to new forms of systemic stability which we cannot yet conceive of. Having said this, such a situation is unlikely to operate within the existing hierarchical structures of our present institutions: it will demand new forms of human organisation.

This is leading me to think that we need to study the ethics institutes as a specific form of late-stage development within our traditional universities. Benign as they might appear, they may share an institutional and historical structure with an earlier attempt to maintain traditional orthodoxy in the wake of technological development and radical ideas: the Spanish Inquisition.
