Tuesday 31 October 2023

Iconicity and Epidemiology: Lessons for AI and Education

The essence of cybernetics is iconicity. It is partly, but not only, about thinking pictorially. More deeply, it is about playing with representations which open up a dance between mind and nature. This is distinct from approaches to thought which are essentially "symbolic". Mathematics is the obvious example, but actually, most of the concepts one learns in school are symbols that stand in relation to one another, and whose relation to the world outside has to be "learnt". This process can be difficult because the symbols are governed by rules which are often obscure and sometimes contradictory.

Iconic approaches make the symbols as simple as possible: a distinction, a game, a process - onto which we are invited to project our experience of a particular subject or problem. This was first considered by C.S. Peirce, who developed his own approaches to iconic logic (see, for example, Peirce.pdf (uic.edu)). Cybernetics followed in Peirce's footsteps, and the iconicity of its diagrams and technical creativity makes its subject matter transdisciplinary. It also makes cybernetics a difficult thing for education to deal with, because education organises itself around subjects and their symbols, not icons and games.

But thinking iconically changes things.

I am currently teaching epidemiology, which has been quite fun. But I'm struck by how the symbols of epidemiology - not just the equations, but the classifications of study types, the problematisation of things like bias and confounding, and so on - put barriers in the way of understanding something that is basically about counting. So I have been thinking about ways of doing this more iconically.

To do this is to invite people into the dance between mind and nature, and to do that, we need new kinds of invitations. I'm grateful to Lou Kauffman who recommended Lancelot Hogben's famous "Mathematics for the Million" as a starting point. 

Hogben's book teaches the context and history of mathematical inquiry first, and then delves into the specifics of its symbolism. That is a good approach, and one that needs updating for today (I don't know of anything quite like it). Having said that, there are some great online tools for doing iconic things: the "Seeing Theory" project from Brown University is wonderful (and open source): https://seeing-theory.brown.edu/ (again, thanks to Lou for that).

Then of course, we have games and simulations - and now we have AI. Here's a combination of those things I've been playing with, inspired by Mary Flanagan's "Grow a Game" (Grow a Game - Mary Flanagan).

My AI version http://13.40.150.219:9995/

Basically, enter a topic, select a game, and ChatGPT will produce prompts suggesting rule changes to the game to reflect the topic. Of course, whatever the AI comes up with can be tweaked by humans - but it's a powerful way of stimulating new ideas and thought in epidemiology.
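In case it's useful, here is a minimal sketch of the idea in Python. The model choice, prompt wording, and use of the OpenAI client are my assumptions for illustration - this is not the actual code behind the site above.

```python
# Grow-a-Game sketch: ask a chat model to propose rule changes that
# turn a familiar game into a vehicle for teaching a topic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grow_a_game(topic: str, game: str) -> str:
    prompt = (
        f"Suggest three changes to the rules of {game} so that playing it "
        f"teaches key ideas in {topic}. For each change, explain which "
        f"concept it makes visible."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grow_a_game("epidemiology", "Snakes and Ladders"))
```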

There's more to do here.

Friday 27 October 2023

Computer metaphors and Human Understanding

One of the most serious accusations levelled against cognitivism is that it imposed a computer metaphor on the natural processes of consciousness. At the heart of the approach is the concept of information as conceived by the engineers of electronic systems in the 1950s (particularly Shannon). The problem with this is that there is no coherent definition of information that applies to all the different domains in which one might speak of information: from electronics to biology, psychology, philosophy, theology and physics.

Shannon information is a special case, unique in the sense that it provides a method of quantification. Shannon himself, however, made no pretence of applying it to phenomena beyond the engineering situation he focused on. But the quantified definition contains concepts other than information - most notably redundancy (which Shannon, following cyberneticians including Ashby, identified as a constraint on transmission) and noise. Noise is the reason the redundancy is there: Shannon's whole engineering problem concerned distinguishing signal from noise on a communication channel (i.e. a wire).
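Since the quantified definition is compact, it can be shown in a few lines. Here is a sketch in Python: entropy H of a symbol stream, and redundancy taken as R = 1 - H/Hmax (the example text and the character-level treatment are mine, for illustration only).

```python
# Shannon entropy of a symbol stream, and redundancy R = 1 - H/Hmax,
# where Hmax is the entropy the alphabet would have if all symbols
# were equally likely.
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def redundancy(text: str) -> float:
    h_max = log2(len(set(text)))   # maximum entropy: equiprobable symbols
    return 1 - entropy(text) / h_max

msg = "the theme then thence"      # heavy repetition -> high redundancy
print(f"H = {entropy(msg):.3f} bits/symbol, R = {redundancy(msg):.3f}")
```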

Shannon was involved with the establishment of cybernetics as a science. He was one of the participants at the later "Macy conferences", where the term "cybernetics", coined by Norbert Wiener, was adopted (actually, it may have been the young Heinz von Foerster who was really responsible for this). Shannon would have been aware that other cyberneticians saw redundancy, rather than information, as the key concept of natural systems: most notably, Gregory Bateson saw redundancy as an index of "meaning" - something also alluded to by Shannon's co-author Warren Weaver.

But in the years that followed the cybernetic revolution, it was information that became the key concept. Underpinned by the technical architecture first established by John von Neumann (another attendee of the Macy conferences), computers were constructed around a principle that separated processing from storage. This gave rise to the cognitivist separation of "memory" from "intelligence".

There were of course many critiques and revisions: Ulric Neisser, for example, among the early cognitivists, came to challenge the cognitivist orthodoxy. Karl Pribram wrote a wonderful paper on the importance of redundancy in cognition and memory ("The Four Rs of Remembering"; see karlpribram.com/wp-content/uploads/pdf/theory/T-039.pdf). But the information processing model prevailed, inspiring the wave of Artificial Intelligence and expert systems of the 1980s and early 90s.

So what have we got now with our AI? 

What is really important is that our current AI is NOT "information" technology. It produces information in the form of predictions, but the means by which those predictions are formed is the analysis and processing of redundancy. This is unlike early AI. The other thing to say is that the technology is inherently noisy. Probabilities are generated for multiple options, and somehow a selection must be made among those probabilities: statistical analysis becomes really important in this selection process. Indeed, in my own involvement with AI development in medical diagnostics, the development of models (for making predictions about images) was far less important than the statistical post-processing that cleaned the noise from the data and increased the sensitivity and specificity of the AI judgement. It will be the same with ChatGPT: there, the statistics must ensure that the chatbot doesn't say anything that will upset OpenAI's investors!
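To make the point concrete, here is a toy sketch of that kind of post-processing: a model emits noisy scores, and a decision threshold is chosen to trade sensitivity against specificity. The data is synthetic and the numbers arbitrary - this is not the diagnostic pipeline referred to above.

```python
# Choosing a decision threshold over noisy model scores: lowering the
# threshold raises sensitivity at the cost of specificity, and vice versa.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)          # ground truth: 1 = disease
scores = np.where(labels == 1,                  # diseased cases score higher
                  rng.normal(0.7, 0.2, size=1000),
                  rng.normal(0.4, 0.2, size=1000))

def sens_spec(threshold: float):
    preds = scores >= threshold
    sensitivity = (preds & (labels == 1)).sum() / (labels == 1).sum()
    specificity = (~preds & (labels == 0)).sum() / (labels == 0).sum()
    return sensitivity, specificity

for t in (0.4, 0.5, 0.6):
    se, sp = sens_spec(t)
    print(f"threshold {t:.1f}: sensitivity {se:.2f}, specificity {sp:.2f}")
```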

Information and redundancy are two sides of the same coin. But redundancy is much more powerful and important in natural systems, as has been obvious to researchers in ecology and the life sciences for many years (notably the statistical ecologist Robert Ulanowicz, Loet Leydesdorff, Bateson, Terry Deacon, and others). It is also fundamental to education - but few educationalists recognise this.

The best example is in the Vygotskian Zone of Proximal Development. I described a year or so ago how the ZPD was basically a zone of "mutual redundancy" (here: "Reconceiving the Digital Network: From Cells to Selves", on researchgate.net), drawing on Leydesdorff's description. ChatGPT emphasises this: Leydesdorff's work is of seminal importance in understanding where we really are in our current phase of socio-technical development.

Nature computes with redundancy, not information - and this is computation quite unlike how we think of computation with information. This is not to leave Shannon behind, though: in Shannon, what happens is selection. Symbols are selected by a sender, and interpretations are selected by a receiver. The key to the ability to communicate is that the complexity of the sending machine is equivalent to the complexity of the receiving machine (a restatement of Ashby's Law of Requisite Variety - Variety (cybernetics) - Wikipedia). If the receiver doesn't have the complexity of the sender, there will be challenges in communication. With such challenges - whether because of noise on the channel or because of insufficient complexity on the part of the receiver - it is necessary for the sender to create more redundancy in the communication: sufficient redundancy can overcome a deficiency in the receiver's capacity to interpret the message.
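The last point can be demonstrated with a toy example: simple repetition (the crudest form of redundancy) with majority-vote decoding lets a message survive a noisy channel. The channel model and repetition factor here are my choices, for illustration only.

```python
# Redundancy overcoming noise: each bit is repeated five times, the
# channel flips bits at random, and the receiver takes a majority vote.
import random

random.seed(1)

def noisy_channel(bits, flip_prob=0.2):
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def send_with_redundancy(bits, repeat=5):
    sent = [b for b in bits for _ in range(repeat)]   # repeat each bit
    received = noisy_channel(sent)
    return [int(sum(received[i:i + repeat]) > repeat / 2)   # majority vote
            for i in range(0, len(received), repeat)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print("no redundancy:  ", noisy_channel(message))
print("with redundancy:", send_with_redundancy(message))
print("original:       ", message)
```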

One of the most remarkable features of AI generally is that it is both created with redundancy and capable of generating large amounts of redundancy. If it weren't, its capacity to appear meaningful would be diminished.

For many years (much of that time working with Leydesdorff) I have been fascinated by the nature of redundancy in the construction of meaning and communication. Music provides a classic example of redundancy in communication - there is so much repetition - which we analysed here: onlinelibrary.wiley.com/doi/full/10.1002/sres.2738. I've just written a new paper on music and biology, to be published soon, which develops these ideas, drawing on the importance of what might be called a "topology of information" with reference to evolutionary biology.

It's not just that the computer metaphor doesn't work. The metaphor that does work is probably musical.