
Tuesday, 5 November 2013

Learning, Entropy and Supra-individuality

There is a problem in thinking about learning: we can only see the outward signs of cognition (linguistic utterances, body movements, and so on). Our understanding of what a person might be thinking, of what might be going on in the learning process, is derived from the way in which people play particular language games. A focus on 'agency' in learning seems to condemn us to metaphysical conjecture: no judgements can really be defended; they are effectively intellectual posturing.

The alternative to focusing on agency is to focus on structures which are, in the final analysis, communications. It is reasonable to suggest that communicative structures may provide some glimpse into the world of the agent: certainly some of the constraints that operate on the agent's behaviour are available for inspection. But also, we might be able to test theories of agency which relate agency to structure by examining evidence from communications. With this supra-individual perspective, we can ask fundamental questions about the constitution of the self: are minds within the head? Are selves within a body? And so on.

Supra-individuality is also metaphysical conjecture, of course. But at least it provides us with something that we can explore in a systematic and analytical way; studying agency per se, on the other hand, either leads to the ontological difficulties of the psychologist's lab, or to the ineffectual hand of phenomenological interpretivism.

So what tools do we have? One technique is Shannon's communication theory. Shannon's insight into the transmission of messages between machines is deceptively simple: the issue is the uncertainty of a receiver in predicting the messages being sent to it (remember, this is machines, not people!). The greater the uncertainty, the greater the amount of information gained if the receiver were able to predict the message. But messages are transmitted within the constraints of grammar, syntax, meaning, and so on. These mean that symbols are not equally probable; some symbols are more probable than others depending on the context. The resulting predictability Shannon calls 'redundancy': the gap between the actual entropy of a source and the maximum entropy it would have if all its symbols were equally likely. Some things are repeated more often than others (like the letter 'e' in English); equally, some messages are repeated to ensure correct transmission if the medium is "noisy". Redundancies can be added to ensure effective transmission over a noisy medium, either by repeating messages or by simplifying the syntax or grammar: both techniques effectively add extra, extraneous bits to the message to ensure its transmission. Entropy is framed by redundancy.
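As a rough illustration (the function names and the toy strings are mine, not Shannon's), entropy and redundancy can be estimated from symbol frequencies:

```python
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """Shannon entropy (bits per symbol) of the text's symbol distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def redundancy(text: str) -> float:
    """Redundancy = 1 - H / H_max, where H_max = log2(number of distinct
    symbols): the gap between the actual entropy and the maximum possible.
    Higher values mean a more skewed, more predictable distribution."""
    h_max = log2(len(set(text)))
    if h_max == 0:
        return 1.0  # a single repeated symbol is maximally redundant
    return 1 - entropy(text) / h_max

# "abab" uses its two symbols equally often: zero redundancy at this level.
print(redundancy("abab"))
# "aaab" is skewed towards 'a': some redundancy appears.
print(round(redundancy("aaab"), 3))
```

The same calculation extended to pairs or triples of symbols would capture the grammatical constraints mentioned above, since those make some *sequences* more probable than others.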

Shannon was resistant to applying his theory to human communication. The missing ingredient for Shannon would be 'meaning'. It's all very well to talk about information and entropy, but the relationship between meaning and entropy is of a different order. One way of characterising this 'different order' is to conceive of human communication as a process of selecting anticipations of communication processes. If we pursue this line of inquiry, then agency starts to become more tractable.

My utterances on this blog are not performed in the vacuum of my head (!). They are performed in anticipation of the likely responses readers might have to them. Actually, I reckon very few people will bother to read this far in my post; but one or two (the people I want to communicate with) might. I can imagine what they might say. Actually, I am talking to myself in wondering what they might say; but as I do this, I am exploring a set of anticipations which will inform the utterances I choose to make. My hope is that the utterances I eventually select will stimulate conversation with those with whom I wish to communicate. So I have anticipations of communication, and I have models of other people.

There are ways of conceiving of anticipations of communication. Daniel Dubois has done very interesting work on anticipatory systems in computer science (see http://wohlstandfueralle.com/documents/DUBOISHYPERINCURSION.pdf). But what is interesting in this work is the way that we eventually come to select utterances: the criteria for agency. The exploration of anticipations is also an entropic system; it is constrained by redundancies. But the redundancies which constrain the anticipations of communications between humans must necessarily be shared. My communication with you will be a process of selecting an utterance to which I calculate you will respond in a particular way; I can only predict this if I have some insight into the constraints that operate between us in our communicating.

I find this an exciting result, because the redundancies in our communications are measurable. When we look at emails, documents on the internet, Twitter posts, blogs, and so on, we can determine levels of redundancy. In comparing levels of redundancy in discourses, there does appear to be a way of getting back to some conception of the processes of agency which created the conditions for the making of utterances. In the multi-dimensional world of human communications, it is in the redundancies shared between people that meaningful discourse arises. Shannon's theory on its own cannot determine this (indeed, in multiple dimensions, it would suggest that communications ought to fall apart, as Klaus Krippendorff argues). But multi-dimensional human communications do not fall apart; in fact, they often work rather well. Thinking about anticipations and mutual redundancies is a way of understanding our extraordinary success!
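As a crude sketch of what such a measurement might look like (the proxy measures here are my own illustrative choices, not an established method for mutual redundancy): compute word-level redundancy within each discourse, and vocabulary overlap between them as a stand-in for shared redundancy.

```python
from collections import Counter
from math import log2

def word_redundancy(text: str) -> float:
    """Word-level redundancy: 1 - H / H_max over the text's word frequencies."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    h_max = log2(len(counts)) if len(counts) > 1 else 1.0
    return 1 - h / h_max

def shared_vocabulary(a: str, b: str) -> float:
    """Crude proxy for shared redundancy: Jaccard overlap of vocabularies."""
    va, vb = set(a.lower().split()), set(b.lower().split())
    return len(va & vb) / len(va | vb)

chat = "you know you know what i mean you know"
spec = "the parser reads the header then the parser reads the body"
print(word_redundancy(chat), word_redundancy(spec))
print(shared_vocabulary(chat, spec))   # no shared words, so no shared ground
```

A serious analysis would, of course, need far larger corpora and redundancy measured over sequences rather than single words; the sketch only shows that the quantities are computable from ordinary text.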
