Sunday, 29 January 2017

Options and Selection in Evolution

When we think about selection - for example, natural selection, or the selection of symbols to create or decode a message (Shannon), or the selection of meanings of that message (Luhmann) - there is an implicit notion that selection is a bit like selecting cereal in the supermarket. There is a variety of cereal, and the capacity to select must equal the degree of variety (this is basically a rephrasing of Ashby's Law of requisite variety). Rather like Herbert Spencer's presentation of natural selection as "survival of the fittest", this concept of selection has a close relation to capitalism - both as it manifested in the 19th century economy, and in the 19th century schoolroom (which must, as Richard Lewontin pointed out, have influenced Darwin).

The problem with this idea is that it is assumed that the options available for choosing are somehow independent from the agents that make the choice, and independent from the processes of choosing. But the idea of independent varieties from which a selection is made is a misapprehension. It leads to confusion when we think about how humans learnt that certain things were poison. It is easily imagined that blackberries and poison ivy might have been on the same "menu" at some point in history as "things that potentially could be consumed", and through the unfortunate choices of some individuals, there was social "learning" and the poison ivy was dropped. But the "menu" evolved with the species - the physical manifestation of "food" was inseparable from practices of feeding which had developed alongside the development of society. The eating of poison ivy was always an eccentricity - a transgression which would have been felt by anybody committing it before they even touched the ivy. The taste for novelty is always transgressive in this way - the story of Adam and Eve and the apple carries some truth in conveying this transgressive behaviour.

This should change the way we think about 'selection' and 'variety'. It may be a mistake, for example, to think of the environment as having high variety, and individual humans having to attenuate it (as in Beer's Viable System Model). At one level, this is a correct description, yet it overlooks the fact that the attenuation of the environment is something done by the species over history, not by individuals in the moment. Some variety is only "out there" because of historical transgressions. What produces this variety? - or rather, what produces the need to transgress? Colloquially, our thirst for "variety" comes from boredom. It comes from curiosity about what lies beyond the predictable, or a critique of what becomes the norm. From this perspective, redundancies born out of repetition become indicative of the quest for increased variety. This thirst for novelty born from redundancy is most clearly evident in music.

Music generates redundancies at many levels: melodies, pitches, rhythms, harmonies, tonal movements - they all serve to create a non-equilibrium dynamic. Tonal resolutions affirm redundancies of tonality, whilst sowing the seed for the novelty that may then follow. I remember the wonderful Ian Kemp, when discussing Beethoven string quartets at the Lindsay Quartet sessions at Manchester University, would often rhetorically ask at the end of a movement, "Well what is he going to do now?... something different!". The "something different" is not a selection from the menu of what has gone before. It is a complete shift in perspective which breaks with the rules of what was established to this point. The genius is that the perspectival shift then makes the connection to what went before.

Bateson's process of bio-entropy is a description of this. The trick is to hover between structural rigidity and amorphous flexibility. Bateson put the tension as between "rigour" - which at its extreme brings "paralytic death" - and "imagination", which at its extreme brings madness (Mind and Nature, "Time is Out of Joint"). In Robert Ulanowicz's statistical ecology, there is a similar Batesonian balance between rigidity and flexibility, as there is in Krippendorff's lattice of constraints (see http://dailyimprovisation.blogspot.co.uk/2016/06/unfolding-complexity-of-entanglements.html).

I suspect selection is a more complex game than Shannon supposes. Our choices are descriptions. Our transgressions are wild poetic metaphors which only at the deepest level connect us to normal life. In Nigel Howard's metagame theory, such wild metaphors are deeper-level metagames that transform the game everyone else is playing into something new. I've struggled to understand where and how these metagames arise. The best I can suggest is that as mutual information rises in communicative exchanges, so the degree of "boredom" rises because messages become predictable. That means that the descriptions of A match the descriptions of B. This boredom is really the increasing realisation of the uselessness of rational selection: it is a moment of Nigel Howard's "breakdown of rationality". Bateson would call it a double-bind. To escape it, at some point A or B will generate a new kind of description unknown in the communication so far. This shakes things up. At a deep level, there will be some connection to what has gone before, but at the surface level, it will be radically different - maybe even disturbing.
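To make this concrete, here is a toy sketch in Python - not the agent-based model itself; the imitation rule, window size and "boredom" threshold are all my invention for illustration. As B's utterances become predictable from A's, the measured mutual information rises; past the threshold, a new kind of description is introduced and the game is shaken up:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (bits) between the two sides of (a, b) pairs."""
    n = len(pairs)
    pa, pb, pab = Counter(), Counter(), Counter(pairs)
    for a, b in pairs:
        pa[a] += 1
        pb[b] += 1
    return sum((c / n) * math.log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in pab.items())

random.seed(1)
symbols = ["x", "y"]      # the shared repertoire of descriptions so far
window = []               # recent (A, B) exchanges
for step in range(300):
    a = random.choice(symbols)
    # B grows more imitative over time, so the exchange becomes predictable
    b = a if random.random() < min(0.99, step / 100) else random.choice(symbols)
    window = (window + [(a, b)])[-50:]
    mi = mutual_information(window)
    if len(window) >= 40 and mi > 0.9:   # "boredom": A's descriptions match B's
        symbols.append(f"novel{len(symbols)}")  # a transgressive new description
        window = []
        print(f"step {step}: MI = {mi:.2f} bits -> new kind of description")
```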

In music this unfolds in time. It is the distinction between a 1st and 2nd subject in sonata form, or perhaps at a micro-level, between a fugal subject and its countersubject. A theme or section in the music is coherent because it exists within a set of constraints. Descriptions are made within those constraints. The "transgressive" exploration of something different occurs when the constraints change. This can happen when redundancies in many elements coincide: so, for example, a rhythm is repeated alongside a repeated sequence of notes, or a harmony is reinforced (the classic example is in the climax of the Liebestod in Wagner's Tristan). These redundancies reduce the richness of descriptions which can be produced, highlighting more fundamental constraints, which can then be a seed for the production of a new branch of descriptions. It is, of course, also what happens in orgasm.

I've been trying to create an agent-based model which behaves a bit like this. It's a challenge which has left me with some fundamental questions about Shannon redundancy. The most important question is whether Shannon redundancy is the same as McCulloch's idea of "Redundancy of potential command" which he saw in neural structures (and in Nelson's navy!). Redundancy of potential command is the number of alternatives I have for performing function x. Shannon redundancy is the number of alternative descriptions I have for x. McCulloch's version can be easily visualised as alternative pathways through the brain. Shannon redundancy is typically thought about (and measured) as repetition. I think the two concepts are related, but the connection requires us to see that repetition is never really repetition; no two events are the same.
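A rough Python sketch of the two measures as I read them: Shannon redundancy is standardly computed as 1 - H/Hmax from counted repetitions, while redundancy of potential command might be crudely rendered as counting alternative pathways to the same function. The message and the little network are invented for illustration:

```python
import math
from collections import Counter

def shannon_redundancy(message):
    """Shannon redundancy: 1 - H/H_max, measured from repetitions of symbols."""
    counts = Counter(message)
    n = len(message)
    if len(counts) < 2:
        return 1.0
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1 - h / math.log2(len(counts))

# McCulloch-style redundancy of potential command: the number of alternative
# pathways through a (toy) network by which function X can be performed
network = {"S": ["n1", "n2", "n3"], "n1": ["X"], "n2": ["X"], "n3": ["X"], "X": []}

def paths(node, goal, graph):
    """Count distinct routes from node to goal."""
    if node == goal:
        return 1
    return sum(paths(nxt, goal, graph) for nxt in graph[node])

print(shannon_redundancy("aaaaaab"))   # alternative descriptions of the same thing
print(paths("S", "X", network))        # alternative ways of performing x: 3
```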

Each repetition counted by Shannon reveals subtle differences in the constraint behind the description that a is the same as b. Each repetition brings into focus the structure of constraint. Each overlapping of redundancy (like the rhythm and the melody) brings the most fundamental constraint more into focus. As it is brought more into focus, so new possibilities become more thinkable. It may be like fractal image compression, or discrete wavelet transforms...

Thursday, 26 January 2017

What has technology done to education?

In 2017 we celebrate 500 years since Martin Luther sparked the Reformation by nailing his 95 theses to the door of All Saints' Church in Wittenberg (as well as sending them to the Archbishop of Mainz). Although it caused a religious crisis with which we have lived ever since, this was an academic disputation. But it was one which owed its origin to the invention, almost 80 years earlier, of the Gutenberg printing press. There is no better example of the impact of technology on institutions, no better demonstration of the antagonistic and dialectical relationship between technology and institutions, and no greater warning to our institutions today to keep up with technology and embrace the changes and possibilities it brings.

Technology, and particularly communications technology, creates new possibilities for individuals to communicate, and by extension, to organise themselves. Institutions are also a kind of technology (David Graeber writes well about this in his recent "The Utopia of Rules"): they provide a social mechanism to uphold relations based on roles, rights, duties and responsibilities (we see this very clearly in education and the church both today and in history). Communication technology threatens institutional structures and this results in various political reactions. For the church in the 15th century, printing was clearly very powerful. But it was also an implicit threat to blow up existing power structures.

The relationship between technology and institutions is dialectical. The first reaction of the church to the printing press was to exploit it to serve its existing structures. Gutenberg's first print-runs were for indulgences. The hierarchy of the church (and Gutenberg) must have been rubbing their hands! This was a licence to print money!

But of course it wasn't long before the new freedoms to communicate and organise threatened the status quo. Luther's German Bible didn't appear until 1522, but it must have been obvious that such a thing was possible, and moreover that if individuals could read the Bible for themselves in their own language, the role of the church as the gateway to divine mercy might be challenged by a "do it yourself" attitude.

The internet is much more recent to us now than Gutenberg's press was to Martin Luther. If we run on the same timescale, then our Universities are at the stage of "printing indulgences" with the technology (MOOCs, anyone?). Perhaps more importantly, the Catholic church in 1470 must have thought it had got to grips with printing, that it could harness it to reinforce its existing structures and political power (Have you submitted your assignment to Turnitin?). I think this may be where we are in our Universities' attitude to technology right now. In a recent call for 'co-design projects', JISC has asked for advice on new developments in AI, the Internet of Things, Learning Analytics, etc. (see https://www.jisc.ac.uk/rd/how-we-innovate/co-design-consultation-2016-17) - but all with the focus of serving the institution as it is currently constituted (this is because the new 'market-oriented' JISC wants to sell services to the institution as it is currently constituted!). There is little consideration that the nature of the institution, its structures, its focus of activity, its funding, its science, its certification, or its scientific communication might all change. Yet we know that this has happened before.

At the moment we are seeing dramatic and rapid technical developments. At a trade exhibition in London on the "Internet of Things" the other day, a friend told me that the emphasis was not so much on smart devices but (to her surprise) on the radical new data structures and technologies that sit behind them (this is why some universities are jumping on the Blockchain bandwagon - http://blockchain.cs.ucl.ac.uk/). There were a lot of banks interested - they worry about the threat to their existing business models. Universities should too.

We are also seeing radical new business models coupled with innovative technology. Today you can be a taxi driver (Uber), a hotelier (AirBnB) and a postman (Deliveroo) - all at once. How many other jobs are going to be added to that list? What about Carer, Cleaner, Doctor, Architect, Musician...? What about Philosopher, Biologist, Engineer, Computer programmer, Biohacker? What about the learning whereby someone acquires the skills to do any of these? What do universities become in such a world? What value does certification (which is the principal product of the university) hold in a world where status is determined not by a certificate but by customer rating (as with Uber)? Where does science, the preservation and growth of knowledge, occur? What happens to the library? What about the arts? How does civil society coordinate itself?

Right now it doesn't look like technology has blown up education. All seems ok. But printing did blow up the Catholic church. The explosion took 80 years, and there were stages in its unfolding which gave no clue as to what might follow. Learning from history is important. Being alive to the dialectics of technology-institution relations even more so. We need good antennae. 

Tuesday, 24 January 2017

Brains and Descriptions - some Agent-based modelling

Imagine that a brain is a set of constraints which produce multiple descriptions. The constraints organise themselves as a binary tree: with enough layers of recursion it quickly becomes very complex:
In the diagram (left), produced in NetLogo, the nodes are multi-coloured. After 2 levels of growth, the resulting four nodes are coloured red, green, yellow and blue. These colours are then inherited by subsequent levels.
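The model itself is in NetLogo, but the structure can be sketched minimally in Python, assuming the colouring described above (four colours fixed at level 2 and inherited thereafter):

```python
class Node:
    def __init__(self, colour, depth):
        self.colour = colour      # inherited from the level-2 ancestor
        self.depth = depth
        self.children = []

def grow(node, max_depth):
    """Each constraint splits in two; colour is inherited all the way down."""
    if node.depth >= max_depth:
        return
    node.children = [Node(node.colour, node.depth + 1) for _ in range(2)]
    for child in node.children:
        grow(child, max_depth)

# two levels of growth give four nodes: red, green, yellow and blue
root = Node(None, 0)
root.children = [Node(None, 1), Node(None, 1)]
for mid, pair in zip(root.children, [("red", "green"), ("yellow", "blue")]):
    mid.children = [Node(c, 2) for c in pair]
    for child in mid.children:
        grow(child, max_depth=8)   # deeper recursion quickly becomes very complex
```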

The question is how does the brain decide which descriptions to make? If red, yellow, green or blue are the choices for different kinds of action, how does it select which action it should perform?

The answer to this question which I'm currently exploring is that it is the action which carries the maximum number of possible descriptions. Some descriptions are very powerful for us in being able to describe a very wide range of phenomena. These descriptions, and the actions associated with them, are the ones we select. This can explain why religious descriptions, for example, have so much power: they can be expressed in so many ways.

But the choice of action, and of the descriptions behind it, fluctuates. To simulate the fluctuating levels of different kinds of description, I've created a kind of "brain eating" algorithm. Basically this simulates a kind of mental atrophy - a gradual process of losing richness of description. Since the process is random, different kinds of actions are selected because the balance of "maximum descriptions" shifts from one moment to the next.
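Continuing the Python sketch above: selection goes to the colour which currently carries the most descriptions (leaves), and a random prune - the "brain eating" - shifts that balance from moment to moment. The pruning probability is arbitrary:

```python
import random

def leaves(node):
    """Each leaf stands for one possible description under a colour's constraints."""
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]

def select_action(root):
    """Choose the action (colour) which currently carries the most descriptions."""
    tally = {}
    for leaf in leaves(root):
        tally[leaf.colour] = tally.get(leaf.colour, 0) + 1
    return max(tally, key=tally.get)

def atrophy(node, p=0.02):
    """'Brain eating': randomly prune subtrees, eroding richness of description."""
    node.children = [c for c in node.children if random.random() > p]
    for child in node.children:
        atrophy(child, p)

for step in range(20):
    print(step, select_action(root))   # the selected action fluctuates as the
    atrophy(root)                      # balance of "maximum descriptions" shifts
```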

However, brains do not die like this. Knowledge grows through communication. The actions (red, yellow, green or blue) might be communications with another brain. The result on the other brain is to stimulate it into making descriptions... and indeed to stimulate reflexive processes which in turn can lead to mental development.

In the communicating process, there is a fundamental question: what is it one brain seeks from another? The answer, I think, is a kind of symbiosis: brains need each other to grow. It is in the "interests" of one brain that another brain acquires rich powers of description.

This is interesting when thinking about teaching. The teacher's brain makes rich descriptions, and in response to these, the learner's brain reveals the patterning of its constraints. The teacher's brain then needs to work on these constraints - often by revealing their own constraints - and transform them so that the learner is able to make a richer set of descriptions. Deep reflexivity in the teacher - that is, deep levels of recursion - help to generate the powerful descriptions which stimulate growth in the learner and help transform their own capacity for making descriptions. 

Tuesday, 17 January 2017

Reasoning and Cognition: The difference between Modelling Logic and Modelling Reality

I'm wrestling with the problem of formalising the way in which learning conversations emerge. Basically, I think that when a teacher cares about what they do, and reflects carefully on how they act, they will listen carefully to the constraints of the teacher-learner situation. There is rarely (when it is done well) a case where a teacher will insist on "drilling" concepts and giving textbook accounts of them - particularly when it is obvious that a learner doesn't understand from the start. What they do instead is find out where the learner is: the process really is one of leading out (educere) - they want to understand the constraints bearing upon the learner, and reveal their own constraints of understanding. The process is fundamentally empathic. Unfortunately we don't see this kind of behaviour by teachers very often. The system encourages them to act inhumanely towards learners - which is why I want to formalise something better.

My major line of inquiry has concerned teaching and learning as a political act. Education is about power, and when it is done well, it is about love, generosity and emancipation. The question is how the politics is connected to constraints. This has set me on the path of examining political behaviour through the lens of double binds, metagame theory, dialectics and so on. Particularly with metagame theory, I found myself staring at models of recursion of thought: intractable loops of connections between constraints which would erupt into patterns of apparently irrational behaviour. It looks like a highly complex network.


We're used to complex networks in science today. I thought that the complex network of Howard's metagame trees looked like a neural network - and this made me uncomfortable. There is a fundamental difference between a complex network used to map processes of reasoning, and a complex network used to map processes of cognition. The map of cognition, the neural net, is a model of an abstract cognising subject (a transcendental subject) which we imagine could be "a bit like us". Actually, it isn't - it's a soulless machine - all wires, connections and an empty chest. That's not at all like us - it would be incapable of political behaviour.


The logical map, or the map of reasoning - which is most elegantly explored in Nigel Howard's work - is not a model of how we cognise, but a model of how we describe. What is the difference? The distinction is about communication and the process of social distinction-making. Models of cognition, like a neural network, are models of asocial cognition - distinction-making within the bounds of an abstract skin in an abstract body. The constraints within which such an "AI" distinction is made are simply the constraints of the algorithm producing it, and (more fundamentally) the ideas of the programmer who programmed it in the first place.

A map of reasoning processes and political behaviour is, by contrast, precisely concerned with constraint - not with the particular distinction made. In conversation with a machine, or a humanoid robot, a teacher would quickly puzzle over the responses of the machine and think "there are some peculiar constraints here which I cannot fathom which make me think this is not a human being". Perhaps, more than anything else, the tell-tale sign would be the failure to act irrationally...

Sunday, 15 January 2017

Bio-Entropy and Learning

Gregory Bateson had a word for the underlying process of living things. In contrast to the idea of entropy in physics, where everything eventually runs down and where order gives way to disorder, Bateson argued that a different kind of process was at work in living things: Bio-Entropy, which he saw as a continual struggle to maintain a median point between extremes of rigidity and flexibility. Rigid organisms die because they are unable to adapt. Organisms which are almost amorphous (the extreme of flexibility) die because they cannot organise to exploit the resources available to them which would help them to survive. To live is to exist in the middle, to oscillate between periods of semi-rigidity or semi-amorphousness.

In information theory, information - or negentropy - is seen as the countervailing force to physical entropy, although Shannon derived its formulation from Boltzmann's statistical representation of physical entropy. Negentropy creates surprise and difference, which can stimulate complexification in response to entropy's run-down. Terry Deacon talks of orthograde and contragrade forces in biological systems. In Stafford Beer's work on management cybernetics, there is a similar relationship between the vertical forces of top-down management (Beer calls this "metasystemic intervention") and the horizontal forces of self-organisation: the trick in organisations (and government) is to get the balance right. Ecologist Robert Ulanowicz, himself deeply influenced by Bateson's ideas, uses the information calculus to gauge how these forces operate in ecologies, taking various information measures from the components of an ecosystem to measure the surprisingness it generates. Information theory has been useful to ecologists for many years, but Ulanowicz is also aware of the deep confusion which lies inherent in its apparently simple formulation.
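To give a flavour of this, here is a toy calculation of the average mutual information (AMI) of an ecosystem flow matrix - the quantity Ulanowicz uses to gauge how articulated (rigid) or diffuse (flexible) a flow structure is. The matrix is invented, and Ulanowicz's full indices (ascendency, overhead) go further by scaling AMI by total system throughput:

```python
import math

# toy ecosystem flow matrix: T[i][j] is the flow from compartment i to j
T = [[0, 10, 2],
     [0,  0, 8],
     [4,  0, 0]]

total = sum(map(sum, T))
row_sums = [sum(row) for row in T]
col_sums = [sum(col) for col in zip(*T)]

ami = sum((t / total) * math.log2(t * total / (row_sums[i] * col_sums[j]))
          for i, row in enumerate(T) for j, t in enumerate(row) if t > 0)
print(f"AMI = {ami:.3f} bits")   # high: articulated, rigid flows; low: amorphous
```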

I find Bio-entropy the most powerful idea when I think of education, learning, organisation, love, art and music. It is, fundamentally, a different way of expressing dialectics. In things like music, we experience it in action in a most direct way. The challenge to us is to find a better way of measuring it.

Shannon's formulation of information was intended to address a specific engineering problem. We do not talk with each other in the way that computers exchange messages. Our talking is more like a dance, or perhaps a game, within which we can become incredibly happy, angry, sad, bored, and so on. Out of our talking, politics arises. One person might realise a power to make another feel a certain way, and then to do certain things, and to exploit this to their own ends. Capitalism begins (and possibly, ends) in conversation. But if it's a game, what kind of a game is it? How does it arise? How are the rules decided? What is it to play or not to play? What is it to win or lose?

Another of Bateson's aphorisms concerns the "multiple descriptions of the world": cognition is connotative in the sense that it feeds its way through differing descriptions of the same thing. We have two eyes which give a slightly different description of the world. When we talk, our words describe one thing, whereas our body language describes something else. Sometimes the words and the body language describe different things, leading to what Bateson famously called a "Double-bind".

Great artists and great teachers are similar in the sense that they are masterful in their manipulation of descriptions. A maximal ability to generate descriptions for a variety of different circumstances (some of which might challenge the viability of other people) whilst maintaining a coherent centre for those descriptions is the true mark of the expert, not the dogmatic insistence on a single description (we have misunderstood expertise for a long time!). It is this ultimate flexibility of description-making that great teachers seek to inculcate in their students.

An utterance by a great teacher or an artist will contain within it many descriptions. There might be the statement of a concept, or an observation of something they find interesting, coupled with their tone of voice, posture, body language, etc. People might hear the utterance, but they also sense the other descriptions that are presented to them. If the utterance is the central focus (which it usually is), then all the other features envelop it:
((((Utterance) voice) body language) posture) environment
One utterance will be followed by another, and another set of descriptions is produced. Sometimes the context of these descriptions will be different - maybe it's only expressed in text:
((Utterance) text medium) online environment
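The nesting can be put as a trivial data structure - a sketch only - which also makes the point about redundancy: the utterance survives whichever layers of description envelop it:

```python
def envelop(core, *layers):
    """Wrap a central utterance in successive layers of description."""
    for layer in layers:
        core = (core, layer)
    return core

spoken = envelop("Utterance", "voice", "body language", "posture", "environment")
online = envelop("Utterance", "text medium", "online environment")

def core_of(description):
    """Peel away the enveloping layers: the utterance survives their removal."""
    while isinstance(description, tuple):
        description = description[0]
    return description

assert core_of(spoken) == core_of(online) == "Utterance"
```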
A learner is really a different kind of environment for a great teacher. The teacher realises they can change this environment (the learner) by making the right kind of descriptions. The response to the descriptions that the teacher makes tells the teacher what kind of an environment the learner is. The process of teaching is a process of providing the learner with greater flexibility to make their own descriptions across the different environments which they inhabit.

Maintaining the balance of Bio-entropy is, I think, related to the process of creating the maximal number of descriptions. If a formalisation of this is at all possible (and I'm not sure), I think it looks rather different from Shannon information theory. It's not surprise we are looking for; it's the coordination of constraint.

Friday, 13 January 2017

Personal Learning and Political Behaviour: The Utility of a Metagame approach

Computer technology in its various forms has vastly increased the possibilities for communication. This has manifested itself in transformed practices in many areas of life. Publishing, shopping, learning, social coordination, political organisation, artistic expression and entertainment have all been transformed by increasing possibilities. The expansion of the possibilities of communication produces political behaviour within organisations and institutions. The dynamics involved in the interaction between technological possibilities and political behaviour are poorly understood and little studied. Metagame theory offers a coherent framework for understanding this transformation of political behaviour - for reasoning about institutional change and about decision-making which can sometimes produce irrational outcomes.

Expansion in the number of possibilities for communication does not translate into an expansion of the realisation of all those possibilities in all walks of life. Previous expansions of communicative possibilities (e.g. printing) showed how technology is disruptive to existing practices and to the social structures which surround those practices. In addressing the challenge of technology, institutional actors engage in strategies for technological implementation which anticipate the actions of individuals and which seek to maintain institutional viability in a changing world. For educational institutions and learners, the process can be seen as one of competing 'games' of education: technology challenges the "old game" of the traditional institution; institutions and individuals seek to find a new game which works better for them. As a game about possible games, the approach lends itself to the "metagame" analysis developed by Nigel Howard in the early 1970s, which he further developed into a theory of political behaviour called "drama theory", used with some success in military conflict situations (see https://en.wikipedia.org/wiki/Drama_theory).

The metagame that institutions have so far played in responding to technology is conservative in the sense that it aims to find new ways of upholding institutional authority in the face of enhanced personal capabilities. The question pursued by institutions is "how can education exploit the technology?", rather than "given that we have this technology, what should education be like?" In addressing the first question, institutions limit their technological responses to delivering content through Learning Management Systems, or to e-portfolio tools through which students may submit material for assessment from mobile devices or access their institutional timetables. Effectively, this produces a "stand-off" between the institution and technology. However, technological and political developments continue to change the rules of the game of education. Rising student fees and the consequent long-term personal debt, stubborn adherence to traditional lecturing, transformation in the practices of scientists, the centrality of the computer in research across all disciplines, and new possibilities for online learning all make it appear that the conservative metagame isn't a viable long-term solution - not least because it establishes paradoxes in the uses of technology in informal and formal learning contexts.

Learners too are pursuing their own metagames with their personal tools and their relations with the formal education system. Sometimes, learner strategies exploit the standoff between personal practices with technology and institutional prohibitions: whilst instant "googling" for information or entertainment through mobile phones has become a ubiquitous practice in personal life, the use of mobile phones in formal education is often banned on the grounds that it is disruptive to formal learning processes like lecturing. Institutional conservatism towards technology has been reinforced by worries about the uses of social media platforms, issues of privacy and data, the phenomenon of the social media "echo chamber", superficial levels of intellectual engagement, a "cut and paste" mentality, plagiarism and other pathologies.

The drama theory approach explores not only the options individuals have for action, but also the possible responses to each possibility, the extent to which certain possibilities for action are constrained, and the extent to which rational strategies can lead to unwinnable situations which then create the conditions for "changing the game" - or ascending to a higher level of the metagame. The political behaviour of individual learners and the political behaviour within institutions in response to technology can be characterised in this way so as to explain some of the current pathologies of institutional education, and to highlight new productive pathways for development. At the heart of the question is the challenge of understanding how the needs of individual learners and the needs of society can be coordinated, and the role that technology and the institutions of education play in this.

Wednesday, 4 January 2017

The Game of Shannon Information

In Information Theory, the transmission of information, or mutual information, is defined as the number of bits that need to be transmitted in order for a receiver to be able to predict the message sent by a sender. It's a bit like a game of Battleships: each player must guess the configuration (message) of the other's battleships by trying to place their bombs in positions where the other's battleships will be destroyed. Each move conveys information. The player who succeeds with the fewest bits (i.e. the fewest turns) wins. Like all games, there are processes of reflexivity which accompany moves in the game; there are "metagames" - games about the game - as players speculate on each other's strategies.
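A crude illustration in Python, reducing Battleships to locating a single ship on a strip of 16 cells: each well-chosen "bomb" is a half-splitting question conveying one bit, so about log2(16) = 4 turns suffice:

```python
import math

def bits_to_identify(n):
    """Yes/no questions needed to pin down one of n equally likely configurations."""
    return math.ceil(math.log2(n))

def play(secret, lo=0, hi=15):
    """Bisect the strip: each 'bomb' asks, in effect, 'is the ship in this half?'"""
    turns = 0
    while lo < hi:
        mid = (lo + hi) // 2
        turns += 1
        if secret <= mid:
            hi = mid
        else:
            lo = mid + 1
    return turns

print(bits_to_identify(16), play(secret=11))   # both 4: one bit per turn
```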

Thinking about Shannon information in this way leads me to think about the extent to which the mathematical theory of communication is really a description of a game. This is particularly interesting when we consider "meaning". Some moves in Battleships are more meaningful than others. The most meaningful moves are those which reveal where the opponent's battleships are. This can be partly revealed by where opponents don't place their bombs; indeed, player B's understanding of player A's game concerns where they think player A's ships are. The purpose of the game is to reveal the constraints of the other player.

In ordinary language, we understand and share many of the basic constraints of communication - particularly the grammar of a language. These constraints, in this sense, manifest themselves in the displayed redundancy of the language - the fact that there are more e's in an English sentence than z's, or that we say "the" more often in sentences than "hippopotamus". The measure of Shannon entropy, defined as the average "surprisingness" of signals in a message, contains within it this notion of redundancy, without which nothing would be surprising. Its virtue is that it is easily measurable since it simply involves counting words or letters. Yet Shannon's formula glosses over the fact that this shared constraint of grammar was, once upon a time, learnt by us. How did this happen?
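Counting the letters of even a short sentence makes the point - though single-letter statistics capture only part of English redundancy, which Shannon estimated to be much higher once longer-range constraints of spelling, grammar and sense are included:

```python
import math
from collections import Counter

sentence = ("the constraints of grammar manifest themselves in the "
            "displayed redundancy of the language")
letters = [c for c in sentence if c.isalpha()]
counts = Counter(letters)
n = len(letters)

h = -sum((c / n) * math.log2(c / n) for c in counts.values())
h_max = math.log2(26)                         # an equiprobable alphabet
print(f"e: {counts['e']}, z: {counts['z']}")  # far more e's than z's
print(f"H = {h:.2f} bits/letter, redundancy = {1 - h / h_max:.0%}")
```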

Here the broader notion of constraint helps us by making a connection back to the game of Battleships. In the game of language acquisition, the objective is similar: to discover the constraints bearing on those who talk a particular language, just as Battleships aims to discover the constraints bearing on the player who seeks to destroy my ships without revealing their own. The language acquisition game is different not just in that it doesn't appear adversarial, but also in that there are many kinds of 'move' in the game, and crucially, a single move (or utterance) might be described in many ways simultaneously: with sound, a facial expression, a gesture, and so on. In other words, the Shannon notion of redundancy as "extraneous bits of information" is quite apparent. Such redundancy of expression reveals constraints on the person making the utterance. At other times, such redundancy can serve to constrain the other person to encourage them to do something particular (saying "do this" whilst pointing, using a commanding tone of voice, etc).

At this point, we come back to Shannon's theory and his idea of Information and redundancy. The game of "constraint discovery" can account for Information transmission in a way which doesn't make such a big deal about "surprisingness". Surprisingness itself is not very useful in child language acquisition: after all, a truly surprising event might scare a child so as to leave them traumatised! Shannon's notion of Redundancy is more interesting, since it is closely associated with the apprehension of regularity and the related notion of analogy. Redundancy represents the constraints within which communication occurs. Shannon's purpose in Information theory is to consider the conditions within which messages may be successfully transmitted. The information 'gained' on successful transmission is effectively the change in expectation (represented by shifting probabilities) by a receiver such that the sender's messages might be predicted.
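As a minimal numerical sketch, the information gained can be read as the reduction in the receiver's uncertainty as its probabilities shift. The distributions here are invented:

```python
import math

def entropy(dist):
    """Average surprisingness (bits) of a receiver's expectation."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# the receiver's expectation over four possible messages, before and after a signal
prior     = {"m1": 0.25, "m2": 0.25, "m3": 0.25, "m4": 0.25}
posterior = {"m1": 0.70, "m2": 0.20, "m3": 0.05, "m4": 0.05}

print(f"information gained: {entropy(prior) - entropy(posterior):.2f} bits")
```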

However, communication is an ongoing process. It is a continual and evolving game. We discover the constraints of language through listening and talking to many people, and through identifying analogies between the many forms of expression we discover, whilst at the same time, learning to communicate our own constraints, and seeing the ways in which we can constrain others. Eventually, we might grow up to communicate theories about information and how we communicate, seeking sometimes to constrain the theories of others, or (more productively) to reveal the constraints on our own theorising so that we invite others to point out constraints that we can't see.

Isn't that a game too?

Tuesday, 3 January 2017

Rethinking Computers and Education

There are few who would disagree that it is a bad time in education (or that it is a bad time in the world right now). Education is afflicted with deep confusion and a lack of conviction coupled with aggressive and pathological managerialism (with its insincere conviction). Technology has played a key role in this - in the bibliometrics which drive the "status game", or the research assessment exercises, league tables and other manifestations of status pathology, through to assessment regimes (MCQs, automatic essay marking), "plagiarism checking", endless student survey questionnaires, VLEs and MOOCs, and 'analytics'. The pioneers of educational technology hoped for better education; what we have is far worse than anyone imagined. The situation was, however, predictable: Nigel Howard puts it well (in Paradoxes of Rationality, p.xxii):
"There are two main causes of all the evil that exists in the world. One is that humans are very wicked; the other is that they are very stupid. Technocrats tend to underestimate the first factor and revolutionaries the second."
The educational technologists were both proto-technocrats (although they would have hated the description) and revolutionaries (which they would have liked). They were good people (on the whole), but some of their tools were adopted by people who were greedy and status-hungry, seeing opportunities to manipulate the political discourse and control awkward academics for their own nefarious ends. Education became a money-making opportunity. The senior figures in education and politics were far more stupid and easily led than anyone had expected. And this is where we are.

The difference between pathological and effective use of tools is the extent to which one understands the problem one is trying to solve. The problem of education is not much less complex than the problem of society. For most people, it is too difficult, and so they seek solutions to easier problems which they can define, and which are usually determined by existing technologies. This is why we have e-portfolios and VLEs, and not much has changed since: these technologies were answers not to the question "given that we have technology, what might education be like?" (which is the question we should have asked), but more prosaically, "how can education exploit the internet?".

The problem of education is a problem of highly complex inter-relations of constraint. In education, we are each other's constraints, in a world of material, political, economic, social, psychological and technological constraints. Computers in education have not merely added to the constraints of education but, by their mutable nature, have transformed the pre-existing constraints within which the education system emerged in the first place. The result is deep confusion. In this way, computer technology was a hand-grenade exploding traditional education.

In many walks of life, however, computers have been very useful. The use of simulation and optimisation in the design of buildings, public spaces, advanced materials, cars and planes, transport networks, space rockets, drugs and high-tech machinery has transformed much of our public and private environment. Why can't computers be used intelligently in education?

I have two answers to this. Firstly, the use of computers in all the fields mentioned above occurs within clearly codified constraints. The design of a London Underground station must account for the flow of a certain number of people, a building must optimise its use of space, aircraft must reduce their air resistance, and so on. In many cases, techniques like linear programming or genetic algorithms can all be gainfully employed because the 'fitness function' and the constraints which must be negotiated, or the desired criteria which must be met, can be clearly specified.
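For example, a genetic algorithm is usable precisely because both the fitness function and the constraints can be written down. The "station layout" problem below is entirely invented, but it shows the kind of codification the technique depends on:

```python
import random

random.seed(0)

def corridor_width(layout):
    """Hypothetical stand-in for a flow requirement: longest run of open cells."""
    best = run = 0
    for cell in layout:
        run = run + 1 if cell else 0
        best = max(best, run)
    return best

def fitness(layout):
    """Codified objective: maximise open space, subject to a hard flow constraint."""
    if corridor_width(layout) < 3:    # constraint violated -> unfit
        return 0
    return sum(layout)

population = [[random.randint(0, 1) for _ in range(12)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [[random.choice(genes) for genes in zip(random.choice(parents),
                                                         random.choice(parents))]
                  for _ in range(30)]
    for genome in population:        # occasional mutation keeps the search flexible
        if random.random() < 0.3:
            genome[random.randrange(len(genome))] ^= 1

print(max(fitness(g) for g in population))
```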

What is the fitness function of education? For the advocates of AI computer-based instruction, this is a worryingly unproblematic question: it's about getting the "right answer". But any teacher knows that it isn't just about getting the right answer. The right answer might be wrong, to start with: if education were simply about right answers, science would stop. More important are the nature of reasoning, and the nature of the conversation which gives rise to it.

Some advocates of Learning Design had a different answer to the "fitness function": fitness was about stimulating conversation and action. The richer the conversations, the better the fit. It's a better answer than the AI one, but the criteria for design tend to be based on woolly theories of conversational learning whose inadequacies are never explored. Too many students simply drop out of the conversation, and nobody seems particularly interested in what happened to them, preferring to make grandiose claims for the students who keep talking.

Interestingly, in both these approaches, little use is made of the processing power of computers. The question is whether computers might be used to model the complex dynamics of interacting constraint where we become each other's constraints. The first challenge is to specify the problem in a new way: what, then, do we need in education?

On the one hand, education is about managing complexity. It concerns the complexity of provisioning resources, activities and tools so as to support effective learning conversations between learners and teachers. This is fundamentally the concern of cybernetics. But there is a deeper level of thinking which concerns the fact that any "effective relationship", whether between a teacher and a learner, or between learners, involves the dynamics of mutual constraint. Teachers maintain learning conversations by manipulating constraints: sometimes by allocating resources ("read this!"), or by organising activities ("do this!"), or by introducing tools ("do this by using this!"). We currently have little understanding of how the dynamics of constraint in education work. But computers do provide new ways of manipulating constraint, with far greater capacity and organisational potential than any teacher could possibly manage. And not only can computers manipulate constraints, they can also provide degrees of meta-analysis about the manipulation of constraint (which is itself a constraint).

Perhaps the answer to the question, "what do we need in education?" is simple: better theory and better evaluation. 

Sunday, 1 January 2017

Descriptions, Metagames and Teaching

Over the last year, I have found myself creating videos for various purposes. Video is a powerful medium for communication, partly because it presents many simultaneous descriptions of the same thing: I make the graphics, the text, the voice, the animations. There are inflections in my voice too, and ways in which the content is structured when I present it. Each of these descriptions is redundant in the sense that any particular description could be removed, and the sense of what I am communicating would be preserved: it just wouldn't be as effective. There is perhaps a general rule: the richer the array of descriptions which can be brought to bear in communicating, the more powerful the teaching.

Another way of looking at this is to say that if I want to communicate a concept A, and A is constrained by a set of factors (or other concepts) B, C, D and E, then the richest set of descriptions will apply not to A, but to E (the deepest constraint), because E will be a deep constraint not just of A, but of W, X, Y and Z... or anything else. If A is a very specific thing, constrained by a particular perspective, then A has to be imparted negatively through the multiple presentation of descriptions of the things which constrain it. It is the way in which a child might be convinced of A, where there are multiple (and nested) Why? questions: (((((A)Why?)Why?)Why?)Why?)Why?
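A toy rendering of this claim: write down the "is constrained by" relations and count, for each constraint, how many concepts ultimately run up against it. The deepest constraint, E, supports descriptions of the most things. The relations are invented to match the example:

```python
constrained_by = {
    "A": ["B", "C", "D", "E"],
    "B": ["E"], "C": ["E"], "D": ["E"],
    "W": ["E"], "X": ["E"], "Y": ["E"], "Z": ["E"],
    "E": [],
}

def constrains(target, graph):
    """Concepts whose descriptions ultimately run up against `target`."""
    dependents = set()
    changed = True
    while changed:
        changed = False
        for concept, constraints in graph.items():
            if concept in dependents or concept == target:
                continue
            if target in constraints or dependents & set(constraints):
                dependents.add(concept)
                changed = True
    return dependents

for c in ["B", "E"]:
    print(c, sorted(constrains(c, constrained_by)))
# B constrains only A; E constrains everything -> the richest site of description
```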

A few years ago, I wrote a paper with Loet Leydesdorff on systems-theory approaches to communication and organisation, with particular reference to the similarities between Stafford Beer's Viable System Model and Luhmann's social systems theory: see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2279467. In it, I drew on Nigel Howard's metagame theory, arguing that both systems theories were effectively 'games' played with either the members of an organisation (Beer) or the community of sociologists (Luhmann). It has been pretty much ignored since, but my recent thinking about constraint has led me back to it.

So I'm asking whether there is a connection between the 'string rewriting' ideas that I touched on in my last post, and Howard's metagames as we presented them in the paper. The basic idea was that metagame trees (that is, decision trees about speculated strategies of opponent moves in games) quickly become enormously complex - so complex that we, not being computers, easily forget sets of permutations and options which are logically possible. Certain speculations about the future are constrained by factors in the environment about which we have little knowledge. The effect of these constraints is to privilege one action over another. In the paper I argued that the action chosen was the one that emerged most dominant at all levels of recursion given the constraints - a kind of 'universal concept' which emerged through constraint. To find this mathematically, it was simply a matter of counting the outcome which was most dominant:


So in the above diagram, given the 'gaps' in reasoning (holes in the table), outcome Pa, Pb emerges as dominant. But what does "emerging dominant" mean? In counting the number of times this particular outcome emerges at all levels of recursion, we are effectively counting the number of descriptions of this outcome. So Pa, Pb can be described as (b)a and ((b)b)a and ((b)a)b and so on. What we have then are ways of rewriting the string for outcome Pa, Pb.
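Since the diagram isn't reproduced here, a toy reconstruction must stand in for it: take a table of strategy descriptions at successive metagame levels (with some permutations forgotten - the holes), reduce each string to its base outcome, and count. The table and the string-reading are invented for illustration:

```python
from collections import Counter

# speculated strategy descriptions at successive metagame levels; each string
# rewrites some base outcome. Holes in the table are simply absent entries.
table = {
    1: ["(b)a", "(a)b"],
    2: ["((b)b)a", "((a)b)a", "((b)a)b"],
    3: ["(((b)a)b)a", "(((a)b)a)b"],      # some permutations forgotten
}

def base_outcome(description):
    """A toy reading: pair the innermost move with the outermost move."""
    inner = description.split(")")[0].strip("(")
    outer = description[-1]
    return (inner, outer)

tally = Counter(base_outcome(d) for level in table.values() for d in level)
print(tally.most_common(1))   # ('b', 'a') - i.e. Pa, Pb - has the most rewritings
```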

Of course, these are the constraints seen from player A's perspective, imagining the constraints of player B. There is a point at which player A and player B have similar constraints: they will be able to exchange different but equivalent descriptions of a concept, and at this point they will have knowledge of the game the other is playing. At this point, the metagame of this game becomes the interesting thing: someone will break the rules. And then we learn something new.