Thursday 31 March 2016

Music Black-Boxes

Everyone knows what a music box is - when we turn the handle, a known tune starts up at some arbitrary point in its cycle. We understand the mechanism inside. The charm is perhaps that despite understanding the mechanism, the music itself can make us feel different at different moments. The box might be predictable and mechanical, but we are not.

This has got me thinking about music itself and the somewhat mechanical way we are taught to think about it. We are told of the component parts of music: there is rhythm, harmony, melody, tonality, dynamics, etc. Music is such a difficult subject because all of these things are incredibly abstract: where is the difference between harmony and counterpoint, for example? Ian Kemp, my professor of music at Manchester, would often say, "well, perhaps it's just a nice noise".

We only think we understand the components of the mechanism of the music box, just as we only think we understand the components of music itself. In both cases, we lose sight of ourselves as the context within which the understanding takes place. Imagine the music box is broken down into independent boxes: one for the crank, one for the chimes, one for the sprockets, one for the barrel, etc. To make the music, these components have to 'communicate' in the right way. The glue that makes things work is us: and we are not predictable and mechanical in the way we imagine the components to be. Indeed, the components and their interconnections are a product of our unpredictability.

To see how we invent components and their relations, I've found it useful to consider not the components as "things" (like melody, harmony, etc), but simply as 'black boxes': entities whose behaviour is obscure, and whose mechanisms cannot be inspected, but which define themselves through their interactions with us and with other black box entities. A music black box is an entity with behaviours which we can only speculate on through observation. As we speculate we may in fact create new kinds of black box: rhythm, harmony, melody, and so on. Doing so creates more things to distinguish any particular black box against.

What's interesting me is that all intellectual inquiry about things such as music creates multiple black boxes with various inter-connections. A black box establishes itself in its contradistinction to other black boxes. It is not in itself real; it is only real in its relations. We can analyse or research this by making explicit judgements about the particular kinds of transactions which one box engages in with another. So there might be rhythm transactions, and harmony transactions, and melody transactions and so on.

By determining a body of transactions and associating it with a particular black box we can speculate on the dynamics of the interaction between an interconnected chain of black boxes.  We are part of the chain, and the 'nice noise' emerges from the ensemble.

Where does this take us?

The important thing here is the relations between black boxes. Each box exercises constraint on the others, and each produces the conditions for growth in the others. There is mutuality of constraint in their relations and there is a kind of mutual catalysis. This may be like Bateson's distinction between 'symmetrical schismogenesis' and 'complementary schismogenesis'. There are empirical ways we can analyse this through information theory. Each component transforms and is constrained by a lifeworld of expectations and events which affects all components. The environment contains the realm of the 'possible' - and this realm changes as things progress. It is the equivalent of 'maximum entropy' in information theory. So the rhythm of 'Bolero' constrains other elements like harmony, melody and orchestration in a way which maintains maximum entropy; the Tristan chord shapes melody and (particularly) tonality in a way which increases it (this is why it was revolutionary).
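By way of a sketch of what such an information-theoretic analysis might look like, here is a toy example in Python (the pitch sequences are made up, not an analysis of Ravel or Wagner): measure the Shannon entropy of one musical dimension against the maximum entropy its 'realm of the possible' would allow.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of a sequence of discrete events."""
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_ratio(events, alphabet_size):
    """Entropy relative to the maximum the 'realm of the possible' allows:
    1.0 means every option is used equally; 0.0 means total constraint."""
    max_entropy = math.log2(alphabet_size)
    return shannon_entropy(events) / max_entropy if max_entropy else 0.0

# Hypothetical pitch sequences: a tightly constrained ostinato vs. a freer line.
ostinato = ["C", "C", "G", "C", "C", "G", "C", "C"]
freer = ["C", "D", "E", "F#", "G#", "A", "B", "C#"]

for name, seq in [("ostinato", ostinato), ("freer", freer)]:
    print(name, round(entropy_ratio(seq, alphabet_size=12), 3))
```

The interesting measurements, of course, are not of one dimension in isolation but of how constraint in one dimension (rhythm, say) changes the entropy of the others.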


Every distinction we trace through a pattern of growth and development takes the form of a black box defined in contradistinction to others on account of its 'transactions' with its neighbours and its environment. Studying the transactions between things - the mutuality of constraint, the autocatalysis - can give some indication of the underlying conditions within which things grow, including us. Understanding how we grow can help us understand the conditions for creating new distinctions and new black boxes...

Monday 28 March 2016

Stafford Beer's comments on a societal meta-language

As I've been contemplating the impact of Block Chain on education, and thinking about how new kinds of transactional systems might transform the way we think about our institutions, my attention has been drawn back to Stafford Beer's book "Platform for Change". Beer attempted to articulate a cybernetic social transformation where he speculated on the move from old institutions to new, cybernetic, institutions. At the heart of a new world was the development of a "meta-language" which would describe the new society in terms of ecological dynamics. That brings new categories of thinking about society, work, institutions, etc. and it brings new kinds of metrics. Beer's motivation was driven by the existential fears of the 1970s. Much of the catastrophe which he predicts is playing itself out around us now - he would only be surprised that it has taken this long (which is an interesting point to consider). Here he describes the meta-language (Platform for Change p.39):

Stereotypes are often useful concepts to deploy
Indeed it seems likely that we cannot manage without them.
What is wrong with the ones that we have
is that they relate to a vanished world.

In a changed world, the old stereotypes
give rise to undecidable propositions.

For example, if the means of livelihood
must be emoluments derived from work put in
to enterprises which themselves must make a profit
then
thinkers
teachers
artists
social workers
government
cannot be paid at all unless
everyone agrees to pretend
that the work they do is profitable
not in the general sense of the word
but as having a measurable monetary value.

But this remains a pretence.

The question: what are teachers, policemen, nurses worth?
is strictly undecidable within the language
of the stereotype.

That is why none of them is properly paid.

And as the world moves steadily away from the Homo Faber culture,
which it is doing because the production of goods
becomes more and more automated,
not only individual decisions but whole policies
become undecidable within the language.

The answer is not to invent bogus measures of benefit
but to devise a metalanguage
in which questions of value can be set and answered
in quantified but not monetary terms.

Note that this remark is meaningless
rather than merely puzzling
inside the existing language:
that language decreates the measures we need.

Any new argument must be competent to describe
new kinds of organization too.
The Argument for Change declared that such organisations
would look like 'anything organic'.
The passage as spoken entails the existence
of commonalities between all organic systems.

That there are such commonalities
is a fundamental tenet of cybernetic science
(as will be argued later).
The statement of commonality is necessarily metalinguistic.
Thus I see it as something more than a generalisation
of the machinery of particular systems -
for such a generalisation would be a perception
made by a neutral observer.

The metalanguage will be something more than this:
it will be positively generated at the focus of meaning
that lies beyond an entire range of undecidable propositions.

So the second comment is this.

We should try to envisage the developing general thesis
in systemic terms
so as to show not only how things are connected
but also how the inherent undecidability
of the language used in each system
is expected to generate a metalanguage -
and of what form.

----------------------------------------------------

The idea that the metalanguage will be "positively generated at the focus of meaning that lies beyond an entire range of undecidable propositions" is intriguing. I want to say that the metalanguage will be positively produced by the constraints which limit decision-making in the old system. I want to say that those constraints autocatalyse new thinking. The problem is that they do this whilst at the same time destroying the conditions within which any new thinking can flourish.


Saturday 19 March 2016

@Telegram, Raspberry Pi, Block Chains and Educational Transactions

@Audreywatters is collecting some interesting stuff on block chain at the moment. I think one of the fundamental objections to Block Chain is the claim that education is not transactional. Perhaps this assertion is made without thinking too much about what a 'transaction' is - assuming it is as simple as 'buying and selling' (as in BitCoin), and that education is clearly more than that. But in economics, although buying and selling are the classic transactions, at a deeper level they are quite mysterious intersubjective things. Von Mises and Hayek's focus on 'Catallactics' - the science of exchange - becomes very important here, I think. If Block Chains are about transactions, that means they are about catallaxy. Now what does that mean?? Perhaps we should distinguish between a kind of "potlatch catallaxy" and the "market catallaxy" that Hayek focused on. I think potlatch is at the heart of educational transactions: when education works, it is about giving.

The value of the block chain discussion is that we start to see transactions everywhere in education. I recently created a Raspberry Pi/Arduino-driven registration system for some medical students. They are employed by the health authority, who want twice-daily reports on their attendance. My hardware uses an NFC reader to scan their student cards, stores each scan in a database (on the Pi), and provides the data on a simple web interface which the course administrator can query.
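A minimal sketch of how the Pi side of this might hang together (Python; the database schema, route name and port are my own choices, and the function that drives the NFC reader itself is left as a placeholder for whichever library the hardware needs):

```python
import sqlite3
from datetime import datetime
from flask import Flask, jsonify

DB = "register.db"
app = Flask(__name__)

def init_db():
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS scans (card_id TEXT, scanned_at TEXT)")

def record_scan(card_id):
    # Called whenever the NFC reader (not shown) reports a card being waved.
    with sqlite3.connect(DB) as conn:
        conn.execute("INSERT INTO scans VALUES (?, ?)",
                     (card_id, datetime.now().isoformat()))

@app.route("/attendance/<date>")
def attendance(date):
    # Simple query interface for the administrator, e.g. /attendance/2016-03-19
    with sqlite3.connect(DB) as conn:
        rows = conn.execute(
            "SELECT card_id, scanned_at FROM scans WHERE scanned_at LIKE ?",
            (date + "%",)).fetchall()
    return jsonify([{"card_id": c, "scanned_at": t} for c, t in rows])

if __name__ == "__main__":
    init_db()
    app.run(host="0.0.0.0", port=8080)
```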

The student waving their card in front of the device is a kind of transaction: a statement "I'm here. I comply with the regulations." This transaction is of course the prelude to what the student ought to be doing next - listening attentively and learning how to be a medic (well, that's what we hope!). The students wanted some visual feedback from my registration scanner (at the moment it only beeps). I was thinking about interesting ways of doing this when I stumbled across the Telegram app (see https://telegram.org/).

Telegram is different. It's a communication app a bit like WhatsApp, but it makes itself highly configurable - both on its front-end (so you can create custom interfaces) and (most interestingly) at the back-end. Most powerfully, in the back-end you can program Bots. Bots can communicate with Telegram clients, and can be integrated with all kinds of devices. So, I thought, "let's get the register scanner to send a personalised message to Telegram when the students scan in". I'm working on this now... but having played with the Telegram API and wrestled with SSL certificates (it works through SSL), I'm now very excited about how the transaction of registering could be integrated with all kinds of other transactions - for example, the students' engagement with their e-portfolios, or real-time class tools like Socrative, or other physical devices. Or with each other. With Bots we can play games. It's possible to create group activities where individual students have to seek out other students to work on a problem together, or groups have to break apart, intermingle, go to different locations, collect data, interact with artefacts, and so on. Suddenly, the ability to track transactions combined with the ability to capture different kinds of engagement in different kinds of environmental setting makes possible coordinations of learning across a range of environments which simply haven't been possible until now. It's what the Alternative Reality Gaming people were aiming for - what things like Blast Theory's wonderful Ivy4ever (http://www.blasttheory.co.uk/projects/ivy4evr/) did using text messaging.
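The personalised-message idea is simple enough to sketch (assuming a bot token from BotFather and that each student has already started a chat with the bot; the card-to-student mapping here is invented for illustration):

```python
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"  # issued by Telegram's BotFather
API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"

# Hypothetical mapping from NFC card IDs to student names and Telegram chats.
STUDENTS = {"04A1B2C3": {"name": "Alice", "chat_id": 123456789}}

def notify_scan(card_id):
    """Send a personalised confirmation when a student's card is scanned."""
    student = STUDENTS.get(card_id)
    if student is None:
        return
    requests.post(API_URL, data={
        "chat_id": student["chat_id"],
        "text": f"Hi {student['name']} - you're registered for this session."
    })
```

From there, the same bot could just as easily post coordinating instructions for the kinds of group games described above.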

I think sophisticated transaction handling can create new forms of conviviality which do not depend on heavy-handed and unsophisticated management of the educational cohort. Given the mess that some managements are making of education at the moment, and the fact that the transactions they engage in have little to do with learning (what's this about, for example? http://www.theboltonnews.co.uk/news/14255310.University_hits_back_at_claims_it_is_one_of__UK___s_least_transparent_/), I wonder at the possibility of replacing the whole lot of them with computers. We certainly use computers in a very bad way in education: they should be used to automate management, not amplify existing practices.

And of course, there's really cool stuff we can do. My engagement with Telegram came from this post by Nick Johnstone (http://widdersh.in/controlling-sonic-pi-from-vim-or-anywhere-else/). This is why we should not give up on technology! The requirement is (as ever) for a combination of sensitivity, humility, creativity and reason - with some carefully considered political steering.

Tuesday 15 March 2016

Educational Transactions, Artistic Transactions and Improvisation

One of the things that fascinates me about Live Coding (see http://dailyimprovisation.blogspot.co.uk/2015/06/improvisation-and-performing-coding.html) is that every improvisational act is made explicit - there is a record of a particular aesthetic decision taken at a particular point in time, in a particular context. As I hack around with Sonic Pi and the equally wonderful and more sophisticated Overtone, this transactional aspect of improvisatory experience in Live Coding is becoming more important to me.

This is at a time when I am exploring (in a potential bid) the possibility that education itself might become more transactional in the same kind of way. I've had many arguments with friends about whether education is or is not transactional. I say that as far as any kind of final assessment is concerned, of course there is a transaction there: "I've done this work; you give me a mark". The problem is that this transaction is large-scale, processed in batch and only done once at the end of a learning process which has involved many much smaller interactions which are also transactional and which are never captured, nor can ever be inspected.

Education is organised in the way it is because it cannot break its transactions down. It has enlisted technologies to reinforce its traditional batch-mode of operation, and to reinforce the status of institutions which are seen as the guardians of 'quality' - a substitute for the inability of the system to expose its decision-making processes. So the inability to handle transactions has implications not just for the management of learners on an everyday basis, but for the global organisation of education, access to opportunity, social status, elitism and so on.

Coming back to music, and seeing improvisatory music as a set of artistic transactions, in a group there are processes of evaluation and ranking of individual decisions. One of the things which I am about to explore is group improvisation using Live Coding. Here, individuals not only listen to what they do, but to what others do. They copy. In doing so, a natural structure of ideas and the creative agencies which produce those ideas emerges.

Teaching and learning is like this too. Lecturing, for example, is usually quite an improvisatory performance: good preparation for a lecture means leaving enough space to react to the situation among the learners. Teachers and learners make declarations about concepts, or rather reinforce other peoples' declarations. Teachers make declarations about the declarations made by learners (right or wrong...). What if this were captured in the way that Live Coding captures a musical performance? I'm not suggesting that teachers and learners actually engage in writing code as part of their interaction, or even that a big brother system captures every utterance. But we lose consciousness of these transactions because we've got so used to the assessment (the big transaction) being the only thing that matters that we've forgotten to listen to the small transactions. At least acknowledging the transactional nature of what we do would be a good start.

First of all, it would discourage teachers talking too much. It would encourage more "What do you think?" moments. And it would highlight our current inability to capture the "What do you think?" moments, stand back from them, and reassess our approach. Maybe some real-time polling systems can be used for this (e.g. Socrative) - personally, I'm much more interested in Telegram.

There's something else about recording transactions which I think is important. Education operates in batch mode partly because of the conviviality of the classroom. The class we-relation is important - even if the lecture is terrible. What is that we-relation? Alfred Schutz, who invented the term, thought that it had something to do with the shared passing of time. Music creates a we-relation even when the performers are not co-present with the listener, or the composer is dead. Might a transaction record similarly have this temporal quality? There's a need to experiment...

Thursday 10 March 2016

Sociology and Lakatos

There was an interesting twitter exchange today between @profstevefuller and @mark_carrigan about sociology: https://twitter.com/ProfSteveFuller/status/707860253682110464. Fuller stated that
"So-called 'public sociology' often looks like a kind of ventriloquism in which the sociologist is the dummy."
This questions not only the scientific content of sociology, but also its mission. I found myself, by coincidence, reading Lakatos's "The methodology of scientific research programmes" today, and found him making this statement (I leave out his discussion of Newton and Halley for brevity - but it is worth reading):
"all the research programmes I admire have one characteristic in common. They all predict novel facts, facts which had been either undreamt of, or have indeed been contradicted by previous or rival programmes. [...] Einstein's [research] programme [...] made the stunning  prediction that if one measures the distance between two stars in the night and if one measures the distance between them during the day (when they are visible during an eclipse of the sun), the two measurements will be different. Nobody had thought to make such an observation before Einstein's programme. Thus in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts. In degenerating programmes, however, theories are fabricated only in order to accommodate known facts. Has, for instance, Marxism ever predicted a stunning novel fact successfully? Never! It has some famous unsuccessful predictions[...]"
I think the attack on Marxism is a broader assault on sociology doing what Fuller accuses it of. The ventriloquism is the process of making the theory (the sociologist's utterances) fit the facts (the agent's utterances).

The point about the prediction of novel facts is not, I think, about any kind of superiority of physics over sociology. It is about the demarcated roles of theory vs. experiment. From a cybernetic perspective it was quite simply expressed by Ashby: theories generate logical possibilities; experiments tell us how those logical possibilities are constrained in nature. The tension for any scientist is between explaining things, and identifying constraints or errors that are revealed through exploring the logical possibilities. It is, however, from the identification of error between theory and nature that science progresses. But errors exist at multiple and entangled levels: matter, bodies, discourse, ethics and so on.

The discovery of constraints ought to lead to theoretical refinement, which in turn ought to lead to new generative possibilities. It is this last bit which Marxism hasn't done properly: it refines its theories to fit the facts, but then doesn't re-run the implications of the adjusted theory. It simply tries to patch itself up. Lakatos puts it:
"Marxists explained all their failures: they explained the rising living standards of the working class by devising a theory of imperialism; they even explained why the first socialist revolution occurred in industrially backward Russia. They 'explained' Berlin 1953, Budapest, 1956, Prague 1968. They 'explained' the Russian-Chinese conflict. But their auxiliary hypotheses were all cooked up after the event to protect Marxian theory from the facts." 
What happens with this patched-up social theory is that it becomes so open to interpretation that the advantage of having a codified theory as an object to coordinate scientific engagement is completely lost. There is no agreement about theoretical categories, and no agreement about empirical procedures. Particularly without the latter, the discourse floats away - it has nothing to tether itself to: there is no shared object in the lifeworld around which scientists can have a constructive discussion.

The importance of any theory is its codified logical consistency, which is the source of possibilities that may or may not actually exist. These possibilities are the source of new distinctions to be made about measurement (like Einstein's distance between stars); alternatively, the constraints discovered when a social theory fails to fit lead to refinement, which generates new possibilities - among which might be some new insight into a "novel fact".

My favourite piece of predictive social science would probably be sniffed at by many sociologists: Winograd and Flores's "Understanding Computers and Cognition" of 1986. Not only is there a refutation of the prevailing view of technology at the time, but a clear and accurate prediction of where things would be 6 years later (and 6 years is a long time in technology!)

Wednesday 9 March 2016

Musical Ecologies

I've got my head in some pretty technical stuff at the moment which I find exciting and energising, but which, as with all technology, is likely also to be ultimately disappointing (although the energy from it will move me on). I'm used to this pattern: I go through phases of functionalism where I am fascinated and motivated by what might be changed through technology. Whilst I'm in it, though, I also feel the need to temper my enthusiasm in some way.

Music is my corrective. I've been thinking back to this: http://dailyimprovisation.blogspot.co.uk/2015/09/entropy-and-aesthetics-some-musical.html. It's about music and ecology.

When we design technical systems - even technical systems which are meant to be emancipatory for people - really we are designing something with ourselves as designer in the driving seat. The status function for the technology is made by the technical designer/hacker. I've been making a lot of status function declarations about blockchain recently. I think it's important because it's certainly a new kind of thing, and its implications need to be investigated. But in making those status function declarations, I put myself in a privileged position. Do I wish for some social acclaim which says "Ah yes! Johnson was right all along!"? Wanting to be right is a real problem (I think I am right about that)

A piece of music is a kind of declaration. But it's a declaration that comes from the heart as well as from the pen. The heart-bit is important. Feelings are communicated in music because, I think, we share a sense of time with the performers and other listeners. This despite the fact that many composers are long dead. It still communicates in what Alfred Schutz would call a pure 'we-relation': i.e. a form of communication which is usually only available in intimate face-to-face exchange. This is unlike most declarations, including declarations about technology. They are made in the 'world of contemporaries' - a world of remote relationships with people who live at the same time as us, but with whom we do not share a 'vivid simultaneity'. Indeed, we have very little understanding of what our technological propositions might do to them.

When we try to persuade someone face-to-face about the value of a technology, we find a way of communicating face-to-face which bypasses the deeper emotional aspects of the pure we-relation. Technologies turn our face-to-face engagements into remote engagements which occur face-to-face. This is the great trick of capitalism: the bypassing of time and intimacy, and its replacement with terminology codified in the world of contemporaries. Too much academic engagement in Universities has become like this too. Psychotherapeutic, curiosity-driven learning rarely happens: instead we drill learning outcomes.

Coming back to music, we can codify its parts in an abstract way (again, the world of contemporaries). But in shared experience in time, we can see that none of these parts are separable, any more than the organs of the body are separable. Moreover, there is no executive function among these parts: each part relates to the others in a fundamental, intertwined way. Just as the brain does not 'control' the body, but ensures that effective relationships are managed between all its components, including itself. 'Management' is inherent in the organisation of music.

That is what I have to ensure in my thinking about technology. The management must be inherent in the organisation of the technological components. In educational technology this is a real challenge. Only if this is achieved will a technical system start to feel like music rather than an authoritarian state.

Measuring the effectiveness of the organisation of music, or technology, may be the key thing. These, by necessity, are measures of ecological viability.

Tuesday 8 March 2016

Searle's Status Functions and the Educational Block Chain

@dkernohan made an interesting blog post about blockchain yesterday (http://followersoftheapocalyp.se/the-chain/) which also contains some great links to current work and thinking. When new technologies come along, particularly in a time of great confusion in the world - not just about education, but about everything - it's exciting for some. I'm one of the people who gets excited about technology. This is partly because technical change is so often the herald of social change - and the prospect of change is a sign of hope that the world won't continue to be as terrible as it currently is. However, technology frequently disappoints. People either don't use it (which may in fact be the best outcome!), or if they do, managers will exploit the fact that the tools work to exert greater control on everyone else. In recent years, technologies which promised emancipation, disruption, subversion and autonomy appear to have accelerated and exacerbated inequality, surveillance, control and oppression. Yet I still find the prospect of technological transformation exciting.

I think this is partly due to the nature of the functionalism which surrounds any technological development. Functionalism is inherently positive - a can-do, "we can fix it" state of mind. Part of the problem with functionalism is that it loses sight of how power operates, and is often blind to the power inherent in its own propositions.

Recently, I've begun to see power through the perspective of John Searle's idea of the 'status function'. A powerful person or organisation is one capable of making an assertion about the existence of other social entities, and of having that assertion upheld by the general 'collective intentionality' (Searle's term for the common thoughts of a community). They have what Searle calls 'deontic power' to make a 'status function declaration' - in other words, the assertion that a banknote is a banknote, a VLE is a VLE, a degree is a degree and so on. Technologists make status function declarations about technology. If nobody believes them then the technology doesn't take off. If powerful institutions believe them, and support the status declaration about a technology, then the technology is more likely to be successful.

Entities in society which have the greatest deontic power tend to be institutions like governments, banks, universities, monarchs, priests and the media. It is interesting to reflect that in recent years, each of these institutions (apart from the universities - it's only a matter of time) has had its deontic power questioned by the exposure of scandals.

Interesting things happen when status function declarations get mixed up in unusual ways. That's basically what's happened with BitCoin. BitCoin plays with "collective intentionality" in a new kind of way. Technologists make a status function declaration about the trustworthiness of the "distributed ledger" which underpins BitCoin. There is a vested interest among the investors in BitCoin to uphold this status function declaration: not just because they like the idea of a currency without a central bank, but because in supporting the status function declaration about the ledger, they support the value of the BitCoin currency which they possess. The dynamic of trust (the collective intentionality) and the dynamic of reward are finely balanced - and the reward for the individual reinforces their desire to avoid the centralisation of institutions.

This point is very important when we consider the potential of BitCoin in areas like education.

It's not enough simply to declare the existence of what amounts to a kind of distributed database for educational transactions. Such a declaration alone will have no power, because no institution will support it, and there is no loop of self-interest by its users which can subvert institutional authority. It will be too difficult to get people to submit their data to it: what do they get out of it?

Somehow the loop has to be closed. What matters in education is the declaration of status of learners: the gradual acquisition of their own 'deontic power' (I am qualified to say such-and-such because I have a degree from...) So collecting transactional data is not enough. There needs also to be a mechanism for validating, comparing and ranking between individuals.

So what if we had a process whereby, on the one hand, educational transactions are recorded, but alongside it, the emergent transaction ledgers are validated and ranked? And what if the process of validating and ranking educational transaction ledgers reinforced one's own educational transaction ledger (so a judgement about someone else is itself a transaction)? Would that close the loop?

My incentive for recording transactions would be to enable me to make judgements about the transactions of others, which would enhance my own ledger of transactions. Is this a closed, self-organising non-institutional status-enhancing machine? Possibly. But one thing seems clear to me - the chain needs to be a loop. 
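To make the idea of the loop concrete, here is a toy sketch (all the names and fields are invented, and this is nothing like a real block chain implementation): a hash-chained personal ledger in which judging someone else's ledger is itself just another transaction appended to one's own.

```python
import hashlib
import json
from datetime import datetime

class Ledger:
    """A toy hash-chained record of educational transactions."""
    def __init__(self, owner):
        self.owner = owner
        self.entries = []

    def add(self, kind, payload):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"owner": self.owner, "kind": kind, "payload": payload,
                "time": datetime.now().isoformat(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def judge(self, other, verdict):
        # Judging another ledger is recorded as a transaction on my own ledger.
        head = other.entries[-1]["hash"] if other.entries else None
        self.add("judgement", {"about": other.owner, "verdict": verdict,
                               "their_head": head})

alice, bob = Ledger("alice"), Ledger("bob")
bob.add("attendance", {"session": "anatomy-1"})
alice.judge(bob, "attended and contributed")   # the judgement closes the loop
```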

Saturday 5 March 2016

The Cybernetics of Learning in Stafford Beer and Gordon Pask

One of the definitions of Cybernetics is that it is a "Way of thinking", or even a "Way of thinking about ways of thinking" (this definition is Larry Richards's). As a 'way', the important thing about cybernetic inquiry, it seems to me, is that it is a method. It's rather like Husserl's phenomenology: rather than aiming to specify the structure of consciousness (something later attempted by the likes of Heidegger), Husserl was anxious to specify the method for approaching it. Increasingly I think cybernetics is a method for approaching science. It is meta-scientific.

So then I began thinking about how the classic cybernetic developments of the past can be seen as methods. Of most interest are the two which I have had most to do with in education: Stafford Beer's Viable System Model and Gordon Pask's conversation theory. So here goes:

Stafford Beer’s Management Cybernetics


Stafford Beer’s Management Cybernetics is an approach to learning and communication for people within organisations. At the heart of Beer’s understanding of the cybernetics of management is a concept of ‘control’:
“Control is an attribute of a system. This word is not used in the way in which either an office manager or a gambler might use it; it is used as a name for connectiveness” (Cybernetics and Management).
Control, in other words, is an index of relationships. More specifically, to examine a “control system” – a set of components between which there are relationships – is to examine patterns of constraint where each component is subject to constraints applied from its environment (which is composed of other components).


Beer's question concerns the specification of types of relationship expressed in models and their presence in nature. He developed a generic model which specified a set of relationships between regulating mechanisms (that is, mechanisms which exercise constraint or control on each other). Drawing on Ashby's multi-level regulating mechanisms, Beer considered that relationships at macro and micro levels exhibit particular kinds of pattern: that is, there are particular distinctions which can be assigned to different kinds of relationship, and these distinctions can be applied at different levels of a system. Drawing on the human body, he considered the relationships between its different components - brain, heart, lungs, the endocrine system, the circulatory system, and so on. There were relationships between the components which demanded different patterns of coordination, and different kinds of constraint which needed to be exercised depending on what a component was doing and where it was doing it. For example, there were relationships of self-organisation, as the adaptation of one entity would coordinate with the adaptation of another. There were also relationships between a higher observer and these self-organising entities. Then there were coordinating forces which controlled the behaviour of the components within an environment. Finally, there was an executive function whose role was to balance and coordinate the respective needs of the other components.


Beer considered two fundamental kinds of distinction – the distinction between the different kinds of relationship within a system, and the distinction between levels of recursion between a system's components. There were different regulating systems and different channels of communication. Additionally, there were broader distinctions concerning the degree of coordination which occurred from top to bottom, and the degree of self-organisation which occurred horizontally.


The distinctions of the Viable System Model provide a way for people to talk about the organisational situation they find themselves in. The empirical process of Beer’s approach is one of relating the distinctions of the model to distinctions about reality. This is a stimulus for conversation. Beer’s model encourages people to express their understanding or lack of it in terms of the concepts inherent in the model itself. Since there are few concepts in Beer’s model, the result is an attenuation of the concepts expressed by participants, where key questions like “what are the components of the business?”, “at what level of recursion do those concepts exist?” are asked.


Beer's model effectively operates as a way of 'indexing' real experience: distinctions are made in the model, and those distinctions are identified in key experiences and things in the real world. In the use of the model, what is agreed are the distinctions that connect the shared experience of the environment with the model. In the process of identifying this mapping, something important happens: the kind of mapping between the model and experience of reality is necessarily reductive. There will always be some phenomenon which exists outside the index. It may only be a feeling - but the inability to express it within the context of the present understanding of the model drives a process of deeper inquiry. Such moments of discovering what can't be explained within the given model are moments of identifying constraint. The challenge this presents is to rethink the mapping of experiences of reality onto the distinctions of the model.


In thinking through the different ways of applying indexes to reality, different kinds of constraint are revealed. This identification of constraint in the model was fundamental to the processes which drove the conversation. The misidentification of a system component or a level of recursion would gradually emerge in the configuring of the model to the real environment. Beer’s model is richly generative of possibilities. Constraints are identified by exploring the different ways that the model might map on to nature. Yet the locus of constraints is not abstract; it is concrete within the relationships between those who use the model to talk about their experience of the business. Like many other systems management techniques, the VSM is a tool for coordinating understanding, which necessarily involves sharing understanding of the different constraints that different actors in the business operate within. 

Gordon Pask and Education


Gordon Pask was a cybernetician whose areas of innovation were in educational technology, art, music and architecture. Pask's approach to the cybernetics of learning differed from Beer's Viable System Model in the sense that he sought not to identify a specific generative model which could be mapped onto reality, but rather to think about the inter-human conditions within which each of us builds our own models as part of our learning. For this reason, Pask's significance for educational thinking rests on his assumption that each individual, teacher and learner alike, is a kind of "cybernetician", continually building models of the world, asking questions of reality in the light of those models, and adapting as the environment changes.


Whilst for Beer, these adaptive processes may be accounted for in the balancing of variety between the different components of VSM, for Pask, the adaptive process arose through the emergence of distinctions in conversation with others and in the shared engagement with the environment. In order to express this, Pask specified a broad description of the organisational situation individuals find themselves in when they engage in discussing their environment and their knowledge. The resulting interaction dynamics share many properties with the dynamics of conversations around the VSM – including Ashby’s concept of multi-level mechanisms - except for the fact that there are no fixed distinctions.


Pask presents what on first inspection looks like a rather cold "computational" view of the human being. He discusses how the adaptive cognitive apparatus of human psychology (what he calls the P-individual) works as 'software' running on the 'hardware' of the human brain - something he called the M-individual. P-individuals and M-individuals, in conjunction, engage interactively in a process Pask called 'conversation', which he imagined was rather like the 'dance' between the regulators in Ashby's homeostat. His view was that:
“The real generative processes of the emergence of mind and the production of knowledge can be usefully modelled as multilevel conversations between conversants (some called P-individuals, others merely “participants”) interacting through a modelling and simulation facility.”


The important thing here was the idea of a 'modelling and simulation facility'. This was, in effect, the negotiating table around which distinctions could be agreed between the teacher and the learner. Of course, the specification of a modelling and simulation facility was also a spur to the creation of a series of highly innovative teaching machines whose purpose was to facilitate and explore the ways in which agreements about distinctions could be managed.


Like Ashby and Beer, Pask saw the discursive process as a multi-layered regulatory system. Distinctions existed at different levels with distinct relations to one another: the learning process involved understanding the relations between distinctions in the same way that mapping of the VSM involved the specification of levels of recursion, or the understanding of the difference between ‘system 1’ and ‘system 4’.


Within the conversation process, many different things were happening at once. At one level there were utterances about distinctions; at another, practical engagement with the environment, or coordinating instructions between the learner and the teacher. Pask argues:


“Various emergent levels and meta-levels of command control and query (cybernetic) language (L0 L1—Ln L) need to be explicitly recognized, distinguished, and used in strategically and tactically optimal ways.”


Within Pask's P-individual there was a coordination of utterances within a conversation - which in various ways interacted with other kinds of discourse. Pask saw the P-individual through a computational metaphor: the P-individual was a kind of "algorithmic procedure" enacted on the environment, producing an output which was then processed in other ways. With this computational metaphor, he believed emergent dynamics could give rise to new ideas, and alongside this, a new human actor, team or organisation might emerge. Supporting the computational process were physiological functions located in the 'hardware' (brains) performing more fundamental operations like memory. Not surprisingly, this computational account excludes the role emotion plays in conversation: his model considered what he called a "strict conversation model" which ignored people's feelings, but concentrated on utterances, psychomotor skills and perception.
The theory specifies a set of relationships: there is a relationship between physical systems - the environment and the biological constitution of bodies; and there is a relationship between concepts existing in minds which interact through conversation. Each actor harbours a model - the articulation of a set of ideal possibilities. The conversation is driven by the difference between the model in the teacher and the model in the learner. Both the teacher and the learner are building models and then exploring those models for their fit in the environment. As they do so, they encounter phenomena which don't fit their existing model: for example, something may occur in the environment which they don't understand, or the teacher (who has a deeper knowledge of concepts) will say something which exists outside the learner's current model.
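A deliberately toy illustration of that dynamic (this is not Pask's own formalism, just the bare idea that the conversation is driven by the difference between two models, with 'agreement' standing in for the whole teach-back process):

```python
# Toy sketch: a conversation driven by the difference between two concept models.
teacher_model = {"feedback", "oscillation", "negative feedback", "damping"}
learner_model = {"feedback", "oscillation"}

def next_teaching_moves(teacher, learner):
    """Concepts the teacher holds which the learner does not yet articulate
    are candidates for the next utterance; agreement shrinks the difference."""
    return sorted(teacher - learner)

while next_teaching_moves(teacher_model, learner_model):
    concept = next_teaching_moves(teacher_model, learner_model)[0]
    print("teacher introduces:", concept)
    learner_model.add(concept)  # stands in for teach-back and agreement
```

What the sketch leaves out, of course, is everything interesting: the shared environment, the levels of the conversation, and (as discussed below) emotion.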


What occurs in this process is that errors in the respective models are identified: the constraints or disparities between ideas and experience. Something may occur which demands that the learner find a new articulation of some constraint. The distinctions which the learner has to coordinate are concepts, and the reaction to events is to reorganise or re-express concepts: both indexing concepts against reality, and seeing concepts in a different kind of hierarchical relation to one another. This required the articulated concepts to have some kind of structure: some concepts were more universal or deeper than others. Discovering the hierarchy of concepts was a way of reframing both the indexes of concepts and the positions of concepts. Where Beer's concepts produced a simplified conversation of calibrating a rich and powerful generative model with reality, Pask's approach positioned complex systems (people) such that the logical possibilities of the model were generated through the process of conversation itself. This had the advantage of not requiring a specialised language: a concept is a concept in whichever environment it occurs. However, Pask's model assumed much about the constraints which would be experienced by learners, and that the reasons for those constraints would be a lack of understanding of concepts. What Pask didn't consider was that the constraints bearing upon learners were emotional. This is particularly apparent in the domain of educational technology.


Pask's conversation theory is both a theory about learning and adaptation and a meta-description of what it is to do cybernetics. We might, for example, imagine that Ashby had a Paskian conversation with his model of the brain, and that in the light of his experience with reality, he pursued errors in his distinctions and generated new concepts. Equally, Beer's setting upon the Viable System Model was a way of setting upon reality and making distinctions about it, where applying the model led to the identification of deficiencies in particular mappings, and the search for better ones.


In real education, there is no learning interaction in which emotion does not play a major role. This would suggest that Pask's concentration on computation at the expense of emotion in conversation is deeply deficient. However, emotional blockages are precisely the kind of 'error' which might be identified in a Paskian learning conversation. More fundamental still, however, is the problem of whether the pursuit of error is something that learners and teachers actually do in their conversations. Whilst Pask's computational model presents the pursuit of error as rational, it is often the constraints of emotion which prevent individuals from behaving in such rational error-seeking ways. However, the value of Pask's model is that it lends itself to educational intervention, and it is through educational intervention that the strengths and weaknesses of the model can be explored.


Pask himself developed a number of teaching machines in the 1950s. Mostly these concerned the teaching of well-defined cognitive skills such as the programming of 'punch cards' (the main means of data input into the computers of the time). In 1999, Pask's model became the foundation of a broader 'conversation model' in educational technology, promoted by Diana Laurillard. Laurillard's presentation of the conversation model left out much of the detail of the broader theory behind Pask's work. Yet, to its audience - who were then enthralled and threatened by fast-approaching technology - this did not matter: it was clear that computers supported new ways of conducting conversations, and that within the dynamics of those conversations there were processes of 'teach back' between teachers and learners working in a shared environment. Yet the nature of the learning environment was complex.


These complex inner dynamics could, of course, be ignored in the general presentation of the interaction between learners and technology. For example, Boyd argues that the conversation model fits the following situation:


"A is a medical student and B is an engineering student. The modeling facility they have to work with might be Pask’s CASTE (Course Assembly System and Tutorial Environment, Pask,1975); equally possibly now one might prefer STELLA or prepared workspaces based on Maple, MathCad, or Jaworski’s j-Maps. The recording and playback system may conveniently be on the same computers as the modeling facility, and can keep track of everything done and said, very systematically.”


Boyd illustrates the kinds of conversation that might follow.
“In reply to some question by A such as, “HOW do engineers make closed loop control work without ‘hunting’?” B acts on the modelling facility to choose a model and set it running as a simulation. At the same time B explains to A how B is doing this. They both observe what is going on and what the graph of the systems behaviour over time looks like. A asks B, “WHY does it oscillate like that?” B explains to A, “BECAUSE of the negative feedback loop parameters we put in.” Then from the other perspective B asks A, “How do you model locomotor ataxia?” A sets up a model of that in STELLA and explains How A chose the variables used. After running simulations on that model, A and B discuss WHY it works that way, and HOW it is similar to the engineering example, and HOW and WHY they differ. And so on and on until they both agree about what generates the activity, and why, and what everything should be called." 


Within this situation, Boyd argues that it is possible to determine different levels of (Ashbian) regulation occurring within each learner.
“Level 0–Both participants are doing some actions in, say, CASTE (or, say, STELLATM), and observing results (with, say, THOUGHTSTICKER) all the while noting the actions and the results.
Level 1—The participants are naming and stating WHAT action is being done, and what is observed, to each other (and to THOUGHTSTICKER, possibly positioned as a computer mediated communication interface between them).
Level 2—They are asking and explaining WHY to each other, learning why it works.
Level 3—Methodological discussion about why particular explanatory/predictive models were and are chosen, why particular simulation parameters are changed, etc..
Level 4—When necessary the participants are trying to figure out WHY unexpected results actually occurred, by consulting (THOUGHTSTICKER and) each other to debug their own thinking.”


This is clearly something of an ideal situation. In reality, as with anything to do with technology, there is a whole set of unarticulated conversations and levels of communication which can create various kinds of emotional confusion within the participants of a conversation. Whilst the conversation between scientists may be considered a multi-level emergent discovery of concepts at different levels, the conversation between a teacher and a learner is much more complex. Most fundamentally, the 'why' questions which are asked by the teacher are not the why questions asked by the learner: they are why questions asked about the learner. Fundamentally: what constraints are producing the behaviour and utterances that are witnessed?


In terms of characterising these constraints, the most important element is the systemic multi-level constraints which might produce the kind of emotional confusion articulated by Bateson in his 'double-bind' theory. The particular problem of technological engagement is that the technology is itself a constraint on the communication of constraint. Moreover, it is a constraint bearing upon the communication of constraint whose position in the conversation is asserted by a powerful figure (say, a learning technologist, or an institution).


Given the identification of emotional constraint, the task is to think of ways in which the deficiencies in the model might be addressed. Here it may be argued that Pask failed to see how the different levels of conversation get tied up with one another. Bateson, on the other hand, was particularly interested in this aspect. Watzlawick explains this dimension:
"Once it is realized that statements cannot always be taken at face value, least of all in the presence of psychopathology - that people can very well say something and mean something else - and, [...] that there are questions the answers to which may be totally outside their awareness, then the need for different approaches becomes obvious."
Saying and meaning are very different things. The latter has to do with expectations and is a much deeper-level function. Watzlawick quotes Bateson saying:
"as we go up the scale of orders of learning, we come into regions of more and more abstract patterning, which are less and less subject to conscious inspection. The more abstract - the more general and formal the premises upon which we put our patterns together - the more deeply sunk these are in the neurological or psychological levels and the less accessible they are to conscious control."


Is it then enough to include the double-binds that might interfere between different levels of regulation, and then to think about what the teacher should do in response? The issue is that, fundamentally, Pask's thesis - that communication is a coordination of coordinating mechanisms, the utterance of words and their comparison by a teacher - is too shallow. Indeed, it fails to match what Pask himself was doing in his exploration of conversation. Pask must have observed and participated in conversations and wondered what they were about. But what if the model Pask sought was the same process that Pask himself went through in coming up with the theory?


Conclusion


There are many definitions of cybernetics but behind them all is what amounts to a way of thinking which is distinct from the approach of classical science. At the heart of this “strangeness” is the idea that cybernetics does not seek to uncover causal relations. Whilst it has its own body of empirical practices involving the building of models and machines, it does so to explore the relations between the generative possibilities of ideas about how the world works, and how nature appears to work. Cybernetics fundamentally orients around the pursuit of error in the relations between ideas and nature, not the search for proof of ideas: it can be thought of as in a continuous process of falsification. Cybernetics makes the appeal for its approach by arguing that the problems of the world, from the ecological catastrophe, to social inequality and political turmoil result from what Bateson calls a “disparity between the way nature works and the way humans think”. It therefore can make the claim that the best way of addressing this disparity is to turn it itself into the object of scientific investigation.
In order to identify and agree error, it is important to have a shared understanding of whatever model or theory is being compared to nature. The value of any theory is not its particular explanatory power, but that it can specify mechanisms which can be agreed by a group of scientists or educationalists. With agreement over a clearly-expressed theory, attention can focus on practice where attempts can be made to map the distinctions of theory to nature. Because theories are deficient, there will be agreement and disagreement regarding the interpretation of things that happen. Where what happens does not map to the model, or what is predicted in the model does not happen, then cybernetic science moves forwards.
I have tried to explain how this fundamental methodological orientation can characterise many cybernetic interventions in the past. The two cases we considered took a slightly different path, but deep down they are the same. Beer’s approach was to produce a model with various distinctions, and for individuals to explore their experience of their organisational environment by agreeing a map of these distinctions to their understanding of the organisation. Pask’s conversation theory comes with no predefined distinctions of this sort, but rather a description of inter-human dynamics which would generate distinctions which would then be agreed as part of a learning process as the conversation evolves in a shared environment.
In the reality of education, we do both. Syllabi and curricula present predefined distinctions (albeit not as rich and powerful as Beer’s) to which a learner’s exploration of their learning material must eventually fit. At the same time, conversations are the driver for the emergence of new distinctions whose disparity with experience can either be the cause for further and deeper inquiry and discovery, or it can be the driver for the kind of double-bind dynamics which place learners in intractable emotional states of confusion and inaction.
However, so far we have only considered the ways in which the dynamics of a model may be specified and agreed. We have not yet explored the ways in which the experience of nature may be measured and recorded, and those measurements may be related to the dynamics of a model. To do this, we need to explore the dynamics of our relations with nature: the information environment with which our senses interact.

Friday 4 March 2016

Personal Learning Environments 2.0: The other side of the (Bit)Coin

We all have personal tools for learning. We all have mobile phones, tablets, and many of us use Twitter, Facebook and blogging platforms to keep abreast of and contribute to domains of knowledge which interest us. Our engagement with these tools amounts to many small transactions which occur between us and online platforms. Typically these transactions are recorded by the providers of social software platforms, who aggregate this information, analyse it, sell it, and use it to try and sell us things.

The PLE argued that the coming of personal technologies meant that learners should take control of their learning, and use their own tools rather than institutional tools. The PLE has remained something of a pipe-dream in terms of real education, and in many ways institutional provision has become more centralised and controlling: this, despite the fact that many learners also use their personal tools to amplify and enrich institutional learning processes. Fundamentally, the institution is still in control.

The reason why this is the case is TRUST. Institutions are trusted to certify learning in ways which Facebook and Twitter are not. Institutions uphold the trust placed in them by guarding various arcane and inefficient practices in education, including the treatment of learners in 'batch processes' (see yesterday's post), the obfuscation of assignments and assessment requirements for the award of certificates, the tyranny of the timetable, module pre-requisites, and so on. By obscuring what they do, and wrapping themselves up in even-more-obscure 'quality management' procedures, institutions uphold their status as 'trusted' certifiers of learning, and with it, social status. Yet, beyond their certification processes, there is little opportunity to inspect the transactions which led to any certificate.

I was accused of 'credentialism' yesterday in my post about Block Chain, so whilst I think it is important to understand what new transaction-based technologies like Block Chain might deliver for us, it is also important to understand the nature of certification and the mechanisms of trust which go alongside it.

Thorstein Veblen, over 100 years ago, argued that education was a kind of plaything of the 'leisure classes' (his term for the bourgeoisie) to enable them to impress upon each other how much cleverer they were. In other words, Veblen saw status as the essential driver behind education within universities.

Status depends on trust. If we didn't trust the authority of the institution conferring degrees, then we wouldn't trust the degrees. In Searle's social ontology - which I've found really useful - the status of an institution and the status of a degree certificate are the result of a particular kind of speech act, a "status function declaration", made by an entity (like an institution) with sufficient 'deontic power' (Searle's word for 'authority') to make the declaration in such a way that it is generally upheld by the community for whom it is intended. Monarchs, presidents, armies, hospitals, schools, churches, human rights and families are all the result of this kind of status declaration.

Banks are interesting. They are central in upholding the trust which supports the operation of a currency: "I promise to pay the bearer". More interesting still is that this is precisely what's been undermined (!) by BitCoin. There is no central authority to "pay the bearer". All there is is a transparent, distributed, peer-to-peer ledger. It is the ledger of transactions which becomes the object of trust.
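Just to make this concrete for myself, here is a minimal sketch in Python of why a hash-chained ledger can be trusted without any central authority. It is deliberately a toy - no mining, no peer-to-peer network, and the names and transaction fields are invented for illustration - but it shows the essential move: each block commits to the hash of the block before it, so anyone holding a copy can verify the whole history for themselves, and that verification is what takes the place of the bank's promise.

```python
import hashlib
import json
import time

def hash_block(block):
    """Hash a block's contents; tampering with any past block changes every later hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy append-only chain of transaction blocks (no mining, no network)."""
    def __init__(self):
        self.chain = [{"index": 0, "timestamp": 0, "transactions": [], "prev_hash": "0" * 64}]

    def add_block(self, transactions):
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": hash_block(self.chain[-1]),  # commit to the previous block
        }
        self.chain.append(block)
        return block

    def is_valid(self):
        """Anyone holding a copy can check it, without appealing to an authority."""
        return all(
            self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.add_block([{"from": "alice", "to": "bob", "amount": 5}])
print(ledger.is_valid())  # True; alter any earlier block and this becomes False
```

The trust, in other words, sits in the verifiable structure of the ledger itself rather than in whoever keeps it.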

This, to me, is the missing jigsaw piece of personal learning - a reason for heralding the PLE2.0. If all the transactions which we engage in online are recorded in some way in a "ledger" - without compromising security or privacy - then the ledger forms a living record of capability, competency, social capital, intellectual power, practical ability and so on: why trust an institution's arcane certification when a personal learning "ledger" (or whatever you want to call it) can do a better and more trustworthy job of it?
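Continuing the toy sketch above (and again, the field names, people and pieces of evidence are my own invented assumptions, not a proposal for how such a ledger should actually be structured), a personal learning ledger would simply hold learning transactions rather than payments, and the 'living record' would be a query over it:

```python
# A flat list stands in for the chained ledger of the previous sketch.
learning_ledger = [
    {"learner": "alice", "evidence": "essay on counterpoint", "judged_by": "bob"},
    {"learner": "carol", "evidence": "harmony exercise", "judged_by": "bob"},
    {"learner": "alice", "evidence": "question about the Tristan chord", "judged_by": "dave"},
]

def record_of(transactions, person):
    """A crude 'living record': every transaction in which the person appears as learner."""
    return [tx for tx in transactions if tx["learner"] == person]

print(record_of(learning_ledger, "alice"))  # two entries: the raw material of a certificate
```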

Of course, this is not to say anything about 'learning'. I believe that we said too much about learning when we first looked at the PLE. There are very few defensible statements that anyone can make about learning: we simply don't know what it is - all we can do is speculate. Moreover, bad institutional managers will want to jump on to a definition of 'learning' in order to bully their staff and make themselves look like educational Messiahs. All we can say is that learning occurs within constraints which are material and tangible in a shared lifeworld: textbooks, teachers, timetables, certificates, etc, etc. Many of these constraints exist due to trust relationships.

Block Chain is a disruption among the constraints within which humans organise themselves in their learning. It is the other side of the 'coin' of personal learning: we already had personal tools for learning; now we also have a de-institutionalised trust mechanism. It seems important.

Thursday 3 March 2016

Why does education still operate in 'batch mode'?

Technologies have transformed business practices by enabling instant on-demand processing and production, streamlined workflows and detailed tracking of transactions. Much of this has been achieved because computers are very good at managing and coordinating individual transactions efficiently. So here's an obvious question: Why does education still operate in 'batch mode'?

We process our learners en masse through lectures, timetables, assignments and examinations. Learners produce all their work at once, and teachers make judgements about it at a single point, struggling to manage piles of marking and trying to give meaningful feedback. What do our learning technologies do? They reinforce this batch-processing model by offering facilities which attempt to manage the huge number of documents produced in the educational process (think of the VLE or e-Portfolio), but in the end only so that all these documents can be submitted for assessment en masse at the appointed time.

Why are our e-learning systems batch-processing systems? Why can't we use computers more sensibly to support a way of working in education which tracks educational interactions individually as they occur - educational transactions?

One of the fascinating features of the current Block Chain discussion is the fact that Block Chain is a technology for storing transactions in a distributed, open and transparent way. What would a transaction processing approach to education look like?

An educational interaction is some kind of engagement between a teacher and a learner which is mediated by some entity they can both see - like a document. A simple document might be a single question ("How do I...?"), a link to a web page, a photo, or a video on YouTube. In response to such a document, a teacher will reply, and the reply is also likely to be a document. In summative assessment, this reply will be a judgement about the learner's document - how much better or worse it is than other similar documents (and consequently a mark will be given). In both cases, a document may itself declare a set of relationships to other people or concepts. A judgement by the teacher of the learner is a measure of the disparity between the learner's relationships to other people and concepts and the teacher's own.
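To make the idea of an educational 'transaction' a little more tangible, here is a sketch - the data structure and the overlap-based disparity measure are my own assumptions for illustration, not a specification: each document declares its relationships to people and concepts, and a teacher's judgement can be read as a measure of the disparity between the learner's set of relations and the teacher's own.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A document in an educational transaction: a question, a link, a submission..."""
    author: str
    content: str
    relations: set = field(default_factory=set)  # declared relations to people or concepts

def disparity(learner_doc: Document, teacher_doc: Document) -> float:
    """A crude disparity measure: the share of declared relations not held in common."""
    union = learner_doc.relations | teacher_doc.relations
    if not union:
        return 0.0
    shared = learner_doc.relations & teacher_doc.relations
    return 1 - len(shared) / len(union)

learner = Document("learner", "How do I harmonise a melody?", {"harmony", "melody"})
teacher = Document("teacher", "Start from the bass line.", {"harmony", "counterpoint", "bass"})
print(disparity(learner, teacher))  # 0.75 - the raw material for a judgement
```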

In traditional education, the most important relationships for a learner are to their peers. In the batch mode of education, these relationships are reinforced by putting everyone into a class together and processing them together. In the 'transactional' mode, peers are organised according to their own sets of relationships to concepts and other people. Where strong relationships form, there will be exchanges of documents between peers (usually in the form of online conversation), and the network of connections gradually expands.

In any form of education, the key moment is when a learner asserts a new idea which is either original or one the teacher hasn't heard of. Under these circumstances, the teacher might not be able to say very much about the learner's idea: they may even dismiss it. But if the learner's idea is a good one, then other networks will reinforce it, leading eventually to a reappraisal of existing networks and judgements. Utterances by a learner which cause structural adjustments to existing judgement relationships are important moments of learning. The causing of a structural adjustment could perhaps be a trigger for accreditation... but it is the moment when the apprentice becomes the master: which, I think, is what it is all about in the end.

Inverting Learning Design: Revisiting some classic work in E-learning

I'm in the middle of a research project at the Far Eastern Federal University in Russia where we are examining the constraints within which staff are operating as they balance the requirements of their teaching and research in raising the international standing of the institution. In thinking about these constraints, something occurred to me about the broad categories within which they might be placed. Put simply, there are:

  • constraints concerning the person as an embodied individual endowed with certain capabilities
  • constraints concerning the tools they had access to
  • constraints concerning the resources at their disposal
  • constraints concerning the kinds of activities they had to undertake
  • constraints concerning the social environment and communication
So people would say that workload was a problem (activities), or their ability to speak English (capability), or a lack of time (resources), or a lack of contact with academic communities (social), and so on.

This started to ring bells for me. It was precisely these categories - tools, resources, people, and activities - which we worked with when specifying the components of effective 'learning designs'. These were the categories which underpinned technologies like the IMS Learning Design specification which grew from Koper's 'Educational Modelling Language'. 

With Learning Design, these categories were treated as variables which, it was considered, would in the right configuration deliver effective learning experiences. Although less formally expressed, this idealism about the variables of effective learning design is very much still with us. Yet whether expressed formally in IMS LD, or softly (as in Laurillard's Pedagogic Planner), it doesn't really work.

However, just because attempts to make Learning Design work failed doesn't mean that the distinctions surrounding Learning Design are invalid. The challenge is to examine the categories of Learning Design in such a way that they are not seen to represent the 'independent variables of successful learning' (looking back, of course that was ridiculous!), but rather the expression of different kinds of constraint bearing upon the human self-organisation of learning.

Looked at this way, not only do the distinctions - tools, resources, people, activities - provide a way of describing the constraints that academics in a far-flung university experience in their daily working practice; they can also (ironically) be turned to analyse the constraints that the technologies of Learning Design placed on their users, and (more importantly) why Learning Design could never have worked.
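As a sketch of what I mean (in Python, with field names and examples that are mine rather than anything taken from IMS LD), the four distinctions can be treated as labels on descriptions of constraint, which can be attached just as easily to an academic's account of their working conditions as to the constraints a Learning Design tool places on teachers:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The Learning Design distinctions, reread as kinds of constraint."""
    PEOPLE = "people"
    TOOLS = "tools"
    RESOURCES = "resources"
    ACTIVITIES = "activities"

@dataclass
class Constraint:
    bears_on: str        # who experiences the constraint
    category: Category   # which distinction it falls under
    description: str

# The same distinctions describe an academic's working constraints...
academic_constraints = [
    Constraint("lecturer", Category.ACTIVITIES, "teaching workload leaves little time for research"),
    Constraint("lecturer", Category.PEOPLE, "little contact with international academic communities"),
]
# ...and, turned around, the constraints a Learning Design tool places on teachers.
ld_tool_constraints = [
    Constraint("teacher", Category.TOOLS, "only the tools declared in the design are available"),
    Constraint("teacher", Category.ACTIVITIES, "the sequence of activities is fixed in advance"),
]
```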

The declaration and provision of tools for performing an activity is itself a constraint. Whilst tools within Learning Design environments aimed to address problems of provisioning resources and learning tools for learners, they could only do this by constraining the tools and resources available to teachers. This was couched in the language of a kind of 'regulative sociology': that the practices of teaching - and particularly teaching online - required regulation through design tools.

The problem, which has been borne out by the evidence of what happened, is that there is a dynamic of constraint surrounding all stakeholders in the education process. There are not causal factors which determine successful learning. There are configurations of mutual constraint which produce processes of ecological growth between teachers and learners. 

Once we realise that we need to understand these constraint dynamics, it turns out that the distinctions about the 'variables' of Learning Design are valuable as distinctions about constraint. So perhaps we weren't wasting our time after all!