Tuesday, 25 April 2017

Revisiting Cybernetic Musical Analysis

I had a nice email yesterday from a composer who had seen a video of an analysis of music by Helmut Lachenmann which I did in 2009 using cybernetic modelling. I'd forgotten about it - partly embarrassed by what I thought of as a crude attempt to make sense of difficult music, but also because it was closely related to my PhD, which I'm also slightly embarrassed by. In the intervening time, I became dissatisfied with some of the cybernetic underpinnings, and became more interested in critical aspects of theory. Being embarrassed about stuff can be a block to taking things further: I have so many almost-finished unpublished papers - often discouraged from publishing them because of the frequent nastiness of peer review. So it's nice to receive an appreciative comment 8 years after something was done.

It's made me want to collate the set of music analysis videos that I made in 2009. They are on Schumann, Haydn, Ravel and Lachenmann. In each I pursue the same basic theory about "prolongation" - basically, what creates the sense of coherence and continuity of experience in the listener. The basic theory was inspired by Beer's Viable System Model: that coherence and continuity arise from a combination of different kinds of manipulation of the sound - "disruptive" sound that interrupts and surprises; "coercive" sound that reinforces and confirms expectations; "exhortational" sound that transforms one thing into another.

What do I think about this 8 years later?

First of all, what do I now think about the Viable System Model which was the foundation for this music analysis? There is a tendency in the VSM to refer to the different regulating layers allegorically: this is kind-of what I have done with coercion, exhortation, etc. But now I think the VSM is more basic than this: it is simply a way in which a system might organise itself so as to maintain a critical level of diversity in the distinctions it makes. So it is not that there is coercion, or disruption or exhortation per se... it is that the system can distinguish between them and can maintain the possibility that any of them might occur.

Furthermore, each distinction (coercion, exhortation, etc) results from a transduction. That is, the conversion of one set of signals into another. Particular transductions attenuate descriptions on one side to a particular type which the transduced system can deal with: so the environment is attenuated by the skin. But equally, any transducer is held in place by the descriptions which arise from the existence of the transducer on its other side. It's a bit like this:


Each transducer attenuates complexity from the left and generates it to the right. This is where Beer's regulating levels come from in the Viable System Model. 

The trick of a viable system - and any "viable music"  (if it makes sense to talk of that) - is to ensure that the richness of possible transductions and descriptions is maintained. In my music videos I call this richness "Coercion", "exhortation", "disruption" - but the point is not what each is, but that each is different from the others, and that they are maintained together.

Understanding transduction in this way gives scope for saying more about analysis. The precursor to a transducer forming is an emerging coherence between different descriptions of the world. A way of measuring this coherence is to use the information-theoretical calculation of relative entropy. I've become very curious about relative entropy since I learnt that it is the measure used in quantum mechanics for measuring the entanglement between subatomic particles. Given that quantum computers are programmed using a kind of musical score (see IBM's Quantum Experience interface), this coherence between descriptions expressed as relative entropy makes a lot of sense to me.

So in making the distinctions that I make in these music videos, I would now put more emphasis on the degrees of emerging relative entropy between descriptions. Effectively, the coherences in what I called "coercive" moments can be seen as repetition, and this repetition produces descriptions of what isn't repeated - of what is surprising. Surprises on larger structural layers such as harmony or tonality amount to transformations - but this is also a higher-level transduction.
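
As a rough illustration of what measuring this might look like, here is a minimal sketch of my own (not part of the 2009 analyses) - the pitch-class description and the two passages are invented assumptions, and any pair of descriptions of the sound (dynamics, register, timbre categories) could be compared in the same way:

```python
# A minimal sketch, assuming pitch-class sequences as the two "descriptions".
from collections import Counter
import math

def distribution(events, categories):
    """Relative frequency of each category, with add-one smoothing so that
    the relative entropy stays finite when a category is absent."""
    counts = Counter(events)
    total = len(events) + len(categories)
    return {c: (counts[c] + 1) / total for c in categories}

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(p[c] * math.log2(p[c] / q[c]) for c in p)

# Two hypothetical passages described as pitch classes (0-11).
passage_a = [0, 4, 7, 0, 4, 7, 0, 4, 7, 2]    # heavy repetition ("coercive")
passage_b = [0, 1, 6, 11, 3, 8, 5, 10, 2, 7]  # little repetition ("disruptive")

categories = range(12)
p = distribution(passage_a, categories)
q = distribution(passage_b, categories)

# Near-zero divergence would suggest the two descriptions cohere;
# larger values suggest one passage surprises relative to the other.
print(relative_entropy(p, q))
print(relative_entropy(q, p))  # note the asymmetry of the measure
```

Tracked over successive windows of a piece, a value like this would give one crude way of watching coherences emerge and dissolve.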

The viable system which makes these distinctions is, of course, the listener (I was right about this in the Lachenmann analysis). The listener's system has to continually recalibrate itself in the light of events. It performs this recalibration so as to maintain the richness of the possible descriptions which it can generate. 

The world is fucked at the moment because our institutions cannot do this. They cannot recalibrate effectively and they lose overall complexity and variety - and consequently they lose the ability to adapt to changing environments. 

Here are the videos:

Lachenmann

Haydn


Ravel


Schumann


Saturday, 22 April 2017

Porous Boundaries and the Constraints that separate the Education System and Society

I'm taking part in a conference on the "Porous University" early next month (https://www.uhi.ac.uk//en/learning-and-teaching-academy/events/the-porous-university---a-critical-exploration-of-openness-space-and-place-in-higher-education-may-2017.html), and all participants have to prepare a position statement about the conference theme. In my statement, I'm going to focus on the issue of the "boundary" and what the nature of "porosity" means in terms of a boundary between education and society.

We often think of formal education as a sieve: it separates the wheat from the chaff in recognising the attainment and achievement of students. Sieves are porous boundaries - but they are the antithesis of the kind of porosity which is envisaged by the conference, which - to my understanding - is to make education more accessible, socially progressive, engaged in the community, focused on making practical interventions in the problems of daily life. The "education sieve" is a porous boundary which upholds and reinforces the boundary between education and society; many progressive thinkers in education want to dissolve those boundaries in some way - but how? More porosity in the sieve? Bigger holes? Is a sieve with enormous holes which lets anything through still a sieve? Is education still education without its boundary?

Part of the problem with these questions is that we focus on one boundary when there are many. The education system emerges through the interaction of multiple constraints within society - it's not just the need for disseminating knowledge and skill, but the need for keeping people off the streets (or the unemployment register, or out of their parents' houses!), or the need to maintain viable institutions of education and their local economies, or the need to be occupied in the early years of adult life, or the desire to pursue intellectual interests, or the need to gain status. These multiple constraints are constantly manipulated by government. The need to pay fees, the social exclusion which results from not having a degree (which is partly the consequence of everyone having them!), or the need for professionals like nurses to maintain accreditation are only recent examples of continual tweaking and political manipulation. Now we even have the prospect of official "chartered scientists" (http://sciencecouncil.org/scientists-science-technicians/which-professional-award-is-right-for-me/csci/)! Much of this is highly destructive.

Widening participation, outreach, open learning, open access resources are as much symptoms of the current pathology as they appear to be efforts to address it: it's something of an auto-immune response by a system in crisis. Widening participation? Find us more paying customers! Open Access Resources? Amplify our approved forms of communication so everyone can learn "how to fit the system" (whilst enabling academics to boost their citation statistics) - and then we can enrol them!

A deep problem lies within universities; a deeper problem lies within science. Universities are powerful and deeply confused institutions. They establish and maintain themselves on the reputations of scholars and scientists from the past - many of whom would no longer be employable in the modern institution (and many who had difficult careers in their own time!) - and make promises to students which, in many cases, they don't (and cannot possibly) keep. The University now sees itself as a business, run by business people, often behaving in irrational ways, making decisions about future strategy on a whim, or behaving cruelly towards the people it employs. There is nobody who isn't confused by education. Yet the freedom one has to express this confusion disappears in the corridors of power.

Boundaries are made to maintain viability of an organism in its environment: the cell wall or the skin is created to maintain the cell or the animal. These boundaries can be seen as transducers: they convert one set of signals from one context into another for a different context. Education, like an organism, has to maintain its transducers.

Transduction can be seen as a process of attenuating and amplifying descriptions across a boundary. The environment presents many, many descriptions to us. Our skin only concerns itself with those descriptions which are deemed to be of importance to our survival: these are presented as "information" to our biological systems. Equally a university department acquires its own building, a sign, courses (all transductions) when a particular kind of attenuation of signals from the environment can be distilled into a set of information which the department can deal with. Importantly, both the skin and the departmental identity are established from two sides: there is the distilling of information from the environment, and there are the sets of descriptions which arise from the boundary having been formed. The liver and kidneys require the skin as much as the skin attenuates the environment.

Pathology in organisations results where organisations reconfigure their transducers so that too much complexity is attenuated. Healthy organisations maintain a rich ecology of varied distinctions. Pathological organisations destroy this variety in the name of some simple metric (like money - this is what happens in financialisation). This is dangerous because if too much complexity is attenuated, the institution becomes too rigid to adapt to a changing environment: it loses overall complexity. Equally if no attenuation occurs, the institution loses the capability of making any distinctions - in biology, this is what happens in cancer.

If we want to address the pathology of the distinction between education and society, we must address the problem of boundaries in institutions and in society. Removing boundaries is not the answer. Becoming professionally and scientifically committed to monitoring the ecology of the educational and social system is the way forwards. Since this is a scientific job, Universities should lead the way.


Tuesday, 11 April 2017

Scarcity and Abundance on Social Media and Formal Education

Education declares knowledge to be scarce. That it shouldn't do this is the fundamental message in Illich's work on education. Illich attacked "regimes of scarcity" wherever he saw them: in health, energy, employment, religion and in the relations between the sexes.

Illich's recipe for avoiding scarcity in education is what he calls "institutional inversion", where he (apparently presciently) visualised "learning webs". When we got social media and Wikipedia, it seemed to fit Illich's description. But does it?

I wrote about the passage in Deschooling Society a few years ago where Illich speaks of his "educational webs" (see http://dailyimprovisation.blogspot.co.uk/2013/11/personalisation-and-illichs-learning.html) but then qualifies it with "which heighten the opportunity for each one to transform each moment of his living into one of learning, sharing, and caring". Learning, sharing and caring. Is this Facebook?

Despite Illich's ambivalent attitude towards the church, he remained on the one hand deeply Catholic and on the other communitarian. As with other Christian thinkers (Jacques Ellul, Marshall McLuhan, Jean Vanier), there is a deep sense of what it means for people to be together. It's the togetherness of the Mass which influences these people: the experience of being and acting together, singing together, sharing communion, and so on. The ontology of community is not reducible to the exchange of messages. It is the ontology which interests Illich, not the mechanics.

So really we have to go further and explore the ontology. Illich's "institutional inversion" needs unpacking. "Institution" is a problematic concept. The sociological definition typically sees it as a complex of norms and practices. New Institutionalism sees it as a focus of transactions which are conducted through it by its members. At some level, these descriptions are related. But Facebook and Twitter are institutions, and the principal existential mechanism whereby social media has come into being is the facilitation of transactions with customers. The trick for social media corporations is to drive their mechanisms of maintaining and increasing transactions with customers by harvesting the transactions that customers have already made.

In more traditional institutions, the work of attracting and maintaining transactions is separate from the transactions of customers. It is the marketing and manufacturing departments which create the opportunities for customer transactions. The marketing and manufacturing departments engage in their own kind of internal transaction, but this is separate from those produced by customers: one is a cost, the other is income.

The mechanism of driving up the number of transactions is a process of creating scarcity. Being on Twitter has to be seen to be better than not being on it; only by being on Facebook can one hope to remain "in the loop" (Dave Elder-Vass writes well about this in his recent book "Profit and Gift in the Digital Economy"). Formal education drives its customer transactions not only by declaring knowledge to be scarce, but by declaring status to be tied to certification from prestigious institutions. At the root of these mechanisms is the creation of the risk of not being on Twitter, not having a degree, and so on. At the root of this risk is existential fear about the future. The other side of the risk equation is the supposed trust in institutional qualifications.

Illich didn't go this far. But we should now - partly because it's more obvious what is happening. The issue of scarcity is tied-up with risk and worries about a future which nobody can be sure about. That this has become a fundamental mechanism of capitalism is a pathology which should worry all of us.


Monday, 3 April 2017

Lakatos on History and the Reconstruction and Analysis of Accidents

"Fake news" and Brexit has inspired a reaction from Universities, anxious that their status is threatened, that they must be the bastions of facts, truth and trust. The consequences of this are likely to reinforce the already conservative agenda in education. Universities have been post-truth for many years - particularly as they chased markets, closed unpopular departments (like philosophy), replaced full-time faculty with adjuncts, became status-focused and chased league table ranking, appointed business people to run them, became property developers, and reinforced the idea that knowledge is scarce. On top of that, they protected celebrity academics - even in the face of blatant abuse of privilege and power by some. The allegations against John Searle are shocking but not surprising - the scale of the sexual harassment/abuse problem (historical and present) in universities is frightening - just as the compensation claims will be crippling. Current students and society will pay for it.

What is true news? I picked up an interesting book on Lakatos by John Kadvany at the weekend (it was in the bookshop that I learnt of the Searle problem). Lakatos was interested in rationality in science, maths and history. Along with Popper, Feyerabend and Kuhn, he was part of an intellectual movement in the philosophy of science in the 1960s and 70s from which few sacred cows escaped unscathed.

Kadvany quotes Lakatos's joke that:
"the history of science is frequently a caricature of its rational reconstructions; that rational reconstructions are frequently caricatures of actual history; and that some histories of science are caricatures both of actual history and of its rational reconstructions" ("The History of Science and its rational reconstructions")
In practical life we meet this problem with history directly in the analysis of risk and accidents in institutions. In the flow of time in a hospital, for example, things happen, none of which - in the moment in which they happen - appear untoward. A serious accident emerges as a crisis whose shock catches everyone out - suddenly the patient is dying, suddenly the catastrophic error and the blame are revealed, when in the flow of time in which it happened, nothing was noticed.

The reconstruction is reinforced with the investigation process. The narrative of causal events establishes its own reality, scapegoats, etc. Processes are 'tightened up', management strategies are reinforced, and.... nothing changes.

Lakatos's position was that historical reconstruction was "theory-laden": "History without some theoretical bias is impossible. [...] History of science is a history of events which are selected and interpreted in a normative way"

In this way, all histories are "philosophies fabricating examples... equally, all physics or any kind of empirical assertion (i.e. theory) is 'philosophy fabricating examples'"

Is it just philosophy? In organisational risk, for example, there is a philosophy of naive causal successionism, and obscure selection processes which weed out descriptions which don't fit the narrative. But the purpose of all of this is to reinforce institutional structures which themselves exist around historical narratives.

Where does Lakatos go with this? He wants to be able to distinguish "progressive" and "degenerative" research programmes. A research programme is the sequence of theories which arise within a domain (like the successive theories of physics): changes in theoretical standpoint are what he calls "problem shifts". The difference between progressive and degenerative research programmes rests on the generative power of a theory. Theories generate descriptions of observable phenomena. In order to be progressive, each problem shift needs to be theoretically progressive (it generates more descriptions) and occasionally empirically progressive (some of those new descriptions are corroborated). If these conditions are not met, the research programme is degenerative.

I agree with this to a point. However, the structure of institutions is an important element in the generative power of the institution's ideas about itself. Lakatos is really talking about "recalibration" of theory and practice. But recalibration is a structural change in the way things are organised.

That there is rarely any fundamental recalibration in the organisation and management of health in the light of accidents is the principal reason why their investigations are ineffective. 

Thursday, 30 March 2017

Giddens on Trust

Giddens's criticism of Luhmann, which I discussed in my last post, leads to a 10-point definition of trust. I'm finding this really interesting - not least because it was written in the early 90s, but now seems incredibly prescient as we increasingly come to trust technological systems, and do less of what Giddens calls "facework" (something which he took from Goffman, who in turn took it from Schutz's intersubjectivity). Whether he's right on every detail here is beside the point. I find the level of inquiry impressive.

Giddens writes:
"I shall set out the elements involved [in trust] as a series of ten points which include a definition of trust but also develop a range of related observations:
  1. Trust is related to absence in time and in space. There would be no need to trust anyone whose activities were continually visible and whose thought processes were transparent, or to trust any system whose workings were wholly known and understood. It has been said that trust is "a device for coping with the freedom of others," but the prime condition of requirements for trust is not lack of power but lack of full information.
  2. Trust is basically bound up, not with risk, but with contingency. Trust always carries the connotation of reliability in the face of contingent outcomes, whether these concern the actions of individuals or the operation of systems. In the case of trust in human agents, the presumption of reliability involves the attribution of "probity" (honour) or love. This is why trust in persons is psychologically consequential for the individual who trusts: a moral hostage to fortune is given.
  3. Trust is not the same as faith in the reliability of a person or system; it is what derives from that faith. Trust is precisely the link between faith and confidence, and it is this which distinguishes it from "weak inductive knowledge". The latter is confidence based upon some sort of mastery of the circumstances in which confidence is justified. All trust is in a certain sense blind trust!
  4. We can speak of trust in symbolic tokens or expert systems, but this rests upon faith in the correctness of principles of which one is ignorant, not upon faith in the "moral uprightness" (good intentions) of others. Of course, trust in persons is always to some degree relevant to faith in systems, but concerns their proper working rather than their operation as such.
  5. At this point we reach a definition of trust. Trust may be defined as confidence in the reliability of a person or system, regarding a given set of outcomes or events, where that confidence expresses a faith in the probity or love of another, or in the correctness of abstract principles (technical knowledge)
  6. In conditions of modernity, trust exists in the context of (a) the general awareness that human activity - including within this phrase the impact of technology upon the material world - is socially created, rather than given in the nature of things or by divine influence; (b) the vastly increased transformative scope of human action, brought about by the dynamic character of modern social institutions. The concept of risk replaces that of fortuna, but this is not because agents in pre-modern times could not distinguish between risk and danger. Rather it represents an alteration in the perception of determination and contingency, such that human moral imperatives, natural causes, and chance reign in place of religious cosmologies. The idea of chance, in its modern senses, emerges at the same time as that of risk.
  7. Danger and risk are closely related but are not the same. The difference does not depend upon whether or not an individual consciously weighs alternatives in contemplating or undertaking a particular course of action. What risk presumes is precisely danger (not necessarily awareness of danger). A person who risks something courts danger, where danger is understood as a threat to desired outcomes. Anyone who takes a "calculated risk" is aware of the threat or threats which a specific course of action brings into play. But it is certainly possible to undertake actions or to be subject to situations which are inherently risky without the individuals involved being aware how risky they are. In other words, they are unaware of the dangers they run.
  8. Risk and trust intertwine, trust normally serving to reduce or minimise the dangers to which particular types of activity are subject. There are some circumstances in which patterns of risk are institutionalised, within surrounding frameworks of trust (stock-market investment, physically dangerous sports). Here skill and chance are limiting factors upon risk, but normal risk is consciously calculated. In all trust settings, acceptable risk falls under the heading of "weak inductive knowledge" and there is virtually always a balance between trust and the calculation of risk in this sense. What is seen as "acceptable" risk - the minimising of danger - varies in different contexts, but is usually central in sustaining trust. Thus traveling by air might seem an inherently dangerous activity, given that aircraft appear to defy the laws of gravity. Those concerned with running airlines counter this by demonstrating statistically how low the risks of air travel are, as measured by the number of deaths per passenger mile.
  9. Risk is not just a matter of individual action. There are "environments of risk" that collectively affect large masses of individuals - in some instances, potentially everyone on the face of the earth, as in the case of the risk of ecological disaster or nuclear war. We may define "security" as a situation in which a specific set of dangers is counteracted or minimised. The experience of security usually rests upon a balance of trust and acceptable risk. In both its factual and its experiential sense, security may refer to large aggregates or collectivities of people - up to and including global security - or to individuals.
  10. The foregoing observations say nothing about what constitutes the opposite of trust - which is not, I shall argue later, simply mistrust. Nor do these points offer much concerning the conditions under which trust is generated or dissolved."

Tuesday, 28 March 2017

Trust and Risk (Giddens and Luhmann)

In The Consequences of Modernity Giddens critiques Luhmann's idea of trust and its relation to risk and danger. I find what he has to say about Luhmann very interesting, as I am currently exploring Luhmann's book on Risk. Giddens says:

Trust, he [Luhmann] says, should be understood specifically in relation to risk, a term which only comes into being in the modern period. The notion  originated with the understanding that unanticipated results may be a consequence of our activities or decisions, rather than expressing hidden meanings of nature or ineffable intentions of the Deity. "Risk" largely replaces what was previously thought of as fortuna (fortune or fate) and becomes separated from cosmologies.  Trust presupposes awareness of circumstances of risk, whereas confidence does not. Trust and confidence both refer to expectations which can be frustrated or cast down. Confidence, as Luhmann uses it, refers to a more or less taken-for-granted attitude that familiar things will remain stable: 
"The normal case is that of confidence. You are confident that your expectations will not be disappointed: that politicians will try to avoid war, that cars will not break down or suddenly leave the street and hit you on your Sunday afternoon walk. You cannot live without forming expectations with respect to contingent events and you have to neglect, more or less, the possibility of disappointment. You neglect this because it is a very rare possibility, but also because you do not know what else to do. The alternative is to live in a state of permanent uncertainty and to withdraw expectations without having anything with which to replace them."
Trust involves, in Luhmann's view, alternatives which are consciously borne in mind by the individual in deciding to follow a particular course of action. Someone who buys a used car, instead of a new one, risks purchasing a dud. He or she places trust in the salesperson or the reputation of the firm to try to avoid this occurrence. Thus, an individual who does not consider alternatives is in a situation of confidence, whereas someone who does recognise those alternatives and tries to counter the risks thus acknowledged, engages in trust. In a situation of confidence, a person reacts to disappointment by blaming others; in circumstances of trust she or he must partly shoulder the blame and may regret having placed trust in someone or something. The distinction between trust and confidence depends upon whether the possibility of frustration is influenced by one's own previous behaviour and hence upon a correlate discrimination between risk and danger. Because the notion of risk is relatively recent in origin, Luhmann holds, the possibility of separating risk and danger must derive from social characteristics of modernity.
Essentially, it comes from a grasp of the fact that most of the contingencies which affect human activity are humanly created, rather than merely given by God or nature.
Giddens disagrees with Luhmann, and explores the concept of trust from a different aspect to that of Luhmann's double-contingency-related view. The argument is important though. Trust is going to become one of the most important features of the next wave of technology: BitCoin, Blockchain, etc are all technologies of trust. Conceptualising what this means is a major challenge for social theory.

It's worth noting that Luhmann comments on Giddens's position in his Risk book with regard to the distinction between risk and danger. Giddens rejects the distinction, but Luhmann says "we must differentiate between whether a loss would occur even without a decision being taken or not - whoever it is that makes this causal attribution"

However, Luhmann throws something into the "risk pot" which I find fascinating. He calls it "time-binding" - time, for Luhmann, is at the centre of risk (another blog post needed there). Time-binding looks very much like sociomateriality + time to me.

Friday, 24 March 2017

Everett Hughes on Organisational Risk in Health and Education

My work on organisational risk in healthcare has taken me back to the work of Everett Hughes. Hughes was the leading exponent of the Chicago School of Sociology which focused on an ecological approach to social institutions. It seems quite obvious that the problems in all our institutions are ecological - not in the sense of trees, but in the sense of a 'coordinated diversity'. Indeed, in our educational institutions, diversity is becoming scarce - driven by a technologically-mediated metricisation which eliminates difference.

In Hughes's book of collected papers, "The Sociological Eye", there is a paper on "Mistakes at Work" from 1951 which contains some insights from that time which I think remain relevant (but overlooked) today.

He starts by appealing for a comparative study of work - that we should look across fields of professional activity in order to understand them. He says we should study plumbers to understand doctors, and prostitutes to understand psychiatrists (!). He goes on to say:
One of the themes for human work is that of “routine and emergency”. By this I mean that one man’s routine of work is made up of the emergencies of other people. In this respect, the pairs of occupations named above do perhaps have some rather close similarities. The physician and the plumber practise esoteric techniques for the benefit of people in distress. The psychiatrist and the prostitute must both take care not to become too personally involved with clients who come to them with rather intimate problems.
Routine is of particular interest to him. He points out that one person's chaos and distress becomes another person's routine work.
There are psychological, physical, social and economic risks in learning and doing one's work. And since the theoretical probability of making an error some day is increased by the very frequency of the operations by which one makes one’s living, it becomes natural to build up some rationale to carry one through. It is also to be expected that those who are subject to the same work risks will compose a collective rationale which they whistle to one another to keep up their courage, and that they will build up collective defences against the lay world. These rationales and defences contain a logic that is somewhat like that of insurance, in that they tend to spread the risk psychologically (by saying that it might happen to anyone), morally, and financially. A study of these risk-spreading devices is an essential part of comparative study of occupations. They have a counterpart in the devices which the individual finds for shifting some of the sense of guilt from his own shoulders to those of the larger company of his colleagues. Perhaps this is the basis of the strong identification with colleagues in work in which mistakes are fateful, and in which even long training and a sense of high calling do not prevent errors.
But being the social ecologist, Hughes looks at both sides of the equation in negotiating and managing these risks. He suggests that there is a psychological 'division of risk' between worker and client.
In a certain sense, we actually hire people to make our mistakes for us. The division of labor in society is not merely, as is often suggested, technical. It is also psychological and moral. We delegate certain things to other people, not merely because we cannot do them, but because we do not wish to run the risk of error. The guilt of failure would be too great. 
So the question then is one of fault and blame if something goes wrong.
Now this does not mean that the person who delegates work, and hence risk, will calmly accept the mistakes which are made upon him, his family, or his property. He is quick to accuse; and if people are in this respect as psychiatrists say they are in others, the more determined they are to escape responsibility, the quicker they may be to accuse others for real or supposed mistakes.
What are the defences of the professions to this? Ritual is the key thing, Hughes argues. Of psychotherapists, he says:
A part of their art is the reconstruction of the history of the patients’ illness. This may have some instrumental value, but the value put upon it by the practitioners is of another order. The psychotherapists, perhaps just because the standards of cure are so uncertain, apparently find reassurance in being adept at their art of reconstruction (no doubt accompanied by faith that skill in the art will bring good to patients in the long run).
Education is another example of ritualised practice which is seen to mitigate risk. His comment here resonates with much of what we see standing in for "quality" in education:
In teaching, where ends are very ill-defined – and consequently mistakes are equally so – where the lay world is quick to criticise and blame, correct handling becomes ritual as much as or even more than an art. If a teacher can prove that he has followed the ritual, the blame is shifted from himself to the miserable child or student; the failure can be and is put upon them.

Thursday, 23 March 2017

Widening Participation and Scientific Necessity

The popular argument for 'widening participation' or 'outreach' in education is about 'giving access' to those who might at one point have been excluded from education. From the institution's point of view, giving access makes good business sense: it might be renamed "Creating potential fee-paying customers". Giving access means providing people with the dispositions and habits of those who succeed in education - those who can stomach the lecture, the assignment, the group work, the conversation, the reading, and increasingly, the VLE, the blog, the academic tweeting, the oh-so-clever (but now rather dull and double-edged) digital media.

We should be clear that this kind of access is in the interests of institutions and the often rather unpleasant characters who run them, but not necessarily in the interests of students. The "loan bounty" which is guaranteed upon the living body of the student will pay for the Vice Chancellor's yacht, the new vanity projects, the racing car design building and the architectural destruction of the local civic environment.

Students from the constituencies which are targeted by widening participation want money, jobs, security, love, fulfilment - indeed, they want the things which were probably denied to them since they were born, and denied to their parents. Education - however much some of us hope for better - wants to financialise their bodies and give them a mark - and, maybe, a certificate.

You cannot really blame individual institutions for this (notwithstanding some of the criminals who are running them). To use a cybernetic term, all institutions (education, health, legal, gubernatorial) are autopoietic: they survive by making and remaking their constituent components. Widening participation is simply the trawling of the environment for new components to be fed into the institution's autopoietic machine. In the process, the institution may claim a "purpose" which is at odds with what it actually does.

The key operation that an educational institution must do in an educational market is what Ivan Illich would call the maintenance of the "regime of scarcity of knowledge". To have the status of a knowledgeable person, one must have a certificate from a respected educational institution.

As Illich pointed out (before the internet) knowledge isn't scarce. It is a remarkable paradox (and an indication of quite how seriously pathological education is) that scarcity of knowledge has been increased with the advent of the web. Institutions have successfully used technology to ramp up the scarcity of knowledge by using the technology to amplify its existing structures. So the MOOC is a giant classroom, assessment can be done by MCQ or (increasingly) automatic essay marking, plagiarism can be statisticised, academic status accorded through bibliometrics, and learning analytics might (universities hope) keep students from dropping out and maintain the fee income (that's the interesting one - it won't work!).

Universities follow an illustrious line of great institutions in commandeering technology like this. The classic example is the Catholic Church in the 15th century, which used printing for the production of indulgences. (I think universities are currently in the equivalent of the 1460s... the Catholic hierarchy must have been rubbing their hands!) The moral of the story is that the technology gets you in the end... usually in a way which you weren't expecting.

But there is something else happening which I think is more profound: Computers have transformed the way we do science, the way we make measurements and do experiments, and the way we reason about causes. The university obsession with teaching and learning is recent and market-driven. It won't last. Universities are about scientific inquiry.

Following the impact of printing which produced the Reformation, critical attention was focused on education, where universities were sticking to Aristotelian doctrine in their scientific teaching. Printing facilitated a discourse outside the institution which challenged this orthodoxy, and which eventually led to Francis Bacon's "The Advancement of Learning". Experiment, observation and an entirely different model of causal reasoning were established. The Cambridge curriculum of 1605 which Bacon attacked was fundamentally transformed by 1700. In between, there was enormous social turmoil - civil war, regicide, republicanism, terror, etc. It affected all forms of communication and production: T.S. Eliot's idea of the "dissociation of sensibility" between the work of Ben Jonson and John Milton is another aspect of this transformation.

This is what happens when science changes. Our science today is no longer Newtonian. It is probabilistic, contingent and uncertain. Yet our modes of communication remain rooted in the model established in the 17th century by the Royal Society, and which were made for communicating empirically objective knowledge (as they saw it). There is an essential paradox when one wants to be an expert in uncertainty - inevitably university academics downplay the uncertainty, contingency, doubt. Nobody wants to look uncertain on the lecture stage.

In an uncertain science, listening counts. The logic of uncertainty means that the more people who are listened to the better. From this perspective, "widening participation" - by which is meant listening, not preaching - is not a marketing exercise, but a scientific necessity.

The point is a cybernetic one - cybernetics remains the principal scientific foundation for dealing with uncertainty, doubt, and social coordination. Heinz von Foerster stated three principles of education.


  1. Education is not a right or a privilege. It is a necessity.
  2. The purpose of education is to ask legitimate questions - that is, questions to which nobody has the answer.
  3. Following these two principles, there is a political principle which cuts against the regime of scarcity of education: A is better off when B is better off




Sunday, 19 March 2017

Sameness and Janacek in Stockholm

I've just returned from Stockholm where I participated in a PhD examination and gave a presentation on the new threats that technology poses to educational institutions. I had a great time, and had some fantastic conversations about education, cybernetics, category theory and technology.

Stockholm is an interesting place in a way which is not immediately apparent. On the face of it, it's rather like many European cities. Stockholm's main shopping street, Drottninggatan, could be anywhere: London's Oxford Street, Paris's Rue de Rivoli, Istanbul's Istiklal Caddesi, Cologne's Schildergasse, Amsterdam's Kalverstraat, etc, etc. This is modern global capitalism - and everything's the same. Is there really any point in travelling anywhere?

What I find interesting is that nothing is ever really the same - even in global capitalism. Having said that, Stockholm does its best to epitomise the movement.

At the weekend, Astrid and I went to the opera to see Janacek's Jenufa. Opera is another symptom of globalisation - although a more pleasant one. I enjoy going to the opera in whichever city I'm in - it's usually cheaper than London! But although every city has its opera house, no two performances are ever the same. Classical music is essentially the art of the "small difference". Inflection, intonation, articulation, timbre, etc are basically what it is about. Small differences in music can be deeply meaningful.

Janacek was a master at it. From the rapid ringing of the xylophone at the beginning of Jenufa, to the repeated - but never the same - motifs, he paints the trauma of dysfunctional family life. It's like EastEnders as if it were a Rembrandt painting. But Janacek knew it was all about repetition, and all about the differences we discern in repetition. Being the artist that he was, he not only paints the family trauma of a young woman whose bastard baby is murdered by her stepmother, but expresses through sound what he knows of the dynamics of human feeling and social tension. These are the dynamics he represents with repetition.

Stockholm is a bit like that tied-up world of social expectations, where everything has to run through rules which ensure cohesion and regularity. Janacek knows that these regularities are an illusion - messy real life seeps through the cracks, and brings the small differences which become most meaningful. We gaze at the cities of global capitalism as we might gaze on many different faces: each face is essentially the same; yet each new face, subtly different, drives us to seek another different one.

Global capitalism offers less difference than human faces. If it is a form of oppression, it is because it leads us to generate the difference which it itself does not provide. It is characterised by a loss of variety. But the consequence of the loss of variety it produces is the production of variety in the gaps which it can't control. Increasingly those gaps are finding political expression.




Tuesday, 14 March 2017

Relative Entropy in Risk Analysis and Education

Sometimes patients die in hospital when they shouldn't have done. When this happens, there is an elaborate process of inquiry which examines all the different causal factors which might have produced the accident. The point of this process is to attempt to mitigate the risk of future incidents.

I've been reading through a number of these reports. The level of detail and the richness of description of different levels of the problem is impressive. I'm left wondering "Where is this level of description in education?". It's simply not there - apart from in fiction. In education, we move from year to year, with various unfortunate (but rarely deadly) incidents occurring - but no rich description of what happens. We often console ourselves that "Nobody dies in education": but it is probably because nobody dies that there is no serious study of what happens; and it is not true that nobody dies - it's just that nobody dies quickly.

However, a second issue arises from thinking about adverse incidents in healthcare: despite the richness of the description, and the analytical probing of the investigating team and the identification of "root causes" - mitigation of error does not occur. In many cases, serious incidents keep on happening. There appears to be little organisational learning.

In education, a lack of organisational learning is endemic. But whilst this is rarely seen to be a problem at high levels of educational management, there appears to be a greater chance of it being taken seriously in healthcare. The problem lies in ways of thinking about the problem - and I think, if it can be addressed in health, we can use the same techniques in education.

Accidents happen because a system's model of the world is wrong. In ordinary life, when we trip up, or fail to kick a ball in the right direction, we recalibrate our system to correct the error. This is systemic learning. What changes results not from an analysis of all the different components of our knowledge, but from an analysis of the relations between the different components of our knowledge. Recalibration is a shifting of relations: facts or procedures may change as a result of this, but they are not the thing which is directly changed.

How to measure relations? Shannon entropy gives some way of measuring the surprise in a particular description of the world, but not of its relations to other descriptions. Shannon's "mutual information" is more relational in the sense that it measures the common ground between two descriptions. Right now I'm most interested in the idea of relative entropy. This measures the distance between two probability distributions. Over time, the distance between two different descriptions can be assessed: sometimes one description will change in pretty much the same way as another - it might be taken as an index for the other. In such a case, the distance between the distributions is very small. At other times the distance is large - the two descriptions work independently.

These relations can characterise a situation where one description strongly constrains another - for example, a description of drug administration as against the health of the patient (if the drug is effective). Equally, two descriptions might be independent with the distribution of one having no effect on the other. However, even in this case, each of those descriptions might have constraining factors which connect the two descriptions at a deeper level.
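
To make these measures concrete, here is a minimal sketch (my own illustration - the ward scenario, the data and the category labels are invented assumptions; real incident descriptions would be far richer):

```python
# A minimal sketch, assuming two toy "descriptions": was a drug given, and did the patient improve?
from collections import Counter
import math

def entropy(xs):
    """Shannon entropy H(X) in bits: the surprise in one description."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): the common ground between two descriptions."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def conditional_dist(xs, ys, x_value, categories):
    """Smoothed distribution of ys on the occasions where xs == x_value."""
    selected = [y for x, y in zip(xs, ys) if x == x_value]
    counts = Counter(selected)
    total = len(selected) + len(categories)
    return {c: (counts[c] + 1) / total for c in categories}

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits between two distributions."""
    return sum(p[c] * math.log2(p[c] / q[c]) for c in p)

# Hypothetical paired daily observations.
drug    = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
outcome = ["up", "up", "down", "up", "down", "up", "down", "up"]

print(entropy(drug), entropy(outcome))     # surprise in each description on its own
print(mutual_information(drug, outcome))   # how strongly one description constrains the other

# Distance between the "patient" description with and without the drug:
p = conditional_dist(drug, outcome, "yes", ["up", "down"])
q = conditional_dist(drug, outcome, "no", ["up", "down"])
print(relative_entropy(p, q))
```

Computed over successive time windows, numbers like these would give a crude picture of how the relations between descriptions shift - which is what any recalibration would have to act upon.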

I'm intrigued as to whether management might look at the relative entropy between an organisation's descriptions as a way of reconfiguring those relations, and so recalibrating the organisation in the light of an accident. Health is a good place to explore this (and I'm in a good position to do it). If it works, however, it presents new possibilities for thinking about the way educational management should operate.

Saturday, 11 March 2017

Status, Trust and Emerging Technological Threats to Universities

Universities appear to have successfully neutralized the existential threat that appeared to be posed by technology. Early debates about technological personalisation and “Personal Learning Environments” envisaged new forms of education where flexible and personalised learning coordinated with new tools would replace traditional educational structures. It was argued that the constraints of classrooms, timetables, curricula and exams would be replaced with approaches to education which would fit learners rather than demanding that learners fit the constraints of institutions.

In history, new technologies are often initially commandeered to reinforce rather than transform existing institutional structures and practices. Printing, for example, was initially seized on by the Catholic Church as a means of mass-producing indulgences. It took 80 years for the impact of the technology to transform (and almost destroy) the institution.

Educational technology is in its “early Gutenberg” phase – it reinforces and amplifies the curriculum structures (the VLE/LMS), assessment practices (plagiarism detection, multiple choice exams, and emerging automatic essay marking), produces giant classrooms (MOOCs), increases institutional authority and status (bibliometrics, QS rankings), and (as with the printing of indulgences) ramps-up a financialisation process where education is increasingly seen as an ‘industry’ with vast profits being made in many areas on the back of student debt. As must have appeared to the Catholic Church hierarchy in the 1460s, the institution seems to have technology fully under its control. So what could possibly go wrong?

Whilst educational technologists have been attempting to transform education for over 50 years, it appears to be not education but employment which is being turned upside-down by technology. Today almost anybody can be a taxi driver (Uber), a courier (Deliveroo) or a hotelier (AirBnB). Similar business models are colonising medicine (e.g. PushDoctor), whilst virtual currencies like BitCoin dispense with the institutional authority of a bank, and the underlying technology of Blockchain looks set to transform contract law, among many other things. What’s happening? What might it mean for education?

What is unfolding is an “internet of trust”. Uber and Bitcoin work because they are trusted technologies. Where trust would once have been invested in the badge of an institution (a bank, a local taxi firm) it is now invested in an algorithm. The traditional structures of education depend on trust and status: a degree from Oxford is not seen in the same way as a degree from Bolton. The stamp of the institution on the status of individuals is acquired by bureaucratic and cumbersome processes of assessment and “quality control”. It is precisely this institutional rigidity which Blockchain and Uber address by reconfiguring the constraints of an operation and transforming the way transactions are managed.

I've been working on this in projects on medical education in China, and organisational risk in hospitals, exploring new technological approaches to assessment and status. It's some of the most interesting and exciting work I've ever been involved in. Cybernetics is important: it clearly demonstrates the seriousness of the threat posed by technology to existing institutional life – and suggests what institutions might do about it.

Tuesday, 7 March 2017

Trump Supporters and Susanne Langer on Music and Expression

Langer’s “Philosophy in a New Key” has sat on my bookshelf for years, but it’s been one of those books I have always had trouble getting into, whilst at the same time knowing that it is an important book. Although it makes plenty of musical references, not just in the title, it is not a book about music. It is a philosophical book about expression and aesthetic communication. Langer deals with expression in art, religion, primitive society, politics... and music. She sees the world through a musical lens - and I believe this is very important for our time now - particularly our politics.

This is a fascinating and entertaining interview with two Trump supporters by Evan Davis:


There's a lot of emotion going on there. Now here's Langer:

"Whenever people vehemently reject a proposition, they do so not because it simply does not recommend itself, but because it does, and yet it's acceptance threatens to hamper their thinking in some important way. If they are unable to define the exact mischief it would do, they just call it "degrading", "materialistic", "pernicious" or any other bad name. Their judgement may be fuzzy, but the intuition they are trying to rationalize is right; to accept the opponent's proposition as it stands, would lead to unhappy consequences.
So it is with "significant form" in music: to tie any tonal structure to a specific and speakable meaning would limit musical imagination, and probably substitute a preoccupation with feelings for a whole-hearted attention to music. "An inward singing" says Hanslick, "and not an inward feeling, prompts a gifted person to compose a musical piece". Therefore it does not matter what feelings are afterward attributed to it, or to him; his responsibility is only to articulate the "dynamic tonal form".
It is a peculiar fact that some musical forms seem to bear a sad and a happy interpretation equally well. At first sight that looks paradoxical; but it really has perfectly good reasons, which do not invalidate the notion of emotive significance, but do bear out the right-mindedness of thinkers who recoil from the admission of specific meanings. For what music can actually reflect is only the morphology of feeling; and it is quite plausible that some sad and some happy conditions may have a very similar morphology." (Philosophy in a New Key, p 238)

Langer’s philosophical foundation is the Wittgenstein of the Tractatus. She takes Wittgenstein’s “picture theory” of meaning (whereby the logical form of a proposition is seen as a representation of things in the world), and adapts it to say that artistic expression is a “picture of emotion” - or here, a picture of "the morphology of feeling". Of course, this was written before Wittgenstein’s attention shifted to the way that language is used in everyday life – and to the role of expression of language. However, I think Langer makes a contribution which is also helpful in considering Wittgenstein’s later view of communication. For Langer, the logical expression of feeling is not of the individual artist’s feeling; it is an epistemological position about feeling in general. In other words, composers (and other artists) express what they know about how emotions work through the creation of an artefact of homologous form.

I think this could be right – it makes a key distinction about emotion and expression which would take the cry of a baby as “I am feeling unhappy” to the artist’s representation of that cry as “this is what I believe feeling unhappy is”. Artists are epistemologists working below the level of language. Ironically, Langer’s Wittgensteinian approach digs into precisely what he famously said “Whereof one cannot speak, thereof one must be silent”.

I'm not blaming Wittgenstein for Trump (how unfair would that be?!), but we have passed over too many things in silence, only to concentrate on the rational and technocratic. Trump is a technocratic and rationalised response to the alienation which this silence has produced. 

Of course, many questions remain: What Langer doesn’t deal with is how these artistic epistemological propositions are communicated. How is it that we read the artist’s proposition? How is it that on being moved to tears, we might learn something of what it is to be moved to tears? And, perhaps most importantly, is her theory of the "communication of the morphology of feeling" universally true? It might work for Rigoletto or Beethoven's 9th, but does it work for design or architecture? I might prefer to talk about the "morphology of being" rather than feeling...


Friday, 3 March 2017

Gombrich, Ashby and Seth on Consciousness

I attended the Manchester Intervarsity club last night for a discussion about “The neuroscience of consciousness”. We watched a video of a presentation by Anil Seth. I wouldn’t necessarily have paid much attention to this had it not been for the meeting - I’m glad I went along.



For all the neuro fetishism and misplaced confidence in the ability to create metrics of consciousness (which is partly what Seth is about) – stuff which makes me uneasy – Seth’s deeper theorizing draws on Ross Ashby and Ernst Gombrich. That’s useful, because Seth is a mainstream cognitive psychologist looking at cybernetics, which enables today’s cyberneticians to make references to empirical things which are going on now, rather than stuff which was happening in the 1960s. (He doesn’t mention Bill Powers’ “Perceptual Control Theory” – but that is perhaps the closest correlate of what he is articulating).
Seth builds on Ashby’s central idea of constraint: that perception involves cognitive processes of prediction of possibilities which are constrained by the senses. In essence, consciousness is cybernetic: we all observe “what might have happened but did not”. Seth applies the principle not only to perception of the external environment, but to the body. This isn’t a new idea. Apart from Ashby, Robert Rosen’s work in biology, and Daniel Dubois’ mathematical articulation of anticipatory systems (not forgetting Loet Leydesdorff’s unfolding of these ideas at the social level) are all fishing in the same pond.
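A cartoon of this idea (this is not Seth's model; the numbers and names are invented) is a loop in which an internal prediction is repeatedly corrected by what the senses report:

import random

hidden_state = 5.0        # what is actually out there (invented)
belief = 0.0              # the system's current prediction
learning_rate = 0.3

for step in range(10):
    sensed = hidden_state + random.gauss(0, 0.5)   # the senses constrain...
    prediction_error = sensed - belief
    belief += learning_rate * prediction_error     # ...and the prediction is revised
    print(round(belief, 2))

Consciousness, on Seth's account, is rather more than a delta rule like this; the sketch only shows the bare sense in which prediction is constrained by sensing.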

I think this is basically right, but if I were to take issue with him, there is an implicit assumption (highlighted well by Rupert Sheldrake) that the mind is in the head. I also think Seth is unaware of the sophistication of Ashby’s thought with regard to problems like “analogy”, “induction”, “regularity”, “isomorphism” and so on. But that means there’s a discussion here. The problem with ignoring the first point is that consciousness is seen as an apolitical issue. The problem is immediately apparent in Seth’s work on measuring consciousness (John Searle also suffers from the same problem) – the possibility of “consciousness pills”, of tests for “how conscious is your child?”. You don’t have to be Aldous Huxley to work out the implications.

Gombrich’s influence on Seth is perhaps more subtle – but I think it is equally important. Seth takes from Gombrich the basic assumptions of Gestalt psychology, and the role of reflexive processes in “making reality”. But this gets more interesting. Gombrich’s work on pattern directly referenced information theory (particularly in his “The Sense of Order”), and by implication, the role of information redundancy. Gombrich’s social network included two other Viennese émigrés: Friedrich Hayek and Karl Popper. I find that all three were very close in their thinking, not just in their friendship.

Hayek also wrote a book about consciousness (his book of 1952 – “The Sensory Order”), and he clearly understood the cybernetic principles which Ashby articulated. Stafford Beer, on meeting him, apparently declared “At last! An economist who understands cybernetics” – only to revoke any approval of Hayek on becoming aware of his right-wing sympathies, and particularly his support for Pinochet and Thatcher. Whilst I share Beer’s disgust at this, Hayek’s work remains of the highest order and should not be lost sight of.

But Hayek is a warning for Seth. Seth’s idea of consciousness leads to fascism. Nick Land, the British philosopher who has become the intellectual voice of the Alt-Right, and who most articulately utters a form of second-order cybernetics, shows us exactly where this stuff leads.

The way out of this is to look more deeply at Ashby and Gombrich. The relationship between constraint, information and description is fundamental. Seth gives us a description of consciousness. Shakespeare does the same – but there is a difference between the two. Gombrich knew very clearly what the difference is; Ashby knew something about how different descriptions interact with one another, which is where that difference lies.

Sunday, 26 February 2017

Meaning, Multiple Description and Organisational Risk in Healthcare

There's a fascinating passage in Von Foerster's paper on "Perception of the future and the future of perception" where he speculates (with the help of Herbert Brün):
"Wouldn’t it be fascinating to contemplate an educational system that would ask of its students to answer “legitimate questions” that is questions to which the answers are unknown (H. Brün in a personal communication). Would it not be even more fascinating to conceive of a society that would establish such an educational system? The necessary condition for such an utopia is that its members perceive one another as autonomous, non-trivial beings. Such a society shall make, I predict, some of the most astounding discoveries. Just for the record, I shall list the following three:
  1. “Education is neither a right nor a privilege: it is a necessity.” 
  2. “Education is learning to ask legitimate questions.”

A society who has made these two discoveries will ultimately be able to discover the third and most utopian one:
  3. “A is better off when B is better off.” (Von Foerster, Understanding Understanding, p209)
There are occasions in education where "legitimate questions" are asked. When they are asked, the responses are always meaningful. That this should be the case is partly a function of what a "legitimate question" is: by being a question about which the answers are unknown, it is also - by definition - an invitation to the production of many possible descriptions. The meaningfulness lies in the synergising of these many possible descriptions into some kind of formulation which satisfies the questioner that the complexity of the issue has been captured. It is, however, difficult to analyse what all the possible descriptions are: we tend not to have them available to us, and so our ability to analyse the meaningfulness of the answers to legitimate questions is hampered.

One domain of inquiry which is more specific in its identification of the multiplicity of description is in the realm of Organisational Risk in healthcare.

In the wake of a serious incident in a hospital, there is a process of making descriptions about the different dimensions of causal factors which might have led to the incident. So descriptions are made about the actors involved - doctors, patients, nurses, etc. Then there are descriptions about the protocols they should follow, the routine on the wards, the labelling of drugs, the medical experience of each individual, the assumptions of knowledge of each actor, power relations, and so on. From these multiple descriptions, a judgement is made about the causes of the accident and recommendations are eventually made which aim to prevent the accident happening again.

Recommendations are often ineffective - which raises the question of how the judgements about the causes come to be seen as satisfactory and plausible by the investigating team. The processes whereby the judgement about the incident becomes meaningful cannot relate to the causal relations between the different factors, partly because there are many descriptions of each of those different factors. The meaningfulness must arise from the diversity of possible descriptions: the judgement must have generative power in being able to produce the multiplicity of descriptions which are made about the incident. Meaning arises at a boundary between synergies of multiple descriptions of events, and the multiple descriptions of that synergy. Meaning is a function of constraint.

Because meaning is a function of constraint, and because the constraints which produce an incident are different from the constraints which produce a judgement about the causes of that incident, particular care must be taken in understanding the dynamics of constraint. To understand these dynamics, it is important that "legitimate questions" are asked. The asking of a legitimate question like "Why did x do this?" is an invitation to generate multiple descriptions which contribute to the generation of a constraint. The problem with incident investigation processes is that they seek to reduce the number of descriptions when they should increase them.

The same problem applies to education. Educational processes seek to attenuate descriptions of the world to those which appear in a textbook. But questions which appear in a textbook are not legitimate questions. The questions which aren't in the textbook are the legitimate ones.

How do we get from that to "A is better off when B is better off"? The reason is, I think, that B is A's means of opening up new legitimate questions, and that the more legitimate questions which are raised, and the more descriptions which are generated, the better the judgements which will be formed.

Von Foerster tells this story about the interaction between a Great Inquisitor and a Holy man performing miracles:
Maybe you remember the story Ivan Karamazov makes up in order to intellectually needle his younger brother Alyosha. The story is that of the Great Inquisitor. As you recall, the Great Inquisitor walks on a very pleasant afternoon through his town, I believe it is Salamanca; he is in good spirits. In the morning he has burned at the stakes about a hundred and twenty heretics, he has done a good job, everything is fine. Suddenly there is a crowd of people in front of him, he moves closer to see what’s going on, and he sees a stranger who is putting his hand onto a lame person, and that lame one can walk. Then a blind girl is brought before him, the stranger is putting his hand on her eyes, and she can see. The Great Inquisitor knows immediately who He is, and he says to his henchmen: “Arrest this man.” They jump and arrest this man and put Him into jail. In the night the Great Inquisitor visits the stranger in his cell and he says: “Look, I know who You are, troublemaker. It took us one thousand and five hundred years to straighten out the troubles you have sown. You know very well that people can’t make decisions by themselves. You know very well people can’t be free. We have to make their decisions. We tell them who they are to be. You know that very well. Therefore, I shall burn You at the stakes tomorrow.” The stranger stands up, embraces the Great Inquisitor and kisses him. The Great Inquisitor walks out, but, as he leaves the cell, he does not close the door, and the stranger disappears in the darkness of the night.

Friday, 24 February 2017

What is an interface?

I'm writing a paper at the moment on the cybernetics of organisational risk in hospitals. Most of my thinking about cybernetics has revolved around self-regulating functions in one form or another - most notably in things like Beer's Viable System Model, or Ashby's Homeostat. In doing this kind of analysis, we tend to draw diagrams with lines and boxes, rather like this:
Our focus is on the relations between the boxes (although attention is often drawn to the labels in the boxes). The practice of Beer's cybernetic management analysis involves identifying the components of an organisation which map onto the boxes in the diagram, and identifying the different levels of recursion at which those components operate. The valuable thing in this exercise is usually the conversation that emerges as stakeholders talk about their experience of the organisational situation. 

The problem is that modern organisations are so fluid, it is very hard to identify clearly which components are which: the boundaries between things are continually redrawn - being blurred, dissolved, shifted, etc. So looking at the diagram above again, what is interesting is not the lines connecting the boxes, or the boxes themselves - it is the lines around the boxes which are most important. 

For Beer (and Ashby), what occurs between an organism and its environment is transduction - the conversion of one set of signals, or a form of energy, into another. Beer's genius was to map the engineering concept of transduction onto social systems, pointing out that transduction in social systems must involve the management of complexity. Between an organism of complexity x, and an environment of complexity y, where y>x, the viability of the organism could be achieved through a combination of the attenuation of the environment by the organism, and the amplification of complexity within the organism. The transduction lies in this pattern of amplification and attenuation. 
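To make the arithmetic of this concrete, here is a deliberately crude sketch in Python (the numbers and factors are invented; nothing in Beer hangs on them). Variety is counted naively as a number of distinguishable states, and "viability" as the organism's effective variety matching what reaches it across the boundary:

# Toy model of Beer's attenuation/amplification balance.
# Variety is counted here simply as a number of distinguishable states.

environment_variety = 10_000   # y: states the environment can present
organism_variety = 100         # x: states the organism can distinguish (y > x)

attenuation_factor = 50        # the boundary filters the environment (skin, admissions desk...)
amplification_factor = 2       # the organism amplifies its own variety (tools, procedures...)

variety_reaching_organism = environment_variety / attenuation_factor
effective_organism_variety = organism_variety * amplification_factor

# Viability, in this crude sense, requires the organism's effective variety
# to match or exceed the variety that actually reaches it across the boundary.
print(effective_organism_variety >= variety_reaching_organism)   # True: 200 >= 200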

But we shouldn't stop there. The transduction happens at the boundary. What actually happens in the boundary? 

This is a difficult question, but I'm increasingly aware of its importance. It is the same question as "What is an interface?". There is transduction between me and my computer screen now. There is transduction between my brain and my liver, or between a patient and the hospital admissions. But Beer's talk of amplification and attenuation, whilst useful because it maps on to ideas about technology "extending" the body (McLuhan), is less helpful when all our boundaries start to melt into some kind of Dali-esque confusion. 

What is clear is that there are differences in description on either side of the boundary. The patient's descriptions of the hospital are different from the hospital's description of the patient. The computer programmer's description of me is different from my description of the software. Every description contains a dynamic of constraints: the constraints bearing on the computer programmer are very different from the constraints I operate within when using their tools (the constraints which they created). So the transduction appears to do something to the descriptions...

But equally, this may be the wrong way of looking at it. Perhaps we see the boundary, and observe the transduction, because the boundary is an emergent phenomenon arising from two systems with different constraint dynamics. The transduction is a kind of knotted-nexus. Labelling the boundary and identifying the transduction merely codifies what is in reality a dynamic process. 

I find this a more useful way of thinking about it, because it retains the possibility that constraints on either side of the boundary might be reconfigured, and as a result every other distinction, and the boundary, might shift. We're seeing this happen a lot around us at the moment. Rather than talk about transductions and interfaces, we would be better getting to grips with the dynamics of interacting constraints.

Sunday, 19 February 2017

Revisiting Inquiry Based Learning: Uncertainty-based teaching

Through Liverpool University, I've become re-engaged in the issue of Inquiry-Based Learning (IBL) - this time under the guise of research-based teaching and learning. IBL was one of the major things that I worked on at the University of Bolton's Institute for Educational Cybernetics, where we created a curriculum framework for IBL courses, called IDIBL. At Bolton, it didn't work as well as we'd hoped, although a similar initiative had worked quite well at Anglia Ruskin University, and I've always felt that it was the right thing to do. It's good to see many of the ideas return - but this is an opportunity to rethink things.

The problem with IBL is that it can so often seem (as I overheard two Manchester University students complain about their IBL course) that it involves "the teachers not telling us anything, and us having to work stuff out for ourselves". This happens because IBL aims to loosen the vertical curriculum structure which "delivers" knowledge and skill from teacher to student (via textbook, VLE, exams), and to reinforce horizontal self-organisation among students. On IDIBL this horizontal self-organisation was meant to happen in online communities - a similar idea to Siemens and Downes' original MOOC. However, it turned out that establishing self-organising online learning communities was much more difficult than anticipated.

However, many teachers are naturally skilled at reducing the amount of vertical coordination in teaching. One engineering lecturer in Liverpool said "I used to go to lectures with loads of Powerpoint slides; now I go with a blank piece of paper, and use a visualiser to project my notes as we start a discussion". However, he also said "I think the dialogue in the class works with a relatively small number of students. I don't think it would work with a large number."

I think in order to confirm or confound that hypothesis we need a better model of the problem. Dialogue is the key word: dialogue is what self-organisation really looks like in education. Online, I don't think what we get is dialogue as such. But we get something else. The differences between the different situations relate to what the phenomenologists call "inter-subjectivity" - the understanding we reach of each other's 'inner worlds'. This, I think, is what is powerful in the engineering lecturer's technique - he reveals something of the inner world. If you watched Mozart improvising, or Picasso doodling, you would get a similar impression.

Part of what happens is a shared experience of time. Alfred Schutz points out that in pure intersubjective experience of face-to-face engagement (what he calls the "pure we-relation"), we "get old together". But when we watch Picasso doodling, we can "get old together" with him - even though he's dead. Doesn't it have a similar quality of revealing his inner life?



What if each of us did this kind of thing as part of "maintaining a dialogue"? What would it require? What are its properties?

I think the requirements are "courage", and the essential property of doing something like this is that it is an "expression of uncertainty".

Uncertainty is crucial to understanding IBL. It is not that self-organisation should be imposed on learners by teachers (which is what the Manchester students were complaining about). Self-organisation is a natural consequence of shifting the focus from certainty to uncertainty.

We are at a strange moment in the history of science. Today's science is data-driven, and largely contingent and probabilistic. Yet within this uncertain scientific world, we insist on maintaining the communication practices of the 18th century - journals speak of Newtonian certainties, evidence and so on. We are not generally good at appearing uncertain - either in front of our peers, or our students.

IBL, as it was conceived in IDIBL, was a pedagogic 'certainty' for those of us who devised it, and there was an asymmetry between our certainty and the uncertainty we aimed to impose on the students. The engineering lecturer engages in uncertainty-based teaching: he is not sure where the lesson will go, and there will be areas in the discussion which will throw up things which he isn't sure about. Yet there are other things - and skills - which he is sure about. It's all there; it's all modelled for the students. In the mix of things about which one is certain, and the things about which one is uncertain, there is a clue as to what is happening:

It is not knowledge, but the constraints of knowing which are communicated.

Picasso communicates the constraints of his drawing - the pen, the page, time, the movements of his body, his emerging intentions. He gives a glimpse as to how he negotiates them. The engineering lecturer does the same.

In conventional teaching, we rarely talk about our constraints. Most of the time, constraints are taken for granted - time, the lecture space, the online space, the assessment, prior knowledge, skill (or lack of it), and so on. In IBL, we forced self-organisation as a new kind of constraint - but again, failed to really discuss it as a constraint, or why it might be there. But to communicate uncertainty, the only thing that can be done is to be open and honest about the constraints in which we all try to fathom what is happening around us.

Thursday, 16 February 2017

The problem of Information and Ergodicity

Information is not ergodic: the average surprisingness of an entire message is not the same as the surprisingness of a section of the message. So how does this affect the way we use Shannon's equations? What makes information non-ergodic is the continual transformations of the games that we play when we communicate. A surprise in one game is different from a surprise in another.
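A toy sketch in Python makes the point (the sequences are invented and stand for nothing musical): a message whose "rules" change half-way through has a whole-message entropy quite different from the entropy of either of its sections.

from collections import Counter
from math import log2

def shannon_entropy(symbols):
    # Average surprisingness, in bits per symbol, of a sequence.
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

game_a = "abababababababab"   # one game, with its own rules
game_b = "cdcdcdcdcdcdcdcd"   # another game
message = game_a + game_b     # the rules change half-way through

print(shannon_entropy(game_a))    # 1.0 bit per symbol
print(shannon_entropy(game_b))    # 1.0 bit per symbol
print(shannon_entropy(message))   # 2.0 bits per symbol

Each section averages 1 bit per symbol; the whole message gives 2. The section and the ensemble disagree, which is all that the failure of ergodicity means here.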

The transformation from one game to another is a key part of Nigel Howard's metagame theory where a shift from a game with one set of rules, to a metagame of that game is prompted by a "paradox of rationality" - basically, a crisis. It is a bit like shifting up a level of bifurcation. Maybe there's even a shift up from one energy level to another (which makes the connection to Schroedinger and Kauffman). What's interesting is the cause of the shift.

In Howard, the crisis - his "paradox" of rationality - means that only the jump to the metagame can resolve the contradiction of the current game. The shift is a redescription of existing descriptions in new terms.

My examples for this are all emotional in some way - the experience of "crisis" is very real, but I think Luhmann is right that these things are social-systemic, not psychological. So, for example, in music the climax of the Liebestod at the end of Wagner's Tristan is a moment when constraints of pitch, harmony, rhythm, etc all coincide. It has a curious homology to orgasm. In intellectual life, Koestler's idea of "Bisociation" is also a synergy of multiple descriptions in a similar form. Luhmann's 'interpenetration' is another example, as is Schutz's 'intersubjectivity'. Bateson's levels of learning and Double-bind are further examples. Politically, there are some obvious examples of "regime change" at the moment - changing the game most obviously!

The result of this kind of process is that a distinction is made between the old game and the new one: a boundary produced. On one side of the boundary, there is a degree of entropy in the number of descriptions. On the other, there is a degree of synergy between those descriptions. It is these processes of continual game-change which are non-ergodic.

Whilst I doubt whether we should make Shannon entropy calculations across different games, Shannon is useful for counting within a single game - that would indicate how close a "regime change" might be. Shannon mutual information is itself a kind of game between sender and receiver. I think it would also be worth counting "game changes" - that seems do-able to me - the boundary-markers are the moments of synergy. [my physics colleague Peter Rowlands mentioned to me that he thought that the things that Boltzmann actually counted in statistical thermodynamics were 'bifurcations'. I don't fully understand what he means, but an intuitive reaction says that is a similar thing]
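One crude way of "counting within a single game" is sketched below in Python, with the same invented sequence as before (repeated here so that the sketch stands alone): entropy computed over a sliding window stays flat inside each game and jumps where the rules change.

from collections import Counter
from math import log2

def shannon_entropy(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def windowed_entropy(symbols, width=8):
    # Entropy counted within a single window: a within-game measure.
    return [round(shannon_entropy(symbols[i:i + width]), 2)
            for i in range(len(symbols) - width + 1)]

message = "abababababababab" + "cdcdcdcdcdcdcdcd"
print(windowed_entropy(message))   # flat at 1.0 inside each game, rising to 2.0 around the join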

In Stuart Kauffman's "Investigations" he goes into quite a lot of detail about these transformation processes. It has woolly edges... but "Investigations" is a powerful book - there's some value in it (I didn't like it at first). Kauffman's introduction to Ulanowicz's book "A Third Window" is quite revealing in terms of mapping the space between his ideas and those which centre more around Shannon. Both lines of thought have powerful contributions to make.

Tuesday, 7 February 2017

Finding the "Aesthetic Line"

The dynamics of human experience are such that there is continual blurring and shifting of focus. There are rare moments when focus brings revelation, insight and emancipation. At such moments, the contrapuntal lines of experience map each human being on each other. These are moments of solidarity. The opposite of this experience is where everyone is in the fog: every individual cannot see their own steps, let alone the steps of others. The feeling is of isolation, loss: anxiety pervades everything. At such times, it is not surprising that people clutch at straws: it's in these moments that the strong man appears, whipping up a collective fervour with tales about how things are, spewing falsehoods and directing hatred in perverse directions.

What it is that comes into focus at moments of revelation is a line: a set of connections between different levels of experience. It connects basic and immediate drives, with deep spiritual needs, the need to be loved, the need to belong. But I think lines may be drawn in two ways: either by excluding things, or by including things. The latter is much more difficult than the former. The former is the beloved technique of dictators.

We can consider two lines: the "Trump" line and the "Aesthetic" line. The Trump line is straight and unvarying. It 'straightens' all that it encounters (or perhaps, flattens it). It asserts the positive identity of itself and it threatens the existence of all which isn’t like it. By contrast, the "Aesthetic line" is sinuous, it bends and curls, whilst always maintaining its direction and purpose. The aesthetic line is like a tree branch or a river. It divides into other branches, or tributaries, always embracing the difference of its tributaries, but always aware of its one-ness. Each bend and curve in the aesthetic line is an inflection: a moment of additional description, which complements the descriptions of the line so far. The aesthetic line is formed of multiple descriptions (the L-System, which produced the picture below, is a set of string 'descriptions'); the "Trump" line is a single description.
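For what it's worth, a minimal L-system sketch in Python (my own axiom and rewriting rule, not necessarily the one behind the picture) shows the sense in which such a line is a stack of redescriptions: each generation rewrites every symbol of the previous description.

# Minimal L-system: each generation is a redescription of the previous one.
axiom = "F"
rules = {"F": "F[+F]F[-F]F"}   # a common branching rule; '[' and ']' open and close a branch

def rewrite(description, rules):
    return "".join(rules.get(symbol, symbol) for symbol in description)

description = axiom
for generation in range(3):
    description = rewrite(description, rules)
    print(generation + 1, len(description), description[:40])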


In the fog of experience, lines are presented to us in a haze. We can only work out what they are by examining multiple hazy descriptions. The addition of hazy descriptions can gradually bring different levels of the line into focus. It is by accumulating descriptions that we work out what is what. It's a bit like the addition of waveforms which make up a complex wave in Fourier analysis. Each contributing waveform is a kind of "redundant description". But together they bring the concrete reality of a rich sound into focus.
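A small numpy sketch illustrates the accumulation (the particular amplitudes and frequencies are arbitrary): as each partial "description" is added, the residual difference from the rich waveform shrinks away.

import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
partials = [(1.0, 3), (0.5, 6), (0.25, 9)]          # (amplitude, cycles) - arbitrary choices
rich_sound = sum(a * np.sin(2 * np.pi * f * t) for a, f in partials)

accumulated = np.zeros_like(t)
for amplitude, frequency in partials:
    accumulated += amplitude * np.sin(2 * np.pi * frequency * t)
    residual = np.max(np.abs(rich_sound - accumulated))
    print(f"after adding the {frequency}-cycle description, residual = {residual:.3f}")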

Music and the arts reveal this kind of process. Any piece of music immediately presents multiple descriptions: a rhythm, melody, pitches, timbres, and so on. Each is an inflection on everything else; each inflection depends on everything else. On first presentation of multiple descriptions, we are in the fog. We cannot connect where things are going, where the lines are. Multiple descriptions at the same level clarify the situation. They negatively specify the pulse, direction, trajectory of where things are going. As the trajectory becomes clearer, new layers of description – descriptions about description, description about the ways in which descriptions are revealed, are all added. The deeper inflections become part of the whole. At some point, the "root" description – the description which generates all others - comes into view. There is a moment when the presentation of the root description – usually a harmony – brings a piece of music to a close.

The production of multiple descriptions and the emergence of lines do more to us than affect our own experience. They give us an insight into the experience of others. To find the line is to find the connections between one another – to see the inner world of others is to discover new ways of organising ourselves and transcending adversity.

I think of the aesthetic line in the context of Nigel Howard's meta-game trees. Howard makes the point that the ascent to a meta-level is the reaction to a process of confusion at the existing level. Howard provides another way of thinking about transcending double-binds: we change the game. So when descriptions are presented they are joined up in ways which might at first appear to contradict one another: this is the fog. The metagame tree needs to be reorganised in some way. New descriptions bring deeper reflection and articulation of more complete aesthetic lines. In this way, decisions are felt emotionally.

At an analytical level, what concerns us is the redundancy of description on the one hand, and the inflections between different descriptions on the other. This is easy to see in music: various elements, like rhythm, tonality, harmony, pitch can display redundancy. Each represents a different inflection of the line. Inherent in the production of redundancy is the need for novelty, and the need to generate new inflections at a deeper level - particularly when redundancy in a number of areas (like a continually repeated melody) is very high.

The use of technology also brings very high degrees of redundancy, creating the need for novelty and new forms of inflection. In technocratic environments, new forms of inflection can be prohibited quite easily. This produces a general frustration and depression, with the end result that a more radical "change of the game" is generated. Trump is the result of an inability to vary the inflections of experience which has been produced by a toxic mixture of neoliberalism and technology. Unfortunately, his success means that there is a positive-feedback mechanism which will make the finding of the aesthetic line even harder.


Saturday, 4 February 2017

Ashby's Experimental Method in "Design for a Brain"

Ashby's Design for a Brain is a remarkable book containing a lot of detail about how Ashby saw cybernetics as a science. Some of the most interesting passages concern his defence of his own methodology as he sought to create a "mechanical brain", building on his earlier work with homeostats. In contrast to much second-order cybernetics, Ashby remains a down-to-earth and practical scientist. But he is looking at the world in a different way: he is exploring constraints and relations rather than causation. The principal relation which concerns him is the relation between the experimenter and the experimental situation. In this, of course, he is very close to second-order cybernetics - but with a penetration of analytical thought which has been unfortunately overlooked by many of his cybernetic colleagues.
"It will be appreciated that every real 'machine' embodies no less than an infinite number of variables, most of which must of necessity be ignored. Thus if we were studying the swing of a pendulum in relation to its length we would be interested in its angular deviation at various times, but we would often ignore the chemical composition of the bob, the reflecting power of its surface, the electric conductivity of the suspending string, the specific gravity of the bob, its shape, the age of the alloy, its degree of bacterial contamination, and so on. The list of what might be ignored could be extended indefinitely. Faced with this infinite number of variables, the experimenter must, and of course does, select a definite number for examination - in other words, he defines the system." (Design for a Brain, pp15-16).

Ashby goes on to describe specific examples of empirical practice:

"In chemical dynamics the variables  are often the concentrations of substances. Selected concentrations are brought together, and from a definite moment are allowed to interact while the temperature is held constant. The experimenter records the changes which the concentrations undergo with time.
[...]
In experimental psychology, the variables might be "the number of mistakes made by a rat on a trial in a maze" and "the amount of cerebral cortex which has been removed surgically" [ugh!]. The second variable is permanently under the experimenter's control. The experimenter starts the experiment and observes how the first variable changes with time while the second variable is held constant, or caused to change in some prescribed manner.
While a single primary operation may seem to yield little information, the power of the method lies in the fact that the experimenter can repeat it with variations, and can relate the different responses to the different variations. Thus, after one primary operation the next may be varied in any of three ways: the system may be changed by the inclusion of new variables or by the omission of old; the initial state may be changed; or the prescribed courses may be changed. By applying these variations systematically, in different patterns and groupings, the different responses may be interrelated to yield relations.
By further orderly variations, these relations may be further interrelated to yield secondary, or hyper-relations; and so on. In this way the "machine" may be made to yield more and more complex information about its inner organisation." (pp17-18)
What Ashby was arguing was that the internal relations of a system - its internal constraints - are revealed by applying constraints to their investigation ("relation" and "constraint" Ashby saw as synonymous terms).

As to the detail of what it means to apply constraint, Ashby argues that it is about creating regularity. This is not to suggest that regularities are necessarily real, or external to the observer, but that they arise in the relations between the experimenter and their subject.

"If, on testing, a system is found not to be regular, the experimenter is faced with the common problem of what to do with a system that will not give reproducible results. Somehow he must get regularity. The practical details vary from case to case, but in principle the necessity is always the same: he must try a new system. This means that new variables must be added to the previous set, or, more rarely, some irrelevant variables omitted."
I find "he must try a new system" a very powerful statement. So often in educational theory, psychology - even cybernetics - regularity is assumed at various levels of the system. It is treated as a foundation upon which all other variables are tested. Maturana's autopoietic theory, for example, makes great statements about the need to adapt to regularities in perception: as if regularities are real, the mechnaism of perception is regular too, and what needs to happen is that the two come together in some way. Ashby doesn't say this. If a scientist discovers a regularity, it exists in the relationships of the experimental situation. If they fail to find regularities, they seek (create) another experimental situation. The conclusion one might draw from this is that there are no real regularities beyond a relationship.

Ashby wrestles with the idea of objectivity, always conscious that this is one of the fundamental criteria for a commonsense view of the world. He wants to make the distinction between a "natural system" and an "absolute system", and reflects on the challenges of making an association between the mechanisms of physics and those of biology. He asks:

"both science and common sense insist that if a system is to be studied with profit its variables must have some naturalness of association. But what is natural?"
His definition of naturalness of association is related to his definition of an "absolute system", where the state of a system is entirely dependent on its historical (previous) states. With an operating definition of an absolute system, he sets the criteria for defining a "natural association". These criteria insist on some alignment to common sense, and some sense of "objectivity". In effect this places the scientist's own reflexivity in the frame: "only experience can show whether it [an idea of a system] is faulty or sound". It is not to deny objectivity, but it is to resituate it as a relation, or a constraint.