Saturday 28 November 2015

Markets and Variety

One of the great claims for the triumph of capitalism centres on evidence of the variety of commodities which individuals could buy. In less "developed" societies, there was (say) only one variety of breakfast cereal (if they had breakfast cereal at all); in the supermarkets of the West there were hundreds of varieties. Consumers could choose in a free market, according to their ability to pay. What actually happened was that consumers dealt with the overwhelming complexity they faced by adopting habits and "brand allegiances", whilst the marketing departments of competing brands did their best to change consumer habits: consumers were made to feel guilty for purchasing the brand they could afford, instead of the one which they "ought" to have bought. The market developed ways for people to assuage their guilt, including ways of making them feel richer than they were: the capitalisation of short-term escapism became a long-term nightmare.

These ideas of "variety" and "choice" need re-inspecting - particularly in the light of the translation of these same concepts of choice to the world of education. It does not appear that marketisation in education has increased the variety of educational offerings. Why not? Whilst in education we might hope to see variety in the kinds of things that go on in institutions (not just lectures, or tedious modules and learning outcomes, but rich and diverse conversations, many exciting (maybe eccentric) academics, many ways of finding one's voice, new ways of mixing disciplines, new ways of gaining certification, and so on), perhaps we were mistaken to take the variety of breakfast cereals or Heinz's tin cans as our model in the first place.

For example, every week McDonalds seems to produce a 'new' burger. Except that it isn't a new burger at all. It's pretty much the same burger as all the others. What it does have is a new picture and a new name. The variety is on the surface, not in the substance. In former communist countries, this superficiality of variety is quite apparent. Moscow has a huge department store on Red Square called GUM. The shops glisten with handbags, cosmetics, coffee and so on - much like any mall anywhere in the world. And yet there is a remarkable lack of variety too. It's all the same stuff, repackaged behind different shop windows.

When capitalism measures profit, it analyses sales according to individual varieties. It will then develop those varieties according to their performance. It calls itself "Darwinian", although it's really Spencerian. As a policy, however, it is the antithesis of what happens in the natural world. Gregory Bateson makes the point eloquently...

"It is now empirically clear that Darwinian evolutionary theory contained a very great error in its identification of the unit of survival under natural selection. The unit which was believed to be crucial and around which the theory was set up was either the breeding individual or the family line or the sub-species or some similar homogeneous set of conspecifics. Now I suggest that the last hundred years have demonstrated empirically that if an organism or aggregate of organisms sets to work with a focus on its own survival and thinks that that is the way to select its adaptive moves, its "progress" ends up with a destroyed environment." (Steps to an Ecology of Mind, p457)

So what of variety and selection if the result is destruction? Is that really what nature does? Bateson is right, and that means a better definition of the concept of variety is required...

Thursday 26 November 2015

The #REF, #TEF and Contingency in Higher Education

Of all the warning signs about the terrible state of our Universities, the suicide last year of Stefan Grimm, Professor of Toxicology at Imperial College, was the most desperate. Like any unnecessary death – and certainly the tragedy of suicide – we are left asking "What if?" Not only the what-ifs of the professor's work – the ideas he was working on, the ideas he would have gone on to develop had he lived – but also the what-ifs of the fallout from his death: the damage to those who were implicated in it, the effect on friends and colleagues, the negative publicity, let alone the effects on those who loved him. What if organisational circumstances and institutional politics had weighed more in his favour? Grimm, despite being well-published, had been deemed by his departmental management not to have brought in enough money: by the laws of toxic managerialism, he had to go.

But his death touched a great many as they pondered the kind of madness we have arrived at. Viewed through the distorted mirror of academic metrics, his death had "impact". But it was a death: the end of a set of possibilities for what might have been. Whilst we are touched by the tragedy of suicide as if watching a university soap opera, the risk is to lose sight of exactly what is lost. What is lost with the death of someone like Grimm is contingency: it is the snuffing-out of possibilities and as-yet unrecognised ideas.

Contingencies in the University are not only at risk from tragic events like the death of Stefan Grimm. They are systematically being eroded by performance metrics like the REF, and now the TEF will have a similarly disastrous effect. Whilst contingency is at the heart of what Universities do, our current measures for the effectiveness of the university sector cannot see it. I want to suggest some ways of addressing this.

First of all, let’s consider how the REF removes contingencies in the system. Of all the possible brilliant ideas for research, only a few are likely to achieve impact and success, immediately rewarding the investment in them. There is no way of telling which of the many ideas, plans, individual academics, and so on, are likely to 'pay out' a successful return. This is partly because there is no single idea, plan or individual whose merit can be individually measured: success depends on the intellectual climate, market conditions, history, existing research trajectories and social networks. An individual measure of the likelihood of success - like publication - is on its own a poor indicator, particularly as it acquires a reputation as an indicator by which funding decisions are made.

Contingencies can be removed if we fail to see them. The easiest way of not seeing contingency is to see no differences between contingencies. This is to "analogise" contingency: to see that contingency x is the same as y – effectively to see x or y as 'superfluous' or 'redundant'. Academic judgements of quality are in large part identifications of analogies of arguments and results. Another way of removing contingency is to eliminate it: whatever original academic difference a piece of work presents, that difference is seen either not to fit the particular reductionist disciplinary criteria of a reviewer ("this is not about education, but economics…"), or to be published in an insufficiently "high-ranking" journal. Judgements of quality are judgements about the redundancy of ideas based on written communications – and redundant work can lead to redundant academics. As with peer review, analogies, redundancies and contingencies exist as relationships between reviewers and the things they review: there is no objective assessment, and there is no way of assessing what analogies or differences a reviewer is predisposed to identify in the first place. We understand this poorly, and little of it is available for inspection. Its consequence, the systematic removal of contingency from the system, is dire.

Of course, it might be argued that removing some contingency may sometimes be necessary, as a gardener might deadhead roses. But the gardener does this not to reduce contingency in the long run, but to maintain multiple contingencies of stems, leaves and flowers. In the university, contingencies of practices, ideas, relationships and conversations are necessary so that the institutional conditions are maintained to derive maximum benefit from the most appropriate ideas in the most appropriate conditions. The British Library and the Bodleian make a point of preserving contingencies by keeping a copy of everything that is published: one would hope this would be reflected in a similar culture in our universities, which have traditionally exhibited many contingencies – it is the principal distinction between higher learning and schooling.

The consequence of removing contingency is increasing rigidity in the system, producing an education system which knows only a few ways to respond to a fast-changing world. There are contingencies not only among the possible ideas which might be thought, researched and developed within the university; there are contingencies in ways of teaching; in the activities that are conducted by learners and teachers; in the ways learners are assessed; in the conditions within which teachers and learners can meet and talk; in the technological variety for maintaining conversations; and in the broader means by which conversations are sustained.

Contingencies are not only under attack from research budgets and assessment exercises. Government-inspired regulatory mechanisms are the handmaiden of marketing campaigns: good scores = good marketing = good recruitment. But marketisation produces its own pressures towards the removal of contingencies: the closure of whole departments like philosophy, concentration on popular subjects like IT or Business, not to mention the blinkered drive for 'STEM' as universities confuse science with textbook performances of useless sums. Alongside these pressures to remove academic contingencies is an attempt to remove contingencies in academic and pedagogical practice. The contingencies of university life are deeply interconnected: the contingencies of pedagogy have been eroded by learning outcomes, disciplinary reductionism, competency frameworks, and the various indicators of 'academic quality'. The recently-announced Teaching Excellence Framework amounts to a renewed assault on the contingencies of the classroom. An institution not recognised by the REF might nevertheless claim success in teaching, but if this success can only be defined through recognition in metrics, the TEF will reduce the diversity of teaching practice, drive out experimentation, and bureaucratise the process to produce outcomes that fit locally-defined criteria aimed at gaming success with national inspection. One university I know, its ear close to Westminster, announced its new strategy of being "Teaching Intensive, Research Informed" in a bid to find favour with the new regulatory climate. In a stroke, fearless pedagogical experimentation, diversity, freedom and flexibility become subsumed into 'intensive teaching' driven by metrics on teacher performance and 'student satisfaction', accompanied by implicit threats of redundancy, with the only real desire being that students stay on the course and continue to pay their fees.

The REF and the TEF are two sides of the same coin. Following a 'business-oriented' logic, their effect is to reduce contingencies in the University. But universities are unlike businesses precisely in their relationship to contingency: if universities lose contingencies they cease to be universities and become, at best, schools. What should we do?

We can and should be measuring the contingencies of the higher education system, and allocating funding according to a much broader conception of a higher education ecology. Ironically, the bibliometric approaches partly used in the REF take us half-way there. Typically, bibliometrics measure the 'mutual information' in discourses: those topics which recur across different contexts – the areas where contingency is lost. Contingencies sit in the background to this 'mutual information': in effect they operate as the "constraints" which produce repeated patterns of practice, and which, if probed, can unlock new research potential. New discoveries are made when we see things that we once thought were analogous to be fundamentally different, and then start to explore these differences.
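To make the bibliometric idea concrete, here is a minimal sketch (in Python, with invented data) of the kind of calculation involved: the mutual information between contexts and the topics which recur within them. High mutual information would indicate the repeated patterns described above; the department names and topics are entirely hypothetical.

```python
# A minimal sketch of mutual information between topics and contexts.
# The (context, topic) pairs are invented for illustration.
import math
from collections import Counter

def H(seq):
    """Shannon entropy of a sequence of (hashable) symbols."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

# (context, topic) pairs harvested from, say, paper abstracts (hypothetical)
observations = [("dept_A", "impact"), ("dept_A", "impact"), ("dept_B", "impact"),
                ("dept_B", "metrics"), ("dept_C", "impact"), ("dept_C", "metrics")]
contexts = [c for c, _ in observations]
topics = [t for _, t in observations]

# I(context; topic) = H(context) + H(topic) - H(context, topic)
I = H(contexts) + H(topics) - H(observations)
print(f"I(context; topic) = {I:.3f} bits")  # low here: 'impact' recurs across every department
```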

The contingencies of pedagogy are also measurable: if we took all the learning outcomes, all the assignment briefs, subject handbooks and so on in the country, we would see a high degree of 'mutual information' (of course, our 'quality regime' depends on this!). What are the constraints which produce this (apart from the QAA or its successor)? Why is there not more diversity? How can funding be targeted to generate more variety in pedagogic practice? If we are to get the balance right between contingency and coherence in our Universities, a much broader, but also more analytical, approach is required. Most importantly it has to sit outside marketisation – at the level of government: marketisation is one of many constraints which currently serve to reduce contingency. At the moment, the REF and the TEF both feed marketisation, producing a positive feedback loop. Higher education is out of control. The monitoring of levels of contingency would show where things are going wrong. We might hope that it would also help us to steer our higher education system to maximise, not reduce, its contingency. At the very least, we should aim to produce the conditions within which Stefan Grimm would still be alive thinking new ideas.

Tuesday 17 November 2015

Conservation of Constraint? - Some vague speculations about learning, music and violence

This is a very speculative post (well - that's partly what blogs should be about!). I've been ruminating about constraints for a number of years now. The technically measurable component of constraint presents itself in Shannon's redundancy measure. This is the complement of the entropy calculation, which measures the average uncertainty of a message: the constraint is the thing which must be present in order to produce that uncertainty. For example, with regard to the uncertainty between the words of a language, the grammar of that language performs this function. One of the functions that constraint performs is to ensure effective communication: grammars restrict choices, and structure things such that certain key aspects of meaning are emphasised or repeated.
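As a minimal sketch of what I mean by 'measurable' (in Python, with an invented example sentence), entropy and redundancy can be computed directly from symbol frequencies; the 27-symbol alphabet is an assumption:

```python
# A minimal sketch of Shannon's entropy and redundancy measures.
# The example text and the 27-symbol alphabet (letters plus space) are
# illustrative assumptions, not data from this post.
import math
from collections import Counter

def entropy(message: str) -> float:
    """Average uncertainty H = -sum(p * log2(p)) over observed symbols."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(message: str, alphabet_size: int) -> float:
    """Shannon redundancy R = 1 - H/Hmax, with Hmax = log2(alphabet size)."""
    return 1 - entropy(message) / math.log2(alphabet_size)

text = "the grammar of a language restricts choices and repeats structure"
print(redundancy(text, alphabet_size=27))  # constraint shows up as R > 0
```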

Redundancy in information theory can refer to a number of things. On the one hand, it might refer to 'repeated information'. If we are to send a message in a noisy environment, it might be necessary to repeat it a few times. This kind of redundancy plays out over time: I would like to call it 'diachronic redundancy' or 'diachronic constraint'. Alternatively, there is redundancy where a message is conveyed simultaneously in different ways: I might say "I don't understand" whilst at the same time shrugging my shoulders or shaking my head. Between the three different signals, the message is conveyed through a kind of connotative process. This type of redundancy is "synchronic redundancy", or perhaps "synchronic constraint".
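The distinction can be sketched in code. Below, both kinds of redundancy are given toy measurements (the signals are invented): diachronic redundancy as the predictability of repeated transmissions, synchronic redundancy as the mutual information between simultaneous channels.

```python
# A hedged sketch of the two kinds of redundancy; all data are invented.
import math
from collections import Counter

def H(seq):
    """Shannon entropy of a sequence of (hashable) symbols."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

# Diachronic redundancy: the same message sent three times over time.
# Treating each transmission as one event, the repeats are fully predictable.
transmissions = ["abcd", "abcd", "abcd"]
print(H(transmissions))  # 0.0 bits: nothing new after the first sending

# Synchronic redundancy: words and gesture carry the message simultaneously.
# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y) measures the overlap.
words   = ["no", "no", "yes", "no", "yes"]
gesture = ["shrug", "shrug", "nod", "shrug", "nod"]
print(H(words) + H(gesture) - H(list(zip(words, gesture))))
# equals H(words): the gesture fully duplicates what is said
```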

Human communication obviously takes place within both synchronic and diachronic dimensions. However, I find myself sometimes more focused on diachronic processes in time which express redundancy (something repetitive like typing, or walking, or any kind of repetitive sequential action). Other times, I am deeply immersed in a multi-sensory contemplation of many different signals: when I study a painting, or listen to music, or have a deep conversation with somebody face-to-face over a beer. This is more synchronic. Then I am mindful that the diachronic gives way to the synchronic in the way that action gives way to reflection; in the way that contemplation is balanced by action.

So here's my question: is constraint conserved in human relations? Is the sum of diachronic and synchronic constraint constant? (assuming we have a way of easily measuring each). Music may provide some grounds for investigating this perhaps: the difference between moments of harmonic richness and moments of rhythmic drive.

There is an added complication, however (of course!). Redundancy is measured by the formula:

R = 1 - H/Hmax

and von Foerster convincingly argues that self-organisation and development work by increasing the bounds of Hmax, the 'maximum entropy', so that self-organising systems become more complex. (I wrote about this here: http://dailyimprovisation.blogspot.co.uk/2015/09/learning-gain-and-measurement-of-order.html) So it may be that constraint isn't conserved exactly, but rather that the balance between diachronic and synchronic constraint gives rise to a mechanism for increasing the maximum entropy: increasing complexification. This, I think, is important in understanding the learning process.
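Von Foerster's move is easy to illustrate numerically. In this sketch (illustrative numbers only), the observed entropy H is held constant while the repertoire of possible states, and hence Hmax, grows: redundancy R rises without any change in H.

```python
# A sketch of von Foerster's argument: growing Hmax raises R = 1 - H/Hmax
# even when the observed entropy H stays the same. Numbers are illustrative.
import math

H_observed = 3.0  # bits, held constant
for n_states in (8, 16, 64, 1024):  # an expanding repertoire of possible states
    H_max = math.log2(n_states)
    R = 1 - H_observed / H_max
    print(f"{n_states:4d} states: Hmax = {H_max:5.2f} bits, R = {R:.2f}")
# R climbs from 0.00 towards 0.70: the system appears more 'organised'
# against its enlarged space of possibilities.
```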

Intuitively, we move between synchronic and diachronic constraints, as between contemplation and action, and along the way expand the domain within which constraints apply themselves. There's a link to Vygotsky here: the Zone of Proximal Development is a way of expressing the synchronic constraints (closeness of a teacher) in balance with the diachronic constraints (the particular activities a learner engages in on their own). Does the ZPD "draw out" synchronic constraints (teacher-pupil relationship) as diachronic constraint (scaffolded activity)?

Maybe there's also a link to terrorism (which is obviously on everyone's mind at the moment). What is it that leads people to carry out sequential, violent and repetitive activities like shooting people? This looks like a kind of diachronic constraint. The synchronic component is fanatical religion. Is terrorist violence a 'drawing-out' of repetitive diachronic activity from intense synchronic experience? Is this how the sense of exclusion and injustice (which is part of that synchronic experience) feeds into the execution of violent plans? Of course, in this case, it isn't stable; but is there an analysable relationship between the synchronic aspects and the diachronic execution? (Of course, the same applies to the military response by the state!). Perhaps this is over-thinking something at a very difficult time. But difficult times are valuable in producing a lot of thinking.... they are full of synchronic, multi-layered constraints.

Monday 16 November 2015

Music and Murder: Have our ears changed after the Paris attacks?

There are few moments where music has a direct semantic reference to concrete things: Beethoven did it in the various overtures to Fidelio - an off-stage trumpet meant an approaching army. With Napoleon at the gates of the city, most of those hearing this trumpet call in the Theater an der Wien in 1805 would have taken it as an explicit sign, rather than an abstract constituent of the music. Birtwistle has a telephone ringing in The Second Mrs Kong (ironically with a traditional bell, which in its contemporary setting would now be a kind of anachronism).

What we have now is the sound of rock music, heavy whining guitars, amplified vocals blasting out, with the entry of an arrhythmic cork-popping rat-tat-tat sound accompanied by screaming. In the months and years to come, will we be able to hear this without making a direct association with the terrible events of the weekend? Have our ears changed?

That rat-tat-tat sound becomes a dull, terrible trope. Just as when we hear 'La donna è mobile' in Rigoletto we know that trouble and tragedy are coming (Verdi's genius is to give his 'sign' the best tune!), so when we hear Palestrina, Beethoven, Bach or Mozart with a rat-tat-tat entry, it will stand for the opposite of what is expressed in the music.

9/11 afflicted our eyes with real-life images which we already knew from disaster movies. Paris may have changed our ears. That is a more profound thing.

Wednesday 11 November 2015

Information and Second-Order Cybernetics: Constructivist Foundations and Empirical Approaches

The central challenge in differentiating varieties of second-order cybernetics lies in disentangling epistemological differences where, on the surface, there are shared claims for epistemological coherence based on principles of reflexivity, circular causality and observation. Whilst it might be claimed that, for example, Maturana's Biology of Cognition is consistent with Luhmann's Social Systems theory, or with von Foerster's cybernetic cognitivism, discussions between scholars representing different varieties of second-order cybernetics soon find themselves in disagreements which appear almost sectarian in character. In these disputes, there appear to be two dimensions. On the one hand, there is conflict about what each variety of second-order cybernetics stands against, and how each variety may accuse the other of lending tacit support to a position which both of them claim to oppose. On the other hand, there are differences that arise from uninspected assumptions concerning those principles which all varieties of second-order cybernetics uphold, amongst which principles of circularity, induction and adaptation appear universal.

In this paper, we uphold empiricism as an essential element in the coordination of a coherent second-order cybernetic discourse. We build on recent critical work by Krippendorff and Mueller, who identified historically-embedded inconsistencies across varieties of second-order cybernetics stemming from the distinction between the General Systems Theory (GST) of Bertalanffy and the cybernetics of Wiener. Both Krippendorff and Mueller argue that GST belongs to an intellectual tradition of holistic theorising – a tradition that stretches back to the German idealism of Schelling, Hegel and Fichte. GST replaced vitalism with mechanism but essentially maintained the same objective in seeking a totalising mechanistic description. By contrast, cybernetics was a pragmatic and empirical endeavour which evolved in the practice of early cyberneticians like Ashby, for whom cybernetics was a scientific orientation towards constraints, rather than mechanistic causation. In contrast to GST, cybernetics aimed not for ideal descriptions of mechanisms, but actively to seek the conditions of possibility for effective organisation. Krippendorff argues that in second-order cybernetics, mechanistic idealism and constraint-oriented investigation became conflated under the broad umbrella of the "cybernetics of cybernetics"; in the process, the discourse lost sight of earlier cybernetic work which enshrined principles of reflexivity and observer orientation, accompanied by an active empirical engagement.
We acknowledge that the empirical orientation of second-order cybernetics has been overshadowed by accusations of objectivism and of inconsistency with second-order cybernetic theory. However, we take this as an invitation to reconsider what it is to be empirical: reflecting on the relationship between second-order cybernetic epistemology and the philosophy of science, and inspecting current empirical practices allied to second-order cybernetics. We believe the accusation of objectivism towards empiricism is a mistake, both in terms of a misunderstanding of the philosophy of science (particularly Hume's epistemology), and in terms of appreciating the intellectual contribution of second-order cybernetics to a number of present-day empirical practices. We begin our analysis by considering what varieties of second-order cybernetics stand against, separating different theoretical orientations towards foundationalism, objectivism and universalism: it is, we argue, in these various unarticulated orientations that inconsistencies in the discourse arise. We then consider what varieties of second-order cybernetics support, focusing on fundamental principles of induction, adaptation and circularity. Behind common descriptions of induction and adaptation lie distinctions about regularities and the development of knowledge. We follow Hume, and Keynes's critique of Hume, in analysing the way analogies are identified in inductive processes. Extending Keynes, it is argued that second-order cybernetics invokes two kinds of analogy: the analogies between events, and the analogies between the different states of the observer.
The problem of induction in empirical practice has played an important role in the philosophy of science. Hume's critique of probabilities and of the way that event regularities contribute to scientific knowledge unites problems in probability theory with problems in the philosophy of science. In developing Hume's stance, Keynes argued that experiment leads to the identification of analogies negatively. These arguments are important to second-order cybernetics because its principal empirical approach involves Shannon's information theory, which similarly unites probability theory with the growth of knowledge. At the heart of Hume's and Keynes's concerns is the difference between novelty and analogy: a distinction which has similarly formed the basis for critique of Shannon's information theory, and with which current second-order cybernetic empirical approaches have been actively engaged.
In the final section of the paper, we uphold empirical practice as a way of addressing the confusion over double-analogies in second-order cybernetics and of contributing to the coordination of a coherent second-order cybernetic discourse. In returning to Hume's sceptical philosophy, we argue that the role of empiricism is to maintain reflexive discursive coherence rather than to uphold objectivism. The information-theoretical techniques we present, whilst none of them is perfect, have the potential to ground second-order cybernetics in an empirical practice which can stimulate and support a deeper reflexive science, whilst avoiding the aporia of ungrounded disputes which lose themselves amongst the double-analogies of second-order cybernetic epistemology.
What Second-Order Cybernetics stands against
Ostensibly defined as the "cybernetics of observing systems", there are a variety of interpretations of what this might mean – particularly given the fact that cybernetics itself is multiply and inconsistently defined. For example, Niklas Luhmann and Humberto Maturana are both second-order cyberneticians, and yet each has criticised the other for an inconsistent application of second-order cybernetic principles. Luhmann's borrowing of Maturana's theory of autopoiesis as a way of developing sociological theory (particularly developing Parsons's Social Systems theory), and its entailed view that communication systems are 'autopoietic' (i.e. organisationally-closed, structurally-determined systems which regenerate their own components), appears to impute some kind of mind-independence to the communication system which subsumes psychological, perceptual and agential issues. Luhmann escapes the accusation of objectivism in this approach by presenting the "agency" of minds as an epiphenomenon of the dynamics of communication systems: the 'personal' is subsumed within the dynamics of the collective. This move, however, subverts the biological foundations of autopoietic theory. When Maturana argues that:
"a cognitive system is a system whose organisation defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain,"
the implication is that there is what Varela calls "in-formation" in the self-organisation of interacting organisms, rather than mind-independent information. Luhmann's redescription of sociology in terms of autopoiesis has been taken by Maturana and his followers as something of a betrayal and distortion. And yet it has been highly influential, attracting the attention of eminent sociologists and philosophers, including Habermas and many others, for whom systems thinking would otherwise have been sidelined.
The contrast between Luhmann and Maturana is illustrative of deeper tensions within the domain of issues which varieties of second-order cybernetics stand against. In reviewing a similar and related problem of "varieties of relativism", Harré identifies three major areas in which the intellectual positions opposed by varieties of second-order cybernetics can be contrasted. These positions relate to:
  • Objectivism: the position that there are objects and concepts in the world independent of individual observers;
  • Universalism: the position that there are beliefs which hold good in all contexts for all people;
  • Foundationalism: the position that there are fundamental principles from which all other concepts and phenomena can be constructed.
Whilst each second-order cybernetic theory stands against objectivism, each is vulnerable to the claim of objectivism in some aspect, and in each variation, the locus of any objection is different. Objectivist vulnerability in Maturana lies in the biological and empirical basis of his original theory; in Luhmann, the criticism is made that his communication system is mind-independent where Luhmann claims it is mind-constitutive. In Pask’s cybernetic computationalism, Krippendorff criticises the objectivism of his computational metaphors and his reduction to physics, with the implication that mind is a computer. In von Foerster’s cognitivism, there is an implicit objectivism in the reduction to mathematical recursive processes.  
The stance of second-order cybernetics towards universalism is more complex, reflecting the critique by Mueller and Krippendorff of the relationship between cybernetics and General Systems Theory (which is clearly universalist). There is an implicit view within second-order cybernetics which allies itself to philosophical scepticism: that there is no 'natural necessity', no naturally-occurring regularities in nature. However, second-order cybernetics does appear to uphold a law-like nature for its own principles, arguing for these as a foundation for the processes of construction of everything else. At the heart of this issue is the nature of causation inherent within universal laws. Second-order cybernetics upholds a view that rather than universal causal laws being in operation, self-organising systems operate with degrees of freedom within constraints. However, in taking this position, different varieties of second-order cybernetics differ in their understanding of what those constraints might be, and how the system might organise itself with regard to them. Maturana's constraints are biological; Luhmann's are discursive; Pask's are physical; von Foerster's are logical.
With regard to foundationalism, all varieties of second-order cybernetics appear to wish to maintain principles of self-organisation as foundational. In this, however, irrespective of the mechanisms and constraints which bear upon the self-organisation of a system in its environment, there is also a need to consider the constraints that bear upon the second-order cybernetician who concocts foundational theories. How does this happen? How does it vary from one second-order theory to another? Distinguishing foundationalism between different varieties of cybernetics entails exploring the core ideas of adaptation and induction.
The Problem of Induction and the Double-Analogies of Second-Order Cybernetics
The relationship between observer and observed within second-order cybernetics is one of organisational adaptation within structurally-determined and organisationally-closed systems. Luhmann explains that "the operative closure of autopoietic systems produces a difference, namely, the difference between system and environment. This difference can be seen. One can observe the surface of another organism, and the form of the inside/outside distinction motivates the inference of an unobservable interiority." (Luhmann's italics). Luhmann draws on Spencer-Brown's Laws of Form, arguing for the connection between drawn distinctions and the internal restructuring of observers. "Adaptation" is the name given to this restructuring. Different domains of adaptation include the biological, discursive, cognitive, atomic and so on. However, a domain of adaptation and observation entails the identification of sameness. Whilst the logic of adaptation is an abstract dynamic process, the logic of sameness is specific: to be the same involves both the sameness of biological, discursive or cognitive perceptions and a sameness within the perceiving system. By contrast, to paraphrase Bateson, a difference is not a difference unless it makes a difference in the perceiver.
With regard to the sameness of events, adaptation within second-order cybernetics is generally regarded as inductive, with adaptations responding to 'regularities' of events which stimulate structural change in the organism. In his descriptions of the biological adaptation of cells and organisms to environmental 'niches', Maturana argues:
“the living system, due to its circular organisation, is an inductive system and functions always in a predictive manner: what occurred once will occur again. Its organisation (both genetic and otherwise) is conservative and repeats only that which works.” (1970)
Recurrence and regularity of events are characterised elsewhere in autopoietic theory. Varela, in distinguishing the concept of 'in-formation', describes it as "coherence or regularity". Across the varieties of second-order cybernetics, a distinction is drawn between those events which cohere with the existing structural conditions of the organism, amongst which coherences and regularities can be determined, and those events which demand organisational transformation. Across the varieties, regularities are suggested between biological cells, logical structures emerging from self-reference (what von Foerster identifies as 'eigenvalues'), or coherences and stabilities within a discourse (for example, Luhmann's social systems or Beer's 'infosets').
Von Glasersfeld, whose radical constructivism makes explicit reference to inductive processes, provides a revealing interpretation of Piagetian 'assimilation' in his 'schema' theory of learning. Von Glasersfeld illustrates Piaget's assimilation: "if Mr Smith urgently needs a screwdriver to repair the light switch in the kitchen, but does not want to go and look for one in his basement, he may 'assimilate' a butter knife to the role of tool in the context of that particular repair schema." Here the double-analogy of the inductive process involves:
  1. the identification of some analogy between the butter knife and the screwdriver (Gibson might call this an ‘affordance’)
  2. the identification of analogies within the observer’s knowledge of ‘ways of repairing the light switch’
In both cases, these analogies will have been established through repetition: screwdrivers, screws and broken light switches are encountered in numerous configurations, just as the practice of screwing screws is acquired through repeated performance. Against the background of analogies of observer and observed, there are also differences which produce the structural adaptations enabling an adjustment to existing known practices so as to find a suitable way of using the butter knife. Fundamentally, however, were the analogies not perceived – both in the observed knife, and within the perceiving subject – there would be no ground for the establishment of a difference and its consequent transformation of practice.
The role of repetition in the establishment of analogy and induction is a topic which has attracted the attention of philosophers since Hume. Hume's example asked how we might acquire an expectation of the taste of eggs. The process, he argues, requires the identification of the 'likeness' between many eggs. With many examples of eggs tasting the same way, an expectation (knowledge) is created concerning the taste of eggs. The process of analogy occurs because of a 'fit' between the recognition of analogy in perception and the repetition of that analogy over many instances.
In second-order cybernetics, the mere observation of the ‘likeness’ of eggs is insufficient; we must also consider the ‘likeness’ of the relationship between the observer of the eggs and the eggs themselves. All varieties of second-order cybernetics entail a description of the observer as an adaptive mechanism. Hume’s philosophy only considered a single-analogy of events; Second-order cybernetics has to consider a double-analogy. It is possibly for this reason that second-order cybernetics, and particularly its close relation, second-order science, finds itself fighting a battle on two fronts: on the one hand, there is a battle with positivist empiricism; on the other, there is a battle with philosophers.
Double analogy can be used to separate different approaches to second-order cybernetics. In Luhmann, the 'observer' is a discursive organisational structure which maintains itself in the light of new discursive performances. The identification of differences in discursive structure forms a fundamental plank in Luhmann's differentiation of social systems. In order for a discourse to adapt (for example, through innovation), the discourse must be able to identify those aspects of linguistic performance which are analogous to existing discursive structure, and then to reformulate its discursive structure such that subsequent discursive events may be anticipated. In Maturana, the observer is the biological entity, whose organisation has its own implicit analogies, together with the analogies of the perturbations which confront it.
Double analogy presents the central problem facing the coordination of discourse between varieties of second-order cybernetics: how can the analogies of perturbation be determined and compared if the analogies of perceiving structure are so varied across different cybernetic theories? In other words, how is it possible to have a coherent and stable second-order cybernetic discourse where quite different interpretations can be created for the same perceived events?
Hume's empirical theory, and his separation between analogy and induction, is useful here. Whilst much second-order cybernetics has tended to eschew empiricism as first-order reasoning, Hume's concept of shared empirical inquiry presents a solution to the mismatch between analogies of observational structure and analogies of perturbation. The question concerns the way reproducible empirical experiences create, at the very least, a foundational context and coordinating framework for debate and discussion. Indeed, the experience of discourse within discourse is already empirical in the way that Hume envisaged it: the experience of discourse itself presents a shared 'life-world' for participants to reflect not only on the substance of their discussion, but on the dynamics of the discourse itself. Discourse carries its own observable analogies which can be studied. However, material engagement also produces analogies which can be brought into discourse. Keynes extended and critiqued Hume's theory by arguing that regularities in experiment were not enough: analogies are identified negatively, through varied repetition.
Keynesian Negative Analogies and Reflexivity
Keynes argued that Hume’s analogies of eggs did not go far enough. Keynes argues that:
“His argument could have been improved. His experiments should not have been too uniform, and ought to have differed from one another as much as possible in all respects save that of the likeness of the eggs. He should have tried eggs in the town and in the country, in January and in June. He might then have discovered that eggs could be good or bad, however like they looked.”
Keynes suggests the concept of a 'negative analogy', where there are multiple experiences coupled with a "subtractive" identification of the core features which are common. Keynes's view of analogy enhances Hume's by suggesting that adaptation occurs through event regularities in a relational manner: scientific knowledge emerges in the interaction and adaptation of an observer with observed regularities. More significantly, he argued that this process of adaptation is negative: what occurs in the empirical process is an adaptation to a variety of events, similar only in some essential core aspect.
Keynes was acutely aware of the role of ideas and reflexivity and their relation to experiment: observers frame the regularities that they perceive. His oft-quoted opening of the General Theory (“Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back”) suggests that Keynes’s view of empiricism saw there to be two aspects to analogy: in events and in the observer. The analogies of the observer were reflected in the ways that new ideas are generated, or old ideas applied to new contexts. Keynes’s view was that assumptions about different analogies should be made explicit and should be tested empirically in a variety of ways and contexts.
The Keynesian view is useful because it helps to situate current views on the state of second-order cybernetics. Krippendorff has recently argued that second-order cybernetics requires four distinct ‘reflexive turns’. These are:
  1. The cognitive autonomy of the observer
  2. Reflexivity of participation in use, design and conversation
  3. Realising human agency in the relativity of discourse
  4. The social contextualisation of cybernetics
As forms of reflexivity, each of these has a distinct locus of double-analogies, and each 'turn' entails successive turns. The cognitive autonomy of the observer presents analogies in the observer's structure in relation to events: observation occurs in operationally-closed and structurally-determined systems. This is the principal ingredient of second-order cybernetics' anti-objectivism. It is also the starting point for asking deeper questions about the nature of the analogies in the observer and the analogies between events. These issues are first approached through "participation in use, design and conversation": without practical engagement with event regularities and their analogies, how are the analogies of observation, and the constraints within which observation occurs, to be identified? This opens onto a new set of questions concerning the discourse itself: talking about theory, experiments and results is itself a participative empirical engagement. The reflexivity of discourse concerns the analogies of ways of describing phenomena and ideas. But then we must also consider the discourse of a community discussing empirical results, for this is itself an 'empirical domain': what counts as analogical within the discursive domain, beyond what is counted as analogical in the domain of measurement? Participation in discourse leads to the consideration that there are also "analogies of the gut": intuitive and ethical concerns which may have no codification, but which each human being experiences. In asking "is your gut feeling the same as mine?" we face embodied constraints which only reveal themselves as regularities of sensation and as the negative image of behaviour, where fundamental ethical issues can be codified in the regularities of discourse.
Krippendorff’s ‘reflexive turns’ are interconnected where analogies at one level open questions which lead to a search for analogies at the next. Keynes’s and Hume’s focus on analogies presents empirical practice as a way of coordinating the discourse of scientists: it sits at the link between Krippendorff’s turn 2 and 3. Since the analogies of observation depend on participation, there is also a connection between 1 and 2. In terms of the focus for empirical practice at 2, Shannon’s information theory presents a way in which the analogies of events and the analogies of observation may be measured and modelled. It also can be used as a component to analyse discourse at level 3. This is not to say that Shannon's theory has a special status per se, but rather that it occupies an important position as a theory which unites coherent articulations about the lifeworld with a model of the observer as an adaptive system. By bridging the gap between analogies of perception and analogies of events, Shannon's theory (and its variants) contributes to the conditions for a coherent second-order cybernetic discourse with its multiple levels of reflexivity.
Three empirical approaches
Within information theory, the 'sameness' of events must be determined in order for one event to be counted and compared with another, and for its probability (and consequently its entropy) to be calculated. The use of information theory may (and frequently does) slip into objectivism: this arises when algorithmically-generated results are declared to be "accurate" representations of reality, whilst the discursive context within which such declarations are made is overlooked (i.e. a failure at level 3 of Krippendorff's reflexivity). There is, however, no reason why information-theoretical results should not declare themselves as questions or prompts for deeper reflection within an academic community rather than as assertions of objectivity. This conception is much closer to Hume's original view of the role of empiricism in the growth of scientific knowledge: measurement is an aid to the coordination of a deeper reflexive discourse among scientists. Information theory's explicit modelling of the two sides of analogy makes it particularly powerful in the conduct of a reflexive science. It challenges scientists to be explicit about what they see as analogical; it invites others to argue about distinctions; it insists on clarity, not rhetoric.
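The dependence of the calculation on the analyst's decisions about 'sameness' can be shown in a few lines. In this sketch (with invented events and categories), the same sequence of events yields different entropies under different judgements of analogy:

```python
# A sketch of the point above: entropy is relative to the analyst's decision
# about which events count as 'the same'. Events and categories are invented.
import math
from collections import Counter

def H(seq):
    """Shannon entropy of a sequence of (hashable) symbols."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

events = ["lecture", "seminar", "tutorial", "lecture", "lab", "seminar"]

# One observer distinguishes every format...
print(H(events))                         # ~1.92 bits

# ...another sees only 'taught' vs 'practical': a different analogy,
# and therefore a different entropy for the same events.
collapse = {"lecture": "taught", "seminar": "taught",
            "tutorial": "taught", "lab": "practical"}
print(H([collapse[e] for e in events]))  # ~0.65 bits
```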
In turning our attention to three examples of information-theoretical empirical practice, we focus on what is counted, what is considered analogical on what grounds, what is inferred from measurements, and what is analogical in what is inferred. At a basic level, this entails identification of analogies in the letters in a message, the occurrence of subject keywords in a discourse, or the physical measurements of respirations of biological organisms. Reflection concerning the identification and agreement of analogies is a participative process with the phenomena under investigation, as well as a reflection and analysis of the discourse through which agreement about the analogies established in that phenomenon is produced.
Our three empirical examples have an explicit relation to second-order cybernetics. In the statistical ecology of Ulanowicz, mutual information calculations between the respirations and consumption of organisms in an ecosystem have opened critical debate not just about the biological phenomena under investigation, but about refinements to Shannon's equations and critical engagement with problems of analogy and induction. In Leydesdorff's information-theoretic analysis of scientific discourse, social systems theory sheds light on the possibility of a "calculus of meaning", which has stimulated discourse in evolutionary economics and invited reflexive engagement with the relations between scientific discourse, economic activity and government policy. Haken's synergetics and theory of 'information adaptation', although largely independent of cybernetics, have deployed information theory in conjunction with powerful analogies from physics to develop a broader socio-analytical framework for examining a range of phenomena from biology to the dynamics of urban development. We consider each of these in turn.
Statistical Ecology
Ulanowicz's statistical ecology uses information theory to study the relations between organisms as components of interconnected systems. Measurements of different aspects of the behaviour of organisms result in information, and the central premise of statistical ecology is that analysis of this information can yield insights into the organisation, structure and viability of these systems. Drawing on established work on "food webs", and also cognisant of economic models such as Leontief's 'input-output' models, Ulanowicz has established ways in which the propensities for development of ecosystems may be characterised by studying the 'average mutual information' between the components. Calculations produced through these statistical techniques have been compared with the course of actual events, and a good deal of evidence suggests the information-theoretic approach to be effective.
In aiming to produce indices of ecological health, Ulanowicz sees his task from a Batesonian perspective as being concerned to take further steps towards an ‘ecology of mind’. Material results and defensible consistencies between theory and empirical data become a spur for deeper critical reflection on the nature of information, and the relationship between mind and nature. In recent years, he has engaged with criticism that Shannon's measure of uncertainty (H) fails to distinguish (in itself) the novelty of events, and those events which confirm what already exists: in other words those events which are analogous to existing events. Whilst building on his existing empirical work, Ulanowicz has sought to refine Shannon's equations so as to account for the essentially relational nature of information theory. In this regard, Ulanowicz has distinguished between the average mutual information in the system, which is effectively a measure of the system’s analogies, and the contingencies generated within the system which provide it with flexibility of options for adaptation. With excessive average mutual information at the expense of contingency, ecological systems become vulnerable to external shock; with excessive generation of contingency at the expense of average mutual information, then coordination is lost as the system becomes an anarchic threat to itself.
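A minimal sketch of the style of calculation involved is given below. The 3x3 flow matrix is invented, and the formulas follow the standard definitions of average mutual information and the residual conditional entropy, which stands in here for the 'contingency' or flexibility Ulanowicz describes:

```python
# A hedged sketch of Ulanowicz-style indices on a toy flow network.
# The flow matrix (who feeds whom) is invented for illustration.
import math

flows = [[0.0, 8.0, 2.0],
         [1.0, 0.0, 9.0],
         [7.0, 3.0, 0.0]]  # flows[i][j]: flow from compartment i to j

T = sum(map(sum, flows))                                # total system throughput
row = [sum(r) for r in flows]                           # total outflow per compartment
col = [sum(flows[i][j] for i in range(3)) for j in range(3)]  # total inflow

# Average mutual information (AMI): the organised constraint in the network.
ami = sum((f / T) * math.log2(f * T / (row[i] * col[j]))
          for i, r in enumerate(flows) for j, f in enumerate(r) if f > 0)

# Joint entropy minus AMI: the residual flexibility ('overhead').
joint_H = -sum((f / T) * math.log2(f / T) for r in flows for f in r if f > 0)
overhead = joint_H - ami

print(f"AMI = {ami:.3f} bits (organised constraint)")
print(f"overhead = {overhead:.3f} bits (residual flexibility)")
```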
At the heart of Ulanowicz’s arguments for refinement in approaches to information theory is the consideration of the ‘background’ of information: what in Shannon's original theory is termed “redundancy”, but which Ulanowicz more broadly defines as “apophatic information”, or what is not-information. The arguments he presents have some resonance with previous arguments about ways to measure order: von Foerster, for example, draws attention to Shannon’s concept of redundancy as an indicator of self-organisation within a system.
Ulanowicz demonstrates the empiricist's reflexivity by coordinating measurements of natural phenomena with a critical debate about the techniques used to produce those measurements and their epistemological implications. As with all measurement techniques, uncritical application of formulae will lead to objectivism, but this is something of which Ulanowicz himself is acutely aware: statistical ecology is a grounding for discourse. However, once empirical results are in the discursive domain, the ecologies of the discourse itself also present opportunities for investigation. Here Ulanowicz's suggested refinements to Shannon's equations may in time prove powerful. However, Shannon's basic equations are well-suited to studying the discourse empirically, and it is on this that Leydesdorff's related information-theoretic approach focuses.
Leydesdorff’s Statistical Analysis of Social Systems
Leydesdorff's work on discourse uses Shannon's equations as a way of empirically investigating discourse dynamics and providing a foundation for theoretical claims made by Luhmann concerning the relationship between information and meaning. Following Luhmann's second-order cybernetic theory, Leydesdorff argues for the possibility of a calculus of 'meaning', studying the observable uncertainty within discourses and extrapolating the implicit uncertainties of meaning. As with Ulanowicz, the principal focus of this has been on mutual information - in this case, between discourses in different domains. Drawing on Luhmann's identification of the dynamics between different discourses, Leydesdorff has layered on a quantitative component, facilitated by the enormous amounts of data on the internet, applying this to the study of innovation capacity in economies. Using longitudinal analysis of communication data involving scientific publications, industrial activity in the production of patents, and the regulatory activity of governments through policy, correlations between the dynamics of mutual information and economic development have been established.
Leydesdorff argues that the dynamics of mutual information within discourses are an indicator of deeper reflexive processes in the communication system. Reflecting Ulanowicz's identification of a balance between flexibility and mutual information, Leydesdorff has in recent years balanced consideration of the mutual information between discourses with consideration of "mutual redundancy" as an index of "hidden options" within an economy: in other words, those ideas and innovations which remain latent and undeveloped, but with the potential for development. As with Ulanowicz, this has inspired a critical engagement with Shannon, but in Leydesdorff's case it has been prompted by the puzzle of Shannon's equations for mutual information. In more than two dimensions of discourse (i.e. more than two interacting discourses), Shannon's equation for mutual information produces a result with a fluctuating positive or negative sign. Both Ashby and Krippendorff have speculated on what the fluctuating sign might indicate. Leydesdorff has argued that a positive mutual information is an indicator of the generation of missing options within the discourse, and that this can be considered alongside measurements of 'mutual redundancy'. Whilst mutual information provides a 'subtractive' perspective on the interactions of discourses (because mutual information is the overlapping space left when differences in discourses are removed), mutual redundancy provides an additive perspective relating to those dynamics which contribute to the auto-catalysis of options in discourse.
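The puzzle of the fluctuating sign is easy to reproduce. The sketch below (with invented co-occurrence data standing in for scientific, industrial and governmental discourses, as in the text) computes the three-dimensional mutual information, whose sign, and not just its magnitude, carries the interpretation discussed:

```python
# A sketch of the sign puzzle in multivariate mutual information.
# Triples stand for co-occurring categories in three discourses; data invented.
import math
from collections import Counter

def H(seq):
    """Shannon entropy of a sequence of (hashable) symbols."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

triples = [("bio", "patent", "subsidy"), ("bio", "patent", "none"),
           ("ict", "none", "subsidy"), ("ict", "patent", "none"),
           ("bio", "none", "subsidy"), ("ict", "none", "none")]
x, y, z = zip(*triples)

# T(x;y;z) = H(x)+H(y)+H(z) - H(xy) - H(xz) - H(yz) + H(xyz)
T3 = (H(x) + H(y) + H(z)
      - H(list(zip(x, y))) - H(list(zip(x, z))) - H(list(zip(y, z)))
      + H(triples))
print(f"T(x;y;z) = {T3:+.3f} bits")  # negative for this data: the sign fluctuates
```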
The additive approach of the measurement of mutual redundancy presents a simplified statistical index of discourse dynamics which has implications for the consideration of the relationship between analogies and second-order cybernetic theory. The complexities of the subtractive approach to measuring mutual information are relative to the number of dimensions. The assumptions made in mutual information involve double-analogies relating to each factor: in Parsons's terms, that 'ego' and 'alter' see analogies in the same way. By contrast, mutual redundancy provides a general index of constraint, and need not be concerned with whether a perceiving subject similarly recognises specific variables of redundancy; it is only concerned with the fact that a perceiving subject is constrained in various ways, and that the mutual redundancy measure is an index of this. As with Ulanowicz's apophatic measurements, this particular feature has important implications in simplifying complex multivariate analysis. Taken together with calculations of mutual information, Leydesdorff's measurements provide an important bridge between Shannon's 'engineering problem' of information and Luhmann's speculations about communication dynamics and social systems.
Leydesdorff’s economic work may be mistaken as another econometric technique and treated in an objectivist way. Equally, the work may be taken as “evidence” which endorses Luhmann’s sociology. Neither perspective is faithful to the reflexive manner in which Leydesdorff’s approach has developed, where its empirical foundation has grounded aspects of second-order cybernetic discourse. Although Luhmann remains the dominant figure in the work, critical engagement with him follows the data analysis: Luhmann’s relationship with Maturana remains problematic; his transcendentalising of subjectivity in what Habermas calls ‘network subjectivity’ is open to critique on ethical grounds; his interpretation of issues of intersubjectivity about which Parsons and Schutz disagreed remain open questions; and interpretations of economic calculation remain generators of questions rather than assertions of objectivity. Yet engagement with economic data has stimulated critical engagement with cybernetic theory and information theory. Leydesdorff’s approach is one of grounding the second-order cybernetic discourse within empirical practice: as with Ulanowicz, there is a co-evolution of theory with empirical results. Most impressive is the fact that despite the apparent simplicity of identifying analogies in the co-occurrence of key terms in different discourses, convincing arguments and comparative analyses have become possible concerning the specific dynamics of discourses, and this has been a source of new hypotheses which have driven new theory and new empirical practice.
Haken’s Synergetics
Hermann Haken has in recent years turned his attention to the idea of ‘information adaptation’, and in so doing echoes themes from the work of both Ulanowicz and Leydesdorff, as well as the observer-orientation of second-order cybernetics. In introducing the relationship between Shannonian information and meaning, Haken explicitly points out the need for analogies to be identified by the observer, or, in his terminology, for the “index” within Shannon’s formula to be made explicit. Meaning, he argues, enters into the Shannon equation “in disguise” through this process of determining the analogies between different phenomena. Implicated in this process is what Haken calls the “Mind-Brain-Body” (MBB) system. He then explains how the MBB system produces cognition through dynamics of information ‘deflation’ and information ‘inflation’, which respectively reduce or increase Shannon’s entropy measurement. Haken’s ideas carry echoes of Leydesdorff’s concept of the generation of hidden options (information inflation), and of Ulanowicz’s distinction between processes of mutual information (deflation) and autocatalysis (inflation). A further example might be cited in Deacon’s distinction between ‘contragrade’ (deflation) and ‘orthograde’ (inflation) processes in information transmission.
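One illustrative reading of deflation and inflation (our gloss, not Haken’s own computation): recognition ‘deflates’ entropy by collapsing many sensory states into few categories, whilst ambiguity ‘inflates’ it by expanding one input into many candidate interpretations:

from math import log2

def entropy(probs):
    # Shannon entropy in bits
    return 0.0 - sum(p * log2(p) for p in probs if p > 0)

# Deflation: eight equiprobable sensory states (3 bits) are collapsed
# by recognition into two categories ('face' / 'not a face'): 1 bit
print(entropy([1/8] * 8), '->', entropy([0.5, 0.5]))   # 3.0 -> 1.0

# Inflation: a single unambiguous input (0 bits) is expanded into four
# equally plausible interpretations, raising the entropy to 2 bits
print(entropy([1.0]), '->', entropy([0.25] * 4))       # 0.0 -> 2.0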
Haken’s empirical approach is to deploy what he calls a ‘synergetic computer’ to analyse these inflation/deflation dynamics. Processes of information adaptation are seen as a development of Haken’s ‘synergetic’ theory, which began in the 1970s with seminal work on the self-organising behaviour of photons in lasers. Haken’s early physics work provides him with a powerful metaphor of self-organisation which he has explored in many different domains, from biology and chemistry through to urban planning. These different levels of empirical activity raise questions about the assumptions made concerning the analogies identified: both within a scientific domain (e.g. physics) and between scientific domains (e.g. from physics to social systems).
Haken appears to be partly aware of the problem of analogies. In supporting Weaver’s assertion that Shannon’s information theory had application beyond the ‘engineering problem’ that Shannon himself saw as fundamental, Haken acknowledges the implicit reflexive semantics that sits behind information theory. However, there appears to be a limit to how far he takes this critical engagement, and that limit illustrates the importance of a second-order cybernetic discourse which reflexively engages in empirical practice. Synergetics, having grown from physics, was only loosely associated with cybernetics. The work on lasers was powerful in presenting clear evidence for self-organisation in a self-sufficient way, freeing it from what some practitioners might have regarded as the reflexive baggage of second-order cybernetics. The appeal of synergetics was one of unabashed objectivism. When synergetic principles are applied to other domains, this objectivism gives rise to universalism.
In his work on information adaptation, the objectivism is challenged; the universalism, however, remains. At issue is the identification of analogy from one level to another. These are analogies of the observing system. In the work of Leydesdorff and Ulanowicz, analogies and differences of observed phenomena are made explicit in order for the information theoretical calculations to be made, and it is assumed that the observing system will also have its own analogies and differences. The precise nature of the analogies studied becomes a key point of critique of the empirical process, and this in turn supports engagement with second-order cybernetic theory concerning the relationship between the observer’s analogies and differences and those of events. In Haken’s synergetics, the analogies are first identified in the behaviour of photons. Beyond this empirical identification, the metaphor of ‘synergy’ is applied to other systems, with information theory becoming the unifying tool which makes calculations in each different domain appear common.
Empiricism and Second-Order Cybernetics
The problem of coordinating a coherent discourse within second-order cybernetics can be addressed through empirical engagement and critical reflection. Each of the three empirical approaches discussed deploys information theoretic concepts to characterise inductive-adaptive processes between observer and observed. Our argument in this paper has been that, in doing so, each method provides a platform for structuring critical argument within second-order cybernetics. Each method produces data from the measurement of relations between phenomena. Hypotheses generate approaches to measurement - what is measured, how it is measured - and new questions emerge from the results which then stimulate discourse. Principal amongst these questions are: What are the analogies at each level for distinguishing and comparing events? What are the analogies in the development and adaptation of the structure of the perceiving system? The necessity for empirical practice in conjunction with these questions rests upon the pathologies of different orientations of second-order cybernetics towards objectivism, universalism and foundationalism. The identification of analogies, and confusion between analogies, lie at the heart of the difficulties of coordinating coherent debate. The empirical application of information theory necessitates the specific identification of analogies between events. Critical appreciation of information theoretic results generates questions concerning the analogies of the perceiving subject, and new possibilities for the development of information theoretic techniques. Empirical results coordinate the process by explicitly identifying the reflexive and empirical constraints within which analogies are identified. This approach supports what Krippendorff sees as Ashby’s empirical practice: using cybernetic theory to reflexively generate possibilities and then to discover which of them may be found in nature.
The observer-orientation of second-order cybernetics, and particularly its stances against objectivism and universalism, are reflexive operations which guard against the fetishisation of results. If the purpose of empiricism is seen to be the production of results - the uncovering of fundamental mechanisms in the generation of meaning, or mechanisms of perception, or laws of ecology - then empiricism becomes objectivist. We have argued that since Hume did not believe in a natural necessity of causal laws, the empiricism he supported concerned the coordination of discourse amongst scientists. This, we argue, is a proper foundation for second-order cybernetic inquiry which is historically and philosophically grounded: results coordinate scientific discourse by reflexively identifying the constraints within which assumptions about analogies and differences are made. That empiricism is necessary within second-order cybernetics follows from the fact that second-order cybernetic theory makes assumptions about analogies - both of events and of the observer - which remain uninspected and confused. That there are varieties of second-order cybernetics, and that these varieties conflict in their epistemological stances, necessitates an empirical engagement.
In our survey of approaches, there are clear relations between the different techniques. In Ulanowicz’s statistical ecology, the focus is on the like-relations between biological components. In Leydesdorff’s work, the central theme concerns the dynamics of like-relations between discourses, and how these dynamics may cohere with second-order cybernetic speculations about reflexivity and communication. Similarly, Haken focuses on information dynamics in processes of perception and meaning, studying the structural properties of emergent results (for example, the structure of cities). In each case, results are produced which generate new questions and hypotheses that feed back into the discourse.
Ulanowicz’s and Leydesdorff’s work may be seen as mutually complementary. Both have focused on ways of measuring constraint: Leydesdorff has focused on Shannon redundancies, whilst Ulanowicz has suggested new techniques for identifying what he calls ‘apophatic information’. More significant is the difference between their domains: since all scientific work produces and participates in discourse, Leydesdorff’s analysis of words and discourse dynamics is relevant to all other forms of scientific empiricism. Conversely, the fact that alternative statistical approaches like Ulanowicz’s apophatic information are generated in other empirical domains presents new options for the empirical analysis of discourse, alongside other forms of discourse analysis. Haken’s synergetics and his concept of “information adaptation” also present new techniques, and his situating of semantics within Shannonian information appears consistent with ideas within Luhmann’s social systems theory. However, Haken’s approach also illustrates the difference between an observer-oriented perspective and a perspective which theorises observation and meaning from a universalist stance. Haken’s approach uses analogy between mechanisms at different levels: from the self-organising behaviour of photons to social behaviour in cities. Whilst he doesn’t address the question himself, this is empirical work which invites deeper questioning about the analogies made between different domains of investigation as much as it invites critique of analogies within each level. The analogies between different levels of phenomena are analogies within the structuring of the observer: as with von Glasersfeld’s example of the butter-knife used as a screwdriver, the use of synergetics to explain cities as well as photons is an identification of analogies within the observing system as it adapts the same tool to different phenomena. How could this be empirically explored?
This issue highlights the importance of clarity in empirical practice, and shows how critique within second-order cybernetics introduces distinctions which can then be used as reflexive tools in considering empirical practices. There is more to the distinctions between varieties of second-order cybernetics than the tension between the universalism of General Systems Theory and the pragmatic empiricism of cybernetics. To argue that the conflation of the two has resulted in a lack of critical engagement in the identification of analogies is to open grounds for critique on the basis of varieties of objectivism, varieties of universalism and varieties of foundationalism. More importantly, the empirical examples considered here expose the assumptions made about analogies between events and analogies in the observer, which can become the focus for doing what Hume always believed was the principal purpose of experiment: to help coordinate discourse.

Monday 9 November 2015

If... #TEF ...

It can be a powerful intellectual move to expose the weirdness and irrationality of things that we think are ordinary and rational. Humour can point out irrationality and make us laugh (there are some wonderful parodies of school and university), but laughing is not always the best thing to do if you want to provoke people into action: it can lead to a kind of complicity (po-faced Theodor Adorno was probably right here: the culture-industry feeds on the inherent contradictions of capitalism). Much better is to disturb, horrify, expose and shame. Anger is the response we should really feel to the weirdness and irrationality of education because so much of what goes on in universities and schools is about power, inequality, exclusion, and vested interests.

What would Lindsay Anderson's "If..." look like if it were set in a university? How would the toxic combination of peer pressure, elite power dynamics, militarism, ceremony, and absurd assumptions about the purpose of education and life look? Someone ought to do it! Everything from the obfuscation by universities about what courses actually do, exploitative financial bargains made with students, peer pressure and party cultures, exploitation of parents, government interference, commodification of knowledge, weird ceremonies and weirder expectations, to the self-aggrandisement of institutional leaders and academics, and then... the sheer boredom and inauthenticity of it all.

We need to see education for what it is. This, supposedly, is what things like the Teaching Excellence Framework are meant to do. But in fact they do the opposite. Their real purpose is to obfuscate the weirdness: to pretend that something perfectly reasonable, transparent, and transactionally fair is going on; to say that all the weirdness is worth the sums of money paid for it, that Vice-Chancellors and their cronies are worth their absurd salaries, and that, despite the fact that nobody understands what it's all meant to be about, students should incur massive debt.

The 'If...' move disrupts the assumptions we make about education. The TEF is yet another stupid initiative in a sea of stupid initiatives. Stupid has become normal - and that's the real problem in attempting to attack the TEF: one is drawn into an irrational battle between one stupid 'rationally-defended' proposition and another. In the final analysis, technology presents a more rationally justifiable way forward for education. Ironically, that may be its weakness!