Monday, 28 December 2015

Educational Metatechnology and Indifferent Data

Data, whether big, small or indifferent, is really all about counting. Typically, 'big data' involves the counting of words. I rather like the idea of 'indifferent data': how would you count that? Being indifferent, there's not much remarkable, or countworthy, in it, one would think. But what makes 'big data' so countable? What, in fact, are we counting?

There are two things to clarify here. If you've only heard of 'big data' as the new scientific buzz-word, but not thought about exactly what it is, then my statement that it is simply about "counting" might strike you as a crude oversimplification. But it's not: most techniques of data analysis rely on the probabilistic (hmmm - that's problematic!) theories of Shannon or Bayes, each of which relies on counting like-events and distinguishing them from unlike-events. Yet by counting words in big data - Facebook posts, tweets, blogs and so on - we can indeed create remarkable inferences: Google Translate does a pretty good job of converting one language to another by simply counting words! But the success of Google Translate raises more questions: what is it that happens when we 'count' anything?
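To make this concrete, here is a minimal sketch in Python of what counting 'like-events' amounts to (the sentence is an invented toy example): every probabilistic technique mentioned above begins from relative frequencies like these.

```python
from collections import Counter

# Toy sentence, invented for illustration.
text = "the cat sat on the mat and the dog sat on the rug"
words = text.split()

# Counting 'like-events': every occurrence of the same token is treated
# as analogous to every other occurrence of it.
counts = Counter(words)
total = len(words)

# Relative frequencies are the raw material of Shannon- and Bayes-style
# inference: probabilities estimated by counting.
probabilities = {w: n / total for w, n in counts.items()}

for w, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"{w!r}: {counts[w]} occurrence(s), p = {p:.3f}")
```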

David Hume puzzled over this more than two hundred and fifty years ago. His question was how scientific knowledge was possible; our question now is something like "how is big data analysis scientific?" Yeats says "measurement began our might", no doubt partly thinking of Blake's foolish Urizen (whom Blake saw as a demonic Newton) using his dividers to map the heavens, but also acknowledging that some balance had to be struck between Bacon, Newton, Locke and the poets. Hume is perhaps the figure whose scepticism is well-placed to create Yeats's balance. He saw that counting required the identification of analogies: one 1, by virtue of its similarity to another 1, makes 2 together with it, and upon the induction that other 1s will also be analogous, knowledge is founded. Yet, Hume asks, upon what grounds is this similarity determined? The question is more pressing when we try to count words. How many times do I say 'words' in this document? Is each use of the word 'words' an equivalence? Does it mean that each time I mean the same thing? Might I not be like Humpty Dumpty in Through the Looking-Glass and say that when I use a word, it means whatever I want it to mean?! And what if I look for the word "word", rather than the word "words"? What then?
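Hume's question has a very practical software face: whether each use of 'words' is an equivalence is a decision built into the counting procedure itself. A toy sketch (invented string, deliberately crude stemming) gives three different answers to the same question:

```python
import re

# Invented example; the counts depend entirely on which analogies
# between tokens we decide to admit.
document = "A Word is a word, but are words the same Words? Each time I use a word..."

tokens = re.findall(r"[A-Za-z]+", document)

exact = sum(1 for t in tokens if t == "word")                        # exact matches only
case_folded = sum(1 for t in tokens if t.lower() == "word")          # 'Word' now counts too
stemmed = sum(1 for t in tokens if t.lower().rstrip("s") == "word")  # crude stemming: 'words' too

print(exact, case_folded, stemmed)  # 2, 3, 5 - three answers to 'how many times?'
```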

Counting is the determination and aggregation of analogies, and of surprises - or anomalies. Analogies are not determinable without the determination of anomalies: fundamentally, a distinction has to be drawn. More fundamentally, these distinctions have to be agreed - at least between scientists. Since Hume's scientific epistemology was all about the agreement between scientists about causes, the agreement about analogies is a pretty central part of that. Actually, Hume wasn't entirely clear on this. It took a 20th-century genius to dig into this problem, as he grappled with the madness of mathematical abstractions in economics. John Maynard Keynes's "Treatise on Probability" of 1921 is his masterwork (not so much the General Theory, which owes so much to it). The Keynesian twist is to see that the business of 'analogising' in order to count is a continual process of breaking down things that we initially see to be 'the same' (in other words, things that we are indifferent to) and gradually determining new surprises (anomalies) and new analogies.

The point is that the agreement of analogies and anomalies is a conversation between scientists. Without actual embodied participation in the phenomena which produce the analogies and anomalies, there is no way of coordinating the conversation. Without any way of coordinating the conversation, there is an encroaching mysticism: nonsense explanatory principles take over - the 21st-century equivalent of phlogiston, or the 'dormitive principle' of opium in Molière. Data becomes a religion divorced from science. Education driven by data in this way is also divorced from science. We end up in the worst-case scenario: an educational system renouncing the humanities and arts because they are unscientific, whilst embracing a science which is in thrall to quackish data analysis!

Can data restore the scientific balance? Can we answer the question "how is big data analysis scientific"? The trick, I believe, is to see the identification and counting of analogies and anomalies as the identification of constraints - the identification of what is not there. The problem with Western science is that it has become over-focused on causation, or presence and actuality. Education is a domain which shows that causation is clearly a nonsense concept: so much idiocy has been devoted to the 'causes of learning' - including the forthcoming Teaching Excellence Framework. (There's no point in fighting the TEF: it will happen, it will fail - and maybe then people will think harder.) But science really is about constraint, because all living and non-living things realise the possibilities of their existence within constraints. In education, 'realising the possibilities of existence' is something we call "learning". Teachers manipulate the constraints - if they are themselves free enough of constraints to do the job properly.

We can count words in documents and in doing so we can learn something about the constraints we are operating within. In this blog post, English grammar constrains me as much as the meaning I am trying to convey. We can agree the analogies of our counting. We can critique the analogies of our counting, and seek new analogies and anomalies to focus on. Each step of the way we discover more about the conditions within which we live and the ways those conditions are reproduced and transformed by us. We can, of course, do much more than count words in documents: there are analogies to be found everywhere; new defensible "countings" to be performed. At each level, we see what is not. We will see how warped our education system has become, how its ecology is under threat, how the collapse of university education into apparently 'successful' businesses threatens civil society, how the market in education works like CFCs on the ozone of our social fabric. This is the beginning of an educational metatechnology.

So measurement did "begin our might". But the language of poets and musicians can also be counted in ways which show how an aesthetic ordering of constraint - of what is not - might be coordinated for the flourishing of an ecological social fabric.

Saturday, 26 December 2015

Educational Metatechnology and Listening

Our computer technologies are bad at listening. Of course, they record all the text we send to each other - to which the security services continually 'listen' to make sure we are not extremist nutters - but among the many reasons why we should object to what the security services do, the most important is the fact that their analysis of vast amounts of data is NOT listening. No doubt some terrorist plots have been intercepted - for which some people should be grateful. But this analysis doesn't pick up the disquiet about liberty and surveillance, or the increasing social alienation, atomisation and technocracy, all of which feed the appetite of those who would seek to commit terrible crimes. Selective listening is not listening: more than not listening to the dynamics of social pathology, it becomes part of that pathology - selective listening is listening to fear, and this is what our communication technology gives us.

It doesn't have to be GCHQ. Online education produces vast amounts of data. We analyse the data and detect that 90% of people have dropped off MOOC a, and 87% dropped MOOC b. Surmising what this tells us about the respective merits of MOOCs a and b is not an act of listening. It is the opposite of listening. MOOCs are a bit like very bad elevator music: the music makes everyone feel bad, but some people manage to get to the top floor in the elevator despite it. But because we think that attending to the data of online education is listening, and because we act in response to it, we turn up the volume of the bad music, piling injury upon injury on education as we increasingly fail to listen to what's happening.

What we should really be listening to is our feelings. Technocracy and functionalism have overridden education to the point that the phenomenology and the politics of education have been squeezed out in favour of data and marketisation. A conversation is not an exchange of text whose contents can be analysed. If it were, all our conversations would involve us creating documentary evidence of our utterances and our meaning (the metadata!). We don't do this in the flow of everyday conversation. The word 'conversation' is from the Latin - to "turn together" (con-versare). Turning together through the exchange of text documents is a somewhat stilted affair. In most online conversations, we dance alone with imaginary partners, only to correct our moves in the light of text signals from other people. But it is hit-and-miss.

It is the job of universities to listen. They now fail to listen because they have become constrained by technocracy, technology and markets. As they fail to listen, they will hurt people. The emotional damage they risk creating is not just to the staff they sack in their effort to be efficient, or to the brilliant minds they never employ (most of my favourite academics would not get jobs in universities today), but also to their students, alienated by increasingly rigid curricula, failed through unrealistic promises about "graduate premiums", burdened by debts that strike when they are beginning to establish their own families on incomes more meagre than they had hoped or were promised. Behind all of this are feelings of betrayal, anger and confusion.

Emotions really count. All good teachers know this. Not listening to emotions is very stupid behaviour, and our universities risk becoming good at not listening, or believing they are listening when they are not (even more stupid). In the final analysis, emotions drive new movements and change the world - but in ways which are not always peaceful. Universities must listen to everything - not just their market constraints. Their job is to reimagine the world, to explore what Ron Barnett calls "feasible utopias".

We have not got our technology right. This leads to the question of what a proper 'listening' technology might look like. I believe that what we should be looking at is a technology which helps us to understand our constraints: a technology for the management of a "social ecology". Since today's technology has become the single most powerful constraint upon the ways each of us lives, a technology which helps us understand constraints is a metatechnology. It has to be a facilitator of conversation about technology: not conversation as the exchange of text documents, but the means by which we look into each other's eyes and ask what our screens are doing to us.

Monday, 21 December 2015

Christmas in Vulgaria

Zoltan Hubriski can do no wrong. His hugely successful recruitment drive bringing "his people" (nobody quite knows who 'his people' are) into the University has united more bottoms with more seats than any other recruitment initiative in the University's history. How does he do it? Was it the white-suited sales pitch or the winkle-pickers that did it? Hubriski explains it thus: "My people are like sheep. Once you get one, everyone else follows!". So there we have it! This success is skin-deep; it's actually a catastrophic failure of principle, honesty and integrity dressed-up as success. So much for success at the University of Vulgaria.

Hubriski's cynicism is only matched by the cynicism of his overlords, who gaze on approvingly. "It's for the good of the learners," they tell everyone earnestly, whilst quietly saying to themselves, "it's for the good of me." Nobody believes they really care for the 'sheep': deep down everyone knows the score - even the sheep!

A full moon at Christmas, and there's a certain eeriness in the air this year. In Chateau Turtonovski, the baron's Bentley pulled up as the wind and rain rattled the windows. The baron rarely stops to listen to anything, but this time something in the whistle of the wind caught his ear. It wasn't thoughts of Hubriski ("The man's an idiot, but for some reason he's successful!" he puzzled). He thought of the Markeyovich household. "What were they doing now?" the baron asked himself, remembering how he had ruthlessly banished them from Vulgaria earlier in the year. The baron didn't want to think about this, but he couldn't stop himself. His mind tried to shake off the thought - "I don't care for Markeyovich or his ghastly family!". But no sooner did he think he had shaken this one off than other names came to mind... the screwdriver man, banished for 'stealing a screwdriver' ("that was a close shave - we had to back down because of the idiot Hubriski!" muttered the baron disconsolately). Then there was his former educational research department, whose intellectual critique of the damage being done to the University by the Baron and his cronies inevitably sealed their fate ("they had to go - we engineered their departure rather well by starving them of funding!" he said with some satisfaction, as the words "knowledge" and "university" irritated his brain in a way he couldn't fathom). Then there was the dismembering of so many others: the Pro-VC ("he stuck to his principles and departed from the script!"), the academic registrar ("she asked too many questions"), the HR trouble-shooter who didn't last long ("I threw him out of the car!"), half the psychology department ("we can get better and cheaper academics!"), most of the old business school ("I wish we could get the cheaper replacements to stay..."). Among them all, some wonderful souls with wonderful ideas and caring hearts. "I don't care! We need to make money! Hubriski's the man! Why couldn't they be like Hubriski?" shouted the baron defiantly at the wind and rain.

Then there was the decommissioning of various heads of department - perhaps the removal of Prof. Veritaski as head of health was the most brutal - and the dismembering of the staff-development unit ("we don't need staff development; we don't want people to stay - get 'em in, work 'em to the bone, get 'em out!"). Or there was the sudden disappearance of the newly-arrived PA to the newly-arrived Prof. McSortemout, who caught wind of a salacious rumour concerning an attractive member of staff and accidentally trod on a land-mine. Then there was the disproportionate number of middle-aged women who disappeared from the university: more than any other group, they bore the brunt of banishment ("hmmm - blame the baroness," the baron explained to himself, "she's a jealous woman - I'm scared of her!"). "That wonderful lady from Personnel... the one from engineering (that was the first sign things were going badly wrong)... a few from health... education..." All gone. "I don't care," he repeated to himself.

He struggled with his key in the rain. It wouldn't work. "Damn it!" he said as anger took hold. The angry wind whistled vengefully around the ornamental garden. "Angry! - that's how they all feel!" he thought to himself. "You wanted to control everything," he couldn't stop the voice of the collective ghosts of the University telling him. "You wanted a University in your image. The University of You. You didn't care for justice, knowledge or decency - you just went after what you wanted. You are a greedy and insecure man who destroys knowledge and surrounds himself with thugs. For some of us, who care for knowledge and the university, what you did has left a deep scar. Injustice really hurts, you see. We lay in a kind of purgatory for some time - some of us are still there. You thought you had done the job and we had gone away. But we're only just beginning to have the strength to get really angry". The baron pulled his coat tighter and looked over his shoulder nervously. He noticed how shallow success looks different in the dark. He managed to open the door. Behind him the ghosts began to rise from their individual, isolated torment and stand together.

Friday, 18 December 2015

Theory-Practice Gaps in Educational Technology: Why Cybernetics matters

There is a gap between the conceptual discourse about education and the practical stuff which people do with each other in teaching and learning. Conceptual theorists ask what we want from education; teachers, educational technologists and designers ask about the best ways of organising teaching and learning using the available tools and resources. Conceptual meta-theories do not often translate well into practical activity, and practical people are less concerned with thinking about the state of education. Yet both sides are important. There is one discipline which fits into this gap, and that is the discipline of cybernetics.

To say “cybernetics” today is to invite some puzzled looks – “is that about robots?” (from people who think of the “Cybermen” in Dr. Who), or perhaps an association with “cyberspace” and an assumption that it’s about the internet. In fact, it has some relation to both robots and the internet, neither of which would be possible were it not for the pioneering work of cyberneticians in the 1940s and 50s like John von Neumann (whose computer architecture set the blueprint for every computer, mobile phone and smartwatch we use today), or Claude Shannon, whose reasoning about data transmission in networks laid the foundations for today’s computer networks, data compression algorithms and encryption, and without which there would be no internet.

People are less likely to associate ‘cybernetics’ with psychotherapy or anthropology: and yet within these far more human disciplines, cybernetics made transformative contributions through the work of anthropologists like Gregory Bateson and Margaret Mead, the psychiatrist R. D. Laing, whose ‘family therapy’ (now a mainstay of psychotherapeutic services) was based in cybernetic theory, and the deep understanding of the developing child in the work on ‘attachment’ by John Bowlby. The connection of cybernetics to management and business is also rarely made, yet key thinkers from the business community have been deeply influenced by cybernetics, from Stafford Beer’s Management Cybernetics through to George Soros’s economic reflexivity. The connection of cybernetics to biology is similarly unlikely to be acknowledged, even though it is in biology that the first identification of a ‘system’ was established, long before the cybernetic revolution, and where biological cybernetics has inspired not only new thinking about biological development, but ecology and epistemology. Neither will people think that cybernetics has had any influence on our understanding of society, despite the considerable impact of sociologists like Niklas Luhmann. Perhaps least likely will be any awareness of the importance of cybernetics in theories of learning and education. Yet learning theories from Piaget to Bruner to Mezirow adopt systemic cybernetic ideas. In education, perhaps more than in any other field, there is a deep need to connect the questions about WHY things are the way they are, and HOW things operate in the way they do, with practical inquiry about WHAT IF things were done differently.

One of the more challenging responses to the mention of the word “cybernetics” is the response (possibly from those who know something about it) that it is DEAD, that it was something people talked about in the 60s and 70s, that it was utopian, control-oriented, philosophically-ungrounded – something to be treated with suspicion. Today, people talk about ‘big data’ and surveillance, economic inequality is rife, violent extremism harnesses technologies to terrorise the people, exclusive university education becomes increasingly expensive, and the ecological balance of the planet is under threat. Under these conditions it is hard to see how a subject which offers a genuine transdisciplinary approach to looking at the world's problems could be seen to be dead: except to say that the perception of its death is a symptom of the terrible mess we are in. 

Then of course, there are the other sciences which have emerged from cybernetics, and those sciences which transformed themselves in its shadow. From Artificial Intelligence to complexity science, ecology to neuroscience, each took a small part of cybernetics that was of interest to it and developed it, losing sight, in the process, of what was left behind. The whole of cybernetics is greater than the sum of these parts; indeed the existence of the parts instead of the whole is symptomatic of the pathologies of reductionism within the education system.

Cybernetics is a way of thinking which isn't hidebound by disciplines. It is for this reason that cyberneticians have rarely found comfortable places in universities. But then perhaps "comfortable" places are not the places to be in universities in the first place! Cybernetics belongs in the awkward places between things - and it is possibly for this reason that a number of cyberneticians have taken an interest in educational technology.

Monday, 7 December 2015

Visualising Constraint in Educational Institutions with Parallel Coordinates

I'm preparing a presentation for the SRHE conference later this week about status functions and constraints in institutions. Status functions are Searle's idea for how social reality is manifested through particular kinds of declarative speech act - i.e. "This is a University/certificate/banknote... etc." The argument of the paper is that they can be analysed by looking at institutional strategies (these are, in the end, collections of status functions), technologies, league tables and so on, and that each of these status functions has a constraining effect on institutional life. Moreover, status function declarations exist in webs where there are many inconsistencies, contradictions and often 'knots' where competing status functions constrain each other to create a kind of stability. I think Searle is not quite right in saying that social reality results from status function declarations; but it seems reasonable to argue that social reality is certainly constrained by status functions. Maybe things like universities, monarchies, nation states and e-portfolios(!) only exist because of the knotted constraints they tie in each of us...

Analysing institutional strategies is one way of indicating status functions, but Searle also says that status functions have to be upheld by the 'collective intentionality' of the community for which they are intended. Deep down, we agree that David Cameron is Prime Minister, and that a £10 note is worth £10. We're probably less clear about education, but the knotted constraints of education - including the value of certification, and of social and cultural capital for employability - lead people to go to extraordinary lengths to ensure that their children get "the best" education, and lead young people to ask few questions about heavily indebting themselves for a degree. And that's before we consider the status functions of education itself: learning outcomes, assessments, curricula, timetables, lectures, VLEs, e-portfolios, academic papers, textbooks, and so on. We all buy into the whole thing and tend to ask few questions about something which would seem quite perverse to a Martian visitor!

Whilst the status functions of strategies and league tables are there to be seen, seeing the 'collective intentionality' requires that we ask people about it. The paper reports on some work I did with colleagues at the Far Eastern Federal University (FEFU) in Vladivostok. For Russian universities, the status functions of league tables and prestigious journals are particularly constraining because they are all in English, and dominated by a Western European/Anglo-Saxon academic culture. Russian academic traditions are different, yet the status functions are made by Western publishers like Times Higher Education. The Russians have to buy into it, but it's hardly a level playing field (the same argument applies to non-research universities in the UK, of course).

The problem is that things like the QS rankings constrain the strategic decision-making within institutions, where managers feel that their institution should be doing all it can to raise its league table position. This can put teachers in an impossible position.

So at FEFU we asked teachers to rank the constraints they felt prevented them from enhancing research and teaching. We then asked them to consider those things which constrained them least, and asked them what they might do to overcome the least constraining things. The data we got back was quite messy, but a visual analytic approach helped to identify some patterns.

Parallel coordinates, pioneered by Alfred Inselberg (who has a cybernetics background), is a powerful technique for doing multivariate analysis in a visual way. I've used the JavaScript library D3, which does a nice job of making interactive parallel coordinate graphs. There are no surprises about the consensus as to what is most constraining with regard to research: the second bar along in the graph above represents "too much teaching" and the 9th represents "bureaucracy". But the strong constraints are possibly less interesting than the weak ones. The only problem with "weak constraints" is that people don't really see them as constraints; rather they might perceive a 'weak constraint' as 'irrelevant'. Quite a few teachers felt that the abilities of their students weren't a great constraint on their research activities (indeed, I suspect they may have seen this as an irrelevant factor). In answer to the question "What can you do to address the least significant constraint?" some responded by emphasising the possibilities of doing more interesting things in class and enthusing their students: this at least was within reach for teachers - although it doesn't seem to fit with the feeling that there was too much teaching!
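The original graphs were interactive and built with D3, but the idea is easy to reproduce. Here is a minimal, non-interactive sketch in Python using pandas; the data values, column names and groupings below are invented for illustration, not the actual FEFU questionnaire items.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical coding of the survey: each row is one teacher's rating
# (1 = least constraining, 10 = most constraining) of several constraints.
data = pd.DataFrame({
    "too_much_teaching": [9, 8, 10, 7, 9],
    "bureaucracy":       [8, 9, 7, 8, 6],
    "student_abilities": [2, 3, 1, 4, 2],
    "funding":           [6, 5, 7, 6, 8],
    "group":             ["sciences", "sciences", "humanities",
                          "humanities", "sciences"],
})

# Each respondent becomes one polyline across the parallel axes:
# bundles of lines show consensus; stray lines show anomalies.
parallel_coordinates(data, class_column="group", colormap="viridis")
plt.ylabel("perceived strength of constraint")
plt.tight_layout()
plt.show()
```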
What I find interesting at a broader level about this is that the graphs provide a kind of model of the collective intentionality of the staff, and that with this knowledge, institutional policies which work with, rather than work against, the prevailing collective intentionality are more likely to be successful. More importantly, modest strategies for more inventive teaching (say) or freeing up the curriculum could change the institutional constraints to the point that ways of raising the international reputation of the university can reveal themselves without directly tackling things like the QS rankings. 

Oblique strategies and effective, realistic analysis of how the land lies may be far more successful in the long term. What is important is to have analytical techniques which can pinpoint areas where strategic status functions, in the form of new initiatives, might be created which are likely to be upheld in a constructive way by the majority of teachers.


Thursday, 3 December 2015

Educational Technology and the Intersubjectivity of the Anatomical Theatre

I started a new job working in a faculty of health (*but now I can't say where*). I think health is a fascinating and important domain for educational technology. It was the domain which largely drove initial developments in e-portfolio, competency management and simulation, as well as being an important test-ground for pedagogies like PBL (although this has interestingly fallen out of favour). In my interview I commented that one of the things that really fascinates me in health is that it has become apparent over the years that "competencies are not enough"; or rather, that the bureaucratisation of box-ticking and form-signing has tended to instrumentalise the educational process, leaving much of the important learning beyond assessment schemes (although no doubt it influences outcomes). Maybe that's as it should be, but it does seem that instrumental education is not great either for teachers or learners.

All this made me think about how medical education has happened over the centuries. One of the most fascinating 'educational technologies' in health education is the 'anatomical theatre': a purpose-designed room where students would gather on steeply banked platforms, usually in a circle, with a demonstrator surgeon and a cadaver at the centre.

What was going on in this space? What happened in the minds of the students? First and foremost, it was theatre, so perhaps the question relates to what happens to us when we watch a play or television - except that the groups of students would have been more involved in the action. It would have been hot, intense, probably smelly - there would have been a complex relationship between the demonstrator surgeon and the students, and between the students and each other. Each student could look across at the others' reactions. The demonstrator would occasionally look up at the students staring down.

What was going on here is what Alfred Schutz calls 'tuning in' to the inner life of each other. Schutz emphasises the shared experience of time in these kinds of intense situations. Within these 'tuning in' engagements, knowledge would have emerged about the nature of causes and effects within human anatomy: "if I do this, this happens". But it was more than information about cause and effect; it was the sharing of the experience of activating a cause and experiencing its effect - not just on the body, but on everybody watching.

Coming back to the instrumentalisation of education, the business of theatre and the construction of knowledge about causes and effects have largely been replaced by the simple didacticism of cause and effect. Cause and effect has been stripped of its intersubjective context. And the logical development of this is competency divorced from its social practice and context.

My point in saying this is that there's an opportunity to address this. If we could think back in history and understand the intersubjective relations of the anatomical theatre, then we would design our educational technology and our assessment frameworks differently. Our current learning technologies make assumptions about knowledge which sit on shaky foundations, and which our ancestors possibly had a better grasp of than we do. A bit of history might go a long way in rethinking our current educational practices!

Tuesday, 1 December 2015

Re-understanding "Understanding Computers and Cognition"

I will always regret that nobody told me about Winograd and Flores's "Understanding Computers and Cognition" when I was a teenager first encountering computers. As it was, I read it at the recommendation of Oleg Liber in 2002, and it transformed my perspective on not only technology, but art, education, emotion and meaning. It provided a framework for a deeper understanding of technology and its relation to human organisation which is so fundamental to the exploitation of computers in education. I am so grateful for this, although in the years that have passed, through numerous projects involving technology and education, my enthusiasm for the cybernetic theory and the phenomenological and analytical philosophy (Heidegger and speech act theory) that underpinned Winograd and Flores has waxed and waned. But now that we live in an age when our teenagers cannot remember a world without the internet, it is a book that demands study even more urgently.

Winograd and Flores's (really, it's Flores's book) real achievement is that they asserted that computers were about communication, not data processing and AI (which was the dominant view in 1986), and they were proved spectacularly right a few years later with the advent of the web. It's notoriously hard to make technological predictions: they showed the way to do it - with cybernetics and philosophy!

But that was 1986. What would they say if they were to write it now? Their theoretical ground is slow moving - but there has been movement - most notably from John Searle, whose Speech act theory they relied on most heavily, using it to construct their "conversation for action" model. In recent years, Searle has thought very deeply about "social reality" - something which his younger self would have dismissed as an epiphenomenon of speech acts. His recent work remains language-based, but he acknowledges the existence of social institutions, presidents, money and armed forces as something more than an individual construction. Social reality is constituted by special kinds of speech act called 'status functions': declarations by powerful individuals or institutions about states of affairs, networks of rights, responsibilities, obligations and commitments, upheld by a 'collective intentionality' which plays along with the declaration. So we have 'money' (the status function declaration "I promise to pay the bearer..."), university certificates, company declarations, laws, and so on.

We also now have software, online networks, educational technologies, web services, systems interoperability, Twitter, Facebook, porn, trolls and Tinder (to name a few!). How do status functions and collective intentionality relate to these? The complexity of these new technological forms of "social reality" makes me think that Winograd and Flores's original "conversation for action" diagram now needs to be re-thought. They saw computers as ways of managing the social commitments we make to each other (commitment has been a key feature of Flores's work). But commitments are situated within a web of status function declarations which make up the social world. The speech acts that people make in agreeing or disagreeing to do something are much more nuanced than Winograd and Flores originally thought. Technologies now come with layers of commitments: to agree to use system x is to get sucked into a range of new status functions which aren't immediately visible on the surface. Teachers might initially think e-portfolio is a good idea; but after experience with the e-portfolio system, they find the commitments to the various sub-status functions of the system conflict with other aspects of their practice, and so they find themselves either not doing what they originally committed to do, or having to rethink fundamental parts of their practice which they might not have reckoned with at the outset. This can help to explain why thousands of people sign up for MOOCs, but so few complete them.

As our technology becomes more complex and our institutions become more technocratic, the accretion of layers of status functions within the technology demands an ever-shifting compliance. The problem is that critical engagement with the technology - where we seek appropriate technical solutions to real social problems - can lose out to slavish human adaptation to the technical machinery, with the consequent loss of responsibility-taking and autonomy: we let the technology create problems it can solve. The result is a conflicted self, torn between human needs and technical requirements. The later Heidegger (whom Winograd and Flores ignore, concentrating on his earlier work) had a rather bleak name for this: "enframing".

Saturday, 28 November 2015

Markets and Variety

One of the great claims for the triumph of capitalism centres on evidence of the variety of commodities which individuals could buy. In less "developed" societies, there was (say) only one variety of breakfast cereal (if they had breakfast cereal at all); in the supermarkets of the West there were hundreds of varieties. Consumers could choose in a free market, according to their ability to pay. What actually happened was that consumers dealt with the overwhelming complexity they were faced with by adopting habits and "brand allegiances", while the marketing departments of competing brands did their best to change consumer habits; consumers were made to feel guilty for purchasing the only brand they could afford, instead of the one which they "ought" to have bought. The market developed ways people could assuage their guilt, including ways in which people could be made to feel richer than they were: the capitalisation of short-term escapism became a long-term nightmare.

These ideas of "variety" and "choice" need re-inspecting - particularly in the light of the translation of these same concepts of choice to the world of education. It does not appear that marketisation in education has increased the variety of educational offerings. Why not? Whilst in education we might hope to see variety in the kinds of things that go on in institutions (not just lectures, or tedious modules and learning outcomes, but rich and diverse conversations, many exciting (maybe eccentric) academics, many ways of finding one's voice, new ways of mixing disciplines, new ways of gaining certification, and so on), perhaps we were mistaken in thinking about the variety of breakfast cereals or Heinz's tin cans in the first place.

For example, every week McDonald's seems to produce a 'new' burger. Except that it isn't a new burger at all. It's pretty much the same burger as all the others. What it does have is a new picture and a new name. The variety is on the surface, not in the substance. In former communist countries, this superficiality of variety is quite apparent. Moscow has a huge department store on Red Square called GUM. The shops glisten with handbags, cosmetics, coffee and so on - much like any mall anywhere in the world. And yet, there is a remarkable lack of variety too. It's all the same stuff, repackaged behind different shop windows.

When capitalism measures profit, it analyses sales according to individual varieties. It will then develop those varieties according to their performance. It calls itself "Darwinian", although it's really Spencerian. As a policy, however, it is the antithesis of what happens in the natural world. Gregory Bateson makes the point eloquently...

"It is now empirically clear that Darwinian evolutionary theory contained a very great error in its identification of the unit of survival under natural selection. The unit which was believed to be crucial and around which the theory was set up was either the breeding individual or the family line or the sub-species or some similar homogeneous set of conspecifics. Now I suggest that the last hundred years have demonstrated empirically that if an organism or aggregate of organisms sets to work with a focus on its own survival and thinks that that is the way to select its adaptive moves, its "progress" ends up with a destroyed environment." (Steps to an Ecology of Mind, p457)

So what of variety and selection if the result is destruction? Is that really what nature does? Bateson is right, and that means a better definition of the concept of variety is required...

Thursday, 26 November 2015

The #REF, #TEF and Contingency in Higher Education

Of all the warning signs about the terrible state of our universities, the suicide last year of Stefan Grimm, professor of toxicology at Imperial College, was the most desperate. Like any unnecessary death – and certainly the tragedy of suicide - we are left asking "What if?" Not only the what-ifs of the professor’s work – the ideas he was working on, the ideas he would have gone on to develop had he lived – but also the what-ifs of the fallout from his death: the damage to those who were implicated in it, the effect on friends and colleagues, the negative publicity, let alone the effects on those who loved him. What if organisational circumstances and institutional politics had weighed more in his favour? Grimm, despite being well-published, had been deemed by his departmental management not to have brought in enough money: by the laws of toxic managerialism, he had to go. But his death touched a great many as they pondered the kind of madness we have arrived at. Viewed through the distorted mirror of academic metrics, his death had “impact”. But it was a death: the end of a set of possibilities for what might have been. Whilst we are touched by the tragedy of suicide as if watching a university soap opera, the risk is to lose sight of exactly what is lost. What is lost with the death of someone like Grimm is contingency: it is the snuffing-out of possibilities and as-yet unrecognised ideas. Contingencies in the University are not only at risk from tragic events like Stefan Grimm’s death. They are systematically being eroded by performance metrics like the REF, and now the TEF will have a similarly disastrous effect. Whilst contingency is at the heart of what universities do, our current measures for the effectiveness of the university sector cannot see it. I want to suggest some ways of addressing this.

First of all, let’s consider how the REF removes contingencies in the system. Of all the possible brilliant ideas for research, only a few are likely to achieve impact and success, immediately rewarding the investment in them. There is no way of telling which of the many ideas, plans, individual academics, and so on, are likely to 'pay out' a successful return. This is partly because there is no single idea, plan or individual whose merit can be individually measured: success depends on the intellectual climate, market conditions, history, existing research trajectories and social networks. An individual measure of the likelihood of success - like publication - is on its own a poor indicator, particularly as it acquires a reputation as an indicator by which funding decisions are made.

Contingencies can be removed if we fail to see them. The easiest way of not seeing contingency is to see no differences between contingencies. This is to “analogise” contingency: to see that contingency x is the same as y – effectively to see x or y as ‘superfluous’ or ‘redundant’. Academic judgements of quality are in large part identifications of analogies of arguments and results. Another way of removing contingency is to eliminate it because, despite any original academic difference it presents, this difference is seen either not to fit the particular reductionist disciplinary criteria of a reviewer (“this is not about education, but economics…”), or to be published in an insufficiently “high-ranking” journal. Judgements of quality are judgements about the redundancy of ideas based on written communications – and redundant work can lead to redundant academics. As with peer review, analogies, redundancies and contingencies exist as relationships between reviewers and the things they review: there is no objective assessment, and there is no way of assessing what analogies or differences a reviewer is predisposed to identify in the first place. We understand this so poorly, and so little of it is available for inspection. Its consequence in systematically removing contingency from the system is dire.

Of course, it might be argued that removing some contingency may sometimes be necessary, as a gardener might deadhead roses. But the gardener does this not to reduce contingency in the long run, but to maintain multiple contingencies of stems, leaves and flowers. In the university, contingencies of practices, ideas, relationships and conversations are necessary so that the institutional conditions are maintained to make maximum benefit of the most appropriate ideas in the appropriate conditions. The British Library or the Bodleian make a point of preserving contingencies by keeping a copy of everything that is published: one would hope this would reflect a similar culture in our universities, which traditionally have always exhibited many contingencies – it is the principal distinction between higher learning and schooling.

The consequence of removing contingency is increasing rigidity in the system, producing an education system which knows only a few ways to respond to a fast-changing world. There are contingencies not only among the possible ideas which might be thought, researched and developed within the university; there are contingencies in ways of teaching, the activities that are conducted by learners and teachers; the ways learners are assessed; the conditions within which teachers and learners can meet and talk; the technological variety for maintaining conversations, and the broader means by which conversations are sustained.

Contingencies are not only under attack from research budgets and assessment exercises. Government-inspired regulatory mechanisms are the handmaiden of marketing campaigns. Good scores = good marketing = good recruitment. But marketisation produces its own pressures for the removal of contingencies: closure of whole departments like philosophy, concentration on popular subjects like IT or Business, not to mention the blinkered drive for ‘STEM’ as universities confuse science with textbook performances of useless sums. Alongside these pressures to remove academic contingencies is an attempt to remove contingencies in academic and pedagogical practice. The contingencies of university life are deeply interconnected: the contingencies of pedagogy have been eroded by learning outcomes, disciplinary reductionism, competency frameworks, and the various indicators of ‘academic quality’. The recently-announced Teaching Excellence Framework amounts to a renewed assault on the contingencies of the classroom. An institution not recognised by the REF might nevertheless claim success in teaching, but if this success can only be defined through recognition in metrics, the TEF will reduce the diversity of teaching practice, drive out experimentation, and bureaucratise the process to produce outcomes that fit locally-defined criteria aimed at gaming success with national inspection. One university I know, its ear close to Westminster, announced its new strategy of being “Teaching Intensive, Research Informed” in a bid to find favour with the new regulatory climate: at a stroke, fearless pedagogical experimentation, diversity, freedom and flexibility become subsumed into ‘intensive teaching’ driven by metrics on teacher performance and ‘student satisfaction’, accompanied by implicit threats of redundancy, with the only real desire being that students stay on the course and continue to pay their fees.

The REF and the TEF are two sides of the same coin. Following a ‘business-oriented’ logic, their effect is to reduce contingencies in the University. But universities are unlike businesses precisely in their relationship to contingency: if universities lose contingencies, they cease to be universities and become (at best) schools. What should we do?

We can and should be measuring the contingencies of the higher education system, and allocating funding according to a much broader conception of a higher education ecology. Ironically, the bibliometric approaches partly used in the REF take us half-way there. Typically, bibliometrics measure the ‘mutual information’ in discourses: those topics which recur across different contexts – those areas where contingency is lost. Contingencies sit in the background to this ‘mutual information’. In effect they operate as the “constraints” which produce repeated patterns of practice, and which, if probed, can unlock new research potential. New discoveries are made when we see things that we once thought were analogous to be fundamentally different, and then start to explore these differences.
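As a sketch of what such a measurement involves, mutual information between terms and the contexts in which they appear can be computed directly from co-occurrence counts. The corpus below is an invented toy; real bibliometric work would use large document collections.

```python
from collections import Counter
from math import log2

# Toy corpus of (term, context) observations, e.g. a word occurring
# in a particular journal. The pairs are invented for illustration.
observations = [
    ("impact", "journal_A"), ("impact", "journal_A"), ("impact", "journal_B"),
    ("learning", "journal_B"), ("learning", "journal_B"), ("impact", "journal_A"),
    ("ecology", "journal_C"), ("learning", "journal_C"),
]

n = len(observations)
joint = Counter(observations)                    # counts of (term, context) pairs
terms = Counter(t for t, _ in observations)      # marginal counts of terms
contexts = Counter(c for _, c in observations)   # marginal counts of contexts

# I(X;Y) = sum over observed pairs of p(x,y) * log2(p(x,y) / (p(x) * p(y))).
# High values mean terms are tied to particular contexts; terms that recur
# indiscriminately everywhere contribute redundancy instead.
mi = sum((nxy / n) * log2((nxy / n) / ((terms[x] / n) * (contexts[y] / n)))
         for (x, y), nxy in joint.items())

print(f"I(term; context) = {mi:.3f} bits")
```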

The contingencies of pedagogy are also measurable: if we took all the learning outcomes, all the assignment briefs, subject handbooks and so on in the country, we would see a high degree of ‘mutual information’ (of course, our ‘quality regime’ depends on this!). What are the constraints which produce this (apart from the QAA or its successor)? Why is there not more diversity? How can funding be targeted to generate more variety in pedagogic practice? If we are to get the balance right between contingency and coherence in our universities, a much broader, but also more analytical, approach is required. Most importantly it has to sit outside marketisation – at the level of government: marketisation is one of many constraints which currently serve to reduce contingency. At the moment, the REF and the TEF both feed marketisation, producing a positive feedback loop. Higher education is out of control. The monitoring of levels of contingency would show where things are going wrong. We might hope that it would also help us to steer our higher education system to maximise, not reduce, its contingency. At the very least, we should aim to produce the conditions within which Stefan Grimm would still be alive, thinking new ideas.

Tuesday, 17 November 2015

Conservation of Constraint? - Some vague speculations about learning, music and violence

This is a very speculative post (well - that's partly what blogs should be about!). I've been ruminating on constraints for a number of years now. The technical, measurable component of constraint presents itself in Shannon's redundancy measure. This is the complement of the entropy calculation, which measures the average uncertainty of a message: the constraint is the thing which must be present in order to produce that uncertainty - for example, with regard to the uncertainty between words of a language, the grammar of that language performs this function. One of the functions that constraint performs is to ensure effective communication: grammars restrict choices, and structure things such that certain key aspects of meaning are emphasised or repeated.

Redundancy in information theory can refer to a number of things. On the one hand, it might refer to 'repeated information'. If we are to send a message in a noisy environment, it might be necessary to repeat it a few times. This kind of redundancy plays out over time. I would like to call it 'diachronic redundancy' or 'diachronic constraint'. Alternatively, there is redundancy where a message is conveyed simultaneously in different ways: I might say "I don't understand" whilst at the same time shrug my shoulders, or shake my head. Between the three different signals, the message is conveyed through a kind of connotative process. This type of redundancy is "synchronic redundancy", or perhaps "synchronic constraint".

Human communication obviously takes place within both synchronic and diachronic dimensions. However, I find myself sometimes more focused on diachronic processes in time which express redundancy (something repetitive like typing, or walking, or any kind of repetitive sequential action). Other times, I am deeply immersed in a multi-sensory contemplation of many different signals: when I study a painting, or listen to music, or have a deep conversation with somebody face-to-face over a beer. This is more synchronic. Then I am mindful that the diachronic gives way to the synchronic in the way that action gives way to reflection; in the way that contemplation is balanced by action.

So here's my question: is constraint conserved in human relations? Is the sum of diachronic and synchronic constraint constant (assuming we have a way of easily measuring each)? Music may provide some grounds for investigating this, perhaps: the difference between moments of harmonic richness and moments of rhythmic drive.

There is an added complication however (of course!). Redundancy is measured by the formula:
R = 1 - H/Hmax
and von Foerster convincingly argues that self-organisation and development work by increasing the bounds of Hmax, the 'maximum entropy', so that self-organising systems become more complex. (I wrote about this here: http://dailyimprovisation.blogspot.co.uk/2015/09/learning-gain-and-measurement-of-order.html) So it may be that constraint isn't conserved exactly, but rather that the balance between diachronic and synchronic constraint gives rise to a mechanism for increasing the maximum entropy: increasing complexification. This, I think, is important in understanding the learning process.
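A minimal sketch of the calculation in Python (the example strings are invented) shows both the redundancy measure and von Foerster's point that enlarging Hmax changes the result even when the message itself stays the same:

```python
from collections import Counter
from math import log2

def redundancy(symbols, alphabet_size=None):
    """Shannon redundancy R = 1 - H/Hmax for a sequence of symbols.

    Hmax is log2 of the alphabet size: by default the symbols actually
    observed, but it can be set larger (von Foerster's point: a system
    can change its redundancy by enlarging Hmax rather than lowering H).
    """
    counts = Counter(symbols)
    n = len(symbols)
    h = -sum((c / n) * log2(c / n) for c in counts.values())  # entropy H
    h_max = log2(alphabet_size if alphabet_size else len(counts))
    return 1 - h / h_max if h_max > 0 else 1.0

print(redundancy("abababababab"))                    # 0.0: two symbols, used equally
print(redundancy("abababababab", alphabet_size=26))  # ~0.79: constrained, given 26 letters
print(redundancy("aaaaaaaaaaab", alphabet_size=26))  # ~0.91: more repetitive still
```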

Intuitively, we move between synchronic and diachronic constraints, as between contemplation and action, and along the way expand the domain within which constraints apply themselves. There's a link to Vygotsky here: the Zone of Proximal Development is a way of expressing the synchronic constraints (closeness of a teacher) in balance with the diachronic constraints (the particular activities a learner engages in on their own). Does the ZPD "draw out" synchronic constraints (teacher-pupil relationship) as diachronic constraint (scaffolded activity)?

Maybe there's also a link to terrorism (which is obviously on everyone's mind at the moment). What is it that leads people to carry out sequential, violent and repetitive activities like shooting people? This looks like a kind of diachronic constraint. The synchronic component is fanatical religion. Is terrorist violence a 'drawing-out' of repetitive diachronic activity from intense synchronic experience? Is this how the sense of exclusion and injustice (which is part of that synchronic experience) feeds into the execution of violent plans? Of course, in this case, it isn't stable; but is there an analysable relationship between the synchronic aspects and the diachronic execution? (Of course, the same applies to the military response by the state!). Perhaps this is over-thinking something at a very difficult time. But difficult times are valuable in producing a lot of thinking.... they are full of synchronic, multi-layered constraints.

Monday, 16 November 2015

Music and Murder: Have our ears changed after the Paris attacks?

There are few moments where music has a direct semantic reference to concrete things: Beethoven did it in the various overtures to Fidelio - an off-stage trumpet meant an approaching army. With Napoleon at the gates of the city, most hearing this trumpet call in the Theater an der Wien in 1805 would have taken it as an explicit sign, rather than an abstract constituent of the music. Birtwistle has a telephone ringing in The Second Mrs Kong (ironically with a traditional bell, which in its contemporary setting would now be a kind of anachronism).

What we have now is the sound of rock music - heavy whining guitars, amplified vocals blasting out - with the entry of an arrhythmic, cork-popping rat-tat-tat sound accompanied by screaming. In the months and years to come, will we be able to hear this without making a direct association with the terrible events of the weekend? Have our ears changed?

That rat-tat-tat sound becomes a dull, terrible trope. Just as when we hear 'La donna è mobile' in Rigoletto we know that trouble and tragedy are coming (Verdi's genius is to give his 'sign' the best tune!), so when we hear Palestrina, Beethoven, Bach or Mozart with a rat-tat-tat entry, it will stand for the opposite of what is expressed in the music.

9/11 afflicted our eyes with real-life images which we already knew from disaster movies. Paris may have changed our ears. That is a more profound thing.

Wednesday, 11 November 2015

Information and Second-Order Cybernetics: Constructivist Foundations and Empirical Approaches

The central challenge in differentiating varieties of second-order cybernetics lies in disentangling epistemological differences where, on the surface, there are shared claims for epistemological coherence based on principles of reflexivity, circular causality and observation. Whilst it might be claimed that, for example, Maturana’s Biology of Cognition is consistent with Luhmann’s Social System theory, or von Foerster’s cybernetic cognitivism, discussions between scholars representing different varieties of second-order cybernetics soon find themselves in disagreements which appear almost sectarian in character. In these disputes, there appear to be two dimensions. On the one hand, there is conflict about what each variety of second-order cybernetics stands against, and how each variety may accuse the other of lending tacit support to a position which both of them claim to oppose. On the other hand, there are differences that arise in uninspected assumptions concerning those principles which all varieties of second-order cybernetics uphold, amongst which principles of circularity, induction and adaptation appear universal.  


In this paper, we uphold empiricism as an essential element in the coordination of a coherent second-order cybernetic discourse. We build on recent critical work of Krippendorff and Mueller, who identified historically-embedded inconsistencies across varieties of second-order cybernetics stemming from the distinction between the General Systems Theory of Bertalanffy and the cybernetics of Wiener. Both Krippendorff and Mueller argue that GST belongs to an intellectual tradition of holistic theorising – a tradition that stretches back to the German idealism of Schelling, Hegel and Fichte. GST replaced vitalism with mechanism but essentially maintained the same objective in seeking a totalising mechanistic description. By contrast, cybernetics was a pragmatic and empirical endeavour which evolved in the practice of early cyberneticians like Ashby, for whom cybernetics was a scientific orientation with regard to constraints, rather than mechanistic causation. In contrast to GST, cybernetics aimed not for ideal descriptions of mechanisms, but to actively seek the conditions of possibility for effective organisation. Krippendorff argues that in second-order cybernetics, mechanistic idealism and constraint-oriented investigation became conflated under the broad umbrella of “cybernetics of cybernetics”, and in the process lost sight of earlier cybernetic work which enshrined principles of reflexivity and observer orientation which were accompanied by an active empirical engagement.
We acknowledge that the empirical orientation of second-order cybernetics has been overshadowed by accusations of objectivism and inconsistency with second-order cybernetic theory. However, we take this as an invitation to reconsider what it is to be empirical, reflecting on the relation between second-order cybernetic epistemology and the philosophy of science, and inspecting current empirical practices allied to second-order cybernetics. We believe the accusation of objectivism towards empiricism is a mistake, both as a misunderstanding of the philosophy of science (particularly Hume's epistemology), and as a failure to appreciate the intellectual contribution of second-order cybernetics to a number of present-day empirical practices. We begin our analysis by considering what varieties of second-order cybernetics stand against, separating different theoretical orientations towards foundationalism, objectivism and universalism: it is, we argue, in these various unarticulated orientations that inconsistencies in the discourse arise. We then consider what varieties of second-order cybernetics support, focusing on fundamental principles of induction, adaptation and circularity. Behind common descriptions of induction and adaptation lie distinctions about regularities and the development of knowledge. We follow Hume, and Keynes's critique of Hume, in analysing the way analogies are identified in inductive processes. Extending Keynes, it is argued that second-order cybernetics invokes two kinds of analogy: the analogies between events, and the analogies between the different states in the observer.
The problem of induction in empirical practice has played an important role in the philosophy of science. Hume's critique of probabilities, and of the way event regularities contribute to scientific knowledge, unites problems in probability theory with problems in the philosophy of science. Developing Hume's stance, Keynes argued that experiment led to the identification of analogies negatively. These arguments are important to second-order cybernetics because its principal empirical approach involves Shannon's information theory, which similarly unites probability theory with the growth of knowledge. At the heart of Hume's and Keynes's concerns is the difference between novelty and analogy: a distinction which has similarly formed the basis for critique of Shannon's information theory, with which current second-order cybernetic empirical approaches have been actively engaged.
In the final section of the paper, we uphold empirical practice as a way of addressing the confusion over double-analogies in second-order cybernetics and of contributing to the coordination of a coherent second-order cybernetic discourse. In returning to Hume's sceptical philosophy, we argue that the role of empiricism is to maintain reflexive discursive coherence rather than to uphold objectivism. The information theoretical techniques we present, whilst none is perfect, have the potential to ground second-order cybernetics in an empirical practice which can stimulate and support a deeper reflexive science, whilst avoiding the aporia of ungrounded disputes which lose themselves amongst the double-analogies of second-order cybernetic epistemology.
What Second-Order Cybernetics stands against
Second-order cybernetics is ostensibly defined as the "cybernetics of observing systems", yet there are a variety of interpretations of what this might mean – particularly given the fact that cybernetics itself is multiply and inconsistently defined. For example, Niklas Luhmann and Humberto Maturana are both second-order cyberneticians, and yet each has criticised the other for an inconsistent application of second-order cybernetic principles. Luhmann's borrowing of Maturana's theory of autopoiesis as a way of developing sociological theory (particularly developing Parsons's social systems theory), with its entailed view that communication systems are 'autopoietic' (i.e. organisationally-closed, structurally-determined systems which regenerate their own components), appears to impute some kind of mind-independence to the communication system which subsumes psychological, perceptual and agential issues. Luhmann escapes the accusation of objectivism in this approach by presenting "agency" of minds as an epiphenomenon of the dynamics of communication systems: the 'personal' is subsumed within the dynamics of the collective. This move, however, subverts the biological foundations of autopoietic theory. When Maturana argues that:
"a cognitive system is a system whose organisation defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain,"
the implication is that there is what Varela calls "in-formation" of the self-organisation of interacting organisms, rather than mind-independent information. Luhmann's redescription of sociology in terms of autopoiesis has been taken by Maturana and his followers as something of a betrayal and distortion. And yet, Luhmann's redescription has been highly influential, attracting the attention of eminent sociologists and philosophers, including Habermas, for whom systems thinking would otherwise have been sidelined.
The contrast between Luhmann and Maturana is illustrative of deeper tensions within the domain of issues which varieties of second-order cybernetics stand against. In reviewing the similar and related problem of "varieties of relativism", Harre identifies three major areas in which the intellectual positions opposed by varieties of second-order cybernetics can be contrasted. These positions relate to:
  • Objectivism: the position that there are objects and concepts in the world independent of individual observers;
  • Universalism: the position that there are beliefs which hold good in all contexts for all people;
  • Foundationalism: the position that there are fundamental principles from which all other concepts and phenomena can be constructed.
Whilst each second-order cybernetic theory stands against objectivism, each is vulnerable to the claim of objectivism in some aspect, and in each variation the locus of the objection differs. Objectivist vulnerability in Maturana lies in the biological and empirical basis of his original theory; in Luhmann, the criticism is made that his communication system is mind-independent where Luhmann claims it is mind-constitutive. In Pask's cybernetic computationalism, Krippendorff criticises the objectivism of his computational metaphors and his reduction to physics, with the implication that mind is a computer. In von Foerster's cognitivism, there is an implicit objectivism in the reduction to mathematical recursive processes.
The stance of second-order cybernetics towards universalism is more complex, reflecting the critique of Mueller and Krippendorff about the relationship between cybernetics and General Systems Theory (which is clearly universalist). There is an implicit view within second-order cybernetics which allies itself to philosophical scepticism: that there is no 'natural necessity', or naturally-occurring regularities in nature. However, second-order cybernetics does appear to uphold a law-like nature for its own principles, arguing for these as a foundation for processes of construction of everything else. At the heart of this issue is the nature of causation inherent within universal laws. Second-order cybernetics upholds a view that, rather than universal causal laws being in operation, self-organising systems operate with degrees of freedom within constraints. However, in taking this position, different varieties of second-order cybernetics differ in their understanding of what those constraints might be, and how the system might organise itself with regard to them. Maturana's constraints are biological; Luhmann's are discursive; Pask's are physical; von Foerster's are logical.
With regard to foundationalism, all varieties of second-order cybernetics appear to wish to maintain principles of self-organisation as foundational. In this, however, irrespective of the mechanisms and constraints which bear upon the self-organisation of a system in its environment, there is also a need to consider the constraints that bear upon the second-order cybernetician who concocts foundational theories. How does this happen? How does it vary from one second-order theory to another? Distinguishing foundationalism between different varieties of cybernetics entails exploring the core ideas of adaptation and induction.
The Problem of Induction and the Double-Analogies of Second-Order Cybernetics
The relationship between observer and observed within second-order cybernetics is one of organisational adaptation within structurally-determined and organisationally-closed systems. Luhmann explains that "the operative closure of autopoietic systems produces a difference, namely, the difference between system and environment. This difference can be seen. One can observe the surface of another organism, and the form of the inside/outside distinction motivates the inference of an unobservable interiority." (Luhmann's italics). Luhmann draws on Spencer-Brown's Laws of Form, arguing for the connection between drawn distinctions and the internal restructuring of observers. "Adaptation" is the name given to this restructuring. Different domains of adaptation include the biological, discursive, cognitive, atomic and so on. However, a domain of adaptation and observation entails the identification of sameness. Whilst the logic of adaptation is an abstract dynamic process, the logic of sameness is specific: to be the same involves both the sameness of biological, discursive or cognitive perceptions and a sameness within the perceiving system. By contrast, to paraphrase Bateson, a difference is not a difference unless it makes a difference in the perceiver.
With regard to the sameness of events, adaptation within second-order cybernetics is generally regarded as inductive, with adaptations responding to 'regularities' of events which stimulate structural change in the organism. In describing the biological adaptation of cells and organisms to environmental 'niches', Maturana argues:
“the living system, due to its circular organisation, is an inductive system and functions always in a predictive manner: what occurred once will occur again. Its organisation (both genetic and otherwise) is conservative and repeats only that which works.” (1970)
Recurrence and regularity of events is characterised elsewhere in autopoietic theory: Varela, in distinguishing the concept of 'in-formation', describes it as "coherence or regularity". Across the varieties of second-order cybernetics, a distinction is drawn between events which cohere with the existing structural conditions of the organism – amongst which coherences and regularities can be determined – and those events which demand organisational transformation. Regularities are variously suggested between biological cells, in logical structures emerging from self-reference (what von Foerster identifies as 'eigenvalues'), or in coherences and stabilities within a discourse (for example, Luhmann's social systems or Beer's 'infosets').
Von Glasersfeld, whose radical constructivism makes explicit reference to inductive processes, provides a revealing interpretation of Piagetian 'assimilation' in his 'schema' theory of learning. He illustrates assimilation thus: "if Mr Smith urgently needs a screwdriver to repair the light switch in the kitchen, but does not want to go and look for one in his basement, he may 'assimilate' a butter knife to the role of tool in the context of that particular repair schema." Here the double-analogy of the inductive process involves:
  1. the identification of some analogy between the butter knife and the screwdriver (Gibson might call this an ‘affordance’)
  2. the identification of analogies within the observer’s knowledge of ‘ways of repairing the light switch’
In both cases, these analogies will have been established through repetition: screwdrivers, screws and broken light switches are encountered in numerous configurations, just as the practice of driving screws is acquired through repeated performance. Against the background of analogies of observer and observed, there are also differences which produce the structural adaptations enabling an adjustment to existing known practices so as to find a suitable way of using the butter knife. Fundamentally, however, were the analogies not perceived – both from the observed knife, and within the perceiving subject – there would be no ground for the establishment of a difference and its consequent transformation of practice.
The role of repetition in the establishment of analogy and induction is a topic which has attracted the attention of philosophers since Hume. Hume's example asked how we might acquire an expectation of the taste of eggs. The process, he argues, requires the identification of the 'likeness' between many eggs. With many examples of eggs tasting the same way, an expectation (knowledge) is created concerning the taste of eggs. The process of analogy occurs because of a 'fit' between the recognition of analogy in perception and the repetition of that analogy over many instances.
In second-order cybernetics, the mere observation of the 'likeness' of eggs is insufficient; we must also consider the 'likeness' of the relationship between the observer of the eggs and the eggs themselves. All varieties of second-order cybernetics entail a description of the observer as an adaptive mechanism. Hume's philosophy only considered a single analogy of events; second-order cybernetics has to consider a double-analogy. It is possibly for this reason that second-order cybernetics, and particularly its close relation, second-order science, finds itself fighting a battle on two fronts: on the one hand, with positivist empiricism; on the other, with philosophers.
Double analogy can be used to separate different approaches to second-order cybernetics. In Luhmann, the 'observer' is a discursive organisational structure which maintains itself in the light of new discursive performances. The identification of differences in discursive structure forms a fundamental plank in Luhmann's differentiation of social systems. In order for a discourse to adapt (for example, through innovation), the discourse must be able to identify those aspects of linguistic performance which are analogous to existing discursive structure, and then to reformulate its discursive structure such that subsequent discursive events may be anticipated. In Maturana, the observer is the biological entity, whose organisation has its own implicit analogies, together with the analogies of the perturbations which confront it.
Double analogy presents the central problem facing the coordination of discourse between varieties of second-order cybernetics: how can the analogies of perturbation be determined and compared if the analogies of perceiving structure are so varied across different cybernetic theories? In other words, how is it possible to have a coherent and stable second-order cybernetic discourse where quite different interpretations can be created for the same perceived events?
Hume's empirical theory, and his separation between analogy and induction, is useful here. Whilst much second-order cybernetics has tended to eschew empiricism as first-order reasoning, Hume's concept of shared empirical inquiry presents a solution to the mismatch between analogies of observational structure and analogies of perturbation. The question concerns the way reproducible empirical experiences create, at the very least, a foundational context and coordinating framework for debate and discussion. Indeed, the experience of discourse within discourse is already empirical in the way that Hume envisaged it: the experience of discourse itself presents a shared 'life-world' for participants to reflect not only on the substance of their discussion, but on the dynamics of the discourse itself. Discourse carries its own observable analogies which can be studied. However, material engagement also produces analogies which can be brought into discourse. Keynes extended and critiqued Hume's theory by arguing that regularities in experiment were not enough: analogies are identified negatively through varied repetition.
Keynesian Negative Analogies and Reflexivity
Keynes argued that Hume's analogies of eggs did not go far enough:
“His argument could have been improved. His experiments should not have been too uniform, and ought to have differed from one another as much as possible in all respects save that of the likeness of the eggs. He should have tried eggs in the town and in the country, in January and in June. He might then have discovered that eggs could be good or bad, however like they looked.”
Keynes suggests the concept of a 'negative analogy', where there are multiple experiences coupled with a "subtractive" identification of the core features which are common. Keynes's view of analogy enhances Hume's by suggesting that adaptation occurs through event regularities in a relational manner: scientific knowledge emerges in the interaction and adaptation of an observer with observed regularities. More significantly, he argued that this process of adaptation is negative: what occurs in the empirical process is an adaptation to a variety of events, similar only in some essential core aspect.
Keynes was acutely aware of the role of ideas and reflexivity and their relation to experiment: observers frame the regularities that they perceive. His oft-quoted remark in the General Theory ("Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back") suggests that Keynes saw two aspects to analogy: in events and in the observer. The analogies of the observer are reflected in the ways that new ideas are generated, or old ideas applied to new contexts. Keynes's view was that assumptions about different analogies should be made explicit and should be tested empirically in a variety of ways and contexts.
The Keynesian view is useful because it helps to situate current views on the state of second-order cybernetics. Krippendorff has recently argued that second-order cybernetics requires four distinct ‘reflexive turns’. These are:
  1. The cognitive autonomy of the observer
  2. Reflexivity of participation in use, design and conversation
  3. Realising human agency in the relativity of discourse
  4. The social contextualisation of cybernetics
As forms of reflexivity, each of these has distinct loci of double-analogies, and each 'turn' entails successive turns. The cognitive autonomy of the observer presents analogies in the observer's structure in relation to events: observation occurs in operationally-closed and structurally-determined systems, and this is the principal ingredient of second-order cybernetics's anti-objectivism. It is also the starting point for asking deeper questions about the nature of the analogies in the observer and the analogies between events. These issues are first approached through "participation in use, design and conversation": without practical engagement with event regularities and their analogies, how are the analogies of observation, and the constraints within which observation occurs, to be identified? This opens onto a new set of questions concerning the discourse itself: talking about theory, experiments and results is itself a participative empirical engagement. The reflexivity of discourse concerns the analogies of ways of describing phenomena and ideas. But we must then also consider the discourse of a community discussing empirical results, for this is itself an 'empirical domain': what counts as analogical within the discursive domain beyond what is counted as analogical in the domain of measurement? Participation in discourse leads to the consideration that there are also "analogies of the gut": intuitive and ethical concerns which may have no codification, but which each human being experiences. In asking "is your gut feeling the same as mine?" we face embodied constraints which only reveal themselves as regularities of sensation and as the negative image of behaviour, where fundamental ethical issues can be codified in the regularities of discourse.
Krippendorff's 'reflexive turns' are interconnected: analogies at one level open questions which lead to a search for analogies at the next. Keynes's and Hume's focus on analogies presents empirical practice as a way of coordinating the discourse of scientists: it sits at the link between Krippendorff's turns 2 and 3. Since the analogies of observation depend on participation, there is also a connection between 1 and 2. In terms of the focus for empirical practice at 2, Shannon's information theory presents a way in which the analogies of events and the analogies of observation may be measured and modelled. It can also be used as a component to analyse discourse at level 3. This is not to say that Shannon's theory has a special status per se, but rather that it occupies an important position as a theory which unites coherent articulations about the lifeworld with a model of the observer as an adaptive system. By bridging the gap between analogies of perception and analogies of events, Shannon's theory (and its variants) contributes to the conditions for a coherent second-order cybernetic discourse with its multiple levels of reflexivity.
Three empirical approaches
Within information theory, the 'sameness' of events must be determined in order for one event to be counted and compared to another, and for its probability (and consequently its entropy) to be calculated. The use of information theory may (and frequently does) slip into objectivism: this arises when algorithmically-generated results are declared to be "accurate" representations of reality, whilst overlooking the discursive context within which such declarations are made (i.e. a failure at level 3 of Krippendorff's reflexivity). There is, however, no reason why information-theoretical results should not declare themselves as questions or prompts for deeper reflection within an academic community rather than assertions of objectivity. This conception is much closer to Hume's original view of the role of empiricism in the growth of scientific knowledge: measurement is an aid to the coordination of a deeper reflexive discourse among scientists. Information theory's explicit modelling of the two sides of analogy makes it particularly powerful in the conduct of a reflexive science. It challenges scientists to be explicit about what they see as analogical; it invites others to argue about distinctions; it insists on clarity, not rhetoric.
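To make this concrete, consider a minimal sketch (ours, not drawn from any of the authors discussed) of what an entropy calculation commits one to. Before Shannon's H can be computed, the analyst must decide which tokens count as 'the same' event; in the toy Python below, that analogical commitment is the lowercasing step:

```python
from collections import Counter
from math import log2

def shannon_entropy(events):
    """H = -sum(p * log2(p)) over the relative frequencies of events
    the observer has already judged to be 'the same' (the analogies)."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# The analogical commitment happens *before* the calculation: here we
# declare two tokens analogous if they are the same word once lowercased.
tokens = "Word words word WORD".lower().split()
print(shannon_entropy(tokens))  # 'Word', 'word' and 'WORD' counted as one event
```

Change the agreed analogy (say, stemming 'words' to 'word') and the probabilities, and hence the entropy, change with it: the measurement is relative to the analogies agreed beforehand, which is precisely the point.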
In turning our attention to three examples of information-theoretical empirical practice, we focus on what is counted, what is considered analogical on what grounds, what is inferred from measurements, and what is analogical in what is inferred. At a basic level, this entails identification of analogies in the letters in a message, the occurrence of subject keywords in a discourse, or the physical measurements of respirations of biological organisms. Reflection concerning the identification and agreement of analogies is a participative process with the phenomena under investigation, as well as a reflection and analysis of the discourse through which agreement about the analogies established in that phenomenon is produced.
Our three empirical examples have an explicit relation to second-order cybernetics. In the statistical ecology of Ulanowicz, mutual information calculations between the respirations and consumption of organisms in an ecosystem have opened critical debate not just about the biological phenomena under investigation, but about refinements to Shannon's equations and critical engagement with problems of analogy and induction. In Leydesdorff's information-theoretic analysis of scientific discourse, social systems theory sheds light on the possibility of a "calculus of meaning" which has stimulated discourse in evolutionary economics, and invited reflexive engagement with the relations between scientific discourse, economic activity and government policy. Haken's synergetics and his theory of 'information adaptation', although largely independent of cybernetics, have deployed information theory in conjunction with powerful analogies from physics to develop a broader socio-analytical framework for examining a range of phenomena from biology to the dynamics of urban development. We consider each of these in turn.
Statistical Ecology
Ulanowicz's statistical ecology uses information theory to study the relations between organisms as components of interconnected systems. Measurements of different aspects of the behaviour of organisms result in information, and the central premise of statistical ecology is that analysis of this information can yield insights into the organisation, structure and viability of these systems. Drawing on established work on "food webs", and also cognisant of economic techniques such as Leontief's 'input-output' models, Ulanowicz has established ways in which the propensities for development of ecosystems may be characterised by studying the 'average mutual information' between the components. Calculations produced through these statistical techniques have been compared to the course of actual events, and a good deal of evidence suggests the information theoretic approach to be effective.
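For orientation, the measure can be written out. The following is our gloss in the notation conventional in this literature (not a quotation from Ulanowicz): T_ij denotes the flow from compartment i to compartment j, T_i. and T_.j the corresponding row and column totals, and T_.. the total system throughput. The average mutual information of the flow network is then:

```latex
\mathrm{AMI} \;=\; \sum_{i,j} \frac{T_{ij}}{T_{..}}\,
  \log_2\!\left(\frac{T_{ij}\,T_{..}}{T_{i.}\,T_{.j}}\right)
```

The analogical commitments are made before any calculation: in deciding which organisms constitute a compartment, and which transfers count as 'the same' flow.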
In aiming to produce indices of ecological health, Ulanowicz sees his task, from a Batesonian perspective, as taking further steps towards an 'ecology of mind'. Material results and defensible consistencies between theory and empirical data become a spur for deeper critical reflection on the nature of information, and the relationship between mind and nature. In recent years, he has engaged with the criticism that Shannon's measure of uncertainty (H) fails in itself to distinguish the novelty of events from those events which confirm what already exists: in other words, those events which are analogous to existing events. Whilst building on his existing empirical work, Ulanowicz has sought to refine Shannon's equations so as to account for the essentially relational nature of information theory. In this regard, he has distinguished between the average mutual information in the system, which is effectively a measure of the system's analogies, and the contingencies generated within the system, which provide it with flexibility of options for adaptation. With excessive average mutual information at the expense of contingency, ecological systems become vulnerable to external shock; with excessive generation of contingency at the expense of average mutual information, coordination is lost as the system becomes an anarchic threat to itself.
At the heart of Ulanowicz's arguments for refining approaches to information theory is the consideration of the 'background' of information: what in Shannon's original theory is termed "redundancy", but which Ulanowicz more broadly defines as "apophatic information", or what is not-information. The arguments he presents resonate with earlier attempts to measure order: von Foerster, for example, draws attention to Shannon's concept of redundancy as an indicator of self-organisation within a system.
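The balance described in the last two paragraphs can be given a compact form. On our reading of Ulanowicz (a sketch, not his exact notation), the development capacity C of a flow network decomposes into an organised part, the ascendency A (a throughput-scaled average mutual information), and an overhead Φ in which the redundant, 'apophatic' background resides; von Foerster's use of redundancy, R = 1 - H/H_max, makes a comparable move:

```latex
C \;=\; -\sum_{i,j} T_{ij}\,\log_2\frac{T_{ij}}{T_{..}}, \qquad
A \;=\; \sum_{i,j} T_{ij}\,\log_2\frac{T_{ij}\,T_{..}}{T_{i.}\,T_{.j}}, \qquad
\Phi \;=\; C - A \;\ge\; 0
```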
Ulanowicz demonstrates the empiricist's reflexivity by coordinating measurements of natural phenomena with a critical debate about the techniques used to produce those measurements and their epistemological implications. As with all measurement techniques, uncritical application of formulae will lead to objectivism, but this is something of which Ulanowicz himself is acutely aware: statistical ecology is a grounding for discourse. However, once empirical results are in the discursive domain, the ecologies of the discourse itself also present opportunities for investigation. Here Ulanowicz's suggested refinements to Shannon's equations may in time prove powerful. However, Shannon's basic equations are well-suited to studying the discourse empirically, and it is on this that Leydesdorff's related information-theoretic approach focuses.
Leydesdorff’s Statistical Analysis of Social Systems
Leydesdorff's work on discourse uses Shannon's equations as a way of empirically investigating discourse dynamics and providing a foundation for theoretical claims made by Luhmann concerning the relationship between information and meaning. Following Luhmann's second-order cybernetic theory, Leydesdorff argues for the possibility of a calculus of 'meaning' by studying the observable uncertainty within discourses and extrapolating the implicit uncertainties of meaning. As with Ulanowicz, the principal focus has been on mutual information between discourses in different domains. Drawing on Luhmann's identification of the dynamics between different discourses, Leydesdorff has layered on a quantitative component, facilitated by the enormous amounts of data on the internet, applying this to the study of innovation capacity in economies. Using longitudinal analysis of communication data involving scientific publications, industrial activity in the production of patents, and the regulatory activity of governments through policy, correlations between the dynamics of mutual information and economic development have been established.
Leydesdorff argues that mutual information dynamics within discourses are an indicator of deeper reflexive processes in the communication system. Reflecting Ulanowicz's identification of a balance between flexibility and mutual information, Leydesdorff has in recent years balanced the mutual information between discourses against "mutual redundancy" as an index of "hidden options" within an economy: in other words, those ideas and innovations which remain latent and undeveloped, but with the potential for development. As with Ulanowicz, this has inspired a critical engagement with Shannon, but in Leydesdorff's case it has been prompted by a puzzle in Shannon's equations for mutual information. In more than two dimensions of discourse (i.e. more than two interacting discourses), Shannon's equation for mutual information produces a result with a fluctuating positive or negative sign. Both Ashby and Krippendorff have speculated on what the fluctuating sign might indicate. Leydesdorff has argued that a positive mutual information is an indicator of the generation of missing options within the discourse, and that this can be considered alongside measurements of 'mutual redundancy'. Whilst mutual information provides a 'subtractive' perspective on the interactions of discourses (because mutual information is the overlapping space left when differences in discourses are removed), mutual redundancy provides an additive perspective relating to those dynamics which contribute to the auto-catalysis of options in discourse.
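The sign behaviour at issue is easy to reproduce. The sketch below is a toy of ours, not Leydesdorff's own measurement apparatus (which works on empirical co-occurrence data): it computes the three-dimensional mutual information by inclusion-exclusion over joint entropies, and under this standard formulation the sign flips between a redundancy-dominated configuration and a synergy-dominated one (sign conventions vary across the literature):

```python
from collections import Counter
from math import log2
from itertools import product

def H(samples):
    """Shannon entropy of a list of (hashable) outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information_3d(triples):
    """I(X;Y;Z) by inclusion-exclusion over the joint entropies."""
    xs = [t[0] for t in triples]
    ys = [t[1] for t in triples]
    zs = [t[2] for t in triples]
    return (H(xs) + H(ys) + H(zs)
            - H(list(zip(xs, ys))) - H(list(zip(xs, zs))) - H(list(zip(ys, zs)))
            + H(triples))

# Three 'discourses' saying the same thing: the sign is positive.
redundant = [(b, b, b) for b in (0, 1)]
# The third determined jointly by the other two (z = x XOR y): negative.
synergistic = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
print(mutual_information_3d(redundant), mutual_information_3d(synergistic))  # 1.0 -1.0
```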
The additive approach of measuring mutual redundancy presents a simplified statistical index of discourse dynamics which has implications for the consideration of the relationship between analogies and second-order cybernetic theory. The complexities of the subtractive approach to measuring mutual information are relative to the number of dimensions. The assumptions made in mutual information involve double-analogies relating to each factor: in Parsons's terms, that 'ego' and 'alter' see analogies in the same way. By contrast, mutual redundancy provides a general index of constraint, and need not be concerned with whether a perceiving subject similarly recognises specific variables of redundancy; it is only concerned with the fact that a perceiving subject is constrained in various ways, and that the mutual redundancy measure is an index of this. As with Ulanowicz's apophatic measurements, this particular feature has important implications in simplifying complex multivariate analysis. Taken together with calculations of mutual information, Leydesdorff's measurements provide an important bridge between Shannon's 'engineering problem' of information and Luhmann's speculations about communication dynamics and social systems.
Leydesdorff's economic work may be mistaken for another econometric technique and treated in an objectivist way. Equally, the work may be taken as "evidence" which endorses Luhmann's sociology. Neither perspective is faithful to the reflexive manner in which Leydesdorff's approach has developed, where its empirical foundation has grounded aspects of second-order cybernetic discourse. Although Luhmann remains the dominant figure in the work, critical engagement with him follows the data analysis: Luhmann's relationship with Maturana remains problematic; his transcendentalising of subjectivity in what Habermas calls 'network subjectivity' is open to critique on ethical grounds; his interpretation of issues of intersubjectivity about which Parsons and Schutz disagreed remain open questions; and interpretations of economic calculation remain generators of questions rather than assertions of objectivity. Yet engagement with economic data has stimulated critical engagement with cybernetic theory and information theory. Leydesdorff's approach is one of grounding the second-order cybernetic discourse within empirical practice: as with Ulanowicz, there is a co-evolution of theory with empirical results. Most impressive is the fact that despite the apparent simplicity of identifying analogies in the co-occurrence of key terms in different discourses, convincing arguments and comparative analyses have become possible concerning the specific dynamics of discourses, and this has been a source of new hypotheses which have driven new theory and new empirical practice.
Haken’s Synergetics
Hermann Haken has in recent years turned his attention to the idea of 'information adaptation', and in so doing echoes themes from the work of both Ulanowicz and Leydesdorff, as well as the observer-orientation of second-order cybernetics. In introducing the relationship between Shannonian information and meaning, Haken explicitly points out the need for analogies to be identified by the observer, or in his terminology, for the "index" within Shannon's formula to be made explicit. Meaning, he argues, enters into the Shannon equation "in disguise" through this process of determining the analogies between different phenomena. Implicated in this process is what Haken calls the "Mind-Brain-Body" (MBB) system. He then explains how the MBB system produces cognition through dynamics of information 'deflation' and information 'inflation', which respectively reduce or increase Shannon's entropy measurement. Haken's ideas carry echoes of Leydesdorff's concept of the generation of hidden options (information inflation), or Ulanowicz's distinction between processes of mutual information (deflation) and autocatalysis (inflation). A further example might be cited in Deacon's distinction between 'contragrade' (deflation) and 'orthograde' (inflation) processes in information transmission.
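A toy numerical gloss on 'deflation' may help (ours, not Haken's synergetic computer): when an observing system maps many raw sensory states onto fewer categories, Shannon's entropy falls; 'inflation' is the reverse movement, unfolding a single category into many associated states:

```python
from collections import Counter
from math import log2

def H(outcomes):
    """Shannon entropy of a list of outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# 'Deflation': eight equiprobable raw sensory states are collapsed into
# two categories, and the entropy entertained drops from 3 bits to 1 bit.
sensory = [0, 1, 2, 3, 4, 5, 6, 7] * 4
categories = ['edge' if s < 4 else 'surface' for s in sensory]
print(H(sensory), H(categories))  # 3.0 1.0
# 'Inflation' runs the other way: a single recognised category is unfolded
# into many associated features, and the entropy rises again.
```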
Haken's empirical approach to this is to deploy what he calls a 'synergetic computer' to analyse the inflation/deflation dynamics. Processes of information adaptation are seen as a development of Haken's 'synergetic' theory, which began in the 1970s with seminal work on the self-organising behaviour of photons in lasers. Haken's early physics work provides him with a powerful metaphor of self-organisation which he has explored in many different domains, from biology and chemistry through to urban planning. These different levels of empirical activity present questions about the assumptions made concerning the analogies identified, both within a scientific domain (e.g. physics) and between scientific domains (e.g. from physics to social systems).
Haken appears partly aware of the problem of analogies. In supporting Weaver's assertion that Shannon's information theory had application beyond the 'engineering problem' that Shannon himself saw as fundamental, Haken acknowledges the implicit reflexive semantics that sits behind information theory. However, there appears to be a limit to how far he takes this critical engagement, and this illustrates the importance of a second-order cybernetic discourse which reflexively engages in empirical practice. Synergetics, having grown from physics, was only loosely associated with cybernetics. The work on lasers was powerful in presenting clear evidence for self-organisation in a self-sufficient way that freed it from what some practitioners might have regarded as the reflexive baggage of second-order cybernetics. The appeal of synergetics was one of unabashed objectivism. In applying synergetic principles to other domains, objectivism gives rise to universalism.
With his work on information adaptation, objectivism is challenged; however, universalism remains. At issue is the identification of analogy from one level to another. These are analogies of the observing system. In the work of Leydesdorff and Ulanowicz, analogies and differences of observed phenomena are made explicit in order for the information theoretical calculations to be made. It is assumed that the observing system will also have analogies and differences. The precise nature of the analogies studied becomes a key point of critique of the empirical process, and this then supports engagement with second-order cybernetic theory concerning the relationship between the observer’s analogies and differences, and those of events. In Haken’s synergetics, the analogies are first identified in the behaviour of photons. Beyond this empirical identification, the metaphor of ‘synergy’ is applied to other systems, with information theory becoming the unifying tool which makes calculations in each different domain appear common.
Empiricism and Second-Order Cybernetics
The problem of coordinating a coherent discourse within second-order cybernetics can be addressed through empirical engagement and critical reflection. Each of the three empirical approaches discussed deploys information theoretic concepts to characterise inductive-adaptive processes between observer and observed. Our argument in this paper has been that, in doing so, each method provides a platform for structuring critical argument within second-order cybernetics. Each method produces data from the measurement of relations between phenomena. Hypotheses generate approaches to measurement – what is measured, how it is measured – and new questions emerge from the results which then stimulate discourse. Principal amongst these questions are: What are the analogies at each level for distinguishing and comparing events? What are the analogies in the development and adaptation of the structure of the perceiving system? The necessity for empirical practice in conjunction with these questions rests upon the pathologies of the different orientations of second-order cybernetics towards objectivism, universalism and foundationalism. The identification of analogies, and confusion between analogies, lies at the heart of the difficulties of coordinating coherent debate. The empirical application of information theory necessitates the specific identification of analogies between events. Critical appreciation of information theoretic results generates questions and possibilities concerning the analogies of the perceiving subject, and generates new possibilities in the development of information theoretic techniques. Empirical results coordinate the process by explicitly identifying the reflexive and empirical constraints within which analogies are identified. The approach supports what Krippendorff sees as Ashby's empirical practice in using cybernetic theory to reflexively generate possibilities and then to discover which of them may be found in nature.
The observer-orientation of second-order cybernetics, and particularly its stance against objectivism and universalism, are reflexive operations which guard against the fetishisation of results. If the purpose of empiricism is seen to be the production of results – the uncovering of fundamental mechanisms in the generation of meaning, or mechanisms of perception, or laws of ecology – then empiricism becomes objectivist. We have argued that since Hume did not believe in a natural necessity of causal laws, the empiricism he supported concerned the coordination of discourse amongst scientists. This, we argue, is a proper foundation for second-order cybernetic inquiry which is historically and philosophically grounded: results coordinate scientific discourse by reflexively identifying the constraints within which assumptions about analogies and differences are made. That empiricism is necessary within second-order cybernetics follows from the assumptions about analogies – both of events and of the observer – which remain uninspected and confused in second-order cybernetic theory. The fact that there are varieties of second-order cybernetics, and that these varieties conflict in their epistemological stances, necessitates an empirical engagement.
Across our survey, the different techniques are related. In Ulanowicz's statistical ecology, the focus is on the like-relations between biological components. In Leydesdorff's work, the central theme is the dynamics of like-relations between discourses, and how these dynamics may cohere with second-order cybernetic speculations about reflexivity and communication. Similarly, Haken focuses on information dynamics in processes of perception and meaning, studying the structural properties of emergent results (for example, the structure of cities). In each case, results are produced which generate new questions and hypotheses which feed back into the discourse.
Ulanowicz's and Leydesdorff's work may be seen as mutually complementary. Both have focused on ways of measuring constraint: Leydesdorff has focused on Shannon redundancies, whilst Ulanowicz has suggested new techniques for identifying what he calls 'apophatic information'. More significant is the difference between their domains: since all scientific work produces and participates in discourse, Leydesdorff's analogies of words and discourse dynamics are relevant to all other forms of scientific empiricism. At the same time, the fact that alternative statistical approaches like Ulanowicz's apophatic information are generated in other empirical domains presents new options for the empirical analysis of discourse, alongside other forms of discourse analysis. Haken's synergetics and his concept of "information adaptation" also present new techniques, and his situating of semantics within Shannonian information appears consistent with ideas within Luhmann's social system theory. However, Haken's approach also illustrates the difference between an observer-oriented perspective and a perspective which theorises observation and meaning from a universalist stance. Haken's approach uses analogy between mechanisms at different levels: from the self-organising behaviour of photons to social behaviour in cities. Whilst he does not address the question, this is empirical work which invites deeper questioning about the analogies made between different domains of investigation as much as it invites critique of analogies within each level. The analogies between different levels of phenomena are analogies within the structuring of the observer: as with von Glasersfeld's example of the butter-knife as a screwdriver, the use of synergetics to explain cities as well as photons is an identification of analogies within the observing system as it adapts the same tool to different phenomena. How could this be empirically explored?
This issue highlights the importance of clarity in empirical practice, and shows how critique within second-order cybernetics introduces distinctions which can then be used as reflexive tools in considering empirical practices. There is more to the distinctions between varieties of second-order cybernetics than the tension between the universalism of General Systems Theory and the pragmatic empiricism of cybernetics. To argue that the conflation of the two has resulted in a lack of critical engagement in the identification of analogies is to open grounds for critique on the basis of varieties of objectivism, varieties of universalism and varieties of foundationalism. More importantly, the empirical examples considered here expose the assumptions made about analogies between events and analogies in the observer, which can be the focus for doing what Hume always believed was the principal purpose of experiment: to help coordinate discourse.