Thursday, 8 October 2015

What Second-Order Cybernetics stands against

Second-order cybernetics is a broad church, and there is significant internal tension as a result. Ostensibly defined as the "cybernetics of observing systems", it admits a variety of interpretations of what that might mean. For example, Niklas Luhmann and Humberto Maturana are both second-order cyberneticians, and yet each has criticised the other for an inconsistent application of second-order cybernetic principles. This isn't helped by the fact that each wishes to define those principles himself. Luhmann borrowed Maturana's theory of autopoiesis as a way of developing sociological theory (particularly developing Parsons's social systems theory), with its entailed view that communication systems are 'autopoietic' (i.e. organisationally-closed, structurally-determined systems which regenerate their own components). This appears to impute some kind of reality to the communication system which subsumes psychological, perceptual and agential issues. Luhmann famously declared he was not interested in people, but in the dynamics of communication systems. Imputing the existence of a communication system beyond the biological boundary of the organism is the opposite of Maturana's thinking when he conceived of autopoietic theory. Maturana argues:
"a cognitive system is a system whose organisation defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain"
There is no information, only the self-organisation of the organism: the cognitive system organises itself within a domain of interactions. Luhmann's redescription of sociology in terms of autopoiesis has been taken by Maturana and his followers as something of a betrayal and distortion. And yet it has been the most influential social-cybernetic theory, attracting the attention of Habermas (who disagrees with Luhmann, but clearly takes him seriously) and many others for whom systems thinking would otherwise have been sidelined. Few cybernetic thinkers (apart possibly from Bateson) can claim such extensive influence.

In unpicking the distinctions between different positions regarding second-order cybernetics, two approaches might be used. On the one hand, it is possible, following Harré's example in his "Varieties of Relativism", to identify the differences between positions with regard to what they oppose. Alternatively, it is possible to determine the differences in what the positions support. Here I want to deal with the former.

Harré identifies three major themes which intellectual positions concerning relativism stand against:

  1. Objectivism: the belief that there are objects and concepts in the world independent of any individual observer;
  2. Universalism: that there are beliefs which hold good in all contexts for all people;
  3. Foundationalism: the belief that there are fundamental principles from which all other things can be constructed.
There are differences between versions of second-order cybernetics with regard to these categories. Objections to objectivism would appear to be the clearest issue: as the cybernetics of observing systems, second-order cybernetics clearly opposes the assumption of a mind-independent reality. However, on examining different theoretical stances, there are discernible and differentiated traces of objectivism in each variety. For example, Maturana's philosophy derived from biological evidence, and a common criticism therefore cites an implicit objectivism in its biological foundation. Luhmann, by contrast, escapes this charge.

With regard to universalism, there is an implicit view within second-order cybernetics which allies itself to philosophical scepticism: that there is no 'natural necessity', no naturally-occurring regularity in nature (von Glasersfeld calls this a 'pious fiction'). However, second-order cybernetics does appear to uphold the law-like nature of its own principles, arguing for these as a foundation for the construction of everything else. At the heart of this issue is the nature of the causation inherent in universal laws. Second-order cybernetics holds that, rather than universal causal laws being in operation, self-organising systems operate with degrees of freedom within constraints. In taking this position, however, different varieties of second-order cybernetics differ in their understanding of what those constraints might be, and of how the system might organise itself with regard to them. Maturana's constraints are biological; Luhmann's are discursive.

With regard to foundationalism, all varieties of second-order cybernetics appear to wish to maintain their own principles as foundational. Whatever constraints bear upon the self-organisation of a system in its environment, there is little consideration of the constraints that bear upon the second-order cybernetician who concocts the idea of systems self-organising within constraints. Perhaps closest to a post-foundational position is von Glasersfeld, who argued for his 'radical constructivism' as sitting on the fence between an external reality and a human construction. He emphasises the in-betweenness of the intellectual position, albeit with a somewhat strident certainty that all is construction. Although Luhmann's social systems seem foundational, in his adoption of Parsons's idea of the 'double contingency' of communication, the intersubjective flux of being which this presents is closely related to sociomaterial, post-foundational ideas about entanglements between subjectivity and objectivity.

Sunday, 4 October 2015

Keynes and Hume on Probability - what would they make of Big Data?

Hume dedicates some attention to the problem of probability in his theory of scientific knowledge. One of the most penetrating commentaries on Hume's approach and its relation to his contemporaries was produced by John Maynard Keynes in his "Treatise on Probability" of 1921. Keynes's analysis is not often mentioned today, when probability plays an increasing role in underpinning the statistical approaches of big data and information theory. Keynes himself only had to worry about statistical inference in economic and social theory: what would he have said about Shannon's information theory?

Ernst Ulrich von Weizsäcker argues that Shannon's H measure conflates two concepts inherent in meaningful information, 'novelty' and 'confirmation' (a point also taken up in a paper by Robert Ulanowicz). This conflation between novelty and confirmation is picked up on by Keynes:
“Uninstructed commonsense seems to be specially unreliable in dealing with what are termed 'remarkable occurrences'. Unless a ‘remarkable occurrence’ is simply one which produces on us a particular psychological effect, that of surprise, we can only define it as an event which before its occurrence is very improbable on the available evidence. But it will often occur—whenever, in fact, our data leave open the possibility of a large number of alternatives and show no preference for any of them—that every possibility is exceedingly improbable à priori. It follows, therefore, that what actually occurs does not derive any peculiar significance merely from the fact of its being ‘remarkable’ in the above sense.”
Keynes builds on Hume's thinking about causes, which emphasises the role of confirmation in causal reasoning:
"All kinds of reasoning from causes or effects are founded on two particulars, viz. the constant conjunction of any two objects in all past experience, and the resemblance of a present object to any of them. Without some degree of resemblance, as well as union, ’tis impossible there can be any reasoning"
"When we are accustomed to see two impressions conjoined together, the appearance or idea of the one immediately carries us to the idea of the other.... Thus all probable reasoning is nothing but a species of sensation. ’Tis not solely in poetry and music, we must follow our taste and sentiment, but likewise in philosophy. When I am convinced of any principle, ’tis only an idea, which strikes more strongly upon me. When I give the preference to one set of arguments above another, I do nothing but decide from my feeling concerning the superiority of their influence.”
Unless scientists can produce event regularities, there is no ground for reasoning about causes. However, if all regularities simply confirmed each other, there would be nothing that each repetition of the confirmation would add. The basis of reasoning is repetition which produces some difference, as Keynes notes:
"The object of increasing the number of instances arises out of the fact that we are nearly always aware of some difference between the instances, and that even where the known difference is insignificant we may suspect, especially when our knowledge of the instances is very incomplete, that there may be more. Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy. For this reason, and for this reason only, new instances are valuable. "
Keynes's starting point is Hume's thinking about the expectation of the taste of eggs. Here again, Hume indicates the need for balance between novelty and confirmation:
"Nothing so like as eggs; yet no one, on account of this apparent similarity, expects the same taste and relish in all of them. ’Tis only after a long course of uniform experiments in any kind, that we attain a firm reliance and security with regard to a particular event. Now where is that process of reasoning, which from one instance draws a conclusion, so different from that which it infers from a hundred instances, that are no way different from that single instance? This question I propose as much for the sake of information, as with any intention of raising difficulties. I cannot find, I cannot imagine any such reasoning. But I keep my mind still open to instruction, if any one will vouchsafe to bestow it on me."
Keynes argues that Hume's argument combines analogy with induction. There is analogy in the identification of the likeness of phenomena (eggs being alike), and there is induction in that, having experienced so many eggs, a supposition about their taste arises: "We argue from Analogy in so far as we depend upon the likeness of the eggs, and from Pure Induction when we trust the number of the experiments." Keynes also finds echoes of Hume's distinctions in Cournot's theory of probability:
“Cournot, [...] distinguishes between ‘subjective probability’ based on ignorance and ‘objective probability’ based on the calculation of ‘objective possibilities,’ an ‘objective possibility’ being a chance event brought about by the combination or convergence of phenomena belonging to independent series.”
Keynes points out that the balance between analogy and induction is incomplete in Hume's thinking, and that Hume's doubt as to the contribution of many identical experiments to induction loses sight of the fact that some variation in experiments is a necessary condition for the construction of knowledge:
"His argument could have been improved. His experiments should not have been too uniform, and ought to have differed from one another as much as possible in all respects save that of the likeness of the eggs. He should have tried eggs in the town and in the country, in January and in June. He might then have discovered that eggs could be good or bad, however like they looked. This principle of varying those of the characteristics of the instances, which we regard in the conditions of our generalisation as non-essential, may be termed Negative Analogy. It will be argued later on that an increase in the number of experiments is only valuable in so far as, by increasing, or possibly increasing, the variety found amongst the non-essential characteristics of the instances, it strengthens the Negative Analogy.
If Hume’s experiments had been absolutely uniform, he would have been right to raise doubts about the conclusion. There is no process of reasoning, which from one instance draws a conclusion different from that which it infers from a hundred instances, if the latter are known to be in no way different from the former."
It seems to me that Keynes's 'negative analogy' is a deliberate probing for the constraints of a general principle. The implication is that Hume's regularity theory does not really depend on strict regularities; it requires a certain degree of difference.
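
As a toy illustration of Keynes's point (the simulation is entirely my own construction, not Keynes's), imagine a world in which an egg's quality secretly depends on a characteristic we regard as non-essential. A hundred uniform instances merely confirm one another; varying the non-essential characteristic is what exposes the hidden constraint:

    import random

    random.seed(1)

    # Hypothetical world: quality secretly depends on season, not on looks.
    def taste(egg):
        return "bad" if egg["season"] == "June" and random.random() < 0.5 else "good"

    uniform = [{"season": "January"} for _ in range(100)]
    varied = [{"season": random.choice(["January", "June"])} for _ in range(100)]

    # A hundred identical instances add nothing beyond the first:
    print({taste(e) for e in uniform})  # only 'good' appears
    # Varying the non-essential characteristic reveals the difference:
    print({taste(e) for e in varied})   # both 'good' and 'bad' appear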

So what about probability and information? The striking thing about both Keynes and Hume is that the human psychological aspect of probability is clearly on display: this is not a mathematical abstraction; probability cannot escape the human realm of expectation. Shannon's 'engineering problem' of information based on probability loses sight of this - his 'novelty' and 'confirmation' appear as a single number indicating the degree of 'uncertainty' of a symbol's value. Behind it, however, lies the analogical and inductive reasoning which is deeply human.
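
To see how Shannon's measure collapses this into a single quantity, here is a minimal sketch of the standard formula (my own illustration, not von Weizsäcker's analysis): H reports only the average 'uncertainty' of a symbol source, whatever balance of surprise and habit lies behind it.

    import math

    def shannon_H(probs):
        """Shannon's H: the average uncertainty (in bits) of a symbol source."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A mostly 'confirming' source and a maximally 'novel' source are
    # distinguished only by the size of a single number.
    print(shannon_H([0.97, 0.01, 0.01, 0.01]))  # low H: mostly confirmation
    print(shannon_H([0.25, 0.25, 0.25, 0.25]))  # maximum H: pure novelty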

Information, however, creates its own reality. It can create its own realm of novelty and confirmation, to the point where what is confirmed to us is an artificial representation of some other reality, whose actual nature would not produce the same confirmation. Keynes's point about negative analogy would provide a corrective to this: we should explore our expectations against "a variety of non-essential characteristics" of instances.

Instead, the designers of big data algorithms want to show that they "work". They can exploit the creation of 'false confirmation' and argue their case. And yet regularities of any sort are hard to identify, let alone the varying of non-essential characteristics. How is this scientific? The human expectation on viewing the results of big data analysis is already framed by technologies which are underpinned by the same formulae that produce the analyses. Part of the problem lies in the subsumption of phenomena within Shannon's formulae, which on the one hand are blind to their human hinterland of "species of sensation", whilst on the other create equivalences among phenomena which in reality are not equivalent. Unlike things become alike; everything becomes eggs!

And yet there is something important in Shannon's work - but it lies not in blind application of the equations. Instead it lies in the negative analogies produced and the novelty and confirmation that arise between Shannon's powerful generative ideas and the encounter with the real world. It is in discovering the contours of fit between Shannon's abstractions and the human intersubjective world of expectations and surprises. And this may fit with Hume's own thinking about probabilities. 

Saturday, 26 September 2015

Explaining Explaining... and Knowledge: Reflections on Chris Smith's Realist Personalism

Between realists and constructivists there are competing explanations of the world. The nature of explanation itself remains, however, unexplored. In his book "What is a Person?", Chris Smith targets the kind of explanation which appears in what he calls 'variables social science'. He says:
"variables social science typically breaks down the complex reality of human social life into 'independent' and 'dependent variables,' whose answer categories are assigned numeric values representing some apparent variation in the world. Dependent and independent variables are then mathematically correlated, usually "net of" the possible effects of other variables, in order to establish independent statistical associations between them. Important independent variables are identified as related to the dependent variable through calculations of statistical significance, and statistical models are produced purporting to represent how certain social processes operate and produce the observed human social world"
Smith's fundamental objection might be succinctly summarised as the problem of the "mereological fallacy", about which Rom Harré has recently been arguing with Peter Hacker in the journal Philosophy. Mereological fallacies are explanations of whole phenomena expressed in terms of their parts. Smith's concern in his book is the person, and he is right to point out that explanations of the person get lost among the variables. He says:
"Persons hardly seem to exist in variables social science - they are rarely actually studied. What are studied instead are variables - which, when it comes to humans, are usually only single aspects or dimensions of persons or human social arrangements."
But then we come to realist explanations. Is this any better? From his Critical Realist standpoint, Smith argues that realism entails a change in perspective on the variable, not a rejection of it. This means to see causation not as the result of event regularities, but as the "operation of often nonobservable yet real powers and mechanisms that naturally exist at different levels of reality and operate (or not) under certain conditions and in particular combinations to tend to produce characteristic results". Variables are not excluded from the investigation of conditions, but a realist perspective would:
"take seriously the fact that variables are not causal actors. Variables do not make things happen in the social world. Human persons do. Persons are shaped by the enabling and constraining influences of their social structures - even as social structures are always emergent entities produced by human activity. And variables can represent aspects of social structures and relevant features of human actors. But variables do not cause outcomes. Nor do variables lead to increases and decreases in the values of other variables. Real persons acting in particular contexts in the real world do." 
Fine. But we still end up with an explanation, and yet there is no consideration of what an explanation is. Moreover, Smith's realism seems concerned with proscribing certain 'erroneous' ways of thinking among social scientists. Without an understanding of explanation itself, this kind of proscription is a constraint on the imagination, where the warrant for the proscription is itself an explanation (i.e. Bhaskar's ontology). This is ironic, because the book is about the person, and persons and explanations seem to be deeply entwined. Criticising evolutionary explanations of persons, Smith says: "The very nature of human personhood [...] - its reflexivity, self-transcendence, moral commitments, causal agency, responsibility, and freedom - means that persons can live, move and have their being in ways and for reasons not strictly tied to an evolutionist's monotonic explanation of everything." And yet he implies there is an explanation somehow within the reflexivity, self-transcendence and so on. I suspect there are many. It might be a mistake to believe variables to be causal actors, but equally that belief may be energizing and generative of powerful new ideas, initiatives and interventions - many of which we are grateful for. Po-faced critical realism seems to lack a sense of fun and freedom: the erroneous imagination can produce remarkable and good things.

Explanations entail some concept of causation: the difference between realists, constructivists and positivists lies in whether causation is conceived as real, inherent in nature and discoverable, or as a social construction in the light of event regularities. But explanation itself is the elephant in the room in the debate between realists and constructivists (a debate which seems to me to be a bit worn-out now anyway).

An explanation is a kind of social constraint. Father Christmas and gravitation are explanations (or, as Bateson would put it, "explanatory principles"), and each of them serves to coordinate social behaviour. We might think of "gravity" as having depended on event regularities for its establishment, or think of Santa as manifesting through 'normative behaviour' - although in both cases it is more multi-layered: each has aspects of constraint materially, psychologically, socially, biologically, politically and so on. Both Father Christmas and gravity, however, are ideas which generate new practices, innovations, new forms of social organisation and so on - whether they are 'right' or not. Many of the ideas that they generate are infeasible. But by the fact that the ideas are there, we gain knowledge about the world in discovering the difference between what we might imagine and what we can bring into reality. We might imagine a world where everyone believes in Santa; but we know there is an age that our children (and we) reach when we realise it's a kind of game.

The confusion is at the interface between explanation and knowledge. It is the fault of the education system for conflating knowledge with the production of explanations. But knowledge usually emerges at the point where we realise our explanations don't work.

Thursday, 24 September 2015

Engeström's contradictions, Searle's Status Functions and Kauffman's Knots

In an analysis of a children's hospital, Engeström discusses the contradictions between different aspects of the institution:
(from "Activity Theory as a Framework for analyzing and redesigning work", 2000)

The contradictions are shown by the 'lightning arrows' between the different aspects of the activity theory diagram. For example, between objects and instruments (at the top), there is a contradiction in assuming that patients have simple conditions for which a 'critical pathway' of care can be provided. Many patients, however, present multiple conditions: how does the critical pathway cope with this? Similarly, if a patient has multiple problems, and each of those problems is attended to by a different provider, then contradictions arise in the ways that those providers coordinate with each other. Engeström points out that "traditional rules of the hospital organization emphasize that each physician is alone responsible for the care of his or her patients". Similarly, multiple-diagnosis patients cause problems between the different professionals attending to them within the organisation (the division of labour).

I'm interested in the idea of contradiction in activity theory because it paints a picture of complex constraint-relations in organisational culture. Engeström seeks to find ways of working through contradictions in what he calls 'boundary-crossing', 'knotworking' and 'expansive learning'. These are fundamentally communicative engagements which seek to articulate, bring together and transform different perspectives on organisational problems. This works, I suspect, because the sources of the organisational conflict are communicative in the first place. 

In Searle's social ontology, "rules", "divisions of labour", "critical pathways", "hospitals", "doctors" and "patients" all result from what he calls "status functions". From Engeström's perspective that perhaps doesn't add very much to his notion of organisations as activity systems. However, I think Searle's status functions are more usefully examined as "scarcity functions": to declare x as a rule for conduct in the hospital is to say that conduct contrary to x is not legitimate within the hospital; to declare a person to be a doctor means that nurses cannot do what the doctor does. "Doctoring" and "acting legitimately" become scarce through the declarations of those with deontic power within the hospital (managers). Engeström appears to want to change the status functions (or scarcity functions) which are declared in the hospital so that the contradictions are addressed. However, for people positioned at different points in the activity theory diagram, there will always be issues of scarcity, role and position, rights, obligations, duties and responsibilities. In all organisations, behaviour occurs within constraints which are declared through status (or scarcity) functions. The danger is blindly to reconfigure the constraints of behaviour without fully understanding the deeper dynamics of constraint. The problem is that constraints go far beyond contradictions between divisions of labour and rules of engagement (say): they operate historically, organisationally, socially, personally, materially, ontogenetically and intersubjectively. Moreover, social behaviour frequently serves to uncover constraints. Engeström's focus on overcoming contradiction may be misplaced: it may be more important to act methodically so as to identify, more comprehensively, the constraints within which one works. Effective communication may depend more on the mutual understanding of constraints between practitioners than it does on the codification of new practices within the organisation.

More recently I've been reading a brilliant paper by the cybernetician Lou Kauffman on category theory and knots. Kauffman has a simple idea. Category theory is all about 'mappings' from x to y (the posh word is a 'morphism'). Status functions are also a kind of mapping. So to say "this paper counts as money in social context c" is a mapping from the paper to money within a category (context). Kauffman argues that there can be a meta-mapping z which maps onto the mapping from x to y: in other words, z -> (x -> y).
It seems to me that we could say that the mapping from x->y constrains z. In less formal terms, talk of markets is a mapping onto the mapping about money; or, the talk of markets is constrained by the concept of money. It becomes more obvious in the further diagrams Kauffman draws, in which mappings map onto each other. These, Kauffman explains, are reflexive examples, because each mapping constrains the other: "every morphism in this category is a morphism of morphisms" (Kauffman, L., "Categorical Pairs and the Indicative Shift", Applied Mathematics and Computation, 2012, vol. 218).

My question, on reflecting on all this, is whether Engeström's communicative methods can be enhanced by a deeper characterisation of constraints represented as the 'mappings' of 'status functions'. Indeed, the relationship between the status function and the scarcity function is the difference between the mapping x->y (the status function) and the mapping of z onto x->y (constraint or scarcity). This gives dimension to the inter-relations between the different aspects of the activity theory contradictions at different levels of recursion (or meta-levels). The recursive layering adds depth to the rather monovalent contradictions described by Engeström.
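
As a minimal sketch of this mapping-of-mappings idea (the names and the Python rendering are my own illustration, not Kauffman's or Searle's formalism), a status function can be treated as a function, and the meta-level 'market talk' as a higher-order function which takes that mapping as its argument:

    from typing import Callable

    # A status function as a morphism: paper -> money, within a context c.
    def counts_as_money(paper: str) -> str:
        return f"money({paper})"

    # A meta-mapping z onto the mapping x->y: talk of 'markets' takes the
    # money mapping itself as its argument, and so is constrained by it.
    def market_talk(status_fn: Callable[[str], str]) -> str:
        return f"a market trading in {status_fn('this paper')}"

    print(counts_as_money("this paper"))  # money(this paper)
    print(market_talk(counts_as_money))   # a market trading in money(this paper)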

Sunday, 20 September 2015

E-learning Failure? The OECD report into Technology in Schools and a Scientific Problem

A recent OECD report, “Students, Computers and Learning”, into the impact of technology in schools argues that computers have had little effect on children’s learning. In comparing results across Europe, there appears to be little noticeable educational advantage despite the enormous sums of money that have been spent on educational technology. There is a more general air of disappointment in educational technology circles these days, much in contrast to the optimism of 10 or 15 years ago. It perhaps wasn’t a coincidence that “Edtech” was hit by the double-whammy of the financial crisis together with a sense of having reached a cul-de-sac; the financial crisis is itself a labyrinthine cul-de-sac, and we have yet to find our way out. In Edtech, we have seen the withdrawal of funding from educational technology, with government agencies shut down (like BECTA) or turned into charities dependent on institutional sponsorship (like JISC). These agencies have been accused of lacking impact, and of allowing too much funding to support academics and others experimenting with technologies which weren’t sustainable. At the same time, the technological maturity of social software, MOOCs and mobile learning appears to have settled the educational questions, thus requiring no further research and development, despite the manifest shortcomings of those technologies. Diana Laurillard’s comment that the “reality falls far short of the promise” in educational technology, and that “empirical work on what is actually happening” does not reflect theoretical predictions, reflects not only the mood but a scientific problem in the way educational technology has proceeded. Laurillard doesn’t address the scientific problem but, like many others in educational technology, perpetuates it. However, education is in the state it is because of a scientific and methodological incoherence which has its roots in history long before the recent impact of the web.
The early 2000s were characterised by an explosion of ideas about education stimulated by web technologies. Underpinning both the technologies and the ways in which those technologies were envisaged to transform society was a range of ideas drawn from theories about learning, technology and organisation which had a common foundation in systems theory and cybernetics. In order to understand what has happened recently, it is important to understand the deeper historical context within which these ideas were established. Over the course of this development, and particularly in its early phases, there were many more articulations of what Ron Barnett calls ‘feasible utopias’ than could be explored in practice. Many of these ideas sprang from the 1950s, long before the web, in an age of pedagogical experimentation as governments grappled with the challenge of mass secondary schooling. By the time the web appeared, the organisational power of the technology was already beginning to obstruct some paths of development in favour of others: the choice of feasible utopias became restricted. The OECD’s judgement about computers and learning is a judgement about a particular subset of ideas and activities which attracted funding, some of which delivered their promised impact, and others of which didn’t. Political forces drove the constraining of the research effort, and it is now political forces that determine that the phase of experimentation and research in education and technology should end, in the light of ‘evidence’ produced through blinkers.
To be disappointed by failure requires an expectation about the validity of the scientific methodology being pursued. Ministers and funders hope for the discovery of certain key ‘independent’ variables which can be manipulated to directly improve things, or implemented in policy. Theories about learning generate new ideas about education and new possibilities for intervention. Theories are the generators of promises: they make predictions as to what might happen, or what innovators might wish to happen. When predictions aren’t met or promised results do not materialise, disappointment results. The scientific problem lies in mistaken thinking about the causal connection between theory and practice in education. But what causes "successful learning"? Ministerial hopes for independent variables are misplaced, and whilst evidence will be sought to defend one position or another, in the real world declarations of the causal power (or lack of it) of intervention x or y are nothing short of political manipulation. Such declarations blind themselves to contingencies in the name of pragmatism or expedience. More significantly, the pursuit of the independent variables of educational technology is blind to the constraints that bear upon education, theories of education, the methodologies of the research, and the personal motivations of researchers, teachers, managers and politicians. The causal obsession loses sight of the constraints that frame it.
Whilst the OECD’s declaration of failure of Educational Technology betrays a simplistic causal logic, it isn’t really their fault. The distinction between causes and constraints goes back to the relationship between the two disciplines which underpin so much of the theoretical background of education: General Systems Theory and Cybernetics. Confusion between these apparently closely-related theoretical enterprises has, I argue, been partly responsible for the current malaise in education and technology. Whilst both traditions of “systems thinking” sit uneasily with the pursuit of independent variables, quasi experiments, or evidence-based policy, their underpinning empirical approaches are fundamentally distinct. The internal differences between General Systems Theory and Cybernetics have produced a central incoherence in thinking about education and educational research where Laurillard’s “disappointment” and the OECD’s rejection are both symptoms - and indicators of a possible way forwards.
Systems theory and Cybernetics
It is hard to find a book that champions educational technology which does not hang its pitch on some variety of the concept of ‘system’. From Seymour Papert (who worked with Piaget) and Diana Laurillard (whose work drew heavily on the cybernetic conversation theory of Gordon Pask) to Sugata Mitra (whose background in physics made him familiar with theories of self-organisation and cellular automata), each has defended their ideas by redescribing the complexities of education in “system” terms. In each case, the theories that educational technologists attached themselves to were universalising and transdisciplinary, seeking to account for the richness of learning and educational organisation. Connections were made between a wide variety of educational perspectives, most notably with critical approaches to pedagogy, seizing on the educational critiques of Illich, Freire and others in arguing that new means of communication would deliver new ways of organising people. Systems theories were the generators of an educational optimism buoyed by profound technological changes which promised to transform the learning situation in schools and in the home.
The tendency of most theories of learning, and particularly “systems” theories, is to be grandiose. New concepts are used to redescribe and critique existing practices. The manifest problems inherent in received knowledge and practice become clarified within the new framework, conditional upon the implementation of new practices, new ways of organising education and the implementation of technologies. The transdisciplinary, all-embracing nature of the systems descriptions presents what Gordon Pask called ‘defensible metaphors’ (Pask’s definition of cybernetics was “the art and science of manipulating defensible metaphors”) of transformed scenarios. In practice, when real teachers and learners and the real, messy situations of education were put into the equation, things didn’t look so simple: redescription produced absences; certain things were overlooked; the contingencies always outweighed the planned futures. Generally there was a failure to account for concrete persons, real politics, or ethics: modelled social emancipation, however well-meaning, rarely manifests in reality. The history of cybernetics presents plenty of examples of this kind of failure, not just in education. The response to these kinds of failures takes a variety of forms. On the one hand, there is a response that criticises the intervention situation for not sufficiently supporting the initiative: with “management support” or without “management interference”, with better training, more time and so on, things would have worked - it is frequently asserted. Then there is a response which looks at deficiencies of the intervention in its material constitution: the interface could have been easier, the tools more accessible, and so on. There might also be a reflection on the possibility that the underpinning theory upon which interventions were based was deficient in some way. This latter approach can lead to the rejection of one theory and the adoption of a new one.
If there is a common thread between these different ways of responding, it is that they each focus on causes: the lack of management support caused the intervention to fail; the interface meant the tools were unusable; the deficiencies of theory caused inappropriate interventions to be made. However, an approach based on constraint rather than causality changes the emphasis in such assessments. Constraints are discovered when theorised possibilities do not manifest themselves in nature. The constraint perspective is more consistent with Cybernetics than it is with a Systems perspective. In order to understand the constraint perspective, we have to reinterpret the diagnosis of failure. If computers fail to help children learn in the way that the marketeers argue they might, it is because there is a mismatch between the constraints that bear upon those generating ideas about what might be possible in educational reality (theorists, technologists, and so on) and the actual constraints that are revealed by the world when new interventions are attempted. We might only consider it failure if our purpose was to determine causal patterns between interventions and results: the identification of ‘independent variables’ in the implementation of technology. But what if our scientific approach were geared towards identifying constraints? Then we would have learnt a lot through our intervention: most particularly, that a set of ideas and designs which in an imagined world lead to beneficial outcomes, in reality do not. What might that tell us about the constraints on thought and the generation of new ideas? How might thinking change in the light of the new constraints we have discovered? By this approach, knowledge emerges in a reflexive process of theoretical exploration, and of discovering which theoretically-generated possibilities can and cannot be found in reality.
There are significant constraints that bear upon intellectual engagement with educational technology. The identification of constraints in reality (where things don’t work as we intended) did not send theorists back to think about why they thought it might work in the first place. Many of these constraints are political: “we needed the project funding to keep our jobs, and this was what the funders were asking for…”, or “this complies with current government or EU policy”. Other constraints on thought emerged from the contrast between causal thinking and constraint-thinking. To put it more bluntly, they stem from a confusion between constraint-oriented Cybernetics and cause-oriented General Systems Theory, to the point where the justification of interventions, or the sales pitch for pieces of software, produced explanations which attenuated the complexities of reality in order to attract research funding, rather than possibilities to be empirically explored.

The OECD's judgement could be an interesting step along the way; instead, there is a risk it will be seen to slam the door shut.

Saturday, 19 September 2015

Gregory Bateson on Educational Management and Knowledge

At the end of Gregory Bateson's book, Mind and Nature, there is an essay called "Time is Out of Joint", in which he talks about the relationship between knowledge, science and educational management. It was written as a memorandum to the Regents of the University of California in 1978. He says:
"While much that universities teach today is new and up-to-date, the presupposition or premises of thought upon which all our teaching is based are ancient and, I assert, obsolete. I refer to such notions as:
a. The Cartesian dualism separating "mind" and "matter"
b. The strange physicalism of the metaphors which we use to describe and explain mental phenomena - "power", "tension", "energy", "social forces", etc
c. Our anti-aesthetic assumption, borrowed from the emphasis which Bacon, Locke and Newton long ago gave to the physical sciences, viz that all phenomena (including the mental) can and shall be studied and evaluated in quantitative terms. 
The view of the world - the latent and partly unconscious epistemology - which such ideas together generate is out of date in three different ways:
a. pragmatically, it is clear that these premises and their corollaries lead to greed, monstrous over-growth, war, tyranny, and pollution. In this sense, our premises are daily demonstrated false, and the students are half aware of this.
b. Intellectually, the premises are obsolete in that systems theory, cybernetics, holistic medicine, and gestalt psychology offer demonstrably better ways of understanding the world of biology and behaviour.
c. As a base for religion, such premises as I have mentioned became clearly intolerable and therefore obsolete about 100 years ago. In the aftermath of Darwinian evolution, this was stated rather clearly by such thinkers as Samuel Butler and Prince Kropotkin. But already in the eighteenth century, William Blake saw that the philosophy of Locke and Newton could only generate "dark Satanic mills"
Bateson's work had fundamentally been about justifying these claims in an ecological science which rested on cybernetics (it's interesting that he doesn't make the distinction with systems theory). I'd always felt that Bateson appeared to lack a political edge in his writing, preferring to argue the case for his "explanatory principles" rather than fighting to get things to change. However, the following passage contains some powerful political rhetoric, although unfortunately his prediction that the facts of deep obsolescence will, in the end, "compel attention" has not yet come to pass:
"So, in this world of 1978, we try to run a university and to maintain standards of "excellence" in the face of growing distrust, vulgarity, insanity, exploitation of resources, victimization of persons, and quick commercialism. The screaming voices of greed, frustration, fear and hate.
It is understandable that the Board of Regents concentrates attention upon matters which can be handled at a superficial level, avoiding the swamps of all sorts of extremism. But I still think that the facts of deep obsolescence will, in the end, compel attention."
What would he say about our universities today, where the screaming voices of greed, frustration, fear and hate have taken over the halls of learning themselves, not just the world outside the ivory tower? What would he make of the new managerial class of administrator today, particularly after having had so many difficulties himself in 'fitting in' to the academic establishment? My guess is he would find it far worse - although perhaps he would not be surprised. He sarcastically remarks that this is "only 1978" and that by 1979,
"we shall know a little more by dint of rigour and imagination, the two great contraries of mental process, either of which by itself is lethal. Rigour alone is paralytic death, but imagination alone is insanity."
Perhaps there's a side-swipe here at the 'hippy' community who embraced Bateson in his last years, when the scientific establishment of which he was clearly part had shunned him. The hippies were lovely people, only too happy to talk about ecology and consciousness, but they were all imagination with no rigour. A similar criticism might be made of today's radicals: it is not enough for Occupy, the Greens or even Jeremy Corbyn's Labour to have bold dreams; there needs to be hard analysis too - more rigorous than anything attempted by those they oppose.

Which leads on to his comment about the student uprising in 1968:
"I believe that the students were right in the sixties: There was something very wrong in their education and indeed in almost the whole culture. But I believe that they were wrong in their diagnosis of where the trouble lay. They fought for "representation" and "power". On the whole, they won their battles and now we have student representation on the Board of the Regents and elsewhere. But it becomes increasingly clear that the winning of these battles for "power" has made no difference in the educational process. The obsolescence to which I referred is unchanged and, no doubt, in a few years we shall see the same battles, fought over the same phony issues, all over again."
Well, in reality it took over 40 years, but I suggest Bateson has been proved right! What he then articulates is a theory about education's relation to science and to politics. He introduces this theory by saying: "I must now ask you to do some thinking more technical and more theoretical than is usually demanded of general boards in their perception of their own place in history. I see no reason why the regents of a great university should share in the anti-intellectual preferences of the press of media." Anti-intellectual? University management?! Heaven forbid!!!

Fundamentally, the theory reflects on what he sees as "two components in evolutionary process", and on how "mental process similarly has a double structure". He basically argues for a conservative inner logic that demands compatibility and conformance; at the same time there is an imaginative, adaptive response by nurture in order to survive in a changing world. His point about time being "out of joint" is that the conservative and imaginative forces are mutually out of step: "Imagination has gone too far ahead of rigour and the result looks to conservative elderly persons like me, remarkably like insanity or perhaps like nightmare, the sister of insanity." He points out that this process is common in many fields: the law lags behind technology, for example. However, he argues for a dialectical necessity relating "conservatives" to "liberals", "radicals" and so on:
"behind these epithets lies epistemological truth which will insist that the poles of contrast dividing the persons are indeed dialectical necessities of the living world."
He argues that the purpose of university management is to maintain the balance between conservative and imaginative forces:
"if the Board of Regents has any nontrivial duty it is that of statemanship in precisely this sense - the duty of rising above partisanship with any component or particular fad in university politics."
Perfect! But Bateson sees the dangers too. He argues that there is a reason why "acquired characteristics" cannot be inherited in biology: to protect the gene system from too rapid change. In universities, there is no such barrier:
"Innovations become irreversibly adopted into the on-going system without being tested for long-time viability; and necessary changes are resisted by the core of conservative individuals without any assurance that these particular changes are the ones to resist."
The problem is that universities can become corrupt, where the forces of conservatism take over:
"It is not so much "power" that corrupts as the myth of "power" It was noted above that "power", like "energy", "tension", and the rest of the quasi-physical metaphors are to be distrusted and, among them, "power" is one of the most dangerous. He who covets a mythical abstraction must always be insatiable!"
I don't think Bateson would be encouraged by what he would see today. We have gone nowhere, and the world of universities, just as the world outside, is in dire straits. If Bateson's analysis is right, it is because of the "out-of-jointedness" of the two forces. In my own experience, I believe the problem lies with conservatism allying itself to mystifying allusions to 'learning' which are not rigorously inspected. His question to the Board of Regents is simple:
"Do we, as a Board, foster whatever will promote in students, in faculty, and around the boardroom table those wider perspectives which will bring our system back into an appropriate synchrony or harmony between rigour and imagination?"
We have no information as to how the Californian Regents responded. Our problem today is that there are too many (overpaid) Vice-Chancellors who wouldn't even understand the question.

Tuesday, 15 September 2015

Spam and Schutz and Meaningless Snapchats

There's a brilliant exhibition at Manchester's @HOME_mcr at the moment. "I must first apologise..." by Joana Hadjithomas and Khalil Joreige. Having collected spam emails for a number of years, Hadjithomas and Joreige put the words of those emails into the mouths of actors who were filmed reading pleas for money to be sent in various desperate circumstances. In the exhibition you first walk into a darkened room full of TV screens, each one showing a talking head. It's a cacophony of voices, and you have to get close to hear what each individual is saying. You look into their eyes, you hear words which we all have read in our inboxes every day, and somehow it all seems very different.

The closeness of the piece is what interests me, or rather the difference between reading text in an email, and staring into someone's eyes reading that text (albeit on a computer screen). They also try very innovative forms of projection where characters are projected onto a gauze-like translucent material which makes them appear to 'stand out'. It's impressive.

Since I've been thinking about intersubjectivity so much recently, the difference between text exchanges on a screen, time-based voice and video experiences, and real face-to-face contact resonates with deeper changes in the ways we interact. The online revolution has meant that our intimate face-to-face contact has decreased (it's time-consuming, time-dependent and inefficient!), and remote text exchange has increased. But beyond recent calls to limit the use of email in work, we haven't really been able to articulate how these forms of communication are different.

Alfred Schutz made a distinction between the 'face-to-face' situation as what he called the "pure we-relation" and the more remote "world of contemporaries". With regard to the former, he says:
"I experience a fellow-man directly if and when he shares with me a common sector of time and space. The sharing of a common sector of time implies a genuine simultaneity of our two streams of consciousness: my fellow-man and I grow older together. The sharing of a common sector of space implies that my fellow-man appears to me in person as he himself and none other. His body appears to me as a unified field of expressions, that is, of concrete symptoms through which his conscious life manifests itself to me vividly. This temporal and spatial immediacy are essential characteristics of the face-to-face situation." 
With regard to the latter he comments that:
"The stratification of attitudes by degrees of intimacy and intensity extends into the world of mere contemporaries, i.e., of Others who are not face-to-face with me, but who co-exist with me in time. The gradations of experiential directness outside the face-to-face situation are characterized by a decrease in the wealth of symptoms by which I apprehend the Other and by the fact that the perspectives in which I experience the Other are progressively narrower."
It would appear, then, that there is simply 'more information' in the face-to-face encounter. But what does that mean exactly? After all, information itself is an intersubjective phenomenon. The "variables" that we might identify as distinctions between face-to-face and remote communications (for example, body language, gaze, tone of voice and so on) are all themselves categories of experience, only accessible to us because we live in a world of others.

Recently, I've begun to look at this differently. Schutz's phrase "the wealth of symptoms" is carefully chosen because it doesn't implicate information directly. Rather, it suggests there are distinctions that we might agree between us. In that process of agreeing distinctions, something constrains us to the point that we can say "this is the gaze... this is the body language..." and so on. The constraint is the "not-variable": the thing which lies outside the identified property. "Not-variables" have a different kind of combinatorial logic to variables. I suspect that Schutz's intersubjective difference between the face-to-face situation and the world of contemporaries is an interaction of different constraints, or "not-variables": in face-to-face settings, there are more constraints than in remote settings. The way we tune into each other depends on the way we recognise the constraints bearing upon each other.

Schutz's insight had one of its most powerful expressions in his love of music. His paper "Making Music Together" is a remarkable account of the way that music and musicians communicate without any kind of reference. I am fascinated by the analytical component of this, which is why I'm messing around exploring the interactions of different redundancies in musical performance at the moment. The relationship between redundancy and constraint is fascinating because it involves not only the redundancies of expression in face-to-face communication (gazes, body movements, voice tone, etc.), but also the redundancies of repetition and habit. Whilst most social media provides a narrow form of communication, it is also built for redundancy and the expression of habit (think of endless Twitter messages about what you're having for breakfast, or streams of Snapchats with little content).
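
As a toy sketch of what I mean (my own illustration using Shannon's standard redundancy measure, R = 1 - H/Hmax, rather than an analysis of real musical data), repetitive, habitual streams carry high redundancy, while streams in which nothing repeats carry none:

    import math
    from collections import Counter

    def redundancy(stream):
        """Shannon redundancy R = 1 - H/Hmax of a sequence of symbols."""
        counts = Counter(stream)
        n = len(stream)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
        return 1 - h / h_max

    # Habitual repetition ('what I had for breakfast', again) is redundant:
    print(redundancy("breakfast breakfast breakfast again breakfast".split()))
    # A stream in which nothing repeats has zero redundancy:
    print(redundancy("each word here is different".split()))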

As we communicate more remotely with one another, so we find new ways of generating redundancy in those communications, where the redundancies of face-to-face contact would once have taken much greater precedence. What this might mean is that 'wasting time' online with endless tweets about very little is much more significant than those who criticise it might think.