Sunday, 20 September 2015

E-learning Failure? The OECD report into Technology in Schools and a Scientific Problem

A recent OECD report, “Students, Computers and Learning”, into the impact of technology in schools argues that computers have had little effect on children’s learning. Comparing results across countries, there appears to be little noticeable educational advantage despite the enormous sums of money spent on educational technology. You can read the report here: http://www.oecd.org/fr/publications/students-computers-and-learning-9789264239555-en.htm. There is a more general air of disappointment in educational technology circles these days, much in contrast to the optimism of 10 or 15 years ago. It perhaps wasn’t a coincidence that “Edtech” was hit by the double-whammy of the financial crisis together with a sense of having reached a cul-de-sac; the financial crisis is itself a labyrinthine cul-de-sac, and we have yet to find our way out of it. In Edtech, we have seen the withdrawal of funding from educational technology, with government agencies shut down (like BECTA), or turned into charities dependent on institutional sponsorship (like JISC). These agencies were accused of lacking impact, and of devoting too much funding to academics and others experimenting with technologies which weren’t sustainable. At the same time, the technological maturity of social software, MOOCs and mobile learning appears to have settled the educational questions, thus requiring no further research and development, despite the manifest shortcomings of those technologies. Diana Laurillard’s comment that the “reality falls far short of the promise” in educational technology, and that “empirical work on what is actually happening” does not reflect theoretical predictions, captures not only the mood but a scientific problem in the way educational technology has proceeded. Laurillard doesn’t address the scientific problem but, like many others in educational technology, perpetuates it. Education is in the state it is because of a scientific and methodological incoherence which has its roots in history long before the recent impact of the web.
The early 2000s were characterised by an explosion of ideas about education stimulated by web technologies. Underpinning both the technologies and the ways in which those technologies were envisaged to transform society was a range of ideas drawn from theories about learning, technology and organisation which had a common foundation in systems theory and cybernetics. In order to understand what has happened recently, it is important to understand the deeper historical context within which these ideas were established. Over the course of this development, and particularly in its early phases, there were many more articulations of what Ron Barnett calls ‘feasible utopias’ than could be explored in practice. Many of these ideas sprang from the 1950s, long before the web, in an age of pedagogical experimentation as governments grappled with the challenge of mass secondary schooling. By the time the web appeared, the organisational power of the technology was already beginning to obstruct some paths of development in favour of others: the choice of feasible utopias became restricted. The OECD’s judgement about computers and learning is a judgement about a particular subset of ideas and activities which attracted funding, some of which delivered their promised impact, while others didn’t. Political forces drove the constraining of the research effort, and it is now political forces that determine that the phase of experimentation and research in education and technology should end, in the light of ‘evidence’ produced through blinkers.
To be disappointed by failure requires an expectation about the validity of the scientific methodology being pursued. Ministers and funders hope for the discovery of certain key ‘independent’ variables which can be manipulated to directly improve things, or implemented in policy. Theories about learning generate new ideas about education and new possibilities for intervention. Theories are the generators of promises: they make predictions as to what might happen, or what innovators might wish to happen. When predictions aren’t met or promised results do not materialise, disappointment results. The scientific problem lies in mistaken thinking about the causal connection between theory and practice in education. But what causes "successful learning"? Ministerial hopes for independent variables are misplaced, and whilst evidence will be sought to defend one position or another, in the real world declarations of the causal power (or lack of it) of intervention x or y are nothing short of political manipulation. Such declarations blind themselves to contingencies in the name of pragmatism or expedience. More significantly, the pursuit of the independent variables of educational technology is blind to the constraints that bear upon education, theories of education, the methodologies of research and the personal motivations of researchers, teachers, managers and politicians. The causal obsession loses sight of the constraints that frame it.
Whilst the OECD’s declaration of the failure of Educational Technology betrays a simplistic causal logic, it isn’t really their fault. The distinction between causes and constraints goes back to the relationship between the two disciplines which underpin so much of the theoretical background of education: General Systems Theory and Cybernetics. Confusion between these apparently closely-related theoretical enterprises has, I argue, been partly responsible for the current malaise in education and technology. Whilst both traditions of “systems thinking” sit uneasily with the pursuit of independent variables, quasi-experiments, or evidence-based policy, their underpinning empirical approaches are fundamentally distinct. The internal differences between General Systems Theory and Cybernetics have produced a central incoherence in thinking about education and educational research, of which Laurillard’s “disappointment” and the OECD’s rejection are both symptoms, and indicators of a possible way forward.
Systems theory and Cybernetics
It is hard to find a book that champions educational technology which does not hang its pitch on some variety of the concept of ‘system’. From Seymour Papert, who worked with Piaget, to Diana Laurillard, whose work drew heavily on the cybernetic conversation theory of Gordon Pask, and Sugata Mitra, whose background in physics made him familiar with theories of self-organisation and cellular automata, each has defended their ideas by redescribing the complexities of education in “system” terms. In each case, the theories that educational technologists attached themselves to were universalising and transdisciplinary, seeking to account for the richness of learning and educational organisation. Connections were made between a wide variety of educational perspectives, most notably with critical approaches to pedagogy, seizing on the educational critiques of Illich, Freire and others to argue that new means of communication would deliver new ways of organising people. Systems theories were the generators of an educational optimism buoyed by profound technological changes which promised to transform the learning situation in schools and in the home.
The tendency of most theories of learning, and particularly of “systems” theories, is to be grandiose. New concepts are used to redescribe and critique existing practices. The manifest problems inherent in received knowledge and practice become clarified within the new framework, their resolution conditional upon new practices, new ways of organising education and the implementation of technologies. The transdisciplinary, all-embracing nature of the systems descriptions presents what Gordon Pask called ‘defensible metaphors’ (Pask’s definition of cybernetics was “the art and science of manipulating defensible metaphors”) of transformed scenarios. In practice, when real teachers and learners and the real messy situations of education were put into the equation, things didn’t look so simple: redescription produced absences; certain things were overlooked; the contingencies always outweighed the planned futures. Generally there was a failure to account for concrete persons, real politics, or ethics: modelled social emancipation, however well-meaning, rarely manifests in reality. The history of cybernetics presents plenty of examples of this kind of failure, not just in education. The response to these kinds of failures takes a variety of forms. On the one hand, there is a response that criticises the intervention situation for not sufficiently supporting the initiative: with “management support” or without “management interference”, with better training, more time and so on, things would have worked, it is frequently asserted. Then there is a response which looks at deficiencies of the intervention in its material constitution: the interface could have been easier, the tools more accessible, and so on. There might also be a reflection on the possibility that the underpinning theory upon which interventions were based was deficient in some way. This latter approach can lead to the rejection of one theory and the adoption of a new one.
If there is a common thread between these different ways of responding, it is that they each focus on causes: the lack of management support caused the intervention to fail; the interface meant the tools were unusable; the deficiencies of theory caused inappropriate interventions to be made. However, an approach based on constraint rather than causality changes the emphasis in such assessments. Constraints are discovered when theorised possibilities do not manifest themselves in nature. The constraint perspective is more consistent with a Cybernetics perspective than with a Systems one. In order to understand the constraint perspective, we have to reinterpret the diagnosis of failure. If computers fail to help children learn in the way that the marketeers argue they might, it is because there is a mismatch between the constraints that bear upon those generating ideas about what might be possible in educational reality (theorists, technologists, and so on), and the actual constraints that are revealed by the world when new interventions are attempted. We might only consider it failure if our purpose was to determine causal patterns between interventions and results: the identification of ‘independent variables’ in the implementation of technology. But what if our scientific approach was geared towards identifying constraints? Then we would have learnt a lot through our intervention: most particularly that a set of ideas and designs which, in an imagined world, lead to beneficial outcomes, in reality do not. What might that tell us about the constraints on thought and the generation of new ideas? How might thinking change in the light of the new constraints we have discovered? By this approach, knowledge emerges in a reflexive process of theoretical exploration, and the discovery of which theoretically-generated possibilities can and cannot be found in reality.
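To make the contrast concrete, here is a minimal sketch of my own (a few lines of Python, not drawn from the OECD report or from any of the theorists mentioned; all names and numbers are invented). The cause-oriented analysis hunts for the effect of an ‘independent variable’; the constraint-oriented analysis, in Ashby’s sense, enumerates what a theory says is possible and treats the combinations which never show up in observation as discovered constraints.

from itertools import product

# Cause-oriented view: hunt for an 'independent variable' that predicts outcomes.
# Hypothetical data: did giving a laptop (1) or not (0) 'cause' a better score?
interventions = [0, 1, 0, 1, 1, 0]
scores        = [52, 55, 49, 61, 47, 58]   # messy reality: no clean effect

def naive_effect(xs, ys):
    """Difference between mean scores of the 'treated' and 'control' groups."""
    treated = [y for x, y in zip(xs, ys) if x == 1]
    control = [y for x, y in zip(xs, ys) if x == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

print("apparent 'effect' of the intervention:", naive_effect(interventions, scores))

# Constraint-oriented view (after Ashby): enumerate what theory says is possible,
# record what is actually observed, and treat the missing combinations as constraints.
theoretical_possibilities = set(product(["laptop", "no laptop"],
                                        ["engaged", "disengaged"],
                                        ["improves", "no change"]))

observed = {   # hypothetical field observations
    ("laptop", "engaged", "improves"),
    ("laptop", "disengaged", "no change"),
    ("no laptop", "engaged", "improves"),
    ("no laptop", "disengaged", "no change"),
}

constraints = theoretical_possibilities - observed
print("combinations the theory allows but the world does not show:")
for c in sorted(constraints):
    print("  ", c)

The point of the sketch is only that the second computation asks a different question of the same messy data: not “what is the effect of x?”, but “which imagined possibilities never occur, and what does that tell us about the constraints on our imagining?”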
There are significant constraints that bear upon intellectual engagement with educational technology. The identification of constraints in reality (where things don’t work as we intended) did not send theorists back to think about why they thought they might work in the first place. Whilst many of these constraints are political: “we needed the project funding to keep our jobs, and this was what the funders were asking for…”, or “this complies with current government or EU policy”, other constraints on thought emerged from the confusion between causal thinking and constraint-thinking. To put it more bluntly, it stems from confusion between constraint-oriented Cybernetics and cause-oriented General Systems Theory, to the point where the justification of interventions, or the sales pitch for pieces of software, produced explanations which attenuated the complexities of reality in order to attract research funding, rather than possibilities to be empirically explored.

The OECD’s judgement could be an interesting step along the way; instead, there is a risk it will be seen to slam the door shut.
