Moving on with my book, this is the sketch for Chapter 4. Chapters 4 and 5 deal with two aspects of functionalism: here I deal with functionalism in educational thinking - both in the managerialist sense and also in what I call the 'deep functionalism' of cybernetics. In Chapter 5 I will deal with functionalism in the thinking about information. Still lots to do, and hopefully no clangers! (although the good thing about blogging this stuff is that I'm more likely to iron out problems...)
Introduction:
Functionalism and the Question of Efficient Education
Educational institutions are complex. They comprise departments for academic disciplines, research, marketing, libraries, estates management, IT deployment, finance and so on. To see each of these as performing a function, with a particular purpose, lies at the root of functionalism. Functionalism sees that the architecture of the enterprise of education may always be improved, in the same way that the architecture of a business might be improved: efficiencies can be introduced, often with the help of new technology. Whether through Taylorist scientific management, through analysing the workflows of academics, or even through wholesale Fordist reorganisation of educational processes, the optimised and efficient delivery of education has been a driver both for well-intentioned educational reorganisation (to reduce costs to learners) and for (arguably) ill-intentioned managerial intervention with the goal of maximising profits. Here we must ask: can education be 'efficient'? What does efficiency in education mean?
A common topic of debate about the efficiency of education is its expense and how it should be paid for. In this debate we typically see arguments about efficiency combined with arguments about fairness. The reason is that it is impossible to discuss efficiency in education in isolation from the macro- and micro-economic mechanisms which bear upon the constitution of society in the wake of education, and upon the rights and obligations of the individuals subject to education. For example, we can ask: if Higher Education is funded by everyone (through taxes), and not everyone attends university, is it efficient, irrespective of whether it is fair? To what extent does the answer depend on how the money is used within the institution? To what extent does it depend on what education gives society? The institution of education might appear to be grossly inefficient and an expensive drain on the taxpayer, but to reform it would change its nature, and with it the nature of society: how might the efficiency of higher education be measured against the benefits to the whole of society in such a model? If, alternatively, education is funded only by those who attend, the burden of funding on the individual is much greater than it would be if it were funded by everyone. Is this more efficient? What are the knock-on effects on society? If (to take a popular solution to this problem) education is funded by loans taken out and repaid by those who attend, but repaid only if they later earn enough to afford it, is this more efficient? And what happens to the money within the institution? If Vice-Chancellors award themselves huge pay rises (as they have done in recent years), is this a mark of efficiency (as VCs usually claim!)? If VCs have led cost-cutting initiatives which have squashed academic salaries, introduced institutional fear and compromised academic freedom, can this be defended as 'efficient'? The problem is that whilst efficiencies might be sought by identifying the functions of the components of education, the function of education itself is contested.
Functionalist thinking can
work at many levels. I will make a simple
distinction between 'shallow functionalism' and ‘deep functionalism’. Shallow
functionalism is the thinking about the functions of the components of the
institution of education, the way that those components inter-operate, and so
on. This is the Taylorist agenda in education. Deep functionalism looks at similar problems with a greater degree of granularity: what are the components of learning processes, of teaching, of the curriculum, of knowledge itself? Thinking relating to deep functionalism belongs to the discipline of cybernetics. The relationship between deep functionalism and shallow functionalism presents new questions about how we think about 'solving the problem' of education, and how we think about 'identifying the problem of education'.
Shallow functionalism
The shallow functional problem begins with the way that institutions and society are organised. We might draw this as a simple hierarchical
diagram with government and education ministers at the top and teachers and
students at the bottom. In between are various organisational units: schools,
universities, colleges, each of which has a head who is responsible for
coordinating the provision of courses, resources, teachers and timetables.
Teachers themselves have the responsibility for coordinating the activities of
their learners, making sure national standards (like exams) are met
satisfactorily, and the expectations of learners (and, in schools, their
parents) are satisfied.
The hierarchy here is clear. Each individual has a function.
Within the contemporary University, there are particular functions identified with 'quality', 'validation', 'assessment', 'certification', and so on. Each of these units performs the function ascribed to it within the
structure. Such functions are determined at different levels of the structure.
For example, the reason we have 'validation' as an activity in the University is to ensure that the things students are taught fit into a
general and comparable schema of 'subjects for which degrees might be awarded'
within the national sector of universities. Validation is a function emerging
as a knock-on effect of other functions within the university system. The
individual responsible for 'validation' in the university is the person who has
to fill in forms, attend meetings, review documentation and feed the results of
this to the institutional hierarchy.
Similarly, the assessment boards within the university have
a function to verify the marks awarded by teachers to their students. Typically
marks for modules are presented to the group of staff concerned, and
discussions ensue as to what decisions should be taken in each case. This panel
is seen as fundamentally important in guaranteeing the award of degrees to
individuals. What if it didn't happen? What if anyone could award anyone else a
degree? Then the education system becomes incoherent: the end
result is a perceived injustice on the part of some learners who find their
efforts unfavourably viewed purely because of bad luck or lack of transparent
organisation.
The functional differentiation within the education system has evolved over time. From origins in the individualistic practices of apprenticeship learning, the monastery and so on, preferred hierarchical models arose which privileged the fairness, equity and stability of an education system. The advocates of change in the education system are those who see this evolutionary process as incomplete: further evolution is required to ensure that the functions performed by the different components of the education system, and the functions performed by individuals within those units, are all optimal, so that the function of the education system as a whole is tightly and efficiently specified.
However, there remain a number of problems with this picture. Whilst the hierarchy of functions specifies the arrangements of documents passing between departments which assure quality, fairness, and so on, it contains no 'real' people. Institutions do not contain functionaries; they contain real persons, with real histories, real hang-ups, real talents, and so on. Everybody concerned is involved in a process of problem-solving: the question within institutions is whose problems count for most; who gets to pursue a solution to their problem, and who loses as a result? Institutions constitute themselves with real politics.
The actual 'evolutionary' processes which determine the functions of the institution rely on various arguments which present the latest problem solution in an 'ethical' light: it is the 'right' thing to do. Such statements are usually made by those in power. Functional determination is not a process of natural selection, because selection is determined by power rather than by any inherent natural property of each individual: the dominant species in any institution dwarfs the power of everyone else. This process becomes more marked when powerful individuals arm themselves with new technologies. It has been argued that technologies are often used as tools in the class struggle, but more often they are the means by which senior managers enhance their power over everyone else. The ethical arguments for such change amount to declarations of the status of new instruments of engagement.
It is for this reason that functionalist thinking has become associated with managerialist doctrine: the seeking of efficiencies in the system frequently costs the jobs of many of the workers. It is in fighting this naive view that deep functionalism is
sometimes invoked as a way of challenging the managerial hierarchy - either by
re-determining the functions of education so that institutional structures are
no longer needed (for example the recent experiments with MOOCs), or by
highlighting the deep processes of individual learning and drawing attention to
the epistemological gulf between functionalist philosophies of management and
the processes of learning and growth of culture which they threaten.
A hierarchy of function is a kind of model. The boxes in such a model indicate departments and people. The lines delineate 'information flows': the communication channels between people. What is meant by information in this context I will address in the next chapter. But typically in institutions, the information flows contain reports and formal communications which are the result of some process executed by the unit concerned; their necessity has usually been determined by higher-order power functions. Failure to produce reports will be deemed to be 'not working', and the workers within the unit will probably be sacked.
If EdTech has been a tragic enterprise over the last 10
years then it has been because it hoped to establish itself with the ambition
and ideals of deep functionalism, only (in the end) to have strengthened the
hand of shallow functionalists. In understanding the reasons for this, we now
have to turn to the background behind the deep functionalism that started it
all. Here we have to consider a different kind of model.
Deep Functionalism and Educational Technology
Deep functionalist models consider the arrangement of components of individual behaviour, communication, self-organisation and consciousness. The principal feature of deep functionalist thinking is not just a greater granularity in the determination of components, but increasing complexity in the interrelationship of those components: in particular, the relationship of circular inter-connectedness. Circular inter-connectedness, or feedback, was one of the principal features of psychological and biological models, and early functional models date back to before the 19th century. Piaget's model of perturbation and adaptation in organisms provides the classic early example of this kind of deep functionalism. But thinking about deep functionalism goes back further, to the origins of a discipline concerned with the interaction of components at a deep level, and in particular with the circular relationships between the actions of different components which produce behaviours that can appear chaotic or life-like.
Piaget's mechanism is one of feedback between components, and this model became one of the founding principles behind the pedagogical models which informed thinking about learning and teaching. Among the most influential work within educational technology is that of Gordon Pask, whose conversation theory attempted to identify the functional components of communication within the teaching and learning context. This model was simplified in the late 1990s by Diana Laurillard as her 'conversational model', which subsequently was used as one of the bases of constructivist pedagogy.
However, just as the shallow functionalist models failed to
work, or at least relied on power relations in order to work, so the deep
functionalist models and the technologies that they have inspired have also
often failed to work. In a passage in Diana Laurillard’s book on “Learning as a
Design Science”, she states the problem:
“The promise of learning technologies is that they appear to provide
what the theorists are calling for. Because they are interactive,
communicative, user-controlled technologies, they fit well with the requirement
for social-constructivist, active learning. They have had little critique from
educational design theorists. On the other hand, the empirical work on what is
actually happening in education now that technology is widespread has shown
that the reality falls far short of the promise."
She then goes on to cite various studies which indicate
causes for this 'falling short'. These include Larry Cuban's study which
pointed to:
· Teachers have too little time to find and evaluate software
· They do not have appropriate training and development opportunities
· It is too soon – we need decades to learn how to use new technology
· Educational institutions are organized around traditional practices
She goes on to echo these findings by stating:
"While we cannot expect that a revolution in the quality and
effectiveness of education will necessarily result from the wider use of
technology, we should expect the education system to be able to discover how to
exploit its potential more effectively. It has to be teachers and lecturers who
lead the way on this. No-one else can do it. But they need much more support
than they are getting."
However, here we see a common feature of functionalism, both
shallow and deep: functionalist theories struggle to inspect themselves. The
odd thing in Laurillard’s analysis is that at no point is it suggested that the
theories might be wrong. The finger points at the agency of teachers and
learners and the structural circumstances within which they operate. In other
words, deep functionalism is used to attack shallow functionalism.
Most interventions in education are situated against a
background of theory, and it is often with this theoretical background that
researchers situate themselves. Given the difficulties of empirical
verification in any social science, the relationship between these descriptions
is metaphorical at best, and such models are often a poor match for real
experience. The difficulty in adapting these abstractions presents an
interesting question about the relationship between theories, researchers,
practitioners and the academic community. The personal identity of researchers becomes associated with the validation of a particular analytical perspective or theoretical proposition: either a theoretical proposition is to be defended, or a particular method of research, which will itself be situated against a theoretical proposition (often lying latent). To critique theory is not just an intellectual demand to articulate new theory (which is difficult enough); it is also to question the theoretical assumptions that often form the basis of the professional and personal identities of researchers. On top of this, the structures of schools and colleges (which are often blamed for implementation failures) provide a more ready-to-hand target for critique than theoretical deficiency.
This is a question about the nature of functionalist thought as a precursor to any theoretical abstraction and technological intervention. What is the relationship of analytical thought and its categories to the real world of experiences and events? For Hume, whose thinking was fundamental in the
establishment of scientific method, there was no possible direct access to real
causation: causation was a mental construct created by scientists in the light
of regular successions of observed events. The models of education present an
interesting case of Humean causal theory because there are no regular
successions of observed events: events are (at most) partially regular; only in
the physical sciences are event regularities possible. Given that merely
partial regularities are observable, what are the conditions for the
construction of educational theories? The answer to this is the use of
modelling and isomorphism between models and reality: educational science has
proceeded as a process of generating, modelling and inspecting metaphors of
real processes.
Functionalism and the Model
When Laurillard discusses the extant ‘theoretical models’
(Dewey, Vygotsky, Piaget) she presents a variety of theories of learning. She
attempts to subsume these models within her own ‘conversational model’ which
she derived from the work of Gordon Pask. She defends the fact that these
models of learning haven’t changed by arguing that “learning doesn’t change”.
How should we read this? Does it mean that Dewey, Vygotsky and Piaget were
right? Or does it mean that “there is no need to change the theoretical
foundations of our educational design, irrespective of whether the
interventions work or not”. Basically, there is an assumption that the model which has served as a foundation for the design of educational interventions isn't broken, because it has served its purpose as a model for design, irrespective of its ability to predict the likely results of those interventions.
Such deficiencies in modelling are not uncommon in the
social sciences. In economics, for example, econometric models which fail to
explain and (certainly) to predict the events of economic life continue to
appear in economic journal papers. The deficiencies of the model appear to
serve a process of endless critique of policy as attempts are made to make
policy interventions fit the prescribed models. This continues to the point
where it is difficult to publish a paper in an economics journal which does not
contain an econometric formula. Yet the principal figures of economics (Keynes, Hayek, etc.) used very little mathematics, and Hayek in particular was scathing about the emerging fashion for econometrics.
A similar situation of adherence to formalisations as the
basis for theory has emerged in education. In education, this takes the form of
slavish adherence to established theoretical models to underpin practice: a
tendency which might be called 'modellism'. Models are associated with ideological positions in education. The dominance of constructivist thinking, which (as we said in chapter 1) is grounded in solid and reasonable pedagogical experience, nevertheless provides a foundation for models of reality which (partly because of their affinity to pedagogical practice) are hard to critique, lest those who hold to them feel that their most cherished values about education are under attack. In trying to address this situation, we need to understand the nature of these models.
Laurillard hopes her 'conversational model' provides a generalisation of the available theories of e-learning. She states that she would have liked to have made a simpler model, but she feels that simpler models (like Kolb's learning cycle, or double-loop learning) leave out too much. That she is able to produce a model which is commensurable with these existing models owes partly to the fact that each of the models she identifies has a shared provenance in the discipline of cybernetics.
The Machinic Allegory of the Cybernetic Model
Cybernetics is a discipline of model building, particularly concerned with understanding the properties of systems with circularity in their connections. Cybernetics is difficult to describe. Its efforts at defining itself (multiple definitions abound) testify to the fact that it doesn't have the same kind of constitution as other established sciences. Cybernetics grew from a period of interdisciplinary creativity and science that emerged from World War II, when it was recognised that connections might be made between 'feedback and control' in the newly invented mechanical devices of the war (in cybernetics' case, the anti-aircraft prediction systems which Norbert Wiener had been developing at MIT) and the biological mechanisms of living things. It appears as a kind of playful philosophising, where physical or logical mechanical creations with unusual properties are explored, and the questions raised are used to speculate on the nature of the world. Pickering calls this 'ontological theatre': a kind of allegorical process of exploring fundamental mechanisms and relating them to reality. Cybernetics provides an alternative to philosophy as a means of description of the world. With its emphasis on feedback and indeterminacy, cybernetics brought with it its own mathematics, which provided the ground for deeper investigations and, ultimately, many spin-offs which now have their own separate disciplines (and rarely acknowledge their shared heritage), including computer science, artificial intelligence, family therapy, management science and biology. Self-organising systems became the principal metaphor behind these developments, and generic models could be provided which could cover a range of different phenomena.
Machines with circular connections exhibit behaviour which becomes unpredictable in ways that can make a machine appear to have 'life'. One of the first machines with this property was developed by the psychiatrist Ross Ashby. His 'homeostat' was a machine which contained four oscillating mechanisms whose output values were wired into the inputs of the other oscillators. When activated, the different gauges oscillate in apparently random patterns, a change in each individual unit prompting reactions in each of the others. By making the machine and observing its behaviour, it becomes possible to make distinctions about the behaviour of the machine. At the same time, it also becomes possible to consider the nature of this 'model' and its relationship to the natural world. The distinctions surrounding the homeostat provided the opportunity to introduce new concepts: attenuation, amplification. These distinctions feature in a new kind of 'allegory' about social life and psychological phenomena like learning. The homeostat creates events in which understanding is a performative process of engagement: cybernetic machines and models 'tell stories' about the phenomena of consciousness and understanding. Cybernetics is a new kind of metaphysical allegory to account for the way things come to be, and for the way things might become.
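To make the circularity concrete, here is a minimal sketch of a homeostat-like system in Python. It is my own toy rendering, not a reconstruction of Ashby's hardware: the unit count, bounds and weights are invented for illustration, but it shows the essential trick of 'ultrastability', where a unit whose essential variable strays out of bounds randomly rewires itself until equilibrium returns.

```python
import random

# A toy homeostat (illustrative only; all parameters are invented).
# Four units, each driven by a weighted sum of the others' outputs.
N = 4
LIMIT = 1.0   # safe bound for each unit's 'essential variable'
STEP = 0.1    # integration step

states = [random.uniform(-0.5, 0.5) for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

for t in range(2000):
    # each unit feels the circular influence of all the others
    states = [states[i] + STEP * sum(weights[i][j] * states[j]
                                     for j in range(N) if j != i)
              for i in range(N)]
    for i in range(N):
        if abs(states[i]) > LIMIT:
            # out of bounds: a random step-change of the unit's wiring
            # (Ashby's 'ultrastability'), and a reset within bounds
            weights[i] = [random.uniform(-1, 1) for _ in range(N)]
            states[i] = max(-LIMIT, min(LIMIT, states[i]))

print(states)  # often settles near equilibrium after several rewirings
```

Watching such a system settle, or fail to, is exactly the kind of 'ontological theatre' Pickering describes.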
The latter emphasis on becoming pinpoints the evolutionary hopes
contained within cybernetic understanding. There are many similarities between
evolutionary explanation and cybernetic understanding: indeed, for those
scientists who have sought to develop Darwinian evolutionary theory,
cybernetics has been a powerful tool which they have used to dig deeper into
the components of the emergence of life. As with evolutionary explanation, the
central feature in these kinds of explanations is time. It was initially in the
mathematics of time-series that Wiener first articulated the cybernetic
dynamics as an example of irreversibility: each state depended on some prior
state, and differences of initial conditions could produce dramatically
different patterns of behaviour (a physical manifestation of the point made by
Poincaré many years earlier). Given the dependence of states on previous states, and the irreversibility of the processes of emergence, there needed to be a way of thinking about the connection between the variation in states and the conditions under which states varied.
Just as evolutionary explanation regards selection and probability as its principal mechanical drivers over time, cybernetics takes as its principal driver the inter-relationships between components, each of which can occupy a finite number of states at any particular moment. Ashby noticed that the number of possible states in a component at any point in time was related to the number of states in other components. He called his measure of the number of possible states of a machine 'variety', and stated his law that the variety of a system can only be absorbed by the variety of another system. In other words, equilibrium between components depends on the balancing of the number of possible states in each component; an imbalance causes fluctuations in behaviour and uncertainty. The technique of counting variety, and Ashby's associated law, has many everyday applications: in the classroom, the teacher has the variety of a single human being, whilst the class has the variety of 30 human beings. Somehow, the teacher has to manage this variety, which they do by attenuating the variety of the class (with rules and regulations), and amplifying their own variety (with a central position where they can be seen, and a chalk-board).
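As a back-of-the-envelope illustration (the numbers here are entirely invented), variety can be counted in bits, and the teacher's predicament expressed as an imbalance:

```python
import math

# Toy variety arithmetic in the spirit of Ashby (invented numbers).
# Variety = number of distinguishable states; in bits, log2(states).
def variety_bits(states_per_element, n_elements):
    # independent elements multiply state counts, so bits add
    return n_elements * math.log2(states_per_element)

teacher = variety_bits(8, 1)        # one person, say 8 salient states: 3 bits
whole_class = variety_bits(8, 30)   # thirty pupils: 90 bits

# Requisite variety: only variety absorbs variety. The teacher
# attenuates the class (rules leave each pupil fewer admissible
# states)...
attenuated = variety_bits(2, 30)    # 30 bits

# ...and amplifies themselves: a chalkboard makes one utterance
# reach all 30 pupils at once.
print(teacher, whole_class, attenuated)
```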
Ashby’s Law represents deep functionalism's alternative to
evolutionary language: instead of genes competing for supremacy, complex
organisms interact in ways which preserve their own internal variety management
across their different components. Given the new kind of language of deep
functionalism and cybernetics, the predominant concerns of shallow
functionalism can be re-inspected. What does the organisation hierarchy look
like if instead of identifying the components of the organisation as those
which perform the functions of marketing, sales, accounts and so on, we examine
the way that variety is managed from the macro to the micro level? What does
the enterprise of the educational institution look like as a multi-level
variety-management operation? What impact do technologies have in helping to manage variety, through either attenuation or amplification? Most importantly, by this route, might it be possible to characterise the more basic qualities of mind, and to establish better ways of organising the newly re-described components of education?
Of the pioneers of cybernetics who asked questions about the organisation, it was Stafford Beer who applied Ashby's principles to re-describe the organisation chart of the institution. Beer's approach was to allegorize fundamental components he considered vital to the successful management of variety in any organisation, and indeed in any organism. Beer's Viable System Model led him to Chile, where he was invited to rewire the Chilean economy under Salvador Allende. Using Telex machines and a rudimentary computer, in 1972 a control centre
was established which had feeds of production information from the entire
Chilean economy. Decisions could be taken in the light of information received.
In the history of cybernetics, there is perhaps no more spectacular example of
the aspiration of deep functionalism.
Beer's work characterises the power and ambition of deep functionalist thinking for re-describing social institutions and transforming them. At the level of individual consciousness and experience, the same principles were used to address psycho-social pathologies emerging from human communication and experience. The American anthropologist Margaret Mead was one of the first scientists present at the Macy conferences, and she was joined there by her husband Gregory Bateson, who saw in the descriptions of 'feedback' dynamics within trans-cultural and trans-species systems a way of describing human pathology which, if apprehended, could avert the ecological catastrophe that many cyberneticians (including Heinz von Foerster) were already predicting. Bateson's thinking
about system dynamics goes back to Ashby in recognising the fundamental
distinction as that of the ‘difference’. Differences are perturbations to a
system's equilibrium, and differences cause change in a system: Bateson argues that what humans consider to be 'information' is "a difference that makes a difference that…" The dynamics of difference-making result in differences occurring at different 'levels' in an organism. Influenced by Russell and Whitehead's theory of logical types, Bateson defines different classes of difference. Consciousness involves the interaction of different difference-processing mechanisms. In examining the learning processes of children as they grow into adults, he observed that the basic mechanism of stimulus and response at one level gave way to deeper levels of coordination, as higher-level differences concern not basic stimuli, but the results of accommodation to primary responses to stimuli. Tertiary levels of response occur in response to the differences produced by secondary levels: the response to the processes of adaptation to the primary response. Two important phenomena arise from this mechanism: first, there is, for Bateson, emergent consciousness, which arises through the interaction of different levels of description in a system. Second, there is emergent pathology: different levels of description may contradict each other, particularly in inter-human communication. It is in these different levels of description that Bateson becomes particularly interested.
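Read mechanically (and this is only my own caricature, not Bateson's formalism), the hierarchy of levels can be pictured as differences taken of differences:

```python
# A caricature of Bateson's hierarchy of differences (my own toy):
# each level registers changes in the level below it.

def differences(sequence):
    # the elementary 'difference that makes a difference'
    return [b - a for a, b in zip(sequence, sequence[1:])]

stimuli = [1, 2, 4, 7, 11, 16]
level_1 = differences(stimuli)   # responses to stimuli
level_2 = differences(level_1)   # changes in how responses change
level_3 = differences(level_2)   # changes in the pattern of level 2
print(level_1, level_2, level_3)
```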
One particular situation is what he called the 'double-bind': a situation where one level of description conflicts with another in a contradictory way, and where a third level of control prohibits either party from being able to "see" the situation they are caught in. In the double-bind Bateson saw the political relationship between masters and slaves, and the dynamics of war, addiction and depression. In the dynamics of emergent levels of organisation, Bateson saw the processes of learning, from the basic experience of stimulus and response to the higher-order functions of the sciences and the arts.
[more to do]
Pask's Conversation Theory and Teaching Machines
Whilst Bateson’s project was one of radical redescription,
with interventions occurring across a range of phenomena (from dolphins to
psychotherapy), Gordon Pask used a similar approach to experiment directly with
new technologies for learning. As with all cyberneticians, Pask's interests were very varied: his contributions range across educational technology, art, architecture and biological computing, to epistemological concerns around the nature of concepts and understanding. His work encompassed the connection between self-organising systems, constructivist epistemology and a theory of society. However, Pask is perhaps most important as one of the founders of experimentation with teaching machines. It was with these machines that Pask explored his interest in communication, the ways individuals learn and adapt from one another, and the ways that they acquire concepts about the world, which they then communicate.
Displaying a circularity typical of cyberneticians, Pask’s
work in education revolved around the concept of the "concept": what is a concept, and how are concepts related to communication and learning? Pask's theory of concepts has many similarities to von Foerster's theory of objects. For Pask, the concept is a stabilised pattern of interactions between individuals and the world. Pask's approach is a kind of meta-epistemology, which regards concepts as both ideal and subjective (since they are realised in individual minds) whilst exhibiting the properties of objectivity in the eddies of stability in the interactions between people in the world. In order to realise his theory of concepts, Pask requires two things: a mechanism that drives the process of conceptualising, and a "field" which bounds the operation of this mechanism and allows for the formation of concepts and the interactions between people. Pask calls these two aspects simply an M-machine and a P-machine. An M-machine is the hardware: a biological substrate, and in the case of human beings, the brain. The P-machine is the software: some kind of storage for emerging concepts. Importantly, a P-machine may exist within a single M-machine, or across many M-machines. In the latter case, one may talk of social groups, shared discourses and (most importantly) shared concepts between members of social groups. Pask sees the process of maintaining stable conceptual forms as part of a broader process of maintaining individual identity. Luhmann sees individual identities as being constituted out of the self-organising dynamics of communications; Pask posits an ultimately individualistic and psychological mechanism of maintaining conceptual structures in an interactionist context.
This kind of deep functionalism presents difficult
questions: How are concepts communicated? It is this part of his theory which
Laurillard is drawn to, and whilst his explanation stands on its own at one
level, the extent to which its assumptions draw on his much more convoluted
assertions about the nature of concepts is a sign of particular problems
further along the road. Fundamentally, conceptual structures are realised
through processes of communication. In teaching and learning processes,
teachers attempt to coordinate their understanding of conceptual structures
with learners’ understandings by making interventions in contexts, and
encouraging learners to articulate their understandings through a process Pask
calls “teach-back”. Through repeated exchanges, activities (which are different
kinds of context) and exercises, teachers and learners gradually harmonise
their conceptual formulations.
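A minimal sketch of this loop might look as follows. The representation of a 'concept' as a set of relations, and all the names here, are my own invention for illustration; Pask's actual formalism (entailment structures and the like) is far richer.

```python
import random

# A toy 'teach-back' loop (my own schematic, not Pask's formalism).
# A concept is caricatured as a set of relations; the teacher
# intervenes, the learner teaches back, until conceptions harmonise.

TARGET = {"heat rises", "metal conducts", "wood insulates"}

def teach(teacher_concept, learner_concept):
    """Offer one relation the learner has not yet articulated."""
    missing = sorted(teacher_concept - learner_concept)
    return random.choice(missing) if missing else None

learner = set()
exchange = 0
while True:
    offered = teach(TARGET, learner)
    if offered is None:
        break                 # understandings have harmonised
    learner.add(offered)      # the learner accommodates the intervention
    # 'teach-back': the learner articulates their current understanding,
    # from which the teacher gauges what to do next
    print(f"exchange {exchange}: learner teaches back {sorted(learner)}")
    exchange += 1
```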
So from the shallow functionalism of units of organisation
of the university to the deep functionalism of concepts, communication and
psychology, how do these things interact? Pask built his conversation theory into a theory of society. Within his work are situated the adaptation-assimilation mechanisms of Piaget, the social constructivist theory of Vygotsky, the situated cognition of Lave and Wenger, and many other theories. The breadth of
this work provides the ground for Laurillard’s ambition to embrace the other
models that she admires, and indeed, within Pask’s model, there is some
commensurability between the cybernetic provenance of Senge, Kolb, Piaget, and
others.
From Deep Functionalism to Epistemology and Critique
Whilst cybernetics' relation to philosophy is somewhat strained (Luhmann humorously characterises the relationship, with philosophers turning up at the cyberneticians' party like the uninvited "angry fairy"), the deep functionalism of cybernetics must eventually end up in philosophical territory: for all its speculation about the nature of the world, cybernetics struggles to be anything other than metaphysical. Its identification of processes of variety management and difference, and its extrapolation of these to the processes of consciousness, attest to a particular view of knowledge and being in the world.
Having said this, cybernetics asks the question as to the nature of reality in
a new way: with the demonstrable dynamics of machines and the logic of its
ideas and its mathematics, cybernetics presents a model of the metaphysical
realm which appears more coherent than those presented by philosophers from
past ages. Such a view was clearly held by Bateson who called cybernetics “the
biggest bite from the tree of knowledge man has taken for 2000 years”, and his
epistemology focused on the importance of language and communication as a
dynamic process of difference-making which resulted in the co-creation of
reality. By this logic, empiricist views of ‘the real’, the ‘observable’ and so
on were challenged: reality was a construct. It’s an irony that the philosopher
who would most closely agree with this position is David Hume, who was
similarly sceptical about reality, but whose work established the foundations
for modern empirical method.
The central difficulty that Bateson addressed was the
problem of observation. A philosophical shift was signalled by Margaret Mead,
who wrote in 1968 of the need for a "cybernetics of cybernetics". Mead's plea for a cybernetics which turns its focus on cybernetics itself found an immediate focus in the work of two Chilean biologists, Humberto Maturana and Francisco Varela.
Maturana and Varela’s work on cellular organisation appeared
to demonstrate (down the microscope) that cells were self-organising,
self-reproducing entities. The question then became, if cells are like this,
what about language? What about human relations? What about objectivity?
Maturana carried this work forward by arguing that media of transmission were
in fact fictions – the results of self-organising processes. There was no
information, there was no language: there were processes of languaging between
biological entities. That there was an empirical biological basis behind
Maturana’s epistemology introduced the seeds of a problem with reality: his
work could be characterised as a ‘biological reductionism’. However, it wasn’t
just biology that was pointing in this direction. Shortly after Maturana and
Varela’s intervention, Heinz von Foerster argued for a mathematical orientation
towards 2nd order cybernetics. In considering the nature of objects and the
objection to radical idealism that was famously made by Dr. Johnson, von Foerster worked on a way in which the identification of objects could be
explained through the patterning of sensory information, where stable patterns
of interaction could be determined irrespective of the point of view. He called
these points of stability of interaction ‘eigenvalues’ and his work
subsequently was expanded by other cybernetic mathematicians, notably the
topologist Louis Kauffman.
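The flavour of this can be reproduced in a few lines. In this minimal numerical sketch (cosine is just a convenient toy operation, not a claim about perception), applying the same operation recursively to its own output produces a stable value, an 'eigenvalue' of the operation, regardless of where one starts:

```python
import math

# A minimal illustration of an 'eigenvalue' in von Foerster's sense:
# the stability that emerges when an operation is applied recursively
# to its own result.

def eigenvalue(operation, start, iterations=200):
    x = start
    for _ in range(iterations):
        x = operation(x)   # observe the result of the previous observing
    return x

# Whatever the starting point, cos(cos(cos(...))) settles to ~0.739.
print(eigenvalue(math.cos, 0.1))
print(eigenvalue(math.cos, 1000.0))
```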
Maturana and Varela’s ideas had a further impact, and one
which Maturana (in particular) objected to. This was the idea that communications
themselves could be self-organising. Niklas Luhmann's view of the world was based on 2nd-order cybernetics, yet his approach was essentially an inversion of Maturana's biological emphasis. Luhmann suggested that it was communications,
not biological entities, which should be seen to be self-organising. Biological
entities became then the means by which communications were reproduced and
transformed in society. This move enabled Luhmann to redescribe the phenomena
of the social sciences from the perspective of self-reproducing systems. More
importantly, it meant Luhmann could engage in a full-blooded redescription of
the major themes in sociology, which resulted in a critical engagement that brought Luhmann's cybernetics far greater inter-disciplinary attention than any other cybernetician's work, building as it did on the highly influential social functionalism of Talcott Parsons.
Luhmann's exclusion of agency is at once challenging and powerful. Luhmann asks whether agency is merely the interaction of communication dynamics. Is meaning simply the calculation of anticipations of the double-contingency of communication? What then matters? What of love? What of passion? Luhmann presents powerful answers to these questions in Love as Passion, where love is seen as an inter-penetration of psychic systems. Luhmann generally makes 2nd-order cyberneticians uncomfortable, yet his message isn't radically different from that of Pask's social theory. Luhmann, more clearly than any other figure in cybernetics, relates the sociological discourse to the cybernetic one; more importantly, he reveals the direct inheritance of German idealism, and particularly of Kant. It is for this reason that his argument with Jürgen Habermas, who represents a fundamentally different tradition of thinking, is most interesting. However, the revealing of the Kantian inheritance and its problems is where the novelty and magic of cybernetic 'deep functionalist' thought can be situated in a deeper tradition of thought.
Deep Functionalism and the Kantian Inheritance
Kant’s importance in the history of philosophy rests on his
rejection of a model of the world where meaning lay inherent in nature, not in
man. In his enterprise he echoed Hume’s view that causes were not inherent in
the natural world, but instead the result of discourse between scientists who
contrived reproducible experiments. He made a distinction between analytic knowledge (that which can be known from a proposition itself, as in mathematics) and synthetic knowledge (knowledge constructed in the light of experience). He introduced a new method of philosophical reasoning to ask
“given that we can have knowledge of the world in various ways, what must the
world be like?” As von Glasersfeld points out, Kant is not the first to highlight
the importance of human construction for coming to know the world (Bentham may
have been the first, but Vico also expressed a similar view), but he is the
first to devise a completely new philosophical schema within which this might
occur. His fundamental question, building on the distinction between the
analytic and the synthetic, is how synthetic a priori propositions are
possible: how can something which is constructed from experience be known without experience? This was an important question because it concerned
the knowledge of God: if synthetic a priori knowledge was impossible, how could
God exist? In suggesting an answer, he postulates that human subjectivity must
organise perceptions into “categories” of thought. Through the categories Kant
was able to justify and analyse the way in which synthetic a priori knowledge
and other forms of knowledge were possible. He concluded that knowledge of the
world must emerge from human beings, but that human understanding must be
constituted in a particular way so as to reach the conclusions about the world
with which we are all familiar. This was a metaphysical proposition: the
‘Transcendental subject’ of categorical understanding, which could only be
inferred by the way the world was.
In proposing the transcendental subject, Kant made a key assumption: that the regularities of the world which Hume had referred to, and which were fundamental to synthetic understanding, were a necessary attribute of the world. This so-called 'natural necessity' was itself challenged by Hume, who could see no reason why the world should exhibit regular successions of events if causal mechanisms were human constructs. The transcendental subject was a dual metaphysical assertion: an assertion about the world and an assertion about consciousness. It is this dual assertion upon which Husserl built when devising his phenomenology. 2nd-order cybernetics disagrees with Kant on the question of natural necessity: by substituting mechanisms of understanding for Kantian categories, the cyberneticians see reality as constituted through interacting communicative processes.
A contrasting approach to Kant is to uphold his view on
natural necessity, but to reject his view of the transcendental subject. Bhaskar
upholds a different kind of transcendentalism based on the question “Given that
science is possible, what must the world be like?” This is to re-ask the
Kantian question following Hume with the benefit of hindsight of 200 years of
scientific progress. In upholding natural necessity, Bhaskar rejects not only Hume's rejection of it, but also Hume's assertion that causes are constructs. In arguing instead that causes are real, inherent in the nature of the world, and that science's job is to discover them (not create them), Bhaskar paints a picture of reality which is very different both from the cybernetic view and from Hume's and Kant's subjectivist views. The concept of mechanism plays a fundamental role in this, with Bhaskar making a key distinction between transitive and intransitive mechanisms: those mechanisms which exist through human agency, and those which exist outside human agency. In articulating an argument that welds Aristotelian and Marxist thought with Kantian transcendentalism, Bhaskar argues for a dialectical materialist logic that is fundamentally oriented towards emancipation. From this perspective, the cybernetic view is attacked for not inspecting its ontology: it suffers a linguistic reductionism which excludes causal factors which must, in Bhaskar's view, be considered if one is to account for reality. The most important of these is absence. Bhaskar's philosophy suffers similar problems of privileging mechanism to those the cybernetic viewpoint is subject to; however, his highlighting of the reduction to language, and of the importance of absence as a cause, helps him and others (including Smith) to focus on the concreteness of human experience and its transcendental implications for the nature of the world, rather than on a dual transcendentalism of the world (natural necessity) and the subject.
The contrast between these positions presents three critical
areas for inspection which form the nexus of the problem space concerning the
different varieties of functionalism. On the one hand, there is a critique of actualism from Bhaskar, which focuses on the causal power of absence; on the other hand, there is a critique of Kant's correlationism from Meillassoux and Badiou, which focuses on whether or not there is natural necessity. This broad area of disagreement then boils down to two concrete issues: the nature of real agency and ethical behaviour, and the nature of real people.
1. The Correlationist/Actualist Problem
What opens up is a vista on possible ontologies. The
differences between positions can be characterised as to the extent to which
ontologies assert natural necessity or not (in the language of Meillassoux,
whether they are 'correlationist' or not) and the extent to which ontologies tend towards temporal, mechanical descriptions which are available to synthetic knowledge, with 'fit' as the driving force, as opposed to logical, analytic descriptions with truth as the central feature. To identify a range of
ontological positions is not to relativise ontology; it is instead to find the
resources to situate possible ontologies within a meta-ontological framework.
To begin with, a simple table of the positions under
consideration can be constructed:
Nature \ Subjectivity | Transcendental subjectivity | Concrete subjectivity
Natural necessity | Kantian transcendentalism; Pragmatism | Critical Realism
Natural contingency | 2nd-order cybernetic epistemology | Badiou; Meillassoux
This pinpoints the difference between Kantian ontology and cybernetic ontology, so far as the assumptions made about the nature of the world are concerned. The cybernetic epistemology holds to no particular stance on the nature of reality; there is no natural necessity. However, it still upholds Kant's transcendental subjectivity, albeit with a mechanistic flavour rather than one of categories. The problem for the cyberneticians is that their ontology presupposes time as the fundamental requirement for mechanisms of variety management. Kant's philosophy, on the other hand, does not describe time-based mechanisms. Having said this, the end-result is the same: what Pask and Luhmann describe is a transcendental subject.
The problem of time marks the divide from those approaches which attempt to avoid transcendentalising the subject. In Bhaskar, the idea of mechanism plays as important a role in his philosophy as it does in cybernetic thought. The apprehension of mechanism as a concept appears to imply both natural necessity and some degree of subjective transcendentalism: it is natural necessity which determines the regular successions of events which humans can interpret as "mechanistic" through the categories. If there is no natural necessity, as the cyberneticians claim, an alternative to mechanistic description needs to be suggested. Badiou and Meillassoux both argue (in different ways) that reality is essentially contingent: people have bodies, they use languages, and the world presents 'events' to them. The process of making sense of events, for Badiou, is a process of apprehending truths. In Badiou, mechanism is rejected in favour of analytical (i.e. not synthetic) knowledge.
In this actualist/correlationist conflict between possible
ontologies, the cybernetic transcendental person fits uncomfortably. Cybernetics
rejects natural necessity on the one hand, only to infer it in its embrace of
mechanism. This makes its transcendental assertions problematic. The cybernetic
subject does not appear to be a real person. Instead, it is a machine which
processes communications: in Laurillard’s model, teachers and learners process
each others’ communications in the light of engaging with the world. Learners’
descriptions of their learning are used by the teacher to gauge what they might
do next. It is hard to see why anything would matter to anyone in this situation. It is hard to see where either
a teacher or a learner might become passionate about what they teach. With everything reduced to language and coordinating mechanisms, why bother with
the process at all? Is it possible to have a model of learning without a model
of human care which must be its fundamental prerequisite? To understand this,
we have to address the question of agency and why things matter to people in
the first place.
2. The problem of action and ethics
Teaching, parenting, caring, empathising and listening are
activities carried out because they matter to us. How does cybernetics explain
why a person would risk their life to save others? How does it explain why a
person would wish to make art or compose music? How does it explain our
conscience? Or why people fight each other? In each case, the rational
ascription of function to the components of a mechanism (even if it is a
conversational mechanism) leads to what Bhaskar calls a ‘flattening-out’ of
being: reality becomes a rationally-prescribed mechanical process. What is
missing is the acknowledgement of the ‘real’: what it takes to make a ‘real
person’.
The causal connection between the speech acts of teaching and teach-back and the dynamics of engaging with the context of learning sits within a deeper context which is unspecified. In Bhaskar's terminology, the correlationism of mechanism to transcendental subjectivity might also be called 'actualism', in that such models describe what can actually be said to exist (supported with evidence) and suggest mechanisms that are deemed to be actually operating in order to produce the observable effects. Here we see the problem
in its essence: learners and teachers have personal histories which will affect
the ways in which they interact; much is communicated in ways beyond direct
linguistic or bodily utterances; the joy and laughter of learning are absent.
Because functionalism, whether shallow or deep, tends to box in real people, problems are encountered when it is used to coordinate action. Arguments are put forward as to why such and such should be done, or warnings given of the dangers of doing something else. From the historicist rhetoric that much neo-Darwinian thinking inspires, to the deep cybernetic arguments for global ecology, ought is the word that tends to dominate. The issue here concerns another aspect of Hume's thinking: his argument that an "ought" cannot be obtained from an "is". For Bhaskar, this argument is wrong because Hume's ontology was wrong. Bhaskar argues that oughts are derivable from is's: indeed, the emancipatory axiology of his critical realism is precisely determined by navigating the space between is and ought.
What is agency as an ethically-driven process of engaging
with the world? Behind any model is a kind of instrumentalisation of the world
and instrumentalisation of the engagements between people. The phenomenology of
action does not render itself amenable to analytical probing. If agency were to be characterised as ultimately ethical, and judgements about the rightness of action were held to precede any action, then a distinction would have to be made between the agency of human beings and the agency of any other kind of entity (like a robot). It would also entail a characterisation of ethics which excluded the possibility of an artificial ethic.
The question of agency and models is therefore a question
about ethical judgement and action. With regard to ethical judgement, there are
a number of ethical positions which philosophers identify as conceptually
distinct. The ethical position which might be modellable in some way is the
position labelled as ‘consequentialist’: this position considers that agents
act through some kind of benefit calculation, either for themselves or for the community as a whole. It is conceivable that a model might be able to characterise a range of such calculations: von Foerster's ethical principle, "always act to increase the number of possibilities", is an example of a "machine ethic" which would work under these circumstances. However, other ethical positions are less easy to model.
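To see what the modellable, consequentialist option amounts to, here is a deliberately crude sketch of von Foerster's imperative as a 'machine ethic' (the actions and option sets are invented for illustration): an agent that scores each action by the number of possibilities it leaves open and picks the maximum.

```python
# A crude consequentialist 'machine ethic' in the spirit of von
# Foerster's imperative (the scenario is invented for illustration):
# score each action by the number of options it leaves open.

def choose(options_after):
    # options_after maps each action to the set of follow-up options
    return max(options_after, key=lambda action: len(options_after[action]))

options_after = {
    "lecture":   {"test"},
    "seminar":   {"discussion", "test", "project"},
    "worksheet": {"test", "revision"},
}
print(choose(options_after))  # -> 'seminar', the choice keeping most open
```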
3. The problem of the person and the new problematisation of the subject
Models are abstract, yet we know life through our constitution as persons. Individual understandings emerge against the backdrop of personal experiences, attachments, motivations and so on. Kant's transcendental subject, Pask's P-machine and Luhmann's transpersonal communication constructs are united in the absence of real people with these rich dimensions of experience. Modelled agents serve as cyphers of real people, as vague predictors of real behaviour. Yet real people are usually remarkably richer in their response to the world than any model suggests. The failure of econometric calculations about utility functions, agent-based modelling, rational economic behaviour and so on is that they ultimately fail to predict the nature of human agency. All these models are constrained by the variables they consider, and exclude many other variables which in the end are only revealed through practice and experience. The fundamental question that arises from this inability is whether a naturalistic inquiry into the behaviour of individuals is at all possible.
The failings of deep and shallow functionalism in characterising real people ultimately rest on what Smith calls 'variables sociology'. However carefully the independent variables are chosen for a scientific explanation, the dependent variables appear to change according to values other than those defined as 'independent'. This is the weakness of trying to model individuals as rational agents selecting an action from a set of options. Mechanisms describe what can actually be perceived (by some means or other, even if it is through an agent-based model), but reality extends beyond what can actually be identified, to include what isn't there.
The idea of absence being causal is not restricted to
sociology. In cosmology, we popularly understand ‘dark matter’ in the universe
as representative of the physical causal forces which must exist for the
universe to be consistent. The challenge is to try to find a method whereby
absent causes may be unearthed. Bhaskar attributes the identification of
concrete absence to the dialectical and methodological process of science
itself. This means that a naturalistic inquiry into education is a methodological pursuit with the purpose of determining absences. It is by this pursuit of absence, Bhaskar argues, that science itself was possible in the first place. In this way, observations are made and mechanisms suggested to explain them, with successful mechanisms being used as a foundation for further investigation and false mechanisms rejected. However, whilst Bhaskar's position
seems deeply sensible, and certainly has formed the foundation for deeper and
more sensible thinking about the nature of personhood, what matters to people,
and so on, there remain (as we have discussed) problems with the idea of
mechanism and the assumption of natural necessity.
Meillassoux’s alternative ontology contrasts with Bhaskar’s because it doesn’t accept natural necessity as its premise. Meillassoux also begins with Hume, and asks about the conditions under which scientists reach conclusions about the nature of the world. Meillassoux’s conclusion is that the Humean practice of constructing causes in the light of event regularities was in fact a process of determining expectations, or the probabilities of events. However, to assign a probability to anything presupposes that the total number of possible events is calculable, and Meillassoux points out that this can’t be the case. This is, in fact, a restatement of Bhaskar’s claim that empirical event regularities are only produced under the conditions of closed-system experiments, and that causal inference cannot rest on regularity alone, since causal mechanisms continue to hold outside the experimental conditions.
Instead of making a transcendental claim about natural necessity and the ontology of causes (that causes are real), Meillassoux appeals for an ontology of truth, arguing that a natural ordering of the world, revealable through mathematics, is at work in the scientist’s specification of the probabilities of events, and that the empirical process amounts to the bringing together of an idealised natural order with an observed natural order. What matters in scientific inquiry is an encounter with ‘events’ of life where the ordering is made explicit. Badiou pinpoints four kinds of events: science itself (and mathematics), art and aesthetic experience, love, and politics and the experience of justice.
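Meillassoux’s observation about probability can be put in miniature (a Python sketch, purely illustrative): a probability is only computable relative to a totalised set of outcomes, and for the world as such no such totality can be supplied.

from fractions import Fraction

def probability(event: set, sample_space: set) -> Fraction:
    """Classical probability: undefined until the totality of outcomes is given."""
    if not event <= sample_space:
        raise ValueError("event contains outcomes outside the sample space")
    return Fraction(len(event), len(sample_space))

die = {1, 2, 3, 4, 5, 6}
print(probability({2, 4, 6}, die))  # 1/2 - fine, because the totality is enumerable

# For the world as such, no analogous `sample_space` can be constructed:
# there is no enumerable set of all possible events to pass in.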
Both intellectual moves avoid transcendentalising the person, as is attempted in cybernetics and other kinds of functionalism. Instead each transcendentalises a natural order which can be articulated either through mathematics (Badiou) or through causal mechanisms (Bhaskar). Science proceeds on the basis of positing the natural ordering of things, and engaging in dialogue with the orders of things as revealed through events. Bhaskar’s and Badiou’s positions are related in a number of ways. First, they both articulate the importance of politics as a driver for social emancipation, although Badiou and Meillassoux see other drivers in art, love and science. Secondly, the assertion of truth by Meillassoux and Badiou is really the assertion of a dialectical mechanism, similar in force to Bhaskar’s dialectic as an emancipatory force. Finally, Bhaskar’s causally powerful absences become Badiou’s events themselves: events which are not really ‘there’, but which, through the transformations they effect, reveal aspects of a natural order previously undetermined.
Persons have bodies and languages, but they also perceive
truth which is revealed to them through events. The weakness of the Paskian
model is that it appears that only language is explicitly recognised (although
one might suppose that bodies are perhaps implicit in the interactions between
the teacher and the learner). Pask commits neither to truth (which would satisfy Badiou and Meillassoux), nor to natural necessity and a materialist dialectic of emancipation (which would satisfy Bhaskar). What emerges
is a flat ungrounded model with little explanatory or predictive power, but
with some strong and essentially unprovable metaphysical assertions. Is a more
naturalistic position possible?
Addressing the Naturalistic Gap in Education
The process of determining the function of components
remains fundamental to scientific inquiry. The speculation of causal mechanisms, the guessing and imagining of “what might be going on”, sits at its heart. The methodological question in education, and in the social sciences in general, is the extent to which the practices of those who study what’s going on in education relate to the processes of those who study the behaviour of sub-atomic particles or the origin of the universe. Different
ontological stances present different accounts of these processes. Bhaskar, for
example, introduces his distinction between the transitive and the
intransitive domain to account for the possibility of creating closed-system
experiments in the physical sciences, together with the social processes of
conjectures and refutations, paradigm shifts and so on which characterise the
social dimension of science. Bhaskar’s ‘possibility of naturalism’ rests with
an ontological grounding of inquiry in the social sciences which situates
social mechanisms of reproduction and transformation of social structures alongside material causal mechanisms. By this logic, all science – not just social
science - is politics; it is the political which sits at the heart of his
naturalistic ontology. However, Bhaskar’s ontology, as we have seen, is one
which rejects Hume’s scepticism about reality, and upholds natural necessity in
the form of the intransitive domain and the reality of causes.
The alternative possibility of naturalism rests with Badiou
and Meillassoux, who point to an ontology of mathematical truth. The purpose of
naturalistic inquiry – whether it be in science, mathematics, art or in the
various dimensions of human relations – is to uncover truths relating to the
ordering of the world. By this logic, Hume’s scepticism about causes is upheld;
his regularity theory of causation is reframed as a statistical theory of human
expectation, the nature of the difference between physical experiment and social
experiment being one of different orders of expectation whose logic is
potentially knowable.
Both these positions articulate a positive vision for the
social sciences. Both demand progressive closure of the gap between theory and
practice, openness to refutation of theory, and a fundamental grounding in
political reality and the concreteness of persons. Both have methodological
processes by which they might achieve their aims. In Bhaskar’s case, perhaps
the most widely deployed methodological approaches are Pawson and Tilley’s Realistic Evaluation and Mingers’s multimethodology. Both these methods are
effectively meta-methods which seek to critique the different methods used for examining
what happens, so that not only the results of experiment are examined, but so
too are the implicit ontologies lying behind the methods themselves. Both
techniques privilege causal mechanisms and tend to avoid the less mechanistic
and more subtle aspects of Bhaskar’s later philosophy. In practice, mechanistic
descriptions can be hard to reconcile with one another, since they articulate different processes at different levels of abstraction.
The alternative is a logico-empirical movement which was
first suggested by Elster, who combined measurement with the representation of
different social theoretical statements using rational choice theory and game theory
(before later claiming that rational choice theory was deficient). Badiou and Meillassoux’s ontology presents an
alternative which combines mathematical analysis and measurement. One of the
key problems they address is the way in which different social theories may be
represented and compared. Taking the view that different social theories are
different descriptions of social ordering, Badiou’s utilisation of the mathematics of set and category theory presents the possibility for the
representation and evaluation of different social theories.
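To give a flavour of what such formal representation might look like in the spirit of Elster’s move, here is a hypothetical Python sketch (the payoffs are the textbook prisoner’s dilemma, not empirical data): a social-theoretical claim such as ‘individuals will cooperate’ is rendered as a game, and rational-choice reasoning is applied to it.

# payoffs[(my_move, their_move)] = my payoff (textbook prisoner's dilemma)
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}
MOVES = ["cooperate", "defect"]

def best_response(their_move: str) -> str:
    """Rational-choice prediction: maximise own payoff given the other's move."""
    return max(MOVES, key=lambda m: PAYOFFS[(m, their_move)])

for theirs in MOVES:
    print(theirs, "->", best_response(theirs))  # defection dominates either way

# The formalised theory predicts mutual defection; if cooperation is what is
# actually observed, it is the formal representation that is put in question.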
Whatever the prospects for these different positions, the need
for a metatheory about theorising and empirical practice within the social
sciences, and particularly education, seems beyond question. There are likely
to be many possible metatheories, and there are likely to be a number of
different possible ontologies upon which they sit. The space for theoretical
development in education is a space of possible world-views – not just views
about the nature of education, or the purpose of education – but fundamentally
different views on what it is to be human, and on the nature of the world.
Against this background, the modelling of education through cybernetics or
other kinds of functionalism becomes merely a way of creating new kinds of events
in the light of which we hope to learn new things. The point of the technologies in education whose failure Laurillard bemoans was not, and could never have been, implementation. It was illumination.
Conclusion
Functionalism’s dominance in modern culture rests on its
unique position as the ‘solution-finding’ paradigm. What I have hoped to
articulate here is that functionalism’s solutions are invariably deficient, but
that functionalism’s strength lies on the one hand in its ability to coordinate
action, and on the other in its capacity to provoke new questions. When we devise new interventions
in education, be they pedagogical or technical, we create a new “theatre of
events”. What matters is the effect of those events on the practice of people
witnessing them. New interventions might be devised in the light of
functionalist theories (either deep or shallow), but provided participants are open to asking deeper questions about the world in the light of the events that
occur, rejecting or refining theories as they go, then critical scientific
advances within education ought to be possible. However, this proviso is a very
big one. In reality, individuals tend to become attached to explanatory theories as a means of articulating explanations and designing interventions; when those interventions don’t work, individuals do not abandon their theories, but instead assert them more strongly, blaming factors in implementation rather than poor theorising.
Here we should inquire into the relationship between the properties of theories which invite uncritical or dogmatic acceptance, and the properties and tendencies of the individuals who accept them. Do totalising theories attract people who are less likely to let
go of them? I have attempted to show how the totalising ambitions of cybernetic
description are ungrounded and that whilst cybernetic ideas can raise very
powerful questions about ontology, they sit uneasily between philosophical
positions which are fundamentally incompatible. It may be that the first step
to dealing with over-attachment to totalisations is to unpick the totalisation
and highlight the tensions contained within it. However, on its own, this is
unlikely to be enough: it is merely a discursive intervention, and suffers from the
same idealisation of the person as the theories it critiques.
Real people in education have real feelings, real histories,
real ambitions and real fears, and it is likely to be only through particular circumstances that any progress might be made. The challenge is to create the conditions under which authentic interactions between real people can occur, in such a way as to sustain the continual asking of questions about the nature of what we attempt to do in education. Laurillard hints at this in her book:
“only real teachers can solve the problems of their students”.