Wednesday, 4 May 2016

Denotation and Connotation in Learning Analytics

One of the criticisms of current data obsessions is the way in which analytic results appear to 'shout science': in other words, to make a claim for objectivity from collective subjectivities. Science is held to be the domain of objectivity - of denotative claims: the realm where one points at something and says 'it's clearly moving at speed x, with momentum y' and so on. The subjective world is a world of connotation and interpretation: meaning arises between things. 'Shouting science' makes denotative claims about connotative processes. In the final analysis, this is an act of oppression: it can only impose an 'objective' judgement on those whose connotative processes might not only come to a different conclusion, but almost certainly come to a different conclusion in the additional light of the imposition of analytic claims to truth.

What's wrong here is not data analytics itself. It is our view of science which is mistaken. To put it quite simply, 'objective' judgements are not indicative of stable entities in the world to which labels can be attached. They are indicative of coherences of understanding and expectation around which action can be coordinated. Zebras have no word for 'lion', but there is a coherence of expectation among zebras that coordinates action when a lion is spotted. We see the effects of these coherences in the regularities of behaviour which occur around particular entities: lifeworld entities like 'lions' exhibit a universal constraint on behaviour rendering it surprisingly predictable.

The question, with regard to data, is whether there is some indicative index of this coherence of understanding. But the same logic applies. Such an indicative index, were one to be available, is itself indicative of coherences of understanding and expectation around which action can be coordinated. However, when things get more abstract, deeper problems set in. A declaration of an index by a powerful person can create a coordination of behaviour driven by understanding and expectations about the constraints of power dynamics, not the index itself. This is where false consciousness begins - and misleading representations of science.

If learning analytics is seen as a set of creative re-representations of things that happen in education, then it can make a more modest claim to add to the numerous descriptions of educational processes we already have. It is another descriptive layer in our connotative process. In a way, it is rather like how poets describe things.

Understanding is built up from the accretions of references. So too might we build up an understanding of learners through accretions of representations of what they do. The Facebook analytical graph is one of many possible descriptions of interactions online. It may be that such representations are important because so few alternative descriptions of online behaviour are available. In face-to-face communication we have bodies, sounds, movements, smells, touch and so on. Each contributes a layer of the connotative experience. Is it any wonder that online we feel the need to create this diversity of description?

But if we then assert these descriptions as denotative, a distortion occurs. And we only do this because we think this is what science does.

The scientific question concerns the generative power of the imagination and the discovery of constraints that nature imposes on imagined mechanisms. Codified mechanisms are coordinators of discursive processes. That might be the beginning of organised attempts to find those mechanisms in nature: the Large Hadron Collider is a good example. When some mechanisms are not found, the natural constraints which prevent them can also be coordinated. It's not finding the Higgs Boson which is important; it is identifying those speculated mechanisms which cannot be found.

Our research approach to education would look very different if we applied this approach. We would identify the constraints within which theoretical constructs are upheld, and where they don't work. Most importantly for the data analysts, the appreciation of constraints requires the accretion of many different descriptions. It is in exercising a connotative judgement that understandings between us can be coordinated.

In a deep way, science IS education.

Thursday, 28 April 2016

Redesigning Educational Technology from the Person Up

I've just seen a series of posts about Institutional VLEs giving access to previous modules that students have studied. Many institutions do this (although few allow access after a student has left), but it is a bit of a technological headache: 'courses' are rolled-forward to create new instances which are populated by new students, where the old instances (with all the previous students' work in them) are maintained too. It's a headache because the 'course' and its associated 'cohort' has become a monolithic chunk in the VLE as they are in institutions. The individual person doesn't care very much about courses except that they contain particular units of assessment which they have to pass, and that they have to remember that course x was where they put that really cool forum post which they like to refer back to from time to time. But there are so many courses and so many assessments - and apart from differences in teacher and content, they are not that different from one another.

If a person wants access to a previous module, it is likely they either want access to their previous transactions in their education, or they might want access to the transactions of the teacher - particular content which was uploaded as part of a module. This is, after all, much like how we use social media today - reposting, retweeting, commenting and so on. The social media giants store our transaction records and it is these which we search. Any Gmail user knows how powerful it is when every transaction over many years is stored and becomes searchable. Of course, the downside is that Google, Facebook and Twitter mine our data - more on that in a minute.

Why do we have courses anyway? And why has educational technology adopted the course metaphor? There's no reason why it should be like this apart from the fact that institutions have always done education like this: the issue is historical - it is because educational technology grew up around the face-to-face institution. But if we take the learner's transactions with teachers and peers as the building block rather than courses, things start to look different. First of all, learners don't all have to begin at the same time: this is one of the biggest constraints bearing on education.

A person's engagement with education begins long before they enrol on a course. Course enrolment is a transaction end-point of a process which begins with the person trying to decide on which direction to take their life. The educational journey starts in conversation, not lectures.

Early discussions are incredibly important and they happen on a one-to-one basis, not just with teachers but with family and peers. The transactions that flow from these discussions shape everything that comes next. Indeed, the idea of self-determination is probably incorrect: there is a kind of conversational alignment which produces a direction. In education, little time has ever been spent thinking about these "what do you want?" discussions: apart from being complex and individualised, they simply aren't what educational institutions are about. Educational institutions say to a person "You want this!" and try to make products which can lure them in.

Why doesn't our learning technology start here? Why doesn't it start with the person's search? Why haven't we got tools to support this engagement? Why don't those tools then allow for the gradual emergence of an idea of a path of study? The simple answer is, this is not what our educational institutions are about. Up to this point, our attempts to change institutions with technology have ended up with institutions reinforcing their practices. Something more radical is required. I think we have the tools to do this. We just need to refocus back onto the person. (And perhaps we should talk less about learners and more about persons!)

Tuesday, 26 April 2016

Online courses and Zones of Proximal Development

I've been doing some analytic work on some large-scale online courses. As with most online courses, the forum is the central area of learner engagement, and like most online courses, what actually happens in the forums isn't conversational in the way that we would think about conversation in a face-to-face sense. If an online forum was a face-to-face event, it would be like a 'show trial': witnesses would be wheeled out one after another, after the main evidence has been heard, and each would testify that this evidence was what they personally experienced (agreeing with previous witness statements as they go). Occasionally, a witness brings new evidence to the attention of everyone else, but this hardly causes any kind of radical shift in everyone else's mindsets.

These courses are successful, and learners pay a lot of money for them. The content they are given is well-produced and presented, and the quality of information, and the quality of staff is very high. Typically, if a course has a lot of posts where learners have written a lot, most people are happy to say that the course is successful. But we should ask how the teacher would know if the learners have really 'got it', or how learner capability has been transformed. In the face-to-face environment teachers get a feel for this. Online it's much harder.

It's harder because the 'feel' that the teacher has about face-to-face learners is a sense of how they might intervene with a learner in new ways, and how the learner might respond. As the learner develops, so the teacher feels freer to try more ambitious things - not because it is determined by the curriculum, but because the teacher knows the learner will respond to it. And the teacher knows who to try this with, and who not to. It's never about only the learner's capability - it's always about the relation between the teacher and the learner.

The relational focus is significant and something that's missed in most education. The teacher's intervention in learning can be thought of as a way of increasing the maximum possible uncertainty that both learner and teacher have to deal with. It's introducing a degree of confusion into a situation which causes development. Surprisingly, this increase in uncertainty takes the form of increasing the constraint on the learner's behaviour. So the music teacher will say, forget about playing pieces - practise your scales! The learner is disrupted from routine, and (in this case) torn between practising pieces and playing scales. All of us settle into our environments - albeit uncomfortably - and find it difficult to step outside them. Teachers impose new constraints which disrupt patterns of practice and cause the need for new adaptation.

Vygotsky's notion of the Zone of Proximal Development is closely related to this. Essentially, the ZPD is a domain of interacting constraints, where one of those constraints is imposed by a teacher. It has a mathematical analogue in the relationship between the uncertainty which people live with, and the maximum uncertainty which is possible within an environment. In any environment, skill and mastery reduce the uncertainty we live with: our behaviour becomes regularised and less erratic. The maximum possible uncertainty is dependent on the environment, which is changed by the addition of the teacher's intervention. The ratio between lived uncertainty and maximum uncertainty is an index of constraints within which learners self-organise. If teachers intervene to increase constraint in one dimension ("practise your scales"), the effect is to decrease constraint in others: learners become less certain of what they were doing. Importantly, teachers can overdo it: an intervention can increase uncertainty to the point that everything falls apart: this is why the ZPD is a 'zone'; equally it can be 'underdone' where little development is achieved.
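The ratio described here has a simple information-theoretic sketch. If (as an illustrative simplification, not a claim about real analytics data) we treat a learner's observed behaviour as a sequence of discrete actions, lived uncertainty is the Shannon entropy of that sequence, and maximum uncertainty is the entropy of the environment's full repertoire of possible actions:

```python
import math
from collections import Counter

def entropy(events):
    """Shannon entropy (bits) of an observed sequence of behaviours."""
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def constraint_index(events, n_possible):
    """1 - H/Hmax (Shannon's 'redundancy'): high when behaviour is
    regular (strongly constrained), low when behaviour is erratic."""
    h_max = math.log2(n_possible)  # maximum possible uncertainty
    return 1 - entropy(events) / h_max

# A settled learner repeats a narrow routine drawn from 8 possible actions...
settled = ["a", "a", "b", "a", "a", "b", "a", "a"]
# ...until a teacher's intervention disrupts the pattern.
disrupted = ["a", "c", "f", "b", "h", "a", "e", "d"]

print(constraint_index(settled, 8))    # high: behaviour regular, strongly constrained
print(constraint_index(disrupted, 8))  # low: behaviour erratic
```

On this sketch, the teacher's intervention shows up as a drop in the index: the learner's behaviour moves closer to the maximum uncertainty of the environment, which is exactly the 'zone' in which over- and under-doing the intervention can be distinguished.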

Most of our learning technologies have been built around an idea of conversation as a pedagogical foundation. This model does not fit what we actually see in online learning environments. Perhaps we shouldn't exclude the possibility that a radically different kind of online environment is needed to help teachers acquire the same feel for the effects of the constraints they impose on learners at a distance as they have in a face-to-face environment. The deficiencies of our current online environments have almost persuaded us that an alternative isn't possible.

Monday, 25 April 2016

Learning Analytics: What we have and what we need

It's an unfortunate tendency of educational technology to take a form which reinforces existing practices in education. So we have the 'giant photocopier' of the VLE, the 'giant classroom' of the MOOC, 'giant assessments' of MCQs, and increasingly 'giant marking' in the form of automatic essay assessment and plagiarism detection. As we've become beholden to the problems that technology is able to solve (and which it consequently creates), our institutions have been drawn into a 'production line' mindset where every student is cash, and we have lost sight of what we actually need.

With the dismal application of computers in upholding ancient practices (and power structures), the greatest tragedy is that we have lost trust in computers: they are forever tarnished with ramping up bureaucracy and surveillance and diminishing freedom.

Standing back to consider what we need in education means standing apart from the madness that has become education. When we look at the real problems, they are not expressed in terms of marketisation of institutions, or problems with retention. Rather they are expressed in terms of social alienation, inequality, lack of opportunity, fascism, terrorism and the collapse of trust in institutions. Somehow we have to look at these problems.

Then we should turn to our tools. What tools do we now have? The simple fact is that the technologies we have today are amazing: they are far more powerful than our parents had at their disposal when dealing with their complex problems. Moreover, they are cheap and ubiquitous. Whilst big data has its downsides (which Wikileaks can attest to), machine processing of data, whilst not intelligent, performs not far behind the dreams of science fiction.

The point is, if these technologies were applied to real problems, rather than upholding ancient institutional structures and vested interests, how much better could we make the world?

I think communication and teaching and learning would be top of the list if we were to get to the heart of where things currently go wrong. Communication is a more complex process than the sending of tweets or emails. The quality of intersubjective understanding and empathy which emerges when we engage face-to-face occurs under conditions which are poorly understood. The 'dance' of engagement between a teacher and a learner, as the teacher tries to work out where the learner's "blockages" are is something that is far more difficult to coordinate at a distance. Why? What might we do about it?

Are there not things we can do to amplify latent signals in communication which might give a teacher a better indication of where specific pedagogical blockages might be? Are there not things that we can do to help the teacher manage the diverse complexity of many learners with many different needs? The answers to these questions are not just technical. They require a rethinking of pedagogy too. But our technologies can give us deeper insight into each other if we let them.

Learning analytics today seems concerned to keep learners on courses, or to help the design of new courses where learners are less likely to drop off (or out). But just because learners stay on a course doesn't mean that good education is going on (the best educational action might be a decision to leave!). Analytics is post-hoc and market driven - and increasingly keeping learners on courses means keeping teachers in chains.

Education is relational. We could use our technologies to help us monitor (in real time) the health of our relations - much as ecologists now monitor the health of ecosystems. If we attempted this, we might start to question the ecological relations between students, institutions, markets and governments. 

Monday, 18 April 2016

The PLE and the Institution of Technology

Personal tools for learning are everywhere. We have mobile phones, tablets, and many of us use social media to keep abreast of and contribute to domains of knowledge which interest us. Institutional champions of social software and personal technology continue to encourage greater integration between personal tools and formal learning (sometimes under the banner of the ‘Personal Learning Environment’), or to encourage greater ‘literacy’ about technology, but the precise nature of this advocacy is unclear: it is clearly not that learners do not use the internet, smart phones or social software; it is rather that there is a gap in the purpose and nature of the interactions and expectations which occur between what is recognised as formal learning and what occurs informally. Part of this gap concerns the nature of the transactions that occur between learners and institutions, and between learners and technology corporations. In formal education the transaction typically concerns the completion of assignments in exchange for grades from a tutor, and a certificate at the end of a course. With personal tools and social media, the learner consciously participates in transactions with friends and others, whilst (perhaps less consciously) engaging in a transaction with the social media corporations who provide the means of communication and who, in return, target advertising. Both educational establishments and technology corporations are institutions, and with both there is a pattern of transactional engagement.

The relationship between the Personal Learning Environment and institutions is poorly inspected: rhetoric which overstated the case for de-institutionalising educational technology took a simplistic view of both institutions and technology. Technology was seen as a counter-institutional force in education which could overcome barriers to learning created by rigid institutions. Originally conceived as a critique of the explosion of technology in formal education, the PLE argued that many institutional learning technologies became barriers to learning rather than enablers: each new tool provided new interfaces, often new passwords, new functionality and so on, and in order to proceed with their learning, learners had to negotiate increasing technological complexity. The argument was to shift the locus of control of technology towards the learner: wouldn’t it be better if different services from different sources (for example, communications services from different providers) could be integrated by learners, and the barriers of different interfaces addressed by having standard approaches for integrating and managing services? Students should be able to bring their own tools to their learning. Over the years, many aspects of this technological argument have been vindicated by technical developments. Mobile platforms now feature service integration: messaging tools, calendaring tools, media services and other facilities are today provider-neutral on most mobile platforms. Moreover, the adding of new services has, in the way that was reflected in PLE prototype tools like PLEX, become a standardised and easy process – often involving little more than installing an App. App stores themselves have simplified the process of increasing the number of services individuals can coordinate. And perhaps most importantly, educational institutions have made their technologies available as Apps which can be more easily coordinated with other social media tools. The arguments about standardised technological coordination have largely been seen to be correct.

Before 2005 and the social software explosion, the argument that social technology was counter-institutional was clearer than it appears today. Experiments in peer-to-peer learning technologies like Colloquia (a server-less VLE) certainly provided opportunities for learners to take control of their technologies, allowing them to self-organise by creating their own courses, establish private groups and coordinate their own tools either with or without a teacher. However, with the advent of centralised social media which exploited Service-Oriented architectures to drive technological flexibility, the powerful affordances of these technologies gained mass following and approval by promoters of the PLE. Where peer-to-peer technologies like Colloquia had no owner, Facebook and Twitter were corporations. This meant that the distinction between a counter-institutional technology and the institutional rigidity of universities became a murkier battle where distinctions were harder to draw.

For this reason, the question about institutions is at the heart of the PLE where once there was a question about technology. Here there are two aspects to the question. On the one hand, there is a question about the institution of education - how it operates, and what it achieves. On the other hand, there is now a question about the institution of technology and the way that transactions are managed between learners and social media corporations. What is the relation between a person and the various institutions with which they engage as they move through the world? Institutions dominate life: the institution of education, the institution of health, the institution of the family, the state, public services, the media and so on. Each of these makes demands on learners – from simple transactions for the payment of services, to more complex transactions to uphold trust and commitments. Technology mediates these transactions, and technologies themselves are controlled by institutions. So a clearer definition of an institution, and particularly a clearer understanding of the institution of technology is required.

Institutions and the PLE

Our principal focus is the identification of the institution through the study of transactions that individuals have with them. The philosophy of institutions has a long history, but the institution of technology is relatively new. It would not have occurred to Aristotle to consider those aspects of ‘techne’ as institutional in the way that he regarded the institutions of state. Among more recent perspectives, the view that institutions are human constructs is common: institutions exist through their continual reproduction and transformation by humans; if, as Bhaskar tells us, humans cease to exist, then institutions cease to exist: they comprise what he calls the “transitive domain of reality”. Yet this point is complex: Bhaskar argues for the continued existence of the Sun and the stars in the event of human annihilation – aspects of what he calls the “intransitive domain of reality”. Upholding the reality of his "intransitive" domain, Bhaskar upholds a separation between institutions and humans, between social structure and human agency. This separation is controversial. Giddens also maintains a distinction between structure and agency, but maintains that institutions do not have a separate existence beyond human minds: institutions are constructed in discourse. Searle’s more recent social ontology has developed a similar position, arguing that institutions and other social phenomena exist by virtue of ‘declarations’ of ‘status’ and ‘function’ by powerful actors in society, and a broad acknowledgement of ‘collective intentionality’ which upholds it. Both Giddens and Searle adhere to what Archer calls ‘elisionist’ philosophy in conflating mind with social structure, a more extreme example of which is contained in the sociomaterial philosophy of Latour, Barad or Orlikowski.
Here the quantum theoretical notion of ‘entanglement’ is used to articulate the difficulties in separating mind from nature, and in sociomaterial applications to education, technology from learning.

Whether institutions are separable from minds or not, they clearly have important effects. Institutions declare laws, provide healthcare, employ people, grant degrees, make products and provide services. To consider the personal organisation of learners who revolve around institutions without inspecting the nature of institutions themselves is to ignore half the story. To criticise ignorance of institutions in favour of a focus on the ends of institutional engagement (learning) is to parallel similar criticism of the neglect of the institution of firms and markets in economics. Coase, for example, believed that institutions were overlooked in economic analysis which focused on means and ends. In Coase’s economic theory, the institution was constituted through the transactions that individuals engage in through it: crucially, the ‘firm’ only existed because the transaction cost of dealing directly with a market was too high. Again, similar arguments can be made for the institution of education, and the capability of independent teachers to obtain an income outside the institution’s walls. More recently, in work under the banner of “New Institutionalism”, the nature of the transactions within the institution has been studied more closely. DiMaggio and Powell have identified those processes whereby the management of institutions becomes similar: for example, the ways in which the management of a University becomes similar to the management of a technology company (and thus it is not surprising to see Martin Bean take the helm of the Open University). In terms of examining the content of transactions between institutions, Etzkowitz and Leydesdorff’s Triple Helix presents measurable techniques based on Shannon’s Information Theory whereby the integration between discursive transactions can provide a metric of levels of innovation.
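The Triple Helix metric mentioned here can be sketched numerically. Leydesdorff's measure is the three-way mutual information T(uig) among university, industry and government dimensions, computed from Shannon entropies; the sequences below are purely illustrative limiting cases, not data from any real study:

```python
import math
from collections import Counter

def H(observations):
    """Shannon entropy (bits) of a sequence of observations."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def triple_helix_T(u, i, g):
    """Three-way mutual information
    T(uig) = Hu + Hi + Hg - Hui - Hug - Hig + Huig,
    used in the Triple Helix literature to index the integration
    among discursive transactions in the three helices."""
    return (H(u) + H(i) + H(g)
            - H(list(zip(u, i))) - H(list(zip(u, g))) - H(list(zip(i, g)))
            + H(list(zip(u, i, g))))

# Fully independent dimensions: T is zero (no integration at all).
combos = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
u, i, g = (list(x) for x in zip(*combos))
print(triple_helix_T(u, i, g))   # 0.0

# Perfectly aligned dimensions: T equals the shared entropy.
print(triple_helix_T([0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```

In the published studies the sign and magnitude of T are interpreted against real publication and patent data; the toy sequences above only show how the measure behaves at its two extremes.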

In the PLE, the learner engages in many transactions with many different kinds of institution. Technology has transformed this process of transaction management to the point that most transactions are mediated by technology. Whilst technological barriers to managing transactions have been alleviated, the nature of the management of transactions by learners has not been considered. Yet the starting point for thinking about transactions is to consider that both learners and institutions must remain viable: the processes of reproduction and transformation which learners (and everyone else) engage in with institutions, and the choices that learners and institutions make in their transactions, will be determined by ways of maintaining their viability within a complex environment. The modelling of viability of the learner played a role in the articulation of a broader argument about the relationship between the person, institutions and technology in the PLE (Johnson and Liber, 2008) using a model of the ‘person as a viable system’ based on Stafford Beer’s Viable System Model.

Revisiting the Learner as a Viable System

The Viable System Model is a cybernetic model of the regulating mechanisms of living systems, whether they are individual organisms or collectives like bee-hives or businesses. Drawing on analogies with the human body, and on the work of cyberneticians such as Ross Ashby whose ‘Design for a Brain’ postulated the need for multiple-level regulatory mechanisms in living things, and McCulloch’s pioneering work on neural networks and heterarchical organisation, the VSM draws together a number of streams of thought into a deceptively simple model which Beer used primarily as a discursive tool within business organisations to help optimise business organisation. Beer’s definition of cybernetics more generally was that it was the “art and science of effective organisation”.

As a recursive model, the VSM is fractal in nature: each viable system comprises viable systems, and each viable system is a component in a larger viable system. Fundamentally, each viable system has to survive in its environment, where survival means that the complexity of the environment must either be absorbed by the system, or that the system can adapt to absorb new complexities. The process of absorbing complexity is a process of coordinating operations within the environment: most basically, eating, seeking food, avoiding predators; for learners, this list can be appended with ‘getting assignments in’, not running out of money, returning library books, socialising, career planning, and so on. This coordination requires a metasystem which has oversight of the basic operations, and which can provision resources so that the individual operations work effectively. Within the metasystem, there are specific functions. Operations have to be coordinated in such a way that they do not conflict with each other – in education, the timetable does this; operations have to be adequately resourced and directed – in education, ensuring access to adequate information is essential; the effectiveness of the coordination needs to be checked – getting feedback on performance is essential if learners are to know how they are progressing; potential threats in the environment need to be scanned and processes of adaptation or appropriate response coordinated – in education, the changing job market may require new kinds of activities; and the conflicting balance between the disruption of adaptation and the ongoing requirement for operational management has to be monitored – learners have to establish an identity which gives them sufficient flexibility to adapt to different situations, but which ensures that effective organisation of fundamental operations is maintained.

Whilst the VSM is a powerful metaphor, its utilisation requires some care. Beer makes no claim for the ontological existence of his regulating systems: there is no real System 2 or 3 - the VSM is a tool to think with. Related to this is the fact that the regulating systems are conceived as constructs emerging from discourse. Beer illustrates the differences between the regulating levels with allusions to typical comments individuals in different roles make in organisations: “System 4 is where they spend the money we make in System 3!” – illustrating the typical tension between Research and Development activities (System 4) and operational management (System 3). In other words, the regulating levels represent different communities of people in an organisation with different sets of expectations. Identifying a particular function as System 3 or System 4 is to articulate a particular expectation. Moreover, different sets of expectations are established in contradistinction to one another. Once again, the statement that “System 4 is where they spend the money we make in System 3” is a distinguishing of System 4 as something “other” to the expectations of System 3.

In Johnson and Liber’s paper on the PLE, the regulating systems of the VSM were presented as relating to the different activities of learning: System 2’s ‘anti-oscillation’ was the timetable, System 3’s operational management concerned the things that needed to be managed, System 4’s activities concerned looking at future career and Personal Development Planning, and System 5 concerned deeper questions about ‘what if…?’

The narrative of the VSM’s regulating levels is characterised by a particular kind of language which codifies different expectations. Beer’s identification of regulating mechanisms is a process of capturing these levels of codification. He captured some of these everyday utterances about organisation in his book “The Heart of Enterprise”, in which, between the theoretical chapters on the VSM, he includes a section “Later at the bar…” where a group of imaginary participants at a conference presentation discuss the theory of what they’ve been told. Statements relating to the different regulating mechanisms fall into different categories: “We have to make sure that everyone does this”, “we need to think about how we should adapt”.  Each statement can be thought of as a speech act or a transaction in the course of managing both personal viability and the viability of the organisation. From the individual’s perspective, the demand is to make utterances which contribute to a situation in their environment which they can survive better, and help with the process of being able to manage and coordinate the complexity of all the other things they have to manage. Many of these transactions are hidden, and yet the form of utterances reveals something of the nature of constraint which bears upon the individual as they engage with their environment. For example, if utterances are one-dimensional with little variation irrespective of the conditions, then some aspect of constraint is bearing upon them that causes this to occur; if utterances are varied and well-targeted, then an individual is likely to be operating with greater freedom to think.

Learning, Viability and Constraint

The advantage of discussing viability rather than learning is that viability is relational: a learner is viable in relation to an environment. Viable means that the way the learner organises themselves balances the complexities of the environment they are in – either through careful selection of those aspects of information which they know they have to concern themselves with (what Beer calls attenuation), or through expanding their capacity to organise themselves in richer ways through technologies (amplification). A further advantage of viability is that online discourse provides clues as to how individuals organise themselves and the constraints within which they operate. Any discussion about learning, by contrast, remains metaphysical speculation – and its reification (which has been a characteristic of some discussion about the PLE) can be a recipe for dogmatic ideology rather than intellectual inquiry.
A technical way of examining viability, and the nature of amplification and attenuation, is to see it as autonomous self-organisation within constraints. Constraint is the flip-side of variety: the behaviours of viable systems adopt patterns – repetitions, common tricks and habits – in response to constraints. The greater the constraint, the more predictable the behaviour; the less the constraint, the more erratic the behaviour. Each human being operates within multiple constraints that may be identified individually, but whose net result is not reducible to the action of any single one. We are constrained by bodies, emotions, the emergent effects of childhood attachments, economic conditions, social class, educational opportunity, jobs, family, transport, nutrition, access to healthcare, and so on.

Educational processes manipulate constraints. Whether it is the fear of a five-year-old child ascending a climbing frame, or the confidence to speak a foreign language or play a musical instrument, what once constrained behaviour within a particular range is overcome, and behaviour acquires a broader range of possibilities.

The cybernetic concept of constraint has a more technical representation within Shannon’s Information Theory. The variety of learner behaviour can be considered in terms of the average surprisingness of different behaviours at any point in time. The inverse of average surprisingness is called ‘redundancy’ or ‘constraint’. Shannon’s information theory provides two equations which are powerful in describing this. On the one hand, the average surprisingness of behaviour, identified by Shannon as ‘H’, is:

H = −Σᵢ pᵢ log pᵢ

In other words, the sum, over each type of event, of the probability of that event multiplied by the logarithm of its probability, with the sign reversed. (The minus sign results from the fact that probabilities are fractions, and the logarithm of a fraction is negative.)

As the inverse of this relation, the constraint, or redundancy (R), bearing upon a pattern of behaviours is 1 − H. H, however, is a scalar value potentially greater than 1, so it must first be normalised to a value between 0 and 1. To do this, it is divided by a notional value for the maximum possible surprise within the system, or Hmax (equal to log n when there are n equally probable behaviours):

R = 1 − H / Hmax
These equations provide two important perspectives on learning which resonate with the Viable System Model. On the one hand, learning can occur through increased self-organisation: the observed surprisingness of learned behaviour may decrease to the point where behaviour is reliable and predictable. H tends to 0, and constraint tends to 1. Such an increase in self-organisation might arise through continued practice – the mastery of musical performance, or of language skills, for example.

Equally, however, something may occur within either the learner or the environment which increases the possible maximum surprisingness of events – a new technology, a performance-enhancing drug, or some acquired expansion of capability. Under such conditions, Hmax increases, and measured constraint will also increase as the new possibilities are reckoned with through greater self-organisation.
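The two equations above can be sketched computationally. The following is a minimal illustration – the behaviour counts are invented for the example, not drawn from any data discussed here – of how H and R = 1 − H/Hmax might be computed over observed behaviour frequencies:

```python
import math

def entropy(counts):
    """Shannon's H: the average surprisingness of a set of observed behaviours."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def redundancy(counts, n_possible):
    """R = 1 - H/Hmax, where Hmax = log2 of the number of possible behaviours."""
    h_max = math.log2(n_possible)
    return 1 - entropy(counts) / h_max

# A novice: four behaviours, roughly equally likely -> high H, low R
novice = [5, 4, 6, 5]
# A practised performer: one dominant, reliable behaviour -> low H, high R
expert = [19, 1, 0, 0]

print(redundancy(novice, 4))  # close to 0: little constraint, erratic behaviour
print(redundancy(expert, 4))  # close to 1: highly self-organised, predictable
```

The novice’s behaviour is near-maximally surprising (R close to 0), while the practised performer’s dominant habit yields high redundancy – the first of the two directions of learning described above. Raising `n_possible` while holding the counts fixed models the second: an expanded space of possibilities against which self-organisation is measured.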


These information theoretic notions of learning are not speculations about metaphysical processes. They are instead statements about the nature of self-organisation within a complex environment. Having developed this theoretical apparatus, we can turn to a concrete example of learners having to coordinate their behaviour across different environments. On an online Continuing Professional Development course, the variation in transactions is reflected in different kinds of speech act that learners make in different circumstances. We can speculate on the relationship between the variety of speech acts in online transactions and the maintenance of individual viability for learners on the course. Fundamentally, it is possible to consider ways in which the constraints learners operate within are made apparent through discourse, and then to consider the ways learners find of overcoming their constraints. 
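As a sketch of how such discourse constraints might be made visible – the speech-act categories and the two learners here are entirely hypothetical, not data from the CPD course – the same entropy measure can be applied to the distribution of speech-act types each learner produces:

```python
import math
from collections import Counter

def utterance_entropy(acts):
    """H over the distribution of speech-act categories in a learner's utterances."""
    counts = Counter(acts)
    total = len(acts)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical utterance codings for two learners (illustrative only).
constrained_learner = ["question"] * 6                      # one-dimensional talk
freer_learner = ["question", "challenge", "reflection",
                 "coordination", "humour", "proposal"]      # varied, well-targeted talk

print(utterance_entropy(constrained_learner))  # 0.0: constraint bearing heavily
print(utterance_entropy(freer_learner))        # ~2.58: greater freedom to think
```

A flat, unvarying profile of utterances yields H = 0, echoing the earlier observation that one-dimensional utterances, with little variation irrespective of conditions, suggest some constraint bearing upon the speaker.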

Sunday, 17 April 2016

Learning Analytics and the Futile Functionalist Alchemy of the Data Analyst

I've been playing with R - trying to find something intelligent to say about a set of forum posts. They're very nice posts, and I'm sure the people making them got something out of it at the time when they submitted them. But one of the problems with analytics is that it's always trying to say something more about things which are ephemeral and about which there isn't really that much to say after the event. If I was a commercial organisation trying to sell people things, then perhaps I'd feel different. I'd take a punt on some of my speculative conclusions about the people posting messages and fire a few adverts at these people, and see if they bought something. If it works, I'll do it again and again and become rich! But then again, what I'd be doing is making the people fit my data by re-representing the data back to them as 'suggestions'.

I find with data analytics tools like R that a kind of fever comes over me. Manipulating matrices can be exciting - and one is tempted, having performed one transformation, to have a go at another - eventually some 'gold' will emerge. The problem is that we become so blinded by the technicalities of it all, and the hubris that some gold is there to be discovered, that it is easy to forget what it is we are looking for. It's all breathless and soulless. Where are the real people?

My deeper brush with analytics has concerned the Triple Helix, and the use of Shannon's information equations to model 'discourse dynamics'. I think there is some kind of discourse dynamic. It forms part of the environment within which we all live. But the important thing is that it only forms part of the environment. The internet itself – the domain people tend to focus on when analysing discourse – is an even smaller part of the environment. Where's the rest of it? Well, we can't really see it, but each of us knows something is there. I prefer to call it 'constraint'. The Triple Helix is interesting because it acknowledges 'constraint' and uses Shannon's equations to attempt to get a handle on it.

The problem is that our understanding of constraints is constrained by constraints which we do not understand. Having said this, there are constraints which we do understand very well: the constraints of the market, for example, or the constraints of tyrants. The effect of these constraints may have an impact on our discursive utterances, and this impact may perhaps be studied. But whatever we say about it is shrouded in constraints. Fundamentally, I'm re-stating Gödel's incompleteness theorem.

The curious thing is that learning is, in the final analysis, about overcoming constraints. Whether those constraints are fear, or physical ability, or intellectual challenge, when we overcome our constraints new things become possible. If we are to have a humane and sensible education system, then we may have to overcome the constraint that is 'learning analytics'.

Thursday, 7 April 2016

The central question behind the Block chain: What is a transaction? And how does rethinking the transaction affect our understanding of educational technologies?

Block chains are records of transactions. Over the last few weeks, as I've been hammering out a bid for exploring the block chain in education, I've frequently encountered the comment, "But education isn't transactional! It's interactive!" I've struggled to engage critics with a deeper consideration of what either "transaction" or "interaction" actually means. The point is that clarity about these words suddenly becomes really important. In reality, nobody knows what an interaction is any more than they do a transaction: I always struggled with the 'interactive' mantra of educational technology discourse from a few years back. It painted the computer as a kind of facilitative medium through which actions were taken and the "machine" (either some AI, some other kind of adaptive interface, or some other person) acted back, producing a kind of self-sustaining loop. This wasn't helped by certain socio-material theories which considered technology as an 'actor' without any consideration of how the technology 'actor' is identified.

The fact that people would say "It's interactive" was a way of acknowledging the existence of an "it" and a process which the learner/user gets caught in. Why do we call it an "it"? The reason for asking is that there are many social entities where we do things, things happen in return, and we get a kind of self-sustaining loop. We don't say that institutions are interactive, do we? We say "there is a university", "there is a bank" and so on. What is the "it" of interactivity?

What I'm getting at, apart from the woolly thinking about interactivity, is that the proper topic of interactivity is the question of the identification of social entities. It is a much deeper issue about patterns of human action and the determination of social objects.

In developing the Theory of the Firm, Ronald Coase argued that economics was upside-down: it concentrated on means and ends whilst ignoring the social entities through which means and ends were negotiated. In economics there was no questioning of the existence of markets, banks, firms, governments and so on. Coase suggested that social entities were created through the transactions which individuals engaged in: the cost of transactions (e.g. buying and selling) could be reduced by creating an institution to manage them.

Analytically, this means that transactions provide a key to the identity of social entities. With patterns of interaction with online tools available for analysis, this becomes increasingly apparent. Transactions are not just about money. They can be about approval or disapproval and social status. We have academic societies which publish their "transactions" for example. Transactions can also be large or small. A book is a transaction which takes a long time to execute. A hand-written letter takes less time. An online forum post or a blog post less time still.

As a general trend, we are sending more and more transactions of a shorter and shorter size. Part of the reason for this is the role of mobile technology. Although mobile technology is presented to users as a convenience – as more ready-to-hand, in Heidegger's terminology – the importance of the mobile revolution lies in capturing small transactions: a Facebook like, a retweet, a WhatsApp message, and so on. Wearable, ambient, and IoT technology will decrease the transaction size and increase its frequency even more. The increasing use of bots in mobile is a technological move to harness more and more transactions from us. When Satya Nadella announced last week:
“It’s not about man versus machines, but man with machines,” (see http://thenextweb.com/microsoft/2016/03/30/everything-microsoft-announced-build-2016-day-1/#gref)
he wasn't simply pushing increased effort into virtual assistants. He was announcing a drive to engage humans in more and more small transactions – data from which Microsoft can no doubt try to sell us more and more stuff with "one-click" purchasing (i.e. very small transactional processing!). No doubt this is the strategy behind their newly announced "Bot Framework": http://thenextweb.com/microsoft/2016/03/30/auto-draft-11/ – this appears to be the new mobile strategy for Microsoft now that Windows Phone has failed. They may be right.

My guess is that education will not be untouched by this - but that it is not ready for what is coming. It will impact every aspect of formal education. Formal education has the longest transaction times of any social activity we engage in (apart from possibly mortgages). It is the least flexible of all the social activities we engage in. There may be virtue in the long, slow transactions. But right now, this is not where the world is heading and we need to think how education can adapt.