Tuesday, 31 December 2019

Programming the Educational Platform: A turning point for educational technology

The sale of Instructure to a private equity firm, Thoma Bravo, has prompted various reactions within education (see for example https://eliterate.us/instructure-plans-to-expand-beyond-canvas-lms-into-machine-learning-and-ai/). Instructure's Canvas has established itself as the leading VLE, with many high-profile institutions opting for its clean interface, seamless video and mobile integration, and powerful open-source, service-oriented architecture. It is ahead of the competition, having correctly identified the move towards data-oriented educational coordination and flexibility, and providing an impressive range of tools to manage this.

The indications that there has been some serious thought behind the platform include its GraphiQL query interface (see https://github.com/graphql/graphiql), and an API which sits beneath Canvas's own interface. The API is surprisingly easy to use: simply adjust almost any Canvas URL for pages, files or modules to include "/api/v1/" after the domain, and instead of the interface, you get JSON. The consistency of this is impressive, and putting data in (automatically creating content, for example) is as easy as getting data out.
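To make this concrete, here is a minimal sketch of the URL trick in Python - my own illustration, not Instructure's code, with a hypothetical domain, an illustrative course id, and a placeholder API token (generated under a user's Canvas account settings):

```python
# A sketch of the "/api/v1/" trick described above. The domain, course id
# and token are placeholders; only the URL pattern comes from Canvas.
import requests

CANVAS_DOMAIN = "https://canvas.example.ac.uk"  # hypothetical institution
API_TOKEN = "YOUR-TOKEN-HERE"                   # placeholder access token

def canvas_get(path):
    """Rewrite an interface path as an API call, returning JSON instead of HTML."""
    url = f"{CANVAS_DOMAIN}/api/v1/{path.lstrip('/')}"
    response = requests.get(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    response.raise_for_status()
    return response.json()

# The interface URL .../courses/123/modules becomes .../api/v1/courses/123/modules
for module in canvas_get("courses/123/modules"):
    print(module["name"])
```

Writing data back in follows the same pattern, with a POST request to the same kind of URL.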

Instructure, like many players in educational technology, sees its future in data (Turnitin was also sold this year, for £1.3 billion). Of course, like Turnitin, in providing a hosted platform, it potentially has access to enormous amounts of data. The big hope for the corporations is machine learning and predictive analytics. However, for all the hand-wringing going on, I think it would be wise to be slightly sceptical about what has been portrayed as a corporate data-grab of universities. After all, machine learning is in its infancy, and there is no evidence as to what might be learnt through analysing VLE data that would be useful for educational institutions. MOOC data was something of a disappointment.

Of course, people point to Facebook, Google and Amazon as corporations which exploit the data of their users. Logic would suggest that education would follow the same path. But the difference lies in the fact that Facebook, Google and Amazon are all trying to sell us things (which we usually don't want), or get us to vote for people (who may not be good for us).

Despite all the claims around the marketisation of education, education is fundamentally about relationships, not sales. So Instructure might be wrong. We should use their tools and watch the space patiently - but I doubt that we are looking at an educational equivalent of BlackRock (although I'm sure that is what Instructure envisages).

Canvas's approach to rationally organising the technical infrastructure of institutional learning systems is a good one, and much needed. Whatever challenges educational institutions face in the future, they are likely to need to adapt quickly to a fast-changing environment and increasing complexity (more students, increasing diversity of learning needs, more flexibility in the curriculum, more work-based education, and so on). Rigid technical infrastructure which limits control to the manipulation of poor interfaces, hides data, and makes coordination difficult will impair an institution's ability to adapt. Instructure has addressed many of these problems. So, basically, the technology is very good - this is what institutions need right now (I'm sure other VLE providers will learn from this, but at the moment they seem to be behind).

This also spells an important change for those whose role is to coordinate learning technology. Data analysis and effective control (probably through programming interfaces) are going to become essential skills. It is through these skills that flexibility is maintained. As more and more content becomes freely available on the internet, as video production tools become available to everyone (including students), and as creativity and variety of expression and production become more important for personal growth, the job shifts to managing the means of coordination rather than producing yet more content. The challenge is for each institution to take control of its own platform - and this will demand new skillsets.

This is a new stage of educational technology. Where MOOCs provided content, they thought little about coordination and relationships, and the essential role of institutions in managing them. In Coursera and edX, the institution was merely a calling-card - exploited for its status. In creating a flexible technical framework for institutions, initiatives like Canvas approach educational problems as problems of institutional organisation. There is inevitably a trade-off between the big corporations which provide the resources to refine these kinds of tools, and the institutional needs which, when correctly analysed, those tools can profitably serve.

The interesting thing about where we are is that both universities and technology corporations are organic entities which swallow up their environments. In biological terms, they could be said to be endosymbiotic. Lynn Margulis's endosymbiosis theory described how competing entities like this (in her case, cells and bacteria) eventually learn to cooperate. Is this what we're going to see in education? If it is, then I think we are at a turning point.

Sunday, 29 December 2019

From 2d to 3d Information Theory

I've been doing some work on Shannon information theory in collaboration with friends, and wrote a simple program to explore Shannon's idea of mutual information. Mutual information is a measurement of the extent to which two sources of information share something in common. It can be considered as an index of the extent to which information source A can predict the messages produced by information source B. If the Shannon entropy of source A is Ha, the entropy of source B is Hb, and their joint entropy is Hab, then the mutual information Iab is calculated by:
Iab = Ha + Hb - Hab
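The calculation amounts to only a few lines. Here is a minimal sketch of it in Python - my assumption being per-character empirical frequencies and position-by-position pairing for the joint distribution (the program linked below may estimate the probabilities differently):

```python
# A sketch of the two-source calculation: Iab = Ha + Hb - Hab.
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon's H in bits, from the observed frequencies of the symbols."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def mutual_information(a, b):
    """Pair the two sources position by position to estimate Hab."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

print(mutual_information("abab", "cdcd"))  # 1.0 bit: each source predicts the other
print(mutual_information("abab", "yyyy"))  # 0.0: one source has zero entropy
```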
There is an enormous literature about this, because mutual information is very useful and practical, whilst also presenting some interesting philosophical questions. For example, it seems to be closely related to Vygotsky's idea of "zone of proximal development" (closing the ZPD = increasing mutual information while also increasing complexity in the messages).

There are problems with mutual information. Extended to three information sources by inclusion-exclusion (Iabc = Ha + Hb + Hc - Hab - Hac - Hbc + Habc), its value oscillates between positive and negative. What does a negative value indicate? Well, it might indicate that there is mutual redundancy rather than mutual information - that the three systems are generating constraints between them (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3030525)
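Extending the sketch above to three sources shows both signs appearing (sign conventions for this quantity vary across the literature; this follows the inclusion-exclusion form just given):

```python
# Iabc = Ha + Hb + Hc - Hab - Hac - Hbc + Habc, reusing entropy() from the
# two-source sketch. Unlike the two-source case, this can be negative.
def triple_information(a, b, c):
    return (entropy(a) + entropy(b) + entropy(c)
            - entropy(list(zip(a, b)))
            - entropy(list(zip(a, c)))
            - entropy(list(zip(b, c)))
            + entropy(list(zip(a, b, c))))

print(triple_information("0101", "0101", "0101"))  # +1.0: three identical sources
print(triple_information("0011", "0101", "0110"))  # -1.0: third source is the XOR of the other two
```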

Negative values should not occur in two dimensions. But they do. Loet Leydesdorff has put my program on his website, where it is easy to see how a negative value for mutual information can be produced: https://www.leydesdorff.net/transmission

The page presents two text boxes. Entering a string of characters in each immediately calculates the entropy (Shannon's H) of each, and the mutual information between the two boxes.


This is fine. But if one of the information sources has zero entropy (which it would if it has no variety), we get a negative value.

So what does this mean? Does it mean that if two systems do not communicate, they generate redundancy? Intuitively I think that might be true. In teaching, for example, with a student who does not want to engage, the teacher and the student will often retreat into generating patterns of behaviour. At some point sufficient redundancy is generated so that a "connection" is made. This is borne out in my program, where more "y"s can be added to the second text, leaving its entropy at 0 but increasing the mutual information.

But maybe I'm reading too much into it. It may simply be a mathematical idiosyncrasy - something weird in probability theory (on which Shannon depends) or in the use of logarithms (which he got from Boltzmann).

Adding redundant symbols is not the same as "adding nothing" - a redundant symbol is still another symbol, even if it is a zero.

The bottom line is that Shannon's theory has no way of accounting for "nothing". How could it?

This is where I turn to my friend Peter Rowlands and his nilpotent quantum mechanics, which exploits quaternions and Clifford algebra to express nothing in a three-dimensional context. It's the 3d-ness of quaternions which is really interesting: Hamilton realised that quaternions were needed to express rotations in three dimensions.

I don't know what a quaternionic information theory might look like, but it does seem that our understanding of information is 2-dimensional, and that this 2-d information is throwing up inconsistencies when we move into higher dimensions, or try weird things with redundancy.

The turn from 2d representation to 3d representation was one of the turning points of the Renaissance. Ghiberti's "Gates of Paradise" represents a moment of artistic realisation about perspective which changed the way representation was thought about forever.

We are at the beginning of our information revolution. But, like medieval art, it may be the case that our representations are currently two-dimensional, where we will need three. Everything will look very different from there.

Tuesday, 24 December 2019

Out of Chaos - A Mathematical Theory of Coherence

One of my highlights of 2019 was the putting together of what is beginning to look like a mathematical theory of evolutionary biology, with John Torday of UCLA and Peter Rowlands at the University of Liverpool, using the work of Loet Leydesdorff and Daniel Dubois on anticipatory systems. The downside of 2019 has been that things have seemed to fall apart - "all coherence gone", as John Donne put it at the beginning of the scientific revolution (in "An Anatomy of the World"):

And new philosophy calls all in doubt,
The element of fire is quite put out,
The sun is lost, and th'earth, and no man's wit
Can well direct him where to look for it.
And freely men confess that this world's spent,
When in the planets and the firmament
They seek so many new; they see that this
Is crumbled out again to his atomies.
'Tis all in pieces, all coherence gone,
All just supply, and all relation;
Prince, subject, father, son, are things forgot,
For every man alone thinks he hath got
To be a phoenix, and that then can be
None of that kind, of which he is, but he.

The keyword in all of this (and a word which got me into trouble this year because people didn't understand it) is "coherence". Coherence, fundamentally, is a mathematical idea belonging to fractals and self-referential systems. It is through coherence that systems can anticipate future changes to their environment and adapt appropriately, and the fundamental driver for this capacity is the creation of fractal structures, which, by definition, are self-similar at different scales.

In work I've done on music this year with Leydesdorff, this coherent anticipatory model combines both synchronic (structural) and diachronic (time-based) events into a single pattern. This is in line with the physics of David Bohm, but it also coincides with the physics of Peter Rowlands.

When people talk of a "mathematical theory" we tend to think of something deterministic, or calculative. But this is not at all why maths is important (indeed it is a misunderstanding). Maths is important because it is a richly generative product of human consciousness which provides consciousness with tangible natural phenomena upon which its presuppositions can be explored and developed. It is a search for abstract principles which are generative not only of biological or social phenomena, but of our narrative capacities for accounting for them and our empirical faculties for testing them. Consciousness is entangled with evolutionary biology, and logical abstraction is the purest product of consciousness we can conceive. In its most abstract form, an evolutionary biology or a theory of learning must be mathematical, generative and predictive. In other words, we can use maths to explore the fundamental circularity existing between mind and nature, and this circularity extends beyond biology, to phenomena of education, institutional organisation and human relations.

When human organisations, human relations, learning conversations, artworks, stories or architectural spaces "work", they exhibit coherence between their structural and temporal relations with an observer. "Not working" is the label we give to something which manifests itself as incoherent. This coherence is at a deep level: it is fractal in the sense that the patterns expressed by these things are recapitulations of deeper patterns that exist in cells and in atoms.

These fractal patterns exist between the "dancing" variables involved in multiple perceptions - what Alfred Schutz called a "spectrum of vividness" of perception. The dancing of observed variables may have a similar structure to deeper patterns within biology or physics, and data processing can allow some glimpse into what these patterns might look like.

Fractal structures can immediately be seen to exhibit coherence or disorder. Different variables may be tried within the structure to see which displays the deepest coherence. When we look for the "sense" or "meaning" of things, it is a search for those variables, and those models which produce a sense of coherence. It is as true for spiritual practice as it is for practical things like learning (and indeed those things are related).

2019 has been a deeply incoherent year - both for me personally, and for the world. Incoherence is a spur to finding a deeper coherence. I doubt that we will find it by doing more of the same stuff. What is required is a new level of pattern-making, which recapitulates the deeper patterns of existence that will gradually bring things back into order. 

Friday, 20 December 2019

Human Factors and Educational Technology in Institutions

Educational institutions - particularly universities - are now enormously complex technological organisations. They are generally so complex that few people in the university really understand how everything fits together. Computer services will have teams who understand individual systems, but it is unusual to find someone in a computer services department who understands how it all fits together technically. Rarer still is someone who understands the divergences of digital practice, whether among teachers in the classroom or among the professional service staff who process marks (and often organise assignments in the VLE).

Of course, despite the lack of any synoptic view, things keep going. This works because, whatever complexities are created by different systems, an administrative workforce can be summoned up to handle them. Provided marks are entered, exam boards are supplied with data, and students are progressed through their courses to completion, it might be tempting to ask whether a lack of a synoptic view matters at all.

This is where technological infrastructure, human factors and organisational effectiveness meet. An effective organisation is one which organises itself to deal with the actual demands placed on it: it manages its complexity, understands its environment, and has sufficient flexibility to adapt to change. In a university, it can be very difficult to define "demand" or be clear about "environment". At a superficial level, there is demand from students for teaching and assessment, a demand increasingly framed as a "market". At a deeper level, however, there is a demand from society and from the politicians who steer it (and who set the policy for higher education). What does society demand of education? In recent years, the answer to that question has also been framed around "the market" - but many commentators have pointed out that this is a false ontology. Society has a habit of turning on institutions which extend their power beyond reasonable limits. There is no reason to suppose this might not happen to universities, which have extended their power through a variant of what Colin Crouch calls "privatised Keynesianism" - individualised debt to pay for institutional aggrandisement such as real estate (https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-856X.2009.00377.x).

Then we should ask: what is the actual demand for learning? A commonsense answer is that a student should expect to be able to do things they weren't able to do before. But this is a very vague notion. As a crude example, how many of the many thousands of computer science graduates can actually program when they graduate? How many creative writers will make it as authors? How many architects will build a building? Now, of course, this is unfair. There are many "transferable skills" from a degree - people will go into other walks of life, buoyed by their new-found self-confidence. Apart, that is, from those who become so worn down by assessment and institutional rigidity that their mental health takes a knock. So there is a first demand: "Education should not leave me feeling worse about myself than when I started".

It turns out to be surprisingly hard to organise that (https://www.hrmagazine.co.uk/article-details/students-regret-going-to-university). Teachers do their best, but within constraints which they and everyone else in the system find unfathomable. Today, many of those constraints are technical in origin. The computer has swamped the natural human pattern of communication which kept institutions viable for centuries. The space for rich intersubjective engagement - whether between teachers and students, between staff, or between students and their peers - has been attenuated to lights on a screen and clicks and flicks. And with the problems this creates, the answer is always the same: more technology.

So do we reach the stage where the art of teaching becomes the art of designing a webpage in the VLE? Instructional design would appear to have a lot to answer for. Deep human virtues - knowledge, openness, generosity, the revealing of uncertainty - do not fit the digital infrastructure and go unrewarded. Flashy new tech appeals to staff with ambitions and a latent drive to have everyone "do it their way". Some of these people become senior managers and appoint more like them. It's positive feedback.

The equations are simple. All technology creates more complexity in the guise of offering a "solution" to a problem created by some prior technology. Human beings struggle to deal with the complexity, and demand new technical solutions, which often make things worse. How do we get out of this?

The importance of a synoptic view is that it entails clear distinctions about the system, its operations, its demands and its boundaries. As long as we have no clear idea of the problem we want technology to address in education, we will be condemned to repeat the cycle. So what is the purpose? What's the point?

It's probably very simple: we all die. The older ones will die first. While they are alive they can communicate something of how to survive in the future. Some will do things which speak to future generations after they are dead. But this can only happen if people are able to live in a world that is effectively organised. In a world of ineffective organisation, complexity will proliferate and the intergenerational conversation will die in a torrent of tweets and emails. There is a reason why the ice caps melt, the stock market booms, and the world is full of mad dictators.



Wednesday, 4 December 2019

Institutions, Art and Meaning

Much of what I am reading at the moment - Simondon, Erich Hörl, Stiegler and Luhmann - is leading me to rethink what an institution is in relation to an "individual". It's like doing a "reverse Thatcher" (which is a good slogan): there is no such thing as an "individual". There is a continual process of distinction-making (and remaking) and transduction by which "institutions" - whether biological organisms like you and me, or families, friendship groups, universities, or societies - preserve meaning. This is a Copernican shift in perspective, and it is something that I think Luhmann and Simondon saw most clearly, although there are aspects of their accounts which miss important things.

This is a helpful definition - institutions as processes which preserve meaning - because it seems we live in a time when our institutions don't work very well. Life in them becomes meaningless and alienating. So what's going on?

I think the answer has something to do with technology. Technology did something to institutions, and a hint at an answer is contained in Ross Ashby's aphorism that "any system which categorises throws away information". Those words echo like thunder for me as I'm in the middle of trying to upgrade the learning technology in a massive institution.
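Ashby's aphorism can be made concrete in a few lines. Here is an illustration of my own (the marks and the grade boundary are invented), using Shannon's entropy as the measure of information:

```python
# Mapping fine-grained values onto categories lowers the entropy of the
# record: the categorising system has thrown information away.
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon's H in bits, from observed symbol frequencies."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

marks = [52, 58, 61, 64, 67, 70, 74, 81]                 # eight distinct marks
grades = ["pass" if m < 70 else "merit" for m in marks]  # a system which categorises

print(entropy(marks))   # 3.0 bits
print(entropy(grades))  # ~0.95 bits: the difference cannot be recovered from the grades
```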

So institutions do something with information which preserves meaning. Institutions which lose information risk losing meaning. Thanks to so-called IT systems, most of our institutions from schools to government are losing information.

I've been thinking (again alongside Luhmann) about art and music. A string quartet or an orchestra is an institution, and through its operations, there is little doubt that meaning is preserved. But what interests me is that this preservation process does not lie simply in the current operations of the group - the practice schedule, or the performance, for example. It is also something to do with history.

To play Beethoven is to preserve the meaning in Beethoven. And we have a good idea that Beethoven meant his meaning to be preserved: "Alle Menschen werden Brüder" ("all men become brothers") and all that. What is the mechanism for preserving this meaning? A big part of it is notation: a codification of skilled bodily performance which reproduces a historical consciousness.

The art system preserves meaning over a large-scale diachronic period. It seems commonsense to suppose that if the skills to perform were lost, the process of preserving the meaning would be damaged - we would lose this stuff. But is that right? What if the skills to perform are lost, but recordings survive? Some information is lost - but is it the technology of recording which loses the information about performance skill, or does the loss of performance skill necessitate recording as a replacement?

In an age of rich media, "performance" takes new forms. There is performance in front of the camera which might end up on social media. There is a kind of performance in the reactions of the audience on Twitter. But is the nuance of "playing Beethoven" (or anything else) lost?

We need a way of accounting for why this "loss" (if it is a loss) is significant for an inability to preserve meaning. Of course, we also need a way of accounting for meaning itself.

So I will make an attempt: meaning is coherence. It is the form something takes which articulates its wholeness. More abstractly, I suspect coherence is an anticipatory system (borrowing this from the biological mathematics of Robert Rosen and Daniel Dubois). It is a kind of hologram which expresses the totality of a form, from its beginning to its end, in terms of self-similar (fractal) structures.

The act of performing is a process of contributing to the articulation of an anticipatory system. If information is lost in an institution, or in an art system, then the articulation of coherence becomes more difficult. This may be partly because what is lost in not-performing is not information, but redundancy and pattern. Coherence is borne by redundancy and pattern. How much redundancy has been lost from the rituals of convivial meetings within our institutions, where now email or "Teams" takes over?

If our lives and our institutions have become less coherent, it is because technology has turned everything into information, in which contingency and ambiguity are lost. As Simon Critchley argued in his recent "Tragedy, the Greeks and Us", this loss of ambiguity is a serious problem in the modern world, and it can only be resolved, in his view, through the diachronic structures of ritual and drama. We have to re-enchant our institutions.

I think he's right, but I think we can move towards a richer description of this process. Technology is amazing. It is not technology per se which has done this. It is the way we think.