Sunday, 26 January 2020

Well-run Universities and Methods for Analysing Them

There's so much critique of higher education these days that little thought is going into how an institution might be optimally organised in the modern world. This is partly because critique is "cheap". Bateson made the point that we are good at talking about pathology and bad at talking about health. This is partly because to talk about health you need to talk about the whole system, whereas to talk about pathology you only need to point to one bit of it which isn't working and apportion blame for its failure. Often, critique is itself a symptom of pathology, and may even exacerbate it.

The scientific problem here is that we lack good tools for analysing, measuring and diagnosing the whole system. Cybernetics provides a body of knowledge - an epistemology - which can at least provide a foundation, but it is not so good empirically. Indeed, some aspects of second-order cybernetics appear almost to deny the importance of empirical evidence. Unfortunately, without experiment, cybernetics itself risks becoming a tool for critique. Which is pretty much what's happened.

Within the history of cybernetics, there are exceptions. Stafford Beer's work in management is one of the best examples. He used a range of techniques from information theory to personal construct theory to measure and analyse systems and to transform organisation. More recently, Loet Leydesdorff has used information theory to produce models of the inter-relationship between academic discourse, government policy and industrial activity, while Robert Ulanowicz has used information theory in ecological investigations.

Information theory is something of a common denominator. Ross Ashby recognised that Shannon's formulae expressed essentially the same idea as his concept of "variety", and that this equivalence could be used to analyse complex situations in almost any domain.
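To see the equivalence concretely, here is a minimal Python sketch (an illustration, not Ashby's own formulation): when all states are equally likely, Shannon's H is simply the logarithm of the number of states - which is Ashby's variety.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon's H = -sum(p * log2(p)) over the observed states."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def variety(sequence):
    """Ashby's variety, expressed as the log of the number of distinct states."""
    return math.log2(len(set(sequence)))

print(shannon_entropy("abcd"), variety("abcd"))  # 2.0 and 2.0: they coincide
print(shannon_entropy("aaab"), variety("aaab"))  # ~0.81 and 1.0: H falls below variety when states are unevenly used
```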

However, there are some big problems with Shannon's information theory. Not least, it assumes that complex systems are ergodic - i.e. that their complexity measured over a short period of time is equivalent to their complexity measured over a long one. All living systems are non-ergodic - they develop new features and new behaviours which are impossible to predict at the outset.
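A rough way to picture the ergodic assumption, reusing the shannon_entropy function from the sketch above (the sequences here are made up): for an ergodic source, a short early window has the same entropy as the whole run; a source which develops new states halfway through does not.

```python
ergodic = "abab" * 50                 # the same statistics early and late
non_ergodic = "ab" * 50 + "cd" * 50   # new states emerge halfway through

for name, seq in [("ergodic", ergodic), ("non-ergodic", non_ergodic)]:
    print(name, shannon_entropy(seq[:20]), shannon_entropy(seq))
    # ergodic: 1.0 bits early, 1.0 bits overall
    # non-ergodic: 1.0 bits early, 2.0 bits overall - the past under-predicts the future
```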

Another problem with information theory is the way that complexity itself is understood in the first place. For Ashby, complex systems are complex because of the number of states they can exist in. Ashby's variety was a countable thing. But how many countable states can a human being exist in? Where do we draw the boundary around the things that we are counting and the things that we ignore? And then the word "complex" is applied to things which don't appear to have obvious states at all - take, for example, the "complex" music of J.S. Bach. How many states does that have?

I think one of the real breakthroughs in the last 10 years or so has been the recognition that it is not information which is important, but "not-information", "constraint", "absence" or "redundancy". Terry Deacon, Loet Leydesdorff, Robert Ulanowicz and (earlier) Gregory Bateson and Heinz von Foerster can take the credit for this. In the hands of Leydesdorff, however, this recognition of constraint and absence became measurable using Shannon information theory, and the theory of anticipatory systems of Daniel Dubois.

This is where it gets interesting. An anticipatory system contains a model of itself. It is the epitome of Conant and Ashby's statement that "every good regulator of a system must be a model of that system" (see https://en.wikipedia.org/wiki/Good_regulator). Beer integrated this idea into his Viable System Model in what he called "System 4". Dubois, meanwhile, expresses an anticipatory system as a fractal, and this potentially means that Shannon information can be used to generate this fractal and provide a kind of "image" of a healthy system. Which takes us to a definition:

A well-run university contains a good model of itself.
How many universities do you know like that?

Here however, we need to look in more detail at Dubois's fractal. The point of a fractal is that it is self-similar at different orders of scale. That means that what happens at one level has happened before at another. So theoretically, a good fractal can anticipate what will happen because it knows the pattern of what has happened.
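For the mathematically curious, the simplest of Dubois's constructions, as I understand it, is the "incursive" logistic map, in which the next state appears on both sides of its own equation - the system computes its future using a model of that future. A minimal sketch:

```python
def incursive_logistic(x0, a, steps):
    """Dubois-style incursion: x(t+1) = a * x(t) * (1 - x(t+1)).
    Solving algebraically for x(t+1) gives x(t+1) = a*x(t) / (1 + a*x(t)),
    so each step anticipates - contains a model of - its own outcome."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1] / (1 + a * xs[-1]))
    return xs

# Where the ordinary (recursive) logistic map is chaotic at a = 4,
# the incursive version settles stably towards (a - 1) / a = 0.75:
print(incursive_logistic(0.3, 4.0, 6))
```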

I've recently done some work analysing student comments from a comparative judgement exercise on a variety of documents from science, technology, creativity and communication. (I did this for the Global Scientific Dialogue course in Vladivostok last year - https://dailyimprovisation.blogspot.com/2018/03/education-as-music-some-thoughts-on.html). The point of the comparative judgement was to stimulate critical thought and disrupt expectations. In other words, it was to re-jig any anticipatory system that might have been in place, and encourage the development of a fresh one.

I've just written a paper about it, but the pictures are intriguing enough. Basically, they were generated by taking a number of Shannon entropy measurements of different variables and examining the relative entropy between these different elements. This produces a graph, and the movements of the entropy line in the graph can be coded as 1s and 0s to produce a kind of fractal. (I used the same technique for studying music here - https://dailyimprovisation.blogspot.com/2019/05/bach-as-anticipatory-fractal-and.html)
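For the curious, the coding step looks roughly like this - a sketch rather than the actual analysis code, with a made-up comment standing in for the real data:

```python
from collections import Counter
import math

def shannon_entropy(seq):
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_line(sequence, window=8):
    """Shannon entropy measured over a sliding window - the 'entropy line'."""
    return [shannon_entropy(sequence[i:i + window])
            for i in range(len(sequence) - window + 1)]

def code_movements(line):
    """Code each rise in the entropy line as 1, each fall or plateau as 0."""
    return "".join("1" if b > a else "0" for a, b in zip(line, line[1:]))

# A made-up student comment, treated as a stream of symbols:
comment = "this course made me question what i thought i knew about science"
bits = code_movements(entropy_line(comment))
print(bits)  # one strand of the fractal; repeat per variable and stack the strands
```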

So here are my pictures below. Now I suppose there is a simple question - can you tell who are the good students and who are the bad ones?


[Figures a-e: entropy-fractal plots generated from five students' comments]


But what about the well-run institution? I think it too must have an analysable anticipatory fractal. There will be patterns of activity at all levels of management - from learner utterances (like these graphs) through to teacher behaviour, management actions, policies, technologies and relations with the environment. Yet I suspect that if we tried to do this today, we would find little coherence in the ways in which modern universities coordinate their activities with the world.

Tuesday, 14 January 2020

What have VLEs done to Universities?

The distinction between genotype and phenotype is useful in thinking about organisational change. If an institution is a kind of organism, this is the distinction between the behaviours that emerge in its interactions with its environment, and the extent to which those behavioural changes become hard-wired into its nature and identity (the "genome"). So institutions adapt their behaviour in response to environmental changes in a "phenotypical" way initially, implementing ad-hoc technologies and procedures. Over time, these ad-hoc procedures become codified in the functionality of universal technologies which are deployed everywhere, and which determine the ongoing behaviour of the "species" - the "genotype".

Changes to the genotype are hard to shift. They determine patterns of organic reproduction: so we see different kinds of people existing in institutions from the kinds of people that we might have seen 40 years ago. Many esteemed elderly scholars would now say they wouldn't survive in the modern university. They're right - think of Marina Warner's account of her time at Essex (and why she quit) in the London Review of Books a few years ago: https://www.lrb.co.uk/the-paper/v36/n17/marina-warner/diary, or more recently Liz Morrish's "The university has become an anxiety machine": https://www.hepi.ac.uk/2019/05/23/the-university-has-become-an-anxiety-machine/. Only last week this Twitter thread appeared: https://twitter.com/willpooley/status/1214891603606822912. It's all true.

As part of the "genotype", technology is the thing which drives the "institutional isomorphism" that means that management functions become professionalised and universal (where they used to be an unpopular burden for academics). But - and it is a big BUT - this has only happened because we have let it happen.

The Virtual Learning Environment is an interesting example. Its genotypical function has been to reinforce the modularisation of learning in such a way that every collection of resources, activities, tools and people must be tied to a "module code", into which marks for those activities are stored. What's the result? Thousands of online "spaces" in the VLE which are effectively dead - nothing happening - apart from students (who have become inured to the dead online VLE space on thousands of other modules) going in to access the powerpoints that the teacher uploaded from the lecture, watch lecture capture, or submit their assignment.

What a weird "space" this is!

Go into any physical space on campus and you see something entirely different. Students gathered together from many courses, some revising or writing essays, some chatting with friends, some on social media. In such a space, one could imagine innovative activities that could be organised among such a diverse group - student unions are often good at this sort of thing: the point is that the possibility is there.

In the online space, where is even the possibility of organising group activities across the curriculum? It's removed by the technologically reinforced modularisation of student activity. If you remove this reinforced modularisation, do new things become possible?

If Facebook organised itself into "modules" like this it would not have succeeded. Instead it organised itself around personal networks where each node generated information. Each node is an "information producing" entity, where the information produced by one node can become of interest to the information-production function of another.

There's something very important about this "information production" function in a viable online space. In a VLE, the information production is restricted to assignments - which are generally not shared with a community for fear of plagiarism - and discussion boards. The restricting of the information production and sharing aspect is a key reason why these spaces are "dead". But these restrictions are introduced for reasons relating to the ways we think about assessment, and these ways of thinking about assessment get in the way of authentic communication: communicating within the VLE can become a risk to the integrity of the assessment system! (Of course, this means that communication happens in other ways - Facebook, Whatsapp, Snapchat, TikTok, etc)

The process of generating information - of sticking stuff out there - is a process of probing the environment. It is a fundamental process that needs to happen for a viable system if it is to adapt and survive. It matters for individual learners to do this, but it also matters for communities - whether they are online or not.

I wonder if this is a feature of all viable institutions: that they have a function which puts information out into the environment as a way of probing the environment. It is a way of expressing uncertainty. This information acts as a kind of "receptor" which attracts other sources of information (other people's uncertainty) and draws them into the community. Facebook clearly exploits this, whilst also deliberately disrupting the environment so as to keep people trying to produce information to understand an ever-changing environment. Meanwhile, Facebook makes money.

If an online course or an online community in an institution is to be viable, then it must have a similar function: there must be a regular production of information which acts as a receptor to those outside. This processing of "external uncertainty" exists alongside the processes of inner-uncertainty management which are organised within the community, and within each individual in that community.

In asking how this might be organised, I wonder if there is hope for overcoming the genotype of the VLE-dominated university.

Monday, 13 January 2020

Oscillating Emotions, Maddening Institutions... and Technology

My current emotional state is worrying me. Rather like the current climate on our burning planet, or our scary politics, it's not so much a particular state (although depression, and a burning Australia, are of course worrying), but the oscillation, the variety, of emotional states that's bothering me. It's one extreme and then the next, and no control. The symptoms, from an emotional point of view, are dangerous because they threaten to feed back into the pathology. In a state of depression, one needs to talk, but things can become so overwhelming that talking becomes incredibly difficult, and so it gets worse.

A lot hangs on the nature of our institutions. It is not for nothing that stable democracies pride themselves on the stability of their institutions. This is because, I think, institutions are places where people can talk to each other. They are information-conserving entities, and the process of conserving information occurs through conversation. "Conserving conversation", if you like.

So what happens when our institutions fill themselves with technologies that disturb the context for conversation to the extent that people:

  1. feel stupid that they are not on top of the "latest tools" (or indeed, are made to feel stupid!)
  2. cannot talk to each other about their supposed "incompetence" for fear of exposing it.
  3. feel that the necessity for conversation is obviated by techno-instrumental effectiveness (I sent you an email - didn't you read it?)
  4. are too busy and stressed wrestling with bad interfaces to build proper relationships or to ask powerful questions
  5. are permanently threatened by existential concerns over their future, their current precarious contract, their prospects for longer-term financial security, their family, and so on
There is, of course, the "you're lucky to have a job" brigade. Or the "don't think about it, just get on with it" people.  But these people reduce the totality of human life to a function. And it clearly isn't a simple function. And yet there is no rational way to determine that such an attitude is wrong. Because of that, these people (sometimes deliberately) amplify the oscillation. 

This functionalist thinking derives from technological thinking. It's not particular technologies that are to blame. But it is what computer technology actually does to institutions: it discards information. Losing information is really bad news. 

So we have institutions which traditionally exist by virtue of their capacity to conserve information (and memory, thought and inquiry) through facilitating conversation. We introduce an IT system which loses some information, because it removes some of the uncertainty that required conversation to address. This information loss is addressed by another IT system, which loses more information. Which necessitates... The loss of information through technology is like the increase in CO2.

It leads to suffocation. 

Tuesday, 31 December 2019

Programming the Educational Platform: A turning point for educational technology

The sale of Instructure to a private equity firm, Thoma Bravo, has prompted various reactions within education (see for example https://eliterate.us/instructure-plans-to-expand-beyond-canvas-lms-into-machine-learning-and-ai/). Instructure's Canvas has established itself as the leading VLE, with many high-profile institutions opting for its clean interface, seamless video and mobile integration, and powerful open-source, service-oriented architecture. It is ahead of the competition, having correctly identified the move towards data-oriented educational coordination and flexibility, and providing an impressive range of tools to manage this.

The indications that there has been some serious thought behind the platform include its GraphiQL query interface (see https://github.com/graphql/graphiql), and an API which sits beneath Canvas's own interface. The API is surprisingly easy to use: simply adjust almost any Canvas URL for pages, files or modules to include "/api/v1/" after the domain, and instead of the interface, you get a JSON file. The consistency of this is impressive, and putting data in (automatically creating content, for example) is as easy as getting data out.
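To give a flavour, here's a sketch in Python - the domain, token and course id are hypothetical placeholders, not working values:

```python
import requests

DOMAIN = "https://myinstitution.instructure.com"  # hypothetical Canvas instance
TOKEN = "your-access-token"  # generated from your Canvas account settings
headers = {"Authorization": f"Bearer {TOKEN}"}

# Getting data out: where the browser shows /courses, the API returns JSON.
for course in requests.get(f"{DOMAIN}/api/v1/courses", headers=headers).json():
    print(course["id"], course.get("name"))

# Putting data in is just as direct - e.g. creating a page in course 123:
requests.post(
    f"{DOMAIN}/api/v1/courses/123/pages",
    headers=headers,
    json={"wiki_page": {"title": "Hello", "body": "<p>Created via the API</p>"}},
)
```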

Instructure, like many players in educational technology, see their future in data (Turnitin was also sold this year for £1.3 billion). Of course, like Turnitin, in providing a hosted platform, they have access potentially to enormous amounts of data. The big hope for the corporations is machine learning and predictive analytics. However, for all the hand-wringing going on, I think it would be wise to be slightly sceptical about what has been portrayed as a corporate data-grab of universities. After all, machine learning is in its infancy, and there is no evidence as to what might be learnt through analysing VLE data that would be useful for educational institutions. MOOC data, after all, was something of a disappointment.

Of course, people point to Facebook, Google and Amazon as corporations which exploit the data of their users. Logic would suggest that education would follow the same path. But the difference lies in the fact that Facebook, Google and Amazon are all trying to sell us things (which we usually don't want), or get us to vote for people (who may not be good for us).

Despite all the claims around the marketisation of education, education is fundamentally about relationships, not sales. So Instructure might be wrong. We should use their tools and watch the space patiently - but I doubt that we are looking at an educational equivalent of BlackRock (although I'm sure that is what Instructure envisages).

The approach of Canvas to rationally organising the technical infrastructure of institutional learning systems is a good one and much needed. Whatever challenges educational institutions face in the future, they are likely to need to adapt quickly to a fast changing environment and increasing complexities (more students, increasing diversity of learning needs, more flexibility in the curriculum, more work-based education, etc). Rigid technical infrastructure which limits control to manipulation of poor interfaces, hides data, and makes coordination difficult will impair the institution's ability to adapt. Instructure has addressed many of these problems. So, basically, the technology is very good - this is what institutions need right now (I'm sure other VLE providers will learn from this, but at the moment they seem to be behind).

This also spells an important change for those whose role is to coordinate learning technology. Data analysis and effective control (probably through programming interfaces) are going to become essential skills. It is through these skills that flexibility is maintained. As more and more content becomes freely available on the internet, as video production tools become available to everyone (including students), and as creativity and variety of expression and production become more important for personal growth, the job shifts to managing the means of coordination, rather than the production of yet more content. The challenge is for each institution to take control of its own platform - and this will demand new skillsets.

This is a new stage of educational technology. Where MOOCs provided content, they thought little about coordination and relationships, and about the essential role of institutions in managing them. In Coursera and edX, the institution was merely a calling-card - exploited for its status. In creating a flexible technical framework for institutions, initiatives like Canvas approach educational problems as problems of institutional organisation. There is inevitably a trade-off between the big corporations which provide the resources to refine these kinds of tools, and the institutions which, when they analyse their needs correctly, can use them profitably.

The interesting thing about where we are is that both universities and technology corporations are organic entities which swallow up their environments. In biological terms, they could be said to be endosymbiotic. Lynn Margulis's endosymbiosis theory described how competing entities like this (in her case, cells and bacteria) eventually learn to cooperate. Is this what we're going to see in education? If it is, then I think we are at a turning point.

Sunday, 29 December 2019

From 2d to 3d Information Theory

I've been doing some work on Shannon information theory in collaboration with friends, and wrote a simple program to explore Shannon's idea of mutual information. Mutual information is a measurement of the extent to which two sources of information share something in common. It can be considered as an index of the extent to which information source A can predict the messages produced by information source B. If the Shannon entropy of source A is H(A), that of source B is H(B), and the entropy of the two taken together is H(AB), then the mutual information is:

I(A;B) = H(A) + H(B) - H(AB)
There is an enormous literature about this, because mutual information is very useful and practical, whilst also presenting some interesting philosophical questions. For example, it seems to be closely related to Vygotsky's idea of "zone of proximal development" (closing the ZPD = increasing mutual information while also increasing complexity in the messages).

There are problems with mutual information. With three information sources, its value oscillates between positive and negative. What does a negative value indicate? Well, it might indicate that there is mutual redundancy rather than mutual information - so the three systems are generating constraints between them (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3030525).

Negative values should not occur in two dimensions. But they do. Loet has put my program on his website, and it's easy to see how a negative value for mutual information can be produced: https://www.leydesdorff.net/transmission

It presents two text boxes. Entering a string of characters in each immediately calculates the entropy (Shannon's H), and the mutual information between the two boxes.
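The calculation itself is simple. Here is a minimal Python sketch of it (not Loet's actual code - I'm assuming the joint entropy H(AB) is taken over the character distribution of the two texts pooled together, which reproduces the behaviour described below):

```python
from collections import Counter
import math

def H(text):
    """Shannon entropy (in bits) of a text's character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(a, b):
    """I(A;B) = H(A) + H(B) - H(AB), with H(AB) computed over the
    pooled characters of both texts (my assumption about the program)."""
    return H(a) + H(b) - H(a + b)

print(mutual_information("ab", "y"))     # about -0.585: negative!
print(mutual_information("ab", "yyyy"))  # about -0.25: still negative, but rising
```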


This is fine. But if one of the information sources has zero entropy (which it would if it has no variety), we get a negative value.

So what does this mean? Does it mean that if two systems do not communicate, they generate redundancy? Intuitively I think that might be true. In teaching, for example, with a student who does not want to engage, the teacher and the student will often retreat into generating patterns of behaviour. At some point sufficient redundancy is generated so that a "connection" is made. This is borne out in my program, where more "y"s can be added to the second text, leaving the entropy at 0 but increasing the mutual information.

But maybe I'm reading too much into it. It seems that it is a mathematical idiosyncrasy - something weird with probability theory (which Shannon depends on) or the use of logs (which he got from Boltzmann). 

Adding redundant symbols is not the same as "adding nothing" - it is another symbol - even if it's zero. 

The bottom line is that Shannon's theory has no way of accounting for "nothing". How could it?

This is where I turn to my friend Peter Rowlands and his nilpotent quantum mechanics, which exploits quaternions and Clifford algebra to express nothing in a 3-dimensional context. It's the 3d-ness of quaternions which is really interesting: Hamilton realised that quaternions were needed to express rotation in three dimensions.

I don't know what a quaternionic information theory might look like, but it does seem that our understanding of information is 2-dimensional, and that this 2-d information is throwing up inconsistencies when we move into higher dimensions, or try weird things with redundancy.

The turn from 2d representation to 3d representation was one of the turning points of the Renaissance. Ghiberti's "Gates of Paradise" represents a moment of artistic realisation about perspective which changed the way representation was thought about forever.

We are at the beginning of our information revolution. But, like medieval art, it may be the case that our representations are currently two-dimensional, where we will need three. Everything will look very different from there.

Tuesday, 24 December 2019

Out of Chaos - A Mathematical Theory of Coherence

One of the highlights of my 2019 was putting together what is beginning to look like a mathematical theory of evolutionary biology, with John Torday of UCLA and Peter Rowlands at the University of Liverpool, using the work of Loet Leydesdorff and Daniel Dubois on anticipatory systems. The downside of 2019 has been that things have seemed to fall apart - "all coherence gone", as John Donne put it at the beginning of the scientific revolution (in "An Anatomy of the World"):

And new philosophy calls all in doubt,
The element of fire is quite put out,
The sun is lost, and th'earth, and no man's wit
Can well direct him where to look for it.
And freely men confess that this world's spent,
When in the planets and the firmament
They seek so many new; they see that this
Is crumbled out again to his atomies.
'Tis all in pieces, all coherence gone,
All just supply, and all relation;
Prince, subject, father, son, are things forgot,
For every man alone thinks he hath got
To be a phoenix, and that then can be
None of that kind, of which he is, but he.

The keyword in all of this (and a word which got me into trouble this year because people didn't understand it) is "coherence". Coherence, fundamentally, is a mathematical idea belonging to fractals and self-referential systems. It is through coherence that systems can anticipate future changes to their environment and adapt appropriately, and the fundamental driver for this capacity is the creation of fractal structures, which, by definition, are self-similar at different scales.

In work I've done on music this year with Leydesdorff, this coherent anticipatory model combines both synchronic (structural) and diachronic (time-based) events into a single pattern. This is in line with the physics of David Bohm, but it also coincides with the physics of Peter Rowlands.

When people talk of a "mathematical theory" we tend to think of something deterministic, or calculative. But this is not at all why maths is important (indeed it is a misunderstanding). Maths is important because it is a richly generative product of human consciousness which provides consciousness with tangible natural phenomena upon which its presuppositions can be explored and developed. It is a search for abstract principles which are generative not only of biological or social phenomena, but of our narrative capacities for accounting for them and our empirical faculties for testing them. Consciousness is entangled with evolutionary biology, and logical abstraction is the purest product of consciousness we can conceive. In its most abstract form, an evolutionary biology or a theory of learning must be mathematical, generative and predictive. In other words, we can use maths to explore the fundamental circularity existing between mind and nature, and this circularity extends beyond biology, to phenomena of education, institutional organisation and human relations.

When human organisations, human relations, learning conversations, artworks, stories or architectural spaces "work", they exhibit coherence between their structural and temporal relations with an observer. "Not working" is the label we give to something which manifests itself as incoherent. This coherence is at a deep level: it is fractal in the sense that the patterns expressed by these things are recapitulations of deeper patterns that exist in cells and in atoms.

These fractal patterns exist between the "dancing" variables involved in multiple perceptions - what Alfred Schutz called a "spectrum of vividness" of perception. The dancing of observed variables may have a similar structure to deeper patterns within biology or physics, and data processing can allow some glimpse into what these patterns might look like.

Fractal structures can immediately be seen to exhibit coherence or disorder. Different variables may be tried within the structure to see which displays the deepest coherence. When we look for the "sense" or "meaning" of things, it is a search for those variables, and those models which produce a sense of coherence. It is as true for spiritual practice as it is for practical things like learning (and indeed those things are related).

2019 has been a deeply incoherent year - both for me personally, and for the world. Incoherence is a spur to finding a deeper coherence. I doubt that we will find it by doing more of the same stuff. What is required is a new level of pattern-making, which recapitulates the deeper patterns of existence that will gradually bring things back into order. 

Friday, 20 December 2019

Human Factors and Educational Technology in Institutions

Educational institutions are now enormously complex technological organisations - particularly universities. They are generally so complex that few people in the university really understand how everything fits together. Computer services will have teams who understand individual systems, although it is unusual to find someone in a computer services department who understands how it all fits together technically. Even less likely is it to find someone who understands the divergences of digital practice either in the classroom by teachers, or among professional service staff who process marks (and often organise assignments in the VLE).

Of course, despite the lack of any synoptic view, things keep on going. This works because, whatever complexities are created by different systems, an administrative workforce can be summoned up to handle the complexity. Provided marks are entered, exam boards are supplied with data, and students progress through their courses to completion, it might be tempting to ask whether the lack of a synoptic view matters at all.

This is where technological infrastructure, human factors and organisational effectiveness meet. An effective organisation is one which organises itself to deal with the actual demands placed on it. An effective organisation manages its complexity, understands its environment, and has sufficient flexibility to adapt to change. In a university, it can be very difficult to define "demand" or be clear about "environment". At a superficial level, there is demand from "students" for teaching and assessment. This demand is increasingly framed as a "market". However, at a deeper level, there is a demand from society, and from the politicians who steer it (and the policy for higher education). What does society demand of education? In recent years, the answer to that question has also been framed around "the market" - but many commentators have pointed out that this is a false ontology. Society has a habit of turning on institutions which extend their power beyond reasonable limits. There is no reason to suppose this might not happen to universities, which have extended their power through a variety of what Colin Crouch calls "privatised Keynesianism" - individualised debt to pay for institutional aggrandisement such as real estate (https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-856X.2009.00377.x).

Then we should ask: "What is the actual demand for learning?" A commonsense answer is that a student should expect to be able to do things which they weren't able to do before. But this is a very vague notion. As a crude example, how many of the many thousands of computer science graduates can actually program when they graduate? How many creative writers will make it as authors? How many architects will build a building? Now, of course, this is unfair. There are many "transferable skills" from a degree - people will go into other walks of life, buoyed by their new-found self-confidence. Apart, that is, from those who become so worn down by assessment and institutional rigidity that their mental health takes a knock in education. So there is a first demand: "Education should not leave me feeling worse about myself than when I started."

It turns out to be surprisingly hard to organise that (https://www.hrmagazine.co.uk/article-details/students-regret-going-to-university). Teachers do their best, but within constraints which they and everyone else in the system find unfathomable. Today, many of those constraints are technical in origin. The computer has swamped the natural human pattern of communication which kept institutions viable for centuries. The space for rich intersubjective engagement - whether between teachers and students, between staff, or between students and their peers - has been attenuated to lights on a screen and clicks and flicks. And with the problems that this creates, the answer is always the same: more technology.

So do we reach the stage where the art of teaching becomes the art of designing a webpage in the VLE? Instructional design would appear to have a lot to answer for. Deep human virtues of knowledge, openness, generosity and the revealing of uncertainty do not fit the digital infrastructure and go unrewarded. Flashy new tech appeals to ambitious staff with a latent drive for everyone to "do it their way". Some of these people become senior managers and appoint more like them. It's positive feedback.

The equations are simple. All technology creates more complexity in the guise of offering a "solution" to a problem created by some prior technology. Human beings struggle to deal with the complexity, and demand new technical solutions, which often make things worse. How do we get out of this?

The importance of a synoptic view is that it must entail clear distinctions about the system, its operations, its demands and its boundaries. As long as we have no clear idea of the problem we want technology to address in education, we will be condemned to repeat the cycle. So what is the purpose? What's the point?

It's probably very simple: we all die. The older ones will die first. While they are alive they can communicate something of how to survive in the future. Some will do things which speak to future generations after they are dead. But this can only happen if people are able to live in a world that is effectively organised. In a world of ineffective organisation, complexity will proliferate and the intergenerational conversation will die in a torrent of tweets and emails. There is a reason why the ice caps melt, the stock market booms, and the world is full of mad dictators.