Sunday 26 January 2020

Well-run Universities and methods for Analysing them

There's so much critique of higher education these days that little thought goes into how an institution might be optimally organised in the modern world. This is partly because critique is "cheap". Bateson made the point that we are good at talking about pathology and bad at talking about health. This is partly because to talk about health you need to talk about the whole system; to talk about pathology you only need to point to one bit of it which isn't working and apportion blame for its failure. Often, critique is itself a symptom of pathology, and may even exacerbate it.

The scientific problem here is that we lack good tools for analysing, measuring and diagnosing the whole system. Cybernetics provides a body of knowledge - an epistemology - which can at least provide a foundation, but it is not so good empirically. Indeed, some aspects of second-order cybernetics appear almost to deny the importance of empirical evidence. Unfortunately, without experiment, cybernetics itself risks becoming a tool for critique. Which is pretty much what's happened.

Within the history of cybernetics, there are exceptions. Stafford Beer's work in management is one of the best examples. He used a range of techniques from information theory to personal construct theory to measure and analyse systems and to transform organisation. More recently, Loet Leydesdorff has used information theory to produce models of the inter-relationship between academic discourse, government policy and industrial activity, while Robert Ulanowicz has used information theory in ecological investigations.

Information theory is something of a common denominator. Ross Ashby recognised that Shannon's formulae were basically expressing the same idea as his concept of "variety", and that this equivalence could be used to analyse complex situations in almost any domain.
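To make the equivalence concrete, here is a minimal sketch (mine, not Ashby's or Shannon's worked example): for a system observed in N equally likely states, Shannon's entropy comes out as log2(N) - the logarithm of Ashby's variety, the count of distinguishable states.

```python
from collections import Counter
from math import log2

def shannon_entropy(observations):
    """Shannon entropy (in bits) of the empirical distribution of states:
    H = sum over states of p * log2(1/p)."""
    counts = Counter(observations)
    total = len(observations)
    return sum((c / total) * log2(total / c) for c in counts.values())

# A system observed in 4 equally likely states has entropy log2(4) = 2 bits,
# i.e. the logarithm of its "variety" (4 distinguishable states).
print(shannon_entropy(['a', 'b', 'c', 'd'] * 25))  # → 2.0
```

When the states are not equally likely, the entropy drops below log2 of the state count - which is one way of seeing why Shannon's measure is a refinement of simple state-counting, not just a restatement of it.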

However, there are some big problems with Shannon's information theory. Not least, it assumes that complex systems are ergodic - i.e. that their complexity over a short period of time is equivalent to their complexity over a long one. All living systems are non-ergodic: new features and new behaviours emerge in them which are impossible to predict at the outset.
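A toy illustration of what the ergodic assumption buys, and where it breaks (the two symbol streams below are invented for the purpose):

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a sequence's empirical distribution."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * log2(n / c) for c in counts.values())

# An "ergodic" stream has the same statistics all the way through;
# a non-ergodic (living) one produces genuinely new states later on.
ergodic = list('abab' * 50)
nonergodic = list('aaaa' * 25 + 'abcd' * 25)  # new states emerge half-way

# For the ergodic stream, early and late windows have the same entropy;
# for the non-ergodic one they do not - a short sample tells you nothing
# reliable about the long run.
print(entropy(ergodic[:100]), entropy(ergodic[100:]))       # 1.0 1.0
print(entropy(nonergodic[:100]), entropy(nonergodic[100:])) # 0.0 2.0
```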

Another problem with information theory is the way that complexity itself is understood in the first place. For Ashby, complex systems are complex because of the number of states they can exist in. Ashby's variety was a countable thing. But how many countable states can a human being exist in? Where do we draw the boundary around the things that we are counting and the things that we ignore? And then the word "complex" is applied to things which don't appear to have obvious states at all - take, for example, the "complex" music of J.S. Bach. How many states does that have?

I think one of the real breakthroughs in the last 10 years or so has been the recognition that it is not information which is important, but "not-information", "constraint", "absence" or "redundancy". Terry Deacon, Loet Leydesdorff, Robert Ulanowicz and (earlier) Gregory Bateson and Heinz von Foerster can take the credit for this. In the hands of Leydesdorff, however, this recognition of constraint and absence became measurable using Shannon information theory, and the theory of anticipatory systems of Daniel Dubois.

This is where it gets interesting. An anticipatory system contains a model of itself. It is the epitome of Conant and Ashby's statement that "every good regulator of a system must be a model of that system" (see https://en.wikipedia.org/wiki/Good_regulator). Beer integrated this idea in his Viable System Model in what he called "System 4". Dubois meanwhile expresses an anticipatory system as a fractal, and this potentially means that Shannon information can be used to generate this fractal and provide a kind of "image" of a healthy system. Which takes us to a definition:

A well-run university contains a good model of itself.
How many universities do you know like that?

Here however, we need to look in more detail at Dubois's fractal. The point of a fractal is that it is self-similar at different orders of scale. That means that what happens at one level has happened before at another. So theoretically, a good fractal can anticipate what will happen because it knows the pattern of what has happened.

I've recently done some work analysing student comments produced through comparative judgement of a variety of documents from science, technology, creativity and communication. (I did this for the Global Scientific Dialogue course in Vladivostok last year - https://dailyimprovisation.blogspot.com/2018/03/education-as-music-some-thoughts-on.html). The point of the comparative judgement was to stimulate critical thought and disrupt expectations. In other words, it was to re-jig any anticipatory system that might have been in place, and encourage the development of a fresh one.

I've just written a paper about it, but the pictures are intriguing enough. Basically, they were generated by taking a number of Shannon entropy measurements of different variables and examining the relative entropy between these different elements. This produces a graph, and the movements of the entropy line in the graph can be coded as 1s and 0s to produce a kind of fractal. (I used the same technique for studying music here - https://dailyimprovisation.blogspot.com/2019/05/bach-as-anticipatory-fractal-and.html)
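The paper has the details, but the basic move can be sketched. I should stress that the window size and the toy symbol stream below are placeholders of my own, not the actual variables or coding scheme from the study:

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a sequence's empirical distribution."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * log2(n / c) for c in counts.values())

def entropy_line(symbols, window=5):
    """Entropy measured over successive sliding windows of the stream -
    this is the 'entropy line' whose shape can be plotted as a graph."""
    return [entropy(symbols[i:i + window])
            for i in range(len(symbols) - window + 1)]

def binarise(line):
    """Code each movement of the entropy line: rise -> 1, fall or flat -> 0.
    The resulting bit string is the raw material for the fractal."""
    return [1 if b > a else 0 for a, b in zip(line, line[1:])]

# Toy "student comment" reduced to a stream of coded word categories.
stream = list('aabacdaabbccddabcd')
bits = binarise(entropy_line(stream))
print(''.join(map(str, bits)))
```

In the actual analysis the relative entropies of several different variables are compared, not just one stream; but the rise/fall coding of the resulting line is the step that turns a graph into the 1s and 0s from which a fractal image can be drawn.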

So here are my pictures below. Now I suppose there is a simple question - can you tell who are the good students and who are the bad ones?


[Five figures, labelled a to e: entropy fractal graphs, one per student]


But what about the well-run institution? I think it too must have an analysable anticipatory fractal. There will be patterns of activity at all levels of management - from learner utterances (like these graphs) through to teacher behaviour, management actions, policies, technologies and relations with the environment. Yet I suspect that if we tried to do this today, we would find little coherence in the ways in which modern universities coordinate their activities with the world.







Tuesday 14 January 2020

What have VLEs done to Universities?

The distinction between genotype and phenotype is useful in thinking about organisational change. Given that an institution is a kind of organism, it is the distinction between those behaviours that emerge in its interactions with its environment, and the extent to which these behavioural changes become hard-wired into its nature and identity (the "genome"). So institutions adapt their behaviour in response to environmental changes in a "phenotypical" way initially, implementing ad-hoc technologies and procedures. Over time, these ad-hoc procedures become codified in the functionality of universal technologies which are deployed everywhere, and which determine the ongoing behaviour of the "species" - the "genotype".

Changes to the genotype are hard to shift. They determine patterns of organic reproduction: so we see different kinds of people existing in institutions to the kinds of people that we might have seen 40 years ago. Many elderly esteemed scholars would now say they wouldn't survive in the modern university. They're right - think of Marina Warner's account of her time at Essex (and why she quit) in the London Review of Books a few years ago: https://www.lrb.co.uk/the-paper/v36/n17/marina-warner/diary, or more recently Liz Morrish's "The university has become an anxiety machine": https://www.hepi.ac.uk/2019/05/23/the-university-has-become-an-anxiety-machine/. Only last week this Twitter thread appeared: https://twitter.com/willpooley/status/1214891603606822912. It's all true.

As part of the "genotype", technology is the thing which drives the "institutional isomorphism" that means that management functions become professionalised and universal (where they used to be an unpopular burden for academics). But - and it is a big BUT - this has only happened because we have let it happen.

The Virtual Learning Environment is an interesting example. Its genotypical function has been to reinforce the modularisation of learning in such a way that every collection of resources, activities, tools and people must be tied to a "module code", into which marks for those activities are stored. What's the result? Thousands of online "spaces" in the VLE which are effectively dead - nothing happening - apart from students (who have become inured to the dead online VLE space on thousands of other modules) going in to access the powerpoints that the teacher uploaded from the lecture, watch lecture capture, or submit their assignment.

What a weird "space" this is!

Go into any physical space on campus and you see something entirely different. Students gathered together from many courses, some revising or writing essays, some chatting with friends, some on social media. In such a space, one could imagine innovative activities that could be organised among such a diverse group - student unions are often good at this sort of thing: the point is that the possibility is there.

In the online space, where is even the possibility of organising group activities across the curriculum? It's removed by the technologically reinforced modularisation of student activity. If you remove this reinforced modularisation, do new things become possible?

If Facebook organised itself into "modules" like this it would not have succeeded. Instead it organised itself around personal networks where each node generated information. Each node is an "information producing" entity, where the information produced by one node can become of interest to the information-production function of another.

There's something very important about this "information production" function in a viable online space. In a VLE, the information production is restricted to assignments - which are generally not shared with a community for fear of plagiarism - and discussion boards. The restricting of the information production and sharing aspect is a key reason why these spaces are "dead". But these restrictions are introduced for reasons relating to the ways we think about assessment, and these ways of thinking about assessment get in the way of authentic communication: communicating within the VLE can become a risk to the integrity of the assessment system! (Of course, this means that communication happens in other ways - Facebook, Whatsapp, Snapchat, TikTok, etc)

The process of generating information - of sticking stuff out there - is a process of probing the environment. It is a fundamental process that needs to happen for a viable system if it is to adapt and survive. It matters for individual learners to do this, but it also matters for communities - whether they are online or not.

I wonder if this is a feature of all viable institutions: that they have a function which puts information out into the environment as a way of probing the environment. It is a way of expressing uncertainty. This information acts as a kind of "receptor" which attracts other sources of information (other people's uncertainty) and draws them into the community. Facebook clearly exploits this, whilst also deliberately disrupting the environment so as to keep people trying to produce information to understand an ever-changing environment. Meanwhile, Facebook makes money.

If an online course or an online community in an institution is to be viable, then it must have a similar function: there must be a regular production of information which acts as a receptor to those outside. This processing of "external uncertainty" exists alongside the processes of inner-uncertainty management which are organised within the community, and within each individual in that community.

In asking how this might be organised, I wonder if there is hope for overcoming the genotype of the VLE-dominated university.

Monday 13 January 2020

Oscillating Emotions, Maddening Institutions... and Technology

My current emotional state is worrying me. Rather like the current climate on our burning planet, or our scary politics, it's not so much a particular state (although depression and burning Australia are of course worrying), but the oscillation, the variety, of emotional states that's bothering me. It's one extreme and then the next, with no control. The symptoms, from an emotional point of view, are dangerous because they threaten to feed back into the pathology. In a state of depression, one needs to talk, but things can become so overwhelming that talking becomes incredibly difficult, and so it gets worse.

A lot hangs on the nature of our institutions. It is not for nothing that stable democracies pride themselves on the stability of their institutions. This is because, I think, institutions are places where people can talk to each other. They are information-conserving entities, and the process of conserving information occurs through conversation. "Conserving conversation", if you like.

So what happens when our institutions fill themselves with technologies that disturb the context for conversation to the extent that people:

  1. feel stupid that they are not on top of the "latest tools" (or indeed, are made to feel stupid!)
  2. cannot talk to each other about their supposed "incompetence" for fear of exposing what they perceive as this "incompetence".
  3. feel that the necessity for conversation is obviated by techno-instrumental effectiveness (I sent you an email - didn't you read it?)
  4. are too busy and stressed working bad interfaces to build proper relationships or to ask powerful questions
  5. are permanently threatened by existential concerns over their future, their current precarious contract, their prospects for longer-term financial security, their family, and so on
There is, of course, the "you're lucky to have a job" brigade. Or the "don't think about it, just get on with it" people.  But these people reduce the totality of human life to a function. And it clearly isn't a simple function. And yet there is no rational way to determine that such an attitude is wrong. Because of that, these people (sometimes deliberately) amplify the oscillation. 

This functionalist thinking derives from technological thinking. It's not particular technologies that are to blame. But it is what computer technology actually does to institutions: it discards information. Losing information is really bad news. 

So we have institutions which traditionally exist by virtue of their capacity to conserve information (and memory, thought and inquiry) through facilitating conversation. We introduce an IT system which loses some information because it removes some degree of uncertainty that previously required conversation to address. This information loss is addressed by another IT system, which loses more information. Which necessitates... The loss of information through technology is like the increase in CO2.

It leads to suffocation.