Monday, 17 February 2020

From Radical Constructivism to Dominic Cummings: What's wrong with Cybernetics?

The pro-Brexit lobby at the heart of the UK government possesses a powerful arsenal of conceptual and epistemological tools which have effectively been "weaponised" in ways that would have mortified their inventors. Dominic Cummings knows his systems theory (as a cursory glance at his "Thoughts on Education and Political Priorities" shows: https://dominiccummings.files.wordpress.com/2013/11/20130825-some-thoughts-on-education-and-political-priorities-version-2-final.pdf). Cummings is not the first to turn cybernetics to bad ends: "philosopher" Nick Land has been writing pretty odious stuff for quite a few years, and it turns out that a big fan of Land is Andrew Sabisky, who is coming under pressure for his somewhat insane views on eugenics (https://www.bbc.co.uk/news/uk-politics-51535367) - something for which Cummings also has a penchant. Hayek got there first with the dark side of cybernetics, of course, but this new breed is less intelligent and more dangerous (Hayek was bad enough!).

It's all deeply troubling. Many of the inventors of these tools were German émigrés, horrified by Nazism, helping with the war effort by developing new weapons, and wishing for a better world. Wiener, however, knew that what they were doing was dangerous. His "The Human Use of Human Beings" reads like a prophecy today. Wiener's immediate fear was nuclear annihilation, but the likes of Cummings and his crowd are also in its sights, as would-be enslavers of humankind.

Ten years ago I was at the American Society for Cybernetics conference in Troy, NY, which was attended by Ernst von Glasersfeld - one of the last remaining figures from cybernetics's early period, and an important thinker about education. Von Glasersfeld, by then very old, gave a short address which summarised his philosophy; he died a few months later. You can read it here: http://www.asc-cybernetics.org/2010/?p=2700

It is such a clear exposition of cybernetic concepts that it invites a critical reflection: "is Cummings here?" - is there something in these ideas which opens the door to a fascism which would have mortified von Glasersfeld? I have to say, I think there is.

Von Glasersfeld made the clearest statement that cybernetics is fundamentally about constraint: as a science it is focused on "context". But as a science, it carried with it a clear conception of what is rational and what is metaphysical - and this is the main meat of the talk. Von Glasersfeld talks of the "pious fictions" of realists who insist on an external mind-independent reality. This, he states, cannot be science. Science, by contrast, exists in the rational process of coordinating understanding within constraints. As such, it cannot gain any kind of "objective knowledge".

But this sentence is the most interesting:
Only painters, poets, musicians and other artists like mystics and metaphysicians, may generate metaphors of reality, but to comprehend these metaphors you have to step out of the rational domain.
"Outside the rational domain"? What does that mean exactly? From what kind of context does von Glasersfeld make the judgement as to what is "rational" and what is not? This is framed by the existing institutional context of science and universities. Here we have embodied the problem of "two cultures" - and from there, we are on a slippery slope to Cummings.

Feelings are not rational. Social alienation is not rational. Experience itself is not rational. Yet some "rational" force allows us to make the distinction between what is and isn't rational, rejecting the irrational as a "pious fiction". This is how one can play the game of "Take back control" or "Get Brexit Done", treating feelings as if they are "rational" constructs of a communication system which is malleable to someone else's will.  It turns out that this is the pious fiction we should most fear. The artists, by contrast, speak the truth.

Where is the problem? It lies, I think, in a kind of two-dimensionality in the way that we think of communication. Cummings is quite keen on Shannon - at least insofar as Shannon underpins data analysis. But Shannon, lucky genius that he was, had a two-dimensional information transmission problem in front of him: a sends a message to b over a noisy medium; b interprets and responds. Even in Shannon this isn't quite as two-dimensional as it seems: a and b are "transducers" with a "memory" (see Shannon and Weaver, "The Mathematical Theory of Communication", 1949). But they were very pale representations of people. This meant that there was a limit to what could be communicated - and to what could be constructed.
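
To see how spare this picture is, here is a minimal sketch (my own illustration, not Shannon's notation) of a binary symmetric channel: a sends bits to b over a noisy medium that flips each bit with some probability, and the channel capacity, 1 - H(p), puts a hard ceiling on what can be communicated.

```python
import random
from math import log2

def binary_entropy(p):
    """H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def capacity(flip_prob):
    """Capacity of a binary symmetric channel: 1 - H(p) bits per symbol."""
    return 1.0 - binary_entropy(flip_prob)

def transmit(bits, flip_prob):
    """a sends bits to b over a noisy medium; each bit is flipped with probability flip_prob."""
    return [bit ^ (random.random() < flip_prob) for bit in bits]

message = [random.randint(0, 1) for _ in range(20)]
print("sent:    ", message)
print("received:", transmit(message, flip_prob=0.1))
print("capacity: %.3f bits per symbol" % capacity(0.1))
```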

In von Glasersfeld's world, the form of conversation arose through the interaction of constraints produced by communicating agents. Conversation and meaning emerged through a haze of "Brownian motion" - almost arbitrary in their emergence, only recognised as "meaningful" by us "observers". There are many problems with this view, the deepest of which is the assumption that within complex systems, the emergence of form and meaning is the result of an "arbitrary" process.

Years of attempting to simulate music from arbitrary processes have only produced bad music. It seems that the processes at work within the artist are no more arbitrary than the movements of electrons through a diffraction grating: there is an underlying pattern. But it's not the bands of the diffraction pattern that are interesting. It is the space between them: what is there? Nothing.

Appreciating this leads to a profound question about "constructivism" and indeed "radical constructivism": ok - so you can construct "stuff" in the world... but how might you construct "nothing"?

Without "nothing" there would be no pattern. Cummings, Land and co. know the deep magic. But the deeper magic is how to make nothing (channelling Narnia!)

To cut the story a bit shorter, "nothing" is mathematically realisable. William Rowan Hamilton's discovery of quaternions in 1843 was really the beginning of an adventure into nothing which we have not absorbed yet. The quaternions are a four-dimensional extension of the complex numbers whose imaginary units anti-commute. Hamilton's genius was to see that in order to represent rotation in three dimensions, this anti-commutativity was essential. But more importantly, quaternion arithmetic allowed for expressions where a = 0. So 3-dimensionality and nothingness are fundamentally connected. But we knew this: ever heard of a "vanishing point"?
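
To make the anti-commutativity concrete, here is a toy sketch (my own illustration, not Hamilton's notation) of quaternion multiplication on (w, x, y, z) tuples, showing that ij = k while ji = -k, and that a quaternion vanishes only when all four components do.

```python
def qmul(a, b):
    """Hamilton's quaternion product on (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1)  -> k
print(qmul(j, i))   # (0, 0, 0, -1) -> -k: the order of multiplication matters
```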

Von Glasersfeld had no way of constructing nothing. When he gave a talk about learning in Vienna a couple of years earlier, I put it to him that it was all very well to explain learning in the way that he did - but where did the drive to learn come from? He didn't really have an answer. Maybe he was tired. But I suspect he didn't want to think about it.

So Cummings and Land are exploiting a body of theory which is profoundly incomplete and two-dimensional. It's dangerous because it is two-dimensional and the real world isn't. The world of feelings, art, poetry and music is not some irrational boundary around a rational systems world. It is the third dimension in a world of natural information which cybernetics has not yet found a way to describe. It may become very important that we highlight these scientific shortcomings.

Wednesday, 12 February 2020

Brains and Institutions: Why Institutions need to be more Brain-like

I was grateful to Oleg for pointing out the double meaning in Beer's Brain of the Firm last week: it wasn't so much that there was a brain that could be unmasked in the viable institution; firms – institutions, universities, corporations, societies – were brains. Like brains, they are adaptive. Like brains, they do things with information which we cannot quite fathom – except insofar as we treat the concepts of "information processing" we have developed into computer science as a possible function of brains. But brains and firms are not computers. That we have considered that they are is one of the great mistakes of the modern age. It was believing this that led to the horrors of the 20th century.

So what is the message of Brain of the Firm? It is that firms, brains, universities, societies share a common topology. In the Brain of the Firm, Beer got as close as he could to articulating that topology. It was not a template. It was not a plan. It was not a recipe for effective organisation. It was not a framework for discussion. It was a topology. It was an expression of the territory within which distinctions are formed. Topology is a kind of geometry of the mind.

Universities are particularly interesting examples, because they are made of brains, and because their work is meant to be the work of their constituent brains. Universities present an example of where the "brain-organisation" sometimes goes right, but more often goes wrong. Why does it go wrong? Because we draw our distinctions in the wrong way – most often believing the institution to be its "organisation chart" – which is always a recipe for disaster.

Governments and states are alternative examples. One of the things we talked about was the general antipathy to the big state. The apparent failure of the Corbyn project is a hangover from the general disbelief in the big state. Johnson’s message is a throwback to Thatcher’s message – ironically this time marshalling the support of those who Thatcher hurt the most. But really, this message – the disbelief in the big state – never went away. After Stalin’s Russia, there appeared nowhere for the big state to go. But ironically, Stalin’s Russia was really a small state masquerading as a big one.

Today the world is full of would-be Stalins. Individuals wanting to impose their brains on everyone else, wanting to diminish the power of every collective unless it suits themselves. The model is repeated from corporation to university to city to state. But in the end, it will not work except to destroy its own environment (which it is doing very effectively), and it will not work because the distinctions are drawn in the wrong place.

The real question we should ask ourselves is this: How do brains work and how should organisations work to emulate them? Technology almost certainly gives a glimpse.

If there is a key feature of Beer’s fundamental topology it is the difference between the inside and the outside of a distinction. I wonder if in fact Spencer-Brown wasn’t influenced in his mathematics by Beer. I suspect Beer had the insight of Spencer-Brown’s most powerful idea first.

If you want to maintain any distinction then you must have a metasystem. Why? Because all distinctions are essentially uncertain, and there must be a mechanism – there must be the other side of the Möbius strip – to maintain the coherence of the distinction.

What must the metasystem do? Well, one of the things it must do is negotiate the distinction of the inside with the environment. In essence it has to determine what belongs inside and what belongs outside and to maintain this boundary. If it doesn’t do this, then the distinction collapses. So there must be a process of engaging, probing and modelling the environment.

This is System 4.

The other thing the metasystem must do is to manage the internal operations of the system – its own internal distinctions. This is System 3.

Then it must balance the balancing operations of System 4 and System 3. This is System 5.
So very quickly we arrive at this topology.

But this is the topology of a distinction, and within any topology there are further distinctions. The point is that it unfolds into a fractal structure. This, ultimately, is the fractal structure of the inside, which must be balanced against what is perceived as the fractal structure of the outside.
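
As a rough sketch of that unfolding (a toy rendering of my own, with invented names, not Beer's formalism), each operational unit managed by System 3 is itself a whole viable system, so the topology repeats at every level of recursion:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViableSystem:
    """Toy rendering of the topology: a distinction maintained by its metasystem."""
    name: str
    operations: List["ViableSystem"] = field(default_factory=list)  # each unit is itself viable

    def system4(self, environment):
        """Engage, probe and model the environment to maintain the boundary."""
        return {"inside": self.name, "outside": environment}

    def system3(self):
        """Manage the internal operations - the system's own internal distinctions."""
        return [op.name for op in self.operations]

    def system5(self, environment):
        """Balance System 3 against System 4 - the coherence of the whole distinction."""
        return {"internal": self.system3(), "external": self.system4(environment)}

# The fractal unfolding: a department inside a university is itself a viable system.
department = ViableSystem("department", [ViableSystem("research group"), ViableSystem("teaching team")])
university = ViableSystem("university", [department, ViableSystem("library")])
print(university.system5("the world outside"))
```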

The challenge is to operationalise this.

For many who pursued the VSM, the operationalisation ended up as a kind of consultancy – a way of talking to organisations to give them a bit more internal awareness. I guess this was fine – and it created a bit of work for cybernetics people. But ultimately it was empty, wasn't it?

How do we do better?

We need to come back to brains and firms. Within the brain, we have very little knowledge of what happens – particularly as to what happens with information. Obviously there are EEG monitors and the like, but they simply attenuate whatever complex activity is going on into graphs that show some kind of snapshot of the dance that's really taking place.

We can see much more of "information" if we look at communications. Then what do we see? We see massive amounts of redundancy in our communication. We see pattern infusing everything, catching the attention of analysts and clairvoyants. The clairvoyants are usually of more value because they tune in to something deeper.

The deeper problem lies in the way we are able to analyse and examine the patterns of communication. It's not as if we are short of data – although there can always be more. It is more that we do stupid things with the data we collect. Typically this involves attenuating out most of it and then using the attenuated dataset to make stupid decisions.

It’s rather like a brain with dementia. Important parts of the processes of maintaining information flows throughout the whole organisation are damaged – in the institution’s case, by technology – and consequently the selection mechanism for adaptation is impaired. Consequently the poor individual afflicted with this, steps into the road without bothering to check the approaching double-decker bus.
Our institutions are doing a similar thing for a similar reason. Key parts of their information processing apparatus are impaired which means that the selection mechanism for their future adaptation isn’t working.

So what is a selection mechanism for future adaptation? It is precisely what Beer, influenced no doubt by Robert Rosen, called an “anticipatory system”. It is the system’s model of itself. Now a model of oneself in time must be a fractal.  The only way the future can be predicted is through its pattern of events being seen to be similar to the past.
That is not to say that actual events repeat (although of course they do), but that the pattern of relations between particular events tends to repeat.
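
One minimal computational picture of a system that "contains a model of itself" is Dubois's incursive logistic map - my choice of illustration here, not Beer's - in which the next state is defined in terms of the very state it is about to produce, and which in this simple case can be solved exactly:

```python
def incursive_step(x, a):
    """Dubois-style incursion: x(t+1) = a*x(t)*(1 - x(t+1)), solved for x(t+1).

    The update refers to the state it is about to produce: the system carries
    a (here exactly solvable) model of its own next step."""
    return a * x / (1 + a * x)

x, a = 0.1, 3.8
trajectory = [round(x, 4)]
for _ in range(10):
    x = incursive_step(x, a)
    trajectory.append(round(x, 4))
print(trajectory)  # settles smoothly where the ordinary (recursive) logistic map would behave chaotically
```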

But we need to understand fractals. They are not really two-dimensional pictures. They are three-dimensional pictures. Long before we knew about fractals we knew the concept from the hologram – that encoding of time and space into a 2-dimensional frame where the self-similarity of the frame was its key feature.

The reason why fractals are so important is that our approaches to information and measurement are essentially 2-dimensional. Look at Shannon's diagram to see this. The deep problem with 2-dimensionality is that it has no concept of "nothing". In Shannon, any symbol exists against the constant background of a not-symbol, but we have no way of expressing the not-symbol.
True nothingness means making things disappear. It turns out that the only way we can make things disappear is by working in three dimensions. The fractal is an encoding of three dimensions in which “nothing” is written through like a stick of rock. Nothing is what makes the pattern.
All human behaviour in institutions is really about nothing. Or rather, it is about the attempt to grasp nothing from something. In the way that a piece of music eventually selects its ending – and silence – all our behaviour seeks a kind of resolution. Every conversation seeks a resolution. Every interminable meeting frustrates because its resolution never comes.

But if we want to see nothing, then we have to work with its encoded representation – the fractal.
Our mathematical approach to information – to computer information – can provide a glimpse of nothing. Indeed, our approaches to machine learning, which are beginning to show behaviour that is rather like conscious behaviour, are providing a glimpse – they too are fractal.
If we want to see nothing, then we need an encoding strategy whereby data is represented in a way where nothing might be analysed and considered.

If we want viable institutions, then we need viable individuals. If we want viable individuals then we need a way of encoding the communicative behaviour of individuals in a fractal which can reveal the underlying selection mechanism for optimal future development. It would not be a surprise if the optimal selection mechanism for individual development involved communication with other individuals. And it would not be a surprise if the optimal collective development of a group of individuals entailed the preservation of information between them.

What do we have? Monasticism?

Thursday, 6 February 2020

Why the current phase of Machine Learning will fail

Over the last two years I've been involved in a very interesting project combining educational technology assessment techniques with machine learning for medical diagnostics. At the centre of the project was the idea that human diagnostic expertise tends to be ordinal: experts make judgements about a particular case based on comparisons with judgements about other cases. If judgement is an ordinal process, then the deeper questions concern the communication infrastructure which supports these comparisons, and the ways in which the rich information of comparison is maintained within institutions such as diagnostic centres.

Then there was a technical question: can (and does) machine learning operate in an ordinal way? And more importantly, if it does, can it be used as a means of maintaining the information produced by the ordinal judgements of a group of experts, such that the combined intelligence of human + machine exceeds that of both human-only and machine-only solutions?
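
As a hedged illustration of what "ordinal" can look like in code - a generic Bradley-Terry-style sketch of my own, not the project's actual pipeline - expert judgements arrive as pairwise (winner, loser) comparisons and are aggregated into a ranking rather than into absolute scores:

```python
from collections import defaultdict
from itertools import chain

def rank_by_comparisons(comparisons, iterations=50):
    """Estimate ordinal strengths from pairwise (winner, loser) judgements."""
    items = set(chain.from_iterable(comparisons))
    wins = defaultdict(int)
    pair_counts = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = sum(pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                        for j in items if j != i and frozenset((i, j)) in pair_counts)
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {i: v / total for i, v in new.items()}
    return sorted(strength, key=strength.get, reverse=True)

judgements = [("case A", "case B"), ("case A", "case C"),
              ("case B", "case C"), ("case A", "case B")]
print(rank_by_comparisons(judgements))  # case A ranked above case B, above case C
```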

The project isn't over yet, but it would not be surprising if the answer to this question is equivocal: yes and no. "Yes" because this approach to machine learning is the only approach which does not throw away information. The basic problem with all current approaches to machine learning is that models are trained on labelled examples so that those labels can be mapped automatically onto new data. The complexity of any new data is thereby reduced by the ML algorithm to a classification. That is basically a process of throwing away information - and it is a bad idea. It amplifies the general tendency of the IT systems in organisations which have been doing this for years, and our institutions have suffered as a result.

However, not discarding information means that the amount of information to be processed grows combinatorially - with the number of pairwise combinations, which is quadratic in the number of items. It doesn't matter how powerful one's computers are, an algorithm whose training cost grows like this is bad news. Give it 100 images, and it might take a day to train the machine learning on 4950 combinations. 200 images give 19900 combinations, 300 give 44850, 400 give 79800, 500 give 124750. So if 4950 combinations take 24 hours, 500 images will take around 600 hours = 25 days. It won't take long before we are measuring the training time in months or years.
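
The arithmetic is easy to check (a back-of-envelope sketch using the 24-hours-for-4950-pairs assumption above):

```python
from math import comb

HOURS_PER_PAIR = 24 / comb(100, 2)   # assume 100 images -> 4950 pairs -> 24 hours of training

for n in (100, 200, 300, 400, 500, 1000):
    pairs = comb(n, 2)               # pairwise combinations grow as n*(n-1)/2
    days = pairs * HOURS_PER_PAIR / 24
    print(f"{n:5d} items -> {pairs:7d} pairs -> {days:6.1f} days of training")
```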

So this isn't realistic. And yet it's the right approach if we don't want to discard information. We don't yet know enough, and no amount of hacking with Tensorflow is going to sort it out.

The basic problem lies in the difference between human cognition and what the neural networks can do. The reason why we want to retrain the ML algorithm is that we want to be able to update its ordinal rankings so that they reflect the refinements of human experts. This really can only be done by retraining the whole thing with the expanded training set. If we don't retrain the whole thing, then there is a risk that a small correction in one part of the ML algorithm has undesirable consequences elsewhere.

Now humans are not like this. We can update our ordinal rankings of things very easily, and we don't suddenly become "stupid" when we do. How do we do it? And if we can understand how we do it, can that help us understand how to get the machine to do it?

I think we may have a few clues as to how we do this, and yes I think at some point in the future it will be possible to get to the next stage of AI where the machine can be retrained like this. But we are a long way off.

The key lies in the ways that the ML structures its data through its recursive processes. Although we don't have direct knowledge of exactly how all the variables and classifiers are stored within the ML layers, we get a hint of it when the ML algorithm is "reversed" to produce images which align with the ML classifiers, such as we see with the Google Deep Dream images.

These are basically fractal images which reflect the way that the ML convolutional neural network algorithm operates. Looking at a Deep Dream image, we can get some indication of the fractal nature of the structures within the machine learning itself.

I strongly suspect that not only our consciousness, but the universe itself, has a similar structure. I am not alone in this view: David Bohm, Karl Pribram and many others have held to a similar view. Within quantum mechanics today, the idea that the universe is some kind of "hologram" is quite common, and indeed, the hologram is basically another way of describing a fractal (indeed, we had holograms long before we could generate fractal images on the computer).

What's important about fractals is that they are anticipatory. This really lies at the heart of how ML works: it is able to anticipate the likely category of data it hasn't seen before (unlike a database, which can only reveal the categories of data it has been told about).

What makes fractals awkward - and why the current state of machine learning will fail - is that in order to change the understanding of the machine, the fractal has to be changed. And to change a fractal, you don't just change one value in one place; you have to change the entire pattern, so that it remains consistent yet is transformed to absorb the new knowledge.

We know, ultimately, this is possible. Brains - of all kinds - do it. Indeed, all viable systems do it. 

Saturday, 1 February 2020

Brexit Lions and Unicorns

Orwell's essay "The Lion and the Unicorn: Socialism and the English Genius" reads today as very old-fashioned and jingoistic. And yet, like all great artists, Orwell accesses something of the nature of life at the time he was writing ("As I write, highly civilised human beings are flying overhead, trying to kill me.") - something we are now seeing reflected back at us in a rather unedifying way with Brexit.

Much that he identified in 1941 is still true.
"There is no question about the inequality of wealth in England. It is grosser than in any European country, and you have only to look down the nearest street to see it. Economically, England is certainly two nations, if not three or four."
Except that of course this inequality has been exported to many other countries. But what about "patriotism" which his essay is really about? What does that mean?

It seems that patriotism is very problematic and confusing. It is uncomfortably close to the "nationalism" or "populism" (what does that mean?) that we see among the Brexiteers or the Trump supporters. Orwell's point is that despite the differences between the rich and the poor, or the different nations of the UK (I would like to think he would apply this across our multicultural society today), a country can be united if it feels its "home" to be under threat. I doubt this is a "national" instinct; it is a human one - we see it in extraordinary human acts of courage and compassion in the wake of terrorist atrocities (for example) the world over. Can we explain it? No - but to "explain" it as a "national characteristic" is both tempting and facile, and that's where jingoism comes from.

Where does the "threat" which mobilises everyone come from? Orwell is very clear that it is not among the individual "highly civilised human beings" who are trying to kill him.
"Most of them, I have no doubt, are kind-hearted law-abiding men who would never dream of committing murder in private life. On the other hand, if one of them succeeds in blowing me to pieces with a well-placed bomb, he will never sleep any the worse for it. He is serving his country, which has the power to absolve him from evil."
I wonder whether one might say this about some members of ISIS. So much depends on what we consider to be "civilised".

But it's not fanatics who scare us: "What English people of nearly all classes loathe from the bottom of their hearts is the swaggering officer type, the jingle of spurs and the crash of boots." It's the aristocracy who, in Europe, adopted the goose-step as their "ritual dance" - and his point that it is hard to imagine a Hitler in the UK relies on the fact that "Beyond a certain point, military display is only possible in countries where the common people dare not laugh at the army." Orwell argued that we needed socialism to counter the aristocratic swagger which leads to people like Hitler - what he called elsewhere the "arrogance of superiority".

Today in Europe we don't see military officials swaggering in uniform, although the police often now carry guns. But there is still the swagger of authority everywhere - and I think this really lies at the heart of the Tory Brexit which has just taken place. Orwell may be right that what unites rich and poor is a loathing of authoritarianism - it absolutely fits the rhetoric of Nigel Farage, Dominic Cummings and others. But there is no goose-stepping, and all that the Brexiteers complain of is "Brussels bureaucrats". That's a funny kind of goose-step!

But it may be one nonetheless. This is the goose-step of the techno age: the goose-step of free-market capitalism, technologically-driven oppression, surveillance and artificially-imposed austerity. We are all marching to its beat, and most of us - the losers - hate it.

We know, deep down, that the likes of Johnson, Trump and Farage are really the commanders of this robot dance which so many detest. And we know that they know that declaring hatred for it on the one hand, and ramping it up on the other, is the game. But we are caught - because you cannot call them out without denying that the uniting hatred of authority is fundamentally true. It's a massive double-bind.

So how do we get out of this mess? Orwell's answer was socialism - and in many respects he sensed that the founding of the NHS and the welfare state would be inevitable after the war. It has taken over 70 years to re-impose the goose-step in the techno age, in a far more complex and uncertain form where it drives everything, distorting that early socialist ideal to move to its beat.

One of the striking things about the EU and Brexit, Westminster and the election, is that everyone has taken it all so seriously. Perhaps we shouldn't take any of this seriously. Perhaps the game is to get us to take seriously things which aren't at all serious. It's like the population that is afraid to laugh at its army. If we don't take our parliaments - whether national or international - seriously, what are they? They are - it all is - irrelevant. Now look at how hard the media - the press, the national broadcasters, the social media companies - are trying to convince us that this isn't irrelevant!

That's the mark of swaggering authority - to persuade the people that what is irrelevant is the most important thing in the world. It's a con. 

Sunday, 26 January 2020

Well-run Universities and methods for Analysing them

There's so much critique of higher education these days that little thought is going into how an institution might be optimally organised in the modern world. This is partly because critique is "cheap". Bateson made the point that we are good at talking about pathology and bad at talking about health. To talk about health you need to talk about the whole system; to talk about pathology you only need to point to one bit of it which isn't working and apportion blame for its failure. Often, critique is itself a symptom of pathology, and may even exacerbate it.

The scientific problem here is that we lack good tools for analysing, measuring and diagnosing the whole system. Cybernetics provides a body of knowledge - an epistemology - which can at least provide a foundation, but it is not so good empirically. Indeed, some aspects of second-order cybernetics appear almost to deny the importance of empirical evidence. Unfortunately, without experiment, cybernetics itself risks becoming a tool for critique. Which is pretty much what's happened.

Within the history of cybernetics, there are exceptions. Stafford Beer's work in management is one of the best examples. He used a range of techniques, from information theory to personal construct theory, to measure and analyse systems and to transform organisations. More recently, Loet Leydesdorff has used information theory to produce models of the inter-relationship between academic discourse, government policy and industrial activity, while Robert Ulanowicz has used information theory in ecological investigations.

Information theory is something of a common denominator. Ross Ashby recognised that Shannon's formulae were basically expressing the same idea as his concept of "variety", and that these measures could be used to analyse complex situations in almost any domain.
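
A small sketch of the connection (my own gloss on Ashby's point, not a quotation): when every distinguishable state is equally likely, Shannon's entropy reduces to the logarithm of the number of states - Ashby's variety counted in bits - and any constraint on the probabilities pulls the entropy below that maximum.

```python
from math import log2

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)), in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

states = ["meeting", "teaching", "marking", "emailing"]   # hypothetical countable states
uniform = [1 / len(states)] * len(states)
print(shannon_entropy(uniform))   # 2.0 bits
print(log2(len(states)))          # Ashby's variety, log2 of the number of states: also 2.0
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(skewed))    # about 1.36 bits: constraint reduces the effective variety
```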

However, there are some big problems with Shannon's information theory. Not least, it assumes that complex systems are ergodic - i.e. that their statistical behaviour over a short period of time is representative of their behaviour over a long one. All living systems are non-ergodic: they develop new features and new behaviours which are impossible to predict at the outset.

Another problem with information theory is the way that complexity itself is understood in the first place. For Ashby, complex systems are complex because of the number of states they can exist in. Ashby's variety was a countable thing. But how many countable states can a human being exist in? Where do we draw the boundary around the things that we are counting and the things that we ignore? And then the word "complex" is applied to things which don't appear to have obvious states at all - take, for example, the "complex" music of J.S. Bach. How many states does that have?

I think one of the real breakthroughs in the last 10 years or so has been the recognition that it is not information which is important, but "not-information", "constraint", "absence" or "redundancy". Terry Deacon, Loet Leydesdorff, Robert Ulanowicz and (earlier) Gregory Bateson and Heinz von Foerster can take the credit for this. In the hands of Leydesdorff, however, this recognition of constraint and absence became measurable using Shannon information theory, and the theory of anticipatory systems of Daniel Dubois.

This is where it gets interesting. An anticipatory system contains a model of itself. It is the epitome of Conant and Ashby's statement that "every good regulator of a system must be a model of that system" (see https://en.wikipedia.org/wiki/Good_regulator). Beer integrated this idea into his Viable System Model in what he called "System 4". Dubois, meanwhile, expresses an anticipatory system as a fractal, and this potentially means that Shannon information can be used to generate this fractal and provide a kind of "image" of a healthy system. Which takes us to a definition:

A well-run university contains a good model of itself.
How many universities do you know like that?

Here however, we need to look in more detail at Dubois's fractal. The point of a fractal is that it is self-similar at different orders of scale. That means that what happens at one level has happened before at another. So theoretically, a good fractal can anticipate what will happen because it knows the pattern of what has happened.

I've recently done some work analysing student comments from a comparative judgement exercise on a variety of documents from science, technology, creativity and communication. (I did this for the Global Scientific Dialogue course in Vladivostok last year - https://dailyimprovisation.blogspot.com/2018/03/education-as-music-some-thoughts-on.html). The point of the comparative judgement was to stimulate critical thought and disrupt expectations - in other words, to re-jig any anticipatory system that might have been in place, and encourage the development of a fresh one.

I've just written a paper about it, but the pictures are intriguing enough. Basically, they were generated by taking a number of Shannon entropy measurements of different variables and examining the relative entropy between them. This produces a graph, and the movements of the entropy line in the graph can be coded as 1s and 0s to produce a kind of fractal. (I used the same technique for studying music here - https://dailyimprovisation.blogspot.com/2019/05/bach-as-anticipatory-fractal-and.html)
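
In outline - a hedged reconstruction of the procedure described above, with invented variable names rather than those used in the paper - the coding step might look like this: measure Shannon entropy over successive windows of each variable, then record whether the entropy line rises or falls from one window to the next as a 1 or a 0.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy of a window of symbols, in bits."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def entropy_series(symbols, window=10):
    """Entropy measured over successive non-overlapping windows of a symbol stream."""
    return [entropy(symbols[i:i + window]) for i in range(0, len(symbols) - window + 1, window)]

def movements(series):
    """Code the rises and falls of the entropy line as 1s and 0s."""
    return [1 if later > earlier else 0 for earlier, later in zip(series, series[1:])]

# Hypothetical example: the words of a student comment treated as the symbol stream.
comment = ("the judgement was hard because the documents were so different from each other " * 10).split()
line = entropy_series(comment, window=10)
print(movements(line))   # the bit string from which the fractal image is drawn
```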

So here are my pictures below. Now I suppose there is a simple question - can you tell who are the good students and who are the bad ones?


[Figures a to e: entropy fractal images for each of the five students]

But what about the well-run institution? I think it too must have an analysable anticipatory fractal. There will be patterns of activity at all levels of management - from learner utterances (like these graphs) through to teacher behaviour, management actions, policies, technologies and relations with the environment. Yet I suspect that if we tried to do this today, we would find little coherence in the ways in which modern universities coordinate their activities with the world.

Tuesday, 14 January 2020

What have VLEs done to Universities?

The distinction between genotype and phenotype is useful in thinking about organisational change. If an institution is a kind of organism, the distinction is between those behaviours that emerge in its interactions with its environment, and the extent to which these behavioural changes become hard-wired into its nature and identity (the "genome"). Institutions adapt their behaviour in response to environmental changes in a "phenotypical" way initially, implementing ad-hoc technologies and procedures. Over time, these ad-hoc procedures become codified in the functionality of universal technologies which are deployed everywhere, and which determine the ongoing behaviour of the "species" - the "genotype".

Changes to the genotype are hard to shift. They determine patterns of organic reproduction: so we see different kinds of people existing in institutions from the kinds of people that we might have seen 40 years ago. Many elderly esteemed scholars would now say they wouldn't survive in the modern university. They're right - think of Marina Warner's account of her time at Essex (and why she quit) in the London Review of Books a few years ago: https://www.lrb.co.uk/the-paper/v36/n17/marina-warner/diary, or more recently Liz Morrish's "The university has become an anxiety machine": https://www.hepi.ac.uk/2019/05/23/the-university-has-become-an-anxiety-machine/. Only last week this Twitter thread appeared: https://twitter.com/willpooley/status/1214891603606822912. It's all true.

As part of the "genotype", technology is the thing which drives the "institutional isomorphism" through which management functions become professionalised and universal (where they used to be an unpopular burden for academics). But - and it is a big BUT - this has only happened because we have let it happen.

The Virtual Learning Environment is an interesting example. Its genotypical function has been to reinforce the modularisation of learning in such a way that every collection of resources, activities, tools and people must be tied to a "module code", into which marks for those activities are stored. What's the result? Thousands of online "spaces" in the VLE which are effectively dead - nothing happening - apart from students (who have become inured to the dead online VLE space on thousands of other modules) going in to access the powerpoints that the teacher uploaded from the lecture, watch lecture capture, or submit their assignment.

What a weird "space" this is!

Go into any physical space on campus and you see something entirely different. Students gathered together from many courses, some revising or writing essays, some chatting with friends, some on social media. In such a space, one could imagine innovative activities that could be organised among such a diverse group - student unions are often good at this sort of thing: the point is that the possibility is there.

In the online space, where is even the possibility of organising group activities across the curriculum? It's removed by the technologically reinforced modularisation of student activity. If you remove this reinforced modularisation, do new things become possible?

If Facebook had organised itself into "modules" like this, it would not have succeeded. Instead it organised itself around personal networks where each node generates information. Each node is an "information-producing" entity, where the information produced by one node can become of interest to the information-production function of another.

There's something very important about this "information production" function in a viable online space. In a VLE, information production is restricted to assignments - which are generally not shared with the community for fear of plagiarism - and discussion boards. This restriction of information production and sharing is a key reason why these spaces are "dead". But the restrictions are introduced for reasons relating to the ways we think about assessment, and those ways of thinking get in the way of authentic communication: communicating within the VLE can become a risk to the integrity of the assessment system! (Of course, this means that communication happens in other ways - Facebook, WhatsApp, Snapchat, TikTok, etc.)

The process of generating information - of sticking stuff out there - is a process of probing the environment. It is a fundamental process that needs to happen for a viable system if it is to adapt and survive. It matters for individual learners to do this, but it also matters for communities - whether they are online or not.

I wonder if this is a feature of all viable institutions: that they have a function which puts information out into the environment as a way of probing the environment. It is a way of expressing uncertainty. This information acts as a kind of "receptor" which attracts other sources of information (other people's uncertainty) and draws them into the community. Facebook clearly exploits this, whilst also deliberately disrupting the environment so as to keep people trying to produce information to understand an ever-changing environment. Meanwhile, Facebook makes money.

If an online course or an online community in an institution is to be viable, then it must have a similar function: there must be a regular production of information which acts as a receptor to those outside. This processing of "external uncertainty" exists alongside the processes of inner-uncertainty management which are organised within the community, and within each individual in that community.

In asking how this might be organised, I wonder if there is hope for overcoming the genotype of the VLE-dominated university.

Monday, 13 January 2020

Oscillating Emotions, Maddening Institutions... and Technology

My current emotional state is worrying me. Rather like the current climate on our burning planet, or our scary politics, it's not so much a particular state (although depression and a burning Australia are of course worrying); it is the oscillation, the variety, of emotional states that's bothering me. It's one extreme and then the next, with no control. The symptoms, from an emotional point of view, are dangerous because they threaten to feed back into the pathology. In a state of depression, one needs to talk, but things can become so overwhelming that talking becomes incredibly difficult, and so it gets worse.

A lot hangs on the nature of our institutions. It is not for nothing that stable democracies pride themselves on the stability of their institutions. This is because, I think, institutions are places where people can talk to each other. They are information-conserving entities, and the process of conserving information occurs through conversation. "Conserving conversation", if you like.

So what happens when our institutions fill themselves with technologies that disturb the context for conversation to the extent that people:

  1. feel stupid that they are not on top of the "latest tools" (or indeed, are made to feel stupid!)
  2. cannot talk to each other about their supposed "incompetence" for fear of exposing what they perceive as this "incompetence".
  3. feel that the necessity for conversation is obviated by techno-instrumental effectiveness (I sent you an email - didn't you read it?)
  4. are too busy and stressed working bad interfaces to build proper relationships or to ask powerful questions
  5. are permanently threatened by existential concerns over their future, their current precarious contract, their prospects for longer-term financial security, their family, and so on
There is, of course, the "you're lucky to have a job" brigade. Or the "don't think about it, just get on with it" people.  But these people reduce the totality of human life to a function. And it clearly isn't a simple function. And yet there is no rational way to determine that such an attitude is wrong. Because of that, these people (sometimes deliberately) amplify the oscillation. 

This functionalist thinking derives from technological thinking. It's not particular technologies that are to blame. But it is what computer technology actually does to institutions: it discards information. Losing information is really bad news. 

So we have institutions which traditionally exist by virtue of their capacity to conserve information (and memory, thought and inquiry) through facilitating conversation. We introduce an IT system which loses some information, because it removes some degree of uncertainty that previously required conversation to address. This information loss is addressed by another IT system, which loses more information. Which necessitates... The loss of information through technology is like the increase in CO2.

It leads to suffocation.