Monday, 30 March 2020

Supersize Institutions vs Coronavirus

The principal objective of most coronavirus strategies across the world is to limit the collapse of institutions of health. It is obvious that the effort to protect one giant institution puts other giant institutions at risk. Government itself fears for its future, as politicians go to great pains to claim how "well" they are doing in the crisis: the risk here is loss of public trust. Collapse of the health system would produce catastrophic death rates and the potential for social breakdown. Businesses large and small are also feeling the full force of the crisis.

Educational institutions will not be far behind. While moving teaching online has been the emergency measure, it is unlikely that traditional institutions of education can maintain their integrity divorced from the campus on which they established their history, reputation and (more recently) capital investments.

While it is tempting to view this as an environmental crisis which simply blows away everything in its path, such a view is dangerous. It opens the door to authoritarianism, where figures like Viktor Orban will demand total "control" to do the will of the people while really serving selfish interests. This too is a consequence of institutional crisis. The weaknesses in our institutional fabric have been obvious for decades. So there is a question to be asked about institutions - particularly those which have grown so large, unwieldy, bureaucratic and sometimes dangerous that they are vulnerable to this kind of environmental disaster.

The institutionalisation of health is something that has happened the world over. Some thinkers, notably Ivan Illich, were always critical of the institutionalisation of public services, which has gone hand-in-hand with a "professionalisation" that disempowered individuals to do things for themselves. It basically revolved around the principle of declaring "scarcity" around issues of health, treatment and technology, where professionals were invested with the exclusive authority to make pronouncements on aspects of life which individuals were often perfectly capable of organising themselves to deal with, had they had access to the technologies and drugs themselves.

This is particularly true in the light of our information technologies. Criticism of the use of technology for self-diagnosis and treatment is based on legitimate concerns about its results. But the problems with technology are not the fault of technology. They are the fault of institutions of health defending their own structures and greedy tech corporations making profits in the shadows of large medical institutions. Health institutions chose to denigrate "Dr Google" and assert the status of institutional judgement rather than consider how health might be more effectively organised with technology in ways different from institutional hierarchy.

It is the same in education. Online education has been available since the beginning of the web. The story since the web has been one of institutions defending themselves against technology, commandeering technology to defend their structures and practices. There was never any serious attempt to form a viable institution of education online. Had there been, Facebook would have been a very different thing.

The point is that an institution is a kind of technology and coronavirus will break them. We may protect the technology of our health institutions, but in the process we will break the technology of our other institutions. Our institutions are not organised effectively. Their supersized structure is not an effective form of organisation. Unfortunately, the reaction to the current crisis is causing a ramping-up of the scale of health institutions. This is understandable: now is the time to react as best we can. But our institutions were vulnerable because of the way they are organised and the scale on which they operate.

The declarations of scarcity over technologies, treatments, and care are not effective ways of organising health in society. The pandemic genie is out of the bottle. We know that this will happen again, and next time it could be worse. So while we must now react, we will need to think about what "effective organisation" in health and education really means in the future.

Sunday, 22 March 2020

Under the Skin of an Institution: Rethinking the Global University and Civil Society

An institution - whether it is a university, school, club, church, government, rock band or orchestra - is essentially a membrane between what the institution takes to be its "identity" and its environment - the world which isn't in the club. Every membrane, wherever it exists, requires an active process to maintain it. This active process is the totality of work that institutions do. The coordinated work of maintaining an institution entails the division of labour into differentiated functions, the coordination of those functions with one another, the monitoring of the operation of those functions, the monitoring of the environment, the determining of possible threats or opportunities for maintaining the membrane, and the directing of any change to internal organisation should something change in the environment. An institution is a "body" (from which we get "corporation"): functional differentiation applies to bodies too.

Among the most significant changes to the environment for social institutions are those that revolve around acquiring the resources to survive. In modern society, this means money. Money fuels growth in the way that food fuels metabolism, but money is a socially-determined codification of expectation, which means that the same codified techniques can be used to organise internal operations: institutions "restructure". At the root of the monetary codification is confidence in other related institutions - banks and government - and the general belief that social stability can always be achieved through fiscal means - however drastic and painful those means might be. Since the financial crash, this assumption that social stability can always be delivered by fiscal means has been called into doubt.

A plague is not a typical environmental change. It destabilises the foundations of all institutions including banks and government. It permeates the membrane of cells which lie at the root of everything. Not only is the institutional membrane threatened, but so are all the sub-divisions of labour within the institution, and all the other institutions which exist within its ecology: few conventional methods of restructuring can help. Attempts to provide fiscal support can be made, but in the process the banks must defend their identity by defending money as a "codification of expectations". But if nobody really believes the bank's defence of the value of the money they issue, this money will carry little value. An economic firestorm may occur when we lose trust in government and the banks: all membranes collapse.

Given the current clutch of world leaders, it would not be unreasonable to expect a loss of trust in government and banks.

In society, a loss of trust can be replaced with physical force to reinforce a particular institutional membrane (for example, a totalitarian government). This is basically what happened in China, and increasingly Italy and Spain seem to be heading in the same direction. There is nothing new in this development: it is basically a matter of the institution of government wanting to physically defend its membrane by threatening its people (its "environment"). It will appear to work - temporarily. Just as it has only worked temporarily in so many other parts of the world.

A more intelligent way to think is to reconsider the nature of institutions, bodies and cells as recursively inter-connected membranes. During a time of "lockdown", the primary institution is clearly the household or the family. Like all institutions, families have their membranes and functional differentiation: it is not just the walls of the house or flat that keep things together; within the family are deep mechanisms for coordinating expectations of one another. In dysfunctional families this is more noticeable than in happy ones (remember: "All happy families are the same..."). The stresses and strains of life together in close proximity with little freedom are the very process of the institution attempting to maintain its cohesion. In many families, as money becomes more scarce, other means of coordinating expectations will arise. Some of these new means of coordinating expectations will reveal things about the nature of all institutions.

While there will undoubtedly be an increase in crime and selfishness, we are likely to see an increase in neighbourly altruism. As internal stresses take their toll, external cooperation will attempt to reorganise social groups for the survival of all. But this can only happen if there is external signalling from groups who want to help or who need help. This signalling will happen online. Our small institutions will become rather like cells producing receptor proteins on the membrane facing the environment, which interact with "proteins" in the environment in "cell signalling pathways". The cybernetic term is "transduction".

So what of larger institutions like institutions of education? All our educational institutions started small: groups of friends with shared interests would meet and talk. Gradually their discussions and the products of their discussions attracted attention from outside. Gradually that attention and demand for more from the institution provided a foundation upon which the nascent institution could grow.

As academics and students move online, are we going to see an eating-away of the membranes of the traditional university led by individual academics across the world who will find that the best place to meet and talk is online? The online world also provides other ingredients for the growth of new institutions. Most importantly, for an institution to grow it must produce things which its environment finds interesting and attractive. Whether it is the video summaries of conversations, open invitations to observe small group meetings, the creation of online artefacts like models or software, or the concentration of intellectual status and reputation, this is not going to happen within the walls of any particular institution. It is going to happen globally.

Why restrict intellectual discourse to the walls of the campus when everyone everywhere is in one big campus? Since the physical campus is now toxic, it doesn't matter how ancient or beautiful it is - beautiful buildings are not what institutions are about. They are about ideas and people and if new ways of organising ideas and people become possible then they should be embraced.

More importantly, the essence of the nascent online university is trust within the new institution and outside it. The bullshit about graduate premiums has gone, and the university bondholders can go and stick their increasingly meaningless money elsewhere. We have something more tangible, more effective, more trustworthy, yet inherently low-cost.

When the physical threats and surveillance of the population no longer work, then what will matter will be trust, honesty and openness to uncertainty. These are the values that we must build into our online institutions now.

Saturday, 21 March 2020

The Problem with Mathematical Modelling in Covid-19 and Economics

"Mathematical modelling" is everywhere at the moment, and it should make us as nervous as medical staff going to work without protective gear. It's a bad tool for policy making in a time of crisis. Models are abstract and never specific, but it is the specifics which determine matters of life and death. Whilst an agent-based model might give some idea of the exponential rise in cases, or the overloading of health systems, there will always be missing variables and causal mechanisms which are misunderstood. Some of those missing variables will account for why nearly 800 people died in Italy yesterday (following the nearly 700 the day before) two weeks after their lockdown, or why half of the 1500 people in France in intensive care are aged under 60. At the same time, the statistics from different countries don't seem to be comparable: China under-reported the extent of the death rate from the virus and Russia is determined to say that it is a "foreign problem", underplaying its significance, while building hospitals on the side.

Epidemiologists trying to predict the consequences of COVID-19 with models will cite the caveat that it is "only a model", but the fact is that it's only "only a model" if it isn't used to inform policy. As soon as policy takes heed of a model, the model becomes part of the situation. Ideas like "herd immunity" arise from models, and have informed policy. The outcry that this policy was effectively eugenics has been sufficient to cause some back-tracking.

Mathematical modelling is affecting policy decisions beyond simplistic models of transmission. At the root of government policy - particularly in the Anglo-Saxon world - is economic thinking which is also underpinned by mathematical modelling. Where do the epidemiological models and the economic models meet? That seems to have been the question puzzling the British and US governments in the last week or so. It is, of course, the wrong question. Herd immunity didn't simply arise from a particular slant on an epidemiological model; it was a compromise between the epidemiological model and the economic model: let the virus sweep through the country, let everyone continue their lives as normal, let people "eat virus", let the old die, let's save on the care and pensions, protect the market and the banks and all will be well. In any country with a "pension reform" headache, I can't believe this thought hasn't crossed the minds of its leaders - and the less scrupulous they are, the less they seem to do about the virus.

Frankly, to the calculating minds of Dominic Cummings, Boris Johnson, Donald Trump and a few others, this seems like a good plan. Until you think that it might be your parents in a makeshift hospital with no ventilators. The problem for Cummings and Johnson, and for any modelling geeks out there, is that Coronavirus is not abstract. It's not like a hedge fund whose victims are nameless in far-away countries. It threatens people we love. And much as the logic of capitalism dictates that we are all individuals engaged in a kind of Darwinian struggle for material success, love creates bonds which do not obey the individualist logic of the modeller. There is no variable which can represent its effects.

This is not to say that models are useless. But it is to say that the most sensible attitude to them is to ask how they might be wrong, rather than to ask what they predict. They are useful in the sense that they promote critical discussion among concerned individuals thinking about how to act effectively. So the critical thing is the organisation of human beings around the model. Get a load of politicians who want quick-and-easy answers, and the model could be lethal.
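To see what this means in practice, here is a minimal sketch of the kind of model in question - a bare-bones SIR model, with purely illustrative numbers - showing how sharply its headline prediction swings on a single assumed parameter:

```python
# A bare-bones SIR epidemic model (illustrative only). The point is not
# the numbers but how much the predicted peak depends on the assumed R0.
def sir_peak(r0, days=250, gamma=1/14, infected=1e-6):
    s, i = 1.0 - infected, infected   # susceptible and infected fractions
    beta = r0 * gamma                 # transmission rate implied by R0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - gamma * i
        peak = max(peak, i)
    return peak

for r0 in (2.0, 2.5, 3.0):
    print(f"assumed R0 = {r0}: peak infected fraction ~ {sir_peak(r0):.1%}")
```

A small change to one assumption produces a very different peak - and R0 is only one of the many variables such a model must assume.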

Unfortunately, in economics, models have never been used like this. Much to the dismay of great minds in economics, including Marshall, Keynes and Hayek, bad mathematics took over the discipline, promising policy guided by numbers - models presented ways of removing the uncertainty of policy-making. But in removing the uncertainty, they threw away most of the information in the system.

What that left us with was a rationalistic, linear and shallow picture of human life. Looking at the world rather as Harry Lime looks down on those "little dots" from the Ferris wheel in The Third Man, each of us was reduced to a kind of "variable set", each with our motivations and histories, and each of which could stop moving or disappear at any point without any effect on the others.

Coronavirus tells us what's missing: it cuts to the heart of our mistakes in modelling. The virus reminds us what we always knew but preferred to ignore: "we are all connected". Model-based capitalism told us the opposite. The crisis of this pandemic is only just beginning. Our very understanding of money, the economy, ownership, debt and wealth depend on the modeller's deceit that we are individuals acting with one another according to rules codified in law, executed through a market. Coronavirus takes us back not to social law, but to natural law: the bonds of love across generations matter more than any amount of money. It blows apart what Veblen saw as the atavistic behaviour of the leisure classes once and for all.

Love isn't some new variable which can be factored into the model. What happens in meaningful social interaction is the coordination of expectations, and love plays a powerful role in forming expectations. Money is, by contrast, merely a codification of expectations: artifice. But when "heart speaks to heart" (as Newman put it) - as it surely is now - there is no need to artificially codify expectations. We know the truth of the world. Maybe it is the "implicate order" of nature that we perceive. But we know. And nothing else matters.

There is a modelling question to be asked here. But it is not about extra variables. It is more about what the heart speaking to heart really is. What is it that enables us to tune-in to one another? Indeed, what is it that drives us to modelling in the first place?

The way forwards from Coronavirus will be a meta level of understanding.

Tuesday, 10 March 2020

Defining "Defining"

The Foundations of Information Science mailing list is currently trying to define "information". Frankly, that's what they've been trying to do for years (without much success), but recently they've tried to be explicit about it. The problem is that you can't define "information" unless you have some concept of what "defining" itself is. So I suggested a definition of "defining":

"Defining is a process of seeking abstract principles which are generative not only of phenomena themselves, but of our narrative capacities for explaining them and our empirical faculties for exploring them."
Some suggested to me that the word "abstract" is redundant here. Aren't they just "principles"? I'm not sure.

Lou Kauffman said that I was thinking about mathematics when I said "abstract". I am - and this definition arose from a conversation with a mutual friend, Peter Rowlands (I couldn't have had the intellectual insight to come up with something like this without Peter's genius).

Lou said something interesting about empiricism in relation to this.

"Nevertheless, we find something different in the empirical domain. We do not demand that our abstract principles generate the phenomena there. In fact we find that concept and percept arise together in the examination of phenomena and that it is in this arising, with the help of thinking and the fundamental circularity of thought knowing thought, that we come to agree that information is present."
This is precisely it. I would say that concept and percept arise together in a kind of counterpoint. It's like music. The counterpoint contrives to give form and meaning to understanding. But meaning and understanding can only arise if the interactions of the counterpoint contrive to create "nothing".

It's only by creating "nothing" that the patterns upon which meaning and understanding operate can arise. 

This, it seems to me, is new. It's where Peter Rowlands's physics and Lou's mathematics point to a profound new development in our understanding of nature and complexity.

Monday, 2 March 2020

Positioning Technology Management in Education

It is hard to imagine technology in institutions today not being "managed". Management is endemic in all organisations: institutions are not so much coherent self-sustaining organisational structures as managed aggregates of people, tools and activities - which are, to varying degrees, sometimes incoherent. Indeed, the form of management which imposes functionalist categories onto all its components has become the hallmark of modern institutions. Yet as repeated institutional crises indicate, this kind of organisation appears unadaptable and brittle: in business, it produces failed banks and corporations; in politics, corruption; in education it produces disquiet, alienation and a ravenous, bottomless appetite for ever more resource from society.

For people working under it, management becomes synonymous with the constraints it imposes on the organisation. Management means making decisions about what tools to use, when, by whom and for what purpose. These decisions are necessarily simple - and far simpler than the situations those who are subject to them are trying to negotiate on the ground.

But here there is a problem: simple decisions which constrain those who are negotiating complex situations make those situations more complex. Simple decisions based on out-of-date information produce organisational oscillations and chaos. The problem is particularly evident in educational technology.

Educational technologies are managed - not merely in the sense of being provisioned and maintained, but in the sense that decisions are made about who is able to do what with them, with whom, how and when. Yet the provisioning of tools is fundamental to empowering individuals to deal with their environment. If a university had no classrooms, organising classes would be impossible; if it had no timetable, clashes between competing interests to access resources would result. If there was no audit of whether the resources provisioned were actually utilised, then inefficiency would result. In a world that didn't change, provisioning of resources, coordination of activities (to avoid conflicts) and audit would suffice.

The impact of technology on universities has largely resulted from a change to the environmental conditions universities operate in. Talk of "Technology Enhanced Learning" is usually misplaced - computer technology produces continual change in the world, and institutions must change themselves to survive in it. So what might be a largely internally-focused process of provisioning tools and resources, coordination and audit must become a process of balancing internal demands with external scanning of the ever-changing environment. Institutions must understand these changes, and have sufficient understanding of their own internal adaptive processes, to change themselves to survive.

These adaptive processes require steering. This is the proper domain of management. Yet if the end result of the efforts by management to govern by binary decision is increased complexity, then the adaptation process won't work. If management sees its principal role as the balancing of complex demands between the inside and outside of the organisation, then the focus of its activities becomes much clearer - less focused on direct provisioning from the top, and more on creating the conditions where dynamic provisioning of tools, educational coordination and monitoring can happen closer to the ground.

So then we must ask: what are the conditions which facilitate dynamic provisioning of tools and resources closer to the ground? In modern technological institutions, there are particular constraints that have to be overcome, the principal one being the difference in languages between different stakeholders in the institution.

These languages might be thought of as:
  1. Structural/administrative 
  2. Technical 
  3. Pedagogical 
The structural language is a language of politics, existent institutional procedures, external demands (from government and society) and power. The technical language is a language of code, systems, procedures, constraints and compliance. The pedagogical language is a language of relationships, learning, personal expression, and freedom. 

One way to coordinate a process of addressing these constraints is a continual programme of experiment and inquiry involving all stakeholders in the institution at the boundaries of these languages. Managers would do well spending less time in meetings, and more time learning to write Python code (for example). Technicians would do well spending less time writing Python code and more time talking to learners and teachers. Teachers would do well spending less time presenting PowerPoints, and more time engaging with the structural and technical aspects of educational organisation and educational experience.

IT tools can be instantiated anywhere. Their provisioning and control can be brought closer to the users - teachers and learners. That we tend to do technological provisioning of tools at the top of institutions is an indication of the fact that technology is seen as the main environmental threat, and so institutional technology is seen as a means of countering it. But technology is not an environmental threat. The real threat lies in ineffective organisation within the institution itself. 

Sunday, 23 February 2020

Tony Lawson vs John Searle on Money: Why Lawson is Right - Money is "Positioned Bank Debt"

Tony Lawson has presented a fascinating argument that money is symbolically codified central bank "debt" in a paper from a couple of years ago (see https://academic.oup.com/cje/article-abstract/42/4/1165/5054022). John Searle, who had a significant intellectual engagement with Lawson before being stripped of his emeritus status at Berkeley over sexual harassment (see https://www.dailycal.org/2019/07/02/former-professor-john-searle-loses-emeritus-status-over-violation-of-sexual-harassment-retaliation-policies/), objected to Lawson's theory as being "incredulous", arguing that his own theory of "status functions" with regard to money was correct. Money is real, according to Searle, because a community upholds (trusts) the "status function declaration" that "I promise to pay the bearer" which is made by a central bank. So when the bank lends me money, the obligation is on me to pay back the "debt" to the bank. And of course, that is how we are all taught to think about money.

Lawson makes a radical proposal based on a historical analysis of money. The money that the bank lends is effectively an IOU from the bank to us. Now how could that be? It goes back, according to Lawson, to the goldsmiths' issue of receipts for deposited gold in the 17th century. Basically, the goldsmiths offered a depository service where merchants could deposit their gold, and the goldsmiths would issue a receipt for that gold. The receipt was effectively a certificate of debt to the merchant. These receipts became symbolically codified as representing the gold deposited with the goldsmith, and soon the actual presence of the deposited gold was assumed, to the extent that it was the receipts that were exchanged without anyone needing to check on the actual gold that was deposited.

It wasn't long after this that the goldsmiths realised that since it was the receipts that had exchange value, they could issue receipts guaranteed by gold that wasn't deposited. Providing not everybody demanded their gold back at the same time, the goldsmiths could honour the value of the receipts that they issued. The receipts remained symbolic tokens of debt by the goldsmith, and complex social relations between bankers, lawyers, borrowers, government, central and commercial banks emerged.

Interestingly, Lawson describes the difference between cash and the electronic representations of money that we are all so used to. He points out that it would be very unlikely for today's multi-millionaires to demand being paid in cash. Cash is the symbolic codification of central bank debt, while commercial banks generate IOUs to the public in the form of electronic records. When somebody withdraws cash they are converting the electronic IOUs from the commercial bank into IOUs to the public from the central bank.

Lawson argues that this is an incredible story (so at least on this point, Searle is right), but it is nevertheless true, and it is so because money is effectively a kind of "technology" which acquires its own perverse logic over history. He cites the development of the QWERTY keyboard as another example - what technology theorists might call "lock-in".

There are far deeper implications of Lawson's theory. What he is basically arguing is that the nature of the social world, including the nature of money, cannot be separated from history. Historical processes are woven into social ontology in the way that cellular evolution absorbs previous levels of evolution (at least according to endosymbiotic theory). There is deception all along the way (Lawson calls it "fraud") - but in the natural world it is the same: mimicry, camouflage and the rest all create deceptions which steer the course of evolutionary history.

This also highlights what is wrong with Searle's position. I met Searle twice and found him highly charismatic but somewhat cruel. While not wanting to mount any ad-hominem assault on his intellectual position (which I have gained a lot from and written about here: https://jime.open.ac.uk/articles/10.5334/jime.398/), there was something missing (which I wrote about here: https://dailyimprovisation.blogspot.com/2014/05/pianos-consciousness-and-john-searles.html and https://dailyimprovisation.blogspot.com/2015/06/why-dont-dogs-have-universities-searles.html). Searle's ontology of status functions is fascinating but flat. It is basically a cybernetic theory where what exists in the world exists through the interactions of actors (rather like Pask's theory - see https://dailyimprovisation.blogspot.com/2016/11/conversation-and-contingency-some.html).

I originally thought that Searle was positivist in asserting that status functions assert the existence of things. It seemed to me that they were better thought of as characterising the scarcity of things. It was the other side of the distinction that mattered. Now I would say that the process of maintaining the distinction about the scarcity or presence of anything must account for its own history and its future. That is to say, no stable distinction can be created without an anticipatory system capable of refining the social positions, speech acts, institutional structures, technological resources, etc., in order to survive in an ever-changing environment. Lawson doesn't quite put it like this, but I think it's what he means. Searle's account, by contrast, has no history and no future. It suffers exactly the same problems as the two-dimensional information view that has led us to Dominic Cummings (see https://dailyimprovisation.blogspot.com/2020/02/from-radical-constructivism-to-dominic.html). It is not the status or even the scarcity of money that is constructed; it is Nothing.

History and the future are the third dimension in the game of establishing trust, and that in turn contributes to the process of constructing nothing which makes possible anticipation. Only with a third dimension is it even possible to create trust and to anticipate a future. The "positions" that Lawson talks about are really multiple levels of anticipation, each with its own history, and built up over a period of time.

What this begins to look like is an evolutionary biological approach where emergence is seen as a fractal process of interconnected anticipations. It's very similar to John Torday's cellular communication theory (see https://www.thethirdwayofevolution.com/people/view/john-s.-torday), and it has many similarities to theories of technology by Simondon, Stiegler, Yuk Hui, Erich Horl,  and others.

There's more to it, but getting money "right" is an essential element in the process of seeing education right.  

Saturday, 22 February 2020

Levels of Ability and "Gradus ad Parnassum": a Pedagogy of Constructing Nothing

In education, levels are everywhere. There are levels of skill, stages of accomplishment, grades, competencies and so on. Arguments rage as to whether levels are "real" or not. But obviously there is a difference between someone at Level 1 and someone at Level 8 (for example). Throughout the history of education, attempts have been made to create pedagogies which follow a staged approach to the acquisition of skill. The formalities of these approaches, and arguments about the true nature of levels (for example, whether one might naturally acquire high levels of skill without the formalities of a levelled pedagogy), have been a key battleground in education, from an almost dogmatic insistence that "things must be done in this way" to an open "inquiry-based" approach. It surprises me that in all of these debates, which remain unresolved, little thought has gone into what actually constitutes a level.

Partly this may be because levels are seen as specific things which relate to a discipline. And yet, there are fundamental similarities between pedagogical approaches from learning Latin, music, or maths to astrophysics and medicine. There are stages, outcomes, assessments, and so on. One might think that these things are the products of the institutional structures around which we organise education. That might be true. But "levels" are nevertheless demonstrable irrespective of what assessment technique might be in operation, and their means of establishment have at the very least a family resemblance.

The most interesting and ancient of the levelled approaches is the "Gradus ad Parnassum". This refers to a range of different pedagogical approaches in different subjects. I became familiar with it through musical education, because it was the name of a treatise on counterpoint by Johann Fux. Fux's approach to counterpoint was to present learners with progressively complex exercises for them to complete. Because music is very abstract, these exercises are interesting because they present an almost paradigmatic case of the differences between one level and the next.

The basic idea is to write countermelodies to a given melody written in very long notes (called a cantus firmus). First, each long note is accompanied by one other long note which harmonises with it and whose construction must obey simple rules which form the foundation of the rules for the rest of the exercises. Secondly, each long note is accompanied by two shorter notes of half the length. Then it is done in fours, and so on. Gradually the student learns the fundamental rules and how to mix combinations of shorter and longer notes over the original melody. The resulting music sounds like Palestrina. The technique was used by generations of composers who followed.
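To show how mechanical the rules at the first level are, here is a minimal sketch of a first-species rule-checker - my own simplification, checking only consonance and parallel perfect intervals, with pitches as MIDI note numbers:

```python
# A simplified check of Fux's first species: one note against each
# cantus firmus note, consonances only, no parallel perfect intervals.
CONSONANT = {0, 3, 4, 7, 8, 9}   # unison/octave, 3rds, 5th, 6ths (mod 12)
PERFECT = {0, 7}                 # unison/octave and perfect fifth

def first_species_errors(cantus, counterpoint):
    errors = []
    for n, (cf, cp) in enumerate(zip(cantus, counterpoint)):
        interval = (cp - cf) % 12
        if interval not in CONSONANT:
            errors.append(f"bar {n + 1}: dissonance")
        if n > 0:
            prev = (counterpoint[n - 1] - cantus[n - 1]) % 12
            if interval in PERFECT and interval == prev:
                errors.append(f"bar {n + 1}: parallel perfect interval")
    return errors

# Fux's D-mode cantus firmus, with the counterpoint from his first example
cantus       = [62, 65, 64, 62, 67, 65, 69, 67, 65, 64, 62]
counterpoint = [69, 69, 67, 69, 71, 72, 72, 71, 74, 73, 74]
print(first_species_errors(cantus, counterpoint) or "no rule violations")
```

A real checker would also enforce the rules about beginnings, endings and melodic motion, but even this fragment shows how each level's rules are completely specified before the next level expands on them.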

Fux's Gradus is interesting because each level has a certain completeness. The completeness of one level leads through the expansion and complexification of the technique to the next level. It would be very interesting to explore language pedagogies, and maths pedagogies for similar patterns of completeness. But I'm particularly interested in what this "completeness" at each level is.

It is not, I think, a construction of a particular accomplishment. That, I think, is an epiphenomenon. Somehow, by the performance of one level, a kind of 3-dimensional construction is made which eventually determines that the particular level is exhausted in its possibilities. In other words, at a certain point, what happens next must be to stop at this stage, ready to move on to the next. It may not be so different from a level in Space Invaders - and therein lies a clue. What marks the end of a level but the construction of Nothing? The invaders have gone, and so we begin again.


Thursday, 20 February 2020

Open Source Canvas as an Educational Institution Innovation Platform

I’m spending a lot of time with Instructure’s Canvas at the moment. To be honest, I don’t much care for VLEs, and certainly have little interest in Instructure, who seem to be on a corporate path to datafying the university (although I strongly suspect this won’t work, so I relax). But Canvas itself – as software, interfaces, services, analytics, etc – is really interesting. My university has bought the top-of-the-range all-singing hosted version. But Canvas is open source, and you can download and install it from https://github.com/instructure/canvas-lms

It’s a bit fiddly to install, but it does work – all it requires is a Linux machine, and you follow the instructures… sorry, instructions 😉

It actually works very well. What you get is not just a VLE. You get a service-oriented framework for education, upon which the VLE interface sits. Theoretically, you could build your own interface.

But then look at what the services do: https://canvas.instructure.com/doc/api/

It’s really cool – I was able to automatically generate content, delete stuff, create accounts, generate users… in fact, anything that can be done from the interface can be done programmatically.
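For instance, here’s a hedged sketch of that kind of programmatic control using the documented REST endpoints (the host, token and course id below are placeholders, not real values):

```python
# A sketch of driving Canvas through its REST API; endpoints are from
# https://canvas.instructure.com/doc/api/ - host, token and ids are made up.
import requests

CANVAS = "https://canvas.example.edu"   # placeholder Canvas host
TOKEN = "YOUR_API_TOKEN"                # a user-generated access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# list the courses visible to the token's user
for course in requests.get(f"{CANVAS}/api/v1/courses", headers=HEADERS).json():
    print(course["id"], course.get("name"))

# generate a content page in course 123, just as one could from the UI
page = requests.post(
    f"{CANVAS}/api/v1/courses/123/pages",
    headers=HEADERS,
    data={"wiki_page[title]": "Generated page",
          "wiki_page[body]": "<p>Created via the API</p>"},
).json()
print(page["url"])
```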
Then there’s the LTI integration. New tools, new integrations, huge possibilities.

And then there’s the GraphQL query language for analytics, explored through its GraphiQL console.
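The GraphiQL console is just an interactive front-end to Canvas’s /api/graphql endpoint, which can also be queried directly. A minimal sketch – the query fields here are assumptions and may differ between Canvas versions:

```python
# Querying Canvas's GraphQL endpoint; field names may vary by version.
import requests

CANVAS = "https://canvas.example.edu"   # placeholder host
TOKEN = "YOUR_API_TOKEN"

query = "query { allCourses { _id name } }"
resp = requests.post(
    f"{CANVAS}/api/graphql",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": query},
)
print(resp.json())
```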

This is very impressive. I’ve been trying to think what this is like.

It’s like a standardised platform for doing all the kind of administrative things that we want to do in education, but having a coherent and standard set of web-services for hooking in cool stuff behind the scenes. So run machine learning services in the background, or agent-based models, or new analytic tools which spit their results straight back to learners, or personalised learning which self-adapts to user engagement. But whatever you do, you can exploit the standard Canvas ways of communicating with students, including mobile notifications, apps, etc.

I think (although I’m not sure) Canvas feels like the first time we have had educational technology which is effectively a standard service-oriented platform that tunes in to the way educational institutions work. It’s like an Eclipse Rich Client Platform (remember that?) for educational institutions.

Am I getting carried away? I don’t know – but I want to find out!



Monday, 17 February 2020

From Radical Constructivism to Dominic Cummings: What's wrong with Cybernetics?

The pro-Brexit lobby at the heart of the UK government possess a powerful arsenal of conceptual and epistemological tools which have effectively been "weaponised" in ways which would have mortified their inventors. Dominic Cummings knows his systems theory (as a cursory glance at his "Some Thoughts on Education and Political Priorities" shows: https://dominiccummings.files.wordpress.com/2013/11/20130825-some-thoughts-on-education-and-political-priorities-version-2-final.pdf). Cummings is not the first to turn cybernetics to bad ends - "philosopher" Nick Land has been writing pretty odious stuff for quite a few years, and it turns out that a big fan of Land is Andrew Sabisky, who has come under pressure for his somewhat insane views on eugenics (https://www.bbc.co.uk/news/uk-politics-51535367) - something for which Cummings also has a penchant. Hayek got there first with the dark side of cybernetics, of course, but this new breed is not as intelligent and more dangerous (Hayek was bad enough!)

It's all deeply troubling. Many of the inventors of these tools were German émigrés, horrified by Nazism, helping with the war effort by developing new weapons, and wishing for a better world. Wiener, however, knew that what they were doing was dangerous. His "The Human Use of Human Beings" reads like a prophecy today. Wiener's immediate fear was nuclear annihilation, but the likes of Cummings and his crowd are in his sights as the enslavers of humankind.

10 years ago I was at the American Society for Cybernetics conference in Troy, NY, which was attended by Ernst von Glasersfeld - one of the last remaining figures from cybernetics's early period, and an important thinker about education. Glasersfeld, by then very old, gave a short address which summarised his philosophy; he died a few months later. You can read it here: http://www.asc-cybernetics.org/2010/?p=2700

It is such a clear exposition of cybernetic concepts that it invites a critical reflection: "is Cummings here?" - is there something in these ideas which opens the door to a fascism which would have mortified von Glasersfeld? I have to say, I think there is.

Von Glasersfeld made the clearest statement that cybernetics is fundamentally about constraint: as a science it is focused on "context". But as a science, it carried with it a clear conception of what is rational and what is metaphysical - and this is the main meat of the talk. Von Glasersfeld talks of the "pious fictions" of realists who insist on an external mind-independent reality. This, he states, cannot be science. Science, by contrast, exists in the rational process of coordinating understanding within constraints. As such, it cannot gain any kind of "objective knowledge".

But this sentence is the most interesting:
Only painters, poets, musicians and other artists like mystics and metaphysicians, may generate metaphors of reality, but to comprehend these metaphors you have to step out of the rational domain.
"Outside the rational domain"? What does that mean exactly? From what kind of context does von Glasersfeld make the judgement as to what is "rational" and what is not? This is framed by the existing institutional context of science and universities. Here we have embodied the problem of "two cultures" - and from there, we are on a slippery slope to Cummings.

Feelings are not rational. Social alienation is not rational. Experience itself is not rational. Yet some "rational" force allows us to make the distinction between what is and isn't rational, rejecting the irrational as a "pious fiction". This is how one can play the game of "Take back control" or "Get Brexit Done", treating feelings as if they are "rational" constructs of a communication system which is malleable to someone else's will.  It turns out that this is the pious fiction we should most fear. The artists, by contrast, speak the truth.

Where is the problem? It lies, I think, in a kind of two-dimensionality in the way that we think of communication. Cummings is quite keen on Shannon - at least insofar as Shannon's theory underpins data analysis. But Shannon, lucky genius that he was, had a two-dimensional information transmission problem in front of him: a sends a message to b over a noisy medium; b interprets and responds. But even in Shannon this isn't quite as two-dimensional as it seems: a and b are "transducers" with a "memory" (see Shannon and Weaver, "The Mathematical Theory of Communication" (1949)). They were very pale representations of people. This meant that there was a limit to what could be communicated - and what could be constructed.

In Von Glasersfeld's world, the form of conversation occurred through the interaction of constraints produced by the communication of agents. Conversation and meaning emerged through a haze of "Brownian motion". It was almost arbitrary in its emergence, only recognised to be "meaningful" by us "observers". There are many problems with this view, the deepest of which is the assumption that within complex systems, the emergence of form and meaning is the result of an "arbitrary" process.

Years of attempting to simulate music from arbitrary processes have only produced bad music. It seems that the processes at work within the artist are no more arbitrary than the movements of electrons through a diffraction grating: there is an underlying pattern. But it's not the bands of the diffraction pattern that are interesting. It is the space between them: what is there? Nothing.

Appreciating this leads to a profound question about "constructivism" and indeed "radical constructivism": ok - so you can construct "stuff" in the world... but how might you construct "nothing"?

Without "nothing" there would be no pattern. Cummings, Land and co. know the deep magic. But the deeper magic is how to make nothing (channelling Narnia!)

To cut the story a bit shorter, "nothing" is mathematically realizable. William Rowan Hamilton's discovery of quaternions in 1843 was really the beginning of an adventure into nothing which we have not absorbed yet. The quaternions are a four-dimensional extension of the complex numbers whose multiplication is anti-commutative. Hamilton's genius was to see that in order to represent the world in 3 dimensions, anti-commutativity was essential. But more importantly, the quaternion arithmetic allowed for expressions which, while built from non-zero elements, are equal to zero. So 3-dimensionality and nothingness are fundamentally connected. But we knew this: ever heard of a "vanishing point"?
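A small computational sketch makes the point: multiplying the quaternion units i and j in the two possible orders gives k and -k, whose sum is exactly nothing.

```python
# Quaternions as (w, x, y, z) tuples. ij = k but ji = -k, so the two
# products - each non-zero - sum to zero: nothing, constructed.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

ij, ji = qmul(i, j), qmul(j, i)
print(ij)                                    # (0, 0, 0, 1)  = k
print(ji)                                    # (0, 0, 0, -1) = -k
print(tuple(p + q for p, q in zip(ij, ji)))  # (0, 0, 0, 0): nothing
```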

Von Glasersfeld had no way of constructing nothing. A couple of years earlier, when he gave a talk about learning in Vienna, I put it to him that it was all very well to explain learning in the way that he did - but where did the drive to learn come from? He didn't really have an answer. Maybe he was tired. But I'm suspicious that he didn't want to think about it.

So Cummings and Land are exploiting a body of theory which is profoundly incomplete and two-dimensional. It's dangerous because it is two-dimensional and the real world isn't. The worlds of feelings, art, poetry and music are not some irrational boundary of a rational systems world. They are the third dimension in a world of natural information which cybernetics has not yet found a way to describe. It may become very important that we highlight these scientific shortcomings.

Wednesday, 12 February 2020

Brains and Institutions: Why Institutions need to be more Brain-like

I was grateful to Oleg for pointing out the double meaning in Beer’s Brain of the Firm last week: it wasn’t so much that there was a brain that could be unmasked in the viable institution; firms – institutions, universities, corporations, societies – were brains. Like brains, they are adaptive. Like brains, they do things with information which we cannot quite fathom – except insofar as we take the concepts of “information processing” which we have developed into computer science to be a possible function of brains. But brains and firms are not computers. That we have considered that they are is one of the great mistakes of the modern age. It was believing this that led to the horrors of the 20th century.

So what is the message of Brain of the Firm? It is that firms, brains, universities, societies share a common topology. In the Brain of the Firm, Beer got as close as he could to articulating that topology. It was not a template. It was not a plan. It was not a recipe for effective organisation. It was not a framework for discussion. It was a topology. It was an expression of the territory within which distinctions are formed. Topology is a kind of geometry of the mind.

Universities are particularly interesting examples. Because they are made of brains, and because their work is meant to be the work of their constituent brains. Universities present an example of where the “brain-organisation” sometimes goes right, but more often goes wrong. Why does it go wrong? Because we draw our distinctions in the wrong way – most often believing the institution to be the “organisation chart” – which is always a recipe for disaster.

Governments and states are alternative examples. One of the things we talked about was the general antipathy to the big state. The apparent failure of the Corbyn project is a hangover from the general disbelief in the big state. Johnson’s message is a throwback to Thatcher’s message – ironically this time marshalling the support of those who Thatcher hurt the most. But really, this message – the disbelief in the big state – never went away. After Stalin’s Russia, there appeared nowhere for the big state to go. But ironically, Stalin’s Russia was really a small state masquerading as a big one.

Today the world is full of would-be Stalins. Individuals wanting to impose their brains on everyone else, wanting to diminish the power of every collective unless it suits themselves. The model is repeated from corporation to university to city to state. But in the end, it will not work except to destroy its own environment (which it is doing very effectively), and it will not work because the distinctions are drawn in the wrong place.

The real question we should ask ourselves is this: How do brains work and how should organisations work to emulate them? Technology almost certainly gives a glimpse.

If there is a key feature of Beer’s fundamental topology it is the difference between the inside and the outside of a distinction. I wonder if in fact Spencer-Brown wasn’t influenced in his mathematics by Beer. I suspect Beer had the insight of Spencer-Brown’s most powerful idea first.

If you want to maintain any distinction then you must have a metasystem. Why? Because all distinctions are essentially uncertain, and there must be a mechanism – there must be the other side of the Mobius strip – to maintain the coherence of the distinction.

What must the metasystem do? Well, one of the things it must do is negotiate the distinction of the inside with the environment. In essence it has to determine what belongs inside and what belongs outside and to maintain this boundary. If it doesn’t do this, then the distinction collapses. So there must be a process of engaging, probing and modelling the environment.

This is System 4.

The other thing the metasystem must do is to manage the internal operations of the system – its own internal distinctions. This is System 3.

Then it must balance the balancing operations of System 4 and System 3. This is System 5.
So very quickly we arrive at this topology.

But this is the topology of a distinction, and within any topology there are further distinctions. The point is that it unfolds into a fractal structure. This ultimately is the fractal structure of the inside which must be balanced with what is perceived as the fractal structure of the outside.
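As a toy sketch (my own schematic, not Beer’s formalism), the recursion can be written down directly: every operational unit of a viable system is itself a viable system, with its own Systems 3, 4 and 5.

```python
# A toy rendering of the VSM's recursive topology - a schematic sketch,
# not Beer's own formalism. Each operational unit is itself viable.
from dataclasses import dataclass, field

@dataclass
class ViableSystem:
    name: str
    operations: list["ViableSystem"] = field(default_factory=list)  # System 1s

    def system3(self):                # manage the inside-and-now
        return f"{self.name}: coordinating {[o.name for o in self.operations]}"

    def system4(self, environment):   # probe and model the outside-and-future
        return f"{self.name}: scanning {environment}"

    def system5(self, environment):   # balance System 3 against System 4
        return (self.system3(), self.system4(environment))

university = ViableSystem("university", [
    ViableSystem("faculty", [ViableSystem("course team")]),
])
print(university.system5("online learning"))
```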

The challenge is to operationalise this.

For many who pursued the VSM, the operationalisation ended up as a kind of consultancy – a way of talking to organisations to give them a bit more internal awareness. I guess this was fine – and it created a bit of work for cybernetics people. But ultimately this was empty, wasn’t it?

How do we do better?

We need to come back to brains and firms. Within the brain, we have very little knowledge of what happens – particularly as to what happens with information. Obviously there are EEG monitors and stuff, but they simply attenuate whatever complex activity is going on into graphs that show some kind of snapshot of the dance that’s really taking place.

We can see much more of “information” if we look at communications. Then what do we see? We see massive amounts of redundancy in our communication. We see pattern infusing everything, catching the attention of analysts and clairvoyants. The clairvoyants are usually of more value because they tune in to something deeper.
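That redundancy can even be measured in Shannon’s terms: the sketch below (a zeroth-order illustration only) computes how far a message’s entropy falls short of the maximum for its alphabet.

```python
# Zeroth-order Shannon redundancy: 1 - H(text)/H_max. Real language has
# far more structure than letter frequencies, so this is a lower bound.
from collections import Counter
from math import log2

def redundancy(text):
    counts = Counter(text)
    total = len(text)
    entropy = -sum(c / total * log2(c / total) for c in counts.values())
    return 1 - entropy / log2(len(counts))

print(f"{redundancy('pattern infusing everything catches the attention'):.0%}")
```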

The deeper problem lies in the way we are able to analyse and examine the patterns of communication. It’s not as if we are short of data – although there can always be more. It is more that we do stupid things with the data we collect. Typically this involves attenuating out most of it and using the attenuated dataset as a basis for making stupid decisions.

It’s rather like a brain with dementia. Important parts of the processes of maintaining information flows throughout the whole organisation are damaged – in the institution’s case, by technology – and consequently the selection mechanism for adaptation is impaired. Consequently the poor individual afflicted with this steps into the road without bothering to check for the approaching double-decker bus.

Our institutions are doing a similar thing for a similar reason. Key parts of their information processing apparatus are impaired, which means that the selection mechanism for their future adaptation isn’t working.

So what is a selection mechanism for future adaptation? It is precisely what Beer, influenced no doubt by Robert Rosen, called an “anticipatory system”. It is the system’s model of itself. Now a model of oneself in time must be a fractal. The only way the future can be predicted is through its pattern of events being seen to be similar to the past.

That is not to say that actual events repeat (although of course they do), but that the patterns of relations between particular events tend to repeat.

But we need to understand fractals. They are not really two-dimensional pictures. They are three-dimensional pictures. Long before we knew about fractals we knew the concept from the hologram – that encoding of time and space into a 2-dimensional frame where the self-similarity of the frame was its key feature.

The reason why fractals are so important is that our approaches to information and measurement are essentially 2-dimensional. Look at Shannon’s diagram to see this. The deep problem with 2-dimensionality is that it has no concept of “nothing”. In Shannon, any symbol exists against the constant background of a not-symbol, but we have no way of expressing the not-symbol.

True nothingness means making things disappear. It turns out that the only way we can make things disappear is by working in three dimensions. The fractal is an encoding of three dimensions in which “nothing” is written through like a stick of rock. Nothing is what makes the pattern.

All human behaviour in institutions is really about nothing. Or rather, it is about the attempt to grasp nothing from something. In the way that a piece of music eventually selects its ending – and silence – all our behaviour seeks a kind of resolution. Every conversation seeks a resolution. Every interminable meeting frustrates because its ending never comes.

But if we want to see nothing, then we have to work with its encoded representation – the fractal. Our mathematical approach to information – to computer information – can provide a glimpse of nothing. Indeed, our approaches to machine learning, which are beginning to show behaviour that is rather like conscious behaviour, are providing a glimpse – they too are fractal.

If we want to see nothing, then we need an encoding strategy whereby data is represented in a way where nothing might be analysed and considered.

If we want viable institutions, then we need viable individuals. If we want viable individuals then we need a way of encoding the communicative behaviour of individuals in a fractal which can reveal the underlying selection mechanism for optimal future development. It would not be a surprise if the optimal selection mechanism for individual development involved communication with other individuals. And it would not be a surprise if the optimal collective development of a group of individuals entailed the preservation of information between them.

What do we have? Monasticism?

Thursday, 6 February 2020

Why the current phase of Machine Learning will fail

Over the last two years I've been involved in a very interesting project combining educational technology assessment techniques with machine learning for medical diagnostics. At the centre of the project was the idea that human diagnostic expertise tends to be ordinal: experts make judgements about a particular case based on comparisons with judgements about other cases. If judgement is an ordinal process, then the deeper questions concern the communication infrastructure which supports these comparisons, and the ways in which the rich information of comparison is maintained within institutions such as diagnostic centres.

Then there was a technical question: can (and does) machine learning operate in an ordinal way? And more importantly, if machine learning does operate in an ordinal way, can it be used as a means of maintaining the information produced by the ordinal judgements of a group of experts, such that the combined intelligence of human + machine can exceed that of both human-only and machine-only solutions?
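The project's own algorithm isn't described here, but as an illustration of what an ordinal approach looks like, a minimal Bradley-Terry model can recover a ranking from pairwise expert judgements:

```python
# A minimal Bradley-Terry fit - an illustration of ordinal ranking from
# pairwise judgements, not necessarily the project's actual method.
def bradley_terry(items, comparisons, iters=200):
    """comparisons: (winner, loser) pairs from expert judgements."""
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            wins, denom = 0, 0.0
            for w, l in comparisons:
                if i == w:
                    wins += 1
                    denom += 1.0 / (p[i] + p[l])
                elif i == l:
                    denom += 1.0 / (p[i] + p[w])
            new_p[i] = wins / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p

cases = ["case A", "case B", "case C"]
judgements = [("case A", "case B"), ("case A", "case C"),
              ("case B", "case C"), ("case A", "case B")]
scores = bradley_terry(cases, judgements)
print(sorted(cases, key=scores.get, reverse=True))   # best-to-worst
```

The fitted strengths summarise the comparisons without ever mapping a case onto a fixed classifier - which is the sense in which an ordinal approach keeps information that classification throws away.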

The project isn't over yet, but it would not be surprising if the answer to this question is equivocal: yes and no. "Yes" because this approach to machine learning is the only approach which does not throw away information. The basic problem with all current approaches to machine learning is that ML models are developed with training sets and classifiers as a means of mapping those classifiers automatically onto new data. So the complexity of any new data is reduced by the ML algorithm into a classification. That is basically a process of throwing away information - and it is a bad idea. It amplifies the general tendency of IT systems in organisations, which have been doing this for years, and our institutions have suffered as a result.

However, not discarding information means that the amount of information to be processed grows with the number of pairwise combinations - that is, quadratically. It doesn't matter how powerful one's computers are, an algorithm whose training cost grows like this is bad news. Give it 100 images, and it might take a day to train the machine learning for 4950 combinations. 200 gives 19900, 300 gives 44850, 400 gives 79800, 500 gives 124750. So if 4950 takes 24 hours, 500 images will take about 600 hours = 25 days. It won't take long before we are measuring the training time in months or years.
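The arithmetic is easy to check: pairwise combinations grow as n(n-1)/2, so training cost grows quadratically with the dataset.

```python
# The scaling problem in a few lines: pairs grow as n(n-1)/2, and the
# example above assumes 100 images (4950 pairs) take a day to train.
from math import comb

hours_per_pair = 24 / comb(100, 2)
for n in (100, 200, 300, 400, 500):
    pairs = comb(n, 2)
    print(f"{n} images -> {pairs} pairs, ~{pairs * hours_per_pair / 24:.0f} days")
```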

So this isn't realistic. And yet it's the right approach if we don't want to discard information. We don't yet know enough, and no amount of hacking with Tensorflow is going to sort it out.

The basic problem lies in the difference between human cognition and what the neural networks can do. The reason why we want to retrain the ML algorithm is that we want to be able to update its ordinal rankings so that they reflect the refinements of human experts. This really can only be done by retraining the whole thing with the expanded training set. If we don't retrain the whole thing, then there is a risk that a small correction in one part of the ML algorithm has undesirable consequences elsewhere - the problem known in the literature as "catastrophic forgetting".

Now humans are not like this. We can update our ordinal rankings of things very easily, and we don't suddenly become "stupid" when we do. How do we do it? And if we can understand how we do it, can that help us understand how to get the machine to do it?

I think we may have a few clues as to how we do this, and yes I think at some point in the future it will be possible to get to the next stage of AI where the machine can be retrained like this. But we are a long way off.

The key lies in the ways that the ML structures its data through its recursive processes. Although we don't have direct knowledge of exactly how all the variables and classifiers are stored within the ML layers, we get a hint of it when the ML algorithm is "reversed" to produce images which align with the ML classifiers, such as we see with the Google Deep Dream images.

These are basically fractal images which reflect the way that the ML convolutional neural network algorithm operates. Looking at such an image, we can get some indication of the fractal nature of the structures within the machine learning itself.
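For the curious, this kind of "reversal" can be sketched in a few lines of gradient ascent on the input image - a rough approximation of the Deep Dream idea, assuming PyTorch and torchvision are available; the layer choice and parameters are arbitrary, and this is not Google's code:

    import torch
    import torchvision.models as models

    # Freeze a pretrained network and ascend the gradient of a layer's activation.
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)
    layer = model.features[:20]                  # an arbitrary convolutional block

    img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
    optimiser = torch.optim.Adam([img], lr=0.05)

    for _ in range(100):
        optimiser.zero_grad()
        loss = -layer(img).norm()                # maximise the layer's activation
        loss.backward()
        optimiser.step()
    # img now shows the repeating, self-similar motifs the layer responds to.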

I strongly suspect that not only our consciousness, but the universe itself, has a similar structure. I am not alone: David Bohm, Karl Pribram and many others have held similar views. Within quantum mechanics today, the idea that the universe is some kind of "hologram" is quite common - and the hologram is basically another way of describing a fractal (indeed, we had holograms long before we could generate fractal images on the computer).

What's important about fractals is that they are anticipatory. This really lies at the heart of how ML works: it is able to anticipate the likely category of data it hasn't seen before (unlike a database, which can only reveal the categories of data it has been told about).
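The contrast can be made in a toy example: a database can only retrieve categories it has been told about, while even the crudest learner (here, one-nearest-neighbour on invented one-dimensional data) makes a guess about data it has never seen:

    database = {1.0: "low", 2.0: "low", 8.0: "high", 9.0: "high"}

    def lookup(x):
        return database.get(x)                   # None for anything unseen

    def anticipate(x):
        nearest = min(database, key=lambda k: abs(k - x))
        return database[nearest]                 # a guess, not a retrieval

    print(lookup(7.5), anticipate(7.5))          # None high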

What makes fractals awkward - and why the current state of machine learning will fail - is that in order to change the understanding of the machine, the fractal has to be changed. But to change a fractal, you don't just change one value in one place; you have to transform the entire pattern so that it remains consistent while absorbing the new knowledge.

We know, ultimately, this is possible. Brains - of all kinds - do it. Indeed, all viable systems do it. 

Saturday, 1 February 2020

Brexit Lions and Unicorns

Orwell's essay "The Lion and the Unicorn: Socialism and the English Genius" reads today as very old-fashioned and jingoistic. And yet, like all great artists, Orwell accesses something of the nature of life at the time he was writing ("As I write, highly civilised human beings are flying overhead, trying to kill me.") which we now see reflected back at us in a rather unedifying way with Brexit.

Much that he identified in 1941 is still true.
"There is no question about the inequality of wealth in England. It is grosser than in any European country, and you have only to look down the nearest street to see it. Economically, England is certainly two nations, if not three or four."
Except that of course this inequality has since been exported to many other countries. But what about "patriotism", which is what his essay is really about? What does that mean?

It seems that patriotism is very problematic and confusing. It is uncomfortably close to the "nationalism" or "populism" (what does that mean?) that we see among the Brexiteers or the Trump supporters. Orwell's point is that despite the differences between the rich and the poor, or the different nations of the UK (I would like to think he would apply this across our multicultural society today), a country can be united if it feels its "home" to be under threat. I doubt this is a "national" instinct; it is a human one - we see it in extraordinary human acts of courage and compassion in the wake of terrorist atrocities (for example) the world over. Can we explain it? No - but to "explain" it as a "national characteristic" is both tempting and facile; that's where jingoism comes from.

Where does the "threat" which mobilises everyone come from? Orwell is very clear that it is not among the individual "highly civilised human beings" who are trying to kill him.
"Most of them, I have no doubt, are kind-hearted law-abiding men who would never dream of committing murder in private life. On the other hand, if one of them succeeds in blowing me to pieces with a well-placed bomb, he will never sleep any the worse for it. He is serving his country, which has the power to absolve him from evil."
I wonder whether one might say this about some members of ISIS. So much depends on what we consider to be "civilised".

But it's not fanatics who scare us: "What English people of nearly all classes loathe from the bottom of their hearts is the swaggering officer type, the jingle of spurs and the crash of boots." It's the aristocracy who, in Europe, adopted the goose-step as their "ritual dance" - and his point that it is hard to imagine a Hitler in the UK relies on the fact that "Beyond a certain point, military display is only possible in countries where the common people dare not laugh at the army." Orwell argued that we needed socialism to counter the aristocratic swagger which leads to people like Hitler - what he called elsewhere the "arrogance of superiority".

Today in Europe we don't see military officials swaggering in uniform, although the police often now carry guns. But there is still the swagger of authority everywhere - and I think this really lies at the heart of the Tory Brexit which has just taken place. Orwell may be right that what unites rich and poor is a loathing of authoritarianism - it absolutely fits the rhetoric of Nigel Farage, Dominic Cummings and others. But there is no goose-stepping, and all that the Brexiteers complain of is "Brussels bureaucrats". That's a funny kind of goose-step!

But it may be one nonetheless. This is the goose-step of the techno age: the goose-step of free-market capitalism, technologically-driven oppression, surveillance and artificially-imposed austerity. We are all marching to its beat, and most of us - the losers - hate it.

We know, deep down, that the likes of Johnson, Trump and Farage are really the commanders of this robot dance which so many detest. And we know that they know that declaring hatred for it on the one hand, and ramping it up on the other, is the game. But we are caught - because you cannot call them out without denying that the uniting hatred of authority is fundamentally true. It's a massive double-bind.

So how do we get out of this mess? Orwell's answer was socialism - and in many respects he knew that the founding of the NHS and welfare state was inevitable after the war. It's taken over 70 years to re-impose the goose-step in the techno age in a far more complex and uncertain form where it drives everything, distorting that early socialist ideal to move to its beat.

One of the striking things about the EU and Brexit, Westminster and the election, is that everyone has taken it all so seriously. Perhaps we shouldn't take any of this seriously. Perhaps the game is to get us to take seriously things which aren't at all serious. It's like the population that is afraid to laugh at its army. If we don't take our parliaments - whether national or international - seriously, what are they? They are - it all is - irrelevant. Now look at how hard the media - the press, the national broadcasters, the social media companies - are trying to convince us that this isn't irrelevant!

That's the mark of swaggering authority - to persuade the people that what is irrelevant is the most important thing in the world. It's a con. 

Sunday, 26 January 2020

Well-run Universities and methods for Analysing them

There's so much critique of higher education these days that little thought is going into how an institution might be optimally organised in the modern world. This is partly because critique is "cheap". Bateson made the point that we are good at talking about pathology and bad at talking about health. This is partly because to talk about health you need to talk about the whole system, whereas to talk about pathology you only need to point to one bit of it which isn't working and apportion blame for its failure. Often, critique is itself a symptom of pathology, and may even exacerbate it.

The scientific problem here is that we lack good tools for analysing, measuring and diagnosing the whole system. Cybernetics provides a body of knowledge - an epistemology - which can at least provide a foundation, but it is not so good empirically. Indeed, some aspects of second-order cybernetics appear almost to deny the importance of empirical evidence. Unfortunately, without experiment, cybernetics itself risks becoming a tool for critique. Which is pretty much what's happened.

Within the history of cybernetics, there are exceptions. Stafford Beer's work in management is one of the best examples. He used a range of techniques from information theory to personal construct theory to measure and analyse systems and to transform organisation. More recently, Loet Leydesdorff has used information theory to produce models of the inter-relationship between academic discourse, government policy and industrial activity, while Robert Ulanowicz has used information theory in ecological investigations.

Information theory is something of a common denominator. Ross Ashby recognised that Shannon's formulae were basically expressing the same idea as his concept of "variety", and that these formulae could be used to analyse complex situations in almost any domain.
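The connection is easy to demonstrate: for a system of N equally likely states, Shannon's H collapses to log2(N) - a simple count of variety - and any deviation from equiprobability (i.e. any constraint) reduces it. A toy sketch:

    import math

    def shannon_entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    N = 8
    print(shannon_entropy([1 / N] * N), math.log2(N))    # 3.0 3.0 - H = log2(N)
    print(shannon_entropy([0.7, 0.1, 0.1, 0.05, 0.05]))  # ~1.46 < log2(5): constraint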

However, there are some big problems with Shannon's information theory. Not least, it assumes that complex systems are ergodic - i.e. that their complexity over a short period of time is equivalent to their complexity over a long spell of time. All living systems are non-ergodic: they develop new features and new behaviours which were impossible to predict at the outset.

Another problem with information theory is the way that complexity itself is understood in the first place. For Ashby, complex systems are complex because of the number of states they can exist in. Ashby's variety was a countable thing. But how many countable states can a human being exist in? Where do we draw the boundary around the things that we are counting and the things that we ignore? And then the word "complex" is applied to things which don't appear to have obvious states at all - take, for example, the "complex" music of J.S. Bach. How many states does that have?

I think one of the real breakthroughs in the last 10 years or so has been the recognition that it is not information which is important, but "not-information", "constraint", "absence" or "redundancy". Terry Deacon, Loet Leydesdorff, Robert Ulanowicz and (earlier) Gregory Bateson and Heinz von Foerster can take the credit for this. In the hands of Leydesdorff, however, this recognition of constraint and absence became measurable using Shannon information theory, and the theory of anticipatory systems of Daniel Dubois.

This is where it gets interesting. An anticipatory system contains a model of itself. It is the epitome of Conant and Ashby's statement that "every good regulator of a system must be a model of that system" (see https://en.wikipedia.org/wiki/Good_regulator). Beer integrated this idea into his Viable System Model in what he called "System 4". Dubois, meanwhile, expresses an anticipatory system as a fractal, and this potentially means that Shannon information can be used to generate this fractal and provide a kind of "image" of a healthy system. Which takes us to a definition:

A well-run university contains a good model of itself.
How many universities do you know like that?

Here however, we need to look in more detail at Dubois's fractal. The point of a fractal is that it is self-similar at different orders of scale. That means that what happens at one level has happened before at another. So theoretically, a good fractal can anticipate what will happen because it knows the pattern of what has happened.
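The simplest of Dubois's incursive systems gives the flavour. In the incursive logistic map, as I understand it, the next state depends on itself - x(t+1) = a*x(t)*(1 - x(t+1)) - which can be solved algebraically; the parameter values here are illustrative:

    # Incursive logistic map: x(t+1) = a*x(t)*(1 - x(t+1)).
    # Solving for x(t+1) gives the self-referential (anticipatory) update below.
    def incursive_step(x, a=4.0):
        return a * x / (1 + a * x)

    x = 0.1
    for t in range(10):
        x = incursive_step(x)
        print(t, round(x, 4))
    # Unlike the ordinary recursive logistic map at a=4 (which is chaotic),
    # the incursive version settles quickly: anticipation stabilises the system.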

I've recently done some work analysing student comments from a comparative judgement exercise over a variety of documents from science, technology, creativity and communication. (I did this for the Global Scientific Dialogue course in Vladivostok last year - https://dailyimprovisation.blogspot.com/2018/03/education-as-music-some-thoughts-on.html). The point of the comparative judgement was to stimulate critical thought and disrupt expectations. In other words, it was to re-jig any anticipatory system that might have been in place, and encourage the development of a fresh one.

I've just written a paper about it, but the pictures are intriguing enough. Basically, they were generated by taking a number of Shannon entropy measurements of different variables and examining the relative entropy between these different elements. This produces a graph, and the movements of the entropy line in the graph can be coded as 1s and 0s to produce a kind of fractal. (I used the same technique for studying music here - https://dailyimprovisation.blogspot.com/2019/05/bach-as-anticipatory-fractal-and.html)
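A rough reconstruction of the kind of procedure described (not the actual analysis code) might look like this: measure the Shannon entropy of a sliding window over a stream of symbols, then code the rises and falls of the entropy line as 1s and 0s:

    import math
    from collections import Counter

    def entropy(window):
        n = len(window)
        return -sum((c / n) * math.log2(c / n) for c in Counter(window).values())

    def entropy_series(symbols, width=8):
        return [entropy(symbols[i:i + width]) for i in range(len(symbols) - width + 1)]

    def rises_and_falls(series):
        return [1 if b > a else 0 for a, b in zip(series, series[1:])]

    comment = list("the quick brown fox jumps over the lazy dog")
    print(rises_and_falls(entropy_series(comment)))      # a binary "fractal" string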

So here are my pictures below. Now I suppose there is a simple question - can you tell who are the good students and who are the bad ones?


[Figures a-e: the entropy-derived fractal images for five students; not reproduced here.]
But what about the well-run institution? I think it too must have an analysable anticipatory fractal. There will be patterns of activity at all levels of management - from learner utterances (like these graphs) through to teacher behaviour, management actions, policies, technologies and relations with the environment. Yet I suspect that if we tried to do this today, we would find little coherence in the ways in which modern universities coordinate their activities with the world.

Tuesday, 14 January 2020

What have VLEs done to Universities?

The distinction between genotype and phenotype is useful in thinking about organisational change. Given that an institution is a kind of organism, the distinction is between those behaviours that emerge in its interactions with its environment, and the extent to which these behavioural changes become hard-wired into its nature and identity (the "genome"). So institutions adapt their behaviour in response to environmental changes in a "phenotypical" way initially, implementing ad-hoc technologies and procedures. Over time, these ad-hoc procedures become codified in the functionality of universal technologies which are deployed everywhere, and which determine the ongoing behaviour of the "species" - the "genotype".

Changes to the genotype are hard to shift. They determine patterns of organic reproduction: so we see different kinds of people existing in institutions from the kinds of people that we might have seen 40 years ago. Many elderly esteemed scholars would now say they wouldn't survive in the modern university. They're right - think of Marina Warner's account of her time at Essex (and why she quit) in the London Review of Books a few years ago: https://www.lrb.co.uk/the-paper/v36/n17/marina-warner/diary, or more recently Liz Morrish's "The university has become an anxiety machine": https://www.hepi.ac.uk/2019/05/23/the-university-has-become-an-anxiety-machine/. Only last week this Twitter thread appeared: https://twitter.com/willpooley/status/1214891603606822912. It's all true.

As part of the "genotype", technology is the thing which drives the "institutional isomorphism" that means that management functions become professionalised and universal (where they used to be an unpopular burden for academics). But - and it is a big BUT - this has only happened because we have let it happen.

The Virtual Learning Environment is an interesting example. Its genotypical function has been to reinforce the modularisation of learning in such a way that every collection of resources, activities, tools and people must be tied to a "module code", into which marks for those activities are stored. What's the result? Thousands of online "spaces" in the VLE which are effectively dead - nothing happening - apart from students (who have become inured to the dead online VLE space on thousands of other modules) going in to access the powerpoints that the teacher uploaded from the lecture, watch lecture capture, or submit their assignment.

What a weird "space" this is!

Go into any physical space on campus and you see something entirely different. Students gathered together from many courses, some revising or writing essays, some chatting with friends, some on social media. In such a space, one could imagine innovative activities that could be organised among such a diverse group - student unions are often good at this sort of thing: the point is that the possibility is there.

In the online space, where is even the possibility of organising group activities across the curriculum? It's removed by the technologically reinforced modularisation of student activity. If you remove this reinforced modularisation, do new things become possible?

If Facebook organised itself into "modules" like this it would not have succeeded. Instead it organised itself around personal networks where each node generated information. Each node is an "information producing" entity, where the information produced by one node can become of interest to the information-production function of another.

There's something very important about this "information production" function in a viable online space. In a VLE, information production is restricted to assignments - which are generally not shared with a community for fear of plagiarism - and discussion boards. This restriction of information production and sharing is a key reason why these spaces are "dead". But these restrictions are introduced for reasons relating to the ways we think about assessment, and those ways of thinking get in the way of authentic communication: communicating within the VLE can become a risk to the integrity of the assessment system! (Of course, this means that communication happens in other ways - Facebook, Whatsapp, Snapchat, TikTok, etc.)

The process of generating information - of sticking stuff out there - is a process of probing the environment. It is a fundamental process that needs to happen for a viable system if it is to adapt and survive. It matters for individual learners to do this, but it also matters for communities - whether they are online or not.

I wonder if this is a feature of all viable institutions: that they have a function which puts information out into the environment as a way of probing the environment. It is a way of expressing uncertainty. This information acts as a kind of "receptor" which attracts other sources of information (other people's uncertainty) and draws them into the community. Facebook clearly exploits this, whilst also deliberately disrupting the environment so as to keep people trying to produce information to understand an ever-changing environment. Meanwhile, Facebook makes money.

If an online course or an online community in an institution is to be viable, then it must have a similar function: there must be a regular production of information which acts as a receptor to those outside. This processing of "external uncertainty" exists alongside the processes of inner-uncertainty management which are organised within the community, and within each individual in that community.

In asking how this might be organised, I wonder if there is hope for overcoming the genotype of the VLE-dominated university.

Monday, 13 January 2020

Oscillating Emotions, Maddening Institutions... and Technology

My current emotional state is worrying me. Rather like the current climate on our burning planet, or our scary politics, it's not so much a particular state (although depression, like burning Australia, is of course worrying); it is the oscillation, the variety, of emotional states that's bothering me. It's one extreme and then the next, with no control. The symptoms, from an emotional point of view, are dangerous because they threaten to feed back into the pathology. In a state of depression, one needs to talk, but things can become so overwhelming that talking becomes incredibly difficult, and so it gets worse.

A lot hangs on the nature of our institutions. It is not for nothing that stable democracies pride themselves on the stability of their institutions. This is because, I think, institutions are places where people can talk to each other. They are information-conserving entities, and the process of conserving information occurs through conversation. "Conserving conversation", if you like.

So what happens when our institutions fill themselves with technologies that disturb the context for conversation to the extent that people:

  1. feel stupid that they are not on top of the "latest tools" (or indeed, are made to feel stupid!)
  2. cannot talk to each other about their supposed "incompetence" for fear of exposing it.
  3. feel that the necessity for conversation is obviated by techno-instrumental effectiveness (I sent you an email - didn't you read it?)
  4. are too busy and stressed working with bad interfaces to build proper relationships or to ask powerful questions
  5. are permanently threatened by existential concerns over their future, their current precarious contract, their prospects for longer-term financial security, their family, and so on
There is, of course, the "you're lucky to have a job" brigade. Or the "don't think about it, just get on with it" people.  But these people reduce the totality of human life to a function. And it clearly isn't a simple function. And yet there is no rational way to determine that such an attitude is wrong. Because of that, these people (sometimes deliberately) amplify the oscillation. 

This functionalist thinking derives from technological thinking. It's not particular technologies that are to blame. But it is what computer technology actually does to institutions: it discards information. Losing information is really bad news. 
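A toy illustration of the point: every classification - every drop-down menu, every category an IT system imposes - is a many-to-one mapping, and a many-to-one mapping cannot be inverted. The entropy of the record drops, and what is lost is lost. The data here is invented:

    import math
    from collections import Counter

    def entropy(xs):
        n = len(xs)
        return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

    responses = ["strongly agree", "agree", "neutral", "disagree",
                 "strongly disagree", "agree", "neutral", "agree"]
    categorised = ["positive" if r in ("agree", "strongly agree") else "other"
                   for r in responses]

    print(entropy(responses), entropy(categorised))      # ~2.16 bits -> 1.0 bit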

So we have institutions which traditionally exist by virtue of their capacity to conserve information (and memory, thought and inquiry) through facilitating conversation. We introduce an IT system which loses some information, because it removes some of the uncertainty that previously required conversation to address. This information loss is addressed by another IT system, which loses more information. Which necessitates another... The loss of information through technology is like the increase in CO2.

It leads to suffocation.