Monday, 15 April 2019

Kaggle and the Future University: Learning. Machine. Learning.

One of the most interesting things that @gsiemens pointed out the other day in his rant about MOOCs was that people learning machine learning had taught themselves by downloading datasets from Kaggle and using the now abundant code examples for manipulating and processing these datasets with the Python machine learning libraries, which are also all on GitHub, including tensorflow and keras. Kaggle itself is a site for people to engage in machine learning competitions, for which it gathers huge datasets on which people try out their algorithms. There are now datasets for almost everything, and the focus of my own work, diabetic retinopathy, has a huge amount of material available (albeit a lot of it not that great quality). There is an emerging standard toolkit for AI: something like Anaconda with a Jupyter notebook (or maybe PyCharm), and code which imports tensorflow, keras, numpy, pandas, etc. It's become almost like the ubiquity of setting up database connectors to SQL and firing queries (and is really the logical development of that).
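That standard workflow can be sketched in a few lines. This is a minimal, hypothetical example: the column names and synthetic data stand in for a real Kaggle CSV (which you would load with pd.read_csv), and the hand-rolled logistic regression stands in for the one-layer model keras would normally provide.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a downloaded Kaggle CSV -- in practice you would
# call pd.read_csv("train.csv") on the real dataset. Column names are made up.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "feature_a": rng.normal(size=n),
    "feature_b": rng.normal(size=n),
})
# The label depends on the features, so there is a pattern to learn.
df["label"] = (df["feature_a"] + df["feature_b"] > 0).astype(int)

X = df[["feature_a", "feature_b"]].to_numpy()
y = df["label"].to_numpy()

# Minimal logistic regression by gradient descent -- the kind of model a
# one-layer keras network (Dense(1, activation="sigmoid")) would replace.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / n      # gradient of the log-loss w.r.t. w
    b -= 0.5 * (p - y).mean()           # ... and w.r.t. b

accuracy = ((p > 0.5).astype(int) == y).mean()
```

The point is less the model than the ubiquity of the pattern: load a dataframe, fit something, look at a score; that loop is what thousands of people have taught themselves from Kaggle kernels.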

Whatever we might think of machine learning with regard to any possibility of Artificial Intelligence, there's clearly something going on here which is exciting, increasingly ubiquitous, and perceived to be important in society. I deeply dislike some aspects of AI - particularly its hunger for data which has driven a surveillance-based approach to analysis - but at the same time, there is something fascinating and increasingly accessible about this stuff. There is also something very interesting in the way that people are teaching themselves about it. And there is the fact that nobody really knows how it works - which is tantalising.

It's also transdisciplinary. Through Kaggle's datasets, we might become knowledgeable in Blockchain, Los Angeles's car parking, wine, malaria, urban sounds, or diabetic retinopathy. The datasets and the tools for exploring them are foci of attention: codified ways in which diverse phenomena might be perceived and studied through a coherent set of tools. It may matter less that those tools are not completely successful in producing results - but they do something interesting which provides us with alternative descriptions of whatever it is we are interested in.

What's missing from this is the didacticism of the expert. What instead we have are algorithms which for the most part are publicly available, and the datasets themselves, and a question - "this is interesting... what can we make of it?"

We learn a lot from examining the code of other people. It contains not just a set of logic, but expresses a way of thinking and a way of organising. When that way of thinking and way of organising is applied to a dataset, it also expresses a way of ordering phenomena.

Through my diabetic retinopathy project, I have wondered whether human expertise is ordinal. After all, what do we get from a teacher? If we meet someone interesting, it's tempting to present them with various phenomena and ask them "What do you think about this?". And they might say "I like that", or "That's terrible!". If we like them, we will try to tune our own judgements to mirror theirs. The vicarious modelling of learning seems to be something like an ordinal process. And in universities, we depend on expertise being ordinal - how else could assessment processes run if experts did not order their judgements about student work in similar ways?

The problem with experts is that when expertise becomes embodied in an individual it becomes scarce, so universities have to restrict access to it. Moreover, because universities have to ensure they are consistent in their own judgement-making, they do not fully trust individual judgement, but organise massive bureaucracies on top of it: quality processes, exam boards, etc.

Learning machine learning removes the embodiment of the expertise, leaving the order behind. And it seems that a lot can be gained from engaging with the ordinality of judgements on their own. That seems very important for the future of education.

I'm not saying that education isn't fundamentally about conversation and intersubjective engagement. It is - face-to-face talk is the most effective way we can coordinate our uncertainty about the world. But the context within which the talking takes place is changing. Distributing the ordinality of expert judgements creates a context where talk about those judgements can happen between peers in various everyday ways rather than simply focusing on the scarce relation between the expert and the learner. In a way, it's a natural development from the talking-head video (and it's interesting to reflect that we haven't advanced beyond that!).


Every improvisation I am making at the moment is dominated by an idea about the nature of reality as being  a hologram, or fractal. So the world isn't really as we see it: it's our cells that make us perceive it like that, and it's our cells that make us perceive a "me" as a thing that sees the world in this way.

This was brought home to me even more after a visit to the Whitworth gallery's wonderful exhibition of ancient Andean textiles. They were similar to the one below (from Wikipedia)

It's the date which astonishes: sometime around 200 CE. Did reality look like this to them? I wonder if it might have done.

This music is kind-of in one key. It's basically just a series of textures and slides (which are meant to sound like traffic) that embellish a fundamental sound. I like to think that each of these textures overlays some fundamental pattern with related patterns at different levels. The point is that all these accretions of pattern produce coherence through producing a fractal.

Saturday, 13 April 2019

Comparative Judgement, Personal Constructs and Perceptual Control

The idea that human behaviour is an epiphenomenon of the control of perception is associated with Bill Powers's "Perceptual Control Theory", which dates back to the 1950s. Rather than human consciousness and behaviour being "exceptional", individual, etc., they are seen as the aggregated result of the interactions of a number of subsystems, of which the most fundamental is the behaviour of the cell. So if our cells are organising themselves according to the ambiguity of their environment (as John Torday argues), and in so doing are "behaving" so as to maintain homeostasis with their environment by producing information (or negentropy) and reacting to chemiosmotic changes, then consciousness and behaviour (alongside growth and form) are the epiphenomenal result.

So when we look at behaviour and learning, and look back towards this underlying mechanism, what do we see? Fundamentally, we see individuals creating constructs: labels with which individuals deal with the ambiguity and uncertainty of the world. But what if the purpose of the creation of constructs is analogous to the purpose of the cell: to maintain homeostasis by producing negentropy and reacting to chemiosmosis (or perhaps noise in the environment)?

We can test this. Presenting individuals with pairs of different stimuli and asking them which they prefer and why is something that comparative judgement software can do. It's actually similar to the rep-grid analysis of George Kelly, but rather than using 3 elements, 2 will do. Each pair of randomly chosen stimuli (say, bits of text about topics in science or art) is effectively a way of stirring up the uncertainty of the environment. This uncertainty then challenges the perceptual system of the person to react. The "construct", or the reason for one choice or another, is the person's response to this ambiguity.
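A minimal sketch of what such comparative judgement software might do. The stimuli, the participant's choices and the stated construct are all hypothetical here; real systems are more elaborate, but the core loop is just random pairing plus a recorded reason.

```python
import random

# Hypothetical stimuli: short texts a participant compares in pairs.
stimuli = ["a text on entropy", "a text on impressionism",
           "a text on DNA", "a text on counterpoint"]

def elicit_pairs(items, n_rounds, seed=0):
    """Draw random pairs of distinct stimuli, as comparative judgement
    software does. Each pair is put to the participant with the question
    'Which do you prefer, and why?' -- the 'why' is the construct,
    in Kelly's sense, but elicited from 2 elements rather than 3."""
    rng = random.Random(seed)
    return [tuple(rng.sample(items, 2)) for _ in range(n_rounds)]

# A made-up record of one participant's session: for each pair, the
# choice plus the stated construct (the reason given for the choice).
session = [{"pair": pair, "chosen": pair[0], "construct": "more concrete"}
           for pair in elicit_pairs(stimuli, 4)]
```

Over many rounds, the list of constructs becomes the data: how a person's labels shift as different pairs stir up different uncertainties.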

The interesting thing is that as different pairs are used, so the constructs change. Moreover, the topology of what is preferred to what also gradually reveals contradictions in the production of constructs. This is a bit like Powers's hierarchies of subsystems, each of which is trying to maintain its control of perception. So at a basic level, something is going on in my cells, but as a result of that cellular activity, a higher-level system is attempting to negotiate the contradictions emerging from that lower system. And then there is another, higher-level system which is reacting to that system. We have layers of recursive transduction.
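Those contradictions in the topology of preference are cycles in a directed graph (A preferred to B, B to C, C to A), and they can be found mechanically. A sketch, with hypothetical judgement data:

```python
# Preferences as a directed graph: an edge (winner, loser) records one
# judgement. A contradiction is a cycle: A over B, B over C, C over A.
# The judgement data here is made up for illustration.
prefs = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

def find_contradiction(edges):
    """Return True if the preference graph contains a cycle."""
    graph = {}
    for winner, loser in edges:
        graph.setdefault(winner, []).append(loser)

    def reaches(start, target, seen):
        # Depth-first search: can we get from start to target?
        for nxt in graph.get(start, []):
            if nxt == target or (nxt not in seen
                                 and reaches(nxt, target, seen | {nxt})):
                return True
        return False

    # A cycle exists iff some loser can reach the winner who beat them.
    return any(reaches(loser, winner, {loser}) for winner, loser in edges)
```

Here find_contradiction(prefs) is True because of the A-B-C-A loop; drop the ("C", "A") judgement and it becomes False. It is exactly these loops that hint at a higher-level system trying to reconcile a lower one.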

It's interesting to reflect on the logic of this and compare it to our online experience. Our experience of Facebook and the media in general is confusing and disabling precisely because the layers of recursive transduction are collapsed into one. Complexity requires high levels of recursion to manage it, and most importantly, it requires the maintenance of the boundaries between one layer of recursion and another. From this comes coherence. Without this, we find ourselves caught in double-binds, where one layer is in conflict with another, with no capacity to resolve the conflict at a new level of recursion.

If we want to break the stranglehold of the media on our minds, we need new tools for bringing coherence to our experiences. I wonder whether, if we had these tools, self-organised learning without institutional control would become a much more achievable objective.

Tuesday, 9 April 2019

The OER, MOOC and Disruption Confusion: Some thoughts about @gsiemens' claim about MOOCs and Universities

George Siemens made a strong claim yesterday that "Universities who didn't dive into the MOOC craze are screwed", justifying this by acknowledging that although the MOOC experiment up until now has not been entirely successful, the business of operating at scale in the online environment is the most important thing for universities. The evidence he points to is that many machine learning/big data experts taught themselves how to do it through online resources. Personally, I can believe this is true. George's view prompted various responses among many who are generally hostile to the "disruption" metaphor of technology in education (particularly MOOCs), but the most interesting responses suggested that the real impact of the MOOC was on OER, and that open resources were the most important thing.

I find the whole discussion around disruption, MOOCs and OER very confusing. It makes me think that these are not the right questions to be asking. They all seem to view whatever activities which happen online between individuals and content through the lens of what happens in traditional education:
Disruption = "hey kids, school's closed. Let's have a lesson in the park!";
MOOCs = "Hey kids, we're going to study with 6 million other students today!";
OER = "Hey kids, look a free textbook!". 
The web is different in ways which we haven't fathomed yet. It's obvious now that this difference is not really being felt directly in education: as I have said in my book, and as Steve Watson said the other day in Liverpool, education is largely using technology to maintain its existing structures and practices. But the difference is being felt in the workplace, in casualisation, among screen-addicted teens, and in increasingly automated industries which would once have provided those teens with employment.

The web provides multiple new options for doing things we could do before. The free textbook is a co-existing alternative to the non-free textbook; the MOOC is a not-too-satisfying but co-existing alternative to expensive face-to-face education. What we have seen is an explosion of choice, and an accompanying explosion of uncertainty as we attempt to deal with the choice. Our institutions and technology corporations have both been affected by the increase in uncertainty.

What we are now discovering in the way we use our electronic devices provides a glimpse into how our consciousness deals with uncertainty and multiplicity. On the surface, it doesn't look hopeful. We appear to be caught in loops of endless scrolling, swiping and distraction. But what we do not see is that this pathological behaviour is the product of a profit-driven model which demands that tech companies increase the number of transactions that users have with their tools: their share-prices move with those numbers. Every new aspect of coolness, from Snapchat image filters to Dubsmash silliness and VR immersive environments, serves to increase the data flows. Our tech environment has become toxic, resulting in endless confusion and double-binds. But we are told a lie: that technology does this. It doesn't. Corporations do this, because this is the way you make money in tech - by confusing people. It is, unfortunately, also the way universities are increasingly operating. Driven by financial motives, they have become predatory institutions. Deep down, everything has become like this because turning things into money is a strategy for dealing with uncertainty.

All human development involves bringing coherence to things. It is, fundamentally, a sense-making operation. Coherence takes a multiplicity of things and orders them in a deeper pattern. Newman put it well:
"The intellect of man [...] energizes as well as his eye or ear, and perceives in sights and sounds something beyond them. It seizes and unites what the senses present to it; it grasps and forms what need not have been seen or heard except in its constituent parts. It discerns in lines and colours, or in tones, what is beautiful and what is not. It gives them a meaning, and invests them with an idea. It gathers up a succession of notes into the expression of a whole, and calls it a melody; it has a keen sensibility towards angles and curves, lights and shadows, tints and contours. It distinguishes between rule and exception, between accident and design. It assigns phenomena to a general law, qualities to a subject, acts to a principle, and effects to a cause." 
This is what consciousness really does. What Newman doesn't say is that the means by which this happens is conversation. And this is where the web we have falls down. It instead acts as what Stafford Beer called an "entropy pump" - sowing confusion. The deeper reasons for this lie in fundamental differences between online and face-to-face conversation, which we are only beginning to understand. But we will understand them better in time.

I find myself agreeing with Siemens. I do not think that the traditional structures of higher education will survive a massive increase in technology-driven uncertainty. In the end, it will have to change into something more flexible: we will dispense with rigid curricula and batch-processing of students. Maybe the MOOC experiment has encouraged some to think the unthinkable about institutional organisation. Maybe.

A university, like any organism, has to survive in its environment. Universities are rather like cells, and like cells, they evolve by absorbing aspects of the environment within their own structures (those mitochondria were once independently existing). In biology this is endosymbiosis. That is how to survive - to embrace and absorb. Technology is also endosymbiotic in the sense that it has embraced almost every aspect of life. It feels like we are in something of a stand-off between technology and the university, where the university is threatened and as a result is putting up barriers, reinforced by "market forces". This is also where our current pathologies of social media are coming from. Adaptation will not come from this.

Creating and coordinating free interventions in the environment is at least a way of understanding the environment better. Personally, I think grass-roots things like @raggeduniversity are also important. MOOCs were an awkward way of doing this. But the next wave of technology will do it better, and eventually I think it will create the conditions whereby human consciousness can create coherence from conversations amid the uncertainty of the challenging world of AI and automation in which it finds itself.

Sunday, 7 April 2019

Natural information processing and Digital information processing in Education

In my book, I said:

"Educational institutions and their technology are important because they sit at the crossroads between the ‘natural’ computation which is inherent in conversation and our search for knowledge, and the technocratic approach to hierarchy which typifies all institutions."

I don’t mean to say that education institutions have to be hierarchical – but they clearly are. Nor do I mean to say that they have to be technocratic – but, increasingly and inevitably, they clearly are. It’s more about a distinction between the kind of “computing” that goes on in technocratic hierarchies, and the kind of “computing” that goes on in individuals as they have conversations with one another. And education seems to have to negotiate these two kinds of computing.

Without conversations, education is nothing. Without organisation, coherent conversation is almost impossible.

It’s as if one form of information – the information of the computer, of binary choice, or logistical control – has to complement the information of nature, organic growth and emotional flux. When the balance is right, things work. At the moment, the technocratic idea of information and its technologies dominate, squeezing out the space for conversation. And that’s why we are in trouble.

We know how our silicon machines work (although we may be a bit confused by machine learning!), but we don’t know how “natural” computing works. But we have some insights.

Natural computing seems to work on the basis of pattern – or, in information theoretical terms, redundancy. Only through the production of pattern do things acquire coherence in their structure. And without coherence, nothing makes sense: “can you say that again?”… We do this all the time as teachers – we make redundancy in our learning conversations.

Silicon, digital, information conveys messages in the form of bits, and while redundancy is a necessary part of that communication process, it is the “background” to the message. It is, in the simplest way, the “0” to the information’s “1”.
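Shannon's measures make this concrete. A small sketch of first-order redundancy; note this simple measure only sees the frequency pattern of symbols, not their sequential order, so it understates the redundancy of a message like "abababab".

```python
import math
from collections import Counter

def entropy(text):
    """Shannon entropy of the character distribution, in bits per symbol."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def redundancy(text):
    """1 - H/H_max: the proportion of capacity 'spent' on pattern.
    H_max is the entropy of a uniform distribution over the same alphabet."""
    h_max = math.log2(len(set(text)))
    return 1 - entropy(text) / h_max if h_max > 0 else 1.0

# A highly patterned message is mostly redundant; a maximally varied one is not.
patterned = "aaaaaaab"   # redundancy ~ 0.46
varied = "abcdefgh"      # redundancy = 0.0
```

In these terms, the teacher's "can you say that again?" work is deliberately pushing the redundancy of the conversation up, so that the pattern, not the surprise, carries the coherence.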

So is natural computing all about “0”? Is it “Much ado about nothing”? I find this an attractive idea, given that all natural systems are born from nothing, become something, and eventually decay back to nothing again. A sound comes from silence, wiggles around a bit, and then fades to silence again. All nature is like this. The diachronic structure of nature is about nothing.

Moreover, in the universe, Newton’s third law tells us that the universe taken as a totality must essentially be nothing too. There may have been a “big bang”, but there was also an “equal and opposite reaction”. Somewhere. And this is not to say anything about spiritual practices, which are almost always focused on nothingness.

When we learn and grow, we do so in the knowledge that one day we will die. But we do this in the understanding that others will live and die after us, and that how we convey experience from one generation to the next can help to keep a spirit of ongoing life alive.

Schroedinger’s “What is Life” considered that living systems exhibit negative entropy, producing order, and working against the forces of nature which produce entropy or decay. I think this picture needs refinement. Negative entropy can be “information” which in Shannon’s sense is measured in “bits” – or 1s and 0s. But negative entropy may also be an “order of nothings”. So life is an order of nothing from which an order of bits is an epiphenomenon?

Our “order of bits” has made it increasingly difficult to establish a coherent order of nothing. Our digital technologies have enforced an “order of bits” everywhere, not least in our educational institutions. But the relationship between digital information and natural information can be reorganised. Our digital information may help us to gain deeper insight into how natural information works.

To do this, we must turn our digital information systems on our natural information systems, to help us steer our natural information systems more effectively. But the key to getting this to work is to use our digital technologies to help us understand redundancy, not information.

Techniques like big data focus on information in terms of 1s and 0s: they take natural information systems and turn them into digital information. This is positive feedback, from which we seek an “answer”: a key piece of information on which we can then base a decision. But we are looking for the wrong thing in looking for an answer. We need instead to create coherence.

Our digital information may be turned to identify patterns in nature: the mutually occurring patterns at different levels of organisation. It can present these things to us not in the form of an answer, but in the form of a more focused question. Our job, then, is to generate more redundancy – to talk to each other, to do more analysis – to help to bring coherence to the patterns and questions which are presented to us. At some point, the articulation of redundancies will bring coherence to the whole living system.

I think this is what we really need our educational technology to do. It is not about learning maths, technology, science, or AI (although all those things may be the result of creating new redundancies). It is about creating ongoing coherence in our whole living system.

Wednesday, 27 March 2019

@NearFutureTeach Scenarios and Getting many brains to think as one

I went to the final presentation of the Near Future Teaching project at Edinburgh yesterday. I've been interested in what's happening at Edinburgh for a while because it looked to me like a good way of getting teachers and learners to talk together about teaching and to think about the future. As with many things like this, the process of doing this kind of project is all important - sometimes more important than the products.

I'm familiar with a scenario-based methodology because this is what we did on the large-scale iTEC project, which was coordinated by European Schoolnet. Near Future Teaching has followed a similar plan: identification of shared values, co-design of scenarios, technological prototyping/provoking (using what they neatly called "provo-types"). iTEC took its technological prototypes a bit more seriously, which, on reflection, I think was a mistake.

During iTEC I wasn't sure about scenario-building as a methodology. It seemed either too speculative or not speculative enough, with the future imagined through the lenses we use to see the present. We're always surprised by the future, often because it involves getting a new set of lenses. I was talking to a friend at Manchester University on Monday about how theologians/religious people make the best futurologists: Ivan Illich, Marshall McLuhan, C.S. Lewis (his "Abolition of Man" is an important little book), Jacques Ellul. Maybe it's because the lens that allows you to believe in God is very different to the lens that looks at the world as it is - so these people are good at swapping lenses.

After Near Future Teaching, I'm a bit more enthusiastic about scenarios. I spoke to a primary school teacher who was involved in the project, and we discussed the fact that nobody is certain about the future. Uncertainty is the great leveller: teachers and learners are in the same boat, and this is a stimulus for conversation and creativity.

But then there is something deeper about this kind of process. Uncertainty is a disrupter to conventional ways of looking at the world. Each of us has a set of categories or constructs through which we view the world. Sometimes the barriers to conversation are those categories themselves, and making interventions which loosen the categories is a way of creating new kinds of conversation. Introducing "uncertain topics" does this.

In his work on organisational decision-making, Stafford Beer did a similar thing with his "syntegration" technique. That involved surfacing issues in a group, and then organising conversations which deliberately aimed to destabilise any preconceived ways of looking at the world. Beer aimed to create a "resonance" in the communications within the group as their existing categories were surrendered and new ones formed in the context of conversation. The overall aim was to "get many brains to think as one brain". Given the disastrous processes of collective decision which we are currently witnessing, we need to get back to this!

Having said this, there's something about the whole process which IS teaching itself. That leads me to think that the process of Near Future Teaching is closely aligned to the near-future teaching it imagines. Maybe the scenarios can be dispensed with; almost certainly we have to rethink assessment, the curriculum and the institutional hierarchy; but the root of it all is conversation which disrupts existing ways of thinking and establishes coherence within a group.

If we had this in education, Brexit would just be a cautionary tale.

Sunday, 24 March 2019

Human Exceptionalism and Brexit Insanity

Why have we managed to tie ourselves in knots? It's (k)not just over Brexit. It's over everything - austerity, welfare, tax, university funding, climate change, the point of education...

Following on from my last post, a thought has been niggling me: is it because we think human consciousness is exceptional? Is our belief in the exceptionalism of consciousness in the human brain stopping us from seeing ourselves as part of something bigger? The problem is that as soon as we see ourselves as something special, and our consciousness as somehow special, we come to consider one person's consciousness more special than another's. Then we hold on to our individual thoughts or "values" (they're a problem too) and see to it that the thoughts and values of one person must hold out against the thoughts and values of another. Is it because consciousness is not, in fact, exceptional that this creates such a terrible mess?

If consciousness is not exceptional, what does it do? What is its operating principle?

In my book, Uncertain Education, I argued that "uncertainty" was the most useful category through which to view the education system. I think uncertainty is a good category through which to view an unexceptional consciousness too. Consciousness, I think, is a process which coordinates living things in managing uncertainty. It is a process which maintains coherence in nature.

This process can be seen in all lifeforms from cells to ants to humans. What we call thinking is an aggregate of similar processes among the myriad of cellular and functionally differentiated components from which we are made, and which constitute our environment. The brain is one aggregation of cells which performs this role. It is composed of cells managing their uncertainty, and the aggregate of their operation and co-operation is what we think is thinking. Really, there's a lot of calcium and ATP which is pumped around. That's the work our cells do as they manage their uncertainty.

The same process occurs at different levels. The thing is fractal in much the same way that Stafford Beer described his Viable System Model. But we know a lot more about cells now than Beer did.

But what is the practical utility of a cellular view of consciousness?

Understanding that cells are managing uncertainty is only the beginning. More important is to realise that organisms and their cells have developed ("evolved") by absorbing parts of their environment as they have managed their uncertainty over history. This absorption of the environment helps in the process of managing environmental uncertainty: uncertainty can only be managed if we understand the environment we are in. Importantly, though, each stage of adaptation entails a new level of accommodation with the environment: we move from one stable state to the next "higher" level. You might imagine a "table" of an increasingly sophisticated "alphabet" of cellular characteristics and capacities to survive in increasingly complex environments.

The cellular activity of "thinking", like all processes of adaptation, occurs in response to changes in the environment. It may be that an environment once conducive to higher-level "thought" becomes constrained in a way that forces cells back to a previous, simpler state of organisation in order to remain viable. It's a kind of regression. The kind that we see with intelligent people at the moment, paralysed by Brexit. In history, it is the thing that made good people do bad things in evil regimes. We become more primitive. Put a group of adults in a school classroom, and they will start to behave like children....!

Understanding this is important because we need to know how to go the other way - how to produce the conditions for increasing sophistication and richer adaptiveness. That is education's job. It is also the politician's job. But if we have a mistaken idea about consciousness, we are likely to believe that the way to increase adaptiveness is to do things which actually constrain it. This is austerity, and from there we descend back into the swamp.

Saturday, 16 March 2019

Depth in Thought: Cosmological perspectives

Jenny Mackness is writing some great blog posts on Iain McGilchrist at the moment. Her post today is on the dynamic relationship between what composer Pauline Oliveros called "attention" and "awareness", and McGilchrist's take on this. As Jenny points out, this is not an idea unique to McGilchrist, and others - particularly Marion Milner, who she mentions - have had a similar insight. Her previous post was on "depth", and this is what I want to focus on.

McGilchrist's argument is based on a kind of updated bicamerality - not the rather crude distinctions about the "rational" left and "artistic" right, but a more sophisticated articulation of the way that attention and awareness work together. More importantly, he has pursued the social implications of his theory, suggesting that as a society we have created an environment within which attention is rewarded - particularly in the form of technology - and awareness and contemplation are confined to the shadows. There's a great RSA Animate video of his ideas.

There's much I agree with here. But something unnerves me in a similar way to previous theories of bicamerality like that of Julian Jaynes. Behind them all is the assumption that human consciousness is exceptional.

The problem is that "human exceptionalism" as biologist John Torday calls it, is a pretty devastating thing for the environment of everything - not just us. We think we're so great, so we have the arrogance to believe we know how to "fix" our problems. So we try to fix our problems - to treat our human problems as if they were technical problems (McGilchrist might say, to render the world in terms of the left hemisphere). And it doesn't work. It makes things worse. As an educational technologist, I see this every day. And I think if there is a "turn" in educational technology, it is that we once believed we could fix our problems with technology. Now we see that we've just made everything more complicated.

What if consciousness is not exceptional? We would first have to decide where it came from. Brains? Can we rule out consciousness in bacteria or plants? Eventually, we arrive at the cell. Brains are made from cells. In fact, recent research (in which Antonio Damasio, among others, has been heavily involved) unpicking neural communication mechanisms has discovered that non-synaptic communication exists alongside communication along what we have always imagined to be a dendritic "neural network".

Cells talk to each other all throughout nature. The way they talk concerns a process characterised as transduction: the balancing of messages and protein expression by DNA inside the cell with the reception of other proteins on the surface of the cell in its environment. I find this fascinating because these transduction processes look remarkably like the psychodynamics of Freud and Jung. Is there a connection? Does our thinking go to the heart of our cells? (Or the cells of our heart?)

But there's more to this. One of the great mysteries of the cell is how it came to be as it is. Lynn Margulis's endosymbiotic theory suggests that all those mitochondria were once independent elements in the environment. Somehow an earlier version of the cell "decided" that it could organise itself better if it included those mitochondria within its own structure. At an evolutionary level, cooperation took the place of competition. As a basic principle, Torday argues that cells have always organised themselves according to the ambiguity of their environment. Consciousness is an emergent phenomenon arising from this process.

Each evolutionary stage moves from one state of homeostasis with the environment to another. Somehow, evolutionists tell us, we were once fish. Something happened to the swim bladder of the fish that turned it into the breathing organ we have in our chests. There must have been some kind of crisis which stimulated a fundamental change to cellular organisation.... and it stuck. Our conscious cells contain a myriad of vestigial fossils, of which the oldest is probably the cholesterol which allows my fingers to do this typing, and allows all of us to move about. In each of us is not only an operational mechanism which responds to immediate changes in its environment to maintain stability. In each cell is a history book, containing in a microcosm the millions of stages of endosymbiotic adaptation which took us to this point, and which we see in the physical and geological evidence around us. We really are stardust.

This isn't something that biologists alone are talking about. It coincides with physics. David Bohm talked about the difference between the surface, manifest features of the world as the "explicate order", and the deep coherent structure of the universe as the "implicate order". This implicate order, Bohm imagined, was a kind of hologram - or rather a "holo-movement" (because it is not fixed), which acts as the root of everything. As a hologram, it has a fractal structure (holograms are a fractal encoding of light interference patterns of 3D images). This means that within each cell is a copy of a self-similar pattern of the cosmos, formed through the evolutionary history book that they contain. Each evolutionary stage of the cell, and each organisational configuration it forms (like the bicameral brain, bodies, fingers), is an expression of what the physicists call "broken symmetries" of its initial organisation. Our manifest consciousness - the ideas we share (like this one) - is such a manifestation of our cellular broken symmetries.

When we think deeply, we think WE are doing the work. But the work is done by our cells (particularly the calcium pumps). They think deeply. Their behaviour is an attempt to bring coherence to their environment, and the ultimate coherence is to return to their origin and to get closer to the implicate order. Deep thought is time-travel. This is why, I think, a philosopher like John Duns Scotus in the 13th century could have anticipated the logic of quantum mechanics. In our current society, deep thought is not impossible, but the institutional structures we established to help it arise (the universities) have largely been vandalised.

I share many of McGilchrist's concerns about the modern mind. But we need to look deeper than the brain. And we need to look deeper than us. I once asked Ernst von Glasersfeld, whose theory of Radical Constructivism has been very influential in education, about where the desire to learn came from. It was all very well, I suggested, to say what we thought the learning process was. But we never say why it is we want to learn in the first place. He didn't have an answer. Now I can tentatively suggest an answer. We don't want to learn. But our cells, and we who are constituted by them, need to organise themselves in relation to an environment so that it is coherent. Our drive to learn is the cell's search for the implicate order at its origin. All we need to do is listen - but in today's world, that is getting hard.

Saturday, 9 March 2019

Implication-Realisation and the Entropic Structure of Everything

The basic structure of any sound is that it starts from nothing, becomes something, and then fades to nothing again. In terms of the flow of time, this is a process of an increase in entropy as the features of the note appear, a process of subtle variation around a stable point (the sustain of a note, vibrato, dynamics, etc) where entropy will decrease (because there is less variation than when the note first appeared), and finally an increase in entropy again when the note is released.

A single note is rarely enough. It must be followed or accompanied by others. There is something in the process of the growth of a piece of music which entails an increase in the "alphabet" in the music. So we start with a single sound, and add new sounds, which add richness to the music. What determines the need for an increase in the alphabet of the sound?

In the Implication-Realisation theory of music of Eugene Narmour, there is a basic idea that if there is an A, there must be an A* which negates and complements it. What it doesn't say is that if the A* does not exactly match the A, then there is a need to create new dimensions. So we have A, B, A*, B*, AB and AB*. That is no longer as simple as a single note - for the completion of this alphabet, we not only require the increase and decrease of entropy in a single variable, but in another variable too, alongside an increase and decrease in entropy of the composite relations of AB and AB*. The graph below shows the entropy of intervals in Bach's 3-part invention no. 9:

What happens when that alphabet is near-complete, but potentially not fully complete? We need a new dimension, C. So then we require A, A*, B, B*, AB, AB*, C, C*, AC, AC*, BC, BC*, ABC, ABC*. That requires a more complex set of increases and decreases of entropy to satisfy.
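As a sketch (this is my enumeration, not Narmour's own formalism), the growth of this composite alphabet can be generated by treating each symbol as a nonempty combination of basic dimensions, paired with its starred negation:

```python
from itertools import combinations

def composite_alphabet(dimensions):
    """Enumerate every nonempty combination of basic dimensions,
    each paired with its starred complement (A and A*, AB and AB*, ...)."""
    symbols = []
    for r in range(1, len(dimensions) + 1):
        for combo in combinations(dimensions, r):
            name = "".join(combo)
            symbols.extend([name, name + "*"])
    return symbols

print(composite_alphabet(["A", "B"]))
# With dimensions A and B: A, A*, B, B*, AB, AB* - six symbols
print(composite_alphabet(["A", "B", "C"]))
# Adding C grows the alphabet to the fourteen symbols listed above
```

The combinatorial explosion is the point: each new dimension more than doubles the alphabet that the music's entropy dynamics must eventually satisfy.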

The relational values AB, AB*, AC, AC*, ABC, ABC* are particularly interesting because one way in which the entropy can increase for all of these at once is for the music to fall to silence. At that moment, all variables change at the same time. So music breathes in order to fulfil the logic of an increasing alphabet. In the end, everything falls into silence.

The actual empirical values for A, B and C might be very simple (rhythm, melody, harmony) etc. But equally, the most important feature of music is that new ideas emerge as composite features of basic variables - melodies, motivic patterns, and so on. So while at an early stage of the alphabet's emergence we might discern the entropy of notes, or intervals or rhythms, at a later stage, we might look for the repetition of patterns of intervals or rhythms.

It is fairly easy to first look for the entropy of a single interval, and then to look for the entropy of a pair of intervals, and so on. This is very similar to text analysis techniques which look for bigrams and trigrams in a text (sequences of contiguous words).
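To make this concrete, here is a minimal sketch (using an invented toy melody rather than the Bach invention) of how the entropy of single intervals and of interval bigrams might be measured:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A toy melody as MIDI pitch numbers; a real analysis would parse a score.
pitches = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
intervals = [b - a for a, b in zip(pitches, pitches[1:])]

# First-order: entropy of single intervals
h1 = shannon_entropy(intervals)

# Second-order: entropy of contiguous interval pairs (bigrams),
# exactly as in text analysis
bigrams = list(zip(intervals, intervals[1:]))
h2 = shannon_entropy(bigrams)

print(f"interval entropy: {h1:.3f} bits, bigram entropy: {h2:.3f} bits")
```

The same function applied at different granularities - notes, intervals, pairs of intervals - gives the layered entropy measurements the argument above depends on.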

However, music's harmonic dimension presents something different. One of the interesting features of its harmony is that the frequency spectrum itself has an entropy, and that across the flow of time, while there may be much melodic activity, the overtones may display more coherence across the piece. So, once again, there is another variable...
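That spectral variable can also be sketched numerically. Assuming numpy is available, the entropy of the normalised power spectrum is low for a pure tone and higher for a tone with many overtones:

```python
import numpy as np

def spectral_entropy(signal):
    """Entropy (bits) of the normalised power spectrum: low when energy
    sits in a few overtones, higher when it is spread across the spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]                          # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

sr = 8000
t = np.arange(sr) / sr                    # one second of samples
pure = np.sin(2 * np.pi * 440 * t)        # a single partial
rich = sum(np.sin(2 * np.pi * 440 * k * t) / k for k in range(1, 6))

print(spectral_entropy(pure), spectral_entropy(rich))
# The five-partial tone has the higher spectral entropy
```

Tracking this value across the duration of a piece would give the harmonic "coherence" variable alongside the melodic ones.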

Tuesday, 26 February 2019

Dialectic and Timelessness

One of the great arguments in physics concerns the nature of time: is it real? Or is it a fiction which we construct? Physicists like Lee Smolin argue that time is not only real, but it's the foundation of every other physical process. Leonard Susskind upholds what he calls the "anthropic principle" - we make this stuff (time) up. Smolin's objection to this is that it is unfalsifiable.

I want to approach this from a different angle. Physics underpins biology in some way, and our biology appears to be the basis of our consciousness. Consciousness in turn is responsible for social goods and ills in the world, and these social goods and ills seem to be produced over time. Moreover consciousness gives us our ideas of physics and biology, and it allows us to create our institutions of science wherein those ideas are manufactured. To some extent, these ideas imprison us.

Our lives appear as an ecological ebb-and-flow of perceptions and events which from a broader vantage point look like what philosophers call "dialectic". For Marx and Hegel, dialectic is one of the fundamental constituents of reality - although Hegel's dialectic is an "ideal" one - it is ideas which oppose one another synthesising new ideas - whereas Marx's dialectic has to do with the fundamental material constitution of reality, which underpins social structures. The Marxist underpinnings are scientific - it is physics at the root. However, if time is not real, what happens to dialectic?

The intellectual challenge is this: imagine a timeless world where there is no past or future, but a whole and total "implicate order" from which we construct our "now" and our "then". In our constructing of a "now" and "then", we give ourselves the impression of a dialectical process, but actually this is an illusion which causes us to mistake the nature of reality, and in the process, leads to social ill.

So how might we re-conceive reality in a way that we don't impose an idealised dialectical process, but rather attempt to grasp the whole of time in one structure?

One of the problems is the hold that evolutionary theory has on us - and evolutionary theory was also influential on Marx. What if all the stages of evolution co-exist at any instant? It's not so difficult to imagine that the "you" that is now includes the "you" that was a child. But it's more challenging to think that the cells that make up "you" each include the cells that existed in the primordial soup of the beginning of life. If we accept this for a minute, then some interesting things emerge. For example, we might think of a dialectical process being involved in being struck by a bacterial infection, and fighting it off: in Hegelian language, thesis - healthy cells; antithesis - cells under bacterial attack; synthesis - healing, production of antibodies. This is time-based. But what if it is seen as a step-wise movement through a "table" of co-existing biological states?

Let's take our healthy cell as a stage in a "table" of evolutionary states. When the cell is attacked by bacteria, its physical constitution changes. In fact, it seems to regress to a previous stable evolutionary state (you might imagine the cell "moving" towards the left of a table). The healing process finds a path from this regressive state back to its original state - moving back towards the right. This is "dialectic" as a process of movement from one stable state to another - rather in the way that electrons shift from one energy band to another. John Torday remarked to me the other day that the cells of the emphysema lung become more like the cells of the lung of a frog. Disease is evolution in reverse.

So what about social disease? What about oppression or exploitation? If the free and enlightened human being exists on a table of possible states of being human (probably on the right hand side), and the slave exists on the left, how does this help us think about a dialectic of emancipation? Like the cell under attack, what pushes the cell to take an evolutionary step backwards is a threat in its environment (bacteria). What matters in both cases is the relationship to the environment:  the relationship between cells, and the relationship between people. In examining people at different stages of freedom, we are seeing different sets of relations with others. The pathology lies in the master-slave relation, not in the slave; health resides in the convivial openness of the enlightened person with all around them, not in the person themselves.

Marx's principal insight lay in the recognition that emancipation from slavery could arise from the organisation of the oppressed: workers of the world unite! The organisation of the oppressed might be seen as the creation of the conditions for growth from a basic state of evolution (slave) to a more advanced state. It is similar to the healing process of a wound. Marx's dialectic becomes a coordination between people where the collective management of the environment outweighs the pathological effects of that environment on any one individual. Each stage of development towards emancipation is a "stable state" which can be attained progressively with the production of the right conditions. Equally, evolution in reverse can be produced with the creation of negative conditions - for example, austerity.

Dialectic is not a temporal process: it is not a matter of "now" and "then". It is a process of structural alignment in a structure which simultaneously contains all possible states of increasingly sophisticated "wholes". Time is implicit in this structure. The better we understand the structure and how it affects the way we think, feel and act, the better our chances of survival in the future. 

Sunday, 24 February 2019

Los Angeles's Ethic with a Busted Gut

I'm in LA at the moment - part holiday, part meeting with academic friends. I have been to LA before, but never to stay for a longer period. What a strange place! Beautiful weather, homelessness, gated communities, sandy beaches (Santa Monica is like a lovely upper-class Blackpool!), crazy traffic, overt racial profiling by the police... it feels ready to snap. Like the world more generally.

I had some very bad news from work before I got here, and have struggled to sleep as a result. But it's only work. Being in LA has brought home to me the extent of a state of crisis in the world - education has become part of the problem. This is the richest country in the world - where entire families with young children march the streets carrying their entire belongings in a shopping trolley. It's normal - nobody seems to notice. What waste.

Then the police handling of black homelessness beggars belief. It's as if they are on commission to make arrests (are they?). I saw one guy wrestled to the ground in Pershing Square and then arrested. Nobody did anything. It's normal. Christ, this is not normal! Someone said that it's got worse under Trump. I'm sure it has, but I suspect it was always a bit like this. Cruel place.

How can a country with such contradictions produce Gershwin, Google and some of the kindest and cleverest people I know? I don't get it.

Except that in Stafford Beer's Platform for Change, there is a chapter called "Ethic with a Busted Gut". Beer knew America. He pointed out, after being warned that Washington DC was so dangerous in the 70s that he shouldn't venture outside his hotel at night, that this situation would only apply to a garrison town. That's American cities, he argued - they are garrisons: the poor and dispossessed are kept out with gates. Walls with bars in them work. But who's behind the bars? It depends which way you look.

The Ethic with a busted Gut is a protestation that something is "right" when deep down we know it to be "wrong". It's an ethic, but it makes us feel a bit sick. One of the friends I met here was biologist John Torday at UCLA who I think has an analogous way of describing this "ethic with a busted gut" - he calls it "deception", and argues that this is a fundamental mechanism of communication from cells upwards. It's a way of coping with uncertainty.

In humans, the busted gut ethic can make us behave in a cruel way, holding on to deceived reason and "logic" to defend our inhumanity. Every act of "selling" is like this. It is this that kills Arthur Miller's Salesman. "Hey, we can fix this!" - when you know you can't, or even that your "fix" will make the problem worse. Mirrors in space to fix climate change? You got it!

If you can overcome your busted gut, you can sell anything. And you can be very successful. Enter Facebook and Amazon. People seem to prefer their guts busted than their hearts whole. Deep down, the problem is an allergy to uncertainty or ambiguity.

The music reflects the wailing traffic of the place.

Friday, 15 February 2019

Becoming 50 and being grateful

I became 50 on Wednesday. I can't say I was particularly looking forward to it. But when the day came, there were many surprises which made me realise something about the importance of our interconnectedness. I am grateful for many things, but to have a loving family and so many friends from all over the world is something which makes me very happy.

I'd decided not to do anything 'big'. A quiet family birthday. Astrid had prepared a beautiful birthday table for me, and some lovely presents (including a new dressing gown because my old one smelt like a dead hamster). My brother sent me a very meaningful model Chitty Chitty Bang Bang car, which took me back to when I was about 8, obsessed with the film, and determined to turn our go-cart into the car with a spanner and a tiny hack-saw. That was a long time ago. Before the internet.

Of course now the internet allows us to do wonderful meaningful things like this card I had from my sister, who collected old photos and sent them to moonpig on a giant card!

The internet. What a wonderful, terrible, pernicious thing that is. It's the defining invention of our age and my generation saw the transition before the personal computer and after. What is striking is that it can be a medium for profound acts of kindness and warmth.

2018 was a year of educational innovation in Russia. So messages from my Russian colleagues were lovely. Mostly these were rather Russian exhortations to "more success! more innovation!". I joked with one friend who sent a message like this that it sounded like a Chinese curse: "May you live in interesting times too!" I replied (I knew she'd get the joke!). But one gesture blew me away. It was a photo:

I was gobsmacked. An image has such an impact when we know what it means - how much work and thought and care went into it - just like my sister's card too. And it's not just the care and trouble of making a cake or assembling photos. But the knowledge of the impact that it would have on me when I saw the photo. Amazing. "Get on a plane!" they said. I said "My heart is in Vladivostok, but my stomach is on the train to Liverpool".

It's striking that my daughter, a child of the internet, chose a much less technological form (but equally thoughtful and creative) to wish me a happy birthday:

And in Liverpool, a nice surprise greeted me in the afternoon. Lovely cake and warm wishes from colleagues - many of whom are a good deal younger than me. They're the generation (like my daughter) who face many challenges in a world which has been up-ended by the internet - but they all remain positive.

So it was lovely. Onwards. California on Tuesday - a long chat about biology in UCLA and a talk about education.

Our time feels very pregnant - there are moments when everything feels so pent-up and ready to pop. Universities, politics, the environment are all in deep trouble, and that matters to me (particularly the universities because they ought to be leading us towards a new civilisation, not ramping-up the pathologies of the old one). This is a very different world to that of the 1970s when I was trying to make Chitty Chitty Bang Bang. The computer and the internet changed everything - and we are about to see just how much.

Would it be a surprise to see it all go pop at once? The ice-caps, the institutions, politics, capitalism...  Just in the way I was dreading my birthday, but then it turned into something lovely, I'm anxious about the future, but I know that it will bring new things which will probably be better. And what will almost certainly be better is that I don't think I am the only 50-year old who is now thinking about making a better world for after I am not here - and that involves breaking with the status quo.

Tuesday, 12 February 2019

Letting the bad guys take over: what it means for the future of the university

Alexandria Ocasio-Cortez's brilliant performance at the congressional committee, where she invited her fellow politicians with "let's play a game", was a simple (and rare) political pedagogical intervention which lifted the lid on the dynamics of power. I'm sure this speech will be analysed by politics students for many years to come.

It's not just the dynamics of power that puts a president above the law though. It's the dynamics of power which puts the likes of Philip Green, Mike Ashley, Harvey Weinstein, etc, in power. So many big institutions and corporations have unpleasant characters at the top who are out for themselves.

The acid test to spot these people is to consider if they care about their business, corporation or institution's future after they retire. It is whether they act in the interests of a viable future for the institution for the next generation. But the mentality that put them in charge is often a selfish one. It's become socially acceptable to say "Why should I care about that? It's not my problem". Yet for the institution itself, it is a perilous position. How did these people get appointed in the first place?

I'm tempted to play a similar "let's play a game" with those at the top of our universities. A number of them are losing their jobs at the moment, and a number of institutions are in serious trouble. There has been a blind dash for cash in monetising education in what is presented as a global market (but is something else I suspect). Universities have raised small fortunes by issuing bonds in themselves with the narrative that "We've been here for 900 years. We're not going away. We are a secure investment". Ironically the narrative of security has created the conditions for the employment of people at the top of institutions who have become the greatest threat to their long-term survival.

These are people who believe that universities are so secure, there's nothing anyone can do to destroy them. So sell bonds, spend huge amounts on building overpriced student accommodation, push up fees, reward senior managers with huge salaries... it doesn't matter. The universities will be here for ever. Nothing can go wrong.

As we now know from Reading, Cardiff and de Montfort, things are going wrong. But this is nothing compared to what's going to happen in the next 20 years or so.

Today's students are tomorrow's parents. Most of them will be poorer than their parents. Many of them will struggle to buy a house, and their employment will be seriously threatened by technology. Some of them will be still paying off their student loans when their own kids are 18.

The problem is the inter-generational narrative about universities. And this will co-exist with technological options for higher learning which we haven't conceived of yet, but which will offer increasingly rich opportunities for higher learning and self-development that have far greater flexibility than the rigid institutional offering of conventional institutions.

That this is going to happen is obvious. But few at the head of the sector want to think about it. It is, after all, going to happen after they retire. "It's not my problem".

I think this thinking at the top of institutions is new. 30 years ago, people at the head of universities saw themselves as custodians, whose job it was to care for and hand over the institution to the next generation. They would have worried about this, and they would have taken action in their own present time to head-off future threats.

As universities are faced with so many concerns in the here-and-now, and these appear to be getting more and more complex, the capacity for thinking ahead is disappearing. Yet if we don't think ahead and prepare for the most substantial threat of the "inter-generational narrative", universities are simply done for.

The question to think about then is whether the demise of the university is a problem. If technology takes over, isn't that ok? I'm not sure about this. Somehow we need to preserve what's best in the institution: the maintenance of a discourse which connects the past to the future, the library, the archives, and a space for scientific inquiry. Can technology do this? Perhaps, but it needs planning for.

This is what should be happening now. That it largely isn't should concern us all. 

Tuesday, 5 February 2019

Learner Individuation and Work-based education

One trend in universities which is set to continue is the integration of the work-place into degree-level learning. Among the multiple drivers for this are:

  • the costs of education mean that "earn while you learn" becomes attractive;
  • employability is helped by employment-related courses;
  • employability does not always follow a traditional degree course;
  • continuing professional development is becoming a requirement within many professions;
  • financial incentives by government are encouraging universities that might not have considered apprenticeship-style courses to adopt them.
However, when learners are mostly located in the workplace, the coordination of learning conversations between them becomes an organisational challenge. With co-location of learners in a lecture hall, intersubjective engagement can be more easily coordinated than it can be remotely: it's the "seeing the whites of their eyes" stuff that teachers rely on, either to organise group activities or to see if students are understanding what is going on. Many work-based courses get around this by having days on campus.

But what about when they are not on campus? What are the learning conversations? Where are the activities? This question is about the balance of organisational effort between that which must be done by the learner themselves, and that which can be coordinated by the teacher.

The intersubjective context of the learner in the workplace is their immediate working environment. However, this environment is not always structured in the way that a teacher might structure it to inculcate learning conversations. If the workplace experience is to be one of personal development, then often the onus is on the learner to self-organise.

Universities provide simple tools to coordinate their operations of assessment and accreditation. The most basic of these is the e-portfolio. For many work-based competency-based courses, this amounts to claims about professional competencies being made (often by ticking a box or writing a commentary), and for these claims to be verified by an assessor. This data then feeds into the university's accreditation process. Naturally enough, students will seek the ticking-off of competencies as the means to achieve their certificate. But this can be a shallow and strategic exercise. 

The tools for self-organisation of learning remain crude - an e-portfolio system does little more than provide a form to be completed. Yet the literature on self-organised learning presents much richer models. Sebastian Fiedler and I have been talking in some depth recently about Sheila Harri-Augstein and Laurie Thomas's work on Learning Conversations from the early 90s. Harri-Augstein and Thomas combined Pask's conversation theory with George Kelly's Repertory Grid analysis to create a framework for self-organised learning where students could analyse and track the emergence of their concepts as they experienced different episodes in their professional development. Harri-Augstein and Thomas used largely paper-based tools. We should revisit their work as a means of rethinking the tools for self-organisation in the workplace.

One of the most interesting aspects of the Learning Conversations work is that it explicitly treats learners as non-ergodic systems: that is, systems whose categories are both emergent and individual. Our e-portfolio systems see learning as basically ergodic - there is a fixed "alphabet" of categories or competencies determined by an expert committee. But no living system is ergodic! The Learning Conversations model sees (and explicitly tracks) categories of understanding in reflexive processes along an x-axis, whilst recording the development of these categories from one experience to the next (the y-axis). Thomas and Harri-Augstein argued that this enabled learners to organise their categories of understanding, share them with others, and gradually develop a more sophisticated view of themselves in their environment.
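As an illustration of the difference - and emphatically not Harri-Augstein and Thomas's actual instrument (the episode names and ratings here are invented) - a non-ergodic grid can be sketched as a table whose column alphabet grows as new personal constructs emerge:

```python
class LearningConversationGrid:
    """A toy repertory-grid-style record: rows are experiences (episodes),
    columns are the learner's own emergent categories. The alphabet of
    categories is not fixed in advance - it grows as new constructs appear."""

    def __init__(self):
        self.categories = []      # x-axis: emergent personal constructs
        self.episodes = []        # y-axis: one rating dict per experience

    def record(self, episode_name, ratings):
        """ratings: {category: rating}; unseen categories extend the grid."""
        for cat in ratings:
            if cat not in self.categories:
                self.categories.append(cat)      # the alphabet grows
        self.episodes.append((episode_name, dict(ratings)))

    def as_rows(self):
        """Flatten to a table; earlier episodes show gaps (None) for
        categories that had not yet emerged - the non-ergodic signature."""
        return [
            [name] + [ratings.get(cat) for cat in self.categories]
            for name, ratings in self.episodes
        ]

grid = LearningConversationGrid()
grid.record("ward placement", {"confidence": 2, "listening": 4})
grid.record("clinic week", {"confidence": 3, "listening": 4, "delegation": 1})
print(grid.categories)   # ['confidence', 'listening', 'delegation']
```

Contrast this with a competency checklist, where the columns are fixed by a committee before the learner ever arrives.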

This is higher learning: it is a process of individuation within complex social and technical environments. It makes me think that the barriers to having a better education system are not snobbishness about work-based learning, but the tools we use. 

Floridi and Williamson on AI; Bohm and Simondon on Thought

There's a very interesting interview here with Luciano Floridi about AI. Floridi's seminal work on the nature of information, although rather too cognitivist for my liking, is something that one cannot think without today. His contribution to the ethics of information, which sees information ethics as a variety of environmental ethics, is highly important. Floridi has been talking a lot about AI, and in the interview he proposes that the most interesting aspect of AI is that it is used as a mirror of nature: that we come to know ourselves through the technology. I agree. I would go further to say that the essence of information lies in intersubjective engagement (and by extension, consciousness), not in some abstract "stuff" that exists between us. The power of "information" as a topic is that it gives us all something to talk about, where everyone is uncertain about what it is they are trying to grapple with. It's all rather scholastic - and I quite like that.

AI is, I think, also like this. It is a shared disruption to our ways of thinking which gives us all something to talk about. When we see AI as a "tool", we get it wrong. That our institutions see AI as a "tool" says more about our institutions, with their rigid hierarchies, than it does about the technologies of AI.

Ben Williamson's post articulates some of the institutional problems with AI. Here the institution in question is the OECD, but really it could be anyone. They are all struggling to maintain their position and status in a world which is being turned upside down by technology. And it is interesting that "education" becomes the sticking point - the point on which these large hierarchies focus in order to say "this is what we have to do". As if they know! As if anyone knows! As if the cult of expertise has escaped the massive explosion of options that technology has given us. As if expertise itself isn't under threat from technology. Which it is.

I am having a personal reminder of this, because last week I self-published my book "Uncertain Education". Since I've been writing it, or thinking about it, for nearly 8 years, it was time to go public with a document that bore the scars of its gestation. I self-published with a combination of Overleaf for typesetting (in LaTeX) and Blurb for printing and distribution. Both work very well, and the printed result is indistinguishable from a normal printed book (even printed at Lightning Source, which also prints "ordinary" books for Amazon). The print-run thing is over. And with it, the artificial scarcity of the "final document". Everything can be changed very easily in an agile way.

So what of the expertise of the editor, the typesetter, the reviewer, etc? The cloud takes over. The expertise becomes distributed. Many eyes looking at this thing, alongside my own eyes which see a thing now in the environment which once only existed within my own private world, are a powerful driver for making small incremental improvements. What matters are the ideas, and they tend to survive awkward moments.

The cult of the expert is one of the reasons that education maintains its structures and practices and its hierarchies. It is because marking is seen to require an expert's judgement that we have double-marking, exam boards, quality procedures, and so on. Individual teachers are not trusted, so there has to be a cumbersome mechanism to keep everything in check to ensure that the stamp of quality can be granted. So what if we do Adaptive Comparative Judgement on a cloud-scale for marking student work? What if we create distributed databases of judgements from peers and teachers all over the world about the quality of work? This is what technology affords. It's not AI as such. And yet, its fundamental mechanism is the essence of what Warren McCulloch realised his neural networks were: a heterarchy.
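A minimal sketch of the comparative-judgement idea (the essay names and judgements below are invented, and this uses a plain Bradley-Terry fit rather than any particular ACJ system): judges only ever say which of two pieces of work is better, and a ranking is inferred from the accumulated pairs:

```python
def bradley_terry(items, judgements, iterations=200):
    """Fit Bradley-Terry strengths from pairwise judgements using the
    classic Zermelo/MM iteration. judgements: (winner, loser) pairs."""
    wins = {i: 0 for i in items}
    pair_counts = {}
    for winner, loser in judgements:
        wins[winner] += 1
        key = frozenset((winner, loser))
        pair_counts[key] = pair_counts.get(key, 0) + 1
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            # Sum, over every opponent j, of comparisons / (s_i + s_j)
            denom = sum(
                n / (strength[i] + strength[j])
                for key, n in pair_counts.items() if i in key
                for j in key if j != i
            )
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {i: len(items) * s / total for i, s in new.items()}
    return strength

essays = ["essay_A", "essay_B", "essay_C"]
judgements = [("essay_A", "essay_B"), ("essay_A", "essay_C"),
              ("essay_B", "essay_C"), ("essay_A", "essay_B")]
scores = bradley_terry(essays, judgements)
print(sorted(scores, key=scores.get, reverse=True))
# essay_A, with the most wins, comes out on top
```

No single expert assigns a mark; the ranking emerges heterarchically from many small, distributed judgements - which is exactly the mechanism the paragraph above gestures at.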

It is the cult of the expert which drives the OECD to make proclamations of scarcity about education. They declare knowledge to be scarce, and so maintain the fiction of the "knowledge economy" - what do they mean by "knowledge"? What do they mean by "economy"? They declare "coding skills" to be scarce - really? They declare the "right metrics" to be scarce, without any consideration as to what a correct data analysis might be. Worst of all, they essentially declare "being human" to be scarce. What nonsense! Why do they do this? Because they want to keep themselves in business.

Take the expert out of all of this and the system reorganises itself naturally and heterarchically. There is no scarcity of knowledge. There is no scarcity of metrics, because every metric is merely an additional description of reality, not a commandment. There is no knowledge economy because what matters is not what is known, but the uncertainty that accompanies it. Coding itself is merely a technique for amplifying artificial descriptions of the world and creating objects and new options to act. It is not scarce either.

What are we left with? It's very similar to my process of publishing and gradually improving my book. It is moving away from the objects of knowledge - final statements, artefacts, etc - and moving towards expressing thought as a process. There's a lot of stuff in my book on David Bohm's ideas about dialogue. How right I think he was. Dialogue is about inspecting thought as process, because all the stuff around us is produced by thought. Organisations like the OECD (and our universities for that matter) have become pathological because they do not see themselves as the product of thought. But they are.

If Bohm is right, then so too is Gilbert Simondon. Thought is transduction - the process of making and maintaining categories. The objects that we have are the result of transductions being configured in a particular way. If we want a better world, we need to change our transduction processes. Simondon's genius is to see that the highest levels of human development are tied up with the realisation of the capacity to control the transductions which make us "us". Particularly, it is the capacity to make us "us" - the capacity for individuation - within a technological environment, which is at the heart of the educational and technological challenge of our time.

Monday, 21 January 2019

Artificial Intelligence in a Better World

There's an interesting article in the Guardian this week about the growth of AI and the surveillance society, drawn from Shoshana Zuboff's "The Age of Surveillance Capitalism".

Before reading it, I suggest first inspecting the hyperlink. It points to the Guardian's site, but the path it requests is "shoshana-zuboff-age-of-surveillance-capitalism-google-facebook?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8" - and that fbclid parameter on the end encodes where the link came from and an identifier tied to my account. This information goes to the Guardian, who then exploit the data. Oh, the irony!!
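Links like this can be checked and cleaned programmatically before following them. Here is a minimal sketch using Python's standard library (the URL shown is illustrative, not the actual article link) that strips fbclid and other common tracking parameters:

```python
# Sketch: stripping tracking parameters (fbclid, gclid, utm_*) from a URL
# before requesting it, so the click identifier never reaches the server.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_KEYS = {"fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking(url):
    parts = urlsplit(url)
    # Keep only the query parameters that are not known tracking keys.
    clean_query = [(k, v) for k, v in parse_qsl(parts.query)
                   if k not in TRACKING_KEYS]
    return urlunsplit(parts._replace(query=urlencode(clean_query)))

url = ("https://www.theguardian.com/technology/2019/jan/20/"
       "shoshana-zuboff-age-of-surveillance-capitalism?fbclid=IwAR0Nmp3")
print(strip_tracking(url))
```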

But I don't want to distract from the contents of the article. Surveillance is clearly happening, and "platform capitalism" (Google and Facebook are platforms) is clearly a thing (see Nick Srnicek's book "Platform Capitalism", or the Cambridge Platform Capitalism reading group). But the tendency to reach the conclusion that technology is a bad thing should be avoided. The problem lies with the relationship between institutions, which are organised as hierarchies, and the mounting uncertainties of a world in which technology has given us an abundance of options.

In writing this blog, I am exploiting one of the options that technology has provided. I could instead have published a paper, written to the Guardian, sent it to one of the self-publishers, made a video about it, or simply expressed my theory in discussion with friends. I could have used Facebook, Twitter, or I could have chosen a different blogging platform. In fact, the choice is overwhelming. This amount of choice is what technology has done: it has given us an unimaginably large number of options for doing things that we could do before, or in other ways. How do I choose? That's uncertainty.

For me as a person, it's perhaps not so bad. I can resort to my habits as a way of managing my uncertainty, which often means ignoring some of the other available options that technology provides (I really should get my blog off blogger, for example, but that's a big job). But the sheer number of options that each of us now has is a real problem for institutions.

This is because the old ways of doing things like learning, printing, travelling, broadcasting, banking, performing, discussing, shopping or marketing all revolved around institutions. But suddenly (and it has been sudden) individuals can do these things in new ways in addition to those old-fashioned institutions. So institutions have had to change quickly to maintain their existing structures. Some, like shops and travel agents, are in real trouble - they were too slow to change. Why? Because their hierarchical structures meant that staff on the shop floor who could see what was happening and what needed to be done were not heard at the top soon enough, and the hierarchy was unable to effect radical change because its instruments of control were too rigid.

But not all hierarchies have died. Universities, governments, publishers and broadcasters survive well enough. This is not because they've adapted. They haven't really (have universities really changed their structures?). But the things that they do - pass laws, grant degrees, publish academic journals - rest on declarations they make about the worth of what they do (and the lack of worth of what is not done through them), which get upheld by other sections of society. So a university declares that only a degree certificate is proof that a person is able to do something, or should be admitted to a profession. These institutions have upheld their powers to declare scarcity. As more options have become available in society to do the things that institutions do, so the institutions have made ever stronger claims that their way is the only way. Increasingly, institutions have used technology as a way of reinforcing their scarcity declaration (the paywall of journals, the VLE, AI, surveillance). These declarations of scarcity are effectively a means of defending the existing structures of institutions against the increasing onslaught of environmental uncertainty.

So what of AI or surveillance? The two are connected. Machine learning depends on data, and data is provided by users. So users' actions are 'harvested' by AI. However, AI is no different from any other technology: it provides new options for doing things that we could do before. So while the options for doing things increase, uncertainty increases, and feeds a reaction by institutions, including corporations and governments. The solution to the uncertainty caused by AI and surveillance is more AI and surveillance: now in universities, governments (China particularly) and technology corporations.

This is a positive-feedback loop, and as such is inherently unstable. It is even more unstable when we realise that the machine learning isn't that good or intelligent after all. Machine learning, unlike humans, is very bad at being retrained. Retrain a neural network and you risk everything it had previously learnt going to pot - so-called catastrophic forgetting (I'm having direct experience of this at the moment in a project I'm doing). The simple fact is that nobody knows how it works. The real breakthrough in AI will come when we really do understand how it works. When that happens, the ravenous demand for data will become less intense: training can be targeted with manageable and specific datasets. Big data is, I suspect, merely a phase in our understanding of the heterarchy of neural networks.
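Catastrophic forgetting can be shown without any neural network library at all. The toy below (an invented one-parameter model, not the retinopathy project mentioned above) trains a linear model by gradient descent on task A (y = 2x), then retrains it on task B (y = -2x): nothing in plain gradient descent protects what was learnt before, so the error on task A explodes.

```python
# Sketch of catastrophic forgetting with a toy one-parameter model.
def train(w, data, lr=0.1, steps=200):
    """Plain SGD on squared error for a model y_hat = w * x."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def error(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # y = -2x

w = train(0.0, task_a)
err_a_before = error(w, task_a)      # near zero: task A learnt
w = train(w, task_b)                 # retrain on task B...
err_a_after = error(w, task_a)       # ...and task A is forgotten
print(err_a_before, err_a_after)
```

In a real network the same dynamic plays out across millions of weights, which is why retraining on new data without replaying the old is so hazardous.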

The giant surveillance networks in China are feeding an uncertainty dynamic that will eventually implode. Google and Facebook are in the same loop. Amplified uncertainty eventually presents itself as politics.

This analysis is produced by looking at the whole system: people and technology. It is one of the fundamental lessons from cybernetics that whole systems have uncertainty. Any system generates questions which it cannot answer. So a whole system must have something outside it which mops up this uncertainty (cyberneticians call this 'unmanaged variety'). This thing outside is a 'metasystem'. The metasystem and the system work together to maintain the identity of the whole, by managing the uncertainty which is generated. Every whole has a "hole".

The question is where we put the technology. Runaway uncertainty is caused by putting the technology in the metasystem to amplify its mopping-up of uncertainty. AI and surveillance are the H-bombs of metasystemic uncertainty management now. And they simply make the problem worse while initially seeming to do the job. It's very much like the Catholic church's commandeering of printing.

However, the technology might be used to organise society differently so that it can better manage the way it produces uncertainty. This is to use technology to create an environment for the open expression of uncertainty by individuals: the creation of a genuinely convivial society. I'm optimistic that what we learn from our surveillance technology and AI will lead us here... eventually.

Towards a holographic future

The key moment will be when we learn exactly how machine learning works. Neural networks are a bit like fractals or holograms, and this means that the relationship between a change to the network and the reality it represents is highly complex. Which parts of a neural network do we change to produce a determinate change in its behaviour (without unforeseen consequences)? What is fascinating is that consciousness and the universe may well work according to the same principles. The fractal is the image of the future, as the telescope and the microscope were the images of the Enlightenment (according to Bas van Fraassen).

Through the holographic lens the world looks very different. When we understand how machine learning does what it does, and we can properly control it, then each of us will turn our digital machines to ourselves and our social institutions. We will turn it to our own learning and our learning conversations. We will turn it to art and aesthetic and emotional experience. What will we learn? We will learn about coherence and how to take decisions together for the good of the planet. The fractals of machine learning can create the context for conversation where many brains can think as one brain. We will have a different context for science, where scientific inquiry embraces quantum mechanics and its uncertainty. We will have global education, where the uncertainty of every world citizen is valued. And we will have a transformed notion of what it is to 'compute'. Our digital machines will tell us how nature computes in a very different way to silicon.

Right now this seems like fantasy. We have surveillance, nasty governments, crazy policies, inequality, etc. But we are in the middle of a scientific revolution. The last time we had one of those, it brought the Thirty Years War, the English Civil War and Cromwell. We also have astonishing tools which we don't yet fully understand. Our duty is to understand them better, and to create the kind of environment for conversation which the universities once were.