Friday, 15 February 2019

Becoming 50 and being grateful

I turned 50 on Wednesday. I can't say I was particularly looking forward to it. But when the day came, there were many surprises which made me realise something about the importance of our interconnectedness. I am grateful for many things, but to have a loving family and so many friends from all over the world is something which makes me very happy.

I'd decided not to do anything 'big'. A quiet family birthday. Astrid had prepared a beautiful birthday table for me, and some lovely presents (including a new dressing gown because my old one smelt like a dead hamster). My brother sent me a very meaningful model Chitty Chitty Bang Bang car, which took me back to when I was about 8, obsessed with the film, and determined to turn our go-cart into the car with a spanner and a tiny hack-saw. That was a long time ago. Before the internet.

Of course, the internet now allows us to do wonderful, meaningful things like this card I had from my sister, who collected old photos and had them printed by Moonpig on a giant card!


The internet. What a wonderful, terrible, pernicious thing that is. It's the defining invention of our age, and my generation saw the transition from the world before the personal computer to the world after it. What is striking is that it can be a medium for profound acts of kindness and warmth.

2018 was a year of educational innovation in Russia, so messages from my Russian colleagues were lovely. Mostly these were rather Russian exhortations to "more success! more innovation!". I joked with one friend that her message sounded like a Chinese curse: "May you live in interesting times too!" I replied (I knew she'd get the joke!). But one gesture blew me away. It was a photo:

I was gobsmacked. An image has such an impact when we know what it means - how much work and thought and care went into it - just like my sister's card. And it's not just the care and trouble of making a cake or assembling photos, but the knowledge of the impact it would have on me when I saw the photo. Amazing. "Get on a plane!" they said. "My heart is in Vladivostok, but my stomach is on the train to Liverpool," I replied.

It's striking that my daughter, a child of the internet, chose a much less technological form (but equally thoughtful and creative) to wish me a happy birthday:

And in Liverpool, a nice surprise greeted me in the afternoon. Lovely cake and warm wishes from colleagues - many of whom are a good deal younger than me. They're the generation (like my daughter) who face many challenges in a world which has been up-ended by the internet - but they all remain positive.

So it was lovely. Onwards. California on Tuesday - a long chat about biology at UCLA and a talk about education.

Our time feels very pregnant - there are moments when everything feels so pent-up and ready to pop. Universities, politics, the environment are all in deep trouble, and that matters to me (particularly the universities because they ought to be leading us towards a new civilisation, not ramping-up the pathologies of the old one). This is a very different world to that of the 1970s when I was trying to make Chitty Chitty Bang Bang. The computer and the internet changed everything - and we are about to see just how much.

Would it be a surprise to see it all go pop at once? The ice-caps, the institutions, politics, capitalism... Just as I was dreading my birthday and it turned into something lovely, I'm anxious about the future, but I know it will bring new things which will probably be better. And what will almost certainly be better is this: I don't think I am the only 50-year-old now thinking about making a better world for the time after I am gone - and that involves breaking with the status quo.

Tuesday, 12 February 2019

Letting the bad guys take over: what it means for the future of the university

Alexandria Ocasio-Cortez's brilliant performance at the congressional committee where she invited her fellow politicians with "let's play a game" (https://www.theguardian.com/global/video/2019/feb/08/alexandria-ocasio-cortezs-brutal-take-down-of-us-political-finance-laws-video) was a simple (and rare) political pedagogical intervention which lifted the lid on the dynamics of power. I'm sure this speech will be analysed by politics students for many years to come.

But it's not just presidents that the dynamics of power put above the law. The same dynamics put the likes of Philip Green, Mike Ashley, Harvey Weinstein, etc, in power. So many big institutions and corporations have unpleasant characters at the top who are out for themselves.

The acid test for spotting these people is to consider whether they care about their business, corporation or institution's future after they retire - whether they act in the interests of a viable future for the institution for the next generation. But the mentality that put them in charge is often a selfish one. It's become socially acceptable to say "Why should I care about that? It's not my problem". Yet for the institution itself, this is a perilous position. How did these people get appointed in the first place?

I'm tempted to play a similar "let's play a game" with those at the top of our universities. A number of them are losing their jobs at the moment, and a number of institutions are in serious trouble. There has been a blind dash for cash in monetising education in what is presented as a global market (but is something else I suspect). Universities have raised small fortunes by issuing bonds in themselves with the narrative that "We've been here for 900 years. We're not going away. We are a secure investment". Ironically the narrative of security has created the conditions for the employment of people at the top of institutions who have become the greatest threat to their long-term survival.

These are people who believe that universities are so secure, there's nothing anyone can do to destroy them. So sell bonds, spend huge amounts on building overpriced student accommodation, push up fees, reward senior managers with huge salaries... it doesn't matter. The universities will be here for ever. Nothing can go wrong.

As we now know from Reading, Cardiff and De Montfort, things are going wrong. But this is nothing compared to what's going to happen in the next 20 years or so.

Today's students are tomorrow's parents. Most of them will be poorer than their parents. Many of them will struggle to buy a house, and their employment will be seriously threatened by technology. Some of them will be still paying off their student loans when their own kids are 18.

The problem is the inter-generational narrative about universities: parents who feel they paid dearly for too little are unlikely to push their children down the same path. And this narrative will co-exist with technological options for higher learning which we haven't conceived of yet, but which will offer increasingly rich opportunities for higher learning and self-development with far greater flexibility than the rigid offering of conventional institutions.

That this is going to happen is obvious. But few at the head of the sector want to think about it. It is, after all, going to happen after they retire. "It's not my problem".

I think this thinking at the top of institutions is new. 30 years ago, people at the head of universities saw themselves as custodians, whose job it was to care for and hand over the institution to the next generation. They would have worried about this, and they would have taken action in their own present time to head off future threats.

As universities are faced with so many concerns in the here-and-now, and these appear to be getting more and more complex, the capacity for thinking ahead is disappearing. Yet if we don't think ahead and prepare for the most substantial threat of the "inter-generational narrative", universities are simply done for.

The question to think about then is whether the demise of the university is a problem. If technology takes over, isn't that ok? I'm not sure about this. Somehow we need to preserve what's best in the institution: the maintenance of a discourse which connects the past to the future, the library, the archives, and a space for scientific inquiry. Can technology do this? Perhaps, but it needs planning for.

This is what should be happening now. That it largely isn't should concern us all. 

Tuesday, 5 February 2019

Learner Individuation and Work-based education

One trend in universities which is set to continue is the integration of the work-place into degree-level learning. Among the multiple drivers for this are:

  • the costs of education mean that "earn while you learn" becomes attractive;
  • employability is helped by employment-related courses;
  • employability does not always follow a traditional degree course;
  • continuing professional development is becoming a requirement within many professions;
  • financial incentives from government are encouraging universities which might not previously have considered apprenticeship-style courses to adopt them.

However, when learners are mostly located in the workplace, the coordination of learning conversations between them becomes an organisational challenge. With co-location of learners in a lecture hall, intersubjective engagement can be more easily coordinated than it can remotely: it's the "seeing the whites of their eyes" stuff that teachers rely on, either to organise group activities or to see whether students are understanding what is going on. Many work-based courses get around this by having days on campus.

But what about when they are not on campus? What then are the learning conversations? Where are the activities? This question is about the balance of organisational effort between that which must be done by the learner themselves, and that which can be coordinated by the teacher.

The intersubjective context of learners in the workplace is their immediate working environment. However, this environment is not always structured in the way a teacher might structure it to inculcate learning conversations. If the workplace experience is to be one of personal development, then often the onus is on the learner to self-organise.

Universities provide simple tools to coordinate their operations of assessment and accreditation. The most basic of these is the e-portfolio. For many work-based, competency-based courses, this amounts to learners making claims about professional competencies (often by ticking a box or writing a commentary), which are then verified by an assessor. This data then feeds into the university's accreditation process. Naturally enough, students will seek the ticking-off of competencies as the means to achieve their certificate. But this can be a shallow and strategic exercise.

The tools for self-organisation of learning remain crude - an e-portfolio system does little more than provide a form to be completed. Yet the literature on self-organised learning presents much richer models. Sebastian Fiedler and I have been talking in some depth recently about Sheila Harri-Augstein and Laurie Thomas's work on Learning Conversations from the early 90s. Harri-Augstein and Thomas combined Pask's conversation theory with George Kelly's repertory grid analysis to create a framework for self-organised learning in which students could analyse and track the emergence of their concepts as they experienced different episodes in their professional development. They used largely paper-based tools. We should revisit their work as a means of rethinking the tools for self-organisation in the workplace.

One of the most interesting aspects of the Learning Conversations work is that it explicitly treats learners as non-ergodic systems: that is, systems whose categories are both emergent and individual. Our e-portfolio systems see learning as basically ergodic - there is a fixed "alphabet" of categories or competencies determined by an expert committee. But no living system is ergodic! The Learning Conversations model sees (and explicitly tracks) categories of understanding in reflexive processes along an x-axis, whilst recording the development of these categories from one experience to the next (the y-axis). Thomas and Harri-Augstein argued that this enabled learners to organise their categories of understanding, share them with others, and gradually develop a more sophisticated view of themselves in their environment.
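For what it's worth, here is a toy sketch of the idea in Python (the class and method names are entirely my own invention, not Harri-Augstein and Thomas's): emergent constructs form the x-axis of the grid, and episodes of experience the y-axis.

```python
from collections import OrderedDict

class LearningRecord:
    """Toy model of a self-organised learning record: constructs
    (categories of understanding) emerge per learner, rather than
    being drawn from a fixed institutional alphabet."""

    def __init__(self):
        # episode name -> {construct: self-rating}; constructs are open-ended
        self.episodes = OrderedDict()

    def record(self, episode, ratings):
        """Record how the learner rated their own constructs
        after an episode of experience (ratings: construct -> 1..5)."""
        self.episodes[episode] = dict(ratings)

    def constructs(self):
        """The learner's emergent 'alphabet': the union of all constructs
        used so far, in order of appearance (the x-axis of the grid)."""
        seen = []
        for ratings in self.episodes.values():
            for c in ratings:
                if c not in seen:
                    seen.append(c)
        return seen

    def trajectory(self, construct):
        """How one construct developed across episodes (the y-axis)."""
        return [(ep, r.get(construct)) for ep, r in self.episodes.items()]

record = LearningRecord()
record.record("first ward placement", {"confidence": 2, "listening": 3})
record.record("night shift", {"confidence": 3, "listening": 4, "delegation": 1})
print(record.constructs())             # the alphabet has grown to three constructs
print(record.trajectory("confidence"))
```

The contrast with the e-portfolio lies in `constructs()`: the alphabet grows with the learner's experience instead of being fixed in advance by a committee.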

This is higher learning: it is a process of individuation within complex social and technical environments. It makes me think that the barriers to having a better education system are not snobbishness about work-based learning, but the tools we use. 


Floridi and Williamson on AI; Bohm and Simondon on Thought

There's a very interesting interview here https://conditiohumana.io/floridi-interview/?utm_campaign=direct with Luciano Floridi about AI. Floridi's seminal work on the nature of information, although rather too cognitivist for my liking, is something that one cannot think without today. His contribution to the ethics of information, which sees information ethics as a variety of environmental ethics, is highly important. Floridi has been talking a lot about AI, and in the interview he proposes that the most interesting aspect of AI is that it is used as a mirror of nature: that we come to know ourselves through the technology. I agree. I would go further to say that the essence of information lies in intersubjective engagement (and by extension, consciousness), not in some abstract "stuff" that exists between us. The power of "information" as a topic is that it gives us all something to talk about, where everyone is uncertain about what it is they are trying to grapple with. It's all rather scholastic - and I quite like that.

AI is, I think, also like this. It is a shared disruption to our ways of thinking which gives us all something to talk about. When we see AI as a "tool", we get it wrong. That our institutions see AI as a "tool" says more about our institutions, with their rigid hierarchies, than it does about the technologies of AI.

Ben Williamson's post here, https://codeactsineducation.wordpress.com/2019/02/01/education-for-the-robot-economy/ articulates some of the institutional problems with AI. Here the institution in question is the OECD, but really it could be anyone. They are all struggling to maintain their position and status in a world which is being turned upside down by technology. And it is interesting that "education" becomes the sticking point - the point at which these large hierarchies focus on to say "this is what we have to do". As if they know! As if anyone knows! As if the cult of expertise has escaped the massive explosion of options that technology has given us. As if expertise itself isn't under threat from technology. Which it is.

I am having a personal reminder of this, because last week I self-published my book "Uncertain Education". Since I've been writing it, or thinking about it, for nearly 8 years, it was time to go public with a document that bore the scars of its gestation. I self-published with a combination of Overleaf for typesetting (in LaTeX) and Blurb for printing and distribution. Both work very well, and the printed result is indistinguishable from a normal printed book (even printed at Lightning Source, which also prints "ordinary" books for Amazon). The print-run thing is over. And with it, the artificial scarcity of the "final document". Everything can be changed very easily in an agile way.

So what of the expertise of the editor, the typesetter, the reviewer, etc? The cloud takes over. The expertise becomes distributed. Many eyes looking at this thing, alongside my own eyes which see a thing now in the environment which once only existed within my own private world, are a powerful driver for making small incremental improvements. What matters are the ideas, and they tend to survive awkward moments.

The cult of the expert is one of the reasons that education maintains its structures, practices and hierarchies. It is because the individual teacher's expert judgement on a piece of work is not trusted on its own that we have double-marking, exam boards, quality procedures, and so on: a cumbersome mechanism to keep everything in check and ensure that the stamp of quality can be granted. So what if we do Adaptive Comparative Judgement on a cloud scale for marking student work? What if we create distributed databases of judgements from peers and teachers all over the world about the quality of work? This is what technology affords. It's not AI as such. And yet its fundamental mechanism is the essence of what Warren McCulloch realised his neural networks were: a heterarchy (see https://vordenker.de/ggphilosophy/mcculloch_heterarchy.pdf).
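To make the idea concrete, here is a minimal sketch - not a production algorithm (real Adaptive Comparative Judgement systems choose pairs adaptively and use more careful statistics), just pooled (winner, loser) judgements turned into scores with the classic Bradley-Terry update:

```python
def bradley_terry(items, judgements, iterations=200):
    """Estimate a quality score per item from pooled pairwise
    judgements (winner, loser) via the Bradley-Terry MM update."""
    score = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            wins = sum(1 for w, l in judgements if w == i)
            denom = 0.0
            for w, l in judgements:
                if i in (w, l):
                    other = l if i == w else w
                    denom += 1.0 / (score[i] + score[other])
            new[i] = wins / denom if denom else score[i]
        # normalise so scores stay comparable across iterations
        total = sum(new.values())
        score = {i: s * len(items) / total for i, s in new.items()}
    return score

essays = ["A", "B", "C"]
# (winner, loser) pairs - these could come from judges anywhere in the world
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("B", "C")]
scores = bradley_terry(essays, judgements)
ranking = sorted(essays, key=scores.get, reverse=True)
print(ranking)   # A judged best, then B, then C
```

No individual marker's authority is required: the ranking emerges from the whole distributed pool of judgements, which is exactly the heterarchical mechanism I mean.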

It is the cult of the expert which drives the OECD to make proclamations of scarcity about education. They declare knowledge to be scarce, and so maintain the fiction of the "knowledge economy" - but what do they mean by "knowledge"? What do they mean by "economy"? They declare "coding skills" to be scarce - really? They declare the "right metrics" to be scarce, without any consideration of what a correct data analysis might be. Worst of all, they essentially declare "being human" to be scarce. What nonsense! Why do they do this? Because they want to keep themselves in business.

Take the expert out of all of this and the system reorganises itself naturally and heterarchically. There is no scarcity of knowledge. There is no scarcity of metrics, because every metric is merely an additional description of reality, not a commandment. There is no knowledge economy because what matters is not what is known, but the uncertainty that accompanies it. Coding itself is merely a technique for amplifying artificial descriptions of the world and creating objects and new options to act. It is not scarce either.

What are we left with? It's very similar to my process of publishing and gradually improving my book. It is moving away from the objects of knowledge - final statements, artefacts, etc - and moving towards expressing thought as a process. There's a lot of stuff in my book on David Bohm's ideas about dialogue. How right I think he was. Dialogue is about inspecting thought as process, because all the stuff around us is produced by thought. Organisations like the OECD (and our universities for that matter) have become pathological because they do not see themselves as the product of thought. But they are.

If Bohm is right, then so too is Gilbert Simondon. Thought is transduction - the process of making and maintaining categories. The objects that we have are the result of transductions being configured in a particular way. If we want a better world, we need to change our transduction processes. Simondon's genius is to see that the highest levels of human development are tied up with the realisation of the capacity to control the transductions which make us "us". Particularly, it is the capacity to make us "us" - the capacity for individuation - within a technological environment, which is at the heart of the educational and technological challenge of our time.


Monday, 21 January 2019

Artificial Intelligence in a Better World

There's an interesting article in the Guardian this week about the growth of AI and the surveillance society: https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8

Before reading it, I suggest first inspecting the hyperlink. It's to theguardian.com, but the file it requests is "shoshana-zuboff-age-of-surveillance-capitalism-google-facebook?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8", which contains information about where the link came from and an identifier tied to my click on Facebook. This information goes to the Guardian, who then exploit the data. Oh, the irony!
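The same unpicking can be done with Python's standard library - a useful habit before sharing any link. (The tracker list below names a few common parameters; it is nowhere near exhaustive.)

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

url = ("https://www.theguardian.com/technology/2019/jan/20/"
       "shoshana-zuboff-age-of-surveillance-capitalism-google-facebook"
       "?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8")

parts = urlparse(url)
params = parse_qs(parts.query)
print(params["fbclid"])   # Facebook's click identifier, passed on to the Guardian

# Strip known tracking parameters before passing the link on
TRACKERS = {"fbclid", "utm_source", "utm_medium", "utm_campaign"}
clean_query = urlencode({k: v[0] for k, v in params.items() if k not in TRACKERS})
clean = urlunparse(parts._replace(query=clean_query))
print(clean)              # the same article, minus the surveillance
```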


But I don't want to distract from the contents of the article. Surveillance is clearly happening, and 'platform capitalism' (Google and Facebook are platforms) is clearly a thing (see Nick Srnicek's book here: https://www.amazon.co.uk/Platform-Capitalism-Theory-Redux-Srnicek/dp/1509504877, or the Cambridge Platform Capitalism reading group: https://cpgjcam.net/reading-groups/platform-capitalism-reading-group/). But the tendency to conclude that technology is a bad thing should be avoided. The problem lies with the relationship between institutions which are organised as hierarchies and the mounting uncertainties in the world, which have been exacerbated by the abundance of options that technology has given us.

In writing this blog, I am exploiting one of the options that technology has provided. I could instead have published a paper, written to the Guardian, sent it to one of the self-publishers, made a video about it, or simply expressed my theory in discussion with friends. I could have used Facebook, Twitter, or I could have chosen a different blogging platform. In fact, the choice is overwhelming. This amount of choice is what technology has done: it has given us an unimaginably large number of options for doing things that we could do before, or in other ways. How do I choose? That's uncertainty.

For me as a person, it's perhaps not so bad. I can resort to my habits as a way of managing my uncertainty, which often means ignoring some of the other available options that technology provides (I really should get my blog off blogger, for example, but that's a big job). But the sheer number of options that each of us now has is a real problem for institutions.

This is because the old ways of doing things like learning, printing, travelling, broadcasting, banking, performing, discussing, shopping or marketing all revolved around institutions. But suddenly (and it has been sudden) individuals can do these things in new ways in addition to those old-fashioned institutions. So institutions have had to change quickly to maintain their existing structures. Some, like shops and travel agents, are in real trouble - they were too slow to change. Why? Because their hierarchical structures meant that staff on the shop floor who could see what was happening and what needed to be done were not heard at the top soon enough, and the hierarchy was unable to effect radical change because its instruments of control were too rigid.

But not all hierarchies have died. Universities, governments, publishers and broadcasters survive well enough. This is not because they've adapted. They haven't really (have universities really changed their structures?). But the things that they do - pass laws, grant degrees, publish academic journals - are the result of declarations they make about the worth of what they do (and the lack of worth of what is not done through them) which get upheld by other sections of society. So a university declares that only a degree certificate is proof that a person is able to do something, or should be admitted to a profession. These institutions have upheld their powers to declare scarcity. As more options have become available in society to do the things that institutions do, so the institutions have made ever stronger claims that their way is the only way. Increasingly, institutions have used technology as a way of reinforcing their scarcity declarations (the paywall of journals, the VLE, AI, surveillance). These declarations of scarcity are effectively a means of defending the existing structures of institutions against the increasing onslaught of environmental uncertainty.

So what of AI or surveillance? The two are connected. Machine learning depends on data, and data is provided by users. So users' actions are 'harvested' by AI. However, AI is no different from any other technology: it provides new options for doing things that we could do before. So while the options for doing things increase, uncertainty increases, and this feeds a reaction by institutions, including corporations and governments. The solution to the uncertainty caused by AI and surveillance is more AI and surveillance: now in universities, governments (China particularly) and technology corporations.

This is a positive-feedback loop, and as such is inherently unstable. It is even more unstable when we realise that the machine learning isn't that good or intelligent after all. Machine learning, unlike humans, is very bad at being retrained: retrain a neural network and you risk everything that was learnt before going to pot (I'm having direct experience of this at the moment in a project I'm doing). The simple fact is that nobody knows how it works. The real breakthrough in AI will come when we really do understand how it works. When that happens, the ravenous demand for data will become less intense: training can be targeted with manageable and specific datasets. Big data is, I suspect, merely a phase in our understanding of the heterarchy of neural networks.
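A toy demonstration of the retraining problem (usually called "catastrophic forgetting"): the 'network' below is just a perceptron - the simplest possible stand-in, nothing like the real model in my project - trained first on one task and then, with no protection for what it already knows, on a conflicting one.

```python
def train(w, data, epochs=20, lr=0.1):
    """Perceptron updates; nothing protects previously learnt weights."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

def accuracy(w, data):
    return sum((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
               for x, y in data) / len(data)

# Task A and task B have directly conflicting decision boundaries
task_a = [((1, 0), 1), ((-1, 0), 0), ((1, -1), 1), ((-1, 1), 0)]
task_b = [((-1, 1), 1), ((1, -1), 0), ((-2, 2), 1), ((2, -2), 0)]

w = train([0.0, 0.0], task_a)
print("task A accuracy after training on A:", accuracy(w, task_a))  # 1.0

w = train(w, task_b)   # retrain on task B only
print("task B accuracy:", accuracy(w, task_b))  # 1.0 - the new task is learnt...
print("task A accuracy:", accuracy(w, task_a))  # 0.25 - ...the old one mostly lost
```

Humans accumulate; this kind of learner overwrites. That, in miniature, is why retraining deployed models is so fraught.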

The giant surveillance networks in China are feeding an uncertainty dynamic that will eventually implode. Google and Facebook are in the same loop. Amplified uncertainty eventually presents itself as politics.

This analysis is produced by looking at the whole system: people and technology. It is one of the fundamental lessons from cybernetics that whole systems have uncertainty. Any system generates questions which it cannot answer. So a whole system must have something outside it which mops up this uncertainty (cyberneticians call this 'unmanaged variety'). This thing outside is a 'metasystem'. The metasystem and the system work together to maintain the identity of the whole, by managing the uncertainty which is generated. Every whole has a "hole".

The question is where we put the technology. Runaway uncertainty is caused by putting the technology in the metasystem to amplify the uncertainty mop. AI and surveillance are the H-bombs of metasystemic uncertainty management now. And they simply make the problem worse while initially seeming to do the job. It's very much like the Catholic church's commandeering of printing.

However, the technology might be used to organise society differently so that it can better manage the way it produces uncertainty. This is to use technology to create an environment for the open expression of uncertainty by individuals: the creation of a genuinely convivial society. I'm optimistic that what we learn from our surveillance technology and AI will lead us here... eventually.

Towards a holographic future

The key moment will be when we learn exactly how machine learning works. Neural networks are a bit like fractals or holograms, and this means that the relationship between a change to the network and the reality it represents is highly complex. Which parts of a neural network do we change to produce a determinate change in its behaviour (without unforeseen consequences)? What is fascinating is that consciousness and the universe may well work according to the same principles (see https://medium.com/intuitionmachine/the-holographic-principle-and-deep-learning-52c2d6da8d9). The fractal is the image of the future, just as the telescope and the microscope were the images of the Enlightenment (according to Bas van Fraassen: https://en.wikipedia.org/wiki/Bas_van_Fraassen).

Through the holographic lens the world looks very different. When we understand how machine learning does what it does, and we can properly control it, then each of us will turn our digital machines to ourselves and our social institutions. We will turn it to our own learning and our learning conversations. We will turn it to art and aesthetic and emotional experience. What will we learn? We will learn about coherence and how to take decisions together for the good of the planet. The fractals of machine learning can create the context for conversation where many brains can think as one brain. We will have a different context for science, where scientific inquiry embraces quantum mechanics and its uncertainty. We will have global education, where the uncertainty of every world citizen is valued. And we will have a transformed notion of what it is to 'compute'. Our digital machines will tell us how nature computes in a very different way to silicon.

Right now this seems like fantasy. We have surveillance, nasty governments, crazy policies, inequality, etc. But we are in the middle of a scientific revolution. The last time this happened, we had the Thirty Years War, the English Civil War and Cromwell. We also have astonishing tools which we don't yet fully understand. Our duty is to understand them better, and to create the environment for conversation which the universities once provided.