Friday, 19 May 2023

The Digitalised Imagination

Just over two years ago, in the tail-end of Covid, I decided I wanted a bit of adventure: I gave up a slightly depressing management position at the University of Liverpool and became a post-doc on a curriculum digitalisation project at the University of Copenhagen. I thought at the time that digitalisation was the most important undercurrent in education, and I knew that it was a difficult thing to move towards. My best achievement had been at the Far Eastern Federal University in Russia, which I wrote about here: Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum (springer.com). The Copenhagen experience was nowhere near as good as the Russian experience, and I left Copenhagen for Manchester with a much deeper appreciation of what I had done in Russia. I just wished I'd done it in Switzerland!

During this time, and for seven years previously, I had been deeply involved in a medical diagnostic AI project whose core innovation I co-invented. It was obvious that AI was a tidal wave about to hit education, and much of my frustration in Copenhagen was that very few people were really interested. They are now, like everyone else.

There is a risk that AI sweeps the digitalisation agenda away. After all, why teach the kids to code when the computer will do it for you? This kind of statement underpins errors in the ways that digitalisation was conceived - particularly in Copenhagen and many other European universities. It also underpins the difference between the institutional approach of Copenhagen and the approach I took in Russia. 

Digitalisation is not about skill or competency. It is not about "digital literacy" (whatever that means!). It is about imagination. This was understood by the Russians, and dogmatically avoided in Copenhagen. The deep problem is the sanctifying of "competency" within European education, and the EU has been particularly pernicious in pushing this. Despite the sheer lack of insight into what "competency" actually is (ask anyone to define it!), it is continually asserted that this is the thing education must deliver.

Now in the new AI world that is opening up in front of us, the biggest threat is not technology, but poverty of the imagination. And imagination today means (partly) the "technical imagination". It is about understanding the realm of possibility under the surface, behind the interface - it is the Freudian unconscious of the technical world which, through the working of creativity, can find expression in the interfaces we produce.

With an imaginative collapse, humanity becomes enslaved. While the demands of the technical imagination are going to encompass a huge range of disciplines, skills, ideas and relationships, we will need our new tools to oil the wheels of our discourse and knowledge and to find new ways of organising ourselves. It is to the steering of this process that education needs to direct itself. But ironically, the university as it is currently constituted is geared-up for imaginative collapse and corporate takeover.

Digitalisation is about changing this. It's not going to be easy. 

Tuesday, 16 May 2023

The Glass Bead Game of Human Relations

I attended an interesting session today on burnout and stress at work. There are many conflicting analyses of these problems. On the one hand, there are those studies which focus on the individual, seeing stress as an attribute of individuals, and "stressors" as independent variables producing the experienced symptoms of stress. There are clearly epistemological problems with this, not least that stress is rather like a headache - something that is subjectively experienced, but cannot be directly witnessed by others (only its effects). Searle calls this a "subjective epistemological" phenomenon (to be contrasted with "objective epistemological" things like historical dates, "subjective ontological" things like money or universities, or "objective ontological" things like the motion of planets, or light). The notion of the "self" that is stressed here is the biological/psychological entity bounded by its skin. Let's call this Stress1.

The alternative view of stress is that it is a manifestation of social relations and communication. This entails a different conception of the self as something that is constructed within communication, particularly the communication of the first person "I". The self in this sense is more like Searle's "ontological subjective" category: the reality of a self is constituted by the expectations which arise as a result of social engagement and "positioning". This is the self as it is seen by others. It is also the self which can be oppressed by others directly, or by situations which result from others taking insufficient care over environmental factors that can negatively impact the expression of the self. This is what can happen in situations where people become stressed. Communicative theories which examine stress in these circumstances include things like the "double bind", which is unfortunately extremely common in many workplaces. This is Stress2.

Both perspectives on the stressed self - the ontological-subjective self and the epistemological-subjective self - are important. However, in terms of practical steps to eliminate stress, the two perspectives suggest different approaches. Stress1 is addressed through treatment of the individual - rather like giving someone with a headache paracetamol: mindfulness, and so on. Stress2 is addressed through changing the structures of communication. This is much harder to do, and so Stress1 dominates the discourse, and its (rather hare-brained) remedies go relatively unchallenged.

Stress2 is difficult because it basically requires the making of better decisions at the top of an organisation. Bad decisions will cause stress. Good decisions ought not to; instead they ought to create synergy, wellness and productivity. Decisions are the result of the skill of decision-makers, so the question really is how we create good decision-makers. Here we see that the incentives for people to climb the ranks of decision-making encourage behaviour which is anathema to the making of "good decisions". People are rewarded instead for hitting targets, increasing profits, and driving down costs - all of which comes at a human cost.

Even if better criteria could be defined to encourage and recruit better decision-makers, it will always be possible to "fake" criteria if they are in the form of new targets or KPIs. This won't work.

This has led me to wonder what Hermann Hesse's "Glass Bead Game" might actually have been (or might one day be in the future). Why do the elites of 25th-century Castalia take this game, which is a bit like music (as Hesse describes it), so seriously? There is something important about it being a game.

A game is not a set of criteria. It is a practice which requires the learning of skill to play well. As one learns to play well, one deepens in insight. As one deepens in insight, one might become more aware and able to act in the world in a way where the making of good decisions becomes more probable. Importantly, to play the glass bead game is not to "hit targets". It is not a KPI. It is an art. Only those who are more experienced in the game can judge those who are less experienced, but gradual mastery equips one with the skill to make good judgements oneself. Of course, Joseph Knecht decides the game is not for him, and a different spiritual path takes him elsewhere. But it is still a spiritual path - perhaps a different kind of game.

What if one's progression up the ranks of decision-making powers was organised like this? Would we have fewer psychopaths and more enlightened individuals at the top of our organisations? I think this is what Hesse was driving at. After all, he had seen the worst kind of management psychopaths in history in the Nazis. He must have asked himself what novel kind of arrangements might make the making of Nazis less probable. 

The other interesting thing about this though is that the Glass Bead Game is technological. Is there a way in which we could organise our technologies to produce a radically different kind of incentive scheme for those who aspire to become custodians of society? We clearly have some very powerful and novel technologies in front of us which should cause us to reflect on a better world that we might be able to build with them. 

Sunday, 14 May 2023

Positioning AI

I've been creating a simple app for my Occupational Health students to help them navigate and inquire into their learning content in flexible ways. It's the kind of thing that the chatGPT API makes particularly easy, and it seems worth playing with, since chatGPT won't be the only API that does this kind of thing for long (Vicuna and other open source offerings are probably the future...)
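For the technically curious, here is roughly what the plumbing looks like. This is a minimal sketch rather than the actual app: the model name, the prompt wording and the ask_about_content helper are my own illustrative assumptions.

```python
# A minimal sketch of the kind of call such an app makes - not the actual app.
# The model name, prompt wording and ask_about_content helper are illustrative
# assumptions only.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, kept out of the code entirely

def ask_about_content(content: str, question: str) -> str:
    """Answer a student's question using only the supplied learning content."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a tutor for occupational health students. "
                        "Answer using only the course content provided."},
            {"role": "user",
             "content": f"Course content:\n{content}\n\nQuestion: {question}"},
        ],
        temperature=0.2,  # keep answers close to the source material
    )
    return response["choices"][0]["message"]["content"]

# e.g. a student interrogating a topic in their own terms
print(ask_about_content(open("noise_at_work.txt").read(),
                        "What are an employer's duties around hearing protection?"))
```

The interesting part is not the code, which is trivial, but the last line: it is the student, not the designer, who chooses the question.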

As with any tool development, the key is whether the people for whom the tool is made find it useful. This is always a tricky moment, because others either do or don't see that the vision of what they do (manifested in the technics of what is made) actually aligns with what they perceive their needs to be. Like a bad doctor, I risk (like so many technical people) positioning students as recipients of techno-pedagogical "treatment". (Bad teachers do this too.)

We've seen so many iterations of tools where mouse clicks and menus have to be negotiated which seem far-removed from real wants and needs. The VLE is the classic example. I wrote a paper about this many years ago with regard to Learning Design technology, which I am reflecting on again in the light of this new technology (see Microsoft Word - 07.doc (researchgate.net)). I used Rom Harre's Positioning Theory as a guide. I still think it is useful, and it makes me wonder how chatGPT might be any different in terms of positioning. 

Harre's Positioning Theory presents a way of talking about the constraints within which the Self is constructed in language and practice. There are three fundamental areas of constraint: 

  1. The speech acts that can be selected by an individual in their practice
  2. The positions they occupy in their social settings (for example, as a student, a teacher, a worker, etc)
  3. The "storyline" in their head which attempts to rationalise their situation and present themselves as heroic. 

With positioning through the use of tools, learners and teachers are often seen as recipients of the tool designer's judgement about what their needs are. This is always a problem in any kind of implementation - a constant theme in the adoption of technology. Of course, the storyline for the tool designer is always heroic!

But chatGPT doesn't seem to have had any adoption problems. It appears to most people who experience it that this is astonishing technology which can do things we have long wanted easy solutions to: "please give me the answer to my question without all the ads, and without the need to drill through multiple websites! (and then write me a limerick about it)". But in many cases, our needs and desires have been framed by the tedium of the previous generation of technology. It could have been much better - but it wasn't, for reasons which are not technical but commercial.

However, could chatGPT have positioning problems? This is an interesting question because chatGPT is a linguistic tool. It, like us, selects utterances. Its grasp of context is crude by comparison to our awareness of positions, but it does display some contextual (positioning) awareness - not least in its ability to mimic different genres of discourse. Clearly, it doesn't have a storyline of its own. However, because of the naturalness of the interface, and its ability to gain information from us, it is perfectly capable of learning our storylines.

In a world of online AI like chatGPT or BARD, the ability to learn individuals' storylines would be deeply disturbing. However, this is unlikely to be where the technology is heading. AI is a decentralising technology - so we are really talking about a technology which is under the direct control of users, and which has the capacity to learn about its user. That could be a good thing. 

I might create a tool for my students to use and say "here is something that I think you might find useful". Ultimately, whether they find it useful or not depends on whether what they perceive as meaningful matches what I perceive as meaningful to them. But what is "meaningful" in the first place?

What students and teachers and technologists are all doing is looking for ways in which they (we) can anticipate our environment. Indeed, this simple fact may be the basic driving force behind the information revolution of the last 40 years. A speech act is a selection of an utterance whose effects are anticipated. If a speech act doesn't produce the expected effects, then we are likely to learn from the unexpected consequences, and choose a different speech act next time. Positioning depends on anticipation, and anticipation depends on having a good model of the world, and particularly, having a storyline which situates the self in that model of the world. 

Anticipations form in social contexts, in the networks of positionings in which we find ourselves in our different social roles. ChatGPT will no doubt find its way into all walks of life and many different positions. Its ability to create difference in many different ways can be a stimulus to revealing ourselves to one another in different social situations. But there are good and bad positionings. The danger is that we allow ourselves to be positioned by the technology as passive recipients of information, art, AI-generated video, instruction, jokes, etc. - and that we lose sight of what drives our curiosity in the first place. That is going to be the key question for education in the future.

This is where the guts of judgement lie. What constitutes a position is not merely a set of expectations about the world around us. It is deeply rooted in our physiology. If we are not to become passively positioned by powerful technology, then it will become necessary for us to look inwards on our physiology in our deepest exercise of judgement. This is what we are going to need to teach succeeding generations. Information hypnosis, from which we have suffered for many years on the web, cannot be the way of the future.

Sunday, 7 May 2023

The Endosymbiotic Moment

It's become increasingly obvious that there is something quasi-biological about current AI approaches. It's not just that there is a strong genotype-phenotype homology in the way that relatively fixed machine learning models work in partnership with adaptive statistics (see Improvisation Blog: AI, Technical Architecture and the Future of Education (dailyimprovisation.blogspot.com)). More importantly, the unfolding evolutionary dynamics of machine learning also appear to confirm some profound theories about cellular evolution. In my book about the future of education, written four years ago now, I said that there would come an "endosymbiotic moment" between education and technology. Events seem to be playing that out, but now I think it's not just education that is in for an endosymbiotic moment, but the whole of society.

This may be why people like Elon Musk, who has had a big stake in AI research, are calling for a "pause". Why? Is it wishful thinking to suggest that it may be because the people most threatened by what is happening are people like him? Perhaps - but it may well be so.

The essence of biological evolution, and specifically cellular evolution, is that a boundary (e.g. the cell wall) must be maintained. The cell wall defines the relationship between its inside and its outside. Given that the environment of the cell is constantly changing, the cell must somehow adapt to threats to its existence. The principal strategy is what Lynn Margulis called "endosymbiosis". This is basically where the cell absorbs aspects of its environment which would otherwise threaten it. It explains, for example, the presence of mitochondria within the cell which, Margulis argued, were once independent simple organisms like bacteria. Endosymbiosis is the means by which the cell becomes more like its environment, and through this process, is able to anticipate the threats and opportunities that the environment might throw at it. It is also the way in which cells acquire a "memory" of their evolutionary history - a kind of inner story which helps to coordinate future adaptations and interactions with other cells. From this perspective, DNA is not the "blueprint" for life, but rather the accreted result of ongoing R&D in the cell's existence.

What's this got to do with technology? The clue is in a leaked memo from Google (Google "We Have No Moat, And Neither Does OpenAI" (semianalysis.com)), which highlighted the threat to the company's AI efforts not from competitor companies, but from open source developments. All corporate entities - whether companies, universities or even governments - maintain their viability and identity (and, in the case of companies, their profits) by maintaining the scarcity of what they do. That means maintaining a boundary. Often we see corporate entities doing this by "swallowing up" aspects of their environment which threaten them. The big tech giants have made a habit of this.

The Google memo suggests something is happening in the environment which the corporation can't swallow: open source development of AI. Of course, there is nothing new about open source, but corporations were always able to maintain an advantage (and maintain scarcity) in their adoption of the technology, often by packaging products and services together to offer them to corporations and individuals. Microsoft has had the biggest success here. So why is open source AI so much more of a problem than Open Office or Ubuntu?

The answer to this question lies in the nature of AI itself. It is, fundamentally, an endosymbiotic technology: a method whereby the vast networked environment of the internet can be absorbed into a single technological device (an individual computer or phone). That device, which then doesn't need to be connected to the internet, can reproduce the variety of the internet. This provides individuals equipped with the technology with a vastly increased power to anticipate their environment. Up until this point, the tech industry has aimed to empower individuals with some anticipatory capability, but to maintain control of the tools which provide it. It is that control of the anticipatory tools which is likely to be lost by corporations. And it will not just be chatbots - it will be all forms of AI. It is what might be called a "radical decentralisation moment".
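To make the point concrete, here is a hedged sketch of what that decentralisation looks like in practice, assuming an open model such as Vicuna has already been downloaded to disk. The model path and prompt format are illustrative assumptions, not a recipe.

```python
# Sketch: running an open-source chat model entirely on a local machine.
# Once the weights are on disk, no internet connection is required at all.
# The model path and prompt format are illustrative assumptions.
from transformers import pipeline

chatbot = pipeline("text-generation", model="./local-vicuna-weights")

prompt = "USER: Explain endosymbiosis in one paragraph.\nASSISTANT:"
print(chatbot(prompt, max_new_tokens=200)[0]["generated_text"])
```

Nothing in that sketch passes through a corporate server; the anticipatory tool sits wholly inside the user's own boundary.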

This has huge implications. Intellectual property, for example, depends on the creation of scarcity. But what happens if innovation is now performed by (or in conjunction with) machines which are ubiquitous and decentralised? New developments in technology will quickly find their way to the open source world, not just because of some desire to be "open", but because that is the place where they can most effectively develop. Moreover, open source AI is much simpler than open source office applications. It has far fewer components: a training algorithm + data + statistics is just about all that's needed. Who would invest in a new corporate innovation in a world where any innovation is likely to be reproduced by the open source community within a matter of months? (I wonder if the Silicon Valley Bank collapse carried some forewarning of this problem.)
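To show quite how few ingredients there are, here is a toy sketch - a bigram model built from nothing but counts. It is laughably far from an LLM in scale, but the ingredients are the same: data in, statistics accumulated, text generated from those statistics. In the real thing, gradient descent over billions of parameters replaces the counting.

```python
# Toy illustration of "training algorithm + data + statistics":
# a bigram language model built from nothing but counts.
import random
from collections import defaultdict, Counter

data = "the cat sat on the mat the dog sat on the rug".split()

# "Training": accumulate statistics over the data
counts = defaultdict(Counter)
for prev, nxt in zip(data, data[1:]):
    counts[prev][nxt] += 1

# "Inference": sample the next word from the learned statistics
word, output = "the", ["the"]
for _ in range(8):
    if not counts[word]:
        break
    word = random.choices(list(counts[word]), weights=counts[word].values())[0]
    output.append(word)

print(" ".join(output))
```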

But it's not just the identities of tech businesses which are under threat. What about education? What about government? Are we now really so sure that the scarcity of the educational certificate, underpinned by the authority of the institution, is safe from an open source challenge? (Blockchain hasn't gone away, for example.) I'm not, and the way that universities have responded to chatGPT has highlighted their priority to "protect the certificate!", like the queen in the hive. If the certificate goes, what else does education have? (I'm not suggesting "nothing", but the certificate is the current business model and has been for decades.)

Then there is government and the legal frameworks which protect the declaration of scarcity in commerce through IP legislation and contracts. The model of this was the East India Company, where protecting territories and trade routes with the use of force underpinned imperial wealth. What if you can't protect anything? What kind of chaos does that produce? AI regulation is not going to be a shopping list of dos and don'ts, because it is going to be difficult to stop people doing things. China is perhaps the most interesting case. No government can control a self-installed, non-networked chatbot: it's like kids in the Soviet Union listening to rock and roll on x-ray film turned into records. Then of course there will be terrorist cells arming themselves with bomb-making experts. We are going to need to think deeper than the ridiculously bureaucratic nonsense of GDPR.

Our priority in education, industry and government is going to need to be to restabilise relations between entities with identities which will be very different from the identities they have now. In the Reformation, it was the Catholic church which underwent significant changes, underpinned by major changes in government. The English civil war and the restoration produced fundamental changes to government, while the industrial revolution produced deep changes to commerce. But this is a dangerous time. Historical precedent shows that changes on this level are rarely unaccompanied by war.