Sunday, 26 June 2022

Learning, Dialogue and AI: Offline initiatives and Political Freedom

I'm running a small EU project in July called C-Camp. The idea is to instil and explore computational practices among students from four European universities (Prague, Copenhagen, Milan and Heidelberg). I wanted to create something for it which built on my experiences in Russia with the Global Scientific Dialogue course (Improvisation Blog: Transforming Education with Science and Creativity (dailyimprovisation.blogspot.com)), about which a paper is shortly to appear in Postdigital Science and Education. 

In Russia, the vision was to present students with a technological "cabinet of curiosities" - a way of engaging them with the invitation "this is interesting - what do you make of it?". It was the uncertainty of the encounter with technological things which was important: that was the driver for the dialogue which dominated the course. C-Camp is very much in the same spirit. 

This time, I have been a bit more ambitious in making my cabinet of curiosities. I've made a cross-platform desktop app using ElectronJS which incorporates a tabbed web browser, alongside self-contained tools which make learners' data available to the learners (and only to the learners). The advantage of a desktop tool is that, apart from the learners being able to change it (my programming and design is merely functional!), nothing personal goes online, apart from the traffic to each website. The data of engagement with the tools - something that is usually hidden from learners - then becomes inspectable by them. There are lots of "cool tools" that we suggest exploring (like the amazing EbSynth).

The pedagogy of the course will then be to explore the data that learners themselves create as they process their own uncertainty. It's messy data - which can be an advantage educationally - but it illustrates a number of important principles about what is going on online: what data the big tech companies are harvesting, and how they are doing it. 
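
To give a flavour of what that inspection might look like, here is a minimal Python sketch. The file name and event format are my assumptions for illustration - the app's actual export may differ:

```python
# Summarise a local engagement log so learners can see their own "harvested"
# data. The file name and format are hypothetical placeholders.
import json
from collections import Counter

with open("engagement-log.json") as f:
    events = json.load(f)             # e.g. [{"url": "...", "timestamp": ...}, ...]

visits = Counter(e["url"] for e in events)
print("Most visited while 'learning':")
for url, n in visits.most_common(5):
    print(f"  {n:4d}  {url}")
```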

More to the point, having a desktop tool makes an important statement: "edTech doesn't have to be like the LMS!" Not everything needs to be online. Not everything needs to be harvested by corporations. And if individuals were more in contact with their own data - particularly their own learning data - there would be opportunities for deepening both our learning and our engagement with technology. So supporting students in downloading and analysing their own Facebook data can be part of a journey into demystifying technology and inspiring the imagination to look "beyond the screen".
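
To make that concrete, the analysis can start very simply - for instance, counting one's own posts per year from Facebook's downloaded JSON. This is a hedged sketch: the path and field names are assumptions about the export format and may vary between downloads:

```python
# Parse a Facebook "Download Your Information" JSON export and chart activity
# per year. The path and field names are assumptions and may need adjusting.
import json
from collections import Counter
from datetime import datetime, timezone

with open("posts/your_posts_1.json") as f:     # hypothetical export path
    posts = json.load(f)

years = Counter(
    datetime.fromtimestamp(p["timestamp"], tz=timezone.utc).year
    for p in posts if "timestamp" in p
)
for year in sorted(years):
    print(year, "#" * years[year])             # crude histogram of one's own posting
```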

One of the things I've done is to integrate two AI services. One of them uses the OpenAI service, which is online. The code for doing this is quite simple, but the important thing is that the processing happens remotely on OpenAI's servers. 
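
Something like this is all it takes - a minimal Python sketch against the openai package as it stood in mid-2022, with the key and prompt as placeholders:

```python
# A minimal call to OpenAI's hosted completion endpoint (mid-2022 API).
# Note: the prompt travels over the network and is processed on OpenAI's
# servers - nothing "intelligent" happens on the local machine.
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

response = openai.Completion.create(
    engine="text-davinci-002",    # a hosted model; we never hold the weights
    prompt="What might a technological cabinet of curiosities contain?",
    max_tokens=100,
)
print(response.choices[0].text)
```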

However, the other AI service is local. I've integrated the VGG16 model with ImageNet weights so that students can upload images and explore image recognition. The model and the code all sit on the local machine. The point to make is that there is no technical reason why OpenAI shouldn't work like this too - only commercial ones.
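
Once the weights have been downloaded and cached, everything below runs with no network connection at all. A minimal Keras sketch (the image file is a placeholder):

```python
# Local image recognition with VGG16 and ImageNet weights. After the first
# download, the model lives on the local machine and inference is offline.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet")                  # cached locally after first run

img = image.load_img("my_photo.jpg", target_size=(224, 224))   # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])         # top-3 labels, computed locally
```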

What fascinates me about this is that for all the anxious talk about AI and its supposed "sentience", nobody talks about the technical architecture which basically up-ends the idea that everything has to be online. Large-scale language models are basically self-contained anticipatory dialogical engines which could function in isolated circumstances.

Think about this: imagine a non-free country like Russia or China, where the authorities seek to monitor and control the conversations that individuals have. Suddenly, individuals can have conversations which are not monitored - simply by being in possession of a particular AI file. 

I'm doing a demo of OpenAI tomorrow in China. The last time I did it there, it worked. I doubt it will work for much longer. But it's easy to envisage a future where a market for specialised language-model AIs starts to infiltrate the underworld, allowing people to have "prohibited conversations". That could mean both very good things for social organisation and freedom from oppression, and bad things in terms of crime. 

That is one of the more fascinating things to discuss in C-Camp. I think I might be more careful with my Chinese audience!




Saturday, 25 June 2022

How Learning Feels

When learning works, it feels like a burst of energy. It is the energy of an explosion of new possibilities brought about through some revelation. It is a spiritual moment (something we hardly ever acknowledge) - even when it is learning about unspiritual things. Like the discovery of a new physical energy source, we can live off the energy of new learning for some time. 

Striving for this moment is not easy. Yet we are driven towards it for reasons we do not understand. Teachers often assume that the motivation is produced by the mere operation of the education system. But the education system exists because curiosity and the motivation to learn exists. The system has no explanation for curiosity, and it struggles to conceive of ways of learning outside of itself.

New possibilities are possibilities for new social action. It is not just what some sociologists call "agency", but a transformed social configuration. A learnt skill is a transformation in social connections and conversations. It is new dialogical potential. And dialogical potential begets new possibilities for learning and energy distribution among others. To talk of the energy of learning, we should also talk of the energy of teaching. There is an energy flow in these dynamics.

In natural ecosystems like ponds and meadows, energy dynamics are very important. Ecosystems maintain themselves by keeping the energy flowing between co-evolved co-habiting system components. If the flow is stopped - by environmental damage, for example - the ecosystem dies. 

Education systems have become tragically good at preventing flows of energy. Instead of allowing energy to flow, education systems hoard it, exploit it, seek individual gain from it, use it to make money, and encourage us to make ourselves "powerful", as if we were independent of everyone else. 

We do this partly because we do not understand the dynamics of energy. If we did, we would take music much more seriously because it is one of the few human activities which exhibits energy flow in a pure form in a human system.

Intuitively, I think we know this. It is a symptom of the education system that it prevents us from "knowing" what we know deep down. Somehow we need the education system to adapt so that it helps us to steer ourselves through what we know deep down. It needs to ease our steering - particularly in uncertain times. It is a transformation from hoarding knowledge to assisting steering. Then perhaps the steering of learning will feel more natural.


Wednesday, 8 June 2022

Trimtabs and Loosening Creativity

Creative processes are often difficult. It is hard to steer through distractions, uncertainty, self-doubt, dead-ends, etc. The steering becomes "heavy". So what's wrong with the mechanism, and what might be done to loosen things up to make the process more navigable?

The construction of niches for creative work is critical. It is the niche within which new things can grow. From a technical/theoretical perspective, niches are the result of redundancy. In his description of the Zone of Proximal Development, Vygotsky said as much (without using the word "redundancy"), highlighting the importance of imitation in what he called the "learning" process, and arguing that "development" lags behind "learning". In the same way, creation lags behind redundancy - and it doesn't matter what kind of creation it is: it can be technical, artistic, organisational, theoretical or scientific. 

Margaret Boden talked once of the creative work of Spanish seamstresses making Flamenco dresses. She said "they do one layer, then another, then another, then another... what's going on there?" It's the same with things like mosaic, quilt-making or knitting. I didn't know enough about redundancy at the time to suggest it as an explanation, but I think she was already thinking this. This is niche construction. 

It is something we tend to ignore in education because we have become so obsessed with outcomes and products, seeing the processes which produce them as "problem solving". The word "solving" is interesting because it really means "loosening" - solvere. That's not how people who talk about problem solving think about it. But if loosening really happens, then it makes the "steering" easier. 

Buckminster Fuller's idea of a trimtab is a loosening device. It literally loosens the steering, and it does so by creating a niche for steering - simply by adjusting the pressure on a rudder or a wing. This tiny thing at the back end of the navigation process is what makes everything else work. Now perhaps it's not stretching things too far to say that trimtabs create redundancy. Without them, there is such a variety of forces and pressures operating on the wing that no single steering movement can manage it all. The trimtab reduces the variety by increasing the constraint - rather like a spider spinning a web. By creating a uniform area of lower pressure, the steering can be assisted. 

The trimtabs of our organisations lie in the redundancy of communication among their workers. Where there is high redundancy, we will also see what we might call "collegiality". Collegiality, team working, and a shared mission can all create the niche for organisational creativity. An absence of it will make creativity very difficult. 

Our organisations do not have operational trimtabs. The only lever they can pull is the departmental meeting - and this has become a ritual which often serves very little purpose. There is a deep need for exploring new mechanisms for institutional organisation. The answer to this lies in technology - but not the kind of surveillance technology which is often talked about (like "learning analytics"). Surveillance will not produce collegiality. Quite the opposite. 

We need to use technology to provoke dialogue among colleagues. It is through the dialogical engagement among colleagues that effective niches can be established. This is not to see technology as instrumental, but as dialogical. AI may be our best opportunity to do something like this, and if there is one single challenge that faces us with that technology, it is that we might misuse it to tighten, rather than loosen, the steering.   

Saturday, 4 June 2022

The Cybernetics of the Trimtab Society

Over the last seven years, I've been heavily involved in a medical diagnostic project which unites human and machine judgement. This has always been cybernetic in my mind (and it was cybernetic insights which led to some pretty cool machine learning that sits behind it). It's about to be commercialised which is very exciting, not least because the technology is applicable to fields far beyond medical diagnostics - education, management, organisational risk and public health are all within scope of potential application. 

Cybernetics relies on simple rules and metaphors, but these work in a wide range of contexts. The Law of Requisite Variety is the most important: the amount of variety (or complexity) that a controller has sets the limit of the complexity of any system that it can control. Most simply, variety eats variety. Since most systems have to survive in environments of greater complexity than they themselves possess, they must establish a controlled relationship with their environment through attenuation (selecting what information to pay attention to and what to ignore) and amplification (using their capabilities and understanding to create a niche in the environment - for example, a spider spinning a web). This can balance the variety equation.
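
A toy rendering of the law makes it concrete (my own illustration, not part of the original argument): a regulator with fewer responses than there are disturbances cannot hold the outcome steady.

```python
# Ashby's Law of Requisite Variety as a counting exercise: each disturbance
# needs a matching response, or it "leaks through" into the outcome.
disturbances = [0, 1, 2, 3]            # four environmental states (variety = 4)

def outcome_variety(n_responses):
    """Distinct outcomes that remain despite regulation: a disturbance is
    neutralised only if the regulator has a dedicated response for it."""
    outcomes = {d if d >= n_responses else "ok" for d in disturbances}
    return len(outcomes)

print(outcome_variety(4))   # 1 -> requisite variety: every disturbance absorbed
print(outcome_variety(2))   # 3 -> too little variety: disturbances 2 and 3 leak through
```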

A simple mechanical metaphor of cybernetics is the Watt Governor on a steam engine. The engine's speed, represented by the spinning of its flywheel, is controlled by a device (the governor) which uses the centrifugal force generated by the speed of the wheel to either slow down or speed up the flow of steam to the engine. This works because the wheel has exactly the same amount of variety as the governor: whatever state the engine is in is matched by a corresponding state of the governor.
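
The loop can be caricatured in a few lines - a toy proportional-feedback sketch of my own, with arbitrary constants:

```python
# Watt-Governor-style negative feedback: the flyballs' position sets the steam
# valve, so every state of the engine is matched by a state of the governor.
set_point = 100.0   # target flywheel speed (arbitrary units)
speed = 60.0

for step in range(12):
    # Governor: the valve closes as speed rises above the set point.
    valve = min(1.0, max(0.0, 0.5 + 0.02 * (set_point - speed)))
    # Engine: steam accelerates the flywheel; the load slows it.
    speed += 20.0 * valve - 0.1 * speed
    print(f"step {step:2d}: valve={valve:.2f} speed={speed:6.2f}")
```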

This is fine as a metaphor, but in social life, there is no one-to-one mapping of environmental complexity to controllers, so we end up with very complex patterns of attenuation and amplification which can create dangerous positive feedback to the system. We are living through this in many ways at the moment - not just in the climate crisis, but in the political feedback from our online communication, the economic system producing runaway inequality, the Ukraine war, and so on.

Buckminster Fuller drew attention to a different kind of cybernetic feedback mechanism - the trimtab. Trimtabs are the small hinged surfaces on the trailing edges of wings and rudders which move as the plane is flying, and which serve to make the pilot's job of steering and stabilising the plane easier. In other words, the trimtab is part of a mechanism which connects the pilot to the machine. It is not self-enclosed like the Watt Governor, but translates the environmental conditions into a potentially controllable situation which would otherwise be very difficult to control. 

Buckminster Fuller thought so much of trimtabs that he had "Call me trimtab" written on his gravestone. He argued that the most important part of steering happens not at the front, but at the back, and that each of us could be part of a "social trimtab", feeding information about environmental conditions in a way which could facilitate effective steering. 

The diagnostic AI which our team and I have created basically works like this. In our case, the "pilot" is the doctor, but the pilot's job is to steer through different environmental conditions in terms of differing degrees of prevalence, diagnostic certainty, organisational complexity, health economics, risk and potential positive feedback. Achieving this has entailed a very different approach to AI. Conventional AI is simply used to provide "answers", often with the intention of replacing the "pilot". That's not a good idea, because it throws away huge amounts of information which can be critical to understanding the nature of the challenges we face. The trimtab (and our trimtab AI), by contrast, preserves information, transforming complex data into the conditions wherein effective decisions can be made. 
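
To caricature the difference in a few lines (my own illustration - the labels and numbers are invented, and this is not the project's actual code):

```python
# "Answer-giving" AI discards information; a trimtab-style presentation keeps
# the whole distribution and its uncertainty, so the clinician-pilot can steer.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

labels = ["benign", "borderline", "malignant"]    # invented categories
p = softmax(np.array([2.1, 1.9, -0.5]))           # hypothetical model output

# Conventional AI: a single answer, uncertainty thrown away.
print("answer:", labels[int(p.argmax())])

# Trimtab-style AI: the full distribution plus its entropy (in bits) - a
# measure of how uncertain the model is, which is what the pilot needs.
entropy = float(-(p * np.log2(p)).sum())
print("distribution:", dict(zip(labels, p.round(3))))
print("uncertainty (bits):", round(entropy, 3))
```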

I've always felt that the most important thing education should do is to harness the uncertainty of individuals, because this information is information about the nature of our environment. What I've never been entirely clear about is how this "harnessing" looks - lots of forums, debate, etc, don't seem to work and in fact amplify social complexity. So we need a way of organising the many different signals coming from society as a means of facilitating effective steering for the planet (or Spaceship Earth as Fuller said). This may be the most powerful and effective use of AI. 

Friday, 13 May 2022

Dialogical Design

Thinking about thinking may be essential to dialogue. This isn't because dialogue is solipsistic - although an internal conversation might well be. It is more because dialogue involves the creation of uncertainty: either uncertainty within oneself or the social uncertainty which new utterances reflecting internal uncertainty create in communication. Dialogue is what we do to manage uncertainty, and thinking about thinking is how we generate uncertainty. Since thought and utterance are both processes now mediated by technology, this "thinking about thinking" is increasingly "thinking about technology". 

In his famous essay "The Question Concerning Technology", Heidegger sets out to make this point at the very beginning. Before we get to the rather complicated terminology that he uses to describe the phenomenon of technology ("enframing", etc.), he makes a point relating to "thinking about thinking":

"In what follows we shall be questioning concerning technology. Questioning builds a way. We would be advised, therefore, above all to pay heed to the way, and not to fix our attention on isolated sentences and topics. The way is a way of thinking. All ways of thinking, more or less perceptibly, lead through language in a manner that is extraordinary. We shall be questioning concerning technology, and in so doing we should like to prepare a free relationship to it. The relationship will be free if it opens our human existence to the essence of technology. When we can respond to this essence, we shall be able to experience the technological within its own bounds."

This is Heidegger in dialogue with himself in the context of uncertainty created by technology and existence. Irrespective of what we might think about his eventual conclusions, this is a supreme example of what it is to think. 

If we were to say that thinking about thinking is essential to dialogue, what would we say if there was utterance without thought about thought? Could this be dialogical? If not, why not?

At a recent online event, Rupert Wegerif made the point that fascism is not dialogical, and that those instances of fascist/extreme right-wing posting on Twitter weren't dialogical, while other interactions on Twitter almost certainly are. Is it the recursiveness of thought which distinguishes these things? 

An interesting question arose in this session as to whether TikTok was dialogical. TikTok appears to be the epitome of what Heidegger would call "falling" - the kind of thoughtless action that we engage in where the "readiness-to-hand" of the technology masks the world as it really is: like drone operators staring at computer screens and pressing "fire". We have the same experience in other forms of engagement with technology where we go into "autopilot" (driving is a good example). Is TikTok autopilot? 

My colleague Danielle Hagood objected to the idea that TikTok wasn't dialogical. Part of TikTok's appeal lies in the counterpoint between the fallenness of swiping through videos and an inquiry into the behaviour of the algorithm. I think she's right: this inquiry into the behaviour of the machine - which is also an inquiry into our own thinking and reactions - is dialogical. 

I suspect it is a category mistake to talk about dialogue being facilitated by particular platforms or technological activities - as if one activity is dialogical and another isn't. That sounds rather like Theodor Adorno's criticism of pop music: that the only music that was worthwhile was that of the Second Viennese School. We (I) don't want to become a digital Adorno, sneering at all the fun people have with technology! All digital activities (all activities) provide the stimulus for thought to think about itself: it is this that makes them potentially dialogical. 

This is important when we consider conversation as an activity. Not all conversations are dialogues, for exactly the same reason that not all technological activities are dialogues. Rupert's point about fascism is spot-on here. Fascism is fascism because it has no reflexivity about its own thought. To live in a non-dialogical world is to be prevented (through fear) from reflecting on our own thought, and to be prevented from uttering in public the inner doubts which would contribute to external uncertainty. We see both these conditions in Russia at the moment. Of course, the Russian state proclaims a rationale for what it is doing - but its manipulation of the media is characterised by the generation of non-questions in the public domain - often concerning the use of nuclear weapons. It admits (and permits) no genuine articulation of uncertainty.

This anti-dialogical condition is designed. So could we design an opposite condition: a condition wherein thought is encouraged to think about itself? 

I think the answer to this question is "yes", but I think there is no way of doing so without it entailing a reflection on technology. Thought is inseparable from technology - from the medium, the technique, the technics and the politics. The condition for dialogue is a condition where the uncertainties that must be generated by dialogical processes are generated by unpicking the technological domain as much as the psychological and social domains. 

We need to think of a new kind of technology which can support this: something where the action taken with a tool leads to reflection on the operation of that tool and its relation to thought. This may be where the current drive for digitalisation in education takes us. I'd be tempted to call it "Second-order educational technology".

Monday, 9 May 2022

Learning technology and "Learning technology"

I have been heavily involved in promoting digitalisation at the University of Copenhagen for a year or so. When people ask what this is really about, I have found the simplest answer is to say that it is about encouraging students and teachers to look "beyond the screen". I often demonstrate this by simply pressing "CTRL-SHIFT-I" on my keyboard in a browser to reveal the Javascript console. It's perhaps analogous to producing a microscope in the natural environment. An invitation to ask more questions and explore new possibilities: to ask "What if...?"  

In the same way that we would encourage people towards deeper self-examination as part of their education, so I think it is becoming more important that a (related) technological-critical examination takes place within the digital environment in which we all swim. For the generation of students we are now teaching, the digital environment is a natural environment, whether they are comfortable in it or not - after all, there were always plenty of natives of the non-digital natural environment who were never comfortable in it!

Just as we would encourage people to explore and inquire about the natural environment, it seems reasonable to extend this to an inquiry into this "new nature", which is really as much an inquiry into ourselves as it is an inquiry into technology. Indeed, the central issue of digitalisation is that it concerns the boundary between self and world, which education has so far been able to wash over. 

Faced with the mind-bending questions about identity and environment, sociology and psychology, it is much easier to stay rooted in traditional disciplines - both for students and staff. Moreover, our institutions have constructed themselves around these disciplinary sanctuaries. There are many reasons for an institution both to encourage "digitalisation" and to resist it. The encouragement comes from a perception of existential threat: if traditional distinctions break down, then the raison d'etre for the institution is threatened, while if institutions fail to help students to adapt to the digital world outside, then they will be seen to be irrelevant. Digitalisation sits in the same camp as many other distinction-blurring issues: sustainability, decolonisation, and gender fluidity. These are all, I suspect, manifestations of deeper processes in our changing biological relation with our environment, our history, our institutions, and each other.  

The institution responds to this not with any fundamental organisational adaptation, but rather by declaring these things as "issues" or "agendas". So digitalisation, along with so many other things, has become an "agenda". Institutionally, "agendas" can be addressed by sticking something new on the curriculum, as if all that is required is a "bit more knowledge". So writing an essay on transgender rights (for example) will somehow address deep-seated and culturally established norms of bias and (often) bigotry. Is compliance with the "digitalisation agenda" merely satisfied with an essay about data privacy in the metaverse? What use is that? It keeps everyone busy, but does little to address what is really happening. 

So what about technology in this "learning"? What about "learning technology" for "learning technology"? Institutions have adopted a particular position with regard to technology for education which is now causing problems in its thinking about adapting to the challenge of the digital environment. Partly this has been caused by the commodification of technology in education, which has actively prevented students looking "behind the screen". Yet if we actually try to engage students in "looking behind the screen" there are some pedagogical challenges which have yet to be solved, but which are critically important. They might be listed:

  • How to avoid this becoming "computer science"?
  • How to make technical engagement personally meaningful?
  • How not to alienate students and teachers?
  • How to adapt assessment in ways which encourage technical exploration and creativity?
  • How to diversify activities so that students with different skills and dispositions can engage in activities that are right for them?
  • How to maintain interest and creativity when technical engagement often involves a quick descent into (often confusing) technical details which are far-removed from intended aims?
  • How to connect technical engagement to personal identity and spiritual development?
  • How not to throw out the disciplinary baby with the bathwater - transdisciplinarity cannot replace disciplinary expertise 
These are both pedagogical questions and structural questions within the university. They are challenges related to the act of "learning technology". There is, as yet, no sign that universities are willing to consider structural changes - particularly in assessment practices. So digitalisation will continue to sit as another "agenda". They want "learning technology" but cannot find a way of supporting the deeper process of personal inquiry involved in learning technology. 

But how could it be different? Perhaps one way forward is to recognise how all the agendas piling onto education are symptomatic of a structural failing of the institution in a fast-changing world. It's like the British royal family now being increasingly confronted with the legacy of slavery - a legacy whose 200-year-old epigenetic inheritance biology is now demonstrating in heightened levels of diabetes, hypertension, stress and depression among black communities (see, for example, Post Traumatic Slave Syndrome | Dr. Joy DeGruy (bethehealing.com)). This is how science really challenges existing structures and practices. 

So what do we do? Should we throw away those structures? (The royals, perhaps!) More constructively, we should invite innovative systemic approaches to restructuring, to provide the space for personal exploration in a world which is moving fast away from traditional understanding. Assessment is where I would start - it is the principal constraint that keeps everything else stuck in its ancient shape. 


Saturday, 30 April 2022

Digital Shadow

Carl Jung always warned us not to overlook "the shadow" - the archetype of the unconscious from which the conscious mind dissociates itself. Overlooking it will inevitably mean that at some point the shadow takes over and causes a crisis of far greater proportions than might have resulted had it been negotiated with sooner. Recent events and world history suggest that these shadow dynamics are reliable. There is no world which doesn't cast a shadow. This is why Shakespeare remains our most reliable guide to human behaviour. 

It's curious, perhaps, that our digital technologies exist through light. We may understand well enough that the shadow cast by the light of the screen lies within us. But how do we negotiate our shadow in this digital light? The psychoanalytic answer is to talk about it. Perhaps drama and storytelling are ways we can share experiences of the shadow (just as they were in the classical world) - but they can objectify the shadow in the collective imagination in a way that leaves us complacent that we have put the shadow in its place. This "collective shadow" can be manipulated - something we are seeing very clearly in Russian state propaganda at the moment. The shadow becomes the "other" rather than part of each of us. We see this "othering" online.  

The problems of fake news in the Trump campaign, Brexit, etc. are striking examples of this kind of "othering" of the shadow. When Everett Hughes asked of the Germans during the Second World War, "How could such dirty work be done among and, in a sense, by the millions of ordinary, civilized German people?", his answer was that if the dynamics of "in- and out-groups" are organised such that one group is "in" and everyone else "out", then the capacity for good people to do dirty work is increased. Hughes argues that in a richly developed society there are many small social groups, all of which have their "in" and "out" dynamics (see Good People and Dirty Work on JSTOR):

"A society without smaller, rule-making and disciplining powers would be no society at all. There would be nothing but law and police; and this is what the Nazis strove for, at the expense of family, church, professional groups, parties and other such nuclei of spontaneous control. But apparently the only way to do this, for good as well as for evil ends, is to give power into the hands of some fanatical small group which will have a far greater power of self-discipline and a far greater immunity from outside control than the traditional groups. The problem is, then, not of trying to get rid of all the self- disciplining, protecting groups within society, but one of keeping them integrated with one another and as sensitive as can be to a public opinion which transcends them all. It is a matter of checks and balances, of what we might call the social and moral constitution of society"

It is not the beliefs of individuals which blind them to the shadow, but the dynamics of society. Here it is important to reflect on what the internet has done to those dynamics. The disembodied Balkanisation and digital othering which characterises online communities does not constitute the "nuclei of spontaneous control".  As we have seen, instead it renders communities susceptible to fanatical control because each community objectifies its shadow as "other", rather than being able to see it in themselves. If Elon Musk is really serious about improving Twitter, this is what should be understood. Importantly, it is not specifically about "algorithmic control", "AI" or even "confirmation bias". Indeed those critiques are an example of "othering" of technology - which itself contributes to the problem. 

One of the deepest challenges I think we face today is that confronting our shadows cannot be done without an understanding of technics. Digitalisation is almost always presented in its light - we do it to "innovate" and "create". But no innovation or creative process comes without confronting our shadows, and education pays scant regard to this. 

The essence of digital creativity - like the essence of creativity in general - is the breakdown that occurs as we dig beneath the interface and try to grapple with the raw bits of mechanism that sit behind it. It's a psychological struggle - what was working, becomes broken. Often communication and sometimes motivation breaks down as we feel our way in the dark. In this digital shadow land, distinctions become blurred, but in the process, new communications are produced which gradually reconstruct something. And even if what is reconstructed is little different to what existed before, the confrontation with the shadow changes us. 

Online communities are susceptible to fanatical control because they have no way of talking to each other about their shadows. To dig beneath the digital interface is to recognise that, not only are we made of one physiology, but our communications are mediated through a unified computational architecture which ultimately is created by that physiology. More importantly, it is not the only technological architecture that is possible, and ours is not the only feasible technological world. The shadow lurks at all levels: the social media shadows are reflections of the shadows of each cell in our body. Our cells are rather better at dealing with their "shadows" than we are, with all our technology and communication. Understanding why and how is urgent - far more so than when Ivan Illich expressed similar arguments to politicise technology in the 1970s. That call has been misinterpreted: politicising technology is about getting technical.