Wednesday, 31 July 2019

Fractals of Learning

I've been doing some data analysis on students' responses to a comparative judgement exercise I ran with them last year. Basically, they were presented with pairs of documents on various topics in science, technology and society, and asked "Which do you find more interesting and why?"

The responses collected over two weeks from about 150 students were surprisingly rich, and I've become interested in drawing distinctions between them. Some students are clearly transformed by many of the things they read about (and this was in the context of a face-to-face course which also gravitated around these topics), and their answers reflect an emerging understanding. Other students, while they might also appear to engage with the process, respond rather more shallowly.

To explore this, I've looked at a number of dimensions of their engagement and plotted the shifts in entropy in each dimension. So we can look at the variety of documents or topics they talk about: some students stick to the same topic (so entropy stays continually low), while others choose a wide variety (so entropy jumps around). The amount of text they write also has an entropy over time, as does the entropy of the text itself. This last one is interesting because it can reveal key words in the same way that a word cloud might: key concepts get repeated, so the entropy is reduced.
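
For anyone curious about the mechanics, here is a minimal sketch in Python of the kind of per-window entropy calculation I mean. The field names and the toy responses are hypothetical illustrations, not my actual pipeline.

import math
from collections import Counter

def shannon_entropy(values):
    # Shannon entropy (in bits) of the distribution of values in a window of responses
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A hypothetical window of one student's responses (fields invented for illustration)
window = [
    {"topic": "AI", "text": "I found the argument about responsibility surprising"},
    {"topic": "AI", "text": "Again the question of responsibility comes up"},
    {"topic": "genetics", "text": "This connects back to the responsibility question"},
]

topic_entropy = shannon_entropy(r["topic"] for r in window)
length_entropy = shannon_entropy(len(r["text"].split()) // 5 for r in window)  # banded response lengths
word_entropy = shannon_entropy(w.lower() for r in window for w in r["text"].split())

print(topic_entropy, length_entropy, word_entropy)

Tracked over successive windows of responses, the shifts in values like these are what the graphs below trace.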

What, then, would we expect to see of a student gradually discovering some new concept which helps them connect many topics? Perhaps an initial phase of high entropy in document choice, high entropy in the concepts used, and low entropy in the amount of text (responses might be of similar length). As time goes on, a concept might assert itself as dominant across a number of responses. The concept entropy goes down, while the document entropy might continue to oscillate.

The overall pattern is counterpoint, rather like this graph below:


The figure above represents the positive and negative shifts in entropy of the main variables (running across the top), followed by the positive and negative shifts in the relative entropy of the variables with respect to one another. Pattern changes further to the right indicate increasing "counterpoint" between the different variables; changes further to the left indicate change concentrated in particular variables. Time runs from top to bottom, measured in the slots in which responses were made.


Not all the graphs are so rich in their counterpoint. This one (admittedly with fewer comparisons) is much more synchronous. There's a "wobble" in the middle where things shift in different directions, while at the end the comments on the documents, the type of documents and the type of topics all vary at once. If a common concept had been found here, one would expect the entropy of the comments to be lower. But the graph and the diagram provide a frame for asking questions about it.
This one is richer. It has a definite structure of entropies shifting up and down, and at the end a kind of unity is produced. Looking at the student comments, it was quite apparent that a number of concepts had made an impact.

It doesn't always work as a technique, but there does appear to be a correlation between the shape of these graphs and the ways in which the students developed their ideas in their writing, and that correlation merits further study.

More interestingly, this one (below) produced a richly contrapuntal picture, but when I looked at the data, it had been collected over a very short period of time, meaning that it was the result of a one-off concentrated effort rather than a longitudinal process. But that is interesting too, because there is a fractal structure to this stuff. A small sample can be observed to display a pattern which can then be situated within a larger context where that pattern might be repeated (for example, with a different set of concepts), or it might turn out to be an isolated island within a larger pattern which is in fact quite different.
Either way, the potential is there to use these graphs as a way of getting students to reflect on their own activities. I'm not sure I would go so far as to say "your graph should look like this", but awareness of the correlations between intellectual engagement and patterns of entropy is an interesting way of engaging learners in thinking about their own learning processes. Actually, it might also be possible to produce a 3D landscape from these diagrams, and from that a "Google map" of personal learning: now that is interesting, isn't it?

Monday, 29 July 2019

Recursive Pedagogy, Systems thinking and Personal Learning Environments

Most of us are learning most of what we know, what we can do, what we use on an everyday basis, what we talk about to friends and colleagues, online. Not sat in lectures, gaining certificates, or sitting exams. Those things (the formal stuff) can provide 'passports' for doing new things, gaining trust in professional colleagues, getting a new job. But it is not where the learning is really happening any more. The extent to which this is a dramatic change in the way society organises its internal conversations is remarkably underestimated. Instead, institutions have sought to establish the realm of 'online learning' as a kind of niche - commodifying it, declaring scarcity around it, creating a market. This isn't true just of educational institutions, of course. Social media corporations saw a different kind of marketing opportunity: to harness the desire to learn online into a kind of game which would continually manipulate and disorient individuals in the hope that they might buy stuff they didn't want, or vote for people who weren't good for them. But the basic fact remains: most of us are learning most of what we know online.

That means machines are shaping us. One senses that the self is increasingly constituted by machines. I wonder if the slightly paranoid reactionaries who worry about the power of digital 'platforms' are really anxious about an assault on what they see as 'agency' and 'self' by corporations. But are we so sure about the nature of self or agency in the first place? Are we being naive to suppose autonomous agents acting in an environment of machines? Wasn't the constitution of self always trans-personal? Wasn't it always trans-personal-mechanical? The deeper soul-searching that needs to be done is a search for the individual in a world of machines. Some might say this is Latour's project - but seeing 'agency' everywhere is not helpful (what does it mean, exactly?). Rather, we should look to Gilbert Simondon, Luhmann, Kittler, and a few others. There's also a biological side to the argument which situates 'self' and consciousness with cells and evolutionary history, not brains. That too is important. It's a perspective which also carries a warning: that the assertion of agency, autonomy and self against the machine is an error in thinking which produces in its wake bad decisions, ecological catastrophe and the kind of corporate madness which our platform reactionaries complain about in the first place!

Having said this, we then need to think about 'personal' learning in a context where the 'personal' is constituted by its mechanical and social environment. Machine learning gives us an insight into a way of thinking about 'personal' learning. Deep down, it means 'system awareness': to see ourselves as part of a system which constitutes our being aware of that system. It's recursive.

Some people object to the word 'system', thinking that it (again) denies 'agency'. Ask them to define what they mean by agency, and we all end up confused. So it's useful to be a bit clearer about 'system'. Here's my definition:

To think of 'systems' is a thought that accepts that the world is produced by thought.

This is why I'm a cybernetician. I think this is critically important. To deny that thought produces the world is to set thought against those things which constitute it. When thought is set against that which constitutes it, it becomes destructive of those things it denies: the planet, society, love.

So what of learning? What of learning online? What of personal learning?

It's about seeing our learning as a recursive process too. To study something is to study the machines through which we learn it. It may be that the machine learning revolution will make this more apparent, for the machines increasingly operate in the same kind of way that our consciousness operates when learning the stuff that is taught by the machines. It's about closing the reflexive loop.

So what about all that stuff about certificates, trust, passports, etc? It seems likely to me that closing the reflexive loop will produce new ways of codifying what we know: a kind of meta-codification of knowledge and skill. Against this, the institutional stamp of authority will look as old-fashioned as the wax seal. 

Monday, 15 July 2019

Interdisciplinary Creativity in Marseille

Last week I was lucky enough to go to this year's Social Ontology conference in Marseille. I've been going to southern France for a few years now to sit with economists and management theorists (no cyberneticians apart from me!) and talk about everything. Academic "authority" was provided by Tony Lawson (whose Cambridge social ontology group was the model for the meeting) and Hugh Willmott, whose interdisciplinarity helped establish Critical Management Studies. Three years ago I hosted the event in Liverpool, and more and more it feels like a meeting of friends - a bit like the Alternative Natural Philosophy Association (http://anpa.onl), which I'm hosting in Liverpool in August, but with management studies instead of physics.

This year Tony didn't come, but instead we had David Knights from Lancaster University. It's always been an intimate event - and usually the better for it - where the discussion has been of a very high level. Gradually we have eschewed papers and focused entirely on two days of dialogue around a single topic. This year's topic was Creativity.

If I'd read David Bohm before I'd started coming to these conferences, I would have known exactly what this was and why it was so good. Now I know Bohm, and I know he would have absolutely understood what we were doing. And with a topic like creativity, understanding what we were doing, where we were going, or where we would end up, was often unclear. Dialogue is a bit scary - it's like finding your way through the fog. Sometimes people get frustrated, and it is intense. But it is important to have faith that what we manage to achieve collectively is greater than what could be achieved by any individual.

So what conclusions did we reach? Well, I think I can sum up my own conclusions:
  • Creativity is not confined to human beings. It is a principle of nature. It may be the case that creative artists tune in to natural processes, since this would explain how it is that their labours can result in something eternal.
  • Creativity is connected to coherence. It is an expression of fundamental underlying patterns. In an uncertain environment, the necessity for the creative act is a necessity to maintain coherence of perception.
  • Creativity can be destructive. However (my view) I think that "creative destruction" needs unpicking. Creativity may always create something new which is additional to what was there before. This creates an increase in complexity and a selection problem. The "destruction" is done in response to this increase in complexity - often by institutions ("from now on, we are going to do it like this!")
  • The difference between creativity with regard to technical problems and creativity in human problems was discussed. Technical creativity is also driven by the search for individual coherence - particularly in addressing ways of managing complexity - but it loses sight of the institutional destructive processes that may follow in its wake.
  • The conversion of everything to money is, I think, such a "technical" innovation. On the one hand, money codifies expectations and facilitates the management of complexity. However, it prepares the way for the destruction of richness in the environment. 
  • The idea of "origin-ality" was explored. "Original" need not be new, but rather, connected to deeper "origins" in some way. This relates directly to the idea of creativity as a search for coherence.
  • Time is an important factor in creativity - it too may feature as a fundamental dimension in the coherence of the universe to which artists respond (particularly musicians, dancers, actors). Time raises issues about the nature of anticipation in aesthetic experience, and the perception of "new-ness".
  • A genealogy of creativity may be necessary - a process of exploring through dialogue how our notions of creativity have come to be. 
  • The genealogical issue is important when considering the role of human creativity in failures of collective decision-making and the manifest destruction of our environment. I'm inclined to see the issue of genealogy as a kind of laying-out of the levels of recursion in the topics and discourses of creativity, and this laying out may be necessary to provide sufficient flexibility for humankind to address its deepest problems.
  • Psychoanalytic approaches to creativity are useful, as are metaphors of psychodynamics. Michael Tippett's discussion of his own creative process had a powerful effect on everyone. However, the value of psychodynamics may lie in the fact that similar mechanisms are at work at different levels of nature (for example, cellular communication).
[Embedded video: "Michael Tippett Interview.mov" from Directors Cut Films on Vimeo]

I took my Roli Seaboard with me, which inspired people to make weird noises. Music is so powerful for illustrating this stuff, and I invited people to contribute to a sound collage of the conference... which you can hear here. Actually, it's the first time I've heard a reflexology technique being used on the Seaboard!



Tuesday, 9 July 2019

Creativity and Novelty in Education and Life

A number of things have happened this week which have led me to think about the intellectual efforts that academics engage in to make utterances which they claim to be insightful, new or distinct in some other way. The pursuit of scholarship seems to result from some underlying drive to uncover things, the communication of which brings recognition by others that what one says is in some way "important" or "original", and basically confers status. Educational research is particularly interesting in this regard, since very little that is uttered by anyone is new, yet it is often presented as being new. I don't want to criticise this kind of fakery in educational research (but it is fakery), of which we are all guilty, but rather to ask why we are driven to do it. Fundamentally, I want to ask "Why are we driven to reclaim ideas from the past as new and rediscovered in the present?" Additionally, I think we should ask about the role of technology in facilitating this rediscovery and repackaging of the past.

Two related questions accompany this. The first is about "tradition". At a time when we see many of the tropes of statehood, politics and institutional life becoming distorted in weird ways (by the Trumps, Farages and co), what is interesting is to observe what is retained in these distortions and what is changed. Generally it seems that surface appearance is preserved, but underlying structure is transformed: from structures that were once distributed, engaging the whole community in the reproduction of rituals and beliefs, to structures which leave a single centre of power responsible for that reproduction. This is, in a certain sense, a creative act on the part of the individual who manages to subvert traditions and bend them to their own will.

Central to this distortion process is control of the media. Technology has transformed our communication networks which, before the internet, were characterised by personal conversations occurring within the context of global "objects" such as TV and newspapers. Now the personal conversations are occurring within the frame of the media itself. Media technologies determine the way the communication game is played, and increasingly intrude on personal conversations where personal uncertainties could once be resolved. This intrusion increasingly serves to sway conversation in the direction of those who control the media, leaving personal uncertainties either unresolved or deliberately obfuscated. The result is a breakdown in mental health and an increasing lack of coherence, alongside increased control by media-controlling powers.

Where do creativity and novelty sit in all of this? Well, they too are a kind of trope. We think we are rehearsing being Goethe or Beethoven, but while the surface may bear some similarity, the deep structure has been rewired. More importantly, the university has become wired into this mechanism too. Is being creative mere appearance, in a way that it wasn't in a pre-internet age?

At the same time, there's something about biology which is driven to growth and development to overcome restriction. Our media bubble is a restriction on growth, and right now it looks menacing. The biological move is always to a meta-level re-description. Epochs are made when the world is redescribed. But we cannot redescribe in terms of "creativity" or "innovation", because those things are tropes wired into the media machine. Seeing the media machine for what it is may present us with some hope - but that is very different from our conventional notions of creativity.

Sunday, 7 July 2019

The Preservation of Context in Machine learning

I'm creating a short online course to introduce staff in my faculty to machine learning. It's partly about awareness-raising (what's machine learning going to do to medicine, dentistry, veterinary science, psychology, biology, etc?), and partly about introducing people to the tools which are increasingly accessible and available for experimentation.

As I've pointed out before, these tools are becoming increasingly standardised, with the predominance of Python-based frameworks for creating machine learning models. Of course, Python presents a bit of a barrier - it's so much better if you can do it on the web, and indeed, if you could do it in a desktop app built on web technologies like Electron.js. So that is what I'm working on.

Putting machine learning tools in the hands of ordinary people is important. The big internet corporations want to present a message that only they have the "big data" and expertise sufficient to really handle the immense power of AI. I'm not convinced. First of all, personal data is much "bigger" than we think, and secondly, our machine learning tools are hungry for data partly because we don't fully understand how they work. The real breakthrough will come when we do understand how they work. I think this challenge is connected to appreciating the "bigness" of personal data. For example, you could think of your 20 favourite films and rank them in order. How much information is in there?

Well (without identifying my favourites), we have
F
B
C
A
E
... etc

Now if we consider that every item in the ranking stands in a relation to every other item, then the amount of data is actually the number of permutations of pairs of items. So,

F B
F C
F A
F E... and so on
That's 20!/(20-2)!, or 380 rows of data from a rank list of 20 items.
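
A quick sanity check of that arithmetic in Python (the film names here are placeholders, of course):

from itertools import permutations
from math import factorial

ranking = ["film_%d" % i for i in range(1, 21)]   # 20 films in rank order
pairs = list(permutations(ranking, 2))            # every ordered pair drawn from the list

print(len(pairs))                                 # 380
print(factorial(20) // factorial(20 - 2))         # 380, i.e. 20!/(20-2)!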

So could you train an algorithm to learn our preferences? Why not? Given a new item, could it guess which rank that item might occupy? Well, it seems it can have a pretty good stab at it.
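
To show what I mean by "having a stab at it", here is one hedged sketch of how such a preference model could be set up. Everything here is an assumption for the sake of illustration: the film "features" are random placeholders, and scikit-learn's logistic regression trained on pairwise feature differences is just one of many ways of learning from ranked pairs.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_films, n_features = 20, 5

# Hypothetical feature vectors for the 20 ranked films (rank 0 = favourite)
films = rng.normal(size=(n_films, n_features))

# Turn the ranking into pairwise training data: label 1 if the first film
# of the pair is ranked higher (i.e. preferred), 0 otherwise
X, y = [], []
for i in range(n_films):
    for j in range(n_films):
        if i != j:
            X.append(films[i] - films[j])
            y.append(1 if i < j else 0)

model = LogisticRegression().fit(np.array(X), np.array(y))

# For a new film, count how many of the known films it is predicted to "beat"
new_film = rng.normal(size=n_features)
wins = int(sum(model.predict((new_film - films[j]).reshape(1, -1))[0] for j in range(n_films)))
print("Estimated rank position: about", n_films - wins, "of", n_films)

In practice the training data would be the 380 pairs above, with features that actually describe the films; the point is only that rank data can be turned into something a standard classifier can learn from.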

This is interesting because if machine learning can estimate how "important" we think a thing is, and we can then refine this judgement in some way (by adjusting its position), then something is happening between the human and the machine: the machine is preserving the context of the human judgement which is used to train it.

The predominant way machine learning is currently used is to give an "answer": to identify the category of thing an item is. Yet the algorithm that has done this has been trained by human experts whose judgements of categories are highly contextual. By giving an answer, the machine strips out the context. In the end, information is lost. Using ranking, it may be possible to calculate how much information is lost, and from there to gain a deeper understanding of what is actually happening in the machine.
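
One back-of-envelope way of putting numbers on what might be lost (my own framing, not a claim about any particular system): a single category label chosen from k classes carries at most log2(k) bits, whereas a full ranking of n items can carry up to log2(n!) bits.

from math import factorial, log2

n_items, n_classes = 20, 10
print(log2(factorial(n_items)))   # roughly 61 bits available in a full ranking of 20 items
print(log2(n_classes))            # roughly 3.3 bits in a single answer chosen from 10 categories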

Losing information is a problem in social systems. Where context is ignored, tyranny begins. I think this is why everyone needs to know about (and actively engage with) machine learning.

Saturday, 6 July 2019

Communication's Illusion

There was a Nostradamus prediction revealed at the beginning of this year that 2019 was the year we became closer to animals (see https://www.yearly-horoscope.org/nostradamus-predictions/ for one of the many click-bait references to this). One interpretation is that we might learn to talk to animals...

The idea interests me because it invites the question of how the noises we make when we talk compare to the noises that animals make. Because we are largely obsessed with processing "information" and "meaning" in our communication (that is, attenuating the richness of sounds to codes), we tend to be oblivious to the amount of redundancy our communication entails, and we also assume that because animal communication has so much redundancy, it carries less meaning. The redundancy of animal communication is much more obvious: why doesn't a crow "caw" only once? Wouldn't one "caw" do the job? Why does it regularly do it 4 or 5 times (or more)? Why with the same rhythmic regularity?
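
One crude way to put a number on this - a toy of my own, using Shannon's sense of redundancy as one minus the ratio of actual to maximum entropy:

import math
from collections import Counter

def redundancy(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0  # treat a single repeated symbol as maximally redundant
    return 1 - h / h_max

print(redundancy("caw caw caw caw caw".split()))                      # 1.0: pure repetition
print(redundancy("the cat sat on the mat and the dog sat".split()))   # lower, but not zero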

Our understanding of information and meaning in human communication is far from complete, and certainly for the latter, the scientific consensus seems to be pointing to the fundamental importance of redundancy, or constraint, in the establishment of meaning. Animal communication is likely to be just as meaningful to animals as our communication is to us. Indeed, our perception of "consciousness" among animals seems to depend on our observing animals operating within a lifeworld which we ourselves recognise - like the ape who was filmed using a smartphone.

One problem we have in appreciating this is the belief that human consciousness is exceptional. This single belief could turn out to be the greatest scientific error, from which the destruction of our environment stems. It may be as naive as believing the earth to be the centre of the universe. In biology, many believe DNA is the centre of the universe of life and consciousness, and that human DNA is special. I'm with John Torday, who argues this view is ripe for a similar Copernican transformation.

I'm making a lot of weird music at the moment using a combination of the piano and the Roli Seaboard. The Seaboard can create disturbed and confused "environments"; the piano tries to create redundancies in the patterns of its notes, harmonies, rhythms, and so on. As living things, all of us animals inhabit a confusing environment. The creation of redundancy in communication seems fundamental to the battle to maintain coherence of understanding and effectiveness of coordination. So birds tweet their rhythmic patterns... and I blog! (and others Tweet!). Even when we talk of deep things like philosophy, we are usually repeating what has gone before. But somehow we need to do it. We need the redundancy.

Do we only think we are saying "something" - some key "new" information? Do we only think this because we believe our consciousness is "special"? This is an uncomfortable thought. But I can't help wondering whether "talking to animals" is less about being Dr Doolittle and more about realising how much more like birds our human communication really is.