Wednesday, 8 May 2019

Bach as an anticipatory fractal - and thoughts on computer visualisation

I've got to check that I've got this right, but it seems that an algorithmic analysis I've written of a Bach 3-part invention reveals a fractal. It's based on a table of entropies for different basic variables (pitch, rhythm, intervals, etc.). An increase in entropy is a value for a variable "x", while a decrease in entropy is a value for "not-x". Taking the variables as A, B, C, D, etc., there are also values for the combined entropies of AB (and not-AB), AC, BC, etc., and also for ABC, ABD, BCD, and so on.
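As a sketch of the kind of calculation behind the table (the note values here are invented for illustration, and the whole passage is treated as one window rather than sliding through the piece), the Shannon entropies of the basic variables and all their combinations can be computed like this:

```python
from itertools import combinations
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a sequence of symbols."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical basic variables for a short passage
variables = {
    "pitch":  [60, 62, 64, 62, 60, 67, 65, 64],
    "rhythm": [1, 1, 2, 1, 1, 2, 1, 1],
}
# Intervals derived from successive pitches
variables["interval"] = [b - a for a, b in zip(variables["pitch"], variables["pitch"][1:])]

# Entropy of each basic variable and of every combination (AB, AC, ..., ABC)
table = {}
names = sorted(variables)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        shortest = min(len(variables[name]) for name in combo)
        # Joint entropy: treat the tuple of values at each position as one symbol
        joint = list(zip(*(variables[name][:shortest] for name in combo)))
        table["+".join(combo)] = entropy(joint)

for name, h in sorted(table.items(), key=lambda kv: kv[0].count("+")):
    print(f"{name}: {h:.3f}")
```

Reading the printed rows left-to-right by number of combined variables gives exactly the simple-to-complex index described below; doing this for successive windows of the piece gives the top-to-bottom time axis.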

The raw table looks a bit like this:
But plotting this looks something like this:

What a fascinating thing that is! It should be read from left to right as an index of increasing complexity of the variables (i.e. more combined variables), with those at the far left the simplest basic variables. From top to bottom is the progress in time of the music. 

My theory is that music continually creates an anticipatory fractal, whose coherence emerges over time. The fractal is a selection mechanism for how the music should continue. As the selection mechanism comes into focus, so the music eventually selects that it should stop - that it has attained a coherence within itself. 

Need to think more. But the power of the computer to visualise things like this is simply amazing. What does it do to my own anticipatory fractal? Well, I guess it is supporting my process of defining my own selection mechanism for a theory!

Tuesday, 7 May 2019

"Tensoring" Education: Machine Learning, Metasystem and Tension

I've been thinking a lot about Buckminster Fuller recently, after I gave a talk to architecture students about methods in research (why does research need a method?). One of the students is doing an interesting research project on whether tall buildings can be created in hot environments without requiring artificial cooling systems. The tall building is a particular facet of modern society which is overtly unsustainable: we seem only to be able to build these monoliths and make them work by pumping a huge amount of technology into their management systems. Inevitably, the technology will break down, or become too expensive to run or maintain. One way of looking at this is to see the tall building as a "system", which makes its distinction between itself and its environment, but whose distinction raises a whole load of undecidable questions. Technologies make up the "metasystem" - the thing that mops up the uncertainty of the building and keeps the basic distinction it made intact. Overbearing metasystems are the harbinger of doom - whether they are in a passenger plane (the Boeing 737 Max story is precisely a story about multiple levels of overbearing metasystems), in society (universal credit, surveillance), or in an institution (bureaucracy).

Buckminster Fuller made the distinction between "compression" and "tension" in architecture. We usually think of building in terms of compression: that means "stuff" - compressed piles of bricks on the land. His insight was that tension appeared to be the operative principle of the universe - it is the tension of gravity, for example, that keeps planets in their orbit. Fuller's approach to design was one of interacting and overlapping constraints. This is, of course, very cybernetic, and the geodesic dome was an inspiration to many cyberneticians - most notably, Stafford Beer, who devised a conversational framework around Fuller's geometrical ideas called "syntegrity".

In education too, we tend to think of compressed "stuff": first there are the buildings of education - lecture halls, libraries, labs and so on. Today our "stuff"-focused lens is falling on virtual things - digital "platforms" - MOOCs, data harvesting, and so on, as well as the corporate behemoths like Facebook and Twitter. But it's still stuff. The biggest "stuff" of all in education is the curriculum - the "mass" of knowledge that is somehow (and nobody knows exactly how) transferred from one generation to the next. Fuller (and Beer) would point out that this focus on "stuff" misses the role of "tension" in our intergenerational conversation system.

Tension lies in conversation. Designing education around conversation is very different from designing it around stuff. Conversation is the closest analogue to gravity: it is the "force" which keeps us bound to one another. As anyone who's been in a relationship breakdown knows - as soon as the conversation stops, things fall apart, expectations are no longer coordinated, and the elements that were once held in a dynamic balance, go off in their different directions. Of course, often this is necessary - it is part of learning. But the point is that there is a dynamic: one conversation breaks and another begins. The whole of society maintains its coherence. But our understanding of how this works is very limited.

Beer's approach was to make interventions in the "metasystems" of individuals. He understood that the barriers to conversation lay in the "technologies" and "categories" which each of us has built up within us as a way of dealing with the world. Using Buckminster Fuller's ideas, he devised a way of disrupting the metasystem, and in the process, open up individuals to their raw uncertainty. This then necessitated conversation as individuals had to find a new way to balance their inner uncertainty with the uncertainty of their environment.

The design aspect of tensored education focuses on the metasystem. Technology is very powerful in providing a context for people to talk to each other. However, there is another aspect of "tensoring" which is becoming increasingly important in technology: machine learning. Machine learning's importance lies in the fact that it is a tensored technology: it is the product of multiple constraints - much like Buckminster Fuller's geodesic dome. The human intelligence that machine learning feeds on is itself "tensored" - our thoughts are, to varying extents, ordered. Expert knowledge is more ordered in its tensored structure than that of novices. Machine learning is able to record the tensoring of expert knowledge.

When devising new ways of organising a tensored education, this tool for coordinating tension in the ordering of human understanding, and for avoiding "compression", may be extremely useful.

Sunday, 28 April 2019

How the Roli Seaboard is changing the way I think about music

I am making very weird noises at the moment. Partly encouraged by a richly rewarding collaboration with John Hyatt and Mimoids (see https://www.facebook.com/john.hyatt.9210/videos/10212046977604430/), a digital musical instrument - the Roli Seaboard - is becoming my favoured mode of musical expression. A year ago, I would have thought that highly improbable. For me, nothing could touch the sensitivity, breadth of expression and sophistication that is possible with an acoustic piano - if you have the technique to do it. Having said that, I do wonder if we've run out of ideas within that medium.

Part of the problem with contemporary music is that the only way forwards is towards greater complexity. And with greater complexity sometimes comes a barrier with people: music becomes "clever" or "difficult" and we lose something of what matters about the whole thing in the first place.

While I've been thinking about this, I've also been thinking about what music really is in the first place. Why do I have some kind of "soundtrack" running in my head all the time? What's going on? Is it connected to the way I make sense of the world?

Music's profound quality arises from redundancy. That's interesting because it raises the question as to why my cognitive system has to continually generate redundancy. The interesting thing is that redundancy can create coherence. So maybe that continual soundtrack is simply my consciousness making sense of the chaos around me. I'm beginning to wonder about this with regard to all communicative musicality - even learning conversations: they seem to arise from some profound need to make sense of things - and not just by learners, but by teachers too.
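One way of making "redundancy" concrete is Shannon's measure: redundancy is how far a sequence's entropy falls below the maximum possible for its alphabet. A toy comparison (the pitch-class sequences are invented: a repeated triadic motif against a chromatic run over the 12-note alphabet):

```python
from collections import Counter
from math import log2

def redundancy(pitch_classes, alphabet_size=12):
    """Shannon redundancy: 1 - H/H_max, relative to a fixed alphabet."""
    counts = Counter(pitch_classes)
    n = len(pitch_classes)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return 1 - h / log2(alphabet_size)

motif = [0, 4, 7, 0, 4, 7, 0, 4, 7, 0]   # repeated triad figure: highly redundant
chromatic = list(range(12))              # every pitch class once: zero redundancy

print(f"motif: {redundancy(motif):.2f}, chromatic: {redundancy(chromatic):.2f}")
```

The repeated motif carries far less "information" per note, and it is exactly that surplus - the redundancy - which gives it its coherence.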

This also helps to explain why class music lessons in school are often terrible. Attempting to rationally codify the very thing that we use all the time to make sense of the world is likely to result in some kind of adverse reaction.

In a complex world, simplicity is important. Which brings me back to contemporary music. Not that we want to create simple music and put it on the pedestal of high art. But we need to express something of what music does to us, and perhaps to understand how it works better. The piano is a sophisticated and delicate instrument which can make simple things sound interesting. But the Roli Seaboard is an instrument which expresses ambiguity, complexity and variety in a way that the piano cannot. To me, the Seaboard sounds like the world around us - the noisy world of loudspeakers, garish colours, and distraction. The Seaboard is context, and it creates a frame for our simpler and more traditional forms of music to reveal what they really do for us: to create coherence, and (in terms of collective singing) conviviality.

Saturday, 27 April 2019

Tradition, Redundancy and Losing the Way

This week there was a rare opportunity to hear Michael Tippett's piano concerto in Manchester (it's rare anywhere) with Steven Osborne playing (who was a fellow student with me at Manchester University in the late 80s). I hadn't heard the Tippett for years - it's incredibly radiant and warm music. Another composer, John McCabe, said something fascinating about him: "I find Tippett's music tends to make me feel better" (see https://www.youtube.com/watch?v=NoS22TCM-7Q). I agree, and Tippett was very conscious that he was attempting to do something physiological with sound (he got this from Vincent D'Indy - see https://dailyimprovisation.blogspot.com/2012/01/vincent-dindy-and-breath-of-music.html). This, in his mind, was deeply connected to social concerns and emancipation, as well as to depth psychology. Jung and T.S. Eliot were profound influences.

Both these issues have been on my mind. On the day of the concert I had had a job interview (the first for a long time), which although I didn't get the job, prompted a fascinating discussion about individuation, both from a Jungian perspective and from that of Simondon. But during the concert I was thinking about the ritual of playing music, and returning to music from many years ago, and thinking about Eliot's famous essay "Tradition and the Individual Talent", which I had first got to know at Manchester with Tippett's biographer.

The whole arts world is a kind of ritual, seeming to preserve an elite social order. When that order is challenged - for example, by an 850-year-old cathedral burning down - the human reaction seems irrational - but its elite nature is clear for all to see. The irony is that great art - and Tippett was a visionary artist - is made in the spirit of challenging the social order (he was also a Marxist). His piano concerto is a superb case-in-point: unlike any other concerto, it is anti-heroic. Few pianists would take it on because it doesn't put them in the spotlight. Audiences are disoriented because their expectations are frustrated by a fiendishly difficult piano part which causes the soloist to work very hard, but which remains veiled behind a collective radiant wall of sound. For most of it, the soloist is an accompaniment, or a catalyst. Tippett was making a statement: one that is echoed in Eliot's essay -
The emotion of art is impersonal. And the poet cannot reach this impersonality without surrendering himself wholly to the work to be done. And he is not likely to know what is to be done unless he lives in what is not merely the present, but the present moment of the past, unless he is conscious, not of what is dead, but of what is already living.
Steven Osborne and Andrew Davis take this on because they understand this and believe in it. But there are contradictions (even in upholding them as "champions"!). Even in the wonderful performance in Manchester, I wondered if the point was lost on most of the audience. How do we get the point across about accompaniment or catalysis in a world which fetishises the individual achievement? Another way of asking this is to say "How do we see relations and the conversation as more important than the individual?" This was really what I talked about in my interview. And I have reflected on it more as I have thought that most of what I have done - in academia and in music - was catalysis.

But there are deeper questions about ritualised tradition. If one were to compress the years since the composition of Beethoven's 5th symphony, and examine the many millions of performances, then the ritualised repetition whereby people gather together and re-perform a set of instructions looks full of redundancy. Is redundancy the basis of tradition?

Redundancy is the basis of so much communication, from the crying of a baby, to the squawks of crows or music itself. Teaching depends on the redundancy of saying the same thing many different ways. Like playing Beethoven 5 in different ways (but rarely that different - apart from this... https://www.youtube.com/watch?v=wOiBlL9pHMw). What is it? What's going on?

My speculation is that the world is a confusing place. All living things struggle to bring coherence to it - and they do this through conversation. We are thrown into conversation from birth. Through conversation, living things negotiate the differences between the different distinctions they make. Although we see those agreed distinctions - like words in a language - as "information", the really important thing is the redundancy that sits in the background of the process that makes it. It's the redundancy that brings coherence - just as the redundancy of Beethoven's motifs gives form to his symphony.

To accompany, to catalyse, we have to see the redundancy that needs to be added to bring coherence. I think this is really what teachers do. It's actually the opposite of "information". What Eliot describes as "surrendering to the work to be done" is the process of identifying the redundancy that needs to be created. In Gregory Bateson's terms, it is identifying the "pattern which connects". The ritual of teaching and the ritual of performance of tradition are all about the coherence of our civilisation. There's something profoundly necessary about it, and yet within it are dangers which can produce incoherence.

To lose one's way is to lose sight of the process of creating redundancy, of catalysing ongoing conversations. This can happen if we codify the products of a previous age to the point that we believe that merely repeating these "products" - the information - will maintain civilisation. It will instead do the opposite. That's why Tippett's message - and his example - is important. It's not the figure; it's the ground - the earth - our shared context.

Monday, 15 April 2019

Kaggle and the Future University: Learning. Machine. Learning.

One of the most interesting things that @gsiemens pointed out the other day in his rant about MOOCs was that people learning machine learning had taught themselves through downloading datasets from Kaggle (http://kaggle.com) and using the now abundant code examples for manipulating and processing these datasets with the python machine learning libraries which are also all on GitHub, including tensorflow and keras. Kaggle itself is a site for people to engage in machine learning competitions, for which it gathers huge datasets on which people try out their algorithms. There are now datasets for almost everything, and the focus of my own work - diabetic retinopathy - has a huge amount of stuff available (albeit a lot of it not that great quality). There is an emerging standard toolkit for AI: something like Anaconda with a Jupyter notebook (or maybe PyCharm), and code which imports tensorflow, keras, numpy, pandas, etc. It's become almost like the ubiquity of setting up database connectors to SQL and firing queries (and is really the logical development of that).
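The workflow in those notebooks is roughly: load a tabular dataset with pandas, then fit a model to it with numpy and friends. Here is a minimal self-contained sketch of that pattern - with an invented dataset standing in for a Kaggle download, and a hand-rolled numpy logistic regression standing in for keras so it runs without tensorflow:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical Kaggle-style tabular dataset: two features, binary label
df = pd.DataFrame({"a": rng.random(200), "b": rng.random(200)})
df["label"] = (df["a"] + df["b"] > 1.0).astype(int)

# Hand-rolled logistic regression trained by gradient descent on log-loss
X = df[["a", "b"]].to_numpy()
X = np.hstack([X, np.ones((len(X), 1))])     # append a bias column
y = df["label"].to_numpy()
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))             # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)        # average gradient step

accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Swap the invented DataFrame for a `pd.read_csv` of a competition file and the hand-rolled model for a keras `Sequential`, and this is more or less the shape of thousands of self-taught notebooks.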

Whatever we might think of machine learning with regard to any possibility of Artificial Intelligence, there's clearly something going on here which is exciting, increasingly ubiquitous, and perceived to be important in society. I deeply dislike some aspects of AI - particularly its hunger for data which has driven a surveillance-based approach to analysis - but at the same time, there is something fascinating and increasingly accessible about this stuff. There is also something very interesting in the way that people are teaching themselves about it. And there is the fact that nobody really knows how it works - which is tantalising.

It's also transdisciplinary. Through Kaggle's datasets, we might become knowledgeable in Blockchain, Los Angeles's car parking, wine, malaria, urban sounds, or diabetic retinopathy. The datasets and the tools for exploring them are foci of attention: codified ways in which diverse phenomena might be perceived and studied through a coherent set of tools. It may matter less that those tools are not completely successful in producing results - but they do something interesting which provides us with alternative descriptions of whatever it is we are interested in.


What's missing from this is the didacticism of the expert. What instead we have are algorithms which for the most part are publicly available, and the datasets themselves, and a question - "this is interesting... what can we make of it?"

We learn a lot from examining the code of other people. It contains not just a set of logic, but expresses a way of thinking and a way of organising. When that way of thinking and way of organising is applied to a dataset, it also expresses a way of ordering phenomena.

Through my diabetic retinopathy project, I have wondered whether human expertise is ordinal. After all, what do we get from a teacher? If we meet someone interesting, it's tempting to present them with various phenomena and ask them "What do you think about this?". And they might say "I like that", or "That's terrible!". If we like them, we will try to tune our own judgements to mirror theirs. The vicarious modelling of learning seems to be something like an ordinal process. And in universities, we depend on expertise being ordinal - how else could assessment processes run if experts did not order their judgements about student work in similar ways?
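If expertise is ordinal, then agreement between experts can be measured as agreement between their orderings. A hand-rolled Kendall tau (the examiners and their rankings are invented for illustration) makes this concrete: +1 means identical orderings, -1 means exactly reversed:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Rank correlation between two orderings of the same items."""
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # Do the two orderings agree on which of x, y comes first?
        agree = (rank_a.index(x) - rank_a.index(y)) * (rank_b.index(x) - rank_b.index(y))
        if agree > 0:
            concordant += 1
        else:
            discordant += 1
    pairs = len(rank_a) * (len(rank_a) - 1) / 2
    return (concordant - discordant) / pairs

# Two hypothetical examiners ranking the same five scripts, best first
examiner_1 = ["s3", "s1", "s4", "s2", "s5"]
examiner_2 = ["s3", "s4", "s1", "s2", "s5"]
print(kendall_tau(examiner_1, examiner_2))  # → 0.8
```

A high tau across an exam board is exactly what "experts order their judgements in similar ways" looks like numerically.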

The problem with experts is that when expertise becomes embodied in an individual it becomes scarce, so universities have to restrict access to it. Moreover, because universities have to ensure they are consistent in their own judgement-making, they do not fully trust individual judgement, but organise massive bureaucracies on top of it: quality processes, exam boards, etc.

Learning machine learning removes the embodiment of the expertise, leaving the order behind. And it seems that a lot can be gained from engaging with the ordinality of judgements on their own. That seems very important for the future of education.

I'm not saying that education isn't fundamentally about conversation and intersubjective engagement. It is - face-to-face talk is the most effective way we can coordinate our uncertainty about the world. But the context within which the talking takes place is changing. Distributing the ordinality of expert judgements creates a context where talk about those judgements can happen between peers in various everyday ways rather than simply focusing on the scarce relation between the expert and the learner. In a way, it's a natural development from the talking-head video (and it's interesting to reflect that we haven't advanced beyond that!).

Reality

Every improvisation I am making at the moment is dominated by an idea about the nature of reality as being a hologram, or fractal. So the world isn't really as we see it: it's our cells that make us perceive it like that, and it's our cells that make us perceive a "me" as a thing that sees the world in this way.

This was brought home to me even more after a visit to the Whitworth gallery's wonderful exhibition of ancient Andean textiles. They were similar to the one below (from Wikipedia).



It's the date which astonishes: sometime around 200 CE. Did reality look like this to them? I wonder if it might have done.

This music is kind-of in one key. It's basically just a series of textures and slides (which are meant to sound like traffic) that embellish a fundamental sound. I like to think that each of these textures overlays some fundamental pattern with related patterns at different levels. The point is that all these accretions of pattern produce a coherence through producing a fractal.

Saturday, 13 April 2019

Comparative Judgement, Personal Constructs and Perceptual Control

The idea that human behaviour is an epiphenomenon of the control of perception is an idea associated with Bill Powers' "Perceptual Control Theory", which dates back to the 1950s. Rather than human consciousness and behaviour being "exceptional", individual, etc, it is rather seen as the aggregated result of the interactions of a number of subsystems, of which the most fundamental is the behaviour of the cell. So if our cells are organising themselves according to the ambiguity of their environment (as John Torday argues), and in so doing are "behaving" so as to maintain homeostasis with their environment by producing information (or negentropy), and reacting to chemiosmotic changes, then consciousness and behaviour (alongside growth and form) is the epiphenomenal result.

So when we look at behaviour and learning, and look back towards this underlying mechanism, what do we see? Fundamentally, we see individuals creating constructs: labels with which individuals deal with the ambiguity and uncertainty of the world. But what if the purpose of the creation of constructs is analogous to the purpose of the cell: to maintain homeostasis by producing negentropy and reacting to chemiosmosis (or perhaps noise in the environment)?

We can test this. Presenting individuals with pairs of different stimuli and asking them which they prefer and why is something that comparative judgement software can do. It's actually similar to the rep-grid analysis of George Kelly, but rather than using 3 elements, 2 will do. Each pair of randomly chosen stimuli (say bits of text about topics in science or art) is effectively a way of stirring up the uncertainty of the environment. This uncertainty then challenges the perceptual system of the person to react. The "construct", or the reason for one choice or another, is the person's response to this ambiguity.
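A minimal sketch of such a comparative judgement loop (the stimuli, the judge's hidden preferences, and the elicited "construct" strings are all invented; a fixed preference order stands in for the human judge): each pair is presented, a preference and a construct are recorded, and an ordering emerges from the accumulated wins:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical stimuli (short texts) to be compared in pairs
stimuli = ["text_A", "text_B", "text_C", "text_D"]

def ask_judge(pair):
    """Stand-in for the human judge: returns (preferred item, elicited construct).
    A fixed hidden preference order plays the judge's role here."""
    hidden_order = {"text_A": 3, "text_C": 2, "text_B": 1, "text_D": 0}
    winner = max(pair, key=hidden_order.get)
    return winner, f"I prefer {winner} because it seems clearer"

wins = defaultdict(int)
constructs = []
for pair in combinations(stimuli, 2):       # present every pair once
    winner, construct = ask_judge(pair)
    wins[winner] += 1                       # the preference
    constructs.append(construct)            # the construct elicited alongside it

ranking = sorted(stimuli, key=lambda s: -wins[s])
print(ranking)  # → ['text_A', 'text_C', 'text_B', 'text_D']
```

With a real judge, the interesting data is not the ranking but the pile of constructs - and, as the next paragraph suggests, the way they shift and contradict each other as new pairs are presented.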

The interesting thing is that as different pairs are used, so the constructs change. Moreover, the topology of what is preferred to what also gradually reveals contradictions in the production of constructs. This is a bit like Powers' hierarchies of subsystems, each of which is trying to maintain its control of perception. So at a basic level, something is going on in my cells, but as a result of that cellular activity, a higher-level system is attempting to negotiate the contradictions emerging from that lower system. And then there is another higher-level system which is reacting to that system. We have layers of recursive transduction.

It's interesting to reflect on the logic of this and compare it to our online experience. Our experience of Facebook and the media in general is confusing and disabling precisely because the layers of recursive transduction are collapsed into one. Complexity requires high levels of recursion to manage it, and most importantly, it requires the maintenance of the boundaries between one layer of recursion and another. From this comes coherence. Without this, we find ourselves caught in double-binds, where one layer is in conflict with another, with no capacity to resolve the conflict at a new level of recursion.

If we want to break the stranglehold of the media on our minds, we need new tools for bringing coherence to our experiences. I wonder whether, were we to have these tools, self-organised learning without institutional control might become a much more achievable objective.