Thursday, 23 May 2019

Polythetic Analysis: Can we get beyond ethnography in education?

I had a great visit to Cambridge to see Steve Watson the other day - an opportunity to talk about cybernetics and education, machine learning, and possible projects. He also shared with me a great new book on cybernetics and communication about which I will write later - it looks brilliant: https://www.aup.nl/en/book/9789089647702/sacred-channels

One thing came up in conversation that resonated with me very strongly. It was about empirically exploring the moment-to-moment experience of education - the dynamics of the learning conversation, or of media engagement, in the flow of time. What's the best thing we can do? Well, probably ethnography. And yet, there's something about this answer which leaves me a bit deflated. While there are some great ethnographic accounts out there, it all becomes very wordy: that momentary flow of experience which is beyond words becomes pages of (sometimes) elegant description. I've been asking myself if we can do better: to take experiences that are beyond words, and to re-represent them in other ways which allow for a meta-discussion, but which are also, in a certain sense, beyond words.

Of course, artists do this. But then we are left with the same problem as people try to describe what the artist does - in pages of elegant description!

This is partly why Alfred Schutz's work on musical communication really interests me. Schutz wanted to understand the essence of music as communication. In the process, he wanted to understand something about communication itself as being "beyond words". Schutz's descriptions are also a bit wordy, but there are some core concepts: "tuning-in to one another", "a spectrum of vividness of sense impressions", and most interestingly, "polythetic" experience. Polythetic is an interesting word - which has led me to think that polythetic analysis is something we could do more with.

If you google "polythetic analysis", you get an approach to data clustering where things are grouped without having any core classifiers which separate one group from another. This is done over an entire dataset. Schutz's use of polythetic is slightly different, because he is interested in the relations of events over time, where there is never any core classifier which connects one event to another, and yet they belong together because subsequent events are shaped by former events. I suppose if I want to distinguish Schutz from the more conventional use of polythetic, then it might be called "temporal polythetic" analysis.
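To make the distinction concrete, here is a minimal toy sketch (my own illustration, not anything from Schutz or the clustering literature) of the difference between a monothetic class and a polythetic one, treating each member as a set of attributes:

```python
def monothetic(group, attribute):
    """A monothetic class: one attribute is shared by every member."""
    return all(attribute in member for member in group)

def polythetic(group, min_shared=1):
    """A polythetic class: no single attribute runs through the whole
    group, yet every member shares at least `min_shared` attributes
    with some other member - a 'family resemblance'."""
    all_attributes = set().union(*group)
    no_common = not any(monothetic(group, a) for a in all_attributes)
    linked = all(any(len(m & n) >= min_shared
                     for n in group if n is not m)
                 for m in group)
    return no_common and linked

# Invented attribute sets: no attribute appears in all three members,
# but each member overlaps with the others.
family = [{"a", "b"}, {"b", "c"}, {"c", "a"}]
print(polythetic(family))  # True
```

The point is that the group coheres without any defining classifier - which is exactly the feature Schutz transposes into the temporal dimension.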

While there are no core classifiers which distinguish events as belonging to one another, there is a kind of "dance" or "counterpoint" between variables. Schutz is interested in this dance. I've been working on a paper where the dance is analysed as a set of fluctuations in entropy of different variables. When we look at the fluctuations, patterns can be generated, much like the patterns below (which are from a Bach 3-part invention). The interesting question is whether one person's pattern becomes tuned-in to another person's. If the patterns of different individuals can be compared over time, then it is possible to have a meta-conversation about what might be going on - to compare different experiences and different situations. In this way, a polythetic comparison of online experience versus face-to-face might be possible, for example, or a comparison of watching different videos.
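As a rough sketch of what comparing two people's fluctuation patterns might look like computationally (my own toy coding, not the method of the paper itself), each entropy series can be reduced to a pattern of rises and falls, and a crude "tuning-in" score taken as the fraction of steps on which two patterns move the same way:

```python
def fluctuation_pattern(entropies):
    """Code each step as 1 (entropy rises) or 0 (falls or stays flat)."""
    return [1 if b > a else 0 for a, b in zip(entropies, entropies[1:])]

def tuning_in(pattern_a, pattern_b):
    """Crude 'tuning-in' score: fraction of steps on which two
    fluctuation patterns move the same way (1.0 = in lockstep)."""
    matches = sum(a == b for a, b in zip(pattern_a, pattern_b))
    return matches / min(len(pattern_a), len(pattern_b))

# Invented entropy series for two participants in a conversation:
speaker = fluctuation_pattern([1.2, 1.5, 1.1, 0.9, 1.4, 1.6])
listener = fluctuation_pattern([0.8, 1.0, 0.7, 0.9, 1.3, 1.5])
print(tuning_in(speaker, listener))  # 0.8
```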

So in communication, or conversation, there are multiple events which occur over time: Schutz's "spectrum of vividness" of sense impressions. As these events occur, and simultaneously with them, there is a reflective process whereby a model which anticipates future events is constructed. This model might be a bit like the fractal-like pattern shown above. In addition to this level of reflection, there is a further process whereby many possible models, many possible fractals, might be constructed: a higher-level process requires that the most appropriate model, or the best fit, is selected.

Overall this means that Schutz's tuning-in process might be represented graphically in this way:

This diagram labels the "flow of experience" as "Shannon redundancy" - the repetitive nature of experience; the reflexive modelling process as "incursive"; and the selection between possible models as "hyperincursive" (following the work on anticipatory systems by Daniel Dubois).
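Dubois's distinction can be sketched with his logistic examples: an incursive map has one self-referential solution, while a hyperincursive map has several, forcing a selection. The exact forms below are my paraphrase of Dubois's published examples, so treat the constants as illustrative:

```python
from math import sqrt

def incursive_step(x, a=2.0):
    """Incursive logistic step (after Dubois): the future state appears
    on both sides, x(t+1) = a*x(t)*(1 - x(t+1)), but the equation can
    be solved algebraically, so each present has exactly ONE
    anticipated future: x(t+1) = a*x(t) / (1 + a*x(t))."""
    return a * x / (1 + a * x)

def hyperincursive_step(x, a=4.0):
    """Hyperincursive step, x(t) = a*x(t+1)*(1 - x(t+1)): solving the
    quadratic yields TWO candidate futures, so something outside the
    equation must select which branch the system takes."""
    d = sqrt(max(0.0, 1 - 4 * x / a))
    return (0.5 - 0.5 * d, 0.5 + 0.5 * d)

x = 0.3
print(incursive_step(x))       # one anticipated next state
print(hyperincursive_step(x))  # two candidate next states
```

The hyperincursive case is what the diagram's top level is meant to capture: the system generates multiple possible anticipations, and a selection mechanism must choose between them.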


Imagine if we analysed data from a conversation: everything can have an entropy over time - the words used, the pitch of the voice, the rhythm of words, the emphasis of words, and so on. Or imagine we examine educational media: we can look at the use of camera shots, slides changing, words on the screen, and spoken words. Our experience of education and media is all contrapuntal in this way.
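As a minimal sketch of what this might look like in practice (the coding of the "stress" variable is entirely hypothetical), Shannon entropy can be computed over a sliding window of any such symbol stream, giving a fluctuating entropy series for each variable:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def entropy_over_time(stream, window=6):
    """Entropy of each sliding window over a symbol stream."""
    return [shannon_entropy(stream[i:i + window])
            for i in range(len(stream) - window + 1)]

# An invented coding of one conversational variable:
# word stress, 's' = stressed syllable, 'u' = unstressed.
stress = list("uususssuuuusuu")
print(entropy_over_time(stress))
```

Run one of these series for each variable - words, pitch, rhythm, emphasis - and the counterpoint between them is what a polythetic analysis would work on.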

Polythetic analysis presents a way in which the counterpoint might be represented and compared in a way that acts as a kind of "imprint" of meaning-making. While ethnography tries to articulate the meaning (often using more words than were in the initial situation), analysing the imprint of the meaning may enable us to create representations of the dynamic process, and to make richer and more powerful comparisons between different kinds of experience.

Wednesday, 8 May 2019

Bach as an anticipatory fractal - and thoughts on computer visualisation

I've got to check that I've got this right, but it seems that an algorithmic analysis I've written of a Bach 3-part invention reveals a fractal. It's based on a table of entropies for different basic variables (pitch, rhythm, intervals, etc.). An increase in entropy is coded as a value for a variable "x", and a decrease in entropy as a value for "not-x". Taking the variables as A, B, C, D, etc., there are also values for the combined entropies of AB (and not-AB), AC, BC, etc., and likewise for ABC, ABD, BCD, and so on.
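A sketch of how such a table of combined entropies might be built (the variable codings below are invented for illustration, and the joint entropy of symbol tuples stands in for the increase/decrease coding):

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def combined_entropies(variables):
    """Entropy for every non-empty combination of variables. A
    combination's symbol at each time step is the tuple of its
    members' values, so 'AB' gets the joint entropy of A and B."""
    names = sorted(variables)
    table = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            joint = list(zip(*(variables[name] for name in combo)))
            table["".join(combo)] = entropy(joint)
    return table

# Invented codings of three musical variables over eight events:
table = combined_entropies({
    "A": "CDCDEECC",   # pitch
    "B": "qqhqqhqq",   # rhythm
    "C": "ppffppfp",   # dynamics
})
print(table)
```

Reading the keys from the single variables (A, B, C) out to the full combination (ABC) gives the left-to-right axis of increasing complexity described below; repeating the calculation over successive windows of the music gives the top-to-bottom axis of time.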

The raw table looks a bit like this:
But plotting this looks something like this:

What a fascinating thing that is! It should be read from left to right as an index of increasing complexity of the variables (i.e. more combined variables), with the simplest basic variables at the far left. From top to bottom is the progress in time of the music.

My theory is that music continually creates an anticipatory fractal, whose coherence emerges over time. The fractal is a selection mechanism for how the music should continue. As the selection mechanism comes into focus, so the music eventually selects that it should stop - that it has attained a coherence within itself. 

Need to think more. But the power of the computer to visualise things like this is simply amazing. What does it do to my own anticipatory fractal? Well, I guess it is supporting my process of defining my own selection mechanism for a theory!

Tuesday, 7 May 2019

"Tensoring" Education: Machine Learning, Metasystem and Tension

I've been thinking a lot about Buckminster Fuller recently, after I gave a talk to architecture students about methods in research (why does research need a method?). One of the students is doing an interesting research project on whether tall buildings can be created in hot environments which don't require artificial cooling systems. The tall building is a particular facet of modern society which is overtly unsustainable: we seem only to be able to build these monoliths and make them work by pumping a huge amount of technology into their management systems. Inevitably, the technology will break down, or become too expensive to run or maintain. One way of looking at this is to see the tall building as a "system", which makes its distinction between itself and its environment, but whose distinction raises a whole load of undecidable questions. Technologies make up the "metasystem" - the thing that mops up the uncertainty of the building and keeps the basic distinction it made intact. Overbearing metasystems are the harbinger of doom - whether they are in a passenger plane (the Boeing 737 Max story is precisely a story about multiple levels of overbearing metasystems), in society (universal credit, surveillance), or in an institution (bureaucracy).

Buckminster Fuller made the distinction between "compression" and "tension" in architecture. We usually think of building in terms of compression: that means "stuff" - compressed piles of bricks on the land. His insight was that tension appeared to be the operative principle of the universe - it is the tension of gravity, for example, that keeps planets in their orbit. Fuller's approach to design was one of interacting and overlapping constraints. This is, of course, very cybernetic, and the geodesic dome was an inspiration to many cyberneticians - most notably, Stafford Beer, who devised a conversational framework around Fuller's geometrical ideas called "syntegrity".

In education too, we tend to think of compressed "stuff": first there are the buildings of education - lecture halls, libraries, labs and so on. Today our "stuff"-focused lens is falling on virtual things - digital "platforms" - MOOCs, data harvesting, and so on, as well as the corporate behemoths like Facebook and Twitter. But it's still stuff. The biggest "stuff" of all in education is the curriculum - the "mass" of knowledge that is somehow (and nobody knows exactly how) transferred from one generation to the next. Fuller (and Beer) would point out that this focus on "stuff" misses the role of "tension" in our intergenerational conversation system.

Tension lies in conversation. Designing education around conversation is very different from designing it around stuff. Conversation is the closest analogue to gravity: it is the "force" which keeps us bound to one another. As anyone who's been in a relationship breakdown knows, as soon as the conversation stops, things fall apart, expectations are no longer coordinated, and the elements that were once held in a dynamic balance go off in their different directions. Of course, often this is necessary - it is part of learning. But the point is that there is a dynamic: one conversation breaks and another begins. The whole of society maintains its coherence. But our understanding of how this works is very limited.

Beer's approach was to make interventions in the "metasystems" of individuals. He understood that the barriers to conversation lay in the "technologies" and "categories" which each of us has built up within us as a way of dealing with the world. Using Buckminster Fuller's ideas, he devised a way of disrupting the metasystem, and in the process, open up individuals to their raw uncertainty. This then necessitated conversation as individuals had to find a new way to balance their inner uncertainty with the uncertainty of their environment.

The design aspect of tensored education focuses on the metasystem. Technology is very powerful in providing a context for people to talk to each other. However, there is another aspect of "tensoring" which is becoming increasingly important in technology: machine learning. Machine learning's importance lies in the fact that it is a tensored technology: it is the product of multiple constraints - much like Buckminster Fuller's geodesic dome. The human intelligence that machine learning feeds on is itself "tensored" - our thoughts are, to varying extents, ordered. Expert knowledge is more ordered in its tensored structure than that of novices. Machine learning is able to record the tensoring of expert knowledge.

When devising new ways of organising a tensored education, this tool for coordinating tension in the ordering of human understanding, and avoiding "compression" may be extremely useful.