Thursday, 18 January 2018

An Educational Techno-Utopia

Last week, one of my favourite sociologists, Christian Smith, published an angry piece in the Chronicle of Higher Education entitled "Higher Education Is Drowning in BS" (see https://www.chronicle.com/article/Higher-Education-Is-Drowning/242195). I've been fascinated by Smith's work for some time (see http://dailyimprovisation.blogspot.co.uk/2014/03/ethics-good-society-and-technology.html, http://dailyimprovisation.blogspot.co.uk/2015/09/explaining-explaining-and-knowledge.html and http://dailyimprovisation.blogspot.co.uk/2016/02/critical-realism-and-cybernetics.html), and there are two things that strike me on reading his Chronicle piece.
  • First, it is no ordinary rant from any ordinary academic: this is someone who is an authority on human experience.
  • Second, I doubt that the senior management of his institution have read his work or have anything like the high opinion of him that I and many others hold. Some of those senior managers will call themselves "professor" and consider themselves to be intellectual authorities (since this is what "professor" denotes). In reality they will simply have been ambitious enough to acquire the title of highest academic rank without having read or thought that much.

There are some serious qualitative distinctions that need to be made and which are becoming blurred. Smith says it in his piece:

BS is universities hijacked by the relentless pursuit of money and prestige, including chasing rankings that they know are deeply flawed, at the expense of genuine educational excellence (to be distinguished from the vacuous "excellence" peddled by recruitment and "advancement" offices in every run-of-the-mill university).
For me personally, this disaster is coupled with a very bright 18-year-old daughter who is adamant she doesn't want another "three years of school" - and that is pretty much what all universities have become. So the bright kids are starting to desert the academy. The intellectual authorities in the institutions (the ones who know their way around the library) have either retired or have "had enough". What hope is there?

Among the many factors which have fed this decline, confusion over what an "educational experience" actually is ranks high on the list of culprits. Because of the sheer difficulty of examining experience, we have allowed ourselves to be convinced that the only reliable methods are "by proxy" - questionnaires, surveys, etc. Yet these things do nothing to measure experience. As Roger Brown says, university is an "experience good". That means "you can't know it until you've experienced it" (after having parted with £9,250). That's an experience in itself!

In truth, universities do their best not to be honest about the experience of university. Everyone knows that photographs of smiling students are a lie. Universities never tell you what it's like to struggle to get assignments done (or even exactly what assessed work will be expected), or to be bored rigid in a lecture. Why don't they publish their assessments up-front and let students decide when they feel they are ready? Because that wouldn't be in the commercial interests of the institution, even if it clearly is in the interest of the students.

In Dennis Potter's final play, Cold Lazarus (1996), a dead man's experience is available for others to enjoy (or at least experience too). Might technology deliver something like this to us one day?

I'm beginning to wonder if it's not impossible. I've been doing some experiments analysing the dimensions of real-time experience as a kind of "counterpoint". At the moment it takes a lot of processing power to produce a map of the interplay between different domains of experience (visual, auditory, haptic, kinaesthetic, proprioceptive, etc.). But as with any data processing, it will get quicker, to the point of becoming instant. That would change things.
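
I won't reproduce the experiments here, but a minimal sketch gives the flavour of the kind of processing involved. Everything in it is illustrative rather than a description of my actual method: the channel names, the toy signals, and the choice of pairwise mutual information as a measure of "interplay" are assumptions made for the sake of the example.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y, bins=8):
    """Estimate mutual information (in bits) between two signals
    by discretising them into histogram bins."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1)          # marginal of x
    py = joint.sum(axis=0)          # marginal of y
    nz = joint > 0                  # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

# Hypothetical channels standing in for domains of experience.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
channels = {
    'visual':       np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size),
    'auditory':     np.sin(2 * np.pi * t + 0.5) + 0.3 * rng.standard_normal(t.size),
    'kinaesthetic': rng.standard_normal(t.size),
}

# The "map of interplay": one number for every pair of domains.
for a, b in combinations(channels, 2):
    print(f'{a} <-> {b}: {mutual_information(channels[a], channels[b]):.2f} bits')
```

The numbers themselves matter less than the fact that a machine can generate this description of the counterpoint between domains automatically - and, in time, instantly.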

There could be no hiding of experience. One person could know another's consciousness. Would we still talk? Probably - but it would change. I don't think capitalism would survive this innovation, let alone universities. But it would usher in a completely new era of learning and communicating. We would have tools to amplify the tuning-in to one another that is essential to communication. Assessment and certification would disappear as trust (which is what those things are about) becomes an explicit pattern of consciousness. Would we still lie? Maybe - but equally, we would know that we do it, and understand it better in others.

This isn't as far away as I once thought. It is really the flip-side of AI and machine learning. AI tools contribute to objects which transform themselves, presenting automatically generated multiple descriptions of themselves to the consciousness of individuals. Individual experience contextualises these automatic multiple descriptions, and situates them within the many other multiple descriptions which comprise the context of conscious life.

I doubt Christian Smith will be able to look into the crystal ball like this - he is, after all, longing for the disappeared old academy. But here we see a new academy. It's not a hierarchy of professors and managers, but a heterarchy of intersubjective insight.

Learning and teaching will take care of itself.

Tuesday, 16 January 2018

Learning Analytics, Surveillance and Conversation

In the noisy discourse that surrounds learning analytics, there are some basic points which are worth stating clearly:
  1. Learning Analytics, like any “data analysis”, is basically counting: the complex equations which promise profound insights are in the end doing nothing other than counting.
  2. Human beings determine what is to be counted and what isn’t, and within what boundaries one thing is said to be the same (and counted as the same) as another thing.
  3. Learning analytics takes a log of records – usually records of user transactions – and re-represents it in different ways.
  4. The computer automates the process of producing multiple representations of the same thing: these can be visual (graphs) or tabular (tables).
  5. Decisions are facilitated when one or more of the representations automatically generated by the computer coincide with some human’s expectation.
  6. If this doesn’t happen, then doubt is cast over the quality of the analysis or the data.
  7. Learning analytic services typically examine logs for multiple users from a position of privilege not available to any individual user. 
  8. Human expectations of the behaviour of these users are based on biases surrounding those aspects of individual experience to which a person in such a position of privilege has access: typically this will be knowledge of the staff ("the students have had a miserable experience because teacher x is crap").
  9. Often such high-level services exist on a server into which data from all users is aggregated with little understanding by users as to what might be gleaned from it. 
  10. The essential relationship in learning analytics is between automatically generated descriptions and human understanding.  
  11. Data analytic tools like Tableau, R, Python, etc. all provide functionality for programmatically manipulating data in rows and columns and performing functions on those rows and columns. Behind the complexity of the code, this is basically spreadsheet manipulation. It is the principal means whereby different descriptions are created (a minimal sketch follows this list).
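
To make points 1-5 concrete, here is a minimal sketch using pandas. The log is a toy one - the field names and values are invented for illustration, not taken from any particular VLE:

```python
import pandas as pd

# A hypothetical log of user transactions.
log = pd.DataFrame({
    'user':     ['ann', 'ann', 'bob', 'bob', 'bob', 'cat'],
    'resource': ['forum', 'quiz', 'forum', 'forum', 'quiz', 'quiz'],
    'week':     [1, 2, 1, 2, 2, 1],
})

# Representation 1: simple counting per user.
print(log.groupby('user').size())

# Representation 2: the same records pivoted into a user x resource
# table - spreadsheet manipulation behind the analytic "insight".
print(pd.crosstab(log['user'], log['resource']))

# Representation 3: activity over time - yet another re-description
# of the identical records.
print(log.groupby('week').size())
```

Each print statement produces a different description of the same records; none of them is "the truth" about the users, and a decision only follows when one of them happens to coincide with somebody's expectation.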

So the real question about learning analytics is a question about automatically-generated multiple descriptions of the data, and how those multiple descriptions influence decision-making. 

Of course, decisions made from good data will not necessarily be good decisions, nor are decisions made with bad data necessarily bad. What matters is the relationship between the expectations of the human being and the variety of description they are presented with. 

In teaching, communication, art, biology or poetry, multiple descriptions of things contribute to the making of meaning. Poets assemble various descriptions to convey ideas which don't have concrete words. Composers create counterpoint in sound. When we discuss things, we express different understandings of the same thing. And teaching is the art of expressing a concept in many different ways. What if some of these ways are generated by machines?

AI tools like automatic translators or adaptive web pages are rich and powerful objects for humans to talk about. As such tools adapt in response to user input, people talking about those tools understand more about each other. Each transformation reveals something new about the people having a discussion.

This is important when we consider analytic tools. The richness of the ability to generate multiple descriptions means that there is variety in the different descriptions that might be created by different people. The value of such tools lies in the conversations that might be had around them. 

With the emphasis on conversation, there is no reason why analytic tools should be cloud-based, and no reason why surveillance is necessary. They could instead be personal, locally-installed tools whose simple job is to process log files relating to one user or another. Through using them in conversation, individuals can understand each other's understanding better. They should be used intersubjectively.

Recently I've been doing some experiments with personally-oriented analytical tools which transform spreadsheet logs of activity into different forms. The value in the exercise is the conversation. 
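
I won't publish those tools here, but the shape of the idea fits in a few lines. This is a minimal sketch of the locally-installed approach, assuming a personal CSV export called my_activity.csv with an 'activity' column (the filename and column are my invention):

```python
import csv
from collections import Counter
from pathlib import Path

# A personal analytic: everything stays on the user's own machine.
# No server, no aggregation, no surveillance.
LOG_FILE = Path('my_activity.csv')

def describe(log_file: Path) -> Counter:
    """Re-represent a personal activity log as a simple tally -
    one more description to talk about, not a verdict."""
    with log_file.open(newline='') as f:
        return Counter(row['activity'] for row in csv.DictReader(f))

if LOG_FILE.exists():
    for activity, count in describe(LOG_FILE).most_common():
        print(f'{activity}: {count}')
```

Nothing leaves the machine, and the output is just another description to put on the table between two people.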

Whatever we do with technology, it is always the conversation that counts!

Saturday, 13 January 2018

Learning as an Explanatory Principle - a response to Seb Fiedler

Seb Fiedler (University of Hamburg) wrote this (http://seblogging.cognitivearchitects.de/2018/01/11/on-learning-as-an-explanatory-principle/) last week in response to my post about a "logic of learning" (see http://dailyimprovisation.blogspot.co.uk/2017/12/a-logic-of-learning.html).

My original post was about the impossibility of saying anything sensible about learning. Bateson's idea of "explanatory principles", which Seb uses, was his way of pointing out the essentially relative nature of anything we say about anything. Gravity? It's an explanatory principle!

Seb highlights Jünger's view that "learning is an explanatory model for the explanation of change".

The effect of any explanatory principle is to allay uncertainty about the environment. We are generally uncomfortable with uncertainty, and seek to explain it away. If it's not God, it's the Government, or "human nature"... Because we attribute so many aspects of change in the world about which we are uncertain to "learning", we have established institutions of learning to do an industrial-scale mopping-up of this uncertainty!

Explanatory principles - particularly when they are institutionalised - wash over the details of different people's interpretations of an explanatory principle. When the institution defines what learning is, individuals - learners and teachers - can find themselves alienated from their own personal explanatory principles. A common experience in education is for a learner to be told that they've learnt something when they feel just as confused (or more so) about the world as they did before they started.

At the heart of Bateson's argument about explanatory principles was the epistemological error which he feared would lead us to ecological catastrophe. He believed, as many in cybernetics believe, that one has to correct the epistemology. Bateson's attempt to articulate the logic upon which the epistemological error was based revolved around his work on the "double bind". Double-bind logic is a dialectical logic in which contradictions at one level are resolved at a higher level. This is the logic which I think we should be looking at when we examine education and the discussion about learning.

The use of the explanatory principle of "learning" is a bit like a move in a strategic game. When x says "this is learning" they are maintaining a distinction through a process of transducing all the different descriptions of their world and what they observe into a category. They then seek to defend their distinction against those who might have other distinctions to make. It's not the distinction that matters. It's the logic of the process whereby the distinction comes to be made and maintained. 

The logic behind the double bind which produces the distinction is not Aristotelian. Bateson did not fully explore the more formal properties of double-bind logic. Lupasco did, and Joseph Brenner is able to tell us about it. I also think Nigel Howard's theory of Metagames articulates a very similar kind of logic in a formal way using game theory.
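
Howard's construction can at least be shown mechanically. Here is a minimal sketch applied, as Howard did, to the Prisoner's Dilemma: player 2's metagame strategies are policies reacting to player 1's move, and player 1's are policies reacting to those policies. The encoding and the ordinal payoffs are my choices for illustration:

```python
from itertools import product

# Ordinal Prisoner's Dilemma payoffs: (player 1, player 2).
# C = cooperate, D = defect; higher is better.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (1, 4),
          ('D', 'C'): (4, 1), ('D', 'D'): (2, 2)}
MOVES = ('C', 'D')

# Level 1: player 2's strategies are policies f mapping player 1's
# move to a reply (2^2 = 4 policies).
policies2 = [dict(zip(MOVES, r)) for r in product(MOVES, repeat=2)]

# Level 2: player 1's strategies are policies g mapping player 2's
# policy (by index) to a move (2^4 = 16 policies).
policies1 = [dict(zip(range(len(policies2)), m))
             for m in product(MOVES, repeat=len(policies2))]

def outcome(g, fi):
    """Resolve a (g, f) profile of the metagame to base-game moves."""
    a = g[fi]                    # player 1 reacts to policy f
    return a, policies2[fi][a]   # player 2's policy reacts to a

def is_metaequilibrium(gi, fi):
    """Pure Nash check in the metagame by exhaustive deviation."""
    u1, u2 = PAYOFF[outcome(policies1[gi], fi)]
    if any(PAYOFF[outcome(policies1[gj], fi)][0] > u1
           for gj in range(len(policies1))):
        return False
    return not any(PAYOFF[outcome(policies1[gi], fj)][1] > u2
                   for fj in range(len(policies2)))

print({outcome(policies1[gi], fi)
       for gi in range(len(policies1))
       for fi in range(len(policies2))
       if is_metaequilibrium(gi, fi)})   # {('C','C'), ('D','D')}
```

Exhaustive checking finds (C, C) among the metaequilibrium outcomes alongside (D, D): moving up a level of reaction dissolves the contradiction of the base game. That seems to me formally close to Bateson's resolution of the double bind at a higher level.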

Tuesday, 2 January 2018

Partial Notation of Improvisation and Creative Processes

I experimented with creating an instrumental voice (a flute) using some music notation software (Staffpad) and then improvising some kind of accompaniment to it on the piano. The notation process was interesting because it was effectively a process of creating space in the score. The gaps between the instrumental sections were more important than what occurred in those sections. I improvised into the gaps.

This worked quite well. It struck me that the process is a bit like doing a drawing where you demarcate the background and work towards the figure. The instrumental sections were pretty random - but it was just a frame. The colour was filled in with the improvisation.

I listened to the ensemble and started to add another voice which reinforced some of the features of the piano. Eventually I imagine I could dispense with the improvised bit completely.

When we sing along, or improvise with existing music, what is happening is the making of an alternative description of it. It's rather like taking Picasso's bare skeleton of a bull, and gradually filling in the bits which are missing. The bare bull is still a bull. What we add are alternative redundant descriptions.

This is what my improvisation is in relation to the fragments of notated melody on the computer. Gradually more and more description is added, and more and more redundancy is created.

One further point: thinking about my interest in Ehrenzweig's work on psychotherapy and the creative process (see http://dailyimprovisation.blogspot.co.uk/2017/11/ehrenzweig-on-objects-and-creativity.html), the notated score with its bare bones and large gaps is a means of creating what Ehrenzweig calls "dedifferentiation" in the psyche. It breaks things up and creates a framework for the drawing up of new forms and ideas from the oceanic primary process. Ehrenzweig talked about serialism doing this. This is the first time I have had the feeling that technology might actually be able to do it too. My experience with technology and musical creativity generally has been that it gets in the way because it reinforces the superego's "anal retentive" demand that things must be done in such and such a way.

I have not felt that with this particular exercise. Of course, it's not great music. But the process promises something...