Monday 26 December 2022

AI and Heterophony

Heterophony is a musical technique in which a single melodic line is carried simultaneously by many voices, each with a slightly different variation. It is, like repetition and harmony, one of the fundamental forms of redundancy in music. It is also perhaps the most interesting, because it reflects the ways in which a single structure unfolding over time can be represented in multiple ways. These multiple representations cohere because heterophony arises from the fact that we are all fundamentally the same, with a bit of variation. 

There is a certain sense in which AI is heterophonic. It obviously relies on redundancy in order to make its judgements, and with things like ChatGPT, the redundancy is increasingly obvious not just in the AI itself, but in human-machine relations. All AI relies on the differences between heterophonic voices in order to learn. We seem to be similar in our own learning. 

From a musical point of view, heterophony is most closely associated with non-Western music. Among the western composers who developed it in their music, the most striking example is Britten. While some of Britten's heterophony is a kind of cultural appropriation, I've been wondering recently whether he discovered something in heterophony which was always in his music. The predominance of 7ths and contrary motion in his very early "Holiday Diary" suggests to me a kind of heterophony which, by the time of his last (3rd) quartet (https://youtu.be/AElJ08gIOOM - particularly the first and last movements), becomes distilled into a very simple and ethereal world of crystalline textures. The fact that he went via his discovery of Balinese music was not an indication of appropriation, but of self-discovery. Unlike Tippett, he didn't say much about his thought processes, but like all great artists, he might have been picking something up from the future - or rather, something that connects the future with the past. 


There is something of this heterophonic aspect to early music (there is lots of it in the Fitzwilliam Virginal Book, for example). While parts move not so much in unison as in 3rds and 6ths, the rhythmic interplay of one part moving slowly and other parts moving much more quickly is very similar to the rhythms that unfold naturally through the interactions of heterophony. I'd always taken this rhythmic polyphony as a sign of unity in diversity, but the connection to heterophony gives it more depth for me - particularly now. 

So what about heterophony today? We have got used to a particular kind of redundancy in music produced through harmony and tonality. It is partly the product of the Enlightenment, and it places the order of humanity above the order of nature. AI is generating a human-like order of utterances by decomposing a kind of natural order, and its decomposition process is both fundamentally heterophonic and fractal. AI works like a singer in a heterophonic choir, listening to where the tune is going, calculating which way it will go next, and checking to see if it was right or not. In this process, there is difference, form, fluctuation of constraint, expectation, and relation. 
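To make that analogy a little more concrete, here is a toy sketch in Python of the predict-then-check loop of the heterophonic singer, using a made-up melody and a simple bigram model. It is an illustration of the idea only - real AI systems are vastly more sophisticated, and none of the data here comes from any actual system.

```python
from collections import Counter, defaultdict

# A made-up melody (scale degrees) standing in for the tune the choir follows.
melody = [1, 2, 3, 1, 1, 2, 3, 1, 3, 4, 5, 3, 4, 5]

# Learn bigram counts: given the current note, what usually comes next?
transitions = defaultdict(Counter)
for a, b in zip(melody, melody[1:]):
    transitions[a][b] += 1

def predict_next(note):
    """Guess the most likely continuation - a singer anticipating the tune."""
    options = transitions[note]
    return options.most_common(1)[0][0] if options else None

# The heterophonic loop: anticipate each note, then check what actually happened.
hits = 0
for a, b in zip(melody, melody[1:]):
    if predict_next(a) == b:
        hits += 1
print(f"correct anticipations: {hits} of {len(melody) - 1}")
```

The gaps between prediction and reality - the "mistakes" - are exactly where the heterophonic variation lives.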

We have an urgent need to understand this process, and heterophonic music provides us with one way of doing it. Also, perhaps curiously, it takes us away from the Enlightenment mindset which on the one hand has given us so much, but which has also done so much damage to our environment. It is not Victorian orientalism to connect with fundamental processes that steer our collective will and judgement-making. But there may have been more to the pull of orientalism than mere fashion. I suspect Britten saw this. 

Maybe Britten wasn't tuning-in to the way AI works (how could he?), but rather he was tuning in to something that is intrinsic to our biology. Is our physiology heterophonic? Is quantum mechanics? The fact that our AI is heterophonic is perhaps also a reflection that there is something in us which has always been this way. This, to me, is another reason for us to listen more carefully. Not that we should listen to the same thing, but that we should look out for the stream and try to follow it. 

While there are tremendous technical advances being made at breakneck speed at the moment, understanding where we are culturally and spiritually is vital. We have existed for many decades in a fog where our ability to reconcile our physiology with our technology has led to a tragic disequilibrium. We have almost ceased to believe that a new equilibrium is possible. But it might be. 

Sunday 6 November 2022

Viability and the AI business - Some thoughts on Musk, OpenAI and Twitter

Just for the sake of an intellectual exercise, imagine that through some unusual stroke of luck (or misfortune) someone finds themselves at the head of a venture which spins out of an AI-related academic project. As if one of those (usually hopeless) EU education projects actually produced something that somebody else not only wanted but was willing to pay a lot of money for. A number of things follow on from this. 

Firstly, the university which (probably) made life very difficult for the people who came up with and developed the idea, and which probably sneered at any claim that "this is important work" or at appeals to protect key people, turns round late in the day and says "this will make us millions! It's our intellectual property". While market conditions change quickly, the university drags its feet in negotiating a handover of IP and the writing of patents. Over a year goes by, everyone tears their hair out, but eventually things are signed. Universities have become very weird organisations that ape commercial practices without really understanding why they do it, or thinking about whether it is sensible. 

Secondly, a spin-out company with freedom to operate is one thing, but this needs funding. The mode of thinking for academic spin-outs is similar to the mode of thinking of academic projects - how to get funding? It should be said that VC funding cannot be gained unless you have experienced people who know how to deal with VC firms. But say, for the sake of argument (through another stroke of luck), that this is in place. The danger of this mode of thinking is that getting funding becomes the prime objective. There may be a point, however, where a spin-out product is so obviously desirable to potential customers that the getting of funding is not in question. That raises the third question:

What kind of a business are we?

So you might have funding which might keep your operation going for a year or so before you need to be raising revenue through sales. What are the conditions for your viability?  This is where an AI business is weird and interesting, and this sheds light on Elon Musk, Twitter and OpenAI.

Successful and viable businesses typically have a set of operations which produce things - products, services, etc - for a customer base which pays for those products and services. Among the different regulating mechanisms within any such business will be some kind of operational management which ensures effective coordination of the production operations, marketing and so on. Since all businesses operate within changing market conditions, all viable businesses will develop an R&D arm which is scanning the horizon for new opportunities and advising on strategy. Some businesses will hire software developers to develop new solutions to internal operational challenges. R&D looks to the future and potential scenarios, while operations are focused on the present - there is often tension between them, and good businesses balance one against the other. It is interesting to note that Elon Musk's current restructuring of Twitter is basically an attempt to rebalance the relationship between R&D and operations within that company (which is losing money). 

An AI is a specific kind of technology. In the above scenario, it fits within a company's R&D structure. In itself, it is not about operations. OpenAI (which Musk co-founded, although he left its board in 2018) is a good example. It makes itself available as an API which can be plugged in to the R&D operations of other businesses, who will use it to automate writing tasks that would once have been a function within the operations of a company. Through adopting OpenAI's services, those operations are restructured, and people are moved (or removed). 

Now look at OpenAI itself as a business. As a business, it appears to have few customer-facing operations apart from sales and marketing. It develops and provides access to machine learning models which sit on the internet (although from a technological perspective, these models are just files which could sit anywhere - even on individual devices). Its customer base is a community of users who integrate its services into high-end, heavy-usage corporate operations, for which they pay subscriptions. OpenAI must maintain the scarcity of what it does (in the face of continual innovation in AI), and ensure that customers keep buying its services. That means that OpenAI's own R&D must outpace the R&D of its customers - or rather, OpenAI's customers see that a good chunk of their own R&D is best outsourced to OpenAI. 

I think this is a problematic business model, because effective R&D relies on having a good model of the organisation of which it is part. R&D without a concrete set of business operations attached is potentially root-less - it's not part of a viable operation, and could therefore lack coherent direction. This may be the most important reason why Musk was so keen to buy Twitter: it gives him an operational infrastructure which he (no doubt) believes an R&D company like OpenAI can restructure and make profitable. 

With a set of operations to manage, an AI business can grow its services and see the effect of its developments on the viability of the whole organisation. Some things will work, other things won't. Sometimes operational requirements will override whatever new innovation is suggested by R&D. Other times, the R&D is critical to maintain organisational effectiveness. Moreover, an AI business in this situation could extend its reach beyond a "host" organisation, offering services to other organisations. The only problem is that in doing so, other organisations might become competitors to the original host organisation. This requires new thinking about corporate cooperation and market competition. 

This is the most fascinating question about all AI businesses. They are surrogate R&D operations without operational attachments. If an AI business were a human system, it would be like the pathology in which a university's management believes it is the university (I've seen this many times!), and that the current operations (academics, administrators) could be replaced by another set of operations. Equally mad is the belief that management is generic and transplantable, as in the idea of "institutional isomorphism". Management without operations isn't viable. 

But its technological form is different - AI exists as a concrete, coherent thing that provides services to R&D which can be genuinely useful. These services require R&D themselves - which is the regulatory domain of the AI company itself - but the whole thing demands some kind of operational "host". An AI company is a kind of "virus", and its best chance of preserving its viability is reproduction in other hosts. Reproduction of the AI is in the interests of the original host because it grows the AI business, but it must do so in such a way that other hosts do not become competitors to each other. 

The dynamics of this are different to the traditional ways we think about organisational viability and competition. Traditional businesses compete for resources (sales, income) by acquiring market share in the products they produce. They may seek to establish monopolies by acquisition of competitors to remove threats and increase profits through creating scarcity in the market (which then requires regulation by government). But AI is presenting a dynamic of what might be called "organisational environmental endogenisation". That is to say, something in the environment which threatens the viability of organisations - AI - is endogenised (assimilated) within an organisational structure in order to transform that organisational structure so it is better able to maintain its viability and profitability. As part of maintaining its viability, growing the endogenised element and then getting it to "infect" other entities becomes a critical part of the viable operation. This is not to neutralise competition, but rather to increase the strength of the ecology within which organisations sit and within which they can continue to grow and develop better R&D operations. 

There is something a bit odious about Musk. But equally, there is something important happening around technology at the moment which presents organisational questions which are unavoidable for anyone looking at the future of business, organisational viability and society. It's urgent that we think this through. I'm incredibly fortunate to be in a position where I'm grappling with this at first hand. 

Tuesday 25 October 2022

Postdigital values, Marion Milner and John Seddon

I'm giving a talk on Thursday at the Carnet Users Conference (https://cuc.carnet.hr/2022/en/programme/) as part of the extensive strand on "postdigital education". My talk has gone under the rather pompous title of "Practical Postdigital Axiology" - which is the title of a book chapter I am writing for the Postdigital group - but really this title is about something very simple. It's about "values" (axiology is the study of value), and values are things which result from processes in which each of us is an active participant. Importantly, technology provides new ways of influencing the processes involved in making and maintaining values. 

It's become fashionable in recent years to worry about the ethics of technology, and to write voluminous papers about what technology ought to be or how we should not use it. In most cases in this kind of discourse, there is an emotional component which is uninspected. It is what MacIntyre calls "emotivism" in ethical inquiry (in After Virtue), and it is part of what he blames for the decline in the intellectual rigour of ethical thought in modern times. 

I wonder if the emotivism that MacIntyre complains of relates more to mechanisms of value which precede ethics. Certainly, emotivist ethical thought is often confused with value-based processes. The emotion comes through in expressing something as "unethical" when in fact what has happened is a misalignment of values, usually between those who make decisions and those who are subject to those decisions. More deeply, this occurs because those in power believe they have the right to impose new conditions or technologies on others. This would not happen if we understood the benefit to all of effective organisation as that form of organisation in which values are aligned. This suggests to me that the serious study of value - axiology - is what we should be focusing on. 

I think this approach to value is a core principle behind the idea of the "postdigital". This label has resulted from a mix of critique of technology alongside a deeper awareness that we are all now swimming in this stuff. A scientific appreciation of what we are swimming in is needed, and for me, postdigital science has a key objective in understanding the mechanisms which underpin our social relations in an environment of technology. It is about understanding the "betweenness" of relations, and I think our values are among the key things that sit between us. 

This orientation towards the betweenness of value is not new - indeed it predates the digital. In my talk, I am going to begin with Marion Milner, who in the early 1930s studied the education system from a psychoanalytic perspective. In her "The Human Problem in Schools", she sought to uncover the deeper psychodynamics that bound teachers, students and parents together in education. It is brilliant (and very practical) work which in education research has gone largely ignored. In her book, Milner made a striking statement:

"much of the time now spent in exhortation is fruitless; and that the same amount of time given to the attempt to understand what is happening would, very often, make it possible for difficult [students] to become co-operative rather than passively or actively resistant. It seems also to be true that very often it is not necessary to do anything; the implicit change in relationship that results when the adult is sympathetically aware of the child's difficulties is in itself sufficient."

This is a practical axiological strategy. If in our educational research with technology, we sought to manage the "implicit change in relationship that results when the "teacher" or "manager" is sympathetically aware of the "other's" difficulties", then we would achieve far more. Partly this is because we would be aware of the uncertainties and contingencies in our own judgements and the judgements of others, and we would act (or not act) accordingly. What are presented as "ethical" problems are almost always the result of unacknowledged uncertainties. Even with things like machine learning and "bias", the problem lies in the overlooking or ignoring of uncertainty in classification, not in any substantive problem of the technology. 

In my new job in the occupational health department at Manchester University (which is turning into something really interesting), there is a similar issue of value-related intervention. One of the emerging challenges in occupational health is the rising level of stress and burnout - particularly in service industries. A few years ago I invited John Seddon to talk at a conference I organised on "Healthy Organisations". It was a weird, playful but emotional conference (two people cried because it was the first time they had a chance to express how exhausted they were), but Seddon's message struck home. It was that stress is produced by what he calls "failure demand" - i.e. the system being misaligned and making more work for itself. The actual demand that the system is meant to manage is, according to Seddon, often stable. 

It strikes me that Seddon's call to "study the demand" is much the same idea as contained in Milner's statement. It is not, strictly speaking, to do nothing. But it is to listen to what is actually demanded by the environment and to respond to it appropriately. That way, we can understand the potential value conflicts that exist, and deal with them constructively. 


Friday 14 October 2022

The Structure of Entropy

One of the things I've been doing recently in my academic work is examining the ebb-and-flow of experience as shifts in entropy in different dimensions. It began with a paper with Loet Leydesdorff for Systems Research and Behavioural Science on music (https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738?af=R), continued with a paper on the entropy of student reflection and personal learning (https://www.tandfonline.com/doi/abs/10.1080/10494820.2020.1799030), and has carried on in a recent paper on the sonic environment for Postdigital Science and Education. 

I have been fascinated by the visualisations and entropy graphs of different phenomena, partly because they provide a way of comparing the shifts in entropy of different heterogeneous variables all on the same scale: so, one can consider the entropy of sound as frequency together with the entropy of words, together with the entropy of things happening in video. The principal feature of this is that the flow of experience is a counterpoint of different variables, and the fundamental theoretical question I have asked concerns the underlying mechanism which coordinates the dance between entropies.
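For anyone who wonders how such heterogeneous variables can share a scale: Shannon entropy is measured in bits whatever the underlying symbols are, so pitches and words become directly comparable. A minimal sketch (the data here is invented for illustration, not taken from the papers above):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two heterogeneous streams, already discretised into symbols.
pitches = ["C", "C", "G", "C", "E", "G", "G", "C"]
words = ["the", "cat", "the", "dog", "the", "cat", "sat", "sat"]

# Because both values are in bits, the streams sit on the same scale.
print(shannon_entropy(pitches), shannon_entropy(words))
```

Computing this over successive windows of each stream gives the kind of entropy graph I describe below.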

Another way of talking about this dance is to say that entropy has a "structure". Loet Leydesdorff commented on this in conversation at the weekend after I shared some recent analysis of music with him (see below). Interestingly, to talk of the structure of entropy is to invite a recursion: there must be an entropy of structured entropy. Indeed, Shannon's equation is surprisingly flexible in being able to shed light on a vast range of problems. 

To understand why this might be important, we have to think about what happens in the flow of experience. I think one of the most important things that happens (again, I have got this from Loet) is that we anticipate things: we build models of the world so that we have some idea of what is going to happen next. These anticipatory models work with multiple descriptions of the world - there is "mutual redundancy" between the different variables which represent our experience, and I think Loet is right that this mutual redundancy produces an interference pattern which is a kind of fractal. It makes sense to think that anything anticipatory is fractal because in order to anticipate, we must be able to identify a pattern from past experience and map it on to possible future experience. Also, there is further evidence for this because it is basically how machine learning techniques like convolutional neural networks work.
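Leydesdorff's "mutual redundancy" is a subtler measure than this, but its close relative, mutual information, gives a feel for how the overlap between two descriptions of the same experience can be quantified. A toy sketch with invented data:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two aligned symbol streams."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two toy "descriptions" of the same unfolding experience.
sound = ["loud", "loud", "soft", "soft", "loud", "soft"]
light = ["bright", "bright", "dim", "dim", "bright", "dim"]

# These streams track each other perfectly, so the overlap is maximal (1 bit).
print(mutual_information(sound, light))
```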

Fractals are self-segmenting: the distinction between patterns at different orders of scale emerges from the self-referential dynamics which produce them. At certain regular points, the interference between different variables produces "nothing" - some gap in pattern which demarcates it. In the paper on music, I suggested that this production of nothing was related to the production of silence, and how music seems to play with redundancies (which is another way of producing nothing) as a way of eventually constructing an anticipation that a piece is going to end. 

I made this video last week about a Haydn piano sonata as a way of explaining my thinking to Loet:


The entropy graph I displayed here uses a Fast Fourier Transform to analyse the frequency of the sound, identifying the dominant pitch, the richness of the texture and the volume, and calculating the entropy of those variables. This graph illustrates the "structure of entropy" - and of course, eventually everything stops.
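For the technically curious, here is a rough sketch of this kind of per-frame analysis. The feature definitions (dominant pitch as the peak FFT bin, "richness" as a count of strong partials, volume as RMS level) are my own simplifications, not the actual code behind the graph:

```python
import numpy as np

def frame_features(samples, rate=44100):
    """Crude per-frame features from an FFT of one window of audio."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), 1 / rate)
    dominant = freqs[np.argmax(spectrum)]                    # loudest frequency (Hz)
    richness = int(np.sum(spectrum > 0.1 * spectrum.max()))  # count of strong partials
    volume = float(np.sqrt(np.mean(samples ** 2)))           # RMS level
    return dominant, richness, volume

# A synthetic 440 Hz tone standing in for one frame of the recording.
t = np.arange(2048) / 44100
tone = np.sin(2 * np.pi * 440 * t)
print(frame_features(tone))
```

Computing these features over successive windows, and then the Shannon entropy of each feature stream, yields this kind of entropy graph.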

I think learning and curiosity are like this too. They too are full of redundancy, and the entropy of learning has a similar kind of dance to music. Indeed, sound is one of the key variables in learning (this is what my recent PDSE paper is about). But it's not just sound. Light is also critical - it's so interesting that our computer screens basically produce patterns of light, and yet there is so little research on light's impact on learning. And indeed, the entropy of light and the entropy of sound can be related in exactly the same way that I explore the entropy of frequency in this video.

As to what structures the dance of entropy, I think we have to look to our physiology. It is as if there is a deeper dance going on between our physiology and our interactions with our environment. What drives that? It's probably deep in our cells - in our evolutionary history - but something drives us to shape entropies in the way we do. 

Sunday 2 October 2022

Sleeping and Learning

If learning is about making new distinctions, there is a question about how we know a distinction. Since all distinctions have two sides (an inside and an outside) our knowledge of a new distinction must be able to apprehend both sides of it. So we must be able to cross the thresholds of our distinctions. At the same time, if we are not inside our distinctions - if we are not able to use them as a lens to view the world - they are useless in a practical way. Yet the distinctions which make up our lens are dependent on our being able to cross their threshold and see no distinctions. Is this sleep and dreaming?

We don't understand why we sleep. Except that we know that if we don't sleep, we die. That suggests that it is not just our conscious distinctions that require stepping outside of themselves, but the  physiological distinctions between cells, organs, etc. If they break down, we're dead. 

At the same time, we know - at least anecdotally - that we learn in our sleep. We wake up in the morning having not been able to do something the day before, and find ourselves improved in our performance. Possibly because we've got "more energy" - but what's that? Thinking about distinctions necessitating boundary crossing helps here.

The Freudian "primary process" is the dream world of no distinctions. The world of the new baby. The "secondary process" is the regulating filter which channels the energy from the primary process into useful distinctions which (for adults at least) are conditioned by the social conventions of the "superego". (Talcott Parsons correctly recognised that Freud's superego was sociological). More to the point, this psychodynamic process between ego, id and superego was continual: a kind of pulse between the "oceanic" primary process and the secondary process. 

In education, the superego rules, and technology has ensured that its grip on the imagination of staff and students has become ever more brutal. But technology outside education stimulates and suppresses the id: from cat videos to shopping to porn, we can inhabit a simulated oceanic state. Only in sleep itself is there some contact with the reality of the id.  

What have we missed in the way that we think about learning? When we examine our metrics for competency, our "constructive alignments", assessment schemes, etc, we seem to have assumed that the distinctions of learning are fixed: once we learn something, it stays there. In conscious experience this looks like a sensible proposition. But to assume this misses the possibility that our distinctions appear persistent precisely because they result from a dynamic process of distinction and undistinction. 

To be clearer about this, the deepest encounter with the oceanic experience comes through an intersubjective acknowledgement of uncertainty. That can be the best teaching - not the delivery of content, or the forcing of distinctions written in textbooks, but the revealing of understanding by a teacher to the point of revealing of uncertainty. "I'm not sure what this means - what do you think?"

I've written about this kind of thing here: Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum (springer.com), and this last week I got a further reminder of the importance of this approach in an EU project which Danielle Hagood and I led around digitalization. In both cases, technology was the stimulus for uncertainty and dialogue. It is the technology which takes us to the oceanic state, from where (and this was quite obvious in my EU project) new distinctions and new thinking emerges. 

The dialogical is the closest thing we have to the primary process in education - it is rather like music because it connects us to more fundamental mechanisms. John Torday suggested in conversation last week that in sleep our cells realign themselves with their evolutionary origins, effectively connecting our waking thoughts (what Bohm calls the "explicate order") with fundamental nature ("implicate order"). That's a wild idea - but I quite like it!

Wednesday 21 September 2022

About Learning and de-growth

Seymour Papert argued that we do not have a word for the art of learning in the same way that we have words for the art of teaching (pedagogy, didactics) (see his "A word for learning": http://ccl.northwestern.edu/constructionism/2012LS452/assignments/2/wordforlearning.9-24.pdf). Papert then suggests the word "mathetics", drawing attention to the fact that "mathematics" appropriated the word for learning to refer to its specialised practices, when the word "Mathematikos" simply meant "disposed to learn". There may be deeper things to explore in this etymological relationship. 

We tend to think of learning as a kind of growth. As we learn, we know "more stuff", we gain "more knowledge", and we might even imagine that we get bigger heads! Babies start small and get bigger (up to a point), and as they get bigger they learn. Learning produces material artefacts which certainly do increase in size - before the internet, knowing more stuff meant more books, and (perhaps) a bigger library (to display as our zoom background!). The bigger the library the cleverer the people.

I was listening to Neil Selwyn talking about "de-growth" as a possible response to climate change and thinking about how education might support this (here: https://media.ed.ac.uk/playlist/dedicated/79280571/1_6u9a41zh/1_l7anxlgx). Crudely, we imagine that our ecological crisis is caused because things have grown too big, and that to address it, we need to "degrow". But what do we mean by "big" or even "growth"? My favourite source for thinking about this is Illich's "Tools for Conviviality". He talks about the outsized growth of technology and institutions beginning as beneficent, and becoming malevolent. The causes for the transition from beneficence to malevolence are mysterious - they may lie in our physiology and evolutionary biology (that's another post). But the actual manifestation of pathology is not size - it is a reduction in variety. Illich's clearest example is 100 shovels and 100 people digging a hole, eventually replaced by one person and a JCB. Which has the greater variety? The loss of variety as the technology becomes more powerful results in an increase in the creation of scarcity - and the "regimes of scarcity" are the ultimate propellant for positive feedback loops and accelerating crisis. 

The ecological crisis is a crisis resulting from the loss of variety caused by modern living, and within modern living, we must include education. No human institution excels in the art of producing scarcity more than education. The rocket fuel for the rest of the ecological crisis lies at the classroom door. But we can't seem to help ourselves. We see education as the solution to our troubles, not the cause! Education will teach us to "de-grow"... quick! roll-up for "degrowth 101"! Why do we do this? It is because we mistake education for learning. 

We tend not to see learning but instead see "education", in the same way that we tend not to see health but instead see "health systems". "Education" (and "health systems") get bigger and more powerful - rather like the library which forms part of educational institutions. As they get bigger and more powerful, they lose variety (look at the NHS today). But "learning" (and "health") do not grow or get bigger. Both of these terms refer to processes which relate an organism (a person, a community, an institution) to its environment. These terms relate to the capacity of any organism to maintain its viability within its environment - indeed, "health" and "learning" are deeply connected concepts. Learning is not about growth, but about homeostasis. 

Having said this, it's obvious that as we get older, we learn more stuff, we can do more things, we talk to more people, and so on. But we are really in a continual process of communion with a changing environment. Babies may seem to learn to scream to get attention, but their physiological context is changing alongside an epigenetic environment within which what it is to remain viable is a continually moving target. The education system appears to be a way of forcing certain kinds of environmental change, and as a result insisting on certain physiological responses (which appear to reproduce regimes of scarcity and social inequality). Indeed, what we call "growth" is an outward manifestation of an unfolding of physiological potential in a changing environment. If growth were as fundamental as the "de-growth" people say, why does anything stop growing?

So if learning is not about growth, but about the viability of an organism in an environment, how can we visualise it differently? One way is to think about it mathematically - and so to draw back to the origin of the word for mathematics, mathematikos and "mathetic". If learning is a process of variety management, and a developing environment has differing levels of variety (and indeed, increasing entropy), then learning is really a process of finding a kind of resonance with that environment. These orders of variety and variety management might be rather like orders of prime numbers, or different levels of scale in a fractal, or different orders of infinity. Mathematically, we might be able to see learning in geometrical forms produced through cymatic patterns:

or knot topologies, 

or Fourier analysis, or even in Stafford Beer's syntegrity icosahedron (see Beer's book "Beyond Dispute" https://edisciplinas.usp.br/pluginfile.php/3355083/mod_resource/content/1/Stafford%20Beer_Beyond%20Dispute.pdf):

These forms are expressions of relations, not quantifications of size. If we see size (and growth) as the problem, we not only miss the point, but feed the pathology. 

We urgently need a more scientific approach to learning. We are going to need our technologies to achieve this. This is not ed-tech, but technology that is necessary to help us understand the nature of relationship. I fear that for those consumed with ed-tech, blaming it for the demise of "education", a different kind of approach to technology and a more scientific approach to learning is not a thinkable thought. 

I feel the need to make this thinkable is now very important. 




Monday 19 September 2022

Rethinking Education and a "Trope Recognition Machine"

I went to a conference at the weekend on "Rethinking Education". As is often the case with these things, there were some good people there and some good intentions. But I came away rather depressed. It's often said that there is nothing new in education, and events like this prove it. What it amounted to was a series of tropes uttered by various people, some of whom were aware that they were tropes, and others who genuinely thought they were saying something new. Meanwhile the system trundles on doing its thing - and while everyone there might admit that the thing it does is not very good, there is a surprising lack of clarity about what the system actually does. 

When we ask people to rethink something, it is often framed as an invitation to think about the future - to say, "let's bracket-out the system we have, and conceive of the system we want". But this is naive, because the system we want is always framed by the system we are in, and it is always difficult to see the frame we are in and what it does to our thinking. Frame-blindness has specific effects - one of which is the production of tropes.

At one point I was getting so frustrated by the degree of repetition in the tropes that a wicked thought occurred to me: what if we had a trope recognition machine? What if there was some device that could process all the utterances and classify them according to their trope identity? And of course, current machine learning is very good at this kind of job. But if you had a trope recognition machine, what use might it be? 
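For what it's worth, a crude version of such a machine is easy to sketch. A real system would use language-model embeddings, but even grouping utterances by the overlap of their content words (Jaccard similarity) captures the idea. The utterances, threshold and stop-word list below are all my own illustrative assumptions, not a real system:

```python
# A toy "trope recognition machine": utterances are grouped by the
# overlap of their content words.

def words(utterance):
    """Lowercase content words of an utterance, minus a few stop words."""
    stop = {"the", "a", "is", "of", "to", "we", "in", "and", "our"}
    return {w.strip(".,!?").lower() for w in utterance.split()} - stop

def jaccard(a, b):
    """Similarity of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_tropes(utterances, threshold=0.3):
    """Greedily assign each utterance to the first sufficiently similar
    group (keyed by its first member's word set), else start a new
    group - a new 'trope'."""
    groups = []  # list of (representative word set, [utterances])
    for u in utterances:
        ws = words(u)
        for rep, members in groups:
            if jaccard(ws, rep) >= threshold:
                members.append(u)
                break
        else:
            groups.append((ws, [u]))
    return [members for _, members in groups]

talk = [
    "We must personalise learning for every child",
    "Every child needs personalised learning",
    "Exams are killing creativity",
    "Creativity is killed by exams",
]
print(group_tropes(talk))  # two groups of two
```

On these four example utterances, the machine finds two groups of two: four distinct contributions, two underlying tropes.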

If we look at "rethinking education" as a problem situation - not the problem of "rethinking education" but the problem of talking about "rethinking education" - this problem is one of time-consuming redundancy of utterances. Basically many people say the same thing, and feel the need to say the same thing. Indeed, I suspect meetings like this owe their appeal to the opportunity they present to people to say what's in their heads in the confidence that what they say will "resonate" with what else is said. In other words, the redundancy is there in the desire to attend and speak in the first place. Perhaps we need to think about this - about the dynamic of redundancy in communication. 

One of the most interesting things about redundancy is how attractive it seems to be - it is, after all, about pattern, and patterns are what we look for when we try to make sense of something. So if we want to make sense of education, we need to go somewhere where we can fit into a pattern - a conference. But this is curious, because the motivation of most people at conferences is to "get noticed" - to have their version of a trope be so distinctive that everyone looks to them as some kind of originator of something which has been said before (actually the whole of academic discourse is like this, but let's not go there!). So how does that work? How do the desire for collective sense-making through pattern and the egomania fit together?

I've been reading Elias Canetti's "Crowds and Power" and I think there is something in there about this tension between the search for redundancy and pattern, and the expression of the ego. Canetti sees the individual as someone who wants to preserve the boundary of their self. They don't even want to be touched by someone else most of the time. And yet, they also want to belong to the crowd. Although Canetti was opposed to Freudian psychodynamics, his analysis of the crowd is clearly treading similar territory: the crowd is the Freudian super-ego. 

The search for redundancy in going to conferences and saying similar things to everyone else is crowd-like behaviour. It seems to be driven by egos who want to get noticed - to preserve and reinforce their boundary of the self. 

I think the best way to think about this is to see both the ego and the superego as essentially dealing with contingency. They have to find a way to maintain a balance between their internal contingency and the external contingency. That means that it is necessary to understand and control the external contingency. Creating redundancy through utterances is a way of establishing some degree of control over external contingency: it is a way of establishing a "niche" in which to survive (my favourite example of an organism using redundancy to create a niche is a spider spinning a web). 

What is discovered about the external contingency has an effect on internal contingency. The ego is troubled by the subconscious, which contains the vestiges of experience and desire from infancy - and the legacy of education. The ego is satisfied with the niche it creates in talking about education and feels more secure. (What appears as egomania may simply be a need to establish some kind of inside/outside balance). But as a result, conferences like this actually satisfy the psychodynamic needs of individuals struggling in a terrible system for a short time. They are essentially palliative. 

Understanding these dynamics at conferences may be a first step to remedying the problems in the education system itself. A trope-recognition machine could pinpoint the different positions and contingencies which are expressed in a group: it could highlight areas of deep contention and uncertainty and thus focus discussion on those issues, codifying the underlying patterns that everyone is searching for in a way which could save a lot of time and frustration. That might result in some better decision-making perhaps.

Monday 5 September 2022

Learning and the Redshift of Biology

When we think we observe learning happening in others, we think of a change in an individual. We see each individual as on some kind of trajectory along a path which we have determined through making prior observations throughout history. This is encoded in our formal processes of education. In each individual we observe, there is a wide range of variation in this trajectory. Some lead to "success", some lead to "failure". One of the tragedies of education is that there is never enough time, nor the energy, to look at an individual's learning really closely. Perhaps parents (sometimes) and psychotherapists get a bit closer. Scientists observe ecosystems, stars, and cells with far more intellectual curiosity and desire for precision.

Learning never happens in "others". It happens in relationships - and those relationships inevitably include anyone who wants to "observe". If we imagine that in any relational situation there are "engrams" - structures in consciousness - and "exterograms" - externally observable phenomena - then the only observable aspects of a relationship are the exterograms of communication: we can at least write spoken words down, record actions, assess, etc. In education research, this is basically what is done, and these "exterograms" of the learning process are subjected to various kinds of analysis which produce conclusions like "phonics helps children to read", or "people have different (codifiable) learning styles", and other such stuff. These are really political statements about which there is endless debate. Good for the champions of phonics and learning styles - after all, "there's only one thing worse than being talked about..."

The terms "exterograms" and "engrams" were suggested by Rom Harré as a way of introducing the problems addressed by his "positioning theory". Harré said something very sensible about learning: "you know when learning has happened because the positioning changes". This seems true - the transition from apprentice to master is precisely a change in positioning between master and apprentice. This suggests to me that rather than look at the trajectory of an individual, we should look at the trajectory of positions. 

How could education become focused on the trajectory of positions, rather than the trajectory of individuals? Perhaps a good place to start in thinking about this is how an "individual" focus of education is different from a position-focused education. The former is built around the material consumption and production of students - textbooks, lectures, essays, exams, etc. The latter is built around the energy of communication between people. Is the shift one from a focus on matter to one on energy? 

In an energy-oriented focus, there are no "exterograms" really. They are mere manifestations of energy in the learner (or a lack of it!). In a dialogical relation, these exterograms have an effect on others in causing the production of other communications. But in the process, there is a physiological background which is where the thinking and adaptation takes place. These physiological processes obey rules about which we have little understanding - but where an increasing amount of biological evidence suggests we might have new and highly productive scientific paths to tread. 

We are all made from the same stuff, and all our stuff - our cells - comes from a point source - a unicell. Through evolutionary history, cells diversified and acquired (through endogenisation of aspects of their historical environment) various features (mitochondria) which we find in us and everywhere else in nature. There is a surprising lack of variety of cell types in the human body - only about 200. Our learning processes are processes of cellular communication - not just within us, but between us. Those processes of communication reference the origins of cells - shared cellular history is a deep coordination mechanism which underpins what we might call "instinct". Instincts arise from cellular relations, just as learning arises from human relations. The same processes are in operation at different orders of scale. 

Looking at learning as the trajectory of "positions" in relations is like looking through the James Webb telescope for the beginning of the universe. Learning shows us the "redshift of biology". The cholesterol from which we are made had its origins at the origin of the universe (see David Deamer's book "First Life"). And there are powerful clues for this at a more mundane level. Simply counting the variety of possible relationship trajectories (just as counting the behaviour of individual learners) will reveal statistical structures which form normal distributions. Regression structures reveal differentiated groups - the learners who like maths, and those who like music - these too will be normally distributed. But simply to refocus on more fundamental things - love or hate - will also produce the same structures. It is fractal. 

To see education in terms of positions raises an important question about how to make education better. Do we want more kids to pass more exams? Or do we want better positions/relations between people? I don't think the answer to that question is too hard, although some might say that education is about "knowing stuff". So what is "knowing stuff"? We can see that too as a material process - knowing stuff is about material consumption and production. Or we can see "knowing stuff" as being about energy - the capacity to engage with, and position oneself well within, a large set of relations in harmony with (and not against) our biological origins. I think this is the same as being "in dialogue" with one another.


Wednesday 31 August 2022

Tacit Revolution

As technology advances in society, there is no escaping the increasing specialisation of knowledge. This may seem like a challenge to those like me who believe that the way forwards is greater interdisciplinarity, or a metacurriculum (as I wrote here: https://link.springer.com/article/10.1007/s42438-022-00324-1). While greater specialisation indicates increasing fracturing of the curriculum and the growth and support of niche areas, more fundamentally it represents the organisational challenge to find the best way to support specialised skills from the generalised organisational frameworks of curriculum, assessment, certification, etc. At the root of this challenge is the fact that generalised organisational frameworks - from KPIs to curriculum - all depend on the codification of knowledge. Meanwhile, hyperspecialisation is largely dependent on tacit knowledge which is shared among small groups of professionals in innovative niche industries, startups, and departments of corporations. 

Since Michael Polanyi's seminal work on tacit knowledge at Manchester University in the 1950s, the educational challenge of transmitting the uncodifiable has been grappled with in industry. The Nonaka-Takeuchi model of knowledge dynamics within organisations has been highly influential in understanding the ways that professional knowledge develops in workplace settings (see https://en.wikipedia.org/wiki/SECI_model_of_knowledge_dimensions). It is in these kinds of dynamics that we are seeing technology drive (and perhaps even accelerate) processes of tacit knowing and externalisation. It's now not only highly technical people whose tacit knowledge must be communicated to senior management, but technical groups to one another: the technics of machine learning (for example) are very different from the technics of database design, full-stack implementation, or component manufacture, yet each of these technical aspects is interdependent. The communication between technical groups is critical - industries survive or fail on the quality of their internal communications. 

What is critical to making the communications work is dialogue. This may be partly why so many successful technical industries adopt flat management structures and dynamic and adaptive ways of configuring their organisation. That is driven by the dynamics of dialogue itself - the need for technical conversations to flow and develop. Compare this to the rigid hierarchical structures of every educational institution, burdened with its bureaucratic codified curricula. It is completely different. One wonders how it will survive the changes in the world. 

Over the last year and a half, I have been involved in a research project at the University of Copenhagen on the digitalisation of education. This project was set up because universities at least recognise the problem - they are getting left behind in a world which is changing too fast. But like all institutions, Copenhagen believed that the way to address this was to tweak its structures and what it "delivered" - basically, to change the curriculum. This was not going to work, and our project has shown it. But this is not because of a failure to get the "right" tweaks to the curriculum. It is because technical knowledge is largely tacit and uncodifiable, and the organisational structures of education cannot deal with tacit knowledge. Indeed, with the bureaucratisation of education, it is even less able to deal with tacit knowledge than it was in Polanyi's time. 

I am now working for the Occupational Health department in the University of Manchester. This is a domain for professionals in health and industry to identify and analyse the relationship between environmental circumstances (work places, etc) and personal and public health. There is so much knowledge in professionals working in this area which is the product of years of experience in the field. Much of it is uncodifiable. 

Uncodifiable does not mean untransmissible. Uncodifiable knowledge can be taught dialogically, and more importantly, technology can greatly assist in producing new kinds of dynamic dialogical situations where this transmission can take place. I am currently looking at ways of doing this in occupational health, but as I do so, I am thinking about what it means for technical education, hyperspecialisation and tacit learning. 

This is almost certainly going to be the trajectory of educational technology. It doesn't look like it at the moment because "edtech" sees itself (tacitly!) as "educational management tech" - we don't really have "technology for learning" as such. It's managers who write the cheques for edtech. But that will change. We're going to need "technology for learning".

There's then another challenge for institutions. Because when we do have "technology for learning", the dialogical situations of tacit learning will not need to be bound by classroom, curriculum, assessment, etc. They can be situated in the world alongside the real activities that people engage in. My own experience of co-establishing a medical startup around an AI solution to diabetic retinopathy diagnosis is indicating this, and there are many other similar startups. Mine started with an educational desire to teach people how to diagnose. It ended with a new product that embraced the educational aspect but did something powerful in the actual domain of work too. 

This is where things are going. I'm not sure this "tacit revolution" is going to be quiet though...

Monday 29 August 2022

Anticipation and Learning as Information

This is a follow-on blog to yesterday's on "Visualising Learning Statistically". The most powerful thing in any scientific inquiry is to have two ways of saying similar things. Yesterday, I suggested a way of thinking about learning as emerging distinction-making in terms of relations between normal distributions (in the manner that psychophysicists like Thurstone thought). Among the powerful features of seeing things this way is the fact that the statistics strongly suggest (indeed, insist) that there must be a common origin to the psychological phenomena which produce normal distributions. 

Another way of thinking about this is to consider what happens when any phenomenon is presented to consciousness and a distinction is made. A distinction might be called a "category" - something like "chair", "table", "book", "dog". In psychological experiments, what is measured is not the perception of difference per se, but rather the articulation of difference. In psychophysics for example, this is the expression of judgements of degrees of similarity to normality. That entails not only the perception of something in the environment, but selecting a word for it. To utter a word in response to a stimulus is a communicative act. 

No word can be uttered without having some idea of the effect of that utterance. We do not make words up for things we don't know. Rather like contemporary machine learning, we select the word we know as the most likely utterance to be understood by others. Unlike machine learning, we might not be sure, and so utter the word as a question to see the response - but fundamentally we are making a prediction. To paraphrase George Kelly (who, alongside sociologists like Parsons and, later, Luhmann, saw cognition as fundamentally anticipatory): to make a distinction is to anticipate something in the communication system in which we operate. So we should ask: how might this anticipation work?

To anticipate anything is to recognise a pattern which relates some expected experience to a previous experience. As a pattern, there must be something about a phenomenon which is more general than the specifics of any particular instance. The fact that an anticipation is about something present in relation to something past means that there must be a dimension of time. The time dimension works both in the ongoing unfolding of a present experience, and "backwards" in the sense of reflexivity which relates what is present to what is past. This process of identifying the commonality between what is past and what is present is a selection mechanism for uttering whatever one thinks is the category that relates to what is currently seen. Creating a selection mechanism for an utterance must entail: 1. the selection, from a set of possible models, of an appropriate model of past experience which relates to present experience; 2. the management of that set of possible models; 3. the ongoing generation of new models from present experience. 

Yesterday I said that the psychodynamics of distinction-making mean that the ability to refine distinctions is related to the ability to relax distinctions in a different domain - so Freudian "oceanic" experiences are important as an anchor for new distinction-making. That's the kind of statement which might irritate some, but I don't see it as saying anything more than the need for sleep and dreams in order to do work. It is the push and pull of the imagination - much like music, as I wrote here: https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738

Because making a distinction relies on a selection mechanism which in turn relies on a pattern, we can see a further argument for why the selection mechanism is dynamic between ongoing refinement and "oceanic nothingness". Patterns are segmented typically through repetition. Repetition itself, from an information theoretical perspective, is "redundancy" - it has an entropy of zero. Thus we can say that the segmentation of pattern is achieved through passages of high entropy followed by low or zero entropy. This helps to explain why repetition (as redundancy) is so important for memory - the essential feature of an effective selection mechanism for identifying a category is the ability to segment patterns of experience from the past to relate it to the future. 
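The information-theoretic point is easy to make concrete. A minimal sketch (first-order entropy only, so it measures symbol frequencies and ignores higher-order sequential pattern):

```python
# A minimal check of the claim that repetition is redundancy: the
# Shannon entropy (bits per symbol) of a perfectly repeated symbol is
# zero, while a maximally varied sequence has maximal entropy.
import math
from collections import Counter

def entropy(sequence):
    """First-order Shannon entropy in bits per symbol."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy("aaaaaaaa"))  # 0.0 - pure repetition, pure redundancy
print(entropy("abcdabcd"))  # 2.0 - four equiprobable symbols
```

Pure repetition sits at zero entropy; segmentation of a pattern is then exactly the alternation between high-entropy passages and these zero-entropy stretches.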

This also reinforces the point that there must be a common biological origin which is responsible for steering this process. Patterns established in communication rely on cellular communication throughout the brain and other organs in the body. Within cells there are also patterns which reference evolutionary processes which themselves are demarcated by nothing. Statistically this can be observed as a normal distribution, but it can be also modelled as a process of evolutionary construction of patterns which act as selection mechanisms for communication. At a cellular level, these points of "nothingness" are homeostatic points of equilibrium between a cell and its environment.

The role of the environment in learning and evolutionary development is critical. The construction of anticipatory systems is a kind of evolutionary dance of endogenising the environment, where specific stages of development are segmented in ways where one stage can be related to other stages. It is this evolutionary dance which is the reason why there is always a distribution of traits and abilities which then give rise to measurable statistical phenomena. 


Sunday 28 August 2022

Visualising Learning Statistically

To talk of learning as a process which we can observe is very difficult. When we teach teachers, we teach "theories" of learning which are just-so stories with little hard evidence to back them up barring a few (now famous) psychological experiments. The resort to teaching theory is partly because this is so hard that we would struggle to decide what we should talk about if we didn't just talk about theory. The irony is that talking about theory can be very boring, encouraging professors who didn't think of any theory themselves to talk endlessly about what's written in textbooks - not exactly an example of good teaching! Ultimately we end up with what is easiest to deliver, rather than what needs to be talked about. 

I think the birth of cybernetics in the 1940s was the best chance we had of remedying this situation, but for various reasons, a lot of this transdisciplinary insight was lost in the 1950s and 60s, as other disciplines (notably psychology) appropriated bits of it but lost sight of its key insights. Now, the growth of machine learning is providing a new impetus to revisit cybernetic thinking, with people like James Bridle leading the way in a revised presentation of these ideas (see his "Ways of Being"). One of the most impressive things about Bridle's book is the fact that he reconnects cybernetics to biology and consciousness. That connection was at the heart of the original thinking in the discipline. The biology/consciousness thing is really important - but isn't it just another just-so story? If we don't have any way of measuring anything, then I'm afraid it is. 

Here perhaps we need to look a bit deeper at the whole issue of "measurement" as it is practiced in the social sciences. Another historical development from the 1950s was the increasing dominance of statistical techniques in disciplines like economics. Tony Lawson argues that this was directly connected to the McCarthy period, where anything statistical was "trusted" as scientific and anything "critical" was communist! - as Lawson points out in his "Economics and Reality", the greatest economists of the 20th century (including Hayek and Keynes) were highly skeptical of the use of mathematics in economics. 

Statistical techniques are regularly used in academic papers in education to defend some independent variable's impact on learning. These are usually the result of academic training in statistics for researchers - not the result of a critical and scientific inquiry into the applicability of techniques of probability to education. But there are fundamental questions to ask about statistical procedures. These include:

  • Why do natural phenomena reveal normal (Gaussian) distributions in the first place? 
  • What is an independent variable, and why should an independent variable (if such a thing exists) produce a new normal distribution?
  • All statistics is about counting - but what is counted in something like learning, and how are the distinctions made between different elements that are counted? 
  • What happens to the uncertainty about distinction-making in what is counted? (Keynes made this point in his "Treatise on Probability" with regard to his discussion of Hume's distinguishing between eggs)
  • Where is the observer in the counting process? Are they an independent variable?
  • It is well-recognised that "exogenous variables" are highly significant causal factors - particularly in economics (which is often why economic predictions are wrong). Yet normal distributions arise even when exogenous variables are bracketed-out. Why?
  • While one big problem with statistical techniques is the fact that averages are not specifics, averages nevertheless can sometimes prove useful in making effective interventions. Why? 
  • Why does statistical regression (sometimes) work? (particularly as we see in machine learning)
  • Is a confidence interval the same thing as uncertainty?
These are the kind of "stupid questions" which never get asked in education research, or anywhere else outside philosophy for that matter. I want here to think about the first one because I think it underpins all the others. 

Normal distributions (calculated using mathematical equations developed by de Moivre, Euler and Gauss in the 18th and 19th centuries) require a statistical mean and standard deviation to produce a model of the likelihood of a set of results. Behind the reliability of these assumptions is the fact that there is - among the phenomena which are measured - some common point of origin from which the variety of possible results can be obtained. Thus the top of a bell curve indicates the result which is maximally probable, having passed through all the possible variations that stem from a common point of origin. 

Mathematically, we can produce a normal distribution via the Central Limit Theorem (CLT): sums of independent random variables tend towards a normal distribution (see https://en.wikipedia.org/wiki/Central_limit_theorem). And according to Ramsey theory (https://en.wikipedia.org/wiki/Ramsey%27s_theorem), complete disorder is impossible: sufficiently large structures must contain order. So the normal distribution is really a reflection of deeper order arising from a single point of origin. What is this point of origin? What does a normal distribution in educational research really point to?
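The CLT itself can be demonstrated with nothing more than a random number generator - a quick sketch:

```python
# Sums of independent uniform random numbers pile up into a bell curve
# around their expected value, even though each summand is flat.
import random
import statistics

random.seed(1)  # reproducible
sums = [sum(random.random() for _ in range(12)) for _ in range(10000)]

# Each sum of 12 uniforms on [0, 1) has mean 12 * 0.5 = 6
# and variance 12 * (1/12) = 1.
print(round(statistics.mean(sums), 2))   # close to 6.0
print(round(statistics.stdev(sums), 2))  # close to 1.0
```

A histogram of `sums` shows the familiar bell shape, despite none of the ingredients being bell-shaped: the "order" here comes purely from summing over a common underlying process.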

It must lie in biology, and (importantly) the fact that biology itself must have a common point of origin. Because we tend to think of education as a cultural phenomenon, not a natural one, this point is missed. But we are all made of the same physiological stuff. And the components of our physiology have a shared evolutionary history, and it is highly likely that this shared evolutionary history has a point source. So looking at your educational bell curve is really looking at the "red-shift" of biological origins. This is an important reason why it "works".

However, this doesn't explain learning itself - it just helps to explain the diversity of features (behaviour) in a population which can be observed statistically. Much more interesting, however, is to look at how the process of making distinctions arises given that normal distributions are everywhere.

This is why psychophysics is so interesting. The psychophysicists were interested in the distributed differences that different stimuli make on a population. Some differences make big differences in perception: for example, hot and cold. Other differences are harder to distinguish - for example, the difference between Titian and Tintoretto. These differences can also be represented statistically. For example, the orange curve below might be "hot", and the blue curve might be "cold". There is little uncertainty between these distinctions, and within any population, there is no question that what is hot is identified as hot (with a little variation of degree).



But here (below), there is much more uncertainty in distinction making. 

It is this kind of uncertainty in making distinctions between things which characterises learning processes at their outset. Whether it is being able to distinguish the pronunciation of words in a foreign language, or being able to manipulate a new piece of software, among the various categories of distinctions to be made, there is a huge overlap which leaves learners initially confused. 

As the learning process continues, this distinction-making becomes more defined:
So given phenomenon x, the likelihood of correct categorisation of that phenomenon is improved. 
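This refinement can be put in the psychophysicists' own terms. Treating two categories as equal-variance normal distributions, the chance of miscategorising a stimulus depends only on the separation of the means relative to their spread (the d' of signal detection theory). A sketch, with illustrative numbers rather than real data:

```python
# Miscategorisation probability between two equal-variance normal
# distributions, with the decision threshold at the midpoint.
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def error_rate(mean_a, mean_b, sigma):
    """Probability of miscategorising a stimulus from either category."""
    d_prime = abs(mean_b - mean_a) / sigma
    return phi(-d_prime / 2)

print(round(error_rate(0.0, 0.5, 1.0), 3))  # heavily overlapping: 0.401
print(round(error_rate(0.0, 3.0, 1.0), 3))  # well separated: 0.067
```

In this picture, the learning between the early and late graphs is just the movement from the first number towards the second: the overlap shrinks, and with it the confusion.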

But it is important to remember what these graphs are really telling us - that the Gaussian distribution implies a common point of origin. The second graph is the result of a conditioning process upon natural origins - rather like a cultivated garden. But perhaps more importantly, this is dynamic, where the point of origin is ever-present, and exerts an influence on distinction-making. This may be why, despite increases in the ability to make distinctions in one domain, there is a biological requirement to relax distinction making in other domains, and these domains may be related.

"Oceanic" experiences - those that Freud associated with the "primary process" of the subconscious remain an important part of the overall dynamic of distinction-making. This looks something like this:

We make the mistake of seeing learning in terms of moving towards graphs 1 and 3, without seeing the dynamic pulse which relates graphs 1 and 3 to graphs 2 and 4. But this process is critical - without the oceanic connection to distinctionlessness, the coordination mechanism (i.e. reference to origins) which facilitates higher-order distinctions (graph 3) cannot coordinate itself and is more likely to collapse in a kind of schizophrenia (this is what Freud talked about in terms of the superego taking over and the psychodynamics breaking down). 

Looking at learning like this does two things. First, it invites us to think differently about our methods of scientific measurement - particularly statistics - as a means of seeing life processes as processes which refer to a common origin. Second, it gives us a compass for assessing the interventions we make. Our current lack of such a compass in education and society is quite obvious. 







Saturday 30 July 2022

Scientific Economy and Artistic Technique

I wrote (10 years ago!) about my struggle to compose: https://dailyimprovisation.blogspot.com/2012/04/music-meaning-and-compositional-process.html. What's changed in me since?

Now I would say the key word is at point 5: "If I have enough energy, I will battle on to try and get something down in these gaps, although at some point I get tired and give up". It's that reference to "energy" which has changed for me. What is that? It has become of great interest to me (see this more recent post: http://dailyimprovisation.blogspot.com/2021/09/energy-collages-in-vladivostok.html). How do artists maintain the energy to continue?

This has always been a mystery to me - but it is the essence of what compositional technique is meant to do. Most composers work from a germinal idea which generates possibilities. These germinal ideas are highly economical and concentrated forms of energy. Most commonly, in universities and music colleges, students are told that such germinal ideas relate to patterns of pitches. I have to say I always struggled to relate to this. Pitches seem so abstract as entities - they are just frequencies after all. Music isn't made of pitches, it is made of feelings and energy, and the connection between the abstract patterns of pitches and the feelings always seemed too remote for me. 

In science, a germinal idea which is generative of possibilities is called a theory. Theories are used as a guide in the manipulation of nature and the prediction of events. In an important way, this is also a process of concentrating energy flows. Scientific curiosity depends on energy, and theories concentrate and focus the labour of scientific inquiry to produce new knowledge. There is almost certainly a physiological driver for theory production in science.  

Many successful modern composers do not have the hang-ups I experience about the abstractions of notes. Rather like mathematicians, they seem to delight in the manipulation of abstractions as a source of their composing work. What I believe they possess when they do this is a highly compressed representation of the energy of the work which is comparable to the scientist's theory. They use the abstractions to unfold the energy over the long period of time that it takes to get the actual notes down on paper. That is how they are able to get the whole thing done.

Having said this, there is a problem in becoming too fascinated by the mathematical manipulation of pattern to produce sound. It is like a scientist becoming too fascinated by their concepts. While such procedures can unfold music of energy and beauty, sometimes (perhaps quite often) it sounds abstract and remote. Techniques like serialism were developed to free the conscious mind of cliché so as to facilitate the authentic connection between the subconscious creative mind and its conscious expression. It was intended as an "unlocking" procedure. But an obsession with mathematical procedure and pattern carries its own clichés. Brilliance in science also effects a kind of psychodynamic unlocking. 

Over the last couple of years, I've become interested not in notes but in physiology. When I improvise, I find that the concentrated and economical forms of energy are not in any pattern of notes, but in the pattern of my fingers. So often my improvisation exploits the economy of my hand movements. I've recently started to notate this physiological concentration. The advantage is that, unlike abstract patterns of notes, the concentrated pattern of physiology does contain feeling and energy. While I can notate the physiology, I can also feel it physically, and in feeling it physically, the energy of its unfolding (so I can get notes down over time) can also be controlled. 

I wonder whether the way artists manage the energy of creation is a determiner of artistic style. Every period in history brings environmental stresses which impinge on the ability to manage the flow of energy in artistic expression. Ours is a time of "entropy pumps" - we live in an age of constant distraction. That may mean our management of creative energy has to be situated more closely to our bodies. Overly cerebral approaches may lead to a disconnect between what is said and what needs to be said (although maybe I'm being too cerebral!). They may be ok for writing blogs and academic papers - but really, that is a waste of time and energy. 

The ability to concentrate energy in a germinal form - which may be common to both art and science - is really the ability to facilitate the steering of the creative (or empirical) process. Something that facilitates steering in systems terms is a trim-tab. The creative process - and certainly the process of improvising - is rather like a bird in flight. The very best scientific work also has this quality of free thought. Technique is not there to direct the course of creativity. It is there to loosen the constraints which would otherwise prevent the freedom of movement of creative processes in turbulent times. 

Tuesday 26 July 2022

Prufrock's Soul

A university friend said to me the other day that she felt writing academic papers was not nourishing in the same way as the more artistic things she did (and did less of, since she spent more time writing papers). I agree, and it makes me want to know more about the differences between the qualia of different creative activities. What is nourished when the soul is nourished? What might be the mechanism?

Spiritual nourishment is visceral. There is a sensation, perhaps somewhere near the solar plexus, which is activated by certain activities that might be considered nourishing. Personally, my solar plexus rarely responds when I am writing. I know this because as I write this, I cannot feel it: the activity is in my head, not my belly. When I think about the specific feeling of "soul nourishment", I rehearse the things which produce it - gazing at a beautiful sunset, beautiful moments in music, water (both still and flowing), a cathedral or a grand library.

There is something primeval about these experiences: something timeless. In the evolutionary theory of John Torday, we are, as biological entities, phenotypes seeking information to return us to an original evolutionary state. Sometimes the seeking can go wrong and we simply end up lost. T.S. Eliot writes in the "Love Song of J. Alfred Prufrock": 

"I should have been a pair of ragged claws

Scuttling across the floors of silent seas."

The ragged claws are the primal evolutionary state; Prufrock's weary, regretful, sexually repressed, empty-souled persona is the result of evolutionary accretions which, in searching for a return to the simplicity of evolutionary origins, have only further obscured any deeper satisfaction. And Prufrock is lost. But the poem points to a kind of vector connecting primal origins to an empty life in search of meaning.

The point about this is that Eliot's soul was nourished in writing about a lost soul. The similarity to Dante is obvious. But what is it about Eliot's art which enables him to articulate this connection? 

Great poets, artists and composers harness energy. John Galsworthy commented about art and energy that: 

Art is that imaginative expression of human energy, which, through technical concretion of feeling and perception, tends to reconcile the individual with the universal, by exciting in him impersonal emotion. And the greatest Art is that which excites the greatest impersonal emotion in an hypothecated perfect human being.

(I'm grateful to Marie Ryberg for drawing my attention to Dewey's "Art as Experience" where he quotes this Galsworthy passage.) 

Eliot's poem does this; the writing of academic papers does not. The question, it seems, is about energy. Eliot understood the energy vector that connects his art and technique to a deeper truth about the universe, and to the plight of J. Alfred Prufrock. 

Academic writing is rather deathly by comparison. The desire to explain away things which can't be explained, and conform to expectations of "proper referencing", "cogent arguments", "rigorous methods", etc, kills the soul. It might reward academics with promotion within an insane (and increasingly broken) system, but unless the work is truly ground-breaking, it amounts to little more than paraphrases of what has gone before. This is particularly true of education research.

When we do more deeply creative things, however, we engage with the energy that connects the scuttling claws with our present state. The regression connects us to where we come from, and where we are going. There are a number of hormonal and epigenetic factors which kick in during the process. Moreover, the technique of creative work is very similar to what Galsworthy describes: a technical concretion of feeling and perception. The artist's challenge is to develop a technique whereby this can be managed. 

The deep challenge with this is that, of course, education does not see itself in relation to primal origins and energy vectors! It sees itself in relation to the development of independent "selves" as economic units in the making. But primal origins are what connect us to each other. What we imagine as our independent "self" is merely an apparatus for collecting epigenetic information and eventually transferring it to a new zygote, which will grow into some new apparatus for collecting information. 

Darwinian natural selection privileges the organism surviving in its environment, whereas the organism may merely be a vehicle for passing epigenetic information back to a zygote. It's ironic that Darwin's model probably had its origins in his schooling, while the establishment of the evolutionary model has reinforced an attitude to educational growth and development which has pushed creativity out in favour of STEM-related nonsense. 

Saturday 16 July 2022

Disentangling Entanglement in the Social Sciences

One of the most unfortunate aspects of the increasing interest in topics like complexity and systems is the appropriation of scientific terminology to obfuscate the very problems which the systems sciences were developed to enlighten. It's not exactly the problem Sokal (Fashionable Nonsense - Wikipedia) identified over 20 years ago as a kind of intellectual scientific posturing - that, he argued, was at worst a kind of fraud and at best intellectual laziness. What we see now is more of the latter, but it exists within a dominant normativity in which it's almost impossible to suggest that simply saying stuff is "complex" does no more than posit a blanket "explanatory principle" - one which explains away intellectual difficulties rather than inviting the questions "how?" and "so then what?".  

Entanglement - as it has been used by Latour and others - is a case in point. Latour has positioned himself carefully here (see Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science - The New York Times (nytimes.com)) because he is aware of the problem (and as someone who began their career doing information theoretical analyses, he should know), but that hasn't stopped a sociomaterial industry (particularly in management science and education) growing up with long words and nothing much to say. Like all industries, it seeks to defend its position, which makes challenging it very difficult, and any practical educational progress even less likely. 

In physics, entanglement refers to the specific state of affairs in quantum mechanics where non-local phenomena are causally connected in ways which cannot be explained by conventional (Newtonian, locality-based) physics. If there is a fundamental underpinning idea here, it is not so much the weird interconnections between what might be seen to be "separate" variables, but rather the distinction between local and non-local phenomena, and the ways in which the totality of the universe is conceived in relation to specific locally observable events. Talking about entanglement without at least considering it in the light of totality and non-locality is like talking about the reality of ghosts on a fairground ghost train. 

Part of the problem is that we have no educational cosmology - no understanding of totality, or rather how education fits in a totality of the universe. This seems a grand and ambitious task - but if we deny that such a thing is possible, we then cannot defend allusions to science to help us address educational problems. This is why better educational thinkers are thinking about physics, education, technology and society together (this is good: Against democracy:  for dialogue - Rupert Wegerif). James Bridle's new book "Ways of Being" is also better - containing a lot of good stuff about biology and cybernetics -  although again, it's hard to see a coherent cosmology... (very interesting interview between him and Brian Eno here: Brian Eno and James Bridle on Ways of Being | 5x15 - YouTube)... so there's lots to do. 

We need not only to ask ourselves better questions, but also to think of better methods for addressing them. Some things don't need to be that complicated, and the seeds of new thinking are often in the past. Warren McCulloch's early work on neural networks (A heterarchy of values determined by the topology of nervous nets | SpringerLink), for example, contains these fascinating diagrams:

The diagram above explains McCulloch's notation: the continuous lines at the top are the nervous system, while the broken lines at the bottom are the environmental system. Receptors receive (transduce) signals from the environment, and effectors cause changes to the environment through the behaviour of the organism (that is transduction too). The two lines represent (for example) two variables or categories of perception (perhaps "black" and "white"). But this diagram does nothing: what goes in comes out.

The diagram below is much more interesting. The feedback of each category is wired into every other category (rather like the Ashby homeostat), and this keeps the thing in flux. What does that mean for our values? Perhaps, left to our own devices, we would forever shift from one category to another. But in communication with other such systems, stabilities in the perceptual apparatus of many people result in values which can be codified and assumed to be "fixed" (although what appears static is an epiphenomenon of a continuous process):

Are such values and perceptions "entangled"? In the sense that Latour and Orlikowski discuss it, yes. And indeed, the sociomaterial dogma becomes much clearer as a cybernetic mechanism conceived 80 years ago. It simply requires rediscovering how perception was thought about at the beginning of cybernetics. Intellectual amnesia is the root of our current problems with complexity.
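The contrast between McCulloch's two diagrams can be caricatured in a few lines of code (my own toy illustration, not McCulloch's actual calculus - the tanh squashing and the particular weights are assumptions): units whose feedback is wired into each other never settle, while units fed back only into themselves lock into fixed values.

```python
import math

def step(x, W):
    # one synchronous update: each unit squashes the weighted feedback it receives
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

# Cross-wired feedback: each "category" unit drives the other (one inverting),
# loosely in the spirit of McCulloch's second diagram and Ashby's homeostat
W_cross = [[0.0, 2.0], [-2.0, 0.0]]
x = [0.5, 0.0]
trajectory = []
for _ in range(12):
    x = step(x, W_cross)
    trajectory.append(x)
# the two units never settle: activity cycles between them indefinitely

# Self-feedback only: each unit reinforces itself and freezes at a fixed value
W_self = [[2.0, 0.0], [0.0, 2.0]]
y = [0.5, 0.1]
for _ in range(50):
    y = step(y, W_self)
# y converges to a stable fixed point (each unit at roughly 0.96)
```

The cross-wired version is the perpetual flux described above; a codifiable, apparently "fixed" value only emerges when such a system stabilises in communication with others - or degenerates into mere self-reinforcement.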

Having said this, McCulloch didn't address totality in a satisfactory way. He knew the challenge. In his paper on "What is a number" (see Warren S. McCulloch: What Is a Number, that a Man May Know It, and a Man, that He May Know a Number? (vordenker.de)) he says:
"The inquiry into the physiological substrate of knowledge is here until it is solved thoroughly, that is, until we have a satisfactory explanation of how we know what we know, stated in terms of the physics and chemistry, the anatomy and physiology, of the biological system"
That is an appeal to grapple with the science of totality. We are going to need to take educational research a lot more seriously, and mount a very different kind of research effort, if we are to get close to this. The task, however, is urgent. The study of education is not the study of a particular kind of social practice. It is the study of how organisms which live for a short time organise themselves to ensure that future generations can survive. 


Tuesday 5 July 2022

Cells and Sociomateriality

The sociomaterial gaze looks upon the world as a set of interconnections. Running through the "wires" of this web is the agency of individual entities - humans (obviously) and (more controversially) the objects and technologies constituting organisational structures, power relations, roles, etc. To deal with the complexity of this presentation of the world, sociomaterialists invoke ideas from quantum mechanics like "entanglement" and (occasionally) "superposition" to explain the complex interactions between the components, looking to science (as represented, for example, by Karen Barad's interpretation of Bohr) to supply sufficient doubt over the ability to be more precise about what is actually going on. If I were being unkind, I would say the end result has been a lot of academic papers with long words which mystify more than they enlighten. Even critiquing it seems to invoke complex vocabulary: "heterogeneous dimensions are homogenized in a pan-semiosis" (Hagendijk, 1996 - see https://www.leydesdorff.net/mjohnson.htm) - well, yes. 

Gazing at the world's complexity and trying to explain it by focusing purely on manifest phenomena is like trying to explain the universe while ignoring its expansion. The synchronic (structural) dimension alone will not suffice. History - the diachronic dimension - is critical to a perspective which is more scientifically defensible. It is a profound change of perspective: the diachronic dimension enables us to see the world in 3D. This means that we have to draw away from looking at the relation between objects/technologies and people (for example), and instead focus on life itself - to understand not only life's characteristics, but the mechanisms behind its creation of the material environment with which sociomateriality is so fascinated. This is a project connecting Lamarck, Bateson, Schrödinger and Bohm with recent work ranging across astrobiology, cellular evolution and epigenetics. 

I want to explain why this diachronic perspective is a much more powerful way of looking at education, technology and human life.   

Every one of my cells has a history. Not just the history of where it began in me - in one of the three "germ layers" of the zygote that eventually grew into baby me - but a deeper history of how each of the (roughly) 200 different cell types emerging from the zygote acquired their individual structures and properties. Each of them has a history much older than me. Each acquired different components (organelles) through what we now understand as a process of absorbing externally existing components from the environment: endosymbiosis. Cellular endosymbiosis occurred in response to environmental stress. Early cells had to reorganise their structures and functions in order to maintain: 

  • homeostasis within the cell boundary
  • balance with the external environment
  • energy acquisition from the environment 

Through endosymbiosis, each of my cells carries a historical record of its own evolution. For example, the movement of animals from water to land is carried in the development of lung cells, which evolved from the cells of the swim bladders of fish. Since we are all made of the same cells, this historical record within our constitution unites not only the members of a single species (all of us), but all cellular life.

To what extent might we "know" this? To what extent does our physiological knowledge play out when we sit at our computers or stare at our phones? Moreover, if we do intuitively sense our deep interconnections with nature, by what mechanism of nature do we behave as if we deny this completely?

This is to turn the fundamental questions of ecology (and particularly of cybernetic ecology - Bateson, etc.) upside down. It is not to ask how we are connected, but how human relations have evolved to be disconnected. Is there a logic here? Our scientific problem is that if we look for the logic of human behaviour taking human relations (or worse, the individual) as the unit of analysis, we will conclude that only specific kinds of relation "go wrong". Some relations may appear to "go wrong" more than others, but in a deep sense, we all suffer from bad relations. 

This question of the "evolution of disconnection" cannot be addressed unless we consider the cellular origins of life which connect us all, together with the ways in which the evolutionary history of cells is programmed into us. Human disconnection may be the activation of older mechanisms in cellular development which, at the scale of cells or small organisms, may not have been as devastating as we now make them. 

Our social engagement in the context of a technological environment is not "entangled" (whatever that means); it is an "evolved disconnection" from nature. We communicate - make common - our sense of being human, of having this collection of cells which we understand to be common. That is how empathy, love and the expression of doubt work. In the context of that communication, we also communicate our physiological reaction to the material artefacts around us, which are in turn the results of historical communications. In those historical communications lie the seeds of our current evolved disconnection, which may sometimes be felt as alienation or frustration, and sometimes as energy, excitement and flow. At the root of that evolved disconnection are deeper natural processes of cellular evolution. The better we understand those, the better equipped we will be to steer our way through our current (and dangerous) state of evolved disconnection. 

This is not to invite further metaphysical speculation. It is to invite something more practical. Our disconnection from nature is now throwing up tremendous turbulence in our existence. Like a plane flying through turbulence, the challenge is steering, and tapping into the deep knowledge needed to do that steering well. I have been wondering recently whether cellular evolutionary history is the hidden mechanism of biological steering - a kind of "trim-tab", as Buckminster Fuller described. If so, and if we can grasp it, we can reconnect our steering with the natural world. Might we have technologies to help us?