Sunday 19 May 2024

Trajectories of Oppression

There do seem to be some quite unpleasant leaders in the world (Putin, Xi, Modi, Orbán, etc.) who are gathering and organising in opposition to leaders in other parts of the world who present themselves as less unpleasant (but may in fact be almost as bad!). A balanced perspective on this is difficult, but there are certainly countries where speaking out against injustice will get you killed or imprisoned. But "being free" ought to mean something rather more than "they won't kill me if I speak out". 

Within all oppressive regimes there are restrictions on the range of things that can be said, and on who they can be said to. This applies to countries as much as it does to family relationships, and indeed the most common form of oppression occurs in the home. There is coercion of communication, monitoring and control. Transgressing any boundaries may lead to a "visit" at one's workplace or home from some unsavoury character who, if not directly violent, will attempt to intimidate through threats. 

Free societies need to counter the tendency towards restricting communication. Openness and dialogue, backed up by legal protection, are always the best policy for dealing with threats. However, as the world teeters between liberty and oppression, there is a risk that this kind of behaviour spills over into normal life in free societies. So what should we do? https://forms.gle/F8h82x7YjqoBrVyP6


Thursday 16 May 2024

Telepathy

I find there are too many occasions when "coincidences" occur for me to believe that they are merely "co-incidences". We only really believe this because we imagine ourselves to be independent from one another - little self-contained robots pursuing our own algorithms, and only "understanding" each other through the perturbations that each of us produces in others. A "co-incidence" in this model of the world is merely a particular pattern that emerges at random in the processing of these perturbations that makes one of the robots go "ah-ha!"

But it may not be like that at all. In fact, we are very unlikely to be "independent" of each other: independence is an illusion. Every cell unites us. Every cell contains a shared history which maps each "individual" back to a shared origin. If that is the case, then it is not a surprise to think that our shared history is causal in our coordination with each other. What then is a "co-incidence"? It is not necessarily a random encounter, but the result of a deep coordination produced by the internal selection processes of mechanisms whose components belong with each other whilst being physically separate. It is further possible that such coordinations are related in some way to basic physical processes - entanglement particularly being a mechanism that would explain this kind of "strange relationship at a distance".

We are rarely aware that we breathe together - we only become aware of it when in a large silent room with many others. When we do become aware of it, we sense something "bigger" which unites us all. Is this an illusion? If it isn't an illusion, then what must be happening is some kind of "coordination of constraint": that what constrains my free will in choosing actions becomes coupled with what constrains another person's free will. 

The simple point here is that we don't know what constrains free will. We don't know what shapes the mechanism that chooses action x or action y. But just because we don't know it doesn't mean that there isn't something that constrains actions x and y, or that these constraints might not become aligned. 

When people fall in love, there is a very strong sense of connection - even when there is an absence. People will report picking up the phone at the very moment their loved one calls. How many times have I opened WhatsApp (for example) to see, at that same moment, the person I want to call suddenly show up as "online"? So the question is: how might those constraints become aligned? 

Rupert Sheldrake speculated that it's a "morphic field" which unites the various constraints that affect us and steer us to action. He conducted experiments staring at the back of people's heads and timing how long it took them to turn round. I think there might be a simpler explanation based on the fact that we are basically made of the same stuff, and that stuff has a common history. We don't need to invent a field, because there is already a kind of "vector" built into each one of our cells. 

Now the question about telepathy arises because it might be possible to "tune in" to the vector in each of our cells and coordinate its processes with the cells of someone else. Actually, this is pretty much what happens in sex, isn't it? Also in deep conversation. So why shouldn't the same process be possible at a distance? 

We could probably explore this experimentally if there was a way to account for the constraints bearing on cellular behaviour. John Torday, for example, has subjected cells to micro-gravity and observed common changes in their behaviour. There may be a mistake in thinking of telepathy as "exchange of thought". Rather it seems to be "coordination of constraints on physiological process at a distance". 

I've written something on a piece of paper, and I want to conduct an experiment. Here's another form with a few questions about thought and telepathy: https://forms.gle/yF4bn5puQkAccYFA6



Tuesday 14 May 2024

Trust

In the AI world that is unfolding around us, it is going to be increasingly difficult to know what to trust and who to trust. This issue has been known in science for a long time, but it is now becoming apparent in everyday life and politics. Von Foerster and Maturana gave this wonderful talk about it many years ago: 


Von Foerster makes the distinction between two words that mean "truth": the Latin "veritas" entails checking reality (the German "Wahrheit" derives from this). The English word "truth", on the other hand, derives from "trust" - in other words, establishing truth becomes a matter of interpersonal agreement. Since von Foerster was committed to arguing that there was no objective reality to check, the issue of trust and truth is central.

It is trust which is being affected, not just by AI but by all forms of technological communication. Trust demands human intersubjective engagement - quite simply, looking into each other's eyes - and most forms of technological communication (text, phone, email, etc.) do not allow this. Moreover, the interaction between people using these media of communication will unfold differently from how things unfold in interpersonal intersubjective communication. 

These days, we can't be sure about anything. We can't be sure that the sender of an email is who they say they are. We might assume, for example, when somebody calls us up, that the person on the other end of the line is who they say they are: but it might not be so. We can't be sure that a paper was really written by professor X, or that a student actually wrote the essay they submitted. 

This is central to the problems we are going to face in the coming years. So we are going to need new ways of establishing trust and confidence in communication.

I'm interested in what people think about this - about what we can do to establish new levels of trust. Since my Google Form experiments are going reasonably well, I've created another one (I hope you trust me!). There is a very simple question: how would you establish the trustworthiness of a communication in the absence of being able to look someone in the eye?

Interested to hear the responses! https://forms.gle/Gjo2wLaNVk7egiEz6 



Saturday 11 May 2024

Fauré's Breathing

I'm really out of practice with my playing, so apologies for the mistakes here. While I'm playing at this Fauré Nocturne, I'm also thinking about the way that music breathes. If it doesn't breathe, it's no good. That is a real test for something like AI - because it doesn't breathe at all. Breathing is so important - partly, I think, because it's relational. We don't think of breathing "together" - but we are in fact always breathing together.  

When we play music, we breathe with the universe. Or try to at least.




Friday 10 May 2024

Delius's Idyll

When I was about 14 or 15 I fell in love with the music of Delius. I had a few LPs, and one of them had a recording of the Requiem on one side, and on the other his setting of Whitman poems from "Leaves of Grass" - "Idyll", which was assembled by Eric Fenby with Delius in the 1930s. It was magical music - the Requiem's unconventional (and perhaps shocking) nihilism about sensual pleasure and the enjoyment of life resonated with my teenage mood, but the Idyll was the piece which really struck me.

Whitman's poetry is beautiful, erotic and mystical:

"Once I pass'd through a populous city,

Imprinting my brain with all its shadows.

Of that city I remember only a woman,

A woman I casually detained,

Who detained me for love of me.

Day by day and night by night we were together --

 all else has been forgotten by me."

But the music is something else. There's an extraordinary bluesy climax set to the words "What is it to us what the rest do or think/ What is all else to us who have voided all/ but freedom and all but our own joy." (section from about 16' 28'')


The music and words are really one giant fantasy - but what a fantasy! And then again, I'm now reminded that fantasies are not "made-up": they are real. Indeed, as both Tolkien and C.S. Lewis thought, fantasy is more real than reality: it is the place where we hope and dream, which is the essence of what it is to be human. The make-believe is the humdrum monotony of the world - that world is false. The music speaks the truth though: "Dearest comrade all is over and long gone, But love is not over."

Tolkien called fairy stories a "casement of the outer world": they were a space to explore possibility.  They are fundamental to humanity. Moreover, losing sight of fantasy is a deathly way to live. 

Also, sometimes, something happens in the outer world which is remarkably close to what happens in the casement. That's a reminder that there really is something "bigger than ourselves", and that it is our dreams which are our compass and guide. John Torday sees this perception of something bigger than ourselves as a reference to our fundamental connection to the cosmos. I think Delius might have agreed. 

I was once quite critical of an eminent music scholar who at a conference went on about the love letters of a composer and how they influenced what he (it was a he) did. I said that people deceive each other (I wasn't feeling particularly romantic at the time!) - I said it's easy to say stuff like "I love you" in a letter - but is it real? Now I think I was not quite right. Sometimes it's very real, and the energy that flows from it is indeed causal in remarkable things happening - like Delius's music.

(I've given up on comments on my blog - too much spam trying to sell me Viagra! So if you want to feed-back, do it here: https://forms.gle/8TZUbtJqmu1rUtBa6  and I'll post them up)
 

Wednesday 8 May 2024

Wellbeing

One of the problems with digital communications is that they are easily subverted. It can be difficult to ascertain the real intention behind electronic communications: they can be subject to deception or coercion and that can lead to serious consequences. With face-to-face communication we at least have some insight into the lived experience of the other in the flow of communication. Human trust relies on this. 

A friend of mine commented on this phenomenon a few months back when she said of Generative AI: "I am not sure who I am talking to". Quite right. We know that we are interacting with some kind of process which is in itself amusing and fascinating, but which is also deceptive. The fact that we are looking for ways of exploiting this form of deception for real-life activities says more about the poverty of our inter-human engagement, where communications have become transactions, than it does about the miracle of the technology. 

Fernando Flores argued, in "Understanding Computers and Cognition" (written with Terry Winograd), that IT systems were essentially communication systems for managing the commitments we make to one another. So, for example, the email chain quickly reveals who promised what to whom and when. But there's a problem with this. In recent years, the evidence trail has created lots of noise as (for example) it is revealed exactly what some cabinet members thought of Boris Johnson. As more intimate communications are recorded electronically and made subject to search, we see communications intended at source to be private become public. This happens in verbal communication through gossip, but gossip permits a deniability which a WhatsApp message doesn't. Because the internet is increasingly wired into the psyche, intimations and thoughts are at risk of public exposure, with social and psychological consequences.  

We might think that total transparency of communication - the tracking of commitments - is a good thing. I used to think this. But not only can it be corrupted, it also throws away huge amounts of information from the embodied source of communication. When all channels of communication are subject to surveillance, communications themselves can be coerced. In domestic situations this can be worse, because it gives rise to a "double-bind" where messages of care go hand-in-hand with threats and intimidation. Work communications too can put people in very difficult situations where they have to communicate one thing electronically but really think another. Imagine having coercion both at home and at work. It's no wonder there is an explosion of stress. 

We probably need what Stafford Beer called an "algedonic loop" in our communications - one that provides assurance to those communicating that they are in fact "ok" - or provides a way of pushing a red button if they really are not and need help. It's not hard to do - probably something like this would work: Wellbeing (google.com)
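The algedonic loop above can be sketched very simply. What follows is only a toy illustration (the class, thresholds and message formats are all my own invention, not a real system): routine "I'm ok" check-ins pass silently, while a red-button press, or a prolonged silence, raises an alert.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AlgedonicLoop:
    """Toy algedonic loop: routine check-ins are silent; a red-button
    press, or silence beyond a threshold, escalates to an alert."""
    silence_threshold: timedelta = timedelta(days=7)
    last_checkin: Optional[datetime] = None
    alerts: list = field(default_factory=list)

    def checkin(self, when: datetime, ok: bool = True) -> None:
        self.last_checkin = when
        if not ok:  # the red button: bypasses normal channels at once
            self.alerts.append(f"RED BUTTON at {when.isoformat()}")

    def review(self, now: datetime) -> None:
        # Silence is itself a signal: no check-in for too long also escalates.
        if self.last_checkin is None or now - self.last_checkin > self.silence_threshold:
            self.alerts.append(f"SILENT since {self.last_checkin}")

loop = AlgedonicLoop()
t0 = datetime(2024, 5, 8, 9, 0)
loop.checkin(t0)                                # routine "ok": nothing recorded
loop.review(t0 + timedelta(days=3))             # within threshold: still nothing
loop.checkin(t0 + timedelta(days=4), ok=False)  # red button pressed
loop.review(t0 + timedelta(days=20))            # long silence after that
print(loop.alerts)
```

The point of the design is that "no news" is only good news within a bounded window - exactly the assurance the loop is meant to provide.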



Monday 6 May 2024

Work, Labour and Occupational Health: Hannah Arendt and the boundaries of subsistence

I've been at the International Congress on Occupational Health conference in Marrakech this week. I gave a presentation on AI in occupational health, and also had a poster on synthetic data from large language models. I was pretty much alone in talking about AI, which I found hard to believe. It will be different by their next conference in three years' time, by which time we may have an idea of the pathologies we might unleash upon ourselves. 

By far the best session I attended was on women and occupational health. Part of the reason it was so good is that the general definition of work most people in occupational health would give you is "something you do for money". For women, far more than men, there is a huge amount of work which is not paid. Unpaid work such as childcare, together with specific female health issues which directly relate to the viability of society, has a huge impact on paid work. I think this means that we need to re-examine our definition of "work". This has made me think about Hannah Arendt's distinctions in "The Human Condition" (a book that has lived with me for most of my academic career).

Arendt (partly following Marx) makes the distinction between work, labour and action. "Work", she sees as a higher form of activity which leads to the production of things or structures which outlast the process of their creation. "Labour" on the other hand, is activity that is necessary for ongoing existence. One of the challenges for us to think about in occupational health is that a lot of the "work" we discuss (and its health implications) is really "labour". People labour cutting sugar cane, or mining: the purpose of this labour is to earn enough money to subsist, or to feed processes that require constant attention. It does nothing to create something new which will outlast the process. "Action" is specifically political - the negotiated engagement between human beings in the process of coordinating the navigation of the world.

In academia, much activity used to be "work" and "action" but has become "labour" - particularly with technology. Working with technology platforms, for example, seems to have become labour rather than work. Yes, using platforms to assemble resources for others fits Arendt's definition of work, but these administrative processes are becoming increasingly ephemeral. I have observed in my own university how meetings become dominated by discussions on what technologies to use for what, or what protocols to follow to achieve certain goals, with ever changing criteria demanding continual adaptation. This is all labour - activities for the subsistence of the operation. Very little time in such meetings is devoted to "what matters", which would be partly "action" in Arendt's sense, and work in the sense that it might produce something durable and new. Even deciding on and assessing "learning outcomes" becomes ephemeral labour, not work. Worst of all, we may have turned the intellectual journey of study itself into labour rather than work.

What technology and the complexity of modern life appear to have done to us is to move the boundaries of subsistence. Today, in order to subsist, it is necessary to perform often complex and psychologically draining activities whose purpose is merely to feed a complex system that continually demands more input. David Graeber gave a nice name to much of this activity: "interpretive labour" - the labour of guessing the requirements of those for whom we work.

It is important to think about why this has happened, since some of these psychologically draining tasks we might be tempted to allocate to AI in the future. Why should there be a desire to turn human work into labour? The issue of contingency - both in work and action - is helpful to understand this.

In Arendt's conception of "action" there is greater contingency than labour: this is partly because in political action, much is necessarily undecided. Organisations, however, are generally averse to dealing with contingency in an increasingly uncertain world. Dan Davies's excellent new book "The Unaccountability Machine" is basically a description of how organisations have designed-out the need to deal with contingency, instead seeking to attenuate-out the complexity of the environment behind anonymous systems. As long as we seek to attenuate the variety of the world, we will continue to turn work into labour.

The solution to this problem has always been to amplify contingency. It is only by amplifying contingency that the human processes of conversation and coordination, in which Arendtian "action" is properly situated, can take place. So could AI amplify contingency?

The activities of work, labour and action are the result of a selection mechanism. We choose the actions of work just as we choose the actions of labour, and we need to understand how that selection mechanism is constructed. The difference between work and labour lies in the constraints that are operative, and in the degree to which the number of options available for selection increases. When the number of options available for selection increases, we are looking at work rather than labour. Where the options available for selection remain the same, or even decrease, we are looking at labour.
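The distinction in this paragraph can be stated in almost mechanical terms. A minimal (and deliberately crude) sketch, with names of my own choosing:

```python
def classify_activity(options_before: int, options_after: int) -> str:
    """Crude model of the distinction above: an activity counts as
    'work' when it increases the number of options available for
    future selection, and as 'labour' when the options stay the
    same or shrink. Illustrative only."""
    return "work" if options_after > options_before else "labour"

# Writing a paper or building a durable tool opens new possibilities:
print(classify_activity(5, 9))   # work
# Filling in the same compliance form yet again leaves them unchanged:
print(classify_activity(5, 5))   # labour
```

Of course, counting options is itself the hard part; the sketch only makes the criterion explicit, not measurable.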

Contingency is the key to increasing the options for acting. It is only with uncertainty that the conversational mechanisms are introduced which steer selection processes to engage with one another and acquire new options from each other. This is similar to Leydesdorff’s idea behind the Triple Helix: new options arise through the interaction between the many discarded ideas from different stakeholders. So the correct question as to the possible impact of AI will be whether it can be used to amplify uncertainty.

If AI is used to automate tasks, it will reduce uncertainty. Increasingly, discussions will focus on the function of AI, and options will reduce to the technical details of one AI or another. However, if we see that the function of a whole organisation - a business, a university, a government - is to make selections of action, then the scientific question we can ask is "how does it construct its selection mechanism?".

AI is really a scientific instrument that can help us to answer this question and gain deeper insight into our organisations. To use it correctly is the work of science, and will provide new adaptive capacity to deal with the future. To use it badly will turn what little work still exists into labour, and shift the boundary of subsistence even further, to encompass the whole gamut of human action.