Monday 6 May 2024

Work, Labour and Occupational Health: Hannah Arendt and the boundaries of subsistence

I've been at the International Congress on Occupational Health in Marrakech this week. I gave a presentation on AI in occupational health, and also had a poster on synthetic data from large language models. I was pretty much alone in talking about AI, which I found hard to believe. It will be different by their next conference in three years' time, by which point we may have an idea of the pathologies we might unleash upon ourselves.

By far the best session I attended was on women and occupational health. Part of the reason it was so good is that the general definition of work most people in occupational health would give you is "something you do for money". For women, far more than for men, a huge amount of work is unpaid. Unpaid work such as childcare, together with specific female health issues that bear directly on the viability of society, has a huge impact on paid work. I think this means we need to re-examine our definition of "work". It has made me think about Hannah Arendt's distinctions in "The Human Condition", a book that has lived with me for most of my academic career.

Arendt (partly following Marx) distinguishes between work, labour and action. "Work" she sees as a higher form of activity which leads to the production of things or structures that outlast the process of their creation. "Labour", on the other hand, is activity necessary for ongoing existence. One of the challenges for us to think about in occupational health is that a lot of the "work" we discuss (and its health implications) is really "labour". People labour cutting sugar cane, or mining: the purpose of this labour is to earn enough money to subsist, or to feed processes that require constant attention; it creates nothing new that will outlast the process. "Action" is specifically political: the negotiated engagement between human beings as they coordinate their navigation of the world.

In academia, much activity used to be "work" and "action" but has become "labour" - particularly with technology. Working with technology platforms, for example, seems to have become labour rather than work. Yes, using platforms to assemble resources for others fits Arendt's definition of work, but these administrative processes are becoming increasingly ephemeral. I have observed in my own university how meetings become dominated by discussions of which technologies to use for what, or which protocols to follow to achieve certain goals, with ever-changing criteria demanding continual adaptation. This is all labour - activity for the subsistence of the operation. Very little time in such meetings is devoted to "what matters", which would be partly "action" in Arendt's sense, and "work" in the sense that it might produce something durable and new. Even deciding on and assessing "learning outcomes" becomes ephemeral labour, not work. Worst of all, we may have turned the intellectual journey of study itself into labour rather than work.

What technology and the complexity of modern life appear to have done to us is to move the boundaries of subsistence. Today, in order to subsist, it is necessary to perform often complex and psychologically draining activities whose purpose is merely to feed a complex system that continually demands more input. David Graeber gave a nice name to much of this activity, "interpretive labour": the labour of guessing the requirements of those for whom we work.

It is important to think about why this has happened, since we might be tempted to allocate some of these psychologically draining tasks to AI in the future. Why should there be a desire to turn human work into labour? The issue of contingency - in both work and action - helps to explain it.

In Arendt's conception of "action" there is greater contingency than in labour: in political action, much is necessarily undecided. Organisations, however, are generally averse to dealing with contingency in an increasingly uncertain world. Dan Davies's excellent new book "The Unaccountability Machine" is essentially a description of how organisations have designed out the need to deal with contingency, instead seeking to attenuate away the complexity of the environment behind anonymous systems. As long as we seek to attenuate the variety of the world, we will continue to turn work into labour.

The solution to this problem has always been to amplify contingency. Only by amplifying contingency can the human processes of conversation and coordination, in which Arendtian "action" is properly situated, take place. So could AI amplify contingency?

The activities of work, labour and action are the result of a selection mechanism: we choose the actions of work just as we choose the actions of labour, and we need to understand how that mechanism is constructed. The difference between work and labour lies in which constraints are operative, and in whether the number of options available for selection increases. When the options available for selection increase, we are looking at work; when they remain the same or even decrease, we are looking at labour.

Contingency is the key to increasing the options for acting. It is only with uncertainty that conversational mechanisms are introduced which steer selection processes to engage with one another and acquire new options from each other. This is similar to the idea behind Leydesdorff's Triple Helix: new options arise through the interaction between the many discarded ideas of different stakeholders. So the correct question about the possible impact of AI is whether it can be used to amplify uncertainty.

If AI is used to automate tasks, it will reduce uncertainty. Increasingly, discussions will focus on the function of AI, and options will narrow to the technical details of one AI or another. However, if we see that the function of a whole organisation - a business, a university, a government - is to make selections of action, then the scientific question we can ask is "how does it construct its selection mechanism?".

AI is really a scientific instrument that can help us to answer this question and gain deeper insight into our organisations. To use it correctly is the work of science, and will provide new adaptive capacity to deal with the future. To use it badly will turn what little work still exists into labour, and shift the boundary of subsistence even further until it encompasses the whole gamut of human action.
