In the final dialogue between the physicist David Bohm and the spiritual guru Jiddu Krishnamurti, Krishnamurti focuses on how "human problems" can be solved, why they persist, and whether humanity could ever live without any problems at all. He says:
“I am asking in this dialogue whether it is possible to have no human problems at all - only technological problems, which can be solved. But human problems seem insoluble. Is it because of our education, our deep-rooted traditions, that we accept things as they are?”

After some considerable soul-searching, Bohm responds:
“I wonder if we should even call these things problems, you see. A problem would be something that is reasonably solvable. If you put the problem of how to achieve a certain result, then that presupposes that you can reasonably find a way to do it technologically. But psychologically, the problem cannot be looked at in that way; to propose a result you have to achieve, and then find a way to do it.”

Bohm’s insight highlights the fundamental dichotomy of educational technology. Technology in education is approached - by institutions, teachers, and learners - as a solution to a human problem. Yet the human problem of education is not one for which the desired result can be specified simply enough for a technology to be proposed as its solution. Most commonly, attempts to solve human problems in this way simply create a deeper problem, and it is this to which Krishnamurti is drawing attention. Krishnamurti’s suspicion that education might itself be a cause of human problems - that education attempts to solve human problems through technological intervention - would suggest that some blame for the state of the world must sit at education’s feet.
Education is a human problem to which institutions attempt to find solutions. There are many dimensions to the human problem of education: the problem of making distinctions, the problem of conversation, the problem of institutional organisation, the problem of science and knowledge, the problem of openness, the problem of collective decision and judgement, the problem of economics, and the problem of research into education itself. The human problem of education is part of all these problems. That education so often seems to exacerbate these problems may be partly because we do not possess a meta-language for human problems: a way of talking about the connectedness of human problems.
And yet I wonder: if we saw human problems from a different perspective, might we be able to look at our situation organisationally, and so find a better way of living with the technologies which so often contribute to our problems? What if we had a meta-language of human problems?
This week Joseph Stiglitz argued that Artificial Intelligence is the world's greatest threat (see https://www.timeshighereducation.com/news/joseph-stiglitz-education-effort-post-war-scale-needed-ai) - a force which could lead the world to fascism. What is needed in response, he argues, is a massive-scale expansion of education, to empower human critical faculties to address the challenge of automated judgement and corporate surveillance.
There's some essence of truth in Stiglitz's message: the threat to society lies in the imbalance between machines and humans - but the temptation is to blame the machines themselves (Stiglitz seems to do this). In the end, it is not machines that replace jobs with automation; it is human institutions - businesses, corporations, and their leaders - which do this. They do it, I believe, because they react to increased environmental uncertainty, which is itself created by technology. The answer to the imbalance between humans and machines is not to empower the institutions! Yet this, unfortunately, appears to be Stiglitz's solution. The machines - and particularly AI - are powerful because they are organised in a different way from human institutions: they form a heterarchy (a word coined by Warren McCulloch, a pioneer of neural networks), whereas human institutions are hierarchies. The root of the human problem is institutions misunderstanding the nature of the threat from their environment and mis-adapting in ways that exacerbate the problem.
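To make the structural contrast concrete, here is a minimal sketch in Python - my own illustration, not anything McCulloch or Stiglitz wrote, with invented names and relations. A hierarchy admits a single transitive ordering of its parts; a heterarchy, in McCulloch's sense, permits cycles of dominance (A over B, B over C, C over A) which no ranking can capture.

```python
# Sketch (illustrative assumption, not a published formalism): a dominance
# relation is hierarchical if it is transitive; a heterarchy violates this.

def is_transitive(prefers: set[tuple[str, str]]) -> bool:
    """Return True if whenever a>b and b>c hold, a>c also holds."""
    for a, b in prefers:
        for c, d in prefers:
            if b == c and (a, d) not in prefers:
                return False
    return True

# A hierarchy: boss > manager > worker, closed under transitivity.
hierarchy = {("boss", "manager"), ("manager", "worker"), ("boss", "worker")}

# A heterarchy: each node dominates another in a cycle; no single ranking exists.
heterarchy = {("A", "B"), ("B", "C"), ("C", "A")}

print(is_transitive(hierarchy))   # True  - a consistent top-down ordering
print(is_transitive(heterarchy))  # False - the intransitive cycle McCulloch described
```

The point of the sketch is only that no amount of re-ranking turns the cyclic relation into a chain of command: a heterarchical system cannot be captured, or countered, by a hierarchy.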
The core issue is that there are ways of organising human institutions which are not hierarchical: ways of organising so as to manage the uncertainties created by technology, rather than defending existing institutional structures against them (and in the process making things worse).
What is needed is a meta-language of human problems. There are ways in which humans can look at their problems and find new ways of organising themselves, sometimes using technologies. In every crisis in human history we see precisely this kind of movement - eventually... after humans have been sufficiently stupid in attempting simple "technological solutions" that things get so bad that no other options appear to be available. If I am worried about the state of the world now, it is because I don't think we have really reached "Max Stupidity" yet.