There's an interesting article in the Guardian this week about the growth of AI and the surveillance society: https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8
Before reading it, I suggest first inspecting the hyperlink. It points to theguardian.com, but the path it requests is "shoshana-zuboff-age-of-surveillance-capitalism-google-facebook?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8", which carries information about where the link came from and an identifier tying the click back to my account. This information goes to the Guardian, who then exploit the data. Oh, the irony!
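For the curious, here is a minimal sketch (standard-library Python only) of what that query string carries and how to strip it before passing a link on. The fbclid parameter is Facebook's click identifier; the list of tracker prefixes below is just an illustrative selection, not an exhaustive one.

```python
# Inspect the tracking parameter carried by the link, then rebuild a clean URL.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

url = ("https://www.theguardian.com/technology/2019/jan/20/"
       "shoshana-zuboff-age-of-surveillance-capitalism-google-facebook"
       "?fbclid=IwAR0Nmp3uScp5PNzblV2AkpnQtDlrNIEDYp54SdYa4iy9Ofjw66FgDCFceO8")

parts = urlparse(url)
params = parse_qsl(parts.query)
print(dict(params))  # {'fbclid': 'IwAR0...'} -- the identifier that travels with the click

# Drop well-known tracking parameters (illustrative selection) and rebuild the URL.
TRACKER_PREFIXES = ("fbclid", "gclid", "utm_")
clean = [(k, v) for k, v in params if not k.startswith(TRACKER_PREFIXES)]
print(urlunparse(parts._replace(query=urlencode(clean))))
```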
But I don't want to distract from the contents of the article. Surveillance is clearly happening, and 'platform capitalism' (Google and Facebook are platforms) is clearly a thing (see Nick Srnicek's book here: https://www.amazon.co.uk/Platform-Capitalism-Theory-Redux-Srnicek/dp/1509504877, or the Cambridge Platform Capitalism reading group: https://cpgjcam.net/reading-groups/platform-capitalism-reading-group/). But we should resist the temptation to conclude that technology itself is a bad thing. The problem lies in the relationship between hierarchically organised institutions and the mounting uncertainty they face, an uncertainty exacerbated by the abundance of options that technology has given us.
In writing this blog, I am exploiting one of the options that technology has provided. I could instead have published a paper, written to the Guardian, sent it to one of the self-publishers, made a video about it, or simply expressed my theory in discussion with friends. I could have used Facebook or Twitter, or I could have chosen a different blogging platform. In fact, the choice is overwhelming. This is what technology has done: it has given us an unimaginably large number of new options for doing things that we could already do. How do I choose? That's uncertainty.
For me as a person, it's perhaps not so bad. I can resort to my habits as a way of managing my uncertainty, which often means ignoring some of the other available options that technology provides (I really should move my blog off Blogger, for example, but that's a big job). But the sheer number of options that each of us now has is a real problem for institutions.
This is because the old ways of doing things like learning, printing, travelling, broadcasting, banking, performing, discussing, shopping or marketing all revolved around institutions. But suddenly (and it has been sudden) individuals can do these things in new ways, alongside those old-fashioned institutions. So institutions have had to change quickly to maintain their existing structures. Some, like shops and travel agents, are in real trouble - they were too slow to change. Why? Because their hierarchical structures meant that staff on the shop floor, who could see what was happening and what needed to be done, were not heard at the top soon enough, and the hierarchy was unable to effect radical change because its instruments of control were too rigid.
But not all hierarchies have died. Universities, governments, publishers and broadcasters survive well enough. This is not because they've adapted. They haven't really (have universities really changed their structures?). But the things that they do - pass laws, grant degrees, publish academic journals - rest on declarations they make about the worth of what they do (and the lack of worth of what is not done through them), declarations which are upheld by other sections of society. So a university declares that only a degree certificate is proof that a person is able to do something, or should be admitted to a profession. These institutions have upheld their power to declare scarcity. As more options have become available in society to do the things that institutions do, so the institutions have made ever stronger claims that their way is the only way. Increasingly, institutions have used technology to reinforce their declaration of scarcity (journal paywalls, the VLE, AI, surveillance). These declarations of scarcity are effectively a means of defending the existing structures of institutions against the increasing onslaught of environmental uncertainty.
So what of AI and surveillance? The two are connected. Machine learning depends on data, and data is provided by users, so users' actions are 'harvested' by AI. However, AI is no different from any other technology: it provides new options for doing things that we could do before. As the options for doing things increase, uncertainty increases, and this feeds a reaction by institutions, including corporations and governments. The solution to the uncertainty caused by AI and surveillance is more AI and surveillance: now in universities, in governments (China particularly) and in technology corporations.
This is a positive-feedback loop, and as such is inherently unstable. It is more unstable still when we realise that the machine learning isn't that good or intelligent after all. Machine-learning systems, unlike humans, are very bad at being retrained. Retrain a neural network and you risk everything it had learnt before going to pot, the phenomenon known as catastrophic forgetting (I'm having direct experience of this at the moment in a project I'm doing). The simple fact is that nobody knows how it works. The real breakthrough in AI will come when we really do understand how it works. When that happens, the ravenous demand for data will become less intense: training can be targeted with manageable and specific datasets. Big data is, I suspect, merely a phase in our understanding of the heterarchy of neural networks.
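To make that concrete, here is a toy sketch of the failure mode using scikit-learn's digits dataset, chosen purely for illustration (it is not the project I mentioned): train a small network on half the classes, then retrain it on the other half, and its performance on the first half collapses.

```python
# Toy demonstration of catastrophic forgetting with an incremental classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

# "Task A" is digits 0-4, "task B" is digits 5-9.
a, b = y < 5, y >= 5
Xa_train, Xa_test, ya_train, ya_test = train_test_split(X[a], y[a], random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)

# Train incrementally on task A only.
for _ in range(100):
    clf.partial_fit(Xa_train, ya_train, classes=np.arange(10))
print("accuracy on task A after training on A:", clf.score(Xa_test, ya_test))

# Now "retrain" on task B only -- no task-A examples in the batches.
for _ in range(100):
    clf.partial_fit(X[b], y[b])
print("accuracy on task A after retraining on B:", clf.score(Xa_test, ya_test))
```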
The giant surveillance networks in China are feeding an uncertainty dynamic that will eventually implode. Google and Facebook are in the same loop. Amplified uncertainty eventually presents itself as politics.
This analysis is produced by looking at the whole system: people and technology. It is one of the fundamental lessons from cybernetics that whole systems have uncertainty. Any system generates questions which it cannot answer. So a whole system must have something outside it which mops up this uncertainty (cyberneticians call this 'unmanaged variety'). This thing outside is a 'metasystem'. The metasystem and the system work together to maintain the identity of the whole, by managing the uncertainty which is generated. Every whole has a "hole".
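A toy way to see the arithmetic, in the spirit of Ashby's Law of Requisite Variety: count variety in bits, and whatever the system cannot absorb is left over for the metasystem. The numbers below are invented purely for illustration.

```python
# Toy variety accounting: the shortfall is the 'unmanaged variety' discussed above.
import math

environment_states = 10_000   # hypothetical: distinct situations the environment can throw up
regulator_responses = 200     # hypothetical: distinct responses the system can actually make

env_variety = math.log2(environment_states)   # variety measured in bits
reg_variety = math.log2(regulator_responses)

# The system can absorb at most its own variety; whatever is left over is the
# unmanaged variety that something outside the system (the metasystem) must mop up.
unmanaged = max(0.0, env_variety - reg_variety)
print(f"environmental variety: {env_variety:.1f} bits")
print(f"regulatory variety:    {reg_variety:.1f} bits")
print(f"unmanaged variety:     {unmanaged:.1f} bits left for the metasystem")
```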
The question is where we put the technology. Runaway uncertainty is caused by putting the technology in the metasystem to amplify its mopping-up of uncertainty. AI and surveillance are the H-bombs of metasystemic uncertainty management now, and they simply make the problem worse while initially seeming to do the job. It's very much like the Catholic Church's commandeering of printing.
However, technology might instead be used to organise society differently, so that society can better manage the uncertainty it produces. That would mean using technology to create an environment for the open expression of uncertainty by individuals: a genuinely convivial society. I'm optimistic that what we learn from our surveillance technology and AI will lead us here... eventually.
Towards a holographic future
The key moment will be when we learn exactly how machine learning works. Neural networks are a bit like fractals or holograms, and this means that the relationship between a change to the network and the reality it represents is highly complex. Which parts of a neural network do we change to produce a determinate change in its behaviour (without unforeseen consequences)? What is fascinating is that consciousness and the universe may well work according to the same principles (see https://medium.com/intuitionmachine/the-holographic-principle-and-deep-learning-52c2d6da8d9). The fractal is the image of the future, as the telescope and the microscope were the images of the Enlightenment (according to Bas van Fraassen: https://en.wikipedia.org/wiki/Bas_van_Fraassen).
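A tiny numpy sketch of why that question is so hard (a toy random network, purely illustrative): change a single weight and the behaviour shifts for every input, because the representation is distributed across the whole network, much as every fragment of a hologram encodes the whole image.

```python
# Perturb one weight of a toy network and watch every output move.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # first-layer weights
W2 = rng.normal(size=(8, 4))    # second-layer weights
X = rng.normal(size=(5, 16))    # five unrelated inputs

def forward(x, w1, w2):
    return np.tanh(np.tanh(x @ w1) @ w2)

before = forward(X, W1, W2)

W1_tweaked = W1.copy()
W1_tweaked[3, 2] += 0.5         # change just one weight

after = forward(X, W1_tweaked, W2)
print(np.abs(after - before))   # outputs shift for every input, not just one
```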
Through the holographic lens the world looks very different. When we understand how machine learning does what it does, and we can properly control it, each of us will turn our digital machines to ourselves and our social institutions. We will turn them to our own learning and our learning conversations. We will turn them to art and to aesthetic and emotional experience. What will we learn? We will learn about coherence and how to take decisions together for the good of the planet. The fractals of machine learning can create the context for conversation in which many brains can think as one brain. We will have a different context for science, where scientific inquiry embraces quantum mechanics and its uncertainty. We will have global education, where the uncertainty of every world citizen is valued. And we will have a transformed notion of what it is to 'compute'. Our digital machines will tell us how nature computes, in a very different way from silicon.
Right now this seems like fantasy. We have surveillance, nasty governments, crazy policies, inequality, and so on. But we are in the middle of a scientific revolution. The last time that happened, it came with the Thirty Years War, the English Civil War and Cromwell. We also have astonishing tools which we don't yet fully understand. Our duty is to understand them better and to create the environment for conversation that the universities once provided.