Sunday 7 May 2023

The Endosymbiotic Moment

It's become increasingly obvious that there is something quasi-biological about current AI approaches. It's not just that there is a strong genotype-phenotype homology in the way that relatively fixed machine learning models work in partnership with adaptive statistics (see Improvisation Blog: AI, Technical Architecture and the Future of Education (dailyimprovisation.blogspot.com)). More importantly, the unfolding evolutionary dynamics of machine learning also appear to confirm some profound theories about cellular evolution. In my book about the future of education, written four years ago now, I said that there would come an "endosymbiotic moment" between education and technology. Events seem to be playing that out, but now I think it's not just education that is in for an endosymbiotic moment, but the whole of society. 

This may be why people like Elon Musk, who has a big stake in AI research, are calling for a "pause". Why? Is it wishful thinking to suggest that it is because the people most threatened by what is happening are people like him? Perhaps, but it may well be. 

The essence of biological evolution, and specifically cellular evolution, is that a boundary (e.g. the cell wall) must be maintained. The cell wall defines the relationship between its inside and its outside. Given that the environment of the cell is constantly changing, the cell must somehow adapt to threats to its existence. The principal strategy is what Lynn Margulis called "endosymbiosis". This is basically where the cell absorbs aspects of its environment which would otherwise threaten it. It explains, for example, the presence of mitochondria within the cell, which, Margulis argued, were once independent simple organisms like bacteria. Endosymbiosis is the means by which the cell becomes more like its environment, and through this process it is able to anticipate likely threats and opportunities that the environment might throw at it. It is also the way in which cells acquire "memory" of their evolutionary history - a kind of inner story which helps to coordinate future adaptations and cooperation with other cells. From this perspective, DNA is not the "blueprint" for life, but rather the accreted result of ongoing R&D in the cell's existence. 

What's this got to do with technology? The clue is in a leaked memo from Google (Google "We Have No Moat, And Neither Does OpenAI" (semianalysis.com)), which highlighted the threat to the company's AI efforts not from competitor companies, but from open source developments. All corporate entities, whether companies, universities or even governments, maintain their viability and identity (and, in the case of companies, their profits) by maintaining the scarcity of what they do. That means maintaining a boundary. Often we see corporate entities doing this by "swallowing up" aspects of their environment which threaten them. The big tech giants have made a habit of this. 

The Google memo suggests something is happening in the environment which the corporation can't swallow. This is open source development of AI. Of course, there is nothing new about open source, but corporations were always able to maintain an advantage (and maintain scarcity) in their adoption of the technology, often by packaging products and services together to offer them to corporations and individuals. Microsoft has had the biggest success here. So why is open source AI so much more of a problem than OpenOffice or Ubuntu?

The answer to this question lies in the nature of AI itself. It is, fundamentally, an endosymbiotic technology: a method whereby the vast networked environment of the internet can be absorbed into a single technological device (an individual computer or phone). That device, which then doesn't need to be connected to the internet, can reproduce the variety of the internet. This gives individuals equipped with the technology a vastly increased power to anticipate their environment. Up until this point, the tech industry has aimed to empower individuals with some anticipatory capability, but to maintain control of the tools which provide this. It is that control of the anticipatory tools which is likely to be lost by corporations. And it will not just be chatbots - it will be all forms of AI. It is what might be called a "radical decentralisation moment".

This has huge implications. Intellectual property, for example, depends on the creation of scarcity. But what happens if innovation is now performed by (or in conjunction with) machines which are ubiquitous and decentralised? New developments in technology will quickly find their way to the open source world, not just because of some desire to be "open" but because that is the place where they can most effectively develop. Moreover, open source AI is much simpler than open source office applications. It has far fewer components: a training algorithm + data + statistics is just about all that's needed. Who would invest in a new corporate innovation in a world where any innovation is likely to be reproduced by the open source community within a matter of months? (I wonder if the Silicon Valley Bank collapse carried some forewarning of this problem.)
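To make the "training algorithm + data + statistics" point concrete, here is a toy sketch (my own illustrative example, not anything from the memo): a complete learning system in a few lines of plain Python, where the "statistics" are just two numbers, the data is a small list, and the training algorithm is ordinary gradient descent.

```python
# A minimal learning system: data + training algorithm + statistics.
# Toy illustration only - real models differ in scale, not in kind.

# 1. Data: a tiny dataset sampled from the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

# 2. Statistics: the model's entire state is two numbers (weight, bias).
w, b = 0.0, 0.0

# 3. Training algorithm: gradient descent on squared error,
#    one sample at a time, repeated over many passes.
learning_rate = 0.01
for _ in range(2000):
    for x, y in data:
        error = (w * x + b) - y   # prediction minus target
        w -= learning_rate * error * x
        b -= learning_rate * error

# After training, (w, b) should sit close to (2.0, 1.0).
print(round(w, 2), round(b, 2))
```

The same three ingredients, scaled up by many orders of magnitude, are essentially what an open source community needs to reproduce a large model - which is why the component count, rather than any one component's secrecy, is the striking thing.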

But it's not just the identities of tech businesses which are under threat. What about education? What about government? Are we really so sure that the scarcity of the educational certificate, underpinned by the authority of the institution, is safe from an open source challenge? (Blockchain hasn't gone away, for example.) I'm not, and the way that universities have responded to ChatGPT has highlighted their priority to "protect the certificate!", like the queen in the hive. If the certificate goes, what else does education have? (I'm not suggesting "nothing", but the certificate is the current business model and has been for decades.)

Then there is government and the legal frameworks which protect the declaration of scarcity in commerce through IP legislation and contracts. The model for this was the East India Company, where protecting territories and trade routes by the use of force underpinned imperial wealth. What if you can't protect anything? What kind of chaos does that produce? AI regulation is not going to be a shopping list of dos and don'ts, because it's going to be difficult to stop people doing things. China is perhaps the most interesting case. No government can control a self-installed, non-networked chatbot: it's like kids in the Soviet Union listening to rock and roll on X-ray film turned into records. Then, of course, there will be terrorist cells arming themselves with bomb-making experts. We are going to need to think deeper than the ridiculously bureaucratic nonsense of GDPR. 

Our priority in education, industry and government will need to be to restabilise relations between entities whose identities will be very different from the identities they have now. In the Reformation, it was the Catholic Church which underwent significant changes, underpinned by major changes in government. The English Civil War and the Restoration produced fundamental changes to government, while the Industrial Revolution produced deep changes to commerce. But this is a dangerous time. Historical precedent shows that changes on this level are rarely unaccompanied by war. 
