Monday, 15 July 2019

Interdisciplinary Creativity in Marseille

Last week I was lucky enough to go to this year's Social Ontology conference in Marseille. I've been going to southern France for a few years now to sit with economists and management theorists (no cyberneticians apart from me!) and talk about everything. Academic "authority" was provided by Tony Lawson (whose Cambridge social ontology group was the model for the meeting) and Hugh Willmott, whose interdisciplinarity helped establish Critical Management Studies. Three years ago, I hosted the event in Liverpool, and more and more it feels like a meeting of friends - a bit like the Alternative Natural Philosophy Association (http://anpa.onl) which I'm hosting in Liverpool in August, but with management studies instead of physics.

This year, Tony didn't come, but instead we had David Knights from Lancaster University. It's always been an intimate event - and usually better for that - with discussion of a very high level. Gradually we have eschewed papers, and focused entirely on dialogue for two days on a topic. This year's topic was Creativity.

If I'd read David Bohm before I'd started coming to these conferences, I would have known exactly what this was and why it was so good. Now I know Bohm, and I know he would have absolutely understood what we were doing. And with a topic like creativity, understanding what we were doing, where we were going, or where we would end up, was often unclear. Dialogue is a bit scary - it's like finding your way through the fog. Sometimes people get frustrated, and it is intense. But it is important to have faith that what we manage to achieve collectively is greater than what could be achieved by any individual.

So what conclusions did we reach? Well, I think I can sum up my own conclusions:
  • Creativity is not confined to human beings. It is a principle of nature. It may be the case that creative artists tune in to natural processes, since this would explain how their labours can result in something eternal.
  • Creativity is connected to coherence. It is an expression of fundamental underlying patterns. In an uncertain environment, the necessity for the creative act is a necessity to maintain coherence of perception.
  • Creativity can be destructive. However, in my view, "creative destruction" needs unpicking. Creativity may always create something new which is additional to what was there before. This creates an increase in complexity and a selection problem. The "destruction" is done in response to this increase in complexity - often by institutions ("from now on, we are going to do it like this!").
  • The difference between creativity with regard to technical problems and creativity in human problems was discussed. Technical creativity is also driven by the need for individual coherence - particularly in addressing ways of managing complexity - but it loses sight of the institutional destructive processes that may follow in its wake.
  • The conversion of everything to money is, I think, such a "technical" innovation. On the one hand, money codifies expectations and facilitates the management of complexity. However, it prepares the way for the destruction of richness in the environment. 
  • The idea of "origin-ality" was explored. "Original" need not be new, but rather, connected to deeper "origins" in some way. This relates directly to the idea of creativity as a search for coherence.
  • Time is an important factor in creativity - it too may feature as a fundamental dimension in the coherence of the universe to which artists respond (particularly musicians, dancers, actors). Time raises issues about the nature of anticipation in aesthetic experience, and the perception of "new-ness".
  • A genealogy of creativity may be necessary - a process of exploring through dialogue how our notions of creativity have come to be. 
  • The genealogical issue is important when considering the role of human creativity in failures of collective decision-making and the manifest destruction of our environment. I'm inclined to see the issue of genealogy as a kind of laying-out of the levels of recursion in the topics and discourses of creativity, and this laying out may be necessary to provide sufficient flexibility for humankind to address its deepest problems.
  • Psychoanalytic approaches to creativity are useful, as are metaphors of psychodynamics. Michael Tippett's discussion of his own creative process had a powerful effect on everyone. However, the value of psychodynamics may lie in the fact that similar mechanisms are at work at different levels of nature (for example, cellular communication).
[Embedded video: Michael Tippett Interview, Directors Cut Films on Vimeo]

I took my Roli Seaboard with me, which inspired people to make weird noises. Music is so powerful for illustrating this stuff, and I invited people to contribute to a sound collage of the conference... which you can hear here. Actually, it's the first time I've heard a reflexology technique being used on the Seaboard!



Tuesday, 9 July 2019

Creativity and Novelty in Education and Life

A number of things have happened this week which have led me to think about the intellectual efforts that academics engage in to make utterances which they claim to be insightful, new or distinct in some other way. The pursuit of scholarship seems to result from some underlying drive to uncover things, the communication of which brings recognition by others that what one says is in some way "important" or "original", and basically confers status. Educational research is particularly interesting in this regard since very little that is uttered by anyone is new, yet it is often presented as being new. I don't want to criticise this kind of fakery in educational research (but it is fakery), of which we are all guilty, but rather to ask why it is we are driven to do it. Fundamentally, I want to ask "Why are we driven to reclaim ideas from the past as new and rediscovered in the present?" Additionally, I think we should ask about the role of technology in facilitating this rediscovery and repackaging of the past.

Two related questions accompany this. The first is about "tradition". At a time when we see many of the tropes of statehood, politics and institutional life becoming distorted in weird ways (by the Trumps, Farages and co), what is interesting is to observe what is retained in these distortions and what is changed. Generally, surface appearance is preserved while the underlying structure is transformed: structures that were once distributed, engaging the whole community in the reproduction of rituals and beliefs, give way to structures which leave a single centre of power responsible for that reproduction. This is, in a certain sense, a creative act on the part of the individual who manages to bend traditions to their own will.

Central to this distortion process is the control of the media. Technology has transformed our communication networks which, before the internet, were characterised by personal conversations occurring within the context of global "objects" such as TV and newspapers. Now the personal conversations are occurring within the frame of the media itself. The media technologies determine the way the communication game is played, and increasingly intrude on personal conversations where personal uncertainties could be resolved. The intrusion of media technologies increasingly serves to sway conversation in the direction of those who control the media, leaving personal uncertainties either unresolved, or deliberately obfuscated. The result is a breakdown in mental health, an increasing lack of coherence, and increased control by media-controlling powers.

Where do creativity and novelty sit in all of this? Well, they too are a kind of trope. We think we are rehearsing being Goethe or Beethoven, but while the surface may bear some similarity, the deep structure has been rewired. More importantly, the university has become wired into this mechanism too. Is being creative mere appearance in a way that it wasn't in a pre-internet age?

At the same time, there's something about biology which is driven to growth and development to overcome restriction. Our media bubble is a restriction on growth, and right now it looks menacing. The biological move is always to a meta-level re-description. Epochs are made when the world is redescribed. But we cannot redescribe in terms of "creativity" or "innovation" because those things are tropes wired into the media machine. Seeing the media machine for what it is may present us with some hope - but that is very different from our conventional notions of creativity.

Sunday, 7 July 2019

The Preservation of Context in Machine Learning

I'm creating a short online course to introduce staff in my faculty to machine learning. It's partly about awareness-raising (what's machine learning going to do to medicine, dentistry, veterinary science, psychology, biology, etc?), and partly about introducing people to the tools which are increasingly accessible and available for experimentation.

As I've pointed out before, these tools are becoming increasingly standardised, with the predominance of Python-based frameworks for creating machine learning models. Of course, Python presents a bit of a barrier - it's so much better if you can do it on the web, and indeed, if you could do it in a desktop app based on web technologies like Electron.js. So that is what I'm working on.

Putting machine learning tools in the hands of ordinary people is important. The big internet corporations want to present a message that only they have the "big data" and expertise sufficient to really handle the immense power of AI. I'm not convinced. First of all, personal data is much "bigger" than we think, and secondly, our machine learning tools are hungry for data partly because we don't fully understand how they work. The real breakthrough will come when we do understand how they work. I think this challenge is connected to appreciating the "bigness" of personal data. For example, you could think of your 20 favourite films, and then rank them in an order. How much information is there in that ranking?

Well (without identifying my favourites), we have
F
B
C
A
E
... etc

Now if we consider that every item in the ranking stands in a relation to every other item, then the amount of data is actually the number of permutations of pairs of items. So,

F B
F C
F A
F E... and so on
That's 20!/(20-2)!, or 380 rows of data from a rank list of 20 items.
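
As a rough sketch of that arithmetic (the film titles here are just placeholders), expanding a rank list into its ordered pairs looks something like this:

```python
# A minimal sketch: expand a ranked list into ordered pairs, one row per
# "item i stands in relation to item j". With 20 items this gives
# 20!/(20-2)! = 380 rows.
from itertools import permutations

ranked_films = [f"film_{i}" for i in range(1, 21)]  # placeholder titles, best first

pairs = list(permutations(ranked_films, 2))
print(len(pairs))  # 380
```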

So could you train an algorithm to learn our preferences? Why not?
Given a new item, can it have a guess as to which rank that item might be? Well, it seems it can have a pretty good stab at it.
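
As a sketch of how that might work - the feature vectors and film labels below are invented purely for illustration - one could train a simple pairwise preference model and estimate a new item's position by counting how many known items it is predicted to beat:

```python
# A minimal sketch of learning a personal ranking from pairwise preferences.
# Each item gets a small (hypothetical) feature vector; the model learns
# "is i preferred to j?" from feature differences, then a new item's rank is
# estimated by counting its predicted "wins" against the known items.
import numpy as np
from itertools import permutations
from sklearn.linear_model import LogisticRegression

features = {
    "F": [0.9, 0.1, 0.3], "B": [0.8, 0.4, 0.2], "C": [0.6, 0.5, 0.7],
    "A": [0.4, 0.9, 0.1], "E": [0.3, 0.2, 0.8], "D": [0.1, 0.6, 0.5],
}
ranking = ["F", "B", "C", "A", "E", "D"]  # the personal rank list, best first

X, y = [], []
for i, j in permutations(ranking, 2):
    X.append(np.subtract(features[i], features[j]))            # feature difference
    y.append(1 if ranking.index(i) < ranking.index(j) else 0)  # 1 = i ranked above j

model = LogisticRegression().fit(np.array(X), np.array(y))

new_item = [0.5, 0.3, 0.6]  # a film we haven't ranked yet
wins = sum(model.predict([np.subtract(new_item, features[k])])[0] for k in ranking)
print(f"Estimated position: {len(ranking) - wins + 1} of {len(ranking) + 1}")
```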

This is interesting because if the machine learning can estimate how "important" we think a thing is, and we can then refine this judgement in some way (by adjusting its position), then something is happening between the human and the machine: the machine is preserving the context of the human judgement which is used to train it.

The predominant way machine learning is currently used is to give an "answer": to identify the category of thing an item is. Yet the algorithm that has done this has been trained by human experts whose judgements of categories are highly contextual. By giving an answer, the machine strips out the context. In the end, information is lost. Using ranking, it may be possible to calculate how much information is lost, and from there to gain a deeper understanding of what is actually happening in the machine.
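
As a crude back-of-envelope illustration of the scale of that loss (assuming we simply count equally likely possibilities), a full ranking of 20 items carries log2(20!) bits, while a single categorical "answer" out of 20 carries only log2(20):

```python
# Rough information counts: a full ordering of 20 items vs. a single "answer".
import math

n = 20
ranking_bits = math.log2(math.factorial(n))  # ~61.1 bits in the full ordering
answer_bits = math.log2(n)                   # ~4.3 bits in a single category
print(f"Information discarded: {ranking_bits - answer_bits:.1f} bits")
```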

Losing information is a problem in social systems. Where context is ignored, tyranny begins. I think this is why everyone needs to know about (and actively engage with) machine learning.

Saturday, 6 July 2019

Communication's Illusion

A Nostradamus prediction revealed at the beginning of this year claimed that 2019 would be the year we become closer to animals (see https://www.yearly-horoscope.org/nostradamus-predictions/ for one of the many click-bait references to this). One interpretation is that we might learn to talk to animals...

The prediction interests me because it invites the question of how the noises we make when we talk compare to the noises that animals make. Because we are largely obsessed with processing "information" and "meaning" in our communication (that is, attenuating the richness of sounds to codes), we tend to be oblivious to the amount of redundancy our communication entails, and we also assume that because animal communication has so much redundancy, it carries less meaning. The redundancy of animal communication is much more obvious: why doesn't a crow "caw" only once? Wouldn't one "caw" do the job? Why does it do it regularly 4 or 5 times (or more)? Why with the same rhythmic regularity?

Understanding of information and meaning in human communication is far from complete, and certainly for the latter, the scientific consensus seems to be pointing to the fundamental importance of redundancy, or constraint, in the establishment of meaning. Animal communication is likely to be just as meaningful to animals as our communication is to us. Indeed, our perception of "consciousness" among animals seems to be dependent on our observing animals operating within a lifeworld which we ourselves recognise. Like this ape, who was filmed using a smartphone:
One problem we have in appreciating this is the belief that human consciousness is exceptional. This single belief could turn out to be the greatest scientific error, from which the destruction of our environment stems. It may be as naive as believing the earth to be the centre of the universe. In biology, many believe DNA is the centre of the universe of life and consciousness, and human DNA is special. I'm with John Torday who argues this view is ripe for a similar Copernican transformation.

I'm making a lot of weird music at the moment using a combination of the piano and the Roli Seaboard. The Seaboard can create disturbed and confused "environments". The piano tries to create redundancies in the patterns of its notes, harmonies, rhythms, and so on. As living things, all of us animals inhabit a confusing environment. The creation of redundancy in communication seems fundamental to the battle to maintain coherence of understanding and effectiveness of coordination. So birds tweet their rhythmic patterns... and I blog! (and others Tweet!). Even when we talk of deep things like philosophy, we are usually repeating what's gone before. But somehow, we need to do it. We need the redundancy.

Do we only think we are saying "something" - some key "new" information? Do we only think this because we believe our consciousness is "special"? This is an uncomfortable thought. But I can't help wondering whether "talking to animals" is not about being Dr Doolittle. It is about realising how much more like birds our human communication really is.

Friday, 28 June 2019

Choosing a new VLE: Technological Uncertainty points towards Personal Machine Learning

I sat in a presentation from a company wanting to replace my university's Virtual Learning Environment yesterday. It was a slick presentation (well practiced) and people generally liked it because the software wasn't too clunky. Lack of clunkiness is a sign of quality these days. New educational technology's functionality is presented as a solution to problems which have been created by other technology, whether it is the management of video, the coordination of marks, mobile apps, management of threaded discussion, integration of external tools, and so on. A solution to these problems creates new options for doing the same kinds of things: "use our video service to make the management of video easier", "use our PDF annotation tool to integrate with our analytics tools", etc. Redundancy of functionality is increased in the name of simplification of technological complexity in the institution. But in the end, it can't keep up: what we end up with is another option to choose from, an increase in the uncertainty of learners and teachers, which inevitably necessitates a managerial diktat to insist on the use of tool x rather than tool y. Technology that promises freedom produces restriction, and an increasingly wide stand-off between the technology of the institution and the technology of the outside world.

The basic thesis of my book "Uncertain Education" is that technology always creates new options for humans to act. Through this, the basic human problem of choosing the manner and means of acting becomes less certain. Institutions react to rising uncertainty in their environment often by co-opting technologies to reinforce their existing structures: so "institutional" tools, rather than personal tools, dominate. Hence we have corporate learning platforms in universities, and the dominance of corporate online platforms everywhere else. This is shown in the diagram below: the "institution's assistance" operates at a higher-level "metasystem", which tries to attenuate the uncertainty of learners and teachers in the primary system (the circle in the middle). Institutional technology like this seeks to ease the burden of choice of technology for workers, but the co-opting institutional process can't keep up with the pace of change in the outside world - indeed, it feeds that change. This situation is inherently unstable, and will, I think, eventually lead to transformation of organisational structures. New kinds of tools may drive this process. I am wondering whether personal AI, or more specifically, personal machine learning, might provide a key to transformation.

Machine learning appears to be a tool which also generates many new options for acting. Therefore it too should exacerbate uncertainty. But is there a point at which the tools which generate new options for acting create new ways in which options might be chosen by an individual? Is there a point at which this kind of technology is able to assist in the creation of a coherent understanding of the world in the face of explosive complexification produced by technology? One of the ways this might work is if machine learning tools could assist in stimulating and coordinating conversations directly between teachers and learners. Rather than an institutional metasystem, machine learning could operate at the level of the human system in the middle, helping to mitigate the uncertainty that is required to be managed by the higher level system:

Without wanting to sound too posthuman, machine learning may not be so much a tool as an evolutionary "moment" in the relationship between humans and machines. It is the moment when the separation between humans and machines, which humans have defended since the industrial revolution in what philosopher Gilbert Simondon calls "facile humanism", becomes indefensible. Perhaps people like Friedrich Kittler and Erich Horl are right: we are no longer human selves constituted of cells and psychology existing in a techno-social system; now the technical system constitutes the human "I" in a process intermediated by our cells and our consciousness.

I wonder if the point is driven home when we appreciate machine learning tools as an anticipatory system. Biological life drives an anticipatory process in modelling and adapting to the environment. We don't know how anticipation occurs, but we do possess models of what it might be like. One way of thinking about anticipation is to imagine it as a kind of fractal - something which approximates to David Bohm's 'implicate order' - an underlying and repeated symmetry. We see it in nature, in trees, in art, music, and in biological developmental processes. Biological processes also appear to be endosymbiotic - they absorb elements of the environment within their internal structure, repeating them at higher levels of organisation. So cells absorbed the mitochondria which once lived independently, and the whole reproduces itself at a higher order. This is a fractal.

Nobody quite knows how machine learning works. But the suspicion is that it too is a fractal. Machine learning anticipates the properties of an object it is presented with by mapping its features, which are detected through progressive layers of analysis focusing on smaller and smaller chunks of an image. The fractal is created by recursively exploring the relationship between images and labels across different levels of analysis. Human judgements which feed the "training" of this system eventually become encoded as a set of "fixed points" in a relational pattern in the machine's model.
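
A minimal sketch of that layered structure, assuming a Keras/TensorFlow setup (the image size and label count are arbitrary choices for illustration):

```python
# Stacked convolution and pooling layers detect features at progressively
# different scales of the image; a final dense layer maps them to labels.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),                   # a coarser view of the same image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # ten hypothetical labels
])
model.summary()
```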

I don't think we've yet grasped what this means. At the moment we see machine learning as another "tool". The problem with machine learning as a "tool" is that it is then used to provide an "answer": that is, it is used to filter out information which does not relate to this answer. Most of our "information tools", which provide us with increased options for doing things, actually operate like this: they discard information, removing context. This adds to the uncertainty they produce: tool x and tool y both do similar jobs, but they filter out different information. Choosing which tool to use is to decide which information we don't need, which requires human anticipation of an unknowable future. Fundamentally, this is the problem that any university wanting to invest in new technology is faced with. Context is everything, and identifying the context requires anticipation.

Humans are "black boxes": we don't really know how any of us work. But as black boxes who converse, we gradually tune-in to each other, understanding the behaviour of each of us, and in the process, understanding more about our own "black box". In the process we manage the uncertainty of our own existence. Machine learning is also a black box. So might the same thing work? If you put two black boxes together, do they begin to "understand" each other? If you put a human black box together with a machine black box, does the human gain insight into the machine, and insight into the themselves through exploring the operation of the anticipatory system in the machine? If you put a number of human black boxes together with a machine black box, does it stimulate conversation between the humans as well as engagement with the machine? It is important to note in each of these scenarios, information is preserved: context is maintained with the increase in insight, and can be further encoded by the machine to enrich human conversation.

I wonder if these questions point to a new kind of organisational setup in institutions between humans and technology. I cannot see how the institutional platform can really be a viable option for the future: discarding information is not a way forward. But we need to understand the nature of machine learning, and the ways in which information can be preserved in the human machine relationship.

Tuesday, 18 June 2019

Machine Learning as a Personal Anticipatory System

Can a living system survive without anticipation? As humans we take anticipation for granted as a function of consciousness: without an ability to make sense of the world around us, and to preempt changes, we would not be able to survive. We attribute this ability to high-level functions like language and communication. At the same time, all living things show an evident ability to adapt to their environments without displaying the same facility with language, although many scientists are reluctant to attribute consciousness to bacteria or cells. Ironically, this reluctance probably has more to do with our human language for describing consciousness than it does with the nature of any "language" or "communication" of cells or bacteria!

We believe human consciousness is special, or exceptional, partly because we have developed a language for making distinctions about consciousness which reinforces a separation between human thought and other features of the natural world. In philosophy, the distinction boils down to "mind" and "body". We have now reached a stage of development where continuing to think like this will most likely destroy our environment, and us with it.

Human technology is a product of human thought. We might believe our computers and big data to be somehow "objective" and separate from us, but we are looking at the manifestations of consciousness. Like other manifestations of consciousness such as art, music, mathematics and science, our technologies tell us something about how consciousness works: they carry an imprint of consciousness in their structure. This is perhaps easiest to see in the artifice of mathematics, which whilst being an abstraction, appears to reveal fundamental patterns which are reproduced throughout nature. Fractals, and the imaginary numbers upon which they sit, are good examples of this.

It is also apparent in our technologies of machine learning. Behind the excitement about AI and machine learning lies a fundamental problem of perception: these tools display remarkable properties in their ability to record patterns of human judgement and reproduce them, but we have little understanding of how they work. Of course, we can describe the architecture of a convolutional neural network (for example), but in terms of what is encoded in the network, how it is encoded, and how results are produced, we have little understanding. Work with these algorithms is predominantly empirical, not theoretical. Computer programmers have developed "tricks" for training networks, such as training a full network with existing public domain image sets (using, for example, the VGG16 model), but then retraining only the final layers for the specific images that they want identified (for example, images of diabetic retinopathy, or faces). This works better than training the whole network on specific images. Why? We don't know - it just does.
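
A minimal sketch of that trick, assuming Keras/TensorFlow (the class count of five is an arbitrary stand-in, e.g. for grades of retinopathy):

```python
# Reuse a network pre-trained on public image sets (VGG16 / ImageNet),
# freeze its convolutional base, and retrain only the final layers on the
# specific images of interest.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained layers fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),  # hypothetical number of classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, ...) on the specific image set
```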

It seems likely that whatever is happening in a neural network is some kind of fractal. The training process of back-propagation involves recursive processing which seeks fixed points in the production of results across a vast range of variables from one layer of the network to the next. The fractal nature of the network means that retraining the network cannot be achieved by tweaking a single variable: the whole network must be retrained. Neural networks are very dissimilar from human brains in this way. But the fractal nature of neural networks does raise a question as to whether the structure of human consciousness is also fractal.

There is an important reason for thinking that it might be. Fractals are by definition self-similar, and self-similarity means that a pattern perceived at one level with one set of variables can be reproduced at another level, with a different set of variables. In other words, a fractal representation of one set of events can have the same structure as the fractal pattern of a different set of events: perception of the first set can anticipate the second set.

I've been fascinated by the work of Daniel Dubois on Anticipatory Systems recently, partly because it is closely related to fractals, and partly because it seems to have a strong correlation to the way that neural networks work. Dubois makes the point that an anticipatory system processes events over time by developing models that anticipate them, whilst also generating multiple possible models and selecting the best fit. Each of these models is a differently-generated fractal.
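
A minimal sketch of the flavour of this idea, using the incursive form of the Pearl-Verhulst map which Dubois uses as a standard illustration (my paraphrase, so treat the details as an assumption): the next state x(t+1) = a*x(t)*(1 - x(t+1)) depends on itself, yet can be solved algebraically in advance:

```python
# Incursive (self-referential) step: x(t+1) = a*x(t)*(1 - x(t+1)),
# solved algebraically as x(t+1) = a*x(t) / (1 + a*x(t)).
def incursive_step(x, a=2.0):
    return a * x / (1 + a * x)

x, trajectory = 0.1, []
for _ in range(10):
    x = incursive_step(x)
    trajectory.append(round(x, 4))
print(trajectory)  # converges towards the anticipated fixed point (a - 1) / a
```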

If we want to understand what AI and machine learning really mean for society, we need to think about what use an artificial anticipatory system might be. One dystopian view is that it means the "Minority Report" - total anticipatory surveillance. I am sceptical about this, because an artificial anticipatory system is not a human system: its fractals are rigid and inflexible. Human anticipation and machine anticipation need to work together. But a personal artificial anticipatory system is something that is much more interesting. This is a system which processes the immediate information flows of experience and detects patterns. Could such a system help individuals establish deeper coherence in their understanding and action? It might. Indeed, it might counter the deep dislocation produced by overwhelming information that we are currently immersed in, and provide a context for a deeper conversation about understanding.


Sunday, 16 June 2019

Machine Learning and the Future of Work: Why eventually we will all create our own AIs

I'm on my way to Russia again. I've had an amazing couple of days with a Chinese delegation from Xiamen Eye Hospital and the leading experts in retinal disease in China, who are collaborating with us on a big EPSRC project. There was a very special atmosphere: despite the language differences, we were all conscious of staring at the future of medical diagnostics where AI and humans work in partnership.

There's a lot of critical dystopian stuff about technology in society and education in the e-learning discourse at the moment. I think history will see this critical reaction more as a response to desperately nasty things going on in our universities than as an accurate prediction of the future. I am also subject to these institutional pathologies, but I suspect both the dystopian critiques and the institutional self-harm are symptoms of more profound changes which are going to hit us. Eventually we will rediscover a sane way of organising human thought and creativity, which is what our universities used to do for society.

So this is what I'm going to say to the students in Vladivostok:

Machine Learning, Scientific Dialogue and the Future of Work
It is not unusual today to hear people say that the next wave of the technological revolution will be Artificial Intelligence. Sometimes this is called the "4th industrial revolution": there will be robots everywhere - robot teachers, robot doctors, robot lawyers, etc. In this imagined future, machines are envisaged to take the place of humans. But this is misleading. The future will, however, involve a deeper partnership between humans and intelligent machines. In order to understand this, it is important to understand how our technologies of AI work, how the processes of creating AIs and machine learning are becoming available to you and me, and how human work is likely to change in the face of technologies which have remarkable new capabilities.
In this presentation, I will explain how it will become increasingly easy to create our own AIs. Even now, the technologies of Machine Learning are widely available, increasingly standardised and accessible to people with a bit of computer programming knowledge. The situation at the moment is very much like the early web in the 1990s, when to create a website, people needed a bit of knowledge of HTML. As with the web, creating our own AIs will become something everyone can do.  
Drawing on my work, I will explain how, in a world of networked services, there is one feature of Artificial Intelligence which is largely ignored by those not informed of its technical nature: AI does not need to be centralised. A machine learning algorithm is essentially a single (and often not very large) file, which can be embedded in any individual device (this is how, for example, facial recognition works on your phone). The world of AI will be increasingly distributed.
Finally, I will consider what this future means for human work. One of the important distinctions between human decision-making and AI is that humans make judgements in a context; AI, however, ignores context. In other words, AI, like much information technology, actually discards information, and this has many negative consequences for the organisation of institutions, stable society and the economy. The most potentially powerful feature of AI in partnership with humans is that it can preserve information by preserving the context of human judgement. I will discuss ways in which this can be done, and why it means that those things which humans do best – empathy, criticality, creativity and conversation – will become the essence of the work we do in the future.