tag:blogger.com,1999:blog-51393808668605110182024-03-18T00:24:29.560+00:00Improvisation BlogCybernetics in Education and TechnologyUnknownnoreply@blogger.comBlogger1510125tag:blogger.com,1999:blog-5139380866860511018.post-70014879873364488052024-03-09T23:04:00.001+00:002024-03-09T23:10:45.252+00:00Music and Breathing<p>There is an oscillation in my academic work between thinking about things which are practical and of importance - either in health or education (or both) - and thinking about music. Music, of course, is extremely practical and very important, but few people will support research work into music directly. They should of course. I've found that techniques for thinking about music become applicable to more practical stuff. More specifically, developing information-theoretic techniques for the analysis of music is highly valuable across many fields. In addition to education, I'm currently working on the organisational impact of AI, AI and information theory (specifically focusing on my work on diabetic retinopathy diagnosis), and work-based stress. </p><p>Why is music so important? Quite simply because it protects us against hubris in our analytical thinking. Whatever social theory one might have, it has to work for music, or it is no good. Or at least, not good enough. Most cybernetic theories fall short because they can't "breathe" - and that is the key. Much as I admire and find very useful the work of Beer, Luhmann, Bateson, von Foerster and Maturana, in each case their theories don't breathe properly. Not in the way that music does. The wisest of them (particularly Beer and von Foerster) knew it. </p><p>This is partly why the deep physiological ontology of John Torday, Bill Miller, Frantisek Baluska, Denis Noble and others has attracted me, and music has often been at the centre of discussions with Torday and Miller. 
By situating consciousness with the smallest unit of biology - the cell - breathing becomes foregrounded because it is obviously biologically fundamental. This is really what my recent paper for Progress in Biophysics and Molecular Biology was about (see <a href="https://www.sciencedirect.com/science/article/pii/S0079610723000998">Music, cells and the dimensionality of nature - ScienceDirect</a>)</p><p>Within this biological perspective, there are two fundamental principles: the maintenance of homeostasis and the endogenisation of the environment through symbiogenesis (i.e. how cells absorb factors in their environment like bacteria, which become mitochondria). The two principles are deeply related in ways which challenge the conventional cybernetic view of homeostasis. </p><p>Endogenisation turns the cell into a history book - a memory of environmental stresses from the past, for which adaptive strategies can anticipate the recurrence of similar stresses in the future. Cells are anticipatory agents which maintain a deep homeostasis - not only with their immediate environment, but with the entirety of their developmental history. That history is itself a vector which points to some originary state, and through the commonalities of these vectors, a deeper level of biological coordination can be organised. No current AI can reproduce this. If we were to have an AI in the future which could, its architecture would have to be fundamentally different from what we have at the moment: more like biology. </p><p>ChatGPT and the like are clever illusions, behind which lie some deeper truths about nature - not least its recursive structure, and the anticipatory capability that recursion provides. But it is nonetheless a useful illusion. And while it might be able to write great text (although the more I use it, the more I can detect its hand), it remains rather poor at music. It simply cannot breathe. 
</p><p>Current social theories, theories about stress, methods of epidemiological study, etc, all have a breathing problem. You can often tell, because the champions of these theories tend to be a bit breathless in the way they articulate them. They desperately WANT to have the answer, for their pet theorists (Beer, Luhmann, Giddens, Bhaskar, whoever...) to be able to blow away the cobwebs of confusion. But it never works and it's always breathless.</p><p>This is not to disregard those theories - they are all great. But the high priests of those theories knew the limitations of the theory, whereas the clergy who slavishly follow them do not. This is why I stay close to music. It is to stay close to breathing amid a lot of breathless exhaustion. </p><p> </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-76052647322853954402024-02-11T14:55:00.002+00:002024-02-11T15:05:44.262+00:00Cybernetic Boa Constrictors<p>Brahms described the symphonies of Bruckner as "symphonic boa constrictors". After going to a performance of Bruckner's 3rd symphony last night in Manchester, I knew what he meant. I needed some music after sitting in a rather constricting online session on consciousness from the American Society for Cybernetics. But I didn't need to have all the life squeezed out of me. That had already been the experience in the meeting.</p><p>Damn it - what's wrong? Not with Bruckner - that, unfortunately, is a matter of taste (I just thought I might give the snake a second chance. I'll know better next time). But what's happened with cybernetics?</p><p>To put it very simply (and perhaps, rudely), cybernetics started as science - Wiener, Ashby, von Foerster, Bateson. But it has ended up as religion. 
There is no longer cybernetic analysis - no consideration of what "variety" means - or homeostasis, transduction, viability, difference, information (ok, that's tricky), entropy, regulation, recursion, distinction, construction, ontology, epistemology, etc. Evan Thompson - who was the star turn - asked the most intelligent question "What is a system?" - but then came a pretence that anyone knows the answer to that most basic of questions for the systems sciences. </p><p>There is a reasonable definition that says "systems are constructed by observers" - but that doesn't say very much. It doesn't say what a system is, but merely says that a process of observation is involved in their coming to be. Ok. But can we say more about this process?</p><p>Systems, like words, are selected. There are any number of possible selections that might be made, and out of that set of possibilities, something is chosen as "system". And of course, we are remarkably inconsistent in choosing what is selected: at one moment we choose system x, and at another system y, often forgetting that the operating principles of system x are completely incompatible with those of system y. The cybernetic boa constrictor sets to work when the inconsistency between what is professed and how people actually behave is at its most acute. </p><p>It's a mechanism well-known to cyberneticians - the double-bind. It's well-deployed by boa constrictors... "oooh warm and cosy... shit I can't breathe.... oooh so cosy... arghh!" So how do we get out of it? Bateson tells us - we need to step outside the double-bind and describe what is happening.</p><p>Yes - systems are selections made by an observer. But what constructs the mechanism that performs the selection? That question was often raised by Loet Leydesdorff, whose approach to constructivism has been most useful to me; he pointed back to the origins of phenomenology to defend it. 
</p><p>What is constructed is not "knowledge", or "system", or even "reality". What is constructed is a mechanism that selects "things that we know", "patterns of operation within an environment", or "beliefs and conjectures". How is the mechanism constructed? Well, Leydesdorff had a powerful insight that an effective selection mechanism would have to be anticipatory. It would have to be a "good regulator" - to have a model of its environment. How could a system which has a model of an ambiguous environment be constructed? </p><p>One sub-question here is whether such a "good regulator" could be constructed all at once out of thin air, or whether it would have to emerge, or evolve, over time. I cannot see how it could be anything other than the latter. So the construction of a selection mechanism is evolutionary - from the smallest units to the emanations of modern consciousness.</p><p>At each stage of evolution in the construction of a selection mechanism, there must be selection taking place. So a selection mechanism selects its ongoing evolution. Rather like music improvisation. But where does this process start?</p><p>Does it start in physics? The problem here is that we cannot conceive of a physical world beyond our own biology. We know (at least we select!) that our cells are made from molecules, some of which, like cholesterol, appear to be astrobiological fossils. The behaviour of those molecules must have something to do with physics, and physics does have a selection mechanism of sorts - the geometry of the four forces, Pauli exclusion, the spins of electrons, etc. But only through biology do we have that knowledge. There is no physics without biology. There is no observation without biology.</p><p>Biology brings observation, and with observation there is increasing sophistication in the selection mechanisms that are constructed. Why would the universe create biology? Does it need it? If so, how?</p><p>There is a clue to this question in how biology works. 
Biological selection mechanisms work by endogenising their environment. The cell becomes a fractal of environmental history, where the capacity to anticipate revolves around the fact that what is to come rhymes with what has gone before. This includes the "what has gone before" in terms of the fundamental laws of physics. But deep down, the fundamental laws of physics and the anticipatory selection mechanisms of biology have one thing in common: they both operate to maintain homeostasis - that is, the balance between some locality in the universe (an atom, cell, star, planet or plant) and the non-local context. </p><p>Selection shifts the balance of the whole. Constructing selection mechanisms is about maintaining stability in the balance of future selections, and to do that, increasingly sophisticated phenotypic mechanisms are required to convey information about an increasingly complex environment. The universe needs life because it needs to maintain homeostasis between the local and nonlocal. </p><p>Was there a point in the evolution of the universe where life wasn't inevitable? I suspect not. Any more than there was a point in Bruckner's 3rd symphony where a catatonic state of boredom wasn't inevitable. </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-9198942508298750802024-02-08T09:47:00.004+00:002024-02-08T17:08:03.539+00:00Agency from the Zygote Up<p>I've never understood what "agency" is. We do stuff. Is saying that "doing stuff" - or maybe "selecting what stuff to do (and then doing it)" - is "agency" to say anything at all? It's agency to say what agency is, after all. Not sure that gets us anywhere. Agency doesn't explain anything. </p><p>Can we rob people of agency? People talk about giving person x agency, by which they mean person x has the option of doing things that (perhaps) they might not have otherwise had. But even in cases where people have very limited options for acting, they still do stuff. 
It's generally a good idea to increase the options for people to act, and sometimes people act in ways which reduce the options of other people to act. Agency doesn't explain this though. </p><p>But I want to know what it's all about, and "agency" doesn't help. So how about looking at this differently...</p><p>The problem may be with Darwin: we act to survive, because acting is selection.... to reduce the options for acting is to reduce the chances for survival. But do we act to survive? Or is survival a by-product of something else? Disastrous actions which lead to a swift demise perhaps amuse us in jokes, or myths and allegories giving warnings like "don't do this". Those myths and stories are important for the survival of the species. But that is about information. </p><p>So this is the perspective I am interested in: <a href="https://pubmed.ncbi.nlm.nih.gov/27399791/#:~:text=It%20is%20proposed%20that%20phenotype,Darwinian%20selection%20towards%20reproductive%20success.">Phenotype as Agent for Epigenetic Inheritance - PubMed (nih.gov)</a></p><p>Paraphrasing this argument, acting gives rise to "information" - differences that make a difference. At a fundamental level, that information must be biological - the differences that make a difference are in the physiology of every cell. What are its dynamics?</p><p>The hormonal responses to "differences that make a difference" make a difference to cellular machinery. Specifically, there are epigenetic responses to stress and other factors in the environment which will either be exposed through acting, or which will cause subsequent actions. Those epigenetic changes are carried back to the core of reproductive physiology - to the gametes. Why might this happen? Well, it's quicker than natural selection... 
</p><p>The zygote that is the result of the eventual interaction between male and female gametes therefore carries some blueprint of whatever environmental conditions imprinted themselves epigenetically on the agent's gametes at some point in their earlier existence. In other words, the information is carried forwards as a pre-programming of the next generation. </p><p>Now is it too far-fetched to suggest that the point of "doing stuff" is that it is all about this "pre-programming"? After all, it is the survival of the species which must be the abiding concern of evolution. And in considering this, a species is not a collection of phenotypes - people, birds, insects, bacteria, etc. It is a process involving a collection of information-gathering entities which collectively perform information-harvesting in an ambiguous environment in which future generations will need to adapt and perform the same function. Fundamentally, the whole thing is a homeostatic process. </p><p>I like this because it suggests that the practices of science and art are deeply related: both are about discovering information, and this process is driven by the physiological imperative which feeds information discovery back to successive generations. Beethoven and Einstein were phenotypic agents performing this function, and - in their case - because of particular conditions, their information harvesting operation was particularly profound. </p><p>I also like it though because it means that there is no life that is not profound. There is no life which does not contribute to the future possibility of human flourishing. No life is wasted. Yet there are questions here about those who are truly evil, or who inflict suffering, which I need to think about. The uncomfortable answer to that is that information about evil is necessary. I suspect Shakespeare might agree. 
</p>Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-5139380866860511018.post-66862196555168420342024-01-11T19:14:00.006+00:002024-01-12T10:24:44.557+00:00Self-Provisioning of "Tools for Knowing" using AI<p>In my own teaching practice, I have become increasingly aware that preparation for sessions I have led has involved not the curation/creation of content (for example, in the form of PowerPoint slides), but the construction of tools to support activities driven by AI. The value of this is that the technology can now do something that only complex classroom organisation could achieve, namely the support of personalised and meaningful inquiry. I have been able to create a wide variety of activities ranging from drama-based exercises, to simulated personal relationships (usually around health). I am aware that the potential scope for doing new kinds of activities appears at this stage enormous: powerful organisational simulations (for example, businesses or even hospitals) with language-based AI agents are all possible, allowing students to play roles and observe the organisational dynamics. </p><p>Of course, a lot of this involves coding or other technical acts, which I quite enjoy, even if I'm not that good at it. At some point the need for coding may reduce and we will have platforms for making our own tools for learning (actually, we kind-of already have it with OpenAI's GPT Editor). But the real trick will be to allow teachers and students to create their own tools supporting different kinds of learning activity, provide different kinds of assessments, and maybe even provide ways of mapping personal learning activities to professional standards. </p><p>A lot of focus at the moment is falling on how teachers might use chatGPT for producing learning content - basically amplifying existing practices with the new tech (e.g. "write your MCQs with AI!"). But why shouldn't learners do the same thing? 
Indeed, what may be happening is the establishment of a common set of practices of "learning tool creation", which may be modelled by teachers, and then adopted and developed by learners. Everyone creates their own tools. Everyone moves towards becoming a teacher empowered by tools they develop. </p><p>Why does that matter? Because it addresses the two fundamental variety management problems of education. Firstly, it addresses the problem that teachers and learners are caught between the ever-increasing complexity of the world, and the constraints of the institution. My paper on <a href="https://www.tandfonline.com/doi/abs/10.1080/10494820.2020.1799030">Comparative judgement and the visualisation of construct formation in a personal learning environment: Interactive Learning Environments: Vol 31, No 2 (tandfonline.com)</a> (long-winded title, I know - but this paper interests me more now than when I wrote it) argued that the basic structure of the pathology of education is this (drawing on Stafford Beer's work): </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhRP3vFco0eL-yfsCAnIX-HvAzZqXSNF437lEsVX3wZE_liIOKh_qjRtHmzTpvtT6WzglW1nygaj6fybPsSVkJtvrIc75Elz5iKUIbEGUFND3PY6EADzpUOGZm8jeEjUupxqtss4yrzp4J_OA4-N4rQKAhtBF1KoLOm9LLt9Y1oWPlKBpsYOaHmC6guRSPZ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="742" data-original-width="1378" height="249" src="https://blogger.googleusercontent.com/img/a/AVvXsEhRP3vFco0eL-yfsCAnIX-HvAzZqXSNF437lEsVX3wZE_liIOKh_qjRtHmzTpvtT6WzglW1nygaj6fybPsSVkJtvrIc75Elz5iKUIbEGUFND3PY6EADzpUOGZm8jeEjUupxqtss4yrzp4J_OA4-N4rQKAhtBF1KoLOm9LLt9Y1oWPlKBpsYOaHmC6guRSPZ=w463-h249" width="463" /></a></div><br />The institution wants to control technology, but personal tool creation means that it is individuals who could create and control their own tools. 
This is to shift much of the "metasystem" function (the big blue arrow) away from institutional management to the individuals in the system. This was always the fundamental argument of the Personal Learning Environment: it's just that we never had tools which could generate sufficient variety to meet the expectations of individuals. Now we do. <p></p><p>The second problem is the problem of too many students and too few teachers. That is a problem of how the practice of "knowing things" can be modelled in such a way that a wide variety of different people can relate to the "knowledge" that is presented to them. This problem however may be addressed if we see knowledge not as resulting from a "selection mechanism that chooses words", but as resulting from a "selection mechanism that chooses practices" - particularly practices with AI tools which then perform the business of "selecting words". If teachers model a "selection mechanism that chooses practices" which can result in a high variety of ways of choosing words, then a wide variety of students with different interests and abilities can develop those same practices to lead to the selection of words which are meaningful to them in different ways. In fact, this is basically what is happening with chatGPT.</p><p>Teaching is always modelling. It is the teacher's job to model what it is to know something - to the point of modelling what they know and what they don't know. Really, they are revealing their own selection mechanism for words, but this selection mechanism includes their own practices for inquiry. Good teachers will say things like "I can't remember the details of this, but this is what I do to find out". Students who model themselves on those teachers will acquire a related selection mechanism. </p><p>The key is "This is what I do to find out". Many academics are likely to say "I would explore this in chatGPT". 
That is a technical selection made by a new kind of selection mechanism in teachers which can be reproduced in students. Teachers might also say "I would get the AI to test me", or "I would get the AI to pretend to be someone who is an expert in this area that I can talk to", or "I would get the AI to generate some fake references to see if anything interesting (and true) comes up", or "I would ask it to generate some interesting research questions". The list goes on.</p><p>Is "Knowing How" becoming more important than "Knowing That"? To ask that is to ask what we mean by "knowing" in the first place. Increasingly it seems that "knowing how" and "knowing that" are both selections. ChatGPT is an artificial mechanism for selecting words. It raises the question of the ways in which we humans are not also selection mechanisms for words - albeit ones which have a deep connection to the universe which AI doesn't have. </p><p>We are moving away from an understanding of knowledge as the result of selection towards an understanding of knowledge as the construction of a selection mechanism itself. This may be the most important thing about the current phase of AI development we are in. </p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5139380866860511018.post-78084787210569102342023-10-31T12:53:00.004+00:002023-10-31T12:56:23.809+00:00Iconicity and Epidemiology: Lessons for AI and Education<p>The essence of cybernetics is iconicity. It is partly, but not only, about thinking pictorially. More deeply it is about playing with representations which open up a dance between mind and nature. This is distinct from approaches to thought which are essentially "symbolic". Mathematics is the obvious example, but actually, most of the concepts one learns in school are symbols that stand in relation to one another, and whose relation to the world outside has to be "learnt". 
This process can be difficult because the symbols themselves are shrouded in rules which are often obscure and sometimes contradictory.</p><p>Iconic approaches make the symbols as simple as possible: a distinction, a game, a process - onto which we are invited to project our experience of a particular subject or problem. It was something that was first considered by C.S. Peirce, who developed his own approaches to iconic logic (see this for example: <a href="https://homepages.math.uic.edu/~kauffman/Peirce.pdf">Peirce.pdf (uic.edu)</a>). Cybernetics followed in Peirce's footsteps, and the iconicity of its diagrams and technical creativity makes its subject matter transdisciplinary. It also makes cybernetics a difficult thing for education to deal with, because education organises itself around subjects and their symbols, not icons and games. </p><p>But thinking iconically changes things.</p><p>I am currently teaching epidemiology, which has been quite fun. But I'm struck by how the symbols of epidemiology - not just the equations, but the classifications of study types, problematisation of things like bias and confounding, etc, all put barriers in the way of understanding something that is basically about counting. So I have been thinking about ways of doing this more iconically.</p><p>To do this is to invite people into the dance between mind and nature, and to do that, we need new kinds of invitations. I'm grateful to Lou Kauffman who recommended Lancelot Hogben's famous "Mathematics for the Million" as a starting point. </p><p>Hogben's book teaches the context and history of mathematical inquiry first, and then delves into the specifics of its symbolism. That is a good approach, and one that needs updating for today (I don't know of anything quite like it). 
Having said that, there are some great online tools to do iconic things: The "Seeing theory" project from Brown University is wonderful (and open source): <a href="https://seeing-theory.brown.edu/">https://seeing-theory.brown.edu/</a> (again, thanks to Lou for that)</p><p>Then of course, we have games and simulations - and now we have AI. Here's a combination of those things I've been playing with, inspired by Mary Flanagan's "Grow a Game" <a href="https://maryflanagan.com/games/grow-a-game/">Grow a Game - Mary Flanagan</a>: </p><p>My AI version <a href="http://13.40.150.219:9995/">http://13.40.150.219:9995/</a>. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEg4sSnbGRuCI8p-s3mWNOa4ZBvwGhZrkTS6-8tr_8SOxIQia9Ig23N5Ks1llU7gerSsqZf48QDJly-JW-EE2nVH5-_-fjXH9PyWMIyRGaFzqs_rrGM9-H6kT7FRbe9caSquWKwkvqLsvSf9WegAxYJgw5w6IAQBhGbouEv0A5ffJxujEZocKncJdGjFk73P" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="890" data-original-width="1853" height="154" src="https://blogger.googleusercontent.com/img/a/AVvXsEg4sSnbGRuCI8p-s3mWNOa4ZBvwGhZrkTS6-8tr_8SOxIQia9Ig23N5Ks1llU7gerSsqZf48QDJly-JW-EE2nVH5-_-fjXH9PyWMIyRGaFzqs_rrGM9-H6kT7FRbe9caSquWKwkvqLsvSf9WegAxYJgw5w6IAQBhGbouEv0A5ffJxujEZocKncJdGjFk73P" width="320" /></a></div><br /><br /><p></p><p>Basically, enter a topic, select a game, and chatGPT will produce prompts suggesting rule changes to the game to reflect the topic. Of course, whatever the AI comes up with can be tweaked by humans - but it's a powerful way of stimulating new ideas and thought in epidemiology. 
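</p><p>Thinking iconically about epidemiology can start by making the counting itself visible. As a minimal sketch (the numbers below are invented, purely for illustration), the four counts of a 2x2 table are enough to compute the risk ratio and odds ratio that epidemiology's symbolism dresses up:</p>

```python
# Hypothetical 2x2 table for a cohort study: exposure vs. disease.
# All numbers are invented for illustration only.
exposed_cases, exposed_total = 30, 100
unexposed_cases, unexposed_total = 10, 100

# Risk in each group is just a proportion - counting.
risk_exposed = exposed_cases / exposed_total        # 30/100 = 0.3
risk_unexposed = unexposed_cases / unexposed_total  # 10/100 = 0.1
risk_ratio = risk_exposed / risk_unexposed          # 0.3 / 0.1 = 3.0

# The odds ratio rearranges the same four counts.
odds_exposed = exposed_cases / (exposed_total - exposed_cases)          # 30/70
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)  # 10/90
odds_ratio = odds_exposed / odds_unexposed

print(f"Risk ratio: {risk_ratio:.2f}")  # Risk ratio: 3.00
print(f"Odds ratio: {odds_ratio:.2f}")  # Odds ratio: 3.86
```

<p>Everything else - confidence intervals, adjustment for confounding - is elaboration of these counts, which is exactly the kind of thing that can be turned into a game. 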
</p><p>There's more to do here.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-35508672898026409282023-10-27T22:36:00.003+01:002023-10-27T22:42:18.420+01:00Computer metaphors and Human Understanding<p>One of the most serious accusations levelled against cognitivism is that it imposed a computer metaphor over natural processes of consciousness. At the heart of the approach is the concept of information as conceived by engineers of electronic systems in the 1950s (particularly Shannon). The problem with this is that there is no coherent definition of information that applies to all the different domains in which one might speak of information: from electronics to biology, psychology, philosophy, theology and physics.</p><p>Shannon information is a particularly special case and unique in the sense that it provides a method of quantification. Shannon himself, however, made no pretence of applying this to phenomena other than the engineering situation he focused on. But the quantified definition contains concepts other than information - most notably, redundancy (which Shannon, following cyberneticians including Ashby, identified as a constraint on transmission) and noise. Noise is the reason why the redundancy is there - Shannon's whole engineering problem concerned the distinguishing of signal from noise on a communication channel (i.e. a wire). </p><p>Shannon was involved with the establishment of cybernetics as a science. He was one of the participants at the later "Macy conferences" where the term "cybernetics" was defined by Norbert Wiener (actually, it may have been the young Heinz von Foerster who is really responsible for this). 
Shannon would have been aware that other cyberneticians saw redundancy rather than information as the key concept of natural systems: most notably, Gregory Bateson saw redundancy as an index of "meaning" - something which was also alluded to by Shannon's co-author, Warren Weaver.</p><p>But in the years that followed the cybernetic revolution, it was information that was the key concept. Underpinned by the technical architecture that was first established by John von Neumann (another attendee of the Macy conferences), computers were constructed from a principle that separated processing from storage. This gave rise to the cognitivist separation of "memory" from "intelligence". </p><p>There were of course many critiques and revisions: Ulric Neisser, for example, among early cognitivists, came to challenge the cognitivist orthodoxy. Karl Pribram wrote a wonderful paper on the importance of redundancy for cognition and memory ("The Four Rs of Remembering" - see <a href="http://www.karlpribram.com/wp-content/uploads/pdf/theory/T-039.pdf">karlpribram.com/wp-content/uploads/pdf/theory/T-039.pdf</a>). But the information processing model prevailed, inspiring the first wave of Artificial Intelligence and expert systems from the late 80s to the early 90s. </p><p>So what have we got now with our AI? </p><p>What is really important is that our current AI is NOT "information" technology. It produces information in the form of predictions, but the means by which those predictions are formed is the analysis and processing of redundancy. This is unlike early AI. The other thing to say is that the technology is inherently noisy. Probabilities are generated for multiple options, and somehow a selection must be made between those probabilities: statistical analysis becomes really important in this selection process. 
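</p><p>The role of statistics in selecting from noisy probabilistic output can be made concrete with a toy sketch (the probabilities and labels below are invented, not from any real system): a model emits probabilities, and a chosen threshold - a purely statistical decision - performs the selection, trading sensitivity against specificity:</p>

```python
# Toy illustration of selecting from probabilistic outputs with a threshold.
# Predictions and ground-truth labels are invented for illustration.
predictions = [0.95, 0.80, 0.65, 0.40, 0.30, 0.10]  # model's estimated P(condition)
labels      = [1,    1,    0,    1,    0,    0]     # 1 = condition actually present

def sensitivity_specificity(threshold):
    """Classify by threshold, then count true/false positives and negatives."""
    tp = sum(1 for p, y in zip(predictions, labels) if p >= threshold and y == 1)
    fn = sum(1 for p, y in zip(predictions, labels) if p < threshold and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p < threshold and y == 0)
    fp = sum(1 for p, y in zip(predictions, labels) if p >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# The same model, two different statistical selections:
for t in (0.35, 0.70):
    sens, spec = sensitivity_specificity(t)
    print(f"threshold {t}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

<p>Lowering the threshold catches every true case at the cost of false alarms; raising it does the reverse - and the model itself is untouched by the choice. 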
Indeed, within my own involvement with AI development in medical diagnostics, the development of models (for making predictions about images) was far less important than the statistical post-processing that cleaned the noise from the data, and increased the sensitivity and specificity of the AI judgement. It will be the same with chatGPT: there the statistics must ensure that the chatBot doesn't say anything that will upset OpenAI's investors!</p><p>Information and redundancy are two sides of the same coin. But redundancy is much more powerful and important in natural systems, as has been obvious to researchers in ecology and the life sciences for many years (notably, statistical ecologist Robert Ulanowicz, economist Loet Leydesdorff, Bateson, Terry Deacon, etc). It is also fundamental to education - but few educationalists recognise this.</p><p>The best example is in the Vygotskian Zone of Proximal Development. I described a year or so ago how the ZPD was basically a zone of "mutual redundancy" (here: <a href="https://www.researchgate.net/publication/360097605_Reconceiving_the_Digital_Network_From_Cells_to_Selves">Reconceiving the Digital Network: From Cells to Selves | Request PDF (researchgate.net)</a> ), drawing on Leydesdorff's description. ChatGPT emphasises this: Leydesdorff's work is of seminal importance in understanding where we really are in our current phase of socio-technical development. </p><p>Nature computes with redundancy, not information - and this is computation unlike how we think of computation with information. This is not to leave Shannon behind though: in Shannon, what happens is selection. Symbols are selected by a sender, and interpretations are selected by a receiver. 
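</p><p>Shannon's quantities can be stated concretely. As a rough sketch (treating a string's symbol frequencies as the probability model, which is a simplifying assumption): entropy measures the average surprise of the selections, and redundancy is the shortfall from the maximum entropy the alphabet would allow:</p>

```python
import math
from collections import Counter

def entropy(message: str) -> float:
    """Shannon entropy (bits per symbol) of the message's symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(message: str) -> float:
    """Shannon's relative redundancy: 1 - H / H_max, H_max for equiprobable symbols."""
    h_max = math.log2(len(set(message)))  # maximum entropy for this alphabet
    return 1 - entropy(message) / h_max if h_max > 0 else 1.0

print(redundancy("abcdefgh"))  # 0.0 - no repetition, pure "information"
print(redundancy("aaaaaaab"))  # ~0.46 - repetition creates redundancy
```

<p>Repetition - the stuff of music - raises redundancy, and it is this redundancy that allows a receiver to recover a message despite noise. 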
The key in the ability to communicate is that the complexity of the sending machine is equivalent to the complexity of the receiving machine (which is a restatement of Ashby's Law of Requisite Variety - <a href="https://en.wikipedia.org/wiki/Variety_(cybernetics)">Variety (cybernetics) - Wikipedia</a>). If the receiver doesn't have the complexity of the sender there will be challenges in communication. With such challenges - either because of noise on the channel, or because of insufficient complexity on the part of the receiver - it is necessary for the sender to create more redundancy in the communication: sufficient redundancy can overcome a deficiency in the complexity of the receiver to interpret the message. </p><p>One of the most remarkable features of AI generally is that it is both created with redundancy, and it is capable of generating large amounts of redundancy. If it didn't, its capacity to appear meaningful would be diminished. </p><p>For many years (with Leydesdorff) the nature of redundancy in the construction of meaning and communication has fascinated me. Music provides a classic example of redundancy in communication - there is so much repetition, which we analysed here: <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738">onlinelibrary.wiley.com/doi/full/10.1002/sres.2738</a>. I've just written a new paper on music and biology which will be published soon, and which develops these ideas, drawing on the importance of what might be called a "topology of information" with reference to evolutionary biology. </p><p>It's not just that the computer metaphor doesn't work. 
The metaphor that does work is probably musical.</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5139380866860511018.post-50481060549375798802023-09-04T18:34:00.003+01:002023-09-11T18:49:01.960+01:00Wittgenstein on AI<p>Struck by what appears to be a very high degree of conceptual confusion about AI, I've been drawn back to the basic premise of Wittgenstein that the problems of philosophy (or here, "making sense of AI") stem from lack of clarity in the way language is used. Wittgenstein's thoughts on aesthetics come closest to articulating something that might be adapted to the way people react to AI:</p><p></p><blockquote>"When we make an aesthetic judgement about a thing, we do not just gape at it and say: "Oh! How marvellous!" We distinguish between a person who knows what he is talking about and a person who doesn't. If a person is to admire English poetry, he must know English. Suppose that a Russian who doesn't know English is overwhelmed by a sonnet admitted to be good. We would say that he does not know what is in it. In music this is more pronounced. Suppose there is a person who admires and enjoys what is admitted to be good but can't remember the simplest tunes, doesn't know when the bass comes in, etc. We say he hasn't seen what's in it. We use the phrase 'A man is musical' not so as to call a man musical if he says "Ah!" when a piece of music is played, any more than we call a dog musical if it wags its tail when music is played."</blockquote><p>Wittgenstein says that expressions of aesthetic appreciation have their origins as interjections in response to aesthetic phenomena. The same is true of our judgements of writing produced by AI: we said (perhaps when we first saw it) "Wow!" or "that's amazing". Even after more experience with it, we can laugh at an AI-generated poem or say "Ah!" to a picture. But these interjections are not indicators of understanding. 
They are more like expressions of surprise at what appears to be "understanding" by a machine. </p><p>In reality, such interjections are a response to what might be described as "noise that appears to make sense". But there is a difference between the interjection of someone who, when an AI returns a result, has a deeper understanding of what is going on behind the scenes, and that of someone who doesn't. One of the problems of our efforts to establish conceptual clarity is that it is very difficult to distinguish the signal "Wow!" from its provenance in the understanding, or lack of it, in the person making the signal. </p><p>Aesthetic judgement is not simply about saying "lovely" to a particular piece of art. It is about understanding the repertoire of interjections that are possible in response to a vast range of different stimuli. Moreover, it is about having an understanding of the constraints of reaction alongside an understanding of the mechanisms for production of the stimuli in the first place. It is about appreciating a performance of Beethoven when we also have some appreciation of what it is like to try to play Beethoven. </p><p>Finally, whatever repertoire one has for making judgements, one can find others in the social world with whom to communicate the structure of one's repertoire of reactions to AI. This is about sharing the selection mechanism for one's utterances and, in so doing, articulating a deeper comprehension of the technology between you. </p><p>I'm doing some work at the moment on the dimensionality of these different positions. It seems that this may hold the key for a more rational understanding of the technology and help us to carve a coherent path towards adapting our institutions to it. But in appreciating the dimensionality of these positions, the problem is that the interconnections between the different dimensions break. </p><p>It is easy to fake expertise in AI because few understand it deeply. 
That means it is possible to learn a repertoire of communications about AI without the utterances being grounded in the actual "noise" of the real technology. </p><p>It is also easy to construct new kinds of language game about AI which are divorced from practice, but manage to co-opt existing discourses so as to give those existing discourses some veneer of "relevance". "AI ethics" is probably the worst offender here, but many words are also spent discussing the sociology of "meaning" in AI. </p><p>Equally, it is possible to be deeply grounded in the noise of the technology but to find that the concepts arising from this engagement find no resonance with people who have no contact with the technics, or indeed, are in some cases almost impossible to express as signals. </p><p>It is in understanding the dynamics of these problems that the dimensionality can help. It is also where experiments to probe the relationship between human communications about the technology and the technology itself can be situated. </p><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-9204041437710443402023-07-23T12:43:00.000+01:002023-07-23T12:43:39.460+01:00Exploring the Dark with AI<p>One of the consequences of a changing landscape of technology is that everyone is in the dark. What we need to do when everyone is in the dark is to ask those people who are most familiar with the dark to show us around their uncertainties. This is when interdisciplinary engagement can be most powerful and productive. </p><p>In 1968 Arthur Koestler organised a symposium at Alpbach, Austria which gave rise to a book of essays by the leading scientists of the day. The book is called "Beyond Reductionism: New perspectives in the life sciences". The attendance list included: Ludwig von Bertalanffy, Jerome Bruner, Viktor Frankl, Friedrich Hayek, Jean Piaget, Conrad Waddington and Paul Weiss. 
(The gender bias is unfortunately a sign of the times)</p><p>If we were to create a similar meeting, who would we invite? Who has been shining lights into the darkness for some time, who might show us a way forwards? Whose conversations might benefit from deeper interdisciplinary connection? I think my list would include (in no particular order): Isabelle Stengers (philosophy), Mark Solms (neurobiology), Maxine Sheets-Johnstone (dance, philosophy), Peter Rowlands (physics), Antonio Damasio (psychology), Karen Barad (physics), John Torday (evolutionary biology), Sabine Hossenfelder (physics), Louis Kauffman (mathematics), Katherine Hayles (cybernetics), Lee Smolin (physics), Elizabet Sahtouris (evolutionary biology), Rupert Wegerif (education), Mariana Mazzucato (economics).</p><p>Most of those people won't see this message - but I think we should do something like this. Academia today is much changed from the world of 1968. Today we don't seem to believe in the dark much - everything is brightly lit with learning outcomes and assessment criteria and universities as businesses. Dark things happening - disease, war - put us into oscillations which are more dangerous than the initial triggers. </p><p>Holistic thinking is, I suspect, much less easy today than it was in 1968. I have been talking to friends about the difficulty of getting young people involved in the Alternative Natural Philosophy Association (<a href="http://anpa.onl">http://anpa.onl</a>). Only those with well-established careers, or people hiding under the radar, can afford to think holistically. Everyone else seems to just need to survive. But none of us will survive if we don't encourage holism among the young and discourage the managerial nonsense that education has become. </p><p>We all begin in the dark. Showing each other around is an important thing to do. 
</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-83514415158084092562023-05-19T09:34:00.001+01:002023-05-19T09:34:34.455+01:00The Digitalised Imagination<p>Just over 2 years ago I decided I wanted a bit of adventure in the tail-end of Covid, gave up a slightly depressing management position at the University of Liverpool, and became a post-doc on a project on curriculum digitalisation at the University of Copenhagen. I thought at the time that digitalisation was the most important undercurrent in education, and I knew that it was a difficult thing to move towards. My best achievement had been at the Far Eastern Federal University in Russia, which I wrote about here: <a href="https://link.springer.com/content/pdf/10.1007/s42438-022-00324-1.pdf?pdf=button">Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum (springer.com)</a>. The Copenhagen experience was nowhere near as good as the Russian experience, and I left Copenhagen for Manchester with a much deeper appreciation of what I had done in Russia. I just wished I'd done it in Switzerland!</p><p>During this time, and for seven years previously, I had been deeply involved in a medical diagnostic AI project of whose innovation I was a co-inventor. It was obvious that AI was a tidal wave that was about to hit education, and much of my frustration in Copenhagen was that very few people were really interested. They are now, like everyone else. </p><p>There is a risk that AI sweeps the digitalisation agenda away. After all, why teach the kids to code when the computer will do it for you? This kind of statement underpins errors in the ways that digitalisation was conceived - particularly in Copenhagen and many other European universities. It also underpins the difference between the institutional approach of Copenhagen and the approach I took in Russia. </p><p>Digitalisation is not about skill or competency. 
It is not about "digital literacy" (whatever that means!). It is about imagination. This was understood by the Russians, and dogmatically avoided in Copenhagen. The deep problem is the sanctifying of "competency" within European education, and the EU has been particularly pernicious in pushing this. Despite the sheer lack of insight as to what "competency" actually is (ask anyone to define it!), it is continually asserted that this is the thing education must do. </p><p>Now in the new AI world that is opening up in front of us, the biggest threat is not technology, but poverty of the imagination. And imagination today means (partly) the "technical imagination". It is about understanding the realm of possibility under the surface, behind the interface - it is the Freudian unconscious of the technical world which, through the working of creativity, can find expression in the interfaces we produce. </p><p>With an imaginative collapse, humanity becomes enslaved. While the demands of the technical imagination are going to encompass a huge range of disciplines, skills, ideas and relationships, we will need our new tools to oil the wheels of our discourse and knowledge and find new ways of organising ourselves. It is to steering this process that education needs to direct itself. But ironically, the university as it is currently constituted is geared-up for imaginative collapse and corporate takeover. </p><p>Digitalisation is about changing this. It's not going to be easy. </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-6684851038616762082023-05-16T23:26:00.001+01:002023-05-16T23:33:31.255+01:00The Glass Bead Game of Human Relations<p>I attended an interesting session today on burnout and stress at work. There are many conflicting analyses of these problems. 
On the one hand, there are those studies which focus on the individual, seeing stress as an attribute of individuals, and "stressors" as independent variables producing the experienced symptoms of stress. There are clearly epistemological problems with this, not least that stress is rather like a headache - something that is subjectively experienced, but cannot be directly witnessed by others (only its effects). Searle calls this a "subjective epistemological" phenomenon (to be contrasted with "objective epistemological" things like historical dates, or "subjective ontological" things like money or universities, or "objective ontological" things like the motion of planets, or light). The notion of the "self" that is stressed is the biological/psychological entity bounded by its skin. Let's call this Stress1.</p><p>The alternative view of stress is that it is a manifestation of social relations and communication. This entails a different conception of the self as something that is constructed within communication, particularly the communication of the first person "I". The self in this sense is more like Searle's "ontological subjective" category: the reality of a self is construed by the expectations which arise as a result of social engagement and "positioning". This is the self as it is seen by others. It is also the self which can be oppressed by others directly, or by situations which result from others taking insufficient care of environmental factors that can negatively impact on the expression of the self. This is what can happen in situations where people become stressed. Communicative theories which examine stress in these circumstances include things like the "double bind", which is unfortunately extremely common in many workplaces. This is Stress2. </p><p>Both perspectives on the stressed self - the ontological-subjective self and the epistemological-subjective self - are important. 
However, in terms of practical steps to eliminate stress, the two perspectives have different approaches. Stress1 is addressed through treatment of the individual - rather like giving someone with a headache paracetamol: mindfulness, etc. Stress2 is addressed through changing the structures of communication. This is much harder to do, and so Stress1 dominates the discourse, and its (rather hare-brained) remedies go relatively unchallenged. </p><p>Stress2 is difficult because it basically requires the making of better decisions at the top of an organisation. Bad decisions will cause stress. Good decisions ought not to; instead they should create synergy, wellness and productivity. Decisions are the result of the skill of decision-makers, so the question really is how we create good decision-makers. Here we see that the incentives for people to climb the ranks of decision-making encourage behaviour which is anathema to the making of "good decisions". People are rewarded instead for hitting targets, increasing profits, and driving down costs. All of which comes at a human cost. </p><p>Even if better criteria could be defined to encourage and recruit better decision-makers, it will always be possible to "fake" criteria if they are in the form of new targets or KPIs. This won't work.</p><p>This has led me to wonder about what Hermann Hesse's "Glass Bead Game" might actually have been (or might one day be in the future). Why do the elites of 25th-century Castalia take this game, which is a bit like music (as Hesse describes it), so seriously? There is something important about it being a game. </p><p>A game is not a set of criteria. It is a practice which requires the learning of skill to play well. As one learns to play well, one deepens in insight. As one deepens in insight, one might become more aware and able to act in the world in a way where the making of good decisions becomes more probable. Importantly, to play the glass bead game is not to "hit targets". 
It is not a KPI. It is an art. Only those who are more experienced in the game can judge those who are less experienced, but gradual mastery equips one with the skill to make good judgements oneself. Of course, Joseph Knecht decides the game is not for him, and a different spiritual path takes him elsewhere. But it is still a spiritual path - perhaps a different kind of game.</p><p>What if one's progression up the ranks of decision-making powers was organised like this? Would we have fewer psychopaths and more enlightened individuals at the top of our organisations? I think this is what Hesse was driving at. After all, he had seen the worst kind of management psychopaths in history in the Nazis. He must have asked himself what novel kind of arrangements might make the making of Nazis less probable. </p><p>The other interesting thing about this though is that the Glass Bead Game is technological. Is there a way in which we could organise our technologies to produce a radically different kind of incentive scheme for those who aspire to become custodians of society? We clearly have some very powerful and novel technologies in front of us which should cause us to reflect on a better world that we might be able to build with them. </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-66497616346158591272023-05-14T22:02:00.006+01:002023-05-14T22:20:49.508+01:00Positioning AI<p>I've been creating a simple app for my Occupational Health students to help them navigate and inquire after their learning content in flexible ways. It's the kind of thing that the chatGPT API makes particularly easy, and it seems worth playing with since chatGPT won't be the only API that does this kind of thing soon (<a href="https://lmsys.org/blog/2023-03-30-vicuna/">Vicuna </a> and other open source offerings are probably the future...)</p><p>As with any tool development, the key is whether the people for whom the tool is made find it useful. 
This is always a tricky moment, because others either will or won't see that the vision of what they do (manifested in the technics of what is made) actually aligns with what they perceive their needs to be. Like a bad doctor, I risk (like so many technical people) positioning students as recipients of techno-pedagogical "treatment". (Bad teachers do this too)</p><p>We've seen so many iterations of tools where mouse clicks and menus have to be negotiated which seem far-removed from real wants and needs. The VLE is the classic example. I wrote a paper about this many years ago with regard to Learning Design technology, which I am reflecting on again in the light of this new technology (see <a href="https://www.researchgate.net/profile/Mark-Johnson-11/publication/220349602_Positioning_Theory_Roles_and_the_Design_and_Implementation_of_Learning_Technology/links/5465ebe60cf2052b50a10075/Positioning-Theory-Roles-and-the-Design-and-Implementation-of-Learning-Technology.pdf">Microsoft Word - 07.doc (researchgate.net)</a>). I used Rom Harre's Positioning Theory as a guide. I still think it is useful, and it makes me wonder how chatGPT might be any different in terms of positioning. </p><p>Harre's Positioning Theory presents a way of talking about the constraints within which the Self is constructed in language and practice. There are three fundamental areas of constraint: </p><p></p><ol style="text-align: left;"><li>The speech acts that can be selected by an individual in their practice</li><li>The positions they occupy in their social settings (for example, as a student, a teacher, a worker, etc)</li><li>The "storyline" in their head which attempts to rationalise their situation and present themselves as heroic. </li></ol><div>With positioning through the use of tools, learners and teachers are often seen as recipients of the tool designer's judgement about what their needs are. 
This is always a problem in any kind of implementation - a constant theme in the adoption of technology. Of course, the storyline for the tool designer is always heroic!</div><div><br /></div><div>But chatGPT doesn't seem to have had any adoption problems. It appears to most people who experience it that this is astonishing technology which can do things which we have been longing for easy solutions to: "please give me the answer to my question without all the ads, and the need to drill through multiple websites! (and then write me a limerick about it)" But in many cases, our needs and desires have been framed by the tedium of the previous generation of technology. It could have been much better - but it wasn't, for reasons which are not technical, but commercial. </div><div><br /></div><div>However, could chatGPT have positioning problems? This is an interesting question because chatGPT is a linguistic tool. It, like us, selects utterances. Its grasp of context is crude by comparison to our awareness of positions, but it does display some contextual (positioning) awareness - not least in its ability to mimic different genres of discourse. Clearly, however, it doesn't have a storyline. But because of the naturalness of the interface, and its ability to gain information from us, it is perfectly capable of learning our storylines. </div><div><br /></div><div>In a world of online AI like chatGPT or Bard, the ability to learn individuals' storylines would be deeply disturbing. However, this is unlikely to be where the technology is heading. AI is a decentralising technology - so we are really talking about a technology which is under the direct control of users, and which has the capacity to learn about its user. That could be a good thing. </div><div><br /></div><div>I might create a tool for my students to use and say "here is something that I think you might find useful". 
Ultimately, whether they find it useful or not depends on whether what they perceive as meaningful matches what I perceive as meaningful to them. But what is "meaningful" in the first place?</div><div><br /></div><div>What students and teachers and technologists are all doing is looking for ways in which they (we) can anticipate our environment. Indeed, this simple fact may be the basic driving force behind the information revolution of the last 40 years. A speech act is a selection of an utterance whose effects are anticipated. If a speech act doesn't produce the expected effects, then we are likely to learn from the unexpected consequences, and choose a different speech act next time. Positioning depends on anticipation, and anticipation depends on having a good model of the world, and particularly, having a storyline which situates the self in that model of the world. </div><div><br /></div><div>Anticipations form in social contexts, in the networks of positionings in which we find ourselves across our different social roles. ChatGPT will no doubt find its way into all walks of life and different positions. Its ability to create difference in many different ways can be a stimulus to revealing ourselves to one another in different social situations. But there are good and bad positionings. The danger is that we allow ourselves to be positioned by the technology as recipients of information, art, AI-generated video, instruction, jokes, etc - and that we lose sight of what drives our curiosity in the first place. That is going to be the key question for education in the future. </div><div><br /></div><div>This is where the guts of judgement lie. What is in a position is not merely a set of expectations about the world around us. It is deeply rooted in our physiology. If we are not to become passively positioned by powerful technology, then it will become necessary for us to look inwards on our physiology in our deepest exercise of judgement. 
This is what we are going to need to teach succeeding generations. Information hypnosis, from which we have been suffering for many years on the web, cannot be the way of the future.</div><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-12191971183679465622023-05-07T10:49:00.003+01:002023-05-07T10:51:10.027+01:00The Endosymbiotic Moment<p>It's become increasingly obvious that there is something quasi-biological about current AI approaches. It's not just that there is a strong genotype-phenotype homology in the way that relatively fixed machine learning models work in partnership with adaptive statistics (see <a href="http://dailyimprovisation.blogspot.com/2023/01/ai-technical-architecture-and-future-of.html">Improvisation Blog: AI, Technical Architecture and the Future of Education (dailyimprovisation.blogspot.com)</a>). More importantly, the unfolding evolutionary dynamics of machine learning also appear to confirm some profound theories about cellular evolution. In my book about the future of education, written four years ago now, I said that there would come an "endosymbiotic moment" between education and technology. Events seem to be playing that out, but now I think it's not just education in for an endosymbiotic moment, but the whole of society. </p><p>This may be why people like Elon Musk, who has had a big stake in AI research, are calling for a "pause". Why? Is it wishful thinking to suggest that it may be because the people who are most threatened by what is happening are people like him? But it may be. </p><p>The essence of biological evolution, and specifically cellular evolution, is that a boundary (e.g. the cell wall) must be maintained. The cell wall defines the relationship between its inside and its outside. Given that the environment of the cell is constantly changing, the cell must somehow adapt to threats to its existence. The principal strategy is what Lynn Margulis called "endosymbiosis". 
This is basically where the cell absorbs aspects of its environment which would otherwise threaten it. For example, it leads to the presence of mitochondria within the cell which, Margulis argued, were once independent simple organisms like bacteria. Endosymbiosis is the means by which the cell becomes more like its environment, and through this process, is able to anticipate any likely threats and opportunities that the environment might throw at it. It is also the way in which cells acquire "memory" of their evolutionary history - a kind of inner story which helps to coordinate future adaptations and coordinations with other cells. From this perspective, DNA is not the "blueprint" for life, but rather the accreted result of ongoing R&D in the cell's existence. </p><p>What's this got to do with technology? The clue is in a leaked memo from Google (<a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither">Google "We Have No Moat, And Neither Does OpenAI" (semianalysis.com)</a>), which highlighted the threat to the company's AI efforts not from competitor companies, but from open source developments. All corporate entities, whether companies, universities or even governments, maintain their viability and identity (and in the case of companies, profits) by maintaining the scarcity of what they do. That means maintaining a boundary. Often we see corporate entities doing this by "swallowing up" aspects of their environment which threaten them. The big tech giants have made a habit of this. </p><p>The Google memo suggests something is happening in the environment which the corporation can't swallow. This is open source development of AI. Of course, there is nothing new about open source, but corporations were always able to maintain an advantage (and maintain scarcity) in their adoption of the technology, often by packaging products and services together to offer them to corporations and individuals. Microsoft has had the biggest success here. 
So why is open source AI so much more of a problem than Open Office or Ubuntu?</p><p>The answer to this question lies in the nature of AI itself. It is, fundamentally, an endosymbiotic technology: a method whereby the vast networked environment of the internet can be absorbed into a single technological device (an individual computer/phone). That device, which then doesn't need to be connected to the internet, can reproduce the variety of the internet. This provides individuals equipped with the technology a vastly increased power to anticipate their environment. Up until this point, the tech industry has aimed to empower individuals with some anticipatory capability, but to maintain control of the tools which provide this. It is that control of the anticipatory tools which is likely to be lost by corporations. And it will not just be chatbots - it will be all forms of AI. It is what might be called a "radical decentralisation moment".</p><p>This has huge implications. Intellectual property, for example, depends on scarcity creation. But what happens if innovation is now performed by (or in conjunction with) machines which are ubiquitous and decentralised? New developments in technology will quickly find their way to the open source world, not just because of some desire to be "open" but because that is the place where they can most effectively develop. Moreover, open source AI is much simpler than open source office applications. It has far fewer components: a training algorithm + data + statistics is just about all that's needed. Who would invest in a new corporate innovation in a world where any innovation is likely to be reproduced by the open source community within a matter of months? (I wonder if the Silicon Valley Bank collapse carried some forewarning of this problem)</p><p>But it's not just the identities of tech businesses which are under threat. What about education? What about government? 
Are we now really so sure that the scarcity of the educational certificate, underpinned by the authority of the institution, is safe from an open source challenge? (Blockchain hasn't gone away, for example) I'm not now, and the way that universities have responded to chatGPT has highlighted the priority for them to "protect the certificate!" like the queen in the hive. If the certificate goes, what else does education have? (I'm not suggesting "nothing", but the certificate is the current business model and has been for decades)</p><p>Then there is government and the legal frameworks which protect the declaration of scarcity in commerce through IP legislation and contracts. The model of this was the East India Company, where protecting territories and trade routes with the use of force underpinned imperial wealth. What if you can't protect anything? What kind of chaos does that produce? AI regulation is not going to be a shopping list of do's and don'ts because it's going to be difficult to stop people doing things. China is perhaps the most interesting case. No government can control a self-installed, non-networked chatbot: it's like kids in the Soviet Union listening to rock and roll on x-ray film turned into records. Then of course there'll be terrorist cells arming themselves with bomb-making experts. We are going to need to think deeper than the ridiculously bureaucratic nonsense of GDPR. </p><p>Our priority in education, industry and government is going to need to be to restabilise relations between entities whose identities will be very different from the identities they have now. In the Reformation, it was the Catholic church which underwent significant changes, underpinned by major changes in government. The English civil war and the restoration produced fundamental changes to government, while the industrial revolution produced deep changes to commerce. But this is a dangerous time. 
Historical precedent shows that changes on this level are rarely unaccompanied by war. </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-24565888680917551782023-04-10T23:31:00.001+01:002023-04-11T08:05:46.518+01:00Quantum Ears<p>It seems obvious to say that music starts at time a and finishes at time b, and in between goes on a journey. But I'm beginning to hear it differently. I don't think there is a time a and a time b: they are constructed as part of our sense-making about what happens to us when we listen or play. Importantly, our sense-making must omit certain key aspects of making music. The principal dimension that I think is omitted is noise, or the energy that is continually shaking our senses and causing our physiology to find new ways of organising itself. </p><p>If we consider what noise does, then the journey of music over what is perceived as time is entirely co-present at any "now". Music is more like a space to explore than a path to follow. From the very moment that we both make and don't make a sound, the whole space is there, existing in the dynamic between physiology and the universe. </p><p>Harrison Birtwistle seems to have heard music like this, and his thought has had a big influence on me. I was particularly struck by Birtwistle's appreciation of Paul Klee - particularly Klee's pedagogical sketchbooks. Birtwistle says:</p><p></p><blockquote>Like Paul Klee, I'm taking a line for a walk. But the lines Klee draws are pure continuum, they look like a map of a walk or a journey. And this is how we usually think of journeys - fluid things which are uninterrupted. But when you're in the process of journeying, you perceive them differently. You don't look straight ahead, you look to the right and then to the left. And when you turn to the left you fail to take in the events on the right and vice-versa. 
In retrospect you think of the journey as being a logical progression from one thing to another, but in actual fact it consists of a series of unrelated things, which means that you're simply making choices all the time, choices about where to look. It's to do with discontinuity. You have a continuum, but you're cutting things out of it while you look the other way.</blockquote><p>Music is discontinuous in essence. The "continuity" is something that perception imposes on us, making us ignorant of the dynamics that drive its discontinuities. Deep down, what we perceive in Mozart or Bach (and in Birtwistle) is coherence, which is not the same thing as continuity. </p><p>Coherence does not need time as we understand it. It represents the deep symmetry of nature, in which what we call time is a parameter. In quantum mechanics, this deep symmetry is what balances out local (physically proximate) phenomena with non-local (physically distant) phenomena. For there to be "spooky action at a distance" (which there appears to be), then there must be some underlying balancing that goes on between what happens locally and something happening non-locally. All matter, including our physiology - and our ears - will partake in this universal symmetry.</p><p>Because of this complex symmetrical mechanism, the energy of the quantum world is always buzzing and interfering with our physiological substrates. To deal with this, all life needs to construct niches. The space of music is its niche. To be entranced by music is to be drawn into its niche, and then (in the case of Western classical music) to be convinced of music's "journeying". But the journey is an illusion. Music immediately presents a multiplicity of the same thing. Heterophony is the closest we get to this kind of thing. </p><p>Taking time and continuity out of the music equation carries important lessons for other aspects of life. 
Learning, like music, is discontinuous, but learners and teachers are forced to deny this by the expedience of institutions which must regiment educational practice. Equally, the climate emergency is often portrayed as a "race against time" - but rather like the pathology of education, the more we impose a linear model on what is essentially a discontinuous system, the more denatured we become (despite the good intentions of activists), not less. The same is true in politics: our only understanding of a regulatory system is one which works in a linear, continuous fashion, and which in operation creates more alienation. </p><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-25492490093787573242023-04-06T08:52:00.000+01:002023-04-06T08:52:42.826+01:00Universal UncertaintyMeasuring "speed" of change is tricky - speed is relational. There does, however, seem to be a lot more uncertainty around: anticipating the future means grappling with very high degrees of contingency. When we say "things" change, what we mean by "things" is not so much "stuff happening in the world", but rather our <i>relation </i>to "stuff happening in the world". It's not the stuff which is uncertain. It is the relationship between our context and perception and "stuff" which is generating more contingency in our decision-making. <div><br /></div><div>Uncertainty means disorder in relations. We can measure "maximum disorder" of relations as the entropy of the stuff in the world (particularly when new technologies increase the number of options we have, or a new virus radically restricts our capacity to adapt to the world) in relation to the entropy of our capacity to deal with it. If the equations don't balance, then there will be uncertainty. At some point in the future, these equations will balance out again - and on it goes. This appears to be an evolutionary principle. </div><div><br /></div><div>COVID was a good example of this explosive relative uncertainty. 
A disruption at a biological level of organisation impacted on the normal institutional mechanisms for dealing with uncertainty (see here: <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7518093/">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7518093/</a>). As a result, it became very difficult to coordinate expectations across society with normal regulatory mechanisms. This necessitated an authoritarian doctrine of "follow the science", backed up with the threat of force, as a way of radically changing the way people lived. The irony of this was that science is the business of exploring uncertainty, while the COVID authoritarian science (rather like "school science") excluded uncertainty from its official pronouncements, leaving doubt and inquiry in the hands of conspiracy theorists. "Following the science" is not the same as "being scientific".</div><div> </div><div>I've been thinking about this diagram, presented by Jerry Ravetz to explain Post-Normal Science. All science displays degrees of uncertainty. In a presentation I gave the other week, I contrasted images from the Hubble telescope and images from the James Webb telescope. I said that while the technology improves, and we get more information (in fact, the maximum entropy of information increases), there is still a relation between those things which we are certain about, and those things which we are not certain about. The <i>relation</i> between certainty about craters on the moon, and certainty about planets in other galaxies, is constant. 
</div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgCUdMupKQ5H6z7QDwSKWBN2sazoJsQoO50oQ2hdihnrliu4LNu9u157Y5-p7fWYnp8mfZngdXUXsOsKT3PUlci9vgSuthC-OrdeXmuaT5KqkpxxKtCS0SRJKt3vbBTzlHRc40xysoXrszZKhlH-RCbiEeSf3nAYeteoNO_8V7k7kjXREYxYG45pO33Kw" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="585" data-original-width="725" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEgCUdMupKQ5H6z7QDwSKWBN2sazoJsQoO50oQ2hdihnrliu4LNu9u157Y5-p7fWYnp8mfZngdXUXsOsKT3PUlci9vgSuthC-OrdeXmuaT5KqkpxxKtCS0SRJKt3vbBTzlHRc40xysoXrszZKhlH-RCbiEeSf3nAYeteoNO_8V7k7kjXREYxYG45pO33Kw" width="297" /></a></div><br />In the context of COVID, this is useful, because there were things which we knew were high risk for transmission, and other things about which there was much argument. With COVID, there were also high decision stakes alongside high scientific uncertainty. The difficulty was that government not only failed to convey the systems uncertainty, but in fact attenuated it.</div><div> </div><div>This diagram is also interesting because it reveals that there is a gradation of causal relationships in the "systems uncertainty" direction. Attributions of causation between factors become more contingent the further one goes from left to right. It is perhaps no surprise that contingency in decisions also rises, and perhaps this is related to the "stakes" of those decisions. How might we think about this gradation of causal relationships? </div><div> </div><div>These must be related to the communication dynamics that are established in the light of experience. Hume argued that causes were the outcome of communication dynamics between scientists in the light of their experiments. I think he was right (although lots of people don't). Regularity of events was the key ingredient to produce scientific consensus. 
The problem is that with higher systems uncertainties, the likelihood of regularity in events becomes less. Systems become more complex, more contingent, mechanisms harder to agree on. This lack of social agreement can impact the decision stakes: failure to agree scientifically can produce political chaos and social disorder. </div><div><br /></div><div>With COVID, the fundamental disruptive mechanism was a bio-techno-social dynamic, where technology took the form of apps, masks, vaccines, etc. It's actually very similar with AI at the moment. That is also a bio-techno-social disruption, where it's not a disease that represents the "bio" bit, but our cognition and emotions. The challenge for institutions is to find a way of renormalising relations. That requires finding new perspectives from which to view the dynamics we are in. </div><div><br /></div><div>In some ways, COVID presented an easier challenge because it (sort of) went away, and life could get "back to normal". AI is much more serious because the institutional discourse relations cannot grasp what is happening in the bio-techno-social mechanism, and are constantly blind-sided by "the next cool thing". I wonder if these are the conditions within which Copernicus and Galileo paved the way to a social gestalt-switch which restabilised European institutions.</div><div><br /></div><div>In order to get on top of what is happening to technology, we are going to need a similar gestalt-switch. <br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5139380866860511018.post-53855535329572764002023-03-09T21:37:00.001+00:002023-03-10T15:05:10.420+00:00The Maximum Entropy of Work<p>I'm very doubtful that the current trajectory of AI will make our lives easier. 
Indeed, the impressive progress of AI has led me to reflect on the fact that despite huge technological advances over the last 50 years, the lives of the majority of people have got harder and more uncertain. If I compare my own career with that of my dad, he was able to jog along with a job he didn't much like, but basically survived without too much threat, and retired at 58 with a very generous pension. My journey, by comparison, has been a rollercoaster (indeed, a rollercoaster with some bits of the track missing!) and I am seeing people (particularly young academics) in their 30s faring even less well. So what's going on? And - before I delve into that further - it's too easy and lazy simply to blame "capitalism": we need to be more precise. </p><p>I suspect the common denominator in the work equation is technological advancement alongside rigid institutional structures. This is not to denigrate technology - it is amazing - but it is to ask deeper questions about our institutions. I think there is a systems explanation for what is happening.</p><p>When a new technology arrives in a social system (a society, a business, an institution) it increases the possibilities for surprise in that system. Quite simply, new things become possible which people haven't seen before. Since information entropy is a measure of surprise, we can say that the "maximum entropy" of the social system increases, where this maximum is what is possible - not necessarily what is observed. </p><p>What is observed in a social system with a new technology is a degree of surprise (some degree of innovation is observable), but nowhere near the maximum amount of possible surprise. So observable entropy increases, but the maximum entropy increases more. What does this mean for work and workers?</p><p>A bit like voltage in electronics, the difference between the maximum potential and the observed reality creates a space in which activity is stimulated. 
The bigger the space between observed entropy and maximum entropy, the greater the stimulation for activity. This activity is what we do in work. More precisely, work becomes a process of exploring the many ways in which the possible new configurations of practice and technology can be realised. Some of that work is called "research", other aspects of it might be called "operations", others "management", but whatever kind of activity it is, it increasingly involves the exploration of new options.</p><p>This "work space" between the maximum entropy and the observed entropy is, as David Graeber famously argued in <i>Bullshit Jobs</i>, mostly pointless. The work is basically doing things that have been, or can be, done in many different ways: it is effectively "redundant". But that's the point - redundancy generation is what must go on in the space between the maximum entropy and the observed entropy. And it is exhausting and dispiriting, particularly if it increases. </p><p>This is a bleak outlook because, of all recent technologies to increase the maximum entropy, AI is in a league of its own. It will accelerate the growth of maximum entropy beyond anything we have yet seen. So what will happen with the observed entropy and the work in the space between?</p><p>The problem is the increasing gap between observed entropy and maximum entropy. What keeps the observed entropy so much lower is the structure of institutions. The deepest risk is that the maximum entropy goes off the scale, while the observed entropy - the visible interface to existing institutions - doesn't change very much at all. That will create a pressure-cooker atmosphere within the work system. There will be work, and indeed more of it than ever before, but work will become increasingly febrile and pointless. It will make us sick: the mental health of workers, students and everyone else will suffer. 
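The gap between observed and maximum entropy can be made concrete with a small sketch. The distribution below is invented purely for illustration: a system with N possible configurations has a maximum entropy of log2(N) bits, while observed practice, concentrated on a few familiar options, carries much less.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A system where a new technology opens up 8 possible configurations.
num_options = 8
max_entropy = math.log2(num_options)  # maximum entropy: all options equally likely

# What is actually observed: practice concentrates on a few familiar options.
# (These probabilities are invented for illustration.)
observed = [0.55, 0.25, 0.1, 0.05, 0.05, 0.0, 0.0, 0.0]
observed_entropy = shannon_entropy(observed)

# The "work space": the difference between what is possible and what is realised.
gap = max_entropy - observed_entropy
print(f"max = {max_entropy:.2f} bits, observed = {observed_entropy:.2f} bits, gap = {gap:.2f} bits")
```

A new technology raises `num_options` (and so `max_entropy`) faster than the observed distribution spreads out, which is the widening gap described above.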
</p><p>It would be better if the redundancy-generating space were kept stable rather than increasing. This might be achieved if we consider the drivers for increasing maximum entropy through technology. One of the drivers is noise. It is the noise generated by an existing technology (for example, an AI) which drives the innovation to the next iteration of the technology. If human labour were seen as an effective management of noise, rather than the generation of redundancy, then society might be steered in a way which doesn't cause internal collapse. </p><p>Another way of saying this is that uncertainty is the variable to manipulate collectively, and only humans can manipulate this variable. One of the problems with increasing maximum entropy is that labour is directed to do tasks that can be clearly defined. We see this with chatGPT at the moment: thousands of academics who say "we can use it to do <insert name of well-defined task>". This is looking for your keys where the light is, not where you lost them. </p><p>One of the things the technology might be able to do is to direct human labour to where the uncertainty is greatest. Focused in this way, the work is really about exploring differences between different people's understandings of things which nobody is clear about. This is high-variety, convivial, high-level work for the many. Part of this work is to explore the possibilities of new technology - the "redundancy work" in the space between observed and maximum entropy. But the other part is to coordinate intellectual effort in exploring the noise of uncertainty, and the result of that work can help manage the gap between maximum entropy and observed entropy. </p><p>What does this look like practically? 
I think, given that uncertainty is experienced physiologically, and exploring uncertainty together is deeply convivial, this looks like work with a focus on wellness, maybe using technology to identify where wellness might be threatened. </p><p>Creating a "wellness system" is a possibility. The consequences of not doing this look far more dire than anyone can yet imagine. </p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5139380866860511018.post-72197013283048469562023-03-08T11:37:00.006+00:002023-03-08T22:34:31.635+00:00Birtwistle's Seriousness<p>I attended the commemorative concert for Harrison Birtwistle on Sunday. It was a powerful occasion which has led me to think about the abandonment of seriousness in art which seems to have occurred in the last 20 years or so. Birtwistle was a serious artist - by which I mean that he never sought popularity. He was committed to his project, crystal clear in its direction and what he was doing, and uncompromising in his attitude towards whether anybody else liked it or not. </p><p>He was lucky in the sense that his formative years coincided with a post-war spirit that supported experimental music that was often hard on the ears, but which allowed for the exploration of deeper meaning. This supportive spirit has pretty much gone with late capitalism's demand that a market must exist for whatever the artist produces. Birtwistle now has a niche because it was able to grow in better times. How could such a niche be constructed now? What do we lose if we lose our ability to do this?</p><p>Part of the problem in answering this is that art is not always for the present or a present audience - it is for a future where things that may not resonate in the present find resonance decades after the artist is dead. Birtwistle's music will make more sense and convey its power and meaning more overtly in future worlds. How do we know which art will produce this effect? 
This is where some kind of deeper knowledge of what matters is important. Some people can tune into this and know what matters, what needs to be preserved. Those people too are now threatened in an anti-intellectual climate which even (or maybe particularly) in universities favours work that delivers immediate gain. </p><p>Universities are part of society's mechanism for selecting what matters. They are now failing to do this. The decline of the professoriate, both in quality and in its power to steer institutions, is a signal of what has gone wrong. It is difficult to see a way back, although I would guess it would likely feature technology. I'm not sure how, though. </p><p>If we have no mechanism for selecting what matters, the future state of knowledge is threatened. It is an analogue of the current ecological crisis - the decline in diversity of species. </p><p>The Birtwistle piece that opened the concert was a short duet called "The message". This took inspiration from an artwork by Bob Law containing the words: "The purpose of life is to pass the message on". Birtwistle's seriousness lies in the fact that he understood this. 
</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEi4bMjqr2xX1eOyQNJnGjq1iZ3jz5PwKHhROZRfLx_G34jiPGTRM7eTAidswT7lB0Tv-BbIWxGgj53z7iEfrjSY0twmtA7fFFTwE8KCxL2tUjE0JZNAmQU4Z1wbBl_TDtbtzpRx2Oq3Xq6uHR1MX0zpzIQLGrUzajHKLlgrA_0Mc673qCGmVt7aZHuIvg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="266" data-original-width="189" height="400" src="https://blogger.googleusercontent.com/img/a/AVvXsEi4bMjqr2xX1eOyQNJnGjq1iZ3jz5PwKHhROZRfLx_G34jiPGTRM7eTAidswT7lB0Tv-BbIWxGgj53z7iEfrjSY0twmtA7fFFTwE8KCxL2tUjE0JZNAmQU4Z1wbBl_TDtbtzpRx2Oq3Xq6uHR1MX0zpzIQLGrUzajHKLlgrA_0Mc673qCGmVt7aZHuIvg=w285-h400" width="285" /></a></div><br /><br /><p></p><p>All seriousness is about understanding this message.</p><p>And we can hope that the best of his music should be a sufficient transducer - like this: <a href="https://www.youtube.com/watch?v=AVnpktJtOms">Harrison Birtwistle - Earth Dances - YouTube</a></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-5034361894692896512023-01-30T19:56:00.008+00:002023-02-01T07:30:37.350+00:00AI, Technical Architecture and the Future of Education<p>I gave a presentation to the leaders of Learning Support at the University of Copenhagen this morning. I will write a paper about this, but in the meantime this is a blogpost to summarise the key points.</p><p>I began by saying that I would say nothing about "stopping the students cheating". I said basically, as leaders in learning technology in universities, there is no time to worry about this. The technology is improving so fast, what really matters is to think ahead about how things are going to change, and the strategies that are required to adapt. </p><p>I said that basically, we are in "Singin' in the Rain". The movie is a good guide to the tech-rush that's about to unfold. 
</p><div class="separator" style="clear: both; text-align: left;">I also referred to the 2001 Spielberg movie AI, which I didn't understand when I first saw it. I think we will look back on it as a prescient masterpiece. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">My own credentials for talking about AI are that I have been involved in an AI diagnostic project in Diabetic Retinopathy for 7 years at the University of Liverpool, and after £1.1m of project funding and then £2m of VC support, this has now been spun out. When the project started I was an AI sceptic (despite being the co-inventor of the novel approach that has led to its success!). I'm not sceptical now. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I said that what is really important to understand is how the technology represents a new kind of technical architecture. I represented this with a diagram:</div><div class="separator" style="clear: both; text-align: left;"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgUqtKzijP9FtOT5LeNeehakU_7hXYCvT0xMwFe-UA7asDlSoaxrf9p9mS8bCVfAkgE0CERazgUo70ggEXjEBOuGtKA0fZohOidImv70fDXjeGY4z-hmRmv53OpP3IqLwW6hB3fcweENZ8O9Bzh0_RYaf1CHZX3tnVdgxlKUVyGod49LehRZwzyVkqIaQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEgUqtKzijP9FtOT5LeNeehakU_7hXYCvT0xMwFe-UA7asDlSoaxrf9p9mS8bCVfAkgE0CERazgUo70ggEXjEBOuGtKA0fZohOidImv70fDXjeGY4z-hmRmv53OpP3IqLwW6hB3fcweENZ8O9Bzh0_RYaf1CHZX3tnVdgxlKUVyGod49LehRZwzyVkqIaQ" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">As a term, AI is a silly description. "Artificial Anticipation" is much better. 
The technology is new. It is not a database; it consists of a document called a model (which is a file) that can be thought of as being like a "sieve". The configuration of the structure of the sieve is produced through a process called "training", which requires lots of data, and lots and lots of time. This process uses huge amounts of data from the internet. Training requires "data redundancy" - lots of representations of the same thing. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Since academics have been busy writing papers which are very similar to each other for the last 30 years, chatGPT has had rich pickings from which it can train itself. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">If you want to understand the training process, I recommend looking at google's "teachable machine" (see <a href="http://teachablemachine.withgoogle.com">http://teachablemachine.withgoogle.com</a>). This allows you to not only train a machine learning model (to recognise images or objects), but to download the model file and write your own programs with it. 
It's designed for children - which is how simple all of this stuff will be quite soon...</div><div class="separator" style="clear: both; text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj9PYqafzCzsxEuqbuFWuCNAC_YRCwhdArk90GdwHjC-xaxs79PU-5zSERAY0gujDwq3OAPMAQlFIdbtdq2G5ahdZHzPDsRcT_2KJSmYo4v6dbFlghI8pbrJRrhgJXCTqqTuHeBUxXN6QVVKysrEIiZyC8NptGqArnN1gTm1atajndZTEp9Vlxx_q78pg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="828" data-original-width="1766" height="150" src="https://blogger.googleusercontent.com/img/a/AVvXsEj9PYqafzCzsxEuqbuFWuCNAC_YRCwhdArk90GdwHjC-xaxs79PU-5zSERAY0gujDwq3OAPMAQlFIdbtdq2G5ahdZHzPDsRcT_2KJSmYo4v6dbFlghI8pbrJRrhgJXCTqqTuHeBUxXN6QVVKysrEIiZyC8NptGqArnN1gTm1atajndZTEp9Vlxx_q78pg" width="320" /></a></div><br /><br /></div><div class="separator" style="clear: both; text-align: left;">Once trained, the "model" does not need to be connected to the internet (chatGPT isn't, despite being accessed online). The model can make predictions about the likely categories of data it hasn't seen before (unlike a database which gives back what was put into it in response to a query). The better the training, the better the predictions. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">All predictions are probabilities. In chatGPT, every word is chosen according to the predictions of the chatGPT model, on the basis of the probabilities generated by the model. The basic architecture looks like the diagram above. Notice how the output of the text is fed as input back into the model. Also notice the statistical layer which does something called "autoregression" to refine the selection process from the options presented by the model. 
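The autoregressive loop just described can be sketched in a few lines. The "model" here is an invented probability table (a real LLM computes these probabilities with a neural network, and this is not chatGPT's actual code) - the point is the architecture: the model proposes probabilities for the next word, a statistical layer selects one option, and the output is fed back in as the next input.

```python
import random

# Toy "model": a fixed table of next-word probabilities, standing in for a
# trained model file. The words and numbers are invented for illustration.
MODEL = {
    "the": {"cat": 0.5, "dog": 0.4, "the": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def sample_next(word, temperature=1.0):
    """Statistical layer: reshape the model's probabilities and pick one option."""
    options = MODEL[word]
    weights = [p ** (1.0 / temperature) for p in options.values()]
    return random.choices(list(options), weights=weights, k=1)[0]

def generate(start, steps=3):
    """Autoregressive loop: each output word is fed back in as the next input."""
    words = [start]
    for _ in range(steps):
        if words[-1] not in MODEL:
            break
        words.append(sample_next(words[-1]))
    return " ".join(words)

random.seed(0)
print(generate("the"))  # e.g. "the dog ran away" - the path depends on the sampling
```

Lowering `temperature` sharpens the selection towards the model's most probable option; raising it flattens the choice - a crude picture of how the statistical layer refines what the frozen model proposes.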
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This architecture is where the clues are to how profound the impact of the technology is going to be. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Models are not connected to the internet. That means they can stand alone and do everything that chatGPT does. We can have conversations with a file on our device as if we were on the internet. Spielberg got this spot-on in AI. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Another implication of this is that, as I (carefully) pointed out to some Chinese students I gave a presentation to a few months back (at Beijing Normal University), the conversations you have can be entirely private. There need not be any internet traffic. Think about the implications of that. 
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We are going to see AI models on personal devices doing all kinds of things everywhere.</div><div class="separator" style="clear: both; text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiRQEY5q7BFuf_9vFrtArydrfzbZBvPP2wOxeN38ibdtZOs8qyVPWNl8uU7rtoSTxP9j1tRZh-lDtJMvWs0En0XvANRwKNLbcBO2PtKPQV1yct5LH6mUhwmSvghse57L1abPt6dx0G_fT9Qoos4F1gRQDWc2gMc8nOblq23o1y9BeRw5_XQaCer_2Hq6Q" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEiRQEY5q7BFuf_9vFrtArydrfzbZBvPP2wOxeN38ibdtZOs8qyVPWNl8uU7rtoSTxP9j1tRZh-lDtJMvWs0En0XvANRwKNLbcBO2PtKPQV1yct5LH6mUhwmSvghse57L1abPt6dx0G_fT9Qoos4F1gRQDWc2gMc8nOblq23o1y9BeRw5_XQaCer_2Hq6Q" width="320" /></a></div><br />I made a couple of cybernetic references: one to Ashby's homeostat - because the homeostat's autonomous units coordinated their behaviour with each other in the way that AIs are likely to provide data for other AIs to train themselves. This is likely to be a tipping point. I strongly suggested that people read Andy Pickering's "The Cybernetic Brain".</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">There's something biological about this architecture. In most machine learning applications, the model itself does not change: chatGPT's model does not retrain itself, because retraining takes huge amounts of resource and time. What happens is that the statistical layer which refines the selection does adapt. Biologically, it's similar to the model being the genotype (DNA) and the statistical layer being the phenotype (the adaptive organism). 
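The genotype/phenotype split can be sketched as follows. Everything here is hypothetical - the option names and numbers are invented, and this is not chatGPT's actual mechanism - but it shows the principle: the trained model is frozen, while a small statistical layer adapts its weighting of the model's options in the light of feedback.

```python
# Frozen "genotype": probabilities fixed at training time, never updated here.
FROZEN_MODEL = {"answer_a": 0.5, "answer_b": 0.3, "answer_c": 0.2}

class AdaptiveLayer:
    """The "phenotype": a reweighting layer that adapts; the model does not."""

    def __init__(self, options):
        self.bias = {o: 1.0 for o in options}  # learned biases, initially neutral

    def scores(self, model_probs):
        """Combine the frozen model's probabilities with the learned biases."""
        raw = {o: p * self.bias[o] for o, p in model_probs.items()}
        total = sum(raw.values())
        return {o: v / total for o, v in raw.items()}

    def feedback(self, option, reward):
        """Adapt the phenotype: nudge a bias, leave the genotype untouched."""
        self.bias[option] *= (1.0 + reward)

layer = AdaptiveLayer(FROZEN_MODEL)
before = layer.scores(FROZEN_MODEL)["answer_b"]
layer.feedback("answer_b", 0.5)          # positive feedback on answer_b
after = layer.scores(FROZEN_MODEL)["answer_b"]
assert FROZEN_MODEL["answer_b"] == 0.3   # the "genotype" is unchanged
print(f"answer_b weight: {before:.2f} -> {after:.2f}")
```

The point of the design is that adaptation is cheap (a few numbers in the outer layer) while the expensive, slow-to-produce model stays fixed - the biological analogy in the paragraph above.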
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This also ties in with AI being seen as an anticipatory system because the academic work on anticipatory systems originally comes from biology: an anticipatory system is a system which contains a model of itself in its environment (Robert Rosen). Loet Leydesdorff, with whom I have worked for nearly 15 years, has developed a model of this (building on Rosen's work) to explain communication in the context of economics, innovation and academic discourse (the Triple Helix). I have found Loet's thinking very powerful to explain this current phase of AI.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgt76VEjrzPd2bfi__5_VgRxB-NpAliLbaNYUMowsCSON8rwpEpLemnx8wnmAugzxrfJFOkNJzgOUtWweKteT8wp6K0RK4CCCmYUycVg9jKApk_FIdSn7FfCdIRjK4EXHALmFtRy9wAVon7BfaHcGqOzGnZLXKQvEZBPMSt1PSxyhwTsTWhkSVGUbnXGg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEgt76VEjrzPd2bfi__5_VgRxB-NpAliLbaNYUMowsCSON8rwpEpLemnx8wnmAugzxrfJFOkNJzgOUtWweKteT8wp6K0RK4CCCmYUycVg9jKApk_FIdSn7FfCdIRjK4EXHALmFtRy9wAVon7BfaHcGqOzGnZLXKQvEZBPMSt1PSxyhwTsTWhkSVGUbnXGg" width="320" /></a></div><br /><div style="text-align: left;">Of course, there are limitations to the technology. 
But some of these - particularly around uncertainty and inspectability - will, I think, be overcome (some of my own work concerns this).</div><div style="text-align: left;"><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjSfsKYZ0K228g6w-z-PJdaBPxe2TNbA_hcm_A9-BgxFuO2bdZw1pU8SwEdFHVN-a6wSgiR1H85iGUnfTU-YW4Uqui61YQhwPpp4-5X90Pd728GkOQ0LAMtUWrOvZ38pKrHFgXdYVG59kJIR67IzZQtZIuO_SdduhagWFiHd8DJxxsnt5WwuGqfF1H14A" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEjSfsKYZ0K228g6w-z-PJdaBPxe2TNbA_hcm_A9-BgxFuO2bdZw1pU8SwEdFHVN-a6wSgiR1H85iGUnfTU-YW4Uqui61YQhwPpp4-5X90Pd728GkOQ0LAMtUWrOvZ38pKrHFgXdYVG59kJIR67IzZQtZIuO_SdduhagWFiHd8DJxxsnt5WwuGqfF1H14A" width="320" /></a></div><div style="text-align: left;"><br /></div>But perhaps the biggest question concerns the nature of the technical architecture. AI - or Artificial Anticipatory Technology - is basically a document which is also a medium. What does that mean for us? 
Why does it matter in education?</div><div style="text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgU-jIH5Yu52jlJs1a8bLXgHUMyjDWQayVW3ItV6g78Z_4XHcI9jxfHn_GZ35tN_UrcYr_FS29qV5A_KbcKOMS3kpo1tZTaANIJXSXA9YGdfh8Vh0hTtokg6D1g2IIcdQaHLDRq3PU7oA0Y3wmKyh8pEAPA-6CU5whunH9uKj0DsD4X44HokaqabA7VQw" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEgU-jIH5Yu52jlJs1a8bLXgHUMyjDWQayVW3ItV6g78Z_4XHcI9jxfHn_GZ35tN_UrcYr_FS29qV5A_KbcKOMS3kpo1tZTaANIJXSXA9YGdfh8Vh0hTtokg6D1g2IIcdQaHLDRq3PU7oA0Y3wmKyh8pEAPA-6CU5whunH9uKj0DsD4X44HokaqabA7VQw" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div>The real question behind this is "What is education for?". Again, Spielberg gets something deeply correct here: one of the principal reasons why we have education at all is the ongoing survival of the species - which means that those who will die first must pass on the ability to make good judgements about the world to those who are younger. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">The education system is our technology for doing this. It's rather crude and introduces all kinds of problems. It combines documents (books, papers, videos, etc) which contain knowledge which requires interpretation and communication by teachers and students in order to fulfil this "cultural transmission" (someone objected to the word "transmission", and I agree it's an awkward shorthand for the complexity of what really happens).</div><div style="text-align: left;"><br /></div><div style="text-align: left;">AI is a document which is also a medium of interpretation and communication. It is a new kind of cultural artefact. 
What kind of education system do we build around this? Do we even need an education system that looks remotely like what we have now?</div><div style="text-align: left;"><br /></div><div style="text-align: left;">As I said, this is what we should be thinking about. It's going to come for us much faster than most senior managers in universities can imagine. </div></div></div></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">So we simply haven't got time to worry about stopping the kids cheating!</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-2288755713275534132023-01-13T18:44:00.004+00:002023-01-13T19:33:02.694+00:00Triad Chords as a "nice noise" (From Plankton to Puccini)<p>20 years ago, when the Lindsay String Quartet retired from Manchester University, Ian Kemp - who had been an inspirational musical figure for me and so many others - returned from retirement to conduct a last "Lindsay session", playing Beethoven and Tippett (which was the favourite diet). Although Ian complained that he was "bad at hearing", his musical intellect remained sharp as a tack. </p><p>There was a passage in the music (I think it must have been Tippett) which was very unusual. So he asked, in his typical way, "what's going on here?". By this time, university academics of Kemp's temperament were very rare, and they had been replaced with younger people who were eager to please and were full of "musical analysis terminology". So Ian's question prompted much impressive-sounding jargon. "Perhaps," he said on hearing this, "but maybe it's just a nice noise". </p><p>So what is a nice noise? We hear, with Western ears at least, the major triad as the epitome of musical consonance - a nice noise. 
It is a resting place, and the tonal geometric relations that form around the triad provide us not only with the "nice noise" of the chord itself, but an unfolding diachronic (and diatonic) space with which we can engineer a sense of arrival and homecoming in tonal music. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhz_xDJ40G9mv1QLJ-QgzgaB7i3-msAKQ4YLxMYNXkHUwQkatkFN3YDop-J4c7ZvGxu6heEXY2aPIxh4KkkV-GfqUNJqTi8RPwU4mji7HNVYccopvCD3DOxih4zQ9khh-9UaLeV8G0-hkQDnJtgXkbHAT_IeYy1UcW1RepGGejqAWqjiBR3GJDqkvrQIg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="170" data-original-width="894" height="61" src="https://blogger.googleusercontent.com/img/a/AVvXsEhz_xDJ40G9mv1QLJ-QgzgaB7i3-msAKQ4YLxMYNXkHUwQkatkFN3YDop-J4c7ZvGxu6heEXY2aPIxh4KkkV-GfqUNJqTi8RPwU4mji7HNVYccopvCD3DOxih4zQ9khh-9UaLeV8G0-hkQDnJtgXkbHAT_IeYy1UcW1RepGGejqAWqjiBR3GJDqkvrQIg" width="320" /></a></div><br />When we learn about triads, we are introduced to the notation, and young pianists are taught how to shape their hands. But something gets added in both these cases. The triad is never "just" the notes. It is never "just the hand-shape". If it was "just the notes", then playing a triad with sine waves would be as satisfying as playing it on the piano. But it isn't - and this is my point: the triad's beauty lies in what occurs <i>outside </i>the notes. It lies in the noise that surrounds it. <p></p><div>So much of music analysis manages to miss the music. I strongly suspect that Kemp's "nice noise" comment hit the music on the nose. 
Part of the key to understanding this (pardon the pun) lies in inspecting the relationship between a triad and a note.</div><div><br /></div><div>Marina Frolova-Walker's fascinating lecture on the triad (see <a href="https://www.youtube.com/watch?v=PW21OfIs3Nc">Triads, Major and Minor - YouTube</a>) includes a nice demonstration of the overtone series and how this relates to the triad. But if we play a note and analyse its harmonics, we see the different harmonics at a couple of octaves above the fundamental note. If we add another note a third above the original note, what actually happens is that the overall spectrum becomes "noisier" - there is a tussle between two fundamental notes which are nevertheless connected. </div><div><br /></div><div>Marina does say something about the experience of early musicians in hearing the consonance between two notes. This must have been fascinating and puzzling, because perception struggles to piece together the coherence of sounds which on the one hand interfere with each other, and on the other, agree with each other. 
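This "tussle" between two connected fundamentals can be put in rough numerical terms: a tone built from harmonics, joined by a second tone a just major third above (frequency ratio 5:4), yields a spectrum with more competing peaks - and therefore higher spectral entropy - than either tone alone. A minimal numpy sketch, assuming a simple 1/n amplitude rolloff for the overtones (my choice of rolloff and frequencies, not anything from the lecture):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the signal's normalized power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 1e-12]                      # drop numerically empty bins
    return float(-(p * np.log2(p)).sum())

fs = 8192                                 # sample rate; a 1-second window gives 1 Hz bins
t = np.arange(fs) / fs

def tone(f0, harmonics=8):
    """A note with overtones at f0*n, at assumed amplitudes 1/n."""
    return sum(np.sin(2 * np.pi * f0 * n * t) / n
               for n in range(1, harmonics + 1))

single = tone(256)                        # one fundamental plus its harmonics
dyad = tone(256) + tone(320)              # add a just major third (ratio 5:4)

# the dyad's spectrum carries more distinct, competing peaks - it is "noisier"
print(spectral_entropy(single), spectral_entropy(dyad))
```

With these settings the dyad's spectral entropy comes out higher than the single tone's - one crude way of quantifying the "noisier" combined sound while both notes remain harmonically connected.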
The recursive operations of consciousness in the face of this oscillation are possibly comparable to the way that early art features recursive geometric tiling patterns (across many different cultures around the world).</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjcOgHB3smSpUFdi-uDsLpqji7x0pspM87RYpA5QZhUZpXmXlQ2jeD04fFVpkta6pXIAYNFJrBidor3Mlx12kXiXnrDE7qK_GDbT567M-CFRNj7I4tBupRxXagrgapcHmgBslBGWOulQfKw6PKRGzlRJ-JYeoT-OZoLAfN75olDyCQSDGkHIod1krgknQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="152" data-original-width="331" height="147" src="https://blogger.googleusercontent.com/img/a/AVvXsEjcOgHB3smSpUFdi-uDsLpqji7x0pspM87RYpA5QZhUZpXmXlQ2jeD04fFVpkta6pXIAYNFJrBidor3Mlx12kXiXnrDE7qK_GDbT567M-CFRNj7I4tBupRxXagrgapcHmgBslBGWOulQfKw6PKRGzlRJ-JYeoT-OZoLAfN75olDyCQSDGkHIod1krgknQ" width="320" /></a></div><br />Just as with the oscillations of perception with a tiling pattern, the oscillations of perception with a triad create a dynamic dance between noise and consonance. As Marina illustrates at the beginning of her talk, Wagner completely understands and demonstrates this dance at the beginning of The Ring. </div><div><br /></div><div>The consonance of the triad is not static - it moves. But it moves in a way that fascinates perception. Understanding this also helps to explain why not everybody in the world has the same music. The issue is not about consonance and dissonance - it is about the relationship between stability, order and noise. Western harmony is one way of managing a dance between these factors, but it depends on particular kinds of social relation which reflect the society that favours that way of doing things. There are many others, just as there are many other kinds of society. </div><div><br /></div><div>The role of noise in creating order is much overlooked. 
Kemp's "nice noise", and the triad itself, is a dynamic relation between noise and order. An energy imbalance inherent in the first note connects the physiology of perception and action with the physics of sound. The noise around music is essential in driving forward the unfolding of structures immanent in the sound as more energy is produced and the physiology of expectation adapts. </div><div><br /></div><div>I thought a while ago that there was a clear distinction between the synchronic aspects of music and the diachronic aspects. (I wrote about this here: <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738">Redundancies in the communication of music: An operationalization of Schutz's ‘Making Music Together’ - Johnson - 2021 - Systems Research and Behavioral Science - Wiley Online Library</a> and here: <a href="https://link.springer.com/article/10.1007/s42438-022-00355-8">Communicative Musicality, Learning and Energy: A Holographic Analysis of Sound Online and in the Classroom | SpringerLink</a>). Now I think the synchronic aspects are much more dynamic than I realised. The ancient and medieval theorists who spoke of the divisions of the string and the harmonics ignored the role that perception plays in appreciating the beauty of "real" music, as opposed to mere mathematical relations. But now I see (and hear) that what happens to perception in the experience of the structure of sound is just as dynamic as what happens over time as sound develops. </div><div><br /></div><div>There is also something to say here about evolution, and the evolution of music. Michael Spitzer, with whom I've had the privilege of some detailed conversations recently alongside the biologist John Torday, has suggested that music is fundamentally connected to the ocean. He asked me a few weeks ago, after I'd given a talk on "music and epigenetics", about how the primeval ocean connects to Beethoven. It's a great question. 
Now, I think I would say that the ocean is a noisy environment (Michael says it is the most sonically rich environment on earth). The developmental process of life concerns the continual generation of order (negentropy). What do we need for this order-producing process? Information - in the form of selection - is one thing. Constraint is the flip-side of information, and this is also required (technically, this is known as redundancy). But noise is critical. It's only with noise that the latent structures of organisms - from cells upwards - can be "shaken" into finding new ordered configurations. It's the same process - from plankton to Puccini! </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-122527112370837332022-12-26T23:58:00.001+00:002022-12-27T00:01:34.671+00:00AI and Heterophony<p>Heterophony is a musical technique in which a single melodic line is followed by many voices, each with a slightly different variation. It is, like repetition and harmony, one of the fundamental forms of redundancy in music. It is also perhaps the most interesting because it reflects the ways in which a single structure unfolding over time can be represented in multiple ways. These multiple ways come together because the heterophony arises from the fact that we are all fundamentally the same, with a bit of variation. </p><p>There is a certain sense in which AI is heterophonic. It obviously relies on redundancy in order to make its judgements, and with things like ChatGPT, the redundancy is increasingly obvious not just in the AI itself, but in human-machine relations. All AI relies on the differences between heterophonic voices in order to learn. We seem to be similar in our own learning. </p><p>From a musical point of view, heterophony is most closely associated with non-Western music. Among the western composers who developed it in their music, the most striking example is Britten. 
While some of Britten's heterophony is a kind of cultural appropriation, I've been wondering recently whether he discovered something in heterophony which was always in his music. The predominance of 7ths and contrary motion in his very early "Holiday Diary" suggests to me a kind of heterophony which, by the time of his last (3rd) quartet (<a href="https://youtu.be/AElJ08gIOOM">https://youtu.be/AElJ08gIOOM</a> - particularly the first and last movements) becomes distilled into a very simple and ethereal world of crystalline textures. The fact that he went via his discovery of Balinese music was not an indication of appropriation, but self-discovery. Unlike Tippett, he didn't say much about his thought processes, but like all great artists, he might have been picking something up from the future - or rather, something that connects the future with the past. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOS7cEFjUB5q7-n2P9k53946hRRco_fyEKrZvb_539_8xs5V3LjH9s1PWBynxdz1AgL52BdlXInYJbLxnrD7v7EUMQ4sQvRK-GLmfdxDshztU7GSMjlZadyHsW4Nhlo0vrtttJcOYYLMJdGs4KJWLjkkx92gHk7_1tEVKwyHhoYAWummG4i9seD0TuXQ/s4000/IMG_20221226_232016027.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="4000" data-original-width="3000" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOS7cEFjUB5q7-n2P9k53946hRRco_fyEKrZvb_539_8xs5V3LjH9s1PWBynxdz1AgL52BdlXInYJbLxnrD7v7EUMQ4sQvRK-GLmfdxDshztU7GSMjlZadyHsW4Nhlo0vrtttJcOYYLMJdGs4KJWLjkkx92gHk7_1tEVKwyHhoYAWummG4i9seD0TuXQ/s320/IMG_20221226_232016027.jpg" width="240" /></a></div><br /><p>There is something of this heterophonic aspect to early music (lots of it in the Fitzwilliam Virginal Book, for example). 
While parts move not so much in unison as in 3rds and 6ths, the rhythmic interplay of one part moving slowly and other parts moving much more quickly is very similar to the rhythms that unfold naturally through the interactions of heterophony. I'd always taken this rhythmic polyphony as a sign of unity in diversity, but the connection to heterophony gives it more depth for me - particularly now. </p><p>So what about heterophony today? We have got used to a particular kind of redundancy in music produced through harmony and tonality. It is partly the product of the enlightenment, and it places the order of humanity above the order of nature. AI is generating a human-like order of utterances by decomposing a kind of natural order, and its decomposition process is both fundamentally heterophonic and fractal. AI works like a singer in a heterophonic choir, listening to where the tune is going, calculating which way it will go next, and checking to see if it was right or not. In this process, there is difference, form, fluctuation of constraint, expectation, and relation. </p><p>We have an urgent need to understand this process, and heterophonic music provides us with one way of doing it. Also, perhaps curiously, it takes us away from the enlightenment mindset which on the one hand has given us so much, but which has also done so much damage to our environment. It is not Victorian orientalism to connect with fundamental processes that steer our collective will and judgement-making. But there may have been more to the pull of orientalism than mere fashion. I suspect Britten saw this. </p><p>Maybe Britten wasn't tuning in to the way AI works (how could he?), but rather he was tuning in to something that is intrinsic to our biology. Is our physiology heterophonic? Is quantum mechanics? The fact that our AI is heterophonic is perhaps also a reflection that there is something in us which has always been this way. This, to me, is another reason for us to listen more carefully. 
Not that we should listen to the same thing, but look out for the stream and try to follow it. </p><p>While there are tremendous technical advances being made at breakneck speed at the moment, understanding where we are culturally and spiritually is vital. We have existed for many decades in a fog where our inability to reconcile our physiology with our technology has led to a tragic disequilibrium. We have almost ceased to believe that a new equilibrium is possible. But it might be. </p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5139380866860511018.post-81773439974996927252022-11-06T01:44:00.005+00:002022-11-06T10:16:14.527+00:00Viability and the AI business - Some thoughts on Musk, OpenAI and Twitter<p>Just for the sake of an intellectual exercise, imagine that through some unusual stroke of luck (or misfortune) someone finds themselves at the head of a venture which spins out of an AI-related academic project. As if one of those (usually hopeless) EU education projects actually produced something that somebody else not only wanted but was willing to pay a lot of money for. A number of things follow on from this. </p><p>Firstly, the university which (probably) made life very difficult for the people who came up with and developed the idea, and probably sneered at any claim that "this is important work" or at appeals to protect key people, late in the day turns round and says "this will make us millions! It's our intellectual property". While market conditions change quickly, the university drags its feet in negotiating a handover of IP and the writing of patents. Over a year goes by, everyone tears their hair out, but eventually things are signed. Universities have become very weird organisations that ape commercial practices without really understanding why they do it, or thinking about whether it is sensible. </p><p>Secondly, a spin-out company with freedom to operate is one thing, but this needs funding. 
The mode of thinking for academic spin-outs is similar to the mode of thinking of academic projects - how to get funding? It should be said that VC funding cannot be gained unless you have experienced people who know how to deal with VC firms. But say, for the sake of argument (through another stroke of luck) that this is in place. The danger of this mode of thinking is that getting funding becomes the prime objective. There may be a point, however, where it is so obvious that a spin-out product is so desirable to potential customers, that the getting of funding is not a question. That raises the third question:</p><p>What kind of a business are we?</p><p>So you might have funding which might keep your operation going for a year or so before you need to be raising revenue through sales. What are the conditions for your viability? This is where an AI business is weird and interesting, and this sheds light on Elon Musk, Twitter and OpenAI.</p><p>Successful and viable businesses typically have a set of operations which produce things - products, services, etc - for a customer base which pays for those products and services. Among the different regulating mechanisms within any such business will be some kind of operational management which ensures effective coordination of the production operations, marketing and so on. Since all businesses operate within changing market conditions, all viable businesses will develop an R&D arm which is scanning the horizon for new opportunities and advising on strategy. Some business will hire software developers to develop new solutions to internal operational challenges. R&D looks to the future and potential scenarios, operations are focused on the present - there is often tension between them, and good businesses balance one against the other. Interesting to note that Elon Musk's current restructuring of Twitter is basically trying to rebalance the relationship between R&D and operations within that company (which is losing money). 
</p><p>An AI is a specific kind of technology. In the above scenario, it fits within a company's R&D structure. In itself, it is not about operations. OpenAI - which Musk co-founded, though he left its board in 2018 - is a good example. It makes itself available as an API which can be plugged in to the R&D operations of other businesses, which will use it to automate writing tasks that would once have been a function within the operations of a company. Through adopting OpenAI services, those operations are restructured and people moved (or removed). </p><p>Now look at OpenAI itself as a business. As a business, it appears to have few customer-facing operations apart from sales and marketing. It develops and provides access to machine learning models which sit on the internet (although from a technological perspective, these models are just files which could sit anywhere - even on individual devices). Its customer base is a community of users who integrate its services into high-end, heavy-usage corporate operations for which they pay subscriptions. OpenAI must maintain the scarcity of what it does (in the face of continual innovation in AI), and ensure that customers keep buying its services. That means that OpenAI's own R&D must outpace the R&D of its customers - or rather, OpenAI's customers see that a good chunk of their own R&D is best outsourced to OpenAI. </p><p>I think this is a problematic business model because effective R&D relies on having a good model of the organisation of which it is part. R&D without a concrete set of business operations attached is potentially rootless - it's not part of a viable operation, and could therefore lack coherent direction. This may be the most important reason why Musk was so keen to buy Twitter: it gives him an operational infrastructure, which he (no doubt) believes an R&D-driven company like OpenAI could restructure and make profitable. 
</p><p>With a set of operations to manage, an AI business can grow its services and see the effect of its developments on the viability of the whole organisation. Some things will work, other things won't. Sometimes operational requirements will override whatever new innovation is suggested by R&D. Other times, the R&D is critical to maintain organisational effectiveness. Moreover, an AI business in this situation could extend its reach beyond a "host" organisation, offering services to other organisations. The only problem is that in doing so, other organisations might become competitors to the original host organisation. This requires new thinking about corporate cooperation and market competition. </p><p>This is the most fascinating question about all AI businesses. They are surrogate R&D operations without operational attachments. If an AI business were a human system, it would be like the pathology in which a university's management believes it is the university (I have seen this many times!), and that the current operations (academics, administrators) could be replaced by another set of operations. Equally mad is the belief that management is generic and transplantable, as in the idea of "institutional isomorphism". Management without operations isn't viable. </p><p>But its technological form is different - AI exists as a concrete coherent thing that provides services to R&D which can be genuinely useful. These services require R&D themselves - which is the regulatory domain of the AI company itself, but the whole thing demands some kind of operational "host". An AI company is a kind of "virus", and its best chance of preserving its viability is reproduction in other hosts. Reproduction of the AI is in the interests of the original host because it grows the AI business, but it must do so in such a way that other hosts do not become competitors to each other. 
</p><p>The dynamics of this are different to the traditional ways we think about organisational viability and competition. Traditional businesses compete for resources (sales, income) by acquiring market share in the products they produce. They may seek to establish monopolies by acquisition of competitors to remove threats and increase profits through creating scarcity in the market (which then requires regulation by government). But AI is presenting a dynamic of what might be called "organisational environmental endogenisation". That is to say, something in the environment which threatens the viability of organisations - AI - is endogenised (assimilated) within an organisational structure in order to transform that organisational structure so it is better able to maintain its viability and profitability. As part of maintaining its viability, growing the endogenised element and then getting it to "infect" other entities becomes a critical part of the viable operation. This is not to neutralise competition, but rather to increase the strength of the ecology within which organisations sit and within which they can continue to grow and develop better R&D operations. </p><p>There is something a bit odious about Musk. But equally, there is something important happening around technology at the moment which presents organisational questions which are unavoidable for anyone looking at the future of business, organisational viability and society. It's urgent that we think this through. I'm incredibly fortunate to be in a position where I'm grappling with this at first hand. 
</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-65294478448335448492022-10-25T22:44:00.002+01:002022-10-26T07:05:06.860+01:00Postdigital values, Marion Milner and John Seddon<p>I'm giving a talk on Thursday at the Carnet Users Conference (<a href="https://cuc.carnet.hr/2022/en/programme/">https://cuc.carnet.hr/2022/en/programme/</a>) as part of the extensive strand on "postdigital education". My talk has gone under the rather pompous title of "Practical Postdigital Axiology" - which is the title of a book chapter I am writing for the Postdigital group - but really this title is about something very simple. It's about "values" (axiology is the study of value), and values are things which result from processes in which each of us is an active participant. Importantly, technology provides new ways of influencing the processes involved in making and maintaining values. </p><p>It's become fashionable in recent years to worry about the ethics of technology, and to write voluminous papers about what technology ought to be or how we should not use it. In most cases in this kind of discourse, there is an emotional component which is uninspected. It is what MacIntyre calls "emotivism" in ethical inquiry (in After Virtue), and it is part of what he blames for the decline in the intellectual rigour of ethical thought in modern times. </p><p>I wonder if the emotivism that MacIntyre complains of relates more to mechanisms of value which precede ethics. Certainly, emotivist ethical thought is confused with value-based processes. The emotion comes through in expressing something as "unethical" when in fact what has happened is that there is a misalignment of values usually between those who make decisions, and those who are subject to those decisions. More deeply, this occurs because those in power believe they have the right to impose new conditions or technologies on others. 
This would not happen if we understood that the form of organisation which benefits everyone - truly effective organisation - is the one in which values are aligned. This suggests to me that the serious study of value - axiology - is what we should be focusing on. </p><p>I think this approach to value is a core principle behind the idea of the "postdigital". This label has resulted from a mix of critique of technology alongside a deeper awareness that we are all now swimming in this stuff. A scientific appreciation of what we are swimming in is needed, and for me, the postdigital science has a key objective in understanding the mechanisms which underpin our social relations in an environment of technology. It is about understanding the "betweenness" of relations, and I think our values are among the key things that sit between us. </p><p>This orientation towards the betweenness of value is not new - indeed it predates the digital. In my talk, I am going to begin with Marion Milner, who in the early 1930s studied the education system from a psychoanalytic perspective. In her "The Human Problem in Schools", she sought to uncover the deeper psychodynamics that bound teachers, students and parents together in education. It is brilliant (and very practical) work which in education research has gone largely ignored. In her book, Milner made a striking statement:</p><p></p><blockquote>"much of the time now spent in exhortation is fruitless; and that the same amount of time given to the attempt to understand what is happening would, very often, make it possible for difficult [students] to become co-operative rather than passively or actively resistant. It seems also to be true that very often it is not necessary to do anything; the implicit change in relationship that results when the adult is sympathetically aware of the child's difficulties is in itself sufficient."</blockquote><p>This is a practical axiological strategy. 
If, in our educational research with technology, we sought to manage the "implicit change in relationship that results when the "teacher" or "manager" is sympathetically aware of the "other's" difficulties" then we would achieve far more. Partly this is because we would be aware of the uncertainties and contingencies in our own judgements and the judgements of others, and we would act (or not act) accordingly. What are presented as "ethical" problems are almost always the result of unacknowledged uncertainties. Even with things like machine learning and "bias", the problem lies in the overlooking or ignoring of uncertainty in classification, not in any substantive problem of the technology. </p><p>In my new job in the occupational health department at Manchester University (which is turning into something really interesting), there is a similar issue of value-related intervention. One of the emerging challenges in occupational health is the rising levels of stress and burnout - particularly in service industries. A few years ago I invited John Seddon to talk at a conference I organised on "Healthy Organisations". It was a weird, playful but emotional conference (two people cried because it was the first time they had a chance to express how exhausted they were), but Seddon's message struck home. It was that stress is produced by what he calls "Failure demand" - i.e. the system being misaligned and making more work for itself. The actual demand that the system is meant to manage is, according to Seddon, often stable. 
</p><p><br /></p><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-64385805275232071982022-10-14T23:24:00.001+01:002022-10-14T23:27:01.885+01:00The Structure of Entropy<p>One of the things I've been doing recently in my academic work is examining the ebb and flow of experience as shifts in entropy in different dimensions. It began with a paper with Loet Leydesdorff for Systems Research and Behavioral Science on music: <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738?af=R">https://onlinelibrary.wiley.com/doi/full/10.1002/sres.2738?af=R</a>, and a paper on the entropy of student reflection and personal learning <a href="https://www.tandfonline.com/doi/abs/10.1080/10494820.2020.1799030">https://www.tandfonline.com/doi/abs/10.1080/10494820.2020.1799030</a>, and has continued in a recent paper on the sonic environment for Postdigital Science and Education. </p><p>I have been fascinated by the visualisations and entropy graphs of different phenomena, partly because they provide a way of comparing the shifts of entropy of different heterogeneous variables all on the same scale: so, one can consider sound as frequency together with the entropy of words, together with the entropy of things happening in video. The principal feature of this is that the flow of experience is a counterpoint of different variables, and the fundamental theoretical question I have asked concerns the underlying mechanism which coordinates the dance between entropies.</p><p>Another way of talking about this dance is to say that entropy has a "structure". Loet Leydesdorff commented on this in conversation at the weekend after I shared some recent analysis of music with him (see below). Interestingly, to talk of the structure of entropy is to invite a recursion: there must be an entropy of structured entropy. Indeed, Shannon's equation is surprisingly flexible in being able to shed light on a vast range of problems. 
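That flexibility is easy to show in miniature: once any variable is discretized into symbols, Shannon's H = -Σ p·log2(p) returns a value in bits, which is what lets heterogeneous variables - pitch, words, loudness - sit on the same scale. A minimal sketch with invented toy data (the variables and the binning here are illustrative assumptions, not the method of any of the papers above):

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """H = -sum(p * log2(p)) over observed relative frequencies, in bits."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

# three heterogeneous "channels" of an experience, discretized (toy data)
pitches  = [440, 440, 494, 523, 440, 494]            # Hz values as categories
words    = "the cat sat on the mat the end".split()  # words as symbols
loudness = [round(x) for x in (0.2, 0.8, 0.7, 0.2, 0.1, 0.9)]  # binned to 0/1

for name, xs in [("pitch", pitches), ("words", words), ("loudness", loudness)]:
    print(name, round(shannon_entropy(xs), 3))       # all in bits, same scale
```

Because everything lands in bits, the ebb and flow of these otherwise incommensurable variables can be plotted on one graph and their counterpoint compared directly.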
</p><p>To understand why this might be important, we have to think about what happens in the flow of experience. I think one of the most important things that happens (again, I have got this from Loet) is that we anticipate things: we build models of the world so that we have some idea of what is going to happen next. These anticipatory models work with multiple descriptions of the world - there is "mutual redundancy" between the different variables which represent our experience, and I think Loet is right that this mutual redundancy produces an interference pattern which is a kind of fractal. It makes sense to think that anything anticipatory is fractal because in order to anticipate, we must be able to identify a pattern from past experience and map it on to possible future experience. Also, there is further evidence for this because it is basically how machine learning techniques like convolutional neural networks work.</p><p>Fractals are self-segmenting: the distinction between patterns at different orders of scale emerges from the self-referential dynamics which produce them. At certain regular points, the interference between different variables produces "nothing" - some gap in pattern which demarcates it. In the paper on music, I suggested that this production of nothing was related to the production of silence, and how music seems to play with redundancies (which is another way of producing nothing) as a way of eventually constructing an anticipation that a piece is going to end. 
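For two variables, one simple operationalization of this mutual redundancy is mutual information, I(X;Y) = H(X) + H(Y) - H(X,Y): the amount by which two descriptions of the same flow overlap. A toy sketch (the two "voices" are invented data, and reducing mutual redundancy to two-variable mutual information is a simplification of Leydesdorff's formulation, which extends to three or more dimensions):

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def mutual_information(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from co-occurring observations."""
    xs, ys = zip(*pairs)
    return (entropy(Counter(xs).values())
            + entropy(Counter(ys).values())
            - entropy(Counter(pairs).values()))

# two "voices" describing the same flow, partly tracking each other (toy data)
melody = ["C", "E", "G", "E", "C", "G", "E", "C"]
bass   = ["C", "C", "G", "C", "C", "G", "C", "C"]

print(round(mutual_information(list(zip(melody, bass))), 3))
```

In this toy example the bass is fully determined by the melody, so the mutual information equals the bass's own entropy (about 0.811 bits): one description is completely redundant given the other, which is the extreme case of the kind of overlap between descriptions discussed above.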
</p><p>I made this video last week about a Haydn piano sonata as a way of explaining my thinking to Loet:</p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/VN7Duk91PJI" width="320" youtube-src-id="VN7Duk91PJI"></iframe></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The entropy graph I displayed here uses a Fast Fourier Transform to analyse the frequency of the sound, identifying the dominant pitch, the richness of the texture and the volume, and calculates the entropy of those variables. This graph illustrates the "structure of entropy" - and of course, eventually everything stops.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I think learning and curiosity are like this too. Learning too is full of redundancy, and its entropy has a similar kind of dance to music. Indeed, sound is one of the key variables in learning (this is what my recent PDSE paper is about). But it's not just sound. Light is also critical - it's so interesting that our computer screens basically produce patterns of light, and yet there is so little research on light's impact on learning. And indeed, the entropy of light and the entropy of sound can be related in exactly the same way that I explore the entropy of frequency in this video.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">As to what structures the dance of entropy, I think we have to look to our physiology. It is as if there is a deeper dance going on between our physiology and our interactions with our environment. What drives that? 
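For the technically curious, the pipeline behind an entropy graph of this kind might be reconstructed roughly as follows. This is a hedged sketch, not the actual analysis code behind the video: the frame sizes, bin counts, and the synthetic 440 Hz test tone are all arbitrary choices, and "richness" is approximated here by spectral entropy.

```python
import numpy as np

def frame_features(signal, rate, frame=2048, hop=1024):
    """Per-frame dominant pitch, spectral richness, and volume via FFT."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        window = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        freqs = np.fft.rfftfreq(frame, 1.0 / rate)
        pitch = freqs[np.argmax(spectrum)]           # dominant pitch (Hz)
        p = spectrum / spectrum.sum()                # normalised spectrum
        richness = -(p * np.log2(p + 1e-12)).sum()   # spectral entropy ~ texture
        volume = np.sqrt((window ** 2).mean())       # RMS of the windowed frame
        feats.append((pitch, richness, volume))
    return feats

def entropy_over_time(values, bins=10, window=20):
    """Shannon entropy of binned feature values in a sliding window."""
    binned = np.digitize(values, np.linspace(min(values), max(values), bins))
    out = []
    for i in range(len(binned) - window + 1):
        _, counts = np.unique(binned[i:i + window], return_counts=True)
        p = counts / counts.sum()
        out.append(float(-(p * np.log2(p)).sum()))
    return out

# Synthetic example: one second of a 440 Hz tone
rate = 22050
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
feats = frame_features(tone, rate)
```

Running `entropy_over_time` over each extracted variable gives one entropy curve per variable on the same scale, which is what makes the counterpoint between them visible.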
It's probably deep in our cells - in our evolutionary history - but something drives us to shape entropies in the way we do. </div><div class="separator" style="clear: both; text-align: left;"><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-11789538352714706662022-10-02T20:11:00.000+01:002022-10-02T20:11:54.049+01:00Sleeping and Learning<p>If learning is about making new distinctions, there is a question about how we know a distinction. Since all distinctions have two sides (an inside and an outside) our knowledge of a new distinction must be able to apprehend both sides of it. So we must be able to cross the thresholds of our distinctions. At the same time, if we are not inside our distinctions - if we are not able to use them as a lens to view the world - they are useless in a practical way. Yet the distinctions which make up our lens are dependent on our being able to cross their threshold and see no distinctions. Is this sleep and dreaming?</p><p>We don't understand why we sleep. Except that we know that if we don't sleep, we die. That suggests that it is not just our conscious distinctions that require stepping outside of themselves, but the physiological distinctions between cells, organs, etc. If they break down, we're dead. </p><p>At the same time, we know - at least anecdotally - that we learn in our sleep. We wake up in the morning having not been able to do something the day before, and find ourselves improved in our performance. Possibly because we've got "more energy" - but what's that? Thinking about distinctions necessitating boundary crossing helps here.</p><p>The Freudian "primary process" is the dream world of no distinctions. The world of the new baby. The "secondary process" is the regulating filter which channels the energy from the primary process into useful distinctions which (for adults at least) are conditioned by the social conventions of the "superego". 
(Talcott Parsons correctly recognised that Freud's superego was sociological). More to the point, this psychodynamic process between ego, id and superego was continual: a kind of pulse between the "oceanic" primary process and the secondary process. </p><p>In education, the superego rules, and technology has ensured that its grip on the imagination of staff and students has become ever more brutal. But technology outside education stimulates and suppresses the id: from cat videos to shopping to porn, we can inhabit a simulated oceanic state. Only in sleep itself is there some contact with the reality of the id. </p><p>What have we missed in the way that we think about learning? When we examine our metrics for competency, our "constructive alignments", assessment schemes, etc., we seem to have assumed that the distinctions of learning are fixed: once we learn something it stays there. In conscious experience this looks like a sensible proposition. But to assume this misses the possibility that our distinctions appear persistent precisely because they result from a dynamic process of distinction and undistinction. </p><p>To be clearer about this, the deepest encounter with the oceanic experience comes through an intersubjective acknowledgement of uncertainty. That can be the best teaching - not the delivery of content, or the forcing of distinctions written in textbooks, but a teacher revealing their understanding to the point of revealing their uncertainty. "I'm not sure what this means - what do you think?"</p><p>I've written about this kind of thing here: <a href="https://link.springer.com/content/pdf/10.1007/s42438-022-00324-1.pdf">Digitalization and Uncertainty in the University: Coherence and Collegiality Through a Metacurriculum (springer.com)</a>, and last week I got a further reminder of the importance of this approach in an EU project which Danielle Hagood and I led around digitalization. 
In both cases, technology was the stimulus for uncertainty and dialogue. It is the technology which takes us to the oceanic state, from where (and this was quite obvious in my EU project) new distinctions and new thinking emerge. </p><p>The dialogical is the closest thing we have to the primary process in education - it is rather like music because it connects us to more fundamental mechanisms. John Torday suggested in conversation last week that in sleep our cells realign themselves with their evolutionary origins, effectively connecting our waking thoughts (what Bohm calls the "explicate order") with fundamental nature ("implicate order"). That's a wild idea - but I quite like it!</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-6671815745743673152022-09-21T23:24:00.002+01:002022-09-22T06:52:27.104+01:00About Learning and de-growth<p>Seymour Papert argued that we do not have a word for the art of learning in the same way that we have words for the art of teaching (pedagogy, didactics) (see his "A word for learning": <a href="http://ccl.northwestern.edu/constructionism/2012LS452/assignments/2/wordforlearning.9-24.pdf">http://ccl.northwestern.edu/constructionism/2012LS452/assignments/2/wordforlearning.9-24.pdf</a>). Papert suggested the word "mathetics", drawing attention to the fact that "mathematics" appropriated the word for learning to refer to its specialised practices, when the word "Mathematikos" simply meant "disposed to learn". There may be deeper things to explore in this etymological relationship. </p><p>We tend to think of learning as a kind of growth. As we learn, we know "more stuff", we gain "more knowledge", and we might even imagine that we get bigger heads! Babies start small and get bigger (up to a point), and as they get bigger they learn. 
Learning produces material artefacts which certainly do increase in size - before the internet, knowing more stuff meant more books, and (perhaps) a bigger library (to display as our Zoom background!). The bigger the library, the cleverer the people.</p><p>I was listening to Neil Selwyn talking about "de-growth" as a possible response to climate change and thinking about how education might support this (here: <a href="https://media.ed.ac.uk/playlist/dedicated/79280571/1_6u9a41zh/1_l7anxlgx">https://media.ed.ac.uk/playlist/dedicated/79280571/1_6u9a41zh/1_l7anxlgx</a>). Crudely, we imagine that our ecological crisis is caused because things have grown too big, and that to address it, we need to "degrow". But what do we mean by "big" or even "growth"? My favourite source for thinking about this is Illich's "Tools for Conviviality". He talks about the outsized growth of technology and institutions beginning as beneficent and becoming malevolent. The causes of the transition from beneficence to malevolence are mysterious - they may lie in our physiology and evolutionary biology (that's another post). But the actual manifestation of pathology is not size - it is a reduction in variety. Illich's clearest example is 100 shovels and 100 people digging a hole, which is eventually replaced by one person and a JCB. Which has the greater variety? The loss of variety as the technology becomes more powerful results in an increase in the creation of scarcity - and the "regimes of scarcity" are the ultimate propellant for positive feedback loops and accelerating crisis. </p><p>The ecological crisis is a crisis resulting from the loss of variety caused by modern living, and within modern living, we must include education. No human institution excels in the art of producing scarcity more than education. The rocket fuel for the rest of the ecological crisis lies at the classroom door. But we can't seem to help ourselves. 
We see education as the solution to our troubles, not the cause! Education will teach us to "de-grow"... Quick! Roll up for "degrowth 101"! Why do we do this? It is because we mistake education for learning. </p><p>We tend not to see learning but instead see "education", in the same way that we tend not to see health but instead see "health systems". "Education" (and "health systems") get bigger and more powerful - rather like the library which forms part of educational institutions. As they get bigger and more powerful they lose variety (look at the NHS today). But "learning" (and "health") do not grow or get bigger. Both of these terms refer to processes which relate an organism (a person, a community, an institution) to its environment. These terms relate to the capacity of any organism to maintain its viability within its environment - indeed "health" and "learning" are deeply connected concepts. Learning is not about growth, but about homeostasis. </p><p>Having said this, it's obvious that as we get older, we learn more stuff, we can do more things, we talk to more people, and so on. But we are really in a continual process of communion with a changing environment. Babies may seem to learn to scream to get attention, but their physiological context is changing alongside an epigenetic environment within which what it is to remain viable is a continually moving target. The education system appears to be a way of forcing certain kinds of environmental change, and as a result insisting on certain physiological responses (which appear to reproduce regimes of scarcity and social inequality). Indeed, what we call "growth" is an outward manifestation of an unfolding of physiological potential in a changing environment. If growth were as fundamental as the "de-growth" people say, why does anything stop growing?</p><p>So if learning is not about growth, but about the viability of an organism in an environment, how can we visualise it differently? 
One way is to think about it mathematically - and so to draw back to the origin of the word for mathematics, mathematikos and "mathetic". If learning is a process of variety management, and a developing environment has differing levels of variety (and indeed, increasing entropy), then learning is really a process of finding a kind of resonance with that environment. These orders of variety and variety management might be rather like orders of prime numbers, or different levels of scale in a fractal, or different orders of infinity. Mathematically, we might be able to see learning in geometrical forms produced through cymatic patterns:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhAWaTsVQPkBuX1opCoAKmHmj7623Otp2MJfuVIjCgEkL8KtAHHAut2DPncyY_-jD0Px-L5uno12J6mCEp3qacxmzwTm7bAzUsVGf1O1UsqbtsXcwBPUNPeQCWpiMgAn_ewAi-siRKowXvtWt4JIxKyB5wiVym3ZFHoakJ0fPXM7TjrxfqI3hihIzJIAg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="302" data-original-width="662" height="146" src="https://blogger.googleusercontent.com/img/a/AVvXsEhAWaTsVQPkBuX1opCoAKmHmj7623Otp2MJfuVIjCgEkL8KtAHHAut2DPncyY_-jD0Px-L5uno12J6mCEp3qacxmzwTm7bAzUsVGf1O1UsqbtsXcwBPUNPeQCWpiMgAn_ewAi-siRKowXvtWt4JIxKyB5wiVym3ZFHoakJ0fPXM7TjrxfqI3hihIzJIAg" width="320" /></a></div>or knot topologies, <p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhIaBn7tJCDXgpWJdbCnbnJZqAOdTsyuNo3MpS0YJEufxuLQX_PqlDbD696V8snZnYJhExineV1Bisn-TRUdmQ8xgs3HcHpdiepbqiL3Jp_1Ch4tqD-WwMvdUUvFwjz7xc3z32umfGt6m-Izp5HbakMbG4NVQWPyOVK-C32hKQK0ztLZeZYHXCuPtR9XQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="894" data-original-width="1200" height="238" 
src="https://blogger.googleusercontent.com/img/a/AVvXsEhIaBn7tJCDXgpWJdbCnbnJZqAOdTsyuNo3MpS0YJEufxuLQX_PqlDbD696V8snZnYJhExineV1Bisn-TRUdmQ8xgs3HcHpdiepbqiL3Jp_1Ch4tqD-WwMvdUUvFwjz7xc3z32umfGt6m-Izp5HbakMbG4NVQWPyOVK-C32hKQK0ztLZeZYHXCuPtR9XQ" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">Or Fourier analysis, or even in Stafford Beer's syntegrity Icosahedron (see Beer's book "Beyond Dispute" <a href="https://edisciplinas.usp.br/pluginfile.php/3355083/mod_resource/content/1/Stafford%20Beer_Beyond%20Dispute.pdf">https://edisciplinas.usp.br/pluginfile.php/3355083/mod_resource/content/1/Stafford%20Beer_Beyond%20Dispute.pdf</a>):</div><div class="separator" style="clear: both; text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiTON1XK_wUZM6ra4sw6yyAjtCUoPCZofgeJ8XpA9EO5ExY3TsPT9lwyS9t7a3F1RzAOVEtwhbCmM9xPLRcayOA6UebXeo3g7SbPT82LInwSt_7lqx5TmW3LUJcPwtKXympvAOXJkdh9jyFcwBPfT0DImFh17g0NJKSBU1xsR4n9HvuzF-f1SWdCdJAwA" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="218" data-original-width="231" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEiTON1XK_wUZM6ra4sw6yyAjtCUoPCZofgeJ8XpA9EO5ExY3TsPT9lwyS9t7a3F1RzAOVEtwhbCmM9xPLRcayOA6UebXeo3g7SbPT82LInwSt_7lqx5TmW3LUJcPwtKXympvAOXJkdh9jyFcwBPfT0DImFh17g0NJKSBU1xsR4n9HvuzF-f1SWdCdJAwA" width="254" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div>These forms are expressions of relations, not quantifications of size. If we see size (and growth) as the problem we don't only miss the point, but we feed the pathology. <br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We urgently need a more scientific approach to learning. We are going to need our technologies to achieve this. 
This is not ed-tech, but the technology necessary to help us understand the nature of relationship. I fear that for those consumed with ed-tech, blaming it for the demise of "education", a different kind of approach to technology and a more scientific approach to learning is not a thinkable thought. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I feel that making this thinkable is now very important. </div><br /><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5139380866860511018.post-12016901257894632392022-09-19T00:03:00.004+01:002022-09-19T08:42:25.723+01:00Rethinking Education and a "Trope Recognition Machine"<p>I went to a conference at the weekend on "Rethinking Education". As is often the case with these things, there were some good people there and some good intentions. But I came away rather depressed. It's often said that there is nothing new in education, and events like this prove it. What it amounted to was a series of tropes uttered by various people, some of whom were aware that they were tropes, and others who genuinely thought they were saying something new. Meanwhile the system trundles on doing its thing - and while everyone there might admit that the thing it does is not very good, there is a surprising lack of clarity about what the system actually does. </p><p>When we ask people to rethink something, it is often framed as an invitation to think about the future - to say, "let's bracket-out the system we have, and conceive of the system we want". But this is naive because the system we want is always framed by the system we are in, and it is always difficult to see the frame we are in, and what it does to our thinking. 
Frame-blindness has specific effects - one of which is the production of tropes.</p><p>At one point I was getting so frustrated by the degree of repetition in the tropes that a wicked thought occurred to me: what if we had a trope recognition machine? What if there was some device that could process all the utterances and classify them according to their trope identity? And of course, current machine learning is very good at this kind of job. But if you had a trope recognition machine, what use might it be? </p><p>If we look at "rethinking education" as a problem situation - not the problem of "rethinking education" but the problem of talking about "rethinking education" - this problem is one of time-consuming redundancy of utterances. Basically many people say the same thing, and feel the need to say the same thing. Indeed, I suspect meetings like this owe their appeal to the opportunity they present to people to say what's in their heads in the confidence that what they say will "resonate" with what else is said. In other words, the redundancy is there in the desire to attend and speak in the first place. Perhaps we need to think about this - about the dynamic of redundancy in communication. </p><p>One of the most interesting things about redundancy is how attractive it seems to be - it is after all about pattern, and patterns are what we look for when we try to make sense of something. So if we want to make sense of education, we need to go somewhere where we can fit into a pattern - a conference. But this is curious because the motivation of most people at conferences is to "get noticed" - to have their version of a trope appear so distinctive that everyone looks to them as some kind of originator of something which has in fact been said before (actually the whole academic discourse is like this, but let's not go there!). So how does that work? 
How do the desire for collective sense-making through pattern and egomania fit together?</p><p>I've been reading Elias Canetti's "Crowds and Power" and I think there is something in there about this tension between the search for redundancy and pattern, and the expression of the ego. Canetti sees the individual as someone who wants to preserve the boundary of their self. They don't even want to be touched by someone else most of the time. And yet, they also want to belong to the crowd. Although Canetti was opposed to Freudian psychodynamics, clearly his analysis of the crowd is treading similar territory to psychodynamics: the crowd is the Freudian superego. </p><p>The search for redundancy in going to conferences and saying similar things to everyone else is crowd-like behaviour. It seems to be driven by egos who want to get noticed - to preserve and reinforce the boundary of their self. </p><p>I think the best way to think about this is to see both the ego and the superego as essentially dealing with contingency. They have to find a way to maintain a balance between their internal contingency and the external contingency. That means that it is necessary to understand and control the external contingency. Creating redundancy through utterances is a way of establishing some degree of control over external contingency: it is a way of establishing a "niche" in which to survive (my favourite example of an organism using redundancy to create a niche is a spider spinning a web). </p><p>What is discovered about the external contingency has an effect on internal contingency. The ego is troubled by the subconscious, which contains the vestiges of experience and desire from infancy - and the legacy of education. The ego is satisfied with the niche it creates in talking about education and feels more secure. (What appears as egomania may simply be a need to establish some kind of inside/outside balance). 
But as a result, conferences like this briefly satisfy the psychodynamic needs of individuals struggling in a terrible system. They are essentially palliative. </p><p>Understanding these dynamics at conferences may be a first step to remedying the problems in the education system itself. A trope-recognition machine could pinpoint the different positions and contingencies which are expressed in a group: it could highlight areas of deep contention and uncertainty and thus focus discussion on those issues, codifying the underlying patterns that everyone is searching for in a way which could save a lot of time and frustration. That might even result in better decision-making.</p>Unknownnoreply@blogger.com0
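As a proof of concept, the clustering at the heart of such a trope-recognition machine can be sketched with nothing more than bag-of-words cosine similarity. This is a toy: a real trope recogniser would want modern text embeddings, and the example utterances and the similarity threshold here are invented for illustration.

```python
import math
import re
from collections import Counter

def vectorise(text):
    """Bag-of-words vector: word -> count."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def group_tropes(utterances, threshold=0.5):
    """Greedily group utterances whose similarity to a group exceeds threshold."""
    groups = []
    for u in utterances:
        v = vectorise(u)
        for g in groups:
            if cosine(v, g["centroid"]) >= threshold:
                g["members"].append(u)
                g["centroid"] += v   # Counter addition merges word counts
                break
        else:
            groups.append({"centroid": Counter(v), "members": [u]})
    return [g["members"] for g in groups]

utterances = [
    "we must put the learner at the centre",
    "put learners at the centre of everything",
    "technology will transform education",
]
groups = group_tropes(utterances)  # first two share a trope; third stands alone
```

Groups with many members are the redundant tropes; singletons are the candidates for genuine contention or novelty, which is where the machine could usefully direct discussion.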