Sunday, 13 June 2021

AI, Experiment and Pedagogy - Why we need to step back from the critical "Punch and Judy" battles and be scientific

There are some things going on around technology in education which I find quite disturbing. Top of the list is the "Punch and Judy" battle going on between the promoters of AI and the critics of AI. One way or another, it is in the interests of both parties that AI is talked about: the promoters want to encourage big-money investment (usually, but not always, for nefarious purposes); the critics want a platform upon which they can build or enhance their academic reputations - what would they do without AI?

Nowhere, in either case, is there real intellectual curiosity about the technology. Both parties see it as a functionalist thing - either delivering "efficiency" (and, presumably, profits) or delivering educational disaster. The former is possible, but a huge missed opportunity; the latter is unlikely, because the technology is not what the critics imagine it to be. In fact, the technology is very interesting in many ways, and if we were scientists, we would be taking an active interest in it.

As I have said before, "Artificial Intelligence" is a deeply misleading term. Machine learning is an exploitation of self-referential, recursive algorithms which display the property of an "anticipatory system": they predict the likely categories of data they have not seen before, by virtue of being trained to construe the fundamental features of each category. We have not had technology like this before - it is new. It is not, for example, a database, which returns in response to a query only the data that was placed in it (although AI is a kind of evolution from the database).
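To make this concrete, here is a minimal sketch of that "anticipatory" property in Python (assuming scikit-learn is installed; the dataset and the choice of model are purely illustrative, not a claim about how any particular system is built). The model is trained to construe the features of each category, and then assigns likely categories to data it has never seen.

# A minimal sketch of the "anticipatory" property described above.
# Assumes scikit-learn is installed; dataset and model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # training construes the features of each category

# The "anticipation": likely categories are predicted for images not seen in training.
print("predicted categories:", model.predict(X_test[:5]))
print("accuracy on unseen data:", model.score(X_test, y_test))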

"Artificial Anticipatory Systems" are extremely important for reasons we haven't begun to fathom. The deep issue is that, as all biological systems, we are also "anticipatory systems". Moreover, the principles of anticipation in biological systems are remarkably similar to the principles of anticipatory systems in machine learning: both rely of vast amounts of "information redundancy" - that is, different descriptions of the same thing. Redundancy was identified by Gregory Bateson (long ago) as fundamental to meaning-making and cognition. Karl Pribram wrote a brilliant paper about the nature of redundancy and memory (see T-039.pdf (karlpribram.com). Poets (Wallace Stevens in "The necessary angel"), musicians (Leonard B. Meyer), physicists (David Bohm) and many others have said the same thing about multiple descriptions and redundancy. How does it work? We don't know. But instead of using the opportunity to inspect the technology, foolish academics posture either trying to shoot the stuff down, or to wear it as a suit. 

To hell with the lot of them!

The other day I read a recently published remark against empiricism itself, by a well-known and highly intelligent scholar. The argument was basically that "flat earth" campaigners (and other conspiracy theorists) were empiricists because they appealed to simple observations. What was needed in place of this "empiricism", it was argued, was the carefully constructed critical argument of social-science discourse.

I think I partly blame the philosophy of "Critical Realism" for this (and I speak as someone who for a long time had a lot of time for CR). Roy Bhaskar makes a distinction between the "empirical", the "actual" and the "real", arguing that the empirical is the most constrained domain, because it involves the observation of events (typically, but not exclusively, through artificial closed-system experiments designed to produce observable and reproducible successions of events). The actual, by contrast, comprises events that occur but may never be observed. The real goes further still: it is the world as it is beyond human perception - what Bhaskar considers to be the result of "generative mechanisms".

Now, what's wrong with this deflating of empiricism? The real problem arises because Bhaskar bases his arguments on a particular reading of Hume's theory of science, which suggests that science results from experiments producing regular successions of events, with scientists constructing causes to explain them. Hume's position is unfortunately misconstrued by many as a defence of a naive, mind-independent empirical reality (the opposite of what Hume was really saying), but Bhaskar's point is that Hume was wrong to say that causes were constructs. [The tortuous complexity of these arguments exhausts me!] However, behind all this is a deeper problem: experiment is seen as a thoroughly rational and cognitive operation - which it almost certainly is not. Moreover, this cognitive and rational view becomes embedded in the kind of authoritarian "school science" that we all remember.

The flat-earthers are not empirical. They are authoritarian - borrowing the cognitive misinterpretation of science from the schoolroom to make their points.

As physiological entities, scientists are engaged in something much more subtle when doing experiments. Science is really a "dance with nature" - a process of coordinating a set of actions against a set of unknown constraints from nature. Producing regular successions of events is a way of codifying some of the constraints that might be uncovered, but that is, in a way, an epiphenomenon of the empirical enterprise. Codification is important for reasons of social status in science, and perhaps for social coordination (if it is the codification of the genome of a virus, for example). But it is not what drives the empirical effort. That is driven by continually asking new questions and making new interventions to get ever-richer versions of reality. The drive for this curiosity may have to do with evolution, or with energy and negentropy. As David Bohm pointed out, scientific understanding is rather like a continual accretion of multiple descriptions of nature. It, too, is about redundancy.

This is why I find the machine learning thing so important, and why the ridiculous posturing around it drives me crazy. This is a technology which embodies (an inappropriate term, of course) a principle which lies at the heart of our sense-making of the world. Studying it will shed light on some deep mysteries of consciousness and learning, and our relationship with technology. As we move closer to quantum computing, and closer towards being able to study nature "in the raw", some of the insights from the current development phase of machine learning will provide a useful compass for future inquiries. They are, I'm sure, related. 

It may be a greater tragedy that the critics of AI are not scientists than that so many of the promoters of AI in education are criminals.

