Thursday 11 January 2024

Self-Provisioning of "Tools for Knowing" using AI

In my own teaching practice, I have become increasingly aware that preparation for the sessions I lead involves not the curation or creation of content (for example, PowerPoint slides), but the construction of tools to support activities driven by AI. The value of this is that the technology can now do something that only complex classroom organisation could previously achieve: the support of personalised and meaningful inquiry. I have been able to create a wide variety of activities, ranging from drama-based exercises to simulated personal relationships (usually around health). The potential scope for new kinds of activity appears at this stage enormous: powerful organisational simulations (for example, businesses or even hospitals) populated with language-based AI agents are all possible, allowing students to play roles and observe the organisational dynamics.
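
To give a flavour of what I mean, a "simulated personal relationship" can be as little as a system prompt and a loop. This is only a minimal sketch: it assumes the openai Python package (v1 SDK) with an OPENAI_API_KEY set in the environment, and the persona text and model name are purely illustrative.

```python
# Minimal sketch of a simulated patient for a health-communication exercise.
# Assumes: openai v1 SDK installed, OPENAI_API_KEY in the environment.
# The persona and model name are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a 58-year-old patient recently diagnosed with "
    "type 2 diabetes. You are anxious and sometimes evasive. Answer only "
    "as the patient and never break character."
)

def run_simulation() -> None:
    """Let a student interview the simulated patient at the console."""
    messages = [{"role": "system", "content": PERSONA}]
    print("Talk to the patient (type 'quit' to end).")
    while True:
        student = input("You: ")
        if student.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": student})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print("Patient:", answer)

if __name__ == "__main__":
    run_simulation()
```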

Of course, a lot of this involves coding or other technical acts, which I quite enjoy, even if I'm not that good at it. At some point the need for coding may diminish and we will have platforms for making our own tools for learning (actually, we kind of already have this with OpenAI's GPT Editor). But the real trick will be to allow teachers and students to create their own tools which support different kinds of learning activity, provide different kinds of assessment, and maybe even map personal learning activities to professional standards.

A lot of focus at the moment falls on how teachers might use ChatGPT to produce learning content - basically amplifying existing practices with the new tech (e.g. "write your MCQs with AI!"). But why shouldn't learners do the same thing? Indeed, what may be happening is the establishment of a common set of practices of "learning tool creation", modelled first by teachers and then adopted and developed by learners. Everyone creates their own tools. Everyone moves towards becoming a teacher empowered by the tools they develop.
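
The learner's version of the practice need be no more elaborate. Here is a sketch of a self-made revision tool (the function name, file name and prompt wording are my own illustrative assumptions):

```python
# Sketch of a learner-built quiz tool: turn your own notes into MCQs.
# Assumes the openai v1 SDK; "my_notes.txt" is a hypothetical notes file.
from openai import OpenAI

client = OpenAI()

def quiz_me(notes: str, n_questions: int = 5) -> str:
    """Ask the model to write multiple-choice questions from the notes."""
    prompt = (
        f"Write {n_questions} multiple-choice questions, with answers, "
        f"that test understanding of the following notes:\n\n{notes}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("my_notes.txt") as f:
        print(quiz_me(f.read()))
```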

Why does that matter? Because it addresses the two fundamental variety management problems of education. Firstly, it addresses the problem that teachers and learners are caught between the ever-increasing complexity of the world and the constraints of the institution. My paper "Comparative judgement and the visualisation of construct formation in a personal learning environment" (Interactive Learning Environments, Vol 31, No 2, tandfonline.com) - long-winded title, I know, but the paper interests me more now than when I wrote it - argued that the basic structure of the pathology of education is this (drawing on Stafford Beer's work):

[Diagram: the pathology of education, after Stafford Beer]

The institution wants to control technology, but personal tool creation means that it is individuals who create and control their own tools. This shifts much of the "metasystem" function (the big blue arrow in the diagram) away from institutional management to the individuals in the system. This was always the fundamental argument of the Personal Learning Environment: it's just that we never had tools which could generate sufficient variety to meet the expectations of individuals. Now we do.

The second problem is that of too many students and too few teachers. That is a problem of how the practice of "knowing things" can be modelled in such a way that a wide variety of different people can relate to the "knowledge" presented to them. This problem, however, may be addressed if we see knowledge not as resulting from a "selection mechanism that chooses words", but as a "selection mechanism that chooses practices" - particularly practices with AI tools, which then perform the business of "selecting words". If teachers model a "selection mechanism that chooses practices" which can generate a high variety of word selections, then a wide variety of students with different interests and abilities can develop those same practices, leading to selections of words which are meaningful to them in different ways. In fact, this is basically what is happening with ChatGPT.

Teaching is always modelling. It is the teacher's job to model what it is to know something - to the point of modelling what they know and what they don't know. Really, they are revealing their own selection mechanism for words, but this selection mechanism includes their own practices of inquiry. Good teachers will say things like "I can't remember the details of this, but this is what I do to find out". Students who model themselves on those teachers will acquire a related selection mechanism.

The key is "This is what I do to find out". Many academics are now likely to say "I would explore this in ChatGPT". That is a technical selection, made by a new kind of selection mechanism in teachers, which can be reproduced in students. Teachers might also say "I would get the AI to test me", or "I would get the AI to pretend to be an expert in this area that I can talk to", or "I would get the AI to generate some fake references to see if anything interesting (and true) comes up", or "I would ask it to generate some interesting research questions". The list goes on.
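
Seen technically, each of these "I would..." moves is just a reusable template: a selection mechanism for practices made explicit. A toy sketch (the practice names and prompt wordings here are mine, not any standard):

```python
# A "selection mechanism that chooses practices": each named practice maps
# to a prompt template; the AI then does the business of selecting words.
# All names and wordings are illustrative.
PRACTICES = {
    "test me": "Ask me questions about {topic}, one at a time, and tell "
               "me where my answers go wrong.",
    "expert interview": "Pretend to be a leading expert on {topic}. I "
                        "will interview you.",
    "fake references": "Invent five plausible-sounding references on "
                       "{topic}; I will check which, if any, are real.",
    "research questions": "Suggest five interesting and genuinely open "
                          "research questions about {topic}.",
}

def choose_practice(name: str, topic: str) -> str:
    """Select a practice and bind it to a topic, yielding a prompt."""
    return PRACTICES[name].format(topic=topic)

print(choose_practice("test me", "variety management in education"))
```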

Is "Knowing How" becoming more important than "Knowing That"? To ask that is to ask what we mean by "knowing" in the first place. Increasingly it seems that "knowing how" and "knowing that" are both selections. ChatGPT is an artificial mechanism for selecting words. It raises the question of the ways in which we humans are not also selection mechanisms for words - albeit ones which have a deep connection to the universe that AI lacks.

We are moving away from an understanding of knowledge as the result of selection towards an understanding of knowledge as the construction of a selection mechanism itself. This may be the most important thing about the current phase of AI development.