Sunday 14 May 2023

Positioning AI

I've been creating a simple app for my Occupational Health students to help them navigate and inquire into their learning content in flexible ways. It's the kind of thing that the chatGPT API makes particularly easy, and it seems worth playing with, since chatGPT won't be the only API that does this kind of thing for long (Vicuna and other open source offerings are probably the future...)
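For the curious, the basic pattern is almost trivially simple - something like the sketch below, which assumes the openai Python library (pre-1.0 interface). The model name, notes file and prompts are placeholders for illustration, not my actual app:

```python
# A minimal sketch of the pattern the chatGPT API makes easy: put the course
# material into the system prompt and let students query it however they like.
# Assumes openai<1.0; the file name, model and prompts are illustrative only.
import openai

openai.api_key = "sk-..."  # your own API key goes here

# Hypothetical file containing the learning content
COURSE_NOTES = open("occupational_health_notes.txt").read()

def ask(question: str) -> str:
    """Answer a student's question using only the supplied course material."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a study assistant. Answer using only "
                        "the following course material:\n" + COURSE_NOTES},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask("Summarise the main hazards covered in week 3."))
```

The flexibility comes from the fact that students can phrase their questions however they like: the same content, approached differently by each learner.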

As with any tool development, the key is whether the people for whom the tool is made find it useful. This is always a tricky moment, because others either do or don't see that the designer's vision of what they do (manifested in the technics of what is made) actually aligns with what they perceive their needs to be. Like a bad doctor, I risk - as so many technical people do - positioning students as recipients of techno-pedagogical "treatment". (Bad teachers do this too.)

We've seen so many iterations of tools where mouse clicks and menus have to be negotiated in ways that seem far removed from real wants and needs. The VLE is the classic example. I wrote a paper about this many years ago with regard to Learning Design technology, which I am reflecting on again in the light of this new technology (see Microsoft Word - 07.doc (researchgate.net)). I used Rom Harré's Positioning Theory as a guide. I still think it is useful, and it makes me wonder how chatGPT might be any different in terms of positioning.

Harré's Positioning Theory presents a way of talking about the constraints within which the Self is constructed in language and practice. There are three fundamental areas of constraint:

  1. The speech acts that can be selected by an individual in their practice
  2. The positions they occupy in their social settings (for example, as a student, a teacher, a worker, etc)
  3. The "storyline" in their head which attempts to rationalise their situation and present themselves as heroic. 

With positioning through the use of tools, learners and teachers are often seen as recipients of the tool designer's judgement about what their needs are. This is always a problem in any kind of implementation - a constant theme in the adoption of technology. Of course, the storyline for the tool designer is always heroic!

But chatGPT doesn't seem to have had any adoption problems. To most people who experience it, this appears to be astonishing technology that can do things we have long wanted easy solutions to: "please give me the answer to my question without all the ads and the need to drill through multiple websites! (and then write me a limerick about it)" But in many cases, our needs and desires have been framed by the tedium of the previous generation of technology. It could have been much better - but it wasn't, for reasons that are commercial rather than technical.

However, could chatGPT have positioning problems? This is an interesting question because chatGPT is a linguistic tool. It, like us, selects utterances. Its grasp of context is crude compared to our awareness of positions, but it does display some contextual (positioning) awareness - not least in its ability to mimic different genres of discourse. Clearly, though, it doesn't have a storyline. Yet because of the naturalness of the interface, and its ability to gain information from us, it is perfectly capable of learning our storylines.

In a world of online AI like chatGPT or Bard, the ability to learn individuals' storylines would be deeply disturbing. However, this is unlikely to be where the technology is heading. AI is a decentralising technology - so we are really talking about a technology which is under the direct control of its users, and which has the capacity to learn about them. That could be a good thing.

I might create a tool for my students to use and say "here is something that I think you might find useful". Ultimately, whether they find it useful or not depends on whether what they perceive as meaningful matches what I perceive as meaningful to them. But what is "meaningful" in the first place?

What students and teachers and technologists are all doing is looking for ways in which they (we) can anticipate our environment. Indeed, this simple fact may be the basic driving force behind the information revolution of the last 40 years. A speech act is a selection of an utterance whose effects are anticipated. If a speech act doesn't produce the expected effects, then we are likely to learn from the unexpected consequences, and choose a different speech act next time. Positioning depends on anticipation, and anticipation depends on having a good model of the world, and particularly, having a storyline which situates the self in that model of the world. 

Anticipations form in social contexts, in the networks of positionings in which we find ourselves in our different social roles. ChatGPT will no doubt find its way into all walks of life and many different positions. Its ability to create difference in so many ways can be a stimulus to revealing ourselves to one another in different social situations. But there are good and bad positionings. The danger is that we allow ourselves to be positioned by the technology as recipients of information, art, AI-generated video, instruction, jokes, etc. The danger is that we lose sight of what drives our curiosity in the first place. That is going to be the key question for education in the future.

This is where the guts of judgement lie. What is in a position is not merely a set of expectations about the world around us; it is deeply rooted in our physiology. If we are not to be passively positioned by powerful technology, then we will need to look inwards at our physiology in our deepest exercises of judgement. This is what we are going to need to teach succeeding generations. Information hypnosis, from which we have been suffering for many years on the web, cannot be the way of the future.
