Sunday 26 June 2022

Learning, Dialogue and AI: Offline Initiatives and Political Freedom

I'm running a small EU project in July called C-Camp. The idea is to instil and explore computational practices among students from four European universities (Prague, Copenhagen, Milan and Heidelberg). I wanted to create something for it which built on my experiences in Russia with the Global Scientific Dialogue course (see "Transforming Education with Science and Creativity" on dailyimprovisation.blogspot.com - about which a paper is shortly to appear in Postdigital Science and Education).

In Russia, the vision was to present students with a technological "cabinet of curiosities" - a way of engaging them by asking "this is interesting - what do you make of it?". It was the uncertainty of the encounter with technological things which was important: that was the driver for the dialogue which dominated the course. C-Camp is very much in the same spirit.

This time, I have been a bit more ambitious in making my cabinet of curiosities. I've made a cross-platform desktop app using ElectronJS which incorporates a tabbed web browser, alongside self-contained tools which make learners' data available to the learners (and only to the learners). The advantage of a desktop tool is twofold: learners can change the tool themselves (my programming and design is merely functional!), and nothing personal goes online apart from the traffic to each website. The data of engagement with the tools - something that is usually hidden from learners - then becomes inspectable by them. There are lots of "cool tools" that we suggest exploring (like the amazing EbSynth).

The pedagogy of the course will then be to explore the data that learners themselves create as they process their own uncertainty. It's messy data - which can be an advantage educationally - but it illustrates a number of important principles about what goes on online: what data the big tech companies are harvesting, and how they are doing it.

More to the point, having a desktop tool lets us say something important: "edTech doesn't have to be like the LMS!". Not everything needs to be online. Not everything needs to be harvested by corporations. And if individuals were more in contact with their own data - particularly their own learning data - there are opportunities for deepening both our learning and our engagement with technology. So supporting students in downloading and analysing their own Facebook data can be part of a journey into demystifying technology and inspiring the imagination to look "beyond the screen".
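As a sketch of what that analysis might look like: the snippet below builds a miniature stand-in for the "your_posts_1.json" file found in a Facebook data download (the field names follow the export format as I understand it at the time of writing - students should inspect their own download and adjust), then counts posts per year.

```python
import json
from collections import Counter
from datetime import datetime, timezone

# A miniature stand-in for Facebook's "your_posts_1.json" export file.
# Field names here are assumptions based on the current export format -
# check your own download and adjust.
sample = json.loads("""[
    {"timestamp": 1262304000, "data": [{"post": "Happy new year!"}]},
    {"timestamp": 1293840000, "data": [{"post": "Another year..."}]},
    {"timestamp": 1293926400, "data": [{"post": "Back to work."}]}
]""")

def posts_per_year(posts):
    """Count how many posts fall in each calendar year."""
    years = (datetime.fromtimestamp(p["timestamp"], tz=timezone.utc).year
             for p in posts)
    return Counter(years)

print(posts_per_year(sample))  # Counter({2011: 2, 2010: 1})
```

Even a toy like this makes the point: the data sits on the learner's own machine, and nothing has to leave it to be analysed.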

One of the things I've done is to integrate two AI services. One of them uses the OpenAI service, which is online. The code for doing this is quite simple, but the important thing is that the processing happens remotely on OpenAI's servers.
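For illustration, a minimal sketch of such a call using only Python's standard library - the endpoint and model name here reflect OpenAI's completions API at the time of writing, and an API key is assumed:

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, model="text-davinci-002", max_tokens=64):
    """Assemble the JSON payload the completions endpoint expects."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def ask_openai(prompt, api_key):
    """Send the prompt to OpenAI's servers and return the first completion.

    Note where the work happens: everything interesting runs remotely."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The local code is trivial; all the intelligence (and all the surveillance potential) lives on the other end of the HTTP request.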

However, the other AI service is local. I've integrated the VGG16 model trained on ImageNet data so that students can upload images and explore image recognition. The model and the code are all on the local machine. The point to make is that there is no reason why OpenAI shouldn't work like this too - other than commercial ones.
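Something like the following sketch (using Keras, which bundles VGG16 with ImageNet weights - not my app's actual code) is all that's needed; the weights are fetched once and everything thereafter runs on the local machine:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import (
    VGG16, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

_model = None

def classify(path, top=3):
    """Return the top ImageNet labels for the image file at `path`."""
    global _model
    if _model is None:
        # Weights are downloaded once (~500 MB) and cached; after that,
        # recognition runs entirely offline.
        _model = VGG16(weights="imagenet")
    img = image.load_img(path, target_size=(224, 224))  # VGG16 input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return decode_predictions(_model.predict(x), top=top)[0]
```

No network traffic, no telemetry: the "AI service" is just a file of weights and a few lines of code.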

What fascinates me about this is that, for all the anxious talk about AI and its supposed "sentience", nobody talks about the technical architecture, which basically up-ends the idea that everything has to be online. Large language models are basically self-contained anticipatory dialogical engines which could function in isolated circumstances.
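To make the architectural point concrete without a gigabyte of weights, here is a toy "anticipatory engine" - a character-level Markov model, nothing like a large language model in scale, but the same in outline: once the table is built, generating text needs nothing beyond the local machine.

```python
import random
from collections import defaultdict

ORDER = 2  # how many characters of context the toy model anticipates from

def train(text):
    """Build a next-character table: the entire 'model' is just this dict."""
    model = defaultdict(list)
    for i in range(len(text) - ORDER):
        model[text[i:i + ORDER]].append(text[i + ORDER])
    return model

def generate(model, seed, length=40):
    """Predict one character at a time from the local table - no network."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-ORDER:])
        if not choices:
            break
        out += random.choice(choices)
    return out

model = train("the quick brown fox jumps over the lazy dog. " * 3)
print(generate(model, "th"))
```

A real language model replaces the lookup table with billions of learned parameters, but the architecture is the same: a self-contained file that anticipates what comes next, with no server in the loop.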

Think about this: in an unfree country like Russia or China, where the authorities seek to monitor and control the conversations individuals have, people could suddenly hold conversations which are not monitored - simply by being in possession of a particular AI file.

I'm doing a demo of OpenAI tomorrow in China. The last time I did it there, it worked. I doubt it will work for much longer. But it's easy to envisage a future where a market for specialised language-model AIs starts to infiltrate the underworld, allowing people to have "prohibited conversations". That could mean both very good things for social organisation and freedom from oppression, and bad things in terms of crime.

That is one of the more fascinating things to discuss in C-Camp. I think I might be more careful with my Chinese audience!




1 comment:

Archeb said...

Hello Mark, I am the "Linux Guy" on BNU's summer course. I regret not being able to get all my classmates to achieve the goals of the course, which was frustrating for everyone, I think. I wonder whether it's possible to start with something simpler to get people interested in technology, compared to constant trial and failure?

Also, I agree with what you said in class about the authorities blocking people from accessing technology: GitHub, npm, Google Colab and other important technical sites are slowed down or blocked. We had to use a VPN to access these services, which makes it harder for everyone to get into it. In the authorities' view, this maintains political stability: there are too many "unstable" factors on those websites, so they make the one-size-fits-all decision to block them all.

The above are some thoughts on the course content; as for this article, I think making AI more accessible to everyone is certainly important. But training AI models requires knowledge, experience and capital (data collections, hardware resources) - at least the funds of a university laboratory - to produce meaningful products. Therefore, rather than just using AI, we may need to explore the aspect of "creation".

All in all, thank you very much for your class and this article; they have inspired me a lot about the possibilities of AI.