Friday 18 February 2022

Personal Computational Environments: From Pedagogy to Technics

I remember thinking, when I first encountered R and RStudio about 8 years ago, that this was a new kind of environment for learning. It was reminiscent of the Personal Learning Environment which I had spent so long thinking about (and which, in the end, didn't really amount to anything). RStudio appeared to be a "personal" environment for writing code, installing personal collections of functionality (libraries), engaging with a community of people who were doing similar things, and managing a range of data sources. It was, of course, more technical and abstract. But it was obviously incredibly powerful - and it encapsulated one of the principal insights of the PLE: that a small set of technical tricks or dispositions could achieve a range of different outcomes - loading and running libraries, browsing CRAN, checking the documentation, manipulating dataframes, and so on.

A couple of years later it was the turn of Jupyter notebooks, which now seem to have taken over. I know there are a lot of R enthusiasts out there who wouldn't be seen dead using Python, but Python and Jupyter have become my go-to place for doing almost everything on the computer. It is the same story - a small set of technical libraries and a wide range of potential results. When I was in Liverpool overseeing the roll-out of Canvas, Jupyter was the tool I reached for to access the Canvas API, pull the data, analyse it and produce reports for management. It all worked, and it meant I could do "voodoo" in the eyes of colleagues who were otherwise helpless with anything apart from Excel.
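
To give a flavour of the kind of thing I mean - this is a sketch rather than any of the actual report code, with a placeholder domain and token - a handful of lines against the Canvas REST API was usually the starting point:

# A minimal sketch of a Canvas API call: list the courses the token can see
# and put them in a dataframe. Domain and token are placeholders.
import requests
import pandas as pd

CANVAS_URL = "https://canvas.example.ac.uk"   # your institution's Canvas domain
TOKEN = "YOUR_CANVAS_API_TOKEN"               # generated under account settings

resp = requests.get(
    f"{CANVAS_URL}/api/v1/courses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"per_page": 100},     # only the first page; real reports paginate
)
resp.raise_for_status()

# A dataframe is usually all management needs to see
courses = pd.DataFrame(resp.json())
print(courses.filter(["id", "name", "workflow_state"]))

From there it is a short step to joining a few such endpoints into the kind of report management actually wants.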

It's obvious that these skills are important - and it's equally obvious that, to a large extent, they are unknown to most teachers and not taught to learners. And this becomes a problem. Which is why I took a position in Copenhagen to work on a project to instil "digitalization practices" in topic teaching. I've had an interesting year (and I left Liverpool at a good time, having sorted the Canvas roll-out in just over a year - thanks largely to the data analyses. Once Canvas became established it was obvious that not much more was going to happen: learners and teachers couldn't actually do much more than they could in the old clunky Blackboard).

The Copenhagen project has been challenging - which is what I wanted. As is often the case with these things, the real problems lie in the specification of the project itself. But it is good to have challenges to do difficult things - particularly if they are worthwhile.

I ended up with a small group of chemistry students, plus a couple of students from my Russian university (Far Eastern Federal University), playing around with Python and OpenAI. This was an eye-opener, because OpenAI provided a way into understanding the importance of Python without needing to explain all the computer science rudiments (variables, loops, etc.), but instead treating a program as a kind of short "text" which, when run, did amazing things. I've now done similar things with the Russian students on a larger scale, and with students in Germany. It seems to be the same story. If there is a rule to this, it is that we need to do things which have a relatively low technical barrier to entry (so a short "text") but which generate vastly more variety. If you can do that, the path into programming becomes clearer.
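
The sort of short "text" I mean looks something like this - a sketch, with a placeholder key, and whichever completion model the API happens to offer:

# A dozen lines that do something remarkable: send a prompt to OpenAI's
# completion endpoint and print what comes back. Key and model are placeholders.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

prompt = "Explain, for a first-year chemistry student, why salt dissolves in water:"

response = openai.Completion.create(
    engine="davinci",        # whichever completion model is available
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
)

print(response["choices"][0]["text"])

Change the prompt, run it again, and something different comes back - that loop of tiny input and large output is the whole pedagogical trick.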

Today I did something with the same group that was not so easy. I wanted to store the generated OpenAI data in one of Tim Berners-Lee's "Solid" data stores. That should have been a lot easier than it was. And because it wasn't easy, and was sometimes frustrating, I think I lost the students. There wasn't the "return" in terms of increased generated variety. Even introducing the students to the power of the Linked Data that sits behind Solid, and its relevance in things like DBPedia, didn't really grab them.
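
For the record, the storage step itself is not conceptually hard - roughly an authenticated HTTP PUT of a resource into the pod, something like the sketch below (pod URL and token are placeholders). The friction was almost entirely in the login dance needed to get a usable token in the first place.

# Roughly what we were aiming at: write the generated text into a Solid pod
# as a plain-text resource. URL and token are placeholders; getting a valid
# token (the Solid-OIDC flow) was where the time actually went, and some pod
# servers want DPoP-bound tokens rather than a plain Bearer header.
import requests

POD_RESOURCE = "https://mypod.example.org/public/openai-output.txt"
ACCESS_TOKEN = "TOKEN_FROM_THE_SOLID_LOGIN_FLOW"

generated_text = "...the text that came back from OpenAI..."

resp = requests.put(
    POD_RESOURCE,
    data=generated_text.encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "text/plain",
    },
)
print(resp.status_code)   # 201 if the pod created the resource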

AI has the property of being able to return more variety than it is given. Agent-based modelling and automata may be able to do the same thing, and possibly visualisation tools too. But this is a requirement for the pedagogy. And in fact, my initial interest and enthusiasm for R was also the product of this rule: it was the realisation that a simple command to install libraries meant that the variety of possibilities of the platform was effectively infinite. Of course, I would have to know something about the platform to realise that, but this is the point: we have to start from a tiny amount of interest and quickly produce a lot more variety which stimulates that interest. The AI-driven art tools are similar: Wombo Dream is amazing, and can keep people captivated for ages as they type in different texts and watch the images that are formed.

This is what it was like to be a kid in the early 80s when home computers were appearing. The generated variety of the machine enticed, because it meant that kids could do something more powerful than anything their parents could do. In my experiments with students, the AI seems to be having the same effect on them. After some familiarity, you can then do things with getting stuff on the web (we used Heroku), and that is another moment where the possibilities explode.

So is there a way of structuring a pedagogy of technology that can fit a wide range of interests, disciplines and abilities, where whatever is done generates high variety while making low technical demands? This would be to think of computing by starting with pedagogy, not with "topics", or even with tools like Scratch.

When Papert developed LOGO, he was thinking about the physiology of discovery, and trying to counter the essentially disembodied abstraction of code. Unfortunately LOGO, and Scratch after it, quickly become quite abstract, despite the fancy graphics, and succumb to a "curriculum". But what is really going on is a variety-generation exercise: the turtle which draws elaborate patterns from a few lines of code.
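
The point is easy to see even in plain Python, which ships with a turtle module: a handful of lines and an elaborate spiral appears.

# Python's built-in turtle: a few lines of code, an elaborate pattern.
import turtle

t = turtle.Turtle()
t.speed(0)                 # draw as fast as possible

for i in range(120):
    t.forward(i * 2)       # step a little further each time
    t.left(59)             # just under 60 degrees, so the hexagon drifts into a spiral

turtle.done()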

If we think about the variety first, we would not start with variables, loops and conditions. We would look for steps that expand possibilities from current technologies - steps which empower individuals to do things that others who don't know the technology can't do. So it could be:

OpenAI -> Flask -> Heroku -> GitHub -> ?Docker... I guess there are other possibilities (I've done all of these with the students apart from Docker, but I'm tempted to go there next). But perhaps the specifics don't matter - it's the management of variety.
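
To illustrate the OpenAI -> Flask -> Heroku step (names and key handling here are illustrative, not the students' actual code): wrap the completion call in a tiny web app, add a one-line Procfile ("web: gunicorn app:app"), push to Heroku, and the short "text" becomes a thing on the web that anyone can visit.

# A minimal Flask wrapper around the OpenAI call, deployable to Heroku.
# The API key is read from an environment variable (a Heroku config var).
import os

import openai
from flask import Flask, request

openai.api_key = os.environ.get("OPENAI_API_KEY")

app = Flask(__name__)

@app.route("/ask")
def ask():
    # Take a prompt from the query string and return the model's completion
    prompt = request.args.get("prompt", "Say something interesting about chemistry:")
    response = openai.Completion.create(
        engine="davinci",          # placeholder model name
        prompt=prompt,
        max_tokens=120,
    )
    return response["choices"][0]["text"]

if __name__ == "__main__":
    app.run(debug=True)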

The motivation to learn technical skill comes about through our hunger for greater variety. We need to design our pedagogy from this principle. 

