Monday, 15 April 2019

Kaggle and the Future University: Learning. Machine. Learning.

One of the most interesting things that @gsiemens pointed out the other day in his rant about MOOCs was that people learning machine learning had taught themselves by downloading datasets from Kaggle, and by using the now-abundant code examples for manipulating and processing these datasets with the python machine learning libraries, which are also all on GitHub, including tensorflow and keras. Kaggle itself is a site for people to engage in machine learning competitions, for which it gathers huge datasets on which people try out their algorithms. There are now datasets for almost everything, and the focus of my own work, diabetic retinopathy, has a huge amount of stuff in it (albeit a lot of it not that great quality). There is an emerging standard toolkit for AI: something like Anaconda with a Jupyter notebook (or maybe PyCharm), and code which imports tensorflow, keras, numpy, pandas, etc. It's become almost like the ubiquity of setting up database connectors to SQL and firing queries (and is really the logical development of that).
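To give a flavour of what that standard toolkit looks like in practice, here is a minimal sketch of the sort of session a self-taught Kaggler might run: load a dataset into pandas, pull out feature and label arrays with numpy, and make a shuffled train/test split. The column names and values are invented for illustration - in a real competition the DataFrame would come from something like `pd.read_csv("train.csv")`.

```python
# A minimal sketch of the ubiquitous Kaggle-style workflow:
# load data, inspect it, prepare train/test arrays.
import numpy as np
import pandas as pd

# Invented toy data standing in for a downloaded competition CSV
df = pd.DataFrame({
    "feature_a": [0.1, 0.4, 0.35, 0.8],
    "feature_b": [1.0, 0.9, 0.2, 0.1],
    "label": [0, 0, 1, 1],
})

# Separate features from the target column
X = df[["feature_a", "feature_b"]].to_numpy()
y = df["label"].to_numpy()

# Shuffled train/test split - usually the first step before
# handing the arrays to keras, scikit-learn, or similar
rng = np.random.default_rng(42)
idx = rng.permutation(len(df))
split = int(0.75 * len(df))
X_train, X_test = X[idx[:split]], X[idx[split:]]
y_train, y_test = y[idx[:split]], y[idx[split:]]
print(X_train.shape, X_test.shape)
```

The point is less the specific calls than how little ceremony is involved: a few standard imports and a handful of idioms get you from raw data to something a model can consume, which is exactly why the pattern spreads so easily through shared notebooks.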

Whatever we might think of machine learning with regard to any possibility of Artificial Intelligence, there's clearly something going on here which is exciting, increasingly ubiquitous, and perceived to be important in society. I deeply dislike some aspects of AI - particularly its hunger for data which has driven a surveillance-based approach to analysis - but at the same time, there is something fascinating and increasingly accessible about this stuff. There is also something very interesting in the way that people are teaching themselves about it. And there is the fact that nobody really knows how it works - which is tantalising.

It's also transdisciplinary. Through Kaggle's datasets, we might become knowledgeable in Blockchain, Los Angeles's car parking, wine, malaria, urban sounds, or diabetic retinopathy. The datasets and the tools for exploring them are foci of attention: codified ways in which diverse phenomena might be perceived and studied through a coherent set of tools. It may matter less that those tools are not completely successful in producing results - but they do something interesting which provides us with alternative descriptions of whatever it is we are interested in.

What's missing from this is the didacticism of the expert. What instead we have are algorithms which for the most part are publicly available, and the datasets themselves, and a question - "this is interesting... what can we make of it?"

We learn a lot from examining the code of other people. It contains not just a set of logic, but expresses a way of thinking and a way of organising. When that way of thinking and way of organising is applied to a dataset, it also expresses a way of ordering phenomena.

Through my diabetic retinopathy project, I have wondered whether human expertise is ordinal. After all, what do we get from a teacher? If we meet someone interesting, it's tempting to present them with various phenomena and ask them "What do you think about this?". And they might say "I like that", or "That's terrible!". If we like them, we will try to tune our own judgements to mirror theirs. The vicarious modelling of learning seems to be something like an ordinal process. And in universities, we depend on expertise being ordinal - how else could assessment processes run if experts did not order their judgements about student work in similar ways?

The problem with experts is that when expertise becomes embodied in an individual it becomes scarce, so universities have to restrict access to it. Moreover, because universities have to ensure they are consistent in their own judgement-making, they do not fully trust individual judgement, but organise massive bureaucracies on top of it: quality processes, exam boards, etc.

Learning machine learning removes the embodiment of the expertise, leaving the order behind. And it seems that a lot can be gained from engaging with the ordinality of judgements on their own. That seems very important for the future of education.

I'm not saying that education isn't fundamentally about conversation and intersubjective engagement. It is - face-to-face talk is the most effective way we can coordinate our uncertainty about the world. But the context within which the talking takes place is changing. Distributing the ordinality of expert judgements creates a context where talk about those judgements can happen between peers in various everyday ways rather than simply focusing on the scarce relation between the expert and the learner. In a way, it's a natural development from the talking-head video (and it's interesting to reflect that we haven't advanced beyond that!).
