Wednesday 9 November 2011

Adaptive Comparative Judgement and Conviviality in Assessment

For all the talk about teaching and learning which dominates e-learning discourse, the principal strategic problem that universities face is assessment. It is at the moment of assessment, when an individual teacher passes judgement on the worthiness of a student's work, that the problems begin. And whilst we try to 'moderate' that judgement (with a second opinion), it remains a high-risk, unpredictable situation for students, who can often find it hard to tell whether their work will pass with flying colours or fail miserably, putting their investment of time, money and effort in jeopardy. Added to this, there are increasing organisational pressures which make the problem worse:
1. the pressure of student 'consumer' demands in the light of fees
2. increasing cultural and organisational distribution of teaching, learning and assessment
3. institutional difficulties in separating quality management from delivery (exacerbated by 2)

In other sectors of education (in further education or in schools), teaching and assessment are separated. Assessment is usually conducted by examination boards, which publish clear criteria for the expected standard of work and the grades it will receive. There are still problems here, but it does mean that the person doing the teaching is not the person doing the final assessment, and this can create greater uniformity and predictability for students. Some FE courses work on the basis of publishing clear learning outcomes, leaving the design of assessments (in the form of coursework) to lecturers. That design is then moderated by external examiners. This is more open to problems, although it has operated successfully for many years.

However, technology is now presenting another approach to the separation of assessment and teaching, in the form of Adaptive Comparative Judgement (ACJ). ACJ separates assessment from teaching differently: rather than having a distinct group of people doing assessment (independently of teaching), the workload of assessment is spread across the teachers working for the organisation. The cognitive load of assessment is distributed much more widely, with an overall judgement depending on a large number of small 'comparative judgements'. These amount to comparisons between like pairs of pieces of work, with the judgement task being simply to say which of the pair is better.

This can only really become practically useful with computer support. With computer support, the workflow of judgements to be made, the distribution of work among judges, and the continual adaptation of which judgements are required can all be managed. What emerges is a distribution of 'merit', upon which grading boundaries can be laid.
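The post doesn't specify the statistical machinery, but ACJ systems typically fit the pairwise outcomes to something like a Bradley-Terry (Rasch-style) model, in which each piece of work gets a latent 'merit' parameter and the probability of one piece beating another depends on the difference between their merits. As a rough, illustrative sketch only (the function name, learning rate and toy data below are all invented for illustration, not taken from any ACJ implementation):

```python
import math

def estimate_merits(judgements, n_items, rounds=200, lr=0.1):
    """Estimate a latent 'merit' score for each piece of work from
    pairwise judgements, using simple gradient ascent on a
    Bradley-Terry-style model. `judgements` is a list of
    (winner_index, loser_index) pairs."""
    merit = [0.0] * n_items
    for _ in range(rounds):
        for winner, loser in judgements:
            # Modelled probability that the winner beats the loser,
            # given the current merit estimates (logistic in the difference)
            p = 1.0 / (1.0 + math.exp(merit[loser] - merit[winner]))
            # Nudge both merits towards the observed outcome
            merit[winner] += lr * (1.0 - p)
            merit[loser] -= lr * (1.0 - p)
    return merit

# Toy example: three scripts; 0 beats 1, 1 beats 2, 0 beats 2
judgements = [(0, 1), (1, 2), (0, 2)]
merit = estimate_merits(judgements, n_items=3)
ranking = sorted(range(3), key=lambda i: merit[i], reverse=True)
print(ranking)  # scripts ordered best-first: [0, 1, 2]
```

The resulting merit values form the distribution on which grade boundaries could then be laid; real ACJ systems also use the current estimates adaptively, to choose which pair of scripts each judge should compare next.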

Obviously this would fit some areas of assessed work better than others, but its organisational benefits go deeper than simply greater objectivity in assessment. I think it could be a way of making the whole business of assessment more convivial and community-driven. I could be wrong here (it could be something horrible!!), but spreading the cognitive burden of assessment means that the small, coordinated efforts of many teachers can be seen to contribute to much greater transparency for students: the universal benefit of working together, coordinated through technology, can be greater than the super-human efforts of a few individual judges.
