Bibliometrics now dominates the academic landscape. Government research funding, measures of academic status, and professional advancement all now depend on a set of statistical measures that supposedly track merit. It is an international game that has turned intellectual effort into paper-production factories, publishers into 'credit rating agencies', and set academic against academic in blind peer review.
Unsurprisingly, given the stakes for individual careers, people game the system. Rather than concentrating on the kind of deliberative self-critique and doubt that is ultimately necessary for intellectual progress, many academics now concentrate on strategies for publishing as soon as possible in the highest-impact-factor journals they can. What matters is increasingly the badge of publication, supported by statistics for impact and citation, rather than the intellectual content of any contribution. Some academics exploit the peer review process by suggesting particular citations (in which they have a stake) as a condition of a paper's publication. Others simply try to push out as many papers as they can. The actual content of these papers is often terrible.
The problem is exacerbated by the fact that universities, in their drive to improve their own league table position, recruit staff on the basis of quantitative measures ('number of articles published', 'impact factors'), often without reading the work! Indeed, the advantage of bibliometrics (and the principal reason it has caught on) is that it empowers managers, not academics, to make decisions from a perspective that is administrative rather than scholarly: show me the score, and I'll make the decision. Consequently, good and thoughtful teachers with poor publication rates are sacked. More importantly, deeply thoughtful academics who are slow to produce output, or whose output doesn't fit the normative expectations of peer review (by virtue of the kind of originality that makes for transformative research), are also shown the door. Universities are ceasing to be places for thinking. Often, those recruited on the back of 'number of publications' arrive at institutions where a good teacher, not a publication record, is what is actually needed. Unfortunately, good teachers tend not to be made by immersion in a publication factory; they are made through dedicated experience of teaching.
The deep problem is that the statistical measures mask the means of making meaningful distinctions about real academic quality: not the kind of academic quality a manager might see, but the kind that other academics would recognise. This is very similar to the problem of the slicing-up and repackaging of debt, where traders could not tell the difference between good debt and bad. What happened? Eventually, there was a crisis of confidence. Institutions discovered they were exposed to assets they had believed were of certified high quality, only to find that the quality was in fact poor, the result of gaming the very system that produced the measure of quality. We had it in banking, we had it in eggs; now we have it in academic papers!
When will the crisis of confidence come? My guess is that it will come from disgruntled students. It will only take one misplaced 'esteemed' academic to upset a cohort, who will then express their anger by critiquing the quality of that academic's work. It won't be long before the press get hold of it. Then the questions will come: "Did you not read their papers?", "How could a university be taken in?", and so on.
As the current reduction of diversity in the UK education sector takes hold and many institutions aim for 'Oxbridgification', the stage is being set for an embarrassing fall. Students will want great teaching, and Oxbridgification is unlikely to deliver it. The ensuing disruption may be dangerous for many institutions. What follows, however, may be a revaluation of what really matters in education.
The pursuit of truth matters to both great teachers and great thinkers.
well said!... i very much agree with your critique of this 'numbers game'... insight/quality is losing out to pure quantity.. it's getting ridiculous when a single lab submits 40 or more papers to a single conference! ..people are so busy producing these days nobody has time for reading and reflecting ...!