What sort of evidence is being produced for the efficacy of e-learning project interventions? How is the evidence interpreted? How is it gathered? What does it mean? In fact, is it evidence at all?
One perspective, which I find interesting, is that what is touted as evidence is in fact narrative. Projects are 'participative story generators'. This is not to take anything away from the value of individual projects, but what is really produced is stories: some of them inspiring, some of them horrific (not enough of those!), some of them a bit dull, and so on. The inspiring stories may spur others to try to reproduce them; the horror stories may help others avoid the same mistakes (both are valuable in equal measure, I would say!); the dull ones are... just dull.
Is making stories a worthwhile activity? Yes, but it might be argued that it would be just as effective to make them up. However, the technological aspects of the projects allow them to be participative, so that authorship is a shared experience, with different perspectives on the story distributed amongst stakeholders in the institution. This, I think, is particularly valuable.
So why not celebrate stories and stop pretending that the value of particular interventions may be borne out by 'evidence'?