Yesterday's post on learning raised a number of issues, and before getting on to the meat of "explaining learning", it is important to say something about explanation itself.
I think the most important thing about explanation is its relationship with agreement. An individual may have an understanding of something, but until they explain that understanding, they cannot know whether others agree with them. This is why, when we think about learning analytics (for example), the individual interpretation of the data (however it is achieved) is not really the issue. The key issue is the explanation of the categories that an individual presents to others - the shapes, the form of the data. It is the explanation which counts, because it is the explanation which may or may not elicit collaborative action.
This is why I think that visual analytics is the best path to take with regard to learning analytics: no attempt is made to elicit latent meaning in the data itself; instead, the aim is to create tools which afford the collaborative exploration of the data and the construction of explanations.
This is not to say that no value can be gained from a data-analytic approach, but I do think that, in our understanding of the data, we should try to factor in the possible explanations (the explanantia) that emerge and their impact.
There are a variety of ways of examining the reliability of data which is analysed for its latent 'content'. Of the different 'alpha' values used to measure reliability, the one I find most interesting is Krippendorff's alpha (http://en.wikipedia.org/wiki/Krippendorff's_alpha). This is a measure of the agreement of categories amongst a number of observers. Essentially, it compares the amount of disagreement observed with the amount of disagreement that would be expected by chance: alpha is one minus the ratio of the two.
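To make the measure concrete, here is a minimal sketch of Krippendorff's alpha for nominal categories, written directly from that definition. The function name and the toy observer codings are illustrative assumptions of mine, not part of any particular analytics tool:

```python
# Minimal sketch: Krippendorff's alpha for nominal categories.
# alpha = 1 - (observed disagreement / disagreement expected by chance).
from collections import Counter

def krippendorff_alpha_nominal(codings):
    """codings: one list per observer, one category per unit; None = missing."""
    n_units = len(codings[0])
    coincidences = Counter()   # weighted counts of category pairs within each unit
    totals = Counter()         # marginal totals per category

    for u in range(n_units):
        values = [obs[u] for obs in codings if obs[u] is not None]
        m = len(values)
        if m < 2:
            continue           # units coded by fewer than two observers are ignored
        for i, c in enumerate(values):
            for j, k in enumerate(values):
                if i != j:
                    coincidences[(c, k)] += 1 / (m - 1)

    if not coincidences:
        return 1.0             # nothing pairable, so nothing to disagree about
    for (c, _k), count in coincidences.items():
        totals[c] += count
    n = sum(totals.values())

    # observed disagreement: proportion of mismatched pairs
    d_o = sum(count for (c, k), count in coincidences.items() if c != k) / n
    # expected disagreement: mismatches expected by chance from the marginals
    d_e = sum(totals[c] * totals[k]
              for c in totals for k in totals if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e > 0 else 1.0

# Two hypothetical observers categorising six units of forum data
observer_a = ["question", "statement", "question", "reflection", "statement", "question"]
observer_b = ["question", "statement", "reflection", "reflection", "statement", "question"]
print(krippendorff_alpha_nominal([observer_a, observer_b]))   # roughly 0.77
```

An alpha of 1 means perfect agreement between the observers; an alpha of 0 means agreement no better than chance.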
But I don't think it's quite this simple: explanation muddies the picture. As certain categories are decided upon, explanations are formed; these explanations are characterised by anticipations of what might come next, which colour the judgement of further categories. That means that the scope for disagreement fluctuates over time, depending on the extent to which explanations are agreed. In turn, this means we should consider not only the agreement of categories in content analysis, but also the agreement of explanations (perhaps in the form of anticipations). It may be that the agreement of anticipations follows a similar metric to the agreement of categories. This looks very much like an example of Leydesdorff's 'hyperincursive' sub-systems (see http://www.leydesdorff.net/casys07/index.htm).
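If anticipations were coded explicitly - each observer recording, for every unit, the category they expect to come next - then the same measure could in principle be applied to those codings as well. The anticipation codings below are invented purely for illustration, reusing the function sketched above:

```python
# Purely illustrative: applying the same agreement measure to anticipations.
# Each observer records the category they expect the next unit to fall into.
anticipation_a = ["statement", "question", "statement", "statement", "question", "statement"]
anticipation_b = ["statement", "question", "reflection", "statement", "question", "question"]
print(krippendorff_alpha_nominal([anticipation_a, anticipation_b]))
```

Watching how the two alphas - one for categories, one for anticipations - drift apart or converge would be one crude way of seeing the agreement of explanations fluctuate alongside the agreement of categories.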
What does this mean in practice? In essence, I think it's a more formal way of saying something quite sensible: we need to communicate our understanding as well as communicate the categories we use. Some management tools (particularly Beer's Viable System Model) come with categories (in the VSM's case, regulating mechanisms) and with explanations as to how the regulating mechanisms connect. Because the explanations are set in the model itself, users of the VSM usually only have to worry about assigning the categories to their experience.
With content analysis of complex data (such as is attempted in learning analytics), both explanations and categories have to co-emerge. Actually, the co-emergence of categories and explanations is what happens in learning itself. This gives an indication of the deeper analysis that needs to be done with regard to learning theory (which I will follow up on in the near future).
But more practically, I cannot see how any analytic approach can be successful unless it focuses on the creation of environments for fostering shared explanations. Without that, the conflict of competing explanations and competing categories lacks any kind of regulation, and risks simply being subject to the assertion of power relations - which will be worse because everyone is so flummoxed by the data!