Tuesday, 21 September 2010

Morality and Machine Learning

One of the key features of Activity Theory and Latour's Actor-Network Theory is the belief that technology has agency. The claim becomes obviously more interesting when we look at 'intelligent' or 'learning' machines, but I think it rests on a mistaken view of agency.

I wonder whether human agency always has a moral component. People usually act for reasons grounded in a sense of 'doing the right thing' from their perspective; sometimes, of course, they deliberately do the 'wrong thing'. Sometimes they unintentionally do something harmful, but if they become aware of it, there will be some sort of corrective response. And there is no reason to talk only about people: nothing rules out a cat having some sense of 'the right thing', whatever 'the right thing' might be to a cat. Certainly, pathological social behaviour is much less common in animals than it is in humans!

I'm fascinated at the moment by the extent to which a moral sense might underpin cognition, and I want to know how this might work. I suspect it has something to do with homeostatic biological relationships with material and social mechanisms. Agency (and morality) may simply be the aspects we see of things that participate in those mechanisms.
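To make the homeostatic intuition a little more concrete, here is a deliberately crude toy sketch in Python. It is entirely my own invention, not anything drawn from Activity Theory or Actor-Network Theory, and all the names in it (SET_POINT, GAIN, corrective_action) are made up for illustration. The only point is that a 'corrective response' of the kind described above can fall out of a simple feedback mechanism, without the agent holding any explicit moral sense.

import random

# Toy homeostatic agent: it holds an internal variable near a set-point,
# and its only 'reason for acting' is to correct deviations from it.

SET_POINT = 37.0   # the state the agent 'wants' to maintain
GAIN = 0.5         # how strongly the agent corrects deviations

def corrective_action(state):
    """Return an action that pushes the state back toward the set-point."""
    error = SET_POINT - state
    return GAIN * error

state = SET_POINT
for step in range(10):
    state += random.uniform(-2.0, 2.0)   # disturbance from the environment
    action = corrective_action(state)    # the 'corrective response'
    state += action
    print("step %d: state=%.2f, action=%+.2f" % (step, state, action))

Whether anything like this scales up from thermostat-style regulation to a moral sense is, of course, exactly the open question.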

1 comment:

Astrid Johnson said...

I don't think "doing the right thing" and "morality" are such simple concepts. For example, members of an African tribe practice female circumcision, doing the right thing within their tradition. It would take a memetic shift to understand that this is not a fair practice for women, but once it is understood, let's say they cease to do it. Do they cease the practice because they suddenly feel that female circumcision is pretty bad for women, or because a modern or post-modern consciousness discovered equality, which makes you rethink gender roles? Maybe cognition changes morality? And maybe cats will become vegetarians... Lots of questions.