There appears to be general consensus that experiences within the Virtual Reality of the Oculus Rift are something new. What stood out for me was that this was a technologically enhanced experience which felt relaxed, unhurried, stress-free - it genuinely felt like 'dwelling', the key category Heidegger set up as the antithesis of technology. I wonder if he would have agreed with me. For Heidegger, there is something poetic about dwelling. And there is something more poetic about the VR experience in the Rift than about any computer technology I've experienced.
What is going on? Can we analyse it?
I think the simple fact is that the VR conforms to expectations about experience of reality in ways that conventional technology doesn't. When we move our heads, we expect our perspective on the world to change. In the Rift, it does. Because of this, it may present us with a way of understanding our ordinary experience of reality.
One way of approaching an analysis is to consider the aspects of experience which are not represented. The Rift is a different kind of experience in many ways, but one obvious way is the fact that you cannot see your own hands. It may seem like a small point, but up to now, all our technology has worked on the basis of hand-eye coordination. I've recently been reading David Sudnow's "Ways of the Hand", which is a phenomenological account of jazz piano playing (quite brilliant) and a reminder that looking at one's hands, and coordinating what they are doing with awareness of the tools they engage with, has been a fundamental characteristic of all tool usage up to this point. Not being able to see one's hands may be a big deal.
In VR, the neck and the eyes are the most important things. We do not associate these movements with technologies in the real world at all; we associate them instead with exploration, discovery, wonder, gaze, etc. So the combination of neck movements and visual stimulation in VR which conforms to the expectations created by real-world neck movements may stimulate similar emotions. This is interesting because hand-eye coordination is associated with goal-oriented technological practice; neck-eye coordination is associated with exploration. And then there is the sound environment too. This begins to explain some aspects of my experiences in the Rift.
But then, what of expectations? What of reflexivity as I explore my virtual world? Is there something worth looking at in Gibson's affordance theory, because he put particular emphasis on the movements of the neck and body in ascertaining affordances of objects? What information is flowing as we look around us? How are our expectations changed?
The question here is "What shapes expectations?" The answer, I think, is redundancies. It's a useful answer because redundancies are measurable. The technique for analysing expectations in the Oculus Rift is very similar to the technique for analysing the experiences of music. Redundancies occur at a variety of levels of experience. These levels of experience are not 'modes' (as "multimodal" experience) - but they are effectively different levels of expectation. In music, we might have an expectation of an overall tonality, whilst also entertaining expectations of particular motifs, phrasing or particular rhythms. These are not on the same level, but they co-exist, and are mutually dependent. What matters in musical experience is where levels of redundancy shift over each other. This shifting over each other is key to understanding experience in the Rift.
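Since the claim is that redundancies are measurable, a first step can be sketched concretely. The snippet below is a hypothetical illustration, not anything specific to the Rift: it computes Shannon redundancy, 1 - H/Hmax, over a stream of symbols standing in for events at one level of experience, so that a stream dominated by repetition scores as more redundant than an evenly varied one.

```python
from collections import Counter
from math import log2

def shannon_redundancy(seq):
    """Redundancy R = 1 - H/H_max, where H is the entropy of the observed
    symbol distribution and H_max = log2(alphabet size) is its maximum."""
    counts = Counter(seq)
    n = len(seq)
    if len(counts) == 1:
        return 1.0  # a single repeated symbol is fully redundant
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return 1 - h / log2(len(counts))

# A stream dominated by one event is highly redundant...
print(shannon_redundancy("aaaaaaab"))   # ~0.46
# ...while an evenly varied stream has zero redundancy at this level.
print(shannon_redundancy("abcdabcd"))   # 0.0
```

This only captures the first-order distribution of events; a fuller treatment of the layered expectations described above (tonality against motif against rhythm) would need separate symbol streams, one per level.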
Imagine a physical action which is repeated such as moving one's neck left. This action produces a set of visual stimulations which also change as the neck moves. These are overlapping redundancies. If I move towards an object and repeat my neck movement, the pattern of redundancies shifts because my perspective shifts: some things will be the same and some things will be different. It may be that meaningful information will exist in the overlapping of redundancies from one moment to the next because one level of experience gains access to the constraints operating on another level of experience.
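One hypothetical way to quantify this "gaining access" between levels is mutual information: how much knowing the state of one level (say, a repeated neck movement) reduces uncertainty about another (the visual features it brings into view). The streams below are invented purely for illustration.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two aligned symbol streams."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical streams: left/right neck movements and the features seen.
neck  = ["L", "R", "L", "R", "L", "R", "L", "R"]
# Coupled: each head position reliably yields the same view.
view1 = ["door", "wall", "door", "wall", "door", "wall", "door", "wall"]
# Decoupled: the view no longer tracks the movement.
view2 = ["door", "door", "wall", "door", "wall", "wall", "door", "wall"]
print(mutual_information(neck, view1))  # 1.0 bit: one level predicts the other
print(mutual_information(neck, view2))  # 0.0: the coupling is broken
```

On this reading, moving towards an object shifts the joint statistics of the two streams, and it is the change in this coupling from one moment to the next that would carry the meaningful information.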
Overlapping redundancies constrain expectations because they represent things which we do not notice but which are causal on the experience of the things we do notice. Wallpaper is a classic example of a redundancy which frames experience. I suspect that when two redundancies overlap, each redundancy becomes noticeable to the other layer: in other words, what was absent becomes present, or at least some new 'presence' arises through the awareness of what each layer of experience misses. New presences drive new ideas and new action. In this way, we can characterise 'intrinsic motivation' within Virtual Worlds as a dynamic of "presencing" absence at different levels of experience.
But that is all theory. What is measurable are the patterns of redundancy. And we can compare the overlapping of the patterns of redundancy with the experiences of people within the Rift. To begin with, that would be a useful exercise as a way of investigating whether the idea of overlapping redundancies at different levels of experience maps on to the emotional experiences of VR.