The world of computer science today has been transformed by the vast quantities of data to which we are all continually contributing. Where the challenge to computer scientists might once have been to create machines that store, organise, calculate and retrieve data in the light of the clear intentions of managers, governments and corporations, the challenge now, in the face of such vast data resources and the sheer overwhelming difficulty of making decisions, is to work out what any of it means. From assemblages of calculating functions which start from nothing and create data resources, we move towards creating assemblages of filters and masks to render the informational complexity of the world manageable to us.
The difference between these two situations is fundamental. In the early days of computing, individual human intention could be married to engineering expertise so that intentions were realised and amplified. Now, the unforeseeable complexities of amplifying intentions make us doubt any individual perspective as we try to distinguish what is and isn't important. Where we once saw a division between man and machine, or at most a kind of isomorphism, now we see no separation: the world is man, matter and information. Where before computation was an intended function applied by man to matter to produce information, now computation lies in the unintended mechanisms that relate intention to man, machine and information.
So is the concept of a "computer" today the same as a "computer" in the 1950s? I think the answer is clearly no. And this means that we need to rethink our relationship with technology and ask how our calculating machinery might best work for us now. To think differently means thinking back to the conceptual basis of computers so that we might see what's different now.
The Turing Machine is a conceptual model of a very rudimentary computer, and most of our computing equipment can be viewed as elaborated Turing Machines at a deep level. But it is, like all computers, a bottom-up device. Emergent behaviour arises from simple principles which are executed over time.
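To make that bottom-up character concrete, here is a minimal sketch of a Turing machine in Python (my own illustration, nothing canonical): a tape, a head, a state and a transition table, with all behaviour emerging from the repeated application of those simple rules. The transition table is a standard 3-state "busy beaver", chosen only because it is small.

```python
# A minimal Turing machine: a tape, a head, a state, and a
# transition table. Everything it does unfolds from repeatedly
# applying these rules.
from collections import defaultdict

def run(transitions, steps=100):
    tape = defaultdict(int)        # unbounded tape, blank = 0
    head, state = 0, 'A'
    for _ in range(steps):
        if state == 'HALT':
            break
        write, move, next_state = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
        state = next_state
    return tape

# Standard 3-state, 2-symbol busy beaver: halts after writing six 1s.
bb3 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'R', 'HALT'),
    ('B', 0): (0, 'R', 'C'), ('B', 1): (1, 'R', 'B'),
    ('C', 0): (1, 'L', 'C'), ('C', 1): (1, 'L', 'A'),
}

print(sum(run(bb3).values()))  # -> 6
```

Everything the machine "does" is already latent in that little table; it just takes time to unfold.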
The thing that fascinated Turing towards the end of his life was that this emergent behaviour in its most basic state did not describe emergent forms. Turing believed that it ought to be possible to describe emergent forms from basic principles, and he made a fascinating suggestion in his late paper "The Chemical Basis of Morphogenesis". That paper was largely ignored (it's only beginning to spark new interest now as we face the problem of morphogenesis in a variety of ways, including the 'Symbol Grounding Problem' in Information Science). I think the reason for its being ignored, or seen as irrelevant, was that the basic computer didn't have to create social form. Social form occurred around it. The computer might have operated on basic principles, but the combination of computer + society unleashed mechanisms whose complexity was even more baffling than the complexities which had led to the development of the computer in the first place.
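For readers who want to see what Turing was driving at, here is a sketch of a reaction-diffusion system in the spirit of that paper. I use the Gray-Scott model, a later formulation rather than Turing's own equations, and the parameter values are simply a well-known spot-forming choice: two chemicals diffusing and reacting produce stable pattern, form, out of near-uniform beginnings.

```python
# Reaction-diffusion in the spirit of Turing's 1952 paper, using
# the Gray-Scott model (not Turing's own equations). Two chemicals
# diffuse and react; spots emerge from a uniform field plus a
# small central perturbation.
import numpy as np

n, steps = 128, 5000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065   # classic spot-forming values

u = np.ones((n, n))
v = np.zeros((n, n))
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50    # perturb the centre
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(a):
    # 5-point stencil with periodic boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v

# v now holds a spotted 'Turing pattern' grown from simple local rules
print(v.min(), v.max())
```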
I remember, as a 13-year-old playing with my ZX Spectrum, that it was the thrilling effects of the computer on me and my friends that made the whole thing magic (computing has never been able to recapture this). Basic principles led to unforeseeable consequences amongst us. It was fascinating.
But what now? The truth is, those effects are everywhere. Do we need more machines working from basic principles?
I'm going to suggest that we don't, and that we look elsewhere for effective computing.
What matters are the priorities of man, not the capabilities of machines. Actions result from decisions, and decisions are taken in an environment of information, matter and other people. If there are basic principles, they are not about writing symbols to a tape; they are key questions: "What is out there?", "How might we look at it?", "What ways of looking at it help us to make decisions?", "What are the implications of those decisions?" These critical questions are the basic principles. To each of them, technologies must be brought to bear. "What is out there?" invites a consideration of the sum total of all information about a domain. "How might we look at it?" invites a consideration of available filters. "What ways of looking at it help us to make decisions?" invites a consideration of ways of interacting with different filters and different models. "What are the implications of those decisions?" invites a consideration of available predictive models.
This is 'negative' computing.
A negative computer is a data-immersed computer. The operations of a negative computer are to filter out what is not deemed necessary. A negative computer is entirely entwined with its interface: there is no 'separable' CPU; the negative computer works through human participation.
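As a sketch of what this might look like in code (all names and the toy data stream below are my own invention, not a real system), a negative computer starts immersed in everything and computes by removal:

```python
# A hypothetical sketch of 'negative' computing: instead of building
# data up from nothing, start immersed in everything and compose
# filters that remove what is not deemed necessary.
from typing import Callable, Iterable

Filter = Callable[[dict], bool]   # True = keep, False = filter out

def negative_compute(stream: Iterable[dict], filters: list[Filter]):
    """Yield only what survives every filter; the 'computation'
    is the removal, chosen through human participation."""
    for item in stream:
        if all(f(item) for f in filters):
            yield item

# Invented filters a human participant might select and reorder:
relevant_to_decision = lambda d: d.get('topic') == 'budget'
recent_enough = lambda d: d.get('year', 0) >= 2012

stream = [
    {'topic': 'budget', 'year': 2013, 'text': 'spend less'},
    {'topic': 'budget', 'year': 2001, 'text': 'spend more'},
    {'topic': 'weather', 'year': 2013, 'text': 'rain'},
]
for item in negative_compute(stream, [relevant_to_decision, recent_enough]):
    print(item['text'])   # -> 'spend less'
```

The essential inversion is that nothing is created; the human's choice and ordering of filters is the computation.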
Imagine a kind of Turing machine where the symbols on the tape magically appear at all positions. The machine seeks meaning: it must devise strategies for moving through the tape which will give it some predictive capability as to where the next symbols will appear. It can only move towards this through trial and error. But the complexity of the tape is too great for one machine. If other machines are faced with the same problem, with the same tape, and they are connected for coordination purposes by a tape of their own (independent of the main tape), then there is some hope that together they might get there.
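Here is a minimal sketch of that thought experiment, with invented details (the hidden Gaussian source of symbols, the windowed sampling strategy) added only to make it runnable:

```python
# Symbols appear on the tape at positions drawn from a hidden
# distribution. Each machine can only sample a small window per
# tick, so it learns by trial and error; a shared 'coordination
# tape' pools the machines' observations so that together they
# come to predict where symbols appear.
import random
from collections import Counter

random.seed(1)
TAPE = 100
hidden = lambda: int(random.gauss(60, 5)) % TAPE   # unknown to the machines

class Machine:
    def __init__(self, coordination_tape: Counter):
        self.coord = coordination_tape       # shared, unlike the main tape
    def observe(self, symbol_pos: int, window: int = 10):
        # trial and error: inspect a random window of the tape
        start = random.randrange(TAPE - window)
        if start <= symbol_pos < start + window:
            self.coord[symbol_pos] += 1      # record a hit for everyone
    def predict(self):
        return self.coord.most_common(1)[0][0] if self.coord else None

coordination_tape = Counter()
machines = [Machine(coordination_tape) for _ in range(20)]

for _ in range(500):                         # symbols keep appearing
    pos = hidden()
    for m in machines:
        m.observe(pos)

print(machines[0].predict())   # clusters near 60: pooled trial and error
```

No single machine's windowed glimpses would suffice; the prediction emerges only on the shared coordination tape.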
This isn't quite a negative computer, despite it having the basic function of 'filtering'. It cannot be, because it is still cast as an abstract entity. A proper negative computer is not separable from the humans who engage with it. It is more than the collaborative negative Turing machines described above. With the negative computer, there may be an objectively observable "shared tape" at the deepest level (ethics or conscience?). But the filters and the complexity contribute to a 'tape universe' that is far more stratified and constellated. It is within that world that the negative computer operates.