Saturday 29 June 2013

Prism: The Human Right to Anticipation

The Prism scandal has been portrayed as being about information. Yet it is quite clear that the menace is not 'information' itself (whatever that is!), but the unequal capacity of corporations and governments to anticipate what each of us is likely to do next. When institutions and corporations know our next move before we do then we have a fundamental problem with democracy and freedom. The right to self-expression and free will is a right to be different, to be transformative, to do our part to shape the world we live in - and to usurp power which overstretches its mandate.

Information is now much like banking. The banks invest our money, organising it in ways which are not available to any individual, extracting profits which are unrealisable by any individual, but giving the customer back only a tiny fraction of the rewards they have reaped: just enough to encourage the customer to keep placing their money in the bank. Google or Twitter receive a tiny piece of information from each of us. To us, that information may simply be an inconsequential message. But Google and the rest have ways of organising it which are not available to any of us, and extract from that organisation powerful properties of anticipation which they can then sell to those organisations willing to buy their services. Information is transformed into money by way of anticipation.

Any individual is at the mercy of this. Whilst some might eschew online engagement for fear of these forces (rightly), increasingly online engagement through the internet corporations has become as essential for professional development and advancement as education. Indeed, if the bleatings of the Web2.0 mafia are to be believed, it is more important than education (but they are a naive mafia!). The more we commit ourselves to the services of internet corporations, the more the world changes to enforce engagement. Just as the car has become a machine for covering the planet in tarmac, so the internet is a machine for turning us all into 'information workers'.

But we have no concept of the freedom of information workers. We barely have a concept of information itself. But we can now see the asymmetry of anticipation between the giant internet corporations, governments, and the people who are meant to be the ones who vote for governments. The fact that we don't vote for internet corporations must now be a massive concern.

Part of the problem is that few understand the power that can be harvested from Google accounts, Facebook, etc. The pattern-matching algorithms are so sophisticated that previously unknown commonalities in our psychology and biology can be revealed. Ironically, these algorithms emerged through the massive data challenge of genetic sequencing. But when we 'sequence' ourselves, we need to understand what is being exposed, and the power of revealing it.
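To give a flavour of the kind of technique involved: sequence-alignment-style dynamic programming, of the sort honed on genomic data, can surface shared runs in two streams of behaviour. This is a minimal sketch, not any company's actual algorithm; the event codes and the function name are invented purely for illustration.

```python
def longest_common_run(a, b):
    """Longest contiguous run of events shared by two sequences
    (the classic dynamic-programming longest-common-substring)."""
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)  # prev[j]: run length ending at a[i-1], b[j-1]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best:
                    best, best_end = curr[j], i
        prev = curr
    return a[best_end - best:best_end]

# Two users' (invented) daily event streams: each token is an action code.
user1 = "wake,coffee,news,mail,search,buy".split(",")
user2 = "wake,run,coffee,news,mail,search,buy,mail".split(",")
print(longest_common_run(user1, user2))
# -> ['coffee', 'news', 'mail', 'search', 'buy']
```

Applied across millions of such streams, alignments like this are what turn inconsequential messages into commonalities nobody volunteered.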

I think the asymmetry will need to be addressed. Eventually the political scandal will be apparent to everyone. But also there is no way back. The technologies are there now. But maybe the next wave of social and technical transformation will not be about individuals submitting information, but individuals having their own means of technologically-enhanced anticipation. But in that situation, the locus of meaning, of being human and of freedom will become ever more important.

Friday 28 June 2013

The Institution of the Science of Education

To pursue education as a science requires a particular attitude of criticality. Yet educational institutions can be remarkably uncritical of their own practices. To pursue education as a science is to attempt to change this.

The barriers to a scientific approach are very significant. Yet in a "knowledge economy" (yes, I don't know what it means either), it would at least seem that teaching and learning are of fundamental importance to healthy social ecologies of businesses, institutions, nations and the world.

To pursue education as science is to critically examine the nature of "healthy social ecologies".

These are structures existing amongst human beings which have both synchronic and diachronic dimensions. Within those dimensions are the elements which feed the continual reproduction and transformation of those forms. It seems very likely to me that effective ways of thinking about emergent structures and processes within healthy social ecologies are fractal. Or at least, fractal thinking may be the closest we can get to an effective way of thinking in order to maintain healthy ecologies.

To understand education as science is to understand the relationship between the way we think about education and what actually happens in it. More deeply, it is to critique what we mean by thinking itself.

Much educational thought over the last 100 years (pretty much since the beginning of state-supported compulsory education) has been grounded in psychology, which itself was in its infancy. The dominant models put  forward by psychology to support education have been either essentially mentalist or behaviourist in character. These models reflect particular ways of thinking about a person and their experience which present opportunities for empirical defence of the respective models. In other words, the models have followed established methodologies. The methodologies are rarely challenged.

At issue is the relationship between cause and effect in education. Hume's regularity theory of causation is unlikely to be right, even in its characterisation of practice in the physical sciences. But the contrived regularities of education produced through statistical analysis have meant that the deficiencies of the method have rarely been exposed, or worse, that a post-modern attitude of "the impossibility of naturalism in education" has been equally able to take root.

The question of causality goes right to the heart of the nature of education. When we think that the cause of engineers is engineering courses, the cause of physicists is courses in physics, and so on, we miss something fundamental about the real practice and experience of physicists and engineers. What are the causes of a physicist? Or a doctor? Or a lawyer?

That is the scientific question.

To establish an institution which addresses this question is perhaps where education should go next...



Thursday 27 June 2013

Stories and Psychopaths

We tell stories all the time. It would seem that our lived experience is carried through some kind of ongoing story (Rom Harre calls this a "story line"). We may well know that the stories are not real, but the fact that we inhabit them for a while helps us to deal with what is real. Stories are the vehicle of  the imagination. And the imagination is politically fundamental: freedom, hope, oppression and tyranny all exist in the imagination. Any explanation of how stories help us might need to account both for the wonderful things that the human imagination can do, as well as the ways in which fearfulness can lead us into crisis, evil, despondency and depression.

What is most important to understand is the relationship between stories and decisions. Good and bad decisions have stories behind them. The need to communicate, the need to rationalise, the need to defend actions all drive the need to create stories. But we should ask how it is that such desires are satisfied by the story-making activity, and indeed why it is that these desires exist in the first place.

I think stories are ways of 'playing with absence'. Art, music and drama are also ways of playing with absence. Their artificial structures reveal something natural about the nature of the relationship between what's there and what is not. The desire to play with absence comes from the fact that we tie ourselves in knots in daily life. Complexity reaches a point where there needs to be some 'pruning' of our experience, or some reorganisation of our experience in order that we can carry on functioning. Such reorganisation entails the determination of some new absence: the determination of part of the force which causes things to take the structure that they do.

Absences are difficult to pin down - particularly from the context of a present reality. Any present reality is necessarily blind to the absences which bear upon it. But a new story is a new context. It has its own internal structure, but it also has absences which bear upon that structure, and these can be identified through 'playing' with the story.

The most effective identification of an absence in a story takes the form of a recursive principle. Through the recursive principle, the story takes shape. But it is this recursive principle, discovered through playing with the story, that serves as the key to the determination of the absences of real life. Play nourishes reality with the creation of new recursive principles.

The fantasy of thinking and planning and imagining the reaction of other people is a classic example. Through this activity, we imagine a different world related to our own, and inspired by the challenges of the real world. Through the fantasy, we find a pattern - some principle which lies at the heart of the generation of the structure of the story. Through that, we resolve to act in particular ways in reality.

After doing something bad, we also engage in fantasy. A story is concocted to justify, defend, deny or explain away one's actions. The trauma of guilt from the reality creates complexities which are too great to bear. Fantasy is the only way out, through which anticipations of others may be considered and the causal factors behind the shape of events identified. Once again, the recursive causal principle is identified. Once again, this transforms the traumatic situation - or at least makes it a little more manageable.

It is ironic that bad actions - which demand so much of fantasy for their defence or denial - usually arise from a lack of creativity in the first place. Bad actions result from the imprisonment of fear. But the bad person develops a pathological relationship with fantasy. Where fantasy for the good person nourishes their decision-making and psychological health in everyday life, for the bad person, fantasy is demanded as a way of defending what is really indefensible. Because it is indefensible, the recursiveness of the story only works so far; ultimately more and more stories are required. And then it is easy to lose sight of the distinction between the story and the reality. There begins the psychopath.

Authentic being is not a luxury.

Sunday 23 June 2013

An Amazing Week!

I don't normally post diary entries - but this week has been special in many ways. The week started in Turkey - and a somewhat precarious journey to Esenkoy, near Yalova. I was meeting about 70 school teachers and presenting some of the technologies of the iTEC project. I had already gained some idea of what I was heading for because I had conducted a seminar online with a few of the teachers a week before. On the whole, English is not a strong skill in Turkey (although I did have some excellent interpreters) - so slow delivery and careful planning were needed. And lots of activities. Which was fine - because that was the basic message I was wanting to deliver anyway.

I started by talking about how important it was to be together. We were all together in a rather dingy classroom on a hot day with the occasional power-cut. But after an ice-breaker involving drawing pictures of themselves, and introducing each other through their pictures to somebody they didn't know, things were going quite well. "The bottom line with the iTEC technologies is to enhance and facilitate this experience of togetherness," I said - taking a few liberties. But it felt like the right thing to say. Although few of the iTEC technologies actually do anything specifically about togetherness (apart from shared-editor widgets, shared drawing and chat), I also highlighted the importance of coordinating a variety of activities to keep people engaged and to maintain a lively dynamic in the classroom - the widgets in iTEC can at least do that.

I then demonstrated what might be possible in terms of shared experience online. The Virtual Choir is still the most compelling example - and it grabbed their imagination (as it does mine). I was asked to continue after lunch (which I hadn't planned to) and so took them through the process of accumulating their own collections of tools, videos, etc through the iTEC Widget Store. Everyone seemed to 'get it' and (most impressively) everyone, almost without exception, was keen, motivated and in a mood to experiment. I've never seen this before in such a large group of teachers.

Day 2 and everyone was expecting me to tell them how to 'make' widgets. I thought about this, and I came to the conclusion that the most significant 'widget' that I could get everyone to make was a video widget. That meant that the session focused on making video. Although one or two teachers had done this before, the vast majority hadn't. This was the first time they had been forced to deal with the problems of microphones, screen recorders, PowerPoint voiceovers, and finding somewhere quiet to record their voice. This is one of the videos that was created:


I was getting tired by the end, and a failed attempt to link up to an iTEC workshop in Bolton meant that I had to go over some of the exercises that I had done over the two days (this time with volunteers from the audience to help me out). Everyone seemed to have got the hang of it.

My next stop in the week was Marseille for a conference on "Ethics and Responsibility in Economics and Business Studies" (http://philo-eco.eu/wp-content/uploads/2013/04/Ethics-Responsibility-in-Economics-and-Business-Studies.pdf), where I was delivering a paper on George Bataille's economic theory. This is a small group that I weirdly found myself involved with through the Cambridge Social Ontology Group, who first introduced me to Critical Realism more than 10 years ago. It was a rich, intimate and profound event, with contributions ranging from social ontology and ethics (with particular focus on a critique of Sen), the nature of the person, business coaching and corporate governance, to numerology. Marseille was beautiful.

On the Saturday I remotely delivered a session for Bolton's new EdD course on "Education and Reality". It was interesting the extent to which it was possible to stimulate rich discussion amongst the students, despite the fact that I wasn't there. I think I also managed to upset them with an awkward statement: "science teachers are not really scientists - if they were, they'd be doing science"... an audible gasp! Maybe I should have rephrased it... but hurling a few unintentional insults into lessons at least gets people's attention (although I don't want to end up like Niall Ferguson!)


Wednesday 19 June 2013

Music, Information and "technological prejudice"

I've had a strange experience in my attempts to multitask and be in three places at once (in these days of scarce resources, such measures become necessary).

I was hoping to give a presentation at my University's research conference on my work on metagames, information and music. Unfortunately, on the day of the conference I found myself in Turkey talking to Turkish teachers  about the iTEC project, and trying to coordinate a testing session for iTEC in Bolton. So my talk became a video.

Videos are much more difficult to create than powerpoint talks. And I thought very carefully about how I was going to present what are pretty complex ideas. They are, I think, ideas that merit some attention because they deal with information as experience, rather than information as abstraction. So I sent a colleague who was also presenting in the same session to show the video.

Of course, one is always at the mercy of session chairs. But until now I hadn't come across one who would say "I'm not watching a video. I'm going to get an early lunch!" I think at the very least this shows a lack of awareness of the effort it takes to produce the thing - a kind of technological prejudice. Given that my presentation was about decision, it is an interesting decision!

Our abstractions of information (including those surrounding academic practice) lead us to certain expectations which in turn blind us to the constraining effects of the information we're not directly aware of (or don't want to think about). The experience of sitting in a room listening to an argument is a convivial experience whether or not there is a real person there doing the talking. To suppose that only valid information is conveyed in the presence of the talking person is clearly nonsense. But we don't really have a theoretical apparatus to explain the experience of sitting in a room with other people listening to an argument. That's the gap I am trying to address in my work.

There's something here about marking certain types of informative practice as valid, and others not. To admit things we might consider invalid as equally valid sources of "information" is to poke absences within ourselves which we would rather not be poked. The stick that Wikipedia still comes in for is a good example. Such prejudices are the result of the personal 'taboos' of academics - the things we bracketed out from our experience because we found a way to exist by leaving them alone. And yet we have forgotten that any way of life is ultimately framed by the things that we don't think about.

Considering music as 'information' is about as close as we get to saying "there's no such thing as information". Really, I think information is simply 'absence', or constraint (following Terry Deacon). Wiener thought that information was an ontological category of its own. But the ontological category is not Wiener's 'negentropy': it's 'neg'.
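The idea that information is constraint can be glimpsed in a toy calculation (a sketch of my own, not anything from Deacon or Wiener directly): the more a source is constrained, the lower its Shannon entropy. The sequences below are invented for illustration.

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy, in bits per symbol, of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

unconstrained = "abcdabcdabcdabcd"   # four symbols, equally likely
constrained = "aaabaaabaaabaaab"     # a constraint at work: mostly 'a'

print(entropy(unconstrained))  # 2.0 bits per symbol
print(entropy(constrained))    # roughly 0.81 bits per symbol
```

What the number measures is not any 'stuff' in the message, but how far the possibilities have been narrowed: the 'neg' rather than the 'entropy'.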

And perhaps the conference chair's desire for an early lunch was the most informative thing that could have occurred!

For those interested, the video is available here:
https://dl.dropboxusercontent.com/u/13531449/information_0001.wmv


Thursday 13 June 2013

Steps Towards Institutional Ecology: 1. Mechanisms of Accountability

I am working on a framework for understanding educational institutions as 'ecologies' and this is the first of a number of posts which will develop the idea. As managerial interventions (from Vice Chancellors and ministers) slash institutions, there have been profound effects (many of which were unforeseeable) on the culture and operations within those institutions. Yet we have little understanding of how institutional culture is shaped, or of the relationships between the functions of individuals, their job roles, the students, formal procedures, governance, etc. Present attempts at institutional reform feel like medieval dentistry, and a more sophisticated understanding of institutions seems much needed.

Accountability lies at the heart of any viable operation. Yet its real function is difficult to pin down. Totalitarian regimes appear viable - for a while. But they usually collapse eventually. I was thinking about this while looking at the news from Turkey (where I'm giving an iTEC presentation next week). Accountability is not just about engineering 'participation' from the electorate (holding a referendum about real-estate development seems inappropriate), it is about having 'trusted mechanisms' of critique and inspection (in the UK, we would probably hold a public inquiry, not just into the park development, but into the actions of the authorities). Deep down, these mechanisms serve to make explicit the connection between information and decision, where the sources of information upon which decisions are based are revealed, and the mechanisms of decision-making in the light of information are inspected and challenged. Within educational ecology, an equivalent question which relates to the Turkish situation is "what are the mechanisms for holding Vice Chancellors to account? How are sources of information exposed and decision-making processes inspected?"

Notionally, there are at least three mechanisms.

  1. the academic senate (or sometimes called the academic board)
  2. the governors
  3. the unions
But each of these mechanisms has undergone important changes in recent years, particularly with the rise of new universities. 

Senate is the place for academic debate about the direction of  the University led by the academics of the university with the Vice Chancellor as their head. It is not, as is often now the case, a rubber-stamping organisation for the latest policies of management. Its purpose is to maintain the essential function of the University, which is (fundamentally) the pursuit of truth and knowledge through teaching and learning. As a rubber-stamping meeting, this function of Senate has gone. And with it an opportunity for holding leaders to account.

The governors are the body of people whose job it is to steer the University as a viable business. They should inspect the accounts, ask questions of the leaders about current policies and directions, and play a key role in the selection of management. In recent years, many Vice Chancellors have found ways to turn the board of governors into another rubber-stamping group. It's not difficult to see how this might happen. "Being a governor" is a badge of honour, often with more 'in it' for the individual governor than for the institution. As a result, governors who have little interest in 'governing' will gladly accept places on the board and (frankly) do what they are told. It only takes a critical mass of 'VC-friendly' governors on the board before the processes of ratifying any further appointments are tainted. Consequently, another (and possibly the most important) mechanism of accountability is lost.

Finally, the Unions have traditionally had the task of holding the managers to account in the interests of their members. But with savage spending cuts and consequent redundancies, the relationship between unions and managers has become unsurprisingly confrontational. In such circumstances, "holding to account" can become a somewhat hysterical game of accusation and counter-accusation which helps no-one. Furthermore, protecting the members' interests is a different game from protecting the future of the institution or the future of education. Unions within universities often struggle with what it is they are actually trying to protect or to challenge. In difficult times, and particularly with 'stitched-up' governing bodies and bureaucratic Senates, it becomes easier for managers to simply ignore the unions in what they will argue are the "broader interests of the institution".

If we are to move towards an understanding of institutional ecology, then these examples of changes to accountability structures provide an excellent example to study. We are likely to see institutional failures in the near future, and where that happens, we will be able to examine governance structures and academic decision-making processes. This is unfortunate for the institutions concerned, but it would be more serious not to learn from their misfortune. The fundamental questions will be 
  • What information was available?
  • How did decision-making function in the light of  the available information?
  • What information was ignored?
  • Who decided what information to ignore?
  • Who challenged this?
Of course, these will be questions for any public accounts inquiry into any failure. But the issues lie deep in our understanding (or misunderstanding) of the nature of information, the way decisions are made, and the impact of technology.

Tuesday 11 June 2013

Negative Computing

The world of computer science today has been transformed by the vast quantities of data that we are all continually contributing to. Where the challenge to computer scientists might once have been to create machines that store, organise, calculate and retrieve data in the light of the clear intentions of managers, governments and corporations, the challenge now, in the face of such vast data resources and the sheer overwhelming difficulty of making decisions, is to work out what any of it means. From assemblages of calculating functions which start from nothing and create data resources, we move towards creating assemblages of filters and masks to render the informational complexity of the world manageable to us.

The difference between these two situations is fundamental. In the early days of computing, individual human intention and engineering expertise could be married, so that intentions could be realised and amplified. Now, the unforeseeable complexities of amplifying intentions make us doubt any individual perspective as we look to make a distinction between what is and isn't important. Where we saw a division between man and machine, or at least only a kind of isomorphism, now we see no separation: the world is man, matter and information. Where before computation was an intended function applied by man to matter to produce information, now computation lies in the unintended mechanisms that relate intention to man, machine and information.

So is the concept of a "computer" today the same as a "computer" in the 1950s? I think the answer is clearly no. And this means that we need to think about our relationship with technology, and about the ways in which our calculating machinery might best be thought to work for us now. To think differently means to think back to the conceptual basis of computers so that we might see what's different now.

The Turing Machine is a conceptual model of a very rudimentary computer, and most of our computing equipment can be viewed as elaborated Turing Machines at a deep level. But it is, like all computers, a bottom-up device. Emergent behaviour arises from simple principles which are executed over time.
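The "simple principles executed over time" can be made concrete in a few lines. This is a minimal sketch of a Turing machine, assuming a finite tape with a blank marker at the end; the example rules (a machine that flips every bit and halts at the first blank) are invented for illustration.

```python
def run(tape, rules, state="start", pos=0, blank="_"):
    """Execute a transition table over a tape until the 'halt' state.

    rules maps (state, read_symbol) -> (write_symbol, move, next_state),
    where move is "R" or "L". The tape is finite; positions beyond it
    read as blank.
    """
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# A bit-flipping machine: three local rules, nothing more.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("10110_", rules))  # -> "01001_"
```

Everything the machine 'does' emerges from the repeated application of these local rules; nothing in the table describes the global result.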

The thing that fascinated Turing towards the end of his life was that this emergent behaviour in its most basic state did not describe emergent forms. Turing believed that it ought to be possible to describe emergent forms from basic principles, and he made a fascinating suggestion in his late paper "The Chemical Basis of Morphogenesis". But that paper was largely ignored (it's only beginning to spark new interest now as we face the problem of morphogenesis in a variety of ways, including the 'Symbol Grounding Problem' in Information Science). But I think the reason for it being ignored or seen as irrelevant was that the basic computer didn't have to create social form. Social form occurred around it. The computer might have operated on basic principles, but the combination of computer + society unleashed mechanisms whose complexity was even more baffling than the complexities which had led to the development of the computer in the first place.

I remember, as a 13-year-old playing with my ZX Spectrum, that it was the thrilling effects of the computer on me and my friends that made the whole thing magic (computing has never been able to recapture this). Basic principles led to unforeseeable consequences amongst us. It was fascinating.

But what now? The truth is, those effects are everywhere. Do we need more machines working from basic principles?

I'm going to suggest that we don't, and that we look elsewhere for effective computing.

What matters are the priorities of man, not the capabilities of machines. Actions result from decisions, and decisions are taken in an environment of information, matter and other people. If there are basic principles, they are not about writing symbols to a tape; they are key questions: "What is out there?", "How might we look at it?", "What ways of looking at it help us to make decisions?", "What are the implications of those decisions?" These are critical questions as basic principles. To each of these questions, technologies must be brought to bear. "What is out there?" invites a consideration of the sum-total of all information about a domain. "How might we look at it?" invites a consideration of available filters. "What ways of looking at it help us to make decisions?" invites a consideration of ways of interacting with different filters and different models. "What are the implications of those decisions?" invites a consideration of available predictive models.

This is 'negative' computing.

A negative computer is a data-immersed computer. The operations of a negative computer are to filter out what is not deemed necessary. A negative computer is entirely entwined with its interface: there is no 'separable' CPU: the negative computer works through human participation.

Imagine a kind of Turing machine where the symbols on the tape magically appear at all positions. The machine seeks meaning, which means it must devise strategies for moving through the tape which will give it some predictive capability as to where the next symbols will appear. It can only move towards this through trial and error. But the complexity of the tape is too great for one machine. If other machines are faced with the same problem, with the same tape, and they are connected for coordination purposes by their own tape (independent of the main tape), then there is some hope that together they might get there.

This isn't quite a negative computer, despite it having the basic function of 'filtering'. It cannot be, because it is still cast as an abstract entity. A proper negative computer is not separable from the humans who engage with it. It is more than the collaborative negative Turing machines described above. With the negative computer, there may be an objectively observable "shared tape" at the deepest level (ethics or conscience?). But the filters and the complexity contribute to a 'tape universe' which is much more stratified and constellated. It is within that world that the negative computer operates.
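A drastically simplified toy rendering of the collaborative machines in the thought experiment might look like this. The 'learning' here is reduced to a lookup so that the coordination structure is visible; all the names, and the symbol alphabet, are invented for illustration.

```python
import random

random.seed(0)
tape = [random.choice("abcx") for _ in range(200)]  # a data-saturated tape

def learn_filter(tape, target):
    """Each 'machine' arrives at a mask over the shared tape: the
    positions where its target symbol occurs (everything else is
    filtered out). A real machine would reach this by trial and error."""
    return [i for i, s in enumerate(tape) if s == target]

# The coordination tape: a shared record, independent of the main tape,
# where each machine posts the mask it has settled on.
coordination_tape = {}
for machine_id, target in enumerate("abc"):
    coordination_tape[machine_id] = learn_filter(tape, target)

# Together the machines account for everything except the noise symbol 'x':
covered = set().union(*coordination_tape.values())
print(len(covered), "of", len(tape), "positions filtered into meaning")
```

The point of the sketch is the division of labour: no single machine's filter covers the tape, but the union posted to the coordination tape does, apart from what all of them discard as noise.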

Monday 10 June 2013

Computation and Measurement

Yeats said "Measurement began our might". To measure something (certainly in Yeats's sense) is to understand something of its meaning.

Computation isn't measurement. Computation - when treated sensibly - serves to help us discover the limits of our abstractions. We put a formula into the computer, we explore the parameters.  We notice when interesting things happen. By 'interesting', we look for those things which we couldn't have anticipated.

When we measure, we bring tools and ideas to bear on a world which is seen to be independent of the measuring act. Measurement is often seen as a means of observing what is there. The fact that our act of measurement changes the world - as every act does - is a complexity which the scientists of the Enlightenment hadn't grasped. When we measure, we simply look to gather things together.

Measurements produce data. Data can be fed into a computer whose algorithms produce effects, some of which we may find 'interesting': things we couldn't have anticipated. But because those algorithms operate on what we believe to be "measurements" we have a tendency to forget that the interesting things we see are the product of the limits of our abstractions. Instead we believe them to be isomorphic to the world which we measured. We then attribute meaning to phenomena in the world which we have gained through our computer screens. The world is changed as a result - but not in ways we can imagine.

The deep problem here is that whilst our conception of finding something interesting in computation revolves around finding something we couldn't anticipate, we do not accept that measuring something is also an act related to anticipation. Yet to believe something to be measurable is the result of a way of looking at the world in which the phenomena look stable, countable, regular, etc.

But there are many lenses through which to view the world, and each lens produces its own kind of stability, and consequently, a different kind of measurability. Which measurement? Which lens? Which computation?

A typical trick in the social sciences is to use computation to produce the illusion of measurability. Statistics flatten experiences to data which can then be compared. In our financial institutions, the same thing happens: trading systems use prices as cyphers standing for real substances and qualities.

Consequently we are in a mess.

But only because we have forgotten what we are looking at. What we look at are things which we couldn't anticipate; things which are the limits of our abstractions. The obvious thing, given this, is to develop our abstractions! In social science, this rarely happens.

Instead, new policies are created to make observed phenomena fit the existing abstractions. Data operates as a kind of currency, with everyone competing to have the 'best data', everyone trying to implement the policies to produce the best outcomes within the current abstract frame. In the heat of this competition, changing the frame is practically impossible.

The question is "where does the real meaning lie?" Anticipation in daily life requires deep knowledge of each other. It requires the kind of knowledge that doesn't come through computer-processed measurements. It comes through lived experience.

Perhaps not until we see our computational processes and measurements as part and parcel of a lived experience, rather than abstract entities, will we be able to situate the power of computation and technology within the context of well-lived and dignified lives.

Wednesday 5 June 2013

Beyond Understanding Computers and Cognition

Winograd and Flores's 1986 book "Understanding Computers and Cognition" (really just Flores, since he wrote much of it as his PhD in a Chilean prison after Pinochet killed Allende and the Cybersyn project) is still a landmark in the technology literature. It was the book that argued that it wasn't data processing and AI that was the future of computing, but communication. It's difficult to make accurate predictions in technology, and this was a particularly good one considering the web was still some way off. How did they make their predictions? Well, philosophy, cybernetics and biology were the main ingredients in what is still a rollicking read (I've recommended it to a number of students who've loved it too). Anyone who thinks technology is a deathly topic of inquiry (as I did when I came across it) should read it.

They were right. Then. But now? Things have a habit of coming full-circle. Their first sentence was "Computers are everywhere." What would we say now? First of all, ubiquity has made "computers" almost disappear. But there's something else. "Data processing is everywhere!" That's where we are now. There isn't a single thing that we do online which doesn't get analysed and fed into some kind of decision-support system. And the algorithms are getting better and better.

I wrote a few days back about how Discrete Wavelet Transforms are facilitating the analysis of continuous data, from video mining to medical applications that listen to the sound of bowel movements (or anything else). This is basically 'fingerprinting' technology. YouTube can do its video recognition because the wavelet transformations allow for the production of a highly compressed key unique to each video. With a searchable databank, and a simple method of applying the algorithm so that it reveals a part of the key that can be searched with, it is easy to see how they do it.
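
As a sketch of the fingerprinting idea - emphatically not YouTube's actual algorithm, and using a hand-rolled Haar transform rather than a proper DWT library - one can reduce a signal to the signs of its coarsest wavelet coefficients and hash those into a tiny searchable key:

```python
import hashlib

def haar(signal):
    """Full Haar wavelet decomposition (signal length must be a power of 2).
    Returns [overall average, coarsest detail, ..., finest details]."""
    out = list(signal)
    coeffs = []
    while len(out) > 1:
        avgs  = [(out[i] + out[i + 1]) / 2 for i in range(0, len(out), 2)]
        diffs = [(out[i] - out[i + 1]) / 2 for i in range(0, len(out), 2)]
        coeffs = diffs + coeffs   # coarser details accumulate at the front
        out = avgs
    return out + coeffs

def fingerprint(signal, n=8):
    """Hash the signs of the n coarsest coefficients: a compact, robust key."""
    bits = ''.join('1' if c >= 0 else '0' for c in haar(signal)[:n])
    return hashlib.sha1(bits.encode()).hexdigest()[:12]

# A signal and a slightly noisy copy share a key; a different signal doesn't.
print(fingerprint([1, 2, 3, 4, 5, 6, 7, 8]))
print(fingerprint([1.1, 2.0, 2.9, 4.1, 5.0, 6.1, 6.9, 8.0]))
print(fingerprint([8, 7, 6, 5, 4, 3, 2, 1]))
```

The point is the compression: a whole signal collapses to a few bits that survive small perturbations, which is what makes a databank of such keys searchable at scale.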

With text mining, techniques learnt from genetic sequencing are similarly "fingerprinting" our utterances. Facebook and Google know more about us than we think we have revealed. The vast fingerprinting and trawling of patterns of human text engagements means that reasonable prediction becomes a reality. There are of course many applications of this which could be great. Add to this the semantic networks which exploit the data mining, continually adapting new inferences and rules, and there's some impressive stuff going on. But it's all rather Orwellian, isn't it?
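
The genetic-sequencing analogy can be sketched with 'shingles' - the textual cousin of the k-mers used in sequence analysis. This is my own toy illustration, not the method of any particular company:

```python
def shingles(text, k=3):
    """Break text into overlapping k-word 'shingles', like k-mers in genetics."""
    words = text.lower().split()
    return {' '.join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of the two shingle sets: shared / total."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

print(similarity("the cat sat on the mat", "the cat sat on the rug"))  # → 0.6
print(similarity("the cat sat on the mat",
                 "entirely different words over here instead"))        # → 0.0
```

Scale that up to billions of utterances and the shingle sets become fingerprints: near-duplicate phrasings, habitual constructions, and ultimately the person behind them become matchable.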

Winograd and Flores would say "This isn't AI". Maybe. But it's bloody clever all the same. And it's new. It depends on the fact that we have now wired ourselves up to each other in a way that even Flores would have been astonished by.

I think we need to re-look at computers. Our world has changed. Super-powerful inference engines exhibit a kind of 'agency': information, after all, is constraint. But I put 'agency' in inverted commas because I don't think it is agency (I disagree with the Socio-materialists and Actor-Network theory people here). At least it's not the agency of a machine. Agency, I think, requires conscience - and computers can't have that.

The technology question is, more than ever before, a political question. The agency of the machines is the agency of the people behind the machines. Who are they? Who votes for them? What are they up to? How can we get rid of them if we don't like them?

What they are up to should concern all of us. Their motivation is one problem. But their methods are dangerous. The apparent cleverness of the data analysis increasingly takes us into a 'positive' world where all that matters is that which can be shown on a network graph. What about all the stuff in between? What about the fact that whatever I just tweeted, it wasn't quite what I really thought - just an approximation. An abstraction. Not the reality. Yet whatever has been concretely launched into cyberspace is "the truth". We're losing our grip on the space between things. It's like analysing thousands of Bach fugues to determine his 'rules', and then asserting that "Bach was following rules". [of course, he didn't write thousands!]

Whether these are good people or bad people, their naivety is akin to recklessly drilling for oil at the bottom of the ocean. But the real problem is that they act outside the regulations upheld by the people we vote for. The people acting through machines are a new elite whom we do not vote for. This is a challenge for government and democracy.

Bringing technology into the political domain may be the most significant challenge we face in the 21st century.

Monday 3 June 2013

Towards Institutional Ecology

The consensus from academics in institutions ranging from research universities to widening participation institutions is that the REF is poison. Be that as it may, it should be said that this is a poison of our own making. We created the technologies which allowed the numbers to be crunched; we created the peer-review processes which ensure that anything different usually meets barriers to publication; we blew our own trumpets about the value of having degrees, and played the commodification game when it suited us. It's our fault. Most ecological catastrophes are.

There is something peculiar about the poison of the REF though. It sits on a technical innovation which in turn shifts the power relations in such a way that scholars can have judgements passed on them by non-scholars (otherwise known as 'managers'). As a result, the technology bolsters power relations which are almost certainly damaging to the upholding of wisdom and knowledge, which one hopes Universities aspire towards. The aims of the REF become distorted into a means of settling managerial scores with inconvenient individuals. I've lost count of the esteemed academics in their late 50s and 60s who look at the system now and say "well, if I'd had to deal with that, I wouldn't have survived!". In short, this is the removal of swathes of the diversity which makes up the ecology of thinkers and teachers in an institution. But the tidal wave of data is overwhelming - and a sense of defeatism has set in, along with a new breed of academic who sees the new game and lamely accepts that "this is how it has to be". There Is No Alternative.

Meanwhile, the managerial elite of non-scholar bureaucrats pour technologically-brewed effluent over everything else. People get sick. Nobody is happy - least of all the students. Even the bureaucrats are not immune (although they of course thought they were!). Who would want to study under such conditions? Who could? Who would want to teach in such an environment? This is not the haven for thinking it presents itself as. It is anything but.

Something is wrong in our metrics. (Is there something wrong in any metric?) What is deeply wrong is a model of human worth based on individual 'productivity'. As soon as the productive unit can be quantified, it can be compared to those who score better or worse. Yet no productive process results from one brain. We need each other in ways which escape the bibliometricians completely. If we hack at those parts of the forest which we deem unproductive, unforeseen consequences will arise. One dreadful consequence which is entirely foreseeable (for anyone interested in history) is the emergence of the 'master hacker', that demented individual who believes their role is to "sort the institution out!" - meaning to attempt to realise some abstract model they have in their head by removing those things which don't fit it. Yet the more they hack, the further their vision recedes from them, causing them to hack more, this time with frustration, venom, etc. In such hands, the REF becomes a scythe.
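
The bibliometricians' flattening is easy to demonstrate. Take the h-index (the citation counts here are invented for the example): two very different working lives collapse to the same integer, and the comparison machinery sees no difference between them.

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
    return h

steady_scholar = [4, 4, 4, 4, 4, 4]   # consistent, collaborative output
one_hit_wonder = [100, 5, 5, 4, 1]    # one famous paper, little else

print(h_index(steady_scholar), h_index(one_hit_wonder))
# both score 4: the metric cannot tell these careers apart
```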

Our understanding of cognition, knowledge and worth has been distorted into crude mechanised forms by technology we created. Within them, models of cognition, knowledge and worth have been embedded which are plainly untrue. Yet they are difficult to disentangle from the mess we are now in. But disentangle them we must.

This is a scientific challenge. The clean-up needs new thinking and new technology. We must be able to monitor and manage the ecology of our institutions better. This is not to say that nobody should ever lose their job, or that budgets don't have to be respected. But it is to say that the relationship between technology, information and power needs to be situated within a deeper concept of mind and richer models of human organisation. The pathologies are foreseeable. It's just that when pathology looms, those that predict catastrophe are frightened to say anything.

My view is that our fundamental problem is 'positivity': we only value what we can see. I believe a methodologically negative approach is an important antidote which appreciates the importance of data, evidence, etc, but which always seeks what's missing. We must examine the ground behind the figure.

In the Ecological Institution, what matters is not the 'positive results' of operations, but the collective, systemic research and development into what's missing. This process reflects the individual mind, whose creativity and worth in an environment depend not on what it destroys, but on what it brings to the process of collectively determining the things that are missing. This is, fundamentally, what artists do. The ecological institution is creative through-and-through. Who wouldn't want to work there? (maybe the lovers of the REF!!)