Wednesday, 21 May 2014

Stephen Downes defends Connectivism (again)

It's not uncommon that after failed experiments there is a bit of wailing and gnashing of teeth as the rationale which underpinned them is critiqued. This seems to be going on with the MOOC at the moment. Stephen Downes has become famous through innovating with learning technologies. He has defended his innovations with a theory of learning (one feels that it is his theory he is most proud of, not the interventions). He has done a lot of defending of his theory over the last few months, spurred on by some interesting critiques by Marc Clarà and Elena Barberà (see Three problems with the connectivist conception of learning) and Matthias Melcher.

No doubt Downes's defences have been driven by the apparent failure of MOOCs (which would indicate something is wrong with his theory). But critique is good because it can be the impetus for very clear statements of a theory, and to his credit, Downes has made such statements in response to the criticisms.

To start with, Downes defends connectivism as a theory about knowledge-as-pattern. He says:
The claim made by connectivism [...] is that learning is a process of pattern recognition, nothing more or less. The warning inherent in connectivist theory is that there is no apriori privileged set or type of pattern that may be learned: so while you may think that you are presenting shapes to learners, they may be learning to recognize colours. And that any pattern inherent in your teaching - including bad habits, prejudice, whatever - will also be learned by the people watching you.
This is an epistemological position which is consistent with much thinking in cybernetic learning theory. In particular, it can be favourably compared to Pask's epistemology (of which Laurillard's conversation model is a rather superficial representation), Luhmann's theory of communication and (perhaps most usefully) Heinz von Foerster's theory of Eigenforms in perception. What these all indicate (though Downes doesn't really say it) is that concepts and knowledge are effectively configurations of stabilities within a network structure. Each node within that structure will have a different view (shapes or colours), but what matters are the dynamics between the nodes, as this is where the knowledge resides.

He says something like this below, but he is particularly anxious to avoid any kind of Kantian Transcendental Subject:
Finally - to be clear - talk about "recognizing" a pattern does not involve some homunculus inside our head doing some conceptual work. The phenomenon of pattern recognition is a well-known property of neural networks. The point I make is epistemological: what makes something a 'pattern' is the fact that it is recognized by neural nets. There is no apriori set of entities, 'patterns' (or 'concepts', or whatever) that must somehow [be] acquired and placed in the mind.
The allusion to neural nets is (I think) unfortunate: on the one hand, he presents a trans-personal epistemology, and on the other a mentalist biological reduction. He seems particularly insensitive to the "people are neural nets" reduction. Worse for him, however, is that Kant is still there! He's just making a different kind of transcendental argument for epistemology - although it's a popular argument among cyberneticians.
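Downes's shapes-versus-colours warning is easy to demonstrate. In the toy sketch below (my own illustration, not Downes's code), a single perceptron node is trained on examples where the teacher intends shape to be the lesson, but colour happens to correlate perfectly with the labels - and it is colour that the weights latch onto:

```python
import random

random.seed(0)

# Each example has two features, shape and colour, both coded +/-1.
# The teacher intends shape to matter, but shape is actually random
# noise, while colour correlates perfectly with the label.
data = []
for _ in range(200):
    colour = random.choice([1.0, -1.0])
    shape = random.choice([1.0, -1.0])      # uninformative
    data.append(((shape, colour), colour))  # label == colour

# Train a single perceptron node on the data.
w = [0.0, 0.0]
for (x, y) in data * 5:
    pred = 1.0 if w[0] * x[0] + w[1] * x[1] >= 0 else -1.0
    if pred != y:                            # update only on mistakes
        w[0] += y * x[0]
        w[1] += y * x[1]

# The colour weight dominates: the net has "learned colours",
# whatever the teacher thought was being presented.
print(w)
```

Nothing in the learning rule privileges the "intended" pattern; whatever regularity is actually in the data is what gets encoded.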

But there's interesting stuff here:
A connection is a state. Roughly speaking, it is a communications channel that exists between two entities such that a change in the state of one entity can result in a change in the state of the other entity. Usually, we depict these channels as physical, for example, the axons of a neuron, or a telephone wire carrying signals.

Connections can be extremely complex; there is no requirement whatsoever that they be two-state on-off types of things. Connection strength can vary, the frequency of signals can vary, the nature of signals can vary, there may be multiple strands and different types of connection between two nodes (hence, I can send George an email and a Tweet). The nature of the states of the nodes can be variable as well. A signal from one node to another may have a cumulative effect, triggering a reaction only after a tipping point is reached, for example.
Again, the neuro-biologism here is blind to the accusation of mentalism (that the causes for behaviour are in heads), although we might give him the benefit of the doubt that he means a Paskian P-individual (psychological)/M-individual (mechanical) set-up. For Pask, the M-individual was an individual brain - the machinery (hardware) upon which thought takes place - whereas a P-individual is the 'software' that runs on the hardware (e.g. consciousness, communication, knowledge, etc.). Pask had the idea that a P-individual could straddle many M-individuals, so social institutions and discourses were P-individuals. By contrast, Downes doesn't see real brains in real heads, but instead neural networks. These are "ideal brains" - 'brains', it appears, that reach out between individuals through their connections. In Pask's language, a neural net is software, not hardware. I'm not clear where Downes sees the actual hardware of real brains and bodies.
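The cumulative, tipping-point behaviour Downes describes in the quoted passage can be sketched in a few lines. This is a minimal illustration under my own assumptions (a scalar accumulator and a fixed threshold), not a specification he gives:

```python
class Node:
    """A node whose state changes only once accumulated input
    crosses a tipping point; connection strength scales each signal."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.accumulated = 0.0
        self.fired = False

    def receive(self, signal, strength=1.0):
        # Connections need not be on-off: strength varies the effect.
        self.accumulated += signal * strength
        if self.accumulated >= self.threshold:
            self.fired = True
        return self.fired

n = Node(threshold=3.0)
print(n.receive(1.0))         # False: below the tipping point
print(n.receive(1.0, 0.5))    # False: 1.5, still accumulating
print(n.receive(2.0))         # True: 3.5 >= 3.0, the node fires
```

A single signal changes nothing; the change of state is an effect of the whole history of interaction.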

Paskian resonances and Downes's idealism are further exposed when he says:
An interaction is the actual event where a change of state in one entity takes place with the result that a change of state in the second entity takes place. We may also think of an interaction as a mass noun, referring to a set or a series of such changes of states. Again, we usually think of an interaction as something physical, for example, a signal sent down a communications channel.
This gets to the nub of the problem, and it is a problem which affects cybernetic theories of learning as well as Downes's variant. What is an event?

Cybernetics talks of difference; Gregory Bateson talks of a "difference that makes a difference". But in reality, we never see a difference. It only leaves a trace: it itself is a kind of absence. Our bodies and expectations change in the light of things which aren't there. What we think of as a kind of knock-on 'signalling' process is a post-hoc construction. As Alain Badiou, John Maynard Smith, Gregory Bateson, Terry Deacon, Jakob von Uexküll and others realise, the physicalist metaphor of signalling cannot be right. For all of these thinkers, absences become fundamental to the processes associated with 'information', and it is this acknowledgement that begins to help us make sense of the many uses of that strange word - from epigenesis to learning, Shannon information, Hawking radiation and "digital ontology".

Finally Downes gets to the thorny topic of meaning:
So the criticism is essentially that connectivism doesn't have a built-in semiotics. It doesn't have a sense in which a communication between two entities is about a third entity. And it's true that representation in communication doesn't work this way in connectivism. Rather, connectivism works according to two principles: direct representation, and distributed representation.
  • Direct representation is the idea that the signal is its own message (think of it as a corollary to Gibson's direct perception). We can think of this along the lines of the concept of content addressable memory in computer science. The message is its own content. True, we as a sender may intend the message to refer to or represent some object or entity, but what is in fact received is only the sentence itself, which must carry all its representational content with it. 
  • Distributed representation is [the idea that] concepts (for lack of a better word) are stored not as single entities in the mind, but as sets of connections between entities, so they exist not just in one place, but in many places. What's significant here is that the same set of connections is used to store not one but many concepts (indeed, all concepts). So when the set of connections defining one concept is changed, so also is the set of connections defining many other concepts. 
But the 'signalling' metaphor - that rationalistic construct we've become accustomed to as a way of making sense of the strange things that happen to us - is everywhere. As is the physicalist reduction of consciousness (content-addressable memory!). Downes appears to be struggling here, and he's a long way from any account of real people. It's also clear that he doesn't see the connections which aren't there, nor is there any space in his theory for absent connections to have causal power.
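For what it's worth, the 'distributed representation' Downes invokes is captured by the standard Hopfield-network construction: every stored pattern is superposed in one shared weight matrix, so changing the connections for one pattern necessarily touches the others. A textbook sketch (my assumptions, not Downes's own formulation):

```python
import numpy as np

# Two patterns ("concepts") stored in the SAME weight matrix
# by Hebbian superposition - no pattern has its own storage.
patterns = np.array([
    [1, -1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1,  1],
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)        # each pattern reshapes every connection
np.fill_diagonal(W, 0)         # no self-connections

def recall(probe, steps=5):
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)   # synchronous update
    return s

# A corrupted version of the first pattern settles back to it:
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
print(recall(noisy))
```

Storing a further pattern means adding another outer product to W, altering the connections that already encode the first two - exactly the interdependence of concepts Downes points to.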

But absent connections are very real in the MOOC world. They represent the people who didn't engage with his platform, or who dropped off the course after a couple of weeks. I guess he didn't want to think about them. But in these people lie not just accounts of learning, but accounts of aspiration, freedom, injustice, poverty, technocracy and disappointment. It's only in the missing parts that Downes's ideal disembodied network acquires a body.

If he did think a bit more about these people, he might find cause to rethink his theory!
