Response to Dron

I don't want to restate the theses of connectivism as I understand them, but it may help readers of Jon Dron's paper to identify where his exegesis leads him into misunderstanding.

Here's his paper, which you may need to read first.

Let me first and foremost be clear about my objectives in my work. Dron writes, "But I'm not so sure that, as presented here, it is a learning theory at all." Honestly, I don't care whether it's a learning theory. I also don't care whether it is original to me, whether it borrows from someone else's work, or any of the usual academic trappings.

I care precisely and only about the following:
- whether I can describe what learning is and explain why learning occurs
- whether I can use this knowledge to help people learn and make their lives better

Dron then uses that classic form of criticism, "if it is [a theory] - it is very hard to tell as it gets a bit fuzzy at precisely the point at which it seems to become one [a theory] -  then it is one that appears either inconsistent or very likely wrong." We'll discuss this.

Dron also seeks to establish a wedge between what I describe and what George Siemens described. He writes, "this is my attempt to make sense of what Stephen means and, at the end of it, to explain why connectivism (small 'c'), as George Siemens has explained it, is such a good idea not just despite but because it is not a coherent learning theory." We'll discuss this too.

--

Let's let Dron introduce connectivism:

The Connectivist account of individual learning, in which the nervous system is understood as a neural network with emergent properties and behaviours resulting from its connections that we describe as 'learning', is certainly compelling. In fact, it is so compelling that it is accepted by most proponents of almost every theory of learning without blinking an eyelid and without any contradiction.
I wish that were true. There is one trivial sense in which every theorist agrees that learning is based on a neural network as described - they agree because they have to. Cut open a human brain and that's what you see. So, trivially, everyone has to agree with that theory, because, manifestly, that is what we all see.

But network theories of learning have long been contrasted by what may be described here under the heading of the 'physical symbol system' hypothesis. This is the idea that we think, literally, in words and rules and principles. When you say something like "Freddy is learning about such-and-such by forming a generalization," you are implicitly appealing to the physical symbol system hypothesis, because you are suggesting that Freddy is learning by making a model or representation in which certain principles govern explanations and predictions.

I've spent a lifetime arguing against this proposition, so I know that what I am arguing is not accepted by every theory of learning. It follows then that the first line of Dron's criticism is a caricature of connectivism as I understand it. It's not just that neurons produce knowledge. Everyone knew that. It's how they produce knowledge that's important. And that is the core of the theory.

A theory which Dron tells us has already been invented:

We even have a word for it that has been around a great deal longer than Connectivism: connectionism.  There may be a few that believe in incorporeal souls or that, more plausibly, seek quantum explanations of consciousness but, even for these, a connectionist account is recognized as possibly incomplete but certainly true of how we think and learn at some level. 

Oh, if only I had known about it. But wait. I did. I wrote about it at length in 1990 in my long essay The Network Phenomenon: Empiricism and the New Connectionism. This thesis has been available on my website for five years now. Additionally, the word 'connectionism' appears hundreds of times in my work. I am very open about my debt to connectionism and what I am drawing from it.

The four learning mechanisms I described in my 'Connectionism as Learning Theory' paper are all to be found in Rumelhart and McClelland's Parallel Distributed Processing, the influential two-volume tome of connectionism and the basis for the industry that has sprung up around these ideas since.

There is certainly a large camp of writers who do what Dron does: they say, "Oh yes, connectionism must be true at some level," but then go back and start talking about beliefs and intentions and representations and all that. People like, say, Jerry Fodor, or (from a different perspective) Daniel Dennett. I am not one of those. I am much more like Paul Churchland or Stephen Stich. I don't think we can just dismiss connectionism by saying that it's true 'at some level' - I think that connectionist mechanisms are literally how we learn.

And I would say that almost every published author in education today falls into the 'true at some level' camp. In education, most writers - including Dron, and even to a degree Siemens - give lip service to the idea that learning is a network phenomenon, but don't apply that understanding to their actual theorizing about learning. This is where connectivism goes beyond connectionism: it asserts that learning is a network phenomenon, and then proceeds to apply that understanding to things like learning design and pedagogy (subjects about which the connectionists are largely silent).

So this is just a misrepresentation of what I say then: "Stephen asserts Connectivism's distinctiveness by extending that concept into our other networks, broadly lumped together as social networks." It is true that I believe network learning also applies to social networks. I also think it applies to networks of crickets, as described by Duncan J. Watts. But that is not why I think connectivism is distinct from connectionism (I honestly don't know whether Rumelhart, for example, would have said connectionism applies to networks of crickets - but I imagine that if he thought about it for a while he would agree that it does).

Let me review, for those who are just skimming this post:

   Connectionism - the theory describing how networks learn

   Connectivism - the theory applying that understanding to education

Whether you say one or another is or is not a theory interests me not in the least. But I would assert that (a) each is a distinct understanding of learning that can be distinguished from other approaches that genuinely are called theories, and (b) neither is widely adopted (much less consensus opinion) in learning technology, or education generally, today.

---

I wish Dron would actually go into some of my work and extract the position he is attempting to criticize rather than trying to make it up on the fly. He writes:
If Connectivism is about saying that our individual intelligence or capabilities to function as social beings cannot meaningfully exist nor be meaningfully described without considering the people and objects with which we interact then it is again hard to disagree.
Have I ever said this? Or anything like this?

He continues, 
We have a label for it: socially distributed cognition, a widely accepted and venerable family of models and theories that delves into the idea very deeply. This does not constitute a new or distinctive theory of learning either. 
Again, I do not care whether someone has previously come up with the same idea. That said, socially distributed cognition is not a description of what I am describing. Let's quote a bit from the Wikipedia reference Dron offers us:

Distributed cognition is a psychological theory that knowledge lies not only within the individual, but also in the individual's social and physical environment... In a sense, it expresses cognition as the process of information that occurs from interaction with symbols in the world. It considers and labels all phenomena responsible for this processing as ecological elements of a cognitive ecosystem. The ecosystem is the environment in which ecological elements assemble and interact in respect to a specific cognitive process. Cognition is then shaped by the transduction of information across extended and embodied modalities, the representations formed as result of their interactions and the attentive distribution of those representations toward a cognitive goal.
You can see pretty clearly why I don't mean anything like distributed cognition. Distributed cognition is an instance of the physical symbol system hypothesis.  

We can see where Dron becomes misled - he characterizes connectivism as merely "considering the people and objects with which we interact." But connectionism (and hence connectivism) is based on a much deeper understanding than that: they assert that knowledge is the set of connections between entities (and not the content of the signals being exchanged between them).

This results in what we call 'distributed representation' - see Geoffrey Hinton's introduction to the concept. In a nutshell (as Hinton says):

     - Each concept is represented by many neurons
     - Each neuron participates in the representation of many concepts

Nothing to do with symbols (or, for that matter, rules or generalizations or principles). I would argue (and have argued) that it's not even a type of representation at all, that the identification of 'concepts' in the neural network is an after-the-fact 3rd party interpretation of what is going on in a network, and not what is actually going on in a network.
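
To make the contrast with symbols concrete, here is a minimal sketch in Python (a toy illustration of my own, not code from Hinton or from my papers): each 'concept' is a pattern of activity spread across a shared pool of units, every unit contributes to every concept, and 'recognizing' a concept is just settling on the nearest stored pattern, not reading a symbol off a dedicated unit.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 20                       # a shared pool of units
    concepts = ["dog", "sheep", "cricket"]

    # Each concept is a dense pattern of activity over *all* units...
    patterns = {c: rng.normal(size=n_units) for c in concepts}

    # ...and each unit participates in the representation of *every* concept.
    print({c: round(patterns[c][0], 2) for c in concepts})   # unit 0's (different) role in each concept

    # 'Recognition' is settling on the closest stored pattern; there is no
    # dedicated 'dog' unit or 'dog' symbol anywhere in the network.
    noisy = patterns["dog"] + 0.1 * rng.normal(size=n_units)
    closest = max(concepts, key=lambda c: float(patterns[c] @ noisy))
    print(closest)                     # -> dog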

I've tried to explain this all before, at length, in my essay Principles of Distributed Representation, from 2005.

---

Another case in which Dron makes up his own account of connectivism on the fly, rather than referring to what was actually written:
if we are simply looking at first-order connections between individuals and the objects and people they interact with (the most basic hub and spoke model) then there seems no point in talking about networks at all in this context because none of the interesting things about neural networks have any meaning or relevance in a hub and spoke model.
Of course, we are not simply looking at first-order connections between individuals and the objects and people they interact with. I've never suggested we were. I've criticized the hub-and-spoke model on a number of occasions. In my essay Fairness and Democracy in Education, for example, I write, "The shape of the network that forms as a result of preferential attraction is the now-familiar hub-and-spoke network.... The problem with the hub-and-spoke network is that it is less stable."

At a certain point, the complaints that my arguments are 'fuzzy' have a lot more to do with the critic not reading them than with the arguments actually being fuzzy.

The whole discussion about first-order networks seems to be setting up some sort of straw man. I'm not sure what Dron is arguing against here (certainly not me!) but it seems very important to him:
For Connectivism to make any sense as a distinctive learning theory, there must be learning in networks beyond our own first-order connections with them  - something important about the emergent behaviours of the networks themselves. 
So, OK, the word 'emergent' is another one of those words you can find hundreds of times in my writing dating back to the 1990s. Here's something from 2005 called Emergent Learning: Social Networks and Learning Networks that explains some of my thoughts on the idea.
This is not exactly the same as 'collective intelligence', but let's let Dron introduce the idea:
This brings us into the very well-trodden field of collective intelligence, that looks at how the interactions of large groups of agents leads to emergence of behaviours and learning at a group/network level.
I mention these particular writings because they use very similar terminology and concepts to the six (I think - going on memory here) used by Stephen to characterize what makes social networks tick. 
In the field of 'collective intelligence' Dron cites specifically "Howard Bloom's Global Brain, or pretty much anything by Scott E. Page." Today's readers may be more familiar with Surowiecki's The Wisdom of Crowds.
But emergent learning is not the same as collective intelligence. The latter (to again use the same Wikipedia reference Dron cited) "may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity." But none of these is a type of emergentism; indeed, the ideas of voting and quantifying run directly counter to emergentism. I think Surowiecki is pretty clear about this; I haven't read the other two authors.
Dron should know this - it was the basis of my work on Groups and Networks - which was followed (but not cited) by his own paper on 'collectives, networks and groups' in social software.

I don't put forward six criteria; I put forward four. And yes, there is definitely overlap with what people in the field of collective intelligence say. The principles (by now familiar to anyone who has read my work) are: autonomy, diversity, interactivity, and openness.

Dron, who can't recall them, or even how many there are, writes,
It's well worth studying but it is not something that Connectivism can claim as its own territory unless there is something more to it. That something appears to lie in its treatment of networks as a fundamental unifying principle. 
I'm not sure I even want to get into how patently absurd this comment is - nobody can claim things like 'diversity' as 'its own territory'. 

For what it's worth, I first offered a version of them in Learning Networks: Theory and Practice, again in 2005 (a very productive year for me) and finalized the list of four by adapting a presentation from Charles Vest in Snowmass, Colorado. I've never claimed any of them as my own - I may have been the first to identify all four as a unit, but even this might not be the case (I've seen diversity and autonomy emphasized a lot, but not so much openness and interactivity).

No matter. I think where I have advanced things is the following: first, explaining why these principles are important in terms of network principles (and not 3rd party observer principles or folk psychology), as a response to cascade phenomena (see, for example, Cascades and Connectivity, or Community Blogging), and second, using these four principles as design principles for learning technology and learning design generally. The concept of the MOOC is based on these four principles.

But again: it doesn't matter whether I was first to talk about any of this. It doesn't matter whether I 'own' it. It matters only whether or not it is right.

---
So, is it right? Dron says it isn't.
...the crux of the issue: That Connectivism provides a unified model of how networks (including people's brains and their social networks) learn. This starts to look like the basis of a theory and seems more distinctive than any of the components so far. However, I think it is based on a spurious bit of reasoning and cannot ever work but, because it is a bit fuzzily portrayed...
If you really want to represent it that way (remember, I don't really go in for models and theories and such) then, yes, I am arguing that there is a unified model of how networks learn.

But understand: it's 'unified' in the way mathematics is unified. The fact that the same principles apply to counting dogs and counting sheep doesn't mean that I think that dogs are the same as sheep, nor does it mean I am conflating dogs and sheep, nor that I have described something fundamental about the essential nature of dogs and sheep.

Moreover, to make the point the other way: it doesn't follow (nor should it follow) that there are special properties inherent to dogs only, or sheep only, that makes us count them a special way. Yes, dogs are special and unique creatures and we should all cherish them, but we still count them the same way, one by one, and dogs aren't any the less dogs for that.

Again, I've made this sort of point before. In An Introduction to Connective Knowledge I begin with the observation that we have had two ways of talking about things in the past - by talking about qualities, which leads to syllogistic reasoning, and by talking about quantities, which leads to mathematical reasoning - and that we can now talk about a new form of connective reasoning, which underlies our understanding of things in the same way the previous two do. You can blame me for audacity, but it should be clear I'm not talking about a 'unified theory' the way Newtonian Laws or the Theory of Relativity are unified theories.

With those caveats, I don't confess to fuzziness, but let's examine Dron's understanding of the argument. He writes,
There are some topological similarities between brains and our social networks (including the mediating objects within them) but there are exactly the same kinds of topological similarity in the spread of disease, mob dynamics and the formation of traffic jams. There therefore has to be more substance to this idea than topological similarity. 

See, again, what he is doing here is not looking at what I actually say, but rather, what he thinks "has to be" in the argument.

This is actually a fairly common form of argument against connectionism, and against associationist theories of knowledge in general. Chomsky called it Plato's problem - the idea that the mechanisms in question are impoverished, that they are not sufficient to produce the phenomena in question. Dron's version isn't quite so sophisticated: he says the similarities found between brains and social networks are also found in mobs, therefore, these similarities can't explain what brains do. He doesn't tell us how the 'topological similarities' are impoverished; he just implies that they are.

Indeed, the argument here is based on innuendo rather than assertion. The phrase 'topological similarities' is just another way of saying 'surface features'. It's like arguing, 'tomatoes are red, and strawberries are red, but so are holly berries, so we can't use the red colour to explain why some are safe to eat (unlike holly berries, which are poisonous).' It sounds like a good argument, but it really requires that the 'topological similarities' in question be identical. If we're just talking about similarities, then there's lots of room where the differences can explain why one thing and not the other.

And in fact, we get exactly that sort of essential difference between learning networks and mobs. And we get it in precisely the way described by the theory. Human brains and social networks learn, while mobs do not, because human brains and social networks are more resistant to cascade phenomena than mobs. And this is because human brains and social networks are in important ways more diverse and more interactive than mobs. They are interactive in a way that mobs are not. They are defined by differences in opinion, objective and perspective, where mobs are not.

Dron goes on to describe the type of similarities I have in mind:
This is where things get sticky because, as Stephen is the first to admit, brains are different. However, he appears (this is the point at which it gets fuzzy for reasons I describe below, so I apologize if I misrepresent this) to wish to apply the same kind of principles that relate to neural networks, which have broadly uniform nodes, directed edges, constant distribution and qualitatively identical connections, invoking ideas that relate to neural networks like back-propagation, Hebbian rules and Boltzmann distributions as though they apply equally and similarly to the discontinuous, messy, asymmetrical, diverse, complicated world of social networks. 
Yes, just in the same way I would count simple things using the same mathematics I would use to count complicated things.

But note the argument here: neurons are simple, social entities (being mostly humans) are complex, and yet (says Dron) I want to apply to social networks the same sorts of principles that relate to neural networks.

Dron demands that I be more precise, but it is his own formulation that creates the fuzz. I don't simply 'apply the same kind of principles' (as though they were some kind of ointment, I guess). Rather, I am saying that similar principles describe how connections form between entities (or to use the terminology being employed by Dron, which is derived from graph theory, similar principles describe how edges are created between nodes). 

Well, I've offered four such theories, which I properly call 'learning theories', because they are theories describing how these connections are formed. The four (for those who have forgotten them) are complementarity (aka Hebbian associationism), contiguity, back-propagation, and Boltzmann settling mechanisms. The precise physical mechanism via which these principles operate may vary, but the principles underlying them may be the same. This should not be surprising to anyone; planets and billiard balls are very different, but they are still governed by things like inertia and momentum. It turns out that what Dron calls 'topological similarities' are actually deeper identities. Discount the innuendo, and you don't have an argument at all.
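
To make the first of these concrete, here is a minimal sketch in Python (a toy illustration only, not code from my paper or from Rumelhart and McClelland): a Hebbian update, in which the connection between two units is strengthened whenever the two units are active together - connections form without any symbols, rules or representations being consulted.

    import numpy as np

    def hebbian_update(weights, activations, learning_rate=0.1):
        """Strengthen each connection in proportion to the co-activation of its two endpoints."""
        return weights + learning_rate * np.outer(activations, activations)

    n_units = 4
    weights = np.zeros((n_units, n_units))

    # Units 0 and 1 repeatedly fire together; units 2 and 3 stay silent.
    for _ in range(10):
        weights = hebbian_update(weights, np.array([1.0, 1.0, 0.0, 0.0]))

    print(weights[0, 1])   # a connection has formed between the co-active units (1.0)
    print(weights[2, 3])   # no connection has formed between the silent units (0.0)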

But whatever. It is a matter of empirical fact whether or not neurons and humans associate and form connections with each other according to the same underlying principles. The isomorphism uniting graph theory, social network theory, neurophysiology and computational neural network theory is strong prima facie evidence that it is an empirical fact. Certainly the sameness of these connective principles can be observed. The determination of whether it amounts to an underlying logic will take empirical science years, maybe generations, to determine.

Where Dron suggests I'm fuzzy, I think I'm pretty precise (certainly, it seems to me I've offered a level of precision far exceeding anything constructivism, say, has to offer - because, really, what can you say about that black box called 'making meaning'? But I digress).

He writes, "His assertion seems to be that they are not exactly the same, but that they are part of the same class of explanations and, importantly, that learning happens within them in broadly related ways. But what does this actually mean?" What it actually means is what I've just asserted above.

---

Remember, above, where Dron said 'neurons are simple, social entities (being mostly humans) are complex.' Well, now he's going to backtrack on that:
Brains have levels of emergent and structural organization that are tightly hooked into our bodies, with evolved intentionality to help us stay alive, look after children, eat, mate, seek comfort, avoid danger, learn to use tools etc. 
And at the risk of getting mystical, he continues, 
They have a purpose and that purpose is us (though, evolutionarily speaking, they may equally have a group-selection role too as we are a eusocial species). Technically speaking, they are directed networks. They are inherently contained, otherwise we die. 
Technically speaking, I would respond, they are not directed networks. They are self-organizing networks. Neurons aren't created with a purpose; they adapt and change according to the circumstances they find themselves in. This is well understood. The visual cortex, for example, doesn't have the purpose of seeing; sew the eyes shut (as researchers did with cats) and the very same neurons will be employed in some other task.

That does not mean there are no innate properties to brains, neural organization, and bodies in general. Of course there are. But the innate principles are simple - as simple as they could possibly be - because the evolutionary advantage expressed in brains is the capacity to learn. Most of the rest - "stay alive, look after children, eat, mate, seek comfort, avoid danger, learn to use tools etc." - is learned behaviour. The behaviours we observe in animals - the nursing instinct, say - are very simple and non-intentional, which is why you can fool birds with black dots and why a cat grows up thinking a human being is its mother. They are responding to cues, not goals, objectives, concepts, or any of the rest of it. A sea-slug can reproduce, but can't even form a coherent thought resembling our idea of mating and parenthood.
They are inherently contained, otherwise we die.

What nonsense. Without being taught, we die. Tarzan is a myth (and even he was raised by chimps).

Why don't we die? Because we have neural nets that adapt very quickly to new information and learn from example (and are especially nimble when young). What makes these nets so good that way? Because the learning principles they physically instantiate create dynamic yet stable networks - that is, networks in which the nodes are autonomous, where they are diverse, where they interact as a coherent whole (and not an incoherent mass), and where they are open both to signals from each other, and to input (and output) with the external world. Take away any of these conditions, and they die. Not because of some mysterious (and causally inexplicable) 'nurturing instinct' or whatever.

Dron continues, 
Moreover, the things that make brains work are a specific kind of neural connection between the same types of entity. If a neuron could decide to behave differently from other neurons it would not be a good thing at all. Even a simple change in behaviour ('today I think I will reduce the strength of my signals' or 'I wonder what it would be like if I responded when things are quiet rather than when I get stimulation' or 'I'm going to talk back') would quickly degenerate into chaos and no thought at all if more than a few errant neurons began to diversify. Crucially, knowledge and learning in a neural network exists entirely within its configuration of connections, not in its individual neurons.
Well I guess that if humans turned into toads, that would be hard on social networks too.

But in fact, neurons are very diverse. And if you want to take the embodiment argument to its natural conclusion, the many different types of things that make up a human body are very diverse. And individually, each of them differs from the other in many ways - including things like internal structure, activation potential, and the rest. 

And so... still, social networks might be more diverse than human bodies. Probably they are. What would follow from that? Dron goes on at length, and I'll elide here:
Our social networks, including the mediating objects we create, are diverse, plural, parallel, reaching whatever emergent patterns they fall into by many different processes....  Suffice to say, the differences between social and neural networks go more than skin deep while their similarities lurk mainly on the surface.
We can accept that social networks are distinct from neural networks. It does not follow that a different logic must be used to describe social networks and neural networks, nor does it follow that a common logic cannot have explanatory power.

Planets are much more complex than basketballs. But put the two of them in space, and the same principles can be used to describe (and predict) their respective motions. Yes, they are (if you will) both superficially bodies in space with a certain mass and momentum. But that, it turns out, is all that matters. The same is the case, I argue, with neural networks and social networks.

What's important at this juncture is the following: I have a story that explains how and why both humans and societies, though very different, can both learn and know. Moreover, this theory can be used to make interesting and useful predictions (such as: societies organized using parliaments rather than mobs will be more stable and will last longer; such as: too much extraneous neural noise, such as a loud buzzing sound, will make it difficult to learn; such as: increasing social resistance through immunization protects society against disease; and on and on and on). By contrast, Dron's sort of explanation is this: "Some of the nodes are intentional agents, with different agendas, and not all are nice." Well, what do we learn from that? How widely applicable is this knowledge? How does this even qualify as a theory?

---

Indeed, in the end, it is the utility of networks that Dron focuses on:
... This all helps us to make effective use of social networks for learning, to find strengths and limitations in them and to design or influence systems that make use of collectives to exhibit crowd wisdom in support of individual learning.
In addition, there is (to Dron) something magical about brains:
However, though sharing some similar dynamics and topology, brains do something pretty cool that the spread of memes, the movements of pedestrians on sidewalks, the formation of ecosystems, the flocking of birds, the nest building of termites and social connections between people do not: they think.
Wait a second. Wait a second. What do you mean, "they think"? What is this magical thing that is not all the network phenomena described before - the having of experiences, the creating of associations, the cascading of neural networks, etc.?

One wants Dron to read his Gilbert Ryle. "A foreigner learns what are the functions of the bowlers, the batsmen, the fielders, the umpires and the scorers. He then says 'But there is no one left on the field to contribute the famous element of team-spirit.'" What is this thing, 'thinking', that humans do that societies do not?

I talked about this in The MOOC of One. Dron wants to import some combination of functionalism and subjective experience into his explanation of learning. My response is that it is neither necessary nor sufficient to do so, and ultimately involves the invocation of magical entities to do the explaining - some sort of cosmological teleology, a 'will to live' (as alluded to above), or at the very least, a fear of flying.

Read this, and see what I mean:
This is because they are utterly different networks organized in utterly different ways performing utterly different functions. To suggest they are similar is perfectly reasonable but it is no more or less relevant than saying that the fact that salt and sugar are similar because they are composed of electrons, protons and neutrons. In a great many important ways beyond this similarity they are alike, and it is indeed a little too easy to mistake one for the other, but you would not normally want to substitute one for the other in a recipe.
His explanation for human learning is like explaining the difference between sugar and salt by saying the function of sugar is to sweeten. But we don't explain the differences between sugar and salt by appealing to their inherent nature, or their function, or some other mysterious force. We look at how they are the same underneath. The behaviours of sugar and salt are both explained by molecular chemistry (and so is DNA, even though it is much more complex), just as the learning of humans and societies can be explained by identifying underlying principles. Indeed, it's the very fact that sugar and salt are composed of electrons, protons and neutrons that explains why they react the way they do. I don't see how Dron doesn't get this.
Dron wants to say I don't get this:
Stephen goes to some lengths to disavow that notion that that social networks and neural networks operate in the same manner. But this is why it seems fuzzy to me because he also appears to be claiming fairly unequivocally that they do.
By now this should be pretty clear, right?

Two humans will form a connection between them in a manner very different from the way two neurons form a connection between them. Humans connect on a macro scale, neurons connect on a micro scale. Human connections are more complex and have more variability. So they're different.

But we can say the same thing about them. We can say 'a change of state in one neuron can result in a change of state in the second neuron'. And in the same way, 'a change of state in one human can result in a change of state in the second human'. The nature of these states is different; in a neuron, it might be a difference in the concentration of potassium ions, in a human it might be the acquisition of a social disease (or an idea). The physical instantiation of the connection can be different, but the fact of the connection can be the same.
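
A minimal sketch in Python of that point (my own toy framing, not anything Dron or I have published): the same connection rule - a change of state in one node produces a change of state in the node it connects to - applied to two very differently instantiated kinds of node.

    from dataclasses import dataclass, field

    @dataclass
    class Neuron:
        potential: float = 0.0                   # state: something like ion concentration
        def receive(self, signal):
            self.potential += signal             # a change of state

    @dataclass
    class Person:
        ideas: set = field(default_factory=set)  # state: the ideas this person holds
        def receive(self, signal):
            self.ideas.add(signal)               # a change of state

    def propagate(signal, receiver):
        """The connection itself: a change of state in the sender changes the receiver's state."""
        receiver.receive(signal)

    n2 = Neuron()
    propagate(0.7, n2)                 # neuron to neuron: a shift in potential
    p2 = Person()
    propagate("connectivism", p2)      # person to person: the acquisition of an idea

    print(n2.potential, p2.ideas)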

I don't see a fuzziness there, any more than I see a fuzziness in the idea that we can count neurons and we can count humans, even though the physical instances we are counting are very different.

And yes, I say that these connections are the learning. In humans, neurons are just the tools we use to make connections. In societies, humans are just the tools we (they?) use to make connections. 

Just so:
... these are 'actual' learning theories and that learning is the formation of connections in a network....  social networks learn in ways that can be explained or at least described by things like back-propagation, contiguity, etc, in much the same way as we describe neural networks.  So, as far as I can make out, Stephen is telling us that a social network is both not at all like a brain and very much like one. 
Yes. Exactly.

Such an apparent contradiction can only be true, without the Moon being made of green cheese, if and only if these claims relate to two epistemologically fundamentally different entities, which is exactly the problem that he has with other theories of learning.

I don't get this sentence at all. First of all, there's no contradiction, not even remotely one (unless you think that counting dogs and counting sheep is also a contradiction). But more, why would my claims have to relate to two epistemologically fundamentally different entities?

Maybe he just has a fundamentally incorrect understanding of what 'social learning' means in this context:
Unless, of course, he is talking about ways that social networks can exhibit collective intelligence, in which case I am fully on board (and wrote a book, a PhD and numerous papers about it)
Oh, well, hallelujah, maybe he does agree with me except he thinks it's collective intelligence, about which I have already written above.
but that's another fundamentally different kind of entity, not directly about knowledge in a social network, and there are many other processes involved of which those relating to neural networks and their kin are but a very small if significant subset.  

So after all this the problem is the distinction between "knowledge in a social network" and "knowledge in a social network"? 

No, his problem is that I think  "knowledge in a social network" - that is, the knowledge humans have - and "knowledge in a social network" - that is, the knowledge networks have - are formed through the same processes of associative learning (and not through teleology, black boxes, or 'thinking'). And he fundamentally disagrees with this.

I don't know what his QED says but I think by now we've pretty much established that it is incoherent:

Therefore, either this is wrong, or this is not one theory but at a number of existing theories lumped together with only a common theme of networks to very loosely bind them or, as David Wiley suggests, it could be that it is just very incomplete. If so, it is much too incomplete to be described as  a learning theory, even if it does press-gang a bona fide learning theory (connectionism) into its service. I welcome correction if I am mistaken about this.

Well we come back to what I said at the top of the post:

   Connectionism - the theory describing how networks learn

   Connectivism - the theory applying that understanding to education

I've never made this a secret, and if Dron thinks this is press-ganging a bona fide learning theory, so be it. I honestly don't care. If the whole point of Dron's post is to say I've contributed nothing to the field, who cares?

---

Maybe the service Dron's post does is to drive a wedge between George Siemens's connectivism and my own. I actually think we're working different aspects of the same theory, but if people really need to identify why George is so great and I'm not, then this work may be useful.

So what does Dron think of Siemens's connectivism:
it is a situated set of principles, observations, perspectives and suggestions about how to learn, given the conditions that are made possible through the read-write web. It's thus a theory (using the term a little loosely but, I think, accurately) of how to learn, given a particular set of conditions, not a theory of learning.  
I guess my first response is to ask whether Dron skimmed Siemens too. Here's an excerpt from his Connectivism paper:
Learning is a process that occurs within nebulous environments of shifting core elements – not entirely under the control of the individual. Learning (defined as actionable knowledge) can reside outside of ourselves (within an organization or a database), is focused on connecting specialized information sets, and the connections that enable us to learn more are more important than our current state of knowing.
That doesn't sound like a set of principles, observations, perspectives and suggestions about how to learn. It sounds like a theory of learning to me.

I think his idea may have shifted through the years, but what's interesting about Siemens's connectivism is the idea that, if learning occurs in a network, that this network need not be constrained by the person. I agree. And we could talk a lot about the sort of things external to a person that can be a part of a person's learning network. And I think we both agree that there is a sense in which a society can have its own knowledge over and above what any individual can have - we've discussed this idea many times. But none of this is a set of principles, observations, perspectives and suggestions about how to learn. So maybe Dron skipped to the end of the paper... oh wait. Nope. Not there at all.

Both George and I believe that novel and important conclusions about how to learn follow from our theory(ies). But both of our approaches are based in an important (and maybe novel, but who knows) understanding of how learning happens. We built things like MOOCs together based on these principles. The impact our MOOCs had on the world suggests we were right about something - and probably something pretty fundamental. More than just tips and tricks, at any rate.

I'll let George deal with the rest: with whether Dron's characterization of his form of connectivism is accurate, and with whether it was all actually to be found earlier in Dron's own work, or in Bateson, Hofstadter and Illich (none of these was a particular influence on my own work, though I am a posteriori sympathetic with Illich).

I will say, in closing, this: there is no movement, and there is no high priest. Those concepts themselves are an incorrect understanding of the organization of society, at least, as I understand it. We are not unified around a single idea, following a single voice, marching to the same tune or singing Hosanna! together. These things belong to a world where we thought people were replaceable parts in a machine, not autonomous and diverse entities interacting in an open-ended (but endlessly interesting) firmament of experience and imagination.

To that end, Robert Bateman, who has influenced me:
I can't conceive of anything being more varied and rich and handsome than the planet earth. And its crowning beauty is the natural world. I want to soak it up, understand it as well as I can, and to absorb it.... and then I would like to put it together and express it in my painting. This is the way I want to dedicate my life.  
Is it a theory? Is it owned by me? Is this more complex than that? Silly questions.

Comments

  1. Brilliant rebuttal, again.

  2. I just wish I could write as much as you do (and as coherently) in "half an hour."

  3. Lovely set of ideas and much to think about and review. It raises a question I'd like to hear more about from you.

    If there is no form of symbolism, that is, if the 'physical symbol system' hypothesis is wrong wholly or in part, then what do you think about the hierarchical temporal 'sparse representation' approach that Numenta takes as it attempts to make operational some of what is understood about cortical structures? What I'm curious about is whether the next layer up in a chain of real-time (or near-real-time) processing, which is keeping track of a greater number of temporal patterns below it (in the hierarchy or sequence), can ever be thought of as a kind of non-symbolic representation, as distributed as it might be; and might in some sense be a reduction of the layer below it, because it is more abstract, so to speak? Is there no way to even metaphorically call the second layer (the smaller set in the network watching (taking as inputs) the larger set) a symbol (or creating a symbol) of the lower layer when talking to the next (possibly higher, but it doesn't matter) layer in the sequence? Or maybe you think the Numenta model is a toy universe and not that accurate concerning how the generic mechanisms of learning might track dynamic patterns over time? If the Numenta account is workable, which it seems to be for tracking and learning to predict cyclically occurring patterns of events, and if the upper levels in hierarchically arranged (or sequentially arranged) layers of networks are reductions of the levels below them, then why can't we treat the lower levels as the 'content' that the upper (next) layers are watching (taking as inputs)? I'd love your thinking on that.

  4. In a strict sense, each layer of a neural network can be thought of as a representation of the layer that precedes it. It need not be strictly hierarchical, nor need it be trained, to be thought of in this way. Because, strictly speaking, any 'x' that can be traced back to a 'y' either through direct or indirect causation may be thought of as a representation of 'y'.

    But whether it *is* a representation of 'y' is a very different story. The mist that follows a rain may be thought of as a representation of rain, in the sense that it may be taken as a sign that the rain happened. But in order for it to *be* a representation of rain, it must be seen *as such* by a third party - typically some person who sees the mist and concludes, "it must have rained."

    This is a general principle (I contend) of representational systems. The representation of 'y' by 'x' is not inherent in the representation. It must be seen *as* a representation by some third party (indeed, as is often the case, even *created* as a representation by that third party).

    Why is this important? If we consider the Numenta sparse representation - or any other multi-layer neural net similarly constructed - then it should be clear that, despite the labeling, the successive layers are not inherently representations of the preceding layers; they are only so insofar as they are interpreted as such by a third party.

    If we were (say) a thinking entity composed only of a hierarchical temporal 'sparse representation' as described, then, whatever cognition we undertook would not be *representational* cognition, because there would be no third party to frame a particular neural layer *as* a representation. It would thus be an error to say that our cognition is based on (for example) the construction of representations.

    The argument against the physical symbol system hypothesis, in my view, is equally an argument against any representational system.

