An Inquiry into & a Critique of Dennett on Intentional Systems

A new working paper. Downloads HERE:

Abstract, contents, and introduction below:

* * * * *

Abstract: Using his so-called intentional stance, Dennett has identified so-called “free-floating rationales” in a broad class of biological phenomena. The term, however, is redundant upon the pattern of objects and actions to which it applies, and using it has the effect of reifying the pattern in a peculiar way. The intentional stance is itself a pattern of wide applicability. However, in a broader epistemological view, it turns out that we are pattern-seeking creatures and that phenomena identified with some pattern must be verified by other techniques. The intentional stance deserves no special privilege in this respect. Finally, it is suggested that the intentional stance may get its intellectual power from the neuro-mental machinery it recruits and not from any special class of phenomena it picks out in the world.

CONTENTS

Introduction: Reverse Engineering Dan Dennett 2
Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains 6
In Search of Dennett’s Free-Floating Rationales 9
Dan Dennett on Patterns (and Ontology) 14
Dan Dennett, “Everybody talks that way” – Or How We Think 20

Introduction: Reverse Engineering Dan Dennett

I find Dennett puzzling. Two recent back-to-back videos illustrate that puzzle. One is a version of what seems to have become his standard lecture on cultural evolution, this time in this video:

https://www.youtube.com/watch?feature=player_embedded&v=AZX6awZq5Z0

As such it has the same faults I identify in the lecture that occasioned the first post in this collection, Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains. It’s got a collection of nicely curated examples of mostly biological phenomena which Dennett crafts into an account of cultural evolution through energetic hand-waving and tap-dancing.

And then we have a somewhat shorter video, a question-and-answer session following the first:

https://www.youtube.com/watch?feature=player_embedded&v=beKC_7rlTuw

I like much of what Dennett says in this video; I think he’s right on those issues.

What happened between the first and second video? For whatever reason, no one asked him about the material in the lecture he’d just given. They asked him about philosophy of mind and about AI. Thus, for example, I agree with him that The Singularity is not going to happen anytime soon, and likely not ever. Getting enough raw computing power is not the issue. Organizing it is, and as yet we know very little about that. Similarly I agree with him that the so-called “hard problem” of consciousness is a non-issue.

How is it that one set of remarks is a bunch of interesting examples held together by smoke and mirrors while the other set is cogent and substantially correct? I think these two sets of remarks require different kinds of thinking. The second set involves philosophical analysis, and, after all, Dennett is a philosopher more or less in the tradition of 20th century Anglo-American analytic philosophy. But that first set of remarks, about cultural evolution, is about constructing a theory. It requires what I called speculative engineering in the preface to my book on music, Beethoven’s Anvil. On the face of it, Dennett is not much of an engineer.

And now things get really interesting. Consider this remark from a 1994 article [1] in which Dennett gives an overview of his thinking up to that time (p. 239):

My theory of content is functionalist […]: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering of the robot). This is a specifically ‘teleological’ notion of function (not the notion of a mathematical function or of a mere ‘causal role’, as suggested by David LEWIS and others). It is the concept of function that is ubiquitous in engineering, in the design of artefacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find ‘laws of nature’, but a species of engineering: the analysis, by ‘reverse engineering’, of the found artefacts of nature – which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.)

I am entirely in agreement with his emphasis on engineering. Biological thinking is “a species of engineering.” And so is cognitive science and certainly the study of culture and its evolution.

Earlier in that article Dennett had this to say (p. 236):

It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford, I developed a deep distrust of the methods I saw other philosophers employing, and decided that before I could trust any of my intuitions about the mind, I had to figure out how the brain could possibly accomplish the mind’s work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked – clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool ‘arts and humanities’ academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.)

My reaction to that last remark, that parenthesis, was something like: Coulda’ fooled me! For I had been thinking that an engineering sensibility is what was missing in Dennett’s discussions of culture. He didn’t seem to have a very deep sense of structure and construction, of, well, you know, how design works. And here he is telling us he coulda’ been an engineer.


Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, (vulnerable) chicks, and a predator. Does postulating FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a very useful piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found it quite helpful in getting a better sense of what Dennett’s up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting that “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was, and I presume still is, Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains that (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But at the moment I want to continue just a bit later on p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns.

In Search of Dennett’s Free-Floating Rationales

I’ve decided to take a closer look at Dennett’s notion of free-floating rationale. It strikes me as being an unhelpful reification, but explaining just why that is has turned out to be a tricky matter. First I’ll look at a passage from a recent article, “The Evolution of Reasons” [1], and then go back three decades to a major exposition of the intentional stance as applied to animal behavior [2]. I’ll conclude with some hints about metaphysics.

On the whole I’m inclined to think of free-floating rationale as a poor solution to a deep problem. It’s not clear to me what a good solution would be, though I’ve got some suggestions as to how that might go.

Evolving Reasons

Dennett opens his inquiry by distinguishing between “a process narrative that explains the phenomenon without saying it is for anything” and an account that provides “a reason–a proper telic reason” (p. 50). The former is what he calls a how come? account and the latter is a what for? account. After reminding us of Aristotle’s somewhat similar four causes Dennett gets down to it: “Evolution by natural selection starts with how come and arrives at what for. We start with a lifeless world in which there are lots of causes but no reasons, no purposes at all.” (p. 50).

Those free-floating rationales are a particular kind of what for. He introduces the term on page 54:

So there were reasons before there were reason representers. The reasons tracked by evolution I have called “free-floating rationales” (1983, 1995, and elsewhere), a term that has apparently jangled the nerves of more than a few thinkers, who suspect I am conjuring up ghosts of some sort. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. There were nine planets before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. I am not relenting; instead, I am hoping here to calm their fears and convince them that we should all be happy to speak of the reasons uncovered by evolution before they were ever expressed or represented by human investigators or any other minds.

That is, just as there is no mystery about the relationship between numbers and planets, or between centers of gravity and asteroids, so there is no mystery about the relationship between free-floating rationales and X.

What sorts of things can we substitute for X? That’s what’s tricky. It turns out those things aren’t physically connected objects. Those things are patterns of interaction among physically connected objects.

Before taking a look at those patterns (in the next section), let’s consider another passage from this article (p. 54):

Natural selection is thus an automatic reason finder that “discovers,” “endorses,” and “focuses” reasons over many generations. The scare quotes are to remind us that natural selection doesn’t have a mind, doesn’t itself have reasons, but is nevertheless competent to perform this “task” of design refinement. This is competence without comprehension.

That’s where Dennett is going, “competence without comprehension” – a recent mantra of his.
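The claim can be made concrete with a toy sketch (my illustration, not Dennett’s): a bare selection loop that refines a “design” generation after generation while representing nothing about why the design is good. The fitness function, population size, and mutation scheme below are all made up for the illustration.

```python
import random

def fitness(genome):
    # The "reason" this design is good: digits close to 7 survive.
    # Nothing in the loop below represents or comprehends that fact.
    return -sum(abs(g - 7) for g in genome)

def evolve(pop_size=20, length=5, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 9) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # "Endorsement": the fitter half survives. Just sorting, no mind.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction with blind mutation: one random gene per child.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.randint(0, 9)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # with these settings, typically converges to all 7s
```

The loop “discovers” and “focuses” the rationale (be near 7) without ever encoding it anywhere: competence without comprehension, in miniature.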

It is characteristic of Dennett’s intentional stance that it authorizes the use of intentional language, such as “discovers,” “endorses,” and “focuses”. That’s what it’s for, to allow the use of such language in situations where it comes naturally and easily. What’s not clear to me is whether or not one is supposed to treat it as a heuristic device that leads to non-intentional accounts. Clearly intentional talk about “selfish” genes is to be cashed out in non-intentional talk, and that would seem to be the case with natural selection in general.
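For what it’s worth, the kind of cashing out at issue is easy to exhibit in a trivial case (my toy example, not one Dennett offers here): the intentional description of a thermostat – it “wants” the room at the setpoint and “believes” the room is too cold – is fully redeemed by a non-intentional mechanism.

```python
class Thermostat:
    """Intentional-stance gloss: the device "wants" the room at
    `setpoint`, "believes" whatever the sensor reports, and "decides"
    to heat. Non-intentional cash-out: a comparison and a switch."""

    def __init__(self, setpoint):
        self.setpoint = setpoint

    def heater_on(self, sensed_temp):
        # The device's entire "belief-desire psychology":
        return sensed_temp < self.setpoint

t = Thermostat(setpoint=20.0)
print(t.heater_on(18.5))  # True: "it wants the room warmer"
print(t.heater_on(21.0))  # False: "its desire is satisfied"
```

The hard question is whether talk of free-floating rationales admits of any comparably explicit reduction; for the thermostat the reduction is trivial, which is precisely why the case is so easy.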

But it is one thing to talk about cashing out intentional talk in a more suitable explanatory lingo. It’s something else to actually do so. Dennett’s been talking about free-floating rationales for decades, but hasn’t yet, so far as I know, proposed a way of getting rid of that bit of intentional talk.

Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means it “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution the prokaryotes that merged had evolved independently and prior to the merging. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture, as he’d run out of time. But I’m not holding my breath until he coughs up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like that such thoughts are acceptable?

Turtles All the Way Down: How Dennett Thinks

An Essay in Cognitive Rhetoric

I want to step back from the main thread of discussion and look at something else: the discussion itself. Or, at any rate, at Dennett’s side of the argument. I’m interested in how he thinks and, by extension, in how conventional meme theorists think.

And so we must ask: Just how does thinking work, anyhow? What is the language of thought? Complicated matters indeed. For better or worse, I’m going to have to make it quick and dirty.

Embodied Cognition

In one approach the mind’s basic idiom is some form of logical calculus, so-called mentalese. While some aspects of thought may be like that, I do not think it is basic. I favor a view called embodied cognition:

Cognition is embodied when it is deeply dependent upon features of the physical body of an agent, that is, when aspects of the agent’s body beyond the brain play a significant causal or physically constitutive role in cognitive processing.

In general, dominant views in the philosophy of mind and cognitive science have considered the body as peripheral to understanding the nature of mind and cognition. Proponents of embodied cognitive science view this as a serious mistake. Sometimes the nature of the dependence of cognition on the body is quite unexpected, and suggests new ways of conceptualizing and exploring the mechanics of cognitive processing.

One aspect of cognition is that we think in image schemas, simple prelinguistic structures of experience. One such image schema is that of a container: Things can be in a container, or outside a container; something can move from one container to another; it is even possible for one container to contain another.

Memes in Containers

The container schema seems fundamental to Dennett’s thought about cultural evolution. He sees memes as little things that are contained in a larger thing, the brain; and these little things, these memes, move from one brain to another.

This much is evident on the most superficial reading of what he says, e.g. “such classic memes as songs, poems and recipes depended on their winning the competition for residence in human brains” (from “The New Replicators”). While the notion of residence may be somewhat metaphorical, the locating of memes IN brains is not; it is literal.

What I’m suggesting is that this containment is more than just a contingent fact about memes. That would suggest that Dennett has, on the one hand, arrived at some concept of memes and, on the other, observed that those memes just happen to exist in brains. Yes, somewhere Over There we have this notion of memes as the genetic element of culture; that’s what memes do. But Dennett didn’t first examine cultural processes to see how they work. As I will argue below, like Dawkins he adopted the notion by analogy with biology and, along with it, the physical relationship between genes and organisms. The container schema is thus foundational to the meme concept and dictates Dennett’s treatment of examples.

The rather different conception of memes that I have been arguing for in these notes is simply unthinkable in those terms. If memes are (culturally active) properties of objects and processes in the external world, then they simply cannot be contained in brains. A thought process based on the container schema cannot deal with memes as I have been conceiving them.

Cultural Evolution, So What?

I’d like this to be the last post in this series except, of course, for an introduction to the whole series, from Dan Dennett on Words in Cultural Evolution on through to this one. We’ll see.

I suppose the title question is a rhetorical one. Of course culture evolves, and of course we need a proper evolutionary theory in order to understand culture. But the existing body of work is not at all definitive.

In the first section of this post I have some remarks on genes and memes, observing that both concepts emerged as placeholders in a larger ongoing argument. The second section jumps right in with the assertion, building on Dawkins, that the study of evolution must start by accounting for stability before it can address evolutionary change. The third and final section takes a quick look at change by comparing two different versions of “Tutti Frutti”. There’s an appendix with some bonus videos.

From Genes to Memes

I’ve been reading the introduction to Lenny Moss, What Genes Can’t Do (MIT 2003), on Google Books:

The concept of the gene, unlike that of other biochemical entities, did not emerge from the logos of chemistry. Unlike proteins, lipids, and carbohydrates, the gene did not come on the scene as a physical entity at all but rather as a kind of placeholder in biological theory… The concept of the gene began not with the intention to put a name on some piece of matter but rather with the intention of referring to an unknown something, whatever that something might turn out to be, which was deemed to be responsible for the transmission of biological form between generations.

Things changed, of course, in 1953, when Watson and Crick established the structure of the DNA molecule, the physical locus of genes.

The concept of the meme originated in a similar way. While the general notion of cultural evolution goes back to the 19th century, it was at best of secondary, if not tertiary, importance in the 1970s when Dawkins wrote The Selfish Gene. And while others had offered similar notions (e.g. Cloak), for all practical purposes Dawkins invented the concept behind his neologism, though it didn’t begin catching on until several years after he’d published it.

The concept still functions pretty much as a placeholder. People who use it, of course, offer examples of memes and arguments for those examples. But there is no widespread agreement on a substantial definition, one that has been employed in research programs that have increased our understanding of human culture.

The Mind is What the Brain Does, and Very Strange

Having now clearly established memes as properties of objects and events in the external world, properties that provide crucial data for the operation of mental “machines,” I want to step aside from thinking about memes and cultural evolution as such and think a bit about the mind. I want to set this conversation up by, once again, quoting from Dennett’s recent interview, The Well-Tempered Mind, at The Edge:

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

A bit later:

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

That’s the problem David Hays and I set ourselves in Principles and Development of Natural Intelligence (Journal of Social and Biological Systems 11, 293 – 322, 1988). While we had something to say about control in our discussion of the modal principle, we addressed the broader question of how to construct a mind from neurons that aren’t simple logical switches.

It is by no means clear to me how Dennett, and others of his mind-set, think about the mind. Yes, it’s computational. I can deal with that. But not, as I’ve said, if it’s taken to mean that the primitive operations of the nervous system are like the operations in digital computers, not if it’s taken to imply that the mind is constituted by ‘programs’ written in the ‘mentalese’ version of Fortran, Lisp, or C++. THAT was never a very plausible idea and the more we’ve come to know about the nervous system, the less plausible it becomes.
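The contrast I have in mind can be suggested with a crude sketch (my illustration, with made-up parameters): a digital gate is stateless and maps the same inputs to the same output every time, while even the simplest dynamical neuron model carries internal state whose recent history shapes its response.

```python
def and_gate(a, b):
    # A logical switch: stateless, same output for the same inputs, always.
    return a and b

class LeakyNeuron:
    """A minimal leaky integrate-and-fire unit (illustrative parameters
    only). Its response depends on its own recent history, not just on
    the current input."""

    def __init__(self, leak=0.9, threshold=1.0):
        self.v = 0.0          # membrane potential: internal state
        self.leak = leak
        self.threshold = threshold

    def step(self, input_current):
        # Decay the old potential, add the new input.
        self.v = self.leak * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0      # fire and reset
            return 1
        return 0

n = LeakyNeuron()
# Six identical inputs, non-identical outputs, because state accumulates:
print([n.step(0.4) for _ in range(6)])  # → [0, 0, 1, 0, 0, 1]
```

Even this cartoon unit has temporal dynamics that no truth table captures; real neurons are vastly more complex still, which is the point.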

The upshot is that we need a much more fluid, a much more dynamic, conception of the mind. In Beethoven’s Anvil I talked of neural weather. Here’s how I set up that metaphor (pp. 71-72):

How Do We Account for the History of the Meme Concept?

First, in asking THAT question I do not intend a bit of cutesy intellectual cleverness: Oh Wow! Let’s get the meme meme to examine its own history. My purpose would be just as well served by examining, say, the history of the term “algorithm” or the term “deconstruction,” both originally technical terms that have more or less entered the general realm. I’m looking at the history of the meme concept because I’ve just been reading Jeremy Burman’s most interesting 2012 article, “The misunderstanding of memes” (PDF).

Intentional Change

Second, as far as I can tell, no version of cultural evolution is ready to provide an account of that history that is appreciably better than the one Burman himself supplies, and that account is straight-up intellectual history. In Burman’s account (p. 75) Dawkins introduced the meme concept in 1976

as a metaphor intended to illuminate an evolutionary argument. By the late-1980s, however, we see from its use in major US newspapers that this original meaning had become obscured. The meme became a virus of the mind.

That’s a considerable change in meaning. To account for that change Burman examines several texts in which various people explicate the meme concept and attributes the changes in meaning to their intentions. Thus he says (p. 94):

To be clear: I am not suggesting that the making of the active meme was the result of a misunderstanding. No one individual made a copying mistake; there was no “mutation” following continued replication. Rather, the active meaning came as a result of the idea’s reconstruction: actions taken by individuals working in their own contexts. Thus: what was Dennett’s context?

And later (p. 98):

The brain is active, not the meme. What’s important in this conception is the function of structures, in context, not the structures themselves as innate essences. This even follows from the original argument of 1976: if there is such a thing as a meme, then it cannot exist as a replicator separately from its medium of replication.

Burman’s core argument is a relatively simple one. Dawkins proposed the meme concept in 1976 in The Selfish Gene, but the concept didn’t take hold in the public mind. That didn’t happen until Douglas Hofstadter and Daniel Dennett recast the concept in their 1981 collection, The Mind’s I. They took a bunch of excerpts from The Selfish Gene, most of them from earlier sections of the book rather than the late chapter on memes, and edited them together and (pp. 81-82)

presented them as a coherent single work. Although a footnote at the start of the piece indicates that the text had been excerpted from the original, it doesn’t indicate that the essay had been wholly fabricated from those excerpts; reinvented by pulling text haphazardly, hither and thither, so as to assemble a new narrative from multiple sources.

It’s this re-presentation of the meme concept that began to catch on with the public. Subsequently a variety of journalistic accounts further spread the concept of the meme as a virus of the mind.

Why? On the face of it, it would seem that the virus of the mind was a more attractive and intriguing concept than Dawkins’ original, more metaphorical one. Just why that should have been the case is beside the point. It was.

All I wish to do in this note is take that observation and push it a bit further. When people read written texts they do so with the word meanings existing in their minds, which aren’t necessarily the meanings that exist in the minds of the authors of those texts. In the case of the meme concept, the people reading The Selfish Gene didn’t even have a pre-existing meaning for the term, as Dawkins introduced and defined it in that book. The same would be true for the people who first encountered the term in The Mind’s I and subsequent journalistic accounts.

Dennett Upside Down Cake: Thinking About Language Evolution in the 21st Century

About two years ago Wintz placed a comment on Replicated Typo’s About page in which he lists several papers that make good background reading for someone new to the study of linguistic and cultural evolution. I’ve just blitzed my way through one of them, Language is a Complex Adaptive System (PDF) by Beckner et al (2009)*, and have selected some excerpts for comment.

The point of this exercise is to contrast the way things look to a young scholar starting out now with the way they would have looked to a scholar starting out back in the ancient days of the 1960s, which is when both Dennett and I started out (though he’s a few years older than I am). The obvious difference is that, for all practical purposes, there was no evolutionary study of language at the time. Historical linguistics, yes; evolutionary, no. So what I’m really contrasting is the way language looks now in view of evolutionary considerations and the way it looked back then in the wake of the so-called Chomsky revolution—which, of course, is still reverberating.**

Dennett’s thinking about cultural evolution, and memetics, is still grounded in the way things looked back then, the era of top-down, rule-based, hand-coded AI systems, also known as Good Old-Fashioned AI (GOFAI). In a recent interview he’s admitted that something was fundamentally wrong with that approach. He’s realized that individual neurons really cannot be treated as simple logical switches, but rather must be treated as quasi-autonomous sources of agency with some internal complexity. Alas, he doesn’t quite know what to do about it (I discuss this interview in Watch Out, Dan Dennett, Your Mind’s Changing Up on You!). I’m certainly not going to claim that I’ve got it figured out; I don’t. Nor am I aware of anyone who makes such a claim. But a number of us have been operating from assumptions quite different from those embodied in GOFAI, and Language is a Complex Adaptive System gives a good précis of how the world looks from those different assumptions.

How the Meme became a Pest

Since I’ve been posting a lot about memes recently, and from a POV in opposition to the most prevalent memetic doctrines, I thought I’d post a link to this article (full text is downloadable):

Jeremy Trevelyan Burman. The misunderstanding of memes: Biography of an unscientific object, 1976–1999. Perspectives on Science. Spring 2012, Vol. 20, No. 1, Pages 75-104
Posted Online January 19, 2012.
(doi:10.1162/POSC_a_00057)
© 2012 by The Massachusetts Institute of Technology

Abstract: When the “meme” was introduced in 1976, it was as a metaphor intended to illuminate an evolutionary argument. By the late-1980s, however, we see from its use in major US newspapers that this original meaning had become obscured. The meme became a virus of the mind. (In the UK, this occurred slightly later.) It is also now clear that this becoming involved complex sustained interactions between scholars, journalists, and the letter-writing public. We must therefore read the “meme” through lenses provided by its popularization. The results are in turn suggestive of the processes of meaning-construction in scholarly communication more generally.

We might, of course, see Burman’s argument as an illustration of how the intentional products of brilliant minds, in this particular case Dawkins’ original 1976 conception, undergo chaotic, if not random, variation and selection in the larger cultural arena. Burman lays the original variation and popularization at the feet of Douglas Hofstadter and Daniel Dennett and their 1981 edited collection, The Mind’s I, which was more popular in its time than Dawkins’ The Selfish Gene.

In The Mind’s I, Hofstadter and Dennett presented a new version of the meme-metaphor. To construct it, they selected harmonious themes from across The Selfish Gene and presented them as a coherent single work. Although a footnote at the start of the piece indicates that the text had been excerpted from the original, it doesn’t indicate that the essay had been wholly fabricated from those excerpts; reinvented by pulling text haphazardly, hither and thither, so as to assemble a new narrative from multiple sources.

This omission could perhaps be forgiven. The collection was “composed,” after all. But, in the case of the meme, there is more to its composition than a simple departure from the original. The new version provides no clear indication that changes had been made, such as to shift the spelling and punctuation from UK to US standard; or that, in several instances, material had been lifted mid-paragraph and re-presented out of context. Indeed, comments are included from the original—without any editorial remarks—that misrepresent the whole as a coherent unit.

Whoops!

And the rest, as they say, is history. You’ll have to read the full article to get the blow-by-bloody-blow.