Underwood and Sellers 2015: Beyond narrative we have simulation

It is one thing to use computers to crunch data. It’s something else to use computers to simulate a phenomenon. Simulation is common in many disciplines, including physics, sociology, biology, engineering, and computer graphics (CGI special effects generally involve simulation of the underlying physical phenomena). Could we simulate large-scale literary processes?

In principle, of course. Why not? In practice, not yet. To be sure, I’ve seen the possibility mentioned here and there, and I’ve seen an example or two. But it’s not something many are thinking about, much less doing.

Nonetheless, as I was thinking about How Quickly Do Literary Standards Change? (Underwood and Sellers 2015) I found myself thinking about simulation. The object of such a simulation would be to demonstrate the principal result of that work, as illustrated in this figure:

[Figure: 19C Direction]

Each dot, regardless of color or shape, represents the position of a volume of poetry in a one-dimensional abstraction of a 3200-dimensional space – though that’s not how Underwood and Sellers explain it (for further remarks see “Drifting in Space” in my post, Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction). The trend line indicates that poetry is shifting in that space along a uniform direction over the course of the 19th century. Thus there seems to be a large-scale direction to that literary system. Could we create a simulation that achieves that result through ‘local’ means, without building a telos into the system?

The only way to find out would be to construct such a system. I’m not in a position to do that, but I can offer some remarks about how we might go about doing it.

* * * * *

I note that this post began as something I figured I could knock out in two or three afternoons. We’ve got a bunch of texts, a bunch of people, and the people choose to read texts, cycle after cycle after cycle. How complicated could it be to make a sketch of that? Pretty complicated.

What follows is no more than a sketch. There are a bunch of places where I could say more, and more places where things need to be said but I don’t know how to say them. Still, if I can get this far in the course of a week or so, others can certainly take it further. It’s by no means a proof of concept, but it’s enough to convince me that at some time in the future we will be running simulations of large-scale literary processes.

I don’t know whether or not I would create such a simulation given a budget and appropriate collaborators. But I’m inclined to think that, if not now, then within the next ten years we’re going to have to attempt something like this, if for no other reason than to see whether or not it can tell us anything at all. The fact is, at some point, simulation is the only way we’re going to get a feel for the dynamics of literary process.
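To give that claim some teeth, here is a minimal sketch, in Python, of the core loop I have in mind – a bunch of texts, a bunch of people, and cycle after cycle of people choosing and reading texts. Everything in it is a placeholder assumption: the vector stand-ins, the choice rule, the taste-update rule, the parameter values.

```python
import random

# Minimal sketch: texts and readers are bare feature vectors. Each cycle,
# every reader picks the text closest to her current taste, and reading
# nudges that taste toward the chosen text. All placeholder assumptions.

DIMS = 20    # stand-in for a high-dimensional feature space
PULL = 0.05  # how strongly one reading shifts a reader's taste

def random_vector():
    return [random.gauss(0.0, 1.0) for _ in range(DIMS)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

texts = [random_vector() for _ in range(200)]   # stand-ins for volumes
readers = [random_vector() for _ in range(50)]  # stand-ins for people

for cycle in range(100):
    for i, taste in enumerate(readers):
        chosen = min(texts, key=lambda t: distance(t, taste))
        # reading pulls the reader's taste a little toward the text
        readers[i] = [x + PULL * (c - x) for x, c in zip(taste, chosen)]
```

Even this toy raises the real design questions: how do new texts enter the system, how do texts drop out, and how do readers’ choices feed back into what gets written?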

* * * * *

It’s a long way through this post, almost 5000 words. I begin with a quick look at an overall approach to simulating a literary system. Then I add some details, starting with stand-ins for (simulations of) texts and people. Next we have processes involving those objects. That’s the basic simulation, but it’s not the end of my post. I have some discussion of things we might do with this system, followed by suggestions about extending it. I conclude with a short discussion of the E-word. Continue reading “Underwood and Sellers 2015: Beyond narrative we have simulation”

Cultural Evolution and Oral Tradition: ‘Information transfer’ at the micro scale

It’s clear that one problem I have with Dennett’s memetics is that his conception of the face-to-face mechanisms of cultural evolution – as being like the transfer of information from one computer to another – seems rather thin, unrealistically so. I tend to think that meaning is something arrived at through negotiation, whereas Dennett writes as though one-shot, one-way ‘information transfer’ is sufficient to the process.

I want to present some passages from David Rubin, Memory in Oral Tradition: The Cognitive Psychology of Epic, Ballads, and Counting-out Rhymes (Oxford UP 1995) that I think merit close consideration. These are passages about oral epic and so are relevant to thinking about folktales, myth and such, stories that are held in memory and delivered to an audience without benefit of written prompt. One thing we need to keep in mind is that, in oral culture, the notion of faithful repetition is not the same as it is in literate culture. In the literate world repetition means word-for-word. In oral cultures it does not. A faithful recounting of a story is one where the same characters are involved in the same (major) incidents in (pretty much) the same order. Word-for-word recounting is not required; in fact, such a notion is all but meaningless. With no written (or otherwise recorded) verification, how do you tell?

This passage illustrates that nicely (pp. 137-138):

Avdo Medjedovic was the best singer recorded by Lord and Parry. An example of his learning a new song provides insights into what it is that the poetic-language learner must learn about his genre (Lord, 1960; Lord & Bynum, 1974). A singer sang a song of 2,294 lines that Avdo Medjedovic had never heard before. When the song was finished, Avdo Medjedovic was asked if he could sing the same song. He did, only now the song was 6,313 lines long. The basic story line remained the same, but, to use Lord’s description, “the song lengthened, the ornamentation and richness accumulated, and the human touches of character, touches that distinguish Avdo Medjedovic from other singers, imparted a depth of feeling that had been missing” (p. 78). Avdo Medjedovic’s song retold the same story in his own words, much as subjects in a psychology experiment would retell a story from a genre with which they were familiar, but Avdo Medjedovic’s own words were poetic language and his story was a song of high artistic quality. Although the particular words changed, the words added were all traditional; and so the stability of the tradition, if not the stability of the words of a particular telling of a story, was ensured.

Several aspects of this feat are of interest. First, the song was composed without preparation and sung at great speed. There was no time for preparation before the 6,313 lines were sung, and once the song began, the rhythm allowed little time for Avdo Medjedovic to stop and collect his thoughts. Such a feat implies a well-organized memory and the equivalent of an efficient set of rules for production. Second, the song expanded yet remained traditional in style, demonstrating that more than a particular song was being recalled. Rather, rules or parts drawn from other songs were being used. Third, although Avdo Medjedovic was creative by any standards, he was not trying to create a novel song; he believed that he was telling a true story just the way he had heard it, though perhaps a little better. To do otherwise would be to distort history.

So, an expert listens to a story that runs to 2,294 lines and then immediately repeats it back, but embellished to 6,313. Would he be able to do the same thing the next day or ten days or a year later? Probably. Continue reading “Cultural Evolution and Oral Tradition: ‘Information transfer’ at the micro scale”

Where I’m at on cultural evolution, some quick remarks

I don’t know.

Some notes to myself.

1. Cultural Analogs to Genes and Phenotypes

I’ve spent a fair amount of time off and on over the last two decades hacking away at identifying cultural analogs to biological genes and phenotypes. In the past few years that effort has taken the form of an examination of Dan Dennett. I more or less like the current conceptual configuration, where I’ve got Cultural Beings as an analog to phenotypes and coordinators as analogs to genes. As far as I can tell – and I AM biased, of course – it’s the best such scheme going.

And it just lies there. So what? I don’t see that it allows me to explain anything that can’t otherwise be explained. Nor does it have obvious empirical consequences that one could test in obvious ways. It seems to me mostly a formal exercise at this point. In that respect it is no different from any version of memetics, nor from Sperber’s cultural attractor theory. These are all formal exercises with little explanatory value that I can see.

That’s got to change. But how? I note that dealing with words as evolutionary objects seems somewhat different from treating literary works (or musical works and performances, works of visual art, etc.) as evolutionary objects.

Issues: Design, Human Communication

2. Cultural Direction

Perhaps the most interesting work I’ve done in the past year has been my work on Matt Jockers’ Macroanalysis and, just recently, on Underwood and Sellers’ paper on 19th century poetry. In the case of Jockers’ work on the novel, he’d done a study of influence which I’ve reconceptualized as a demonstration that the literary system has a direction. In the case of Underwood and Sellers, they’ve found themselves looking at directionality, but they hadn’t been looking for it. Their problem was to ward off the conceptual ‘threat’ of Whig historicism; they want to see if they can accept the directionality but not commit themselves to Whiggishness, and I’ve spent some time arguing that they need not worry.

What excites me is that two independent studies have come up with what look like demonstrations of historical direction. I take this as an indication of the causal structure of the underlying historical process, which encompasses thousands upon thousands of people interacting with and through thousands of texts over the course of a century. What shows up in the texts can be thought of as a manifestation of Geist, and so these studies are about the apparent direction of Geist. Continue reading “Where I’m at on cultural evolution, some quick remarks”

Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.

Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post, likeness means similarity in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.

The question I’m asking is whether or not the literary system operates in such a way that such a book is very unlikely to have been written. If that is so, then what changed so that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, but the Congo wasn’t colonized until the last quarter of the century. European colonialism was quite extensive by that time, and much of it was quite brutal. So far as I know, the British novel in the early 19th century did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, but not entirely absent either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story, and what drives changes in salience over time? What had happened to make colonial brutality highly salient at the turn of the 20th century? Continue reading “Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015”

Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected, finding: that change in English-language poetry from 1820 to 1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):

Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.

I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled. Continue reading “Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking”

Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: Both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing: long-term change in 19th Century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); but in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

[Figure: 19C Direction]

Notice that we’re depicting time along the X-axis (horizontal), from roughly 1820 at the left to 1920 on the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let us set that aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes seem to drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction that Underwood and Sellers are talking about. That upward drift was not at all what they were expecting.
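For concreteness, here is what fitting such a trend line amounts to. The dates and scores below are fabricated stand-ins, not Underwood and Sellers’ data; only the least-squares step is the point.

```python
import numpy as np

# Fabricated toy data: a publication date and a one-dimensional
# "position" score for each volume of poetry.
rng = np.random.default_rng(0)
dates = rng.uniform(1820, 1920, size=300)
scores = 0.004 * (dates - 1820) + rng.normal(0.0, 0.1, size=300)

# Least-squares trend line, like the one drawn through the scatter.
slope, intercept = np.polyfit(dates, scores, deg=1)
print(f"score ≈ {slope:.4f} * year + {intercept:.2f}")
```

A positive slope is all that “upward drift” means at this level of description.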

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space represents poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis. That’s my reinterpretation of that axis. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation. Continue reading “Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction”
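To make “reduction or projection” concrete in code: here is one generic way of collapsing a high-dimensional text space onto a single axis. PCA is used purely to illustrate the geometry; it is not how Underwood and Sellers constructed their axis, and the document vectors are fabricated.

```python
import numpy as np

# Illustration only: project toy "volumes" living in a 3200-dimensional
# feature space onto the single direction of greatest variance (PCA).
rng = np.random.default_rng(2)
docs = rng.random((500, 3200))        # 500 fabricated volumes

centered = docs - docs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                          # first principal direction
projection = centered @ axis          # one coordinate per volume
```

Whatever the method, the result is the same kind of object: one number per volume, which can then be plotted against publication date.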

Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if a bit diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that a technical knowledge of the digital technique would give us insight into neural processes. So his usage is just a technical label without technical content.

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.

As I see it, taken only this far we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the proper computational processes to implement, we should be able to implement a conscious intelligent mind in digital technology that will not be meaningfully different from a human mind/brain. The question here, it seems to me, is: But is this possible in principle?

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality? Continue reading “Follow-up on Dennett and Mental Software”

Dennett’s WRONG: the Mind is NOT Software for the Brain

And he more or less knows it; but he wants to have his cake and eat it too. It’s a little late in the game to be learning new tricks.

I don’t know just when people started casually talking about the brain as a computer and the mind as software, but it’s been going on for a long time. But it’s one thing to use such language in casual conversation. It’s something else to take it as a serious way of investigating mind and brain. Back in the 1950s and 1960s, when computers and digital computing were still new and the territory – both computers and the brain – relatively unexplored, one could reasonably proceed on the assumption that brains are digital computers. But an opposed assumption – that brains cannot possibly be computers – was also plausible.

The second assumption strikes me as being beside the point for those of us who find computational ideas essential to thinking about the mind, for we can proceed without the somewhat stronger assumption that the mind/brain is just a digital computer. It seems to me that the sell-by date on that one is now past.

The major problem is that living neural tissue is quite different from silicon and metal. Silicon and metal passively take on the impress of purposes and processes humans program into them. Neural tissue is a bit trickier. As for Dennett, no one championed the computational mind more vigorously than he did, but now he’s trying to rethink his views, and that’s interesting to watch.

The Living Brain

In 2014 Tecumseh Fitch published an article in which he laid out a computational framework for “cognitive biology” [1]. In that article he pointed out why the software/hardware distinction doesn’t really work for brains (p. 314):

Neurons are living cells – complex self-modifying arrangements of living matter – while silicon transistors are etched and fixed. This means that applying the “software/hardware” distinction to the nervous system is misleading. The fact that neurons change their form, and that such change is at the heart of learning and plasticity, makes the term “neural hardware” particularly inappropriate. The mind is not a program running on the hardware of the brain. The mind is constituted by the ever-changing living tissue of the brain, made up of a class of complex cells, each one different in ways that matter, and that are specialized to process information.

Yes, though I’m just a little antsy about that last phrase – “specialized to process information” – as it suggests that these cells “process” information in the way that clerks process paperwork: moving it around, stamping it, denying it, approving it, amending it, and so forth. But we’ll leave that alone.

One consequence of the fact that the nervous system is made of living tissue is that it is very difficult to undo what has been learned into the detailed micro-structure of this tissue. It’s easy to wipe a hunk of code or data from a digital computer without damaging the hardware, but it’s almost impossible to do anything like that with a mind/brain. How do you remove a person’s knowledge of Chinese history, or their ability to speak Basque, and nothing else, and do so without physical harm? It’s impossible. Continue reading “Dennett’s WRONG: the Mind is NOT Software for the Brain”

On the Direction of Cultural Evolution: Lessons from the 19th Century Anglophone Novel

I’ve got another working paper available (title above):

Most of the material in this document was in an earlier working paper, Cultural Evolution: Literary History, Popular Music, Cultural Beings, Temporality, and the Mesh, which also has a great deal of material that isn’t in this paper. I’ve created this version so that I can focus on the issue of directionality, and so I’ve dropped all the material that didn’t relate to that issue. The last section, The Universe and Time, is new, as is this introduction.

* * * * *

Abstract: Matthew Jockers has analyzed a corpus of 19th century American and British novels (Macroanalysis 2013). Using standard techniques from natural language processing (NLP), Jockers created a 600-dimensional design space for a corpus of 3300 novels. There is no temporal information in that space, but when the novels are grouped according to close similarity that grouping generates a diagonal through the space that, upon inspection, is aligned with the direction of time. That implies that the process that created those novels is a directional one. Certain (kinds of) novels are necessarily earlier than others because that is how the causal mechanisms (whatever they are) work. This result has implications for our understanding of cultural evolution in general and of the relationship between cultural evolution and biological evolution.

1. Introduction: Direction in Design Space, Telos?
2. The Direction of Cultural Evolution: The Child is Father of the Man
3. Nineteenth Century English-Language Novels
4. Macroanalysis: Styles
5. Macroanalysis: Themes
6. Influence and Large Scale Direction
7. The 19th Century Anglophone Novel
8. Why Did Jockers Get That Result?
9. What Remains to be Done?
10. Literary History, Temporal Orders, and Many Worlds
11. The Universe and Time

Introduction: Evolving Along a Direction in Design Space

In 2013 Matthew Jockers published Macroanalysis: Digital Methods & Literary History. I devoted considerable blogging effort to it in 2014, including most, but not all, of the material in this working paper. In Jockers’ final study he operationalized the idea of influence by calculating the similarity between each pair of texts in his corpus of roughly 3300 19th century English-language novels. The rationale is obvious enough: If novelist K was influenced by novelist F, then you would expect her novels to resemble those of F more than those of C, whom K had never even read.

Jockers examined this data by creating a directed graph in which each text was represented by a node and each text (node) was connected only to those texts to which it had a high degree of resemblance. This is the resulting graph:

[Figure 9.3 from Macroanalysis: Jockers’ influence graph]

It is, alas, almost impossible to read this graph as represented here. But Jockers, of course, had interactive access to it and to all the data and calculations behind it. What is particularly interesting, though, is that the graph lays out the novels more or less in chronological order, from left to right (notice the coloring of the graph), though there was no temporal information in the underlying data. Much of the material in the rest of this working paper deals with that most interesting result (in particular, sections 2, 6, 7, 8, and 10).
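As a generic sketch of that construction – not Jockers’ actual pipeline, whose features, similarity measure, and cutoffs differed – one can build such a graph from cosine similarities, connecting each text only to its strongest resemblances:

```python
import numpy as np

# Fabricated stand-ins: 100 "novels" as 600-dimensional feature vectors.
rng = np.random.default_rng(1)
X = rng.random((100, 600))
X = X / np.linalg.norm(X, axis=1, keepdims=True)

sim = X @ X.T                   # cosine similarity between every pair
np.fill_diagonal(sim, -np.inf)  # a text shouldn't match itself

# Connect each text only to its five most similar neighbors.
edges = [(i, int(j)) for i in range(sim.shape[0])
         for j in np.argsort(sim[i])[-5:]]
```

Feed edges like these to a force-directed layout and you get pictures of the general kind shown above; in Jockers’ case the layout happened to string the novels out in rough chronological order.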

What I want to do here is, first of all, reframe my treatment of Jockers’ analysis in terms of something we might call a design space (a phrase I take from Dan Dennett, though I believe it is a common one in certain intellectual circles). Then I emphasize the broader metaphysical implications of Jockers’ analysis. Continue reading “On the Direction of Cultural Evolution: Lessons from the 19th Century Anglophone Novel”

Has Dennett Undercut His Own Position on Words as Memes?

Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career, that he’d opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in that interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize that.

Here’s the abstract of that talk:

Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, …) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.

While there’s nothing in that abstract that seems to undercut his position on memes, and he affirmed that position toward the end of the talk, we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, “Aching Voids and Making Voids” (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage:

But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’être.

In the case of words the signal choosers and interpreters are human beings and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug. Continue reading “Has Dennett Undercut His Own Position on Words as Memes?”