An Inquiry into & a Critique of Dennett on Intentional Systems

A new working paper. Downloads HERE:

Abstract, contents, and introduction below:

* * * * *

Abstract: Using his intentional stance, Dennett has identified so-called “free-floating rationales” in a broad class of biological phenomena. The term, however, is redundant on the pattern of objects and actions to which it applies, and using it has the effect of reifying the pattern in a peculiar way. The intentional stance is itself a pattern of wide applicability. In a broader epistemological view, however, it turns out that we are pattern-seeking creatures and that phenomena identified with some pattern must be verified by other techniques. The intentional stance deserves no special privilege in this respect. Finally, it is suggested that the intentional stance may get its intellectual power from the neuro-mental machinery it recruits and not from any special class of phenomena it picks out in the world.

CONTENTS

Introduction: Reverse Engineering Dan Dennett
Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains
In Search of Dennett’s Free-Floating Rationales
Dan Dennett on Patterns (and Ontology)
Dan Dennett, “Everybody talks that way” – Or How We Think

Introduction: Reverse Engineering Dan Dennett

I find Dennett puzzling. Two recent back-to-back videos illustrate that puzzle. One is a version of what seems to have become his standard lecture on cultural evolution:

https://www.youtube.com/watch?feature=player_embedded&v=AZX6awZq5Z0

As such it has the same faults I identify in the lecture that occasioned the first post in this collection, Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains. It’s got a collection of nicely curated examples of mostly biological phenomena which Dennett crafts into an account of cultural evolution through energetic hand-waving and tap-dancing.

And then we have a somewhat shorter video, a question-and-answer session following the first:

https://www.youtube.com/watch?feature=player_embedded&v=beKC_7rlTuw

I like much of what Dennett says in this video; I think he’s right on those issues.

What happened between the first and second video? For whatever reason, no one asked him about the material in the lecture he’d just given. They asked him about philosophy of mind and about AI. There, for example, I agree with him that The Singularity is not going to happen anytime soon, and likely not ever. Getting enough raw computing power is not the issue. Organizing it is, and as yet we know very little about that. Similarly, I agree with him that the so-called “hard problem” of consciousness is a non-issue.

How is it that one set of remarks is a bunch of interesting examples held together by smoke and mirrors while the other set is cogent and substantially correct? I think these two sets of remarks require different kinds of thinking. The second involves philosophical analysis, and, after all, Dennett is a philosopher more or less in the tradition of 20th century Anglo-American analytic philosophy. But that first set of remarks, about cultural evolution, is about constructing a theory. It requires what I called speculative engineering in the preface to my book on music, Beethoven’s Anvil. On the face of it, Dennett is not much of an engineer.

And now things get really interesting. Consider this remark from a 1994 article [1] in which Dennett gives an overview of his thinking up to that time (p. 239):

My theory of content is functionalist […]: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering of the robot). This is a specifically ‘teleological’ notion of function (not the notion of a mathematical function or of a mere ‘causal role’, as suggested by David LEWIS and others). It is the concept of function that is ubiquitous in engineering, in the design of artefacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find ‘laws of nature’, but a species of engineering: the analysis, by ‘reverse engineering’, of the found artefacts of nature – which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.)

I am entirely in agreement with his emphasis on engineering. Biological thinking is “a species of engineering.” And so is cognitive science and certainly the study of culture and its evolution.

Earlier in that article Dennett had this to say (p. 236):

It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford, I developed a deep distrust of the methods I saw other philosophers employing, and decided that before I could trust any of my intuitions about the mind, I had to figure out how the brain could possibly accomplish the mind’s work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked – clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool ‘arts and humanities’ academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.)

My reaction to that last remark, that parenthesis, was something like: Coulda’ fooled me! For I had been thinking that an engineering sensibility is what was missing in Dennett’s discussions of culture. He didn’t seem to have a very deep sense of structure and construction, of, well, you know, how design works. And here he is telling us he coulda’ been an engineer.


Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, its (vulnerable) chicks, and a predator. Does positing FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a very useful piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found this quite useful in getting a better sense of what Dennett’s up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting, “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was, and I presume still is, Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But for the moment I want to continue with a passage a little further on in p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns.

Turtles All the Way Down: How Dennett Thinks

An Essay in Cognitive Rhetoric

I want to step back from the main thread of discussion and look at something else: the discussion itself. Or, at any rate, at Dennett’s side of the argument. I’m interested in how he thinks and, by extension, in how conventional meme theorists think.

And so we must ask: Just how does thinking work, anyhow? What is the language of thought? Complicated matters indeed. For better or worse, I’m going to have to make it quick and dirty.

Embodied Cognition

In one approach, the mind’s basic idiom is some form of logical calculus, so-called mentalese. While some aspects of thought may be like that, I do not think it is basic. I favor a view called embodied cognition:

Cognition is embodied when it is deeply dependent upon features of the physical body of an agent, that is, when aspects of the agent’s body beyond the brain play a significant causal or physically constitutive role in cognitive processing.

In general, dominant views in the philosophy of mind and cognitive science have considered the body as peripheral to understanding the nature of mind and cognition. Proponents of embodied cognitive science view this as a serious mistake. Sometimes the nature of the dependence of cognition on the body is quite unexpected, and suggests new ways of conceptualizing and exploring the mechanics of cognitive processing.

One aspect of cognition is that we think in image schemas, simple prelinguistic structures of experience. One such image schema is that of a container: Things can be in a container, or outside a container; something can move from one container to another; it is even possible for one container to contain another.

Memes in Containers

The container schema seems fundamental to Dennett’s thought about cultural evolution. He sees memes as little things that are contained in a larger thing, the brain; and these little things, these memes, move from one brain to another.

This much is evident on the most superficial reading of what he says, e.g. “such classic memes as songs, poems and recipes depended on their winning the competition for residence in human brains” (from “The New Replicators”). While the notion of residence may be somewhat metaphorical, the locating of memes IN brains is not; it is literal.

What I’m suggesting is that this containment is more than just a contingent fact about memes. That would suggest that Dennett has, on the one hand, arrived at some concept of memes and, on the other hand, observed that those memes just happen to exist in brains. Yes, somewhere Over There we have this notion of memes as the genetic element of culture; that’s what memes do. But Dennett didn’t first examine cultural processes to see how they work. As I will argue below, like Dawkins he adopted the notion by analogy with biology and, along with it, the physical relationship between genes and organisms. The container schema is thus foundational to the meme concept and dictates Dennett’s treatment of examples.

The rather different conception of memes that I have been arguing for in these notes is simply unthinkable in those terms. If memes are (culturally active) properties of objects and processes in the external world, then they simply cannot be contained in brains. A thought process based on the container schema cannot deal with memes as I have been conceiving them.