In Search of Dennett’s Free-Floating Rationales

I’ve decided to take a closer look at Dennett’s notion of the free-floating rationale. It strikes me as an unhelpful reification, but explaining just why that is has turned out to be tricky. First I’ll look at a passage from a recent article, “The Evolution of Reasons” [1], and then go back three decades to a major exposition of the intentional stance as applied to animal behavior [2]. I’ll conclude with some hints about metaphysics.

On the whole I’m inclined to think of free-floating rationale as a poor solution to a deep problem. It’s not clear to me what a good solution would be, though I’ve got some suggestions as to how that might go.

Evolving Reasons

Dennett opens his inquiry by distinguishing between “a process narrative that explains the phenomenon without saying it is for anything” and an account that provides “a reason–a proper telic reason” (p. 50). The former is what he calls a “how come?” account, the latter a “what for?” account. After reminding us of Aristotle’s somewhat similar four causes, Dennett gets down to it: “Evolution by natural selection starts with how come and arrives at what for. We start with a lifeless world in which there are lots of causes but no reasons, no purposes at all” (p. 50).

Those free-floating rationales are a particular kind of what for. He introduces the term on page 54:

So there were reasons before there were reason representers. The reasons tracked by evolution I have called “free-floating rationales” (1983, 1995, and elsewhere), a term that has apparently jangled the nerves of more than a few thinkers, who suspect I am conjuring up ghosts of some sort. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. There were nine planets before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. I am not relenting; instead, I am hoping here to calm their fears and convince them that we should all be happy to speak of the reasons uncovered by evolution before they were ever expressed or represented by human investigators or any other minds.

That is, just as there is no mystery about the relationship between numbers and planets, or between centers of gravity and asteroids, so there is no mystery about the relationship between free-floating rationales and X.
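It helps to remember what a center of gravity actually is: an abstraction computed over a physical configuration. Here is a minimal sketch in Python, with point masses invented purely for illustration:

```python
# A center of gravity is nothing ghostly: it is a weighted average
# computed over a mass distribution. These point masses are invented
# for illustration.

def center_of_gravity(masses, positions):
    """Weighted average of positions, with the masses as weights."""
    total = sum(masses)
    dims = len(positions[0])
    return tuple(
        sum(m * p[i] for m, p in zip(masses, positions)) / total
        for i in range(dims)
    )

# Three point masses standing in for an asteroid's mass distribution.
masses = [2.0, 1.0, 3.0]
positions = [(0.0, 0.0), (4.0, 0.0), (1.0, 2.0)]
print(center_of_gravity(masses, positions))  # (1.1666..., 1.0)
```

The mass distribution was there all along; the center of gravity is a pattern in that distribution that the calculation makes explicit. If Dennett’s analogy holds, a free-floating rationale should stand to something in just this way.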

What sorts of things can we substitute for X? That’s what’s tricky. It turns out that those things aren’t physically connected objects; they’re patterns of interaction among physically connected objects.

Before taking a look at those patterns (in the next section), let’s consider another passage from this article (p. 54):

Natural selection is thus an automatic reason finder that “discovers,” “endorses,” and “focuses” reasons over many generations. The scare quotes are to remind us that natural selection doesn’t have a mind, doesn’t itself have reasons, but is nevertheless competent to perform this “task” of design refinement. This is competence without comprehension.

That’s where Dennett is going, “competence without comprehension” – a recent mantra of his.

It is characteristic of Dennett’s intentional stance that it authorizes the use of intentional language, such as “discovers,” “endorses,” and “focuses.” That’s what it’s for: to license such talk in situations where it comes naturally and easily. What’s not clear to me is whether one is supposed to treat it as a heuristic device that leads to non-intentional accounts. Clearly intentional talk about “selfish” genes is to be cashed out in non-intentional talk, and that would seem to be the case with natural selection in general.

But it is one thing to talk about cashing out intentional talk in a more suitable explanatory lingo. It’s something else to actually do so. Dennett’s been talking about free-floating rationales for decades, but hasn’t yet, so far as I know, proposed a way of getting rid of that bit of intentional talk.
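To fix ideas, here is what cashing out looks like in the most trivial case. This is a toy of my own construction, not anything Dennett has offered: the intentional gloss, selection “discovering” what a trait is for, reduces to nothing but differential reproduction:

```python
import random

# Toy selection: nothing in the mechanism discovers, endorses, or
# focuses anything; there is only differential reproduction.
# All parameters are invented for illustration.

random.seed(42)

def fitness(trait, optimum=0.7):
    """Reproductive success falls off with distance from an optimum."""
    return max(0.0, 1.0 - abs(trait - optimum))

population = [random.random() for _ in range(200)]  # trait values in [0, 1]

for generation in range(50):
    # Parents reproduce in proportion to fitness; offspring mutate slightly.
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [min(1.0, max(0.0, t + random.gauss(0, 0.02))) for t in parents]

mean_trait = sum(population) / len(population)
print(f"mean trait after selection: {mean_trait:.2f}")  # settles near 0.7
```

That is the easy half of the job: the gloss that selection “discovered” what the trait is for describes the outcome rather than naming an extra ingredient in the process. The hard half, which remains undone, is showing that every appeal to a free-floating rationale can be discharged this way.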

Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for the mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if somewhat diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that technical knowledge of virtual machines would give us insight into neural processes. So his usage is a technical label without technical content.
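To make the contrast concrete, here is roughly what the term means in its technical sense: a virtual machine is software that executes instructions for a machine no hardware implements directly. This is a minimal sketch of my own, with an invented instruction set:

```python
# A virtual machine in the technical sense: a program that executes
# instructions for a machine that exists only in software.
# The instruction set here is invented for illustration.

def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# Compute (2 + 3) * 4 on the virtual machine.
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]))  # [20]
```

The technical content lives in the specified instruction set and its implementation. Dennett’s “French-speaking virtual machine” specifies neither, which is precisely the complaint.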

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.
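The claim is easy to illustrate in miniature. In this toy sketch of my own, one abstract function gets two deliberately different mechanisms, and the difference in mechanism makes no difference to the result:

```python
# Substrate neutrality in miniature: one abstract function, two
# different mechanisms, identical results.

def xor_arithmetic(a, b):
    """XOR computed by arithmetic."""
    return (a + b) % 2

def xor_lookup(a, b):
    """XOR computed by table lookup, a different mechanism entirely."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

for a in (0, 1):
    for b in (0, 1):
        assert xor_arithmetic(a, b) == xor_lookup(a, b)
print("same function, different mechanisms")
```

Silicon, vacuum tubes, and gears differ from one another as these two functions do: in mechanism, not in what is computed.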

As I see it, taken only this far we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the right computational processes to implement, we should be able to realize a conscious, intelligent mind in digital technology that is not meaningfully different from a human mind/brain. The question here, it seems to me, is whether this is possible even in principle.

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality?

Memetic Sophistry

Over at the Psychology Today blog complex, Joseph Carroll is taking Norman Holland to task on remarks that Holland made concerning the relationship between the reader of a literary text and the text itself. Though I disagree with Carroll on many matters, I agree with him on this one particular issue. Beyond that, I think his critique of Holland can also be applied to Susan Blackmore’s equivocations on memes. Here’s what Carroll says about Holland:

This whole way of thinking is a form of scholastic sophistry, useless and sterile. It produces verbal arguments that consist only in fabricated and unnecessary confusions, confusions like that which you produce as your conclusion in the passage you cited from your book: “the reader constructs everything” (p. 176). This conclusion seems plausible because it slyly blends two separate meanings of the word “constructs.” One meaning is that our brains assemble percepts into mental images. That meaning is correct. The other meaning is that our brains assemble percepts that are not radically constrained by the signals produced in the book. That meaning is incorrect. Once you have this kind of ambiguity at work for you, you can shuffle back and forth between the two meanings, sometimes suggesting the quite radical notion that books don’t “impose” any constraints—any meanings—on readers; and sometimes retreating into the safety of the correct meaning: that our brains assemble percepts.

Blackmore equivocates in a similar fashion on the question of whether or not memes are active agents. Here’s a snippet from a TED talk she gave last year:

The way to think about memes, though, is to think, why do they spread? They’re selfish information, they get copied if they can. But some of them will be copied because they’re good, or true, or useful, or beautiful. Some of them will be copied even though they’re not. Some, it’s quite hard to tell why.

Here she talks of memes as though they were agents of some kind: they’re selfish and they try to get copied. A bit later she says:

So think of it this way. Imagine a world full of brains and far more memes than can possibly find homes. The memes are trying to get copied, trying, in inverted commas, i.e., that’s the shorthand for, if they can get copied they will. They’re using you and me as their propagating copying machinery, and we are the meme machines.

Here memes are using us as machines for propagating themselves. And then we have this passage where she talks about a war between memes and genes:

So you get an arms race between the genes which are trying to get the humans to have small economical brains and not waste their time copying all this stuff, and the memes themselves, like the sounds that people made and copied – in other words, what turned out to be language – competing to get the brains to get bigger and bigger. So the big brain, on this theory, is driven by the memes.

The term “meme,” as we know, was coined by Richard Dawkins, who is also responsible for anthropomorphizing genes as selfish agents in biological evolution. Dawkins knows perfectly well that genes aren’t agents, and is quite capable of explicating that selfishness in terms that eliminate the anthropomorphism, which is but a useful shorthand, albeit a shorthand that has caused a great deal of mischief.
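And the cashing out is not hard to exhibit. In this toy sketch of my own, not Dawkins’s or Blackmore’s, “memes are trying to get copied” reduces to nothing more than differential copying rates:

```python
import random

# "Selfish" memes without agency: the variants differ only in how
# often they get repeated, and the better copier comes to dominate.
# The variants and probabilities are invented for illustration.

random.seed(0)
transmissibility = {"catchy": 0.9, "dull": 0.4}
population = ["catchy"] * 10 + ["dull"] * 90  # memes currently in heads

for generation in range(30):
    # Each meme is repeated with a probability fixed by its variant.
    copies = [m for m in population if random.random() < transmissibility[m]]
    if not copies:
        break
    # Attention is limited: resample back to 100 available "brain slots".
    population = random.choices(copies, k=100)

# "catchy" should dominate by the end.
print(population.count("catchy"), "catchy,", population.count("dull"), "dull")
```

Nothing in that loop wants anything; “they get copied if they can” is the whole story. Blackmore’s equivocation lies in sliding from this harmless shorthand to talk of memes using us and warring with our genes, where the agency does real argumentative work.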
