Form, Event, and Text in an Age of Computation

I’ve put another article online. This is not a working paper. It is a near-final draft of an article I will be submitting for publication once I have had time to let things settle in my mind. I’d appreciate any comments you have. You can download the paper in the usual places:

Academia.edu: https://www.academia.edu/27706433/Form_Event_and_Text_in_an_Age_of_Computation
SSRN: http://ssrn.com/abstract=2821678

Abstract: Using fragments of a cognitive network model for Shakespeare’s Sonnet 129, we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I then adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

CONTENTS

Computational Semantics: Model and Text
Literary Form, Attridge and Latour
Obama’s Pinckney Eulogy as Performance
Obama’s Pinckney Eulogy as Text
Description in Method

Form, Event, and Text in an Age of Computation

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction is a bit different from that prevalent among practitioners of distant reading. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine.

But, for whatever reason, digital critics have little or no interest in computation as something one enacts while reading any one of those texts. That is the sense of computation that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was the idea of computation that drove the so-called cognitive revolution in its early years:

… the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in “memory” and retrieve them again, classify inputs, recognize patterns, and so on. Whether they do these things just like people was less important than that they do them at all. The coming of the computer provided a much-needed reassurance that cognitive processes were real; that they could be studied and perhaps understood.

Much of the work in the newer psychologies is conducted in a vocabulary that derives from computing and, in many cases, involves computer simulations of mental processes. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools through which we could construct models with differentiated components and processes meshing like, well, clockwork. It gave us a way to objectify our theories.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is not necessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms, digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic device. Specifically, I propose that it helps us think about and describe literary form in ways we haven’t done before.

First I present a model of computational semantics for Shakespeare’s Sonnet 129. This affords us a distinction between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.
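To make that three-way distinction concrete, here is a toy sketch in Python. The network, the word list, and the linking rule are my inventions for illustration only; they are not fragments of the actual Sonnet 129 model. The point is simply that (1) the network, (2) the string of signifiers, and (3) the path traced through the network are three different kinds of object:

```python
# (1) The cognitive network: nodes are concepts, edges are associative links.
# This toy graph is invented for illustration; it is not the Sonnet 129 model.
NETWORK = {
    "lust":   ["shame", "desire", "action"],
    "shame":  ["lust", "waste"],
    "desire": ["lust", "action", "heaven"],
    "action": ["desire", "waste"],
    "waste":  ["shame", "action"],
    "heaven": ["desire", "hell"],
    "hell":   ["heaven", "lust"],
}

# (2) The text, considered merely as a string of signifiers, each of which
# (in this drastic simplification) cues exactly one node in the network.
TEXT = ["lust", "action", "waste", "shame", "heaven", "hell"]

def attention_path(network, text):
    """(3) The path attention traces through (1) under constraints from (2):
    each signifier cues a node; we note whether the move follows an existing
    associative link or forces attention to jump."""
    current, steps = text[0], [text[0]]
    for cue in text[1:]:
        link = "->" if cue in network[current] else "~jump~>"
        steps.append(f"{link} {cue}")
        current = cue
    return " ".join(steps)

print(attention_path(NETWORK, TEXT))
# lust -> action -> waste -> shame ~jump~> heaven -> hell
```

Nothing here depends on the toy details; the point is only that the path is a distinct object, derived from but not identical to either the network or the string.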

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. I prefer to think of it as speculative engineering. To be sure, engineering, like science, is technical. But engineering is about design and construction, perhaps even Latourian composition. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2012):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it.

Chomsky, Hockett, Behaviorism and Statistics in Linguistic Theory

Here’s an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, 2016, pp. 49–73), by Gregory Radick (abstract below). Commenting on it at Dan Everett’s FB page, Yorick Wilks observed: “It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer.”

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.
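Wilks’s point is easy to illustrate. Here is a toy probabilistic (CKY) parser in Python – the grammar, the probabilities, and the sentence are all invented for illustration, not drawn from Hockett or from any real treebank. Given a sentence with two grammatical parses, the rule probabilities pick one:

```python
from collections import defaultdict

# Binary rules in Chomsky normal form: head -> (left, right), with a probability.
RULES = [
    ("S",  ("NP", "VP"), 1.0),
    ("VP", ("V",  "NP"), 0.6),
    ("VP", ("VP", "PP"), 0.4),   # "saw ... with the telescope" (VP attachment)
    ("NP", ("Det", "N"), 0.5),
    ("NP", ("NP", "PP"), 0.3),   # "the man with the telescope" (NP attachment)
    ("PP", ("P",  "NP"), 1.0),
]
LEXICON = {
    "I": [("NP", 0.2)], "saw": [("V", 1.0)], "the": [("Det", 1.0)],
    "man": [("N", 0.5)], "telescope": [("N", 0.5)], "with": [("P", 1.0)],
}

def cky(words):
    n = len(words)
    chart = defaultdict(dict)          # (i, j) -> {category: (prob, backpointer)}
    for i, w in enumerate(words):      # seed the chart with lexical entries
        for cat, p in LEXICON[w]:
            chart[(i, i + 1)][cat] = (p, w)
    for span in range(2, n + 1):       # build longer spans from shorter ones
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for head, (left, right), p in RULES:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        prob = p * chart[(i, k)][left][0] * chart[(k, j)][right][0]
                        if prob > chart[(i, j)].get(head, (0.0, None))[0]:
                            chart[(i, j)][head] = (prob, (left, right, k))
    return chart

def tree(chart, cat, i, j):
    back = chart[(i, j)][cat][1]
    if isinstance(back, str):          # lexical entry
        return f"({cat} {back})"
    left, right, k = back
    return f"({cat} {tree(chart, left, i, k)} {tree(chart, right, k, j)})"

words = "I saw the man with the telescope".split()
chart = cky(words)
print(f"best parse, p = {chart[(0, len(words))]['S'][0]:.4f}")
print(tree(chart, "S", 0, len(words)))
```

With the rule probabilities stripped out, the parser would have no way to choose between the two attachments of “with the telescope”; that tie-breaking is the work the statistics do.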

Dennett on Memes, Neurons, and Software

Another working paper, links:
Academia.edu: https://www.academia.edu/16514603/Dennett_on_Memes_Neurons_and_Software
SSRN: http://ssrn.com/abstract=2670107

Abstract, contents, and introduction below.

* * * * *

Abstract: In his work on memetics Daniel Dennett does a poor job of negotiating the territory between philosophy and science. The analytic tools he has as a philosopher aren’t of much use in building accounts of the psychological and social mechanisms that underlie cultural processes. The only tool Dennett seems to have at his disposal is analogy. That’s how he builds his memetics, by analogy from biology on the one hand and computer science on the other. These analogies do not work very well. To formulate an evolutionary account of culture one needs to construct one’s gene and phenotype analogues directly from the appropriate materials, neurons and brains in social interaction. Dennett doesn’t do that. Instead of social interaction he has an analogy to apps loading into computers. Instead of neurons he has homuncular agents that are suspiciously like his other favorite homuncular agents, memes. It doesn’t work.

CONTENTS

Introduction: Too many analogies, no construction
Watch Out, Dan Dennett, Your Mind’s Changing Up on You!
The Memetic Mind, Not: Where Dennett Goes Wrong
Turtles All the Way Down: How Dennett Thinks
A Note on Dennett’s Curious Comparison of Words and Apps
Has Dennett Undercut His Own Position on Words as Memes?
Dennett’s WRONG: the Mind is NOT Software for the Brain
Follow-up on Dennett and Mental Software

Introduction: Too many analogies, no construction

Just before the turn of the millennium Dennett gave an interview in The Atlantic in which he observed:

In the beginning, it was all philosophy. Aristotle, whether he was doing astronomy, physiology, psychology, physics, chemistry, or mathematics — it was all the same. It was philosophy. Over the centuries there’s been a refinement process: in area after area questions that were initially murky and problematic became clearer. And as soon as that happens, those questions drop out of philosophy and become science. Mathematics, astronomy, physics, chemistry — they all started out in philosophy, and when they got clear they were kicked out of the nest.

Philosophy is the mother. These are the offspring. We don’t have to go back a long way to see traces of this. The eighteenth century is quite early enough to find the distinction between philosophy and physics not being taken very seriously. Psychology is one of the more recent births from philosophy, and we only have to go back to the late nineteenth century to see that.

My sense is that the trajectory of philosophy is to work on very fundamental questions that haven’t yet been turned into scientific questions.

This is a standard view, and it’s one I hold myself, though it’s not clear to me just how it would look when the historical record is examined closely.

But I do think that, in his recent work, Dennett’s been having troubles negotiating the difference between philosophy, in which he has a degree, and science. For he is also a cognitive scientist in good standing, and that phrase – “cognitive science” – stretches all over the place, leaving plenty of room to get tripped up over the difference between philosophy and science.

Dennett has spent much of his career as a philosopher of artificial intelligence, neuroscience, and cognitive psychology. That is to say, he’s looked at the scientific work in those disciplines and considered philosophical implications and foundations. More recently he’s done the same thing with biology.

Now, it is one thing to apply the analytic tools of philosophy to the fruits of those disciplines. But Dennett has also been interested in memetics, a putative evolutionary account of culture. The problem is that there is no science of memetics for Dennett to analyze. So, when he does memetics, just what is he doing?

The analytic tools he has as a philosopher aren’t of much use in building accounts of the psychological and social mechanisms that might underlie cultural processes. The only tool Dennett seems to have at his disposal is analogy. And so that’s how he builds his memetics, by analogy from biology on the one hand and computer science on the other.

Alas, these analogies do not work very well. That’s what I examine in the posts I’ve gathered into this working paper. What Dennett, or anyone else, needs to do to formulate an evolutionary account of culture is to construct one’s gene and phenotype analogues (if that’s what you want to do) directly from the appropriate materials, neurons and brains in social interaction. Dennett doesn’t do that. Instead of social interaction he has an analogy to apps loading into computers. Instead of neurons he has homuncular agents that are suspiciously like his other favorite homuncular agents, memes. It doesn’t work. It’s incoherent. It’s bad philosophy or bad science, or both.

An Inquiry into & a Critique of Dennett on Intentional Systems

A new working paper. Downloads HERE:

Abstract, contents, and introduction below:

* * * * *

Abstract: Using his so-called intentional stance, Dennett has identified so-called “free-floating rationales” in a broad class of biological phenomena. The term, however, is redundant on the pattern of objects and actions to which it applies, and using it has the effect of reifying the pattern in a peculiar way. The intentional stance is itself a pattern of wide applicability. However, in a broader epistemological view, it turns out that we are pattern-seeking creatures and that a phenomenon identified with some pattern must be verified by other techniques. The intentional stance deserves no special privilege in this respect. Finally, it is suggested that the intentional stance may get its intellectual power from the neuro-mental machinery it recruits and not from any special class of phenomena it picks out in the world.

CONTENTS

Introduction: Reverse Engineering Dan Dennett
Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains
In Search of Dennett’s Free-Floating Rationales
Dan Dennett on Patterns (and Ontology)
Dan Dennett, “Everybody talks that way” – Or How We Think

Introduction: Reverse Engineering Dan Dennett

I find Dennett puzzling. Two recent back-to-back videos illustrate that puzzle. One is a version of what seems to have become his standard lecture on cultural evolution:

https://www.youtube.com/watch?feature=player_embedded&v=AZX6awZq5Z0

As such it has the same faults I identify in the lecture that occasioned the first post in this collection, Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains. It’s got a collection of nicely curated examples of mostly biological phenomena which Dennett crafts into an account of cultural evolution through energetic hand-waving and tap-dancing.

And then we have a somewhat shorter video, a question and answer session following the first:

https://www.youtube.com/watch?feature=player_embedded&v=beKC_7rlTuw

I like much of what Dennett says in this video; I think he’s right on those issues.

What happened between the first and second video? For whatever reason, no one asked him about the material in the lecture he’d just given. They asked him about philosophy of mind and about AI. For example, I agree with him that The Singularity is not going to happen anytime soon, and likely not ever. Getting enough raw computing power is not the issue. Organizing it is, and as yet we know very little about that. Similarly, I agree with him that the so-called “hard problem” of consciousness is a non-issue.

How is it that one set of remarks is a bunch of interesting examples held together by smoke and mirrors while the other set of remarks is cogent and substantially correct? I think these two sets of remarks require different kinds of thinking. The second set involves philosophical analysis, and, after all, Dennett is a philosopher more or less in the tradition of 20th century Anglo-American analytic philosophy. But that first set of remarks, about cultural evolution, is about constructing a theory. It requires what I called speculative engineering in the preface to my book on music, Beethoven’s Anvil. On the face of it, Dennett is not much of an engineer.

And now things get really interesting. Consider this remark from a 1994 article [1] in which Dennett gives an overview of his thinking up to that time (p. 239):

My theory of content is functionalist […]: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering of the robot). This is a specifically ‘teleological’ notion of function (not the notion of a mathematical function or of a mere ‘causal role’, as suggested by David LEWIS and others). It is the concept of function that is ubiquitous in engineering, in the design of artefacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find ‘laws of nature’, but a species of engineering: the analysis, by ‘reverse engineering’, of the found artefacts of nature – which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.)

I am entirely in agreement with his emphasis on engineering. Biological thinking is “a species of engineering.” And so is cognitive science and certainly the study of culture and its evolution.

Earlier in that article Dennett had this to say (p. 236):

It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford, I developed a deep distrust of the methods I saw other philosophers employing, and decided that before I could trust any of my intuitions about the mind, I had to figure out how the brain could possibly accomplish the mind’s work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked – clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool ‘arts and humanities’ academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.)

My reaction to that last remark, that parenthesis, was something like: Coulda’ fooled me! For I had been thinking that an engineering sensibility is what was missing in Dennett’s discussions of culture. He didn’t seem to have a very deep sense of structure and construction, of, well, you know, how design works. And here he is telling us he coulda’ been an engineer.


Dan Dennett, “Everybody talks that way” – Or How We Think

Note: Late on the evening of 7.20.15: I’ve edited the post at the end of the second section by introducing a distinction between prediction and explanation.

Thinking things over, here’s the core of my objection to talk of free-floating rationales: they’re redundant.

What authorizes talk of “free-floating rationales” (FFRs) is a certain state of affairs, a certain pattern. Does postulating the existence of FFRs add anything to the pattern? Does it make anything more predictable? No. Even in the larger evolutionary context, talk of FFRs adds nothing (p. 351 in [1]):

But who appreciated this power, who recognized this rationale, if not the bird or its individual ancestors? Who else but Mother Nature herself? That is to say: nobody. Evolution by natural selection “chose” this design for this “reason.”

Surely what Mother Nature recognized was the pattern. For all practical purposes talk of FFRs is simply an elaborate name for the pattern. Once the pattern’s been spotted, there is nothing more.

But how’d a biologist spot the pattern? (S)he made observations and thought about them. So I want to switch gears and think about the operation of our conceptual equipment. These considerations have no direct bearing on our argument about Dennett’s evolutionary thought, as every idea we have must be embodied in some computational substrate, the good ideas and the bad. But the indirect implications are worth thinking about. For they indicate that a new intellectual game is afoot.

Dennett on How We Think

Let’s start with a passage from the intentional systems article. This is where Dennett is imagining a soliloquy that our low-nesting bird might have. He doesn’t, of course, want us to think that the bird ever thought such thoughts (or even, for that matter, perhaps thought any thoughts at all). Rather, Dennett is following Dawkins in proposing this as a way for biologists to spot interesting patterns in the life world. Here’s the passage (p. 350 in [1]):

I’m a low-nesting bird, whose chicks are not protectable against a predator who discovers them. This approaching predator can be expected soon to discover them unless I distract it; it could be distracted by its desire to catch and eat me, but only if it thought there was a reasonable chance of its actually catching me (it’s no dummy); it would contract just that belief if I gave it evidence that I couldn’t fly anymore; I could do that by feigning a broken wing, etc.

Keeping that in mind, let’s look at another passage. This is from a 1999 interview [2]:

The only thing that’s novel about my way of doing it is that I’m showing how the very things the other side holds dear – minds, selves, intentions – have an unproblematic but not reduced place in the material world. If you can begin to see what, to take a deliberately extreme example, your thermostat and your mind have in common, and that there’s a perspective from which they seem to be instances of an intentional system, then you can see that the whole process of natural selection is also an intentional system.

It turns out to be no accident that biologists find it so appealing to talk about what Mother Nature has in mind. Everybody in AI, everybody in software, talks that way. “The trouble with this operating system is it doesn’t realize this, or it thinks it has an extra disk drive.” That way of talking is ubiquitous, unselfconscious – and useful. If the thought police came along and tried to force computer scientists and biologists not to use that language, because it was too fanciful, they would run into fierce resistance.

What I do is just say, Well, let’s take that way of talking seriously. Then what happens is that instead of having a Cartesian position that puts minds up there with the spirits and gods, you bring the mind right back into the world. It’s a way of looking at certain material things. It has a great unifying effect.

So, this soliloquy way of thinking is useful in thinking about the biological world, and something very like it is common among those who have to work with software. Dennett’s asking us to believe that, because thinking about these things in that way is so very useful (in predicting what they’re going to do), we might as well conclude that, in some special technical sense, they really ARE like that. That special technical sense is given in his account of the intentional stance as a pattern, which we examined in the previous post [3].
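To see how little machinery the predictive strategy requires, here is a toy sketch in Python – my construction, not Dennett’s. Attribute a belief and a desire, presuppose rationality, and a prediction follows, whether the “agent” is a thermostat or a person:

```python
from dataclasses import dataclass

@dataclass
class IntentionalSystem:
    """An entity interpreted from the intentional stance: we attribute a
    belief and a desire and presuppose (rough) rationality."""
    belief: float   # what the system "believes" the temperature to be
    desire: float   # the temperature it "wants"

    def predicted_action(self) -> str:
        # The rationality presupposition: the agent acts so as to satisfy
        # its desires, given its beliefs. No claim about inner mechanism.
        if self.belief < self.desire:
            return "turn the heat on"
        if self.belief > self.desire:
            return "turn the heat off"
        return "do nothing"

# The same stance predicts a thermostat's behavior and, at this grain of
# description, a person's: it believes the room is at 17 and wants 20.
thermostat = IntentionalSystem(belief=17.0, desire=20.0)
print(thermostat.predicted_action())   # -> turn the heat on
```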

What I want to do is refrain from taking that last step. I agree with Dennett that, yes, this IS a very useful way of thinking about lots of things. But I want to take that insight in a different direction. I want to suggest that what is going on in these cases is that we’re using neuro-computational equipment that evolved for regulating interpersonal interactions and putting it to other uses. Mark Changizi would say we’re harnessing it to those other purposes while Stanislas Dehaene would talk of reuse. I’m happy either way.

Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, (vulnerable) chicks, and a predator. Does postulating FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found it quite useful in getting a better sense of what Dennett’s up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting, “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was, and I presume still is, Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains that (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But at the moment I want to continue with a passage a bit further along on p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns.

In Search of Dennett’s Free-Floating Rationales

I’ve decided to take a closer look at Dennett’s notion of free-floating rationale. It strikes me as being an unhelpful reification, but explaining just why that is has turned out to be a tricky matter. First I’ll look at a passage from a recent article, “The Evolution of Reasons” [1], and then go back three decades to a major exposition of the intentional stance as applied to animal behavior [2]. I’ll conclude with some hints about metaphysics.

On the whole I’m inclined to think of free-floating rationale as a poor solution to a deep problem. It’s not clear to me what a good solution would be, though I’ve got some suggestions as to how that might go.

Evolving Reasons

Dennett opens his inquiry by distinguishing between “a process narrative that explains the phenomenon without saying it is for anything” and an account that provides “a reason–a proper telic reason” (p. 50). The former is what he calls a how come? account and the latter is a what for? account. After reminding us of Aristotle’s somewhat similar four causes Dennett gets down to it: “Evolution by natural selection starts with how come and arrives at what for. We start with a lifeless world in which there are lots of causes but no reasons, no purposes at all.” (p. 50).

Those free-floating rationales are a particular kind of what for. He introduces the term on page 54:

So there were reasons before there were reason representers. The reasons tracked by evolution I have called “free-floating rationales” (1983, 1995, and elsewhere), a term that has apparently jangled the nerves of more than a few thinkers, who suspect I am conjuring up ghosts of some sort. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. There were nine planets before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. I am not relenting; instead, I am hoping here to calm their fears and convince them that we should all be happy to speak of the reasons uncovered by evolution before they were ever expressed or represented by human investigators or any other minds.

That is, just as there is no mystery about the relationship between numbers and planets, or between centers of gravity and asteroids, so there is no mystery about the relationship between free-floating rationales and X.

What sorts of things can we substitute for X? That’s what’s tricky. It turns out those things aren’t physically connected objects. Those things are patterns of interaction among physically connected objects.

Before taking a look at those patterns (in the next section), let’s consider another passage from this article (p. 54):

Natural selection is thus an automatic reason finder that “discovers,” “endorses,” and “focuses” reasons over many generations. The scare quotes are to remind us that natural selection doesn’t have a mind, doesn’t itself have reasons, but is nevertheless competent to perform this “task” of design refinement. This is competence without comprehension.

That’s where Dennett is going, “competence without comprehension” – a recent mantra of his.

It is characteristic of Dennett’s intentional stance that it authorizes the use of intentional language, such as “discovers,” “endorses,” and “focuses”. That’s what it’s for, to allow the use of such language in situations where it comes naturally and easily. What’s not clear to me is whether or not one is supposed to treat it as a heuristic device that leads to non-intentional accounts. Clearly intentional talk about “selfish” genes is to be cashed out in non-intentional talk, and that would seem to be the case with natural selection in general.

But it is one thing to talk about cashing out intentional talk in a more suitable explanatory lingo. It’s something else to actually do so. Dennett’s been talking about free-floating rationales for decades, but hasn’t yet, so far as I know, proposed a way of getting rid of that bit of intentional talk.

On the Direction of 19th Century Poetic Style, Underwood and Sellers 2015

Another working paper (title above). Download at:
SSRN: http://ssrn.com/abstract=2623118
Academia.edu: https://www.academia.edu/13279876/On_the_Direction_of_19th_Century_Poetic_Style_Underwood_and_Sellers_2015

Abstract, contents, and introduction below:

Abstract: Underwood and Sellers have discovered that over the course of roughly a century (1820–1919) Anglo-American poetry underwent a consistent change in style in a direction favored by editors and reviewers of elite journals. This directional shift aligns with the one Matthew Jockers found in Anglophone novels during roughly the same period (from the beginning of the 19th century to its end). I argue that this change is characteristic of a cultural evolutionary process and sketch a way to simulate such a process as an interaction between a population of texts and a population of writers. I suggest that such directionality is a sign of autonomy in the aesthetic system, that it is not completely coupled to and subsumed by surrounding historical events.

CONTENTS

0. Introduction: Looking at Cultural Evolution whether You Like It or Not
1. Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction
2. Beyond Whig History to Evolutionary Thinking
3. Could Heart of Darkness have been published in 1813? – a digression
4. Beyond narrative we have simulation

0. Introduction: Looking at Cultural Evolution whether You Like It or Not

I was of course thrilled to read How Quickly Do Literary Standards Change? (Underwood and Sellers 2015). Why? Because they provide preliminary evidence that 19th century Anglophone poetic culture has a direction. Just what that direction is, and how to characterize it, that’s something else. But there does appear to be a direction. And just why is that exciting? Because Matthew Jockers made the same discovery about the 19th century Anglophone novel. To be sure, that’s not what he claimed – I’ve had to reinterpret his work (see my working paper, On the Direction of Cultural Evolution: Lessons from the 19th Century Anglophone Novel) – but that’s what he has in fact done.

So we’ve got two investigations making the same observation: there is a long-term direction in 19th century literary culture. But they are not quite the same, as Jockers looked at novels and Underwood and Sellers looked at poetry. Moreover, their observational methods are quite different. Jockers uncovered direction by looking for similarity between texts, where similarity judgments are based on a variety of stylistic measures and on topic analysis. Underwood and Sellers bumped into directionality by looking for differences between the general run of literary texts and texts selected for review by elite publications. Jockers’ work, almost by design, uncovered continuity between successive cohorts of texts, but simply ignored elite culture. Underwood and Sellers had no explicit interest in local continuity but, by looking at elite choice, uncovered a possible factor in directional cultural change: the “pressure” of elite preference on the system as a whole.
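For the flavor of the kind of simulation the abstract gestures at, here is a minimal sketch in Python. The model and every parameter in it are my inventions for illustration; it makes no claim to match Underwood and Sellers’s data. Writers mostly imitate the general run of texts, but elite reviewers favor one end of a style dimension, and that small bias is enough to give the whole population a direction:

```python
import random

random.seed(42)  # reproducible toy run

def simulate(generations=50, texts_per_gen=200, elite_share=0.1,
             elite_pull=0.3, noise=0.05):
    # Each text is reduced to a single "style" number; generation 0 is
    # centered on 0, with no direction built in.
    population = [random.gauss(0.0, 1.0) for _ in range(texts_per_gen)]
    means = []
    for _ in range(generations):
        # Elite reviewers single out the texts highest on the style dimension.
        elite = sorted(population)[-int(texts_per_gen * elite_share):]
        # Writers imitate a model text, occasionally an elite-reviewed one,
        # and add a little noise of their own.
        population = [
            random.choice(elite if random.random() < elite_pull else population)
            + random.gauss(0.0, noise)
            for _ in range(texts_per_gen)
        ]
        means.append(sum(population) / texts_per_gen)
    return means

means = simulate()
print("mean style at generations 1, 25, 50:",
      [round(means[g], 2) for g in (0, 24, 49)])
# The mean drifts steadily in one direction: directional change from a
# static bias, with no generation-by-generation steering.
```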

Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means that “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution the prokaryotes that merged had evolved independently and prior to the merging. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture as he’d run out of time. But I’m not holding my breath until he coughs up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like, that such thoughts are acceptable?