Ontology and Cultural Evolution: “Spirit” or “Geist” and some of its measures

This post is about terminology, but also about things – in particular, an abstract thing – and measurements of those things. The things and measurements arise in the study of cultural evolution.

Let us start with a thing. What is this?

[Figure: a network visualization from Jockers’s Macroanalysis]

If you are a regular reader here at New Savanna you might reply: Oh, that’s the whatchamacallit from Jockers’s Macroanalysis. Well, yes, it’s an illustration from Macroanalysis. But that’s not quite the answer I was looking for. Let’s call that answer a citation and set it aside.

Let’s ask the same question, but of a different object: What’s this?

[Photo: the moon]

I can imagine two answers, both correct, each in its own way:

1. It’s a photo of the moon.

2. The moon.

Strictly speaking, the first is correct and the second is not. It IS a photograph, not the moon itself. But the second answer is well within standard usage.

Notice that the photo does not depict the moon in full (whatever that might mean), no photograph could. That doesn’t change the fact that it is the moon that is depicted, not the sun, or Jupiter, or Alpha Centauri, or, for that matter, Mickey Mouse. We do not generally expect that representations of things should exhaust those things.

Now let us return to the first image and once again ask: What is this? I want two answers, one to correspond with each of our answers about the moon photo. I’m looking for something of the form:

1. A representation of X.

2. X.

Let us start with X. Jockers was analyzing a corpus of roughly 3300 19th-century Anglophone novels. To do that he evaluated each of them on each of 600 features. Since those evaluations can be expressed numerically, Jockers was able to create a 600-dimensional space in which each text occupies a single point. He then joined all those points representing texts that are relatively close to one another. Those texts are highly similar with respect to the 600 features that define the space.

The result is a directed graph having 3300 nodes in 600 dimensions. So, perhaps we can say that X is a corpus similarity graph. However, we cannot see in 600 dimensions so there is no way we can directly examine that graph. It exists only as an abstract object in a computer. What we can do, and what Jockers did, is project a 600D object into two dimensions. That’s what we see in the image.
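For concreteness, here is a minimal sketch of that kind of pipeline in Python. Everything in it is a stand-in: the random matrix replaces Jockers’s actual feature measurements, cosine similarity and the 0.95 cutoff are assumptions of mine, and PCA stands in for whatever projection his software used.

    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in corpus: one row per novel, one column per feature.
    # (Jockers's matrix was roughly 3300 novels by 600 features.)
    rng = np.random.default_rng(0)
    X = rng.random((3300, 600))

    # Cosine similarity between every pair of texts.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T

    # Join texts that are "relatively close": above an assumed threshold.
    edges = np.argwhere(np.triu(sim, k=1) > 0.95)

    # Project the 600-D points into two dimensions for display.
    xy = PCA(n_components=2).fit_transform(X)

The point of the sketch is only the shape of the procedure: points in a high-dimensional feature space, edges between near neighbors, and a lossy projection down to something we can see.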


Sharing Experience: Computation, Form, and Meaning in the Work of Literature

I’ve uploaded another document: Sharing Experience: Computation, Form, and Meaning in the Work of Literature. You can download it from Academia.edu:

https://www.academia.edu/28764246/Sharing_Experience_Computation_Form_and_Meaning_in_the_Work_of_Literature

It’s considerably revised from a text I’d uploaded a month ago: Form, Event, and Text in an Age of Computation. You might also look at my post, Obama’s Affective Trajectory in His Eulogy for Clementa Pinckney, which could have been included in the article, but I’m up against a maximum word count as I am submitting the article for publication. You might also look at the post, Words, Binding, and Conversation as Computation, which figured heavily in my rethinking.

Here’s the abstract of the new article, followed by the TOC and the introduction:

Abstract

It is by virtue of its form that a literary work constrains meaning so that it can be a vehicle for sharing experience. Form is thus an intermediary in Latour’s sense, while meaning is a mediator. Using fragments of a cognitive network model for Shakespeare’s Sonnet 129 we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of signifiers, and (3) the path one computes through (1) under constraints imposed by (2). As a text, Obama’s Eulogy for Clementa Pinckney is a ring-composition; as a performance, the central section is clearly marked by audience response. Recent work on synchronization of movement and neural activity across communicating individuals affords insight into the physical substrate of intersubjectivity. The ring-form description is juxtaposed to the performative meaning identified by Glenn Loury and John McWhorter.

CONTENTS

Introduction: Speculative Engineering
Form: Macpherson & Attridge to Latour
Computational Semantics: Network and Text
Obama’s Pinckney Eulogy as Text
Obama’s Pinckney Eulogy as Performance
Meaning, History, and Attachment
Coda: Form and Sharability in the Private Text

Introduction: Speculative Engineering

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction differs from that of computational critics. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine [1].

Digital critics, however, have little interest in computation as a process one enacts while reading a text, the sense that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was computation that drove the so-called cognitive revolution [2]. Much of the work in cognitive science is conducted in a vocabulary derived from computing and, in many cases, involves computer simulations. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools for constructing models with differentiated components and processes meshing like, well, clockwork. Moreover, so far as I know, computation of one kind or another provides the only working models we have for language processes.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is unnecessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms; digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic: it helps us think about and systematically describe literary form in ways we haven’t done before.

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. Speculative engineering is a better characterization. Engineering is about design and construction, perhaps even Latourian composition [3]. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made [4]. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

Caveat emptor: This bridge is of complex construction. I start with form, move to computation, with Shakespeare’s Sonnet 129 as my example, and then to President Obama’s Eulogy for Clementa Pinckney. After describing its structure (ring-composition) I consider the performance situation in which Obama delivered it, arguing that those present constituted a single physical system for sharing experience. I conclude by discussing meaning, history, and attachment.

References

[1] William Benzon, “The Only Game in Town: Digital Criticism Comes of Age,” 3 Quarks Daily, May 5, 2014, http://www.3quarksdaily.com/3quarksdaily/2014/05/the-only-game-in-town-digital-criticism-comes-of-age.html

[2] Ulric Neisser, Cognition and Reality: Principles and Implications of Cognitive Psychology (San Francisco: W. H. Freeman, 1976), 5-6.

[3] Bruno Latour, “An Attempt at a ‘Compositionist Manifesto’,” New Literary History 41 (2010), 471-490.

[4] For example, see Steven Pinker, How the Mind Works (New York: W. W. Norton & Company, Inc., 1997), 21 ff.

Words, Binding, and Conversation as Computation

I’ve been thinking about my draft article, Form, Event, and Text in an Age of Computation. It presents me with the same old rhetorical problem: how to present computation to literary critics? In particular, I want to convince them that literary form is best thought of as being computational in kind. My problem is this: If you’ve already got ‘it’, whatever it is, then my examples make sense. If you don’t, then it’s not clear to me that they do make sense. In particular, cognitive networks are a stretch. Literary criticism just doesn’t give you any useful intuitions of form as being independent of meaning.

Anyhow, I’ve been thinking about words and about conversation. What I’m thinking is that the connection between signifier and signified is fundamentally computed in the sense that I’m after. It’s not ‘hard-wired’ at all. Rather, it’s established dynamically. That’s what the first part of this post is about. The second part then goes on to argue that conversation is fundamentally computational.

This is crude and sketchy. We’ll see.

Words as bindings between sound and sense

What is a word? I’m not even going to attempt a definition, as we all know one when we see it, so to speak. What I will say, however, is that the common-sense core intuition tends to exaggerate a word’s Parmenidean stillness and constancy at the expense of its Heraclitean fluctuation. What does this word mean:

race

It’s a simple word, an everyday word. Out there in the middle of nowhere, without context, it’s hard to say what it means. It could mean this, it could mean that. It depends.

When I look it up in the dictionary on my computer, New Oxford American Dictionary, it lists three general senses. One, “a ginger root,” is listed as “dated.” The other two senses are the ones I know, and each has a number of possibilities. One set of meanings has to do with things moving and has many alternatives. The other deals with kinds of beings, biological or human. These meanings no doubt developed over time.
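To see what “established dynamically” might mean in miniature, here’s a toy sketch. The sense inventory and cue words are invented, and real disambiguation is vastly subtler, but the shape of the computation is the point: the binding of “race” to a sense happens at the moment of use, keyed to context.

    # Toy sketch: the signifier "race" binds to a sense only in context.
    SENSES = {
        "contest": {"run", "won", "finish", "sprint", "track"},
        "kind of being": {"human", "species", "ethnic", "people"},
    }

    def resolve(word, context):
        """Pick the sense whose cue words overlap the context most."""
        scores = {s: len(cues & set(context)) for s, cues in SENSES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unresolved"

    print(resolve("race", ["she", "won", "the", "race"]))  # contest
    print(resolve("race", ["the", "human", "race"]))       # kind of being
    print(resolve("race", ["race"]))                       # unresolved

Out of context the word stays unresolved, which is just the point made above: it could mean this, it could mean that.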

And, of course, the word’s appearance can vary widely depending on typeface or how it’s handwritten, either in cursive script or printed. The spoken word varies widely as well, depending on the speaker – male, female, adult, child, etc. – and discourse context. It’s not a fixed object at all.

What I’m suggesting, then, is that this common ‘picture’ is too static:

[Image: the familiar diagram of the sign, signifier and signified boxed together]

There we have it, the signifier and the signified packaged together in a little ‘suitcase’ with “sign” as the convenient handle for the package. It gives the impression that sentences are little ‘trains’ of meaning, with one box connected to the next in a chain of signifiers.

No one who thinks seriously about it actually thinks that way. But that’s where thinking starts. For that matter, by the time one gets around to distinguishing between signifier and signified one has begun to move away from the static conception. My guess is that the static conception arises from the fact of writing and the existence of dictionaries. There they are, one after another. No matter when you look up a word, it’s there in the same place, having the same definition. It’s a thing, an eternal Parmenidean thing.

Later in The Course in General Linguistics, long after he’s introduced the signifier/signified distinction, de Saussure presents us with this picture [1]:

[Image: Saussure’s diagram of thought and sound as two undulating, wave-like planes]

He begins glossing it as follows (112): “The linguistic fact can therefore be pictured in its totality–i.e. language–as a series of contiguous subdivisions marked off on both the indefinite plane of jumbled ideas (A) and the equally vague plane of sounds (B).” He goes on to note “the somewhat mysterious fact is rather that ‘thought-sound’ implies division, and that language works out its units while taking shape between two shapeless masses.” I rather like that, and I like that he chose undulating waves as his visual image.

Form, Event, and Text in an Age of Computation


I’ve put another article online. This is not a working paper. It is a near-final draft of an article I will be submitting for publication once I have had time to let things settle in my mind. I’d appreciate any comments you have. You can download the paper in the usual places:

Academia.edu: https://www.academia.edu/27706433/Form_Event_and_Text_in_an_Age_of_Computation
SSRN: http://ssrn.com/abstract=2821678

Abstract: Using fragments of a cognitive network model for Shakespeare’s Sonnet 129 we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I then adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition and the central section is clearly marked in audience response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

CONTENTS

Computational Semantics: Model and Text
Literary Form, Attridge and Latour
Obama’s Pinckney Eulogy as Performance
Obama’s Pinckney Eulogy as Text
Description in Method

Form, Event, and Text in an Age of Computation

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction is a bit different from that prevalent among practitioners of distant reading. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine.

But, for whatever reason, digital critics have little or no interest in computation as something one enacts while reading any one of those texts. That is the sense of computation that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was the idea of computation that drove the so-called cognitive revolution in its early years:

… the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in “memory” and retrieve them again, classify inputs, recognize patterns, and so on. Whether they do these things just like people was less important than that they do them at all. The coming of the computer provided a much-needed reassurance that cognitive processes were real; that they could be studied and perhaps understood.

Much of the work in the newer psychologies is conducted in a vocabulary that derives from computing and, in many cases, involves computer simulations of mental processes. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools through which we could construct models with differentiated components and processes meshing like, well, clockwork. It gave us a way to objectify our theories.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is not necessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms, digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic device. Specifically, I propose that it helps us think about and describe literary form in ways we haven’t done before.

First I present a model of computational semantics for Shakespeare’s Sonnet 129. This affords us a distinction between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.
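Since the path through a network under textual constraint is the load-bearing idea here, a deliberately tiny illustration may help. The nodes and edges below are invented (the real model for Sonnet 129 is far richer); they merely exhibit the three-way distinction, with the network as (1), the word string as (2), and the traced path as (3).

    # (1) The cognitive system: a miniature associative network.
    # Edges are invented for illustration, loosely echoing Sonnet 129.
    network = {
        "expense": {"spirit", "waste"},
        "spirit": {"waste", "shame"},
        "waste": {"shame"},
        "shame": {"lust"},
        "lust": {"action"},
        "action": set(),
    }

    # (2) The text: merely a string of signifiers.
    text = ["expense", "spirit", "waste", "shame", "lust", "action"]

    # (3) The path: attention moves through (1) as constrained by (2).
    def trace(network, text):
        path, here = [text[0]], text[0]
        for word in text[1:]:
            # Follow a direct association if one exists; otherwise jump.
            path.append(word if word in network.get(here, set())
                        else "(jump)->" + word)
            here = word
        return path

    print(trace(network, text))

The text does not carry the path inside itself; it constrains a path through equipment the reader already has. That is the sense in which form is computational.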

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. I prefer to think of it as speculative engineering. To be sure, engineering, like science, is technical. But engineering is about design and construction, perhaps even Latourian composition. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it.

Chomsky, Hockett, Behaviorism and Statistics in Linguistic Theory

Here’s an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, 2016, pp. 49-73), by Gregory Radick (abstract below). Commenting on it at Dan Everett’s FB page, Yorick Wilks observed: “It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer.”
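Wilks’s remark is worth a concrete gloss. A statistical grammar attaches probabilities to rules and prefers the parse whose rules multiply to the higher probability. The grammar and the numbers below are invented for illustration; they only show the mechanism on the classic ambiguity of “I saw the man with the telescope.”

    # Toy probabilistic grammar (all rules and probabilities invented).
    RULES = {
        ("S", ("NP", "VP")): 1.0,
        ("VP", ("V", "NP", "PP")): 0.4,  # "with the telescope" modifies "saw"
        ("VP", ("V", "NP")): 0.6,
        ("NP", ("NP", "PP")): 0.2,       # ...or it modifies "the man"
        ("NP", ("Det", "N")): 0.7,
        ("NP", ("Pro",)): 0.1,
        ("PP", ("P", "NP")): 1.0,
    }

    def prob(tree):
        """Probability of a parse: the product of its rule probabilities."""
        head, children = tree
        if isinstance(children, str):    # a lexical leaf
            return 1.0
        p = RULES[(head, tuple(child[0] for child in children))]
        for child in children:
            p *= prob(child)
        return p

    i = ("NP", [("Pro", "I")])
    man = ("NP", [("Det", "the"), ("N", "man")])
    pp = ("PP", [("P", "with"),
                 ("NP", [("Det", "the"), ("N", "telescope")])])

    verb_attach = ("S", [i, ("VP", [("V", "saw"), man, pp])])
    noun_attach = ("S", [i, ("VP", [("V", "saw"), ("NP", [man, pp])])])

    print(prob(verb_attach), prob(noun_attach))  # verb attachment wins

Nothing in the grammar “understands” telescopes; the statistics alone settle which structure to prefer, which is roughly how parsing in the spirit of Hockett works in practice.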

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.

Dennett on Memes, Neurons, and Software

Another working paper, links:
Academia.edu: https://www.academia.edu/16514603/Dennett_on_Memes_Neurons_and_Software
SSRN: http://ssrn.com/abstract=2670107

Abstract, contents, and introduction below.

* * * * *

Abstract: In his work on memetics Daniel Dennett does a poor job of negotiating the territory between philosophy and science. The analytic tools he has as a philosopher aren’t of much use in building accounts of the psychological and social mechanisms that underlie cultural processes. The only tool Dennett seems to have at his disposal is analogy. That’s how he builds his memetics, by analogy from biology on the one hand and computer science on the other. These analogies do not work very well. To formulate an evolutionary account of culture one needs to construct one’s gene and phenotype analogues directly from the appropriate materials, neurons and brains in social interaction. Dennett doesn’t do that. Instead of social interaction he has an analogy to apps loading into computers. Instead of neurons he has homuncular agents that are suspiciously like his other favorite homuncular agents, memes. It doesn’t work.

CONTENTS

Introduction: Too many analogies, no construction
Watch Out, Dan Dennett, Your Mind’s Changing Up on You!
The Memetic Mind, Not: Where Dennett Goes Wrong
Turtles All the Way Down: How Dennett Thinks
A Note on Dennett’s Curious Comparison of Words and Apps
Has Dennett Undercut His Own Position on Words as Memes?
Dennett’s WRONG: the Mind is NOT Software for the Brain
Follow-up on Dennett and Mental Software

Introduction: Too many analogies, no construction

Just before the turn of the millennium Dennett gave an interview in The Atlantic in which he observed:

In the beginning, it was all philosophy. Aristotle, whether he was doing astronomy, physiology, psychology, physics, chemistry, or mathematics — it was all the same. It was philosophy. Over the centuries there’s been a refinement process: in area after area questions that were initially murky and problematic became clearer. And as soon as that happens, those questions drop out of philosophy and become science. Mathematics, astronomy, physics, chemistry — they all started out in philosophy, and when they got clear they were kicked out of the nest.

Philosophy is the mother. These are the offspring. We don’t have to go back a long way to see traces of this. The eighteenth century is quite early enough to find the distinction between philosophy and physics not being taken very seriously. Psychology is one of the more recent births from philosophy, and we only have to go back to the late nineteenth century to see that.

My sense is that the trajectory of philosophy is to work on very fundamental questions that haven’t yet been turned into scientific questions.

This is a standard view, and it’s one I hold myself, though it’s not clear to me just how it would look when the historical record is examined closely.

But I do think that, in his recent work, Dennett’s been having trouble negotiating the difference between philosophy, in which he has a degree, and science. For he is also a cognitive scientist in good standing, and that phrase – “cognitive science” – stretches all over the place, leaving plenty of room to get tripped up over the difference between philosophy and science.

Dennett has spent much of his career as a philosopher of artificial intelligence, neuroscience, and cognitive psychology. That is to say, he’s looked at the scientific work in those disciplines and considered philosophical implications and foundations. More recently he’s done the same thing with biology.

Now, it is one thing to apply the analytic tools of philosophy to the fruits of those disciplines. But Dennett has also been interested in memetics, a putative evolutionary account of culture. The problem is that there is no science of memetics for Dennett to analyze. So, when he does memetics, just what is he doing?

The analytic tools he has as a philosopher aren’t of much use in building accounts of the psychological and social mechanisms that might underlie cultural processes. The only tool Dennett seems to have at his disposal is analogy. And so that’s how he builds his memetics, by analogy from biology on the one hand and computer science on the other.

Alas, these analogies do not work very well. That’s what I examine in the posts I’ve gathered into this working paper. What Dennett, or anyone else, needs to do to formulate an evolutionary account of culture is to construct one’s gene and phenotype analogues (if that’s what you want to do) directly from the appropriate materials, neurons and brains in social interaction. Dennett doesn’t do that. Instead of social interaction he has an analogy to apps loading into computers. Instead of neurons he has homuncular agents that are suspiciously like his other favorite homuncular agents, memes. It doesn’t work. It’s incoherent. It’s bad philosophy or bad science, or both.

An Inquiry into & a Critique of Dennett on Intentional Systems

A new working paper. Downloads HERE:

Abstract, contents, and introduction below:

* * * * *

Abstract: Using his so-called intentional stance, Dennett has identified so-called “free-floating rationales” in a broad class of biological phenomena. The term, however, is redundant on the pattern of objects and actions to which it applies and using it has the effect of reifying the pattern in a peculiar way. The intentional stance is itself a pattern of wide applicability. However, in a broader epistemological view, it turns out that we are pattern-seeking creatures and that phenomena identified with some pattern must be verified by other techniques. The intentional stance deserves no special privilege in this respect. Finally, it is suggested that the intentional stance may get its intellectual power from the neuro-mental machinery it recruits and not from any special class of phenomena it picks out in the world.

CONTENTS

Introduction: Reverse Engineering Dan Dennett
Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains
In Search of Dennett’s Free-Floating Rationales
Dan Dennett on Patterns (and Ontology)
Dan Dennett, “Everybody talks that way” – Or How We Think

Introduction: Reverse Engineering Dan Dennett

I find Dennett puzzling. Two recent back-to-back videos illustrate that puzzle. One is a version of what seems to have become his standard lecture on cultural evolution:

https://www.youtube.com/watch?feature=player_embedded&v=AZX6awZq5Z0

As such it has the same faults I identify in the lecture that occasioned the first post in this collection, Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains. It’s got a collection of nicely curated examples of mostly biological phenomena which Dennett crafts into an account of cultural evolution through energetic hand-waving and tap-dancing.

And then we have a somewhat shorter video that is a question and answer session following the first:

https://www.youtube.com/watch?feature=player_embedded&v=beKC_7rlTuw

I like much of what Dennett says in this video; I think he’s right on those issues.

What happened between the first and second video? For whatever reason, no one asked him about the material in the lecture he’d just given. They asked him about philosophy of mind and about AI. Thus, for example, I agree with him that The Singularity is not going to happen anytime soon, and likely not ever. Getting enough raw computing power is not the issue. Organizing it is, and as yet we know very little about that. Similarly I agree with him that the so-called “hard problem” of consciousness is a non-issue.

How is it that one set of remarks is a bunch of interesting examples held together by smoke and mirrors while the other set of remarks is cogent and substantially correct? I think these two sets of remarks require different kinds of thinking. The second set involves philosophical analysis, and, after all, Dennett is a philosopher more or less in the tradition of 20th century Anglo-American analytic philosophy. But that first set of remarks, about cultural evolution, is about constructing a theory. It requires what I called speculative engineering in the preface to my book on music, Beethoven’s Anvil. On the face of it, Dennett is not much of an engineer.

And now things get really interesting. Consider this remark from a 1994 article [1] in which Dennett gives an overview of his thinking up to that time (p. 239):

My theory of content is functionalist […]: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering of the robot). This is a specifically ‘teleological’ notion of function (not the notion of a mathematical function or of a mere ‘causal role’, as suggested by David LEWIS and others). It is the concept of function that is ubiquitous in engineering, in the design of artefacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find ‘laws of nature’, but a species of engineering: the analysis, by ‘reverse engineering’, of the found artefacts of nature – which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.)

I am entirely in agreement with his emphasis on engineering. Biological thinking is “a species of engineering.” And so is cognitive science and certainly the study of culture and its evolution.

Earlier in that article Dennett had this to say (p. 236):

It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford, I developed a deep distrust of the methods I saw other philosophers employing, and decided that before I could trust any of my intuitions about the mind, I had to figure out how the brain could possibly accomplish the mind’s work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked – clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool ‘arts and humanities’ academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.)

My reaction to that last remark, that parenthesis, was something like: Coulda’ fooled me! For I had been thinking that an engineering sensibility is what was missing in Dennett’s discussions of culture. He didn’t seem to have a very deep sense of structure and construction, of, well, you know, how design works. And here he is telling us he coulda’ been an engineer.


Dan Dennett, “Everybody talks that way” – Or How We Think

Note: Late on the evening of 7.20.15: I’ve edited the post at the end of the second section by introducing a distinction between prediction and explanation.

Thinking things over, here’s the core of my objection to talk of free-floating rationales: they’re redundant.

What authorizes talk of “free-floating rationales” (FFRs) is a certain state of affairs, a certain pattern. Does postulating the existence of FFRs add anything to the pattern? Does it make anything more predictable? No. Even in the larger evolutionary context, talk of FFRs adds nothing (p. 351 in [1]):

But who appreciated this power, who recognized this rationale, if not the bird or its individual ancestors? Who else but Mother Nature herself? That is to say: nobody. Evolution by natural selection “chose” this design for this “reason.”

Surely what Mother Nature recognized was the pattern. For all practical purposes talk of FFRs is simply an elaborate name for the pattern. Once the pattern’s been spotted, there is nothing more.

But how’d a biologist spot the pattern? (S)he made observations and thought about them. So I want to switch gears and think about the operation of our conceptual equipment. These considerations have no direct bearing on our argument about Dennett’s evolutionary thought, as every idea we have must be embodied in some computational substrate, the good ideas and the bad. But the indirect implications are worth thinking about. For they indicate that a new intellectual game is afoot.

Dennett on How We Think

Let’s start with a passage from the intentional systems article. This is where Dennett is imagining a soliloquy that our low-nesting bird might have. He doesn’t, of course, want us to think that the bird ever thought such thoughts (or even, for that matter, perhaps thought any thoughts at all). Rather, Dennett is following Dawkins in proposing this as a way for biologists to spot interesting patterns in the life world. Here’s the passage (p. 350 in [1]):

I’m a low-nesting bird, whose chicks are not protectable against a predator who discovers them. This approaching predator can be expected soon to discover them unless I distract it; it could be distracted by its desire to catch and eat me, but only if it thought there was a reasonable chance of its actually catching me (it’s no dummy); it would contract just that belief if I gave it evidence that I couldn’t fly anymore; I could do that by feigning a broken wing, etc.

Keeping that in mind, let’s look at another passage. This is from a 1999 interview [2]:

The only thing that’s novel about my way of doing it is that I’m showing how the very things the other side holds dear – minds, selves, intentions – have an unproblematic but not reduced place in the material world. If you can begin to see what, to take a deliberately extreme example, your thermostat and your mind have in common, and that there’s a perspective from which they seem to be instances of an intentional system, then you can see that the whole process of natural selection is also an intentional system.

It turns out to be no accident that biologists find it so appealing to talk about what Mother Nature has in mind. Everybody in AI, everybody in software, talks that way. “The trouble with this operating system is it doesn’t realize this, or it thinks it has an extra disk drive.” That way of talking is ubiquitous, unselfconscious – and useful. If the thought police came along and tried to force computer scientists and biologists not to use that language, because it was too fanciful, they would run into fierce resistance.

What I do is just say, Well, let’s take that way of talking seriously. Then what happens is that instead of having a Cartesian position that puts minds up there with the spirits and gods, you bring the mind right back into the world. It’s a way of looking at certain material things. It has a great unifying effect.

So, this soliloquy device is useful in thinking about the biological world, and something very like it is common among those who have to work with software. Dennett’s asking us to believe that, because thinking about these things in that way is so very useful (in predicting what they’re going to do), we might as well conclude that, in some special technical sense, they really ARE like that. That special technical sense is given in his account of the intentional stance as a pattern, which we examined in the previous post [3].
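A crude sketch may make the thermostat example concrete. Taking the intentional stance toward the device means predicting its behavior from an ascribed “belief” and “desire” alone, without opening the box. The class below is my own invented illustration, not anything from Dennett:

    # Toy intentional system: predict the thermostat from belief + desire.
    class Thermostat:
        def __init__(self, setpoint):
            self.desire = setpoint   # "it wants the room at 20 degrees"
            self.belief = None       # "it thinks the room is at 18"

        def sense(self, temperature):
            self.belief = temperature

        def act(self):
            # It acts "rationally" to bring belief into line with desire.
            return "heat on" if self.belief < self.desire else "heat off"

    t = Thermostat(setpoint=20.0)
    t.sense(18.0)
    print(t.act())  # "heat on" -- predictable from the ascriptions alone

The stance does all the predictive work; nothing about relays or firmware was needed.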

What I want to do is refrain from taking that last step. I agree with Dennett that, yes, this IS a very useful way of thinking about lots of things. But I want to take that insight in a different direction. I want to suggest that what is going on in these cases is that we’re using neuro-computational equipment that evolved for regulating interpersonal interactions and putting it to other uses. Mark Changizi would say we’re harnessing it to those other purposes while Stanislas Dehaene would talk of reuse. I’m happy either way.

Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, (vulnerable) chicks, and a predator. Does postulating FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a very useful piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found this quite useful in getting a better sense of what Dennett’s up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting, “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was and I presume still is Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains that (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But at the moment I want to continue with a passage from a bit later on p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns.