Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, (vulnerable) chicks, and a predator. Does postulating FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a very useful piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found it quite helpful in getting a better sense of what Dennett is up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting, “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was, and I presume still is, Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains that (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But at the moment I want to continue just a bit later on p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns. Continue reading “Dan Dennett on Patterns (and Ontology)”

Cultural Evolution and the Impending Singularity: The Movie


Here’s a video of a talk I gave at the Santa Fe Institute’s Complex Systems Summer School (written with roboticist Andrew Tinka; check out his talk about his fleet of floating robots). The talk was a response to the “Evolution Challenge”:

  1. Has Biological Evolution come to an end?
  2. Is belief an emergent property?
  3. Will advanced computers use H. Sapiens as batteries?

I also blogged about a part of this talk here (on why a mad scientist’s attempt at creating an A.I. to make new scientific discoveries was doomed).

The talk was awarded a prize for best talk by the judging panel, which included David Krakauer, Tom Carter and best-selling author Cormac McCarthy. At several points in the talk, I completely forgot what I was supposed to say, because the people filming the event asked me to set my screen up in a way that meant I couldn’t see my notes.


Sperl, M., Chang, A., Weber, N., & Hübler, A. (1999). Hebbian learning in the agglomeration of conducting particles. Physical Review E, 59(3), 3165-3168. DOI: 10.1103/PhysRevE.59.3165

Chater, N., & Christiansen, M. H. (2010). Language acquisition meets language evolution. Cognitive Science, 34(7), 1131-1157. PMID: 21564247

Ay, N., Flack, J., & Krakauer, D. C. (2007). Robustness and complexity co-constructed in multimodal signalling networks. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 362(1479), 441-447. PMID: 17255020

Ackley, D. H., & Cannon, D. C. (2011). Pursue robust indefinite scalability. In Proceedings of the Thirteenth Workshop on Hot Topics in Operating Systems (HotOS-XIII).

Guttal, V., & Couzin, I. D. (2010). Social interactions, information use, and the evolution of collective migration. Proceedings of the National Academy of Sciences of the United States of America, 107(37), 16172-16177. PMID: 20713700

Language, Thought, and Space (II): Universals and Variation

Spatial orientation is crucial when we try to navigate the world around us. It is a fundamental domain of human experience and depends on a wide array of cognitive capacities and integrated neural subsystems. What is most important for spatial cognition, however, are the frames of reference we use to locate and classify ourselves, others, objects, and events.

Often, we define a landmark (say ourselves, or a tree, or the telly) and then define an object’s location in relation to this landmark (the mouse is to my right, the bike lies left of the tree, my keys have fallen behind the telly). But as it turns out, many languages are not able to express a coordinate system with the meaning of the English expression “left of.” Instead, they employ a compass-like system of orientation.

They do not use a relative frame of reference, as in the English “the cat is behind the truck,” but instead an absolute frame of reference that can be illustrated in English by sentences such as “the cat is north of the truck” (Levinson 2003: 3). This may seem exotic to us, but for many languages it is the dominant – although often not the only – way of locating things in space.
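To make the contrast concrete, here is a small, purely illustrative Python sketch (not from the post or from Levinson): the 2-D coordinates, the four-way binning of angles, and the reflected treatment of English “in front of”/“behind” are my own simplifying assumptions.

```python
import math

def absolute_direction(landmark, obj):
    """Compass direction of obj from landmark (absolute frame of reference)."""
    dx, dy = obj[0] - landmark[0], obj[1] - landmark[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360   # 0 deg = north, clockwise
    return ["north", "east", "south", "west"][int((angle + 45) // 90) % 4]

def relative_direction(landmark, obj, viewer_heading_deg):
    """Direction of obj from landmark in a viewer-centred (relative) frame.

    English projects front/back by reflection: the side of the landmark
    facing the viewer is 'in front of', the far side is 'behind'.
    """
    dx, dy = obj[0] - landmark[0], obj[1] - landmark[1]
    angle = (math.degrees(math.atan2(dx, dy)) - viewer_heading_deg) % 360
    labels = ["behind", "to the right of", "in front of", "to the left of"]
    return labels[int((angle + 45) // 90) % 4]

truck, cat = (0, 0), (0, 5)   # the cat sits five units due north of the truck
print(absolute_direction(truck, cat))      # "north" (the same for every viewer)
print(relative_direction(truck, cat, 0))   # viewer faces north -> "behind"
print(relative_direction(truck, cat, 90))  # viewer faces east  -> "to the left of"
```

The point of the toy example is simply that the absolute description never changes, while the relative one depends entirely on where the viewer stands and faces.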

What cognitive consequences follow from this?

Continue reading “Language, Thought, and Space (II): Universals and Variation”

What Makes Humans Unique? (II): Six Candidates for What Makes Human Cognition Uniquely Human

What makes humans unique? This never-ending debate has sparked a long list of proposals and counter-arguments and, to quote from a recent article on this topic,

“a similar fate most likely awaits some of the claims presented here. However such demarcations simply have to be drawn once and again. They focus our attention, make us wonder, and direct and stimulate research, exactly because they provoke and challenge other researchers to take up the glove and prove us wrong.” (Høgh-Olesen 2010: 60)

In this post, I’ll focus on six candidates that might play a part in constituting what makes human cognition unique, though there are countless others (see, for example, here).

One of the key candidates for what makes human cognition unique is of course language and symbolic thought. We are “the articulate mammal” (Aitchison 1998) and an “animal symbolicum” (Cassirer 2006: 31). And if one defining feature truly fits our nature, it is that we are the “symbolic species” (Deacon 1998). But as evolutionary anthropologists Michael Tomasello and his colleagues argue,

“saying that only humans have language is like saying that only humans build skyscrapers, when the fact is that only humans (among primates) build freestanding shelters at all” (Tomasello et al. 2005: 690).

Language and Social Cognition

According to Tomasello and many other researchers, language and symbolic behaviour, although they certainly are crucial features of human cognition, are derived from human beings’ unique capacities in the social domain. As Willard van Orman Quine pointed out, language is essentially a “social art” (Quine 1960: ix). Specifically, it builds on the foundations of infants’ capacities for joint attention, intention-reading, and cultural learning (Tomasello 2003: 58). Linguistic communication, on this view, is essentially a form of joint action rooted in common ground between speaker and hearer (Clark 1996: 3 & 12), in which they make “mutually manifest” relevant changes in their cognitive environment (Sperber & Wilson 1995). This is the precondition for the establishment and (co-)construction of symbolic spaces of meaning and shared perspectives (Graumann 2002, Verhagen 2007: 53f.). These abilities, then, had to evolve prior to language, however great language’s effect on cognition may be in general (Carruthers 2002), and if we look for the origins and defining features of human uniqueness we should probably look in the social domain first.

Corroborating evidence for this view comes from comparisons of brain size among primates. Firstly, there are significant positive correlations between group size and primate neocortex size (Dunbar & Shultz 2007). Secondly, there is also a positive correlation between technological innovation and tool use – which are both facilitated by social learning – on the one hand and brain size on the other (Reader and Laland 2002). Our brain, it seems, is essentially a “social brain” that evolved to cope with the affordances of a primate social world that grew ever more complex (Dunbar & Shultz 2007, Lewin 2005: 220f.).

Thus, “although innovation, tool use, and technological invention may have played a crucial role in the evolution of ape and human brains, these skills were probably built upon mental computations that had their origins and foundations in social interactions” (Cheney & Seyfarth 2007: 283).

Continue reading “What Makes Humans Unique? (II): Six Candidates for What Makes Human Cognition Uniquely Human”

Inconsistent, yes… But here are some links.

Okay, I promised a post a day and failed miserably. But it does say in my profile that I’m inconsistent. So it’s probably best to not believe everything I write, even if there is one valuable lesson in my broken promises: avoid targets. Anyway, I just thought I’d do a quick post on some articles that have caught my attention over the past few days: