You’re clever for your kids’ sake: A feedback loop between intelligence and early births

The gap between our cognitive skills and those of our closest evolutionary relatives is quite astonishing. Within a relatively short evolutionary time frame, humans developed a wide range of cognitive abilities, and bodies that differ markedly from those of other primates. Many of these differences appear to be related to each other. A recent paper by Piantadosi and Kidd argues that human intelligence originates in the constraint on human infants’ head size at birth, which leads to premature births and long weaning times that in turn require intensive and intelligent care. This is an interesting hypothesis that links the ontogeny of the body with cognition.

Human weaning times are extraordinarily long. Human infants spend their first few months highly dependent on their caregivers, not just for food but for pretty much any interaction with the environment. Even by the time they are walking, they still spend years being dependent on their caregivers. Hence, it would be good for their parents to stick around and care for them – instead of catapulting them over the nearest mountain. Piantadosi and Kidd argue that “[h]umans must be born unusually early to accommodate larger brains, but this gives rise to particularly helpless neonates. Caring for these children, in turn, requires more intelligence—thus even larger brains.” [p. 1] This creates a runaway feedback loop between intelligence and weaning times, similar to the runaway processes observed in sexual selection.

Piantadosi and Kidd’s computational model takes into account infant mortality as a function of intelligence and head circumference, but also the offspring’s likelihood of surviving into adulthood, which depends on parental care and intelligence. The predictions are made at the population level, and the model predicts a fitness landscape in which two optima emerge: populations either drift towards long development and smaller head circumference (a proxy for intelligence in the model) or towards the second optimum – larger heads but shorter weaning times. Once a certain threshold has been crossed, a feedback loop emerges and more intelligent adults are able to support less mature babies. However, more intelligent adults will have even bigger heads when they are born – and thus need to be born even more prematurely in order to avoid complications at birth.
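To get a concrete feel for how such a two-peaked landscape can arise, here is a deliberately crude sketch. The functional forms, constants and trait names below are my own inventions for illustration, not the equations from Piantadosi and Kidd’s paper; the only point is that coupling a birth-risk penalty (which grows with the product of adult head size and maturity at birth) to a care-and-maturity benefit is already enough to produce two separated fitness peaks: small heads with mature newborns, or large heads with very premature newborns.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy fitness surface in the spirit of the model described above. The traits:
#   I : adult intelligence (standing in for adult head size), scaled to [0, 1]
#   M : maturity at birth (longer gestation, less premature), scaled to [0, 1]
# The functional forms and constants are invented for illustration only.
def fitness(I, M):
    # Head size at birth grows with both adult head size and gestation length,
    # so the risk of birth complications rises with the product I * M.
    p_birth = sigmoid(4.0 - 8.0 * I * M)
    # Surviving a long, helpless infancy requires either being born fairly
    # mature or having intelligent caregivers (or both).
    p_infancy = sigmoid(6.0 * (I + M) - 3.0)
    return p_birth * p_infancy

I, M = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201), indexing="ij")
F = fitness(I, M)

# Two symmetric peaks (the symmetry is an artefact of this toy): one with a
# small head and a mature newborn, one with a large head and a premature
# newborn, separated by a fitness valley along the diagonal.
i, j = np.unravel_index(F.argmax(), F.shape)
print(f"one peak:        I={I[i, j]:.2f}, M={M[i, j]:.2f}, fitness={F[i, j]:.3f}")
print(f"its mirror peak: I={M[i, j]:.2f}, M={I[i, j]:.2f}, fitness={fitness(M[i, j], I[i, j]):.3f}")
print(f"between them (I=M=0.5): fitness={fitness(0.5, 0.5):.3f}")
```

In the real model the traits evolve on a surface of this general shape, and the human-like peak is the one that combines a large adult brain with a very immature newborn.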

To test their model’s predictions, the authors also correlated weaning times and intelligence measures across primate species and found a strong correlation between the two. For example, bonobos and chimpanzees have an average weaning time of approximately 1,100 days and score highly on standardised intelligence measures. Lemurs, on the other hand, wean after only about 100 days and score much lower on intelligence. Furthermore, Piantadosi and Kidd also look at the relationship between weaning age and various other physical measures, such as the size of the neocortex, brain volume and body mass. However, weaning time remains the most reliable predictor in the model.

Piantadosi and Kidd’s model provides a very interesting perspective on how human intelligence could have been the product of a feedback loop between developmental maturity, neonatal head size and infant care. Such a feedback component could explain the considerable evolutionary change humans have undergone. Yet of the two optima – long gestation with a small head versus early birth with a large head – most populations drift towards the former (see Figure 2A in the paper). It appears that the model cannot explain the original evolutionary pressure for more intelligence that pushed humans over the edge: if early humans encountered an increased number of early births, why did those populations not simply die out, instead of taking the relatively costly route of becoming more intelligent? Only once there was already a pressure towards more intelligence could humans have been pushed into the self-reinforcing cycle of early births and high parental intelligence, a cycle that then drove them towards much higher intelligence than they would have developed otherwise. Even if the account falls short of an ultimate explanation (why a certain feature evolved), Piantadosi and Kidd have described an interesting proximate explanation (the mechanism by which it could have come about).

Because the data are only correlational, the reverse hypothesis might also hold – humans might be more intelligent because they spend more time interacting with their caregivers. A considerable amount of infants’ experience is modulated by their caregivers, and this unique experience might also support an embodied perspective on the emergence of social signals. For example, infants in their early years see a proportionately high number of faces (Fausey et al., 2016). Maybe it is infants’ long period of dependence that lets them learn so well from the people around them, allowing for the acquisition of cultural information and a more in-depth understanding of the world around them. The longer weaning time also makes them pay much more attention to caregivers, providing a stimulus-rich environment that human infants are immersed in for much longer than other species. Whatever the connection might be, I think this kind of research offers a fascinating view on how children develop and what makes us human.

References

Fausey, C. M., Jayaraman, S., & Smith, L. B. (2016). From faces to hands: Changing visual input in the first two years. Cognition, 152, 101–107. doi: 10.1016/j.cognition.2016.03.005
Piantadosi, S. T., & Kidd, C. (2016). Extraordinary intelligence and the care of infants. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1506752113
Thanks to Denis for finding the article.

Polythetic Entitation & Cultural Coordinators

Timothy Taylor has an interesting entry in this year’s “Edge Question” idea-fest. It has the ungainly title, Polythetic Entitation. He attributes the idea to the late David Clarke:

Clarke argued that the world of wine glasses was different to the world of biology, where a simple binary key could lead to the identification of a living creature (Does it have a backbone? If so, it is a vertebrate. Is it warm blooded? If so, it is a mammal or bird. Does it produce milk? … and so on). A wine glass is a polythetic entity, which means that none of its attributes, without exception, is simultaneously sufficient and necessary for group membership. Most wine glasses are made of clear glass, with a stem and no handle, but there are flower vases with all these, so they are not definitionally-sufficient attributes; and a wine glass may have none of these attributes—they are not absolutely necessary. It is necessary that the wine glass be able to hold liquid and be of a shape and size suitable for drinking from, but this is also true of a teacup. If someone offered me a glass of wine, and then filled me a fine ceramic goblet, I would not complain.

Taylor is an archaeologist, as was Clarke. They face the problem of how to identify cultural objects without knowing how they were used. An object’s physical characteristics generally do not speak unequivocally, hence the term polythetic (vs. monothetic). Thus:

Asking at the outset whether an object is made of glass takes us down a different avenue from first asking if it has a stem, or if it is designed to hold liquid. The first lumps the majority of wine glasses with window panes; the second groups most of them with vases and table lamps; and the third puts them all into a super-category that includes breast implants and Lake Mead, the Hoover dam reservoir. None of the distinctions provides a useful classificatory starting point. So grouping artefacts according to a kind of biological taxonomy will not do.

As a prehistoric archaeologist David Clarke knew this, and he also knew that he was continually bundling classes of artefacts into groups and sub-groups without knowing whether his classification would have been recognized emically, that is, in terms understandable to the people who created and used the artefacts. Although the answer is that probably they did have different functions, how might one work back from the purely formal, etic, variance—the measurable features or attributes of an artefact—to securely assign it to its proper category?

What matters for proper classification are the attributes with “cultural salience” (Taylor’s term).

Now cultural salience is how I define the genetic elements of culture, which I have taken to calling coordinators. Coordinators are the culturally salient properties of objects or processes. In a terminology originally promulgated by Kenneth Pike, they are emics (notice that Taylor uses this terminology as well).

One thing that became clear to me in Dan Everett’s Dark Matter of the Mind (see my review at 3 Quarks Daily) is that a culture covers or paints (other terms of art I am considering) its natural environment with coordinators. Thus Everett talks about how, even after he’d been among the Pirahã for a couple of years, he simply could not see the jungle as well as they did. They were born and raised in it; he was not. Features of the jungle – creatures and events – that were obvious to the Pirahã because they had learned to identify them, that were culturally salient to them, were invisible to Everett. They may have been right in front of his (lying) eyes, but he couldn’t discern them. They were not culturally salient to him, for his mind/brain had developed in a very different physical environment.

The polythetic nature of cultural artifacts is closely related to what I have elsewhere called abundance. The phenomena of the world have many properties; they are abundant. Only some of those properties are even perceptually available; after all, our ears cannot hear all sounds, our eyes cannot see all electromagnetic radiation, and so on. Of the perceptually available properties, only some will be culturally salient. This is as true for natural objects as for cultural artifacts and activities.
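The contrast between monothetic and polythetic classification is easy to spell out as a toy classifier. The attribute sets and the threshold below are hypothetical choices of mine, loosely following Taylor’s wine-glass example; nothing here comes from Clarke’s or Taylor’s actual analyses.

```python
# Hypothetical attribute set for "wine glass"; the list and the threshold
# are illustrative assumptions, not anything from Clarke or Taylor.
WINE_GLASS_ATTRIBUTES = {
    "made_of_clear_glass", "has_stem", "no_handle",
    "holds_liquid", "drinkable_shape_and_size",
}

def monothetic_wine_glass(attrs):
    """Monothetic rule: every listed attribute is necessary, and jointly they suffice."""
    return WINE_GLASS_ATTRIBUTES <= attrs

def polythetic_wine_glass(attrs, threshold=3):
    """Polythetic rule: enough characteristic attributes, but no single one is required."""
    return len(WINE_GLASS_ATTRIBUTES & attrs) >= threshold

ceramic_goblet = {"holds_liquid", "drinkable_shape_and_size", "has_stem", "no_handle"}
flower_vase = {"made_of_clear_glass", "has_stem", "no_handle", "holds_liquid"}

print(monothetic_wine_glass(ceramic_goblet))  # False: it fails the "clear glass" test
print(polythetic_wine_glass(ceramic_goblet))  # True: it shares enough attributes
print(polythetic_wine_glass(flower_vase))     # True as well
```

The point of the little demonstration is the last line: purely formal attributes admit flower vases as readily as ceramic goblets, which is exactly the archaeologist’s problem of working back from etic features to emic, culturally salient categories.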

Dan Everett’s Dark Matter @ 3QD

Consider these three words: gavagai, gabagaí, gabagool. If you’ve been binge watching episodes in the Star Trek franchise you might suspect them to be the equivalent of veni, vidi, vici, in the language of a space-faring race from the Gamma Quadrant. The truth, however, is even stranger.

The first is a made-up word that is well-known in certain philosophical circles. The second is not quite a word, but is from Pirahã, the Amazonian language brought to our attention by ex-missionary turned linguist, Daniel Everett, and can be translated as “frustrated initiation,” which is how Everett characterized his first field trip among the Pirahã. The third names an Italian cold cut that is likely spelled “capicola” or “capocolla” when written out and has various pronunciations depending on the local language. In New York and New Jersey, Tony Soprano country, it’s “gabagool”.

Everett discusses the first two in his wide-ranging new book, Dark Matter of the Mind: The Culturally Articulated Unconscious (2016), which I review at 3 Quarks Daily. As for gabagool, good things come in threes, no?

Why gavagai? Willard van Orman Quine coined the word for a thought experiment that points up the problem of word meaning. He broaches the issue by considering the problem of radical translation, “translation of the language of a hitherto untouched people” (Word and Object 1960, 28). He asks us to consider a “linguist who, unaided by an interpreter, is out to penetrate and translate a language hitherto unknown. All the objective data he has to go on are the forces that he sees impinging on the native’s surfaces and the observable behavior, focal and otherwise, of the native.” That is to say, he has no direct access to what is going on inside the native’s head, but utterances are available to him. Quine then asks us to imagine that “a rabbit scurries by, the native says ‘Gavagai’, and the linguist notes down the sentence ‘Rabbit’ (or ‘Lo, a rabbit’) as tentative translation, subject to testing in further cases” (p. 29).

Quine goes on to argue that, in thus proposing that initial translation, the linguist is making illegitimate assumptions. He begins his argument by noting that the native might, in fact, mean “white” or “animal”, and later on offers more exotic possibilities, the sort of things only a philosopher would think of. Quine also notes that whatever gestures and utterances the native offers as the linguist attempts to clarify and verify will be subject to the same problem.

As Everett notes, however, in his chapter on translation (266):

On the side of mistakes never made, however, Quine’s gavagai problem is one. In my field research on more than twenty languages—many of which involved monolingual situations …, whenever I pointed at an object or asked “What’s that?” I always got an answer for an entire object. Seeing me point at a bird, no one ever responded “feathers.” When asked about a manatee, no one ever answered “manatee soul.” On inquiring about a child, I always got “child,” “boy,” or “girl,” never “short hair.”

Later:

I believe that the absence of these Quinean answers results from the fact that when one person points toward a thing, all people (that I have worked with, at least) assume that what is being asked is the name of the entire object. In fact, over the years, as I have conducted many “monolingual demonstrations,” I have never encountered the gavagai problem. Objects have a relative salience… This is perhaps the result of evolved perception.

Frankly, I forget how I reacted to Quine’s thought experiment when I first read it as an undergraduate back in the 1960s. I probably found it a bit puzzling, and perhaps I even half-believed it. But that was a long time ago. When I read Everett’s comments on it, I was not surprised to find that the gavagai problem doesn’t arise in the real world, and I find his suspected explanation, evolved perception, convincing.

As one might expect, Everett devotes quite a bit of attention to recursion, with fascinating examples from Pirahã concerning evidentials, but I deliberately did not bring that up in my review. Why, given that everyone and their Aunt Sally seem to be all a-twitter about the issue, didn’t I discuss it? Because I’m tired of it and think that, at this point, it’s a case of the tail wagging the dog. I understand well enough why it’s an important issue, but it’s time to move on.

The important issue is to shift the focus of linguistic theory away from disembodied and decontextualized sentences and toward conversational interaction. That has been going on for some time now, and Everett has played a role in that shift. While the generative grammarians use merge as a term for syntactic recursion, the term could just as well be used to characterize how conversational partners assimilate what they’re hearing with what they’re thinking. Perhaps that’s what syntax is for and why it arose: to make conversation more efficient. I seem to recall that Everett has a suggestion to that effect in his discussion of the role of gestures in linguistic interaction.

Anyhow, if these and related matters interest you, read my review and read Everett’s book.

Mutable stability in the transmission of medieval texts

I’ve just checked in at Academia.edu and was alerted to this article:

Stephen G. Nichols, Mutable Stability, a Medieval Paradox: The Case of Le Roman de la Rose, Queste 23 (2016) 2, pp. 71-103.

I’ve not yet read it, but a quick skim makes it clear that it speaks to a current debate in cultural evolution concerning the high-fidelity transmission of “memes” (Dan Dennett) vs. the variable transmission of objects as guided by “factors of attraction” (Dan Sperber). Here are some tell-tale passages. This is from the beginning (p. 71):

Yet even those who argue, to the contrary, that ‘transmission errors’ often represent creative ‘participation’ by a talented scribe, must recognize the attraction of a stable work. After all, despite an extraordinary record of innovation, invention, and discovery, the Middle Ages are an era that resisted change in and for itself. And yet this same veneration of conservative values underlies a fascinating paradox of medieval culture: its delicate and seemingly contradictory balance between stability, on the one hand, and transformation, on the other. It may be that only an era that saw no contradiction in promulgating an omnipotent, unchanging divinity, which was at the same time a dynamic principle of construction and transformation, could have managed the paradox of what I want to call ‘mutable stability’.

Here’s Dawkins in the 2nd chapter of The Selfish Gene:

Darwin’s ‘survival of the fittest’ is really a special case of a more general law of survival of the stable. The universe is populated by stable things. A stable thing is a collection of atoms that is permanent enough or common enough to deserve a name. It may be a unique collection of atoms, such as the Matterhorn, that lasts long enough to be worth naming. Or it may be a class of entities, such as rain drops, that come into existence at a sufficiently high rate to deserve a collective name, even if any one of them is short-lived. The things that we see around us, and which we think of as needing explanation–rocks, galaxies, ocean waves–are all, to a greater or lesser extent, stable patterns of atoms.

Etc.

Back to Nichols, a bit later in the article (p. 77):

In this case, however, it’s one that allows us to understand the paradox of medieval narrative forms whose ‘stability’ over time – in some cases over several centuries – depends on what I call the generative – or regenerative – force of transmission. Why ‘regenerative’ if transmission involves reproducing the ‘same’ work from one representation to another? The answer to that question involves recognizing the complex forces at play in the transmission of medieval texts, beginning with concepts like ‘the same’ and ‘seeing’ or ‘perspective’. After all, in a culture where the technology of transmission depends on copying each text by hand, what the scribe sees, or thinks she or he sees, must be factored into our definition of ‘sameness’ when comparing original and copy.

In the event, ‘sameness’, for the medieval mind had a very different connotation from our modern senses of the term. Indeed, it even involves a different process of perception and imagination. Whereas in our age of mechanical and digital reproduction, we are used to standards of ‘exactness’ for things we recognize as identical, medieval people had neither the means nor the expectation to make ‘same’ and ‘exact imitation’ synonymous. Indeed, one may even question the existence at that time of such a concept as ‘exact imitation’, at least as we understand it.

Ontology and Cultural Evolution: “Spirit” or “Geist” and some of its measures

This post is about terminology, but also about things – in particular, an abstract thing – and measurements of those things. The things and measurements arise in the study of cultural evolution.

Let us start with a thing. What is this?

[Image: a figure from Jockers’s Macroanalysis]

If you are a regular reader here at New Savanna you might reply: Oh, that’s the whatchamacallit from Jockers’s Macroanalysis. Well, yes, it’s an illustration from Macroanalysis. But that’s not quite the answer I was looking for. Let’s call that answer a citation and set it aside.

Let’s ask the same question, but of a different object: What’s this?

[Photo: the moon]

I can imagine two answers, both correct, each in its own way:

1. It’s a photo of the moon.

2. The moon.

Strictly speaking, the first is correct and the second is not. It IS a photograph, not the moon itself. But the second answer is well within standard usage.

Notice that the photo does not depict the moon in full (whatever that might mean); no photograph could. That doesn’t change the fact that it is the moon that is depicted, not the sun, or Jupiter, or Alpha Centauri, or, for that matter, Mickey Mouse. We do not generally expect that representations of things should exhaust those things.

Now let us return to the first image and once again ask: What is this? I want two answers, one to correspond with each of our answers about the moon photo. I’m looking for something of the form:

1. A representation of X.

2. X.

Let us start with X. Jockers was analyzing a corpus of roughly 3,300 19th-century Anglophone novels. To do that he evaluated each of them on each of 600 features. Since those evaluations can be expressed numerically, Jockers was able to create a 600-dimensional space in which each text occupies a single point. He then connected the points representing texts that are relatively close to one another. Those texts are highly similar with respect to the 600 features that define the space.

The result is a directed graph with 3,300 nodes situated in 600 dimensions. So perhaps we can say that X is a corpus similarity graph. However, we cannot see in 600 dimensions, so there is no way we can directly examine that graph. It exists only as an abstract object in a computer. What we can do, and what Jockers did, is project that 600D object into two dimensions. That’s what we see in the image.
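For readers who want something concrete to hang this on, here is a minimal sketch of that kind of pipeline. It is not Jockers’s actual procedure or code (he worked with different tools and a force-directed layout); the data are random stand-ins and the dimensions are scaled down, but it keeps the distinction I care about: the similarity graph itself lives in the full feature space, while the thing we actually look at is a 2-D representation of it.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)

# Stand-in data: rows are texts, columns are stylistic/thematic features.
# (Jockers's corpus had ~3,300 novels and ~600 features; the numbers here
# are scaled down and randomly generated purely for illustration.)
n_texts, n_features = 300, 60
X = rng.normal(size=(n_texts, n_features))

# Step 1: the high-dimensional object -- connect each text to its nearest
# neighbours in feature space. This graph *is* the similarity structure,
# even though nobody can inspect it directly in 60 (or 600) dimensions.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                      # idx[:, 0] is the text itself
edges = [(i, j) for i in range(n_texts) for j in idx[i, 1:]]

# Step 2: a low-dimensional *representation* of that object -- project the
# feature space down to 2-D so the graph can be drawn on a page.
coords = PCA(n_components=2).fit_transform(X)

print(f"{n_texts} nodes, {len(edges)} directed edges")
print("2-D layout of the first three texts:\n", np.round(coords[:3], 3))
```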


Culture shapes the evolution of cognition

A new paper by Bill Thompson, Simon Kirby and Kenny Smith has just appeared which contributes to everyone’s favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to argue that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.

 

Abstract:

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.

Paper: http://www.pnas.org/content/early/2016/03/30/1523631113.full
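To see why cultural transmission can amplify a weak bias, it helps to play with a toy version of the kind of iterated-learning chain the paper builds on. The sketch below is not the authors’ model (their models also let the prior itself evolve biologically); it just runs many teacher-to-learner chains with a tiny prior bias, noisy production, a narrow transmission bottleneck and MAP (“maximising”) learners. All parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy iterated-learning chains: two variants (0 and 1), a weak prior bias
# towards variant 0, noisy production, a narrow bottleneck, and MAP learners.
epsilon = 0.001                    # tiny cognitive bias towards variant 0
log_prior = np.log([0.5 + epsilon, 0.5 - epsilon])
noise = 0.4                        # chance a token comes out as the other variant
n_tokens = 2                       # transmission bottleneck per generation
n_chains = 5000                    # independent teacher-to-learner chains
n_generations = 50

variants = rng.integers(0, 2, size=n_chains)   # generation 0: unbiased start

for _ in range(n_generations):
    # Each teacher produces n_tokens utterances, each flipped with probability `noise`.
    flips = rng.random((n_chains, n_tokens)) < noise
    tokens = np.where(flips, 1 - variants[:, None], variants[:, None])
    counts0 = (tokens == 0).sum(axis=1)        # variant-0 tokens the learner hears

    # Log-posterior over the two hypotheses, given the observed tokens.
    ll0 = counts0 * np.log(1 - noise) + (n_tokens - counts0) * np.log(noise)
    ll1 = counts0 * np.log(noise) + (n_tokens - counts0) * np.log(1 - noise)
    posterior = np.stack([log_prior[0] + ll0, log_prior[1] + ll1], axis=1)

    # MAP learners: the prior only matters when the data are ambiguous,
    # yet over generations that is enough to amplify the bias.
    variants = posterior.argmax(axis=1)

print(f"prior preference for variant 0: {0.5 + epsilon:.3f}")
print(f"share of chains using variant 0 after {n_generations} generations: "
      f"{(variants == 0).mean():.3f}")
```

With these numbers the chains end up using the weakly favoured variant roughly 80% of the time, far in excess of the 50.1% prior preference. That is the amplification effect at issue: cultural transmission can turn a barely-there inductive bias into a strong behavioural universal without any strong innate constraint.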

Two grants for PhD students in cultural evolution at Max Planck Institute (Jena)

The MPI for the Science of Human History is offering two grants for PhD students, starting 2016 (deadline for applications is March 21st, 2016).

The Minds and Traditions research group (“the Mint”), an Independent Max Planck Research Group at the Max Planck Institute for the Science of Human History in Jena (Germany), is offering two grants for two doctoral projects focusing on “cognitive science and cultural evolution of visual culture and graphic codes”.

Funding is available for four years (three years renewable twice for six months), starting in September 2016. The PhD students will be expected to take part in a research project devoted to the cognitive science and cultural evolution of graphic codes.

More details here.

An Inquiry into & a Critique of Dennett on Intentional Systems

A new working paper. Downloads HERE:

Abstract, contents, and introduction below:

* * * * *

Abstract: Using his so-called intentional stance, Dennett has identified so-called “free-floating rationales” in a broad class of biological phenomena. The term, however, is redundant on the pattern of objects and actions to which it applies, and using it has the effect of reifying the pattern in a peculiar way. The intentional stance is itself a pattern of wide applicability. However, in a broader epistemological view, it turns out that we are pattern-seeking creatures and that a phenomenon identified with some pattern must be verified by other techniques. The intentional stance deserves no special privilege in this respect. Finally, it is suggested that the intentional stance may get its intellectual power from the neuro-mental machinery it recruits and not from any special class of phenomena it picks out in the world.

CONTENTS

Introduction: Reverse Engineering Dan Dennett 2
Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains 6
In Search of Dennett’s Free-Floating Rationales 9
Dan Dennett on Patterns (and Ontology) 14
Dan Dennett, “Everybody talks that way” – Or How We Think 20

Introduction: Reverse Engineering Dan Dennett

I find Dennett puzzling. Two recent back-to-back videos illustrate that puzzle. One is a version of what seems to have become his standard lecture on cultural evolution:

https://www.youtube.com/watch?feature=player_embedded&v=AZX6awZq5Z0

As such it has the same faults I identify in the lecture that occasioned the first post in this collection, Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains. It’s a collection of nicely curated examples of mostly biological phenomena which Dennett crafts into an account of cultural evolution through energetic hand-waving and tap-dancing.
And then we have a somewhat shorter video that is a question and answer session following the first:

https://www.youtube.com/watch?feature=player_embedded&v=beKC_7rlTuw

I like much of what Dennett says in this video; I think he’s right on those issues.

What happened between the first and second video? For whatever reason, no one asked him about the material in the lecture he’d just given. They asked him about philosophy of mind and about AI. Thus, for example, I agree with him that The Singularity is not going to happen anytime soon, and likely not ever. Getting enough raw computing power is not the issue. Organizing it is, and as yet we know very little about that. Similarly I agree with him that the so-called “hard problem” of consciousness is a non-issue.

How is it that one set of remarks is a bunch of interesting examples held together by smoke and mirrors while the other set is cogent and substantially correct? I think these two sets of remarks require different kinds of thinking. The second set involves philosophical analysis, and, after all, Dennett is a philosopher more or less in the tradition of 20th-century Anglo-American analytic philosophy. But the first set of remarks, about cultural evolution, is about constructing a theory. It requires what I called speculative engineering in the preface to my book on music, Beethoven’s Anvil. On the face of it, Dennett is not much of an engineer.

And now things get really interesting. Consider this remark from a 1994 article [1] in which Dennett gives an overview of his thinking up to that time (p. 239):

My theory of content is functionalist […]: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering of the robot). This is a specifically ‘teleological’ notion of function (not the notion of a mathematical function or of a mere ‘causal role’, as suggested by David LEWIS and others). It is the concept of function that is ubiquitous in engineering, in the design of artefacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find ‘laws of nature’, but a species of engineering: the analysis, by ‘reverse engineering’, of the found artefacts of nature – which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.)

I am entirely in agreement with his emphasis on engineering. Biological thinking is “a species of engineering.” And so is cognitive science and certainly the study of culture and its evolution.

Earlier in that article Dennett had this to say (p. 236):

It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford, I developed a deep distrust of the methods I saw other philosophers employing, and decided that before I could trust any of my intuitions about the mind, I had to figure out how the brain could possibly accomplish the mind’s work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked – clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool ‘arts and humanities’ academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.)

My reaction to that last remark, that parenthesis, was something like: Coulda’ fooled me! For I had been thinking that an engineering sensibility is what was missing in Dennett’s discussions of culture. He didn’t seem to have a very deep sense of structure and construction, of, well, you know, how design works. And here he is telling us he coulda’ been an engineer.


Future tense and saving money: no correlation when controlling for cultural evolution

This week our paper on future tense and saving money is published (Roberts, Winters & Chen, 2015). In this paper we test a previous claim by Keith Chen about whether the language people speak influences their economic decisions (see Chen’s TED talk or his paper). We find that at least part of the previous study’s claims are not robust to controlling for historical relationships between cultures. We suggest that large-scale cross-cultural patterns should always take cultural history into account.

Does language influence the way we think?

There is a longstanding debate about whether the constraints of the languages we speak influence the way we behave. In 2012, Keith Chen discovered a correlation between the way a language allows people to talk about future events and their economic decisions: speakers of languages which make an obligatory grammatical distinction between the present and the future are less likely to save money.
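The methodological point turns on the fact that languages within the same family share both grammar and culture by descent, so treating each language as an independent data point inflates the evidence. Here is a minimal sketch of that idea. The synthetic data, column names and the linear mixed model are placeholders of mine (the paper’s own analyses use mixed-effects regressions with controls for language family and geographic area, plus within-family comparisons), but the contrast between pooling everything and letting the effect be judged within families is the same.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data (nothing here is from the actual dataset):
# languages nested in families, where both the chance of obligatory future
# marking ("strong FTR") and savings behaviour are driven by a shared
# family-level factor rather than by each other.
n_families, langs_per_family = 40, 10
family_effect = rng.normal(size=n_families)

rows = []
for f, u in enumerate(family_effect):
    for _ in range(langs_per_family):
        strong_ftr = rng.random() < 1 / (1 + np.exp(-2 * u))   # inherited tendency
        saved_money = u + rng.normal(scale=0.5)                 # cultural tendency
        rows.append({"family": f"fam{f}", "strong_ftr": int(strong_ftr),
                     "saved_money": saved_money})
df = pd.DataFrame(rows)

# Naive analysis: every language is treated as an independent data point.
naive = smf.ols("saved_money ~ strong_ftr", data=df).fit()

# Family-aware analysis: a random intercept per language family, so the
# future-tense effect is judged largely on within-family evidence.
controlled = smf.mixedlm("saved_money ~ strong_ftr", data=df,
                         groups=df["family"]).fit()

print("pooled estimate:      ", round(naive.params["strong_ftr"], 3))
print("family-aware estimate:", round(controlled.params["strong_ftr"], 3))
```

With data built this way, the pooled model is free to read the family-level confound as a future-tense effect, while the mixed model absorbs much of it into the random intercepts. That, in miniature, is why cross-cultural correlations need to take cultural history into account.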


Dan Dennett on Patterns (and Ontology)

I want to look at what Dennett has to say about patterns because 1) I introduced the term in my previous discussion, In Search of Dennett’s Free-Floating Rationales [1], and 2) it is interesting for what it says about his philosophy generally.

You’ll recall that, in that earlier discussion, I pointed out that talk of “free-floating rationales” (FFRs) was authorized by the presence of a certain state of affairs, a certain pattern of relationships among, in Dennett’s particular example, an adult bird, (vulnerable) chicks, and a predator. Does postulating FFRs add anything to the pattern? Does it make anything more predictable? No. Those FFRs are entirely redundant upon the pattern that authorizes them. By Occam’s Razor, they’re unnecessary.

With that, let’s take a quick look at Dennett’s treatment of the role of patterns in his philosophy. First I quote some passages from Dennett, with a bit of commentary, and then I make a few remarks on my somewhat different treatment of patterns. In a third post I’ll be talking about the computational capacities of the mind/brain.

Patterns and the Intentional Stance

Let’s start with a piece Dennett wrote in 1994, “Self-Portrait” [2] – incidentally, I found it quite useful in getting a better sense of what Dennett is up to. As the title suggests, it’s his account of his intellectual concerns up to that point (his intellectual life goes back to the early 1960s, at Harvard and then later at Oxford). The piece doesn’t contain technical arguments for his positions, but rather states what they were and gives their context in his evolving system of thought. For my purposes in this inquiry that’s fine.

He begins by noting that “the two main topics in the philosophy of mind are CONTENT and CONSCIOUSNESS” (p. 236). Intentionality belongs to the theory of content. It was, and I presume still is, Dennett’s view that the theory of intentionality/content is the more fundamental of the two. Later on he explains (p. 239):

… I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists RATIONALITY. Rather, the presupposition provides leverage for generating specific predictions of behaviour, via defeasible hypotheses about the content of the control states of the entity.

This represents a position Dennett will call “mild realism” later in the article. We’ll return to that in a bit. But at the moment I want to continue just a bit later on p. 239:

In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena – a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’ (1991d). Like such abstracta as centres of gravity and parallelograms of force, the BELIEFS and DESIRES posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if – most improbably – rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity.

Hence his interest in patterns. When one adopts the intentional stance (or the design stance, or the physical stance) one is looking for characteristic patterns.