You’re clever for your kids’ sake: A feedback loop between intelligence and early births

The gap between our cognitive skills and those of our closest evolutionary relatives is quite astonishing. Within a relatively short evolutionary time frame, humans developed a wide range of cognitive abilities and bodies that are very different from those of other primates. Many of these differences appear to be related to each other. A recent paper by Piantadosi and Kidd argues that human intelligence originates in the restriction of human infants’ size at birth: premature births and long weaning times require intensive and intelligent care. This is an interesting hypothesis that links the ontogeny of the body with cognition.

Human weaning times are extraordinarily long. Human infants spend their first few months highly dependent on their caregivers, not just for food but for pretty much any interaction with the environment. Even by the time they are walking, they still spend years being dependent on their caregivers. Hence, it would be good for their parents to stick around and care for them – instead of catapulting them over the nearest mountain. Piantadosi and Kidd argue that “[h]umans must be born unusually early to accommodate larger brains, but this gives rise to particularly helpless neonates. Caring for these children, in turn, requires more intelligence—thus even larger brains.” [p. 1] This creates a runaway feedback loop between intelligence and weaning times, similar to those observed in sexual selection.

Piantadosi and Kidd’s computational model takes into account infant mortality as a function of intelligence and head circumference, but also the offspring’s likelihood of surviving into adulthood, which depends on parental care and intelligence. The predictions are made at the population level, and the model yields a fitness landscape with two optima: populations either drift towards long development and smaller head circumference (a proxy for intelligence in the model) or towards the second optimum – larger heads but shorter weaning times. Once a certain threshold has been crossed, a feedback loop emerges: more intelligent adults are able to support less mature babies. However, more intelligent adults will have even bigger heads when they are born – and thus need to be born even more prematurely in order to avoid complications at birth.
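To make the shape of that feedback loop concrete, here is a toy sketch in Python. The functional forms and parameters are my own inventions for illustration, not the authors’ actual model: fitness is the product of surviving birth (which gets harder as the head at birth nears a pelvic limit) and surviving infancy (which gets easier with more intelligent parents but harder the less mature the neonate):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fitness(intelligence, birth_maturity):
    """Toy fitness surface (invented forms, NOT the authors' model).

    birth_maturity in (0, 1]: fraction of full development at birth.
    Head size at birth grows with adult intelligence and with maturity;
    surviving birth drops steeply once the head nears a pelvic limit;
    surviving infancy requires parental intelligence to match the care
    demands of a helpless (immature) neonate.
    """
    head_at_birth = intelligence * birth_maturity
    p_birth = sigmoid(8.0 * (1.0 - head_at_birth))   # pelvic constraint
    care_demand = 2.0 * (1.0 - birth_maturity)       # helplessness at birth
    p_infancy = sigmoid(4.0 * (intelligence - care_demand))
    return p_birth * p_infancy

# Two local optima, echoing the paper's fitness landscape:
# low intelligence with mature (late) birth ...
print(round(fitness(0.6, 0.9), 2))
# ... versus high intelligence with premature birth.
print(round(fitness(1.8, 0.5), 2))
```

With these made-up parameters, both of those combinations score well, while the mixed cases (low intelligence with premature birth, or high intelligence with full-term birth and an oversized head) do badly – which is all the sketch is meant to show.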

To test their model’s predictions, the authors also correlated weaning times with intelligence measures across primates and found a high correlation across primate species. For example, bonobos and chimpanzees have an average weaning time of approximately 1,100 days and score highly on standardised intelligence measures. Lemurs, on the other hand, wean after only around 100 days and score much lower on intelligence. Furthermore, Piantadosi and Kidd also look at the relationship between weaning age and various other physical measures of the body, such as neocortex size, brain volume and body mass. However, weaning time remains the most reliable predictor in the model.
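A cross-species correlation of this kind is easy to sketch. The species values below are invented for illustration (only loosely echoing the weaning times mentioned above), not the paper’s data:

```python
import math

# Illustrative (made-up) values, NOT the paper's data:
# species -> (weaning time in days, intelligence score)
species = {
    "lemur":      (100,  0.20),
    "macaque":    (350,  0.45),
    "chimpanzee": (1100, 0.90),
    "bonobo":     (1100, 0.88),
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

weaning, intelligence = zip(*species.values())
r = pearson(weaning, intelligence)
print(round(r, 2))  # strongly positive for these invented numbers
```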

Piantadosi and Kidd’s model provides a very interesting perspective on how human intelligence could have been the product of a feedback loop between developmental maturity, neonatal head size, and infant care. Such a feedback component could explain the considerable evolutionary change humans have undergone. Yet between the two optima – late birth with a small head versus early birth with a large head – most populations drift towards the former (see Figure 2A in the paper). It appears that the model cannot explain the original evolutionary pressure for more intelligence that pushed humans over the edge: if early humans encountered an increased number of early births, why did those populations not simply die out, instead of taking the relatively costly route of becoming more intelligent? Only once there was a pressure towards more intelligence could humans have been pushed into the self-reinforcing cycle of early birth and high parental intelligence, a cycle that then drove humans towards much higher intelligence than they would have developed otherwise. Even if the account falls short of an ultimate explanation (i.e. why a certain feature evolved, the reason), Piantadosi and Kidd have described an interesting proximate explanation (i.e. how the feature evolved, the mechanism).

Because the data are correlational in nature, the reverse hypothesis might also hold – humans might be more intelligent because they spend more time interacting with their caregivers. In fact, a considerable proportion of infants’ experience is modulated by their caregivers, and this unique experience might ground an embodied perspective on the emergence of social signals. For example, infants in their early years see a proportionately high number of faces (Fausey et al., 2016). Maybe infants’ long period of dependence is what lets them learn so well from the people around them, allowing for the acquisition of cultural information and a more in-depth understanding of the world. The longer weaning time also means that infants attend much more to their caregivers, immersing them in a stimulus-rich environment for far longer than other species. Whatever the connection might be, I think that this kind of research offers a fascinating view on how children develop and what makes us human.


Fausey, C. M., Jayaraman, S., & Smith, L. B. (2016). From faces to hands: Changing visual input in the first two years. Cognition, 152, 101–107. doi: 10.1016/j.cognition.2016.03.005
Piantadosi, S. T., & Kidd, C. (2016). Extraordinary intelligence and the care of infants. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1506752113
Thanks to Denis for finding the article.

Posture helps robots learn words, and infants, too.

What kind of information do children and infants take into account when learning new words? And to what extent do they need to rely on interpreting a speaker’s intention to extract meaning? A paper by Morse, Cangelosi and Smith (2015), published in PLoS One, suggests that bodily states such as body posture might be used by infants to acquire word meanings in the absence of the object named. To test their hypothesis, the authors ran a series of experiments using a word learning task with infants—but also a self-learning robot, the iCub.

Continue reading “Posture helps robots learn words, and infants, too.”

Narrative and Abstraction: Some Problems with Cognitive Metaphor

I’ve had problems with cognitive metaphor theory (CMT) since Lakoff and Johnson published Metaphors We Live By (1980) – well, not since then exactly, because I didn’t read the book until a couple of years after its original publication. It’s not that I didn’t believe that language and cognition were thick with metaphor, much of it flying below the radar screen of explicit awareness. I had no trouble with that, nor with the idea that metaphor is an important mechanism for abstract thinking.

But it’s not the only mechanism.

During the 1970s I had studied with David Hays in the Linguistics Department of the State University of New York at Buffalo. He had developed a somewhat different account of abstract thought in which abstract ideas are derived from narrative – which I’ll explain below. I was reminded of this yesterday when Per Aage Brandt made the following remark in response to my critique of Lakoff and Turner on “To a Solitary Disciple”:

Instead, the text sketches out a little narrative. The lines run upwards, the ornament tries to stop them, they converge and now guard, contain and protect the flower/moon. This little story can then become a larger story of cult and divinity in the interpretation by a sort of allegorical projection. All narratives can project allegorically in a similar way.

Precisely so, a little narrative. Narratives too support abstraction.

My basic problem with cognitive metaphor theory, then, is that it claims too much. There’s more than one mechanism for constructing abstract concepts. David Hays and I outlined four in The Evolution of Cognition (1990): metaphor, metalingual definition and rationalization, theorization, and model building. There’s no reason to believe that those are the only existing or the only possible mechanisms for constructing abstract concepts.

In the rest of this note I want to sketch out Hays’s old notion of abstraction, point out how it somewhat resembles CMT, and then dig up some old notes that express further reservations about CMT.

Narrative and Metalingual Definition

The fact that various episodes can exhibit highly similar patterns of events and participants is the basis of Hays’s (1973) original approach to abstraction. He called it metalingual definition, after Roman Jakobson’s notion of language’s metalingual function. While Hays’s notion is different from the CMT of Lakoff and Johnson, I do not see it as an alternative, except in the sense that some of the cases they handle with conceptual metaphor might better be explicated by Hays’s metalingual account. But that is a secondary matter. Both mechanisms are needed, and, as I’ve indicated above, a few others as well. Continue reading “Narrative and Abstraction: Some Problems with Cognitive Metaphor”

On the entangled banks of representations (pt.1)

Lately, I took some time out to read through a few papers I’d put on the back burner until after my first-year review was completed. Now that that’s out of the way, I found myself looking through Berwick et al.‘s review on Evolution, brain, and the nature of language. Much of the paper manages to pull off the impressive job of making it sound as if the field has arrived at a consensus in areas that are still hotly debated. Still, what I’m interested in for this post is something that is often considered far less controversial than it is, namely the notion of mental representations. As an example, Berwick et al. posit that mind/brain-based computations construct mental syntactic and conceptual-intentional representations (internalization), with internal linguistic representations then being mapped onto their ordered output form (externalization). From these premises, the authors arrive at the reasonable enough assumption that language is an instrument of thought first, with communication taking a secondary role:

In marked contrast, linear sequential order does not seem to enter into the computations that construct mental conceptual-intentional representations, what we call ‘internalization’… If correct, this calls for a revision of the traditional Aristotelian notion: language is meaning with sound, not sound with meaning. One key implication is that communication, an element of externalization, is an ancillary aspect of language, not its key function, as maintained by what is perhaps a majority of scholars… Rather, language serves primarily as an internal ‘instrument of thought’.

If we take their conclusions for granted – and this is something I’m far from convinced by – there is still the question of whether or not we even need representations in the first place. If you were to read the majority of cognitive science, the answer is a fairly straightforward one: yes, of course we need mental representations, even if there’s no solid definition of what they are and what form they take in the brain. In fact, the notion of representations has become a major theoretical tenet of modern cognitive science, as evident in the way much of the field no longer treats it as a point of contention. The reason for this unquestioning acceptance has its roots in the notion that mental representations enrich an impoverished stimulus: that is, if an organism faces incomplete data, then it follows that it needs mental representations to fill in the gaps.

Continue reading “On the entangled banks of representations (pt.1)”

Seeds of Recursion in the Child’s Mind

Though it has roots in 19th-century mathematics, the idea of recursion owes most of its development to 20th-century work in mathematics, logic, and computing, where it has become one of the most fruitful ideas in modern – or is it post-modern? – thought. By making it central to his work in language syntax, Chomsky introduced recursion into discussions about the fundamental architecture of the human mind. The question of whether or not syntax is recursive has been important, and controversial, ever since.

My teacher, the late David Hays, was a computational linguist and had somewhat different ideas about recursion. When I say that he was a computational linguist I mean that he devoted a great deal of time and intellectual effort first to the problem of machine translation, and then more generally to using computer programs to simulate language processing. Back in those days computers were physically large, but computationally weak in comparison with today’s laptops and smartphones. Recursive syntax required more computational resources (memory space and CPU cycles per unit of time) than were available. Transformational grammars were computationally difficult.

Hays was of the view that recursion was a property of the human mind as a whole, but not necessarily of the syntactic component of language. In particular, building on Roman Jakobson’s notion of language’s metalingual function, he developed an account of metalingual definition for abstract concepts that allowed for the recursive nesting of definitions within one another (Cognitive networks and abstract terminology, Journal of Clinical Computing, 3(2):110-118, 1973).
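To give a feel for what recursive nesting of definitions looks like, here is a minimal Python sketch; the mini-dictionary and the concept names are invented for illustration and are not drawn from Hays’s 1973 paper. An abstract concept is defined over a pattern of other terms, and those terms may themselves be abstractly defined, so measuring the depth of a definition is naturally recursive:

```python
# Hypothetical sketch: an abstract concept is defined by a pattern of
# terms, some of which are themselves abstractly defined (nesting).
concepts = {
    "charity":  {"kind": "abstract", "pattern": ["give", "need", "kindness"]},
    "kindness": {"kind": "abstract", "pattern": ["help", "feel"]},
    "give":     {"kind": "concrete"},
    "need":     {"kind": "concrete"},
    "help":     {"kind": "concrete"},
    "feel":     {"kind": "concrete"},
}

def definition_depth(word):
    """Recursively measure how deeply a concept's definition nests.

    Concrete terms bottom out at depth 0; an abstract term is one level
    deeper than the deepest term in its defining pattern.
    """
    entry = concepts[word]
    if entry["kind"] == "concrete":
        return 0
    return 1 + max(definition_depth(w) for w in entry["pattern"])

# "charity" nests through "kindness" down to concrete terms.
print(definition_depth("charity"))  # → 2
```

The point of the sketch is only that the recursion lives in the conceptual dictionary, not in any syntactic rule – which is the distinction Hays was drawing.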

Given that Hays was my teacher, it will come as no surprise that I favor his view of the matter. But I will also note that I had become skeptical about Chomskian linguistics even before I came to study with Hays, though I had started out as a fan.

It is in that skeptical spirit that I present excerpts from two papers, one a bit of intellectual biography (Touchstones) and the other a formal academic paper centering on an elaborate and sophisticated model of the cognitive underpinnings of personal pronouns (First Person: Neuro-Cognitive Notes on the Self in Life and in Fiction). Continue reading “Seeds of Recursion in the Child’s Mind”