The computational envelope of language – Once more into the breach

Time to saddle up and once more ride my current hobby horse, or one of them at least. In this case, the idea that natural language is the simplest aspect of human activity that is fundamentally and irreducibly computational in nature.

Let’s back into it.

* * * * *

Is arithmetic calculation computational in kind?

Well yes, of course. If anything is computation, that sure is.

Well then, in my current view, arithmetic calculation is language from which meaning has been completely removed, squeezed out as it were, leaving us with syntax, morphology, and so forth.

Elaborate.

First, let’s remind ourselves that arithmetic calculation, as performed by writing symbols on some surface, is a very specialized form of language. Sure, we think of it as something different from language…

All those years of drill and practice in primary school?

Yes. We have it drilled into our heads that arithmetic is one thing, over here, while language is something different, over there. But it’s obvious, isn’t it, that arithmetic is built from language?

OK, I’ll accept that.

So, arithmetic calculation has two kinds of symbols, numerals and operators. Both are finite in number. Numerals can be concatenated into strings of any length and in any order and combination.

OK. In the standard Arabic notation there are ten numerals, zero (0) through nine (9).

That’s correct.

And we’ve got five operators, +, -, * [times], ÷, and =. And, come to think of it, we probably should have left and right parentheses as well.

OK. What’s the relationship between these two kinds of symbols?

Hmmmm…. The operators allow us to specify various relationships between strings of numerals.

Starting with, yes, starting with a basic set of equivalences of the form, NumStr Op NumStr = NumStr, where Op is one of +, -, *, and ÷ and NumStr is a string of one or, in the case of these primitive equivalences, two numerals. [1]

Thus giving us those tables we memorized in grade school. Right!
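To make the "syntax without semantics" point concrete, here is a toy sketch (my own illustration, not anything from the dialog; the function names are invented) in which multi-digit addition is carried out purely by table lookup and symbol shuffling. The "addition table" is just a mapping from pairs of numeral symbols to result strings, and carrying is string manipulation; nothing in the procedure appeals to what the numerals mean:

```python
# The grade-school table: a lookup from pairs of numeral symbols to
# result strings. (Bootstrapped with int() for brevity; in principle it
# could be written out by hand, as in a primary-school primer.)
ADD = {(a, b): str(int(a) + int(b)) for a in "0123456789" for b in "0123456789"}

def add_strings(x: str, y: str) -> str:
    """Add two numeral strings by table lookup and carrying alone."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)   # pad with the symbol "0"
    out, carry = [], "0"
    for a, b in zip(reversed(x), reversed(y)):
        s = ADD[(a, b)]              # e.g. ("7", "5") -> "12"
        t = ADD[(s[-1], carry)]      # fold in the carry symbol
        out.append(t[-1])            # keep the rightmost symbol
        # a two-symbol result from either lookup means "carry the 1"
        carry = "1" if len(s) == 2 or len(t) == 2 else "0"
    if carry == "1":
        out.append("1")
    return "".join(reversed(out)).lstrip("0") or "0"

assert add_strings("58", "67") == "125"
```

The procedure is pure symbol manipulation in Searle's sense: the machine (or schoolchild) consulting the table need not know that "5" can designate any collection with five members.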

What do you mean by semantics being removed?

Well, what are the potentially meaning-bearing elements in this collection?

That would be the numerals, no?

Yes. What do they mean?

Why, they don’t mean anything…

Well… But they aren’t completely empty, are they?

No.

Elaborate. What’s not empty about, say, 5?

5 could designate…

By “designate” you mean “mean”?

Yes. 5 could designate any collection with five members. 5 apples, 5 oranges, 5 mountains, 5 stars… [2]

What about an apple, an orange, a mountain, a star, and a dragon?

Yes, as long as there’s five of them.

Ah, I see. The numerals, or strings of numerals, are connected to the world through the operation of counting. When we use them to count, they, in effect, become numbers. But, yes, that’s a very general kind of relationship. Not much semantics or meaning there.

Right. And that’s what I mean by empty of semantics. All we’ve got left is syntax, more or less.

Sounds a bit like Searle in his Chinese Room.

Yes, it does, doesn’t it?

The idea is that the mental machinery we use to do arithmetic calculation is natural computation, computation performed by a brain, from which semantics has been removed. That machinery is there in ordinary language, or even extraordinary language. Language couldn’t function without it. That’s where language gets its combinatorial facility.

And THAT sounds like Chomsky, no?

Yes.

* * * * *

And so it goes, on and on.

When the intellectual history of the second half of the twentieth century gets written, the discovery of the irreducibly computational nature of natural language will surely be listed as one of the highlights. Just who will get the honor, that’s not clear, though Chomsky is an obvious candidate. He certainly played a major role. But he didn’t figure out how an actual physical system could do it (the question was of little or no interest to him), and surely that’s part of the problem. If so, however, then we still haven’t gotten it figured out, have we?

* * * * *

[1] Isn’t that a bit sophisticated for the Glaucon figure in this dialog? Yes, but this is a 21st century Glaucon. He’s got a few tricks up his sleeve.

[2] Sounds a bit like the Frege/Russell set theory definition of number: a natural number n is the collection of all sets with n elements.

CfP: Construal and language dynamics (ICLC-15 workshop proposal)

What do we mean when we talk about the “cognitive foundations” of language or the “cognitive principles” behind linguistic phenomena? And how can we tap into the cognitive underpinnings of language? These questions lie at the heart of a workshop that Michael Pleyer and I are going to propose for the next International Cognitive Linguistics Conference. Here’s our Call for Papers:

Construal and language dynamics: Interdisciplinary and cross-linguistic perspectives on linguistic conceptualization
– Workshop proposal for the 15th International Cognitive Linguistics Conference, Nishinomiya, Japan, August 6–11, 2019 –

Convenors:
Stefan Hartmann, University of Bamberg
Michael Pleyer, University of Koblenz-Landau

The concept of construal has become a key notion in many theories within the broader framework of Cognitive Linguistics. It lies at the heart of Langacker’s (1987, 1991, 2008) Cognitive Grammar, but it also plays a key role in Croft’s (2012) account of verbal argument structure as well as in the emerging framework of experimental semantics (Bergen 2012; Matlock & Winter 2015). Indirectly it also figures in Talmy’s (2000) theory of cognitive semantics, especially in his “imaging systems” approach (see e.g. Verhagen 2007).

According to Langacker (2015: 120), “[c]onstrual is our ability to conceive and portray the same situation in alternate ways.” From the perspective of Cognitive Grammar, an expression’s meaning consists of conceptual content – which can, in principle, be captured in truth-conditional terms – and its construal, which encompasses aspects such as perspective, specificity, prominence, and dynamicity. Croft & Cruse (2004) summarize the construal operations proposed in previous research, arriving at more than 20 linguistic construal operations that are seen as instances of general cognitive processes.

Given the “quantitative turn” in Cognitive Linguistics (e.g. Janda 2013), the question arises how the theoretical concepts proposed in the foundational works of the framework can be empirically tested and how they can be refined on the basis of empirical findings. Much work in the domains of experimental linguistics and corpus linguistics has established a research cycle whereby hypotheses are generated on the basis of theoretical concepts from Cognitive Linguistics, such as construal operations, and then tested using behavioral and/or corpus-linguistic methods (see e.g. Hilpert 2008; Matlock 2010; Schönefeld 2011; Matlock et al. 2012; Krawczak & Glynn forthc., among many others).

Arguably one of the most important testing grounds for theories of linguistic construal is the domain of language dynamics. Recent years have seen increasing convergence between Cognitive-Linguistic theories on the one hand and theories conceiving of language as a complex adaptive system on the other (Beckner et al. 2009; Frank & Gontier 2010; Fusaroli & Tylén 2012; Pleyer 2017). In this framework, language can be understood as a dynamic system unfolding on the timescales of individual learning, socio-cultural transmission, and biological evolution (Kirby 2012, Enfield 2014). Linguistic construal operations can be seen as important factors shaping the structure of language both on a historical timescale and in ontogenetic development (e.g. Pleyer & Winters 2014).

Empirical studies of language acquisition, language change, and language variation can therefore help us understand the nature of linguistic construal operations and can also contribute to refining theories of linguistic construal. Interdisciplinary and cross-linguistic perspectives can prove particularly insightful in this regard. Findings from cognitive science and developmental psychology can contribute substantially to our understanding of the cognitive principles behind language dynamics. Cross-linguistic comparison can, on the one hand, lead to the discovery of striking similarities across languages that might point to shared underlying cognitive principles (e.g. common pathways of grammaticalization, see e.g. Bybee et al. 1994, or similarities in the domain of metaphorical construal, see Taylor 2003: 140), but it can also safeguard against premature generalizations from findings obtained in one single language to human cognition at large (see e.g. Goschler 2017).

For our proposed workshop, we invite contributions that explicitly connect theoretical approaches to linguistic construal operations with empirical evidence from e.g. corpus linguistics, experimental studies, or typological research. In line with the cross-linguistic outlook of the main conference, we are particularly interested in papers that compare linguistic construals across different languages. Also, we would like to include interdisciplinary perspectives from the behavioural and cognitive sciences.

The topics that can be addressed in the workshop include, but are not limited to,

  • the role of construal operations such as perspectivation and specificity in language production and processing;
  • the acquisition and diachronic change of linguistic categories;
  • the question of whether individual construal operations that have been proposed in the literature are cognitively realistic (see e.g. Broccias & Hollmann 2007) and whether they can be tested empirically;
  • the refinement of construal-related concepts such as “salience” or “prominence” based on empirical findings (see e.g. Schmid & Günther 2016);
  • the relationship between linguistic construal operations and domain-general cognitive processes;
  • the relationship between empirical observations and the conclusions we draw from them about the organization of the human mind, including the viability of concepts such as the “corpus-to-cognition” principle (see e.g. Arppe et al. 2010) or the mapping of behavioral findings to cognitive processes.

Please send a short abstract (max. 1 page excl. references) and a ~100-word summary to construal.iclc15@gmail.com by September 10th, 2018 (deadline extended from August 31st). We will inform all potential contributors in early September whether their paper can be included in our workshop proposal. If we are unable to accommodate your submission, you can of course submit it to the general session of the conference. The same applies if our workshop proposal as a whole is rejected.


References

Arppe, Antti, Gaëtanelle Gilquin, Dylan Glynn, Martin Hilpert & Arne Zeschel. 2010. Cognitive Corpus Linguistics: Five Points of Debate on Current Theory and Methodology. Corpora 5(1). 1–27.

Beckner, Clay, Richard Blythe, Joan Bybee, Morten H. Christiansen, William Croft, Nick C. Ellis, John Holland, Jinyun Ke, Diane Larsen-Freeman & Tom Schoenemann. 2009. Language is a Complex Adaptive System: Position Paper. Language Learning 59 Suppl. 1. 1–26.

Bergen, Benjamin K. 2012. Louder than Words: The New Science of How the Mind Makes Meaning. New York: Basic Books.

Broccias, Cristiano & Willem B. Hollmann. 2007. Do we need Summary and Sequential Scanning in (Cognitive) Grammar? Cognitive Linguistics 18. 487–522.

Bybee, Joan L., Revere Perkins & William Pagliuca. 1994. The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: University of Chicago Press.

Croft, William & Alan Cruse. 2004. Cognitive Linguistics. Cambridge: Cambridge University Press.

Enfield, N.J. 2014. Natural causes of language: frames, biases, and cultural transmission. (Conceptual Foundations of Language Science 1). Berlin: Language Science Press.

Frank, Roslyn M. & Nathalie Gontier. 2010. On Constructing a Research Model for Historical Cognitive Linguistics (HCL): Some Theoretical Considerations. In Margaret E. Winters, Heli Tissari & Kathryn Allan (eds.), Historical Cognitive Linguistics, 31–69. (Cognitive Linguistics Research 47). Berlin, New York: De Gruyter.

Fusaroli, Riccardo & Kristian Tylén. 2012. Carving language for social coordination: A dynamical approach. Interaction Studies 13(1). 103–124.

Goschler, Juliana. 2017. A contrastive view on the cognitive motivation of linguistic patterns: Concord in English and German. In Stefan Hartmann (ed.), Yearbook of the German Cognitive Linguistics Association 2017, 119–128.

Hilpert, Martin. 2008. New evidence against the modularity of grammar: Constructions, collocations, and speech perception. Cognitive Linguistics 19(3). 491–511.

Janda, Laura (ed.). 2013. Cognitive Linguistics: The Quantitative Turn. Berlin, New York: De Gruyter.

Kirby, Simon. 2012. Language is an Adaptive System: The Role of Cultural Evolution in the Origins of Structure. In Maggie Tallerman & Kathleen R. Gibson (eds.), The Oxford Handbook of Language Evolution, 589–604. Oxford: Oxford University Press.

Krawczak, Karolina & Dylan Glynn. forthc. Operationalising construal. Of / about prepositional profiling for cognition and communication predicates. In C. M. Bretones Callejas & Chris Sinha (eds.), Construals in language and thought. What shapes what? Amsterdam, Philadelphia: John Benjamins.

Langacker, Ronald W. 1987. Foundations of Cognitive Grammar. Vol. 1: Theoretical Prerequisites. Stanford: Stanford University Press.

Langacker, Ronald W. 1991. Foundations of Cognitive Grammar. Vol. 2: Descriptive Application. Stanford: Stanford University Press.

Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.

Langacker, Ronald W. 2015. Construal. In Ewa Dąbrowska & Dagmar Divjak (eds.), Handbook of Cognitive Linguistics, 120–142. Berlin, New York: De Gruyter.

Matlock, Teenie. 2010. Abstract Motion is No Longer Abstract. Language and Cognition 2(2). 243–260.

Matlock, Teenie, David Sparks, Justin L. Matthews, Jeremy Hunter & Stephanie Huette. 2012. Smashing New Results on Aspectual Framing: How People Talk about Car Accidents. Studies in Language 36(3). 700–721.

Matlock, Teenie & Bodo Winter. 2015. Experimental Semantics. In Bernd Heine & Heiko Narrog (eds.), The Oxford Handbook of Linguistic Analysis, 771–790. Oxford: Oxford University Press.

Pleyer, Michael & James Winters. 2014. Integrating Cognitive Linguistics and Language Evolution Research. Theoria et Historia Scientiarum 11. 19–43.

Schmid, Hans-Jörg & Franziska Günther. 2016. Toward a Unified Socio-Cognitive Framework for Salience in Language. Frontiers in Psychology 7. doi:10.3389/fpsyg.2016.01110 (31 March, 2018).

Schönefeld, Doris (ed.). 2011. Converging evidence: methodological and theoretical issues for linguistic research. (Human Cognitive Processing 33). Amsterdam, Philadelphia: John Benjamins.

Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge: MIT Press.

Taylor, John R. 2003. Linguistic Categorization. 3rd ed. Oxford: Oxford University Press.

Verhagen, Arie. 2007. Construal and Perspectivization. In Dirk Geeraerts & Hubert Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics, 48–81. Oxford: Oxford University Press.

CfP: New Directions in Language Evolution Research

Panorama of Tallinn from the sea (Source: https://commons.wikimedia.org/wiki/File%3ATallinnPan.jpg, by Terker, CC-BY-SA 3.0)

Jonas Nölle, Peeter Tinits and I are going to submit a workshop proposal to next year’s Annual Meeting of the Societas Linguistica Europaea (SLE), which will be held in Tallinn from August 29th to September 1st, 2018. We thought this would be a nice opportunity to bring evolutionary linguistics to SLE – and also a good opportunity to discuss novel and innovative approaches to language evolution in a condensed workshop setting.

Please note that there will be – as usual at SLE – a three-step selection process:

Step 1: You submit a 300-word abstract to us (the organizers: newdir.langev@gmail.com) by November 10th. We then select up to 12 papers that we include in our workshop proposal. As we want the “New directions” in our title to be more than a shallow phrase, we will base our selection as much as possible on the innovativeness of the abstracts we receive. If we’re unable to consider your paper for the workshop, there’s still the option to submit to the general session.

Step 2: Our workshop proposal is then reviewed by the scientific committee, and we’ll receive a notification of acceptance or rejection by December 15th. Good news: If you’ve submitted an abstract, there’s nothing for you to do at this point except for keeping your fingers crossed.

Step 3: If the workshop is accepted, we will ask you to submit a 500-word abstract via the conference submission system, which will be peer-reviewed like any general session paper. Notifications of acceptance or rejection can be expected in March 2018.

We’re looking forward to your contributions, and regardless of the outcome of our proposal, we hope to see many of you in Tallinn!

Here’s our CfP, which will also appear on Linguist List and on the official SLE2018 website soon:

Research on language evolution is undoubtedly among the fastest-growing topics in linguistics. This is not a coincidence: While scholars have always been interested in the origins and evolution of language, it is only now that many questions can be addressed empirically drawing on a wealth of data and a multitude of methodological approaches developed in the different disciplines that try to find answers to what has been called “the hardest problem in science” (Christiansen & Kirby 2003). Importantly, any theory of how language may have emerged requires a solid understanding of how language and other communication systems work. As such, the questions in language evolution research are manifold and interface in multiple ways with key open questions in historical and theoretical linguistics: What exactly makes human language unique compared to animal communication systems?  How do cognition, communication and transmission shape grammar? Which factors can explain linguistic diversity? How and why do languages change? To what extent is the structure of language(s) shaped by extra-linguistic, environmental factors?

Over the last 20 years or so, evolutionary linguistics has set out to find answers to these and many more questions. As, e.g., Dediu & De Boer (2016) have noted, the field of language evolution research is currently coming of age, and it has developed a rich toolkit of widely-adopted methods both for comparative research, which investigates the commonalities and differences between human language and animal communication systems, and for studying the cumulative cultural evolution of sign systems in experimental settings, including both computational and behavioral approaches (see e.g. Tallerman & Gibson 2012; Fitch 2017). In addition, large-scale typological studies have gained importance in recent research on language evolution (e.g. Evans 2010).

The goal of this workshop is to discuss innovative theoretical and methodological approaches that go beyond the current state of the art by proposing and empirically testing new hypotheses, by developing new or refining existing methods for the study of language evolution, and/or by reinterpreting the available evidence in the light of innovative theoretical frameworks. In this vein, we aim at bringing together researchers from multiple disciplines and theoretical backgrounds to discuss the latest developments in language evolution research. Topics include, but are not limited to,

  • experimental approaches investigating the emergence and/or development of sign systems in frameworks such as experimental semiotics (e.g. Galantucci & Garrod 2010) or artificial language learning (e.g. Kirby et al. 2014);
  • empirical research on non-human communication systems as well as comparative research on animal cognition with respect to its relevance for the evolution of cognitive prerequisites for fully-fledged human language (Kirby 2017);
  • approaches using computational modelling and robotics (Steels 2011) in order to investigate problems like the grounding of symbol systems in non-symbolic representations (Harnad 1990), the emergence of the particular features that make human language unique (Kirby 2017, Smith 2014), or the question to what extent these features are domain-specific, i.e. evolved by natural selection for a specifically linguistic function (Culbertson & Kirby 2016);
  • research that explicitly combines expertise from multiple different disciplines, e.g. typology and neurolinguistics (Bickel et al. 2015); genomics, archaeology, and linguistics (Pakendorf 2014, Theofanopoulou et al. 2017); comparative biology and philosophy of language (Moore 2016); and many more.

If you are interested in participating in the workshop, please send an abstract (c. 300 words) to the organizers (newdir.langev@gmail.com) by November 10th. We will let you know by November 15th if your paper is eligible for the proposed workshop. If our workshop proposal is accepted, you will be required to submit an anonymous abstract of ca. 500 words via the SLE submission system by January 15th. If our proposal is not accepted or if we cannot accommodate your paper in the workshop, you can still submit your abstract as a general session paper.

References

Bickel, Balthasar, Alena Witzlack-Makarevich, Kamal K. Choudhary, Matthias Schlesewsky & Ina Bornkessel-Schlesewsky. 2015. The Neurophysiology of Language Processing Shapes the Evolution of Grammar: Evidence from Case Marking. PLOS ONE 10(8). e0132819.

Christiansen, Morten H. & Simon Kirby. 2003. Language Evolution: The Hardest Problem in Science. In Morten H. Christiansen & Simon Kirby (eds.), Language Evolution, 1–15. (Oxford Studies in the Evolution of Language 3). Oxford: Oxford University Press.

Culbertson, Jennifer & Simon Kirby. 2016. Simplicity and Specificity in Language: Domain-General Biases Have Domain-Specific Effects. Frontiers in Psychology 6. doi:10.3389/fpsyg.2015.01964.

Dediu, Dan & Bart de Boer. 2016. Language evolution needs its own journal. Journal of Language Evolution 1(1). 1–6.

Evans, Nicholas. 2010. Language diversity as a tool for understanding cultural evolution. In Peter J. Richerson & Morten H. Christiansen (eds.), Cultural Evolution : Society, Technology, Language, and Religion, 233–268. Cambridge: MIT Press.

Fitch, W. Tecumseh. 2017. Empirical approaches to the study of language evolution. Psychonomic Bulletin & Review 24(1). 3–33.

Galantucci, Bruno & Simon Garrod. 2010. Experimental Semiotics: A new approach for studying the emergence and the evolution of human communication. Interaction Studies 11(1). 1–13.

Harnad, Stevan. 1990. The symbol grounding problem. Physica D 42. 335–346.

Kirby, Simon, Tom Griffiths & Kenny Smith. 2014. Iterated Learning and the Evolution of Language. Current Opinion in Neurobiology 28. 108–114.

Kirby, Simon. 2017. Culture and biology in the origins of linguistic structure. Psychonomic Bulletin & Review 24(1). 118–137.

Moore, Richard. 2016. Meaning and ostension in great ape gestural communication. Animal Cognition 19(1). 223–231.

Pakendorf, Brigitte. 2014. Coevolution of languages and genes. Current Opinion in Genetics & Development 29. 39–44.

Smith, Andrew D.M. 2014. Models of language evolution and change: Language evolution and change. Wiley Interdisciplinary Reviews: Cognitive Science 5(3). 281–293.

Steels, Luc. 2011. Modeling the Cultural Evolution of Language. Physics of Life Reviews 8. 339–356.

Tallerman, Maggie & Kathleen R. Gibson (eds.). 2012. The Oxford Handbook of Language Evolution. Oxford: Oxford University Press.

Theofanopoulou, Constantina, Simone Gastaldon, Thomas O’Rourke, Bridget D. Samuels, Angela Messner, Pedro Tiago Martins, Francesco Delogu, Saleh Alamri & Cedric Boeckx. 2017. Self-domestication in Homo sapiens: Insights from comparative genomics. PLOS ONE 12(10). e0185306.

Usage context and overspecification

A new issue of the Journal of Language Evolution has just appeared, including a paper by Peeter Tinits, Jonas Nölle, and myself on the influence of usage context on the emergence of overspecification. (It was actually published online a couple of weeks ago, and an earlier version of it was included in last year’s Evolang proceedings.) Some of the volunteers who participated in our experiment were actually recruited via Replicated Typo – thanks to everyone who helped us out! Without you, this study wouldn’t have been possible.

I hope that I’ll find time to write a bit more about this paper in the near future, especially about its development, which might itself qualify as an interesting example of cultural evolution. Even though the paper just reports on a tiny experimental case study, addressing a fairly specific phenomenon, we discovered, in the process of writing, that each of the three authors had quite different ideas of how language works, which made the write-up process much more challenging than expected (but arguably also more interesting).

For now, however, I’ll just link to the paper and quote our abstract:

This article investigates the influence of contextual pressures on the evolution of overspecification, i.e. the degree to which communicatively irrelevant meaning dimensions are specified, in an iterated learning setup. To this end, we combine two lines of research: In artificial language learning studies, it has been shown that (miniature) languages adapt to their contexts of use. In experimental pragmatics, it has been shown that referential overspecification in natural language is more likely to occur in contexts in which the communicatively relevant feature dimensions are harder to discern. We test whether similar functional pressures can promote the cumulative growth of referential overspecification in iterated artificial language learning. Participants were trained on an artificial language which they then used to refer to objects. The output of each participant was used as input for the next participant. The initial language was designed such that it did not show any overspecification, but it allowed for overspecification to emerge in 16 out of 32 usage contexts. Between conditions, we manipulated the referential context in which the target items appear, so that the relative visuospatial complexity of the scene would make the communicatively relevant feature dimensions more difficult to discern in one of them. The artificial languages became overspecified more quickly and to a significantly higher degree in this condition, indicating that the trend toward overspecification was stronger in these contexts, as suggested by experimental pragmatics research. These results add further support to the hypothesis that linguistic conventions can be partly determined by usage context and show that experimental pragmatics can be fruitfully combined with artificial language learning to offer valuable insights into the mechanisms involved in the evolution of linguistic phenomena.
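The transmission-chain logic the abstract describes ("the output of each participant was used as input for the next participant") can be caricatured in a few lines. This is a hypothetical sketch of the generic iterated-learning setup, not the authors' actual design or their overspecification manipulation: each simulated learner observes only a subset of the current language (a transmission bottleneck) and must generalize to the meanings it never saw, so the lexicon changes cumulatively down the chain:

```python
import random

def learn(language, bottleneck, rng):
    # Each simulated learner observes only `bottleneck` form-meaning
    # pairs and reuses an observed form for any meaning it never saw.
    seen = dict(rng.sample(sorted(language.items()), bottleneck))
    forms = list(seen.values())
    return {m: seen.get(m, rng.choice(forms)) for m in language}

rng = random.Random(0)
language = {m: f"word{m}" for m in range(8)}   # start: 8 distinct forms
for generation in range(10):                   # each output feeds the next learner
    language = learn(language, bottleneck=4, rng=rng)

# The bottleneck can only shrink the form inventory: after one
# generation at most 4 distinct forms survive.
assert len(set(language.values())) <= 4
```

In this toy version the bottleneck drives the language toward ambiguity; the interesting move in the paper is the opposite one, showing how contextual pressure can instead make redundant (overspecified) marking accumulate.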

In addition to our article, there are a number of other papers in the new JoLE issue that are well worth a read, including another Iterated Learning paper by Clay Beckner, Janet Pierrehumbert, and Jennifer Hay, who have conducted a follow-up on the seminal Kirby, Cornish & Smith (2008) study. Apart from presenting highly relevant findings, they also make some very interesting methodological points.

You’re clever for your kids’ sake: A feedback loop between intelligence and early births

The gap between our cognitive skills and those of our closest evolutionary relatives is quite astonishing. Within a relatively short evolutionary time frame, humans developed a wide range of cognitive abilities, and bodies that are very different from those of other primates and animals. Many of these differences appear to be related to each other. A recent paper by Piantadosi and Kidd argues that human intelligence originates in the constraints on infants' size at birth, which lead to premature births and long weaning times that require intensive and intelligent care. This is an interesting hypothesis that links the ontogeny of the body with cognition.

Human weaning times are extraordinarily long. Human infants spend their first few months being highly dependent on their caregivers, not just for food but for pretty much any interaction with the environment. Even by the time they are walking, they still spend years being dependent on their caregivers. Hence, it would be good for their parents to stick around and care for them – instead of catapulting them over the nearest mountain. Piantadosi and Kidd argue that “[h]umans must be born unusually early to accommodate larger brains, but this gives rise to particularly helpless neonates. Caring for these children, in turn, requires more intelligence—thus even larger brains.” [p. 1] This creates a runaway feedback loop between intelligence and weaning times, similar to those observed in sexual selection.

Piantadosi and Kidd’s computational model treats infant mortality as a function of intelligence and head circumference, but it also takes into account the offspring’s likelihood of surviving into adulthood, which depends on parental care/intelligence. The predictions are made at the population level, and the model predicts a fitness landscape in which two optima emerge: populations either drift towards long development and smaller head circumference (a proxy for intelligence in the model), or they drift towards the second optimum – larger heads but shorter weaning time. Once a certain threshold has been crossed, a feedback loop emerges, and more intelligent adults are able to support less mature babies. However, more intelligent adults will have even bigger heads when they are born – and thus need to be born even more prematurely in order to avoid complications at birth.
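The qualitative trade-off can be sketched in a few lines. This is emphatically not Piantadosi and Kidd's actual model; the functional forms below are invented for illustration. The assumptions: survival at birth falls as neonatal head size grows (here taken to scale with adult brain size R times gestation length T), while survival through infancy rises with maturity at birth and with parental intelligence. Under these made-up forms, smarter parents favor earlier births, which is the direction of the feedback loop described above:

```python
import math

def birth_survival(R, T):
    # Surviving birth gets harder as head size at birth (~ R * T) grows.
    return math.exp(-R * T)

def care_survival(R, T):
    # Helplessness at birth (1 - T) must be compensated by parental
    # intelligence R; smarter parents keep immature infants alive.
    return 1 - (1 - T) * math.exp(-R)

def fitness(R, T):
    return birth_survival(R, T) * care_survival(R, T)

def best_gestation(R):
    # Crude grid search over relative gestation lengths 0.1 .. 1.0.
    Ts = [i / 10 for i in range(1, 11)]
    return max(Ts, key=lambda T: fitness(R, T))

# More intelligent parents (larger R) can afford less mature newborns:
assert best_gestation(0.5) > best_gestation(3.0)
```

Even this caricature shows how two regimes can coexist: with low parental intelligence, full gestation wins; with high parental intelligence, early birth does.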

To test their model’s predictions, the authors also correlated weaning times and intelligence measures across primate species and found a strong correlation. For example, bonobos and chimpanzees have an average weaning time of approximately 1100 days and score highly in standardised intelligence measures. Lemurs, on the other hand, only spend about 100 days with their offspring and score much lower in intelligence. Furthermore, Piantadosi and Kidd also look at the relationship between weaning age and various other physical measures, such as the size of the neocortex, brain volume, and body mass. However, weaning time remains the most reliable predictor in the model.

Piantadosi and Kidd’s model provides a very interesting perspective on how human intelligence could have been the product of a feedback loop between developmental maturity, neonatal head size, and infant care. Such a feedback component could explain the considerable evolutionary change humans have undergone. Yet between the two optima – long birth age with a small brain, and short birth age with a large brain – most populations drift towards the former (see graph 2.A in the paper). It appears that the model cannot explain the original evolutionary pressure for more intelligence that pushed humans over the edge: if early humans encountered an increased number of early births, why did those populations not simply die out, instead of taking the relatively costly route of becoming more intelligent? Only once there was a pressure towards more intelligence could humans have been pushed into the self-reinforcing cycle of low birth age and high parental intelligence, a cycle that drove humans towards much higher intelligence than they would have developed otherwise. Even if the account falls short of an ultimate explanation (i.e. why a certain feature evolved, the reason), Piantadosi and Kidd have described an interesting proximate explanation (i.e. how a feature evolved, the mechanism).

Because the data are correlational in nature, the reverse hypothesis might also hold: humans might be more intelligent because they spend more time interacting with their caregivers. In fact, a considerable amount of infants’ experience is modulated by their caregivers, and this unique experience might also support a strongly embodied perspective on the emergence of social signals. For example, infants in their early years see a proportionately high number of faces (Fausey et al., 2016). Maybe infants’ long period of dependence is what lets them learn so well from the people around them, allowing for the acquisition of cultural information and a more in-depth understanding of the world. On this view, the longer weaning time makes infants pay much more attention to caregivers, immersing them in a stimulus-rich environment for much longer than other species. Whatever the direction of the connection, I think this kind of research offers a fascinating view of how children develop and what makes us human.

References

Fausey, C. M., Jayaraman, S., & Smith, L. B. (2016). From faces to hands: Changing visual input in the first two years. Cognition, 152, 101–107. doi: 10.1016/j.cognition.2016.03.005
Piantadosi, S. T., & Kidd, C. (2016). Extraordinary intelligence and the care of infants. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1506752113
Thanks to Denis for finding the article.

I know (1) that you think (2) it’s funny, and you know (3) that I know (4) that, too.

A large part of human humour depends on understanding that the intention of the person telling the joke might be different from what they are actually saying. The person needs to tell the joke so that you understand that they’re telling a joke, so they need to know that you know that they do not intend to convey the meaning they are about to utter… Things get even more complicated when we tell each other jokes that involve other people having thoughts and beliefs about yet other people. We call this kind of knowledge nested intentions, or recursive mental attributions. We can already see, based on my complicated description, that this is a serious matter and requires scientific investigation. Fortunately, a recent paper by Dunbar, Launay and Curry (2015) investigated whether the structure of jokes is restricted by the number of nested intentions required to understand them. The authors make a couple of interesting predictions about the mental processing involved in humour, and about how this should be reflected in the structure and funniness of jokes. In today’s blogpost I want to discuss the paper’s methodology and some of its claims.

Continue reading “I know (1) that you think (2) it’s funny, and you know (3) that I know (4) that, too.”

What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it. Continue reading “What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics””

Chomsky, Hockett, Behaviorism and Statistics in Linguistics Theory

Here’s an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, pp. 49–73, 2016), by Gregory Radick (abstract below). Commenting on it at Dan Everett’s FB page, Yorick Wilks observed: “It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer.”

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.

Culture shapes the evolution of cognition

A new paper, by Bill Thompson, Simon Kirby and Kenny Smith, has just appeared which contributes to everyone’s favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to make the claim that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.


Abstract:

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.

Paper: http://www.pnas.org/content/early/2016/03/30/1523631113.full
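To get a feel for the amplification claim, here is a toy iterated-learning simulation. This is my own illustration, not the authors’ actual model: the two hypotheses, the 0.9/0.1 production probabilities, the maximum-a-posteriori choice rule and the 0.6 prior are all assumptions made for the sketch. Each Bayesian learner has only a weak prior bias towards variant A, yet chains of cultural transmission end up converging on A far more often than the prior alone would suggest:

```python
# Toy iterated learning: a weak individual bias towards variant A is
# amplified into a near-universal convention across transmission chains.
import random

random.seed(1)

PRIOR_A = 0.6      # weak innate bias towards variant A (assumed value)
N_UTTERANCES = 4   # utterances each learner observes from its teacher
GENERATIONS = 50
N_CHAINS = 200

def learn(data):
    """Infer a production probability for variant A from observed data.

    Two hypotheses: 'mostly A' (produce A with p=0.9) and 'mostly B'
    (produce A with p=0.1). The learner adopts the maximum-a-posteriori
    hypothesis under its weak prior."""
    n_a = sum(data)
    n_b = len(data) - n_a
    post_a = PRIOR_A * (0.9 ** n_a) * (0.1 ** n_b)
    post_b = (1 - PRIOR_A) * (0.1 ** n_a) * (0.9 ** n_b)
    return 0.9 if post_a >= post_b else 0.1

def chain():
    """One chain of cultural transmission; returns the final p(A)."""
    p_a = 0.5  # the first teacher has no preference
    for _ in range(GENERATIONS):
        data = [random.random() < p_a for _ in range(N_UTTERANCES)]
        p_a = learn(data)
    return p_a

final = [chain() for _ in range(N_CHAINS)]
share_a = sum(p == 0.9 for p in final) / N_CHAINS
print(f"chains converging on variant A: {share_a:.0%}")
```

Note that the amplification in this sketch depends on the learners’ choice rule: posterior-sampling learners would, in the long run, simply reproduce the prior, whereas posterior-maximising learners exaggerate the weak bias. That sensitivity is part of why the interaction between learning, culture and biological evolution is worth modelling explicitly.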

CfP: Interaction and Iconicity in the Evolution of Language

Following the ICLC theme session on “Cognitive Linguistics and the Evolution of Language” last year, I’m guest-editing a Special Issue of the journal Interaction Studies together with Michael Pleyer, James Winters, and Jordan Zlatev. This volume, entitled “Interaction and Iconicity in the Evolution of Language: Converging Perspectives from Cognitive and Evolutionary Linguistics”, will focus on issues that emerged as common themes during the ICLC workshop.

Although many contributors to the theme session have already agreed to submit a paper, we would like to invite a limited number of additional contributions relevant to the topic of the volume. Here’s our Call for Papers.

Continue reading “CfP: Interaction and Iconicity in the Evolution of Language”