CfP: New Directions in Language Evolution Research

Panorama of Tallinn from the sea (Source: https://commons.wikimedia.org/wiki/File%3ATallinnPan.jpg, by Terker, CC-BY-SA 3.0)

Jonas Nölle, Peeter Tinits and I are going to submit a workshop proposal to next year’s Annual Meeting of the Societas Linguistica Europaea (SLE), which will be held in Tallinn from August 29th to September 1st, 2018. We thought this would be a nice opportunity to bring evolutionary linguistics to SLE – and also a good opportunity to discuss novel and innovative approaches to language evolution in a condensed workshop setting.

Please note that there will be – as usual at SLE – a three-step selection process:

Step 1: You submit a 300-word abstract to us (the organizers: newdir.langev@gmail.com) by November 10th. We then select up to 12 papers that we include in our workshop proposal. As we want the “New directions” in our title to be more than a shallow phrase, we will base our selection as much as possible on the innovativeness of the abstracts we receive. If we’re unable to consider your paper for the workshop, there’s still the option to submit to the general session.

Step 2: Our workshop proposal is then reviewed by the scientific committee, and we’ll receive a notification of acceptance or rejection by December 15th. Good news: If you’ve submitted an abstract, there’s nothing for you to do at this point except for keeping your fingers crossed.

Step 3: If the workshop is accepted, we will ask you to submit a 500-word abstract via the conference submission system, which will be peer-reviewed like any general session paper. Notifications of acceptance or rejection can be expected in March 2018.

We’re looking forward to your contributions, and regardless of the outcome of our proposal, we hope to see many of you in Tallinn!

Here’s our CfP, which will also appear on Linguist List and on the official SLE2018 website soon:

Research on language evolution is undoubtedly among the fastest-growing topics in linguistics. This is not a coincidence: While scholars have always been interested in the origins and evolution of language, it is only now that many questions can be addressed empirically, drawing on a wealth of data and a multitude of methodological approaches developed in the different disciplines that try to find answers to what has been called “the hardest problem in science” (Christiansen & Kirby 2003). Importantly, any theory of how language may have emerged requires a solid understanding of how language and other communication systems work. As such, the questions in language evolution research are manifold and interface in multiple ways with key open questions in historical and theoretical linguistics: What exactly makes human language unique compared to animal communication systems? How do cognition, communication and transmission shape grammar? Which factors can explain linguistic diversity? How and why do languages change? To what extent is the structure of language(s) shaped by extra-linguistic, environmental factors?

Over the last 20 years or so, evolutionary linguistics has set out to find answers to these and many more questions. As, e.g., Dediu & De Boer (2016) have noted, the field of language evolution research is currently coming of age, and it has developed a rich toolkit of widely-adopted methods both for comparative research, which investigates the commonalities and differences between human language and animal communication systems, and for studying the cumulative cultural evolution of sign systems in experimental settings, including both computational and behavioral approaches (see e.g. Tallerman & Gibson 2012; Fitch 2017). In addition, large-scale typological studies have gained importance in recent research on language evolution (e.g. Evans 2010).

The goal of this workshop is to discuss innovative theoretical and methodological approaches that go beyond the current state of the art by proposing and empirically testing new hypotheses, by developing new or refining existing methods for the study of language evolution, and/or by reinterpreting the available evidence in the light of innovative theoretical frameworks. In this vein, we aim to bring together researchers from multiple disciplines and theoretical backgrounds to discuss the latest developments in language evolution research. Topics include, but are not limited to,

  • experimental approaches investigating the emergence and/or development of sign systems in frameworks such as experimental semiotics (e.g. Galantucci & Garrod 2010) or artificial language learning (e.g. Kirby et al. 2014);
  • empirical research on non-human communication systems as well as comparative research on animal cognition with respect to its relevance for the evolution of cognitive prerequisites for fully-fledged human language (Kirby 2017);
  • approaches using computational modelling and robotics (Steels 2011) in order to investigate problems like the grounding of symbol systems in non-symbolic representations (Harnad 1990), the emergence of the particular features that make human language unique (Kirby 2017, Smith 2014), or the question to what extent these features are domain-specific, i.e. evolved by natural selection for a specifically linguistic function (Culbertson & Kirby 2016);
  • research that explicitly combines expertise from multiple different disciplines, e.g. typology and neurolinguistics (Bickel et al. 2015); genomics, archaeology, and linguistics (Pakendorf 2014, Theofanopoulou et al. 2017); comparative biology and philosophy of language (Moore 2016); and many more.

If you are interested in participating in the workshop, please send an abstract (c. 300 words) to the organizers (newdir.langev@gmail.com) by November 10th. We will let you know by November 15th if your paper is eligible for the proposed workshop. If our workshop proposal is accepted, you will be required to submit an anonymous abstract of ca. 500 words via the SLE submission system by January 15th. If our proposal is not accepted or if we cannot accommodate your paper in the workshop, you can still submit your abstract as a general session paper.

References

Bickel, Balthasar, Alena Witzlack-Makarevich, Kamal K. Choudhary, Matthias Schlesewsky & Ina Bornkessel-Schlesewsky. 2015. The Neurophysiology of Language Processing Shapes the Evolution of Grammar: Evidence from Case Marking. PLOS ONE 10(8). e0132819.

Christiansen, Morten H. & Simon Kirby. 2003. Language Evolution: The Hardest Problem in Science. In Morten H. Christiansen & Simon Kirby (eds.), Language Evolution, 1–15. (Oxford Studies in the Evolution of Language 3). Oxford: Oxford University Press.

Culbertson, Jennifer & Simon Kirby. 2016. Simplicity and Specificity in Language: Domain-General Biases Have Domain-Specific Effects. Frontiers in Psychology 6. doi:10.3389/fpsyg.2015.01964.

Dediu, Dan & Bart de Boer. 2016. Language evolution needs its own journal. Journal of Language Evolution 1(1). 1–6.

Evans, Nicholas. 2010. Language diversity as a tool for understanding cultural evolution. In Peter J. Richerson & Morten H. Christiansen (eds.), Cultural Evolution: Society, Technology, Language, and Religion, 233–268. Cambridge: MIT Press.

Fitch, W. Tecumseh. 2017. Empirical approaches to the study of language evolution. Psychonomic Bulletin & Review 24(1). 3–33.

Galantucci, Bruno & Simon Garrod. 2010. Experimental Semiotics: A new approach for studying the emergence and the evolution of human communication. Interaction Studies 11(1). 1–13.

Harnad, Stevan. 1990. The symbol grounding problem. Physica D 42. 335–346.

Kirby, Simon, Tom Griffiths & Kenny Smith. 2014. Iterated Learning and the Evolution of Language. Current Opinion in Neurobiology 28. 108–114.

Kirby, Simon. 2017. Culture and biology in the origins of linguistic structure. Psychonomic Bulletin & Review 24(1). 118–137.

Moore, Richard. 2016. Meaning and ostension in great ape gestural communication. Animal Cognition 19(1). 223–231.

Pakendorf, Brigitte. 2014. Coevolution of languages and genes. Current Opinion in Genetics & Development 29. 39–44.

Smith, Andrew D.M. 2014. Models of language evolution and change: Language evolution and change. Wiley Interdisciplinary Reviews: Cognitive Science 5(3). 281–293.

Steels, Luc. 2011. Modeling the Cultural Evolution of Language. Physics of Life Reviews 8. 339–356.

Tallerman, Maggie & Kathleen R. Gibson (eds.). 2012. The Oxford Handbook of Language Evolution. Oxford: Oxford University Press.

Theofanopoulou, Constantina, Simone Gastaldon, Thomas O’Rourke, Bridget D. Samuels, Angela Messner, Pedro Tiago Martins, Francesco Delogu, Saleh Alamri & Cedric Boeckx. 2017. Self-domestication in Homo sapiens: Insights from comparative genomics. PLOS ONE 12(10). e0185306.

Usage context and overspecification

A new issue of the Journal of Language Evolution has just appeared, including a paper by Peeter Tinits, Jonas Nölle, and myself on the influence of usage context on the emergence of overspecification. (It was actually published online a couple of weeks ago already, and an earlier version was included in last year’s Evolang proceedings.) Some of the volunteers who participated in our experiment were actually recruited via Replicated Typo – thanks to everyone who helped us out! Without you, this study wouldn’t have been possible.

I hope that I’ll find time to write a bit more about this paper in the near future, especially about its development, which might itself qualify as an interesting example of cultural evolution. Even though the paper just reports on a tiny experimental case study, addressing a fairly specific phenomenon, we discovered, in the process of writing, that each of the three authors had quite different ideas of how language works, which made the write-up process much more challenging than expected (but arguably also more interesting).

For now, however, I’ll just link to the paper and quote our abstract:

This article investigates the influence of contextual pressures on the evolution of overspecification, i.e. the degree to which communicatively irrelevant meaning dimensions are specified, in an iterated learning setup. To this end, we combine two lines of research: In artificial language learning studies, it has been shown that (miniature) languages adapt to their contexts of use. In experimental pragmatics, it has been shown that referential overspecification in natural language is more likely to occur in contexts in which the communicatively relevant feature dimensions are harder to discern. We test whether similar functional pressures can promote the cumulative growth of referential overspecification in iterated artificial language learning. Participants were trained on an artificial language which they then used to refer to objects. The output of each participant was used as input for the next participant. The initial language was designed such that it did not show any overspecification, but it allowed for overspecification to emerge in 16 out of 32 usage contexts. Between conditions, we manipulated the referential context in which the target items appear, so that the relative visuospatial complexity of the scene would make the communicatively relevant feature dimensions more difficult to discern in one of them. The artificial languages became overspecified more quickly and to a significantly higher degree in this condition, indicating that the trend toward overspecification was stronger in these contexts, as suggested by experimental pragmatics research. These results add further support to the hypothesis that linguistic conventions can be partly determined by usage context and show that experimental pragmatics can be fruitfully combined with artificial language learning to offer valuable insights into the mechanisms involved in the evolution of linguistic phenomena.

In addition to our article, there are also a number of other papers in the new JoLE issue that are well worth a read, including another Iterated Learning paper by Clay Beckner, Janet Pierrehumbert, and Jennifer Hay, who have conducted a follow-up to the seminal Kirby, Cornish & Smith (2008) study. Apart from presenting highly relevant findings, they also make some very interesting methodological points.

Empirical approaches to the study of language evolution (PBR Special Issue)

There is no shortage of special issues on language evolution in the current landscape of academic journals. However, probably none of the three upcoming special issues I know of (or the many more I don’t know of) will match Tecumseh Fitch’s special issue on “Empirical approaches to the study of language evolution” in “Psychonomic Bulletin & Review”, at least in terms of sheer size – by my count, the issue contains no fewer than 36 contributions by 39 mostly very well-known researchers.

The volume starts out with an impressive overview – which also serves as a review paper on recent advances in language evolution research – by Fitch himself. Like some of the other contributions, it is freely available with open access. As all contributions are available as “online first” papers at the moment and have not been assigned to an issue of the journal yet, the references section of the overview is also a good starting point for retrieving the other papers in the special issue.

Some of the papers are response articles to other contributions in the volume, which nicely highlights some key debates and open questions in the field. For example, both David Adger and Dan Bowling react to Simon Kirby’s paper on “Culture and biology in the origins of linguistic structure”. Reviewing a large number of (both computational and behavioral) experiments using the Iterated Learning paradigm, including recent work on Bayesian Iterated Learning, Kirby argues that linguistic structure emerges as sets of behaviors (utterances) are transmitted through an informational bottleneck (the limited data available to the language learner) and the behaviors adapt to better pass through the bottleneck. According to Kirby, “[a]n overarching universal arising from this cultural process is that compressible sets of behaviours pass through the bottleneck more easily. If behaviours also need to be expressive then rich systematic structure appears to be the inevitable result.” Adger, however, argues that expressivity and compressibility are not sufficient to explain the emergence of structure. He points out that the systematicity of human languages is restricted in particular ways and that in the case of some grammatical phenomena, the simplest and most expressive option is logically possible but unattested in the world’s languages. He therefore argues that the human language capacity imposes strong constraints on language development, while the structures of particular languages arise in the way envisaged by the Iterated Learning model.
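To get an intuition for the bottleneck dynamic Kirby describes, here is a toy transmission-chain sketch in Python. It is purely my own illustration: the meaning space, the bottleneck size, and the similarity-based learning rule are invented for this example and are not taken from Kirby’s computational or behavioral studies.

```python
import random

# Meanings are simple feature bundles; signals are arbitrary strings.
MEANINGS = [(shape, color) for shape in "ABC" for color in "xyz"]

def random_language():
    """Initial 'holistic' language: an unrelated signal for every meaning."""
    return {m: "".join(random.choices("ptks", k=3)) for m in MEANINGS}

def learn(observed):
    """The learner memorizes the pairs it saw and generalizes to unseen
    meanings by reusing the signal of the most similar observed meaning
    (a deliberately crude stand-in for inductive generalization)."""
    language = dict(observed)
    for m in MEANINGS:
        if m not in language:
            best = max(observed, key=lambda o: (o[0] == m[0]) + (o[1] == m[1]))
            language[m] = observed[best]
    return language

def transmission_chain(generations=10, bottleneck=5):
    """Each generation only observes `bottleneck` of the 9 meaning-signal pairs."""
    language = random_language()
    for g in range(generations):
        sample = dict(random.sample(list(language.items()), bottleneck))
        language = learn(sample)
        distinct = len(set(language.values()))
        print(f"generation {g}: {distinct} distinct signals for {len(MEANINGS)} meanings")
    return language

if __name__ == "__main__":
    random.seed(1)
    transmission_chain()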

Kirby also discusses the relation between biological and cultural factors in language evolution. Probably the most far-reaching conclusion he draws from Iterated Learning models (in particular, from work by Bill Thompson et al.) is that the language faculty can only contain weak domain-specific constraints, while any hard constraints on the acquisition of language will almost certainly be domain-general. Bowling’s response is targeted at this aspect of Kirby’s theory. While being sympathetic with the emphasis on cultural evolution, he argues that it “fails to leave the nature-nurture dichotomy behind”, as constraints are identified as either cultural or biological. Unfortunately, Bowling doesn’t really have enough space to unfold this argument in more detail in this very short response paper.

A second paper in the special issue that is accompanied by a short commentary is Mark Johnson’s “Marr’s levels and the minimalist program” (preprint). He discusses the question “what kind of simplicity is likely to be most related to the plausibility of an evolutionary event introducing a change to a cognitive system?” Obviously, this question bears important implications for Chomsky’s minimalist theory of language evolution, according to which a single mutation gave rise to the operation Merge, “a simple formal operation that yields the kinds of hierarchical structures found in human languages”. Johnson points out that just because a cognitive system is easy to describe does not necessarily mean that it is evolutionarily plausible. In order to approach the question “What kind of simplicity?”, he takes up David Marr’s levels of analysis of cognitive systems: the implementational level (the “hardware”), the algorithmic level (the representations and data structures involved), and the computational level (the goal(s) of the system; the information it manipulates; the constraints it must satisfy). He suggests that complexity of genomic encoding might be most closely related to complexity at the implementational level. The introduction of Merge, however, is simple only at the computational level, while the changes required at the other two levels could be quite complex. To strengthen the minimalist account of language evolution, then, one would have to either show systematic connections between the three levels, or demonstrate that a simple change to neural architecture can give rise to human language.

In her response paper, Amy Perfors (preprint) basically seconds Johnson’s position. However, she also points out that, from the perspective of Occam’s razor, computational simplicity might nevertheless be an important factor in model selection: “Because the more computationally complex a model or a theory is, the more difficult it is, plausibly, to represent or learn. For those reasons the simplicity of Merge is a theoretical asset when evaluating its cognitive plausibility.”

Kirby’s and Johnson’s papers and the respective responses can of course only give a glimpse of the thematic breadth of the special issue and the diversity of theoretical frameworks represented in the volume. Other topics include, e.g., the architecture of the “language-ready brain”, advances and missed opportunities in comparative research, and the role of different modalities in the evolution of language.


CfP: Interaction and Iconicity in the Evolution of Language

Following the ICLC theme session on “Cognitive Linguistics and the Evolution of Language” last year,  I’m guest-editing a Special Issue of the journal Interaction Studies together with Michael Pleyer, James Winters, and Jordan Zlatev. This volume, entitled “Interaction and Iconicity in the Evolution of Language: Converging Perspectives from Cognitive and Evolutionary Linguistics”, will focus on issues that emerged as common themes during the ICLC workshop.

Although many contributors to the theme session have already agreed to submit a paper, we would like to invite a limited number of additional contributions relevant to the topic of the volume. Here’s our Call for Papers.


Learn an Alien Language!

I’ve set up a little experiment in collaboration with a small armada of co-authors (Jonas Nölle, Peeter Tinits, and Michael Pleyer). Be a pioneer in interstellar communication and try to accomplish an important mission:

http://tsamtrah.bplaced.net/

Many thanks to Thomas Hartmann for programming the online interface and to James Winters for some enormously helpful advice on the design of the experiment.

We’ll keep you posted about the results…

Cognitive Linguistics and the Evolution of Language

On Tuesday, July 21st, this year’s International Cognitive Linguistics Conference will host a theme session on “Cognitive Linguistics and the Evolution of Language” co-organized by three Replicated Typo authors: Michael Pleyer, James Winters, and myself. In addition, two Replicated Typo bloggers are co-authors on papers presented in the theme session.

The general idea of this session goes back to previous work by James and Michael, who promoted the idea of integrating Cognitive Linguistics and language evolution research in several conference talks as well as in a 2014 paper – published, quite fittingly, in a journal called “Theoria et Historia Scientiarum”, as the very idea of combining these frameworks requires some meta-theoretical reflection. As both cognitive and evolutionary linguistics are in themselves quite heterogeneous frameworks, the question arises what we actually mean when we speak of “cognitive” or “evolutionary” linguistics, respectively.

I might come back to this meta-scientific discussion in a later post. For now, I will confine myself to giving a brief overview of the eight talks in our session. The full abstracts can be found here.

In the first talk, Vyv Evans (Bangor) proposes a two-step scenario of the evolution of language, informed by concepts from Cognitive Linguistics in general and Langacker’s Cognitive Grammar in particular:

The first stage, logically, had to be a symbolic reference in what I term a words-to-world direction, bootstrapping extant capacities that Australopithecines, and later ancestral Homo shared with the great apes. But the emergence of a grammatical capacity is also associated with a shift towards a words-to-words direction symbolic reference: words and other grammatical constructions can symbolically refer to other symbolic units.

Roz Frank (Iowa) then outlines “The relevance of a ‘Complex Adaptive Systems’ approach to ‘language’” – note the scare quotes. She argues that “the CAS approach serves to replace older historical linguistic notions of languages as ‘organisms’ and as ‘species’”.

Sabine van der Ham, Hannah Little, Kerem Eryılmaz, and Bart de Boer (Brussels) then talk about two sets of experiments investigating the role of individual learning biases and cultural transmission in shaping language, in a talk entitled “Experimental Evidence on the Emergence of Phonological Structure”.

In the next talk, Seán Roberts and Stephen Levinson (Nijmegen) provide experimental evidence for the hypothesis that “On-line pressures from turn taking constrain the cultural evolution of word order”. Chris Sinha’s talk, entitled “Eco-Evo-Devo: Biocultural synergies in language evolution”, is more theoretical in nature, but no less interesting. Starting from the hypothesis that “many species construct “artefactual” niches, and language itself may be considered as a transcultural component of the species-specific human biocultural niche”, he argues that

Treating language as a biocultural niche yields a new perspective on both the human language capacity and on the evolution of this capacity. It also enables us to understand the significance of language as the symbolic ground of the special subclass of symbolic cognitive artefacts.

Arie Verhagen (Leiden) then discusses the question whether public and private communication are “Stages in the Evolution of Language”. He argues against Tomasello’s idea that ““joint” intentionality emerged first and evolved into what is essentially still its present state, which set the stage for the subsequent evolution of “collective” intentionality” and instead defends the view that

these two kinds of processes and capacities evolved ‘in tandem’: A gradual increase in the role of culture (learned patterns of behaviour) produced differences and thus competition between groups of (proto-)humans, which in turn provided selection pressures for an increased capability and motivation of individuals to engage in collaborative activities with others.

James Winters (Edinburgh) then provides experimental evidence that “Linguistic systems adapt to their contextual niche”, addressing two major questions with the help of an artificial-language communication game:

(i) To what extent does the situational context influence the encoding of features in the linguistic system? (ii) How does the effect of the situational context work its way into the structure of language?

His results “support the general hypothesis that language structure adapts to the situational contexts in which it is learned and used, with short-term strategies for conveying the intended meaning feeding back into long-term, system-wider changes.”

The final talk, entitled “Communicating events using bodily mimesis with and without vocalization”, is co-authored by Jordan Zlatev, Sławomir Wacewicz, Przemysław Żywiczyński, and Joost van de Weijer (Lund/Torun). They introduce an experiment on event communication and discuss to what extent the greater potential for iconic representation in bodily reenactment compared to vocalization might lend support to a “bodily mimesis hypothesis of language origins”.

In the closing session of the workshop, this highly promising array of papers is discussed with one of the “founding fathers” of modern language evolution research, Jim Hurford (Edinburgh).

But that’s not all: Just one coffee break after the theme session, there will be a panel on “Language and Evolution” in the general session of the conference, featuring papers by Gareth Roberts & Maryia Fedzechkina; Jonas Nölle; Carmen Saldana, Simon Kirby & Kenny Smith; Yasamin Motamedi, Kenny Smith, Marieke Schouwstra & Simon Kirby; and Andrew Feeney.

Empty Constructions and the Meaning of “Meaning”

Textbooks are boring. In most cases, they consist of a rather tiring collection of more or less undisputed facts, and they omit the really interesting stuff such as controversial discussions or problematic cases that pose a serious challenge to a specific scientific theory. However, Martin Hilpert’s “Construction Grammar and its Application to English” is an admirable exception since it discusses various potential problems for Construction Grammar at length. What I found particularly interesting was the problem of “meaningless constructions”. In what follows, I will present some examples for such constructions and discuss what they might tell us about the nature of linguistic constructions. First, however, I will outline some basic assumptions of Construction Grammar.

Language as a multimodal phenomenon

The issue of multimodality has become a widely discussed topic in several branches of linguistics and especially in research on the evolution of language. Now, a special issue of the “Philosophical Transactions of the Royal Society B” has been dedicated to “Language as a multimodal phenomenon”. The issue, edited by Gabriella Vigliocco, Pamela Perniss, and David Vinson, features a variety of interesting papers by outstanding scholars from different fields such as gesture research, signed language research, neurolinguistics, and evolutionary linguistics.

For example, Susan Goldin-Meadow discusses “what the manual modality reveals about language, learning and cognition”, arguing that, in child language acquisition, manual gestures “precede, and predict, the acquisition of structures in speech”.

Ulf Liszkowski addresses the question of how infants communicate before they have acquired a language, and Aslı Özyürek reviews neuroscientific findings on “Hearing and seeing meaning in speech and gesture”. Jeremy Skipper discusses “how auditory cortex hears context during speech perception”, and Stephen Levinson and Judith Holler, in a paper entitled “The origin of human multi-modal communication”, talk about “the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins.”

Martin Sereno, in his opinion piece on the “Origin of  symbol-using systems”, argues that we have to distinguish “the origin of a system capable of evolution from the subsequent evolution that system becomes capable of”. According to Sereno,

“Human language arose on a substrate of a system already capable of Darwinian evolution; the genetically supported uniquely human ability to learn a language reflects a key contact point between Darwinian evolution and language. Though implemented in brains generated by DNA symbols coding for protein meaning, the second higher-level symbol-using system of language now operates in a world mostly decoupled from Darwinian evolutionary constraints.”

Padraic Monaghan, Richard C. Shillcock, Morten H. Christiansen, and Simon Kirby address the question “How arbitrary is language?” Drawing on a large-scale corpus analysis, they show that

“sound–meaning mappings are more systematic than would be expected by chance. Furthermore, this systematicity is more pronounced for words involved in the early stages of language acquisition and reduces in later vocabulary development.”

Mutsumi Imai and Sotaro Kita propose a “sound symbolism bootstrapping hypothesis for language acquisition and language evolution”, arguing that “sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation” and that sound symbolism might be deeply related to language evolution.

Karen Emmorey discusses the role of iconicity in sign language grammar and processing, and in the final paper, Pamela Perniss and Gabriella Vigliocco argue that “iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing.”

The special issue is available here. Some of the papers are open access, and all others can be accessed freely until October 19th (user name: language; password: tb1651 – since this information was distributed by the Royal Society via several mailing lists, I guess I’m free to share it here).


Vyv Evans: The Human Meaning-Making Engine

If you read my last post here at Replicated Typo to the very end, you may remember that I promised to recommend a book and to return to one of the topics of this previous post. I won’t do this today, but I promise I will catch up on it in due time.

What I just did – promising something – is a nice example of one of the two functions of language which Vyvyan Evans from Bangor University distinguished in his talk on “The Human Meaning-Making Engine” yesterday at the UK Cognitive Linguistics Conference. More specifically, the act of promising is an example of the interactive function of language, which is of course closely intertwined with its symbolic function. Evans proposed two different sources for these two functions. The interactive function, he argued, arises from the human instinct for cooperation, whereas meaning arises from the interaction between the linguistic and the conceptual system. While language provides the “How” of meaning-making, the conceptual system provides the “What”. Evans used some vivid examples (e.g. this cartoon exemplifying nonverbal communication) to make clear that communication is not contingent on language. However, “language massively amplifies our communicative potential.” The linguistic system, he argued, has evolved as an executive control system for the conceptual system. While the latter is broadly comparable with that of other animals, especially great apes, the linguistic system is uniquely human. What makes it unique, however, is not the ability to refer to things in the world, which can arguably be found in other animals as well. What is uniquely human, he argued, is the ability to symbolically refer in a sign-to-sign (word-to-word) direction rather than “just” in a sign-to-world (word-to-world) direction. Evans illustrated this “word-to-word” direction with Hans-Jörg Schmid’s (e.g. 2000; see also here) work on “shell nouns”, i.e. nouns “used in texts to refer to other passages of the text and to reify them and characterize them in certain ways.” For instance, the stuff I was talking about in the last paragraph would be an example of a shell noun.

According to Evans, the “word-to-word” direction is crucial for the emergence of e.g. lexical categories and syntax, i.e. the “closed-class” system of language. Grammaticalization studies indicate that the “open-class” system of human languages is evolutionarily older than the “closed-class” system, which comprises grammatical constructions (in the broadest sense). However, Evans also emphasized that there is a lot of meaning even in closed-class constructions, as e.g. Adele Goldberg’s work on argument structure constructions shows: We can make sense of a sentence like “Someone somethinged something to someone” although the open-class items are left unspecified.

Constructions, he argued, index or cue simulations, i.e. re-activations of body-based states stored in cortical and subcortical brain regions. He discussed this with the example of the cognitive model for Wales: We know that Wales is a geographical entity. Furthermore, we know that “there are lots of sheep, that the Welsh play Rugby, and that they dress in a funny way.” (Sorry, James. Sorry, Sean.) Oh, and “when you’re in Wales, you shouldn’t say, It’s really nice to be in England, because you will be lynched.”

On a more serious note, the cognitive models connected to closed-class constructions, e.g. simple past -ed or progressive -ing, are of course much more abstract but can also be assumed to arise from embodied simulations (cf. e.g. Bergen 2012). But in addition to the cognitive dimension, language of course also has a social and interactive dimension, drawing on the apparently instinctive drive towards cooperative behaviour. Culture (or what Tomasello calls “collective intentionality”) is contingent on this deep instinct which Levinson (2006) calls the “human interaction engine”. Evans’ “meaning-making engine” is the logical continuation of this idea.

Just like Evans’ theory of meaning (LCCM theory), his idea of the “meaning-making engine” is basically an attempt at integrating a broad variety of approaches into a coherent model. This might seem a bit eclectic at first, but it’s definitely not the worst thing to do, given that there is significant conceptual overlap between different theories which, however, tends to be blurred by terminological incongruities. Apart from Deacon’s (1997) “Symbolic Species” and Tomasello’s work on shared and joint intentionality, which he explicitly discussed, he draws on various ideas that play a key role in Cognitive Linguistics. For example, the distinction between open- and closed-class systems features prominently in Talmy’s (2000) Cognitive Semantics, as does the notion of the human conceptual system. The idea of meaning as conceptualization and embodied simulation of course goes back to the groundbreaking work of, among others, Lakoff (1987) and Langacker (1987, 1991), although empirical support for this hypothesis has been gathered only recently in the framework of experimental semantics (cf. Matlock & Winter forthc. – if you have an account at academia.edu, you can read this paper here). All in all, then, Evans’ approach might prove an important further step towards integrating Cognitive Linguistics and language evolution research, as has been proposed by Michael and James in a variety of talks and papers (see e.g. here).

Needless to say, it’s impossible to judge from a necessarily fairly sketchy conference presentation whether this model qualifies as an appropriate and comprehensive account of the emergence of meaning. But it definitely looks promising, and I’m looking forward to Evans’ book-length treatment of the topics he touched upon in his talk. For now, we have to content ourselves with his abstract from the conference booklet:

In his landmark work, The Symbolic Species (1997), cognitive neurobiologist Terrence Deacon argues that human intelligence was achieved by our forebears crossing what he terms the “symbolic threshold”. Language, he argues, goes beyond the communicative systems of other species by moving from indexical reference – relations between vocalisations and objects/events in the world — to symbolic reference — the ability to develop relationships between words — paving the way for syntax. But something is still missing from this picture. In this talk, I argue that symbolic reference (in Deacon’s terms), was made possible by parametric knowledge: lexical units have a type of meaning, quite schematic in nature, that is independent of the objects/entities in the world that words refer to. I sketch this notion of parametric knowledge, with detailed examples. I also consider the interactional intelligence that must have arisen in ancestral humans, paving the way for parametric knowledge to arise. And, I also consider changes to the primate brain-plan that must have co-evolved with this new type of knowledge, enabling modern Homo sapiens to become so smart.


References

Bergen, Benjamin K. (2012): Louder than Words. The New Science of How the Mind Makes Meaning. New York: Basic Books.

Deacon, Terrence W. (1997): The Symbolic Species. The Co-Evolution of Language and the Brain. New York, London: Norton.

Lakoff, George (1987): Women, Fire, and Dangerous Things. What Categories Reveal about the Mind. Chicago: The University of Chicago Press.

Langacker, Ronald W. (1987): Foundations of Cognitive Grammar. Vol. 1. Theoretical Prerequisites. Stanford: Stanford University Press.

Langacker, Ronald W. (1991): Foundations of Cognitive Grammar. Vol. 2. Descriptive Application. Stanford: Stanford University Press.

Levinson, Stephen C. (2006): On the Human “Interaction Engine”. In: Enfield, Nick J.; Levinson, Stephen C. (eds.): Roots of Human Sociality. Culture, Cognition and Interaction. Oxford: Berg, 39–69.

Matlock, Teenie & Winter, Bodo (forthc): Experimental Semantics. In: Heine, Bernd; Narrog, Heiko (eds.): The Oxford Handbook of Linguistic Analysis. 2nd ed. Oxford: Oxford University Press.

Schmid, Hans-Jörg (2000): English Abstract Nouns as Conceptual Shells. From Corpus to Cognition. Berlin, New York: De Gruyter (Topics in English Linguistics, 34).

Talmy, Leonard (2000): Toward a Cognitive Semantics. 2 vol. Cambridge, Mass: MIT Press.


Why Disagree? Some Critical Remarks on the Integration Hypothesis of Human Language Evolution

Shigeru Miyagawa, Shiro Ojima, Robert Berwick and Kazuo Okanoya have recently published a new paper in Frontiers in Psychology, which can be seen as a follow-up to the 2013 Frontiers paper by Miyagawa, Berwick and Okanoya (see Hannah’s post on this paper). While the earlier paper introduced what they call the “Integration Hypothesis of Human Language Evolution”, the follow-up paper seeks to provide empirical evidence for this theory and discusses potential challenges to the Integration Hypothesis.

The basic idea of the Integration Hypothesis, in a nutshell, is this: “All human language sentences are composed of two meaning layers” (Miyagawa et al. 2013: 2), namely “E” (for “expressive”) and “L” (for “lexical”). For example, sentences like “John eats a pizza”, “John ate a pizza”, and “Did John eat a pizza?” are supposed to have the same lexical meaning, but they vary in their expressive meaning. Miyagawa et al. point to some parallels between expressive structure and birdsong on the one hand and lexical structure and the alarm calls of non-human primates on the other. More specifically, “birdsongs have syntax without meaning” (Miyagawa et al. 2014: 2), whereas alarm calls consist of “isolated uttered units that correlate with real-world references” (ibid.). Importantly, however, even in human language, the Expression Structure (ES) only admits one layer of hierarchical structure, while the Lexical Structure (LS) does not admit any hierarchical structure at all (Miyagawa et al. 2013: 4). The unbounded hierarchical structure of human language (“discrete infinity”) comes about through recursive combination of both types of structure.

This is an interesting hypothesis (“interesting” being a convenient euphemism for “well, perhaps not that interesting after all”). Let’s have a closer look at the evidence brought forward for this theory.

Miyagawa et al. “focus on the structures found in human language” (Miyagawa et al. 2014: 1), particularly emphasizing the syntactic structure of sentences and the internal structure of words. In a sentence like “Did John eat pasta?”, the lexical items John, eat, and pasta constitute the LS, while the auxiliary do, being a functional element, is seen as belonging to the expressive layer. In a more complex sentence like “John read the book that Mary wrote”, the VP and NP nodes are allocated to the lexical layer, while the DP and CP nodes are allocated to the expressive layer.

Fig. 9 from Miyagawa et al. (2013) (= Fig. 3 in the 2014 paper), illustrating how unbounded hierarchical structure emerges from recursive combination of E- and L-level structures

As pointed out above, LS elements cannot directly combine with each other according to Miyagawa et al. (the ungrammaticality of e.g. John book and want eat pizza is taken as evidence for this), while ES is restricted to one layer of hierarchical structure. Discrete infinity then arises through recursive application of two rules:

(i) EP →  E LP
(ii) LP → L EP
Rule (i) states that the E category can combine with LP to form an E-level structure. Rule (ii) states that the L category can combine with an E-level structure to form an L-level structure. Together, these two rules suffice to yield arbitrarily deep hierarchical structures.
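To see how the alternation works in practice, here is a minimal sketch in Python (my own illustration, not code from Miyagawa et al.; the bracket notation and the depth parameter are purely expository devices) of how repeated application of rules (i) and (ii) yields ever deeper structures:

```python
def expand_EP(depth):
    """Rule (i): EP -> E LP. At depth 0 we stop expanding and return a bare head."""
    if depth == 0:
        return "E"
    return ["E", expand_LP(depth - 1)]

def expand_LP(depth):
    """Rule (ii): LP -> L EP. At depth 0 we stop expanding and return a bare head."""
    if depth == 0:
        return "L"
    return ["L", expand_EP(depth - 1)]

if __name__ == "__main__":
    for d in range(4):
        print(d, expand_LP(d))
    # 0 L
    # 1 ['L', 'E']
    # 2 ['L', ['E', 'L']]
    # 3 ['L', ['E', ['L', 'E']]]
```

In this sketch, each individual step only glues a head onto one phrase; the unbounded depth comes entirely from the two rules feeding each other, which is the point Miyagawa et al. make about discrete infinity.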

The alternation between lexical and expressive elements, as exemplified in Figure (3) from the 2014 paper (= Figure 9 from the 2013 paper, reproduced above), is thus essential to their theory since they argue that “inside E and L we only find finite-state processes” (Miyagawa et al. 2014: 3). Several phenomena, most notably Agreement and Movement, are explained as “linking elements” between lexical and functional heads (cf. also Miyagawa 2010). A large proportion of the 2014 paper is therefore dedicated to phenomena that seem to argue against this hypothesis.

For example, word-formation patterns that can be applied recursively seem to provide a challenge for the theory, cf. example (4) in the 2014 paper:

(4) a. [anti-missile]
b. [anti-[anti-missile]missile] missile

The ostensible point is that this formation can involve center embedding, which would constitute a non-finite state construction.

However, they propose a different explanation:

When anti– combines with a noun such as missile, the sequence anti-missile is a modifier that would modify a noun with this property, thus, [anti-missile]-missile,  [anti-missile]-defense. Each successive expansion forms via strict adjacency, (…) without the need to posit a center embedding, non-regular grammar.

Similarly, reduplication is re-interpreted as a finite state process. Furthermore, they discuss N+N compounds, which seem to violate “the assumption that L items cannot combine directly — any combination requires intervention from E.” However, they argue that the existence of linking elements in some languages provides evidence “that some E element does occur between the two L’s”. Their example is German Blume-n-wiese ‘flower meadow’; others include Freundeskreis ‘circle of friends’ or Schweinshaxe ‘pork knuckle’. It is commonly assumed that linking elements arose from grammatical markers such as genitive -s, e.g. Königswürde ‘royal dignity’ (from des Königs Würde ‘the king’s dignity’). In this example, the origin of the linking element is still transparent. The -es- in Freundeskreis, by contrast, is an example of a so-called unparadigmatic linking element since it literally translates to ‘circle of a friend’. In this case as well as in many others, the linking element cannot be traced back directly to a grammatical affix. Instead, it seems plausible to assume that the former inflectional suffix was reanalyzed as a linking element from the paradigmatic cases and subsequently used in other compounds as well.

To be sure, the historical genesis of German linking elements doesn’t shed much light on their function in present-day German, which is subject to considerable debate. Keeping in mind that these items evolved gradually, however, raises the question of how the E and L layers of compounds were linked in earlier stages of German (or any other language that has linking elements). In addition, there are many German compounds without a linking element, and in other languages such as English, “linked” compounds like craft-s-man are the exception rather than the rule. Miyagawa et al.’s solution seems a bit too easy to me: “In the case of teacup, where there is no overt linker, we surmise that a phonologically null element occurs in that position.”

As an empiricist, I am of course very skeptical of any kind of null element. One could possibly rescue their argument by adopting concepts from Construction Grammar and assigning E status to the morphological schema [N+N], regardless of the presence or absence of a linking element, but then again, from a Construction Grammar point of view, assuming a fundamental dichotomy between E and L structures doesn’t make much sense in the first place. That said, I must concede that the E vs. L distinction reflects basic properties of language that play a role in any linguistic theory, but especially in Construction Grammar and in Cognitive Linguistics. On the one hand, it reflects the rough distinction between “open-class” and “closed-class” items, which plays a key role in Talmy’s (2000) Cognitive Semantics and in the grammaticalization literature (cf. e.g. Hopper & Traugott 2003). As many grammaticalization studies have shown, most if not all closed-class items are “fossils” of open-class items. The abstract concepts they encode (e.g. tense or modality) are highly relevant to our everyday experience and, consequently, to our communication, which is why they got grammaticalized in the first place. As Rose (1973: 516) put it, there is no need for a word-formation affix deriving denominal verbs meaning “grasp NOUN in the left hand and shake vigorously while standing on the right foot in a 2 ½ gallon galvanized pail of corn-meal-mush”. But again, being aware of the historical emergence of these elements raises the question of whether a principled distinction between the meanings of open-class vs. closed-class elements is warranted.

On the other hand, the E vs. L distinction captures the fundamental insight that languages pair form with meaning. Although they are explicitly talking about the “duality of semantics”, Miyagawa et al. frequently allude to formal properties of language, e.g. by linking up syntactic structures with the E layer:

The expression layer is similar to birdsongs; birdsongs have specific patterns, but they do not contain words, so that birdsongs have syntax without meaning (Berwick et al., 2012), thus it is of the E type.

While the “expression” layer thus seems to account for syntactic and morphological structures, which are traditionally regarded as purely “formal” and meaningless, the “lexical” layer captures the referential function of linguistic units, i.e. their “meaning”. But what is meaning, actually? The LS as conceptualized by Miyagawa et al. only covers the truth-conditional meaning of sentences, or their “conceptual content”, as Langacker (2008) calls it. From a usage-based perspective, however, “an expression’s meaning consists of more than conceptual content – equally important to linguistic semantics is how that content is shaped and construed.” (Langacker 2002: xv) According to the Integration Hypothesis, this “construal” aspect is taken care of by closed-class items belonging to the E layer. However, the division of labor envisaged here seems highly idealized. For example, tense and modality can be expressed using open-class (lexical) items and/or relying on contextual inference, e.g. German Ich gehe morgen ins Kino ‘I go to the cinema tomorrow’.

It is a truism that languages are inherently dynamic, exhibiting a great deal of synchronic variation and diachronic change. Given this dynamicity, it seems hard to defend the hypothesis that a fundamental distinction between E and L structures which cannot combine directly can be found universally in the languages of the world (which is what Miyagawa et al. presuppose). We have already seen that in the case of compounds, Miyagawa et al. have to resort to null elements in order to uphold their hypothesis. Furthermore, it seems highly likely that some of the “impossible lexical structures” mentioned as evidence for the non-combinability hypothesis are grammatical at least in some creole languages (e.g. John book, want eat pizza).

In addition, it seems somewhat odd that E- and L-level structures as “relics” of evolutionarily earlier forms of communication are sought (and expected to be found) in present-day languages, which have been subject to millennia of development. This wouldn’t be a problem if the authors were not dealing with meaning, which is not only particularly prone to change and variation, but also highly flexible and context-dependent. But even if we assume that the existence of E-layer elements such as affixes and other closed-class items draws on innate dispositions, it seems highly speculative to link the E layer with birdsong and the L layer with primate calls on semantic grounds.

The idea that human language combines features of birdsong with features of primate alarm calls is certainly not too far-fetched, but the way this hypothesis is defended in the two papers discussed here seems strangely halfhearted and, all in all, quite unconvincing. What is announced as “providing empirical evidence” turns out to be a mostly introspective discussion of made-up English example sentences, and if the English examples aren’t convincing enough, the next best language (e.g. German) is consulted. (To be fair, in his monograph, Miyagawa (2010) takes a broader variety of languages into account.) In addition, much of the discussion is purely theory-internal and thus reminiscent of what James has so appropriately called “Procrustean Linguistics”.

To their credit, Miyagawa et al. do not rely exclusively on theory-driven analyses of made-up sentences but also take some comparative and neurological studies into account. Thus, the Integration Hypothesis – quite unlike the “Mystery” paper (Hauser et al. 2014) co-authored by Berwick and published in, you guessed it, Frontiers in Psychology (and insightfully discussed by Sean) – might be seen as a tentative step towards bridging the gap pointed out by Sverker Johansson in his contribution to the “Perspectives on Evolang” section in this year’s Evolang proceedings:

A deeper divide has been lurking for some years, and surfaced in earnest in Kyoto 2012: that between Chomskyan biolinguistics and everybody else. For many years, Chomsky totally dismissed evolutionary linguistics. But in the past decade, Chomsky and his friends have built a parallel effort at elucidating the origins of language under the label ‘biolinguistics’, without really connecting with mainstream Evolang, either intellectually or culturally. We have here a Kuhnian incommensurability problem, with contradictory views of the nature of language.

On the other hand, one could also see the Integration Hypothesis as deepening the gap since it draws entirely on generative (or “biolinguistic”) assumptions about the nature of language which are not backed by independent empirical evidence. Therefore, to conclusively support the Integration Hypothesis, much more evidence from many different fields would be necessary, and the theoretical assumptions it draws on would have to be scrutinized on empirical grounds as well.

References

Hauser, Marc D.; Yang, Charles; Berwick, Robert C.; Tattersall, Ian; Ryan, Michael J.; Watumull, Jeffrey; Chomsky, Noam; Lewontin, Richard C. (2014): The Mystery of Language Evolution. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2014.00401

Hopper, Paul J.; Traugott, Elizabeth Closs (2003): Grammaticalization. 2nd ed. Cambridge: Cambridge University Press.

Johansson, Sverker (2014): Perspectives on Evolang. In: Cartmill, Erica A.; Roberts, Seán; Lyn, Heidi; Cornish, Hannah (eds.): The Evolution of Language. Proceedings of the 10th International Conference. Singapore: World Scientific, 14.

Langacker, Ronald W. (2002): Concept, Image, and Symbol. The Cognitive Basis of Grammar. 2nd ed. Berlin, New York: De Gruyter (Cognitive Linguistics Research, 1).

Langacker, Ronald W. (2008): Cognitive Grammar. A Basic Introduction. Oxford: Oxford University Press.

Miyagawa, Shigeru (2010): Why Agree? Why Move? Unifying Agreement-Based and Discourse-Configurational Languages. Cambridge: MIT Press (Linguistic Inquiry, Monographs, 54).

Miyagawa, Shigeru; Berwick, Robert C.; Okanoya, Kazuo (2013): The Emergence of Hierarchical Structure in Human Language. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2013.00071

Miyagawa, Shigeru; Ojima, Shiro; Berwick, Robert C.; Okanoya, Kazuo (2014): The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages. In: Frontiers in Psychology 5. doi: 10.3389/fpsyg.2014.00564

Rose, James H. (1973): Principled Limitations on Productivity in Denominal Verbs. In: Foundations of Language 10, 509–526.

Talmy, Leonard (2000): Toward a Cognitive Semantics. 2 vol. Cambridge, Mass: MIT Press.

P.S.: After writing three posts in a row in which I criticized all kinds of studies and papers, I hereby promise that in my next post, I will thoroughly recommend a book and return to a question raised only in passing in this post. [*suspenseful cliffhanger music*]