Color term salience and cultural evolution

The most salient colors (black, white, and perhaps red) are named in all languages; the least salient of the set are named in fewer languages. Salience correlates with earliness of introduction.

David G. Hays, Enid Margolis, Raoul Naroll, Dale Revere Perkins, Color Term Salience. American Anthropologist, 74:1107-1121, 1972. DOI: 10.1525/aa.1972.74.5.02a00050

Abstract: Eleven focal colors are named by basic color terms in many languages. The most salient colors (black, white, and perhaps red) are named in all languages; the least salient of the set are named in fewer languages. Salience correlates with earliness of introduction, as measured by a scale of social evolution; with brevity of expression, as measured by phonemic length of basic color terms; with frequency of use, as measured by frequency of basic color terms in literary languages; and with frequency of mention in ethnographic literature. None of these correlations are established in the pioneer study of Berlin and Kay (1969), a study whose defects are well exposed by Durbin (1972) and Wescott (1970). The first two were documented respectively in Naroll (1970) and Durbin (1972); the last two are documented here. These four correlations independently support the Berlin-Kay color salience theory. They furnish a sound basis for further research on color term salience in particular and indeed on salience phenomena in general. We speculate that salience may be an important general principle of cultural evolution.

Consider this finding: “Salience correlates with earliness of introduction, as measured by a scale of social evolution.” In other words, less complex societies (as measured by one of the standard indices, Marsh’s social complexity scale) have fewer basic color terms than more complex ones. Why?
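
To make the notion of “correlates” concrete, here is a minimal sketch of that kind of analysis with invented numbers (not the 1972 data), assuming SciPy is available: a rank correlation between a social-complexity score and a count of basic color terms.

```python
# Toy illustration: does a society's complexity score track its number of
# basic color terms? (Invented numbers, not Hays et al.'s data.)
from scipy.stats import spearmanr

# (social-complexity index, number of basic color terms) for ten hypothetical societies
societies = [(1, 2), (2, 3), (2, 4), (3, 3), (4, 5),
             (5, 6), (6, 5), (7, 8), (8, 10), (9, 11)]
complexity, color_terms = zip(*societies)

rho, p = spearmanr(complexity, color_terms)
print(f"Spearman's rho = {rho:.2f} (p = {p:.4f})")
```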

The Measurement of Cultural Evolution in the Non-Literate World

Back in the mid-1990s the late David Hays reviewed and synthesized several decades of work on cultural complexity in non-literate societies. That review is now available on the web.

At the time of his death in 1995, my teacher, David G. Hays, had just completed The Measurement of Cultural Evolution in the Non-Literate World, a review and synthesis of cross-cultural work on cultural complexity. His widow, Janet Hays, undertook to publish the book in CD-ROM form. A couple of months before she died last year, Janet gave me permission to distribute the book in whatever way seemed appropriate.

I have decided to make the book available at my Academia.edu page, but I am open to other suggestions. The book consists of a PDF of the text, an XLSX file of the data, and a PDF of a brief Read Me document, as follows:

The Measurement of Cultural Evolution in the Non-Literate World (PDF): https://www.academia.edu/37163326/The_Measurement_of_Cultural_Evolution_in_the_Non-Literate_World

Bounds (XLSX), spreadsheet for the book: https://www.academia.edu/37163325/BOUNDS.xlsx

About the book (PDF): https://www.academia.edu/37163327/About_the_book

* * * * *

Preface

David G. Hays

Whether there can be a science of human life was a question in the air of the Center for Advanced Study in the Behavioral Sciences in 1954, where Raoul Naroll and I met. After forty years, the question has been answered only in part. In his last book, The Moral Order, Naroll began to sum up his life’s work on the human condition. That book convinced me, for the first time, that some of the findings of social science have the same kind of validity as findings in physics or biology. Naroll planned more books, but they will not be written.

As a science, anthropology needs methods of measurement that can be applied across all cultures. Of the qualities of culture that need measurement, evolutionary variation stands out. Naroll had already begun work on his Index of Social Development when he came to the Center in 1954, and published it in 1956. Other scales were published in the next few years by anthropologists and sociologists. Nevertheless, some anthropologists still assert that evolution is unmeasurable.

In ethnography, sociology, and archeology, the study of social and cultural evolution continues, and controversies abound. The welfare of groups within industrial countries, and the welfare of all the world outside the industrial sphere, depends on a clear understanding of evolution. The measurement of cultural evolution is an urgent practical matter as well as a necessity for theory builders.

Naroll’s next book would have been called Painful Progress. That evolution is progressive was his credo, and he believed that he could justify that belief, as he wanted to justify all his beliefs, by presenting the right numbers in the right analytic framework. The history of humanity on Earth is full of pain, far more pain than historians generally admit in their books for general readers. Naroll believed that progress is the compensation we receive for the pain we cannot escape. Today the concept of progress is under attack. In other places, I offer an argument in support of Naroll’s position, but here I deal only in the technical issues of measurement. Whether there is progress, decline, or neither in cultural evolution may be argued, but only after accurate measurement reveals the facts.

The story of cultural evolution is the story of human history, most of it unwritten; a full treatment of the subject is, roughly speaking, a complete textbook of anthropology. Naroll might have put a full treatment in Painful Progress, but that is not my intention here. The present book is a tract in methodology: What are the traits and variables that indicate the level of any culture? How can measures of individual qualities be combined into a single measure of cultural evolution? Answering these questions is the body of the present work. Although I was inspired by Naroll’s work, as was the whole field, I draw on a wide range of sources for concepts and data. In appendices, I review Naroll’s improvements in technique: How to determine the extent of a single “culture,” how to take into account the similarity of neighboring cultures, how to draw a sample of cultures, how to control for variations in the quality of the data that the anthropologist can draw on in making comparisons, and how to justify the inference of historical change from the study of groups known each only at a single date. Where others have gone beyond him, and where my views differ from his, I take note.

Anthropology has been, mostly, the study of the non-literate world. In a long collaboration with William L. Benzon, I have written about the evolution of culture on up to the present day. We find qualitative differences that make it seem natural to me to limit the scope of this book to the customary scope of anthropology. Several of the sources that I draw on included in their samples such cultures as Athens and Rome, or Bulgarian peasants; even a few industrial cultures turn up. To deal properly with evolution after the invention of writing would require the introduction of additional variables; in the end, this is a book about the non-literate world. In Chapter 20, I show some of the deleterious effects of mingling literate and nonliterate cultures in the same study, as Naroll and others have done. The design of scales to measure cultural evolution in literate cultures remains a task for the future.

Every culture is a natural experiment. The experimenters are the bearers of the culture; they cannot know in advance what the outcome will be, just as we today cannot be sure of the effects of our own inventions, technological or social. That some experiments produce situations in which further evolutionary steps can be taken, and some do not, tells us nothing about the intelligence or merit of the experimenters. A culture of high evolutionary level is a valuable possession, but does not prove inherent worth. The study of cultural evolution is altogether compatible with the belief “that all men [and women] are created equal.”

The most important point to remember in the study of cultural evolution is perhaps this: That the evolution of culture is absolutely not predicated on the evolution of biological traits. The minds of culture bearers must certainly be different at different evolutionary levels, as they are different across cultures of the same level. But the brains of all humanity are biologically similar, as best we know, over all Earth and over 25,000 to 250,000 years. No racist conclusions can be drawn from cultural-evolutionary facts. Indeed, the methods of measurement that I describe here would be nonsensical if the variations observed were biological; human uniformity is the working premiss of the art.

The principal contribution of this book is, I should suppose, the collection of profiles in Appendix F. In my judgment, these profiles are more informative than any of the scales on which they are constructed. Research on the correlates of cultural evolution should be more valid if it uses these profiles to estimate the level of each unit (culture, society) studied. Students beginning to read about cultures other than their own can orient themselves by examining the profile of each culture they encounter: The general level, and the differences among such aspects as governance (the polity), social stratification (class), and expressive culture (religion), will help in the interpretation of ethnographic writings.

In addition, methodological review demonstrates a number of shortcomings in prior work that require remedy. Some aspects of culture have been measured with adequate precision for some units, but no aspect has been measured adequately for all the units that will be drawn in future samples, and some aspects have not been measured adequately at all. Chapter 23 contains some suggestions.

CfP: Construal and language dynamics (ICLC-15 workshop proposal)

What do we mean when we talk about the “cognitive foundations” of language or the “cognitive principles” behind linguistic phenomena? And how can we tap into the cognitive underpinnings of language? These questions lie at the heart of a workshop that Michael Pleyer and I are going to propose for the next International Cognitive Linguistics Conference. Here’s our Call for Papers:

Construal and language dynamics: Interdisciplinary and cross-linguistic perspectives on linguistic conceptualization
– Workshop proposal for the 15th International Cognitive Linguistics Conference, Nishinomiya, Japan, August 6–11, 2019 –

Convenors:
Stefan Hartmann, University of Bamberg
Michael Pleyer, University of Koblenz-Landau

The concept of construal has become a key notion in many theories within the broader framework of Cognitive Linguistics. It lies at the heart of Langacker’s (1987, 1991, 2008) Cognitive Grammar, but it also plays a key role in Croft’s (2012) account of verbal argument structure as well as in the emerging framework of experimental semantics (Bergen 2012; Matlock & Winter 2015). Indirectly it also figures in Talmy’s (2000) theory of cognitive semantics, especially in his “imaging systems” approach (see e.g. Verhagen 2007).

According to Langacker (2015: 120), “[c]onstrual is our ability to conceive and portray the same situation in alternate ways.” From the perspective of Cognitive Grammar, an expression’s meaning consists of conceptual content – which can, in principle, be captured in truth-conditional terms – and its construal, which encompasses aspects such as perspective, specificity, prominence, and dynamicity. Croft & Cruse (2004) summarize the construal operations proposed in previous research, arriving at more than 20 linguistic construal operations that are seen as instances of general cognitive processes.

Given the “quantitative turn” in Cognitive Linguistics (e.g. Janda 2013), the question arises how the theoretical concepts proposed in the foundational works of the framework can be empirically tested and how they can be refined on the basis of empirical findings. Much work in the domains of experimental linguistics and corpus linguistics has established a research cycle whereby hypotheses are generated on the basis of theoretical concepts from Cognitive Linguistics, such as construal operations, and then tested using behavioral and/or corpus-linguistic methods (see e.g. Hilpert 2008; Matlock 2010; Schönefeld 2011; Matlock et al. 2012; Krawczak & Glynn forthc., among many others).

Arguably one of the most important testing grounds for theories of linguistic construal is the domain of language dynamics. Recent years have seen increasing convergence between Cognitive-Linguistic theories on the one hand and theories conceiving of language as a complex adaptive system on the other (Beckner et al. 2009; Frank & Gontier 2010; Fusaroli & Tylén 2012; Pleyer 2017). In this framework, language can be understood as a dynamic system unfolding on the timescales of individual learning, socio-cultural transmission, and biological evolution (Kirby 2012, Enfield 2014). Linguistic construal operations can be seen as important factors shaping the structure of language both on a historical timescale and in ontogenetic development (e.g. Pleyer & Winters 2014).

Empirical studies of language acquisition, language change, and language variation can therefore help us understand the nature of linguistic construal operations and can also contribute to refining theories of linguistic construal. Interdisciplinary and cross-linguistic perspectives can prove particularly insightful in this regard. Findings from cognitive science and developmental psychology can contribute substantially to our understanding of the cognitive principles behind language dynamics. Cross-linguistic comparison can, on the one hand, lead to the discovery of striking similarities across languages that might point to shared underlying cognitive principles (e.g. common pathways of grammaticalization, see e.g. Bybee et al. 1994, or similarities in the domain of metaphorical construal, see Taylor 2003: 140), but it can also safeguard against premature generalizations from findings obtained in one single language to human cognition at large (see e.g. Goschler 2017).

For our proposed workshop, we invite contributions that explicitly connect theoretical approaches to linguistic construal operations with empirical evidence from e.g. corpus linguistics, experimental studies, or typological research. In line with the cross-linguistic outlook of the main conference, we are particularly interested in papers that compare linguistic construals across different languages. Also, we would like to include interdisciplinary perspectives from the behavioural and cognitive sciences.

The topics that can be addressed in the workshop include, but are not limited to,

  • the role of construal operations such as perspectivation and specificity in language production and processing;
  • the acquisition and diachronic change of linguistic categories;
  • the question of whether individual construal operations that have been proposed in the literature are cognitively realistic (see e.g. Broccias & Hollmann 2007) and whether they can be tested empirically;
  • the refinement of construal-related concepts such as “salience” or “prominence” based on empirical findings (see e.g. Schmid & Günther 2016);
  • the relationship between linguistic construal operations and domain-general cognitive processes;
  • the relationship between empirical observations and the conclusions we draw from them about the organization of the human mind, including the viability of concepts such as the “corpus-to-cognition” principle (see e.g. Arppe et al. 2010) or the mapping of behavioral findings to cognitive processes.

Please send a short abstract (max. 1 page excl. references) and a ~100-word summary to construal.iclc15@gmail.com by September 10th, 2018 (extended from August 31st, 2018). We will inform all potential contributors in early September whether their paper can be included in our workshop proposal. If we are unable to accommodate your submission, you can of course submit it to the general session of the conference. The same applies if our workshop proposal as a whole is rejected.

 

References

Arppe, Antti, Gaëtanelle Gilquin, Dylan Glynn, Martin Hilpert & Arne Zeschel. 2010. Cognitive Corpus Linguistics: Five Points of Debate on Current Theory and Methodology. Corpora 5(1). 1–27.

Beckner, Clay, Richard Blythe, Joan Bybee, Morten H. Christiansen, William Croft, Nick C. Ellis, John Holland, Jinyun Ke, Diane Larsen-Freeman & Tom Schoenemann. 2009. Language is a Complex Adaptive System: Position Paper. Language Learning 59 Suppl. 1. 1–26.

Bergen, Benjamin K. 2012. Louder than Words: The New Science of How the Mind Makes Meaning. New York: Basic Books.

Broccias, Cristiano & Willem B. Hollmann. 2007. Do we need Summary and Sequential Scanning in (Cognitive) Grammar? Cognitive Linguistics 18. 487–522.

Bybee, Joan L., Revere Perkins & William Pagliuca. 1994. The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: University of Chicago Press.

Croft, William & Alan Cruse. 2004. Cognitive Linguistics. Cambridge: Cambridge University Press.

Enfield, N.J. 2014. Natural causes of language: frames, biases, and cultural transmission. (Conceptual Foundations of Language Science 1). Berlin: Language Science Press.

Frank, Roslyn M. & Nathalie Gontier. 2010. On Constructing a Research Model for Historical Cognitive Linguistics (HCL): Some Theoretical Considerations. In Margaret E. Winters, Heli Tissari & Kathryn Allan (eds.), Historical Cognitive Linguistics, 31–69. (Cognitive Linguistics Research 47). Berlin, New York: De Gruyter.

Fusaroli, Riccardo & Kristian Tylén. 2012. Carving language for social coordination: A dynamical approach. Interaction Studies 13(1). 103–124.

Goschler, Juliana. 2017. A contrastive view on the cognitive motivation of linguistic patterns: Concord in English and German. In Stefan Hartmann (ed.), Yearbook of the German Cognitive Linguistics Association 2017, 119–128.

Hilpert, Martin. 2008. New evidence against the modularity of grammar: Constructions, collocations, and speech perception. Cognitive Linguistics 19(3). 491–511.

Janda, Laura (ed.). 2013. Cognitive Linguistics: The Quantitative Turn. Berlin, New York: De Gruyter.

Kirby, Simon. 2012. Language is an Adaptive System: The Role of Cultural Evolution in the Origins of Structure. In Maggie Tallerman & Kathleen R. Gibson (eds.), The Oxford Handbook of Language Evolution, 589–604. Oxford: Oxford University Press.

Krawczak, Karolina & Dylan Glynn. forthc. Operationalising construal. Of / about prepositional profiling for cognition and communication predicates. In C. M. Bretones Callejas & Chris Sinha (eds.), Construals in language and thought. What shapes what? Amsterdam, Philadelphia: John Benjamins.

Langacker, Ronald W. 1987. Foundations of Cognitive Grammar. Vol. 1: Theoretical Prerequisites. Stanford: Stanford University Press.

Langacker, Ronald W. 1991. Foundations of Cognitive Grammar. Vol. 2: Descriptive Application. Stanford: Stanford University Press.

Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.

Langacker, Ronald W. 2015. Construal. In Ewa Dąbrowska & Dagmar Divjak (eds.), Handbook of Cognitive Linguistics, 120–142. Berlin, New York: De Gruyter.

Matlock, Teenie. 2010. Abstract Motion is No Longer Abstract. Language and Cognition 2(2). 243–260.

Matlock, Teenie, David Sparks, Justin L. Matthews, Jeremy Hunter & Stephanie Huette. 2012. Smashing New Results on Aspectual Framing: How People Talk about Car Accidents. Studies in Language 36(3). 700–721.

Matlock, Teenie & Bodo Winter. 2015. Experimental Semantics. In Bernd Heine & Heiko Narrog (eds.), The Oxford Handbook of Linguistic Analysis, 771–790. Oxford: Oxford University Press.

Pleyer, Michael & James Winters. 2014. Integrating Cognitive Linguistics and Language Evolution Research. Theoria et Historia Scientiarum 11. 19–43.

Schmid, Hans-Jörg & Franziska Günther. 2016. Toward a Unified Socio-Cognitive Framework for Salience in Language. Frontiers in Psychology 7. doi:10.3389/fpsyg.2016.01110 (31 March, 2018).

Schönefeld, Doris (ed.). 2011. Converging evidence: methodological and theoretical issues for linguistic research. (Human Cognitive Processing 33). Amsterdam, Philadelphia: John Benjamins.

Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge: MIT Press.

Taylor, John R. 2003. Linguistic Categorization. 3rd ed. Oxford: Oxford University Press.

Verhagen, Arie. 2007. Construal and Perspectivization. In Dirk Geeraerts & Hubert Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics, 48–81. Oxford: Oxford University Press.

The Color Game: Challenges for App projects

Over at ICCI are a couple of blog posts by Olivier Morin about a project I'm involved in, the Color Game. The first post provides an introduction to the app and how it will contribute to research on language and communication. And, as I mentioned on Twitter, the second blog post highlights one of the Color Game's distinct advantages over traditional experiments:

An ambitious project

What I want to briefly mention is that the Color Game is an extremely ambitious project that marks the culmination of two years' worth of work. A major challenge from a scientific perspective has been to design multiple projects that get the most out of the potential data. Experiments are normally laser-focused on meticulously testing a narrow set of predictions. This is quite rightly viewed as a positive quality, and it is why well-designed experiments are far better suited for discerning mechanistic and causal explanations than other research methods. But I think the Color Game does make some headway in addressing long-standing constraints:

  • Limitations in sample size and representation.
  • Technical challenges of scaling up complex methods.
  • Underlying motivation for participation.
Sample size and representation

Discussions about the limitations of experiments in terms of sample size and the populations they represent are abundant. Such issues are particularly prevalent in the ongoing replication and reproducibility crisis. Looking at just the first week of data for the Color Game, there are already over 1,000 players from a wide variety of countries:

Color Game players from around the world. Darker, redder colours indicate more concentrated regions of players. From: http://cognitionandculture.net/blog/color-game/the-color-games-world

By contrast, many psychological experiments will be lucky to get an n of 100, and this number is often determined on the basis of reaching sufficient statistical power for the analyses (cautionary note: having a large sample size can also be the source of big inferential errors). It is also the case that standard psychology populations are distinctly WEIRD. Apps can help connect researchers with populations normally inaccessible, especially given the proliferation of mobile phones.
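
For readers who haven't run one, this is roughly how that power-based n gets chosen. Below is a minimal sketch assuming statsmodels and conventional thresholds (a medium effect size of d = 0.5, alpha = .05, power = .80), which are my illustrative choices rather than anything from the Color Game analyses:

```python
# How many participants per group does a simple two-group comparison need?
# (Illustrative thresholds: d = 0.5, alpha = .05, power = .80.)
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                           power=0.8, alternative='two-sided')
print(f"n per group: {n_per_group:.1f}")  # roughly 64 per group, ~128 in total
```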

Technical challenges

The Color Game’s larger and more diverse sample leads to my second point: that scaling up complex methods is both costly and technically challenging. Even though web experiments are booming, and this can mitigate the downside of having a small n, they are often extremely simple and restricted. Prioritising simplicity is fine if it is premised on scientific principles, but there is also the temptation to make design choices for reasons of expediency.

So, to give one example, if you want participants to complete your experiment, then making the experiment shorter (through restricting the number of trials and/or the time it takes to complete a trial) increases the probability of finishing. It can also lead to methodological decisions that make the task technically easier. All else being equal, it is simpler to create a pseudo-communicative task (where the participant is told they are communicating with someone, even though they aren't) than it is to create an actual communicative task. The same goes for using feedback over repair mechanisms.

All experiments are faced with these problems. But, anecdotally, these problems seem to be acute for web-based experiments. Just to be clear: I'm not making a judgement about whether or not a study suffered from making a particular methodological choice. The point is simply that these design choices should (where possible) consider the scientific consequences above technical and practical expediency. My worry is that when scientific considerations are not prioritised, you lose too much in terms of generalisability to real-world phenomena. And, even when this is not the case and the experiment is justifiably simple, I wouldn't be surprised to find that this creates a bias in the types of web experiments performed. In short, there's the possibility that web-based experiments systematically underutilise certain methodological designs, leading to a situation where they occupy and explore a much narrower region of the design space.

I hope that the Color Game makes some small steps towards avoiding this pitfall. For instance, we incorporated features not often found in other web-based communication game experiments, such as the ability to communicate synchronously or asynchronously and for participants to engage in simple repair mechanisms instead of receiving feedback. Players are also free to choose who they want to play with in the forum, giving a much more naturalistic flavour to the interaction dynamics. This allows for self-organisation and it’ll be interesting to see what role (if any) the emergent population structure plays in structuring the languages. App games therefore offer a promising avenue for retaining the technically complex features of traditional lab experiments whilst profiting from the larger sample sizes of web experiments.

Having a more complex set-up also allowed us to pre-register six projects that aim to answer distinct questions about the emergence and evolution of communication systems. Achieving a similar goal with other methods is far more costly in terms of time and money. But there are downsides, one of which is that the changes and requirements imposed by a single project can impact the scope and design of all the other projects. Imagine you have a project which requires that the population size parameter is manipulated (FYI, this is not a Color Game project): every other project now needs to control for this, be it through methodological choices (e.g., only sampling populations with a fixed number of players) or in the statistical analyses.

In some sense, this reintroduces the complexity of the real-world back into the app, both in terms of its upsides and downsides. Suffice to say, we tried to minimise these conflicts as much as possible, but in some cases they were simply unavoidable. Also, even if there are cases where this introduces unforeseen consequences in the reliability of our results, we can always follow up on our findings with more traditional lab experiments and computer models.

Underlying motivation

Assuming I haven't managed to annoy anyone who isn't using app-based experiments, I've saved my most controversial point for last. It's a hard sell, and I'm not even sure I fully buy it, but I think the underlying motivation for playing apps is very different from participating in a standard experiment. At the task level, the Color Game is not too dissimilar from other experiments: you receive motivation to continue playing via points, and to get points in the first place you need to communicate successfully. Where it differs is why people participate in the first place. In short, the Color Game is different because people principally play it for entertainment (or, at least, that's what I keep telling myself). Although lab-based experiments are often fun, this normally stands as an ancillary concern that's not considered crucial to the scientific merits of a study.

Undergraduate experiments are (in)famously built on rewards of cookies and cohort obligations, and it is fair to say that most lab experiments incentivise participation via monetary remuneration (although this might not be the only reason why someone participates). Yet, humans engage in all sorts of behaviours for endogenous rewards, and app games are really nice examples of such behaviour. People are free to download the game (or not), they can play as little or as much as they please, and as I’ve already mentioned there is freedom in their choice of interaction partners. Similarly, in the real-world, people have flexibility in when and why they engage in communicative behaviour, with monetary gain being just a small subset (e.g., a large part of why you don’t have to go far to find a motivational speaker is because they earn money for public lectures and other speaking events).

If you’re interested, and want to see what all the fuss is about, feel free to download the app (available on Android and iOS):

 

The EvoLang Causal Graph Challenge

This year at EvoLang, I’m releasing CHIELD: The Causal Hypotheses in Evolutionary Linguistics Database.  It’s a collection of theories about the evolution of language, expressed as causal graphs.  The aim of CHIELD is to build a comprehensive overview of evolutionary approaches to language.  Hopefully it’ll help us find competing and supporting evidence, link hypotheses together into bigger theories and generally help make our ideas more transparent. You can access CHIELD right now, but hang around for details of the challenges.

The first thing that CHIELD can help express is the (sometimes unexpected) causal complexity of theories.  For example, Dunbar (2004) suggests that gossip replaced physical grooming in humans to support increasingly complicated social interactions in larger groups.  However, the whole theory is actually composed of 29 links, involving predation risk, endorphins and resource density:

The graph above might seem very complicated, but it was actually constructed just by going through the text of Dunbar (2004) and recording each claim about variables that were causally linked.  By dividing the theory into individual links it becomes easier to think about each part.

Second, CHIELD also helps find other theories that intersect with this one through variables like theory of mind, population size or the problem of freeriders, so you can also use CHIELD to explore multiple documents at once.  For example, here are all the connections that link population size and morphological complexity (9 papers so far in the database):

The first thing to notice is that there are multiple hypotheses about how population size and morphological complexity are linked.  We can also see at a glance that there are different types of evidence for each link.  Some are supported from multiple studies and methods, while others are currently just hypotheses without direct evidence.
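
For a sense of how such hypotheses can be handled once they are expressed as causal graphs, here is a small sketch using networkx; the variables, links, and evidence labels are illustrative stand-ins, not CHIELD's actual schema or data:

```python
# Store causal claims as a directed graph, then query paths between variables.
import networkx as nx

g = nx.DiGraph()
# Illustrative links in the spirit of Dunbar (2004); labels are made up.
g.add_edge("group size", "grooming time demand", evidence="review")
g.add_edge("grooming time demand", "vocal grooming (gossip)", evidence="hypothesis")
g.add_edge("vocal grooming (gossip)", "social bonding", evidence="experiment")
g.add_edge("group size", "morphological complexity", evidence="statistical")

# Every causal route connecting two variables of interest, with its evidence types.
for path in nx.all_simple_paths(g, "group size", "social bonding"):
    labels = [g.edges[a, b]["evidence"] for a, b in zip(path, path[1:])]
    print(" -> ".join(path), "| evidence:", ", ".join(labels))
```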

However, CHIELD won’t work without your help!  CHIELD has built-in tools for you – yes YOU – to contribute.  You can edit data, discuss problems and add your own hypotheses.  It’s far from perfect and of course there will be disagreements.  But hopefully it will lead to productive discussions and a more cohesive field.

Which brings us to the challenges …

The EvoLang Causal Graph challenge: Contribute your own hypotheses

You can add data to CHIELD using the web interface.  The challenge is to draw your EvoLang paper as a causal graph.  It’s fun!  The first two papers to be contributed will become part of my poster at EvoLang.

Here are some tips:

  • Break down your hypothesis into individual causal links.
  • Try to use existing variable names, so that your hypothesis connects to other work.  You can find a list of variables here, or the web interface will suggest some.  But don’t be afraid to add new variables.
  • Try to add direct quotes from the paper to the “Notes” field to support the link.
  • If your paper is already included, do you agree about the interpretation? If not, you can raise an issue or edit the data yourself.

More help is available here.  Click here to add data now!  Your data will become available on CHIELD, and your name will be added to the list of contributors.

Bonus Challenge: Contribute 5 papers, become a co-author!

I’ll be writing an article about the database and some initial findings for the Journal of Language Evolution.  If you contribute 5 papers or more, then you’ll be added as a co-author.  As an incentive to contribute further, co-authors will be ordered by the number of papers they contribute.  This offer is open to anyone studying evolutionary linguistics, not just people presenting at EvoLang.  You should check first whether the paper you want to add has already been included.

Bonus Challenge: Contribute some code, become a co-author!

CHIELD is open source.  The GitHub repository for CHIELD has some outstanding issues. If you contribute some programming to address them, you’ll become a co-author on the journal article.

Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

We live in an age where we have more data on more languages than ever before, and more data from other domains to link it with. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. For example, the proposed link between climate and tone languages could never have been investigated without massive global databases. However, there is not much discussion of the overall approach to research in this area.

This week I published a paper in a special issue on the Adaptive Value of Languages, outlining the maximum robustness approach to these problems. I then try to apply this approach to the debate about the link between tones and climate.

In a nutshell, I suggest that research should be:

Robust

Instead of aiming for the most valid test for a hypothesis, we should consider as many sources of data and as many processes as possible.  Agreement between them supports a theory, but differences can also highlight which parts of a theory are weak.

Causal

Researchers should be more explicit about the causal effects in their hypotheses.  Formal tools from causal graph theory can help formulate tests, recognise weaknesses and avoid talking past each other.

Incremental

Realistically, a single paper can’t be the final word on a topic, and shouldn’t aim to.  Statistical studies of large-scale, cross-cultural data are very complicated, and we should expect small steps to establishing causality.

I apply these ideas to the debate about tone and climate. Caleb Everett also published a paper in this issue showing that speakers in drier regions use vowels less frequently in their basic vocabulary. I test whether the original link with tone and the new link with vowels hold up when using different data sources and different statistical frameworks. The correlation with tone is not robust, while the correlation with vowels seems more promising.
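
As a toy illustration of what "holds up under a different framework" can mean in practice (with made-up numbers and column names, not the datasets used in the paper), one simple check is whether a pooled correlation survives once language family is taken into account:

```python
# Compare a pooled climate-phonology correlation with family-controlled estimates.
# All values below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "family":   ["A", "A", "A", "B", "B", "C", "C", "C"],
    "humidity": [0.9, 0.8, 0.7, 0.4, 0.3, 0.6, 0.5, 0.2],
    "tone":     [3, 2, 2, 1, 0, 2, 1, 0],
})

pooled_r = df["humidity"].corr(df["tone"])  # ignores relatedness of languages
within_r = (df.groupby("family")[["humidity", "tone"]]
              .apply(lambda d: d["humidity"].corr(d["tone"]))
              .mean())                      # averages the correlation within each family
print(f"pooled r = {pooled_r:.2f}, mean within-family r = {within_r:.2f}")
```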

https://www.frontiersin.org/files/Articles/327602/fpsyg-09-00166-HTML/image_m/fpsyg-09-00166-g003.jpg

I then suggest some ideas for alternative methodological approaches to this theory that could be tested.  For example:

  • An iterated artificial learning experiment
  • A phonetic study of vowel systems
  • A historical case-study of 5 Bantu languages
  • A corpus study of tone use in Cantonese and conversational repair in Mandarin
  • A corpus study of Larry King’s speech

 

Resister: A sci-fi sequel about cultural evolution and academic funding

In 2016, Casey Hattrey combined literary genres that had long been kept far apart from each other: science fiction, academic funding applications and cultural evolution theory. Space Funding Crisis I: Persister was a story that tried to “put the fun in academic funding application and the itch in hyper-niche”. It was criticised as “unrealistic and too centered on academics to be believable” and “not a very good book”. Dan Dediu’s advice was “better not even start reading it,” and Fiona Jordan’s review was literally a four-letter word. Still, that hasn’t stopped Hattrey from writing the sequel that the title of the first book tried to warn us about.

The badly conceived artwork for Resister

Space Funding Crisis II: Resister continues to follow the career of space linguist Karen Arianne. Just when she thought she’d gotten out of academia, the shadowy Central Academic Funding Council Administration pulls her back in for one more job. Or at least a part-time post-doc. Her mission: solve the mystery of the great convergence. Over thousands of years of space-faring, human linguistic diversity has exploded, but suddenly people have started speaking the same language. What could have caused this sinister twist? Who are the Panini Press? And what exactly is research insurance? Arianne’s latest adventure sees her struggle against ‘splainer bots, the conference mafia and her own inability to think about the future.

To say that this was the “difficult second book” would give too much credit to the first.  Hattrey seems to have learned nothing about writing or science since the last time they ventured into the weird world of self-published online novels. The characters have no distinct voice, the plot doesn’t make much sense and there are eye-watering levels of exposition.  In the appendix there’s even an R script which supports some of the book’s predictions, and even that is badly composed.  Even some of the apparently over-the-top futuristic ideas like insurance for research hypotheses are a bit behind existing ideas like using prediction markets for assessing replicability.

If there is a theme between the poorly formatted pages, then it's emergence: complex patterns arising from simple rules. Arianne has a kind of spiritual belief in just reacting, Braitenberg-like, to the here-and-now rather than planning ahead. Apparently Hattrey intends this to translate into a criticism of the pressures of early-career academic life. But this never really materialises out of the bland dialogue and insistence on putting lasers everywhere.

Still, where else are you going to find a book that makes fun of the slow science movement, generative linguistics and theories linking the emergence of tone systems to the climate?

Resister is available for free in various formats, including Kindle, iPad and Nook. The prequel, Persister, is also available (epub, Kindle, iPad, Nook).

Persister: Space Funding Crisis I  Resister: Space Funding Crisis II

CfP: Experimental approaches to iconicity in language

Submissions are being sought for a special issue of Language and Cognition on Experimental approaches to iconicity in language. We welcome submissions related to any aspect of the many forms and functions of iconicity in natural language (see below). Papers may feature new experimental findings, or may present novel theoretical syntheses of experimental work on iconicity in language. Manuscripts should be a maximum of 8,000 words, with shorter submissions preferred.

Many researchers in language and cognition now recognize that iconicity – resemblance between form and meaning – is a fundamental feature of human languages, spoken and signed alike (Nuckolls 1999; Taub 2001; Perniss, Thompson, & Vigliocco, 2010; Dingemanse et al., 2015; Perry, Perlman & Lupyan, 2015; Ortega, 2017). Iconicity is found across all levels of linguistic structure, spanning discourse, grammar, morphology, lexicon, phonology and phonetics, and even orthography. It is found in the prosody of speech and sign and in the gestures that accompany linguistic behaviour.

While experimental research on iconicity in speech has long favoured the study of pseudowords like bouba and kiki, a growing body of experimental research shows that iconicity plays an active role in a number of basic language processes, cutting across cognition, development, cultural and biological evolution. The special issue aims to feature some of the most exciting new experimental research on the many forms, functions, and timescales of iconicity in human language.

Special issue editors
Marcus Perlman, University of Birmingham
Pamela Perniss, University of Brighton
Mark Dingemanse, Max Planck Institute for Psycholinguistics

References
Dingemanse, M., Blasi, D.E., Lupyan, G., Christiansen, M.H., & Monaghan, P. (2015). Arbitrariness, iconicity, and systematicity in language. Trends in Cognitive Sciences, 19, 603-615.
Nuckolls, J.B. (1999). The case for sound symbolism. Annual Review of Anthropology, 28, 255-282.
Ortega, G. (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01280
Perniss, P., Thompson, R.L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology, 1, 227.
Perry, L.K., Perlman, M. & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLoS ONE, 10, e0137147.
Taub, S. (2001). Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press.

How to submit. If you would like to contribute, please email us an 800-1000-word abstract by 1st April, 2018. Abstracts should be sent to Marcus Perlman (m.perlman@bham.ac.uk). We will return a decision on your abstract by 15th April, and first submissions will be due on 15th August. Manuscripts will be submitted through the Language and Cognition submission interface. We aim to put out the complete issue by the beginning of 2019. Notably, submissions that proceed faster can appear online first.

CfP: Measuring Language Complexity at EvoLang

This is a guest post from Aleksandrs Berdicevskis about the workshop Measuring Language Complexity.

A lot of evolutionary talks and papers nowadays touch upon language complexity (at least nine papers did so at Evolang 2016). One of the reasons is probably that complexity is a very convenient testbed for hypotheses that posit causal links between linguistic structure and extra-linguistic factors. Do factors such as population size, social network structure, or the proportion of non-native speakers shape language change, making certain structures (for instance, those that are morphologically simpler) more evolutionarily advantageous and thus more likely? Or don't they? If they do, how exactly?

Recently, quite a lot has been published on that topic, including attempts to do rigorous quantitative tests of the existing hypotheses. One problem that all such attempts face is that complexity can be understood in many different ways, and operationalized in yet many more. And unsurprisingly, the outcome of a quantitative study depends on what you choose as your measure! Unfortunately, there currently is little consensus about how measures themselves can be evaluated and compared.

To overcome this, we are organizing a shared task, "Measuring Language Complexity", a satellite event of Evolang 2018, to take place in Torun on April 15. Shared tasks are widely used in computational linguistics, and we strongly believe they can prove useful in evolutionary linguistics, too. The task is to measure the linguistic complexity of a predefined set of 37 language varieties belonging to 7 families (and then discuss the results, as well as their mutual agreement/disagreement, at the workshop). See the full CfP and other details here.
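
To make the "many operationalizations" point concrete, here is a sketch of one possible corpus-based measure, word-form entropy; it is purely illustrative and not one of the measures prescribed by the shared task:

```python
# Word-form (unigram) entropy: one crude proxy for morphological richness.
import math
from collections import Counter

def unigram_entropy(text: str) -> float:
    """Shannon entropy in bits over whitespace-tokenized word forms."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "the dog chased the cat and the cat chased the dog"
print(f"{unigram_entropy(sample):.2f} bits per word form")
```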

So far, the interest from the evolutionary community has been rather weak. But there is still time! We extended the deadline until February 28 and are looking forward to receiving your submissions!

CfP: Applications in Cultural Evolution, June 6-8, Tartu

Guest post by Peeter Tinits and Oleg Sobchuk

As mentioned in this blog before, evolutionary thinking can help the study of various cultural practices, not just language. The perspective of cultural evolution is currently seeing an interesting case of global growth and coordination – the widely featured founding of the Cultural Evolution Society (also on replicatedtypo), the recent inaugural conference, and follow-ups are bringing a diverse set of researchers around the same table. If this has gone past you unnoticed, there are nice resources gathered on the society website.

Evolutionary thinking seems useful for various purposes. However, does it work the same everywhere, and can research progress in one domain be easily carried over to another?

To make better sense of this, we are organizing a small conference to discuss the ways that evolutionary thinking can best be applied in different domains. The event "Applications in Cultural Evolution: Arts, Languages, Technologies" will take place on June 6-8 in Tartu, Estonia. Plenary speakers include:

We invite contributions from cultural evolution researchers of various persuasions and interests to talk about their work and how evolutionary models help with it. The deadline for abstracts is February 14.

Discussion of individual contributions will hopefully lead to a better understanding of commonalities and differences in how cultural evolution is applied in different areas, and help build an understanding of how to most productively use evolutionary thinking – what its prospects and limitations are. We aim to allow for building common ground through plenty of space and opportunities for formal and informal discussion on site.

Both case studies and general perspectives are welcome. In addition to original research, we encourage participants to think of the following questions:

– What do you get out of cultural evolution research?
– How should we best apply evolutionary thinking to culture?
– What matters when we apply this to different domains or timescales?

Deadline for abstracts: February 14, 2018
Event dates: June 6-8
Location: Tartu University, Estonia

Full call for papers and information on the website. Also available as PDF.