Call for participation: IACS3 in Toronto

The call for papers for IACS3 in Toronto is below; its research topics include experimental semiotics, speech and gesture, and the evolution of language, among many others. The full call can be seen here: http://www.perceptualartifacts.org/iacs-2018/cfp.html

The International Association for Cognitive Semiotics, in cooperation with OCAD University and Ryerson University, is pleased to announce The Third Conference of the International Association for Cognitive Semiotics (IACS3 – 2018), Toronto, Ontario, Canada: iacs-2018.org

Confirmed plenary speakers:

  • John M. Kennedy • University of Toronto
  • Kalevi Kull • University of Tartu
  • Maxine Sheets-Johnstone • University of Oregon

Conference Theme: MULTIMODALITIES

This non-restrictive theme is intended to encourage the exploration of pre-linguistic and extra-linguistic modes of semiotic systems and meaning construal, as well as their intersection with linguistic processes.

Cognitive Semiotics investigates the nature of meaning, the role of consciousness, the unique cognitive features of human beings, the interaction of nature and nurture in development, and the interplay of biological and cultural evolution in phylogeny. To better answer such questions, cognitive semiotics integrates methods and theories developed in the human, social, and cognitive sciences.

The International Association for Cognitive Semiotics (IACS, founded 2013) aims at establishing cognitive semiotics as a trans-disciplinary study of meaning. More information on the International Association for Cognitive Semiotics can be found at http://iacs.dk

The IACS conference series seeks to gather together scholars and scientists in semiotics, linguistics, philosophy, cognitive science, psychology and related fields, who wish to share their research on meaning and contribute to the interdisciplinary dialogue.

Topics of the conference include (but are not limited to):

  • Biological and cultural evolution of human cognitive specificity
  • Cognitive linguistics and phenomenology
  • Communication across cultural barriers
  • Cross-species comparative semiotics
  • Evolutionary perspectives on altruism
  • Experimental semiotics
  • Iconicity in language and other semiotic resources
  • Intersubjectivity and mimesis in evolution and development
  • Multimodality
  • Narrativity across different media
  • Semantic typology and linguistic relativity
  • Semiosis (sense-making) in social interaction
  • Semiotic and cognitive development in children
  • Sign use and cognition
  • Signs, affordances, and other meanings
  • Speech and gesture
  • The comparative semiotics of iconicity and indexicality
  • The evolution of language

We invite abstract submissions for theme sessions, oral presentations and posters. Please indicate your chosen format with your submission. Format types and guidelines are here: http://www.perceptualartifacts.org/iacs-2018/cfp.html

Important Dates

Deadline for submission of theme sessions:
Deadline for abstract submission (oral presentations, posters):
Notification of acceptance (oral presentations, posters):
Last date for early registration:

Usage context and overspecification

A new issue of the Journal of Language Evolution has just appeared, including a paper by Peeter Tinits, Jonas Nölle, and myself on the influence of usage context on the emergence of overspecification. (It actually appeared online a couple of weeks ago, and an earlier version was included in last year’s Evolang proceedings.) Some of the volunteers who participated in our experiment were recruited via Replicated Typo – thanks to everyone who helped us out! Without you, this study wouldn’t have been possible.

I hope that I’ll find time to write a bit more about this paper in the near future, especially about its development, which might itself qualify as an interesting example of cultural evolution. Even though the paper just reports on a tiny experimental case study addressing a fairly specific phenomenon, we discovered, in the process of writing, that each of the three authors had quite different ideas of how language works, which made the write-up process much more challenging than expected (but arguably also more interesting).

For now, however, I’ll just link to the paper and quote our abstract:

This article investigates the influence of contextual pressures on the evolution of overspecification, i.e. the degree to which communicatively irrelevant meaning dimensions are specified, in an iterated learning setup. To this end, we combine two lines of research: In artificial language learning studies, it has been shown that (miniature) languages adapt to their contexts of use. In experimental pragmatics, it has been shown that referential overspecification in natural language is more likely to occur in contexts in which the communicatively relevant feature dimensions are harder to discern. We test whether similar functional pressures can promote the cumulative growth of referential overspecification in iterated artificial language learning. Participants were trained on an artificial language which they then used to refer to objects. The output of each participant was used as input for the next participant. The initial language was designed such that it did not show any overspecification, but it allowed for overspecification to emerge in 16 out of 32 usage contexts. Between conditions, we manipulated the referential context in which the target items appear, so that the relative visuospatial complexity of the scene would make the communicatively relevant feature dimensions more difficult to discern in one of them. The artificial languages became overspecified more quickly and to a significantly higher degree in this condition, indicating that the trend toward overspecification was stronger in these contexts, as suggested by experimental pragmatics research. These results add further support to the hypothesis that linguistic conventions can be partly determined by usage context and shows that experimental pragmatics can be fruitfully combined with artificial language learning to offer valuable insights into the mechanisms involved in the evolution of linguistic phenomena.
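
To make the key measure concrete: as the abstract says, a label is overspecified to the extent that it encodes meaning dimensions that are not needed to pick out the target in its context. Below is a toy sketch of one way to count such redundant dimensions; the feature scheme and function names are hypothetical, and this is not the scoring code used in the paper.

```python
from itertools import combinations

# Toy sketch of counting overspecified dimensions in a referring expression.
# Objects are dicts of feature dimensions; `mentioned` is the set of
# dimensions the label encodes. Hypothetical operationalisation, not the
# measure implemented in the paper.

def identifies_target(mentioned, target, distractors):
    """True if the mentioned dimensions uniquely pick out the target."""
    return not any(all(d[dim] == target[dim] for dim in mentioned)
                   for d in distractors)

def overspecification(mentioned, target, distractors):
    """Number of mentioned dimensions beyond the smallest subset that still
    uniquely identifies the target (0 = no redundant dimensions)."""
    mentioned = set(mentioned)
    if not identifies_target(mentioned, target, distractors):
        return 0  # underspecified: nothing redundant by this definition
    for size in range(len(mentioned) + 1):
        for subset in combinations(mentioned, size):
            if identifies_target(set(subset), target, distractors):
                return len(mentioned) - size

# Example: colour is redundant because shape alone identifies the target.
target = {"shape": "circle", "colour": "red"}
distractors = [{"shape": "square", "colour": "red"},
               {"shape": "square", "colour": "blue"}]
print(overspecification({"shape", "colour"}, target, distractors))  # -> 1
```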

In addition to our article, there are a number of other papers in the new JoLE issue that are well worth a read, including another iterated learning paper by Clay Beckner, Janet Pierrehumbert, and Jennifer Hay, who have conducted a follow-up on the seminal Kirby, Cornish & Smith (2008) study. Apart from presenting highly relevant findings, they also make some very interesting methodological points.

MMIEL Summer School in experimental and statistical methods

September Tutorial in Empiricism: Practical Help for Experimental Novices

In September, the Language Evolution and Interaction Scholars of Nijmegen (LEvInSoN group), based in the Language and Cognition Department at the Max Planck Institute for Psycholinguistics, will be hosting a workshop on research in Language Evolution and Interaction (September 21-22) – call for posters here: http://www.mpi.nl/events/MMIEL

In addition to this workshop, we will be hosting a short tutorial series bookending the workshop (Sept 20 & 23), covering experimental and statistical methods that should be of broad interest to a general audience. In this tutorial series, we will cover all aspects of creating, hosting, and analysing the data from a set of experiments that will be run live (online) during the workshop.

Details of the summer school can be found here: http://www.mpi.nl/events/MMIEL/summer-school


Registration is free but required. Spots are limited and will be allocated on a first-come, first-served basis; a waitlist will be established if necessary.

Register here

EVOLANG XII (2018): Call for Papers

The 12th International Conference on the Evolution of Language invites substantive contributions relating to the evolution of human language.

IMPORTANT DATES
Abstract submission: 1 September 2017
Notification of acceptance: 1 December 2017
Early-bird fee: 31 December 2017
Conference: 16-19 April 2018

Submission Information
Submissions may be in any relevant discipline, including, but not limited to: anthropology, archeology, artificial life, biology, cognitive science, genetics, linguistics, modeling, paleontology, physiology, primatology, philosophy, semiotics, and psychology. Normal standards of academic excellence apply. Submitted papers should aim to make clear their own substantive claim, relating it to the relevant, up-to-date scientific literature in the field of language evolution. Submissions should set out the method by which the claim is substantiated, the nature of the relevant data, and/or the core of the theoretical argument concerned. Novel and original theory-based submissions are welcome. Submissions centred around empirical studies should not rest on preliminary results.

Please see http://evolang.org/submissions for submission templates and further guidance on submission preparation. Submissions can be made via EasyChair (https://easychair.org/conferences/?conf=evolang12) by SEPTEMBER 1, 2017 for both podium presentations (20 minute presentation with additional time for discussion) and poster presentations. All submissions will be refereed by at least three relevant referees, and acceptance is based on a scoring scheme pooling the reports of the referees. In recent conferences, the acceptance rate has been about 50%. Notification of acceptance will be given by December 1, 2017.

For any questions regarding submissions to the main conference please contact scientific-committee@evolang.org.

Workshops: in addition to the general session, EVOLANG XII will host up to five thematically focused, half-day workshops. See here for the Call for Workshops.

Deadline extended for Triggers of Change in the Language Sciences

The 2nd XLanS conference, on Triggers of Change in the Language Sciences, has extended its submission deadline to June 14th.

This year’s topic is ‘triggers of change’:  What causes a sound system or lexicon or grammatical system to change?  How can we explain rapid changes followed by periods of stability?  Can we predict the direction and rate of change according to external influences?

We have also added two new researchers to our keynote speaker list, which now stands as:


Wh-words sound similar to aid rapid turn taking

A new paper by Anita Slonimska and myself attempts to link global tendencies in the lexicon to constraints from turn taking in conversation.

Question words in English sound similar (who, why, where, what …), so much so that this class of words is often referred to as wh-words. This regularity exists in many languages, though the phonetic similarity differs, for example:

English     Latvian     Yaqui      Telugu
haw         ka:         jachinia   elaa
haw mɛni    tsik        jaikim     enni
haw mətʃ    tsik        jaiki      enta
wət         kas         jita       eem; eemi[Ti]
wɛn         kad         jakko      eppuDu
wɛr         kuɾ         jaksa      eTa; eedi; ekkaDa
wɪtʃ        kuɾʃ        jita       eevi
hu          kas         jabesa     ewaru
waj         ˈkaːpeːts   jaisakai   en[du]ceeta; enduku

In her Master’s thesis, Anita suggested that these similarities help conversation flow smoothly.  Turn-taking in conversation is surprisingly swift, with the usual gap between turns being only 200ms.  This is even more surprising when one considers that it takes around 600ms to retrieve, plan and begin pronouncing even a single word.  Therefore, speakers must begin planning what they will say before the current speaker has finished speaking (as demonstrated by many recent studies, e.g. Barthel et al., 2017). Starting your turn late can be interpreted as uncooperative, or lead to missing out on a chance to speak.

Perhaps the harshest environment for turn-taking is answering a content question.  Responders must understand the question, retrieve the answer, plan their utterance and begin speaking.  It makes sense to expect that cues would evolve to help responders recognise that a question is coming.  Indeed, there are many paralinguistic cues, such as rising intonation (even at the beginning of sentences) and eye gaze.  Another obvious cue is question words, especially when they appear at the beginning of question sentences. Slonimska hypothesised that wh-words sound similar in order to provide an extra cue that a question is about to be asked, so that the responder can begin preparing their turn early.

We tested this hypothesis, first by simply asking whether wh-words really do tend to sound similar within languages.  We combined several lexical databases to produce a word list for 1000 concepts in 226 languages, including question words.  We found that question words are:

  • More similar within languages than between languages
  • More similar than other sets of words (e.g. pronouns)
  • Often composed of salient phonemes

Of course, there are several possible confounds, such as languages being historically related, and many wh-words being derived from other wh-words within a language. We attempted to control for these using stratified permutation, excluding analysable forms, and comparing wh-words to many other sets of words, such as pronouns, which are subject to the same processes.  Not all languages have similar-sounding wh-words, but across the whole database the tendency was robust.
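
For readers who want a feel for the logic of the similarity test, here is a bare-bones sketch of a permutation test asking whether question words are more similar within languages than chance would predict. The toy data, the use of difflib’s string similarity, and the unstratified permutation are all simplifications; the paper used its own similarity measure, stratified permutations, and controls for related forms.

```python
import random
from difflib import SequenceMatcher
from itertools import combinations

# Toy permutation test: are question words more similar *within* languages
# than expected by chance? The data below are invented orthographic forms.

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def mean_within_language(lexicon):
    """Mean pairwise similarity of question words within each language."""
    sims = [similarity(a, b)
            for words in lexicon.values()
            for a, b in combinations(words, 2)]
    return sum(sims) / len(sims)

def permutation_test(lexicon, n_perm=1000, seed=1):
    rng = random.Random(seed)
    observed = mean_within_language(lexicon)
    pool = [w for words in lexicon.values() for w in words]
    sizes = [len(words) for words in lexicon.values()]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pool)  # break the word-to-language assignment
        shuffled, i = {}, 0
        for lang, size in zip(lexicon, sizes):
            shuffled[lang] = pool[i:i + size]
            i += size
        if mean_within_language(shuffled) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

toy = {"English": ["who", "what", "where", "when", "why"],
       "Latvian": ["kas", "kad", "kur", "kurs", "kapec"],
       "Yaqui":   ["jita", "jakko", "jaksa", "jabesa", "jaisakai"]}
print(permutation_test(toy))  # (observed within-language similarity, p-value)
```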

Another prediction is that the wh-word cues should be more useful if they appear at the beginning of question sentences.  We tested this using typological data on whether wh-words appear in initial position.  While the trend was in the right direction, the result was not significant when controlling for historical and areal relationships.

Despite this, we hope that our study shows that it is possible to connect constraints from turn taking to macro-level patterns across languages, and then test the link using large corpora and custom methods.

Anita will be presenting an experimental approach to this question at this year’s CogSci conference.  We show that /w,h/ is a good predictor of questions in real English conversations, and that people actually use /w,h/ to help predict that a question is coming up.

Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20.

All data and scripts are available in this github repository.

Call for Posters – Minds, Mechanisms and Interaction in the Evolution of Language

The workshop “Minds, Mechanisms and Interaction in the Evolution of Language” will be hosted at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands on 21st-22nd September 2017. The workshop will include a poster session on topics related to the themes of the meeting. We are interested in contributions investigating the emergence and evolution of language, specifically in relation to interaction.

We are looking for work in the following areas:

  • biases and pre-adaptations for language and interaction
  • cognitive and cultural mechanisms for linguistic emergence
  • interaction as a driver for language evolution

We invite submissions of abstracts for posters, particularly from PhD students and junior researchers.

Please submit an abstract of no more than 300 words (word count not including references) by email to hannah.little@mpi.nl.  Please include a title, authors, affiliations and contact email addresses.  

Deadline: July 9th 2017

Outcome of decision process by: 24th July

Abstracts will be reviewed by the workshop committee.

The poster session will take place on the evening of Thursday September 21st 2017.

Registration is free (details to follow).

Plenary speakers:

  • David Leavens, University of Sussex
  • Jennie Pyers, Wellesley College
  • Monica Tamariz, Heriot Watt University

The workshop also includes presentations from the LEvInSoN group (Language Evolution and Interaction Scholars of Nijmegen) and an introduction by Stephen Levinson himself!

Summer school:

The workshop will also be bookended with a summer school on 20th and 23rd September specifically aimed at PhD students. The school will consist of a short tutorial series covering experimental and statistical methods that should be of broad interest to a general audience, though focussed around the theme of the workshop. In this tutorial series, we will cover all aspects of creating, hosting, and analysing the data from a set of experiments that will be run live (online) during the workshop! More details for the summer school and registration will follow.

2 PhD positions available with Bart de Boer in Brussels!

Two PhD positions are available in the AI lab at the Vrije Universiteit Brussel with Bart de Boer.

One position is on modelling an emerging sign language:

We are looking for a PhD student to work on modeling the emergence of sign languages, with a focus on modeling the social dynamics underlying existing signing communities.  The project relies on specialist expertise on the Kata Kolok signing community, which has emerged in a Balinese village over the course of several generations. The emergence of Kata Kolok and the demographics of the village have been closely studied by geneticists, anthropologists, and linguists. A preliminary model simulating this emergence has been built in Python. The aim of the project is to investigate, using a combination of linguistic field research and computational modeling, which factors – cultural, genetic, linguistic and others – determine the way language emerges. There will be one PhD student in Nijmegen conducting primary field research on Kata Kolok and one based in Brussels (as advertised here) involved in the computational aspect of the project. Both positions are part of an FWO-NWO-funded collaboration between the Artificial Intelligence lab of the Vrije Universiteit Brussel and the Center for Language Studies at Radboud University Nijmegen; the advertised position is supervised by Profs. Bart de Boer and Connie de Vos.

Advertisement here: https://ai.vub.ac.be/PhDKataKolok

The other is on modelling acquisition of speech:

We are looking for someone who has (or who is about to complete) a master’s degree in artificial intelligence, speech technology, computer science or equivalent. You will work on a project that investigates advanced techniques for learning the building blocks of speech, with a focus on spectro-temporal features and dynamic Bayesian networks. It is part of the Artificial Intelligence lab of the Vrije Universiteit Brussel and is supervised by prof. Bart de Boer.

Advertisement here: https://ai.vub.ac.be/PhD_Spectrotemporal_DBN

The deadline for application is 1st July 2017. Other details available at the links above.

Questions about details of the positions themselves should be directed to Bart de Boer (bart@arti.vub.ac.be). However, I myself did my PhD with Bart at the VUB, so I’d also be happy to answer more informal questions about working in the lab/living in Belgium/other things (hannah@ai.vub.ac.be).

Iconicity evolves by random mutation and biased selection

A new paper by Monica Tamariz, myself, Isidro Martínez and Julio Santiago uses an iterated learning paradigm to investigate the emergence of iconicity in the lexicon.  The languages were mappings between written forms and a set of shapes that varied in colour, outline and, importantly, how spiky or round they were.

We found that languages which begin with no iconic mapping develop a bouba-kiki relationship when the languages are used for communication between two participants, but not when they are just learned and reproduced.  The measure of the iconicity of the words came from naive raters.

Here’s one of the languages at the end of a communication chain, and you can see that the labels for spiky shapes ‘sound’ more spiky:

An example language from the final generation of our experiment: meanings, labels and spikiness ratings.

These experiments were actually run way back in 2013, but as is often the case, the project lost momentum.  Monica and I met last year to look at it again, and we did some new analyses.  We worked out whether each new innovation that participants created increased or decreased iconicity.  We found that new innovations are equally likely to result in higher or lower iconicity: mutation is random.  However, in the communication condition, participants re-used more iconic forms: selection is biased.  This fits with a number of other studies on iconicity, including Verhoef et al. (2015, CogSci proceedings) and Blasi et al. (2017).

Matthew Jones, Gabriella Vigliocco and colleagues have been working on similar experiments, though their results are slightly different.  Jones presented this work at the recent symposium on iconicity in language and literature (you can read the abstract here), and will also present it at this year’s CogSci conference; I’m looking forward to reading it:

Jones, M., Vinson, D., Clostre, N., Zhu, A. L., Santiago, J., Vigliocco, G. (forthcoming). The bouba effect: sound-shape iconicity in iterated and implicit learning. Proceedings of the 36th Annual Meeting of the Cognitive Science Society.

Our paper is quite short, so I won’t spend any more time on it here, apart from one other cool thing: for the final set of labels in each generation we measured iconicity using scores from naive raters, but for the analysis of innovations we had hundreds of extra forms.  We used a random forest to predict iconicity ratings for the extra labels from unigrams and bigrams of the rated labels.  It accounted for 89% of the variance in participant ratings on unseen data.  This is a good improvement over some older techniques, such as using the average iconicity of the individual letters in the label, since random forests allow the weighting of particular letters to be estimated from the data, and also allow for non-linear effects when two letters are combined.
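
For the curious, here is a minimal sketch of that general technique: character unigram and bigram counts fed into a random forest regressor, which is then used to predict ratings for unrated labels. The labels, ratings and model settings below are invented for illustration and are not those from the paper.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer

# Sketch of predicting spikiness ratings from character unigrams and bigrams
# with a random forest. Labels and ratings here are made up; the paper trained
# on the human-rated labels and predicted ratings for the unrated innovations.

rated_labels = ["kiki", "zaki", "bubu", "lomo", "takete", "maluma"]
ratings = [6.5, 6.0, 1.5, 2.0, 5.5, 1.0]  # invented spikiness ratings

vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vectorizer.fit_transform(rated_labels)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X, ratings)

# Predict spikiness for labels that were never rated by participants.
new_labels = ["zikzik", "bulomo"]
print(dict(zip(new_labels, model.predict(vectorizer.transform(new_labels)))))
```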

However, it turns out that most of the prediction is done by a simple decision tree with just 3 unigram variables: labels were rated as more spiky if they contained a ‘k’, ‘j’ or ‘z’ (our experiment was run in Spanish).

So the method was a bit overkill in this case, but might be useful for future studies.

All data and code for the analyses and the random forest prediction are available in the supporting information of the paper, or in this github repository.

Tamariz, M., Roberts, S. G., Martínez, J. I. and Santiago, J. (2017). The Interactive Origin of Iconicity. Cognitive Science. doi:10.1111/cogs.12497 [PDF from MPI]

Biggest linguistics experiment ever links perception with linguistic history

Back in March 2014, Hedvig Skirgård and I wrote a post about the Great Language Game.  Today we’ve published those results in PLOS ONE, together with the Game’s creator Lars Yencken.

One of the fundamental principles of linguistics is that speakers who are separated in time or space will start to sound different, while speakers who interact with each other will start to sound similar.  Historical linguists have traced the diversification of languages using objective linguistic measurements, but there has so far been no widespread test of whether languages that are further apart on a family tree, or more physically distant from each other, actually sound more different to human listeners.

An opportunity arose to test this in the form of The Great Language Game: a web-based game where players listen to a clip of someone talking and have to guess which language is being spoken.  It was played by nearly one million people from 80 countries, and so is, as far as we know, the biggest linguistic experiment ever.  Actually, this is probably my favourite table I’ve ever published (note the last row):

Continent of IP address    Number of guesses
Europe                             7,963,630
North America                      5,980,767
Asia                                 841,609
Oceania                              364,390
South America                        356,390
Africa                                74,032
Antarctica                                11

We calculated the probability of confusing any of the 78 languages in the Great Language Game for any of the others (excluding guesses about a language if it was an official language of the country the player was in).  Players were good at this game – on average getting 70% of guesses correct.
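
As an illustration of the underlying computation, here is a toy sketch of turning raw guess records into confusion probabilities. The records and structure below are invented; the published analysis works from the full game logs and applies the exclusion described above.

```python
from collections import defaultdict

# Toy sketch: estimate P(guess = B | true language = A) from game records.
# Each record is (true_language, guessed_language); the real data also carry
# the player's country, used to exclude guesses about local official languages.

records = [("Swedish", "Norwegian"), ("Swedish", "Swedish"),
           ("Swedish", "Norwegian"), ("Italian", "Italian"),
           ("Italian", "Spanish"), ("Italian", "Italian")]

counts = defaultdict(lambda: defaultdict(int))
for true_lang, guess in records:
    counts[true_lang][guess] += 1

confusion = {
    true_lang: {guess: n / sum(guesses.values())
                for guess, n in guesses.items()}
    for true_lang, guesses in counts.items()
}
print(confusion["Swedish"])  # e.g. {'Norwegian': 0.67, 'Swedish': 0.33}
```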

Using partial Mantel tests (a bare-bones sketch of this kind of test appears after the list below), we found that languages are more likely to be confused if they are:

  • Geographically close to each other;
  • Similar in their phoneme inventories;
  • Similar in their lexicon;
  • Closely related historically (but this effect disappears when controlling for geographic proximity).
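
Here is the promised bare-bones sketch of a partial Mantel test: the correlation between two distance matrices, controlling for a third, with a p-value obtained by permuting one matrix. The random matrices below are purely illustrative; the real analysis used the actual distance matrices (confusability, geography, phoneme inventories, lexicon, historical relatedness).

```python
import numpy as np

# Minimal partial Mantel test: correlation between distance matrices A and B,
# controlling for C, with a permutation-based p-value. Illustrative only.

def offdiag(m):
    """Flatten the off-diagonal entries of a square matrix."""
    return m[~np.eye(len(m), dtype=bool)]

def partial_r(a, b, c):
    r_ab = np.corrcoef(offdiag(a), offdiag(b))[0, 1]
    r_ac = np.corrcoef(offdiag(a), offdiag(c))[0, 1]
    r_bc = np.corrcoef(offdiag(b), offdiag(c))[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

def partial_mantel(a, b, c, n_perm=999, seed=1):
    rng = np.random.default_rng(seed)
    observed = partial_r(a, b, c)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(a))  # permute rows/columns of A together
        if partial_r(a[np.ix_(idx, idx)], b, c) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Example with three random symmetric "distance" matrices for 20 languages.
rng = np.random.default_rng(0)

def random_dist(n):
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

a, b, c = (random_dist(20) for _ in range(3))
print(partial_mantel(a, b, c))  # (partial correlation, p-value)
```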

We also used random forest analyses to show that a language is more likely to be guessed correctly if it is often mentioned in literature, is the main language of an economically powerful country, is spoken by many people, or is spoken in many countries.

We visualised the perceptual similarity of languages by using the inverse probability of confusion to create a neighbour net:

This diagram shows a kind of subway map for the way languages sound. The shortest route between two languages indicates how often they are confused for one another – so Swedish and Norwegian sound similar, but Italian and Japanese sound very different. The further you have to travel, the more different two languages sound.  So French and German are far away from many languages, since these were the best-guessed in the corpus.

The labels we’ve given to some of the clusters are descriptive, rather than being official terms that linguists use.  The first striking pattern is that some languages are more closely connected than others, for example the Slavic languages are all grouped together, indicating that people have a hard time distinguishing between them. Some of the other groups are more based on geographic area, such as the ‘Dravidian’ or ‘African’ cluster. The ‘North Sea’ cluster is interesting: it includes Welsh, Scottish Gaelic, Dutch, Danish, Swedish, Norwegian and Icelandic.  These diverged from each other a long time ago in the Indo-European family tree, but have had more recent contact due to trade and invasion across the North Sea.

The whole graph splits between ‘Western’ and ‘Eastern’ languages (we refer to the political/cultural divide rather than any linguistic classification). This probably reflects the fact that most players were Western, or at least could probably read the English website.  That would certainly explain the linguistically confused “East Asian” cluster.  There are also a lot of interconnected lines, which indicates that some languages are confused for multiple groups, for example Turkish is placed halfway between “West” and “East” languages.

It was also possible to create neighbour nets for responses from specific parts of the world. While the general pattern is similar, there are also some interesting differences.  For example, respondents from North America were quite likely to confuse Yiddish and Hebrew.  These come from different language families, but both are spoken by mainly Jewish populations, and this may form part of players’ cultural knowledge of these languages.

In contrast, players from Africa placed Hebrew with the other Afro-Asiatic languages.

Results like this suggest that perception may be shaped by our linguistic history and cultural knowledge.

We also did some preliminary analyses on the phoneme inventories of languages, using a binary decision tree to explore which sounds made a language distinctive.  Binary decision trees identified some rare and salient features as critical cues to distinctiveness.
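
To give a flavour of that kind of analysis, here is a small hypothetical sketch: a shallow decision tree trained on binary phoneme-presence features to separate a handful of languages, then printed as rules. The feature table is invented and far smaller than a real phoneme-inventory database, and this is not the tree from the paper.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sketch: which binary phoneme features best separate languages?
# Rows are languages; columns indicate whether the language has the feature.
features = ["click", "front_rounded_vowel", "retroflex", "th_sound"]
inventories = {
    "Xhosa":   [1, 0, 0, 0],
    "French":  [0, 1, 0, 0],
    "Hindi":   [0, 0, 1, 0],
    "English": [0, 0, 0, 1],
}

X = list(inventories.values())
y = list(inventories.keys())

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=features))  # human-readable split rules
```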

The future


The analyses were complicated because we knew little about the individuals playing beyond the country of their IP address.  However, Hedvig and I, together with a team from the Language in Interaction consortium (Mark Dingemanse, Pashiera Barkhuysen and Peter Withers), have created a version of the game called LingQuest that does collect information about players’ linguistic backgrounds.  It also asks participants to compare sound files directly, rather than use written labels.

You can download LingQuest as an Apple app, or play it online here.