A new paper in the journal ‘Emotion’ presents research with implications for the evolution of language and emotion, and for theories of linguistic relativity. The paper, entitled ‘Categorical Perception of Emotional Facial Expressions Does Not Require Lexical Categories’, looks at whether our perception of other people’s emotions depends on the language we speak or whether it is universal. The results come from the Max Planck Institutes for Psycholinguistics and for Evolutionary Anthropology.
Human facial expressions are perceived categorically, which has led to the hypothesis that this categorical perception is caused by linguistic mechanisms.
The paper presents a study comparing German speakers with native speakers of Yucatec Maya, a language with no lexical labels distinguishing disgust from anger. This was backed up by a free naming task in which speakers of German, but not of Yucatec Maya, made lexical distinctions between disgust and anger.
The study used a match-to-sample task with facial expressions, and speakers of both German and Yucatec Maya perceived emotional facial expressions of disgust, anger and other emotions categorically. The effect was equally strong across the two language groups and across emotion continua (see figure 1), regardless of lexical distinctions.
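As a toy illustration of what ‘categorical’ means here (my own sketch, not the paper’s method or data): in categorical perception, two stimuli separated by the same physical distance on a morph continuum are discriminated more easily when they straddle a category boundary than when they fall within one category. A warped perceptual scale captures this:

```python
import math

def perceived(stimulus, boundary=0.5, warp=8.0):
    """Map a physical morph value (0..1) onto a hypothetical perceptual scale
    that compresses within-category differences and stretches the boundary."""
    return 1.0 / (1.0 + math.exp(-warp * (stimulus - boundary)))

# Two pairs with the same physical separation (0.2):
within = abs(perceived(0.1) - perceived(0.3))   # both on the same side of the boundary
across = abs(perceived(0.4) - perceived(0.6))   # pair straddles the boundary

print(within < across)  # True: the across-boundary pair is perceptually further apart
```

The boundary position and warp strength are arbitrary choices; the point is only that equal physical steps map to unequal perceptual steps around a category boundary.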
The results show that the perception of emotional signals is not the product of linguistic mechanisms that create different lexical labels, but instead provide evidence that emotions are subject to their own biologically evolved mechanisms. Sorry, Whorfians!
Sauter, D. A., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional facial expressions does not require lexical categories. Emotion. PMID: 22004379
Noam Chomsky recently gave a lecture on the poverty of the stimulus at UCL responding to topics such as language evolution and artificial language learning experiments. From about 89 minutes in he discusses iterated learning and language evolution, saying the conclusions derive from “serious illusions about evolution”:
Chomsky’s criticism of iterated learning experiments (see posts here and here) is based on two points. First, the emergence of structure has more to do with the intelligence of the modern humans taking part in the experiments than with a realistic language-evolution scenario. He suggests that structure would not emerge in a series of computer programs without human intelligence. As a colleague pointed out, however, the first iterated learning experiments used computational models of exactly this kind. Secondly, he suggests that the view of evolution employed in the explanation of these systems is a pop-psychology, gradual hill-climbing one. In fact, Chomsky claims, the evolution of traits such as language or eyes derives from single, frozen accidents. That is, evolution moves in leaps and bounds rather than small steps (Jim Hurford recently gave a lecture entitled ‘Reconciling linguistic jerks and biological creeps’ on this topic). Why else would humans be the only species with language?
Geoffrey Pullum counters this last point by asking why an innately specified UG would emerge so rapidly but then freeze for tens of thousands of years, when (borrowing Philip Lieberman’s point) traits such as lactose tolerance have emerged in the human genome within two thousand years. Chomsky gives some examples of traits that have developed rapidly, but then only changed marginally.
I don’t think that proponents of iterated learning paradigms would have a problem with a sudden emergence of a capacity for advanced linguistic communication. Although there is continuity between human and non-human communication systems, we have some tricks that other animals don’t (see Michael’s post here). However, the evolution of the structure of language after these mutations could owe a huge amount to processes of cultural transmission. The universals we see in the world’s languages, then, would be an amplification of weak biological biases.
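To make the ‘amplification of weak biases’ idea concrete, here is a minimal iterated learning sketch of my own (not one of the models or experiments discussed above): each learner estimates the frequency of a binary variant from the previous generation’s output, with a single pseudo-observation of variant A standing in for a weak innate bias. Over generations, chains drift strongly towards A:

```python
import random

def learn(data):
    """Estimate P(variant A) from observed utterances; the +1 pseudo-observation
    of A acts as a weak prior bias towards that variant."""
    return (data.count("A") + 1) / (len(data) + 1)

def iterate(generations=50, n_utterances=10, seed=0):
    """Pass a binary variant down a chain of learners; return the final P(A)."""
    rng = random.Random(seed)
    p = 0.5  # the first generation's input is unbiased
    for _ in range(generations):
        data = ["A" if rng.random() < p else "B" for _ in range(n_utterances)]
        p = learn(data)
    return p

chains = [iterate(seed=s) for s in range(200)]
print(round(sum(chains) / len(chains), 2))  # mean final P(A) ends up far above 0.5
```

The bias here nudges each learner’s estimate only slightly, yet repeated transmission through the bottleneck of ten utterances amplifies it into a near-universal preference for A, which is the general shape of the cultural-amplification argument.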
However, Chomsky seems disillusioned with the whole field of what he calls ‘the evolution of communication’. At least we didn’t get it as bad as exemplar theory, which he dismisses as “so outlandish it’s not worth thinking about”.
[Edit: I originally attributed Mark Liberman instead of Philip Lieberman. Now I’ve made this error in both directions!]
We are organising a special themed session on language evolution at the 2012 Annual Meeting of the European Human Behaviour and Evolution Association, which is held in Durham, UK, 25th-28th March 2012 (http://www.dur.ac.uk/jeremy.kendal/EHBEA2012/Welcome.html). EHBEA is an excellent venue for interdisciplinary work on the cultural and biological evolution of human behaviour, including language. Given that EHBEA is running shortly after EVOLANG next year, we are happy for research that is targeted at EVOLANG to also be submitted here, although note that the audience for each is likely to be different.
If you would like to submit an abstract for consideration as part of this themed session, please follow the submission instructions on the EHBEA website, marking your abstract as for consideration in the language evolution special session, organised by Simon Kirby and Kenny Smith. Abstracts will be independently reviewed by the usual EHBEA reviewers, so bear that in mind when preparing your submission. The themed session will only run if sufficient abstracts are accepted – of course, papers on language evolution could be presented independently as standard EHBEA talks.
The deadline for submissions is November 25th.
PLEASE FORWARD THIS MESSAGE TO ANYONE WHO MIGHT BE INTERESTED!
Last week we had a lecture from Anvita Abbi on rare linguistic structures in Great Andamanese, a language spoken in the Andaman Islands. The indigenous populations of the Andaman Islands lived in isolation for tens of thousands of years until the 19th century, yet their languages still exhibit some common features of south-east Asian languages, such as retroflex consonants. This could be evidence for the migration route of humans from India to Australia. Indeed, recent genetic research has shown that the Andamanese are descendants of the first human migration from Africa in the Palaeolithic, though Abbi suggested that the linguistic evidence is also a strong marker of human migration and an “important repository of our shared human history and civilization”.
Although the similarities are fascinating for studies of cultural evolution, the rarity of some structures in Great Andamanese is even more intriguing.
Hello! The BBC are at it again and by ‘at it’ I mean talking about language evolution!
The latest episode of ‘Origins of Us’, a series about human evolution from an anthropological/archaeological angle, is on brains. The program is presented by Alice Roberts, who doesn’t do a bad job of discussing the issues surrounding the lack of direct fossil evidence for language. She discusses the anatomy used in speech, something Stephen Fry did not do in his program on the origins of language. We also get an excellent rendition of the cardinal vowels from Dr. Roberts! She also discusses the role of language in symbolic thought, and there is a wee bit at the end on cultural evolution.
The part of the program on language starts about 25 minutes in, but I’d suggest watching the whole thing as all aspects of the evolution of the brain are relevant to language evolution, and also, it’s bloody interesting.
After passing my final exams I feel that I can relax a bit and have the time to read a book again. So instead of reading a book that I need to read purely for ‘academic reasons’, I thought I’d pick one I’d thoroughly enjoy: James Hurford’s “The Origins of Grammar“, which clocks in at a whopping 808 pages.
I’m still reading the first chapter (which you can read for free here) but I thought I’d share some of his analyses of “Animal Syntax.”
Hurford’s general conclusion is that despite what you sometimes read in the popular press,
“No non-human has any semantically compositional syntax, where the form of the syntactic combination determines how the meanings of the parts combine to make the meaning of the whole.”
The crucial notion here is that of compositionality. Hurford argues that we can find animal calls and songs that are combinatorial, that is, songs and calls in which elements are put together according to some kind of rule or pattern. But what we do not find, he argues, are the kinds of combinations in which the elements put together each have a specified meaning and the whole song, call or communicative assembly “means something which is a reflection of the meanings of the parts.”
To illustrate this, Hurford cites the call system of putty-nosed monkeys (Arnold and Zuberbühler 2006). These monkeys have only two different call signals in their repertoire, a ‘pyow’-sound that ‘means’, roughly, ‘LEOPARD’; and a ‘hack’ sound that ‘means’, roughly, ‘EAGLE’.
We all take comfort in our ability to project into the future. Be it through arbitrary patterns in Spring Pouchong tea leaves, or making statistical inferences about the likelihood that it will rain tomorrow, our accumulation of knowledge about the future is based on continued attempts at attaining certainty: that is, we wish to know what tomorrow will bring. Yet the difference between benignly staring at tea leaves and using computer models to predict tomorrow’s weather is fairly apparent: the former relies on a completely spurious relationship between tea leaves and events in the future, whereas the latter utilises our knowledge of weather patterns and applies it to extrapolate from currently available data into the future. Put simply: if there are dense grey clouds in the sky, then it is likely we’ll get rain. Conversely, if tea leaves arrange themselves into the shape of a middle finger, it doesn’t mean you are going to be continually dicked over for the rest of your life. Although, as I’ll attempt to make clear below, these are differences of degree rather than absolutes.
So, how are we going to get from tea leaves to lingua francas? Well, the other evening I found myself watching Dr Nicholas Ostler give a talk on his new book, The Last Lingua Franca: English Until the Return of Babel. For those of you who aren’t familiar with Ostler, he’s a relatively well-known linguist, having written several successful books popularising socio-historical linguistics, and he first came to my attention through Razib Khan’s detailed review of Empires of the Word. Indeed, on the basis of Razib’s post, I was not surprised by the depth of knowledge expounded during the talk. On this note alone I’m probably going to buy the book, as the work certainly filters into my own interests in historical contact between languages and its consequences. However, as you can probably infer from the previous paragraph, there were some elements I was slightly less impressed with — and it is here where we get into the murky realms between tea leaves and knowledge-based inferences. But first, here is a quick summary of what I took away from the talk:
How many languages do you speak? This is actually a difficult question, because there’s no such thing as a language, as I argue in this video.
This is a video of a talk I gave as part of the Edinburgh University Linguistics & English Language Society’s Soap Vox lecture series. I argue that ‘languages’ are not discrete, monolithic, static entities – they are fuzzy, emergent, complex, dynamic, context-sensitive categories. I don’t think anyone would actually disagree with this, yet some models of language change and evolution still include representations of a ‘language’ where the learner must ‘pick’ a language to speak, rather than picking variants and allowing higher-level categories like languages to emerge.
In this lecture I argue that languages shouldn’t be modelled as discrete, unchanging things by demonstrating that there’s no consistent, valid way of measuring the number of languages that a person speaks.
The slides aren’t always in view (it improves as the lecture goes on), but I’ll try and write this up as a series of posts soon.
A paper by Gell-Mann & Ruhlen in PNAS this week conducts a phylogenetic analysis of word order in languages and concludes that SOV is the most likely word order of the ancestral language. The main conclusions from the analysis are:
(i) The word order in the ancestral language was SOV.
(ii) Except for cases of diffusion, the direction of syntactic change, when it occurs, has been for the most part SOV > SVO and, beyond that, SVO > VSO/VOS with a subsequent reversion to SVO occurring occasionally. Reversion to SOV occurs only through diffusion.
(iii) Diffusion, although important, is not the dominant process in the evolution of word order.
(iv) The two extremely rare word orders (OVS and OSV) derive directly from SOV.
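The claimed (non-diffusion) pathways in (ii) and (iv) can be encoded as a small directed graph, and a quick reachability check (my own sketch, not from the paper) confirms that every other word order is derivable from ancestral SOV under these rules:

```python
# Directions of non-diffusion word-order change claimed by Gell-Mann & Ruhlen,
# encoded as a directed graph (reversion to SOV, being diffusion-only, gets no edge).
transitions = {
    "SOV": {"SVO", "OVS", "OSV"},  # (ii) SOV > SVO; (iv) rare orders derive from SOV
    "SVO": {"VSO", "VOS"},         # (ii) SVO > VSO/VOS
    "VSO": {"SVO"},                # occasional reversion to SVO
    "VOS": {"SVO"},
    "OVS": set(),
    "OSV": set(),
}

def reachable(start):
    """Return every word order reachable from `start` via the claimed changes."""
    seen, stack = set(), [start]
    while stack:
        for nxt in transitions[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("SOV")))  # all five other orders are reachable from SOV
```

Note that SOV itself is reachable from nowhere in this graph, which is exactly claim (ii): reversion to SOV occurs only through diffusion.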
This analysis agrees with Luke Maurits’ work on function and Uniform Information Density (blogged about here).
A new paper in PLoS ONE has used novel research methods to look at whether humans communicate the meaning of a word more successfully using spoken communication alone, or whether the use of gesture also helps. This research is pertinent to the field of language evolution because it might help us understand whether spoken language co-evolved with gesture, as well as helping us understand how language is processed in the brain.
This new study builds on previous research in this area by using avatars in a virtual reality setting. Participants were either in control of the movements of their avatar, or not.
The study found that participants were much more successful at communicating concepts when the speaker was able to use their own gestures alongside spoken language. The body language of the listener also affected success at the task, showing the need for nonverbal feedback from the listener.
It’s worth noting that the primary purpose of this research wasn’t to find out whether gesture is helpful in communication (though that is certainly interesting and worthwhile), but rather to test whether virtual reality is fruitful in these kinds of experiments.
The press release discusses some of the problems with using avatars:
The researchers note that there are limitations to nonverbal communication in virtual reality environments. First, they found that participants move much less in a virtual environment than they do in the “real world.” They also found that the perspective of the camera in the virtual environment affected the results.
Lead author, Dr. Trevor Dodds maintains, “this research demonstrates that virtual reality technology can help us gain a greater understanding of the role of body gestures in communication. We show that body gestures carry extra information when communicating the meaning of words. Additionally, with virtual reality technology we have learned that body gestures from both the speaker and listener contribute to the successful communication of the meaning of words. These findings are also important for the development of virtual environments, with applications including medical training, urban planning, entertainment and telecommunication.”
The work was led by Dr. Trevor Dodds at the Max Planck Institute for Biological Cybernetics in Germany.