Category Archives: Research Blogging

entangled_bank

On the entangled banks of representations (pt.1)

Lately, I took time out to read through a few papers I’d put on the back burner until after my first-year review was completed. Now that that’s out of the way, I found myself looking through Berwick et al.‘s review on Evolution, brain, and the nature of language. Much of the paper manages the impressive feat of making it sound as if the field has arrived at a consensus in areas that are still hotly debated. Still, what I’m interested in for this post is something that is often considered far less controversial than it is: the notion of mental representations. As an example, Berwick et al. posit that mind/brain-based computations construct mental syntactic and conceptual-intentional representations (internalization), with internal linguistic representations then being mapped onto their ordered output form (externalization). From these premises, the authors arrive at the reasonable enough assumption that language is an instrument of thought first, with communication taking a secondary role:

In marked contrast, linear sequential order does not seem to enter into the computations that construct mental conceptual-intentional representations, what we call ‘internalization’… If correct, this calls for a revision of the traditional Aristotelian notion: language is meaning with sound, not sound with meaning. One key implication is that communication, an element of externalization, is an ancillary aspect of language, not its key function, as maintained by what is perhaps a majority of scholars… Rather, language serves primarily as an internal ‘instrument of thought’.

If we take their conclusions for granted, and this is something I’m far from convinced by, there is still the question of whether we even need representations in the first place. If you were to read the majority of cognitive science, the answer is a fairly straightforward one: yes, of course we need mental representations, even if there’s no solid definition of what they are and the form they take in our brain. In fact, the notion of representations has become a major theoretical tenet of modern cognitive science, as is evident in the way much of the field no longer treats it as a point of contention. The reason for this unquestioning acceptance has its roots in the notion that mental representations enrich an impoverished stimulus: that is, if an organism faces incomplete data, then it follows that it needs mental representations to fill in the gaps.



Ways To Protolanguage 3 Conference

Today is the first day of the “Ways to Protolanguage 3” conference, which takes place on 25–26 May in Wrocław, Poland. The plenary speakers are Robin Dunbar, Josep Call, and Peter Gärdenfors.

Both Hannah and I are at the conference, and we’re also live-tweeting about it using the hashtag #protolang3.

Hannah’s just given her talk:

Jack J. Wilson, Hannah Little (University of Leeds, UK; Vrije Universiteit Brussel, Belgium) – Emerging languages in esoteric and exoteric niches: evidence from rural sign languages (abstract here)

And I’m due tomorrow:

Michael Pleyer (Heidelberg University, Germany) – Cooperation and constructions: looking at the evolution of language from a usage-based and construction grammar perspective (abstract here)

The Programme can be found here: (Day 1 / Day 2)


Sticking the tongue out: Early imitation in infants

Famous picture of Albert Einstein sticking out his tongue.

Albert Einstein sticking his tongue out at a neonate in an attempt to test its imitation of tongue protrusion.

The nativism-empiricism debate haunts the fields of language acquisition and evolution on more than one level. How much of children’s social and cognitive abilities has to be present at birth, and how much is acquired through experience, and therefore malleable? Classically, this debate revolves around the poverty of the stimulus: how much does a child have to take for granted in her environment, and how much can she learn from the input?

Research into imitation has its own version of the poverty of stimulus, the correspondence problem. The correspondence problem can be summed up as follows: when you are imitating someone, you need to know which parts of your body map onto the body of the person you’re trying to imitate. If they wiggle their finger, you can establish correspondence by noticing that your hand looks similar to theirs, and that you can do the same movement with it, too. But this is much trickier with parts of your body that are out of your sight. If you want to imitate someone sticking their tongue out, you first have to realise that you have a tongue, too, and how you can move it in such a way that it matches your partner’s movements.



The New Pluralistic Approach

There has been a lot of talk round these parts recently about the merits of pluralistic approaches to problems in language evolution, and about the dangers of assigning too much explanatory power to statistical correlations in isolation from other forms of evidence, such as cultural learning experiments. Sean and James recently published a paper about this here, which includes some commentary on Hay & Bauer (2007), who find that speaker population size and phoneme inventory size correlate (the more speakers a language has, the bigger its phoneme inventory is). James has blogged about this extensively here. More recently, Moran, McCloy & Wright presented a critical analysis of Hay & Bauer’s (2007) findings here, along with a statistical analysis of their own which uses more languages than Hay & Bauer (2007) and finds little to no correlation between speaker population and various measures of the phonological system. I hope James, as the resident expert, will blog about this.

As I’ve just mentioned, doing further statistical analysis is one good way of disputing or confirming the results of large scale statistical studies. But turning to experimental evidence is also a good way to back up the findings of statistical results and to tease out patterns of causation. I discuss this briefly here.

Recently, I was reading Selten & Warglien (2007) (mentioned by James here and covered by John Hawks here), a study of how simple languages emerge within a coordination task with no initial shared language. The experiment uses pairwise interactions in which participants had to refer to figures which could be distinguished by features on three levels: outer shape, inner shape and colour (see picture). Participants were given a code with a limited number of letters which they were to use to communicate with one another. However, the use of letters within this code had a cost within the language game the participants were playing, so the fewer letters they used, the higher their score. Also, the more communicatively successful they were, the higher their score.

[Selten & Warglien (2007), Figure 2]

The study was primarily interested in what enhanced the emergence of structure in this code via the communication game. The authors looked at the effects of two variables: the number of letters available, and the variability of the set of figures. I am only going to discuss the effects of the first variable here. Selten & Warglien (2007) start off with an experiment where only two (and then three) letters were available, which showed very little convergence to a common code. A common code is defined as one where the signals for all figures agree between the two participants. However, when given a larger inventory of letters to play with, participants were much more successful at creating a common code. This is not surprising, as more symbols permit a higher degree of cost efficiency within the language game: you can use more distinct, shorter expressions. Selten & Warglien (2007) also make the point that the human capability to produce a large variety of phonetic signals seems to be at the root of the emergence of most linguistic structure, because if you only have a small inventory of individual units, you have to rely more on positional structure. Positional systems, like the Arabic number notation, are more likely to be invented rapidly than to be the product of slow emergence via cultural evolution, but can be easily used once they have emerged.
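The inventory-size effect can be made concrete with a quick back-of-the-envelope calculation (my own sketch, not from the paper; treating 16 figures as an illustrative choice): over an alphabet of k letters there are k + k² + … + k^L distinct codes of length at most L, so a larger inventory lets players give every figure a short, distinct, and therefore cheap name.

```python
# Toy combinatorics behind the inventory-size effect (illustrative only):
# with more letters available, fewer positions are needed to give every
# figure its own cheap, distinct code.

def distinct_codes(alphabet_size, max_length):
    """Number of distinct non-empty codes of length <= max_length."""
    return sum(alphabet_size ** n for n in range(1, max_length + 1))

def shortest_sufficient_length(alphabet_size, n_figures):
    """Smallest maximum code length giving each figure a distinct code."""
    length = 1
    while distinct_codes(alphabet_size, length) < n_figures:
        length += 1
    return length

for k in (2, 3, 8):
    print(k, shortest_sufficient_length(k, 16))
# -> 2 4   (two letters: codes up to length 4 needed for 16 figures)
#    3 3
#    8 2
```

With only two or three letters, some figures are forced to carry long, costly codes, which fits the poor convergence Selten & Warglien observed in their small-inventory condition.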

This is all very interesting in its own right, but the reason I brought it up in this post is that Selten & Warglien (2007) have shown that you can experimentally explore the effects of inventory size on an artificial language in a laboratory setting. I know that the natural direction of causation is to assume that demographic structure (e.g. the size of a population) affects linguistic structure (e.g. the size of the phoneme inventory), but it might be possible to see whether a common code can be more easily reached within a small language community using only a small number of phonemes than within a larger speaker community. I’m also not sure how one might create an experimental proxy for population size in an experiment such as this (perhaps repeated interaction between the same participants compared with interaction within changing pairs). It might also be possible to look at the effects that inventory size can have on other linguistic features that have been hypothesised to correlate with population size, e.g. how regular the compositional structure of an emerging language is given different inventory sizes.

References

Hay, J., & Bauer, L. (2007). Phoneme inventory size and population size. Language, 83(2), 388-400. DOI: 10.1353/lan.2007.0071

Roberts, S., & Winters, J. (2012). Social Structure and Language Structure: the New Nomothetic Approach. Psychology of Language and Communication, 16(2), 79-183. DOI: 10.2478/v10057-012-0008-6

Selten, R., & Warglien, M. (2007). The emergence of simple languages in an experimental coordination game. Proceedings of the National Academy of Sciences, 104(18), 7361-7366. DOI: 10.1073/pnas.0702077104


Is ambiguity dysfunctional for communicatively efficient systems?

Based on yesterday’s post, where I argued that degeneracy emerges as a design solution for ambiguity pressures, a Reddit commenter pointed me to a cool paper by Piantadosi et al. (2012) that contained the following quote:

The natural approach has always been: Is [language] well designed for use, understood typically as use for communication? I think that’s the wrong question. The use of language for communication might turn out to be a kind of epiphenomenon… If you want to make sure that we never misunderstand one another, for that purpose language is not well designed, because you have such properties as ambiguity. If we want to have the property that the things that we usually would like to say come out short and simple, well, it probably doesn’t have that property (Chomsky, 2002: 107).

The paper itself argues against Chomsky’s position, claiming that ambiguity allows for more efficient communication systems. First, looking at ambiguity from the perspective of coding theory, Piantadosi et al. argue that any good communication system will leave out information already in the context (assuming the context is informative about the intended meaning). Their other point, which they test through a corpus analysis of English, Dutch and German, is that as long as there are some ambiguities the context can resolve, ambiguity will be used to make communication easier. In short, ambiguity emerges as a result of tradeoffs between ease of production and ease of comprehension, with communication systems favouring hearer inference over speaker effort:

The essential asymmetry is: inference is cheap, articulation expensive, and thus the design requirements are for a system that maximizes inference. (Hence … linguistic coding is to be thought of less like definitive content and more like interpretive clue.) (Levinson, 2000: 29).

If this asymmetry exists, and hearers are good at disambiguating in context, then a direct result of such a tradeoff should be that linguistic units which require less effort should be more ambiguous. This is what they found in results from their corpus analysis of word length, word frequency and phonotactic probability:

We tested predictions of this theory, showing that words and syllables which are more efficient are preferentially re-used in language through ambiguity, allowing for greater ease overall. Our regression on homophones, polysemous words, and syllables – though similar – are theoretically and statistically independent. We therefore interpret positive results in each as strong evidence for the view that ambiguity exists for reasons of communicative efficiency (Piantadosi et al., 2012: 288).
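The tradeoff can be illustrated with a toy calculation of my own (made-up word forms and frequencies, not the paper’s data): if context fully disambiguates, a lexicon that reuses one short form for two meanings costs the speaker less on average than one that keeps the meanings apart with longer, distinct forms.

```python
# Toy version of the coding-theory argument (my numbers, not the paper's):
# speaker effort is crudely measured as word length, and we assume context
# always tells the hearer which meaning is intended.

def expected_effort(lexicon, meaning_probs):
    """Average word length, weighting each meaning by how often it's used."""
    return sum(p * len(lexicon[m]) for m, p in meaning_probs.items())

meanings = {"financial_institution": 0.5, "river_edge": 0.5}

# Unambiguous lexicon: a distinct (longer) word per meaning.
unambiguous = {"financial_institution": "bankhouse", "river_edge": "riverbank"}
# Ambiguous lexicon: one short word reused, disambiguated by context.
ambiguous = {"financial_institution": "bank", "river_edge": "bank"}

print(expected_effort(unambiguous, meanings))  # 9.0
print(expected_effort(ambiguous, meanings))    # 4.0
```

On these toy numbers the ambiguous lexicon more than halves average speaker effort, while (by assumption) the hearer loses nothing, which is exactly the asymmetry Levinson describes.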

At some point, I’d like to offer a more comprehensive overview of this paper, but that will have to wait until I’ve read more of the literature. Until then, here are some graphs of the results from their paper:

Continue reading


Arguments against a “Prometheus” scenario

The Biological Origin of Linguistic Diversity:

From some of the minds that brought you Chater et al. (2009) comes a new and exciting paper in PLoS ONE.

Chater et al. (2009) used a computational model to argue that biological adaptation for language is implausible, because language changes too rapidly through cultural evolution for natural selection to be able to act on it.

This new paper, Baronchelli et al. (2012), uses similar models to argue two points. First, if language changes quickly, “neutral genes” are selected for, because biological evolution cannot act upon linguistic features that are too much of a “moving target”. Second, if language changes slowly enough for linguistic features to be coded in the genome, then two isolated subpopulations who originally spoke the same language will diverge biologically, through genetic assimilation, after they linguistically diverge – which they inevitably will.

The paper argues that because we can observe so much diversity in the world’s languages, and yet children can acquire any language they are immersed in, only the model which supports the selection of “neutral genes” is plausible. Because of this, a hypothesis in which domain-general cognitive abilities facilitate language is much more plausible than one positing a biologically specified, special-purpose language system.
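A minimal toy simulation (my own simplification, not Baronchelli et al.’s actual model) illustrates the “moving target” effect: a binary “gene” is selected to match a binary linguistic feature, and the faster the feature flips through cultural change, the less selection can track it.

```python
import random

# Toy "moving target" model (my simplification, not the paper's model):
# agents carry a binary gene, and matching the current state of a binary
# linguistic feature doubles an agent's chance of reproducing.

def simulate(flip_prob, pop_size=200, generations=300, seed=1):
    """Return the final frequency of allele 1 in the population."""
    rng = random.Random(seed)
    genes = [rng.randint(0, 1) for _ in range(pop_size)]
    language = 0  # current state of the linguistic feature
    for _ in range(generations):
        if rng.random() < flip_prob:  # cultural change: the target flips
            language = 1 - language
        # fitness-proportional reproduction
        weights = [2 if g == language else 1 for g in genes]
        genes = rng.choices(genes, weights=weights, k=pop_size)
    return sum(genes) / pop_size

print(simulate(flip_prob=0.0))  # stable feature: the matching allele fixes
print(simulate(flip_prob=0.5))  # fast change: neither allele is tracked
```

With a stable feature the matching allele sweeps to fixation; with frequent flips the selective advantage keeps switching sides, so no language-tracking allele is consistently favoured, in the spirit of the paper’s argument for “neutral genes”.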

A Prometheus scenario:

Baronchelli et al. (2012) use the results of their models to argue against what they call a “Prometheus” scenario. This is a scenario in which “a single mutation (or very few) gave rise to the language faculty in an early human ancestor, whose descendants then dispersed across the globe.”

I wonder if “Prometheus scenario” is an established term in this context, because I can’t find much by googling it. It seems an odd term to use, given that Prometheus was the Titan who “stole” fire and other cultural tools from the gods for humans to use. Since Prometheus was a Titan, he couldn’t pass his genes on to humans; rather, the beginning and proliferation of fire and civilization happened through a process of learning and cultural transmission. I know this is just meant to be an analogy, and presumably the Promethean aspect of it alludes to language appearing suddenly, but I can’t help but feel that the term “Prometheus scenario” should be given to the hypothesis that language is the result of cultural evolution acting upon domain-general processes, rather than to one which supports a genetically defined language faculty in early humans.

References

Baronchelli, A., Chater, N., Pastor-Satorras, R., & Christiansen, M. H. (2012). The biological origin of linguistic diversity. PLoS ONE, 7(10). PMID: 23118922

Chater, N., Reali, F., & Christiansen, M. H. (2009). Restrictions on biological adaptation in language evolution. Proceedings of the National Academy of Sciences, 106(4), 1015-1020.


Taking the “icon” out of Emoticon

For some years now, Simon Garrod and Nicolas Fay, among others, have been looking at the emergence of arbitrary, symbolic graphical signs out of iconic ones, using communication experiments which simulate repeated use of a symbol.

Garrod et al. (2007) use a ‘Pictionary’-style paradigm in which participants graphically depict one of 16 concepts, without using words, so that their partner can identify it. This process is repeated to see whether repeated usage leads the pair to rely on their shared memory of the representation rather than the representation itself, to the point where an iconic depiction of an item could become an arbitrary, symbolic one.

Garrod et al. (2007) showed that simple repetition is not enough for an arbitrary system to emerge: feedback and interaction between communicators are required. The amount of interaction afforded to participants was shown to affect the emergence of signs through a process of grounding. The signs that emerged from this process of interaction were shown to be arbitrary, as participants not directly involved in the interaction had trouble interpreting them.

The experimental evidence, then, shows that icons do indeed evolve into symbols as a consequence of the shared memory of the representation rather than the representation itself. Which is all well and good, but can this process be seen in the real world? YES!

I was talking to a friend on Skype and he started typing repeated closing round brackets:

))))))))

At first I just thought he had some problem with keys sticking on his keyboard, but after he did it two or three times I finally asked. He explained that they were smilies. Upon further questioning, it seems that this has become a norm in Russian internet chat: emoticons have lost their eyes – presumably through the same process as Garrod et al. (2007) showed above.


They have also created an intensification system based on this slightly more arbitrary symbol, whereby the more brackets are repeated, the happier or sadder you are. Among those in the UK and America, the need to intensify an emoticon has stayed well within the realms of iconicity, with :D meaning “very happy” and D: meaning “oh God, WHHHHHYYYYY”. Japan has a completely different emoticon system altogether, which focuses on the eyes: ^_^ meaning happy and u_u meaning sad. Some have argued that this is because in Japan people tend to look to the eyes for emotional cues, whereas Americans tend to look to the mouth, as backed up by SCIENCE.

I’d be interested to see if norms have been established in other countries, either iconic or not.

References

Garrod, S., Fay, N., Lee, J., Oberlander, J., & MacLeod, T. (2007). Foundations of representation: where might graphical symbol systems come from? Cognitive Science, 31(6), 961-987. PMID: 21635324

Yuki, M., Maddux, W., & Masuda, T. (2007). Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. Journal of Experimental Social Psychology, 43(2), 303-311. DOI: 10.1016/j.jesp.2006.02.004


Phonemic Diversity and Vanishing Phonemes: Looking for Alternative Hypotheses


In my last post on the vanishing phonemes debate, I briefly mentioned Atkinson’s two major theoretical claims: (i) that there is a link between phoneme inventory size, mechanisms of cultural transmission, and the underlying demographic processes supporting these changes; and (ii) that we could build a serial founder effect (SFE) model out of Africa based on phoneme inventory size. I also made the point that more work was needed on the details of the first claim before we went ahead and tested the second. To me at least, it seems slightly odd to assume the first model is correct, without really going to any great lengths to try to disprove it, and then go ahead and commit the statistical version of the narrative fallacy – you find a model that fits the past and use it to tell a story. Still, I guess the best way to get into the New York Times is to come up with a human origins story, and leave the boring phonemes as a peripheral detail.

Unrealistic Assumptions?

One problem with these unrealistic assumptions is that they lead us to believe there is a linear relationship between a linguistic variable (e.g. phoneme inventory size) and a socio-demographic variable (e.g. population size). The reality is far more complicated. For instance, Atkinson (2011) and Lupyan & Dale (2010) both consider population size as a variable. Where the two differ is in their theoretical rationale for applying this variable: whereas the former is interested in how population dynamics impact upon the distribution of variation, the latter sees social structure as a substantial feature of the linguistic environment in which a language adapts. It is sometimes useful to tell simple stories, and to abstract away from the messy complexities of life, yet for the relationships being discussed here, I think we’re still missing some vital points.



Never mind language, emotions are in a category of their own

A new paper in the journal Emotion presents research with implications for the evolution of language and emotion, and for theories of linguistic relativity. The paper, entitled ‘Categorical Perception of Emotional Facial Expressions Does Not Require Lexical Categories’, looks at whether our perception of other people’s emotions depends on the language we speak or is universal. The results come from the Max Planck Institutes for Psycholinguistics and for Evolutionary Anthropology.

Human facial expressions are perceived categorically, and this has led to hypotheses that categorical perception is caused by linguistic mechanisms.

The paper presents a study which compared native speakers of German with native speakers of Yucatec Maya, a language which has no labels distinguishing disgust from anger. This was backed up by a free naming task in which speakers of German, but not of Yucatec Maya, made lexical distinctions between disgust and anger.

The study used a match-to-sample task of facial expressions, and both speakers of German and speakers of Yucatec Maya perceived emotional facial expressions of disgust and anger, among other emotions, categorically. The effect was just as significant across the language groups, and across emotion continua (see figure 1), regardless of lexical distinctions.

The results show that the perception of emotional signals is not the result of linguistic mechanisms creating different lexical labels, but instead provide evidence that emotions are subject to their own biologically evolved mechanisms. Sorry, Whorfians!

References

Sauter, D. A., Le Guen, O., & Haun, D. B. (2011). Categorical perception of emotional facial expressions does not require lexical categories. Emotion. PMID: 22004379

Does Language Shape Thought? Different Manifestations of the Idea of Linguistic Relativity (I)

Does the language we speak influence, or even shape, the way we think? Last December, there was an interesting debate over at The Economist website, with Lera Boroditsky defending the motion and Language Log’s Mark Liberman against it (both of whom, IMO, did a very good job). The result of the online poll was quite clear: 78% agreed with the motion, while 22% disagreed.

There are, however, three main problems with this way of framing the question: first, it’s not really clear what ‘language’ is; second, the same goes for ‘thought’; and third, there are many, many ways in which ‘influencing’ and ‘shaping’ something can be conceptualized.
In this post I want to focus on the third problem and present a very useful classification system for hypotheses about linguistic relativity, outlined in an article by Phillip Wolff and Kevin J. Holmes published in the current issue of Wiley Interdisciplinary Reviews: Cognitive Science.
