Defining iconicity and its repercussions in language evolution

There was an awful lot of talk about iconicity at this year’s EvoLang conference (as well as in previous years), about its ability to bootstrap communication systems and solve symbol grounding problems, and this has led to talk of its possible role in the emergence of human language. Some work has been more sceptical than others about the role of iconicity, so I thought it would be useful to do a wee overview of some of the talks I saw in relation to how different presenters define iconicity (though this is by no stretch a comprehensive overview).

As with almost everything, how people define iconicity differs across studies. In a recent paper, Monaghan, Shillcock, Christiansen & Kirby (2014) identify two forms of iconicity in language: absolute iconicity and relative iconicity. Absolute iconicity is where some linguistic feature imitates its referent, e.g. onomatopoeia or gestural pantomime. Relative iconicity is where there is a correlation between similar signals and similar meanings, so that the signal-meaning mapping is systematic. Relative iconicity is usually only clear when the whole meaning and signal spaces can be observed together and systematic relations between them identified.
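Relative iconicity of this kind can actually be quantified: if we treat signals and meanings as points in their respective spaces, a simple Mantel-style statistic is the correlation between pairwise signal distances and pairwise meaning distances. Here is a minimal sketch in Python, using made-up toy vectors (not data from any of the studies discussed):

```python
import numpy as np
from itertools import combinations

def relative_iconicity(signals, meanings):
    """Correlate pairwise signal distances with pairwise meaning
    distances: a high correlation means similar signals tend to
    map to similar meanings (relative iconicity)."""
    pairs = list(combinations(range(len(signals)), 2))
    sig_d = [np.linalg.norm(signals[i] - signals[j]) for i, j in pairs]
    mean_d = [np.linalg.norm(meanings[i] - meanings[j]) for i, j in pairs]
    return np.corrcoef(sig_d, mean_d)[0, 1]

# Toy system in which signal distance exactly mirrors meaning distance
meanings = np.array([[0.0], [1.0], [2.0], [3.0]])
signals = meanings * 2.0          # a perfectly iconic mapping
print(relative_iconicity(signals, meanings))   # ≈ 1.0
```

A real analysis would permute the signal-meaning pairing many times to test whether the observed correlation beats chance.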

Liz Irvine gave a talk on the core assumption that iconicity played a big role in bootstrapping language. She teases apart the distinction above by calling absolute iconicity “diagrammatic iconicity” and relative iconicity “imagic iconicity”. Imagic iconicity can be broken down even further and measured on a continuum, either in terms of how signals are used and interpreted by language users, or simply by objectively looking at meaning-signal mappings, where signs can be non-arbitrary but not necessarily treated as iconic by language users. Irvine claims that this distinction is important in assessing the role of iconicity in the emergence of language. She argues that diagrammatic or absolute iconicity may aid adults in understanding new signs, but it doesn’t necessarily aid early language learning in infants. Imagic, or relative, iconicity is a better candidate to aid language acquisition and language emergence, since language users need not interpret the signal-meaning mappings explicitly as iconic, even though they are non-arbitrary.

Irvine briefly discusses the claim that ape gestures are not iconic from the perspective of their users. Marcus Perlman, Nathaniel Clark and Joanne A. Tanner presented work on whether iconicity exists in ape gesture. They define iconic gestures as those which in any way resemble or depict their meanings, but break these gestures down into pantomimed actions, directive touches and visible directives, all of which are arguably examples of absolute iconicity. Following Irvine’s arguments, this broad definition of iconicity may not be so useful when drawing up scenarios for language evolution, and the authors try to provide a more detailed and nuanced analysis drawing on the interpretation of signs from the ape’s perspective. Existing theories of iconicity in ape gesture maintain that any iconicity is an artefact of a gesture’s development through inheritance and ritualisation. However, the authors argue that these theories do not account for the variability and creativity seen in iconic ape gestures, which may help frame iconicity from the perspective of its user.

It’s difficult to analyse iconicity from an ape’s perspective; however, it should be much easier to get at how humans perceive and interpret different types of iconicity via experiments. I think experimental design can help get at this, but so can analysis from a user perspective via post-experimental questionnaires, or even post-experimental experiments (where naive participants are asked to rate the degree to which a sign represents a meaning).

Gareth Roberts and Bruno Galantucci presented a study testing the hypothesis that a modality’s capacity for iconicity may inhibit the emergence of combinatorial structure (phonological patterning) in a system. This hypothesis may explain why emerging sign languages, which have more capacity for iconicity than spoken languages, can have fully expressive systems without a level of combinatorial structure (see here). They used the now famous paradigm from Galantucci’s 2005 experiment here. They asked participants to communicate a variety of meanings: either lines, which could be represented through absolute iconicity with the modality provided, or circles in various shades of green, which could not be iconically represented. The experiment showed that, indeed, the signals used for circles were made up of combinatorial elements, whereas the lines retained iconicity throughout the experiment. This is a great experiment and I really like it; however, I worry that it only looks at two extreme ends of the iconicity continuum and has not considered the effects of relative iconicity, or nuances of signal-meaning relations. In de Boer and Verhoef (2012), a mathematical model shows that shared topology between signal and meaning spaces will generate an iconic system with signal-meaning mapping, but mismatched topologies will generate systems with conventionalised structure. I think it is important that experimental work now looks into more subtle differences between signal and meaning spaces and the effects these differences have on structure in emerging linguistic systems in the lab, and also how participants’ interpretation of any iconicity or structure in a system affects the nature of that iconicity or structure. I’m currently running some experiments exploring this myself, so watch this space!


Where possible, I’ve linked to studies as I’ve cited them.

All other studies cited are included in Erica A. Cartmill, Seán Roberts, Heidi Lyn & Hannah Cornish (eds.), The Evolution of Language: Proceedings of the 10th International Conference (EvoLang 10). It’s only £87.67 on Amazon (but it may be wiser to email the authors if you don’t have a friend with a copy).

How to speak Neanderthal

This week there’s an article in the New Scientist about exploring Neandertal language, by Dan Dediu, Scott Moisik and me.  It discusses the idea that if Neandertals spoke modern languages, and if there was cultural contact between us and them, then ancient human languages may have been affected by Neandertal language (borrowing, contact effects, etc.).  If this happened, then we may be able to detect these effects in today’s languages.  The article and a recent blog post explain the idea, but I’ll cover some of the more technical stuff here.

Obviously, this is a very controversial idea:  the time scale is much longer than usual for linguistic reconstruction and we have no direct evidence that Neandertals spoke complex languages.  We’re definitely in for some flak.  So, this post briefly covers what we actually did.

Our EvoLang paper (and a full paper in prep) asks whether one necessary condition for coming anywhere near providing evidence for this idea holds:  are there differences between current languages that were in contact with Neandertals (outside of Africa) and languages that were not (inside Africa)?  This has been addressed before, for different reasons (Cysouw & Comrie, 2009: pdf), but with a smaller sample of data.

Can we detect traces of contact with Neandertals in present day languages? One condition for this is there being statistical differences between contact and non-contact languages.

Using data from WALS, we ran a few tests:

  1. STRUCTURE analysis:  what is the most likely number of ‘founder’ populations that gives rise to the current diversity we see in African and Eurasian languages?  Do the estimated founder populations align with African and non-African languages?
  2. K-means clustering:  does a ‘natural’ statistical division between the world’s languages reflect a division between African and non-African languages? (is it better than chance and better than other continents?  Also run on phonetic data from PHOIBLE and lexical data from the ASJP)
  3. Weighted multidimensional scaling: If we compress WALS to a few dimensions, does the first dimension reflect a distinction between African and non-African languages?
  4. Phylogenetic reconstruction:  We reconstruct the cultural evolution of present-day language families to see if African and non-African languages have different cultural evolutionary biases (e.g. more likely to move towards or away from particular traits).  We used 3 phylogenies (WALS, Ethnologue, Glottolog), 3 branch length scaling assumptions (Grafen’s method, NNLS and UPGMA) and 3 methods of ancestral state reconstruction (Maximum parsimony, Maximum likelihood (BayesTraits) and Maximum likelihood (APE)).  We searched for features that have opposing biases in African and non-African languages that are bigger than 95% of all comparisons and are robust across all assumptions.
  5. Support Vector Machine learning:  We trained a Support Vector Machine (a supervised machine learning algorithm) to tell the difference between African and non-African languages.  We assessed its performance on unseen data, extracted the most decisive linguistic features for making the distinction, and estimated the number of features needed to get good results.
  6. Binary classification trees: This algorithm finds linguistic features to divide the data into sub-sets in a way that maximises the ease of differentiating African and non-African languages.
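To give a flavour of what test 5 involves, here’s a toy sketch (not our actual code) using scikit-learn: simulate binary typological features for two groups of “languages” whose feature probabilities differ slightly, train a linear SVM, and score it on held-out data. The feature matrix, group sizes and probabilities are all invented for illustration; the real analysis used WALS features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_feats = 60

# Two groups whose per-feature probabilities differ by a small amount
p_group_a = rng.uniform(0.3, 0.7, n_feats)
p_group_b = np.clip(p_group_a + rng.uniform(-0.2, 0.2, n_feats), 0, 1)

# 100 simulated "languages" per group, each a binary feature vector
X = np.vstack([rng.random((100, n_feats)) < p_group_a,
               rng.random((100, n_feats)) < p_group_b]).astype(int)
y = np.array([0] * 100 + [1] * 100)

# Train on 70% of the data, evaluate on the unseen 30%
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The same held-out-accuracy logic applies whatever the classifier; the worry raised below about needing many variables corresponds here to how accuracy degrades as `n_feats` shrinks.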

Results of a multidimensional scaling analysis of WALS, with African and non-African languages grouped by bag plots. The results differentiate African and non-African languages better than chance (p < 0.001) and better than other continent pairs (p = 0.004), but NOT better than 95% of linguistic variables (p = 0.06).

The detailed results will appear in our paper, but here’s what we conclude:

  • Some of the tests result in positive answers.  For example, the support vector machine analysis could differentiate between African and non-African typologies with 93% accuracy.  However, the algorithm needs at least 50 linguistic variables to make this distinction, so it’s unclear whether it’s picking up on actual differences, or just gaps in the data.
  • While some tests passed, our criterion was that ALL of the tests should pass for us to be at all confident of a statistical difference between African and non-African languages.  Some tests fail, so we can’t support this.
  • However, most of the problems we ran into were due to a lack of data.  We could get better estimates if we had more typological data of better quality from existing languages.  Another problem was implicational universals  – particular typological variables are correlated because they affect each other (e.g. verb-object order and prepositions/postpositions), causing patterns in the world’s languages that are confounded with geographic areas.
  • There’s a bigger question of whether, in theory, we can tell the difference between drift, contact effects, areal effects and language death.  Contact with Neandertals may just be too far into the past, with too many human languages dying in the meantime, to make this distinction possible.

So, our conclusion is that any attempt to reconstruct Neandertal languages will fail with the current data and theory we have.  Not surprising, really.  The interesting thing, for me, is that we actually have methods that can give us quantitative answers about this idea, and the answer might change as we document more languages and develop theories about historical change and contact.  As Chris Knight described our EvoLang presentation, this is one of my “most exciting and least conclusive” studies.

Skewed frequencies in phonology. Data from Fry (1947), based on an analysis of 17,000 sounds of transcribed British English text; cited in Taylor (2012: 162f.). “Token frequencies refer to the occurrences of the sounds in the text comprising the corpus; type frequencies are the number of occurrences in the word types in the text.”

The Myth of Language Universals at Birth

[This is a guest post by Stefan Hartmann]


“Chomsky still rocks!” This comment on Twitter refers to a recent paper in PNAS by David M. Gómez et al. entitled “Language Universals at Birth”. Indeed, the question Gómez et al. address is one of the most hotly debated questions in linguistics: Does children’s language learning draw on innate capacities that evolved specifically for linguistic purposes – or rather on domain-general skills and capabilities?

Lbifs, Blifs, and Brains

Gómez and his colleagues investigate these questions by studying how children respond to different syllable structures:

It is well known that across languages, certain structures are preferred to others. For example, syllables like blif are preferred to syllables like bdif and lbif. But whether such regularities reflect strictly historical processes, production pressures, or universal linguistic principles is a matter of much debate. To address this question, we examined whether some precursors of these preferences are already present early in life. The brain responses of newborns show that, despite having little to no linguistic experience, they reacted to syllables like blif, bdif, and lbif in a manner consistent with adults’ patterns of preferences. We conjecture that this early, possibly universal, bias helps shaping language acquisition.

More specifically, they assume a restriction on syllable structure known as the Sonority Sequencing Principle (SSP), which has been proposed as “a putatively universal constraint” (p. 5837). According to this principle, “syllables maximize the sonority distance from their margins to their nucleus”. For example, in /blif/, /b/ is less sonorous than /l/, which is in turn less sonorous than the vowel /i/, which constitutes the syllable’s nucleus. In /lbif/, by contrast, there is a sonority fall, which is why this syllable is extremely ill-formed according to the SSP.

A simplified version of the sonority scale.

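The SSP can be operationalised with a scale like this. Below is a minimal sketch (my own toy version, not from the paper): each segment gets a sonority rank, and a syllable onset is classified by whether sonority rises, falls, or plateaus on the way to the nucleus. The scale and segment inventory are simplified assumptions.

```python
# Toy sonority scale: stops < fricatives < nasals < liquids < glides < vowels
SONORITY = {**dict.fromkeys("pbtdkg", 1),     # stops
            **dict.fromkeys("fvsz", 2),       # fricatives
            **dict.fromkeys("mn", 3),         # nasals
            **dict.fromkeys("lr", 4),         # liquids
            **dict.fromkeys("wj", 5),         # glides
            **dict.fromkeys("aeiou", 6)}      # vowels

def onset_profile(syllable):
    """Sonority differences between successive segments, from the
    first segment up to the nucleus vowel (coda is ignored)."""
    ranks = [SONORITY[seg] for seg in syllable]
    nucleus = ranks.index(6)                  # first vowel is the nucleus
    return [ranks[i + 1] - ranks[i] for i in range(nucleus)]

def ssp_status(syllable):
    diffs = onset_profile(syllable)
    if all(d > 0 for d in diffs):
        return "rise (well-formed)"
    if any(d < 0 for d in diffs):
        return "fall (ill-formed)"
    return "plateau (dispreferred)"

print(ssp_status("blif"))   # rise (well-formed)
print(ssp_status("lbif"))   # fall (ill-formed)
print(ssp_status("bdif"))   # plateau (dispreferred)
```

This is exactly the three-way contrast (/blif/ vs. /lbif/ vs. /bdif/) that the two experiments below probe in newborns.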

In a first experiment, Gómez et al. investigated “whether the brains of newborns react differentially to syllables that are well- or extremely ill-formed, as defined by the SSP” (p. 5838). They had 24 newborns listen to /blif/- and /lbif/-type syllables while measuring the infants’ brain activity. In the left temporal and right frontoparietal brain areas, “well-formed syllables elicited lower oxyhemoglobin concentrations than ill-formed syllables.” In a second experiment, they presented another group of 24 newborns with syllables either exhibiting a sonority rise (/blif/) or two consonants of the same sonority (e.g. /bdif/) in their onset. The latter option is dispreferred across languages, and previous behavioral experiments with adult speakers have also shown a strong preference for the former pattern. “Results revealed that oxyhemoglobin concentrations elicited by well-formed syllables are significantly lower than concentrations elicited by plateaus in the left temporal cortex” (p. 5839). However, in contrast to the first experiment, there is no significant effect in the right frontoparietal region, “which has been linked to the processing of suprasegmental properties of speech” (p. 5838).

In a follow-up experiment, Gómez et al. investigated the role of the position of the CC-patterns within the word: Do infants react differently to /lbif/ than to, say, /olbif/? Indeed, they do: “Because the sonority fall now spans across two syllables (ol.bif), rather than a syllable onset (e.g., lbif), such words should be perfectly well-formed. In line with this prediction, our results show that newborns’ brain responses to disyllables like oblif and olbif do not differ.”

How much linguistic experience do newborns have?

Taken together, these results indicate that newborn infants are already sensitive to syllabification (as the follow-up experiment suggests) as well as to certain preferences in syllable structure. This leads Gómez et al. to the conclusion “that humans possess early, experience-independent linguistic biases concerning syllable structure that shape language perception and acquisition” (p. 5840). This conjecture, however, is a very bold one. First of all, seeing these preferences as experience-independent presupposes that newborn infants have no linguistic experience at all. However, there is evidence that “babies’ language learning starts from the womb”. In their classic 1986 paper, Anthony DeCasper and Melanie Spence showed that “third-trimester fetuses experience their mothers’ speech sounds and that prenatal auditory experience can influence postnatal auditory preferences.” Pregnant women were instructed to read aloud a story to their unborn children when they felt that the fetus was awake. In the postnatal phase, the infants’ reactions to the same or a different story read by their mother’s or another woman’s voice were studied by monitoring the newborns’ sucking behavior. Apart from the “experienced” infants who had been read the story, a group of “untrained” newborns were used as control subjects. They found that for experienced subjects, the target story was more reinforcing than a novel story, no matter if it was recited by their mother’s or a different voice. For the control subjects, by contrast, no difference between the stories could be found. “The only experimental variable that can systematically account for these findings is whether the infants’ mothers had recited the target story while pregnant” (DeCasper & Spence 1986: 143).

Continue reading


UFO Events, a Thought Experiment about the Evolution of Language

The problem of human origins, of which language origins is one aspect, is deep and important. It is also somewhat mysterious. If we could travel back in time at least some of those mysteries could be cleared up. One that interests me, for example, is whether or not the emergence of language was preceded by the emergence of music, or more likely, proto-music. Others are interested in the involvement of gesture in language origins.

Some of the attendant questions could be resolved by traveling back in time and making direct observations. Still, once we’d observed what happened and when it happened, questions would remain. We still wouldn’t know the neural and cognitive mechanisms, for they are not apparent from behavior alone. But our observations of just what happened would certainly constrain the space of models we’d have to investigate.

Unfortunately, we can’t travel back in time to make those observations. That difficulty has the peculiar effect of reversing the inferential logic of the previous paragraph. We find ourselves in the situation of using our knowledge of neural and cognitive mechanisms to constrain the space of possible historical sequences.

Except, of course, that our knowledge of neural and cognitive mechanisms is not very secure. And large swaths of linguistics are mechanism-free. To be sure, there may be an elaborate apparatus of abstract formal mechanism, but just how that mechanism is realized in step-by-step cognitive and neural processes remains uninvestigated, except among computational linguists.

The upshot of all this is that we must approach these questions indirectly. We have to gather evidence from a wide variety of disciplines – archeology, physical and cultural anthropology, cognitive psychology, developmental psychology, and the neurosciences – and piece it together. Such work entails a level of speculation that makes well-trained academicians queasy.

What follows is an out-take from Beethoven’s Anvil, my book on music. It’s about a thought experiment that first occurred to me while in graduate school in the mid-1970s. Consider the often astounding and sometimes absurd things that trainers can get animals to do, things they don’t do naturally. Those acts are, in some sense, inherent in their neuro-muscular endowment, but not evoked by their natural habitat. But place them in an environment ruled by humans who take pleasure in watching dancing horses, and . . . Except that I’m not talking about horses.

It seems to me that what is so very remarkable about the evolution of our own species is that the behavioral differences between us and our nearest biological relatives are disproportionate to the physical and physiological differences. The physical and physiological differences are relatively small, but the behavioral differences are large.

In thinking about this problem I have found it useful to think about how at least some chimpanzees came to acquire a modicum of language. The early attempts to teach chimps spoken language all ended in failure. In the most intense of these efforts, Keith and Cathy Hayes raised a baby chimp in their household from 1947 to 1954. But even that close and sustained interaction with Vicki, the young chimp in question, was not sufficient. Then in the late 1960s Allen and Beatrice Gardner began training a chimp, Washoe, in Ameslan, a sign language used among the deaf. This effort was far more successful. Within three years Washoe had a vocabulary of 85 Ameslan signs, and she sometimes created signs of her own. Continue reading


Bootstrapping Recursion into the Mind without the Genes

Recursion is one of the most important mechanisms that has been introduced into linguistics in the past six decades or so. It is also one of the most problematic and controversial. These days significant controversy centers on the question of the emergence of recursion in the evolution of language. These informal remarks bear on that issue.

Recursion is generally regarded as an aspect of language syntax. My teacher, the late David Hays, had a somewhat different view. He regarded recursion as a mechanism of the mind as a whole and so did not specifically focus on recursion in syntax. By the time I began studying with him his interest had shifted to semantics.

He had the idea that abstract concepts could be defined over stories. Thus: charity is when someone does something nice for someone without thought of a reward. We can represent that with the following diagram:

[Diagram: metalingual (MTL) definition of “charity” over an episode structure]

The charity node to the left is being defined by the structure of episodes at the right (the speech balloons are just dummies for a network structure). The head of the episodic structure is linked to the charity node with a metalingual arc (MTL), named after Jakobson’s metalingual function, which is language about language. So, one bit of language is defined by a complex pattern of language. Charity, of course, can itself appear in episodes defining other abstract concepts, and so on, thus making the semantic system recursive.
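As a toy illustration of the recursion involved (my own sketch, not Hays’s formalism): let each abstract concept carry an MTL link to the episode pattern that defines it, where episodes may themselves contain abstract concepts. The depth of metalingual definition is then unbounded in principle. All the concept names here are illustrative.

```python
class Concept:
    """A node in a toy semantic network. `mtl` is the metalingual
    link: the episode pattern (here, just a list of the concepts
    appearing in it) that defines this concept."""
    def __init__(self, name, mtl=None):
        self.name = name
        self.mtl = mtl

def depth(concept):
    """How many layers of metalingual definition must be unpacked
    before reaching concepts with no episodic definition."""
    if concept.mtl is None:
        return 0
    return 1 + max(depth(c) for c in concept.mtl)

give = Concept("give")                                # concrete concept
reward = Concept("reward")
charity = Concept("charity", mtl=[give, reward])      # defined over an episode
philanthropy = Concept("philanthropy", mtl=[charity, give])

print(depth(give))          # 0
print(depth(charity))       # 1
print(depth(philanthropy))  # 2
```

The point of the sketch is only that `depth` is a recursive computation over the network: abstract concepts defined via other abstract concepts give the semantic system its recursive character.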

Now let’s develop things a bit more carefully, but still informally. We need not go so far as the metalingual definition of abstract concepts, but we do need the metalingual mechanism. Continue reading


The Past, Present and Future of Language Evolution Research

During this year’s EvoLang conference, a book was launched with perspectives on the last conference. The past, present and future of language evolution research (McCrohon, Thompson, Verhoef & Yamauchi, 2014) is a volume of student responses to EvoLang9 in Kyoto. It includes basic reviews and criticism, synthesis of current approaches, experiments and sociological perspectives.

It makes for interesting reading. What comes across in all the papers is a drive for collaboration and integration of fields and ideas, as the diagram from the contribution by Barceló-Coblijn and Martin shows. These are serious attempts to understand what has been learned so far and to find new perspectives that incorporate empirical evidence. Many papers see neuroscientific evidence as key to expanding many areas of research.


Continue reading

Empirical Advances in Language Evolution

This is a guest post by Jeremy Collins

Hauser, Yang, Berwick, Tattersall, Ryan, Watamull, Chomsky and Lewontin have recently published an article entitled ‘The Mystery of Language Evolution‘ (see also Sean’s post), in which they argue that theories of language evolution today are ‘accompanied by a poverty of evidence’ and that ‘the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever’.  Rather than criticise their article, I thought I would summarise what I think some of the empirical advances have been, in defence of the field.  A few well-known lines of research seem to have fleshed out some details of how language evolved, even if they are still in their infancy.

1. Vocal learning in other species. 

Culturally transmitted song has evolved multiple times in various bird species, dolphins and bats.  Although Hauser et al. dismiss bird song as irrelevant in that it is ‘finite’ and lacks compositional meaning (p.6), these species shed light on why culturally transmitted vocalisation evolved in humans.  These species typically live in groups of unrelated individuals who co-operate in foraging, for instance.  The complexity of their learnt song may have evolved in the context of recognising and being altruistic towards kin (Sharp et al. 2005) (or, by extension, towards unrelated members who exploit this altruism by managing to acquire the song of the group).  In a similar way, much of the complexity and cultural variability of human language may have developed in the context of in-group identification, such as our ability to detect subtle variations in accent (Fitch 2004).  While sexual selection is an important reason for the evolution of vocal learning in some of these species, it is unlikely to be the main driving force in humans given the lack of sexual dimorphism in language use, in contrast with song birds (Fitch 2004), although its role in human pair bonding is similar to pair bonding in monogamous parrot species (Pepperberg 1999).  Pepperberg (1999) showed that African Grey Parrots can learn to use spoken words and correctly answer questions involving abstract semantic categories, and with some understanding of syntax, showing how bird vocal learning is not necessarily as qualitatively different from human language acquisition as Hauser et al. suggest.

2. The genetics of language. 

The precise relationships between genes and language are unknown, as the authors say; but specific language disorders at least show that syntax and fluency of speech are heritable, which is an advance in its own right.  Vocabulary size and vocabulary acquisition patterns (e.g. rate of learning words at different ages in infancy) have also been shown to be heritable (Stromswold 2001).  Although these are not genes ‘for’ these specific linguistic traits, they are likely to have been selected for partly in the context of language use, given the vast difference in syntactic complexity and vocabulary size between human languages and the languages that primates, such as Kanzi or Nim Chimpsky, can acquire.

3. The neurobiology of language and tool use. 

The neural circuitry for language is likely to have been co-opted in part from the transmission and use of tools; they both involve complex motor actions and have been suggested to use similar areas of the brain such as Broca’s area, which is activated in experiments involving complex tool manufacture (Higuchi et al. 2009), and which is often lateralised differently in the brain in left-handed individuals (Knecht et al. 2000).  The prevalence of gesture in spoken languages, the fact that we can acquire complex sign languages, and the range of innate gestures in gorillas and chimpanzees (contrasted with their absence of vocalization) suggest that gesture may have been a platform for the evolution of language, and manual dexterity for the evolution of recursive syntax in particular (Arbib 2012).  If the authors want an evolutionary origin for ‘discrete infinity’, this is one candidate.

4. The study of sound symbolism. 

Three lines of evidence suggest that sound-symbolism helped spoken language evolve: robust sound-meaning pairings tested across 6000 languages, controlling for language family and region (such as proximal demonstratives and words for ‘small’ using a front vowel) (Blasi et al. 2014); rich systems of ideophones, namely words similar to onomatopoeia but which go beyond sound in being able to depict appearance, texture, motion, tastes, and emotions, in language families in Africa, Southeast Asia and the Americas (Dingemanse 2012); and innate associations of sounds and shapes independent of language, as suggested by ideophones, and the bouba/kiki and similar tests (Ramachandran 2013).

5. The study of the diversity of grammar. 

As an example, grammatical categories regularly develop from simpler, lexical categories, in ways that recur across many language families: e.g. pre-/post-positions develop from abstract nouns and verbs, adjectives develop from forms of nouns and verbs, tense and aspect markers develop from adverbs or nominalizers (e.g. the development of English ‘-ing’ from a nominal affix to a gerund marker to a participle marker), and so on (Heine and Kuteva 2007).  Cross-linguistic work can therefore shed light on what the first languages may have been like, such as having more weakly differentiated grammatical categories (e.g. collapsing adjectives or adpositions with nouns and verbs). Studies on patterns of basic word order suggest that subject-object-verb order is likely to have been used, given its dominance in spoken languages today when controlling for geography and language family (Gell-Mann and Ruhlen 2011, Dryer 1992), and the way that people spontaneously converge on that word order when gesturing (Goldin-Meadow et al. 2008).  Languages spoken by small populations tend to develop case-marking and other complex morphology (Lupyan and Dale 2010), suggesting that this may also have been a feature of early languages.  Increasingly detailed surveys of linguistic diversity can help generate hypotheses like these, and hopefully soon allow ways of testing them.

Hauser et al.’s paper has some valid criticisms of the field (such as of models of the cultural evolution of compositionality, and the evidence for Neanderthal language), but I think that their assessment that ‘the fundamental questions remain as mysterious as ever’ is too pessimistic. Others have noted that none of the authors were at the last Evolution of Language conference, which is not surprising given what I remember of meeting Charles Yang, the second author on that paper, at the previous conference in Kyoto.  He was sitting gloomily at dinner with a group of Japanese generativists, who were not talking.  I asked him whether he had enjoyed any of the talks, and he said ‘Almost none.  Their notion of language is so…impoverished.’ He brightened up when the conversation turned back to Chomsky, whom he had had dinner with recently.  ‘We drank a lot of wine.  And Noam had two desserts.’


Jeremy Collins designs kitchens and bathrooms at the Max Planck Institute for Psycholinguistics.  His homepage is here.


Arbib M. A. (2012) Tool use and constructions.  Behav Brain Sci. 35(4):218-9.

Blasi et al. (2014). Sound symbolism and the origins of language. In: Cartmill, Roberts, Lyn & Cornish (Eds.), The Evolution of Language: Proceedings of the 10th EvoLang Conference.

Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6(10):654–672.

Dryer, M. (1992). The Greenbergian word order correlations. Language, pages 81–138.

Fitch, W. T. (2004). The evolution of language. In Gazzaniga, M. (Ed.), The Cognitive Neurosciences (3rd Edition). Cambridge, MA: MIT Press.

Gell-Mann, M. and Ruhlen, M. (2011). The origin and evolution of word order. Proceedings of the National Academy of Sciences, 108(42):17290–17295.

Goldin-Meadow, S., So, W. C., Özyürek, A., and Mylander, C. (2008). The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences, 105(27):9163–9168.

Higuchi, S., Chaminade, T., Imamizu, H., and Kawato, M. (2009). Shared neural correlates for language and tool use in Broca’s area. Neuroreport, 20(15):1376–1381.

Knecht, S., Dräger, B., Deppe, M., Bobe, L., Lohmann, H., Flöel, A., Ringelstein, E.-B., and Henningsen, H. (2000). Handedness and hemispheric language dominance in healthy humans. Brain, 123(12):2512–2518.

Lupyan, G. and Dale, R. (2010). Language structure is partly determined by social structure. PLoS ONE, 5(1):e8559.

Pepperberg, I.M. (1999). The Alex Studies: Cognitive and Communicative Abilities of Grey Parrots. Harvard.

Sharp, S. P., McGowan, A., Wood, M. J., and Hatchwell, B. J. (2005). Learned kin recognition cues in a social bird. Nature, 434:1127–1130.

Stromswold, K. (2001). The heritability of language: A review and meta-analysis of twin, adoption, and linkage studies. Language, 77(4):647–723.


Digital Criticism Comes of Age, a Post at 3QD

I’ve got a new post at 3 Quarks Daily: The Only Game in Town: Digital Criticism Comes of Age.

I open with Moretti – natch – then to Willard McCarty’s 2013 Busa Award Lecture, where he talks of embracing the computer as Other. I end with Said on his belief in an autonomous aesthetic realm, despite the difficulties of conceptualizing how it could possibly work. The thrust of the article, though, is whether or not we can actually get this venture moving, really moving. What are the chances of really embracing the Other?

Though I made my peace with the computer years ago, and so am biased, I don’t know the answer to that question. But I’ve made some progress in figuring out what that question entails, and that forms the bulk of my essay.

The issue is one that’s been with academic literary study since the early 20th Century. In the 1920s the matter was stated most succinctly by Archibald MacLeish, that poems should not mean but be. In the late 1950s we find ourselves in the “Polemical Introduction” to Northrop Frye’s well-known Anatomy of Criticism (pp. 27-28):

The reading of literature should, like prayer in the Gospels, step out of the talking world of criticism into the private and secret presence of literature. Otherwise the reading will not be a genuine literary experience, but a mere reflection of critical conventions, memories, and prejudices. The presence of incommunicable experience in the center of criticism will always keep criticism an art, as long as the critic recognizes that criticism comes out of it but cannot be built on it.

The issue came home to me in a rejection letter for my first essay on “Kubla Khan” – which ended up going into Language and Style in 1985 – where the reviewer complained that the essay “ought to argue with itself, to put into question some of the patterns it establishes-or better, perhaps to let the poem talk back.”

What does he mean, “let the poem talk back”? I know very well that the statement isn’t meant to be taken literally. But what’s the non-literal version of the statement? Under what circumstances could a poem do something like talk back?

Under face-to-face performance circumstances. To be sure, the poem doesn’t talk, but the poet does. The poet recites the poem, the teller spins the tale, the audience reacts with silence, groans, laughter, remarks, and the poet replies. There the audience and the poet/story-teller are on an even footing, in the same “space,” one that really IS interactive. But criticism really isn’t like that, no matter how much this or that critic wishes otherwise.

[Image: a Túngara frog]

The Mystery of Language Evolution: We can’t know more until we do

Hauser, Yang, Berwick, Tattersall, Ryan, Watumull, Chomsky and Lewontin have co-authored an article, The Mystery of Language Evolution. It’s a review of current directions in the field with the basic message that we don’t yet understand enough for empirical evidence from animal studies, archaeology, palaeontology, genetics or modelling to inform theories of language evolution.  Here I summarise the paper and offer some criticisms.

The core language phenotype of interest, according to the authors, is discrete infinity as exemplified in recursive operations found in combinatorial phonology and hierarchical syntax. The authors argue that the methods of evolutionary biology cannot yet be adequately applied to the evolution of this phenotype.

The paper begins with an illustration of the methods of evolutionary biology in a case where this kind of inference is possible. Túngara frogs (pictured above) have a very simple communication system (males croak to attract females), and we know a lot about the mechanisms underlying production and perception and how it links to fitness. However, the obvious adaptive hypothesis (perception adapted after production) was proven wrong by comparison with living sister species (they had similar perception, but not production capacities, so production adapted to perception). This method is hard to apply to language evolution, because we don’t have a good idea of the mechanisms involved and we have no sister-species to compare ourselves to.

Specifically, the authors focus on 4 domains of inquiry, which they claim cannot contribute to theories of language evolution.


Culture, its evolution and anything inbetween