Defining iconicity and its repercussions in language evolution

There was an awful lot of talk about iconicity at this year’s EvoLang conference (as well as in previous years), and its ability to bootstrap communication systems and solve symbol grounding problems, and this has led to talk of its possible role in the emergence of human language. Some work has been more sceptical than others about the role of iconicity, and so I thought it would be useful to do a wee overview of some of the talks I saw in relation to how different presenters define iconicity (though this is by no stretch a comprehensive overview).

As with almost everything, how people define iconicity differs across studies. In a recent paper, Monaghan, Shillcock, Christiansen & Kirby (2014) identify two forms of iconicity in language: absolute iconicity and relative iconicity. Absolute iconicity is where some linguistic feature directly imitates a referent, e.g. onomatopoeia or gestural pantomime. Relative iconicity is where similar signals map to similar meanings, i.e. there is a correlation between the structure of the signal space and the structure of the meaning space. Relative iconicity is usually only clear when the whole meaning and signal spaces can be observed together and systematic relations can be observed between them.
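To make the relative notion concrete, here is a minimal sketch of how one might quantify relative iconicity in a toy lexicon (the lexicon, the feature scheme and the measure are all invented for illustration, not taken from Monaghan et al.): correlate pairwise signal distances with pairwise meaning distances across the whole system.

```python
from itertools import combinations

def edit_distance(a, b):
    """Levenshtein distance between two signal strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def relative_iconicity(lexicon):
    """Correlate pairwise signal distances with pairwise meaning distances.
    A high value means similar meanings get similar signals."""
    pairs = list(combinations(lexicon, 2))
    sig = [edit_distance(lexicon[m1], lexicon[m2]) for m1, m2 in pairs]
    sem = [sum(f1 != f2 for f1, f2 in zip(m1, m2)) for m1, m2 in pairs]
    n = len(pairs)
    ms, mm = sum(sig) / n, sum(sem) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(sig, sem))
    sd_s = sum((s - ms) ** 2 for s in sig) ** 0.5
    sd_m = sum((m - mm) ** 2 for m in sem) ** 0.5
    return cov / (sd_s * sd_m)  # Pearson's r over distance pairs

# Meanings are (size, texture) feature tuples; signals are strings.
systematic = {('small', 'round'): 'timo', ('small', 'spiky'): 'tiki',
              ('big', 'round'): 'domo', ('big', 'spiky'): 'doki'}
shuffled = {('small', 'round'): 'timo', ('small', 'spiky'): 'domo',
            ('big', 'round'): 'doki', ('big', 'spiky'): 'tiki'}

print(relative_iconicity(systematic))  # 1.0: signal space mirrors meaning space
print(relative_iconicity(shuffled))    # well below 1: same signals, relation destroyed
```

Note that the measure only makes sense over the lexicon as a whole, which is exactly the sense in which relative iconicity only shows up at the level of whole systems rather than individual signs.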

Liz Irvine gave a talk on the core assumption that iconicity played a big role in bootstrapping language. She teases apart the distinction above by calling absolute iconicity “imagic iconicity” and relative iconicity “diagrammatic iconicity”. Diagrammatic iconicity can be broken down even further and can be measured on a continuum, either in terms of how signals are used and interpreted by language users, or simply by objectively looking at meaning-signal mappings, where signs can be non-arbitrary but not necessarily treated as iconic by language users. Irvine claims that this distinction is important in assessing the role of iconicity in the emergence of language. She argues that imagic, or absolute, iconicity may aid adults in understanding new signs, but it doesn’t necessarily aid early language learning in infants. Diagrammatic, or relative, iconicity is a better candidate to aid language acquisition and language emergence, since language users need not interpret the signal-meaning mappings explicitly as being iconic, even though they are non-arbitrary.

Irvine briefly discusses the claim that ape gestures are not iconic from the perspective of their users. Marcus Perlman, Nathaniel Clark and Joanne A. Tanner presented work on whether iconicity exists in ape gesture. They define iconic gestures as those which in any way resemble or depict their meanings, but break these gestures down into pantomimed actions, directive touches and visible directives, which are all arguably examples of absolute iconicity. Following from Irvine’s arguments, this broad definition of iconicity may not be so useful when drawing up scenarios for language evolution, and the authors try to provide a more detailed and nuanced analysis drawing on the interpretation of signs from the ape’s perspective. Existing theories of iconicity in ape gesture maintain that any iconicity is an artefact of the gesture’s development through inheritance and ritualisation. However, the authors argue that these theories do not currently account for the variability and creativity seen in iconic ape gestures, which may help frame iconicity from the perspective of its user.

It’s difficult to analyse iconicity from an ape’s perspective; however, it should be much easier to get at how humans perceive and interpret different types of iconicity via experiments. I think that experimental design can help get at this, but so can analysis from a user perspective via post-experimental questionnaires, or even post-experimental experiments (where naive participants are asked to rate to what degree a sign represents a meaning).

Gareth Roberts and Bruno Galantucci presented a study testing the hypothesis that a modality’s capacity for iconicity may inhibit the emergence of combinatorial structure (phonological patterning) in a system. This hypothesis may explain why emerging sign languages, which have more capacity for iconicity than spoken languages, can have fully expressive systems without a level of combinatorial structure (see here). They used the now famous paradigm from Galantucci’s 2005 experiment (here). They asked participants to communicate a variety of meanings which were either lines, which could be represented through absolute iconicity within the modality provided, or circles in various shades of green, which could not be iconically represented. The experiment showed that, indeed, the signals used for circles came to be made up of combinatorial elements, whereas the signals for lines retained iconicity throughout the experiment. This is a great experiment and I really like it; however, I worry that it only looks at the two extreme ends of the iconicity continuum, and does not consider the effects of relative iconicity, or nuances of signal-meaning relations. In de Boer and Verhoef (2012), a mathematical model shows that shared topology between signal and meaning spaces will generate an iconic system with signal-meaning mapping, but mismatched topologies will generate systems with conventionalised structure, as the sketch below illustrates. I think it is important that experimental work now looks into subtler differences between signal and meaning spaces and the effects these differences have on structure in emerging linguistic systems in the lab, and also into how participants’ interpretation of any iconicity or structure in a system affects the nature of that iconicity or structure. I’m currently running some experiments exploring this myself, so watch this space!
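For a rough feel of the topology point, here is a toy numerical sketch (my own illustration, not de Boer and Verhoef’s actual model): when meanings and signals both live on a line, pairing them in order preserves the distance structure almost perfectly, but when meanings live on a circle (as hues arguably do), no assignment onto a line can preserve it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
signals = np.sort(rng.uniform(0, 1, n))  # a 1-D signal space, e.g. line length

def distance_correlation(meaning_dist, signal_points):
    """Correlation between pairwise meaning and signal distances."""
    signal_dist = np.abs(signal_points[:, None] - signal_points[None, :])
    iu = np.triu_indices(len(signal_points), k=1)
    return np.corrcoef(meaning_dist[iu], signal_dist[iu])[0, 1]

# Matched topologies: meanings on a line, paired with signals in order.
line = np.sort(rng.uniform(0, 1, n))
line_dist = np.abs(line[:, None] - line[None, :])
print(distance_correlation(line_dist, signals))    # close to 1: an iconic mapping is available

# Mismatched topologies: meanings on a circle (e.g. hue), signals still on a line.
theta = np.sort(rng.uniform(0, 2 * np.pi, n))
gap = np.abs(theta[:, None] - theta[None, :])
circle_dist = np.minimum(gap, 2 * np.pi - gap)     # shortest way around the circle
print(distance_correlation(circle_dist, signals))  # clearly lower: distances can't be preserved
```

When no iconic mapping is available, the system has to settle on some convention instead, which is the intuition behind the model’s result.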

References

Where possible, I’ve linked to studies as I’ve cited them.

All other studies cited are included in Erica A. Cartmill, Seán Roberts, Heidi Lyn & Hannah Cornish, ed., The Evolution of Language: Proceedings of the 10th international conference (EvoLang 10). It’s only £87.67 on Amazon (but it may be wiser to email the authors if you don’t have a friend with a copy).

The Myth of Language Universals at Birth

[This is a guest post by Stefan Hartmann]


“Chomsky still rocks!” This comment on Twitter refers to a recent paper in PNAS by David M. Gómez et al. entitled “Language Universals at Birth”. Indeed, the question Gómez et al. address is one of the most hotly debated questions in linguistics: Does children’s language learning draw on innate capacities that evolved specifically for linguistic purposes – or rather on domain-general skills and capabilities?

Lbifs, Blifs, and Brains

Gómez and his colleagues investigate these questions by studying how children respond to different syllable structures:

It is well known that across languages, certain structures are preferred to others. For example, syllables like blif are preferred to syllables like bdif and lbif. But whether such regularities reflect strictly historical processes, production pressures, or universal linguistic principles is a matter of much debate. To address this question, we examined whether some precursors of these preferences are already present early in life. The brain responses of newborns show that, despite having little to no linguistic experience, they reacted to syllables like blif, bdif, and lbif in a manner consistent with adults’ patterns of preferences. We conjecture that this early, possibly universal, bias helps shaping language acquisition.

More specifically, they assume a restriction on syllable structure known as the Sonority Sequencing Principle (SSP), which has been proposed as “a putatively universal constraint” (p. 5837). According to this principle, “syllables maximize the sonority distance from their margins to their nucleus”. For example, in /blif/, /b/ is less sonorous than /l/, which is in turn less sonorous than the vowel /i/, which constitutes the syllable’s nucleus. In /lbif/, by contrast, there is a sonority fall, which is why this syllable is extremely ill-formed according to the SSP.

A simplified version of the sonority scale
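To make the principle concrete, here is a toy SSP checker for two-consonant onsets. The numeric sonority values below are an invented simplification of the scale (real sonority hierarchies are more fine-grained, and contested), so treat this as a sketch rather than the paper’s procedure.

```python
# Toy Sonority Sequencing Principle check for a two-consonant onset.
# The sonority values below are an invented, simplified scale.
SONORITY = {
    'p': 1, 'b': 1, 't': 1, 'd': 1, 'k': 1, 'g': 1,  # plosives
    'f': 2, 'v': 2, 's': 2, 'z': 2,                  # fricatives
    'm': 3, 'n': 3,                                  # nasals
    'l': 4, 'r': 4,                                  # liquids
    'j': 5, 'w': 5,                                  # glides
    'a': 6, 'e': 6, 'i': 6, 'o': 6, 'u': 6,          # vowels
}

def onset_profile(syllable):
    """Classify the sonority contour of a CC onset: rise, plateau or fall."""
    c1, c2 = syllable[0], syllable[1]
    diff = SONORITY[c2] - SONORITY[c1]
    if diff > 0:
        return 'rise (well-formed under the SSP)'
    if diff == 0:
        return 'plateau (dispreferred)'
    return 'fall (extremely ill-formed)'

for s in ['blif', 'bdif', 'lbif']:
    print(s, '->', onset_profile(s))
# blif -> rise, bdif -> plateau, lbif -> fall
```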

In a first experiment, Gómez et al. investigated “whether the brains of newborns react differentially to syllables that are well- or extremely ill-formed, as defined by the SSP” (p. 5838). They had 24 newborns listen to /blif/- and /lbif/-type syllables while measuring the infants’ brain activity. In the left temporal and right frontoparietal brain areas, “well-formed syllables elicited lower oxyhemoglobin concentrations than ill-formed syllables.” In a second experiment, they presented another group of 24 newborns with syllables exhibiting either a sonority rise (/blif/) or two consonants of the same sonority (e.g. /bdif/) in their onset. The latter option is dispreferred across languages, and previous behavioral experiments with adult speakers have also shown a strong preference for the former pattern. “Results revealed that oxyhemoglobin concentrations elicited by well-formed syllables are significantly lower than concentrations elicited by plateaus in the left temporal cortex” (p. 5839). However, in contrast to the first experiment, there is no significant effect in the right frontoparietal region, “which has been linked to the processing of suprasegmental properties of speech” (p. 5838).

In a follow-up experiment, Gómez et al. investigated the role of the position of the CC-patterns within the word: Do infants react differently to /lbif/ than to, say, /olbif/? Indeed, they do: “Because the sonority fall now spans across two syllables (ol.bif), rather than a syllable onset (e.g., lbif), such words should be perfectly well-formed. In line with this prediction, our results show that newborns’ brain responses to disyllables like oblif and olbif do not differ.”

How much linguistic experience do newborns have?

Taken together, these results indicate that newborn infants are already sensitive to syllabification (as the follow-up experiment suggests) as well as to certain preferences in syllable structure. This leads Gómez et al. to the conclusion “that humans possess early, experience-independent linguistic biases concerning syllable structure that shape language perception and acquisition” (p. 5840). This conjecture, however, is a very bold one. First of all, seeing these preferences as experience-independent presupposes that newborn infants have no linguistic experience at all. However, there is evidence that “babies’ language learning starts from the womb”. In their classic 1986 paper, Anthony DeCasper and Melanie Spence showed that “third-trimester fetuses experience their mothers’ speech sounds and that prenatal auditory experience can influence postnatal auditory preferences.” Pregnant women were instructed to read a story aloud to their unborn children whenever they felt that the fetus was awake. In the postnatal phase, the infants’ reactions to the same or a different story, read by their mother’s or another woman’s voice, were studied by monitoring the newborns’ sucking behavior. Apart from the “experienced” infants who had been read the story, a group of “untrained” newborns served as control subjects. DeCasper and Spence found that for experienced subjects, the target story was more reinforcing than a novel story, no matter whether it was recited by their mother’s voice or a different one. For the control subjects, by contrast, no difference between the stories could be found. “The only experimental variable that can systematically account for these findings is whether the infants’ mothers had recited the target story while pregnant” (DeCasper & Spence 1986: 143).

Continue reading “The Myth of Language Universals at Birth”

UFO Events, a Thought Experiment about the Evolution of Language

The problem of human origins, of which language origins is one aspect, is deep and important. It is also somewhat mysterious. If we could travel back in time at least some of those mysteries could be cleared up. One that interests me, for example, is whether or not the emergence of language was preceded by the emergence of music, or more likely, proto-music. Others are interested in the involvement of gesture in language origins.

Some of the attendant questions could be resolved by traveling back in time and making direct observations. Still, once we’d observed what happened and when it happened, questions would remain. We still wouldn’t know the neural and cognitive mechanisms, for they are not apparent from behavior alone. But our observations of just what happened would certainly constrain the space of models we’d have to investigate.

Unfortunately, we can’t travel back in time to make those observations. That difficulty has the peculiar effect of reversing the inferential logic of the previous paragraph. We find ourselves in the situation of using our knowledge of neural and cognitive mechanisms to constrain the space of possible historical sequences.

Except, of course, that our knowledge of neural and cognitive mechanisms is not very secure. And large swaths of linguistics are mechanism-free. To be sure, there may be an elaborate apparatus of abstract formal mechanism, but just how that mechanism is realized in step-by-step cognitive and neural processes remains uninvestigated, except among computational linguists.

The upshot of all this is that we must approach these questions indirectly. We have to gather evidence from a wide variety of disciplines – archeology, physical and cultural anthropology, cognitive psychology, developmental psychology, and the neurosciences – and piece it together. Such work entails a level of speculation that makes well-trained academicians queasy.

What follows is an out-take from Beethoven’s Anvil, my book on music. It’s about a thought experiment that first occurred to me while in graduate school in the mid-1970s. Consider the often astounding and sometimes absurd things that trainers can get animals to do, things they don’t do naturally. Those acts are, in some sense, inherent in their neuro-muscular endowment, but not evoked by their natural habitat. But place them in an environment ruled by humans who take pleasure in watching dancing horses, and . . . Except that I’m not talking about horses.

It seems to me that what is so very remarkable about the evolution of our own species is that the behavioral differences between us and our nearest biological relatives are disproportionate to the physical and physiological differences. The physical and physiological differences are relatively small, but the behavioral differences are large.

In thinking about this problem I have found it useful to think about how at least some chimpanzees came to acquire a modicum of language. The earliest efforts to teach chimpanzees language targeted spoken language, and all of them ended in failure. In the most intense of these efforts, Keith and Cathy Hayes raised a baby chimp in their household from 1947 to 1954. But that close and sustained interaction with Vicki, the young chimp in question, was not sufficient. Then in the late 1960s Allen and Beatrice Gardner began training a chimp, Washoe, in Ameslan, a sign language used among the deaf. This effort was far more successful. Within three years Washoe had a vocabulary of 85 Ameslan signs and she sometimes created signs of her own. Continue reading “UFO Events, a Thought Experiment about the Evolution of Language”

Bootstrapping Recursion into the Mind without the Genes

Recursion is one of the most important mechanisms that has been introduced into linguistics in the past six decades or so. It is also one of the most problematic and controversial. These days significant controversy centers on the question of the emergence of recursion in the evolution of language. These informal remarks bear on that issue.

Recursion is generally regarded as an aspect of language syntax. My teacher, the late David Hays, had a somewhat different view. He regarded recursion as a mechanism of the mind as a whole and so did not specifically focus on recursion in syntax. By the time I began studying with him his interest had shifted to semantics.

He had the idea that abstract concepts could be defined over stories. Thus: charity is when someone does something nice for someone without thought of a reward. We can represent that with the following diagram:

[Figure: a metalingual (MTL) definition of the abstract concept “charity”]

The charity node on the left is being defined by the structure of episodes on the right (the speech balloons are just dummies for a network structure). The head of the episodic structure is linked to the charity node with a metalingual arc (MTL), named after Jakobson’s metalingual function, which is language about language. So, one bit of language is defined by a complex pattern of language. Charity, of course, can appear in episodes defining other abstract concepts, and so on, thus making the semantic system recursive.
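Here is a minimal sketch of that recursive machinery (a toy encoding of my own, not Hays’s actual formalism): concept nodes link via a metalingual table to episode patterns, and because episodes can mention other defined concepts, unfolding a definition is naturally recursive.

```python
# Toy semantic network: abstract concepts defined over episode patterns.
# A metalingual (MTL) link maps a concept to the episode that defines it;
# episodes can mention other defined concepts, so expansion recurses.
MTL = {
    'charity': ['someone', 'does-something-nice-for', 'someone-else',
                'without-thought-of', 'reward'],
    'altruism': ['actor', 'shows', 'charity', 'at-cost-to', 'self'],
}

def expand(node, depth=0, max_depth=5):
    """Recursively unfold a node into its defining episode, if it has one."""
    indent = '  ' * depth
    if node not in MTL or depth >= max_depth:
        print(indent + node)  # a primitive node: no MTL definition
        return
    print(indent + node + ' =def=')
    for part in MTL[node]:
        expand(part, depth + 1, max_depth)

expand('altruism')
# 'altruism' unfolds into an episode containing 'charity', which unfolds
# in turn into its own defining episode: recursion in semantics, not syntax.
```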

Now let’s develop things a bit more carefully, but still informally. We need not go as far as the metalingual definition of abstract concepts, but we do need the metalingual mechanism. Continue reading “Bootstrapping Recursion into the Mind without the Genes”

Happy Darwin Day!

I had hoped to celebrate Darwin Day with a longer post discussing how language is often viewed as a challenging puzzle for natural selection. My main worry is that the formal design metaphor used in much of linguistics has been used, incorrectly IMHO, to divert attention away from studying language as a biological system based on organic logic. If this doesn’t make much sense, then you can do some background reading with Terrence Deacon’s paper, Language as an emergent function: Some radical neurological and evolutionary implications. Alas, that’s all I have to say on the matter for now, but if you’re looking for something related to Darwin, evolution and the origin of language, then I strongly suggest you head over to the excellent Darwin Correspondence Project and read their blog post on the subject:

Darwin started thinking about the origin of language in the late 1830s. The subject formed part of his wide-ranging speculations about the transmutation of species. In his private notebooks, he reflected on the communicative powers of animals, their ability to learn new sounds and even to associate them with words. “The distinction of language in man is very great from all animals”, he wrote, “but do not overrate—animals communicate to each other” (Barrett ed. 1987, p. 542-3). Darwin observed the similarities between animal sounds and various natural cries and gestures that humans make when expressing strong emotions such as fear, surprise, or joy. He noted the physical connections between words and sounds, exhibited in words like “roar”, “crack”, and “scrape” that seemed imitative of the things signified. He drew parallels between language and music, and asked: “did our language commence with singing—is this the origin of our pleasure in music—do monkeys howl in harmony”? (Barrett ed. 1987, p. 568).

Retiring Procrustean Linguistics

Many of you are probably already aware of the Edge 2014 question: what scientific ideas are ready for retirement? The question was derived from the Kuhnian-esque, and somewhat tongue-in-cheek, quote by theoretical physicist Max Planck:

A new scientific theory does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it.

Some of the big themes that jumped out at me were bashing the scientific method, bemoaning our enthusiasm for big data and showing us how we don’t understand and routinely misapply statistics. Other relevant candidates that popped up for retirement were culture, learning, human nature, innateness, and brain plasticity. Lastly, on the language front, we had Benjamin Bergen and Nick Enfield weighing in against universal grammar and linguistic competency, whilst John McWhorter rallied against strong linguistic relativity and Dan Sperber challenged our conventional understanding of meaning.

And just so you’re aware: I’m not necessarily in agreement with all of the perspectives I’ve linked to above, but I do think a lot of them are interesting and definitely worth a read (if only to clarify your own position on the matters). On this note, you should probably go over and read Norbert Hornstein’s post about the flaws of Bergen’s argument, which basically boil down to a conflation between I-languages and E-languages (and where we should expect to observe universal properties).

If I had to offer my own candidate for retirement, then it would be what Anne Buchanan over at the excellent blog, The Mermaid’s Tale, termed Procrustean Science:

In classical Greek mythology, Procrustes was a criminal who produced an iron bed and made his victims fit the bed…by cutting off any parts of their bodies that didn’t fit. The metaphorical use of the word means “enforcing uniformity or conformity without regard to natural variation or individuality.” It is in this spirit that Woese characterized much of modern biology as procrustean, because rather than adapt its explanations to the facts, the facts are forced to lie in a bed of theory that is taken for granted–and thus, the facts must fit!

Continue reading “Retiring Procrustean Linguistics”

What is combinatorial structure?

Languages have structure on two levels: a level on which small meaningless building blocks (phonemes) make up bigger meaningful building blocks (morphemes), and a level on which these meaningful building blocks make up even bigger meaningful structures (words, sentences, utterances). This was identified way back in the 1960s as one of Hockett’s design features for language, known as “duality of patterning”, and in most of linguistics people refer to these different levels of structure as “phonology” and “(morpho)syntax”.

However, in recent years these contrasting levels of structure have started to be talked about in the context of language evolution: either in reference to artificial language learning experiments or experimental semiotics, where a proxy for language is used and so it doesn’t make sense to talk about phonological or morphosyntactic structure, or in talking about animal communication, where terms that pertain to human language are equally out of place. Instead, the terms “combinatorial” and “compositional” structure are used, occasionally contrastively, but sometimes conflated to mean the same thing.

In the introduction to a recent special issue of Language and Cognition on new perspectives on duality of patterning, Bart de Boer, Wendy Sandler and Simon Kirby helpfully outline their preferred terminology:

Duality of patterning (Hockett, 1960) is the property of human language that enables combinatorial structure on two distinct levels: meaningless sounds can be combined into meaningful morphemes and words, which themselves could be combined further. We will refer to recombination at the first level as combinatorial structure, while recombination at the second level will be called compositional structure.

You will notice that they initially call both levels of structure “combinatorial”, and both arguably are. My point in this blog post isn’t necessarily that only structure on the first level should be called combinatorial, but that work talking about combinatorial structure should establish what its terminology means.

A recent paper by Scott-Phillips and Blythe (2013), entitled “Why is combinatorial communication rare in the natural world, and why is language an exception to this trend?”, presents an agent-based model to show how limited the conditions are under which combinatorial communication can emerge. Obviously, in order to do this they need to define what they mean by combinatorial communication, and they present this figure by way of explanation:

[Figure 1 from Scott-Phillips & Blythe (2013): the simplest possible combinatorial communication system]

They explain:

In a combinatorial communication system, two (or more) holistic signals (A and B in this figure) are combined to form a third, composite signal (A + B), which has a different effect (Z) to the sum of the two individual signals (X + Y). This figure illustrates the simplest combinatorial communication system possible. Applied to the putty-nosed monkey system, the symbols in this figure are: a, presence of eagles; b, presence of leopards; c, absence of food; A, ‘pyow’; B, ‘hack’ call; C = A + B, ‘pyow–hack’; X, climb down; Y, climb up; Z ≠ X + Y, move to a new location. Combinatorial communication is rare in nature: many systems have a signal C = A + B with an effect Z = X + Y; very few have a signal C = A + B with an effect Z ≠ X + Y.

In this example, the building blocks which make up C, namely A and B, are arguably meaningful because they act as signals in their own right. Therefore, if C had a meaning which was a combination of the meanings of A and B, this system would be compositional under de Boer, Sandler and Kirby’s definition (this isn’t represented in the figure above). However, if the meaning of C is not a combination of the meanings of A and B, then A and B are arguably meaningless building blocks (whose individual expressions just happen to have meaning, much as the individual phoneme /a/ is an indefinite determiner in English but loses that meaning when used in the word “cat”). In this case, the system would be combinatorial (as defined by the figure above, as well as under the definition of de Boer, Sandler and Kirby). So far so good; it looks like we are in agreement.
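The distinction can be made concrete with a toy rendering of the putty-nosed monkey system from the figure (the dictionary encoding and the test below are my own illustration, using the paper’s glosses as meaning labels):

```python
# Toy rendering of the putty-nosed monkey system from the figure above.
effects = {
    ('pyow',): 'climb down',                     # X: effect of signal A
    ('hack',): 'climb up',                       # Y: effect of signal B
    ('pyow', 'hack'): 'move to a new location',  # Z: effect of composite C
}

def is_compositional(composite):
    """Compositional if the composite's effect is the sum of its parts' effects."""
    parts_effect = ' + '.join(effects[(unit,)] for unit in composite)
    return effects[composite] == parts_effect

print(is_compositional(('pyow', 'hack')))
# False: Z != X + Y. The 'pyow-hack' call reuses existing signal units
# (combinatorial in form), but its meaning is not built from their
# meanings, so it is not compositional.
```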

However, later in their paper Scott-Phillips and Blythe go on to argue:

Coded ‘combinatorial’ signals are in a sense not really combinatorial at all. After all, there is no ‘combining’ going on. There is really just a third holistic signal, which happens to be comprised of the same pieces as other existing holistic signals. Indeed, the most recent experimental results suggest that the putty-nosed monkeys interpret the ‘combinatorial’ pyow–hack calls in exactly this idiomatic way, rather than as the product of two component parts of meaning. By contrast, the ostensive creation of new composite signals is clearly combinatorial: the meaning of the new, composite signal is in part (but only in part) a function of the meanings of the component pieces.

The argument they are giving here is that unless the meaning of C is a combination of the meanings of A and B (i.e. compositional, as defined above), it is not really a combinatorial signal.

Scott-Phillips and Blythe clearly know, and demonstrate, that there is a difference between the two levels of structure, but they conflate them both under one term, “combinatorial”, which makes it harder to see that there is a very clear difference. Also, changing the definition of what they mean by “combinatorial” between the introduction of their paper and their discussion muddies their argument.

Perhaps we should all agree to adopt the terminology proposed by de Boer, Sandler and Kirby, but given the absence of a consensus on the matter, at the very least exactly what is meant by combinatorial (or compositional) needs to be established at the beginning of every paper using these terms.


References

de Boer, B., Sandler, W., & Kirby, S. (2012). New perspectives on duality of patterning: Introduction to the special issue. Language and Cognition, 4(4).

Hockett, C. (1960). The origin of speech. Scientific American, 203, 88–111.

Scott-Phillips, T. C., & Blythe, R. A. (2013). Why is combinatorial communication rare in the natural world, and why is language an exception to this trend? Journal of the Royal Society Interface, 10(88), 20130520.

Language Evolution or Language Change?

You sometimes hear people complaining about the use of the term “language evolution” when what people really mean is historical linguistics, language change or the cultural evolution of language. So what’s the difference?

Some people argue that evolution is a strictly biological phenomenon, covering how the brain evolved the structures which acquire and create language, and that any linguistic change sits outside of this.

Sometimes this debate gets reduced to the matter of whether there are enough parallels between the cultural evolution of language and biological evolution to justify them both having the “evolution” label. George Walkden recently gave a presentation in Manchester on why language change is not language evolution, and dedicated quite a large chunk of it to where the analogy between languages and species falls down. It is true that there are a lot of differences between languages and species, and between how these things replicate and interact, and of course it is difficult to find them perfectly analogous.

However, focussing on the differences between biological and cultural evolution in language causes one to overlook why a lot of evolutionary linguistics work looks at cultural evolution. Work on cultural evolution is trying to address the same question as studies looking directly at physiology: why is language structured the way it is? Obviously how structure evolved is the main question here, but how much of this was biological, and how much cultural, is still a very open question. Any work which looks at how structure comes about, whether through biological or cultural evolution, can, in my opinion, legitimately be called evolutionary linguistics.

Additionally, in the absence of direct empirical evidence about language evolution, the indirect evidence that we can gather, either by observing the structure of the world’s languages or by using artificial language learning experiments, can help us answer questions about our cognitive abilities.

Furthermore, Kirby (2002) outlined three timescales of language evolution: biological evolution (phylogenetic), cultural evolution (glossogenetic) and individual development (ontogenetic). All of these timescales interact and influence each other, so it’s necessary to consider all of them in language evolution research; to say that work on any one of these timescales is not language evolution research is to ignore the big picture.

[Figure: Kirby’s three timescales of language evolution (phylogenetic, glossogenetic, ontogenetic)]

So what’s the difference between language change and language evolution? As with almost everything, it’s not a black and white issue. I would say though that studies looking at universal trends in language, or cultural evolution experiments in the lab, are very relevant to language evolution. What I’d label historical linguistics, or studies on language change, however, is work which presents data from just one language, as it is hard to make inferences about the evolution of our universal capability for language with just one data point.
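To give a flavour of what modelling on the glossogenetic timescale looks like, here is a bare-bones iterated learning sketch in the spirit of Kirby’s work (a toy of my own, not his actual model): a lexicon is repeatedly passed through a transmission bottleneck to a learner biased toward reusing parts, and it settles into a stable compositional system.

```python
import random
random.seed(0)

shapes = ['circle', 'square', 'star']
colours = ['red', 'blue', 'green']
meanings = [(s, c) for s in shapes for c in colours]
syllables = ['ti', 'do', 'ka', 'mu', 'pe', 'lo']

def learn(observed):
    """Induce a full lexicon from a partial one, reusing observed parts.
    This learner is biased to treat syllable 1 as marking shape and
    syllable 2 as marking colour, inventing syllables only for gaps."""
    shape_marker, colour_marker = {}, {}
    for (s, c), word in observed.items():
        shape_marker.setdefault(s, word[:2])
        colour_marker.setdefault(c, word[2:])
    for s in shapes:
        shape_marker.setdefault(s, random.choice(syllables))
    for c in colours:
        colour_marker.setdefault(c, random.choice(syllables))
    return {(s, c): shape_marker[s] + colour_marker[c] for s, c in meanings}

# Generation 0: a holistic lexicon with no structure at all.
language = {m: random.choice(syllables) + random.choice(syllables)
            for m in meanings}

for generation in range(10):
    # Transmission bottleneck: the learner only ever sees 6 of the 9 meanings.
    shown = dict(random.sample(sorted(language.items()), 6))
    language = learn(shown)

for meaning, word in sorted(language.items()):
    print(meaning, '->', word)
# After repeated transmission the lexicon is compositional: each shape
# and each colour is consistently marked by its own syllable.
```

The structure here comes from the learner’s bias being amplified by repeated transmission through the bottleneck, which is exactly the kind of explanation that operates on the glossogenetic timescale rather than the phylogenetic one.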


Figure 1 from: Kirby, S. (2002). Natural language from artificial life. Artificial Life, 8(2), 185–215.