Animal Signalling Theory 101 – The Handicap Principle

One of the most important concepts in animal signalling theory, proposed by Amotz Zahavi in a seminal 1975 paper and developed in later works (Zahavi 1977; Zahavi & Zahavi 1997), is the handicap principle. A general definition is that females have evolved mating preferences for males who display exaggerated ornaments or behaviours that are costly to develop and maintain, and that this cost ensures an ‘honest’ signal of male genetic quality.

As a student I found it quite difficult to pin down a working definition for this important type of signal, mainly due to the apparent ‘coining fest’ that has taken place in the years since Zahavi outlined his original idea in 1975. For this reason, I have decided to provide a brief outline of the terminological and conceptual differences that exist in relation to the handicap principle, in an attempt to help anyone who might be struggling to navigate the literature.

Because Zahavi did not define the handicap principle mathematically, a number of competing interpretations can be found in the key literature, with scholars disagreeing as to the true nature of his original idea. Until John Maynard Smith and David Harper simplified and clarified things wonderfully in their 2003 book Animal Signals, at least four different interpretations of the handicap were, to my knowledge, being used and explored both empirically and through mathematical modelling, each with distinct differences that aren’t all that obvious to grasp without delving into the maths.
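For anyone who wants a concrete anchor before wading into those models, here is a minimal sketch of the logic most of them share (my own simplification, in the spirit of later formalisations such as Grafen (1990) and Maynard Smith & Harper (2003), rather than Zahavi’s own wording). Suppose being treated as high quality brings a benefit b (extra matings, say), and producing the ornament costs a genuinely high-quality male c_H but a low-quality male c_L, with c_L > c_H. The signal can then remain honest whenever

\[
c_H < b < c_L ,
\]

because advertising pays only for males who really are high quality. Roughly speaking, the interpretations alluded to above differ over how this cost is assumed to scale with quality and whether it must actually be paid at equilibrium.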

Continue reading “Animal Signalling Theory 101 – The Handicap Principle”

Tool making and Language Evolution

There’s an often-cited gap in toolmaking history: humans did not advance much beyond simple Oldowan tools (which date back to about 2.5 million years ago) until around 500,000 years ago, when progress became much faster. There is much debate as to whether this gap reflects a lack of the cognitive abilities needed to make more innovative tools, or whether it was an issue of manual dexterity.

A recent article by Faisal et al. (2010), “The Manipulative Complexity of Lower Paleolithic Stone Toolmaking”, tries to address this question by assessing the manipulative complexity of toolmaking tasks, from Oldowan tools through to the more advanced handaxes of much later periods.

A stone ‘core’ (A) is struck with a hammerstone (B) in order to detach sharp stone ‘flakes’. In Oldowan toolmaking (C, top) the detached flakes (left in photo) are used as simple cutting tools and the core (right in photo) is waste. In Acheulean toolmaking (C, bottom), strategic flake detachments are used to shape the core into a desired form, such as a handaxe. Both forms of toolmaking are associated with activation of the left ventral premotor cortex (PMv); Acheulean toolmaking activates additional regions in the right hemisphere, including the supramarginal gyrus (SMG) of the inferior parietal lobule, the right PMv, and the right hemisphere homolog of anterior Broca's area, Brodmann area 45 (BA 45).

The following is taken from a press release on EurekAlert:

Researchers used computer modelling and tiny sensors embedded in gloves to assess the complex hand skills that early humans needed in order to make two types of tools during the Lower Palaeolithic period, which began around 2.5 million years ago. The cross-disciplinary team, involving researchers from Imperial College London, employed a craftsperson called a flintknapper to faithfully replicate ancient tool-making techniques.

The team say that comparing the manufacturing techniques used for both Stone Age tools provides evidence of how the human brain and human behaviour evolved during the Lower Palaeolithic period.

The flintknapper who participated in the study created two types of tools: razor-sharp flakes and hand-held axes. He wore a data glove with sensors enmeshed in its fabric to record hand and arm movements during the production of these tools.

After analysing this data, the researchers discovered that both flake and hand-held axe manufacturing techniques were equally complex, requiring the same kind of hand and arm dexterity. This enabled the scientists to rule out motor skills as the principal factor holding up stone tool development.

The team deduced from their results that the axe-tool required a high level of brain processing.
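As an aside, it is worth spelling out what ‘assessing the manipulative complexity’ of a glove recording might look like in practice. The snippet below is a hypothetical illustration only (it is not the measure Faisal et al. actually report), but it captures one intuitive proxy: treat each recording as a matrix of joint-angle samples and count how many principal components are needed to explain most of the variance, i.e. how many independent degrees of freedom the hand is really exploiting.

import numpy as np

def effective_dimensionality(joint_angles, var_threshold=0.95):
    """Crude complexity proxy: number of principal components needed to
    explain `var_threshold` of the variance in one glove recording.
    `joint_angles` has shape (n_samples, n_sensors)."""
    X = joint_angles - joint_angles.mean(axis=0)                  # centre each sensor channel
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]   # component variances, descending
    explained = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(explained, var_threshold) + 1)

# Simulated recordings from a 22-sensor glove (made-up numbers, purely to exercise the
# function; the real study found flake and handaxe production comparably demanding at
# this motor level).
rng = np.random.default_rng(0)
low_dim = rng.standard_normal((5000, 5)) @ rng.standard_normal((5, 22))     # ~5 latent degrees of freedom
high_dim = rng.standard_normal((5000, 12)) @ rng.standard_normal((12, 22))  # ~12 latent degrees of freedom
print(effective_dimensionality(low_dim), effective_dimensionality(high_dim))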

This has implications for language evolution, as brain scans of toolmakers have shown that the areas activated overlap significantly with areas involved in discourse-level language processing, as well as in complex hand gestures. The study finishes with the following:

…the anatomical overlap of Late Acheulean toolmaking and right hemisphere linguistic processing may reflect the flexible “mapping” of diverse overt behaviors onto shared functional substrates in the brain. This implies that: 1) selection acting on either language or toolmaking abilities could have indirectly favored elaboration of neural substrates important for the other, and 2) archaeological evidence of Paleolithic toolmaking can provide evidence for the presence of cognitive capacities also important to the modern human faculty for language.

Read the original article at PLoS ONE:

http://www.plosone.org/article/info:doi/10.1371/journal.pone.0013718

Sexual Selection in the age of Mass Media

This month a red deer was crowned the Emperor of Exmoor, and Britain’s largest wild animal, in a series of newspaper articles. Today it emerged that the deer has been shot by a hunter willing to pay the presumably high price for the hunting rights.

When Richard Austin, the photographer who took the pictures for the articles, was asked if he felt responsible, he said that he had always believed the size of the deer’s antlers would get him killed in the end. The Emperor’s antlers may have kept other deer away, but they attracted far deadlier predators. Humans have been breeding animals – and killing them – for sport for a long time, but it’s only recently that prize targets can be advertised so widely. The size of the antlers may be a product of sexual selection, but now cultural processes are counteracting it. The Emperor was even killed during the mating season, before he could pass on his genes. If you’re a deer, it may be best to stay mid-sized rather than risk the growing threat of trophy hunting.

On Phylogenic Analogues

A recent post by Miko on Kirschner and Gerhart’s work on developmental constraints, and its implications for evolutionary biology, caught my eye because of the possible analogues that could be drawn with language in mind. It starts by noting that developmental constraints are the most intuitive of all the known constraints on phenotypic variation. Essentially, whatever evolves must evolve from its starting point, and it cannot ignore the features of the original. Thus, a winged horse would not occur, as six limbs would violate the basic bauplan of tetrapods. In the same way, a daughter language cannot evolve without taking into account the language it derives from, as well as language universals. But instead of viewing this as a constraint which limits the massive variation we see, biologically or linguistically, between different phenotypes, developmental constraints can be seen as a catalyst for regular variation.

A pretty and random tree showing variation among IE languages.

Looking back over my courses, I’m surprised by how little I’ve noticed (as distinct from how much was actually said) about the reasons for linguistic variation. The modes of change are often noted: <th> is fronted in Fife, for instance, leading to the ‘Firsty Ferret’ rather than the ‘Thirsty Ferret’ as a brew. However, why <th> is fronted at all isn’t explained beyond a cursory hypothesis. But that’s a bit beside the point. The point is that phenotypic variation is not necessarily random, as there are constraints – due to the “buffering and canalizing of development” – which limit variation to a defined range of possibilities. There clearly aren’t any homologues between biological embryonic processes and linguistic constraints, but there are developmental analogues: the input bottleneck (paucity of data) given to children, learnability constraints, the necessity for communication, certain biological constraints on production and perception, and so on. These all act on language so that variation occurs only within certain channels, many of which would be predictable.

Another interesting point raised by the article is the robustness of living systems to mutation. The buffering effect of embryonic development results in the accumulation of ‘silent’ variation, which has been termed evolutionary capacitance. Silent variation can lie quiet, accumulating without noticeably changing the phenotype, until environmental or genetic conditions unmask it. I’ve seen little research (though I expect there is plenty) on the theoretical implications of evolutionary capacitance for language change – in other words, on how much small variation a language can accumulate without affecting comprehension before a new language emerges (granting that where we draw the line around a ‘language’ is somewhat arbitrary, depending on the speech community). Are some languages more robust than others? Is robustness a quality which makes a language more likely to be used in multilingual settings? For instance, in New Guinea, if seven languages are mutually unintelligible, is it likely that the local lingua franca is forced by its environment to be more robust in order to maximise comprehension?
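To make the capacitance analogy slightly less hand-wavy, here is a toy simulation (entirely my own construction, with invented parameters rather than anything estimated from real data): silent variants accumulate in a speech community generation after generation without affecting intelligibility, until their combined load crosses a threshold and the variety abruptly stops being mutually comprehensible with its ancestor.

import random

def generations_until_split(rate=0.02, n_features=200, threshold=0.25, seed=None):
    """Toy model of 'evolutionary capacitance' in language change.
    Each generation, every one of `n_features` traits (phonological,
    morphological, lexical...) independently flips with probability `rate`.
    Flipped traits are 'silent' (they carry no communicative cost) until the
    proportion of changed traits exceeds `threshold`, at which point we count
    the variety as no longer mutually intelligible with its ancestor.
    All parameters are illustrative, not empirical estimates."""
    rng = random.Random(seed)
    changed = [False] * n_features
    generation = 0
    while sum(changed) / n_features <= threshold:
        generation += 1
        for i in range(n_features):
            if not changed[i] and rng.random() < rate:
                changed[i] = True   # variation is stored, not yet 'unmasked'
    return generation

# A more robust (better buffered) variety corresponds to a lower per-generation
# change rate, so accumulated variation takes longer to be unmasked.
print(generations_until_split(rate=0.02, seed=1))    # baseline variety
print(generations_until_split(rate=0.005, seed=1))   # more robust variety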

The article goes on to discuss the cost of robustness: stasis. This can be seen clearly in Late Latin, which was more robust than its daughter languages because it was needed for communication across regions where the vernacular had already branched off into the Romance languages, and an older, shared form was necessary for mutual comprehension. Thus, Latin remained in use well after its spoken descendants had evolved into other languages. Another example would be Homeric Greek, which retained many features lost in Attic, Doric, Koine, and other dialects, as it was used only in a particular context and was therefore resistant to change. This has all been studied before, and better than I can sum it up here. But the point I am making is that clear analogues can be drawn, and some interesting theories regarding language become apparent only when seen in this light.

A good example, also covered in the post, is what Kirschner and Gerhart call exploratory processes. These are processes which allow variation to occur in situations where other, linked variables are forced to change. The example given is the growth of bone length, which requires corresponding changes in the muscular, circulatory, and other dependent systems. Exploratory processes allow for that future change to occur in the other systems; that is, they expedite plasticity. So, for instance, an ad hoc linguistic example would be the loss of a fixed word order, which would require morphology to step in and fill the gap. In such a case, particles or affixes or the like would have had to pave the way for case markers to evolve, and so would already have been present to some extent in the original word-order system. (This may not be the best example, but I hope my point comes across.)

Naturally, much of this will have seemed intuitive. But, as Miko stated, these are useful concepts for thinking about evolution; and, in my own case especially, the basics ought to be brought back under scrutiny fairly frequently. Which is justification enough for this post. As always, comments are appreciated and accepted. And a possible future post: clade selection as a nonsensical way to approach phylogenic variation.

References:

Caldwell, M. (2002). From fins to limbs to fins: Limb evolution in fossil marine reptiles. American Journal of Medical Genetics, 112(3), 236-249. DOI: 10.1002/ajmg.10773

Gerhart, J., & Kirschner, M. (2007). The theory of facilitated variation. Proceedings of the National Academy of Sciences, 104(Suppl. 1), 8582-8589. DOI: 10.1073/pnas.0701035104


Domain-General Regions and Domain-Specific Networks

The notion of a domain-specific language acquisition device is something that still divides linguists. Yet, in an ongoing debate spanning at least several decades, there is still no evidence, at least to my knowledge, for the existence of a Universal Grammar. You’d be forgiven for thinking that the problem was solved many years ago, though, especially if you were to believe the now sixteen-year-old words of Massimo Piattelli-Palmarini (1994):

The extreme specificity of the language system, indeed, is a fact, not just a working hypothesis, even less a heuristically convenient postulation. Doubting that there are language-specific, innate computational capacities today is a bit like being still dubious about the very existence of molecules, in spite of the awesome progress of molecular biology.

Suffice it to say, the analogy between scepticism about molecules and scepticism about Universal Grammar is a dud, even if the latter does turn out to exist. Why? Well, as stated above: we still don’t know if humans have, or for that matter even require, an innate ability to process certain grammatical principles. The rationale for thinking that we have some innate capacity for acquiring language can be delineated into a twofold argument: first, children seem adept at rapidly learning a language, even though they aren’t exposed to all of the data; and second, cognitive science told us that our brains are massively modular, or at the very least, that they should contain some component that is domain-specific to language (see the FLB/FLN distinction in Hauser, Chomsky & Fitch, 2002). I think the first point has been done to death on this blog: cultural evolution can provide an alternative explanation of how children successfully learn language (see here and here, and Smith & Kirby, 2008). What I haven’t really spoken about is the mechanism behind our ability to process language, or to put it differently: how are our brains organised to process language?

Continue reading “Domain-General Regions and Domain-Specific Networks”

The 20th Anniversary of Steven Pinker & Paul Bloom: Natural Language and Natural Selection (1990)

The day before yesterday, Wintz mentioned two important birthdays in the field of language evolution (see here): first, Babel’s Dawn turned four; and second, as both Edmund Blair Bolles and Wintz pointed out, Steven Pinker’s and Paul Bloom’s seminal paper “Natural Language and Natural Selection” (a preprint can be found here) celebrates its 20th anniversary.
Wintz wrote that he planned on writing
“a post on Pinker and Bloom’s original paper, and how the field has developed over these last twenty years, at some point in the next couple of weeks,”
and I thought I’d also offer a short perspective on the paper by reposting a slightly edited post I wrote on it in 2008 (yes, I know, I do a lot of reposting of old material, but I’m planning on writing more new stuff as well, I promise 😉 ).
So here we go:

Some Links #19: The Reality of a Universal Language Faculty?

I’ve just noticed that it’s been almost a month since I last posted some links. What this means is that many of the links I planned on posting are terribly out of date, and these last few days I haven’t really had the time to keep abreast of the latest developments in the blogosphere (new course + presentation at Edinburgh + current cold = a lethargic Wintz). I’m hoping next week will be a bit nicer to me.

The reality of a universal language faculty? Melodye offers up a thorough post on the whole Universal Grammar hypothesis, and on why it is a weak position to take, drawing mostly from the BBS issue dedicated to Evans & Levinson’s (2009) paper on the myth of language universals. Key paragraph:

When we get to language, then, it need not be surprising that many human languages have evolved similar means of efficiently communicating information. From an evolutionary perspective, this would simply suggest that various languages have, over time, ‘converged’ on many of the same solutions.  This is made even more plausible by the fact that every competent human speaker, regardless of language spoken, shares roughly the same physical and cognitive machinery, which dictates a shared set of drives, instincts, and sensory faculties, and a certain range of temperaments, response-patterns, learning facilities and so on.  In large part, we also share fairly similar environments — indeed, the languages that linguists have found hardest to document are typically those of societies at the farthest remove from our own (take the Piraha as a case in point).

My own position on the matter is straightforward enough: I don’t think the UG perspective is useful. One influential attempt, by Pinker and Bloom (1990), argued that this language module, in all its apparent complexity, could not have arisen by any means other than natural selection – as did the eye and many other complex biological systems. Whilst I agree with the sentiment that natural selection, and more broadly evolution, is a vital tool in discerning the origins of language, I think Pinker & Bloom initially overlooked the significance of cultural evolutionary and developmental processes. If anything, I think the debate surrounding UG has held back the field in some instances, even if some of the more intellectually vibrant research emerged as a product of arguing against its existence. This is not to say I don’t think our capacity for language has been honed via natural selection: it was probably a very powerful pressure in shaping the evolutionary trajectory of our cognitive capacities. What you won’t find, however, is a strongly constrained language acquisition device dedicated to the processing of arbitrary, domain-specific linguistic properties, such as X-bar theory and case marking.

Babel’s Dawn Turns Four. In the two and a half years I’ve been reading Babel’s Dawn, it has served as a port of call for informative articles, some fascinating ideas and, lest we forget, some great writing on the evolution of language. Edmund Blair Bolles highlights the blog’s fourth anniversary by referring to another, very important, birthday:

This blog’s fourth anniversary has rolled around. More notably, the 20th anniversary of Steven Pinker and Paul Bloom‘s famous paper, “Natural Language and Natural Selection,” seems to be upon us. Like it or quarrel with it, Pinker-Bloom broke the dam that had barricaded serious inquiry since 1866 when the Paris Linguistic Society banned all papers on language’s beginnings. The Journal of Evolutionary Psychology is marking the Pinker-Bloom anniversary by devoting its December issue to the evolution of language. The introductory editorial, by Thomas Scott-Phillips, summarizes language origins in terms of interest to the evolutionary psychologist, making the editorial a handy guide to the differences between evolutionary psychology and evolutionary linguistics.

Hopefully I’ll have a post on Pinker and Bloom’s original paper, and on how the field has developed over these last twenty years, at some point in the next couple of weeks. I think its historical importance will, to echo Bolles, lie in its value in opening up the field, with the questions of language origins and evolution turning into something worthy of serious intellectual investigation.

Other Links

Hypnosis reaches the parts brain scans and neurosurgery cannot.

Are Humans Still Evolving? (Part Two is here).

The Limits of Science.

On Language — Learning Language in Chunks.

Farmers, foragers, and us.

Tweet This.

On Music and The Brain.

Why I spoofed science journalism, and how to fix it.

The adaptive space of complexity.

Genetic Anchoring, Tone and Stable Characteristics of Language

In 2007, Dan Dediu and Bob Ladd published a paper claiming there was a non-spurious link between the non-derived alleles of ASPM and Microcephalin and tonal languages. The key idea emerging from this research is that certain alleles may bias language acquisition or processing, subsequently shaping the development of a language within a population of learners. Investigating potential correlations between genetic markers and typological features may therefore open up new avenues of thinking in linguistics, particularly in our understanding of the complex levels at which genetic and cognitive biases operate. Specifically, Dediu & Ladd refer to three necessary components underlying the proposed genetic influence on linguistic tone:

[…] from interindividual genetic differences to differences in brain structure and function, from these differences in brain structure and function to interindividual differences in language-related capacities, and, finally, to typological differences between languages.

That the genetic makeup of a population can indirectly influence the trajectory of language change sets this work apart from previous hypotheses linking genetics and linguistics. First, it is distinct from attempts to correlate the genetic features of populations with language families (e.g. Cavalli-Sforza et al., 1994). And second, it differs from Pinker and Bloom’s (1990) assertion that genetic underpinnings led to a language-specific cognitive module. Furthermore, the authors do not argue that languages act as a selective pressure on ASPM and Microcephalin; rather, the bias is a selectively neutral byproduct. Since then, there have been numerous studies covering these alleles, with the initial claims for positive selection (Evans et al., 2004) coming under dispute (Yu et al., 2007), as have claims for a direct relationship between these alleles and dyslexia, specific language impairment, working memory, IQ, and head size (Bates et al., 2008).

A new paper by Dediu (2010) delves further into this potential relationship between ASPM/MCPH1 and linguistic tone by suggesting that this typological feature is genetically anchored to the aforementioned alleles. Generally speaking, cultural and linguistic processes proceed on shorter timescales than genetic change; however, in tandem with other recent studies (see my post on Greenhill et al., 2010), some typological features might be more consistently stable than others. Reasons for this stability are broad and varied. For instance, frequency of word use within a population is a good predictor of rates of lexical evolution (Pagel et al., 2007). Genetic factors, then, may also be a stabilising influence, with Dediu claiming linguistic tone is one such instance:

From a purely linguistic point of view, tone is just another aspect of language, and there is no a priori linguistic reason to expect that it would be very stable. However, if linguistic tone is indeed under genetic biasing, then it is expected that its dynamics would tend to correlate with that of the biasing genes. This, in turn, would result in tone being more resistant to ‘regular’ language change and more stable than other linguistic features.
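For a feel of the kind of population-level test involved, the sketch below runs a bare-bones permutation correlation between a (hypothetical) derived-allele frequency and whether each language has lexical tone. This is only a cartoon of the logic: Dediu and Ladd’s actual analysis involves many more languages and features and, crucially, controls for geographic distance and historical relatedness, which a naive correlation like this does not.

import numpy as np

def permutation_correlation(allele_freq, is_tonal, n_perm=10_000, seed=0):
    """Pearson correlation between a continuous predictor (population frequency
    of the derived ASPM/Microcephalin haplogroups; the numbers below are invented)
    and a binary outcome (1 = the language has lexical tone), with a permutation
    p-value. Purely illustrative: no control for geography or language family."""
    rng = np.random.default_rng(seed)
    x = np.asarray(allele_freq, dtype=float)
    y = np.asarray(is_tonal, dtype=float)
    observed = np.corrcoef(x, y)[0, 1]
    perms = np.array([np.corrcoef(x, rng.permutation(y))[0, 1] for _ in range(n_perm)])
    p_value = np.mean(np.abs(perms) >= abs(observed))
    return observed, p_value

# Entirely made-up data for eight populations (high derived-allele frequency paired
# with non-tonal languages, mirroring the direction of the claimed link):
freq = [0.10, 0.15, 0.55, 0.60, 0.70, 0.20, 0.65, 0.12]
tone = [1, 1, 0, 0, 0, 1, 0, 1]
r, p = permutation_correlation(freq, tone)
print(f"r = {r:.2f}, permutation p = {p:.3f}")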

Continue reading “Genetic Anchoring, Tone and Stable Characteristics of Language”

Memory, Social Structure and Language: Why Siestas affect Morphological Complexity

Children are better than adults at learning second languages. Children find it easy, can do it implicitly, and achieve native-like competence. However, as we get older we find learning a new language difficult: we need explicit teaching, and we find some aspects, such as grammar and pronunciation, hard to master. What is the reason for this? The foremost theories suggest it is linked to memory constraints (Paradis, 2004; Ullman, 2005). Children find it easy to incorporate knowledge into procedural memory – the memory system that encodes procedures and motor skills and has been linked to grammar, morphology and pronunciation. Procedural memory atrophies in adults, but adults develop good declarative memory – the memory system that stores facts and is used for retrieving lexical items. This seems to explain the difference between adults and children in second language learning. However, this is a proximate explanation. What about the ultimate explanation of why languages are like this?

Continue reading “Memory, Social Structure and Language: Why Siestas affect Morphological Complexity”

A history of evolution pt. 2: The Wealth of Nations, Populations and On the Origin

Title page of the original edition of Malthus' 1798 work

Continue reading “A history of evolution pt. 2: The Wealth of Nations, Populations and On the Origin”