Jim Hurford: What is wrong, and what is right, about current theories of language, in the light of evolution?

As I mentioned in my previous post, the 2012 Poznań Linguistic Meeting (PLM) features a thematic section on “Theory and evidence in language evolution research.” This section’s invited speaker was Jim Hurford, who is Emeritus Professor at Edinburgh University. Hurford is a very eminent figure in language evolution research and has published two very influential and substantive volumes on “Language in the Light of Evolution”: The Origins of Meaning (2007) and The Origins of Grammar (2011).

In his talk, Hurford asked “What is wrong, and what is right, about current theories of language, in the light of evolution?” (you can find the abstract here).

Hurford presented two extreme positions on the evolution of language (which nevertheless are advocated by quite a number of evolutionary linguists) and then discussed what kinds of evidence and lines of reasoning support or seem to go against these positions.

Extreme position A, which basically is the Chomskyan position of Generative Grammar, holds that:

(1) There was a single biological mutation, which (2) created a new, unique cognitive domain, which then (3) immediately enabled the unlimited command of complex structures via the computational operation of Merge. Further, according to this extreme position, (4) this domain is used primarily for advanced private thought and only derivatively for public communication, and lastly (5) it was not promoted by natural selection.

On the other end of the spectrum there is extreme position B, which holds that:

(1) There were many cumulative mutations, which (2) allowed the expanding interactions of pre-existing cognitive domains, creating a new domain which, however, is not characterized by principles unique to language. This then (3) gradually enabled the command of successively more complex structures. Also, on this view, (4) this capacity was used primarily for public communication and only derivatively for advanced private thought, and (5) it was promoted by natural selection.

Hurford then went on to discuss which of these individual points were more likely to capture what actually happened in the evolution of language.

He first looked at the debate over the role of natural selection in the evolution of language. In Generative Grammar there is a biological, neurological mechanism or computational apparatus, called Universal Grammar (UG) by Chomsky, which determines which languages human infants can possibly acquire. In earlier generative paradigms, like the Government & Binding approach of the 1980s, UG was thought to be extremely complex. What is more, some of these factors and structures seemed extremely arbitrary. Thus, from this perspective, it seemed inconceivable that they could have been favoured by natural selection. This is illustrated quite nicely in a famous quote by David Lightfoot:

“Subjacency has many virtues, but I am not sure that it could have increased the chances of having fruitful sex.” (Lightfoot 1991: 69)

Continue reading “Jim Hurford: What is wrong, and what is right, about current theories of language, in the light of evolution?”

PLM2012 Coverage: Pleyer & Winters: Integrating Cognitive Linguistics and Language Evolution Research

Today James and I are giving a talk at the Poznań Linguistic Meeting (PLM) on “Integrating Cognitive Linguistics and Language Evolution Research.”  It’s a talk in a thematic section on “Theory and evidence in language evolution research”, which I hope to blog about a bit tomorrow.

We’ll come back to our talk later on and talk about it in a bit more detail but for the time being here’s our abstract:

Cognitive Linguistics is a school of modern linguistic theory and practice that sees language as an integral part of cognition and tries to explain linguistic phenomena with relation to general cognitive capacities (e.g. Evans, 2012; Geeraerts & Cuyckens, 2007). In this talk, we argue that there is a wealth of relevant research and theorizing in Cognitive Linguistics that can make important contributions to the study of the evolution of language and cognition. This is in line with recent developments in the field, which have attempted to apply key insights from Cognitive Linguistics on the nature of language and its relation to cognition and culture to the question of language evolution and change (cf. e.g. Evans, 2012; Pleyer, 2012; Sinha, 2009; Tomasello, 2008).

We illustrate this proposal with relation to the three timescales that have a bearing on explicating the structure and evolution of language (Kirby, 2012):

  1. The ontogenetic timescale of individuals acquiring language
  2. The glossogenetic timescale of historical language change
  3. The phylogenetic timescale of the evolution of the species

Continue reading “PLM2012 Coverage: Pleyer & Winters: Integrating Cognitive Linguistics and Language Evolution Research”

PLM2012 Coverage: Dirk Geeraerts: Corpus Evidence for Non-Modularity

The first plenary talk at this year’s Poznań Linguistic Meeting was by Dirk Geeraerts, who is professor of linguistics at the University of Leuven, Belgium.

In his talk, he discussed the possibility that corpus studies could yield evidence against the supposed modularity of language and mind endorsed by, for example, Generative linguists (you can find the abstract here).

Geeraerts began his talk by stating that there seems to be a paradigm shift in linguistics, from an analysis of structure based on introspection to analyses of behaviour based on quantitative linguistic studies. More and more researchers are adopting quantified corpus-based analyses, which test hypotheses through statistical testing of language behaviour, using experimental data or large corpora as their data-sets.

Multifactoriality

One further trend Geeraerts identified in this paradigm shift is that these kinds of analyses are becoming more and more multifactorial, in that they include multiple different factors both internal and external to language. Importantly, this way of doing linguistics is fundamentally different from the mainstream late-20th-century view of linguistics.

What is important to note here, when comparing this trend to other approaches to studying language, is that multifactoriality goes against Chomsky’s idea of grammar as an ideal mental system that can be studied through introspection. On the traditional view, it is supposed that there is some kind of ideal language system to which everyone has access. This line of reasoning then justifies introspection as a method for studying the whole system of language and making valid generalizations about it. However, this goes against the emerging corpus-linguistic view of language, on which a random speaker is not representative of the linguistic community as a whole. The linguistic system is not homogeneous across all speakers, and therefore introspection doesn’t suffice.

Modularity

The main thrust of Geeraerts’ talk was that research within this emerging paradigm might also call into question the assumption of the modularity of the mind (as advocated, for example, by Jerry Fodor or Neil Smith): the view of the mind as a compartmentalized system consisting of discrete components or modules (for example, the visual system or language) plus a central processor.

Continue reading “PLM2012 Coverage: Dirk Geeraerts: Corpus Evidence for Non-Modularity”

The Oxford Handbook of Language Evolution – Book Review on Linguist List

My review of Maggie Tallerman’s and Kathleen R. Gibson’s “Oxford Handbook of Language Evolution” was published on Linguist List yesterday (you can read it here).

Here’s my opinion in a nutshell: This is a great volume and I’ve really learned a lot from reading it. The authors have done a great job of remaining accessible to an interdisciplinary audience. It’s a great place to start if you’re interested in language evolution or want to get a quick overview of a specific topic in language evolution research. I would’ve liked it if the chapters had a “Further Reading” section, however (like Christiansen and Kirby’s 2003 volume). Some chapters felt a bit too short to me (Steven Mithen’s chapter on “Musicality and Language”, for example, is only 3 pages long; Merlin Donald’s chapter on “the Mimetic Origins of Language” is 4 pages long). I also feel that some topics, like language acquisition, could’ve been dealt with more extensively, but then again, if you compile a handbook, it’s impossible to make everybody happy. Other recent book-length overviews of language evolution (e.g. Fitch’s 2010 book and Hurford’s 2007 and 2012 tomes) are more detailed, but also more technical, and they don’t cover as many topics. To quote my review:

Overall, the Oxford Handbook of Language Evolution is a landmark publication in  the field that will serve as a useful guide and reference work through the  entanglements and pitfalls of the language evolution jungle for both experienced  scholars and newcomers alike.

One last thing I’m particularly unhappy about is that the handbook doesn’t have an Acacia Tree on the cover – which seems like a missed opportunity (kidding).

I’ll try to write about some of my favourite chapters in more detail somewhere down the road/in a couple of weeks.

Evolang Coverage: Simon Fisher: Molecular Windows into Speech and Language

In his clear and engaging plenary talk, Simon Fisher, director of the Language & Genetics Department at the Max Planck Institute for Psycholinguistics in the Netherlands, gave a summary of the current state of research on what molecular biology and genetics can contribute to the question of language evolution. Fisher was involved in the discovery of the (in)famous FOXP2 gene, which was found to be linked to hereditary language impairment in an English family. He has also done a lot of subsequent work on this gene, so naturally it was also the main focus of his talk.

But before he dealt with this area, he dispelled what he called the ‘abstract gene myth’. According to Fisher, it cannot be stressed enough that there is no direct relation between genes and behavior and that we have to “mind the gap”, as he put it. There is a long chain of interactions and relations standing between genes on the one side and speech and language on the other: DNA is related to the building of proteins, which is related to the development of cells. These in turn are related to neural circuits, which relate to the human brain as a whole, which is then related to speech and language.

So when we look at this complex net of relations, what we can say is that there is a subset of children who grow up in normal environments but still do not develop normal language skills. From a genetic perspective it is of interest that among these children there are cases where the impairments cannot be explained by other transparent causes like cerebral palsy, hearing loss, etc. Moreover, there are cases in which language disorders are heritable. This suggests that genetic factors play a role in some of these impairments.

The most famous example of such heritable language impairment is the English KE family, in which affected family members lack one functional copy of the FOXP2 gene. These family members exhibit impaired speech development. Specifically, they have difficulty learning and producing the sequences of complex oro-facial movements that underlie speech. But they also show deficits in a wide range of language-related skills, including spoken and written language. It thus has to be emphasized that the disrupted FOXP2 copy seems to affect all aspects of linguistic development. It is also important that it is not accompanied by general motor dyspraxia.

In general, non-verbal deficits are not central to the disorder. Affected individuals start out with a normal non-verbal IQ but then don’t keep up with their peers, which is very likely related to the fact that possessing non-impaired language opens the door to enhancing intelligence in various ways, something that people with only one functional FOXP2 copy cannot take advantage of to the same degree. In general, deficits in verbal cognition are much more severe and wide-ranging than other possible impairments. It is also important to note that after FOXP2 was discovered in the KE family, researchers found a dozen further cases in which a damaged FOXP2 gene led to language-related problems.

FOXP2 is a so-called transcription factor, which means that it can activate and repress other genes. As Fisher points out, FOXP2 in a way functions as a kind of ‘genetic dimmer switch’ that tunes down the expression of other genes. In this context, it should be clear that FOXP2 is not “the gene for language.” Versions of FOXP2 are found in highly similar form in vertebrate species that lack speech and language, so it must have played very ancient roles in the brains of our common ancestors. Nor is FOXP2 exclusively expressed in the brain: it is also involved in the development of the lungs, the intestines and the heart. However, work by Simon Fisher and his colleagues shows that FOXP2 is important for neural connectivity. Interestingly, mice with one damaged FOXP2 copy are absolutely normal in their baseline motor behavior, yet they have significant deficits in what Fisher called ‘voluntary motor learning’.

From an evolutionary perspective, it is relevant that there have been very few changes to the gene over the course of vertebrate evolution. However, there seem to have been more changes to the gene since our split from the chimpanzee lineage than in the entire lineage from the common ancestor with mice up to the chimpanzee. This means that, when it comes to the FOXP2 protein, a chimpanzee is actually closer to a mouse than to a human.

Overall, what current knowledge about the molecular bases of language tells us is that these uniquely human capacities build on evolutionarily ancient systems. However, much more work is needed to understand the influence of FOXP2 at the molecular and cellular levels, and how these levels relate to the development of neural circuits, the brain, and finally our capacity for fully-formed, complex human language.

Evolang coverage: Animal Communication and the Evolution of Language

Are there more differences or more similarities between human language and other animal communication systems? And what exactly does it tell us if we find precursors and convergent evolution of aspects similar to human language? These were some of the key questions at this year’s Evolang’s Animal Communication and Language Evolution Workshop (proceedings for all workshops here).

As Johan Bolhuis pointed out, ever since Darwin (1871), comparing apes and humans has seemed like the most logical thing to do when trying to find out more about the evolution of traits presumed to be special to humans. Apes, and especially chimpanzees, so the reasoning goes, are after all our closest relatives and serve as the best models for the capacities of our prelinguistic hominid ancestors. The comparative study of language has gained new attention since the controversial Hauser, Chomsky & Fitch (2002) paper in Science. For example, their claim that the capacity for producing and understanding recursive embedding of a certain kind is uniquely human was taken up by researchers (including Hauser and Fitch themselves) who looked for syntactic abilities in other animals. More recently, songbirds have also become a centre of attention in the animal communication literature, though pretty much everything in this area remains quite controversial.

What is important here, according to the second workshop organizer, Kazuo Okanoya, is that when doing research and theorizing we should not treat humans as a special case, but as on a continuum with other animals. And this also holds for language: in explaining language evolution, we don’t want to speak of a sudden burst that gave us something wholly different from anything else in the animal kingdom, but rather of a continuous transition and emergence of language. For this, it is important to study other animals in closer detail if we are to arrive at a continuous explanation of language emergence. Granted, humans are special. But simply saying they are special isn’t scientific; we need to detail in what ways humans are special.

Regarding the central question of whether there are more differences or more similarities between language and animal communication, and what exactly these similarities and differences are, opinions of course differ. After the first speaker didn’t turn up, Irene Pepperberg gave an impromptu talk on her work with parrots. Taking the example of a complex exclusion task, she argued that symbol-trained animals can do things other animals simply cannot, and that this might be tied to the complex cognitive processing that occurs during language (and vocal) learning. She also stressed that birds can serve as good models for the evolution of some aspects underlying language because they developed vocal learning capacities broadly similar to those of humans, in a process referred to as parallel evolution, convergence, or analogy. Responding to another prevalent criticism, Pepperberg countered the view that animals like Alex and Kanzi are simply exceptional one-offs: not every human is a Picasso or a Beethoven either. What Picasso and Beethoven show us is what humans can be capable of, and the same holds for Alex, Kanzi and other animals. No one would argue that animals have language in the sense that humans do. But given that they have the brain structures and cognitive capacities that allow more complex vocal learning and cognitive processing, we can use them as a model of how these processes might have got started. There is still much work to be done, especially on questions such as what animals like parrots actually need and use these complex vocal and cognitive capacities for in the wild.

Whereas Dominic Mitchell argued in his talk that there is indeed a discontinuity between animal communication and human language, drawing on animal signalling theory (e.g. Krebs & Dawkins 1984), Ramon Ferrer-i-Cancho, who spoke after him, focused more on the similarities. Specifically, he showed quite convincingly that statistical patterns in language, like Zipf’s law for word frequencies, the law of brevity (i.e. that more frequent words tend to be shorter), and the Menzerath-Altmann law (the longer the word, the shorter its syllables) can also be found in the communicative behaviours of other animals. Zipf’s law for word frequencies, for example, can also be observed in the whistles of bottlenose dolphins. A criticism of Zipf’s law in the Chomskyan tradition holds that it applies just as well to random typing or rolling dice, but Ferrer-i-Cancho showed that this is simply not the case: if you plot the actual distribution produced by random typing or dice rolling, it is quite different from the distribution predicted by Zipf’s law once you look at it in any detail. The law of brevity can also be found in chickadee calls and in the vocalizations of Formosan macaques and common marmosets. There is some controversy over whether this law really holds for all of these species, especially common marmosets, but Ferrer-i-Cancho presented a reanalysis of this criticism in which he showed that there are no “true exceptions” to the law. He proposes an information-theoretic explanation for these kinds of behavioural universals, in which communicative solutions converge on a local optimum between differing communicative demands. He also proposes that considerations like this should lead us to change our perspective on, and concepts of, universals quite radically: instead of looking only for linguistic universals, we should also look for universals of communicative behavior and universal principles beyond human language, such as cognitive effort minimization and mean code length minimization.
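The contrast Ferrer-i-Cancho drew between Zipfian distributions and “random typing” is easy to play with yourself. Here is a toy sketch (the alphabet, space probability and vocabulary size are all invented for illustration, not taken from his data) that builds rank-frequency lists for an idealized Zipfian vocabulary and for monkey-at-a-typewriter text:

```python
import random
from collections import Counter

def rank_frequencies(words):
    """Return word frequencies sorted from most to least frequent."""
    return sorted(Counter(words).values(), reverse=True)

def random_typing(n_chars, alphabet="abcde", p_space=0.2, seed=0):
    """'Monkey typing': random letters, with spaces as word breaks."""
    rng = random.Random(seed)
    chars = [" " if rng.random() < p_space else rng.choice(alphabet)
             for _ in range(n_chars)]
    return "".join(chars).split()

# An idealized Zipfian sample: frequency of rank r proportional to 1/r
zipf_words = [f"w{r}" for r in range(1, 101) for _ in range(1000 // r)]

zipf_freqs = rank_frequencies(zipf_words)
monkey_freqs = rank_frequencies(random_typing(50_000))

# Under Zipf's law, log-frequency falls roughly linearly with log-rank;
# random typing instead produces plateau-like steps, because all words
# of the same length have (nearly) the same probability.
print(zipf_freqs[:5])    # [1000, 500, 333, 250, 200]
print(monkey_freqs[:5])
```

Plotting both lists on log-log axes makes the difference Ferrer-i-Cancho pointed to visible at a glance: a straight line for the Zipfian sample, a staircase for the random typist.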

Returning to birds, Johan Bolhuis picked up the issue of similarities and differences again and showed that there is in fact a staggering amount of similarity between songbirds and humans. For example, songbirds also learn their songs from a tutor (most often their father) and make almost perfect copies of these songs. As Hauser, Chomsky & Fitch (2002) already pointed out, this kind of signal copying seems not to be present in apes and monkeys. But the similarities go even further than that: songbirds “babble” before they can sing properly (a period called ‘subsong’), and they also have a sensitive period for learning. And there are not only behavioural but also neural similarities. In fact, songbirds seem to have a neural organization broadly similar to the human separation between Broca’s area (mostly concerned with production, although this simple view is of course not the whole story, as James, for example, has shown) and Wernicke’s area (mostly concerned with comprehension). There seem to be regions that are exclusively activated when the animal hears songs (a kind of Wernicke-type region) and regions that show neuronal activation when the animal sings, the so-called ‘song system’. Interestingly, activation in the listening regions is also related to how much the animal has learned about the particular song it is hearing: the better it knows the song, the more activation there is. This suggests that these regions might be related to song memory. In lesion studies in which the regions involved in listening to a known song were damaged, recognition of the songs was indeed impaired, but not wholly wiped out, while song production was completely unimpaired, mirroring the results from patients with lesions to either Broca’s or Wernicke’s areas. Zebra finches also show some degree of lateralization, in that there is stronger activation in the left hemisphere when they hear a song they know, but not when the song they hear is unfamiliar.
Although FOXP2 is not a “language gene” (something that can’t be stressed enough), it is interesting that songbirds in which the bird version of FOXP2 was “knocked out” show incomplete learning of the tutor songs.

Overall, Bolhuis concluded that what we can learn from looking at birdsong is that there are three significant factors involved in the evolution of language:

  1. Homology in the neural and genetic mechanisms, due to our shared evolutionary past with birds.
  2. Convergence, or parallel evolution, of auditory-vocal learning.
  3. Specialisations, specifically human language syntax, which, as Bolhuis argued in a paper with Bob Berwick and Kazuo Okanoya, is still vastly different in complexity and hierarchical embedding from anything in songbird vocal behavior.

This focus on syntactic ability of course stems from a generativist perspective on these issues, and future research, especially from newer and up-and-coming linguistic schools like Cognitive Linguistics and Construction Grammar (cf. Hurford 2012), is sure to shed more light on how exactly human language works, what kinds of elements and constructions it is made of, how these compare to what is found in animals, and whether there really is a single, unitary thing like the fabled “syntactic ability” of humans (cf. e.g. work by Ewa Dabrowska).

 

Evolang Previews: Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition

Evolang is busy this year – 4 parallel sessions and over 50 posters. We’ll be posting a series of previews to help you decide what to go and see. If you’d like to post a preview of your work, get in touch and we’ll give you a guest slot.

Michael Pleyer: Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition. Poster Session 1, 17:20-19:20, “Hall” (2F), 14th March

Perspective-taking and -setting in language, cognition and interaction is crucial to the creation of meaning and to how people share knowledge and experiences. As I’ve already written about on this blog (e.g. here, here, here), it probably also played an important part in the story of how human language and cognition came to be. In my poster presentation I argue that a particular school of linguistic thought, Cognitive Linguistics (e.g. Croft & Cruse 2004; Evans & Green 2006; Geeraerts & Cuyckens 2007; Ungerer & Schmid 2006), has quite a lot to say about the structure and cognitive foundations of perspective-taking and -setting in language.

An interdisciplinary dialogue between Cognitive Linguistics and research on the evolution of language might therefore prove highly profitable. To illustrate this point, I offer an example of one potential candidate for such an interdisciplinary dialogue, so-called Blending Theory (e.g. Fauconnier & Turner 2002), which, I argue, can serve as a useful model for the kind of representational apparatus that needed to evolve in the human lineage to support linguistic interaction. In this post I will not say much about Blending Theory (go see my poster for that 😉 or browse here), but I want to elaborate a bit on Cognitive Linguistics and why it is a promising school of thought for language evolution research, something I also elaborate on in my proceedings paper.

So what is Cognitive Linguistics?

Evans & Green (2006: 50) define Cognitive Linguistics as

“the study of language in a way that is compatible with what is known about the human mind, treating language as reflecting and revealing the mind.”

Cognitive Linguistics sees language as tightly integrated with human cognition. What is more, a core assumption of Cognitive Linguistics is that principles inherent in language can be seen as instantiations of more general principles of human cognition. This means that language is seen as drawing on mechanisms and principles that are not language-specific but general to cognition, like conceptualisation, categorization, entrenchment, routinization, and so forth.

From the point of view of the speaker, the most important function of language is that it expresses conceptualizations, i.e. mental representations. From the point of view of the hearer, linguistic utterances then serve as prompts for the dynamic construction of a mental representation. Crucially, this process of constructing a mental representation is fundamentally tied to human cognition and our knowledge of the world around us. Continue reading “Evolang Previews: Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition”

Animal Cognition & Consciousness (II): Metacognition & Mentalizing

As I wrote in my last post, three kinds of behaviours are most often discussed in debates about animal consciousness and cognition:

“1. Mirror self-recognition;

2. Tests of metacognition;

3. Metacognition of others’ mental states” (Gómez 2009: 45)

After having discussed the first capacity in my previous post, I will discuss the latter two in this post, starting with metacognition, that is, being aware of one’s own knowledge states, and then turn to being aware of others’ mental states.

Metacognition.

Being aware of one’s own mental states, i.e. reflective consciousness, surely seems to be one of the most crucial components of self-awareness. In one paradigm used to test for metacognitive awareness, monkeys were trained to select, from a set of two or more images, the one identical to an image they had been shown earlier. As is to be expected, the monkeys’ performance progressively deteriorated the longer the delay between the sample image and the selection task.
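The logic of this delayed matching-to-sample paradigm can be sketched as a toy simulation. To be clear, the exponential memory decay and its half-life below are invented for illustration, not taken from the actual monkey data; the sketch only shows why accuracy should fall from near-perfect toward chance as the delay grows:

```python
import random

def run_trials(delay, n_trials=10_000, n_choices=2, halflife=5.0, seed=42):
    """Toy delayed matching-to-sample: the subject recalls the sample
    with a probability that decays exponentially with the delay; if it
    forgets, it guesses among the n_choices images at chance level."""
    rng = random.Random(seed)
    p_remember = 0.5 ** (delay / halflife)  # hypothetical decay model
    correct = 0
    for _ in range(n_trials):
        remembered = rng.random() < p_remember
        guessed_right = rng.random() < 1 / n_choices
        if remembered or guessed_right:
            correct += 1
    return correct / n_trials

# Accuracy falls toward chance (0.5 for two images) as the delay grows
for delay in (0, 5, 20):
    print(delay, round(run_trials(delay), 3))
```

The interesting metacognitive question, discussed below, is not this decay itself but whether the animal can monitor it, e.g. by opting out of trials it is likely to get wrong.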

 

Continue reading “Animal Cognition & Consciousness (II): Metacognition & Mentalizing”

Animal Cognition & Consciousness (I): Mirror Self-Recognition

Darwin made a mistake. At least that is what Derek Penn and his colleagues (2008) claim in a recent and controversial paper in Behavioral and Brain Sciences. Darwin (1871) famously argued that the difference between humans and animals was “one of degree, not of kind.”

This, according to Penn et al. is of course true from an evolutionary perspective, but in their view,

“the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds” (Penn et al. 2008: 109).

They hold that humans are not simply smarter, but human cognition differs fundamentally and qualitatively from that of other animals.

One pervasive proposal is that we do not simply possess a unique set of cognitive capacities, but that it might be consciousness itself that is uniquely human as well, a view that goes back at least to Descartes (Burkhardt & Bekoff 2009: 41). However, there are also many scholars and researchers who agree that there is evidence for higher-order cognition in nonhuman animals (‘animals’ from here on) and that they might possess at least some degree of consciousness (Burkhardt & Bekoff 2009: 40f.).

In this and my next post, I will write about three kinds of phenomena that are most often discussed in debates on whether animals have some form of higher-order cognition and consciousness or not: self-awareness, awareness of one’s own cognitive states, and awareness of others’ cognitive states and intentions.

Continue reading “Animal Cognition & Consciousness (I): Mirror Self-Recognition”

James Hurford: Animals Do Not Have Syntax (Compositional Syntax, That Is)

After passing my final exams I feel that I can relax a bit and have the time to read a book again. So instead of reading a book that I need to read purely for ‘academic reasons’, I thought I’d pick one I’d thoroughly enjoy: James Hurford’s “The Origins of Grammar”, which clocks in at a whopping 808 pages.
I’m still reading the first chapter (which you can read for free here) but I thought I’d share some of his analyses of “Animal Syntax.”
Hurford’s general conclusion is that despite what you sometimes read in the popular press,

“No non-human has any semantically compositional syntax, where the form of the syntactic combination determines how the meanings of the parts combine to make the meaning of the whole.”

The crucial notion here is that of compositionality. Hurford argues that we can find animal calls and songs that are combinatorial, that is, songs and calls in which elements are put together according to some kind of rule or pattern. But what we do not find, he argues, are combinations in which the combined elements each have a specified meaning of their own and the whole song, call or communicative assembly “means something which is a reflection of the meanings of the parts.”

To illustrate this, Hurford cites the call system of putty-nosed monkeys (Arnold and Zuberbühler 2006). These monkeys have only two different call signals in their repertoire: a ‘pyow’ sound that ‘means’, roughly, ‘LEOPARD’, and a ‘hack’ sound that ‘means’, roughly, ‘EAGLE’.

Continue reading “James Hurford: Animals Do Not Have Syntax (Compositional Syntax, That Is)”