Tag Archives: universal grammar

Domain-General Regions and Domain-Specific Networks

The notion of a domain-specific language acquisition device is something that still divides linguists. Yet, in an ongoing debate spanning at least several decades, there is still no evidence, at least to my knowledge, for the existence of a Universal Grammar. Still, you’d be forgiven for thinking the problem was solved many years ago, especially if you were to believe the now sixteen-year-old words of Massimo Piattelli-Palmarini (1994):

The extreme specificity of the language system, indeed, is a fact, not just a working hypothesis, even less a heuristically convenient postulation. Doubting that there are language-specific, innate computational capacities today is a bit like being still dubious about the very existence of molecules, in spite of the awesome progress of molecular biology.

Suffice it to say, the analogy between scepticism about molecules and scepticism about Universal Grammar is a dud, even if the latter does turn out to exist. Why? Well, as stated above: we still don’t know whether humans have, or for that matter even require, an innate ability to process certain grammatical principles. The rationale for thinking that we have some innate capacity for acquiring language can be delineated into a twofold argument: first, children seem adept at rapidly learning a language, even though they aren’t exposed to all of the data; and second, cognitive science told us that our brains are massively modular, or at the very least, that some aspect of them should be domain-specific to language (see the FLB/FLN distinction in Hauser, Chomsky & Fitch, 2002). I think the first point has been done to death on this blog: cultural evolution can provide an alternative explanation for how children successfully learn language (see here and here, and Smith & Kirby, 2008). What I haven’t really spoken about is the mechanism behind our ability to process language, or to put it differently: how are our brains organised to process language?

Continue reading

Some Links #19: The Reality of a Universal Language Faculty?

I noticed it’s been almost a month since I last posted some links. What this means is that many of the links I planned on posting are terribly out of date, and these last few days I haven’t really had the time to keep abreast of the latest developments in the blogosphere (new course + presentation at Edinburgh + current cold = a lethargic Wintz). I’m hoping next week will be a bit nicer to me.

The reality of a universal language faculty? Melodye offers up a thorough post on the Universal Grammar hypothesis, and on why it is a weak position to take, mostly drawing from the BBS issue dedicated to Evans & Levinson’s (2009) paper on the myth of language universals. Key paragraph:

When we get to language, then, it need not be surprising that many human languages have evolved similar means of efficiently communicating information. From an evolutionary perspective, this would simply suggest that various languages have, over time, ‘converged’ on many of the same solutions.  This is made even more plausible by the fact that every competent human speaker, regardless of language spoken, shares roughly the same physical and cognitive machinery, which dictates a shared set of drives, instincts, and sensory faculties, and a certain range of temperaments, response-patterns, learning facilities and so on.  In large part, we also share fairly similar environments — indeed, the languages that linguists have found hardest to document are typically those of societies at the farthest remove from our own (take the Piraha as a case in point).

My own position on the matter is fairly straightforward: I don’t think the UG perspective is useful. One attempt by Pinker and Bloom (1990) argued that this language module, in all its apparent complexity, could not have arisen by any means other than natural selection – as did the eye and many other complex biological systems. Whilst I agree with the sentiment that natural selection, and more broadly evolution, is a vital tool in discerning the origins of language, I think Pinker & Bloom initially overlooked the significance of cultural evolutionary and developmental processes. If anything, I think the debate surrounding UG has held back the field in some instances, even if some of the more intellectually vibrant research emerged as a product of arguing against its existence. This is not to say I don’t think our capacity for language has been honed via natural selection. It was probably a very powerful pressure in shaping the evolutionary trajectory of our cognitive capacities. What you won’t find, however, is a strongly constrained language acquisition device dedicated to the processing of arbitrary, domain-specific linguistic properties, such as X-bar theory and case marking.

Babel’s Dawn Turns Four. In the two and a half years I’ve been reading Babel’s Dawn, it has served as a port for informative articles, some fascinating ideas and, lest we forget, some great writing on the evolution of language. Edmund Blair Bolles marks the blog’s fourth anniversary by referring to another, very important, birthday:

This blog’s fourth anniversary has rolled around. More notably, the 20th anniversary of Steven Pinker and Paul Bloom‘s famous paper, “Natural Language and Natural Selection,” seems to be upon us. Like it or quarrel with it, Pinker-Bloom broke the dam that had barricaded serious inquiry since 1866 when the Paris Linguistic Society banned all papers on language’s beginnings. The Journal of Evolutionary Psychology is marking the Pinker-Bloom anniversary by devoting its December issue to the evolution of language. The introductory editorial, by Thomas Scott-Phillips, summarizes language origins in terms of interest to the evolutionary psychologist, making the editorial a handy guide to the differences between evolutionary psychology and evolutionary linguistics.

Hopefully I’ll have a post on Pinker and Bloom’s original paper, and how the field has developed over these last twenty years, at some point in the next couple of weeks. I think its historical importance will, to echo Bolles, lie in opening up the field: turning the questions of language origins and evolution into something worthy of serious intellectual investigation.

Other Links

Hypnosis reaches the parts brain scans and neurosurgery cannot.

Are Humans Still Evolving? (Part Two is here).

The Limits of Science.

On Language — Learning Language in Chunks.

Farmers, foragers, and us.

Tweet This.

On Music and The Brain.

Why I spoofed science journalism, and how to fix it.

The adaptive space of complexity.

Some Links #13: Universal Grammar Haters

Universal Grammar haters. Mark Liberman takes issue with claims that Ewa Dabrowska’s recent work challenges the concept of a biologically evolved substrate for language. Put simply: it doesn’t. What her experiments suggest is that there are considerable differences in native language attainment. As some of you will probably know, I’m not necessarily a big fan of most UG conceptions; however, there are plenty of papers that directly deal with such issues – Dabrowska’s not being one of them. In Liberman’s own words:

In support of this view, let me offer another analogy. Suppose we find that deaf people are somewhat more likely than hearing people to remember the individual facial characteristics of a stranger they pass on the street. This would be an interesting result, but would we spin it to the world as a challenge to the widely-held theory that there’s an evolutionary substrate for the development of human face-recognition abilities?

Remote control neurons. I remember reading about optogenetics a while back. It’s a clever technique that enables neural manipulation through the use of light-activated channels and enzymes. Kevin Mitchell over at GNXP Classic refers to a new approach where neurons are activated using a radio-frequency magnetic field. The advantage of this new approach is fairly straightforward: magnetic fields pass through brains far more easily than light. It means the new approach is a lot less invasive, without the need to insert micro-optical fibres or light-emitting diodes. Cool stuff.

Motor imagery enhances object recognition. Neurophilosophy has an article about a study showing that motor simulations may enhance the recognition of tools:

According to these results, then, the simple action of squeezing the ball not only slowed down the participants’ naming of tools, but also slightly reduced their accuracy in naming them correctly. This occured, the authors say, because squeezing the ball involves the same motor circuits needed for generating the simulation, so it interferes with the brain’s ability to generate the mental image of reaching out and grasping the tool. This in turn slows identification of the tools, because their functionality is an integral component of our conceptualization of them. There is other evidence that  parallel motor simulations can interfere with movements, and with each other: when reaching for a pencil, people have a larger grip aperture if a hammer is also present than if the pencil is by itself.

On the Origin of Science Writers. If you fancy yourself as a science writer, then Ed Yong, of Not Exactly Rocket Science, wants to read your story. As expected, he’s had a fairly large response (97 comments at the time of writing), which includes some of my favourite science journalists and bloggers. It’s already a useful resource, full of fascinating stories and bits of advice from a diverse group of individuals.

Some thoughts about science blog aggregation. Although it’s still hanging about, many people, including myself, are looking for an alternative to the ScienceBlogs network. Dave Munger points to Friendfeed as one potential solution, and has set up a feed for all the Anthropology posts coming in from Research Blogging. Also, in the comments, Christina Pikas mentioned Nature Blogs, which, I’m ashamed to say, I hadn’t come across before.

Answering Wallace's challenge: Relaxed Selection and Language Evolution

How does natural selection account for language? Darwin wrestled with it, Chomsky sidestepped it, and Pinker claimed to solve it. Discerning the evolution of language is therefore a much sought-after endeavour, with a vast number of explanations emerging that offer a plethora of choice, but little in the way of consensus. This is hardly new, and at times the debate has seemed completely frivolous and trivial. So much so that in the 19th century, the Linguistic Society of Paris actually went as far as to ban any discussion and debate on the origins of language. Put simply: we don’t really know that much. Often quoted in these debates is Alfred Russel Wallace, who, in a letter to Darwin, argued that: “natural selection could only have endowed the savage with a brain a little superior to that of an ape whereas he possesses one very little inferior to that of an average member of our learned society”.

This is obviously relevant for those of us studying language evolution. If, as Wallace challenged, natural selection (and more broadly, evolution) is unable to account for our mental capacities and behavioural capabilities, then what is the source behind our propensity for language? Well, I think we’ve come far enough to rule out the spiritual explanations of Wallace (although they still persist in some corners of the web), and whilst I agree that biological natural selection alone is not sufficient to explain language, we can certainly place language in an evolutionary framework.

Such is the position of Prof. Terrence Deacon, who, in his current paper for PNAS, eloquently argues for a role for relaxed selection in the evolution of the language capacity. He’s been making these noises for a while now, as I previously mentioned here, also recognising evolution-like processes in development. However, with the publication of this paper I think it’s about time I disseminated his current ideas in more detail, which, in my humble opinion, offer a more nuanced position than the strict modular adaptationism previously championed by Pinker et al. (I say previously because Pinker also has a paper in this issue, and I’m going to read it before making any claims about his current position on the matter).

Continue reading

Broca's area and the processing of hierarchically organised sequences pt.2

3. Neurological processing of hierarchically organised sequences in non-linguistic domains

A broader perspective sees grammar as just one of many hierarchically organised behaviours being processed in similar, prefrontal neurological regions (Greenfield, 1991; Givon, 1998). As Broca’s area is found to be functionally salient in grammatical processing, it is logical to assume that this is the place to search for activity in analogous hierarchical sequences. Such is the basis for studies into music (Maess et al., 2001), action planning (Koechlin and Jubault, 2006) and tool-production (Stout et al., 2008).

Continue reading

Broca's area and the processing of hierarchically organised sequences pt.1

Ever since its discovery in 1861, Broca’s area (named after its discoverer, Paul Broca) has been inextricably linked with language (Grodzinsky and Santi, 2008). Found in the left hemisphere, within the prefrontal cortex (PFC), Broca’s region traditionally[1] comprises Brodmann’s areas (BA) 44 and 45 (Hagoort, 2005). Despite being relegated in its status as the centre of language, this region is still believed to play a vital role in certain aspects of linguistic processing.

Of particular emphasis is syntax. However, syntactic processing is not unequivocally confined to Broca’s area: “Studies investigating lesion deficit correlations point to a more distributed representation of syntactic processes in the left perisylvian region.” (Fiebach, 2005, pg. 80). A more constrained approach places Broca’s area as processing an important functional component of grammar (Grodzinsky and Santi, 2007). One of these suggestions points specifically to how humans are able to organise phrases into hierarchical structures[2].

In natural languages, “[…] the noun phrases and the verb phrase within a clause typically receive their grammatical role (e.g., subject or object) by means of hierarchical relations rather than through the bare linear order of the words in a string. [my emphasis]” (Musso et al., 2003, pg. 774). Furthermore, these phrases can be broken down into smaller segments, with noun phrases, for example, consisting of a determiner preceding a noun (Chomsky, 1957). According to Chomsky (1957), these rules exist without the need for interaction with other linguistic domains. Take, for example, his now famous phrase “Colourless green ideas sleep furiously.” (ibid, pg. 15). Despite being syntactically well-formed, the sentence as a whole is, it is argued, semantically meaningless.

The relevant point to take away is that a sentence is considered hierarchical if phrases are embedded within other phrases. Yet examples of hierarchical organisation are found in many domains besides syntax. These include other language phenomena, such as prosody. Also, non-linguistic behaviours – such as music (Givon, 1998), action sequences (Koechlin and Jubault, 2006), tool-use (cf. Scott-Frey, 2004) and tool-production (Stout et al., 2008) – are all cognitively demanding tasks, comparable to language. We can even see instances of non-human hierarchical representations: from the songs of humpback whales (Suzuki, Buck and Tyack, 2006) to various accounts of great apes (McGrew, 1992; Nakamichi, 2003) and crows (Hunt, 2000) using and manufacturing their own tools[3].
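To make the hierarchy-versus-linear-order distinction concrete, here is a toy sketch in Python (my own illustration, not anything from the papers cited above) that represents Chomsky’s famous sentence as a nested phrase-structure tree. The particular bracketing is just one conventional analysis, and the helper functions are hypothetical names of my choosing: one measures embedding depth, the other flattens the tree back into the bare word string – the same linear order that, per Musso et al., does not by itself determine grammatical roles.

```python
from typing import Union

# A tree node is a (label, children...) tuple; a leaf is a bare word string.
Tree = Union[str, tuple]

# "Colourless green ideas sleep furiously" under one conventional analysis:
# a noun phrase and a verb phrase embedded within the sentence node.
sentence: Tree = (
    "S",
    ("NP", ("Adj", "colourless"), ("Adj", "green"), ("N", "ideas")),
    ("VP", ("V", "sleep"), ("Adv", "furiously")),
)

def depth(node: Tree) -> int:
    """Maximum embedding depth: 1 for a bare word, else 1 + deepest child."""
    if isinstance(node, str):
        return 1
    _label, *children = node
    return 1 + max(depth(child) for child in children)

def linear_order(node: Tree) -> list:
    """Flatten the tree back into the surface word order of the string."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    return [word for child in children for word in linear_order(child)]
```

Here `depth(sentence)` is 4 (word inside category inside phrase inside sentence), while `linear_order(sentence)` recovers only the flat five-word string – illustrating that the same linear sequence carries extra, recoverable-only-from-the-tree structure, which is what the hierarchical-processing claims about Broca’s area are about.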

With this in mind, we can ask ourselves two questions concerning Broca’s area and hierarchical organisation: does Broca’s area process hierarchically organised sequences in language? And if so, is this processing language-specific? The logic behind this two-part approach is to help narrow down the problem. For instance, it may be found that hierarchical structures in sentences are processed by Broca’s area. But this alone tells us nothing about whether other hierarchically organised behaviours utilise the same cognitive abilities.

Continue reading

Reading Round Up

Here’s some stuff I’ve been reading over the last month or so:

Okay, so that brings you up to date with my reading from May through to July. Next round up will cover August. How fascinating :-/


Current Issues in Language Evolution

As part of my assessment this term I’m to write four mock peer-reviewed items for a module called Current Issues in Language Evolution. It’s a great module, run by Simon Kirby, examining some of the best food for thought in the field. This alone is an interesting endeavour – after all, we’re right in the middle of a language evolution renaissance – but even cooler are the lectures, where students get to do their own presentations on a particular paper. I already did my presentation at the start of this term, on Dediu and Ladd’s paper, which went rather well, even if one of my slip-ups did not go unnoticed (hint: always label the graphs). So, over the next few weeks, in amongst additional posts covering some of the presentations in class, I’ll hopefully be writing articles on these four papers:

Continue reading