Tag Archives: morphology

Linguistic Structure: the Result of L2 Learners?

Wray and Grace (2007) propose that the structure of a language depends on the social structure of the population that speaks it. Lupyan & Dale (2010) later supported this with a statistical analysis. This has been discussed extensively on this blog before:

http://www.replicatedtypo.com/science/language-as-a-complex-adaptive-system/422/

http://www.replicatedtypo.com/uncategorized/memory-social-structure-and-language-why-siestas-affect-morphological-complexity/2382/

One of the proposed reasons why large population size is thought to affect linguistic structure is that larger populations will have a higher ratio of second-language (L2) speakers to first-language (L1) speakers.

Languages within exoteric niches (large populations with wide geographical spread and many language neighbours) have been shown to be more morphologically isolating and, as a result, more regular. This has been proposed to be a result of the biases of adult second-language learners.

Languages within esoteric niches are more irregular, morphologically complex and idiosyncratic. This is thought to be because of the biases of child learners.

There are studies showing that adult learners have a tendency to regularise languages, but only under some circumstances. Hudson Kam & Newport (2009) show that adult learners will regularise unpredictable variability, but only if it exceeds a certain level of scatter and complexity.

As for the learning biases of children, Wray & Grace (2007) cite only one study, which looked at children who were ‘native’ speakers of Esperanto (Bergen, 2001). Bergen (2001) found that the language the children learnt displayed a loss of the accusative case and attrition in the tense system. Although Wray & Grace (2007) suggest that this explains patterns seen in esoteric communities, it may not be as straightforward as they suggest. The evidence suggests that languages under esoteric conditions will display more morphological strategies, which is the opposite of the bias the child learners of Esperanto displayed: those children rejected morphological strategies in favour of attrition and word order.

I wanted to point out in this post that there is evidence that adult learners preserve irregularities and idiosyncrasies, while child learners regularise (suggesting the opposite of Wray & Grace's proposal).

Studies which have addressed these problems include Hudson Kam & Newport (2005), in which adult learners of an artificial language preserved unpredictable variation while child learners of the same language regularised it. Hudson Kam & Newport (2009) show in a similar study that child learners of an artificial language will regularise unpredictable irregularity but, as mentioned above, adult learners will only do so once the irregularity passes a certain level of complexity.

However, some evidence does support Wray & Grace’s (2007) proposal about adult learners. Smith & Wonnacott (2010) show that although individual adult learners tend to maintain the level of unpredictable variability during language learning, when put into a diffusion chain of adult learners the language regularises. Smith & Wonnacott (2010) suggest that gradual processes such as this can explain the regularisation of languages over time. While this fits nicely with Wray & Grace’s (2007) theory, there is still the problem that children are at least as liable to regularise as adults, if not more so.
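To make the diffusion-chain idea concrete, here is a minimal simulation sketch – my own toy illustration, not Smith & Wonnacott's (2010) actual design – in which each learner simply probability-matches the variant frequencies it observes. Sampling noise alone is enough to drive the chain towards a regular, one-variant language:

```python
import random

def diffusion_chain(p_initial=0.5, utterances=10, generations=50, seed=1):
    """Pass a language down a chain of learners.

    Each learner hears `utterances` tokens, estimates the frequency of
    variant A by probability matching (no individual bias towards
    regularity), and produces data for the next learner.
    """
    rng = random.Random(seed)
    p = p_initial
    history = [p]
    for _ in range(generations):
        observed = sum(rng.random() < p for _ in range(utterances))
        p = observed / utterances  # probability matching
        history.append(p)
    return history

chain = diffusion_chain()
print(chain[-1])  # final frequency of variant A; usually 0.0 or 1.0 (fixation)
```

Running many such chains from p = 0.5 shows nearly all of them fixing on a single variant within a few dozen generations, even though no individual learner in the model is biased towards regularity.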

 

These are just some relevant experiments which I thought lent something to the debate. I know there are other factors which have been proposed to affect linguistic structure. I was just curious about people’s opinions on the extent to which L2 speakers have an effect.

On Phylogenic Analogues

A recent post by Miko on Kirschner and Gerhart’s work on developmental constraints, and its implications for evolutionary biology, caught my eye because of the possible analogues which could be drawn with language in mind. It starts by saying that developmental constraints are the most intuitive of all the known constraints on phenotypic variation.  Essentially, whatever evolves must evolve from the starting point, and it cannot ignore the features of the original. Thus, a winged horse would not occur, as six limbs would violate the basic bauplan of tetrapods. In the same way, a daughter language cannot evolve without taking into account the language it derives from and language universals. But instead of viewing this as a constraint which limits the massive variation we see, biologically or linguistically, between different phenotypes, developmental constraints can be seen as a catalyst for regular variation.

A pretty and random tree showing variation among IE languages.

Looking back over my courses, I’m surprised by how little I’ve noticed (as distinct from how much was actually said) about reasons for linguistic variation. The modes of change are often noted: <th> is fronted in Fife, for instance, leading to the ‘Firsty Ferret’ rather than the ‘Thirsty Ferret’ as a brew. However, why <th> is fronted at all isn’t explained beyond cursory hypotheses. But that’s a bit beside the point. The point is that phenotypic variation is not necessarily random, as there are constraints – due to the “buffering and canalizing of development” – which limit variation to a defined range of possibilities. There clearly aren’t any homologues between biological embryonic processes and linguistic constraints, but there are developmental analogues: the input bottleneck (paucity of data) given to children, learnability constraints, the necessity for communication, certain biological constraints on production and perception, and so on. These all act on language so that variation occurs only within certain channels, many of which would be predictable.

Another interesting point raised by the article is the robustness of living systems to mutation. The buffering effect of embryonic development results in the accumulation of ‘silent’ variation, which has been termed evolutionary capacitance. Silent variation can lie quiet, accumulating, not changing the phenotype noticeably until environmental or genetic conditions unmask it. I’ve seen little research (though I expect there is plenty) on the theoretical implications of evolutionary capacitance for language change – in other words, on how likely a language is to accumulate small variations which don’t affect understanding before a new language emerges (not that the term ‘language’ isn’t arbitrary, based on the speaking community, anyway). Are some languages more robust than others? Is robustness a quality which makes a language more likely to be used in multilingual settings? For instance, in New Guinea, if seven local languages are mutually unintelligible, is it likely that the local lingua franca is forced by its environment to be more robust in order to maximise comprehension?

The article goes on to discuss the cost of robustness: stasis. This can be seen clearly in Late Latin, which was more robust than its daughter languages: it was needed for communication across the different environments where the language had branched off into the Romance languages, and an older form was necessary for communication to proceed. Thus, Latin remained in use well after its descendants had evolved into other languages. Another example would be Homeric Greek, which retained many features lost in Attic, Doric, Koine and other dialects, as it was used in only a certain environment and was therefore resistant to change. This has all been studied before, and better than I can sum it up here. But the point I am making is that analogues can clearly be drawn here, and some interesting theories regarding language become apparent only when seen in this light.

A good example, also covered, would be exploratory processes, as Kirschner and Gerhart call them. These are processes which allow for variation to occur in environments where other variables are forced to change. The example given is the growth of bone length, which requires corresponding muscular, circulatory and other dependent systems to change as well. The exploratory processes allow for future change to occur in the other systems; that is, they expedite plasticity. So, for instance, an ad hoc linguistic example would be the loss of a fixed word order, which would require that morphology step in to fill the gap. In such a case, particles or affixes or the like would have to have already paved the way for case markers to evolve, and would have had to be present to some extent in the original word-order system. (This may not be the best example, but I hope my point comes across.)

Naturally, much of this will have seemed intuitive. But, as Miko stated, these are useful concepts for thinking about evolution; and, in my own case especially, the basics ought to be brought back into scrutiny fairly frequently. Which is justification enough for this post. As always, comments appreciated and accepted. And a possible future post: clade selection as a nonsensical way to approach phylogenic variation.

References:

Caldwell, M. (2002). From fins to limbs to fins: Limb evolution in fossil marine reptiles. American Journal of Medical Genetics, 112(3), 236-249. DOI: 10.1002/ajmg.10773

Gerhart, J., & Kirschner, M. (2007). The theory of facilitated variation. Proceedings of the National Academy of Sciences, 104(suppl 1), 8582-8589. DOI: 10.1073/pnas.0701035104


What can Hungarian Postpositions tell us about Language Evolution?

I spent quite a lot of time as an undergraduate analysing Hungarian syntax with my generative head on, using the minimalist framework. Bear with me. This post is the result of me trying to marry all of those hours spent reading “The Minimalist Program” (Chomsky, 1995) and staring at Hungarian with what I’m currently doing¹ – and ultimately trying to convince myself that I wasn’t wasting my time.

So here’s a condensed summary of what my dissertation was about:

Hungarian has a massive case system which, as well as structural cases, has many items with locational, instrumental and relational uses (lexical case markers). Because of this, many constructions which feature prepositions in English are translated into Hungarian with case markers or postpositions.

It struck me as odd that these two things, case markers and postpositions, despite occupying the same position in the structure (as a right-headed modifier to the noun) and having very similar semantic functions, would receive different analyses in the syntactic framework, simply because one is morphologically attached (case markers) and the other is not (postpositions).

Continue reading

That’s Linguistics (Not logistics)


Linguists really need a catchy tune to match those in logistics. Any takers?

I always remember one of my former lecturers saying he was surprised by how little the average person knows about linguistics. For me, this was best exemplified when, upon enquiring about my degree, my friend paused for a brief moment and said: “Linguistics. That’s like logistics, right?” Indeed. Not really being in the mood to bash my friend’s ignorance into a bloody pulp of understanding, I decided to take a swig of my beer and simply replied: “No, not really. But it doesn’t matter.” Feeling guilty for not gathering the entire congregation of party-goers, sitting them down and proceeding to explain the fundamentals of linguistics, I have instead decided to write a series of 101 posts.

With that said, a good place to start is by providing some dictionary definitions highlighting the difference between linguistics and logistics:

Linguistics /lɪŋˈgwɪs.tɪks/ noun

the systematic study of the structure and development of language in general or of particular languages.

Logistics /ləˈdʒɪs.tɪks/ plural noun

the careful organization of a complicated activity so that it happens in a successful and effective way.

Arguably, linguistics is a logistical solution for successfully, and rigorously, studying language through the scientific method, but to avoid further confusion this is the last time you’ll see logistics in these posts. So, as you can probably infer, linguistics is a fairly broad term that, for all intents and purposes, simply means the discipline of studying language. Those who partake in the study of language are known as linguists. This leads me to another point of contention: linguist isn’t synonymous with polyglot. Although there are plenty of linguists who do speak more than one language, many are quite content just sticking to their native language. It is, after all, possible for linguists to study many aspects of a language without necessarily having anything like native-level competency. In fact, other than occasionally shouting pourquoi when (drunkenly) reflecting on my life choices, or ach-y-fi when a Brussels sprout somehow manages to make its way near my plate, I’m mainly monolingual.

Continue reading

Alcohol Consumption affects Morphological Complexity

I previously talked about how changes in the demography of learners can affect the cultural evolution of a language.  The hypothesis is that language adapts to the balance between declarative-memory and procedural-memory users.  Since alcohol consumption affects procedural but not declarative memory (Smith & Smith, 2003), we might expect communities with high alcohol consumption to use less complex morphology.

I find that communities with a morphologically marked future tense have significantly higher alcohol consumption than communities with a lexically marked future tense (alcohol consumption data from the WHO, language structure data from the World Atlas of Language Structures; 198 languages, t = 14.8, p < 0.0001).  This statistic does not take many other factors into account, but is meant as motivation for further research into language structure and social structure.
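For anyone who wants to poke at the comparison, here is a rough Python sketch of the test – with made-up consumption figures standing in for the WHO/WALS data, which I haven't reproduced here:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical per-community alcohol consumption (litres per capita) --
# illustrative numbers only, not the actual WHO figures.
morphological_future = [9.8, 11.2, 10.5, 12.0, 9.1, 10.9]
lexical_future = [5.2, 4.8, 6.1, 5.5, 4.9, 5.8]

t = welch_t(morphological_future, lexical_future)
```

With real data you would also want the degrees of freedom (via the Welch–Satterthwaite equation) and a p-value, and ideally controls for language family and areal effects.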

Smith C, & Smith D (2003). Ingestion of ethanol just prior to sleep onset impairs memory for procedural but not declarative tasks. Sleep, 26 (2), 185-91 PMID: 12683478

Memory, Social Structure and Language: Why Siestas affect Morphological Complexity

Children are better than adults at learning second languages.  Children find it easy, can do it implicitly and achieve native-like competence.  However, as we get older, we find learning a new language difficult: we need explicit teaching, and we find some aspects, such as grammar and pronunciation, hard to master.  What is the reason for this?  The foremost theories suggest it is linked to memory constraints (Paradis, 2004; Ullman, 2005).  Children find it easy to incorporate knowledge into procedural memory – the memory that encodes procedures and motor skills, and which has been linked to grammar, morphology and pronunciation.  Procedural memory atrophies in adults, but adults develop good declarative memory – the memory that stores facts and is used for retrieving lexical items.  This seems to explain the difference between adults and children in second-language learning.  However, this is a proximate explanation.  What about the ultimate explanation of why languages are like this?

Continue reading

Some Links #18: GxExC

The depression map: genes, culture, serotonin, and a side of pathogens. Another new science blog network (Wired) and once again a new stable of good science writers. I’m particularly pleased to see that David Dobbs, a former SciBling and top science writer, has found a new home for Neuron Culture. I was also pleased to see he had written an article on studies into the interactions between genes and culture, namely: Chiao & Blizinsky (2009) and Way & Lieberman (2010). And I was even more pleased to see that he’d mentioned both mine and Sean’s posts on the social sensitivity hypothesis. Suffice to say, I was pleased.

Take home paragraph:

In a sense, these studies are looking not at gene-x-environment interactions, or GxE, but at genes x (immediate) environment x culture — GxExC. The third variable can make all the difference. Gene-by-environment studies over the last 20 years have contributed enormously to our understanding of mood and behavior. Without them we would not have studies, like these led by Chiao and Way and Kim, that suggest broader and deeper dimensions to what makes us struggle, thrive, or just act differently in different situations. GxE is clearly important. But when we leave out variations in culture, we risk profoundly misunderstanding how these genes — and the people who carry them — actually operate in the big wide world.
Razib also has some thoughts on the topic:
The same issues are not as operative when it comes to culture. Two tribes can speak different dialects or languages. If a woman moves from one tribe to another her children don’t necessarily speak a mixture of languages, rather, they may speak the language of their fathers. The nature of cultural inheritance is more flexible, and so allows for the persistence of more heritable variation at different levels of organization. Differences of religion, language, dress, and values, can be very strong between two groups who have long lived near each other and may be genetically similar.

Homo was born vocalizing. Babel’s Dawn links to a recently finished PhD thesis that supposedly argues for a relatively recent emergence of language (approx. 120,000 years ago). The author defends her assertions by stating: “[...] all of the unique cognitive traits attributed to humans arose as the consequence of one crucial mutation, which radically altered the architecture of the ancestral primate brain.” I haven’t read the thesis, and I probably won’t, as I’m already stretched with my reading, but I’m completely unconvinced by the hopeful-mutation hypothesis. Plus, as Bolles notes in his post, there is plenty of available evidence to the contrary.

Primed for Reading. Robert Boyd reviews Stanislas Dehaene’s new book, Reading in the Brain: The Science and Evolution of a Human Invention, which I’ll be picking up soon. In the meantime, to give you a bit of background, I suggest you read Dehaene’s (2007) paper on the neuronal recycling hypothesis: Cultural recycling of cortical maps. H/T: Gene Expression.

Through the looking glass (part 1). The Lousy Linguist reviews Guy Deutscher’s new book, Through the Language Glass: Why the World Looks Different in Other Languages, with the general takeaway message being that, in part one at least, the book is a bit science-lite. What really interested me, though, were these two paragraphs:

We discover quite quickly what Deutscher is doing as he begins to walk through complexity issues of “particular areas of language” (page 109), namely morphology, phonology, and subordination. And these last 15 pages are really the gem of Part 1. He shows that there is an interesting, somewhat illogical, entirely engaging but as yet unexplained set of correlations between speaker population size and linguistic complexity.

For example, languages with small numbers of speakers tend to have more morphologically rich grammars (hence one could claim that small = more complex). However, small languages with small numbers of speakers also tend to have small phonological inventories. Hmmm, weird, right? [My emphasis]

As those of you who read this blog will know: I don’t think it’s weird that small speaker populations also tend to have small phonological inventories.

Clothing lice out of Africa. A cool new paper by Toups et al. which looks at the evolutionary history of clothing lice to provide specific estimates of the origin of clothing. Using a Bayesian coalescent modelling approach, they estimate that clothing lice diverged from head-louse ancestors between 83,000 and 170,000 years ago. H/T: Dienekes.

More on Phoneme Inventory Size and Demography

On the basis of Sean’s comment about using a regression to look at how the model of phoneme inventory size improved when geographic spread was incorporated along with population size, I decided to look at the stats a bit more closely (the original post is here). It’s fairly easy to perform multiple regression in R, which, in the case of my data, resulted in highly significant results (p < 0.001) for the intercept, area and population (residual standard error = 9.633 on 393 degrees of freedom; adjusted R-squared = 0.1084). I then plotted all the combinations as scatterplots for each pair of variables. As you can see below, this is fairly useful as a quick summary, but it is also messy and confusing. Another problem is that the pairs plot is of the original data and not the linear model.
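For those without R to hand, the same kind of analysis can be sketched in Python with ordinary least squares – here on synthetic stand-in data, since the figures above come from the original dataset:

```python
import numpy as np

# Synthetic stand-in for the real dataset: phonemes ~ area + population.
# All numbers below are invented for illustration.
rng = np.random.default_rng(0)
n = 400
log_pop = rng.normal(10, 2, n)    # log speaker population (invented)
log_area = rng.normal(12, 2, n)   # log geographic area (invented)
phonemes = 20 + 0.8 * log_area + 1.5 * log_pop + rng.normal(0, 9, n)

# Design matrix with an intercept column, fitted by ordinary least squares
# (equivalent to R's lm(phonemes ~ log_area + log_pop)).
X = np.column_stack([np.ones(n), log_area, log_pop])
coef, *_ = np.linalg.lstsq(X, phonemes, rcond=None)
intercept, b_area, b_pop = coef  # estimates recover the planted coefficients
```

With this many observations the fitted slopes land close to the values planted in the simulated data; on real data you would also inspect the residuals and the standard errors of the coefficients before trusting the fit.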

Continue reading

Phoneme Inventory Size and Demography

It’s long since been established that demography drives evolutionary processes (see Hawks, 2008 for a good overview). Similar attempts are being made to describe cultural (Shennan, 2000; Henrich, 2004; Richerson & Boyd, 2009) and linguistic (Nettle, 1999a; Wichmann & Holman, 2009; Vogt, 2009) processes by considering the effects of population size and other demographic variables. Even though these ideas are hardly new, until recently there was a ceiling on the amount of resources one person could draw upon. In linguistics, this paucity of data is being remedied through large-scale projects, such as WALS, Ethnologue and UPSID, that bring together a vast body of linguistic fieldwork from around the world. Providing a solid direction for how this might be utilised is a recent study by Lupyan & Dale (2010). Here, the authors compare the structural properties of more than 2000 languages with three demographic variables: a language’s speaker population, its geographic spread and its number of linguistic neighbours. The salient point is that certain differences in structural features correspond to the underlying demographic conditions.

With that said, a few months ago I found myself wondering about a particular feature, phoneme inventory size, and its potential relationship to the underlying demographic conditions of a speech community. What piqued my interest was that two languages I retain a passing interest in, Kayardild and Pirahã, both have small phonological inventories and small speaker communities. The question being: is there a correlation between the population size of a language and its number of phonemes? Despite work suggesting such a relationship (e.g. Trudgill, 2004), there is little in the way of empirical evidence to support such claims. Hay & Bauer (2007) perhaps represent the most comprehensive attempt at an investigation, reporting a statistical correlation between the number of speakers of a language and its phoneme inventory size.

In it, the authors provide evidence for the claim that the more speakers a language has, the larger its phoneme inventory. Without going into the sub-divisions of vowels (e.g. separating monophthongs, extra monophthongs and diphthongs) and consonants (e.g. obstruents), as that would extend the post by about 1000 words: the vowel inventory and the consonant inventory are both correlated with population size (and the authors also rule out the possibility that language families are driving the results). As they note:

That vowel inventory and consonant inventory are both correlated with population size is quite remarkable. This is especially so because consonant inventory and vowel inventory do not correlate with one another at all in this data-set (rho=.01, p=.86). Maddieson (2005) also reports that there is no correlation between vowel and consonant inventory size in his sample of 559 languages. Despite the fact that there is no link between vowel inventory and consonant inventory size, both are significantly correlated with the size of the population of speakers.
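The independence check in that quote is just a Spearman rank correlation, which is simple enough to sketch from scratch – here on invented inventory counts rather than the data Hay & Bauer actually used:

```python
def ranks(values):
    """Rank values from 1 upwards, averaging tied ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented consonant/vowel inventory sizes for eight imaginary languages:
consonants = [22, 18, 30, 14, 25, 19, 33, 21]
vowels = [5, 9, 6, 7, 5, 10, 6, 8]
rho = spearman_rho(consonants, vowels)
```

On Hay & Bauer's real data-set this statistic came out at rho = .01 (p = .86), i.e. effectively no relationship between the two inventories.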

Using their paper as a springboard, I decided to look at how other demographic factors might influence the size of the phoneme inventory, namely: population density and the degree of social interconnectedness.

Continue reading

The Problem With a Purely Adaptationist Theory of Language Evolution

According to the evolutionary psychologist Geoffrey Miller and his colleagues (e.g. Miller, 2000b), uniquely human cognitive behaviours, such as musical and artistic ability and creativity, should be considered both deviant and special. This is because evolutionary biologists have traditionally struggled to fathom exactly how such seemingly superfluous cerebral assets would have aided our survival. By the same token, they have observed that our linguistic powers are more advanced than seems necessary merely to get things done: our command of an expansive vocabulary and elaborate syntax allows us to express an almost limitless range of concepts and ideas above and beyond the immediate physical world. The question is: why bother to evolve something so complicated if it wasn’t really all that useful?

Miller’s solution is that our most intriguing abilities, including language, have been shaped predominantly by sexual selection rather than natural selection, in the same way that large cumbersome ornaments, bright plumage and complex song have evolved in other animals. As one might expect, then, Miller’s theory of language evolution has been hailed as a key alternative to the dominant view that language evolved because it conferred a distinct survival advantage on its users through improved communication (e.g. Pinker, 2003). He believes that language evolved in response to strong sexual selection pressure for interesting and entertaining conversation, because linguistic ability functioned as an honest indicator of general intelligence and underlying genetic quality: those who could demonstrate verbal competence enjoyed a high level of reproductive success and the subsequent perpetuation of their genes. Continue reading