Cultural differences in lateral transmission: Phylogenetic trees are OK for Linguistics but not biology

[Figure: The three areas under analysis]

An article in PLoS ONE debunks the myth that hunter-gatherer societies borrow more words than agriculturalist societies. In doing so, it suggests that horizontal transmission is low enough for phylogenetic analyses to be a valid linguistic tool.

Lexicons from around 20% of the extant languages spoken by hunter-gatherer societies were coded for etymology (available in the supplementary material). The levels of borrowed words were compared with those in the languages of agriculturalist and urban societies, taken from the World Loanword Database. The study focussed on three locations: Northern Australia, northwest Amazonia, and California and the Great Basin.

In opposition to some previous hypotheses, hunter-gatherer societies did not borrow significantly more words than agricultural societies in any of the regions studied.

The rates of borrowing were universally low, with most languages not borrowing more than 10% of their basic vocabulary. The mean rate for hunter-gatherer societies was 6.38%, while the mean for agriculturalist societies was 5.15%. This difference is actually significant overall, but not within particular regions. Therefore, the authors claim, “individual area variation is more important than any general tendencies of HG or AG languages”.
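For readers who want to poke at numbers like these themselves, the comparison is easy to sketch. Below is a minimal, illustrative permutation test in Python; the variable names and data are hypothetical, standing in for the per-language loanword percentages in the paper's supplementary material, and this is not the paper's own analysis:

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in mean borrowing rate."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# hg_rates and ag_rates would be arrays of per-language loanword percentages
# for hunter-gatherer and agriculturalist languages respectively, e.g.
# observed_diff, p_value = permutation_test(hg_rates, ag_rates)
```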

Interestingly, in some regions, mobility, population size and population density were significant factors.  Mobile populations and low-density populations had significantly lower borrowing rates, while smaller populations borrowed proportionately more words.  This may be in line with the theory of linguistic carrying capacity as discussed by Wintz (see here and here).  The level of exogamy was a significant factor in Australia.

The study concludes that phylogenetic analyses are a valid form of linguistic analysis because the level of horizontal transmission is low.  That is, languages are tree-like enough for phylogenetic assumptions to be valid:

“While it is important to identify the occasional aberrant cases of high borrowing, our results support the idea that lexical evolution is largely tree-like, and justify the continued application of quantitative phylogenetic methods to examine linguistic evolution at the level of the lexicon. As is the case with biological evolution, it will be important to test the fit of trees produced by these methods to the data used to reconstruct them. However, one advantage linguists have over biologists is that they can use the methods we have described to identify borrowed lexical items and remove them from the dataset. For this reason, it has been proposed that, in cases of short to medium time depth (e.g., hundreds to several thousand years), linguistic data are superior to genetic data for reconstructing human prehistory.”

Excellent – linguistics beats biology for a change!

However, while the level of horizontal transmission might not be a problem in this analysis, there may be a problem in the paths of borrowing. If a language borrows relatively few words, but those words come from many different languages and may have taken many paths through previous generations, there may be a subtle effect of horizontal transmission that is being masked. The authors acknowledge that they did not address the direction of transmission in a quantitative way.

A while ago, I did a study of English etymology trying to quantify the level of horizontal transmission through time (description here). The graph for English doesn't look tree-like to me; perhaps the dynamics of borrowing work differently for languages with a high level of contact.
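The kind of count involved is simple enough to sketch. Here is a hypothetical version in Python; the data format, with a donor language or an 'inherited' tag plus a century of entry for each word, is an assumption for illustration rather than the format I actually used:

```python
from collections import Counter

def borrowing_by_century(etymologies):
    """etymologies: iterable of (word, origin, century) tuples, where origin is
    'inherited' or a donor language (e.g. 'Old Norse', 'Old French') and
    century is when the word entered English. Returns the proportion of new
    vocabulary in each century that was borrowed rather than inherited."""
    totals, borrowed = Counter(), Counter()
    for word, origin, century in etymologies:
        totals[century] += 1
        if origin != 'inherited':
            borrowed[century] += 1
    return {c: borrowed[c] / totals[c] for c in sorted(totals)}
```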

Bowern, C., Epps, P., Gray, R., Hill, J., Hunley, K., McConvell, P., & Zentz, J. (2011). Does Lateral Transmission Obscure Inheritance in Hunter-Gatherer Languages? PLoS ONE, 6(9). doi:10.1371/journal.pone.0025195

Cognitivism and the Critic 2: Symbol Processing

It has long been obvious to me that the so-called cognitive revolution is what happened when computation – both the idea and the digital technology – hit the human sciences. But I’ve seen little reflection of that in the literary cognitivism of the last decade and a half. And that, I fear, is a mistake.

Thus, when I set out to write a long programmatic essay, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, I argued that we should think of the literary text as a computational form. I submitted the essay and found that both reviewers were puzzled about what I meant by computation. While publication was not conditioned on providing such satisfaction, I did make some efforts to satisfy them, though I'd be surprised if they were completely satisfied by those efforts.

That was a few years ago.

Ever since then I've pondered the issue: how do I talk about computation to a literary audience? You see, some of my graduate training was in computational linguistics, so I find it natural to think about language processing as entailing computation. As literature is constituted by language, it too must involve computation. But without some background in computational linguistics or artificial intelligence, I'm not sure the notion is much more than a buzzword that's been trendy for the last few decades – and that's an awfully long time to be trendy.

I’ve already written one post specifically on this issue: Cognitivism for the Critic, in Four & a Parable, where I write abstracts of four texts which, taken together, give a good feel for the computational side of cognitive science. Here’s another crack at it, from a different angle: symbol processing.

Operations on Symbols

I take it that ordinary arithmetic is most people’s ‘default’ case for what computation is. Not only have we all learned it, it’s fundamental to our knowledge, like reading and writing. Whatever we know, think, or intuit about computation is built on our practical knowledge of arithmetic.

As far as I can tell, we think of arithmetic as being about numbers. Numbers are different from words. And they're different from literary texts. And not merely different. Some of us – many of whom study literature professionally – have learned that numbers and literature are deeply and utterly different, to the point of being fundamentally opposed to one another. From that point of view, the notion that literary texts can be understood computationally is little short of blasphemy.

Not so. Not quite.

The question of just what numbers are – metaphysically, ontologically – is well beyond the scope of this post. But what they are in arithmetic is simple: they're symbols. Words too are symbols; and literary texts are constituted of words. In this sense, perhaps superficial but nonetheless real, reading literary texts and making arithmetic calculations are the same thing: operations on symbols. Continue reading “Cognitivism and the Critic 2: Symbol Processing”
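As a toy illustration of what 'operations on symbols' can mean (a sketch for this post only, not anything from the essay), here is addition carried out purely by rewriting strings of strokes, with no numerical machinery anywhere in the code:

```python
import re

# Numbers written in unary: '|||' is three. A single rewrite rule moves
# strokes across the '+' one at a time; when no stroke remains to the left
# of '+', the '+' itself is erased. Nothing is ever "calculated": symbols
# are only rearranged.
def rewrite_add(expr):
    while True:
        rewritten = re.sub(r'\|\+', '+|', expr, count=1)  # move one stroke across
        if rewritten == expr:
            break
        expr = rewritten
    return expr.replace('+', '')

print(rewrite_add('|||+||'))  # '|||||'
```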

Statistics and Symbols in Mimicking the Mind

MIT recently held a symposium on the current status of AI, which apparently has seen precious little progress in recent decades. The discussion, it seems, ground down to a squabble over the prevalence of statistical techniques in AI and a call for a revival of work on the sorts of rule-governed models of symbolic processing that once dominated much of AI and its sibling, computational linguistics.

Briefly, from the early days in the 1950s up through the 1970s, both disciplines used models built on carefully hand-crafted symbolic knowledge. The computational linguists built parsers and sentence generators, and the AI folks modeled specific domains of knowledge (e.g. diagnosis in selected medical domains, naval ships, toy blocks). Initially these efforts worked like gang-busters. Not that they did much by Star Trek standards, but they actually did something, and they did things never before done with computers. That's exciting, and fun.
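For anyone who has never seen the hand-crafted style, here is a minimal sketch of the sort of thing those systems did, using a toy context-free grammar. It assumes the NLTK library is available, and the grammar and sentence are invented purely for illustration:

```python
import nltk

# A toy version of the hand-crafted approach: the grammar is written
# by hand, rule by rule, and the parser does nothing it was not told.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N  -> 'linguist' | 'parser'
V  -> 'built'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse('the linguist built the parser'.split()):
    print(tree)
```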

In time, alas, the excitement wore off and there was no more fun. Just systems that got too big and failed too often and they still didn’t do a whole heck of a lot.

Then, starting, I believe, in the 1980s, statistical models were developed that, yes, worked like gang-busters. And these models actually did practical tasks, like speech recognition and then machine translation. That was a blow to the symbolic methodology, because these programs were “dumb.” They had no knowledge crafted into them, no rules of grammar, no semantics. Just routines that learned while gobbling up terabytes of example data. Thus, as Google's Peter Norvig points out, machine translation is now dominated by statistical methods. No grammars and parsers carefully hand-crafted by linguists. No linguists needed.

What a bummer. For machine translation is THE prototype problem for computational linguistics. It’s the problem that set the field in motion and has been a constant arena for research and practical development. That’s where much of the handcrafted art was first tried, tested, and, in a measure, proved. For it to now be dominated by statistics . . . bummer.

So that’s where we are. And that’s what the symposium was chewing over.

Continue reading “Statistics and Symbols in Mimicking the Mind”

The Return of the Phoneme Inventories

Right, I already referred to Atkinson's paper in a previous post, and much of the work he's presented is essentially part of a potential PhD project I'm hoping to do. Much of this stems back to last summer, when I mentioned how phoneme inventory size correlates with certain demographic features, such as population size and population density. Using the UPSID data, I generated a generalised additive model to demonstrate how area and population size interact in determining phoneme inventory size.
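The actual model was fitted from the UPSID and demographic data (most likely in R). As a rough sketch of the same idea in Python, using the pygam library and hypothetical file and column names standing in for that data:

```python
import numpy as np
import pandas as pd
from pygam import LinearGAM, s, te

# Hypothetical input: a CSV with one row per UPSID language and columns
# 'population', 'area' and 'phonemes' (the file and column names are
# assumptions for illustration).
df = pd.read_csv('upsid_demography.csv')

X = np.column_stack([np.log(df['population']), np.log(df['area'])])
y = df['phonemes'].to_numpy()

# Smooth terms for each predictor plus a tensor-product interaction,
# i.e. the GAM analogue of "area and population size interact".
gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()
```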

Interestingly, Atkinson seems to derive much of his thinking, at least in his choice of demographic variables, from work into the transmission of cultural artefacts (see here and here). For me, there are clear uses for these demographic models in testing hypotheses for linguistic transmission and change, as I see language as a cultural product. It appears Atkinson reached the same conclusion. Where we depart, however, is in our overall explanations of the data. My major problem with the claim is theoretical: he hasn’t ruled out other historical-evolutionary explanations for these patterns.

Before we get into the bulk of my criticism, I’ll provide a very brief overview of the paper.

Continue reading “The Return of the Phoneme Inventories”

Evolved structure of language shows lineage-specific trends in word-order universals

Via Simon Greenhill:

Dunn M, Greenhill SJ, Levinson SC, & Gray RD (2011). Evolved structure of language shows lineage-specific trends in word-order universals. Nature.

Some colleagues and I have a new paper out in Nature showing that the evolved structure of language shows lineage-specific trends in word-order universals. I’ve written an overview/FAQ on this paper here, and there’s a nice review of it here and here.

The Abstract:

Languages vary widely but not without limit. The central goal of linguistics is to describe the diversity of human languages and explain the constraints on that diversity. Generative linguists following Chomsky have claimed that linguistic diversity must be constrained by innate parameters that are set as a child learns a language. In contrast, other linguists following Greenberg have claimed that there are statistical tendencies for co-occurrence of traits reflecting universal systems biases, rather than absolute constraints or parametric variation. Here we use computational phylogenetic methods to address the nature of constraints on linguistic diversity in an evolutionary framework. First, contrary to the generative account of parameter setting, we show that the evolution of only a few word-order features of languages are strongly correlated. Second, contrary to the Greenbergian generalizations, we show that most observed functional dependencies between traits are lineage-specific rather than universal tendencies. These findings support the view that—at least with respect to word order—cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states.

 

Variation in Experiment Participant Applications

What do people expect when they sign up to a linguistics experiment?

I’m currently running an experiment and so I posted an ad for participants.  It simply states “You will take part in a linguistics experiment.  You will be paid £6.”, and gives my email.  I got 30 replies in a few hours, but was struck by the variation in the responses.  Here are some pointless graphs:

First, a look at the distribution of email subjects.  This reveals that most people know they are going to participate in an experiment, but fewer realise that they will contribute to research.  One person thought that they would be doing a “Research Study”.  What’s one of them?

However, here’s the killer.  Analysing the first lines of the emails, I noticed a distinct power law relationship between frequency and casualness.

People just don’t respect linguists any more.

And that’s why I make people do hours of mind-bending iterated learning experiments with spinning cats.
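On the power-law claim above: the usual quick-and-dirty check is a straight line on log-log axes. A minimal sketch, where the casualness scores and frequencies are simply whatever you hand-code from the emails:

```python
import numpy as np

def power_law_fit(x, y):
    """Fit y = a * x**b by ordinary least squares on log-log axes.
    Returns (a, b); a straight line in log-log space is the standard
    quick check for a power law."""
    logx, logy = np.log(x), np.log(y)
    b, loga = np.polyfit(logx, logy, 1)
    return np.exp(loga), b

# e.g. casualness scores (1 = "Dear Dr ...", 5 = "hey") against how many
# emails opened at each level, both hand-coded from the replies.
```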

The Genesis of Grammar

In my previous post on linguistic replicators and major transitions, I mentioned grammaticalisation as a process that might inform us about the contentive-functional split in the lexicon. Naturally, it makes sense that grammaticalisation might offer insights into other transitions in linguistics, and, thanks to an informative comment from a regular reader, I was directed to a book chapter by Heine & Kuteva (2007): The Genesis of Grammar: On combining nouns. I might dedicate a post to the paper in the future, but, as with many previous claims, this probably won’t happen. So instead, here is the abstract and a table of the authors’ hypothesised grammatical innovations:

That it is possible to propose a reconstruction of how grammar evolved in human languages is argued for by Heine and Kuteva (2007). Using observations made within the framework of grammaticalization theory, these authors hypothesize that time-stable entities denoting concrete referential concepts, commonly referred to as ‘nouns’, must have been among the first items distinguished by early humans in linguistic discourse. Based on crosslinguistic findings on grammatical change, this chapter presents a scenario of how nouns may have contributed to introducing linguistic complexity in language evolution.

The Bog

If you like wading through deposits of dead animal material, then you should go over and visit Richard Littauer's new blog, The Bog. Having been exposed to his writings both on this blog and through the Edinburgh language society website, I'm sure it will be worth a visit — for good writing, if not for your dire need to distinguish between forest swamps and shrub swamps. His first post is on Mung, the colloquial name for Pylaiella littoralis, which is apparently a common seaweed. Here is his quick overview of the blog:

So, The Bog is going to be the resting place for various studies and explorations. Richard Littauer is the writer; he is working on his MA in Linguistics at Edinburgh University. He writes about evolutionary linguistics and culture at Replicated Typo, about general linguistic musings at a non-academic standard at Lang. Soc., about constructed languages on Llama, and about various philosophical things at Pitch Black Press. Since none of these blogs were a perfect fit for the scientific equivalent of a swamp-romp through subjects he doesn’t study, he set up this blog. Expect posts about ecology, biology, linguistics, anthropology, or anything in between.

The fact that it's called the Bog has nothing to do with the British slang for 'bathroom'. Rather, Richard (well, I) have an affinity with swamps for some unexplained reason. Expect posts on swamps.

If that doesn’t appeal to you, then Richard is also well-known for being the world’s number one Na’vi fan.

From Natyural to Nacheruhl: Utterance Selection and Language Change

Most of us should know by now that language changes. It's why the 14th-century prose of Geoffrey Chaucer is nearly impenetrable to modern-day speakers of English. It is also why Benjamin Franklin's phonetically transcribed pronunciation of the English word natural sounded like natyural (phonetically [nætjuɹəl]) rather than our modern variant with a ch sound (phonetically [nætʃəɹəl]). However, it is often taken for granted on this blog that language change can be understood as an evolutionary process. Many people might not see the utility of such thinking outside the realm of biology; that is, they take evolutionary theory to be strictly the preserve of describing biological change, and less useful as a generalisable concept. A relatively recent group of papers, however, has taken the conceptual machinery of evolutionary theory (see Hull, 2001) and applied it to language.

[Image: It's all natyural, yo!]

Broadly speaking, these utterance selection models highlight that language change occurs across two steps, each corresponding to an evolutionary process: (1) the production of an utterance, and (2) the propagation of linguistic variants within a speech community. The first of these, the production of an utterance, takes place across an extremely short timescale: we will replicate particular sounds, words, and constructions millions of times across our production lifetime. It is at this step that variation is generated: phonetic variation, for instance, is not only generated through different speakers having different phonetic values for a single phoneme — the same speaker will produce different phonetic values for a single phoneme based on the context. Through variation comes the possibility of selection within a speech community. This leads us to our second timescale, which sees the selection and propagation of these variants — a process that may “take many generations of the replication of the word, which may–or may not–extend beyond the lifetime of an individual speaker.” (Croft, in press).

Recent mathematical work in this area has highlighted four selection mechanisms: replicator selection, neutral evolution, neutral interactor selection, and weighted interactor selection. I’ll now provide a brief overview of each of these mechanisms in relation to language change.
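As a crude illustration of the difference between the first two of these (a deliberately simplified, Wright-Fisher-style sketch, not the actual models in those papers), here is a simulation of a single binary variant whose frequency changes either by resampling alone (neutral evolution) or by resampling plus a small replication bias (replicator selection):

```python
import numpy as np

def simulate_variant(n_speakers=100, generations=200, bias=0.0, p0=0.5, seed=0):
    """Minimal utterance-pool model of one binary variant.
    Each generation, speakers resample the variant from the previous
    generation's pool. bias = 0 gives neutral evolution (drift only);
    bias > 0 weights the innovative variant, i.e. replicator selection."""
    rng = np.random.default_rng(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        weighted = p * (1 + bias) / (p * (1 + bias) + (1 - p))  # selection step
        p = rng.binomial(n_speakers, weighted) / n_speakers      # sampling step
        trajectory.append(p)
    return trajectory

# neutral drift vs. weak replicator selection from the same starting frequency
drift = simulate_variant(bias=0.0)
selected = simulate_variant(bias=0.05)
```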

Continue reading “From Natyural to Nacheruhl: Utterance Selection and Language Change”

Musings of a Palaeolinguist

Hannah recently directed me towards a new language evolution blog: Musings of a Palaeolinguist. From my reading, the blog focuses on gradualist and abruptist accounts of language evolution. Here is a section from one of her posts, Evolution of Language and the evolution of syntax: Same debate, same solution?, which also touches on the protolanguage concept:

In my thesis, I went through a literature review of gradual and abruptist arguments for language evolution, and posited an intermediate stage of syntactic complexity where a language might have only one level of embedding in its grammar.  It’s a shaky and underdeveloped example of an intermediate stage of language, and requires a lot of exploration; but my reason for positing it in the first place is that I think we need to think of the evolution of syntax the way many researchers are seeing the evolution of language as a whole, not as a monolithic thing that evolved in one fell swoop as a consequence of a genetic mutation, but as a series of steps in increasing complexity.

Derek Bickerton, one of my favourite authors of evolutionary linguistics material, has written a number of excellent books and papers on the subject.  But he also argues that language likely experienced a jump from a syntax-less protolanguage to a fully modern version of complex syntax seen in languages today.  To me that seems unintuitive.  Children learn syntax in steps, and non-human species seem to only be able to grasp simple syntax.  Does this not suggest that it’s possible to have a stable stage of intermediate syntax?

I've generally avoided writing about these early stages of language, largely because I had little of use to say on the topic, but I've now got some semi-developed thoughts that I'll share in another post. In regard to the above quote, I do agree with the author's assertion that there was an intermediate stage, rather than Bickerton's proposed jump. In fact, we see languages today (polysynthetic ones) where there are limitations on the level of embedding, one example being Bininj Gun-wok. We can also stretch the discussion to look at recursion in languages, as Evans and Levinson (2009) demonstrate:

In discussions of the infinitude of language, it is normally assumed that once the possibility of embedding to one level has been demonstrated, iterated recursion can then go on to generate an infinite number of levels, subject only to memory limitations. And it was arguments from the need to generate an indefinite number of embeddings that were crucial in demonstrating the inadequacy of finite state grammars. But, as Kayardild shows, the step from one-level recursion to unbounded recursion cannot be assumed, and once recursion is quarantined to one level of nesting it is always possible to use a more limited type of grammar, such as finite state grammar, to generate it.
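That last point can be made concrete with a toy fragment (an invented mini-grammar of English, not Kayardild): once embedding is capped at one level, a plain regular expression, a finite-state device with no stack, is enough to recognise the pattern, whereas unbounded nesting would defeat it.

```python
import re

# One level of embedding only: a matrix clause that may contain at most one
# relative clause, which itself contains none. Because the depth is capped,
# a regular expression (a finite-state device) suffices; no stack and no
# unbounded recursion are needed.
NP     = r"the (?:dog|cat|rat)"
REL    = rf"(?: that {NP} (?:chased|saw))?"   # optional, non-nesting
CLAUSE = rf"{NP}{REL} (?:ran|slept)"

one_level = re.compile(rf"^{CLAUSE}$")

print(bool(one_level.match("the dog that the cat chased ran")))                    # True
print(bool(one_level.match("the dog that the cat that the rat saw chased ran")))   # False
```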