Robustness, Evolvability, Degeneracy and stuff like that…

Much of the work I plan to do for this year involves integrating traditional and contemporary theories of language change within an evolutionary framework. In my previous post I introduced the concept of degeneracy, which, to briefly recap, refers to components that have a structure-to-function ratio of many-to-one, with a single degenerate structure being capable of performing distinct functions under different conditions (pluripotency). Whitacre (2010: 5) provides a case in point for biological systems: here, the adhesin gene family in A. Saccharomyces “expresses proteins that typically play unique roles during development, yet can perform each other’s functions when expression levels are altered”.

But what about degeneracy in language? For a start, we already know from basic linguistic theory that forms (i.e. structures) are paired with meanings (i.e. functions). More recent work has expanded upon this notion, especially in developing the concept of constructions (Goldberg, 2003): “direct form-meaning pairings that range from the very specific (words or idioms) to the more general (passive constructions, ditransitive constructions), and from very small units (words with affixes, walked) to clause-level or even discourse-level units” (Beckner et al., 2009: 5). When applied to constructions, degeneracy fits squarely with work identifying language as a Complex Adaptive System (see here) and as a culturally transmitted replicator (see here and here). This offers a link between the generation of first-order synchronic variation – in the form of innovation (e.g. newly introduced linguistic material such as sounds, words and grammatical constructions) – and the selection, propagation and fixation of linguistic variants within a speech community.

For the following example, I’m going to look at a specific type of discourse-pragmatic feature, or construction, which has attracted renewed interest over the last thirty years: General Extenders (GEs) – utterance- or clause-final discourse particles, such as and stuff and or something. Researchers are realising that, far from being superfluous linguistic baggage, these features “carry social meaning, perform indispensable functions in social interaction, and constitute essential elements of sentence grammar” (Pichler, 2010: 582). Of specific relevance here, GEs, and discourse-pragmatic particles more generally, are multifunctional: that is, they are not confined to a single communicative domain, and can even come to serve multiple roles within the same communicative context or utterance.
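As a purely illustrative aside, here is a minimal Python sketch of what a degenerate, pluripotent form-function mapping looks like. The forms are the GEs just mentioned, but the function labels are simplified stand-ins of my own rather than an analysis from the GE literature: the point is only that structurally distinct forms can serve the same function, while a single form can serve several.

```python
# Toy illustration of degeneracy in form-function mappings.
# The function labels are simplified stand-ins, not claims from the GE literature.
FORM_FUNCTION = {
    "and stuff":            {"set-marking", "solidarity"},
    "or something":         {"set-marking", "hedging"},
    "and things like that": {"set-marking"},
}

def forms_for(function):
    """Return every structurally distinct form that can perform a given function."""
    return {form for form, functions in FORM_FUNCTION.items() if function in functions}

print(forms_for("set-marking"))        # all three forms -> many structures, one function (degeneracy)
print(FORM_FUNCTION["or something"])   # one structure, several functions (pluripotency)
```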

It is proposed that the concept of degeneracy will allow us to explain how multifunctional discourse markers emerge from variation existing at structural levels of linguistic organisation, such as the phonological and morphosyntactic components. If anything, I hope the post might serve as some food for thought, as I’m still grappling with the applications of the theory (and whether there’s anything useful to say!).

Continue reading “Robustness, Evolvability, Degeneracy and stuff like that…”

Cultural Evolution and the Impending Singularity

Prof. Alfred Hubler is an actual mad professor who is a danger to life as we know it.  In a talk this evening he went from ball bearings in castor oil to hyper-advanced machine intelligence and from some bits of string to the boundary conditions of the universe.  Hubler suggests that he is building a hyper-intelligent computer.  However, will hyper-intelligent machines actually give us a better scientific understanding of the universe, or will they just spend their time playing Tetris?

Let him take you on a journey…

Continue reading “Cultural Evolution and the Impending Singularity”

Statistics and Symbols in Mimicking the Mind

MIT recently held a symposium on the current status of AI, which apparently has seen precious little progress in recent decades. The discussion, it seems, ground down to a squabble over the prevalence of statistical techniques in AI and a call for a revival of work on the sorts of rule-governed models of symbolic processing that once dominated much of AI and its sibling, computational linguistics.

Briefly, from the early days in the 1950s up through the 1970s, both disciplines used models built on carefully hand-crafted symbolic knowledge. The computational linguists built parsers and sentence generators, and the AI folks modeled specific domains of knowledge (e.g. diagnosis in selected medical domains, naval ships, toy blocks). Initially these efforts worked like gangbusters. Not that they did much by Star Trek standards, but they actually did something, and they did things never before done with computers. That’s exciting, and fun.
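To give a flavour of what “carefully hand-crafted symbolic knowledge” looked like in miniature, here is a toy Python sketch; the grammar is invented for illustration and is not taken from any historical system. Every rule is written by hand, and the program can only analyse what the rules anticipate.

```python
# Toy hand-crafted grammar in the symbolic spirit of early computational
# linguistics; the rules are invented for illustration, not from any real system.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["mat"]],
    "V":  [["sat"], ["saw"]],
}

def parse(symbol, words):
    """Return the leftover words for each way `symbol` can cover a prefix of `words`."""
    if symbol not in GRAMMAR:  # terminal symbol: it must match the next word
        return [words[1:]] if words and words[0] == symbol else []
    results = []
    for production in GRAMMAR[symbol]:  # try each hand-written rule in turn
        partials = [words]
        for part in production:
            partials = [rest for p in partials for rest in parse(part, p)]
        results.extend(partials)
    return results

# A sentence is grammatical if some parse of "S" consumes every word.
print([] in parse("S", "the cat saw the mat".split()))  # True
print([] in parse("S", "cat the saw".split()))          # False: the rules don't cover it
```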

In time, alas, the excitement wore off and there was no more fun. Just systems that got too big and failed too often and they still didn’t do a whole heck of a lot.

Then, starting, I believe, in the 1980s, statistical models were developed that, yes, worked like gangbusters. And these models actually did practical tasks, like speech recognition and then machine translation. That was a blow to the symbolic methodology because these programs were “dumb.” They had no knowledge crafted into them, no rules of grammar, no semantics. Just routines that learned while gobbling up terabytes of example data. Thus, as Google’s Peter Norvig points out, machine translation is now dominated by statistical methods. No grammars and parsers carefully hand-crafted by linguists. No linguists needed.

What a bummer. For machine translation is THE prototype problem for computational linguistics. It’s the problem that set the field in motion and has been a constant arena for research and practical development. That’s where much of the handcrafted art was first tried, tested, and, in a measure, proved. For it to now be dominated by statistics . . . bummer.

So that’s where we are. And that’s what the symposium was chewing over.

Continue reading “Statistics and Symbols in Mimicking the Mind”

Cultural inheritance in studies of artificial grammar learning

Recently, I’ve been attending an artificial language learning research group and have discovered an interesting case of cultural inheritance. Arthur Reber was one of the first researchers to look at the implicit learning of grammar. Way back in 1967, he studied how adults (quaintly called ‘Ss’ in the original paper) learned an artificial grammar generated by a finite state automaton. Here is the grand-daddy of artificial language learning automata:
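(As a purely illustrative aside, here is a minimal Python sketch of how a finite-state grammar of this kind generates training strings; the transition table is an invented stand-in, not Reber’s original automaton.)

```python
import random

# Minimal sketch of string generation from a finite-state grammar, in the spirit
# of Reber (1967). NOTE: this transition table is an invented stand-in, not
# Reber's original automaton.
TRANSITIONS = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S3")],
    "S2": [("V", "S2"), ("X", "S3")],
    "S3": [("S", "END"), ("V", "END")],
}

def generate(start="S0"):
    """Walk the automaton from the start state, emitting one symbol per transition."""
    state, output = start, []
    while state != "END":
        symbol, state = random.choice(TRANSITIONS[state])
        output.append(symbol)
    return "".join(output)

print([generate() for _ in range(5)])  # e.g. ['TSXS', 'PVXV', 'TXV', ...]
```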

Continue reading “Cultural inheritance in studies of artificial grammar learning”

The Genesis of Grammar

In my previous post on linguistic replicators and major transitions, I mentioned grammaticalisation as a process that might inform us about the contentive-functional split in the lexicon. Naturally, it makes sense that grammaticalisation might offer insights into other transitions in linguistics, and, thanks to an informative comment from a regular reader, I was directed to a book chapter by Heine & Kuteva (2007): The Genesis of Grammar: On combining nouns. I might dedicate a post to the paper in the future, but, as with many previous claims, this probably won’t happen. So instead, here is the abstract and a table of the authors’ hypothesised grammatical innovations:

That it is possible to propose a reconstruction of how grammar evolved in human languages is argued for by Heine and Kuteva (2007). Using observations made within the framework of grammaticalization theory, these authors hypothesize that time-stable entities denoting concrete referential concepts, commonly referred to as ‘nouns’, must have been among the first items distinguished by early humans in linguistic discourse. Based on crosslinguistic findings on grammatical change, this chapter presents a scenario of how nouns may have contributed to introducing linguistic complexity in language evolution.

Musings of a Palaeolinguist

Hannah recently directed me towards a new language evolution blog: Musings of a Palaeolinguist. From my reading of the blog, the general focus seems to be on gradualist and abruptist accounts of language evolution. Here is a section from one of her posts, Evolution of Language and the evolution of syntax: Same debate, same solution?, which also touches on the protolanguage concept:

In my thesis, I went through a literature review of gradual and abruptist arguments for language evolution, and posited an intermediate stage of syntactic complexity where a language might have only one level of embedding in its grammar.  It’s a shaky and underdeveloped example of an intermediate stage of language, and requires a lot of exploration; but my reason for positing it in the first place is that I think we need to think of the evolution of syntax the way many researchers are seeing the evolution of language as a whole, not as a monolithic thing that evolved in one fell swoop as a consequence of a genetic mutation, but as a series of steps in increasing complexity.

Derek Bickerton, one of my favourite authors of evolutionary linguistics material, has written a number of excellent books and papers on the subject.  But he also argues that language likely experienced a jump from a syntax-less protolanguage to a fully modern version of complex syntax seen in languages today.  To me that seems unintuitive.  Children learn syntax in steps, and non-human species seem to only be able to grasp simple syntax.  Does this not suggest that it’s possible to have a stable stage of intermediate syntax?

I’ve generally avoided writing about these early stages of language, largely because I had little of use to say on the topic, but I’ve now got some semi-developed thoughts that I’ll share in another post. With regard to the above quote, I do agree with the author’s assertion that there is an intermediate stage, rather than Bickerton’s proposed jump. In fact, we see languages today, such as the polysynthetic language Bininj Gun-wok, where there are limitations on the level of embedding. We can also stretch the discussion to look at recursion in languages, as Evans and Levinson (2009) demonstrate:

In discussions of the infinitude of language, it is normally assumed that once the possibility of embedding to one level has been demonstrated, iterated recursion can then go on to generate an infinite number of levels, subject only to memory limitations. And it was arguments from the need to generate an indefinite number of embeddings that were crucial in demonstrating the inadequacy of finite state grammars. But, as Kayardild shows, the step from one-level recursion to unbounded recursion cannot be assumed, and once recursion is quarantined to one level of nesting it is always possible to use a more limited type of grammar, such as finite state grammar, to generate it.
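To illustrate the quoted point with a toy example (invented English sentences, nothing to do with Kayardild): once embedding is limited to one level, the resulting pattern is a regular language, so a finite-state device such as an ordinary regular expression suffices to recognise it; unbounded centre-embedding, by contrast, cannot be captured this way.

```python
import re

# Toy illustration (invented English examples, not Kayardild): sentences with at
# most ONE level of centre-embedding form a regular language, so a finite-state
# device -- here an ordinary regular expression -- can recognise them.
NP = r"the (?:rat|cat|dog)"
V = r"(?:chased|bit|ran)"
one_level = re.compile(rf"^{NP}(?: {NP} {V})? {V}$")

print(bool(one_level.match("the rat ran")))                             # True: no embedding
print(bool(one_level.match("the rat the cat chased ran")))              # True: one level of nesting
print(bool(one_level.match("the rat the cat the dog bit chased ran")))  # False: this pattern stops at one level; unbounded nesting needs more than finite state
```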

That’s Linguistics (Not logistics)


Linguists really need a catchy tune to match those in logistics. Any takers?

I always remember when one of my former lecturers said he was surprised by how little the average person knows about linguistics. For me, this was best exemplified when, upon enquiring about my degree, my friend paused for a brief moment and said: “Linguistics. That’s like logistics, right?” Indeed. Not really being in the mood to bash my friend’s ignorance into a bloody pulp of understanding, I decided to take a swig of my beer and simply replied: “No, not really. But it doesn’t matter.” Feeling guilty for not gathering the entire congregation of party-goers, sitting them down and proceeding to explain the fundamentals of linguistics, I have instead decided to write a series of 101 posts.

With that said, a good place to start is by providing some dictionary definitions highlighting the difference between linguistics and logistics:

Linguistics /lɪŋˈgwɪs.tɪks/ noun

the systematic study of the structure and development of language in general or of particular languages.

Logistics /ləˈdʒɪs.tɪks/ plural noun

the careful organization of a complicated activity so that it happens in a successful and effective way.

Arguably, linguistics is a logistical solution for successfully, and rigorously, studying language through the scientific method, but to avoid further confusion this is the last time you’ll see logistics in these posts. So, as you can probably infer, linguistics is a fairly broad term that, for all intents and purposes, simply means the discipline of studying language. Those who partake in the study of language are known as linguists. This leads me to another point of contention: a linguist isn’t synonymous with a polyglot. Although there are plenty of linguists who do speak more than one language, many of them are quite content just sticking to their native language. It is, after all, possible for linguists to study many aspects of a language without necessarily having anything like native-level competency. In fact, other than occasionally shouting pourquoi when (drunkenly) reflecting on my life choices, or ach-y-fi when a Brussels sprout somehow manages to make its way near my plate, I’m mainly monolingual.

Continue reading “That’s Linguistics (Not logistics)”

Memory, Social Structure and Language: Why Siestas affect Morphological Complexity

Children are better than adults at learning second languages. Children find it easy, can do it implicitly and achieve native-like competence. However, as we get older we find learning a new language difficult: we need explicit teaching and find some aspects, such as grammar and pronunciation, hard to master. What is the reason for this? The foremost theories suggest it is linked to memory constraints (Paradis, 2004; Ullman, 2005). Children find it easy to incorporate knowledge into procedural memory – memory that encodes procedures and motor skills and has been linked to grammar, morphology and pronunciation. Procedural memory atrophies in adults, but they develop good declarative memory – memory that stores facts and is used for retrieving lexical items. This seems to explain the difference between adults and children in second language learning. However, this is a proximate explanation. What about the ultimate explanation of why languages are like this?

Continue reading “Memory, Social Structure and Language: Why Siestas affect Morphological Complexity”

Some Links #17: The Return of Whorf

The famous Klingon linguist, Whorf, has returned with his theories on linguistic relativity (I know, terrible joke).

The Largest Whorfian Study Ever. The Lousy Linguist looks at the paper Ways to go: Methodological considerations in Whorfian studies on motion events. As you can probably guess, the paper deals with the methodological issues surrounding linguistic relativity. It’s all interesting stuff, bringing to light important questions about how the brain handles language. I’m fairly lay when it comes to this topic, so for more background on the current events, see similar posts over at Language Log: Never Mind the Conclusions, What’s the Evidence? and SLA Blog: Linguistic Relativity, Whorf, Linguistic Relativity.

But Science Doesn’t Work That Way: Miller & Chomsky (1963). Many of you who read this blog will be familiar with the position taken by Melody’s post over at Child’s Play: against a strong nativist position in language acquisition. It’s the first part in a series of posts so I’ll reserve judgement on her conclusions until she’s finished. But much of her post is drawn from a brilliant paper by Scholz and Pullum (2005): Irrational Nativist Exuberance. Key paragraph:

Do we really want to say that phonemes are ‘innate’?

I haven’t yet addressed how we know — with all but certainty — that the model Miller and Chomsky used had to be a poor approximation of human learning capabilities.  It has to do with phonemes.

Experiments have shown that people are remarkably sensitive to the transitional probabilities between phonemes in their native languages, both when speaking and when listening to speech.  If Miller and Chomsky’s assessment of probabilistic learning is correct, then the problem of “parameter estimation” should apply not only to learning the probabilities between words, but also to learning the probabilities between phonemes.  Given that people do learn to predict phonemes, Miller and Chomsky’s logic would force us to conclude that not only must ‘grammar’ be innate, but the particular distribution of phonemes in English (and every other language) must be innate as well.

We only get to this absurdist conclusion because Miller & Chomsky’s argument mistakes philosophical logic for science (which is, of course, exactly what intelligent design does).  So what’s the difference between philosophical logic and science? Here’s the answer, in Einstein’s words, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
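As a purely illustrative aside (with made-up transcriptions, not data from the paper under discussion), estimating transitional probabilities between phonemes amounts to nothing more than counting over experience, which is exactly the kind of learning the quoted argument says people demonstrably do:

```python
from collections import Counter

# Toy sketch of transitional probabilities between phonemes; the "transcriptions"
# below are made up for illustration, not real corpus data.
words = ["kæt", "kɪt", "kɪk", "tæk"]

pair_counts = Counter()
first_counts = Counter()
for w in words:
    for a, b in zip(w, w[1:]):       # every adjacent phoneme pair within a word
        pair_counts[(a, b)] += 1
        first_counts[a] += 1

def transitional_probability(a, b):
    """P(b follows a), estimated purely by counting over the toy 'corpus'."""
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

print(transitional_probability("k", "ɪ"))  # 2/3: 'k' is followed by 'ɪ' in two of its three non-final occurrences
```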

PLoS Blogs. Yet another blogging network, this time with the Public Library of Science. The most notable move, for me at least, is Neuroanthropology. The move doesn’t seem to have affected their ability to produce good articles, the latest of which concerns Uner Tan Syndrome (I’m sure there was a documentary about this on the BBC…).

HapMap 3: more people ~ more genetic variation. Razib has a cool read on the new HapMap dataset. The current paper (Integrating common and rare genetic variation in diverse human populations) looked for variants across the genome in 11 populations, comprising 1,184 samples. The new dataset has been especially useful for capturing less common variants. As with previous versions, you can also explore the data. Here’s the conclusion from the paper:

With improvements in sequencing technology, low-frequency variation is becoming increasingly accessible. This greater resolution will no doubt expand our ability to identify genes and variants associated with disease and other human traits. This study integrates CNPs and lower-frequency SNPs with common SNPs in a more diverse set of human populations than was previously available. The results underscore the need to characterize population-genetic parameters in each population, and for each stratum of allele frequency, as it is not possible to extrapolate from past experience with common alleles. As expected, lower-frequency variation is less shared across populations, even closely related ones, highlighting the importance of sampling widely to achieve a comprehensive understanding of human variation.

Mathematics: From the Birth of Numbers. Someone handed this in to the charity store I work at: it’s a brilliant book by Jan Gullberg on (surprise, surprise) the history of mathematics. The first chapter was on mathematics and language, so I had to pick it up, and not just for that chapter alone, as there are plenty of gaps in my mathematical knowledge that I’m sure this will clear up.

Theory of Mind and Language Evolution: What can psychopathology tell us?

Theory of Mind is the ability to infer other persons’ mental states and emotions. It is thought to have evolved as part of the human social brain, probably emerging as an adaptive response to increasingly complex primate social interaction.

Brüne and Brüne-Cohrs (2006) explore the ‘evolutionary cost’ of theory of mind:

This sophisticated ‘metacognitive’ ability comes at an evolutionary cost, reflected in a broad spectrum of psychopathological conditions. Extensive research into autistic spectrum disorders has revealed that theory of mind may be selectively impaired, leaving other cognitive faculties intact. Recent studies have shown that observed deficits in theory of mind task performance are part of a broad range of symptoms in schizophrenia, bipolar affective disorder, some forms of dementia, ‘psychopathy’ and in other psychiatric disorders.

Now it’s fairly uncontroversial to assert that without theory of mind humans would never have evolved language (Sperber and Wilson, 2002). This is because if one can’t attribute to another a ‘mind’ like one’s own, or assume that other minds hold different information from one’s own, then one would see little point in trying to share information. (I’m sorry for the number of ‘one’s in that sentence.)

Sooo, it does not seem presumptuous to assume that people interested in the evolution of language should be interested in theory of mind. In fact, for many years evolutionary linguists, psychologists and biologists have been looking into this, mostly by observing the behaviour of animals, especially primates, to see if they display theory of mind capabilities. A good summary of this work can be found here, and a lot of relevant studies can be found on this blog in the What makes humans unique? posts by Michael. I’m not going to look at the animal data in this post, but instead at what deficits in some human conditions can tell us about the evolution of theory of mind. That is, what can autism, schizophrenia, bipolar affective disorder, dementia, ‘psychopathy’ and other psychiatric disorders tell us?

Continue reading “Theory of Mind and Language Evolution: What can psychopathology tell us?”