Replicated Typo 2.0 has reached 100,000 hits! The most popular search term that leads visitors here is ‘What makes humans unique?’ and part of the answer has to be our ability to transmit our culture. But as we’ve shown on this blog, culturally transmitted features can be highly correlated with each other. This fact is a source of both frustration and fascination, so I’ve roped together some of my favourite investigations of cultural correlations into a correlation super-chain. In addition, there’s a whole new spurious correlation at the end of the article!
Having had several months off, I thought I’d kick things off by looking at a topic that’s garnered considerable interest in evolutionary theory, known as degeneracy. As a concept, degeneracy is a well-known characteristic of biological systems, found in the genetic code (many different nucleotide sequences encode the same polypeptide) and in immune responses (populations of antibodies and other antigen-recognition molecules can take on multiple functions), among many other examples (cf. Edelman & Gally, 2001). More recently, degeneracy has come to be appreciated as applying to a much wider range of phenomena, with Paul Mason (2010) offering the following value-free, scientific definition:
Degeneracy is observed in a system if there are components that are structurally different (nonisomorphic) and functionally similar (isofunctional) with respect to context.
A pressing concern in evolutionary research is how increasingly complex forms “are able to evolve without sacrificing robustness or the propensity for future beneficial adaptations” (Whitacre & Bender, 2010). One common solution is to invoke redundancy: duplicate elements that have a one-to-one structure-to-function ratio (Mason, 2010). Nature does redundancy well, as the human body exemplifies: we have two eyes, two lungs, two kidneys, and so on. Still, even with redundant components, selection in biological systems leads to competitive elimination and the eventual extinction of redundant variants (ibid.).
Will advanced computers use H. sapiens as batteries?
I also blogged about a part of this talk here (why a mad scientist’s attempt at creating A.I. to make new scientific discoveries was doomed).
The talk was awarded a prize for best talk by the judging panel, which included David Krakauer, Tom Carter and best-selling author Cormac McCarthy. At several points in the talk, I completely forgot what I was supposed to say, because the people filming the event had asked me to set up my screen in a way that meant I couldn’t see my notes.
EDIT: Since writing this post, I have discovered a major flaw with the conclusion which is described here.
One of the problems with large-scale statistical analyses of linguistic typologies is the temporal resolution of the data. Because we typically have only single measurements for populations, we can’t see the dynamics of the system. A correlation between two variables that exists now may be an accident of more complex dynamics. For instance, Lupyan & Dale (2010) find a statistically significant correlation between a linguistic population’s size and its morphological complexity. One hypothesis is that the languages of larger populations are adapting to adult learners as those populations come into contact with other languages. Hay & Bauer (2007) also link demography with phonemic diversity. However, it’s not clear how robust these relationships are over time, because of a lack of data on these variables in the past.
To test this, a benchmark is needed. One method is to use careful statistical controls, such as controlling for the area in which the language is spoken, the density of the population, and so on. However, these data also tend to be synchronic. Another method is to compare the results against the predictions of a simple model. Here, I propose a simple model based on a dynamic in which cultural variants in small populations change more rapidly than those in large populations. This captures the stochastic nature of small samples (see the introduction of Atkinson, 2011 for a brief review of this idea). The model tests whether these dynamics lead to periods of apparent correlation between variables. Source code for this model is available at the bottom.
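To make the basic dynamic concrete, here is a minimal sketch of the idea: a continuous cultural variant drifts under sampling noise whose magnitude scales as 1/√N, so small populations change faster. The noise model and all parameter values here are my own illustrative assumptions, not the actual source code linked at the bottom of the post.

```python
import random
import statistics

def simulate(pop_size, generations=200, seed=0):
    """Random drift of a continuous cultural variant.

    Each generation the variant value is perturbed by sampling noise;
    the noise scales as 1/sqrt(N), so small populations drift faster
    (the standard error of a sample mean shrinks with sample size).
    """
    rng = random.Random(seed)
    value = 0.0
    trajectory = [value]
    for _ in range(generations):
        value += rng.gauss(0, 1.0 / pop_size ** 0.5)
        trajectory.append(value)
    return trajectory

# Same seed, so both runs see the same noise draws -- the only
# difference is how strongly population size damps them.
small = simulate(pop_size=50, seed=1)
large = simulate(pop_size=5000, seed=1)

spread_small = statistics.pstdev(small)
spread_large = statistics.pstdev(large)
```

Because both runs share a seed, the two trajectories differ only in scale, and the small population wanders exactly √(5000/50) = 10 times further from its starting value.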
Prof. Alfred Hubler is an actual mad professor who is a danger to life as we know it. In a talk this evening he went from ball bearings in castor oil to hyper-advanced machine intelligence and from some bits of string to the boundary conditions of the universe. Hubler suggests that he is building a hyper-intelligent computer. However, will hyper-intelligent machines actually give us a better scientific understanding of the universe, or will they just spend their time playing Tetris?
Right, I already referred to Atkinson’s paper in a previous post, and much of the work he’s presented is essentially part of a potential PhD project I’m hoping to do. Much of this stems back to last summer, when I mentioned how phoneme inventory size correlates with certain demographic features, such as population size and population density. Using the UPSID data, I generated a generalised additive model to demonstrate how area and population size interact in determining phoneme inventory size:
Interestingly, Atkinson seems to derive much of his thinking, at least in his choice of demographic variables, from work into the transmission of cultural artefacts (see here and here). For me, there are clear uses for these demographic models in testing hypotheses for linguistic transmission and change, as I see language as a cultural product. It appears Atkinson reached the same conclusion. Where we depart, however, is in our overall explanations of the data. My major problem with the claim is theoretical: he hasn’t ruled out other historical-evolutionary explanations for these patterns.
Before we get into the bulk of my criticism, I’ll provide a very brief overview of the paper.
Recently, I’ve been attending an artificial language learning research group and have discovered an interesting case of cultural inheritance. Arthur Reber was one of the first researchers to look at the implicit learning of grammar. Way back in 1967, he studied how adults (quaintly called ‘Ss’ in the original paper) learned an artificial grammar generated by a finite-state automaton. Here is the grand-daddy of artificial language learning automata:
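For anyone who wants to play with the grammar, here is a sketch of a string generator and recogniser for a Reber-style finite-state grammar. The transition table is the version of Reber’s automaton commonly reproduced in the literature; treat the exact arcs as an assumption rather than a faithful copy of the 1967 figure.

```python
import random

# Transition table for a Reber-style finite-state grammar (the commonly
# reproduced arcs; treat them as an assumption). Each state maps a legal
# letter to the next state; every string begins with B and ends with E.
REBER = {
    1: {"T": 2, "P": 3},
    2: {"S": 2, "X": 4},
    3: {"T": 3, "V": 5},
    4: {"X": 3, "S": 6},
    5: {"P": 4, "V": 6},
    6: {"E": None},  # final transition: emit E and stop
}

def generate(rng):
    """Random walk through the automaton, yielding one grammatical string."""
    state, letters = 1, ["B"]
    while state is not None:
        letter = rng.choice(sorted(REBER[state]))
        letters.append(letter)
        state = REBER[state][letter]
    return "".join(letters)

def accepts(string):
    """True if the string can be produced by the automaton."""
    if not string.startswith("B"):
        return False
    state = 1
    for letter in string[1:]:
        if state is None or letter not in REBER[state]:
            return False
        state = REBER[state][letter]
    return state is None  # must have consumed the final E

rng = random.Random(42)
samples = [generate(rng) for _ in range(5)]
```

Grammatical strings such as BTSSXXVVE are accepted, while a single illegal transition (e.g. BTTVE) is rejected, which is exactly the judgement Reber’s participants learned to make implicitly.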
It’s Charles Darwin’s birthday today! He’s 202. So, in celebration, I’ve written a post on the still-ongoing controversy that the theory of evolution by natural selection caused, and continues to cause, specifically with regard to the emergence of human intelligence.
Alfred Russel Wallace is widely seen as the co-discoverer of the theory of evolution by natural selection. While Darwin had been formulating his theory from as early as the late 1830s, he kept quiet about it for more than twenty years while he amassed evidence to support it. In 1858 Wallace, a contemporary naturalist, sent Darwin a letter outlining a theory of evolution which very closely mirrored Darwin’s own. The pair co-presented their theory to the Linnean Society in 1858, but thanks to Darwin’s long years spent amassing evidence and refining his ideas, it was his book, On the Origin of Species, published in 1859, which set Darwin’s name firmly in the history books as the discoverer of natural selection.
While Wallace’s part in the discovery of natural selection is far from undocumented or unknown, he is known largely for presenting ‘the same ideas’ as Darwin, and the differences between their ideas are rarely discussed. In this post I will briefly discuss a new(ish) paper by Steven Pinker on the evolution of human intelligence, and some of the differences between the thinking of Darwin and Wallace on the subject.
Darwin, unsurprisingly, asserted that the abstract nature of human intelligence can be fully explained by natural selection. In opposition to this, Wallace claimed that it was of no use to ancestral humans and therefore could only be explained by intelligent design:
“Natural selection could only have endowed savage man with a brain a few degrees superior to that of an ape, whereas he actually possesses one very little inferior to that of a philosopher.”(Wallace, 1870:343)
Unsurprisingly, most scientists these days do not agree with Wallace, either on the point that the human brain could not be the result of natural selection, or that it must therefore have been a product of design by a higher being. It would be both dismissive and dull to leave the discussion at that, however, which is where Pinker comes in. Despite Wallace’s argument probably coming to the wrong conclusion, he does raise some very interesting questions which need answering, namely: “why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did?” (Pinker, 2010:8993)
In my previous post on linguistic replicators and major transitions, I mentioned grammaticalisation as a process that might inform us about the contentive-functional split in the lexicon. Naturally, it makes sense that grammaticalisation might offer insights into other transitions in linguistics, and, thanks to an informative comment from a regular reader, I was directed to a book chapter by Heine & Kuteva (2007): The Genesis of Grammar: On combining nouns. I might dedicate a post to the paper in the future, but, as with many previous claims, this probably won’t happen. So instead, here is the abstract and a table of the authors’ hypothesised grammatical innovations:
That it is possible to propose a reconstruction of how grammar evolved in human languages is argued for by Heine and Kuteva (2007). Using observations made within the framework of grammaticalization theory, these authors hypothesize that time-stable entities denoting concrete referential concepts, commonly referred to as ‘nouns’, must have been among the first items distinguished by early humans in linguistic discourse. Based on crosslinguistic findings on grammatical change, this chapter presents a scenario of how nouns may have contributed to introducing linguistic complexity in language evolution.
The Categorisation Game or Naming Game looks at how agents in a population converge on a shared system for referring to continuous stimuli (Steels, 2005; Nowak & Krakauer, 1999). Agents play games with each other, one referring to an object with a word and the other trying to guess what object the first agent was referring to. Through experience with the world and feedback from other agents, agents update their words. Eventually, agents are able to communicate effectively. The model is usually couched in terms of agents trying to agree on labels for colours (a continuous meaning space). In this post I’ll show that the algorithms used have implicit mutual exclusivity biases, which favour monolingual viewpoints. I’ll also show that this bias is not necessary and obscures some interesting insights into the evolutionary dynamics of language.
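To make the bias concrete, here is a minimal sketch in the spirit of the Naming Game (cf. Steels, 2005): on a successful interaction, both agents delete all competing words for the object, keeping only the winner. That deletion step is exactly the kind of implicit mutual exclusivity bias I mean. All parameter values and implementation details below are my own assumptions, not any specific published algorithm.

```python
import random

class Agent:
    def __init__(self):
        self.words = {}  # object -> set of candidate words

    def speak(self, obj, rng):
        names = self.words.setdefault(obj, set())
        if not names:
            names.add("w%06d" % rng.randrange(10**6))  # invent a new word
        return rng.choice(sorted(names))

    def hear(self, obj, word):
        names = self.words.setdefault(obj, set())
        if word in names:
            # Success: collapse to the winning word, deleting competitors.
            # This deletion is the implicit mutual exclusivity bias --
            # it enforces "one word per object" on the way to consensus.
            self.words[obj] = {word}
            return True
        names.add(word)  # failure: hearer adopts the speaker's word
        return False

def play(n_agents=8, n_objects=3, rounds=20000, seed=0):
    rng = random.Random(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(agents, 2)
        obj = rng.randrange(n_objects)
        word = speaker.speak(obj, rng)
        if hearer.hear(obj, word):
            speaker.words[obj] = {word}  # speaker collapses too
    return agents

def is_converged(agents, n_objects):
    """True if every agent holds exactly one, shared, word per object."""
    for obj in range(n_objects):
        vocabs = [a.words.get(obj, set()) for a in agents]
        if any(len(v) != 1 for v in vocabs):
            return False
        if len({next(iter(v)) for v in vocabs}) != 1:
            return False
    return True

agents = play()
converged = is_converged(agents, 3)
```

Relaxing the deletion step, by letting inventories keep weighted synonyms instead of pruning them, removes the hard one-word-per-object constraint, which is the kind of relaxation this post goes on to explore.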