EvoLang proceedings are now online

This year, the proceedings of the Evolution of Language conference will appear online. The first group of papers is already up:

Browse the EvoLang Electronic Proceedings

The move to self-publishing is a bit of an experiment, but hopefully it’ll mean that the papers are more accessible to a wider audience.  To aid this, the papers are published under Creative Commons licenses.  Some papers also include supplementary materials.

The full list of papers will be updated as revisions come in, but here are some interesting papers available so far:

Continue reading “EvoLang proceedings are now online”

Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means it “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution the prokaryotes that merged had evolved independently and prior to the merging. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture, as he’d run out of time. But I’m not holding my breath until he coughs up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like that such thoughts are acceptable? Continue reading “Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains”

Underwood and Sellers 2015: Beyond narrative we have simulation

It is one thing to use computers to crunch data. It’s something else to use computers to simulate a phenomenon. Simulation is common in many disciplines, including physics, sociology, biology, engineering, and computer graphics (CGI special effects generally involve simulation of the underlying physical phenomena). Could we simulate large-scale literary processes?

In principle, of course. Why not? In practice, not yet. To be sure, I’ve seen the possibility mentioned here and there, and I’ve seen an example or two. But it’s not something many are thinking about, much less doing.

Nonetheless, as I was thinking about How Quickly Do Literary Standards Change? (Underwood and Sellers 2015) I found myself thinking about simulation. The object of such a simulation would be to demonstrate the principal result of that work, as illustrated in this figure:

[Figure: 19C Direction]

Each dot, regardless of color or shape, represents the position of a volume of poetry in a one-dimensional abstraction over a 3200-dimensional space – though that’s not how Underwood and Sellers explain it (for further remarks see “Drifting in Space” in my post, Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction). The trend line indicates that poetry is shifting in that space along a uniform direction over the course of the 19th century. Thus there seems to be a large-scale direction to that literary system. Could we create a simulation that achieves that result through ‘local’ means, without building a telos into the system?

The only way to find out would be to construct such a system. I’m not in a position to do that, but I can offer some remarks about how we might go about doing it.
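As a point of orientation, here is a minimal sketch of what that target result looks like operationally: represent each volume as a feature vector, project it onto a candidate direction in the high-dimensional space, and check whether the projection trends with publication year. Everything in the sketch, including the random stand-in data and the planted drift, is an assumption made for illustration; it is not Underwood and Sellers’ pipeline.

```python
# Hypothetical sketch: checking for a uniform "direction" in a word-frequency space.
# The data here are random stand-ins, not Underwood and Sellers' corpus.
import numpy as np

rng = np.random.default_rng(0)
n_volumes, n_features = 500, 3200          # volumes of poetry, word-frequency features
years = rng.integers(1800, 1900, n_volumes)

# Fake feature vectors that drift slightly along one hidden direction over time.
direction = rng.normal(size=n_features)
direction /= np.linalg.norm(direction)
X = rng.normal(size=(n_volumes, n_features)) + np.outer((years - 1850) / 50.0, direction)

# Project every volume onto the candidate direction (the one-dimensional abstraction).
projection = X @ direction

# Fit a trend line: does position along the direction track year of publication?
slope, intercept = np.polyfit(years, projection, 1)
print(f"slope per year: {slope:.4f}")
```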

* * * * *

I note that this post began as something I figured I could knock out in two or three afternoons. We’ve got a bunch of texts, a bunch of people, and the people choose to read texts, cycle after cycle after cycle. How complicated could it be to make a sketch of that? Pretty complicated.
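To give a sense of what even a crude version involves, here is about the simplest form of that loop I can imagine. The taste values, the popularity bonus, and the update rule are all invented for illustration; they are not claims about how readers actually behave.

```python
# Minimal agent-based sketch of "people choose texts, cycle after cycle".
# All modelling choices below (taste values, popularity-weighted appeal) are
# illustrative assumptions, not a reconstruction of any published model.
import random

N_TEXTS, N_READERS, N_CYCLES = 200, 50, 100

texts = [{"id": i, "style": random.random(), "reads": 0} for i in range(N_TEXTS)]
readers = [{"taste": random.random()} for _ in range(N_READERS)]

def appeal(reader, text):
    """Closer style-to-taste match plus a small popularity bonus."""
    return 1.0 - abs(reader["taste"] - text["style"]) + 0.01 * text["reads"]

for cycle in range(N_CYCLES):
    for reader in readers:
        # Each reader samples a handful of texts and reads the most appealing one.
        candidates = random.sample(texts, 10)
        chosen = max(candidates, key=lambda t: appeal(reader, t))
        chosen["reads"] += 1
        # Reading nudges the reader's taste toward what was read (local drift, no telos).
        reader["taste"] += 0.05 * (chosen["style"] - reader["taste"])

top = max(texts, key=lambda t: t["reads"])
print("most-read text:", top["id"], "with", top["reads"], "reads")
```

Even this toy version raises the questions that make the real thing complicated: how readers find texts, how reading changes taste, and whether any large-scale drift emerges from purely local choices.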

What follows is no more than a sketch. There are a bunch of places where I could say more, and more places where things need to be said but I don’t know how to say them. Still, if I can get this far in the course of a week or so, others can certainly take it further. It’s by no means a proof of concept, but it’s enough to convince me that at some time in the future we will be running simulations of large-scale literary processes.

I don’t know whether or not I would create such a simulation given a budget and appropriate collaborators. But I’m inclined to think that, if not now, then within the next ten years we’re going to have to attempt something like this, if for no other reason than to see whether or not it can tell us anything at all. The fact is, at some point, simulation is the only way we’re going to get a feel for the dynamics of literary process.

* * * * *

It’s a long way through this post, almost 5000 words. I begin with a quick look at an overall approach to simulating a literary system. Then I add some details, starting with stand-ins for (simulations of) texts and people. Next we have processes involving those objects. That’s the basic simulation, but it’s not the end of my post. I have some discussion of things we might do with this system, followed by suggestions about extending it. I conclude with a short discussion of the E-word. Continue reading “Underwood and Sellers 2015: Beyond narrative we have simulation”

How spurious correlations arise from inheritance and borrowing (with pictures)

James and I have written about Galton’s problem in large datasets.  Because two modern languages can have a common ancestor, the traits that they exhibit aren’t independent observations.  This can lead to spurious correlations: patterns in the data that are statistical artefacts rather than indications of causal links between traits.

However, I’ve often felt like we haven’t articulated the general concept very well.  For an upcoming paper, we created some diagrams that try to present the problem in its simplest form.

Spurious correlations can be caused by cultural inheritance 

[Figure: Gproblem2 – spurious correlation arising from cultural inheritance]

Above is an illustration of how cultural inheritance can lead to spurious correlations.  At the top are three independent historical cultures, each of which has a bundle of various traits which are represented as coloured shapes.  Each trait is causally independent of the others.  On the right is a contingency table for the colours of triangles and squares.  There is no particular relationship between the colour of triangles and the colour of squares.  However, over time these cultures split into new cultures.  Along the bottom of the graph are the currently observable cultures.  We now see a pattern has emerged in the raw numbers (pink triangles occur with orange squares, and blue triangles occur with red squares).  The mechanism that brought about this pattern is simply that the traits are inherited together, with some combinations replicating more often than others: there is no causal mechanism whereby pink triangles are more likely to cause orange squares.
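A tiny simulation makes the same point. The trait values, the number of splits, and the seed below are arbitrary assumptions; the logic is the one in the diagram: descendants copy both traits from their ancestor, so counting them as independent observations manufactures an association between traits that never influenced one another.

```python
# Sketch: causally independent traits look correlated once lineages multiply.
# The trait values and split counts are arbitrary illustrative assumptions.
import random
from collections import Counter

random.seed(1)

# Three ancestral cultures; triangle colour and square colour assigned independently.
ancestors = [{"triangle": random.choice(["pink", "blue"]),
              "square": random.choice(["orange", "red"])} for _ in range(3)]

# Each ancestor splits into several descendant cultures, inheriting both traits together.
descendants = []
for culture in ancestors:
    for _ in range(random.randint(2, 6)):
        descendants.append(dict(culture))

# Counting descendants as independent observations yields a contingency table
# with an apparent association, even though the traits never influenced each other.
table = Counter((c["triangle"], c["square"]) for c in descendants)
for (tri, sq), n in table.items():
    print(f"{tri} triangle & {sq} square: {n}")
```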

Spurious correlations can be caused by borrowing

[Figure: Gproblem_HorizontalB – spurious correlation arising from borrowing]

Above is an illustration of how borrowing (or areal effects, or horizontal cultural inheritance) can lead to spurious correlations.  Three cultures (left to right) evolve over time (top to bottom).  Each culture has a bundle of various traits which are represented as coloured shapes.  Each trait is causally independent of the others.  On the right is a count of the number of cultures with both blue triangles and red squares.  In the top generation, only one out of three cultures has both.  Over some period of time, the blue triangle is borrowed from the culture on the left by the culture in the middle, and then from the culture in the middle by the culture on the right.  By the end, all three cultures have blue triangles and red squares.  The mechanism that brought about this pattern is simply that one trait spread through the population: there is no causal mechanism whereby blue triangles are more likely to cause red squares.
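The borrowing case can be sketched just as minimally. The trait values and the left-to-right borrowing chain below are invented for illustration: one trait spreads neighbour to neighbour, and the co-occurrence count rises without any causal link between the traits.

```python
# Sketch: a single borrowed trait makes two unrelated traits co-occur everywhere.
# Trait names and the left-to-right borrowing chain are illustrative assumptions.
cultures = [
    {"triangle": "blue",  "square": "red"},   # leftmost culture starts with both
    {"triangle": "pink",  "square": "red"},
    {"triangle": "green", "square": "red"},
]

# Blue triangles are borrowed neighbour-to-neighbour, left to right.
for i in range(1, len(cultures)):
    cultures[i]["triangle"] = cultures[i - 1]["triangle"]

both = sum(c["triangle"] == "blue" and c["square"] == "red" for c in cultures)
print(f"{both} of {len(cultures)} cultures now have blue triangles and red squares")
```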

A similar effect would be caused by a bundle of causally unrelated features being borrowed, as shown below.

[Figure: Gproblem_Horizontal – a bundle of causally unrelated features borrowed together]

Empty Constructions and the Meaning of “Meaning”

Textbooks are boring. In most cases, they consist of a rather tiring collection of more or less undisputed facts, and they omit the really interesting stuff such as controversial discussions or problematic cases that pose a serious challenge to a specific scientific theory. However, Martin Hilpert’s “Construction Grammar and its Application to English” is an admirable exception, since it discusses various potential problems for Construction Grammar at length. What I found particularly interesting was the problem of “meaningless constructions”. In what follows, I will present some examples of such constructions and discuss what they might tell us about the nature of linguistic constructions. First, however, I will outline some basic assumptions of Construction Grammar. Continue reading “Empty Constructions and the Meaning of “Meaning””

John Lawler on Generative Grammar

From a Facebook conversation with Dan Everett (about slide rules, aka slipsticks, no less) and others:

The constant revision and consequent redefining and renaming of concepts – some imaginary and some very obvious – has led to a multi-dimensional spectrum of heresy in generative grammar, so complex that one practically needs chromatography to distinguish variants. Babel comes to mind, and also Windows™ versions. Most of the literature is incomprehensible in consequence – or simply repetitive, except it’s too much trouble to tell which.

–John Lawler

Systematic reviews 101: Internal and External Validity

Who remembers last summer when I started writing a series of posts on systematic literature reviews?

I apologise for neglecting it for so long, but here is a quick write-up on assessing the studies you are including in your review for internal and external validity, with special reference to experiments in artificial language learning and evolutionary linguistics (though this is relevant to any field which aspires to adopt the scientific method).

In the first post in the series, I outlined the differences between narrative and systematic reviews. One of the defining features of a systematic review is that it is not written with a specific hypothesis in mind. The literature search (which my next post will be about) is conducted with predefined inclusion criteria and, as a result, you will end up with a pile of studies to review regardless of their conclusions, or indeed regardless of their quality. Because there is no filter to catch bad science, we need methods for assessing the quality of a study or experiment, which is what this post is about.

(This will also help with DESIGNING a valid experiment, as well as assessing the validity of other people’s.)

What is validity?

Validity is the extent to which a conclusion is a well-founded one given the design and analysis of an experiment. It comes in two different flavours: external validity and internal validity.

External Validity

External validity is the extent to which the results of an experiment or study can be extrapolated to different situations. This is EXTREMELY important in the case of experiments in evolutionary linguistics, because the whole point of such experiments is to extrapolate your results to different situations (i.e. the emergence of linguistic structure in our ancestors), and we don’t have access to our ancestors to experiment on.

Here are some of the things that affect an experiment’s external validity (in linguistics/psychology):

  • Participant characteristics (age (especially important in language learning experiments), gender, etc.)
  • Sample size
  • Type of learning/training (important in artificial language learning experiments)
  • Characteristics of the input (e.g. the nature of the structure in an input language)
  • Modality of the artificial language (how similar to actual linguistic modalities?)
  • Modality of output measures (how the outcome was measured and analysed)
  • The task from which the output was produced (straightforward imitation or communication or some other task)

Internal Validity

Internal validity is the extent to which an experiment minimises its own systematic error within the circumstances under which it is performed.

Here are some of the things that affect an experiment’s internal validity:

  •  Selection bias (who’s doing the experiment and who gets put in which condition)
  • Performance bias (differences between conditions other than the ones of interest, e.g. running people in condition one in the morning and condition two in the afternoon)
  • Detection bias (how the outcome measures are coded and interpreted; blinding the coder to which condition a participant was in is paramount for reducing the researcher’s bias towards finding a difference between conditions. A lot of recent retractions have been down to failures to guard against detection bias.)
  • Attrition bias (ignoring drop-outs, especially if one condition is especially stressful, causing high drop-out rates and therefore bias in the participants who completed it. This probably isn’t a big problem in most evolutionary linguistics research, but may be in other psychological studies.)

Different types of bias will be relevant to different fields of research and different research questions, so it may be worth coming up with your own validity scoring method to apply to the studies within your review. But remember to be explicit about what your scoring method is, and about the pros and cons of the studies you are writing about.
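For instance, a scoring method can be as simple as an explicit list of criteria and weights. The rubric below is entirely made up; the point is only that whatever rubric you use should be stated this explicitly, so that readers can judge it.

```python
# Hypothetical example of an explicit validity-scoring rubric for a review.
# The criteria and weights are invented for illustration; define and report your own.
CRITERIA = {
    "random_allocation": 2,   # guards against selection bias
    "blinded_coding": 2,      # guards against detection bias
    "dropouts_reported": 1,   # guards against attrition bias
    "adequate_sample": 1,     # bears on external validity
}

def score(study):
    """Sum the weights of the criteria a study satisfies."""
    return sum(weight for criterion, weight in CRITERIA.items() if study.get(criterion))

studies = {
    "Study A": {"random_allocation": True, "blinded_coding": True, "dropouts_reported": True},
    "Study B": {"adequate_sample": True},
}
for name, features in studies.items():
    print(name, score(features), "/", sum(CRITERIA.values()))
```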

Hopefully this introduction will have helped you think about validity within experiments in what you’re interested in, and helped you take an objective view on assessing the quality of studies you are reviewing, or indeed conducting.

 

Toward a Computational Historicism. Part 1: Discourse and Conceptual Topology

Poets are the unacknowledged legislators of the world.
– Percy Bysshe Shelley

… it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.
– Sydney Lamb

Worlds within worlds – that’s how Tim Perper, my friend and colleague, described biology. At the smallest scale we have individual molecules, with DNA being of prime importance. At the largest scale we have the earth as a whole, with all living beings interacting in a single ecosystem over billions of years. In between we have cells, tissues, and organs of various sizes, autonomous organisms, populations of organisms on various scales from the invisible to continent-spanning, and interactions among populations of organisms on various scales.

Literature too is like that, from single figures and tropes, even single words (think of Joyce’s portmanteaus) through complete works of various sizes, from haiku to oral epics, from short stories through multi-volume novels, onto whole bodies of literature circulating locally, regionally, across continents and between them, from weeks and years to centuries and millennia. Somehow we as humanists and literary critics must comprehend it all. Breathtaking, no?

In this essay I sketch a potential computational historicism operating at multiple scales, both in time and textual extent. In the first part I consider network models at three scales: 1) topic models at the macroscale, 2) Moretti’s plot networks at the mesoscale, and 3) cognitive networks, taken from computational linguistics, at the microscale. I give examples of each and conclude by sketching relationships among them. I open the second part by presenting an account of abstraction given by David Hays in the early 1970s; in this model abstract concepts are defined over stories. I then move on to Hauser and Le-Khac on 19th Century novels, Stephen Greenblatt on self and person, and consider several texts: Amleth, Hamlet, The Winter’s Tale, Wuthering Heights, and Heart of Darkness.

Graphs and Networks

To the mathematician the image below depicts a topological object called a graph. Civilians tend to call such objects networks. The nodes or vertices, as they are called, are connected by arcs or edges.
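In computational terms such a graph is nothing more than a set of nodes and a set of edges, often stored as an adjacency list. Here is a minimal sketch; the node labels are arbitrary.

```python
# A graph is just a notation: the same nodes-and-edges structure can stand for
# a road map, a kinship tree, or a sentence parse. The labels below are made up.
from collections import defaultdict

edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")]

adjacency = defaultdict(set)
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

# Read it as a road map: which towns are directly reachable from B?
print(sorted(adjacency["B"]))   # ['A', 'C', 'D']
```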

[Figure: net – a simple graph]

Such graphs can be used to represent many different kinds of phenomena: a road map is an obvious example, a kinship tree is another, and sentence structure is a third. The point is that such graphs are signs of phenomena, notations. They are not the phenomena themselves. Continue reading “Toward a Computational Historicism. Part 1: Discourse and Conceptual Topology”

What’s a Language? Evidence from the Brain

Yesterday I put up a post (A Note on Memes and Historical Linguistics) in which I argued that, when historical linguists chart relationships between things they call “languages”, what they’re actually charting is mostly relationships among phonological systems. Though they talk about languages, as we ordinarily use the term, that’s not what they actually look at. In particular, they ignore horizontal transfer of words and concepts between languages.

Consider the English language, which is classified as a Germanic language. As such, it is different from French, which is a Romance language, though of course both Romance and Germanic languages are Indo-European. However, in the 11th Century CE the Norman French invaded Britain and they stuck around, profoundly influencing language and culture in Britain, especially the part that’s come to be known as England. Because of their focus on phonology, historical linguists don’t register this event and its consequences. The considerable French influence on English simply doesn’t count because it affected the vocabulary, but not the phonology.

Well, the historical linguists aren’t the only ones who have a peculiar view of their subject matter. That kind of peculiar vision is widespread.

Let’s take a look at a passage from Sydney Lamb’s Pathways of the Brain (John Benjamins 1999). He begins by talking about Roman Jakobson, one of the great linguists of the previous century:

Born in Russia, he lived in Czechoslovakia and Sweden before coming to the United States, where he became a professor of Slavic Linguistics at Harvard. Using the term language in a way it is commonly used (but which gets in the way of a proper understanding of the situation), we could say that he spoke six languages quite fluently: Russian, Czech, German, English, Swedish, and French, and he had varying amounts of skill in a number of others. But each of them except Russian was spoken with a thick accent. It was said of him that, “He speaks six languages, all of them in Russian”. This phenomenon, quite common except in that most multilinguals don’t control as many ‘languages’, actually provides excellent evidence in support of the conclusion that from a cognitive point of view, the ‘language’ is not a unit at all.

Think about that. “Language” is a noun, nouns are said to represent persons, places, or things – as I recall from some classroom long ago and far away. Language isn’t a person or a place, so it must be a thing. And the generic thing, if it makes any sense at all to talk of such, is a self-contained ‘substance’ (to borrow a word from philosophy), demarcated from the rest of the world. It is, well, it’s a thing, like a ball, you can grab it in your metaphorical hand and turn it around as you inspect it. Continue reading “What’s a Language? Evidence from the Brain”

Systematic reviews 101: How to phrase your research question

[Image: “Keep calm and formulate your research question” – from the JEPS Bulletin]

As promised, and first things first: when writing a systematic review, how should we phrase our research question? This is useful when phrasing questions for individual studies too.

PICO is a useful mnemonic for building research questions in clinical science:

  • Patient group
  • Intervention
  • Comparison/Control group
  • Outcome measures

How does this look in practice?

What is the effect of [intervention] on [outcome measure] in [patient group] (compared to [control group])?
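In other words, the question is a fill-in-the-blanks schema. A toy helper function (hypothetical, not any standard tool) makes that explicit:

```python
# The PICO template as a fill-in-the-blanks schema (illustrative helper only).
def pico_question(intervention, outcome, population, comparison=None):
    q = f"What is the effect of [{intervention}] on [{outcome}] in [{population}]"
    if comparison:
        q += f" compared to [{comparison}]"
    return q + "?"

print(pico_question("iterated learning",
                    "morphosyntactic structure in an artificial language",
                    "experimental participants"))
```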

How can we make this more applicable for language evolution?

I guess we can change the mnemonic:

  • Population (either whole language populations in large-scale studies, small sample populations in the real world or under a certain condition in a laboratory experiment, or a population of computational or mathematical agents or some other population proxy)
  • Intervention
  • Comparison/Control group
  • Outcome measures

Here are some examples of what this might look like using language evolution research:

What is the effect of [L2 speakers] on [morphological complexity] in [large language populations] compared to [small language populations]?

What is the effect of [speed of cultural evolution] on [the Baldwin effect] in [a population of Bayesian agents]?

What is the effect of [iterated learning] on [the morphosyntactic structure in an artificial language] in [experimental participants]?

What is the effect of [communication] on the [distribution of vowels] in [a population of computational agents]?

All of the above are good research questions for individual studies, but I’m not sure it would be possible to do a review on any of them, simply because there are not enough studies, and even when studies have investigated the same intervention and outcome measure, they haven’t used the same type of population.

In clinical research the same studies are done again and again, with the same disease, intervention and population. This makes sense, as one study does not necessarily create enough evidence to risk people’s lives on the results. We don’t have this problem in language evolution (thank god); however, I feel we may suffer from a lack of replication of studies. There has been quite a lot of movement recently (see here) to make replication of psychological experiments encouraged, worthwhile and publishable. It is also relatively easy to replicate computational modelling work, but the tendency is to change the parameters or interventions to generate new (and therefore publishable) findings. And real-world data is a problem because we end up analysing the same database of languages over and over again. However, I suppose controlling for things like linguistic family, and therefore treating each language family as its own study, is, in a way, a sort of meta-analysis of natural replications.

I’m not sure there’s an immediate solution to the problems I’ve identified above, and I’m certainly not the first person to point them out, but thinking carefully about your research question before starting to conduct a review is very useful and excellent practice, and you should remember that when doing a systematic review, the narrower your research question, the easier, more thorough and complete your review will be.