The computational envelope of language – Once more into the breach

Time to saddle up and once more ride my current hobby horse, or at least one of them. In this case: the idea that natural language is the simplest aspect of human activity that is fundamentally and irreducibly computational in nature.

Let’s back into it.

* * * * *

Is arithmetic calculation computational in kind?

Well yes, of course. If anything is computation, that sure is.

Well then, in my current view, arithmetic calculation is language from which meaning has been completely removed, squeezed out as it were, leaving us with syntax, morphology, and so forth.

Elaborate.

First, let’s remind ourselves that arithmetic calculation, as performed by writing symbols on some surface, is a very specialized form of language. Sure, we think of it as something different from language…

All those years of drill and practice in primary school?

Yes. We have it drilled into our heads that arithmetic is one thing, over here, while language is something different, over there. But it’s obvious, isn’t it, that arithmetic is built from language?

OK, I’ll accept that.

So, arithmetic calculation has two kinds of symbols, numerals and operators. Both are finite in number. Numerals can be concatenated into strings of any length and in any order and combination.

OK. In the standard Arabic notation there are ten numerals, zero (0) through nine (9).

That’s correct.

And we’ve got five operators, +, -, * [times], ÷, and =. And, come to think of it, we should probably have left and right parentheses as well.

OK. What’s the relationship between these two kinds of symbols?

Hmmm… The operators allow us to specify various relationships between strings of numerals.

Starting with, yes, starting with a basic set of equivalences of the form NumStr Op NumStr = NumStr, where Op is one of +, -, *, and ÷ and NumStr is a string of numerals – in these primitive equivalences, a string of just one or two. [1]

Thus giving us those tables we memorized in grade school. Right!

What do you mean by semantics being removed?

Well, what are the potentially meaning-bearing elements in this collection?

That would be the numerals, no?

Yes. What do they mean?

Why, they don’t mean anything…

Well… But they aren’t completely empty, are they?

No.

Elaborate. What’s not empty about, say, 5?

5 could designate…

By “designate” you mean “mean”?

Yes. 5 could designate any collection with five members. 5 apples, 5 oranges, 5 mountains, 5 stars… [2]

What about an apple, an orange, a mountain, a star, and a dragon?

Yes, as long as there’s five of them.

Ah, I see. The numerals, or strings of numerals, are connected to the world through the operation of counting. When we use them to count, they, in effect, become numbers. But, yes, that’s a very general kind of relationship. Not much semantics or meaning there.

Right. And that’s what I mean by empty of semantics. All we’ve got left is syntax, more or less.

Sounds a bit like Searle in his Chinese Room.

Yes, it does, doesn’t it?

The idea is that the mental machinery we use to do arithmetic calculation, that’s natural computation, computation performed by a brain, from which semantics has been removed. That machinery is there in ordinary language, or even extraordinary language. Language couldn’t function without it. That’s where language gets its combinatorial facility.

And THAT sounds like Chomsky, no?

Yes.
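Stepping out of the dialogue for a moment: the claim that calculation is symbol manipulation emptied of meaning is easy to make concrete. Here is a minimal sketch (in Python; all names are my own) in which multi-digit addition is carried out purely by lookup in the memorized table of primitive equivalences plus concatenation – at no point does the procedure consult any numeric "meaning":

```python
# Multi-digit addition as pure symbol manipulation. The only "knowledge"
# is a finite table of primitive equivalences (NumStr Op NumStr = NumStr) –
# the grade-school addition facts. The procedure never treats the symbols
# as quantities; it only looks things up and concatenates.

# The memorized table: ('7', '5') -> '12', and so on. Built by brute force
# here, but in the dialogue's terms it is simply a memorized list.
ADD_FACTS = {(a, b): str(int(a) + int(b))
             for a in "0123456789" for b in "0123456789"}

def add(x: str, y: str) -> str:
    """Add two numeral strings by table lookup and carrying, right to left."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    carry, digits = "0", []
    for a, b in zip(reversed(x), reversed(y)):
        s = ADD_FACTS[(a, b)]              # primitive equivalence
        t = ADD_FACTS[(s[-1], carry)]      # fold in the carry digit
        digits.append(t[-1])
        # a two-symbol result from either lookup means "carry the 1"
        carry = "1" if (len(s) == 2 or len(t) == 2) else "0"
    if carry == "1":
        digits.append("1")
    return "".join(reversed(digits)).lstrip("0") or "0"

print(add("345", "678"))  # 1023
```

The `int` inside `ADD_FACTS` merely spares us typing out a hundred memorized facts; `add` itself is exactly the kind of semantics-free machinery the dialogue describes.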

* * * * *

And so it goes, on and on.

When the intellectual history of the second half of the twentieth century gets written, the discovery of the irreducibly computational nature of natural language will surely be listed as one of the highlights. Just who will get the honor, that’s not clear, though Chomsky is an obvious candidate. He certainly played a major role. But he didn’t figure out how an actual physical system could do it (the question was of little or no interest to him), and surely that’s part of the problem. If so, however, then we still haven’t gotten it figured out, have we?

* * * * *

[1] Isn’t that a bit sophisticated for the Glaucon figure in this dialog? Yes, but this is a 21st century Glaucon. He’s got a few tricks up his sleeve.

[2] Sounds a bit like the Frege/Russell set theory definition of number: a natural number n is the collection of all sets with n elements.

Color term salience and cultural evolution

The most salient colors (black, white, and perhaps red) are named in all languages; the least salient of the set are named in fewer languages. Salience correlates with earliness of introduction.

David G. Hays, Enid Margolis, Raoul Naroll, Dale Revere Perkins, Color Term Salience. American Anthropologist, 74:1107-1121, 1972. DOI: 10.1525/aa.1972.74.5.02a00050

Abstract: Eleven focal colors are named by basic color terms in many languages. The most salient colors (black, white, and perhaps red) are named in all languages; the least salient of the set are named in fewer languages. Salience correlates with earliness of introduction, as measured by a scale of social evolution; with brevity of expression, as measured by phonemic length of basic color terms; with frequency of use, as measured by frequency of basic color terms in literary languages; and with frequency of mention in ethnographic literature. None of these correlations are established in the pioneer study of Berlin and Kay (1969), a study whose defects are well exposed by Durbin (1972) and Wescott (1970). The first two were documented respectively in Naroll (1970) and Durbin (1972); the last two are documented here. These four correlations independently support the Berlin-Kay color salience theory. They furnish a sound basis for further research on color term salience in particular and indeed on salience phenomena in general. We speculate that salience may be an important general principle of cultural evolution.

Consider this finding: “Salience correlates with earliness of introduction, as measured by a scale of social evolution”. What that means is that less complex societies (as measured by one of the standard indexes, Marsh’s social complexity scale) have fewer basic color terms than more complex ones. Why?

EvoLang proceedings are now online

This year, the proceedings of the Evolution of Language conference will appear online. The first group of papers is already up:

Browse the EvoLang Electronic Proceedings

The move to self-publishing is a bit of an experiment, but hopefully it’ll mean that the papers are more accessible to a wider audience.  To aid this, the papers are published under Creative Commons licenses.  Some papers also include supplementary materials.

The full list of papers will be updated as revisions come in, but here are some interesting papers available so far:

Continue reading “EvoLang proceedings are now online”

Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means this “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution, the prokaryotes that merged had evolved independently of and prior to the merger. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture, as he’d run out of time. But I’m not holding my breath until he coughs up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like that such thoughts are acceptable? Continue reading “Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains”

Underwood and Sellers 2015: Beyond narrative we have simulation

It is one thing to use computers to crunch data. It’s something else to use computers to simulate a phenomenon. Simulation is common in many disciplines, including physics, sociology, biology, engineering, and computer graphics (CGI special effects generally involve simulation of the underlying physical phenomena). Could we simulate large-scale literary processes?

In principle, of course. Why not? In practice, not yet. To be sure, I’ve seen the possibility mentioned here and there, and I’ve seen an example or two. But it’s not something many are thinking about, much less doing.

Nonetheless, as I was thinking about How Quickly Do Literary Standards Change? (Underwood and Sellers 2015) I found myself thinking about simulation. The object of such a simulation would be to demonstrate the principal result of that work, as illustrated in this figure:

[Figure: 19C Direction]

Each dot, regardless of color or shape, represents the position of a volume of poetry in a one-dimensional abstraction over a 3200-dimensional space – though that’s not how Underwood and Sellers explain it (for further remarks see “Drifting in Space” in my post, Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction). The trend line indicates that poetry is shifting in that space along a uniform direction over the course of the 19th century. Thus there seems to be a large-scale direction to that literary system. Could we create a simulation that achieves that result through ‘local’ means, without building a telos into the system?

The only way to find out would be to construct such a system. I’m not in a position to do that, but I can offer some remarks about how we might go about doing it.

* * * * *

I note that this post began as something I figured I could knock out in two or three afternoons. We’ve got a bunch of texts, a bunch of people, and the people choose to read texts, cycle after cycle after cycle. How complicated could it be to make a sketch of that? Pretty complicated.

What follows is no more than a sketch. There’s a bunch of places where I could say more and more places where things need to be said, but I don’t know how to say them. Still, if I can get this far in the course of a week or so, others can certainly take it further. It’s by no means a proof of concept, but it’s enough to convince me that at some time in the future we will be running simulations of large scale literary processes.

I don’t know whether or not I would create such a simulation given a budget and appropriate collaborators. But I’m inclined to think that, if not now, then within the next ten years we’re going to have to attempt something like this, if for no other reason than to see whether or not it can tell us anything at all. The fact is, at some point, simulation is the only way we’re going to get a feel for the dynamics of literary process.
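To give the flavor of what such a sketch might look like, here is a deliberately crude toy, with every assumption invented for the occasion: texts and readers are points on a one-dimensional "diction" axis (a toy stand-in for Underwood and Sellers' high-dimensional space), readers read the nearest text, and authors write new texts near whatever was most read. Whether a system like this drifts in a sustained direction is precisely the kind of question a real simulation would have to answer:

```python
import random

# A toy literary system. Texts and readers are points on a 1-D "diction"
# axis; every value and parameter here is invented for illustration.
random.seed(42)

def step(texts, readers):
    """One reading cycle: readers pick texts, authors and tastes respond."""
    def nearest(r):
        return min(range(len(texts)), key=lambda i: abs(texts[i] - r))

    reads = [0] * len(texts)
    for r in readers:
        reads[nearest(r)] += 1

    # authors imitate the most-read text, with small random variation
    popular = texts[reads.index(max(reads))]
    new_texts = [popular + random.gauss(0, 0.1) for _ in texts]

    # each reader's taste drifts a little toward what they actually read
    new_readers = [0.9 * r + 0.1 * texts[nearest(r)] for r in readers]
    return new_texts, new_readers

texts = [random.gauss(0, 1) for _ in range(20)]
readers = [random.gauss(0, 1) for _ in range(50)]
for _ in range(100):
    texts, readers = step(texts, readers)
```

Everything here is local – no global direction is built in – so if repeated runs were to show sustained movement along the axis, that movement would be an emergent property of the coupled dynamics, which is just the sort of result the trend line in the figure invites us to look for.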

* * * * *

It’s a long way through this post, almost 5000 words. I begin with a quick look at an overall approach to simulating a literary system. Then I add some details, starting with stand-ins (simulations of) texts and people. Next we have processes involving those objects. That’s the basic simulation, but it’s not the end of my post. I have some discussion of things we might do with this system followed with suggestions about extending it. I conclude with a short discussion of the E-word. Continue reading “Underwood and Sellers 2015: Beyond narrative we have simulation”

How spurious correlations arise from inheritance and borrowing (with pictures)

James and I have written about Galton’s problem in large datasets.  Because two modern languages can have a common ancestor, the traits that they exhibit aren’t independent observations.  This can lead to spurious correlations: patterns in the data that are statistical artefacts rather than indications of causal links between traits.

However, I’ve often felt like we haven’t articulated the general concept very well.  For an upcoming paper, we created some diagrams that try to present the problem in its simplest form.

Spurious correlations can be caused by cultural inheritance 

Gproblem2

Above is an illustration of how cultural inheritance can lead to spurious correlations.  At the top are three independent historical cultures, each of which has a bundle of various traits which are represented as coloured shapes.  Each trait is causally independent of the others.  On the right is a contingency table for the colours of triangles and squares.  There is no particular relationship between the colour of triangles and the colour of squares.  However, over time these cultures split into new cultures.  Along the bottom of the graph are the currently observable cultures.  We now see a pattern has emerged in the raw numbers (pink triangles occur with orange squares, and blue triangles occur with red squares).  The mechanism that brought about this pattern is simply that the traits are inherited together, with some combinations replicating more often than others: there is no causal mechanism whereby pink triangles are more likely to cause orange squares.
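The mechanism is simple enough to simulate in a few lines. In this sketch (all counts invented), two binary traits are assigned independently in three ancestral cultures and then inherited together as each culture leaves a different number of descendants; counting the descendants as independent observations produces a lopsided contingency table even though the traits never causally interact:

```python
import random
from collections import Counter

# Galton's problem in miniature. Two binary traits (triangle colour,
# square colour) are assigned independently in three ancestral cultures,
# then inherited together as each culture splits into descendants.
random.seed(1)

ancestors = [{"triangle": random.choice(["pink", "blue"]),
              "square":   random.choice(["orange", "red"])}
             for _ in range(3)]

descendants = []
for culture in ancestors:
    # each ancestor leaves a different number of descendant cultures,
    # and the trait bundle is copied wholesale into each descendant
    for _ in range(random.randint(2, 10)):
        descendants.append(dict(culture))

# contingency table over the "observable" modern cultures
table = Counter((c["triangle"], c["square"]) for c in descendants)
print(table)  # lopsided counts despite causally independent traits
```

A statistical test run naively on `table` would treat each descendant as an independent data point, which is exactly the error the diagrams illustrate.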

Spurious correlations can be caused by borrowing

Gproblem_HorizontalB

Above is an illustration of how borrowing (or areal effects or horizontal cultural inheritance) can lead to spurious correlations.  Three cultures (left to right) evolve over time (top to bottom).  Each culture has a bundle of various traits which are represented as coloured shapes.  Each trait is causally independent of the others.  On the right is a count of the number of cultures with both blue triangles and red squares.  In the top generation, only one out of three cultures have both.  Over some period of time, the blue triangle is borrowed from the culture on the left to the culture in the middle, and then from the culture in the middle to the culture on the right.  By the end, all languages have blue triangles and red squares.  The mechanism that brought about this pattern is simply that one trait spread through the population: there is no causal mechanism whereby blue triangles are more likely to cause red squares.

A similar effect would be caused by a bundle of causally unrelated features being borrowed, as shown below.

Gproblem_Horizontal

Empty Constructions and the Meaning of “Meaning”

Textbooks are boring. In most cases, they consist of a rather tiring collection of more or less undisputed facts, and they omit the really interesting stuff such as controversial discussions or problematic cases that pose a serious challenge to a specific scientific theory. However, Martin Hilpert’s “Construction Grammar and its Application to English” is an admirable exception since it discusses various potential problems for Construction Grammar at length. What I found particularly interesting was the problem of “meaningless constructions”. In what follows, I will present some examples for such constructions and discuss what they might tell us about the nature of linguistic constructions. First, however, I will outline some basic assumptions of Construction Grammar. Continue reading “Empty Constructions and the Meaning of “Meaning””

John Lawler on Generative Grammar

From a Facebook conversation with Dan Everett (about slide rules, aka slipsticks, no less) and others:

The constant revision and consequent redefining and renaming of concepts – some imaginary and some very obvious – has led to a multi-dimensional spectrum of heresy in generative grammar, so complex that one practically needs chromatography to distinguish variants. Babel comes to mind, and also Windows™ versions. Most of the literature is incomprehensible in consequence – or simply repetitive, except it’s too much trouble to tell which.

–John Lawler

Systematic reviews 101: Internal and External Validity

Who remembers last summer when I started writing a series of posts on systematic literature reviews?

I apologise for neglecting it for so long, but here is a quick write up on assessing the studies you are including in your review for internal and external validity, with special reference to experiments in artificial language learning and evolutionary linguistics (though this is relevant to any field which aspires to adopt scientific method).

In the first post in the series, I outlined the differences between narrative and systematic reviews. One of the defining features of a systematic review is that it is not written with a specific hypothesis in mind. The literature search (which my next post will be about) is conducted with predefined inclusion criteria and, as a result, you will end up with a pile of studies to review regardless of their conclusions, or indeed regardless of their quality. Because there is no filter to catch bad science, we need methods for assessing the quality of a study or experiment, which is what this post is about.

(This will also help with DESIGNING a valid experiment, as well as assessing the validity of other people’s.)

What is validity?

Validity is the extent to which a conclusion is a well-founded one given the design and analysis of an experiment. It comes in two different flavours: external validity and internal validity.

External Validity

External validity is the extent to which the results of an experiment or study can be extrapolated to different situations. This is EXTREMELY important in the case of experiments in evolutionary linguistics because the whole point of experiments in evolutionary linguistics is to extrapolate your results to different situations (i.e. the emergence of linguistic structure in our ancestors), and we don’t have access to our ancestors to experiment on.

Here are some of the things that affect an experiment’s external validity (in linguistics/psychology):

  • Participant characteristics (age (especially important in language learning experiments), gender, etc.)
  • Sample size
  • Type of learning/training (important in artificial language learning experiments)
  • Characteristics of the input (e.g. the nature of the structure in an input language)
  • Modality of the artificial language (how similar to actual linguistic modalities?)
  • Modality of output measures (how the outcome was measured and analysed)
  • The task from which the output was produced (straightforward imitation or communication or some other task)

Internal Validity

Internal validity is the extent to which an experiment reduces its own systematic error within the circumstances in which it is performed.

Here are some of the things that affect an experiment’s internal validity:

  • Selection bias (who’s doing the experiment and who gets put in which condition)
  • Performance bias (differences between conditions other than the ones of interest, e.g. running people in condition one in the morning and condition two in the afternoon)
  • Detection bias (how the outcome measures are coded and interpreted; blinding coders to which condition a participant was in is paramount for reducing the researcher’s bias toward finding a difference between conditions. A lot of recent retractions have come down to failures to guard against detection bias.)
  • Attrition bias (Ignoring drop-outs, especially if one condition is especially stressful, causing high drop-out rates and therefore bias in the participants who completed it. This probably isn’t a big problem in most evolutionary linguistics research, but may be in other psychological stuff.)

Different types of bias will be relevant to different fields of research and different research questions, so it may be a good idea to come up with your own scoring method for validity and apply it to the studies within your review. But remember to be explicit about what your scoring methods are, and about the pros and cons of the studies you are writing about.
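As a purely hypothetical example of such a scoring method – the criteria and weights below are invented and would need to be justified for your own field – a checklist can be made explicit and mechanical:

```python
# A hypothetical validity checklist for an artificial language learning
# study. The criteria and weights are the reviewer's own choices and
# should be reported alongside the scores in the review.
CRITERIA = {
    "random_allocation": 2,   # guards against selection bias
    "conditions_matched": 2,  # guards against performance bias
    "blind_coding": 2,        # guards against detection bias
    "dropouts_reported": 1,   # guards against attrition bias
    "adequate_sample": 1,     # bears on external validity
}

def validity_score(study: dict) -> float:
    """Weighted proportion of criteria a study satisfies (0 to 1)."""
    total = sum(CRITERIA.values())
    met = sum(w for c, w in CRITERIA.items() if study.get(c, False))
    return met / total

example = {"random_allocation": True, "blind_coding": True,
           "conditions_matched": False, "dropouts_reported": True}
print(validity_score(example))  # 5/8 = 0.625
```

The number itself matters less than the discipline: making the checklist explicit forces you to state, before scoring, what counts as a threat to validity in your field.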

Hopefully this introduction will have helped you think about validity within experiments in what you’re interested in, and helped you take an objective view on assessing the quality of studies you are reviewing, or indeed conducting.

 

Toward a Computational Historicism. Part 1: Discourse and Conceptual Topology

Poets are the unacknowledged legislators of the world.
– Percy Bysshe Shelley

… it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.
– Sydney Lamb

Worlds within worlds – that’s how Tim Perper, my friend and colleague, described biology. At the smallest scale we have individual molecules, with DNA being of prime importance. At the largest scale we have the earth as a whole, with all living beings interacting in a single ecosystem over billions of years. In between we have cells, tissues, and organs of various sizes, autonomous organisms, populations of organisms on various scales from the invisible to continent-spanning, and interactions among populations of organisms on various scales.

Literature too is like that, from single figures and tropes, even single words (think of Joyce’s portmanteaus) through complete works of various sizes, from haiku to oral epics, from short stories through multi-volume novels, onto whole bodies of literature circulating locally, regionally, across continents and between them, from weeks and years to centuries and millennia. Somehow we as humanists and literary critics must comprehend it all. Breathtaking, no?

In this essay I sketch a potential computational historicism operating at multiple scales, both in time and in textual extent. In the first part I consider network models at three scales: 1) topic models at the macroscale, 2) Moretti’s plot networks at the mesoscale, and 3) cognitive networks, taken from computational linguistics, at the microscale. I give examples of each and conclude by sketching relationships among them. I open the second part by presenting an account of abstraction given by David Hays in the early 1970s, in which abstract concepts are defined over stories. I then move on to Heuser and Le-Khac on 19th century novels, Stephen Greenblatt on self and person, and several texts: Amleth, Hamlet, The Winter’s Tale, Wuthering Heights, and Heart of Darkness.

Graphs and Networks

To the mathematician the image below depicts a topological object called a graph. Civilians tend to call such objects networks. The nodes or vertices, as they are called, are connected by arcs or edges.
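The node-and-edge idea is simple enough to write down directly. In this sketch (labels arbitrary), one and the same adjacency structure can be read as a road map, a kinship tree, or a sentence structure; the notation is identical, only the interpretation changes:

```python
# One graph representation (an adjacency list), three readings of it.
# The structure is the same; only the interpretation of the labels changes.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

# Read as a road map: towns A..D with one-way roads between them.
# Read as a kinship tree: A's children are B and C, and so on.
# Read as sentence structure: a clause node dominating its constituents.

def edges(g):
    """Enumerate the arcs/edges of the graph as (source, target) pairs."""
    return [(src, dst) for src, nbrs in g.items() for dst in nbrs]

print(edges(graph))  # [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]
```

That one structure bears three readings is the point of the paragraph below: the graph is a sign of the phenomenon, not the phenomenon itself.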

[Figure: net]

Such graphs can be used to represent many different kinds of phenomena: a road map is an obvious example, a kinship tree is another, sentence structure a third. The point is that such graphs are signs of phenomena, notations. They are not the phenomena themselves. Continue reading “Toward a Computational Historicism. Part 1: Discourse and Conceptual Topology”