Polythetic Entitation & Cultural Coordinators

Timothy Taylor has an interesting entry in this year’s “Edge Question” idea-fest. It has the ungainly title, Polythetic Entitation. He attributes the idea to the late David Clarke:

Clarke argued that the world of wine glasses was different to the world of biology, where a simple binary key could lead to the identification of a living creature (Does it have a backbone? If so, it is a vertebrate. Is it warm blooded? If so, it is a mammal or bird. Does it produce milk? … and so on). A wine glass is a polythetic entity, which means that none of its attributes, without exception, is simultaneously sufficient and necessary for group membership. Most wine glasses are made of clear glass, with a stem and no handle, but there are flower vases with all these, so they are not definitionally-sufficient attributes; and a wine glass may have none of these attributes—they are not absolutely necessary. It is necessary that the wine glass be able to hold liquid and be of a shape and size suitable for drinking from, but this is also true of a teacup. If someone offered me a glass of wine, and then filled me a fine ceramic goblet, I would not complain.

Taylor is an archaeologist, as was Clarke. They face the problem of how to identify cultural objects without knowing how they are used. An object’s physical characteristics generally do not speak unequivocally, hence the term polythetic (vs. monothetic). Thus:

Asking at the outset whether an object is made of glass takes us down a different avenue from first asking if it has a stem, or if it is designed to hold liquid. The first lumps the majority of wine glasses with window panes; the second groups most of them with vases and table lamps; and the third puts them all into a super-category that includes breast implants and Lake Mead, the Hoover dam reservoir. None of the distinctions provides a useful classificatory starting point. So grouping artefacts according to a kind of biological taxonomy will not do.

As a prehistoric archaeologist, David Clarke knew this, and he also knew that he was continually bundling classes of artefacts into groups and sub-groups without knowing whether his classification would have been recognized emically, that is, in terms understandable to the people who created and used the artefacts. Although the answer is that probably they did have different functions, how might one work back from the purely formal, etic, variance—the measurable features or attributes of an artefact—to securely assign it to its proper category?

What matters for proper classification are the attributes with “cultural salience” (Taylor’s term).
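To make the contrast concrete, here’s a minimal sketch in code. The attribute lists and the three-of-four threshold are my own illustrative assumptions, not Clarke’s or Taylor’s; the point is simply that monothetic membership turns on a single decisive attribute, while polythetic membership turns on having enough of the typical ones.

```python
# A minimal sketch contrasting monothetic with polythetic classification.
# Attributes and the threshold are illustrative assumptions, not a real taxonomy.

def is_vertebrate(animal):
    # Monothetic: one attribute is both necessary and sufficient.
    return animal.get("has_backbone", False)

def is_wine_glass(obj, threshold=3):
    # Polythetic: no single attribute is necessary and sufficient;
    # membership turns on having "enough" of the typical attributes.
    typical = ("clear_glass", "has_stem", "no_handle", "holds_liquid")
    return sum(obj.get(attr, False) for attr in typical) >= threshold

ceramic_goblet = {"clear_glass": False, "has_stem": True,
                  "no_handle": True, "holds_liquid": True}
glass_vase = {"clear_glass": True, "has_stem": True,
              "no_handle": True, "holds_liquid": True}

print(is_wine_glass(ceramic_goblet))  # True, though not made of glass
print(is_wine_glass(glass_vase))      # also True: formal attributes overgenerate
```

Notice that the vase qualifies too, which is precisely the archaeologist’s problem: formal attributes alone overgenerate, and something more is needed to sort things out.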

Now, cultural salience is how I define the genetic elements of culture, which I have taken to calling coordinators. Coordinators are the culturally salient properties of objects or processes. In a terminology originally promulgated by Kenneth Pike, they are emics (notice that Taylor uses this terminology as well).

One thing that became clear to me in Dan Everett’s Dark Matter of the Mind (see my review in 3 Quarks Daily) is that a culture covers or paints (other terms of art I am considering) its natural environment with coordinators. Thus Everett talks about how, even after he’d been among the Pirahã for a couple of years, he simply could not see the jungle as well as they did. They were born and raised in it; he was not. Features of the jungle – creatures and events – that were obvious to the Pirahã because they had learned to identify them, that were culturally salient to the Pirahã, were invisible to Everett. They may have been right in front of his (lying) eyes, but he couldn’t discern them. They were not culturally salient to him, for his mind/brain had developed in a very different physical environment.

The polythetic nature of cultural artifacts is closely related to what I have called abundance elsewhere. The phenomena of the world have many properties; they are abundant. Only some of those properties will even be perceptually available; after all, our ears cannot hear all sounds, our eyes cannot see all electromagnetic radiation, etc. Of the perceptually available properties, only some will be culturally salient. This is as true for natural objects as for cultural artifacts and activities.

Dan Everett’s Dark Matter @ 3QD

Consider these three words: gavagai, gabagaí, gabagool. If you’ve been binge-watching episodes in the Star Trek franchise you might suspect them to be the equivalent of veni, vidi, vici, in the language of a space-faring race from the Gamma Quadrant. The truth, however, is even stranger.

The first is a made-up word that is well-known in certain philosophical circles. The second is not quite a word, but is from Pirahã, the Amazonian language brought to our attention by the ex-missionary turned linguist Daniel Everett, and can be translated as “frustrated initiation,” which is how Everett characterized his first field trip among the Pirahã. The third names an Italian cold cut that is likely spelled “capicola” or “capocollo” when written out and has various pronunciations depending on the local language. In New York and New Jersey, Tony Soprano country, it’s “gabagool”.

Everett discusses the first two in his wide-ranging new book, Dark Matter of the Mind: The Culturally Articulated Unconscious (2016), which I review at 3 Quarks Daily. As for gabagool, good things come in threes, no?

Why gavagai? Willard van Orman Quine coined the word for a thought experiment that points up the problem of word meaning. He broaches the issue by considering the problem of radical translation, “translation of the language of a hitherto untouched people” (Word and Object 1960, 28). He asks us to consider a “linguist who, unaided by an interpreter, is out to penetrate and translate a language hitherto unknown. All the objective data he has to go on are the forces that he sees impinging on the native’s surfaces and the observable behavior, focal and otherwise, of the native.” That is to say, he has no direct access to what is going on inside the native’s head, but utterances are available to him. Quine then asks us to imagine that “a rabbit scurries by, the native says ‘Gavagai’, and the linguist notes down the sentence ‘Rabbit’ (or ‘Lo, a rabbit’) as tentative translation, subject to testing in further cases” (p. 29).

Quine goes on to argue that, in thus proposing that initial translation, the linguist is making illegitimate assumptions. He begins his argument by noting that the native might, in fact, mean “white” or “animal” and later on offers more exotic possibilities, the sort of things only a philosopher would think of. Quine also notes that whatever gestures and utterances the native offers as the linguist attempts to clarify and verify will be subject to the same problem.

As Everett notes, however, in his chapter on translation (266):

On the side of mistakes never made, however, Quine’s gavagai problem is one. In my field research on more than twenty languages—many of which involved monolingual situations …, whenever I pointed at an object or asked “What’s that?” I always got an answer for an entire object. Seeing me point at a bird, no one ever responded “feathers.” When asked about a manatee, no one ever answered “manatee soul.” On inquiring about a child, I always got “child,” “boy,” or “girl,” never “short hair.”

Later:

I believe that the absence of these Quinean answers results from the fact that when one person points toward a thing, all people (that I have worked with, at least) assume that what is being asked is the name of the entire object. In fact, over the years, as I have conducted many “monolingual demonstrations,” I have never encountered the gavagai problem. Objects have a relative salience… This is perhaps the result of evolved perception.

Frankly, I forget how I reacted to Quine’s thought experiment when I first read it as an undergraduate back in the 1960s. I probably found it a bit puzzling, and perhaps I even half-believed it. But that was a long time ago. When I read Everett’s comments on it I was not surprised to learn that the gavagai problem doesn’t arise in the real world, and I find his suggested explanation, evolved perception, convincing.

As one might expect, Everett devotes quite a bit of attention to recursion, with fascinating examples from Pirahã concerning evidentials, but I deliberately did not bring that up in my review. Why, given that everyone and their Aunt Sally seem to be all a-twitter about the issue, didn’t I discuss it? Here’s why: I’m tired of it and think that, at this point, it’s a case of the tail wagging the dog. I understand well enough why it’s an important issue, but it’s time to move on.

The important issue is to shift the focus of linguistic theory away from disembodied and decontextualized sentences and toward conversational interaction. That’s been going on for some time now, and Everett has played a role in that shift. While the generative grammarians use merge as a term for syntactic recursion, it could just as well be used to characterize how conversational partners assimilate what they’re hearing with what they’re thinking. Perhaps that’s what syntax is for and why it arose, to make conversation more efficient – and I seem to recall that Everett makes a suggestion to that effect in his discussion of the role of gestures in linguistic interaction.

Anyhow, if these and related matters interest you, read my review and read Everett’s book.

Mutable stability in the transmission of medieval texts

I’ve just checked in at Academia.edu and was alerted to this article:

Stephen G. Nichols, Mutable Stability, a Medieval Paradox: The Case of Le Roman de la Rose, Queste 23 (2016) 2, pp. 71-103.

I’ve not yet read it, but a quick skim makes it clear that it speaks to a current debate in cultural evolution concerning the high-fidelity transmission of “memes” (Dan Dennett) vs. the variable transmission of objects as guided by “factors of attraction” (Dan Sperber). Here are some tell-tale passages. This is from the beginning (p. 71):

Yet even those who argue, to the contrary, that ‘transmission errors’ often represent creative ‘participation’ by a talented scribe, must recognize the attraction of a stable work. After all, despite an extraordinary record of innovation, invention, and discovery, the Middle Ages are an era that resisted change in and for itself. And yet this same veneration of conservative values underlies a fascinating paradox of medieval culture: its delicate and seemingly contradictory balance between stability, on the one hand, and transformation, on the other. It may be that only an era that saw no contradiction in promulgating an omnipotent, unchanging divinity, which was at the same time a dynamic principle of construction and transformation, could have managed the paradox of what I want to call ‘mutable stability’.

Here’s Dawkins in the 2nd chapter of The Selfish Gene:

Darwin’s ‘survival of the fittest’ is really a special case of a more general law of survival of the stable. The universe is populated by stable things. A stable thing is a collection of atoms that is permanent enough or common enough to deserve a name. It may be a unique collection of atoms, such as the Matterhorn, that lasts long enough to be worth naming. Or it may be a class of entities, such as rain drops, that come into existence at a sufficiently high rate to deserve a collective name, even if any one of them is short-lived. The things that we see around us, and which we think of as needing explanation–rocks, galaxies, ocean waves–are all, to a greater or lesser extent, stable patterns of atoms.

Etc.

Back to Nichols, a bit later in the article (p. 77):

In this case, however, it’s one that allows us to understand the paradox of medieval narrative forms whose ‘stability’ over time – in some cases over several centuries – depends on what I call the generative – or regenerative – force of transmission. Why ‘regenerative’ if transmission involves reproducing the ‘same’ work from one representation to another? The answer to that question involves recognizing the complex forces at play in the transmission of medieval texts, beginning with concepts like ‘the same’ and ‘seeing’ or ‘perspective’. After all, in a culture where the technology of transmission depends on copying each text by hand, what the scribe sees, or thinks she or he sees, must be factored into our definition of ‘sameness’ when comparing original and copy.

In the event, ‘sameness’, for the medieval mind had a very different connotation from our modern senses of the term. Indeed, it even involves a different process of perception and imagination. Whereas in our age of mechanical and digital reproduction, we are used to standards of ‘exactness’ for things we recognize as identical, medieval people had neither the means nor the expectation to make ‘same’ and ‘exact imitation’ synonymous. Indeed, one may even question the existence at that time of such a concept as ‘exact imitation’, at least as we understand it.
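It strikes me that Sperber’s factors of attraction give us a way to model Nichols’s paradox. Here’s a toy transmission chain, with numbers entirely of my own invention: each scribe copies the previous version with error, but the copying is biased toward a culturally salient attractor.

```python
import random

# Toy transmission chain: each scribe copies the previous version with error,
# but copying is biased toward a culturally salient "attractor". All numbers
# here are my own illustrative assumptions, not Nichols's or Sperber's.

ATTRACTOR = 0.0   # the culturally preferred form, as a point on one dimension
NOISE = 0.4       # copying error each generation
PULL = 0.5        # strength of the factor of attraction (0 = pure mutation)

def copy(version):
    noisy = version + random.gauss(0.0, NOISE)   # the scribe's imperfect eye
    return noisy + PULL * (ATTRACTOR - noisy)    # drawn back toward the attractor

random.seed(1)
version = 3.0  # the chain starts far from the attractor
for generation in range(20):
    version = copy(version)
print(round(version, 2))  # ends up hovering near 0: no copy is exact,
                          # yet the tradition as a whole stays put
```

That, crudely, is mutable stability: variation in every copy, stability across the population of copies.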

Ontology and Cultural Evolution: “Spirit” or “Geist” and some of its measures

This post is about terminology, but also about things – in particular, an abstract thing – and measurements of those things. The things and measurements arise in the study of cultural evolution.

Let us start with a thing. What is this?

[Figure 9.3 from Matt Jockers’s Macroanalysis]

If you are a regular reader here at New Savanna you might reply: Oh, that’s the whatchamacallit from Jockers’s Macroanalysis. Well, yes, it’s an illustration from Macroanalysis, but that’s not quite the answer I was looking for. Let’s call that answer a citation and set it aside.

Let’s ask the same question, but of a different object: What’s this?

[Photograph of the moon]

I can imagine two answers, both correct, each in its own way:

1. It’s a photo of the moon.

2. The moon.

Strictly speaking, the first is correct and the second is not. It IS a photograph, not the moon itself. But the second answer is well within standard usage.

Notice that the photo does not depict the moon in full (whatever that might mean); no photograph could. That doesn’t change the fact that it is the moon that is depicted, not the sun, or Jupiter, or Alpha Centauri, or, for that matter, Mickey Mouse. We do not generally expect that representations of things should exhaust those things.

Now let us return to the first image and once again ask: What is this? I want two answers, one to correspond with each of our answers about the moon photo. I’m looking for something of the form:

1. A representation of X.

2. X.

Let us start with X. Jockers was analyzing a corpus of roughly 3,300 19th-century Anglophone novels. To do that he evaluated each of them on each of 600 features. Since those evaluations can be expressed numerically, Jockers was able to create a 600-dimensional space in which each text occupies a single point. He then joined all those points representing texts that are relatively close to one another. Those texts are highly similar with respect to the 600 features that define the space.

The result is a directed graph having 3,300 nodes in 600 dimensions. So, perhaps we can say that X is a corpus similarity graph. However, we cannot see in 600 dimensions, so there is no way we can directly examine that graph. It exists only as an abstract object in a computer. What we can do, and what Jockers did, is project a 600D object into two dimensions. That’s what we see in the image.
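If you want to see the shape of the thing, here’s a minimal sketch of the idea in Python. This is not Jockers’s actual pipeline, and the corpus is a small random stand-in rather than his 3,300 novels: points in a high-dimensional feature space, edges between nearest neighbors, and a projection down to two dimensions so we can look at it.

```python
import numpy as np
import networkx as nx
from sklearn.decomposition import PCA

# A minimal sketch of the idea, not Jockers's actual pipeline. The corpus is
# a random stand-in: 300 "texts", each scored on 600 features. Similar texts
# are linked, and the high-dimensional structure is projected down to 2D.

rng = np.random.default_rng(0)
features = rng.random((300, 600))        # stand-in for the corpus evaluations

graph = nx.Graph()
graph.add_nodes_from(range(len(features)))
for i in range(len(features)):
    dists = np.linalg.norm(features - features[i], axis=1)
    for j in np.argsort(dists)[1:6]:     # link each text to its 5 nearest neighbors
        graph.add_edge(i, int(j))

coords = PCA(n_components=2).fit_transform(features)  # the 2D shadow we can see
print(graph.number_of_edges(), coords.shape)
```

PCA here is just a stand-in for whatever projection one prefers; the point is that the visible image is a shadow of the high-dimensional object, which lives only in the computer.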


Culture shapes the evolution of cognition

A new paper by Bill Thompson, Simon Kirby and Kenny Smith has just appeared, contributing to everyone’s favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to make the claim that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.

Abstract:

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.
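To get a feel for the amplification claim, here’s a toy iterated-learning chain in the spirit of the paper’s models. This is my own minimal sketch with made-up parameters, not the authors’ code: learners carry a prior that barely favors language A, see a little noisy data from the previous generation, and adopt the maximum-a-posteriori language.

```python
import math
import random

# A toy iterated-learning chain in the spirit of the paper's models (my own
# sketch with made-up parameters, not the authors' code). Two candidate
# languages; the prior barely favors A; learners see a little noisy data and
# adopt the maximum-a-posteriori language.

PRIOR_A = 0.51   # weak inductive bias toward language A
N_DATA = 2       # each learner sees only two utterances
NOISE = 0.3      # chance an utterance comes out as the other form

def map_learn(data):
    count_a = data.count("A")
    count_b = len(data) - count_a
    log_odds = math.log(PRIOR_A / (1 - PRIOR_A)) \
             + (count_a - count_b) * math.log((1 - NOISE) / NOISE)
    return "A" if log_odds >= 0 else "B"

random.seed(0)
language, a_generations = "B", 0          # start with the disfavored language
for generation in range(10_000):
    data = [language if random.random() > NOISE
            else ("B" if language == "A" else "A") for _ in range(N_DATA)]
    language = map_learn(data)
    a_generations += language == "A"
print(a_generations / 10_000)  # ~0.85 in this toy: the 0.51 bias, amplified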

Paper: http://www.pnas.org/content/early/2016/03/30/1523631113.full

Two grants for PhD students in cultural evolution at Max Planck Institute (Jena)

The MPI for the Science of Human History is offering two grants for PhD students, starting 2016 (deadline for applications is March 21st, 2016).

The Minds and Traditions research group (“the Mint”), an Independent Max Planck Research Group at the Max Planck Institute for the Science of Human History in Jena (Germany), is offering two grants for two doctoral projects focusing on “cognitive science and cultural evolution of visual culture and graphic codes”.

Funding is available for four years (three years renewable twice for six months), starting in September 2016. The PhD students will be expected to take part in a research project devoted to the cognitive science and cultural evolution of graphic codes.

More details here.

Future tense and saving money: no correlation when controlling for cultural evolution

This week our paper on future tense and saving money is published (Roberts, Winters & Chen, 2015). In this paper we test a previous claim by Keith Chen about whether the language people speak influences their economic decisions (see Chen’s TED talk here or paper). We find that at least part of the previous study’s claims are not robust to controlling for historical relationships between cultures. We suggest that studies of large-scale cross-cultural patterns should always take cultural history into account.

Does language influence the way we think?

There is a longstanding debate about whether the constraints of the languages we speak influence the way we behave. In 2012, Keith Chen discovered a correlation between the way a language allows people to talk about future events and their economic decisions: speakers of languages which make an obligatory grammatical distinction between the present and the future are less likely to save money.
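For what it’s worth, the kind of control at issue can be sketched in a few lines. This is not the paper’s actual analysis, and the data frame below is filled with random placeholders rather than real observations; the point is only that related languages inherit grammar and savings behavior together, so language family enters as a grouping factor instead of every language counting as an independent data point.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# A sketch of the kind of control at issue, not the paper's actual analysis.
# The data are random placeholders, not real observations: the point is only
# that language family enters as a grouping factor, so that related languages
# are not counted as independent data points.

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "future_tense": rng.integers(0, 2, n),       # obligatory future marking?
    "family": rng.choice(list("ABCDEFGH"), n),   # stand-in language families
    "savings": rng.normal(size=n),               # placeholder outcome measure
})

# Random intercepts for language family absorb shared cultural history.
model = smf.mixedlm("savings ~ future_tense", df, groups=df["family"]).fit()
print(model.summary())
```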


Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means that “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution the prokaryotes that merged had evolved independently and prior to the merger. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture, as he’d run out of time. But I’m not holding my breath waiting for him to cough up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like that such thoughts are acceptable?

Underwood and Sellers 2015: Beyond narrative we have simulation

It is one thing to use computers to crunch data. It’s something else to use computers to simulate a phenomenon. Simulation is common in many disciplines, including physics, sociology, biology, engineering, and computer graphics (CGI special effects generally involve simulation of the underlying physical phenomena). Could we simulate large-scale literary processes?

In principle, of course. Why not? In practice, not yet. To be sure, I’ve seen the possibility mentioned here and there, and I’ve seen an example or two. But it’s not something many are thinking about, much less doing.

Nonetheless, as I was thinking about How Quickly Do Literary Standards Change? (Underwood and Sellers 2015) I found myself thinking about simulation. The object of such a simulation would be to demonstrate the principal result of that work, as illustrated in this figure:

[Figure: the direction of change in 19th-century poetic diction, from Underwood and Sellers 2015]

Each dot, regardless of color or shape, represents the position of a volume of poetry in a one-dimensional abstraction over a 3,200-dimensional space – though that’s not how Underwood and Sellers explain it (for further remarks see “Drifting in Space” in my post, Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction). The trend line indicates that poetry is shifting in that space along a uniform direction over the course of the 19th century. Thus there seems to be a large-scale direction to that literary system. Could we create a simulation that achieves that result through ‘local’ means, without building a telos into the system?

The only way to find out would be to construct such a system. I’m not in a position to do that, but I can offer some remarks about how we might go about doing it.

* * * * *

I note that this post began as something I figured I could knock out in two or three afternoons. We’ve got a bunch of texts, a bunch of people, and the people choose to read texts, cycle after cycle after cycle. How complicated could it be to make a sketch of that? Pretty complicated.

What follows is no more than a sketch. There are a bunch of places where I could say more, and more places where things need to be said but I don’t know how to say them. Still, if I can get this far in the course of a week or so, others can certainly take it further. It’s by no means a proof of concept, but it’s enough to convince me that at some time in the future we will be running simulations of large-scale literary processes.

I don’t know whether or not I would create such a simulation given a budget and appropriate collaborators. But I’m inclined to think that, if not now, then within the next ten years we’re going to have to attempt something like this, if for no other reason than to see whether or not it can tell us anything at all. The fact is, at some point, simulation is the only way we’re going to get a feel for the dynamics of literary process.

* * * * *

It’s a long way through this post, almost 5000 words. I begin with a quick look at an overall approach to simulating a literary system. Then I add some details, starting with stand-ins for (simulations of) texts and people. Next we have processes involving those objects. That’s the basic simulation, but it’s not the end of my post. I have some discussion of things we might do with this system, followed by suggestions about extending it. I conclude with a short discussion of the E-word.
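To give a flavor of what I have in mind, here’s a bare-bones sketch in Python. Everything in it is an assumption of mine: the dimensions, the taste-updating rule, the way new texts are seeded from successful ones. It is offered only to show how local choices by readers and writers can be wired together and then checked for global drift.

```python
import numpy as np

# A bare-bones sketch of the kind of simulation discussed above; every number
# and rule here is a stand-in assumption. Texts are points in a feature space;
# each cycle, readers pick the text nearest their tastes, tastes drift toward
# what was read, and new texts are written near the most-read ones.

rng = np.random.default_rng(7)
DIMS, READERS, TEXTS_PER_CYCLE, CYCLES = 10, 50, 20, 100

tastes = rng.normal(size=(READERS, DIMS))
current = rng.normal(size=(TEXTS_PER_CYCLE, DIMS))
centroids = []

for cycle in range(CYCLES):
    # Each reader reads the current text nearest his or her taste.
    dists = np.linalg.norm(current[None, :, :] - tastes[:, None, :], axis=2)
    choices = dists.argmin(axis=1)
    # Tastes drift a little toward what was just read (an assumption).
    tastes += 0.1 * (current[choices] - tastes)
    # New texts are written near the most-read texts, plus authorial noise.
    counts = np.bincount(choices, minlength=TEXTS_PER_CYCLE).astype(float)
    parents = rng.choice(TEXTS_PER_CYCLE, size=TEXTS_PER_CYCLE,
                         p=counts / counts.sum())
    current = current[parents] + rng.normal(scale=0.2,
                                            size=(TEXTS_PER_CYCLE, DIMS))
    centroids.append(current.mean(axis=0))

net_drift = np.linalg.norm(centroids[-1] - centroids[0])
print(net_drift)  # how far the center of the "literary system" has moved
```

Whether a setup like this produces sustained movement in a consistent direction, and under what parameters, is exactly the kind of question such a simulation would let us ask.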

Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.

Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post, likeness means similarity in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.

The question I’m asking is whether or not the literary system operates in such a way that such a book is very unlikely to have been written. If that is so, then what happened such that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, which wasn’t colonized until the last quarter of the century. European colonialism was quite extensive by that time, and much of it was quite brutal. So far as I know, the British novel in the early 19th century did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, but not entirely absent either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story and what drives changes in salience over time? What happened such that colonial brutality became highly salient at the turn of the 20th century?