Category Archives: Evolution


Reading Macroanalysis: Notes on the Evolution of Nineteenth Century Anglo-American Literary Culture

Matthew L. Jockers. Macroanalysis: Digital Methods & Literary History. University of Illinois Press, 2013. x + 192 pp. ISBN 978-0252-07907-8

I’ve compiled all the posts into a working paper. HERE’s the SSRN link. Abstract and introduction below.

* * * * *

Abstract: Macroanalysis is a statistical study of a corpus of 3,346 19th-century American, British, Irish, and Scottish novels. Jockers investigates metadata; the stylometrics of authorship, gender, genre, and national origin; themes, using a 500-item topic model; and influence, developing a graph model of the entire corpus in a 578-dimensional feature space. I recast his model in terms of cultural evolution, where the dynamics are those of blind variation and selective retention. Texts become phenotypical objects, words become genetic objects, and genres become species-like objects. The genetic elements combine and recombine in authors’ minds, but that recombination is substantially blind to audience preferences. Audiences determine whether or not a text remains alive in society.

* * * * *

Introduction: Get in the Driver’s Seat

I knew it was going to be good. But not THIS good. A better formulation: I didn’t know it would be good in THIS way, that it would put me in the driver’s seat, if only in a limited way.

The driver’s seat, you ask, what do you mean? In this case it means that I could actively work with the data. When, for example, I read Moretti’s Graphs, Maps, Trees, I read it as I do pretty much any book, though this one had a bunch of charts and diagrams, which is unusual for literary criticism. There wasn’t anything for me to do other than just read.

If I didn’t have ready access to the web, reading Macroanalysis would have been the same. But I do have web access and I use it all the time. So, when I got to Chapter 8, “Theme,” I also accessed the topic browser that Jockers had put on the web. Through this browser I could explore the topic model Jockers used in the book and, in particular, I could use it to investigate matters that Jockers hadn’t considered.

So I moved from thinking about Jockers’ work to using his work for my own intellectual ends. I ended up writing four posts (6.1 – 6.4) on that material totaling almost 12,000 words and I don’t know how many charts and graphs, all of which I got from Jockers’ web site. Once I’d worked through an initial curiosity about a spike that looked like Call of the Wild (but wasn’t, because that text isn’t in the database) I settled into some explorations framed by Leslie Fiedler’s Love and Death in the American Novel, Melville’s Moby Dick, and Edward Said’s anxiety on behalf of the autonomous existence of the aesthetic realm.

Data is Independent of Interpretations

You can do that as well, or whatever you wish. While the web browser gives you only limited access to Jockers’ corpus, that access is real and useful. A lot of work in digital criticism, and digital humanities in general, is like that. It produces ‘knowledge utilities’ that are generally useful, not just the private preserves of the original investigator.

There is an important epistemological point here as well. Jockers was led to this work by a certain set of intellectual concerns. Some of those concerns are quite general–about literature and the novel–while others are more specific–he has a particular interest in Irish and Irish-American literature. But I had no trouble putting his results to use in service of my own somewhat different interests.


From Macroanalysis to Cultural Evolution

The purpose of this post is to recast the work reported in Macroanalysis: Digital Methods & Literary History in terms appropriate to cultural evolution. The idea is to propose a model of cultural evolution and assign objects from Jockers’ analysis to play roles in that model. I will leave Jockers’ work untouched. All I’m doing is reframing it.

Before doing that, however, I should note that in the last quarter of a century or so there has been quite a lot of work on cultural evolution in a variety of disciplines, including linguistics, anthropology, archaeology, and biology. Though it must be done at some time, I have no intention of even attempting to review that work here and so to place the scheme I propose in relation to it. That’s a job for another time and another venue. I note, however, that I have done quite a bit of work on cultural evolution myself and that some of that discussion can be found in documents I list at the end of this post.

Why Evolution?

First of all, why bother to recast the processes of literary history in evolutionary terms at all? Jockers wrote an excellent book without creating an evolutionary model, though he mentioned evolution here and there. What’s to be gained by this recasting?

As far as I can tell, much of the work that has been done on cultural evolution has been undertaken simply to exercise and extend the range of evolutionary discourse. It has not, as yet, resulted in an understanding of cultural process that is deeper than more conventional forms of historical discourse. Much of my own work has been undertaken in this spirit. I believe that, yes, at some point, evolutionary explanation will prove more robust than other forms of explanation, but we’re not there yet.

This work, in effect, looks to evolutionary accounts as exhibiting something like formal cause in Aristotle’s sense. Evolutionary accounts are about the distribution of traits across populations. In biology such accounts have a characteristic formal appearance, so that, e.g., a phylogenetic analysis of a population of entities tends to “look” a certain way. So, in the cultural sphere, let’s conduct a similar analysis and see how things look, even if we don’t have our entities embedded in the kind of causal framework that genetics and population biology, molecular biology, and developmental biology provide the biologist.

That’s fine, as long as we remind ourselves periodically that that’s what we’re doing. But we must keep looking for the terms in which to construct a causal model.

What I specifically want from an evolutionary approach to culture is

  • a way to think about Said’s autonomous aesthetic realm,
  • a way to prove out Shelley’s assertion that “poets are the unacknowledged legislators of the world,”
  • a way of restoring agency to writers and readers rather than casting them as puppets of various vast and impersonal forces, and
  • a way of thinking about the canon in relation to the whole of literary culture.

That’s what I want. Those requirements imply having a causal model. Whether or not I’ll get it, that’s another matter.

Current critical approaches, however, in which individual humans are but nodal points in the machinations of vast and impersonal hegemonic forces, have trouble on all these points. Individual human beings are deprived of agency thus turning readers into zombies watching the ghosts of dead authors flicker on the remaining walls of Plato’s cave. The canon is captive to those same hegemonic forces, which have promulgated Shelley’s defense as an opiate for the masses, which R’ us.

The critical machine is broken. It’s time to start over. Before we do that, however, I need to dispense with one objection to seeking an evolutionary account of cultural phenomena.


Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society


This is revised from the introduction to a website I put up in the old days of web 1.0, all in hand-coded HTML. Where I’ve since uploaded downloadable versions of the documents I’ve used those links in this revised introduction, but you’re welcome to access the online versions from the old introduction.

Mind and Culture

A central phenomenon of the human presence on earth is that, over the long term, we have gained ever more capacity to understand and manipulate the physical world and, though some would debate this, the human worlds of psyche and society. The major purpose of the theory which the late David Hays and I have developed (and which I continue to develop) is to understand the mental structures and processes underlying that increased capacity. While more conventional students of history and of cultural evolution have much to say about what happened and when and what was influenced by what else, few have much to say about the conceptual and affective mechanisms in which these increased capacities are embedded. That is the story we have been endeavoring to tell.

Our theory is thus about processes in the human mind. Those processes evolve in tandem with culture. They require culture for their support while they enable culture through their capacities. In particular, we believe that the genetic elements of culture are to be found in the external world, in the properties of artifacts and behaviors, not inside human heads. Hays first articulated this idea in his book on the evolution of technology, and I have developed it in my papers Culture as an Evolutionary Arena and Culture’s Evolutionary Landscape, in my book on music, Beethoven’s Anvil: Music in Mind and Culture, and in various posts at New Savanna and one for the National Humanities Center, which I have aggregated into three working papers:

This puts our work at odds with some students of cultural evolution, especially those who identify with memetics, who tend to think of culture’s genetic elements as residing in nervous systems.

We have aspired to a system of thought in which the mechanisms of mind and feeling have discernible form and specificity rather than being the airy nothings of philosophical wish and theological hope. We would be happy to see computer simulations of the mechanisms we’ve been proposing. Unfortunately, neither the computational art nor our thinking has been up to this task. But that, together with the neuropsychologist’s workbench, is the arena in which these matters must eventually find representation, investigation, and, a long way down the line, resolution. The point is that, however vague our ideas about mechanisms currently may be, it is our conviction that the phenomenon under investigation, culture and its implementation in the human brain, is not vague and formless, nor is it, any more, beyond our ken.

Major Transitions

The story we tell is one of cultural paradigms existing at four levels of sophistication, which we call ranks. In the terminology of current evolutionary biology, these ranks represent major transitions in cultural life. Rank 1 paradigms emerged when the first humans appeared on the savannas of Africa speaking language as we currently know it. Those paradigms structured the lives of the primitive societies which emerged perhaps 50,000 to 100,000 years ago. Around 5,000 to 10,000 years ago Rank 2 paradigms emerged in relatively large stable human societies with people subsisting on systematic agriculture, living in walled cities and reading written texts. Rank 3 paradigms first emerged in Europe during the Renaissance and gave European cultures the capacity to dominate, in a sense, to create, world history over the last 500 years. This century has begun to see the emergence of Rank 4 paradigms.


Vyv Evans: The Human Meaning-Making Engine

If you read my last post here at Replicated Typo to the very end, you may remember that I promised to recommend a book and to return to one of the topics of this previous post. I won’t do this today, but I promise I will catch up on it in due time.

What I just did – promising something – is a nice example of one of the two functions of language which Vyvyan Evans from Bangor University distinguished in his talk on “The Human Meaning-Making Engine” yesterday at the UK Cognitive Linguistics Conference. More specifically, the act of promising is an example of the interactive function of language, which is of course closely intertwined with its symbolic function. Evans proposed two different sources for these two functions. The interactive function, he argued, arises from the human instinct for cooperation, whereas meaning arises from the interaction between the linguistic and the conceptual system. While language provides the “How” of meaning-making, the conceptual system provides the “What”. Evans used some vivid examples (e.g. this cartoon exemplifying nonverbal communication) to make clear that communication is not contingent on language. However, “language massively amplifies our communicative potential.”

The linguistic system, he argued, has evolved as an executive control system for the conceptual system. While the latter is broadly comparable with that of other animals, especially great apes, the linguistic system is uniquely human. What makes it unique, however, is not the ability to refer to things in the world, which can arguably be found in other animals as well. What is uniquely human, he argued, is the ability to refer symbolically in a sign-to-sign (word-to-word) direction rather than “just” in a sign-to-world (word-to-world) direction. Evans illustrated this “word-to-word” direction with Hans-Jörg Schmid’s (e.g. 2000; see also here) work on “shell nouns”, i.e. nouns “used in texts to refer to other passages of the text and to reify them and characterize them in certain ways.” For instance, the stuff I was talking about in the last paragraph would be an example of a shell noun.

According to Evans, the “word-to-word” direction is crucial for the emergence of e.g. lexical categories and syntax, i.e. the “closed-class” system of language. Grammaticalization studies indicate that the “open-class” system of human languages is evolutionarily older than the “closed-class” system, which is comprised of grammatical constructions (in the broadest sense). However, Evans also emphasized that there is a lot of meaning even in closed-class constructions, as e.g. Adele Goldberg’s work on argument structure constructions shows: We can make sense of a sentence like “Someone somethinged something to someone” although the open-class items are left unspecified.

Constructions, he argued, index or cue simulations, i.e. re-activations of body-based states stored in cortical and subcortical brain regions. He discussed this with the example of the cognitive model for Wales: We know that Wales is a geographical entity. Furthermore, we know that “there are lots of sheep, that the Welsh play Rugby, and that they dress in a funny way.” (Sorry, James. Sorry, Sean.) Oh, and “when you’re in Wales, you shouldn’t say, It’s really nice to be in England, because you will be lynched.”

On a more serious note, the cognitive models connected to closed-class constructions, e.g. simple past -ed or progressive -ing, are of course much more abstract but can also be assumed to arise from embodied simulations (cf. e.g. Bergen 2012). But in addition to the cognitive dimension, language of course also has a social and interactive dimension drawing on the apparently instinctive drive towards cooperative behaviour. Culture (or what Tomasello calls “collective intentionality”) is contingent on this deep instinct, which Levinson (2006) calls the “human interaction engine”. Evans’ “meaning-making engine” is the logical continuation of this idea.

Just like Evans’ theory of meaning (LCCM theory), his idea of the “meaning-making engine” is basically an attempt at integrating a broad variety of approaches into a coherent model. This might seem a bit eclectic at first, but it’s definitely not the worst thing to do, given that there is significant conceptual overlap between different theories which, however, tends to be blurred by terminological incongruities. Apart from Deacon’s (1997) “Symbolic Species” and Tomasello’s work on shared and joint intentionality, which he explicitly discussed, he draws on various ideas that play a key role in Cognitive Linguistics. For example, the distinction between open- and closed-class systems features prominently in Talmy’s (2000) Cognitive Semantics, as does the notion of the human conceptual system. The idea of meaning as conceptualization and embodied simulation of course goes back to the groundbreaking work of, among others, Lakoff (1987) and Langacker (1987, 1991), although empirical support for this hypothesis has been gathered only recently in the framework of experimental semantics (cf. Matlock & Winter forthc. – if you have an account at academia.edu, you can read this paper here). All in all, then, Evans’ approach might prove an important further step towards integrating Cognitive Linguistics and language evolution research, as has been proposed by Michael and James in a variety of talks and papers (see e.g. here).

Needless to say, it’s impossible to judge from a necessarily fairly sketchy conference presentation if this model qualifies as an appropriate and comprehensive account of the emergence of meaning. But it definitely looks promising and I’m looking forward to Evans’ book-length treatment of the topics he touched upon in his talk. For now, we have to content ourselves with his abstract from the conference booklet:

In his landmark work, The Symbolic Species (1997), cognitive neurobiologist Terrence Deacon argues that human intelligence was achieved by our forebears crossing what he terms the “symbolic threshold”. Language, he argues, goes beyond the communicative systems of other species by moving from indexical reference – relations between vocalisations and objects/events in the world — to symbolic reference — the ability to develop relationships between words — paving the way for syntax. But something is still missing from this picture. In this talk, I argue that symbolic reference (in Deacon’s terms), was made possible by parametric knowledge: lexical units have a type of meaning, quite schematic in nature, that is independent of the objects/entities in the world that words refer to. I sketch this notion of parametric knowledge, with detailed examples. I also consider the interactional intelligence that must have arisen in ancestral humans, paving the way for parametric knowledge to arise. And, I also consider changes to the primate brain-plan that must have co-evolved with this new type of knowledge, enabling modern Homo sapiens to become so smart.

 

References

Bergen, Benjamin K. (2012): Louder than Words. The New Science of How the Mind Makes Meaning. New York: Basic Books.

Deacon, Terrence W. (1997): The Symbolic Species. The Co-Evolution of Language and the Brain. New York, London: Norton.

Lakoff, George (1987): Women, Fire, and Dangerous Things. What Categories Reveal about the Mind. Chicago: The University of Chicago Press.

Langacker, Ronald W. (1987): Foundations of Cognitive Grammar. Vol. 1. Theoretical Prerequisites. Stanford: Stanford University Press.

Langacker, Ronald W. (1991): Foundations of Cognitive Grammar. Vol. 2. Descriptive Application. Stanford: Stanford University Press.

Levinson, Stephen C. (2006): On the Human “Interaction Engine”. In: Enfield, Nick J.; Levinson, Stephen C. (eds.): Roots of Human Sociality. Culture, Cognition and Interaction. Oxford: Berg, 39–69.

Matlock, Teenie; Winter, Bodo (forthc.): Experimental Semantics. In: Heine, Bernd; Narrog, Heiko (eds.): The Oxford Handbook of Linguistic Analysis. 2nd ed. Oxford: Oxford University Press.

Schmid, Hans-Jörg (2000): English Abstract Nouns as Conceptual Shells. From Corpus to Cognition. Berlin, New York: De Gruyter (Topics in English Linguistics, 34).

Talmy, Leonard (2000): Toward a Cognitive Semantics. 2 vol. Cambridge, Mass: MIT Press.

 


Why Disagree? Some Critical Remarks on the Integration Hypothesis of Human Language Evolution

Shigeru Miyagawa, Shiro Ojima, Robert Berwick and Kazuo Okanoya have recently published a new paper in Frontiers in Psychology, which can be seen as a follow-up to the 2013 Frontiers paper by Miyagawa, Berwick and Okanoya (see Hannah’s post on this paper). While the earlier paper introduced what they call the “Integration Hypothesis of Human Language Evolution”, the follow-up paper seeks to provide empirical evidence for this theory and discusses potential challenges to the Integration Hypothesis.

The basic idea of the Integration Hypothesis, in a nutshell, is this: “All human language sentences are composed of two meaning layers” (Miyagawa et al. 2013: 2), namely “E” (for “expressive”) and “L” (for “lexical”). For example, sentences like “John eats a pizza”, “John ate a pizza”, and “Did John eat a pizza?” are supposed to have the same lexical meaning, but they vary in their expressive meaning. Miyagawa et al. point to some parallels between expressive structure and birdsong on the one hand and lexical structure and the alarm calls of non-human primates on the other. More specifically, “birdsongs have syntax without meaning” (Miyagawa et al. 2014: 2), whereas alarm calls consist of “isolated uttered units that correlate with real-world references” (ibid.). Importantly, however, even in human language, the Expression Structure (ES) only admits one layer of hierarchical structure, while the Lexical Structure (LS) does not admit any hierarchical structure at all (Miyagawa et al. 2013: 4). The unbounded hierarchical structure of human language (“discrete infinity”) comes about through recursive combination of both types of structure.

This is an interesting hypothesis (“interesting” being a convenient euphemism for “well, perhaps not that interesting after all”). Let’s have a closer look at the evidence brought forward for this theory.

Miyagawa et al. “focus on the structures found in human language” (Miyagawa et al. 2014: 1), particularly emphasizing the syntactic structure of sentences and the internal structure of words. In a sentence like “Did John eat pasta?”, the lexical items John, eat, and pasta constitute the LS, while the auxiliary do, being a functional element, is seen as belonging to the expressive layer. In a more complex sentence like “John read the book that Mary wrote”, the VP and NP nodes are allocated to the lexical layer, while the DP and CP nodes are allocated to the expressive layer.

Fig. 9 from Miyagawa et al. (2014), illustrating how unbounded hierarchical structure emerges from recursive combination of E- and L-level structures


As pointed out above, LS elements cannot directly combine with each other according to Miyagawa et al. (the ungrammaticality of e.g. John book and want eat pizza is taken as evidence for this), while ES is restricted to one layer of hierarchical structure. Discrete infinity then arises through recursive application of two rules:

(i) EP → E LP
(ii) LP → L EP
Rule (i) states that the E category can combine with LP to form an E-level structure. Rule (ii) states that the L category can combine with an E-level structure to form an L-level structure. Together, these two rules suffice to yield arbitrarily deep hierarchical structures.
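Read as a toy grammar, rules (i) and (ii) are mutually recursive, and that recursion can be made concrete in a few lines of code. The following Python sketch is my own illustrative device (the `depth` cutoff and the bracketed output format are not part of Miyagawa et al.’s formalism); it merely shows how alternating the two rewrites yields arbitrarily deep E/L structures:

```python
# Rule (i):  EP -> E LP
# Rule (ii): LP -> L EP
# Mutually recursive expansion of the two rules; `depth` bounds the
# recursion so the (in principle unbounded) expansion terminates.

def expand_EP(depth: int) -> str:
    """Apply rule (i): EP -> E LP, bottoming out in a bare E head."""
    if depth == 0:
        return "E"
    return f"[EP E {expand_LP(depth - 1)}]"

def expand_LP(depth: int) -> str:
    """Apply rule (ii): LP -> L EP, bottoming out in a bare L head."""
    if depth == 0:
        return "L"
    return f"[LP L {expand_EP(depth - 1)}]"

print(expand_EP(1))  # [EP E L]
print(expand_EP(3))  # [EP E [LP L [EP E L]]]
```

Each level of embedding alternates an E head with an L head, which is exactly the picture the authors describe: only finite-state processes inside E and L, with unbounded depth arising from their combination.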

The alternation between lexical and expressive elements, as exemplified in Figure (3) from the 2014 paper (= Figure 9 from the 2013 paper, reproduced above), is thus essential to their theory since they argue that “inside E and L we only find finite-state processes” (Miyagawa et al. 2014: 3). Several phenomena, most notably Agreement and Movement, are explained as “linking elements” between lexical and functional heads (cf. also Miyagawa 2010). A large proportion of the 2014 paper is therefore dedicated to phenomena that seem to argue against this hypothesis.

For example, word-formation patterns that can be applied recursively seem to provide a challenge for the theory, cf. example (4) in the 2014 paper:

(4) a. [anti-missile]
b. [anti-[anti-missile]missile] missile

The ostensible point is that this formation can involve center embedding, which would constitute a non-finite state construction.

However, they propose a different explanation:

When anti- combines with a noun such as missile, the sequence anti-missile is a modifier that would modify a noun with this property, thus, [anti-missile]-missile,  [anti-missile]-defense. Each successive expansion forms via strict adjacency, (…) without the need to posit a center embedding, non-regular grammar.

Similarly, reduplication is re-interpreted as a finite state process. Furthermore, they discuss N+N compounds, which seem to violate “the assumption that L items cannot combine directly — any combination requires intervention from E.” However, they argue that the existence of linking elements in some languages provides evidence “that some E element does occur between the two L’s”. Their example is German Blume-n-wiese ‘flower meadow’; others include Freundeskreis ‘circle of friends’ and Schweinshaxe ‘pork knuckle’. It is commonly assumed that linking elements arose from grammatical markers such as genitive -s, e.g. Königswürde ‘royal dignity’ (from des Königs Würde ‘the king’s dignity’). In this example, the origin of the linking element is still transparent. The -es- in Freundeskreis, by contrast, is an example of a so-called unparadigmatic linking element, since it literally translates to ‘circle of a friend’. In this case, as in many others, the linking element cannot be traced back directly to a grammatical affix. Instead, it seems plausible to assume that the former inflectional suffix was reanalyzed as a linking element on the basis of the paradigmatic cases and subsequently used in other compounds as well.

To be sure, the historical genesis of German linking elements doesn’t shed much light on their function in present-day German, which is subject to considerable debate. Keeping in mind that these items evolved gradually, however, raises the question of how the E and L layers of compounds were linked in earlier stages of German (or any other language that has linking elements). In addition, there are many German compounds without a linking element, and in other languages such as English, “linked” compounds like craft-s-man are the exception rather than the rule. Miyagawa et al.’s solution seems a bit too easy to me: “In the case of teacup, where there is no overt linker, we surmise that a phonologically null element occurs in that position.”

As an empiricist, I am of course very skeptical towards any kind of null element. One could possibly rescue their argument by adopting concepts from Construction Grammar and assigning E status to the morphological schema [N+N], regardless of the presence or absence of a linking element, but then again, from a Construction Grammar point of view, assuming a fundamental dichotomy between E and L structures doesn’t make much sense in the first place. That said, I must concede that the E vs. L distinction reflects basic properties of language that play a role in any linguistic theory, but especially in Construction Grammar and in Cognitive Linguistics. On the one hand, it reflects the rough distinction between “open-class” and “closed-class” items, which plays a key role in Talmy’s (2000) Cognitive Semantics and in the grammaticalization literature (cf. e.g. Hopper & Traugott 2003). As many grammaticalization studies have shown, most if not all closed-class items are “fossils” of open-class items. The abstract concepts they encode (e.g. tense or modality) are highly relevant to our everyday experience and, consequently, to our communication, which is why they got grammaticized in the first place. As Rose (1973: 516) put it, there is no need for a word-formation affix deriving denominal verbs meaning “grasp NOUN in the left hand and shake vigorously while standing on the right foot in a 2 ½ gallon galvanized pail of corn-meal-mush”. But again, being aware of the historical emergence of these elements raises the question of whether a principled distinction between the meanings of open-class vs. closed-class elements is warranted.

On the other hand, the E vs. L distinction captures the fundamental insight that languages pair form with meaning. Although they are explicitly talking about the “duality of semantics”, Miyagawa et al. frequently allude to formal properties of language, e.g. by linking up syntactic structures with the E layer:

The expression layer is similar to birdsongs; birdsongs have specific patterns, but they do not contain words, so that birdsongs have syntax without meaning (Berwick et al., 2012), thus it is of the E type.

While the “expression” layer thus seems to account for syntactic and morphological structures, which are traditionally regarded as purely “formal” and meaningless, the “lexical” layer captures the referential function of linguistic units, i.e. their “meaning”. But what is meaning, actually? The LS as conceptualized by Miyagawa et al. only covers the truth-conditional meaning of sentences, or their “conceptual content”, as Langacker (2008) calls it. From a usage-based perspective, however, “an expression’s meaning consists of more than conceptual content – equally important to linguistic semantics is how that content is shaped and construed.” (Langacker 2002: xv) According to the Integration Hypothesis, this “construal” aspect is taken care of by closed-class items belonging to the E layer. However, the division of labor envisaged here seems highly idealized. For example, tense and modality can be expressed using open-class (lexical) items and/or relying on contextual inference, e.g. German Ich gehe morgen ins Kino ‘I go to the cinema tomorrow’.

It is a truism that languages are inherently dynamic, exhibiting a great deal of synchronic variation and diachronic change. Given this dynamicity, it seems hard to defend the hypothesis that a fundamental distinction between E and L structures which cannot combine directly can be found universally in the languages of the world (which is what Miyagawa et al. presuppose). We have already seen that in the case of compounds, Miyagawa et al. have to resort to null elements in order to uphold their hypothesis. Furthermore, it seems highly likely that some of the “impossible lexical structures” mentioned as evidence for the non-combinability hypothesis are grammatical at least in some creole languages (e.g. John book, want eat pizza).

In addition, it seems somewhat odd that E- and L-level structures, as “relics” of evolutionarily earlier forms of communication, are sought (and expected to be found) in present-day languages, which have been subject to millennia of development. This wouldn’t be a problem if the authors were not dealing with meaning, which is not only particularly prone to change and variation, but also highly flexible and context-dependent. But even if we assume that the existence of E-layer elements such as affixes and other closed-class items draws on innate dispositions, it seems highly speculative to link the E layer with birdsong and the L layer with primate calls on semantic grounds.

The idea that human language combines features of birdsong with features of primate alarm calls is certainly not too far-fetched, but the way this hypothesis is defended in the two papers discussed here seems strangely halfhearted and, all in all, quite unconvincing. What is announced as “providing empirical evidence” turns out to be a mostly introspective discussion of made-up English example sentences, and if the English examples aren’t convincing enough, the next best language (e.g. German) is consulted. (To be fair, in his monograph, Miyagawa (2010) takes a broader variety of languages into account.) In addition, much of the discussion is purely theory-internal and thus reminiscent of what James has so appropriately called “Procrustean Linguistics“.

To their credit, Miyagawa et al. do not rely exclusively on theory-driven analyses of made-up sentences but also take some comparative and neurological studies into account. Thus, the Integration Hypothesis – quite unlike the “Mystery” paper (Hauser et al. 2014) co-authored by Berwick and published in, you guessed it, Frontiers in Psychology (and insightfully discussed by Sean) – might be seen as a tentative step towards bridging the gap pointed out by Sverker Johansson in his contribution to the “Perspectives on Evolang” section in this year’s Evolang proceedings:

A deeper divide has been lurking for some years, and surfaced in earnest in Kyoto 2012: that between Chomskyan biolinguistics and everybody else. For many years, Chomsky totally dismissed evolutionary linguistics. But in the past decade, Chomsky and his friends have built a parallel effort at elucidating the origins of language under the label ‘biolinguistics’, without really connecting with mainstream Evolang, either intellectually or culturally. We have here a Kuhnian incommensurability problem, with contradictory views of the nature of language.

On the other hand, one could also see the Integration Hypothesis as deepening the gap, since it draws entirely on generative (or "biolinguistic") presuppositions about the nature of language which are not backed by independent empirical evidence. Therefore, to conclusively support the Integration Hypothesis, much more evidence from many different fields would be necessary, and the theoretical presuppositions it draws on would have to be scrutinized on empirical grounds as well.

References

Hauser, Marc D.; Yang, Charles; Berwick, Robert C.; Tattersall, Ian; Ryan, Michael J.; Watumull, Jeffrey; Chomsky, Noam; Lewontin, Richard C. (2014): The Mystery of Language Evolution. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2014.00401

Hopper, Paul J.; Traugott, Elizabeth Closs (2003): Grammaticalization. 2nd ed. Cambridge: Cambridge University Press.

Johansson, Sverker (2014): Perspectives on Evolang. In: Cartmill, Erica A.; Roberts, Séan; Lyn, Heidi; Cornish, Hannah (eds.): The Evolution of Language. Proceedings of the 10th International Conference. Singapore: World Scientific, 14.

Langacker, Ronald W. (2002): Concept, Image, and Symbol. The Cognitive Basis of Grammar. 2nd ed. Berlin, New York: De Gruyter (Cognitive Linguistics Research, 1).

Langacker, Ronald W. (2008): Cognitive Grammar. A Basic Introduction. Oxford: Oxford University Press.

Miyagawa, Shigeru (2010): Why Agree? Why Move? Unifying Agreement-Based and Discourse-Configurational Languages. Cambridge: MIT Press (Linguistic Inquiry, Monographs, 54).

Miyagawa, Shigeru; Berwick, Robert C.; Okanoya, Kazuo (2013): The Emergence of Hierarchical Structure in Human Language. In: Frontiers in Psychology 4. doi: 10.3389/fpsyg.2013.00071

Miyagawa, Shigeru; Ojima, Shiro; Berwick, Robert C.; Okanoya, Kazuo (2014): The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages. In: Frontiers in Psychology 5. doi: 10.3389/fpsyg.2014.00564

Rose, James H. (1973): Principled Limitations on Productivity in Denominal Verbs. In: Foundations of Language 10, 509–526.

Talmy, Leonard (2000): Toward a Cognitive Semantics. 2 vol. Cambridge, Mass: MIT Press.

P.S.: After writing three posts in a row in which I criticized all kinds of studies and papers, I hereby promise that in my next post, I will thoroughly recommend a book and return to a question raised only in passing in this post. [*suspenseful cliffhanger music*]

Skewed frequencies in phonology. Data from Fry (1947), based on an analysis of 17,000 sounds of transcribed British English text; cited in Taylor (2012: 162f.). “Token frequencies refer to the occurrences of the sounds in the text comprising the corpus; type frequencies are the number of occurrences in the word types in the text.”
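The token/type distinction in the caption above can be made concrete with a toy sketch. This is purely illustrative and assumes a tiny made-up word list (letters stand in for phonemes): token frequency counts a sound's occurrences across the running text, while type frequency counts its occurrences across the distinct word types only.

```python
# Toy illustration of token vs. type frequency (letters stand in for
# phonemes; the "corpus" is a made-up example, not Fry's data).
from collections import Counter

corpus = ["the", "cat", "sat", "on", "the", "mat"]

# Token frequencies: count sounds across all running words.
token_freq = Counter(ch for word in corpus for ch in word)

# Type frequencies: count sounds across distinct word types only.
type_freq = Counter(ch for word in set(corpus) for ch in word)

print(token_freq["t"])  # 5 ("the" contributes twice)
print(type_freq["t"])   # 4 ("the" contributes once)
```

Because "the" occurs twice in the running text but is a single word type, the two counts diverge; this is exactly why Fry's token and type figures differ.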

The Myth of Language Universals at Birth

[This is a guest post by Stefan Hartmann]

 

“Chomsky still rocks!” This comment on Twitter refers to a recent paper in PNAS by David M. Gómez et al. entitled “Language Universals at Birth”. Indeed, the question Gómez et al. address is one of the most hotly debated questions in linguistics: Does children’s language learning draw on innate capacities that evolved specifically for linguistic purposes – or rather on domain-general skills and capabilities?

Lbifs, Blifs, and Brains

Gómez and his colleagues investigate these questions by studying how children respond to different syllable structures:

It is well known that across languages, certain structures are preferred to others. For example, syllables like blif are preferred to syllables like bdif and lbif. But whether such regularities reflect strictly historical processes, production pressures, or universal linguistic principles is a matter of much debate. To address this question, we examined whether some precursors of these preferences are already present early in life. The brain responses of newborns show that, despite having little to no linguistic experience, they reacted to syllables like blif, bdif, and lbif in a manner consistent with adults’ patterns of preferences. We conjecture that this early, possibly universal, bias helps shaping language acquisition.

More specifically, they assume a restriction on syllable structure known as the Sonority Sequencing Principle (SSP), which has been proposed as "a putatively universal constraint" (p. 5837). According to this principle, "syllables maximize the sonority distance from their margins to their nucleus". For example, in /blif/, /b/ is less sonorous than /l/, which is in turn less sonorous than the vowel /i/, which constitutes the syllable's nucleus. In /lbif/, by contrast, there is a sonority fall, which is why this syllable is extremely ill-formed according to the SSP.
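The SSP check described above can be sketched in a few lines of code. The sonority values and category labels below are illustrative assumptions based on the simplified scale shown in the figure, not the scale used by Gómez et al.

```python
# Minimal sketch of an SSP onset check, using an assumed, simplified
# sonority scale (higher value = more sonorous).
SONORITY = {
    "p": 1, "t": 1, "k": 1, "b": 1, "d": 1, "g": 1,  # stops
    "f": 2, "s": 2, "v": 2, "z": 2,                  # fricatives
    "m": 3, "n": 3,                                  # nasals
    "l": 4, "r": 4,                                  # liquids
    "w": 5, "j": 5,                                  # glides
    "a": 6, "e": 6, "i": 6, "o": 6, "u": 6,          # vowels
}

def onset_profile(syllable: str) -> str:
    """Classify the sonority profile of a syllable's onset cluster."""
    # Collect the consonants preceding the first vowel (the nucleus).
    onset = []
    for seg in syllable:
        if SONORITY[seg] == 6:  # reached the vowel nucleus
            break
        onset.append(SONORITY[seg])
    if len(onset) < 2:
        return "simple onset"
    if onset[0] < onset[1]:
        return "rise (well-formed)"
    if onset[0] == onset[1]:
        return "plateau (dispreferred)"
    return "fall (ill-formed)"

print(onset_profile("blif"))  # rise: /b/ (1) < /l/ (4)
print(onset_profile("bdif"))  # plateau: /b/ (1) == /d/ (1)
print(onset_profile("lbif"))  # fall: /l/ (4) > /b/ (1)
```

The three stimulus types in the experiments (/blif/, /bdif/, /lbif/) fall out as rise, plateau, and fall, respectively, which is the gradient of well-formedness the SSP predicts.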

A simplified version of the sonority scale.

In a first experiment, Gómez et al. investigated "whether the brains of newborns react differentially to syllables that are well- or extremely ill-formed, as defined by the SSP" (p. 5838). They had 24 newborns listen to /blif/- and /lbif/-type syllables while measuring the infants' brain activity. In the left temporal and right frontoparietal brain areas, "well-formed syllables elicited lower oxyhemoglobin concentrations than ill-formed syllables." In a second experiment, they presented another group of 24 newborns with syllables exhibiting either a sonority rise (/blif/) or two consonants of the same sonority (e.g. /bdif/) in their onset. The latter option is dispreferred across languages, and previous behavioral experiments with adult speakers have also shown a strong preference for the former pattern. "Results revealed that oxyhemoglobin concentrations elicited by well-formed syllables are significantly lower than concentrations elicited by plateaus in the left temporal cortex" (p. 5839). However, in contrast to the first experiment, there is no significant effect in the right frontoparietal region, "which has been linked to the processing of suprasegmental properties of speech" (p. 5838).

In a follow-up experiment, Gómez et al. investigated the role of the position of the CC-patterns within the word: Do infants react differently to /lbif/ than to, say, /olbif/? Indeed, they do: “Because the sonority fall now spans across two syllables (ol.bif), rather than a syllable onset (e.g., lbif), such words should be perfectly well-formed. In line with this prediction, our results show that newborns’ brain responses to disyllables like oblif and olbif do not differ.”

How much linguistic experience do newborns have?

Taken together, these results indicate that newborn infants are already sensitive to syllabification (as the follow-up experiment suggests) as well as to certain preferences in syllable structure. This leads Gómez et al. to the conclusion "that humans possess early, experience-independent linguistic biases concerning syllable structure that shape language perception and acquisition" (p. 5840). This conjecture, however, is a very bold one. First of all, calling these preferences experience-independent presupposes that newborn infants have no linguistic experience at all. However, there is evidence that "babies' language learning starts from the womb". In their classic 1986 paper, Anthony DeCasper and Melanie Spence showed that "third-trimester fetuses experience their mothers' speech sounds and that prenatal auditory experience can influence postnatal auditory preferences." Pregnant women were instructed to read a story aloud to their unborn children whenever they felt that the fetus was awake. After birth, the infants' reactions to the same or a different story, read by their mother's or another woman's voice, were studied by monitoring the newborns' sucking behavior. Apart from the "experienced" infants who had been read the story, a group of "untrained" newborns served as control subjects. DeCasper and Spence found that for experienced subjects, the target story was more reinforcing than a novel story, regardless of whether it was recited by their mother's voice or another woman's. For the control subjects, by contrast, no difference between the stories could be found. "The only experimental variable that can systematically account for these findings is whether the infants' mothers had recited the target story while pregnant" (DeCasper & Spence 1986: 143).

Continue reading

darwin_birthday

Happy Darwin Day!

I had hoped to celebrate Darwin Day with a longer post discussing how language is often viewed as a challenging puzzle for natural selection. My main worry is that the formal design metaphor that pervades much of linguistics has been used, incorrectly IMHO, to divert attention away from studying language as a biological system based on organic logic. If this doesn't make much sense, you can do some background reading with Terrence Deacon's paper, Language as an emergent function: Some radical neurological and evolutionary implications. Alas, that's all I have to say on the matter for now, but if you're looking for something related to Darwin, evolution, and the origin of language, then I strongly suggest you head over to the excellent Darwin Correspondence Project and read their blog post on the subject:

Darwin started thinking about the origin of language in the late 1830s. The subject formed part of his wide-ranging speculations about the transmutation of species. In his private notebooks, he reflected on the communicative powers of animals, their ability to learn new sounds and even to associate them with words. “The distinction of language in man is very great from all animals”, he wrote, “but do not overrate—animals communicate to each other” (Barrett ed. 1987, p. 542-3). Darwin observed the similarities between animal sounds and various natural cries and gestures that humans make when expressing strong emotions such as fear, surprise, or joy. He noted the physical connections between words and sounds, exhibited in words like “roar”, “crack”, and “scrape” that seemed imitative of the things signified. He drew parallels between language and music, and asked: “did our language commence with singing—is this the origin of our pleasure in music—do monkeys howl in harmony”? (Barrett ed. 1987, p. 568).

koala

Koalas use a novel vocal organ to produce unusually low-pitched mating calls

Before Tecumseh Fitch put forward the size exaggeration hypothesis, many thought that the lowered larynx was unique to humans, which suggested that it was an adaptation specifically for the production of speech. However, Fitch showed that lowered larynxes appear in other animals, most notably the red deer, to exaggerate their perceived body-size by making the low calls of a typically larger animal. Whether this is the adaptive pressure that caused the human larynx to lower is still a controversial issue, and I talk about a couple of hypotheses here.

I was reminded of this this morning when I saw this koala on the BBC news making incredibly low mating calls. However, koalas don't achieve this remarkably low bellow by lowering their larynx; instead, they have an extra, larger pair of (previously undocumented) vocal folds spanning the intra-pharyngeal ostium (IPO), an oval opening within the velum.

The really short paper, along with a really creepy figure of a koala cut in two, is here:

http://www.sciencedirect.com/science/article/pii/S0960982213013444

IMGP3039

A Note on Memes and Historical Linguistics

When I began my most recent series of posts on memes, I did so because I wanted to think specifically about language: Does it make sense to treat words as memes? That question arose for a variety of reasons.

In the first place, if you are going to think about culture as an evolutionary phenomenon, language automatically looms large as so very much of culture depends on or is associated with language. And language consists of words, among other things. Further, historical linguistics is a well-developed discipline. We know a lot about how languages have changed over time, and change over time is what evolution is about.

However, words have meanings. And word meanings are rather fuzzy things, subject to dispute and to change that is independent of the word-form itself. Did I really want to treat word meanings as memes? That seemed rather iffy. But if I don’t treat word meanings as memetic, then what happens to language?

But THAT’s not quite how I put it going into that series of posts. Of course, I’ve known for a long time that words have forms and meanings. I don’t know whether it was my freshman year or my sophomore year that I read Roland Barthes’s Elements of Semiology (English translation 1967). That gave me Saussure’s trilogy of sign, signifier, and signified, the last of which seemed rather mysterious: “the signified is not ‘a thing’ but a mental representation of the ‘thing’.” Getting comfortable with that distinction, between the thing and the concept of the thing, took time and effort.

That’s an aside. Suffice to say, I got comfortable with that distinction. The distinction between signifier and signified was much easier.

And yet that distinction was not uppermost in my mind when I thought of language and cultural evolution. When I thought of memes. When I approached this series of essays, through some papers by Daniel Dennett, I thought of words the same way Dennett did: the whole kit and caboodle had to be a meme. It was the sign that was the meme.

That’s not how I ended up, of course. That ending took me a bit by surprise. Coming down that home stretch I was getting worried. It appeared to me that I was faced with two different classes of memes: couplers and the other one. What I did then was to divide the other one into two classes: targets and designators. And to do that I had to call on that thing I’ve known for decades and split the word in two: signifier and signified. It’s only the signifier that’s memetic. Signifiers are memes, but not signifieds.

It took me a couple of months to work that out, and I’d known it all along.

Sorta’.

What does that have to do with historical linguistics? Historical linguistics is based mostly on the study of relationships among signifiers, that is, relationships among the memetic elements of languages. Which makes sense, of course.

But… Continue reading

IMGP8243rdCROP

Cultural Evolution, Memes, and the Trouble with Dan Dennett

This is the final post in my current series on memes, cultural evolution, and the thought of Daniel Dennett. You can download a PDF of the whole series HERE. The abstract and introduction are below.

* * * * *

Abstract: Philosopher Dan Dennett’s conception of the active meme, moving about from brain to brain, is physically impossible and conceptually empty. It amounts to cultural preformationism. As the cultural analogue to genes, memes are best characterized as the culturally active properties of things, events, and processes in the external world. Memes are physically embodied in a substrate. The cultural analogue to the phenotype can be called an ideotype; ideotypes are mental entities existing in the minds of individual humans. Memes serve as targets for designing and fabricating artifacts, as couplers to synchronize and coordinate human interaction, and as designators (Saussurean signifiers). Cultural change is driven by the movement of memes between populations with significantly different cultural practices, understood through different populations of ideotypes.

* * * * *

Introduction: Taming the Wild Meme

These notes contain my most recent thinking on cultural evolution, an interest that goes back to my dissertation days in the 1970s at the State University of New York at Buffalo. My dissertation, Cognitive Science and Literary Theory (1978), included a chapter on narrative, “From Ape to Essence and the Evolution of Tales,” (subsequently published as “The Evolution of Narrative and the Self”). But that early work didn’t focus on the process of cultural evolution. Rather, it was about the unfolding of ever more sophisticated cultural forms–an interest I shared with my teacher, the late David G. Hays.

My current line of investigation is very much about process, the standard evolutionary process of random variation and selective retention as applied to cultural forms, rather than living forms. I began that work in the mid-1990s and took my cue from Hays, as I explain in the section below, “What’s a meme? Where I got my conception”. At the end of the decade I had drafted a book on music, Beethoven’s Anvil: Music in Mind and Culture (Basic 2001), in which I arrived at pretty much my current conception, but only with respect to music: music memes are the culturally active properties of musical sound.

I didn’t generalize the argument to language until I prepared a series of posts conceived as background to a (rather long and detailed) post I wrote for the National Humanities Institute in 2010: Cultural Evolution: A Vehicle for Cooperative Interaction Between the Sciences and the Humanities (PDF HERE). But I didn’t actually advance this conception in that post. Rather, I tucked it into an extensive series of background posts that I posted at New Savanna prior to posting my main article. That’s where, using the emic/etic distinction, I first advanced the completely general idea that memes are observable properties of objects and things that are culturally active. I’ve collected that series of posts into a single downloadable PDF: The Evolution of Human Culture: Some Notes Prepared for the National Humanities Center, Version 2.

But I still had doubts about that position. Though the last three of those background posts were about language, I still had reservations. The problem was meaning: If that conception was correct, then word meanings could not possibly be memetic. Did I really want to argue that?

The upshot of this current series of notes is that, yes, I really want to argue it. And I have done at some length while using several articles by the philosopher Daniel Dennett as my foil. For the most part I focus on figuring out what kinds of entities play the role of memes, but toward the end, “Cultural Evolution, So What?”, I have a few remarks about long-term dynamics, that is, about cultural change. Continue reading