A large part of human humour depends on understanding that the intention of the person telling the joke might be different from what they are actually saying. The person needs to tell the joke so that you understand that they're telling a joke, so they need to know that you know that they do not intend to convey the meaning they are about to utter... Things get even more complicated when we tell each other jokes that involve other people having thoughts and beliefs about yet other people. We call this knowledge nested intentions, or recursive mental attributions. We can already see, based on my complicated description, that this is a serious matter and requires scientific investigation. Fortunately, a recent paper by Dunbar, Launay and Curry (2015) investigated whether the structure of jokes is constrained by the number of nested intentions required to understand them. The authors make a couple of interesting predictions about the mental processing involved in humour, and about how this should be reflected in the structure and funniness of jokes. In today’s blogpost I want to discuss the paper's methodology and some of its claims.
The book is available through print-on-demand publisher Lulu for £23.72. This is the lowest price allowed by the site, and will provide EvoLang with £2.81 for each sale. The book now also has an ISBN: 978-1-326-61450-8.
This book is being made available due to popular demand, but all the papers and abstracts are freely available from the proceedings website, which is the canonical source. Unfortunately, the costs were too great to publish in colour, so the inside of the book is black and white.
In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.
And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.
Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):
To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.
Just so, the way of the world.
Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:
A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.
How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.
And there we have it.
Here's an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, pp. 49–73, 2016), by Gregory Radick (abstract below). Commenting on it on Dan Everett's FB page, Yorick Wilks observed: "It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer."
Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.
So EvoLang is over. But if you missed any of it, the papers are still available online. In celebration of the new digital format, I've chosen a number of papers for some post-conference awards (nothing official, just for fun!).
Most viewed papers
The proceedings website received 6,000 page hits, most of them during the conference itself. Here are the top 3 most viewed pages:
Semantic Approximation And Its Effect On The Development Of Lexical Conventions
Bill Noble and Raquel Fernández
Evolution Of What?
Most news coverage
Two papers were covered by Science magazine:
Dendrophobia In Bonobo Comprehension Of Spoken English
Robert Truswell (read the article here)
Most cited paper
One of the advantages of the papers being accessible online, even before the conference, is that other people can cite them. Indeed, on the day EvoLang ended, I received a short piece to review which cited this paper, which therefore gets the prize:
Anatomical Biasing Of Click Learning And Production: An MRI And 3D Palate Imaging Study
Dan Dediu and Scott Moisik
Best paper by an academic couple
By my count, there were 4 papers submitted by academic couples. My favorite was a great collaboration on a novel topic: the paper by Monika Pleyer and Michael Pleyer on taking the first steps towards integrating politeness theory and evolution (it was also shortlisted for best talk).
The Evolution Of Im/politeness
Monika Pleyer and Michael Pleyer
Best supplementary materials
Eight accepted papers included supplementary materials, which are available on the website. These range from hilarious image stimuli (my favorite: a witch painting a pizza), to a 7-page model explanation, through to NetLogo code and raw data and analysis scripts. But I'm afraid I'm going to choose my own paper's supplementary materials for including videos of people playing Minecraft. For science.
Deictic Tools Can Limit The Emergence Of Referential Symbol Systems
Elizabeth Irvine and Sean Roberts
A new paper, by Bill Thompson, Simon Kirby and Kenny Smith, has just appeared which contributes to everyone's favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to make the claim that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.
A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.
The first day of EvoLang includes several workshops (full list here) to which all attendees are invited. Gregory Mills and I are running a workshop on language evolution and interaction, and the schedule and papers are now available online.
Language Adapts to Interaction, 08:30-13:30, Monday, 21st March, 2016, New Orleans
Language has been shown to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is interaction – face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species. Therefore, the infrastructure for interaction may provide an insight into the origins of our unique communicative abilities. The emerging picture is that the infrastructure for interaction is an evolutionarily old requirement for the emergence of a complex linguistic system, and for a cooperative, cumulative culture more generally. That is, Language Adapts to Interaction.
The keynote talk is given by John Haviland, who covers an emerging sign language called Z, and argues that interactional tools such as gaze, pointing and attention management form the basis both of aspects of interaction such as turn-taking and of grammatical features in the language.
This is a preview of the talk Redundant Features Are Less Likely To Survive: Empirical Evidence From The Slavic Languages by Aleksandrs Berdicevskis and Hanne Eckhoff. Tuesday 22nd March, 14:30, room D.
One of the methodological trends of this year’s EvoLang seems to be intelligent exaptation. What I mean by this is that people do research on language evolution using tools that were developed for a completely different purpose. Examples include using zombies to observe the emergence of languages under severe phonological constraints, Minecraft to investigate the role of pointing in the emergence of language and EvoLang to study EvoLang. In addition to that, Hanne Eckhoff and I use syntactic parsers to quantify morphological redundancy.
The basic idea is to put to the test the assumption that redundant features are more likely to disappear from languages, especially if social factors favour the loss of excessive complexity. The problem is that nobody really knows what is redundant in real languages and what is not. We can define a feature as redundant if it is not necessary for successful communication, i.e. if hearers can infer the meanings of the messages they receive without using this feature. It is, however, still a long way from this definition to a quantitative measure. In theory, one could run psycholinguistic experiments; in practice, it is a difficult and costly venture (I tried).
In this paper, we replace humans with a dependency parser. For those who are not into computational linguistics: a parser is a program which can automatically identify (well, attempt to identify) the syntactic structure of a given sentence. A typical parser is first trained on a large number of human-annotated sentences. After its learning is over, it can parse non-annotated sentences on its own, relying on the information about the form of every word, its lemma, part of speech, morphological features and the linear order of words — just like a human being. If we remove a certain feature from its input and compare performance before and after the removal, we can estimate how important (=non-redundant) the feature was.
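The ablation logic can be sketched in a few lines. This is a toy illustration of the general idea, not the authors' code: the token format (dicts with a "feats" sub-dict, loosely modelled on CoNLL-style annotation) and the function names are my own assumptions.

```python
def ablate_feature(sentences, feature):
    """Return a copy of the treebank with one morphological feature
    stripped from every token. Tokens here are toy dicts with a
    'feats' sub-dict; real treebanks use CoNLL-style columns."""
    return [
        [{**tok, "feats": {k: v for k, v in tok["feats"].items() if k != feature}}
         for tok in sentence]
        for sentence in sentences
    ]

def feature_importance(accuracy_full, accuracy_ablated):
    """How much parsing accuracy drops when a feature is removed from
    the parser's input: a drop near zero means the feature is, by this
    measure, redundant."""
    return accuracy_full - accuracy_ablated

# Train and evaluate the parser once on the full treebank and once on
# ablate_feature(treebank, "Case"); the accuracy difference is the
# (non-)redundancy estimate for case marking.
```

The numbers fed to `feature_importance` would come from whatever evaluation metric the parser reports (e.g. attachment accuracy) before and after the ablation.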
We test whether this measure is any good by running a pilot study on the Slavic language group. We estimate the redundancy of morphological features in Common Slavic (Common Slavic itself has left no written legacy, but we happen to have an excellent treebank of Old Church Slavonic, which is often used as a proxy) and try to predict which features are likely to die out in 13 modern Slavic languages. While redundancy is of course not the sole determinant of survivability, it turns out to be a fairly good predictor.
Come to the talk to hear about fierce morphological competitions! They are friends, dative and locative, almost brothers, but if only one can stay alive, which will sacrifice itself? The perfect participle is an underdog past tense, its frequency negligible compared to that of its rivals, the aorist and the imperfect, but does its high non-redundancy score give it some hope?
Aleksandrs Berdicevskis is a postdoc in computational historical linguistics at an edge of the world (namely The Arctic University of Norway in the city of Tromsø) with a PhD in sociolinguistics from the University of Bergen, MA in theoretical linguistics from Moscow State University, two years’ experience in science journalism, two kids and a long-standing interest in language evolution.
The first question he usually gets from new acquaintances is about the spelling of his name. The first name is a common Russian name (Aleksandr-) with the obligatory Latvian inflectional marker for nominative masculine singular (-s). The full form is used in formal communication only, otherwise he is usually called Sasha (the Russian hypocorism for Aleksandr) or, for simplicity’s sake, Alex.
Replicated Typo is doing a series of previews for this year's EvoLang conference. If you'd like to add a preview of your own presentation, get in touch with Sean Roberts.
At this year's EvoLang Liz Irvine and I will be talking about how pointing can inhibit the emergence of symbolic communication.
Usually, pointing is thought to help the process of bootstrapping a symbolic system. You can point to stuff to help people agree on what certain symbols refer to. This process has been formalised in the 'naming game' (see Matt Spike's talk):
- I request an object by naming it (with an arbitrary symbol)
- You guess what I mean and give me an object
- I point to the object that I meant you to give me (feedback)
- We remember the name that referred to this object
This game is the basis for many models of the emergence of shared symbolic systems, including iterated learning experiments (e.g. Feher et al., and Macuch Silva & Roberts). Here are some robots playing the naming game in Luc Steels' lab:
However, the setup of these experiments assumes one crucial thing: that the individuals can't use pointing to make the request in the first place. Most experiments are set up so that participants must communicate symbolically before they can use pointing. If you allowed pointing to be used in a naming game, then it would probably go something like this:
- I point at the object I want.
- ~~I request an object by naming it (with an arbitrary symbol)~~
- You guess what I mean and give me an object
- I point to the object that I meant you to give me (feedback)
- ~~We remember the name that referred to this object~~
That is, if we're good enough at pointing then we don't need a symbolic language for this task.
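A sketch of this pointing-only version of the game makes the point concrete (again a toy illustration, not our experimental setup): every round succeeds immediately, and no lexicon is ever built.

```python
import random

OBJECTS = ["cone", "cube", "sphere"]

def pointing_round(speaker_lexicon, hearer_lexicon):
    """One round in which the request itself is made by pointing."""
    target = random.choice(OBJECTS)
    # The speaker points at the desired object; pointing is assumed
    # unambiguous here, so the hearer's guess is always correct.
    guess = target
    # No naming or memorisation step is needed, so the lexicons never grow.
    return guess == target

speaker, hearer = {}, {}
results = [pointing_round(speaker, hearer) for _ in range(100)]
# Every round succeeds, and both lexicons remain empty.
```

With success guaranteed and nothing stored, there is no pressure for a shared symbol system to emerge at all.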
Of course, there must have been some task in our evolutionary history that provided a pressure for us to develop language. We set out to explore what kind of task this might have been by running an experiment in Minecraft.