What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it. Continue reading “What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics””

Chomsky, Hockett, Behaviorism and Statistics in Linguistic Theory

Here’s an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, pp. 49–73, 2016), by Gregory Radick (abstract below). Commenting on it at Dan Everett’s FB page, Yorick Wilks observed: “It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer.”

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.

EvoLang: Post-conference awards

So EvoLang is over.  But if you missed any of it, the papers are still available online.  In celebration of the new digital format, I’ve chosen a number of papers for some post-conference awards (nothing official, just for fun!).

Most viewed papers

The proceedings website received 6,000 page hits, most of them during the conference itself.  Here are the top 3 most viewed pages:

The Low-complexity-belt: Evidence For Large-scale Language Contact In Human Prehistory?
Christian Bentz

Semantic Approximation And Its Effect On The Development Of Lexical Conventions
Bill Noble and Raquel Fernández

Evolution Of What?
Christina Behme

Most news coverage

Two papers were covered by Science magazine:

Dendrophobia In Bonobo Comprehension Of Spoken English
Robert Truswell (read the article here)

The Fidelity Of Iterated Vocal Imitation
Pierce Edmiston, Marcus Perlman and Gary Lupyan (read the article here)

Most cited paper

One of the advantages of making the papers accessible online before the conference is that other people can cite them.  Indeed, on the day EvoLang ended, I received a short piece to review that cited this paper, which therefore gets the prize:

Anatomical Biasing Of Click Learning And Production: An MRI And 3D Palate Imaging Study
Dan Dediu and Scott Moisik

Best paper by an academic couple

By my count, there were 4 papers submitted by academic couples.  My favorite was a great collaboration on a novel topic:  the paper by Monika Pleyer and Michael Pleyer on taking the first steps towards integrating politeness theory and evolution (it was also shortlisted for best talk).

The Evolution Of Im/politeness
Monika Pleyer and Michael Pleyer

Best supplementary materials

Eight accepted papers included supplementary materials, which are available on the website.  These range from hilarious image stimuli (my favorite: a witch painting a pizza), to a 7-page model explanation, through to NetLogo code and raw data and analysis scripts.  But I’m afraid I’m going to choose my own paper’s supplementary materials for including videos of people playing Minecraft.  For science.

Deictic Tools Can Limit The Emergence Of Referential Symbol Systems
Elizabeth Irvine and Sean Roberts

Culture shapes the evolution of cognition

A new paper by Bill Thompson, Simon Kirby and Kenny Smith has just appeared, contributing to everyone’s favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to argue that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.

 

Abstract:

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.

Paper: http://www.pnas.org/content/early/2016/03/30/1523631113.full
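
To make the amplification argument easier to picture, here is a minimal toy sketch of the cultural-transmission part of such a model (this is not the authors’ actual model, and all parameter values are invented for illustration): each Bayesian learner has only a weak prior preference for one of two variants, learns from data produced by the previous learner, and then produces data for the next one.

```python
import random

# Toy Bayesian iterated learning: two competing variants (A, B).
# Each learner has a weak prior bias towards "A is the majority variant",
# infers the teacher's language from a small sample, then teaches the next
# learner in turn. All numbers below are illustrative, not from the paper.

PRIOR_A = 0.55        # weak bias towards the hypothesis that A is favoured
P_MAJORITY = 0.7      # a speaker whose language favours A produces A 70% of the time
N_UTTERANCES = 10     # how much data each learner observes
GENERATIONS = 50

def likelihood(count_a, n, favours_a):
    p = P_MAJORITY if favours_a else 1 - P_MAJORITY
    return (p ** count_a) * ((1 - p) ** (n - count_a))

def learn(data):
    """Posterior probability that the teacher's language favours A."""
    count_a = sum(data)
    post_a = PRIOR_A * likelihood(count_a, len(data), True)
    post_b = (1 - PRIOR_A) * likelihood(count_a, len(data), False)
    return post_a / (post_a + post_b)

def produce(favours_a, n):
    p = P_MAJORITY if favours_a else 1 - P_MAJORITY
    return [1 if random.random() < p else 0 for _ in range(n)]

def chain():
    favours_a = random.random() < 0.5              # first teacher picked at random
    for _ in range(GENERATIONS):
        data = produce(favours_a, N_UTTERANCES)
        favours_a = random.random() < learn(data)  # learner samples from its posterior
    return favours_a

runs = 2000
print("proportion of chains ending up favouring A:",
      sum(chain() for _ in range(runs)) / runs)
# With posterior-sampling learners the outcome mirrors the weak prior (about 0.55);
# learners who instead pick the most probable hypothesis amplify the bias much further.
```

The point of the toy version is only that the population-level distribution of languages can reveal, or amplify, a bias that is very weak in any individual learner.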

JoLE special issue on Phonetics and Phonology: Deadline Extension

As has been advertised on the blog previously, The Journal of Language Evolution is hosting a special issue on the emergence of phonetics and phonology. The call for papers can be found here.
The deadline for papers was 17th April 2016, and is now being extended to 31st July 2016.
 
However, if you plan to submit to the special issue, or have any questions about it, please email Hannah Little (hannah@ai.vub.ac.be), if possible by the original deadline (17th April 2016).

EvoLang Preview: Language Adapts to Interaction workshop


The first day of EvoLang includes several workshops (full list here) to which all attendees are invited.  Gregory Mills and I are running a workshop on language evolution and interaction, and the schedule and papers are now available online.

Language Adapts to Interaction, 08:30-13:30, Monday, 21st March, 2016, New Orleans

Language has been shown to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is interaction – face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species. Therefore, the infrastructure for interaction may provide an insight into the origins of our unique communicative abilities.  The emerging picture is that the infrastructure for interaction is an evolutionarily old requirement for the emergence of a complex linguistic system, and for a cooperative, cumulative culture more generally.  That is, Language Adapts to Interaction.

The keynote talk is given by John Haviland, who covers an emerging sign language called Z, and argues that interactional tools such as gaze, pointing and attention management form the basis both of aspects of interaction such as turn-taking and of grammatical features in the language.

Continue reading “EvoLang Preview: Language Adapts to Interaction workshop”

EvoLang Preview: Morphological Redundancy and Survivability

This is a preview of the talk Redundant Features Are Less Likely To Survive: Empirical Evidence From The Slavic Languages by Aleksandrs Berdicevskis and Hanne Eckhoff.  Tuesday 22nd March, 14:30, room D.

One of the methodological trends of this year’s EvoLang seems to be intelligent exaptation. What I mean by this is that people do research on language evolution using tools that were developed for a completely different purpose. Examples include using zombies to observe the emergence of languages under severe phonological constraints, Minecraft to investigate the role of pointing in the emergence of language and EvoLang to study EvoLang. In addition to that, Hanne Eckhoff and I use syntactic parsers to quantify morphological redundancy.

The basic idea is to put to the test the assumption that redundant features are more likely to disappear from languages, especially if social factors favour the loss of excessive complexity. The problem is that nobody really knows what is redundant in real languages and what is not. We can define a feature as redundant if it is not necessary for successful communication, i.e. if hearers can infer the meanings of the messages they receive without using this feature. It is, however, still a long way from this definition to a quantitative measure. In theory, one could run psycholinguistic experiments; in practice, it is a difficult and costly venture (I tried).

In this paper, we replace humans with a dependency parser. For those who are not into computational linguistics: a parser is a program which can automatically identify (well, attempt to identify) the syntactic structure of a given sentence. A typical parser is first trained on a large number of human-annotated sentences. After its learning is over, it can parse non-annotated sentences on its own, relying on the information about the form of every word, its lemma, part of speech, morphological features and the linear order of words — just like a human being. If we remove a certain feature from its input and compare performance before and after the removal, we can estimate how important (=non-redundant) the feature was.
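
For readers who want to see the ablation step spelled out, here is a rough sketch for a treebank in CoNLL-U format (my own illustration; the authors’ actual parser and scoring pipeline are not described here, so train_and_parse and labelled_attachment_score below are just placeholders, and the redundancy formula is only one way such a measure could be defined):

```python
# Rough sketch of the redundancy measure: strip one morphological feature from
# the FEATS column of a CoNLL-U treebank, re-run the parser, and compare
# accuracy. The parser and evaluation are deliberately left as placeholders.

def strip_feature(conllu_text: str, feature: str) -> str:
    """Remove e.g. 'Case=Dat' from the FEATS column (6th field) of every token."""
    out = []
    for line in conllu_text.splitlines():
        if line.startswith("#") or not line.strip():
            out.append(line)           # keep comments and sentence breaks as-is
            continue
        cols = line.split("\t")
        feats = [f for f in cols[5].split("|") if f not in (feature, "_")]
        cols[5] = "|".join(feats) if feats else "_"
        out.append("\t".join(cols))
    return "\n".join(out)

def train_and_parse(treebank: str):
    raise NotImplementedError("placeholder: train a dependency parser and parse held-out data")

def labelled_attachment_score(parses) -> float:
    raise NotImplementedError("placeholder: compare parses against the gold treebank")

def redundancy(treebank: str, feature: str) -> float:
    """Close to 1.0 = removing the feature barely hurts the parser = redundant."""
    full = labelled_attachment_score(train_and_parse(treebank))
    ablated = labelled_attachment_score(train_and_parse(strip_feature(treebank, feature)))
    return 1.0 - (full - ablated) / full

# e.g. redundancy(old_church_slavonic_conllu, "Case=Dat")
```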

If we remove all information about, say, the dative from the parser’s input, it will have a harder time understanding that the phrase two masters is an oblique object.

We test whether this measure is any good by running a pilot study on the Slavic language group. We estimate the redundancy of morphological features in Common Slavic (Common Slavic itself has left no written legacy, but we happen to have an excellent treebank of Old Church Slavonic, which is often used as a proxy) and try to predict which features are likely to die out in 13 modern Slavic languages. While redundancy is not, of course, the sole determinant of survivability, it turns out to be a fairly good predictor.

Come to the talk to hear about fierce morphological competitions! They are friends, dative and locative, almost brothers, but if only one can stay alive, which will sacrifice itself? The perfect participle is an underdog past tense, its frequency negligible compared to that of its rivals, the aorist and the imperfect, but does its high non-redundancy score give it some hope?

 

Aleksandrs Berdicevskis is a postdoc in computational historical linguistics at an edge of the world (namely The Arctic University of Norway in the city of Tromsø) with a PhD in sociolinguistics from the University of Bergen, MA in theoretical linguistics from Moscow State University, two years’ experience in science journalism, two kids and a long-standing interest in language evolution.
The first question he usually gets from new acquaintances is about the spelling of his name. The first name is a common Russian name (Aleksandr-) with the obligatory Latvian inflectional marker for nominative masculine singular (-s). The full form is used in formal communication only, otherwise he is usually called Sasha (the Russian hypocorism for Aleksandr) or, for simplicity’s sake, Alex.

EvoLang Preview: Using Minecraft to explore Language Evolution

Replicated Typo is doing a series of previews for this year’s EvoLang conference.  If you’d like to add a preview of your own presentation, get in touch with Sean Roberts.

At this year’s EvoLang Liz Irvine and I will be talking about how pointing can inhibit the emergence of symbolic communication.

Usually, pointing is thought to help the process of bootstrapping a symbolic system.  You can point to stuff to help people agree on what certain symbols refer to.  This process has been formalised in the ‘naming game’ (see Matt Spike’s talk):

  1. I request an object by naming it (with an arbitrary symbol)
  2. You guess what I mean and give me an object
  3. I point to the object that I meant you to give me (feedback)
  4. We remember the name that referred to this object

This game is the basis for many models of the emergence of shared symbolic systems, including iterated learning experiments (e.g. Fehér et al., and Macuch Silva & Roberts).  Here are some robots playing the naming game in Luc Steels’ lab:

Robots use pointing to draw attention to objects in a naming game, see here.
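
For those who like to see the loop written out, here is a minimal toy version of the naming game in code (my own sketch, not the exact setup of any of the cited models or experiments; the object and name inventories are invented):

```python
import random

# Minimal two-agent naming game: request by name, guess, pointing as feedback,
# and both agents remember the name. A toy sketch, not any of the cited models.

OBJECTS = ["cube", "sphere", "pyramid", "torus"]

def invent_name():
    """Make up an arbitrary two-syllable symbol."""
    return "".join(random.choice("bdgklmnprst") + random.choice("aeiou") for _ in range(2))

def play(rounds=200):
    lexicons = [{}, {}]          # each agent maps names -> objects
    successes = 0
    for _ in range(rounds):
        speaker, hearer = random.sample([0, 1], 2)
        target = random.choice(OBJECTS)

        # 1. speaker requests the object by naming it (inventing a name if needed)
        known = [n for n, o in lexicons[speaker].items() if o == target]
        name = known[0] if known else invent_name()
        lexicons[speaker][name] = target

        # 2. hearer guesses what the name refers to
        guess = lexicons[hearer].get(name, random.choice(OBJECTS))
        successes += guess == target

        # 3. speaker points at the intended object (feedback)
        # 4. both agents remember the name for that object
        lexicons[hearer][name] = target
    return successes / rounds

print("success rate over 200 rounds:", play())
# Success climbs as the agents come to share names for each object.
```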

However, the setup of these experiments assumes one crucial thing: that the individuals can’t use pointing to make the request in the first place.  Most experiments are set up so that participants must communicate symbolically before they can use pointing.  If you allowed pointing to be used in a naming game, then it would probably go something like this:

  1. I point at the object I want.
  2. I request an object by naming it (with an arbitrary symbol)
  3. You guess what I mean and give me an object
  4. I point to the object that I meant you to give me (feedback)
  5. We remember the name that referred to this object

That is, the pointing in step 1 already does the work the name was meant to do: if we’re good enough at pointing, then we don’t need a symbolic language for this task.

Of course, there must have been some task in our evolutionary history that provided a pressure for us to develop language.  We set out to explore what kind of task this might have been by running an experiment in Minecraft.

Continue reading “EvoLang Preview: Using Minecraft to explore Language Evolution”

CfP: Interaction and Iconicity in the Evolution of Language

Following the ICLC theme session on “Cognitive Linguistics and the Evolution of Language” last year,  I’m guest-editing a Special Issue of the journal Interaction Studies together with Michael Pleyer, James Winters, and Jordan Zlatev. This volume, entitled “Interaction and Iconicity in the Evolution of Language: Converging Perspectives from Cognitive and Evolutionary Linguistics”, will focus on issues that emerged as common themes during the ICLC workshop.

Although many contributors to the theme session have already agreed to submit a paper, we would like to invite a limited number of additional contributions relevant to the topic of the volume. Here’s our Call for Papers.

Continue reading “CfP: Interaction and Iconicity in the Evolution of Language”

1st issue of The Journal of Language Evolution: discussion on tone and humidity

The origins of language, and the ways languages change over time, are tricky topics.  We can’t travel back in time to observe how it all happened, and we’re only just beginning to understand the range of variation in existing languages.  Traditionally, the study of language evolution was more of a philosophical enterprise, with many educated guesses and a lot of debate about theoretical distinctions.  But these days it’s clear that a much wider approach is needed.  Thinking about how so many diverse ways of communicating could have emerged in a single species (and that species alone) involves thinking about topics as diverse as genetics, animal communication, cultural evolution, emerging sign languages, and the history of human migration and contact (even Chomsky recently wrote of the importance of acquisition, pragmatics, computer science and neuroscience in understanding the language faculty!).

The new Journal of Language Evolution will tackle these issues by reaching out to new areas of research and by embracing new quantitative methods, as Dan Dediu discusses in the editorial of the first issue.  The issue includes an introduction to the linguistic diversity of planet Earth by Harald Hammarström, which demonstrates how important the work of language documentation (especially of endangered languages) is for shaping our ideas about what evolved.  Bodo Winter and colleagues provide an introduction to mixed models and growth curves, which are becoming increasingly important tools in the language sciences.  Extending the topics to pragmatics, Cat Silvey reviews Thom Scott-Phillips’ book Speaking Our Minds.

Climate and Language Evolution

But if this isn’t deep enough into the frontiers of language evolution for you, there is also a debate on humidity and tone.  Caleb Everett, Damián Blasi and I discuss the potential effects of our ecology on language evolution.  These effects range from obvious differences, such as some languages having more specific words for relevant climatic factors (not just words for snow, but watch this space for news on that front), to the way the climate affects population movement.  We focussed on one controversial idea: dry air affects phonation accuracy, so some sounds should be harder to produce accurately in dry climates.  Over a long period of time, this might lead to languages changing to avoid these sounds.
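
As a very rough illustration of the “small bias, long time” logic (my own toy sketch with made-up rates, not the analysis in the paper): suppose tone is gained at the same low rate everywhere but lost slightly more often per generation of transmission in dry climates. Over many generations, dry-climate languages end up tonal noticeably less often.

```python
import random

# Toy sketch of the accumulation argument (illustrative rates only, not the
# paper's analysis): tone is gained at the same low rate everywhere but lost
# slightly more often per generation in dry climates.

GAIN = 0.01                            # chance per generation of acquiring tone
LOSS = {"humid": 0.01, "dry": 0.02}    # chance per generation of losing tone

def simulate(climate, generations=500, n_languages=1000):
    tonal = [random.random() < 0.5 for _ in range(n_languages)]   # start 50/50
    for _ in range(generations):
        for i, has_tone in enumerate(tonal):
            if has_tone:
                if random.random() < LOSS[climate]:
                    tonal[i] = False
            elif random.random() < GAIN:
                tonal[i] = True
    return sum(tonal) / n_languages

for climate in ("humid", "dry"):
    print(climate, simulate(climate))
# Long-run expectation is GAIN / (GAIN + LOSS): about 0.50 humid vs 0.33 dry,
# even though the per-generation difference in loss rates is tiny.
```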

Here’s a simple diagram of what we mean:

Continue reading “1st issue of The Journal of Language Evolution: discussion on tone and humidity”