
Words, Binding, and Conversation as Computation

I’ve been thinking about my draft article, Form, Event, and Text in an Age of Computation. It presents me with the same old rhetorical problem: how to present computation to literary critics? In particular, I want to convince them that literary form is best thought of as being computational in kind. My problem is this: If you’ve already got ‘it’, whatever it is, then my examples make sense. If you don’t, then it’s not clear to me that they do make sense. In particular, cognitive networks are a stretch. Literary criticism just doesn’t give you any useful intuitions of form as being independent of meaning.

Anyhow, I’ve been thinking about words and about conversation. What I’m thinking is that the connection between signifier and signified is fundamentally computed in the sense that I’m after. It’s not ‘hard-wired’ at all; rather, it’s established dynamically. That’s what the first part of this post is about. The second part then goes on to argue that conversation is fundamentally computational.

This is crude and sketchy. We’ll see.

Words as bindings between sound and sense

What is a word? I’m not even going to attempt a definition, as we all know one when we see it, so to speak. What I will say, however, is that the common-sense core intuition tends to exaggerate their Parmenidean stillness and constancy at the expense of their Heraclitean fluctuation. What does this word mean:

race

It’s a simple word, an everyday word. Out there in the middle of nowhere, without context, it’s hard to say what it means. It could mean this, it could mean that. It depends.

When I look it up in the dictionary on my computer, New Oxford American Dictionary, it lists three general senses. One, “a ginger root,” is listed as “dated.” The other two senses are the ones I know, and each has a number of possibilities. One set of meanings has to do with things moving and has many alternatives. The other deals with kinds of beings, biological or human. These meanings no doubt developed over time.

And, of course, the word’s appearance can vary widely depending on typeface or how it’s handwritten, whether in cursive script or printed. The spoken word varies widely as well, depending on the speaker (male, female, adult, child, etc.) and discourse context. It’s not a fixed object at all.

What I’m suggesting, then, is that this common ‘picture’ is too static:

[image: the sign as a bundle of signifier and signified]

There we have it, the signifier and the signified packaged together in a little ‘suitcase’ with “sign” as the convenient handle for the package. It gives the impression that sentences are little ‘trains’ of meaning, one box coupled to the next in a chain of signifiers.

No one who thinks seriously about it actually thinks that way. But that’s where thinking starts. For that matter, by the time one gets around to distinguishing between signifier and signified one has begun to move away from the static conception. My guess is that the static conception arises from the fact of writing and the existence of dictionaries. There they are, one after another. No matter when you look up a word, it’s there in the same place, having the same definition. It’s a thing, an eternal Parmenidean thing.

Later in The Course in General Linguistics, long after he’s introduced the signifier/signified distinction, de Saussure presents us with this picture [1]:

[image: Saussure’s wave diagram, with the plane of ideas above and the plane of sounds below]

He begins glossing it as follows (112): “The linguistic fact can therefore be pictured in its totality–i.e. language–as a series of contiguous subdivisions marked off on both the indefinite plane of jumbled ideas (A) and the equally vague plane of sounds (B).” He goes on to note “the somewhat mysterious fact is rather that ‘thought-sound’ implies division, and that language works out its units while taking shape between two shapeless masses.” I rather like that, and I like that he chose undulating waves as his visual image.

Form, Event, and Text in an Age of Computation


I've put another article online. This is not a working paper. It is a near-final draft of an article I will be submitting for publication once I have had time to let things settle in my mind. I'd appreciate any comments you have. You can download the paper in the usual places:

Academia.edu: https://www.academia.edu/27706433/Form_Event_and_Text_in_an_Age_of_Computation
SSRN: http://ssrn.com/abstract=2821678

Abstract: Using fragments of a cognitive network model for Shakespeare’s Sonnet 129 we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I then adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

CONTENTS

Computational Semantics: Model and Text 3
Literary Form, Attridge and Latour 8
Obama’s Pinckney Eulogy as Performance 11
Obama’s Pinckney Eulogy as Text 15
Description in Method 19

Form, Event, and Text in an Age of Computation

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction is a bit different from that prevalent among practitioners of distant reading. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine.

But, for whatever reason, digital critics have little or no interest in computation as something one enacts while reading any one of those texts. That is the sense of computation that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was the idea of computation that drove the so-called cognitive revolution in its early years:

... the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in “memory” and retrieve them again, classify inputs, recognize patterns, and so on. Whether they do these things just like people was less important than that they do them at all. The coming of the computer provided a much-needed reassurance that cognitive processes were real; that they could be studied and perhaps understood.

Much of the work in the newer psychologies is conducted in a vocabulary that derives from computing and, in many cases, involves computer simulations of mental processes. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools through which we could construct models with differentiated components and processes meshing like, well, clockwork. It gave us a way to objectify our theories.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is not necessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms, digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic device. Specifically, I propose that it helps us think about and describe literary form in ways we haven’t done before.

First I present a model of computational semantics for Shakespeare’s Sonnet 129. This affords us a distinction between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. I prefer to think of it as speculative engineering. To be sure, engineering, like science, is technical. But engineering is about design and construction, perhaps even Latourian composition. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

Double-blind reviewing at EvoLang 11 reveals gender bias

In a new paper in the Journal of Language Evolution, Tessa Verhoef and I analyse reviewer ratings for papers submitted to the EvoLang conference between 2012 and 2016. In the most recent conference, we trialed double-blind review for the first time, and we wanted to see if hiding the identity of authors revealed any biases in reviewers' ratings.

We found that:

  • Proportionately few papers are submitted from female first authors.
  • In single-blind review, there was no big difference in average ratings for papers by male or female first authors ...
  • ... but female first-authored papers were rated significantly higher than male first-authored papers in the double-blind condition.

There are many possible explanations of these findings, but they are indicative of a bias against female authors.  This fits with a wider literature of gender biases in science.  We suggest that double-blind review is one tool that can help reduce the effects of gender biases, but does not tackle the underlying problem directly.  We were pleased to see better representation of women on the most recent EvoLang talks and plenary speaker list, and look forward to making our field more inclusive.

The paper is available, free and open-access, at the Journal of Language Evolution.  The data and statistical code are also available on GitHub.

Language Evolution and Gaming at Nineworlds

I'll be appearing at the Nineworlds convention as part of Stephanie Rennick's panel on "Lessons for Academia from Computer Games".  The idea is to talk about ways in which games have informed our research, and here are some of the things I'll mention:

Minecraft shows us how language evolved
How were the very first languages created?  How do you agree on words for things if you don't have a language yet?  The accepted theory is that people point at stuff they need and invent a word for it at the same time.  After many rounds of negotiation, people come to a consensus about how to describe things.  We tried to simulate this in Minecraft by getting people to build a little house together, but they could only communicate by knocking on the table.  What we found was that, if you gave people the ability to point at things, they could do the task perfectly well without inventing a communication system at all.  This was quite surprising, and it suggests that language did not originate as a simple way of requesting things, but maybe as a way of referring to stuff that you can't easily point to, like the future or emotions.  More here.
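The negotiation process described here can be illustrated with a toy "naming game" simulation: a minimal sketch of how repeated pairwise interactions drive a population to consensus on a label. This is not the actual Minecraft experiment; the agent count, word labels, and update rule are invented for illustration.

```python
import random

def naming_game(n_agents=10, max_rounds=10_000, seed=42):
    """Minimal naming game for a single object.

    Each round a random speaker names the object. On success (the hearer
    knows the word) both collapse their inventories to that word; on
    failure the hearer adds it. Repetition drives the whole population
    to a single shared word.
    """
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    for round_no in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(f"word{speaker}")  # invent a word
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # success: both align on this word
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)
        if all(inv == inventories[0] and len(inv) == 1
               for inv in inventories):
            return round_no + 1, inventories[0]
    return max_rounds, None

rounds, consensus = naming_game()
print(f"consensus {consensus} reached after {rounds} rounds")
```

With ten agents the population typically settles on one word well within the round limit, which is the "many rounds of negotiation" intuition in miniature.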

A chimp playing a computer game shows us we have flexible brains


Ayumu is a chimpanzee who plays computer games, and he's REALLY GOOD.  In a game where you have to memorise the location of numbers on a screen, he left human participants in the dust (there's a fun video of this).  The original researchers concluded that there was a genetic difference between us and chimpanzees: chimps had evolved better visual memory for hunting, and we had evolved better auditory memory for speaking.  However, we wondered if Ayumu could beat experienced gamers.  We set up a 'Chimp Challenge' online where people could play the game.  We found over 60 people who were as good as Ayumu.  This suggests that the difference is also due to experience - humans have very flexible brains that can get good at a lot of different things. More here.

Computer games can help us learn about linguistic diversity
Linguists are great at spotting differences between languages, but we don't actually know very much about which differences matter most to people.  We explored "the great language game" - an online game where you have to name the language being spoken in a recording.  Looking at 15 million results, we found that the more different two languages were, the more easily people could tell them apart.  But we also found that people confused some languages that linguists would consider extremely different, and that there were differences depending on the languages you already know.  We suggest that how you experience a foreign language is linked to your cultural knowledge and beliefs.  We took this one step further by creating an updated version of the game with some very rare languages, which we hope to analyse in the future.  More here.
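The basic analysis involved can be sketched as a toy confusion-rate calculation over (true language, guessed language) pairs. The guesses below are invented for illustration, not drawn from the actual game data.

```python
from collections import Counter

# Hypothetical guesses: (true language, guessed language) pairs.
guesses = [
    ("Danish", "Norwegian"), ("Danish", "Danish"), ("Danish", "Norwegian"),
    ("Korean", "Korean"), ("Korean", "Japanese"), ("Korean", "Korean"),
    ("Danish", "Swedish"), ("Korean", "Korean"),
]

def confusion_rates(pairs):
    """Per-language error rate: how often each language is misidentified."""
    totals, errors = Counter(), Counter()
    for true, guessed in pairs:
        totals[true] += 1
        if guessed != true:
            errors[true] += 1
    return {lang: errors[lang] / totals[lang] for lang in totals}

print(confusion_rates(guesses))
```

In this invented sample Danish is misidentified far more often than Korean, the kind of asymmetry one would then try to relate to linguistic distance and to the players' own language backgrounds.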

I know (1) that you think (2) it’s funny, and you know (3) that I know (4) that, too.

A large part of human humour depends on understanding that the intention of the person telling the joke might be different to what they are actually saying. The person needs to tell the joke so that you understand that they're telling a joke, so they need to know that you know that they do not intend to convey the meaning they are about to utter... Things get even more complicated when we tell each other jokes that involve other people having thoughts and beliefs about other people. We call this knowledge nested intentions, or recursive mental attributions. We can already see, from my complicated description, that this is a serious matter and requires scientific investigation. Fortunately, a recent paper by Dunbar, Launay and Curry (2015) investigated whether the structure of jokes is restricted by the number of nested intentions required to understand them. The authors make a couple of interesting predictions about the mental processing involved in understanding humour, and about how this should be reflected in the structure and funniness of jokes. In today’s blogpost I want to discuss the paper's methodology and some of its claims.


EvoLang proceedings now in physical form

The proceedings of the 11th Evolution of Language conference are now available to buy as a physical book.

The book is available through print-on-demand publisher Lulu for £23.72.  This is the lowest price allowed by the site, and will provide EvoLang with £2.81 for each sale.  The book now also has an ISBN: 978-1-326-61450-8.

This book is being made available due to popular demand, but all the papers and abstracts are freely available from the proceedings website, which is the canonical source.  Unfortunately, the costs were too great to publish in colour, so the inside of the book is black and white.

Buy the book now!


What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it.


Chomsky, Hockett, Behaviorism and Statistics in Linguistics Theory

Here's an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, no. 1, pp. 49–73, 2016), by Gregory Radick (abstract below). Commenting on it at Dan Everett's FB page, Yorick Wilks observed: "It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer."

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.

EvoLang: Post-conference awards

So EvoLang is over.  But if you missed any of it, the papers are still available online.  In celebration of the new digital format, I've chosen a number of papers for some post-conference awards (nothing official, just for fun!).

Most viewed papers

The proceedings website received 6,000 page hits, most of them during the conference itself.  Here are the top 3 most viewed pages:

The Low-complexity-belt: Evidence For Large-scale Language Contact In Human Prehistory?
Christian Bentz

Semantic Approximation And Its Effect On The Development Of Lexical Conventions
Bill Noble and Raquel Fernández

Evolution Of What?
Christina Behme

Most news coverage

Two papers were covered by Science magazine:

Dendrophobia In Bonobo Comprehension Of Spoken English
Robert Truswell (read the article here)

The Fidelity Of Iterated Vocal Imitation
Pierce Edmiston, Marcus Perlman and Gary Lupyan (read the article here)

Most cited paper

One of the advantages of the papers being accessible online, and before the conference, is that other people may cite them.  Indeed, on the day EvoLang ended, I received a short piece to review which cited this paper, which therefore gets the prize:

Anatomical Biasing Of Click Learning And Production: An MRI And 3D Palate Imaging Study
Dan Dediu and Scott Moisik

Best paper by an academic couple

By my count, there were 4 papers submitted by academic couples.  My favorite was a great collaboration on a novel topic:  the paper by Monika Pleyer and Michael Pleyer on taking the first steps towards integrating politeness theory and evolution (it was also shortlisted for best talk).

The Evolution Of Im/politeness
Monika Pleyer and Michael Pleyer

Best supplementary materials

8 accepted papers included supplementary materials, which are available on the website.  These range from hilarious image stimuli (my favorite: a witch painting a pizza), to a 7-page model explanation, through to NetLogo code and raw data and analysis scripts.  But I'm afraid I'm going to choose my own paper's supplementary materials for including videos of people playing Minecraft.  For science.

Deictic Tools Can Limit The Emergence Of Referential Symbol Systems
Elizabeth Irvine and Sean Roberts


Culture shapes the evolution of cognition

A new paper, by Bill Thompson, Simon Kirby and Kenny Smith, has just appeared which contributes to everyone's favourite debate. The paper uses agent-based Bayesian models that incorporate learning, culture and evolution to make the claim that weak cognitive biases are enough to create population-wide effects, making a strong nativist position untenable.

 

Abstract:

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.

Paper: http://www.pnas.org/content/early/2016/03/30/1523631113.full
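The amplification effect the abstract describes, where a weak cognitive bias produces a strong population-level outcome, can be illustrated with a toy iterated-learning chain. This is a minimal sketch in the spirit of such models, not the authors' actual model; the two-language setup and all parameters here are invented for illustration.

```python
import random

def iterated_learning(generations=50, n_data=3, prior_a=0.6,
                      p_produce=0.6, seed=1):
    """Toy iterated-learning chain with MAP Bayesian learners.

    Two candidate languages, A and B. Each learner's prior favours A
    only weakly (prior_a). A learner sees n_data utterances from the
    previous speaker, who produces their own language on each utterance
    with probability p_produce (and the other by error), then adopts the
    maximum-a-posteriori language and becomes the next speaker.
    """
    rng = random.Random(seed)
    language = "B"  # start the chain on the disfavoured language
    history = []
    for _ in range(generations):
        # noisy production by the current speaker
        data = [language if rng.random() < p_produce
                else ("B" if language == "A" else "A")
                for _ in range(n_data)]
        k = data.count("A")
        # unnormalised posteriors: prior times likelihood of the data
        post_a = prior_a * p_produce**k * (1 - p_produce)**(n_data - k)
        post_b = ((1 - prior_a) * (1 - p_produce)**k
                  * p_produce**(n_data - k))
        language = "A" if post_a >= post_b else "B"
        history.append(language)
    return history

history = iterated_learning()
print("proportion of A generations:", history.count("A") / len(history))
```

With sparse, noisy data the chain spends far more of its time on language A than the prior of 0.6 alone would suggest: a weak bias, filtered repeatedly through cultural transmission, looks like a strong universal at the population level.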

Culture, its evolution and anything in between