An Open Letter to Dan Everett about Literary Criticism

If you’ve heard of Dan Everett at all, most likely you’ve heard about his work among the Pirahã and his battle with Noam Chomsky and the generative grammarians. He went into the Amazon to live among the Pirahã in the mid-1970s with the intention of learning their language, translating the Bible into it, and converting them to Christianity. Things didn’t work out that way. Yes, he learned their language, and managed to translate a bit of the Bible into Pirahã. But, no, he didn’t convert them. They converted him, as it were, so he is now an atheist.

Not only did Everett learn Pirahã, but he compiled a grammar and reached the conclusion – a bit reluctantly at first – that it lacks recursion. Recursion is the property that Chomsky believes is irreducibly intrinsic to human language. And so Everett found himself in pitched battle with Chomsky, the man whose work revolutionized linguistics in the mid-1950s. If that interests you, well, you can run a search on something like “Everett Chomsky recursion” (don’t type the quotes into the search box) and get more hits than you can shake a stick at.

I’ve never met Dan face-to-face, but I know him on Facebook, where I’m one of 10 to 20 folks who chat with him on intellectual matters. Not so long ago I reviewed his most recent book, Dark Matter of the Mind, over at 3 Quarks Daily. I thus know him, after a fashion.

And so I thought I’d address an open letter to him on my current hobbyhorse: What’s up with literary criticism?

* * * * *

Dear Dan,

I’ve been trying to make sense of literary criticism for a long time. In particular, I’ve been trying to figure out why literary critics give so little descriptive attention to the formal properties of literary texts. I don’t expect you to answer the question for me but, who knows, as an outsider to the discipline and with an interest in language and culture, perhaps you might have an idea or two.

I start by quoting a fellow linguist, one moreover with an affection for Brazil, Haj Ross. Then I look at Shakespeare as a window into the practice of literary criticism, introducing the emic/etic distinction along the way. After that I take a look at Joseph Conrad’s Heart of Darkness, in the course of which I pose the question: what would I teach in a first-level undergraduate class? I find that a very useful way of thinking about the discipline; I figure it might also appeal to you as a Dean and Acting Provost. I conclude by returning to the abstractosphere to distinguish between naturalist and ethical criticism. Alas, it’s a long way through, so you might want to pour yourself a scotch.

Haj’s Problem: Interpretation and Poetics

Let’s start with the opening paragraphs from a letter that Haj Ross has posted to Academia.edu. Of course you know who Haj is, but I think it’s useful to note that, back in the 1960s when he was getting a degree in linguistics under Chomsky at MIT, he was also studying poetics under Roman Jakobson at Harvard, and that, over the years, he has produced a significant body of descriptive work on poetry that, for the most part, exists ‘between the cracks’ in the world of academic publication. The letter is dated November 30, 1989 and it was written when Haj was in Brazil at the Departamento de Lingüística, Universidade Federal de Minas Gerais, Belo Horizonte [1]. He’s not sure whom he wrote it to, but thinks it was one Bill Darden. He posted it with the title “Kinds of meanings for poetic architectures” and with a one-line abstract: “How number can become the fabric on which the light of the poem can be projected”. Here are the opening two paragraphs:

You correctly point out that I don’t have any theory of how all these structures that I find connect to what/how the poem means. You say that one should start with a discussion of meaning first.

That kind of discussion, which I have not heard much of, but already enough for me, I think, seems to be what people in literature departments are quite content to engage in for hours. What I want to know, however, is: what do we do when disputes arise as to what two people think something means? This is not a straw question – I have heard Freudians ram Freudian interpretations down poems’ throats, and I think also Marxists, etc., and somehow, just as most discussions among Western philosophers leave me between cold and impatient, so do these literary ones. So, for that matter, do purely theoretical, exampleless linguistic discussions. Armies may march on their stomachs; I march on examples. So I would much rather hear how the [p]’s in a poem are arrayed than about how the latent Oedipal etc., etc. In the former case, I know where to begin to make comments, in the latter, ich verstumme.

You’ll have to read the whole letter to find out what he meant by that one-line abstract, but I assure you that it’s both naïve and deep at one and the same time, mentioning, among other things, the “joy of babbling” and the role of the tamboura in Indian classical music. At the moment I’m interested in just those two opening paragraphs.

While I got my degree in literary criticism and understand the drive/will to meaning, I also understand Haj’s attraction to verifiable pattern/structures and his willingness to pursue that even though he cannot connect it to meaning. Yes, meaning is the primary objective of academic literary criticism and, yes, justifying proposed meanings is (deeply) problematic. I also know that the academic discipline of literary criticism was NOT founded on the activity of interpreting texts. It was founded in the late 19th century on philology, literary history, and editing – that is, editing the canonical literary works for study by students and scholars. Roughly speaking, the interest in interpretation dates back to the second quarter of the 20th century, but it didn’t become firmly institutionalized until the third quarter of the century. You can see that institutionalization in this Ngram search on the phrase “close reading”, which is a term of art for interpretive analysis:

Figure 1: Ngram for “close reading”

And that’s when things became interesting. As more and more critics came to focus on interpretation, the profession became acutely aware of a problem: different critics produced different interpretations of the same text, so which is the correct one? Some critics even began to wonder whether there was such a thing as the correct interpretation. We are now well within the scope of the problem that bothered Haj: How do you justify one interpretation over another?

That’s the issue that was in play when I entered Johns Hopkins as a freshman in 1965. Though I had declared an interest in psychology, once I’d been accepted I gravitated toward literature. Which means that, even as I was working as hard as I could to figure out how to interpret a literary text, I was also party to conversations about the problematic nature of interpretation. Since I have written elsewhere about those years at Hopkins [2], there’s no need to recount them here. The important point is simply that literary critics were acutely aware of the problematic nature of interpretation and devoted considerable effort to resolving the problem.

In the course of thrashing about with that problem, literary critics turned to philosophy, mostly Continental (though not entirely), and linguistics, mostly structuralist linguistics. In 1975 Jonathan Culler published Structuralist Poetics, which garnered him speaking invitations all over America and made his career. For Culler, and for American academia, structuralism was mostly French: Saussure, Jakobson (not French, obviously), Greimas, Barthes, and Lévi-Strauss, among others. But Culler also wrote of literary competence, clearly modeled on Chomsky’s notion of linguistic competence, and even deep structure. At this point literary critics, not just Culler, were interested in linguistics.

Here’s a paragraph from Culler’s preface (xiv-xv):

The type of literary study which structuralism helps one to envisage would not be primarily interpretive; it would not offer a method which, when applied to literary works, produced new and hitherto unexpected meanings. Rather than a criticism which discovers or assigns meanings, it would be a poetics which strives to define the conditions of meaning. Granting new attention to the activity of reading, it would attempt to specify how we go about making sense of texts, what are the interpretive operations on which literature itself, as an institution, is based. Just as the speaker of a language has assimilated a complex grammar which enables him to read a series of sounds or letters as a sentence with a meaning, so the reader of literature has acquired, through his encounters with literary works, implicit mastery of various semiotic conventions which enable him to read series of sentences as poems or novels endowed with shape and meaning. The study of literature, as opposed to the perusal and discussion of individual works, would become an attempt to understand the conventions which make literature possible. The major purpose of this book is to show how such a poetics emerges from structuralism, to indicate what it has already achieved, and to sketch what it might become.

However much critics may have been interested in this book, that interest did not produce a flourishing poetics. Even Culler himself abandoned poetics after this book. Interpretation had become firmly established as the profession’s focus.

As for the problem of justifying one interpretation over another, deconstructive critics argued that the meaning of texts was indeterminate and so, ultimately, there is no justification. Reader response critics produced a similar result by different means. The issue was debated into the 1990s and then more or less put on the shelf without having been resolved.

I have no quarrel with that. I think the basic problem is that literary texts of whatever kind – lyric or narrative poetry, drama, prose fiction – are different in kind from the discursive texts written to explicate them. There is no well-formed way of translating meaning from a literary to a discursive text. When you further consider that different critics may have different values, the problem becomes more intractable. Interpretation cannot, in principle, be strongly determined.

What, you might ask, about the meaning that exists in a reader’s mind prior to any attempt at interpretation? Good question. But how do we get at THAT? It simply is not available for inspection.

What happens, though, when you give up the search for meaning? Or, if not give up, you at least bracket it and subordinate it to an interest in pattern and structure as intrinsic properties of texts? Is a poetics possible? Let’s set that aside for a while and take a detour through the profession’s treatment of The Bard, William Shakespeare, glover’s son and London actor.

Continue reading “An Open Letter to Dan Everett about Literary Criticism”

Sharing Experience: Computation, Form, and Meaning in the Work of Literature

I’ve uploaded another document: Sharing Experience: Computation, Form, and Meaning in the Work of Literature. You can download it from Academia.edu:

https://www.academia.edu/28764246/Sharing_Experience_Computation_Form_and_Meaning_in_the_Work_of_Literature

It’s considerably revised from a text I’d uploaded a month ago: Form, Event, and Text in an Age of Computation. You might also look at my post, Obama’s Affective Trajectory in His Eulogy for Clementa Pinckney, which could have been included in the article, but I’m up against a maximum word count since I’m submitting the article for publication. You might also look at the post, Words, Binding, and Conversation as Computation, which figured heavily in my rethinking.

Here’s the abstract of the new article, followed by the TOC and the introduction:

Abstract

It is by virtue of its form that a literary work constrains meaning so that it can be a vehicle for sharing experience. Form is thus an intermediary in Latour’s sense, while meaning is a mediator. Using fragments of a cognitive network model for Shakespeare’s Sonnet 129 we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of signifiers, and (3) the path one computes through (1) under constraints imposed by (2). As a text, Obama’s Eulogy for Clementa Pinckney is a ring-composition; as a performance, the central section is clearly marked by audience response. Recent work on synchronization of movement and neural activity across communicating individuals affords insight into the physical substrate of intersubjectivity. The ring-form description is juxtaposed to the performative meaning identified by Glenn Loury and John McWhorter.

CONTENTS

Introduction: Speculative Engineering
Form: Macpherson & Attridge to Latour
Computational Semantics: Network and Text
Obama’s Pinckney Eulogy as Text
Obama’s Pinckney Eulogy as Performance
Meaning, History, and Attachment
Coda: Form and Sharability in the Private Text

Introduction: Speculative Engineering

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction differs from that of computational critics. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine [1].

Digital critics, however, have little interest in computation as a process one enacts while reading a text, the sense that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was computation that drove the so-called cognitive revolution [2]. Much of the work in cognitive science is conducted in a vocabulary derived from computing and, in many cases, involves computer simulations. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools for constructing models with differentiated components and processes meshing like, well, clockwork. Moreover, so far as I know, computation of one kind or another provides the only working models we have for language processes.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is unnecessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms; digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic: it helps us think about and systematically describe literary form in ways we haven’t done before.

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. Speculative engineering is a better characterization. Engineering is about design and construction, perhaps even Latourian composition [3]. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made [4]. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

Caveat emptor: This bridge is of complex construction. I start with form, move to computation, with Shakespeare’s Sonnet 129 as my example, and then to President Obama’s Eulogy for Clementa Pinckney. After describing its structure (ring-composition) I consider the performance situation in which Obama delivered it, arguing that those present constituted a single physical system for sharing experience. I conclude by discussing meaning, history, and attachment.

References

[1] William Benzon, “The Only Game in Town: Digital Criticism Comes of Age,” 3 Quarks Daily, May 5, 2014, http://www.3quarksdaily.com/3quarksdaily/2014/05/the-only-game-in-town-digital-criticism-comes-of-age.html

[2] Ulric Neisser, Cognition and Reality: Principles and Implications of Cognitive Psychology (San Francisco: W. H. Freeman, 1976), 5-6.

[3] Bruno Latour, “An Attempt at a ‘Compositionist Manifesto’,” New Literary History 41 (2010), 471-490.

[4] For example, see Steven Pinker, How the Mind Works (New York: W. W. Norton & Company, 1997), 21 ff.

Form, Event, and Text in an Age of Computation


I’ve put another article online. This is not a working paper. It is a near-final draft of an article I will be submitting for publication once I have had time to let things settle in my mind. I’d appreciate any comments you have. You can download the paper in the usual places:

Academia.edu: https://www.academia.edu/27706433/Form_Event_and_Text_in_an_Age_of_Computation
SSRN: http://ssrn.com/abstract=2821678

Abstract: Using fragments of a cognitive network model for Shakespeare’s Sonnet 129 we can distinguish between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I then adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition and the central section is clearly marked in audience response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

CONTENTS

Computational Semantics: Model and Text
Literary Form, Attridge and Latour
Obama’s Pinckney Eulogy as Performance
Obama’s Pinckney Eulogy as Text
Description in Method

Form, Event, and Text in an Age of Computation

The conjunction of computation and literature is not so strange as it once was, not in this era of digital humanities. But my sense of the conjunction is a bit different from that prevalent among practitioners of distant reading. They regard computation as a reservoir of tools to be employed in investigating texts, typically a large corpus of texts. That is fine.

But, for whatever reason, digital critics have little or no interest in computation as something one enacts while reading any one of those texts. That is the sense of computation that interests me. As the psychologist Ulric Neisser pointed out four decades ago, it was the idea of computation that drove the so-called cognitive revolution in its early years:

… the activities of the computer itself seemed in some ways akin to cognitive processes. Computers accept information, manipulate symbols, store items in “memory” and retrieve them again, classify inputs, recognize patterns, and so on. Whether they do these things just like people was less important than that they do them at all. The coming of the computer provided a much-needed reassurance that cognitive processes were real; that they could be studied and perhaps understood.

Much of the work in the newer psychologies is conducted in a vocabulary that derives from computing and, in many cases, involves computer simulations of mental processes. Prior to the computer metaphor we populated the mind with sensations, perceptions, concepts, ideas, feelings, drives, desires, signs, Freudian hydraulics, and so forth, but we had no explicit accounts of how these things worked, of how perceptions gave way to concepts, or how desire led to action. The computer metaphor gave us conceptual tools through which we could construct models with differentiated components and processes meshing like, well, clockwork. It gave us a way to objectify our theories.

My purpose in this essay is to recover the concept of computation for thinking about literary processes. For this purpose it is not necessary either to believe or to deny that the brain (with its mind) is a digital computer. There is an obvious sense in which it is not a digital computer: brains are parts of living organisms, digital computers are not. Beyond that, the issue is a philosophical quagmire. I propose only that the idea of computation is a useful heuristic device. Specifically, I propose that it helps us think about and describe literary form in ways we haven’t done before.

First I present a model of computational semantics for Shakespeare’s Sonnet 129. This affords us a distinction between (1) the mind/brain cognitive system, (2) the text considered merely as a string of verbal or visual signifiers, and (3) the path one’s attention traces through (1) under constraints imposed by (2). To a first approximation that path is consistent with Derek Attridge’s concept of literary form, which I adapt to Bruno Latour’s distinction between intermediary and mediator. Then we examine the event of Obama’s Eulogy for Clementa Pinckney in light of recent work on synchronized group behavior and neural coordination in groups. A descriptive analysis of Obama’s script reveals that it is a ring-composition; the central section is clearly marked in the audience’s response to Obama’s presentation. I conclude by comparing the Eulogy with Tezuka’s Metropolis and with Conrad’s Heart of Darkness.

Though it might appear that I advocate a scientific approach to literary criticism, that is misleading. I prefer to think of it as speculative engineering. To be sure, engineering, like science, is technical. But engineering is about design and construction, perhaps even Latourian composition. Think of it as reverse-engineering: we’ve got the finished result (a performance, a script) and we examine it to determine how it was made. It is speculative because it must be; our ignorance is too great. The speculative engineer builds a bridge from here to there and only then can we find out if the bridge is able to support sustained investigation.

What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

And there we have it. Continue reading “What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics””

Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.

Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post, likeness means similarity in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.

The question I’m asking is whether the literary system operated in such a way that such a book was very unlikely to have been written then. If so, what changed so that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, but the Congo hadn’t been colonized until the last quarter of the century. European colonialism was quite extensive by that time, and much of it was quite brutal. So far as I know, the British novel in the early 19th century did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, but not entirely absent either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story and what drives changes in salience over time? What had happened that colonial brutality had become highly salient at the turn of the 20th century? Continue reading “Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015”

Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected, finding: that change in English-language poetry from 1820 to 1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):

Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.

I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled. Continue reading “Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking”

Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: Both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing: long-term change in 19th Century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); but in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

Figure: “19C Direction” (from Underwood and Sellers)

Notice that we’re depicting time along the X-axis (horizontal), from roughly 1820 at the left to 1920 on the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let us set that aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes seem to drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction that Underwood and Sellers are talking about. That upward drift was not at all what they were expecting.
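To make that notion of drift concrete, here is a minimal sketch in Python. The scores below are invented stand-ins for whatever one-dimensional value the Y-axis of their figure reports, one number per volume; the point is simply that “direction” can be read off as the slope of a least-squares trend line over publication dates.

```python
# A toy illustration of "direction" as an upward drift along a trend line.
# The scores are invented for this sketch; they stand in for whatever
# one-dimensional value the Y-axis of the figure reports for each volume.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1820, 1920)                      # publication dates, X-axis
drift = 0.004 * (years - 1820)                     # a slow upward tendency
scores = drift + rng.normal(0, 0.1, years.size)    # one value per volume, Y-axis

# Least-squares trend line: the slope is the "direction" of change.
slope, intercept = np.polyfit(years, scores, deg=1)
print(f"trend slope: {slope:.5f} per year")        # positive slope = upward drift
```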

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space represents poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis. That’s my reinterpretation of that axis. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation. Continue reading “Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction”

Cultural Beings Evolving in the Mesh

This one wrestled me hard. In it I use new terminology and concepts – coordinators, phantasms, cultural beings – as though I know what they mean and am comfortable with them. But that’s not quite the case. It’s only recently that I’ve invented them. It’s one thing to use terms in a document where you define them. It’s another thing to use them in an extended exposition. That’s where they come to define themselves. So this post, a long one, is something of a shake-down cruise. Sure, I’d like to have things all worked out nice and neat. But there’s no way to do that except to put the terms out there and see how they do. That’s what I’m doing.

* * * * *

In this post I further explore the notion of a cultural being and I introduce a metaphor for culture, that of a hyperfluid (cf. Tim Morton’s concept of a hyperobject). A cultural being, if you recall, consists of an envelope or package of coordinators along with all the actions that have given it life. It is thus a rather strange notion, which is why I want to explore it.

And I want to explore it in tandem with another strange notion, that of culture as something we might call a hyperfluid, something that has multiple levels of viscosity and thus changes at different rates. I’ve long thought of the brain, as a functioning entity, as hyperfluid in this sense. At the deepest “thickest” layer we have the physical structure of the brain itself. At the most superficial “thinnest” layer we have the flickering of neural impulses from one millisecond to the next. But that flickering can lead to changes in synaptic structure and, over time, those changes can “rewire” the brain at a fairly “deep” level, so-called cortical plasticity.

So it is with culture, which is, after all, a collective product of the brains of all those individuals in a social group over the life of the group. The thickest cultural layers settle to the bottom. These are the features that endure over decades and centuries, if not millennia. It’s at culture’s thin surface that we see ordinary everyday behavior. Here is where people read books, and write them, where they listen to music, and make it. In this process each individual will participate in many cultural beings and will, in turn, be shaped by them. And some cultural beings will attract only a few individuals while others will attract more. Some of those cultural beings will outlive any and all of the individuals that have participated in them.

What we have, then, are collections of individual human beings, biological beings, on the one hand. Each of them participates in and is (partially) formed by many cultural beings. On the other hand, we have a collection of cultural beings, each of which has attracted participation from at least a few individual humans, while some attract participation from many. Some of the latter are able to attract participation over decades and even longer, so that they outlive any of the humans that have participated in them. It is the interaction of these two sets of beings that gives us this hyperfluid culture that lives in the social mesh.

Notice that I talk of cultural beings as living. They are and they are not. Coordinators – targets, couplers, and designators – are not alive. The humans who read the books and listen to the music, of course, are alive and it is that vivacity, their phantasms, which I am, in effect, allocating to the books and musical performers they witness. Cultural beings as I have defined them do change. While they are utterly dependent on humans, they also have a degree of autonomy from any individual human. They are neither alive nor inanimate in a strict sense. So I will refer to them as being alive, a provocation that seems warranted as a device to stimulate thought.

Being in the Mesh

What do I mean by the mesh? For the most part I’m using that term as more or less equivalent to what Latour has in mind when he talks of networks of social actors, where the actors are not just humans, but everything encompassed within society, the humans, animals, plants, and material objects, both natural and man-made. All of it is gathered into the causal nexus of the social: the mesh.

But let’s start with the humans, biological beings. Make no mistake, they’re the ones that hold the mesh together, and culture is the glue that they deploy. So far as we know, the most basic case is that of foraging bands of hunter-gatherers. That’s how humankind began on the African savannas. Let’s think about them for just a bit to refresh ourselves.

Such bands typically have a dozen to thirty or forty members, all of whom know one another, some better than others, but all of them in face-to-face relations. A number of bands will occupy a given territory, and people in any given band will have friends and relatives in other bands. So maybe we have two hundred or a thousand or more people all speaking more or less the same language and having more or less the same culture.

Those people and their relations are the core of the mesh. But we must also include the environment in which they live and the artifacts they’ve manufactured. They too are part of the mesh. They mediate relations among the humans.

Those non-humans are ‘covered’ with coordinators through which the humans assimilate them into their cultural system. Humans often place markings in the environment, such as slash marks on trees, or paintings on rock faces of cliffs and in caves. Or they may place stones to mark boundaries, and so forth. But environmental features often enter into myths and stories without themselves being physically altered and so become assimilated to culture. All of these involve coordinators in the technical sense I’ve been using.

Those myths and stories are what I have come to call cultural beings, as you recall from the introduction, a term I use to encompass not only the string of linguistic signifiers used to convey the stories, that is, the envelopes of coordinators, but the various individual mental acts (the phantasms) giving them life. Some or many of those environmental features may also function as cultural beings if they figure centrally enough in mythology and in the group’s way of life, that is, if appropriate phantasms are consistently associated with them. But I don’t want to get into figuring out just what qualifies as a cultural being. The question is an important one, but I’m going to leave it for later. For my immediate purposes we can regard the term as something of a placeholder whose extension will be specified later, in a more specialized discourse. We can make do with some rough and ready observations. Continue reading “Cultural Beings Evolving in the Mesh”

The Direction of Cultural Evolution, Macroanalysis at 3 Quarks Daily

As soon as I finished up my series of posts about Matt Jockers, Macroanalysis: Digital Methods and Literary History, I set up a file on my Mac for further thoughts, knowing full well I’d keep thinking about the book. I’ve now posted the first of those continuing thoughts at 3 Quarks Daily: Macroanalysis and the Directional Evolution of Nineteenth Century English-Language Novels.

The issue is cultural evolution, a notion that Jockers flirts with, but rejects. Of course I’ve been committed to the idea for a long time and I’ve decided that his data, that is, the patterns he’s found in his data, constitute a very strong argument for conceptualizing literary history as an evolutionary phenomenon. That’s what my 3QD post is about, a fairly detailed reanalysis (with a handful of new visualizations) of Jockers’ account of literary influence.

From Influence to Evolution

It is one thing to track influence among a handful of texts; that is the ordinary business of traditional literary history. You read the texts, look for similar passages and motifs, read correspondence and diaries by the authors, and so forth, and arrive at judgements about how the author of some later text was influenced by authors of earlier texts. It’s not practical to do that for over 3000 texts, most of which you’ve never read, nor has anyone read many or even most of them in over 100 years.

Here, in brief, is what Jockers did: He assumed that, if Author X was influenced by Author Q, then X’s texts would be very similar to Q’s. Given the work he’d already done on stylistic and thematic features, it was easy for Jockers to combine those features into a single list comprising almost 600 features. With each text scored on all of those features it was then relatively easy for Jockers to calculate the similarity between texts and represent it in a directed graph where texts are represented by nodes and similarity by the edges between nodes. The length of the edge between two texts is proportional to their similarity.

Note, however, that when Jockers created the graph, he did not include all possible edges. With 3346 nodes in the graph, the full graph where each node is connected to all of the others would have contained millions of edges and been all but impossible to deal with. Jockers reasoned that only where a pair of books was highly similar could one reasonably conjecture an influence from the older to the newer. So he culled all edges below a certain threshold, leaving the final graph with only 165,770 edges (p. 163).
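For readers who want to see the shape of such a procedure, here is a minimal sketch in Python. The feature vectors are random stand-ins, and the similarity measure (cosine) and the threshold are my own choices for illustration; Jockers’ actual features, similarity measure, and cutoff differ.

```python
# Sketch of the general procedure described above: score each text on a set of
# features, compute pairwise similarity, keep only edges above a threshold,
# and build a graph. All numbers here are invented for illustration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n_texts, n_features = 50, 600          # stand-ins for 3,346 texts and ~600 features
X = rng.random((n_texts, n_features))  # rows = texts, columns = feature scores

# Cosine similarity between every pair of texts.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = Xn @ Xn.T

threshold = 0.78                       # cull all edges below this value
G = nx.Graph()
G.add_nodes_from(range(n_texts))
for i in range(n_texts):
    for j in range(i + 1, n_texts):
        if sim[i, j] >= threshold:
            G.add_edge(i, j, weight=sim[i, j])

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges after culling")
```

The culling step is what keeps the graph tractable: the number of possible edges grows quadratically with the number of texts, while the retained edges are only those pairs close enough to make a conjecture of influence plausible.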

When Jockers visualized the graph (using Force Atlas 2 in Gephi) he found, much to his delight, that the graph was laid out roughly in temporal order from left to right. And yet, as he points out, there is no date information in the data itself, only information about some 600 stylistic and thematic features of the novels. What I argue in my 3QD post is that that in itself is evidence that 19th century literary culture constitutes an evolutionary system. That’s what you would expect if literary change were an evolutionary process. Continue reading “The Direction of Cultural Evolution, Macroanalysis at 3 Quarks Daily”

Report on Cultural Evolution for the National Humanities Center, Revised Edition

Back in 2010 I wrote a piece for the National Humanities Center (USA), Cultural Evolution: A Vehicle for Cooperative Interaction Between the Sciences and the Humanities, which is online at their Forum along with comments. I have since revised it to include a section on Jockers, Macroanalysis: Digital Methods and Literary History (2013). You can download the revised version from my SSRN page. I’ve placed the added section below.

* * * * *

A Start: 19th Century Anglophone Literary Culture

Let me set the stage by quoting a passage from the excellent review Tim Lewens (2014) wrote for the Stanford Encyclopedia of Philosophy:

The prima-facie case for cultural evolutionary theories is irresistible. Members of our own species are able to survive and reproduce in part because of habits, know-how and technology that are not only maintained by learning from others, they are initially generated as part of a cumulative project that builds on discoveries made by others. And our own species also contains sub-groups with different habits, know-how and technologies, which are once again generated and maintained through social learning. The question is not so much whether cultural evolution is important, but how theories of cultural evolution should be fashioned, and how they should be related to more traditional understandings of organic evolution.

Building on discoveries made by others, we can see that kind of process in a graphic that Matthew Jockers used late in Macroanalysis: Digital Methods and Literary History (2013), though that’s not what Jockers had in mind in that particular investigation. He was working with a corpus of 3346 Nineteenth Century novels by American, British, Irish and Scottish authors and was interested in tracking influence among them. It is one thing to track influence among a handful of texts; that is the ordinary business of traditional literary history. You read the texts, look for similar passages and motifs, read correspondence and diaries by the authors, and so forth, and arrive at judgements about how the author of some later text was influenced by authors of earlier texts.

It’s not practical to do that for over 3000 texts, most of which you’ve never read, nor has anyone read them in over 100 years. Jockers was using recently developed techniques for analyzing “big data,” in this case, a pile of 19th Century Anglophone novels. Without going into the details – you can find most of them in Jockers, pp. 156 ff. – Jockers had the computer ‘measure’ each text on almost 600 different traits and then calculated the pair-wise similarity of all the texts. He then tossed out all values below a certain relatively high threshold and then had the computer create a network visualization of the remaining connections. Each text is represented as a ‘node’ in the network and the similarity between two texts is represented by the ‘edge’ (or link) connecting them. The length of the edge is proportional to the degree of similarity. Jockers then had the computer create a visualization of this network, where each text would be next to similar texts in the resulting image. Here’s that image (Figure 9.3 in the book, p. 165, color version from the web):

Figure 9.3 from Macroanalysis (p. 165)

It turns out that the visualization routine laid the graph out more or less in chronological order, going from older to newer, left to right. Note that there was no temporal information in the data from which that graph was derived (pp. 164-65):

The fact that they line up in a chronological manner is incidental, but rather extraordinary. The chronological alignment reveals that thematic and stylistic change does occur over time. The themes that writers employ and the high-frequency function words they use to build the frameworks for their themes are nearly, but not always, tethered in time. At this macro scale, style and theme are observed to evolve chronologically, and most books and authors in this network cluster into communities with their chronological peers. Not every book and not every author is a slave to his or her epoch.

On Jockers’ first sentence, it’s neither incidental nor extraordinary IF an evolutionary process regulates cultural change. For evolution proceeds through “descent with modification,” as Darwin put it, and that goes for cultural as well as biological evolution. If a later individual is modified from its immediate predecessors, it will in fact resemble them a great deal; the modifications do not change the basic character of the descendants.
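Here is a hypothetical sketch, in Python, of how one might check that logic in miniature: simulate texts that descend, with small modifications, from recent predecessors, build a thresholded similarity graph that contains no dates, lay it out with a force-directed algorithm, and then ask whether layout position correlates with the withheld “publication year.” Everything in it is simulated; it is not Jockers’ data, his features, or his procedure.

```python
# Descent with modification in miniature: each "text" copies a recent
# predecessor's features with small changes; the graph knows nothing of dates.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_texts, n_features = 80, 60
feats, years = [rng.normal(size=n_features)], [0]
for t in range(1, n_texts):
    parent = feats[rng.integers(max(0, t - 10), t)]         # descend from a recent text
    feats.append(parent + rng.normal(0, 0.3, n_features))   # small modification
    years.append(t)
X = np.vstack(feats)

# Similarity graph (cosine), keeping only the strongest edges.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = Xn @ Xn.T
G = nx.Graph()
G.add_nodes_from(range(n_texts))
for i in range(n_texts):
    for j in range(i + 1, n_texts):
        if sim[i, j] > 0.9:
            G.add_edge(i, j)

# Force-directed layout; the sign of r is arbitrary (the layout can be flipped),
# so a large |r| is what would indicate a roughly chronological arrangement.
pos = nx.spring_layout(G, seed=1)
xs = np.array([pos[i][0] for i in range(n_texts)])
r = np.corrcoef(xs, np.array(years))[0, 1]
print(f"correlation between layout x-coordinate and 'year': {r:.2f}")
```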

As his language indicates, Jockers wasn’t looking for THAT result. It surprised him. Though he alludes to cultural evolution here and there in the book, he rejected it as a basic premise of his investigation (pp. 171-172). The evolutionary interpretation is mine, not his.

We must further realize that that interpretation is an assertion about the collective mentality. Jockers wasn’t examining the minds of millions of 19th century readers of English-language novels in Britain and America, but the history of those novels is a function of the tastes and interests of those readers. Those books wouldn’t have been written if publishers didn’t think they could sell them to the public. Those tastes changed gradually, with the themes and styles of novels appealing to those tastes changing gradually as well.

The study of cultural evolution is thus the study of collective mentality. We are interested in the collective psyche. How can we think of the collective psyche without falling into hopeless mysticism?