Cognitivism and the Critic 2: Symbol Processing

It has long been obvious to me that the so-called cognitive revolution is what happened when computation – both the idea and the digital technology – hit the human sciences. But I’ve seen little reflection of that in the literary cognitivism of the last decade and a half. And that, I fear, is a mistake.

Thus, when I set out to write a long programmatic essay, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, I argued that we should think of the literary text as a computational form. I submitted the essay and found that both reviewers were puzzled about what I meant by computation. While publication was not conditioned on satisfying them on that point, I did make some effort to do so, though I’d be surprised if they were completely satisfied by those efforts.

That was a few years ago.

Ever since then I have pondered the issue: how do I talk about computation to a literary audience? You see, some of my graduate training was in computational linguistics, so I find it natural to think about language processing as entailing computation. As literature is constituted by language, it too must involve computation. But without some background in computational linguistics or artificial intelligence, I’m not sure the notion is much more than a buzzword that’s been trendy for the last few decades – and that’s an awfully long time to be trendy.

I’ve already written one post specifically on this issue: Cognitivism for the Critic, in Four & a Parable, where I write abstracts of four texts which, taken together, give a good feel for the computational side of cognitive science. Here’s another crack at it, from a different angle: symbol processing.

Operations on Symbols

I take it that ordinary arithmetic is most people’s ‘default’ case for what computation is. Not only have we all learned it, it’s fundamental to our knowledge, like reading and writing. Whatever we know, think, or intuit about computation is built on our practical knowledge of arithmetic.

As far as I can tell, we think of arithmetic as being about numbers. Numbers are different from words. And they’re different from literary texts. And not merely different. Some of us – many of whom study literature professionally – have learned that numbers and literature are deeply and utterly different to the point of being fundamentally in opposition to one another. From that point of view the notion that literary texts can be understood computationally is little short of blasphemy.

Not so. Not quite.

The question of just what numbers are – metaphysically, ontologically – is well beyond the scope of this post. But what they are in arithmetic, that’s simple; they’re symbols. Words too are symbols; and literary texts are constituted of words. In this sense, perhaps superficial, but nonetheless real, the reading of literary texts and making arithmetic calculations are the same thing, operations on symbols. Continue reading “Cognitivism and the Critic 2: Symbol Processing”
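To make the parallel concrete, here’s a minimal sketch of my own (the function names and examples are illustrative, not from any particular text): both arithmetic and language manipulation reduce to rule-governed operations on symbol strings. The machine never touches a “number” or a “word” as such – only tokens and rewrite rules.

```python
# Arithmetic as symbol manipulation: evaluate "3 + 4" by rewriting
# a string of tokens into a new token -- no special "numeric" magic.
def eval_sum(expr):
    left, op, right = expr.split()       # tokens: ["3", "+", "4"]
    assert op == "+"
    return str(int(left) + int(right))   # rewrite to a new symbol: "7"

# Text as symbol manipulation: the same kind of token-level rewriting.
def pluralize(noun):
    return noun + "es" if noun.endswith("s") else noun + "s"

print(eval_sum("3 + 4"))   # 7
print(pluralize("text"))   # texts
```

Superficial, yes; but the point stands: in both cases the operation inspects symbols and emits symbols according to a rule.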

Statistics and Symbols in Mimicking the Mind

MIT recently held a symposium on the current status of AI, which apparently has seen precious little progress in recent decades. The discussion, it seems, ground down to a squabble over the prevalence of statistical techniques in AI and a call for a revival of work on the sorts of rule-governed models of symbolic processing that once dominated much of AI and its sibling, computational linguistics.

Briefly, from the early days in the 1950s up through the 1970s both disciplines used models built on carefully hand-crafted symbolic knowledge. The computational linguists built parsers and sentence generators and the AI folks modeled specific domains of knowledge (e.g. diagnosis in selected medical domains, naval ships, toy blocks). Initially these efforts worked like gang-busters. Not that they did much by Star Trek standards, but they actually did something and they did things never before done with computers. That’s exciting, and fun.

In time, alas, the excitement wore off and there was no more fun. Just systems that got too big, failed too often, and still didn’t do a whole heck of a lot.

Then, starting, I believe, in the 1980s, statistical models were developed that, yes, worked like gang-busters. And these models actually did practical tasks, like speech recognition and then machine translation. That was a blow to the symbolic methodology because these programs were “dumb.” They had no knowledge crafted into them, no rules of grammar, no semantics. Just routines that learned while gobbling up terabytes of example data. Thus, as Google’s Peter Norvig points out, machine translation is now dominated by statistical methods. No grammars and parsers carefully hand-crafted by linguists. No linguists needed.
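To give a feel for just how “dumb” such systems are, here’s a toy sketch of my own (real systems use vastly more data and far cleverer models): a bigram model that learns which word follows which purely by counting, with no grammar anywhere in sight.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- no grammar, no semantics."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased a mouse",
]
model = train_bigrams(corpus)

# The most likely word after "the", learned purely from counts:
print(model["the"].most_common(1)[0][0])  # cat
```

Nothing in there “knows” that “the” is a determiner or “cat” a noun; the regularities emerge from the counts alone. Scale that idea up by many orders of magnitude and you have the gist of the statistical turn.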

What a bummer. For machine translation is THE prototype problem for computational linguistics. It’s the problem that set the field in motion and has been a constant arena for research and practical development. That’s where much of the handcrafted art was first tried, tested, and, in a measure, proved. For it to now be dominated by statistics . . . bummer.

So that’s where we are. And that’s what the symposium was chewing over.

Continue reading “Statistics and Symbols in Mimicking the Mind”

Referential labelling in Diana Monkeys

Ok, so I was going to write an essay for my Origins of Language module on this but then got distracted by syntax (again) so I thought I’d put my thoughts in a blog post just so they don’t go to waste.

Diana monkeys, like vervet monkeys, use alarm calls to communicate the presence of a predator to other monkeys.

They produce (and respond to) different alarm calls corresponding to how close the predator is, whether the predator is above or below them and whether the predator is a leopard or an eagle.  They respond instantly regardless of how imminent an attack is.

In this post I will explore some of the evidence relating to how sophisticated the Diana monkey’s understanding of the call’s meaning is and also the mental mechanisms relating to the call’s production.

Zuberbühler (2000a) discusses some species which have alarm calls where, instead of each alarm call representing a different predator, each call represents a different level (or type) of danger. The aim of the Zuberbühler paper, then, was to find out whether this was the case for Diana monkeys or whether they really did have referential ‘labels’ for different predators.

Continue reading “Referential labelling in Diana Monkeys”

Language, Thought, and Space (II): Universals and Variation

Spatial orientation is crucial when we try to navigate the world around us. It is a fundamental domain of human experience and depends on a wide array of cognitive capacities and integrated neural subsystems. What is most important for spatial cognition, however, are the frames of reference we use to locate and classify ourselves, others, objects, and events.

Often, we define a landmark (say ourselves, or a tree, or the telly) and then define an object’s location in relation to this landmark (the mouse is to my right, the bike lies left of the tree, my keys have fallen behind the telly). But as it turns out, many languages are not able to express a coordinate system with the meaning of the English expression “left of.” Instead, they employ a compass-like system of orientation.

They do not use a relative frame of reference, as in the English “the cat is behind the truck,” but instead an absolute frame of reference, which can be illustrated in English by sentences such as “the cat is north of the truck” (Levinson 2003: 3). This may seem exotic to us, but for many languages it is the dominant – although often not the only – way of locating things in space.
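The contrast can be made concrete with a little toy geometry (my own sketch, not from Levinson; the “behind” rule is a crude simplification of how relative frames actually work): the same scene yields different descriptions depending on which frame of reference the describer uses.

```python
# A scene on a map grid: x grows east, y grows north.
cat   = (2, 5)
truck = (2, 3)

# Absolute frame: compare coordinates against fixed compass directions.
def absolute_relation(figure, ground):
    dx, dy = figure[0] - ground[0], figure[1] - ground[1]
    if abs(dy) >= abs(dx):
        return "north of" if dy > 0 else "south of"
    return "east of" if dx > 0 else "west of"

# Relative frame: depends on the viewer's own facing direction.
# Crude rule: "behind" = further along the direction the viewer faces.
def relative_relation(figure, ground, viewer_facing):
    dx, dy = figure[0] - ground[0], figure[1] - ground[1]
    along = dx * viewer_facing[0] + dy * viewer_facing[1]
    return "behind" if along > 0 else "in front of"

print(absolute_relation(cat, truck))           # north of
print(relative_relation(cat, truck, (0, 1)))   # behind (viewer faces north)
print(relative_relation(cat, truck, (0, -1)))  # in front of (faces south)
```

Note the asymmetry: the absolute description stays fixed no matter where the speaker stands, while the relative one flips as the viewer turns around – which is exactly why the two systems place such different demands on their speakers.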

What cognitive consequences follow from this?

Continue reading “Language, Thought, and Space (II): Universals and Variation”

The Problem With a Purely Adaptationist Theory of Language Evolution

According to the evolutionary psychologist Geoffrey Miller and his colleagues (e.g. Miller 2000b), uniquely human cognitive behaviours such as musical and artistic ability and creativity should be considered both deviant and special. This is because, traditionally, evolutionary biologists have struggled to fathom exactly how such seemingly superfluous cerebral assets would have aided our survival. By the same token, they have observed that our linguistic powers are more advanced than seems necessary merely to get things done: our command of an expansive vocabulary and elaborate syntax allows us to express an almost limitless range of concepts and ideas above and beyond the immediate physical world. The question is: why bother to evolve something so complicated, if it wasn’t really all that useful?

Miller’s solution is that our most intriguing abilities, including language, have been shaped predominantly by sexual selection rather than natural selection, in the same way that large cumbersome ornaments, bright plumages and complex song have evolved in other animals. As one might expect then, Miller’s theory of language evolution has been hailed as a key alternative to the dominant view that language evolved because it conferred a distinct survival advantage to its users through improved communication (e.g. Pinker 2003). He believes that language evolved in response to strong sexual selection pressure for interesting and entertaining conversation because linguistic ability functioned as an honest indicator of general intelligence and underlying genetic quality; those who could demonstrate verbal competence enjoyed a high level of reproductive success and the subsequent perpetuation of their genes. Continue reading “The Problem With a Purely Adaptationist Theory of Language Evolution”

Chomsky Chats About Language Evolution

If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you’ll find this blurb:

Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT’s Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky’s office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.

I’ve neither listened to the podcast nor read the transcript—both are linked there. But who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn’t look like semantics to me, it looked like syntax.

Then I found Syd Lamb’s stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax and (what little) semantics (he had). That elegance appealed to me. Still does, & I’ve figured out how to package a very robust semantics into Lamb’s diagrammatic notation. But that’s another story.

Broca's area and the processing of hierarchically organised sequences pt.2

3. Neurological processing of hierarchically organised sequences in non-linguistic domains

A broader perspective sees grammar as just one of many hierarchically organised behaviours being processed in similar, prefrontal neurological regions (Greenfield, 1991; Givon, 1998). As Broca’s area is found to be functionally salient in grammatical processing, it is logical to assume that this is the place to search for activity in analogous hierarchical sequences. Such is the basis for studies into music (Maess et al., 2001), action planning (Koechlin and Jubault, 2006) and tool-production (Stout et al., 2008).

Continue reading “Broca's area and the processing of hierarchically organised sequences pt.2”

Broca's area and the processing of hierarchically organised sequences pt.1

Ever since its discovery in 1861, Broca’s area (named after its discoverer, Paul Broca) has been inextricably linked with language (Grodzinsky and Santi, 2008). Found in the prefrontal cortex (PFC) of the left hemisphere, Broca’s region traditionally[1] comprises Brodmann’s areas (BA) 44 and 45 (Hagoort, 2005). Despite having been demoted from its status as the centre of language, this region is still believed to play a vital role in certain aspects of language.

Of particular emphasis is syntax. Syntactic processing, however, is not unequivocally confined to Broca’s area: “Studies investigating lesion deficit correlations point to a more distributed representation of syntactic processes in the left perisylvian region” (Fiebach, 2005, pg. 80). A more constrained approach places Broca’s area as processing an important functional component of grammar (Grodzinsky and Santi, 2007). One of these suggestions points specifically to how humans are able to organise phrases in hierarchical structures[2].

In natural languages, “[…] the noun phrases and the verb phrase within a clause typically receive their grammatical role (e.g., subject or object) by means of hierarchical relations rather than through the bare linear order of the words in a string. [my emphasis]” (Musso et al., 2003, pg. 774). Furthermore, these phrases can be broken down into smaller segments, with noun phrases, for example, consisting of a determiner preceding a noun (Chomsky, 1957). According to Chomsky (1957) these rules exist without the need for interaction in other linguistic domains. Take for example his now famous phrase of “Colourless green ideas sleep furiously.” (ibid, pg. 15). Despite being syntactically correct, it is argued the sentence as a whole is semantically meaningless.
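A toy sketch of my own (this miniature grammar and lexicon are illustrative, not Musso et al.’s materials) shows what such phrase-structure rules look like when made mechanical: grammatical roles like subject and object fall out of the hierarchy the parser builds, not the bare linear order of the words.

```python
# Toy phrase-structure rules in the spirit of Chomsky (1957):
#   S -> NP VP,  NP -> Det N,  VP -> V NP
DET  = {"the", "a"}
NOUN = {"ideas", "cat", "truck", "dog"}
VERB = {"sleep", "chased"}

def parse_np(words):
    """Return (tree, remaining words) if words start with Det N, else None."""
    if len(words) >= 2 and words[0] in DET and words[1] in NOUN:
        return ("NP", ("Det", words[0]), ("N", words[1])), words[2:]
    return None

def parse_s(words):
    """Parse S -> NP V NP; the subject is the NP *dominated by* S."""
    np = parse_np(words)
    if np is None:
        return None
    subj, rest = np
    if rest and rest[0] in VERB:
        obj = parse_np(rest[1:])
        if obj:
            obj_tree, leftover = obj
            if not leftover:
                return ("S", subj, ("VP", ("V", rest[0]), obj_tree))
    return None

print(parse_s("the cat chased the dog".split()))
```

“The cat” comes out as subject because of where its NP sits in the tree – directly under S – while “the dog” is object because its NP sits inside the VP; and the grammar happily accepts semantically empty strings, just as Chomsky’s famous example predicts.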

The relevant point to take away is that a sentence is considered hierarchical if phrases are embedded within other phrases. Yet, examples of hierarchical organisation are found in many domains besides syntax. This includes other language phenomena, such as prosody. Also, non-linguistic behaviours – such as music (Givon, 1998), action sequences (Koechlin and Jubault, 2006), tool-use (cf. Scott-Frey, 2004) and tool-production (Stout et al., 2008) – are all cognitively demanding tasks, comparable with that of language. We can even see instances of non-human hierarchical representations: from the songs of humpback whales (Suzuki, Buck and Tyack, 2006) to various accounts of great apes (McGrew, 1992; Nakamichi, 2003) and crows (Hunt, 2000) using and manufacturing their own tools[3].

With this in mind, we can ask ourselves two questions corresponding to Broca’s area and hierarchical organisation: Does Broca’s area process hierarchically organised sequences in language? And if so, is this processing language-specific? The logic behind this two-part approach is to help focus in on the problem. For instance, it may be found that hierarchical structures in sentences are processed by Broca’s area. But that still leaves open the question of whether other hierarchically organised behaviours utilise the same cognitive abilities.

Continue reading “Broca's area and the processing of hierarchically organised sequences pt.1”

How do biology and culture interact?

In the year of Darwin, I’m not too surprised at the number of articles being published on the interactions between cultural change and biological evolution — this synthesis, if achieved, will certainly be a crucial step in explaining how humans evolved. Still, it’s unlikely we’re going to see the Darwin of culture in 2009, given we’re still disputing some of the fundamentals surrounding these two modes of evolution. One of these key arguments is whether or not culture inhibits biological evolution. That we’re seeing accelerated changes in the human genome seems to suggest (for some) that culture is one of these evolutionary selection pressures, as John Hawks explains:
Continue reading “How do biology and culture interact?”