Statistics and Symbols in Mimicking the Mind

MIT recently held a symposium on the current status of AI, which apparently has seen precious little progress in recent decades. The discussion, it seems, ground down to a squabble over the prevalence of statistical techniques in AI and a call for a revival of work on the sorts of rule-governed models of symbolic processing that once dominated much of AI and its sibling, computational linguistics.

Briefly, from the early days in the 1950s up through the 1970s, both disciplines used models built on carefully hand-crafted symbolic knowledge. The computational linguists built parsers and sentence generators, and the AI folks modeled specific domains of knowledge (e.g. diagnosis in selected medical domains, naval ships, toy blocks). Initially these efforts worked like gang-busters. Not that they did much by Star Trek standards, but they actually did something, and they did things never before done with computers. That’s exciting, and fun.

In time, alas, the excitement wore off and there was no more fun. Just systems that got too big, failed too often, and still didn’t do a whole heck of a lot.

Then, starting, I believe, in the 1980s, statistical models were developed that, yes, worked like gang-busters. And these models actually did practical tasks, like speech recognition and then machine translation. That was a blow to the symbolic methodology because these programs were “dumb.” They had no knowledge crafted into them, no rules of grammar, no semantics. Just routines that learned while gobbling up terabytes of example data. Thus, as Google’s Peter Norvig points out, machine translation is now dominated by statistical methods. No grammars and parsers carefully hand-crafted by linguists. No linguists needed.
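To see what “dumb” means here, below is a minimal toy sketch of the statistical idea, my own illustration rather than how any production system works: learn word translations purely by counting co-occurrences in sentence-aligned example data, with no grammar rules anywhere in the code. The four-sentence English–German “corpus” and the Dice-coefficient scoring are invented for the example; real statistical MT systems learn alignment and phrase models from millions of sentence pairs.

```python
from collections import defaultdict

# Toy sentence-aligned "corpus" (invented for illustration): English-German pairs.
parallel_corpus = [
    ("the house is small", "das haus ist klein"),
    ("the house is big",   "das haus ist gross"),
    ("the book",           "das buch"),
    ("a book is good",     "ein buch ist gut"),
]

cooc = defaultdict(lambda: defaultdict(int))  # cooc[german][english] = co-occurrence count
count_e = defaultdict(int)                    # total occurrences of each English word
count_g = defaultdict(int)                    # total occurrences of each German word

for english, german in parallel_corpus:
    e_words, g_words = english.split(), german.split()
    for e in e_words:
        count_e[e] += 1
    for g in g_words:
        count_g[g] += 1
        for e in e_words:
            cooc[g][e] += 1   # count every word pairing within an aligned sentence pair

def translate_word(g):
    """Pick the English word most strongly associated with g (Dice coefficient)."""
    if g not in cooc:
        return g  # unseen word: pass it through untranslated
    return max(cooc[g], key=lambda e: 2 * cooc[g][e] / (count_e[e] + count_g[g]))

# Word-by-word "translation" of an unseen sentence, learned entirely from the counts above.
print(" ".join(translate_word(w) for w in "das buch ist gross".split()))
# -> "the book is big"
```

Even this crude counting gets the toy sentence right, with not a single linguist-written rule in sight; scale the data up by many orders of magnitude and swap in far better models, and you are in the neighborhood of the systems Norvig describes.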

What a bummer. For machine translation is THE prototype problem for computational linguistics. It’s the problem that set the field in motion and has been a constant arena for research and practical development. That’s where much of the handcrafted art was first tried, tested, and, in a measure, proved. For it to now be dominated by statistics . . . bummer.

So that’s where we are. And that’s what the symposium was chewing over.


Chomsky derides purely statistical methods

This month sees MIT’s Brains, Minds, and Machines symposium. The opening panel discussion, moderated by Steven Pinker, called for a reboot in artificial intelligence. The panel consisted of Noam Chomsky, Marvin Minsky, Patrick Winston, Susan Carey, Emilio Bizzi, and Sidney Brenner. Most panelists called for a return to old-style research methods in AI, as opposed to the narrower applications of AI seen today. An article in Technology Review summarizes Chomsky’s contribution:

Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

I wondered what people thought of this argument and how it relates to the computational and statistical models of language that are becoming so fashionable these days.

Language, Thought and Space (I): Lumpers and Splitters

There have been some very interesting discussions of the relationship between language and thought recently, including, for example, Sean’s absolutely fascinating series of posts about the evolution of colour terms, a great post on descriptions of motion in different languages over at the lousy linguist (here), Guy Deutscher’s article “Does Your Language Shape How You Think?” (for discussions, see e.g. here and here), a slightly less recent piece by Lera Boroditsky in the Wall Street Journal, and an excellent recent discussion of her article by Mark Liberman (here). (See also James’ post, including a great/terrible joke about Whorf.)

One of the things that Deutscher wrote in his article was that:

“The area where the most striking evidence for the influence of language on thought has come to light is the language of space — how we describe the orientation of the world around us.”

As I’ve written a bit about this topic on my other blog, Shared Symbolic Storage, I’ll repost a short series of posts over the next couple of days.
As Deutscher said, this is a fascinating avenue of linguistic research that gives much insight into the nature of language and cognition as well as their relationship. It also presents us with new facts and considerations we have to take into account when we think about how language and cognition evolved.


The Problem With a Purely Adaptationist Theory of Language Evolution

According to the evolutionary psychologist Geoffrey Miller and his colleagues (e.g. Miller 2000b), uniquely human cognitive behaviours such as musical and artistic ability and creativity should be considered both deviant and special. This is because, traditionally, evolutionary biologists have struggled to fathom exactly how such seemingly superfluous cerebral assets would have aided our survival. By the same token, they have observed that our linguistic powers are more advanced than seems necessary to merely get things done: our command of an expansive vocabulary and elaborate syntax allows us to express an almost limitless range of concepts and ideas above and beyond the immediate physical world. The question is: why bother to evolve something so complicated, if it wasn’t really all that useful?

Miller’s solution is that our most intriguing abilities, including language, have been shaped predominantly by sexual selection rather than natural selection, in the same way that large, cumbersome ornaments, bright plumage and complex song have evolved in other animals. As one might expect, then, Miller’s theory of language evolution has been hailed as a key alternative to the dominant view that language evolved because it conferred a distinct survival advantage on its users through improved communication (e.g. Pinker 2003). He believes that language evolved in response to strong sexual selection pressure for interesting and entertaining conversation, because linguistic ability functioned as an honest indicator of general intelligence and underlying genetic quality; those who could demonstrate verbal competence enjoyed a high level of reproductive success and the subsequent perpetuation of their genes.

Chomsky Chats About Language Evolution

If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you’ll find this blurb:

Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT’s Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky’s office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.

I’ve neither listened to the podcast nor read the transcript—both are linked here. But who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn’t look like semantics to me; it looked like syntax.

Then I found Syd Lamb’s stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax, and (what little) semantics he had. That elegance appealed to me. Still does, & I’ve figured out how to package a very robust semantics into Lamb’s diagrammatic notation. But that’s another story.

What Makes Humans Unique? (II): Six Candidates for What Makes Human Cognition Uniquely Human

What makes humans unique? This never-ending debate has sparked a long list of proposals and counter-arguments and, to quote from a recent article on this topic,

“a similar fate most likely awaits some of the claims presented here. However such demarcations simply have to be drawn once and again. They focus our attention, make us wonder, and direct and stimulate research, exactly because they provoke and challenge other researchers to take up the glove and prove us wrong.” (Høgh-Olesen 2010: 60)

In this post, I’ll focus on six candidates that might play a part in constituting what makes human cognition unique, though there are countless others (see, for example, here).

One of the key candidates for what makes human cognition unique is of course language and symbolic thought. We are “the articulate mammal” (Aitchison 1998) and an “animal symbolicum” (Cassirer 2006: 31). And if one defining feature truly fits our nature, it is that we are the “symbolic species” (Deacon 1998). But as evolutionary anthropologists Michael Tomasello and his colleagues argue,

“saying that only humans have language is like saying that only humans build skyscrapers, when the fact is that only humans (among primates) build freestanding shelters at all” (Tomasello et al. 2005: 690).

Language and Social Cognition

According to Tomasello and many other researchers, language and symbolic behaviour, although they certainly are crucial features of human cognition, are derived from human beings’ unique capacities in the social domain. As Willard van Orman Quine pointed out, language is essentially a “social art” (Quine 1960: ix). Specifically, it builds on the foundations of infants’ capacities for joint attention, intention-reading, and cultural learning (Tomasello 2003: 58). Linguistic communication, on this view, is essentially a form of joint action rooted in common ground between speaker and hearer (Clark 1996: 3 & 12), in which they make “mutually manifest” relevant changes in their cognitive environment (Sperber & Wilson 1995). This is the precondition for the establishment and (co-)construction of symbolic spaces of meaning and shared perspectives (Graumann 2002, Verhagen 2007: 53f.). These abilities, then, had to evolve prior to language, however great language’s effect on cognition may be in general (Carruthers 2002), and if we look for the origins and defining features of human uniqueness we should probably look in the social domain first.

Corroborating evidence for this view comes from comparisons of brain size among primates. Firstly, there are significant positive correlations between group size and primate neocortex size (Dunbar & Shultz 2007). Secondly, there is also a positive correlation between technological innovation and tool use – which are both facilitated by social learning – on the one hand and brain size on the other (Reader and Laland 2002). Our brain, it seems, is essentially a “social brain” that evolved to cope with the affordances of a primate social world that grew ever more complex (Dunbar & Shultz 2007, Lewin 2005: 220f.).

Thus, “although innovation, tool use, and technological invention may have played a crucial role in the evolution of ape and human brains, these skills were probably built upon mental computations that had their origins and foundations in social interactions” (Cheney & Seyfarth 2007: 283).


Language – An Embarrassing Conundrum for the Evolutionist?

Hello! This is my first post on the blog, and whilst I didn’t want it to be an angry rant, after I found this YouTube video there seemed little that could be done to avoid it.

This is a video by a creationist named “ppsimmons”, who writes on the front page of his YouTube channel that he “apologizes for not knowing enough to scientifically refute the evidence for creation nor for being clever enough to “scientifically” support the theory of evolution.” And yet he feels he is enough of an authority to make videos refuting evolution using ‘science’.

I know I shouldn’t let this annoy me as much as it obviously has; I know that there will always be creationists out there, and I know that these creationists will never listen to anything I have to say. However, in this case, I’ve decided to respond, mostly to set straight the interpretation of Robert Berwick’s words used in this video.
