Noam Chomsky recently gave a lecture on the poverty of the stimulus at UCL, responding to topics such as language evolution and artificial language learning experiments. From about 89 minutes in he discusses iterated learning and language evolution, saying the conclusions derive from "serious illusions about evolution":
Chomsky's criticism of iterated learning experiments (see posts here and here) rests on two points. First, the emergence of structure has more to do with the intelligence of the modern humans taking part in the experiments than with a realistic language-evolution scenario. He suggests that structure would not emerge in a series of computer programs without human intelligence. As a colleague pointed out, however, the first iterated learning experiments used computational models of exactly this kind. Secondly, he suggests that the view of evolution employed in explanations of these systems is a pop-psychology, gradual hill-climbing one. In fact, Chomsky claims, the evolution of traits such as language or eyes derives from single, frozen accidents. That is, evolution moves in leaps and bounds rather than small steps (Jim Hurford recently gave a lecture on this topic entitled 'Reconciling linguistic jerks and biological creeps'). Why else would humans be the only species with language?
Geoffrey Pullum counters this last point by asking why an innately specified UG would emerge so rapidly, but then freeze for tens of thousands of years, when (borrowing Phillip Lieberman's point) traits such as lactose tolerance have emerged in the human genome within two thousand years. Chomsky gives some examples of traits that developed rapidly but then changed only marginally.
I don't think that proponents of iterated learning paradigms would have a problem with a sudden emergence of a capacity for advanced linguistic communication. Although there is continuity between human and non-human communication systems, we have some tricks that other animals don't (see Michael's post here). However, the evolution of the structure of language after these mutations could owe a huge amount to processes of cultural transmission. The universals we see in the world's languages, then, would be an amplification of weak biological biases.
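To make that last idea concrete, here is a toy simulation in the spirit of Bayesian iterated-learning analyses (e.g. Griffiths & Kalish's result that a chain of learners who sample from their posterior converges to the prior). Everything in it is invented for illustration, not drawn from any experiment discussed above: two candidate "languages" A and B, a small transmission bottleneck, and a weak innate bias toward A. Even when the chain starts on the dispreferred language, cultural transmission over many generations surfaces the weak bias in the distribution of languages used.

```python
import random

# Toy Bayesian iterated-learning chain. Hypothetical setup: each learner
# sees a few utterances from the previous generation, infers a language by
# sampling from the posterior, then produces data for the next learner.
PRIOR_A = 0.55       # weak innate bias toward language A
NOISE = 0.1          # chance an utterance comes out in the "wrong" language
N_UTTERANCES = 3     # transmission bottleneck
GENERATIONS = 20000

def produce(lang):
    """Speaker of `lang` emits an utterance, occasionally erring."""
    other = 'B' if lang == 'A' else 'A'
    return lang if random.random() > NOISE else other

def likelihood(data, lang):
    """Probability of the observed utterances given a language."""
    p = 1.0
    for d in data:
        p *= (1 - NOISE) if d == lang else NOISE
    return p

def learn(data):
    """Sample a language from the posterior given observed utterances."""
    post_a = likelihood(data, 'A') * PRIOR_A
    post_b = likelihood(data, 'B') * (1 - PRIOR_A)
    return 'A' if random.random() < post_a / (post_a + post_b) else 'B'

random.seed(0)
lang = 'B'  # the chain starts on the *dispreferred* language
counts = {'A': 0, 'B': 0}
for _ in range(GENERATIONS):
    data = [produce(lang) for _ in range(N_UTTERANCES)]
    lang = learn(data)
    counts[lang] += 1

# The long-run fraction of generations using A reflects the weak prior,
# despite the chain starting on B.
print(f"fraction of generations using A: {counts['A'] / GENERATIONS:.2f}")
```

With sampling learners the chain's stationary distribution mirrors the prior; outright amplification of a weak bias (the stronger claim above) requires a slightly richer model, such as learners who maximize rather than sample, but the basic point — that transmission, not just biology, shapes the outcome — survives in this minimal form.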
However, Chomsky seems disillusioned with the whole field of what he calls 'the evolution of communication'. At least we didn't get it as bad as exemplar theory, which he dismisses as "so outlandish it's not worth thinking about".
[Edit: I originally attributed Mark Liberman instead of Phillip Lieberman. Now I've made this error in both directions!]
This month sees MIT's Brains, Minds, and Machines symposium. The opening panel discussion, moderated by Steven Pinker, called for a reboot in artificial intelligence. The panel consisted of Noam Chomsky, Marvin Minsky, Patrick Winston, Susan Carey, Emilio Bizzi, and Sidney Brenner. Most panelists called for a return to old-style AI research methods, as opposed to the narrower applications of AI seen today. An article in Technology Review summarizes Chomsky's contribution:
Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. "That's a notion of [scientific] success that's very novel. I don't know of anything like it in the history of science," said Chomsky.
I wondered what people thought of this argument, and how it relates to the computational and statistical models of language that are becoming so fashionable these days.
If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you'll find this blurb:
Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT's Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky's office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.
I've neither listened to the podcast nor read the transcript—both available here. But who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn't look like semantics to me; it looked like syntax.
Then I found Syd Lamb's stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax and (what little) semantics he had. That elegance appealed to me. Still does, & I've figured out how to package a very robust semantics into Lamb's diagrammatic notation. But that's another story.
Noam Chomsky is one of the most hysterically abused figures in the world today. Even his critics have to concede that his work inventing the field of linguistics -- and so beginning to decode the structure of how language is formed in the human brain -- makes him one of the most important intellectuals alive.
I agree that Chomsky is an important intellectual figure, and his massive contributions to linguistics are well-documented, but he did not invent the field. Some might say he reinvented it... Still, I'm not sure how favourably history will view the long shadow Chomsky has cast over linguistics. My own opinion, for what it's worth, is that he's largely been a positive influence, even if I find myself disagreeing with a lot of his major ideas.