After passing my final exams I feel that I can relax a bit and have the time to read a book again. So instead of reading a book that I need to read purely for ‘academic reasons’, I thought I’d pick one I’d thoroughly enjoy: James Hurford’s “The Origins of Grammar”, which clocks in at a whopping 808 pages.
I’m still reading the first chapter (which you can read for free here) but I thought I’d share some of his analyses of “Animal Syntax.”
Hurford’s general conclusion is that despite what you sometimes read in the popular press,
“No non-human has any semantically compositional syntax, where the form of the syntactic combination determines how the meanings of the parts combine to make the meaning of the whole.”
The crucial notion here is that of compositionality. Hurford argues that we can find animal calls and songs that are combinatorial, that is, songs and calls in which elements are put together according to some kind of rule or pattern. But what we do not find, he argues, are the kinds of combinations where the elements put together each have a specified meaning and the whole song, call or communicative assembly “means something which is a reflection of the meanings of the parts.”
To illustrate this, Hurford cites the call system of putty-nosed monkeys (Arnold and Zuberbühler 2006). These monkeys have only two different call signals in their repertoire: a ‘pyow’ sound that ‘means’, roughly, ‘LEOPARD’, and a ‘hack’ sound that ‘means’, roughly, ‘EAGLE’.
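The distinction can be made concrete with a toy model. Below is a minimal Python sketch (the sequence meanings here are invented for illustration, not taken from Arnold and Zuberbühler's findings): in a compositional system the meaning of a sequence is computed from the meanings of its parts, while in a merely combinatorial system whole sequences are built by rule but carry their own arbitrary meanings.

```python
# Toy illustration of compositional vs. merely combinatorial signalling.
# Sequence meanings below are hypothetical, for illustration only.

# Compositional: each element has a meaning, and the meaning of a
# sequence is a function of the meanings of its parts.
PART_MEANINGS = {"pyow": "LEOPARD", "hack": "EAGLE"}

def compositional_meaning(calls):
    """Meaning of the whole = ordered combination of part meanings."""
    return " THEN ".join(PART_MEANINGS[c] for c in calls)

# Merely combinatorial: sequences are assembled by a pattern, but each
# whole sequence has its own arbitrary meaning, not derived from the parts.
SEQUENCE_MEANINGS = {
    ("pyow",): "LEOPARD",
    ("hack",): "EAGLE",
    ("pyow", "pyow", "hack"): "LET'S MOVE",  # arbitrary: not LEOPARD+LEOPARD+EAGLE
}

def combinatorial_meaning(calls):
    return SEQUENCE_MEANINGS[tuple(calls)]

print(compositional_meaning(["pyow", "hack"]))          # LEOPARD THEN EAGLE
print(combinatorial_meaning(["pyow", "pyow", "hack"]))  # LET'S MOVE
```

The point of the sketch is that only in the first system does the form of the combination determine how the meanings of the parts combine, which is exactly the property Hurford says no non-human system has.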
In phonetics and phonology there is an important distinction between two broad categories of sounds: consonants and vowels. For this post, however, I will be focusing on the second, which some consider the more problematic of the two. So, what are vowels? For one, they probably aren’t just the vowels (a, e, i, o, u) you were taught in school. This is one of the big problems when teaching the sound system of a language with an entrenched writing system, such as English, especially when there is a big disconnect between the sounds you make in speech and the representation of those sounds in orthography.

To give a simple example: how many different vowels are there in bat, bet, arm, and say? Well, if you were in school, then a typical answer would be two: a and e. In truth, from a phonological standpoint, there are four different vowels: [æ], [e], [ɑː], [eɪ]. The point that vowel sounds are different from vowel letters is an easy one to get across. The difficulty arises in actually providing a working definition. So, again, I ask:
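The letter/sound mismatch above can be shown with a tiny lookup table. This is only an illustrative sketch using the four words and the broad transcriptions already given (accents vary, so treat the transcriptions as approximate):

```python
# Vowel letters vs. vowel sounds in four English words.
# Broad transcriptions as given above; illustrative only.
WORDS = {"bat": "æ", "bet": "e", "arm": "ɑː", "say": "eɪ"}

VOWEL_LETTERS = set("aeiou")

# Distinct vowel *letters* across the four words:
letters = {ch for word in WORDS for ch in word if ch in VOWEL_LETTERS}

# Distinct vowel *sounds* (phonological vowels):
sounds = set(WORDS.values())

print(sorted(letters))  # ['a', 'e']  -> the "school" answer: two vowels
print(len(sounds))      # 4           -> four different vowel sounds
```

Same four words, two answers: counting letters gives two vowels, counting sounds gives four.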
What are vowels?
What I’m going to try to do in this series of posts is follow my phonology module at Cardiff. As such, these posts are essentially my notes on the topic, and may not always come across too clearly. First, I thought it would be useful to give quick definitions of both phonetics and phonology, before moving on to discuss the anatomical organisation of our vocal organs.
Phonetics and Phonology
To begin, phonetics, often referred to as the science of speech sounds, is concerned with the physical production, acoustic transmission and perception of human speech sounds (see: phone). One key element of phonetics is the use of transcription to provide a one-to-one mapping between phones and written symbols (something I’ll come back to in a later post). In contrast, phonology focuses on the systematic use of sound in language to encode meaning. So, whereas phonetics is specifically concerned with human speech sounds, phonology, despite being grounded in phonetics, links to other levels of language through abstract sound systems and gestures. SIL provides a useful little diagram showing where phonetics and phonology lie in relation to other linguistic disciplines:
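That "one-to-one mapping between phones and written symbols" is, formally, just an invertible lookup: each phone gets exactly one symbol and no symbol is reused. A minimal sketch (the phone labels here are hypothetical placeholders, not a real transcription scheme):

```python
# A transcription is (ideally) a one-to-one mapping: each phone has
# exactly one symbol, so the mapping can be inverted without loss.
# Phone labels are hypothetical placeholders for illustration.
phone_to_symbol = {
    "voiced_bilabial_plosive": "b",
    "open_front_vowel": "a",
    "voiceless_alveolar_plosive": "t",
}

# One-to-one means no two phones share a symbol, so inversion is safe.
assert len(set(phone_to_symbol.values())) == len(phone_to_symbol)
symbol_to_phone = {s: p for p, s in phone_to_symbol.items()}

def transcribe(phones):
    """Map a sequence of phones to its written transcription."""
    return "".join(phone_to_symbol[p] for p in phones)

print(transcribe(["voiced_bilabial_plosive", "open_front_vowel",
                  "voiceless_alveolar_plosive"]))  # bat
```

English orthography, by contrast, fails this invertibility test badly, which is precisely why transcription is needed in the first place.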
Just thought I’d make three quick announcements:
First, I decided to drag myself into the age of 140-characters and (albeit begrudgingly) joined Twitter. I say begrudgingly because my day is already packed with plenty of distractions besides adding Twitter into the mix… But I noticed it’s the place where all the cool science bloggers are gathering, and gradually coagulating into an amorphous cloud of science networking, so I thought I might as well sign up (ever the follower, never the trendsetter).
Second, if you happen to find yourself in Edinburgh on Friday October 1st, then you can come and see me and Sean presenting our respective posters (click here and here for the abstracts) at the 24th Language at Edinburgh Lunch. I’m sure, for me at least, it’ll be quite a sobering experience, highlighting how little I know about phonology, phonetics, sociolinguistics and demography. On the plus side, I’ll get some free food.
Lastly, if you happened to click on my poster abstract, then the more observant of you will have noticed that I’m now affiliated with Cardiff University. Yes, that’s right, I’m doing yet another master’s course. This time it’s at the Centre for Language and Communication Research, the idea being that I’ll get a more solid foundation in research methodology etc. before pursuing a PhD or a research assistant position.
That is all.
According to the evolutionary psychologist Geoffrey Miller and his colleagues (e.g. Miller 2000b), uniquely human cognitive behaviours such as musical and artistic ability and creativity should be considered both deviant and special. This is because evolutionary biologists have traditionally struggled to fathom exactly how such seemingly superfluous cerebral assets would have aided our survival. By the same token, they have observed that our linguistic powers are more advanced than seems necessary merely to get things done: our command of an expansive vocabulary and elaborate syntax allows us to express an almost limitless range of concepts and ideas above and beyond the immediate physical world. The question is: why bother to evolve something so complicated if it wasn’t really all that useful?
Miller’s solution is that our most intriguing abilities, including language, have been shaped predominantly by sexual selection rather than natural selection, in the same way that large cumbersome ornaments, bright plumages and complex song have evolved in other animals. As one might expect then, Miller’s theory of language evolution has been hailed as a key alternative to the dominant view that language evolved because it conferred a distinct survival advantage to its users through improved communication (e.g. Pinker 2003). He believes that language evolved in response to strong sexual selection pressure for interesting and entertaining conversation because linguistic ability functioned as an honest indicator of general intelligence and underlying genetic quality; those who could demonstrate verbal competence enjoyed a high level of reproductive success and the subsequent perpetuation of their genes.
If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you’ll find this blurb:
Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT’s Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky’s office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.
I’ve neither listened to the podcast nor read the transcript (both available here). But who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn’t look like semantics to me; it looked like syntax.
Then I found Syd Lamb’s stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax and (what little) semantics (he had). That elegance appealed to me. Still does, & I’ve figured out how to package a very robust semantics into Lamb’s diagrammatic notation. But that’s another story.
Throughout much of our history language was transitory, existing only briefly within its speech community. The invention of writing systems heralded a way of recording some of its recent history, but for the most part linguists lack the stone tools archaeologists use to explore the early history of ancient technological industries. The question of how far back we can trace the history of languages is therefore an immensely important, and highly difficult, one to answer. However, it’s not impossible. Like biologists, who use highly conserved genes to probe the deepest branches on the tree of life, some linguists argue that highly stable linguistic features hold the promise of tracing ancestral relations between the world’s languages.
Previous attempts using cognates to infer relatedness between languages are generally limited to predictions within the last 6,000–10,000 years. In the present study, Greenhill et al. (2010) decided to examine more stable linguistic features than the lexicon, arguing:
In an effort to update this blog regularly, I’ve decided to take the lazy route and post up a list of abstracts. This will only happen once a week, but it’s a useful resource (for me at least), and will usually be an indicator of what articles I’m going to write about in the near future.