Monkeys can read! (not really)


OMG! Monkeys can read! Planet of the Apes is coming! Not really. A new paper in Science by Grainger, Dufau, Montant, Ziegler and Fagot at Aix-Marseille University found that Guinea baboons can be trained to differentiate between four-letter English words and nonsense words. One monkey, called Dan, could recognise up to 300 written words – and by “recognise” I mean he knew those words could give him a treat, not that he could recognise that they signified objects in the world, which is what we mean when we say that a human has “recognised” a word. It’s a minefield, isn’t it?

I wonder to what degree this is just a memory test, or whether the monkeys really are noticing relations between the letters that make up the words, as opposed to the nonsense words. The paper probably answers this. Bloody paywalls… Either way, I don’t think this is evidence that the role of phoneme-letter matching in humans learning to read should be undermined.

6 thoughts on “Monkeys can read! (not really)”

  1. I found it funny when she read “human English readers”…

    Anyway, it’s obvious that baboons have the potential to do things that they wouldn’t normally do in their ordinary lives. The same is true of humans, and possibly of any other living organism we can imagine. In my opinion, one common mistake in the study of human language is to equate potential and performance. We only use a tiny percentage of our potential for communication, even for oral expression, and that percentage corresponds to the amount that we need to function as social units. The rest is left for self-exploration, creativity, innovation, etc. Compared to our whole potential, languages are just a convenient tool. I guess the same applies to baboons and their communicative systems.

  2. “I wonder to what degree this is just a memory test or if the monkeys really are noticing relations between the letters which make up the words, as opposed to the nonsense words.”

    Not sure if I should be cursing paywalls too, or people who blog about papers without having read them. Maybe both. Anyway, it’s not just a memory test, although there is some potential for memory of specific words to play a role. The baboons are exposed to a relatively small set of target words, plus non-words drawn from a much larger set – as the authors point out, this means they will see actual words repeat more often during training than non-words, which might help their performance (e.g. if something is familiar, they should respond that it’s a word).

    However, this definitely isn’t the whole story, since the baboons respond differently to words and non-words *on their first encounter* (I presume this effect develops with training, although I couldn’t find the data for that in the paper or supporting materials): they are less likely to reject a novel (i.e. previously unseen) word than a novel non-word. Presumably they are doing this because they have learnt something about the statistical properties of words and non-words, which enables them to generalise to novel cases.

    The authors select the words and non-words such that the words contain relatively frequent (in English) bigrams, whereas the non-words involve less frequent (but possible) bigrams. Of course the baboons don’t have access to any English other than the words involved in the experiment, so these statistical differences must filter through to the training words that the baboons actually see (i.e. the bigrams which occur in words occur across multiple words and not so much among non-words). Maybe worth thinking a bit about how this actually works, given that words repeat and non-words don’t. Anyway, the fact that they can generalise appropriately shows they are doing something non-trivial.

    It’s already been shown that at least one non-human primate can do the necessary computation of bigram frequencies in the auditory modality (see http://www.bcs.rochester.edu/people/aslin/pdfs/Hauser_Newp_Asli2001.pdf), but it’s nice to have that result confirmed and extended to another species.
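The generalisation mechanism described above can be sketched as a toy simulation. All the letter strings below are my own made-up stand-ins, not the study’s actual stimuli; the point is just that bigram frequencies learned from the training words alone let a threshold classifier accept a never-seen word over a never-seen non-word.

```python
from collections import Counter

def bigrams(word):
    """Adjacent letter pairs, e.g. 'KITE' -> ['KI', 'IT', 'TE']."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Hypothetical training words (made up, not the study's stimuli).
# The baboons see no English beyond the stimuli, so any statistical
# cue has to be learnable from the training strings themselves.
train_words = ["KITE", "LAND", "THEM", "VAST", "WITH", "SAND", "HAND", "MATH"]

# Bigram frequencies across the training words.
word_bigrams = Counter(b for w in train_words for b in bigrams(w))

def familiarity(string):
    """Mean training frequency of the string's bigrams."""
    bs = bigrams(string)
    return sum(word_bigrams[b] for b in bs) / len(bs)

# A never-seen word built from familiar bigrams outscores a never-seen
# non-word, so a simple familiarity threshold generalises to novel items.
print(familiarity("HATH") > familiarity("ZQXJ"))  # True
```

This is only the bigram half of the story, of course – it doesn’t model how repetition of specific training words contributes, which Kenny notes is also in play.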

  3. Hi Kenny, is it ok for people who haven’t read the paper to blog as a prompt for people who HAVE read the paper to educate them and the readers? If not sorry, and thanks for taking the bait anyway. Really interesting stuff.

  4. Language Log post on this article: http://languagelog.ldc.upenn.edu/nll/?p=3912.
    Liberman argues that you could get performance equivalent to that achieved by the baboons without actually tracking the bigram frequencies, but instead by just noting unigram frequencies – basically, the words and non-words differ subtly in how often individual letters are used, so by combining this information across all the letters in a word you could do pretty well on the word/non-word differentiation task. Although, like I say above, it wouldn’t be particularly surprising if baboons can track bigram frequencies, since other monkeys seem to be able to.
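The unigram alternative is even simpler to sketch – ignore letter order entirely and score a string by the training frequency of its individual letters. Again, the stimuli here are invented for illustration:

```python
from collections import Counter

# Hypothetical stimuli (made up): if non-words skew toward rarer letters,
# then order-free single-letter frequencies alone leak the word/non-word
# distinction, with no bigram tracking required.
train_words = ["KITE", "LAND", "THEM", "VAST", "WITH", "SAND"]

# Frequency of each individual letter across the training words.
letter_freq = Counter(ch for w in train_words for ch in w)

def unigram_score(string):
    """Sum of per-letter training frequencies; letter order is ignored."""
    return sum(letter_freq[ch] for ch in string)

# A string of common training letters outscores one of rare letters,
# even though neither string appeared in training.
print(unigram_score("SALT") > unigram_score("QXZJ"))  # True
```

Whether the baboons are using this cheaper cue or genuine bigram statistics is exactly the question Liberman raises.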

  5. I’m not nearly as acquainted with the statistical learning literature as I oughta be, so thanks for this Kenny. It’s pretty cool. I’m just going to think and speculate things aloud here, don’t mind me.

    I’d be interested to know if humans perform similarly when they’re confronted with words and non-words of an unfamiliar/alien language – or maybe using the preferential looking of prelinguistic kids using auditory stimuli (that one might be a dumb stretch, though, I dunno). If they do (and I don’t see why they couldn’t, really) we probably still can’t know if this is the same strategy kids employ when they’re learning to read, since I guess phoneme mapping to letters comes before confronting words. I wanted to say that could interfere, but there’s no real reason why it should, actually.

    And given the arbitrariness and messiness of spelling conventions (like in English, notoriously) maybe statistical learning would give you a pretty good outcome on guessing new words compared to phoneme-letter matching alone. That this kind of learning happens in the auditory modality already (in baboons at least) maybe suggests it could be the sort of strategy behind learning things like, e.g. permissible onset clusters for a language and what-have-you, which means that even when you’re mapping orthography to sounds, the way you discriminate likely sounds may be the result of statistical learning anyway.

    An optimal strategy for a wee ‘un learning to read might be that statistical learning gets you about 75% of the way there (assuming children are like baboons, which I often do), and you can then deploy your newly learned phoneme-letter associations (which seem a bit harder to learn and use) to discern the other 25%, by checking whether a sound is probable given what you know about your language.
