Possible Stroke on Live Television

I was recently alerted to this video. It’s short, and the rest of the post won’t make sense without watching it.

My mate Ally said, “I’m sure there’s some kind of linguistic point to be made here but I have no idea what it is.” My first few times through the video, I was also confused. The comment section, however, is where things become clear. At the risk of being one of those reporters who cite Twitter posts as sources, TopGunMD1 stated:

“Its obvious she just had a STROKE! She is currently suffering from Wernicke’s aphasia, its a very serious problem. I hope her producer realized this and took her to the hospital immediately. If you ever see someone talk like that, call an ambulance or take them to the ER immediately.”

This is most likely a correct diagnosis. What’s happened here is that she’s had a stroke that dealt a debilitating blow to her Wernicke’s area, and she’s lost her recall for words. This might be dismissed as ordinary stumbling, were it not for the fact that she is a reporter, who historically should stumble only rarely, and never this much. Around a third of stroke sufferers develop speech problems, some of which can later be reversed. It is unclear here whether she is experiencing aphasia alone or speech production errors as well.

It looks like her colleagues recognised her problem very quickly and cut to the next scheduled item. She has also been taken to the hospital for tests – let’s hope this isn’t permanent.

Prairie Dog Communication

A recent NPR radio show covered the research of the biosemiotician Con Slobodchikoff of Northern Arizona University on prairie dog calls. The piece is aimed at a general audience, but still might be worth listening to.

We’ve all (I hope) heard of the vervet monkeys, which have different alarm calls for different predators: leopard (Panthera pardus), martial eagle (Polemaetus bellicosus), and python (Python sebae) (Seyfarth et al. 1980). For each of these predators, an inherent, unlearned call is uttered by the first monkey to spot it, after which the other vervets respond in a suitable manner – climbing a tree, seeking cover, and so on. It appears, however, that prairie dogs have a similar system – and one that is a bit more complicated.

Slobodchikoff conducted a study in which three girls (probably underpaid, underprivileged, and underappreciated (under)graduate students) walked through a prairie dog colony wearing green, yellow, and blue shirts. The call of the first prairie dog to notice them was recorded, after which the prairie dogs all fled into their burrows. Each walker then went through the entire colony, took a ten-minute break, changed shirts, and did it again.

What is interesting is that the prairie dogs have significantly different calls (important, as the calls sound pretty much identical to human ears) for blue and yellow, but not for yellow and green. This is down to the dichromatic nature of prairie dog eyesight (for a full study of the retinal photoreceptors of subterranean rodents, consult Schleich et al. 2010). The distinction between blue and yellow is important, however, as there isn’t necessarily any reason that blue people are more dangerous to prairie dogs than yellow ones. “This in turn suggests that the prairie dogs are labeling the predators according to some cognitive category, rather than merely providing instructions on how to escape from a particular predator or responding to the urgency of a predator attack.” (Slobodchikoff et al. 2009, p. 438)

Another study was then done in which two towers were built and a line was strung between them. When cut-out shapes were slid down the line, the prairie dogs were able to distinguish a triangle from a circle, but not a circle from a square. So the prairie dogs are not perfect at encoding information. The conclusion still stands, however, that more information is encoded in the calls than is strictly relevant to a suitable reaction (unless one were to argue that evolutionary pressure existed on prairie dogs to distinguish blue predators from yellow ones).

NPR labels this ‘prairiedogese’, which makes me shiver and reminds me of Punxsutawney, Pennsylvania, where Bill Murray was stuck in a vicious cycle in the movie Groundhog Day, forced every day to watch the mayor recite the translated proclamation of the Groundhog, which of course spoke in ‘groundhogese’. Luckily, however, there won’t be courses in this ‘language’.

References:

Schleich, C., Vielma, A., Glösmann, M., Palacios, A., & Peichl, L. (2010). Retinal photoreceptors of two subterranean tuco-tuco species (Rodentia, Ctenomys): Morphology, topography, and spectral sensitivity. The Journal of Comparative Neurology, 518(19), 4001-4015. DOI: 10.1002/cne.22440

Seyfarth, R., Cheney, D., & Marler, P. (1980). Monkey responses to three different alarm calls: evidence of predator classification and semantic communication. Science, 210(4471), 801-803. DOI: 10.1126/science.7433999

Slobodchikoff, C. N., Paseka, A., & Verdolin, J. L. (2009). Prairie dog alarm calls encode labels about predator colors. Animal Cognition, 12(3), 435-439. PMID: 19116730

The Danish Language Collapse

On a lighter note, some writers at the Norwegian show Uti Vår Hage wondered what would happen if a language collapsed. It’s quite funny – they do the standard thing of mocking Danish along the way. The video reminds me of another joke I heard recently, in which the Dutch refer to Afrikaans as ‘loldutch’. For instance, giraffe is kameelperd, literally ‘camel horse’. Weird. For more examples, look at the Facebook page of people making fun of Afrikaans. It’s apparently amusing, but I don’t understand a word.

Note: There are a few swear words in the video.

Dialects in Tweets

A recent study, published in the proceedings of the Empirical Methods in Natural Language Processing conference (EMNLP) in October and presented at the LSA conference last week, found evidence of geographical lexical variation in Twitter posts. (For news stories on it, see here and here.) Eisenstein, O’Connor, Smith and Xing took a batch of Twitter posts from a released corpus containing 15% of all posts during one week in March. In total, they kept 4.7 million tokens from 380,000 messages by 9,500 users, all geotagged from within the continental US. They cut out messages from overly connected users, keeping only messages from users with fewer than a thousand followers and followees (even so, the average author published around 40 posts per day, which some might see as excessive). They also took only messages sent from iPhones and BlackBerries, which have geotagging functions. Eventually, they ended up with a vocabulary of just over 5,000 words, a quarter of which did not appear in the spell-checking lexicon aspell.
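
As a rough illustration of the kind of filtering involved, here is a minimal sketch in Python. The field names, bounding box, and thresholds are my own assumptions for illustration, not the authors’ actual pipeline.

```python
# A minimal sketch of corpus filtering along the lines described above.
# Field names, the bounding box, and thresholds are illustrative assumptions.
CONTINENTAL_US = {"lat": (24.5, 49.5), "lon": (-125.0, -66.5)}

def keep_message(tweet, follower_count, followee_count):
    """Return True if a tweet passes the corpus-construction filters."""
    geo = tweet.get("geo")
    if geo is None:                                   # must be geotagged
        return False
    lat, lon = geo
    in_us = (CONTINENTAL_US["lat"][0] <= lat <= CONTINENTAL_US["lat"][1]
             and CONTINENTAL_US["lon"][0] <= lon <= CONTINENTAL_US["lon"][1])
    from_phone = any(s in tweet.get("source", "")     # clients with geotagging
                     for s in ("iPhone", "BlackBerry"))
    not_too_connected = follower_count < 1000 and followee_count < 1000
    return in_us and from_phone and not_too_connected

# Example:
tweet = {"geo": (34.05, -118.24), "source": "Twitter for iPhone", "text": "taco time"}
print(keep_message(tweet, follower_count=120, followee_count=300))  # True
```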

The Generative Model

In order to pin down lexical variation accurately, both topic and geographical region had to be ascertained. To do this, they used a generative model (seen above) that infers both jointly. Generative models work on the assumption that text is the output of a stochastic process that can be analysed statistically. By looking at large amounts of text, they were able to infer the topics being talked about. Basically, I could be thinking of a few topics – dinner, food, eating out. If I am in SF, it is likely that I will end up using the word taco in my tweet, based on those topics. What the model does is take those topics and work out which words are chosen from them, while at the same time factoring in the spatial region of the author. This way, lexical variation is easier to place accurately, whereas before, discourse topic would have significantly skewed the results (the median error drops from 650 to 500 km, which isn’t that bad, all in all).

The way it works (in summary, and paraphrasing the slides presented at the LSA annual meeting, since I’m not entirely sure of the details) is roughly as follows. For each author, the model a) picks a region r from P(r | ϑ), b) picks a location y from that region’s Gaussian P(y | ν_r, Λ_r), and c) picks a topic distribution θ from P(θ | α). For each token, it then a) picks a topic z from P(z | θ), and b) picks a word w from the region-specific topic distribution P(w | z, r). Or something like that (sorry). For more, feel free to download the paper from Eisenstein’s website.
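
For concreteness, here is a minimal sketch of that generative story in Python, assuming multinomial region and topic choices, Gaussian locations, and Dirichlet-distributed word distributions. The sizes and parameter names are purely illustrative, and the paper couples the per-region word distributions to shared base topics in a more sophisticated way than the independent sampling used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, purely illustrative.
n_regions, n_topics, vocab_size = 3, 5, 1000
n_authors, tokens_per_author = 10, 20

# Global parameters of the generative story.
region_probs = np.full(n_regions, 1.0 / n_regions)                 # P(r | theta): region proportions
region_means = rng.uniform([-125, 25], [-67, 49], (n_regions, 2))  # nu_r: lon/lat centre per region
region_cov = np.eye(2) * 4.0                                       # Lambda_r: spatial covariance (shared here)
alpha = np.full(n_topics, 0.1)                                     # Dirichlet prior on per-author topics
# One word distribution per (topic, region) pair, sampled independently here.
word_dists = rng.dirichlet(np.full(vocab_size, 0.01), size=(n_topics, n_regions))

for author in range(n_authors):
    r = rng.choice(n_regions, p=region_probs)                      # pick a region
    y = rng.multivariate_normal(region_means[r], region_cov)       # pick a location
    theta = rng.dirichlet(alpha)                                   # pick a topic distribution
    tokens = []
    for _ in range(tokens_per_author):
        z = rng.choice(n_topics, p=theta)                          # pick a topic
        w = rng.choice(vocab_size, p=word_dists[z, r])             # pick a word given topic and region
        tokens.append(int(w))
    print(f"author {author}: region {r}, location ({y[0]:.1f}, {y[1]:.1f}), first tokens {tokens[:5]}")
```

Inference then runs in the opposite direction: given the observed words and locations, the latent regions and topics are recovered, which, as I understand it, is what allows discourse topic to be separated from geography.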

Well, what did they find? Basically, Twitter posts do show massive variation based on region. There are geographically specific proper names, of course, and topics of local prominence, like taco in LA and cab in NY. There’s also variation in foreign-language words, with pues in LA but papi in SF. More interestingly, however, there are major differences in regional slang. ‘uu’, for instance, is found pretty much exclusively on the Eastern seaboard, while ‘you’ stretches across the nation (with ‘yu’ covering only a slightly smaller area). ‘suttin’ for something is used only in NY, as is ‘deadass’ (meaning very) and, on an even smaller scale, ‘odee’, while ‘af’ is used for very in the Southwest, and ‘hella’ in most of the Western states.

Dialectal variation for 'very'

More importantly, though, the study shows that this model lets us separate geographical from topical variation, and discover geographical variation from the text itself rather than relying solely on geotagging. The authors hope that future work will cover the differences between spoken variation and variation in digital media. And I, for one, think that’s #deadass cool.

Eisenstein, J., O’Connor, B., Smith, N. A., & Xing, E. P. (2010). A latent variable model for geographic lexical variation. Proceedings of EMNLP.

Fungus, -i. 2nd Decl. N. Masculine – or is it?: On Gender

In an attempt to write out my thoughts for others, instead of continually building them up in saved stickies, folders full of PDFs, and hastily scribbled lecture notes as if waiting for the spontaneous incarnation of what looks increasingly like a dissertation, I’m going to give a glimpse today of what I’ve been looking into recently. (Full disclosure: I am not a biologist, and was told specifically by my high-school teacher that it would be best if I didn’t take another science class. Also, I liked Latin too much, which explains the title.)

It all started, really, with trying to get my flatmate Jamie into research blogging. His intended career path is mycology, where there are apparently fewer posts available for graduate study than in Old English syntax. As he was setting up the since-neglected Fungi Imperfecti, he pointed this article out to me: A Fungus Walks Into A Singles Bar. The post explains briefly how fungi have a very complicated sexual reproduction system.

Fungi are eukaryotes, like all other organisms with complex cell structures. However, they sit in a kingdom of their own, for a variety of reasons. Fungi are not the same as mushrooms, which are only the fruiting bodies of certain fungi. Their reproductive mechanisms are unexpectedly complex, in that the normal conventions of sex do not apply. Not all fungi reproduce sexually, and many are isogamous, meaning that their gametes look the same and differ only in the alleles they carry at particular loci called mating-type regions. Some fungi have only two mating types, which can give the illusion of something like animal sexes. Others, however, like Schizophyllum commune, have over ten thousand (although these interact in an odd way, such that matings are only productive if the mating-type regions are highly compatible (Uyenoyama 2005)).

Some fungi are homothallic, meaning that self-mating and reproduction are possible. This means that a spore carries within it two dissimilar nuclei, ready to mate – the button mushroom apparently does this (yes, the kind you buy in a supermarket). Heterothallic fungi, on the other hand, merely need to find another individual that isn’t of the same mating type – which is pretty easy if there are hundreds of options. Other fungi can’t reproduce together at all, but can, interestingly enough, fuse vegetatively to share resources. Think of mind-melding, like Spock. Alternatively, just think of mycelia fusing together.

In short, the system is ridiculously confusing, and not at all like the simple bipolar genders of, say, humans (if we take the canonical view of human gender, meaning only two). I’m still trying to find adequate research on the origins of this sort of system. Understandably, it’s difficult. Mycologists agree:

“The molecular genetical studies of the past ten years have revealed a genetic fluidity in fungi that could never have been imagined. Transposons and other mobile elements can switch the mating types of fungi and cause chromosomal rearrangements. Deletions of mitochondrial genes can accumulate as either symptomless plasmids or as disruptive elements leading to cellular senescence…[in summary,] many aspects of the genetic fluidity of fungi remain to be resolved, and probably many more remain to be discovered.” (Deacon 1997: p. 157)

At this point you’re probably asking why I’ve posted this here. Well, perhaps understandably, I immediately started drawing comparisons between mycological mating types and linguistic agreement. First, mating type isn’t limited to bipolarity – neither is grammatical gender. Nearly 10% of the 257 languages coded for number of genders on WALS have more than five genders. Ngan’gityemerri seems to be winning, with 15 different genders (Reid 1997). Gender distinctions generally have a semantic core – one which need not be based on sex, but can cover categories like animacy. Gender can normally be diagnosed by agreement marking, which, genetic analysis of the parent aside, is roughly analogous to diagnosing the mating type of fungal offspring. Gender can be a fluid system, susceptible to decay (mostly through attrition) but also to reformation and realignment – the same is true of mating types. (For more, see Corbett 1991.)

As with all biological-to-linguistic analogies, the connections are a bit tenuous. I’ve been researching fungal reproduction partly for the sake of dispelling the old “that’s too complex to have evolved” argument, which is probably the most fun point to argue against creationists. Mostly, however, I’ve been doing this because fungi and linguistic gender distinctions are just so damn interesting.

If anyone is interested, I’ll keep you updated on mycological evolution and the linguistic analogues I can tentatively draw. For now, though, I’ve really got to get back to studying for my examination in two days. Which means I’ve got to stop thinking about a future post detailing why “Prokaryotic evolution and the tree of life are two different things” (Bapteste et al. 2009)…

References:

  • Corbett, G. (1991). Gender. Cambridge: Cambridge University Press.
  • Deacon, J. W. (1997). Modern Mycology. Oxford: Blackwell Science.
  • Reid, N. (1997). In Harvey, M. & Reid, N. (eds.), Nominal Classification in Aboriginal Australia. Philadelphia, PA: John Benjamins.

  • Uyenoyama, M. (2005). Evolution under tight linkage to mating type. New Phytologist, 165(1), 63-70. DOI: 10.1111/j.1469-8137.2004.01246.x
  • Bapteste, E., O’Malley, M. A., Beiko, R. G., Ereshefsky, M., Gogarten, J. P., Franklin-Hall, L., Lapointe, F. J., Dupré, J., Dagan, T., Boucher, Y., & Martin, W. (2009). Prokaryotic evolution and the tree of life are two different things. Biology Direct, 4. PMID: 19788731

On Phylogenic Analogues

A recent post by Miko on Kirschner and Gerhart’s work on developmental constraints, and their implications for evolutionary biology, caught my eye because of the possible analogues one could draw with language in mind. It starts by saying that developmental constraints are the most intuitive of all the known constraints on phenotypic variation. Essentially, whatever evolves must evolve from its starting point, and it cannot ignore the features of the original. Thus, a winged horse would not occur, as six limbs would violate the basic bauplan of tetrapods. In the same way, a daughter language cannot evolve without taking into account the language it derives from, as well as language universals. But instead of viewing this as a constraint which limits the massive variation we see, biologically or linguistically, between different phenotypes, developmental constraints can be seen as a catalyst for regular variation.

A pretty and random tree showing variation among IE languages.

Looking back over my courses, I’m surprised by how little I’ve noticed (as opposed to how much was actually said) about the reasons for linguistic variation. The modes of change are often noted: <th> is fronted in Fife, for instance, giving the ‘Firsty Ferret’ rather than the ‘Thirsty Ferret’ as a brew. But why <th> is fronted at all isn’t explained beyond cursory hypotheses. That’s a bit beside the point, though: the point is that phenotypic variation is not necessarily random, as there are constraints – due to the “buffering and canalizing of development” – which limit variation to a defined range of possibilities. There clearly aren’t any homologues between biological embryonic processes and linguistic constraints, but there are developmental analogues: the input bottleneck (paucity of data) given to children, learnability constraints, the necessity of communication, certain biological constraints on production and perception, and so on. These all act on language so that variation occurs only within certain channels, many of which should be predictable.

Another interesting point raised by the article is the robustness of living systems to mutation. The buffering effect of embryonic development results in the accumulation of ‘silent’ variation, which has been termed evolutionary capacitance. Silent variation can lie quiet, accumulating, not noticeably changing the phenotype until environmental or genetic conditions unmask it. I’ve seen little research (not that I don’t expect there to be plenty) on the theoretical implications of evolutionary capacitance for language change – in other words, on how much small variation a language can accumulate without affecting comprehension before a new language emerges (granted, where one language ends and another begins is somewhat arbitrary, depending on the speech community). Are some languages more robust than others? Is robustness a quality which makes a language more likely to be used in multilingual settings – for instance, in New Guinea, if seven local languages are mutually unintelligible, is the local lingua franca forced by its environment to be more robust in order to maximise comprehension?

The article goes on to discuss the cost of robustness: stasis. This can be seen clearly in Late Latin, which was more robust than its daughter languages: it was still needed for communication across regions where the vernacular had already branched off into the Romance languages, so an older form was required for mutual comprehension. Thus, Latin remained in use well after its spoken descendants had evolved into other languages. Another example would be Homeric Greek, which retained many features lost in Attic, Doric, Koine, and other dialects, as it was used only in a particular context and was therefore resistant to change. All of this has been studied before, and better than I can sum it up here. But the point is that analogues can clearly be drawn, and some interesting ideas about language become apparent only when it is seen in this light.

A good example, also covered, is what Kirschner and Gerhart call exploratory processes – processes which allow variation to occur where other, dependent variables are forced to change. The example given is growth in bone length, which requires corresponding changes in the muscular, circulatory, and other dependent systems. Exploratory processes allow for that future change in the other systems; that is, they expedite plasticity. An ad hoc linguistic example would be the loss of fixed word order, which requires that morphology step in to fill the gap: particles, affixes, or the like would have to have already paved the way for case markers to evolve, and would have to have been present to some extent under the original word-order system. (This may not be the best example, but I hope the point comes across.)

Naturally, much of this will have seemed intuitive. But, as Miko stated, these are useful concepts for thinking about evolution; and, in my own case especially, the basics ought to be brought back under scrutiny fairly frequently. Which is justification enough for this post. As always, comments are appreciated and accepted. And a possible future post: clade selection as a nonsensical way to approach phylogenic variation.

References:

Caldwell, M. (2002). From fins to limbs to fins: Limb evolution in fossil marine reptiles. American Journal of Medical Genetics, 112(3), 236-249. DOI: 10.1002/ajmg.10773

Gerhart, J., & Kirschner, M. (2007). The theory of facilitated variation. Proceedings of the National Academy of Sciences, 104(suppl. 1), 8582-8589. DOI: 10.1073/pnas.0701035104

Mapping Linguistic Phylogeny to Politics

Note: Most of the content in this post is refuted wonderfully in the comment section by one of the original authors of the paper. I highly recommend reading the comments, if you’re going to read this at all – that’s where the real meat lies. I’m keeping this post up, finally, because it’s good to make mistakes and learn from them. -Richard

§§

I had posted this already on the Edinburgh Language Society blog. I’ve edited it a bit for this blog. I should also state that this is my inaugural post on Replicated Typo; thanks to Wintz’ invitation, I’ll be posting here every now and again. It’s good to be here. Thanks for reading – and thanks for pointing out errors, problems, corrections, and commenting, if you do. Research blogging is relatively new to me, and I relish this unexpected chance to hone my skills and learn from my mistakes. (Who am I, anyway?) But without further ado:

§

In a recent article covered in NatureNews, “Societies Evolve in Steps”, Tom Currie of UCL and others, like Russell Gray of Auckland, use quantitative analysis of the Polynesian language group to plot socioanthropological movement and power hierarchies in Polynesia. This is based on previous work, available here, which I saw presented at the Language as an Evolutionary System conference last July. The article claims that the means of change for political complexity can be determined using linguistic evidence in Polynesia, along with various migration theories and archaeological evidence.

I have my doubts. The talk that Russell Gray gave suggested that there were still various theories about the migratory patterns of the Polynesians – in particular, about where they started from. What their work did was use massive supercomputers to narrow down the possibilities, by using lexicons and charting their similarities. The most probable candidates were then recorded, and their statistical probability indicated what was probably the actual course of events. This, however, is where the guessing has to stop. Remember, this is large-scale quantitative statistics. A 70% probability that one language is the root of another is not to say that that language is the root, much less that the organisation of one society determines the organisation of another. But statistics are normally unassailable – I only bring up this disclaimer because there isn’t always a clear mapping between language use and migration.
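
To make the lexical side of this a bit more concrete, here is a toy sketch of scoring pairwise similarity between languages by shared cognate classes. The data and names are invented for illustration; the actual analyses rely on curated cognate databases and Bayesian phylogenetic inference, not a raw similarity score like this.

```python
from itertools import combinations

# Invented cognate-class codings: for each language, the cognate class that
# each basic-vocabulary meaning belongs to. Purely illustrative data.
cognates = {
    "LangA": {"hand": 1, "water": 1, "fish": 2, "eye": 1, "stone": 3},
    "LangB": {"hand": 1, "water": 1, "fish": 2, "eye": 2, "stone": 3},
    "LangC": {"hand": 2, "water": 3, "fish": 1, "eye": 2, "stone": 1},
}

def shared_cognate_fraction(a, b):
    """Fraction of shared meanings for which two languages fall in the same cognate class."""
    meanings = cognates[a].keys() & cognates[b].keys()
    matches = sum(cognates[a][m] == cognates[b][m] for m in meanings)
    return matches / len(meanings)

for a, b in combinations(cognates, 2):
    print(f"{a}-{b}: {shared_cognate_fraction(a, b):.2f}")
```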

Continue reading “Mapping Linguistic Phylogeny to Politics”