Some Links #14: Can Robots create their own language?

Can Robots create their own language? Sean already mentioned this in the comments for a previous post. But as I’m a big fan of Luc Steels’ work, this video may as well go on the front page:

Speaking in Tones: Music and Language Partner in the Brain. The first of two really good articles in Scientific American. As you can guess from the title, this article looks at current research into the links between music and language, such as the overlap in brain circuitry, the vital role of prosodic qualities of speech in language development, and how the way a person hears a set of musical notes may be affected by their native language. Sadly, the article is behind a paywall, so unless you have a subscription you’ll only get to read the first few paragraphs, plus the one I’m about to quote:

In a 2007 investigation neuroscientists Patrick Wong and Nina Kraus, along with their colleagues at Northwestern University, exposed English speakers to Mandarin speech sounds and measured the electrical responses in the auditory brain stem using electrodes placed on the scalp. The responses to Mandarin were stronger among participants who had received musical training — and the earlier they had begun training and the longer they had continued training, the stronger the activity in these brain areas.

Carried to extremes: How quirks of perception drive the evolution of species. In the second good article, which, by the way, is free to view, Ramachandran and Ramachandran propose another mechanism of evolution with regard to perception:

Our hypothesis involves the unintended consequences of aesthetic and perceptual laws that evolved to help creatures quickly identify what in their surroundings is useful (food and potential mates) and what constitutes a threat (environmental dangers and predators). We believe that these laws indirectly drive many aspects of the evolution of animals’ shape, size and coloration.

It’s important to note that they are not arguing against natural selection; rather, they are simply offering an additional force that guides the evolution of a species. It’s quite interesting, even if I’m not completely convinced by their hypothesis; still, my criticisms can wait until they publish an actual academic paper on the subject.

A robotic model of the human vocal tract? Talking Brains links to the Anthropomorphic Talking Robot developed at Waseda University. Apparently it can produce some vowels. Here is a picture of the device (which looks like some sort of battle drone):

Battle Drone or Model Vocal Tract?

Y Chromosome II: What is its structure? Be sure to check out the new contributor over at GNXP, Kele Cable, and her article on the structure of the Y Chromosome. I found this sentence particularly amusing:

As you can see in Figure 1, the Y chromosome (on the right) is puny and diminutive. It really is kind of pathetic once you look at it.

Scientopia. A cool collection of bloggers have banded together to form Scientopia. With plenty of articles having already appeared, it all looks very promising. In truth, it’s probably not going to be as successful as ScienceBlogs, largely because it doesn’t pay contributors, and, well, nothing is ever going to be as big as ScienceBlogs was at its peak. This new ecology of the science blogosphere is well articulated in a long post by Bora over at A Blog Around the Clock.

Broca's Area and Hierarchical Structure Building

Considering I devoted two blog posts (pt.1 & pt.2) to Broca’s area and its role in processing hierarchically organised sequences, I’m happy to report the following from a Talking Brains post on Disentangling syntax and intelligibility:

Hierarchical structure building can be achieved without Broca’s area involvement.

I’ve only just finished reading the post and, despite having some thoughts on the topic, I’m going to read the actual paper in question (Disentangling syntax and intelligibility in auditory language comprehension) before commenting, especially since the authors, Friederici et al., don’t seem to arrive at the same conclusions as the bloggers over at Talking Brains. Still, as far as I can tell, this is only looking at syntactic information within speech, and doesn’t really tell us anything about the processing of hierarchically organised sequences in other linguistic (e.g. written language) and non-linguistic (e.g. tool manufacturing) domains.

Here’s the abstract for the paper in question:

Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech.