What Makes Humans Unique (IV): Shared Intentionality – The Foundation of Human Uniqueness?

Shared or collective intentionality is the ability and motivation to engage with others in collaborative, co-operative activities with joint goals and intentions (Tomasello et al. 2005). The term also implies that the collaborators’ psychological processes are jointly directed at something and take place within a joint attentional frame (Hurford 2007: 320; Tomasello et al. 2005).

Michael Tomasello and his colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, have proposed that shared intentionality and the cognitive infrastructure supporting it may be the crucial feature that makes humans unique.

(You can hear Michael Tomasello talk about shared intentionality in his brief 2009 acceptance speech for the prestigious Hegel Prize here. Transcript here.)

Continue reading “What Makes Humans Unique (IV): Shared Intentionality – The Foundation of Human Uniqueness?”

Experiments in Communication pt 1: Artificial Language Learning and Constructed Communication Systems

Much of recent research in linguistics has involved the use of experiments to directly test hypotheses by comparing and contrasting real-world data with laboratory results and computer simulations. In a previous post I looked at how humans, non-human primates, and even non-primate animals are all capable of high-fidelity cultural transmission. Yet, to apply this framework to human language, another set of experimental literature needs to be considered, namely artificial language learning and constructed communication systems.

Continue reading “Experiments in Communication pt 1: Artificial Language Learning and Constructed Communication Systems”

Language Evolved due to an “animal connection”?

A new hypothesis of language evolution: language evolved due to an “animal connection”, according to Pat Shipman:

Next, the need to communicate that knowledge about the behavior of prey animals and other predators drove the development of symbols and language around 200,000 years ago, Shipman suggests.

For evidence, Shipman pointed to the early symbolic representations of prehistoric cave paintings and other artwork that often feature animals in a good amount of detail. By contrast, she added that crucial survival information about making fires and shelters or finding edible plants and water sources was lacking.

“All these things that ought to be important daily information are not there or are there in a really cursory, minority role,” Shipman noted. “What that conversation is about are animals.”

Of course, much evidence is missing, because “words don’t fossilize,” Shipman said. She added that language may have arisen many times independently and died out before large enough groups of people could keep it alive.

Nothing but wild conjecture as usual, but still interesting.

Original article here.

Selection on Fertility and Viability

So in my previous post on mathematical modelling I looked at viability selection and how it can be expressed using relatively simple mathematics. What I didn’t mention was fertility, largely because the post was already getting unwieldy for a blog, and, from now on, I’m going to limit the length of these math-based posts. I personally find I get more out of small, bite-sized chunks of information that are easily digestible than I do from overloading myself by trying to understand too many concepts all at once. With that said, I’ll now look at what happens when the two zygote types, A and B, with viabilities V(A) and V(B), differ in their fertility.

A good place to start is by defining the average number of zygotes produced by each type as z(A) and z(B). We can then plug these into a modified version of the recursion equation I used in the earlier post:
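(The original equation image hasn’t survived, but following McElreath & Boyd’s presentation it should be something like the following, where p is the frequency of type A among zygotes, as in the earlier post:)

\[
p' = \frac{p \, z(A) V(A)}{p \, z(A) V(A) + (1 - p) \, z(B) V(B)}
\]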

So now we can consider both fertility and viability selection. Furthermore, the two can be combined to give us the overall fitnesses W(A) = V(A)z(A) and W(B) = V(B)z(B):
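(Again reconstructing the missing equation, the recursion in terms of these combined fitnesses is presumably:)

\[
p' = \frac{p \, W(A)}{p \, W(A) + (1 - p) \, W(B)} = \frac{p \, W(A)}{\bar{W}}
\]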

Remember, \(\bar{W}\), is simply the average fitness in the population, which can be used in the following difference equation:
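(The equation image is missing here too; the standard form, again following McElreath & Boyd, is:)

\[
\Delta p = p' - p = \frac{p (1 - p) \left( W(A) - W(B) \right)}{\bar{W}}
\]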

That’s it for now. The next post will look at the long-term consequences of these processes.
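In the meantime, if you want to play with the recursion yourself, here’s a minimal Python sketch; the function name and all parameter values are made up purely for illustration:

```python
# A minimal sketch (mine, not from the book) iterating the
# fertility-plus-viability recursion reconstructed above.

def step(p, VA, VB, zA, zB):
    """One generation of the recursion: p' = p * W(A) / W-bar."""
    WA, WB = VA * zA, VB * zB       # combined fitnesses, W = V * z
    W_bar = p * WA + (1 - p) * WB   # average fitness in the population
    return p * WA / W_bar

p = 0.1  # initial frequency of type A
for _ in range(50):
    p = step(p, VA=0.9, VB=0.8, zA=2.0, zB=2.0)

print(round(p, 3))  # ~0.976: type A is heading towards fixation
```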

Reference: McElreath, R. & Boyd, R. (2007). Mathematical Models of Social Evolution: A Guide for the Perplexed. University of Chicago Press. Amazon link.

Chomsky Chats About Language Evolution

If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you’ll find this blurb:

Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT’s Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky’s office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.

I’ve neither listened to the podcast nor read the transcript, both of which are linked here. But who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn’t look like semantics to me; it looked like syntax.

Then I found Syd Lamb’s stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax and (what little) semantics he had. That elegance appealed to me. Still does, & I’ve figured out how to package a very robust semantics into Lamb’s diagrammatic notation. But that’s another story.

Some Links #13: Universal Grammar Haters

Universal Grammar haters. Mark Liberman takes umbrage at claims that Ewa Dabrowska’s recent work challenges the concept of a biologically evolved substrate for language. Put simply: it doesn’t. What her experiments suggest is that there are considerable differences in native language attainment. As some of you will probably know, I’m not necessarily a big fan of most UG conceptions; however, there are plenty of papers that directly deal with such issues, Dabrowska’s not being one of them. In Liberman’s own words:

In support of this view, let me offer another analogy. Suppose we find that deaf people are somewhat more likely than hearing people to remember the individual facial characteristics of a stranger they pass on the street. This would be an interesting result, but would we spin it to the world as a challenge to the widely-held theory that there’s an evolutionary substrate for the development of human face-recognition abilities?

Remote control neurons. I remember reading about optogenetics a while back. It’s a clever technique that enables neural manipulation through the use of light-activated channels and enzymes. Kevin Mitchell over at GNXP Classic refers to a new approach where neurons are activated using a radio-frequency magnetic field. The advantage of this new approach is fairly straightforward: magnetic fields pass through brains far more easily than light. It means the new approach is a lot less invasive, without the need to insert micro-optical fibres or light-emitting diodes. Cool stuff.

Motor imagery enhances object recognition. Neurophilosophy has an article about a study showing that motor simulations may enhance the recognition of tools:

According to these results, then, the simple action of squeezing the ball not only slowed down the participants’ naming of tools, but also slightly reduced their accuracy in naming them correctly. This occurred, the authors say, because squeezing the ball involves the same motor circuits needed for generating the simulation, so it interferes with the brain’s ability to generate the mental image of reaching out and grasping the tool. This in turn slows identification of the tools, because their functionality is an integral component of our conceptualization of them. There is other evidence that parallel motor simulations can interfere with movements, and with each other: when reaching for a pencil, people have a larger grip aperture if a hammer is also present than if the pencil is by itself.

On the Origin of Science Writers. If you fancy yourself as a science writer, then Ed Yong, of Not Exactly Rocket Science, wants to read your story. As expected, he’s had a fairly large response (97 comments at the time of writing), which includes some of my favourite science journalists and bloggers. It’s already a useful resource, full of fascinating stories and bits of advice from a diverse range of individuals.

Some thoughts about science blog aggregation. Although it’s still hanging about, many people, myself included, are looking for an alternative to the ScienceBlogs network. Dave Munger points to FriendFeed as one potential solution, having set up a feed for all the Anthropology posts coming in from Research Blogging. Also, in the comments, Christina Pikas mentioned Nature Blogs, which, I’m ashamed to say, I hadn’t come across before.

What Makes Humans Unique (III): Self-Domestication, Social Cognition, and Physical Cognition

In my last post I summed up some proposals for what (among other things) makes human cognition unique. But one thing we should bear in mind, I think, is that our cognitive style may be more of an idiosyncrasy, stemming from a highly specific cognitive specialization, than a definitive quantitative and qualitative advance over other styles of animal cognition. In this post I will look at studies which further point in that direction.

Chimpanzees, for example, beat humans at certain memory tasks (Inoue & Matsuzawa 2007) and behave more rationally in reward situations (Jensen et al. 2007).

In addition, it has been shown that in tasks in the social domain, which are generally assumed to be cognitively complex, domesticated animals such as dogs and goats (Kaminski et al. 2005) fare similarly well to, or even outperform, chimpanzees.

Social Cognition and Self-Domestication

It is entirely possible that the first signs of human uniqueness were at first simply side-effects of our self-domesticating lifestyle acting on a complex primate brain (Hare & Tomasello 2005), the same way the evolution of social intelligence in dogs and goats is hypothesised to have come about.

This line of reasoning is also supported by domesticated silver foxes, which have been bred for tameness over a period of 50 years but have developed other interesting characteristics as a by-product. To quote from an excellent post on the topic over at A Blog Around the Clock (see also here):

“They started having splotched and piebald coloration of their coats, floppy ears, white tips of their tails and paws. Their body proportions changed. They started barking. They improved on their performance in cognitive experiments. They started breeding earlier in spring, and many of them started breeding twice a year.”

What seems most interesting to me, however, is another by-product of their experimental domestication: they also improved in the domain of social cognition. For example, like dogs, they are able to understand human communicative gestures like pointing. This is all the more striking because, as mentioned above, chimpanzees do not understand human communicative gestures like helpful pointing. Neither do wolves or non-domesticated silver foxes (Hare et al. 2005).

Continue reading “What Makes Humans Unique (III): Self-Domestication, Social Cognition, and Physical Cognition”

Physicists get linguist envy?

So I wrote a post a couple of weeks ago on my Hungarian friend’s blog in which I discussed, amongst other things, why some linguists have physics envy, but I’ve just read a New Scientist article in which it seems physicists can have linguistics envy too!

Murray Gell-Mann, a Nobel Prize-winning physicist (who proposed the existence of quarks), has taken it upon himself to try to work out the origins of human language:

Another pet project is an attempt to trace the majority of human languages back to a common root. Since the 19th century, linguists have been comparing languages to infer their common ancestry, but in most cases, Gell-Mann says, this kind of analysis loses the trail 6000 or 7000 years back. He says most linguists insist it is impossible to follow the trail any further into the past and – this is what truly rankles with him – “absurdly, they don’t even want to try”.

Gell-Mann heads SFI’s Evolution of Human Languages (EHL) programme. The EHL linguists say they can go even further back by classifying language families into superfamilies and even into a super-superfamily. “What we’ve found,” Gell-Mann explains, “is tentative evidence for a situation in which a huge fraction of all human languages are descended from one spoken 20,000 years ago, towards the end of the last ice age.” The team does not claim to account for all languages, though, and remains agnostic about whether they can eventually do so. “All of this just comes from following the data,” he says.

I love that attempting to trace the majority of human languages back to a common root can be described as a ‘pet project’.

If anyone’s interested here’s a paper he wrote on the subject:

Murray Gell-Mann, Ilia Peiros, George Starostin. Distant Language Relationship: The Current Perspective.

Time Travel, Dreams and The Origin of Knowledge

I’ve been attending a weekly seminar on the Metaphysics of Time Travel, given by Alasdair Richmond. Yesterday, he was talking about the way knowledge arises in causal chains. Popper (1972, and various others) argues that “Knowledge comes into existence only by evolutionary, rational processes” (quoted from Paul Nahin, ‘Time Machines: Time Travel in Physics, Metaphysics and Science Fiction’, New York: American Institute of Physics, 1999: 312). Good news for us scholars of Cultural Evolution. However, Richmond also talked about the work of David Lewis on the nature of causality. There are three ways that causal chains can be set up:

The first is an infinite sequence of events each caused by the previous one.  For example, I’m typing this blog because my PhD work is boring, I’m doing a PhD because I was priced in by funding, I applied for funding because everyone else did … all the way back past my parents meeting and humans evolving etc.

The second option is a finite sequence of events – like the first option, but with an initial event that caused all the others, like the big bang.

The third option is a circular sequence of events. In this, A is caused by B, which is caused by A. For instance, I’m doing a PhD because I got funding, and I got funding because I’m doing a PhD. There is no initial cause; the states just are. This third option seems really odd, not least because it involves time travel. Where do the states come from? However, argues Lewis, it is no more odd than either of the other two options: option one has a cause for every event but no original cause, and option two has an initial event with no cause. So, how on earth can we get at the origin of knowledge if there is no logical possibility of determining the origin of any sequence of events?

One answer is just to stop caring after a certain point. We linguists are unlikely to get to the point where we’re studying vowel shifts in the first few seconds of the big bang.

The other answer is noise. Richmond suggested that ‘Eureka’ moments triggered by random occurrences, for instance mishearing someone or having a strange dream, could create information without prior cause (Nicholas J. J. Smith, ‘Bananas Enough for Time Travel?’, British Journal for the Philosophy of Science, Vol. 48, 1997: 363-89).

Spookily, the idea I submitted for my PhD application came to me in a dream.