Lately, I took time out to read through a few papers I’d put on the backburner until after my first year review was completed. Now that that’s out of the way, I found myself looking through Berwick et al.’s review on Evolution, brain, and the nature of language. Much of the paper pulls off the impressive job of making it sound as if the field has arrived at a consensus in areas that are still hotly debated. Still, what I’m interested in for this post is something that is often considered to be far less controversial than it actually is, namely the notion of mental representations. As an example, Berwick et al. posit that mind/brain-based computations construct mental syntactic and conceptual-intentional representations (internalization), with internal linguistic representations then being mapped onto their ordered output form (externalization). From these premises, the authors arrive at the reasonable enough assumption that language is first and foremost an instrument of thought, with communication taking a secondary role:
In marked contrast, linear sequential order does not seem to enter into the computations that construct mental conceptual-intentional representations, what we call ‘internalization’… If correct, this calls for a revision of the traditional Aristotelian notion: language is meaning with sound, not sound with meaning. One key implication is that communication, an element of externalization, is an ancillary aspect of language, not its key function, as maintained by what is perhaps a majority of scholars… Rather, language serves primarily as an internal ‘instrument of thought’.
If we take their conclusions for granted, and this is something I’m far from convinced by, there is still the question of whether we even need representations in the first place. If you were to read the majority of cognitive science, the answer is a fairly straightforward one: yes, of course we need mental representations, even if there’s no solid definition of what they are and the form they take in our brain. In fact, the notion of representations has become a major theoretical tenet of modern cognitive science, as evident in the way much of the field no longer treats it as a point of contention. The reason for this unquestioning acceptance has its roots in the notion that mental representations enrich an impoverished stimulus: that is, if an organism faces incomplete data, then it follows that it needs mental representations to fill in the gaps.
One area where representations were seen as a solution to the poverty of stimulus problem is perception. Marr (1982), in his seminal work Vision: A computational investigation into the human representation and processing of visual information, used the analogy of computers to study perception. Here, three levels were posited at which a process could be viewed: (1) a computational level (What does a process do?), (2) an algorithmic level (How does the process work?) and (3) an implementation level (How is the process realised?). On this computational perspective, most of the problem solving takes place in the brain, and what it has to work with is impoverished, probabilistic access to the world:
Because perception is assumed to be flawed, it is not considered a central resource for solving tasks. Because we only have access to the environment via perception, the environment also is not considered a central resource. This places the burden entirely on the brain to act as a storehouse for skills and information that can be rapidly accessed, parameterized, and implemented on the basis of the brain’s best guess as to what is required, a guess that is made using some optimized combination of sensory input and internally represented knowledge. The job description makes the content of internal cognitive representations the most important determinant of the structure of our behavior. Cognitive science is, therefore, in the business of identifying this content and how it is accessed and used. (Wilson & Golonka, 2013).
James Gibson offered the best-known non-representationalist challenge to these claims in his work on direct perception (Gibson, 1966, 1979). Gibson came at the problem of perception from a different perspective: instead of assuming the stimulus is poor, and in need of enriching representations, he asked what the information is for a perceiving organism. In asking this question, Gibson took the important step of naturalising perception by making it continuous with biology and evolution. As Andrew Wilson over at Notes from Two Scientific Psychologists wrote (click here for the post):
The difference was that instead of looking at the physiology of the eye to decide what it could possibly detect, he looked at the physics of the world to see what might possibly be around and informative; only then did he ask whether the organism could use it [...] Gibson reasoned that you couldn’t understand how the eye worked until you understood the kinds of things it might be interacting with; in other words, what is the niche that the eye evolved to fill?
When looking at the environment, and the way in which organisms interact with it, Gibson argued that we do indeed have high quality, direct perceptual access to the world. As such, both perception and the environment become useful resources in solving problems, rather than being a problem to be solved. This led to the conclusion that invariant relations exist, and that an organism is able to detect these relations, which in turn allows for information to be unambiguously related to the world (for a more thorough treatment of these ideas I strongly suggest this post by Andrew Wilson). It therefore seems more sensible to frame the issue in terms of an ecologically embedded perceptual system: here, information is picked up directly from, and constrained by, the action possibilities latent in the environment (affordances). So, if we take into account that the brain is not isolated in cognitive processes, but is instead one resource alongside perception-action couplings, the body and the environment, then the need for concepts, internally represented competence, and knowledge becomes diminished.
What evidence do we have for this? Well, courtesy of a recent paper by Wilson & Golonka (2013), we have plenty of examples where representationalist perspectives are not only wrong, but lead us down a garden path of thinking. One of my favourites is the outfielder problem in baseball: in order to catch a fly ball (see video below), the fielder must intercept a moving target at a specific time and place. The outfielder problem specifically refers to the phase in which the outfielder moves his/her body close to the landing point of the ball. So, how does a baseball outfielder position themselves to catch a fly ball? Saxberg (1987) proposed a representationalist account called trajectory prediction (TP). Under TP, the fielder uses an internal representation to carry out the required calculations and computationally predict where and when the ball will land. This requires a sophisticated internal model of projectile motion that considers not only the ball’s distance, speed and direction of motion, but also variables such as spin, air density and gravity.
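To get a feel for what the TP account demands, here is a minimal sketch of the prediction it requires. The launch values are made up for illustration, and the model is deliberately oversimplified (ideal projectile motion, no spin, drag or wind, all of which a serious TP model would also have to represent):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(x0, y0, vx, vy):
    """Toy trajectory prediction (TP): solve ideal projectile motion
    for where and when the ball hits the ground (y = 0)."""
    # Positive root of y0 + vy*t - 0.5*G*t^2 = 0.
    t_land = (vy + math.sqrt(vy**2 + 2 * G * y0)) / G
    return x0 + vx * t_land, t_land

# Illustrative hit: 1 m off the bat, 15 m/s outward, 20 m/s upward.
x_land, t_land = predict_landing(0.0, 1.0, 15.0, 20.0)
# Under TP, the fielder must compute x_land and reach it before t_land.
```

Even in this stripped-down form, the fielder has to recover the launch conditions, run the model forward, and then plan a route to the answer, which is exactly the burden the non-representationalist strategies below do without.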
Work by Fink et al. (2009) shows that the TP solution isn’t feasible (outfielders do not typically run in straight lines, ruling out this computational strategy). Instead, two other solutions have been proposed, neither of which relies on internal representations. The first of these, called optical acceleration cancellation (OAC), requires the fielder to align themselves with the path of the ball, constantly adjusting their position so that the ball’s image appears to rise at a constant velocity. The second strategy, linear optical trajectory (LOT), requires the fielder to move laterally so that the ball appears to be moving in a straight line rather than a parabola. Each of these strategies has its advantages: OAC works best if the ball is coming straight for you, while LOT allows you to intercept a ball that is heading off to one side. As Fink et al. conclude:
Finally, the rapid responses to mid-flight perturbations show that the fielders’ movements are continuously controlled, contrary to the standard TP theory. The results thus suggest that perception is used to guide action by means of a continuous coupling of visual information to movement, without requiring an internal model of the ball’s trajectory.
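The informational basis of OAC can be sketched in a few lines. The key observation (due to Chapman’s early analysis of ball catching) is that for a fielder standing at the landing point, the tangent of the ball’s elevation angle rises at a constant rate; stand too deep and it decelerates, which simply tells the fielder to run in. A minimal sketch, again with made-up launch values and ideal projectile motion:

```python
G = 9.81  # gravitational acceleration, m/s^2

def tan_elevation(vx, vy, fielder_x, n=50):
    """Sample tan(elevation angle) of the ball as seen from fielder_x
    over most of the flight (ideal projectile launched from the origin)."""
    T = 2 * vy / G  # total flight time
    samples = []
    for i in range(n):
        t = (0.05 + 0.9 * i / (n - 1)) * T
        x, y = vx * t, vy * t - 0.5 * G * t * t
        samples.append(y / (fielder_x - x))
    return samples

def second_diffs(seq):
    """Discrete second differences: a stand-in for optical acceleration."""
    return [seq[i + 1] - 2 * seq[i] + seq[i - 1]
            for i in range(1, len(seq) - 1)]

vx, vy = 15.0, 20.0              # illustrative hit: lands ~61 m from the bat
landing = vx * 2 * vy / G
at_spot = second_diffs(tan_elevation(vx, vy, landing))        # all ~0
too_deep = second_diffs(tan_elevation(vx, vy, landing + 10))  # all negative
```

Standing at the landing point, the optical acceleration is zero for the whole flight; standing 10 m too deep, it is negative at every sample. Moving so as to cancel that acceleration therefore steers the fielder to the right spot with no model of the trajectory at all.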
In assuming a poverty of stimulus, and by extension mental representations, we end up thinking about problems in a very specific way. As the outfielder problem shows, in some cases this way of thinking is erroneous, and by focusing on behavioural dynamics we can radically alter the way in which we approach a problem. If anything, the above example should serve as a cautionary tale, rather than a paradigm-shifting blow to representationalist accounts of more complex behaviours, such as language.
- The assumption that there is a poverty of stimulus problem to be solved is perhaps itself flawed.
- As such, representations become a solution in need of a problem.
- Both the environment and perception are vital resources in problem solving and should not be divorced from cognition. This is the embodied thesis.
- A question remains: how can we approach language from a non-representationalist perspective?
It is to this last question that I will turn my attention in the next post. I’ve purposefully left it hanging, as it requires a thorough treatment, and I’m still sitting on the fence as to whether we can solve the problem of language without the need for representations. As you can probably tell, I’m definitely sympathetic to the notion of a non-representationalist account of language, and of psychology in general. Wilson & Golonka provide an interesting starting point for thinking about this in a non-representationalist framework, and it is here that the next post will kick off.
Wilson, A. D., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4. PMID: 23408669