On the entangled banks of representations (pt.1)

Lately, I took some time out to read through a few papers I'd put on the back burner until after my first-year review was completed. Now that's out of the way, I found myself looking through Berwick et al.'s review on Evolution, brain, and the nature of language. Much of the paper pulls off the impressive feat of making it sound as if the field has arrived at a consensus in areas that are still hotly debated. Still, what I'm interested in for this post is something that is often treated as far less controversial than it actually is, namely the notion of mental representations. As an example, Berwick et al. posit that mind/brain-based computations construct mental syntactic and conceptual-intentional representations (internalization), with internal linguistic representations then being mapped onto their ordered output form (externalization). From these premises, the authors arrive at the reasonable enough conclusion that language is an instrument of thought first, with communication taking a secondary role:

In marked contrast, linear sequential order does not seem to enter into the computations that construct mental conceptual-intentional representations, what we call ‘internalization’… If correct, this calls for a revision of the traditional Aristotelian notion: language is meaning with sound, not sound with meaning. One key implication is that communication, an element of externalization, is an ancillary aspect of language, not its key function, as maintained by what is perhaps a majority of scholars… Rather, language serves primarily as an internal ‘instrument of thought’.

Even if we take their conclusions for granted, and this is something I'm far from convinced by, there is still the question of whether we need representations in the first place. If you were to read the majority of cognitive science, the answer is a fairly straightforward one: yes, of course we need mental representations, even if there's no solid definition of what they are and what form they take in our brains. In fact, the notion of representation has become a major theoretical tenet of modern cognitive science, as evident in the way much of the field no longer treats it as a point of contention. The reason for this unquestioning acceptance has its roots in the idea that mental representations enrich an impoverished stimulus: that is, if an organism only has access to incomplete data, then it needs mental representations to fill in the gaps.

One area where representations were seen as a solution to a poverty-of-stimulus problem is perception. Marr (1982), in his seminal work Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, used the analogy of computers to study perception. Here, he posited three levels at which any process can be analysed: (1) a computational level (what does the process do?), (2) an algorithmic level (how does the process work?) and (3) an implementation level (how is the process physically realised?). On this computational perspective, the bulk of the problem solving takes place in the brain, and what the brain has to work with is impoverished, probabilistic access to the world:

Because perception is assumed to be flawed, it is not considered a central resource for solving tasks. Because we only have access to the environment via perception, the environment also is not considered a central resource. This places the burden entirely on the brain to act as a storehouse for skills and information that can be rapidly accessed, parameterized, and implemented on the basis of the brain's best guess as to what is required, a guess that is made using some optimized combination of sensory input and internally represented knowledge. The job description makes the content of internal cognitive representations the most important determinant of the structure of our behavior. Cognitive science is, therefore, in the business of identifying this content and how it is accessed and used. (Wilson & Golonka, 2013)

James Gibson offers the best non-representationalist challenge to these claims in his work on direct perception (Gibson, 1966, 1979). Gibson came at the problem of perception from a different perspective: instead of assuming the stimulus is poor, and in need of enrichment by representations, he asked what the information available to a perceiving organism actually is. In asking this question, Gibson took the important step of naturalising perception by making it continuous with biology and evolution. As Andrew Wilson wrote in a post over at Notes from Two Scientific Psychologists:

The difference was that instead of looking at the physiology of the eye to decide what it could possibly detect, he looked at the physics of the world to see what might possibly be around and informative; only then did he ask whether the organism could use it […] Gibson reasoned that you couldn’t understand how the eye worked until you understood the kinds of things it might be interacting with; in other words, what is the niche that the eye evolved to fill?

When looking at the environment, and the way in which organisms interact with it, Gibson argued that we do indeed have high-quality, direct perceptual access to the world. As such, both perception and the environment become useful resources for solving problems, rather than problems to be solved. This led to the conclusion that invariant relations exist in the world, that organisms are able to detect these relations, and that this in turn allows information to be unambiguously related to the world (for a more thorough treatment of these ideas I strongly suggest this post by Andrew Wilson). It therefore seems more sensible to frame the issue in terms of an ecologically embedded perceptual system: here, information is picked up directly from, and constrained by, the action possibilities latent in the environment (affordances). So, if we accept that the brain is not isolated in cognitive processing, but is instead one resource alongside perception-action couplings, the body and the environment, then the need for concepts, internally represented competence, and knowledge is much diminished.

What evidence do we have for this? Well, courtesy of a recent paper by Wilson & Golonka (2013), we have plenty of examples where representationalist perspectives are not only wrong, but lead us down a garden path of thinking. One of my favourites is the outfielder problem in baseball: in order to catch a fly ball, the fielder must intercept a moving target at a specific time and place. The outfielder problem specifically refers to the phase in which the outfielder moves his or her body close to the landing point of the ball. So, how does a baseball outfielder position themselves to catch a fly ball? Saxberg (1987) proposed a representationalist account called trajectory prediction (TP). Under TP, the fielder uses an internal representation to carry out the required calculations and computationally predict where and when the ball will land. This requires a sophisticated internal model of projectile motion that considers not only the ball's distance, speed and direction of motion, but also variables such as spin, air density and gravity.
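
To get a feel for what the TP account demands, here is a minimal sketch in Python (my own illustration, not Saxberg's model): it assumes ideal projectile motion with no spin, drag or wind, which is exactly the kind of simplification a real internal model could not get away with.

```python
import math

# A toy sketch of the trajectory-prediction (TP) idea, not Saxberg's (1987)
# actual model: from an internal estimate of the launch conditions, compute
# where and when the ball will land, then run to that spot. This version
# assumes ideal projectile motion (no spin, drag or wind).

G = 9.81  # gravitational acceleration, m/s^2

def predicted_landing(speed, launch_angle_deg, launch_height=1.0):
    """Return (horizontal distance in m, flight time in s) for an ideal projectile."""
    theta = math.radians(launch_angle_deg)
    vx = speed * math.cos(theta)
    vy = speed * math.sin(theta)
    # Positive root of: launch_height + vy*t - 0.5*G*t^2 = 0
    t_flight = (vy + math.sqrt(vy ** 2 + 2 * G * launch_height)) / G
    return vx * t_flight, t_flight

distance, t = predicted_landing(speed=30.0, launch_angle_deg=45.0)
print(f"Run to {distance:.1f} m from the plate; the ball arrives in {t:.1f} s.")
```

Even this stripped-down version presupposes that the fielder has accurate internal estimates of the ball's launch speed and angle; add spin, air density and wind, and the computation the brain is supposed to be running gets considerably uglier.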

Work by Fink et al. (2009) shows that the TP solution isn't feasible: outfielders do not typically run in straight lines, which rules out this computational strategy. Instead, two other solutions have been proposed, neither of which relies on internal representations. The first, called optical acceleration cancellation (OAC), requires the fielder to align themselves with the path of the ball, constantly adjusting their position in response to the ball's perceived trajectory; done correctly, this makes the ball appear to move at a constant velocity. The second strategy, linear optical trajectory (LOT), requires the fielder to move laterally so that the ball appears to travel in a straight line rather than a parabola. Each strategy has its advantages: OAC works best if the ball is coming straight at you, while LOT allows you to intercept a ball heading off to one side (a toy simulation of the OAC strategy follows the quotation below). As Fink et al. conclude:

Finally, the rapid responses to mid-flight perturbations show that the fielder's movements are continuously controlled, contrary to the standard TP theory. The results thus suggest that perception is used to guide action by means of a continuous coupling of visual information to movement, without requiring an internal model of the ball's trajectory.
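
To make the contrast with TP vivid, here is a rough, one-dimensional simulation of an OAC-style control loop (my own sketch with made-up gains and numbers, not Fink et al.'s model). Notice that nothing in it predicts a landing point: the fielder simply keeps coupling their movement to an optical variable.

```python
import math

# A toy 1-D simulation of optical acceleration cancellation (OAC). This is my
# own illustrative sketch, not Fink et al.'s (2009) model. The fielder never
# predicts a landing point: they run back when the optical image of the ball
# accelerates and run in when it decelerates.

G, DT = 9.81, 0.05
GAIN, MAX_SPEED = 10.0, 8.0        # made-up control gain and sprint speed (m/s)

bx, bz = 0.0, 1.0                  # ball position: horizontal (m), height (m)
bvx, bvz = 20.0, 20.0              # ball velocity (m/s)
fx = 70.0                          # fielder starts 70 m from the plate
prev_tan, prev_rate = None, None

while bz > 0:
    # The world: ideal projectile motion (drag left out for brevity).
    bx, bz = bx + bvx * DT, bz + bvz * DT
    bvz -= G * DT

    # What the fielder perceives: the tangent of the ball's elevation angle.
    tan_elev = bz / max(fx - bx, 0.5)

    if prev_tan is not None:
        rate = (tan_elev - prev_tan) / DT
        if prev_rate is not None:
            optical_accel = (rate - prev_rate) / DT
            # Control law: run at a speed proportional to the optical
            # acceleration (capped at sprint speed); positive means run back,
            # negative means run in. No landing point is ever computed.
            fielder_speed = max(-MAX_SPEED, min(MAX_SPEED, GAIN * optical_accel))
            fx += fielder_speed * DT
        prev_rate = rate
    prev_tan = tan_elev

print(f"Ball comes down near x = {bx:.1f} m; the fielder has run to x = {fx:.1f} m")
```

With these toy numbers the fielder should end up in the neighbourhood of where the ball lands, but the point is the shape of the solution: continuous perception-action coupling, with no internal model of the trajectory.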

In assuming a poverty of stimulus, and by extension mental representations, we end up thinking about problems in a very specific way. As the outfielder problem shows, in some cases this way of thinking is erroneous, and by focusing on behavioural dynamics we can radically alter the way in which we approach a problem. If anything, the above example should serve as a cautionary tale, rather than a paradigm-shifting blow to representationalist accounts of more complex behaviours, such as language.

To summarise:

  1. The claim that there is a poverty-of-stimulus problem to be solved is perhaps flawed.
  2. As such, representations become a solution in need of a problem.
  3. Both the environment and perception are vital resources in problem solving and should not be divorced from cognition. This is the thesis of embodied cognition.
  4. A question remains: how can we approach language from a non-representationalist perspective?

It is to this last question that I will turn my attention in the next post. I've purposefully left it hanging, as it requires a thorough treatment, and I'm still sitting on the fence as to whether we can solve the problem of language without recourse to representations. As you can probably tell, though, I'm definitely sympathetic to the idea of a non-representationalist account of language, and of psychology in general. Wilson & Golonka provide an interesting starting point for thinking about this within a non-representationalist framework, and it is here that the next post will kick off.

References

Wilson, A. D., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4. PMID: 23408669

7 thoughts on “On the entangled banks of representations (pt.1)”

  1. Dan Sperber defines a representation as anything created by an information processing device as a way to hold information, for the purposes of being used by the same or another information processing device at a later point in time (see Sperber, 2006). So for example, the software on which I write my articles depicts and hence represents my thoughts about the evolution of language, as accurately as I am able to articulate them. The software is an information processing device, which creates files in which it stores information, the purpose of which is to make that same information available both to itself and to other information processing devices. As such, those files are (digital) representations.

    There are two points about this definition that are particularly relevant to your post:

    1. I suspect that OAC and LOT would both be representational descriptions of catching, on this definition (although I have not thought through this in detail, so I may be wrong). More generally, I suspect (although again, I admit I have not sat down with this thought in detail) that this definition deflates a lot of the concerns that some people have about representations.

    2. This definition allows us to state clearly what language evolution (and indeed cultural evolution more generally) is. Under this definition, both spoken words and mentally stored words are representations. The difference is that one is a public representation and the other a private, mental representation (E-language vs. I-language, if you like). Language evolution then becomes a matter of tracking the distribution of representations, public or mental, within a community (hence Sperber's idea of an 'epidemiology of representations').
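
    To make that last point concrete, here is a toy sketch of what 'tracking the distribution of representations' might look like (entirely my own illustration, not a model from Sperber): agents carry mental variants of a word, speaking turns a mental token into a public one, and hearers sometimes internalise what they hear.

```python
import random

# A toy 'epidemiology of representations' (my own sketch, not from Sperber
# 2006): mental representations are the variants stored by each agent; public
# representations are the tokens produced when an agent speaks. Language
# change is then the shifting distribution of variants in the community.

random.seed(1)                      # fixed seed so the run is repeatable
VARIANTS = ["gonna", "going to"]
population = [random.choice(VARIANTS) for _ in range(100)]   # mental representations

for _ in range(2000):
    speaker, hearer = random.sample(range(len(population)), 2)
    utterance = population[speaker]          # a public representation
    if random.random() < 0.7:                # made-up adoption probability
        population[hearer] = utterance       # hearer updates their mental representation

print({v: population.count(v) for v in VARIANTS})   # the distribution being tracked
```

    Nothing hangs on the details; the point is just that once words are treated as representations, public or mental, their evolution can be studied as a population-level process.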

    Sperber, D. (2006). Why a deep understanding of cultural evolution is incompatible with shallow psychology. In: N. Enfield & S. Levinson (Eds.), Roots of Human Sociality (pp. 431-449). Oxford: Berg.

  2. I haven’t read the paper you’re referring to, but doesn’t it simply shift the concerns about ‘representations’ to the question of what exactly an ‘information processing device’ is?

    Of course the catcher who is moving his or her body in a certain direction to react to the ball's movement is creating new information about where the ball was at the time that movement was initiated, and is putting this information into the 'environment' by shifting that body, maybe even 'creating' some inertial forces in the process, all of which will be 'used' by the same body in the next instant to carry on and refine the movement. But then what's the use of calling that a 'representation'? I'm not trying to ridicule Sperber's view, I'm quite sympathetic to it really; I'm just wondering why we can't simply call it 'action' and stop having to worry about what exactly an 'information processing device' is supposed to be. Reminds me that I should probably re-read Benny Shanon's "Why doesn't a stone have a grammar of physics?".

  3. Well, it moves things along only as much as any other definition would. I don’t think your complaint is specific to this definition.

    As for "what's the use of this definition?" – "What's the use?" type questions are the wrong sort of questions to ask about definitions. That's a question to ask about a theory, not a definition. The right question is "Does this definition correctly include/exclude all prima facie cases/non-cases of the thing to be defined?". That's how we decide whether something is a good definition or not. And in this case, the definition includes some cases, like words and the mental storage of words, that I think we do instinctively feel are representations.

  4. Hi Thom, cheers for the comment. I’ll read through the Sperber paper, and try to address these concerns more thoroughly in the next post (when I’ll try and see if we can even begin to approach language from a non-representational perspective).

    To provide a brief reply: like Kevin, I'm not sure that calling something an information processing device really solves the problem of representations; we are still left with the question of what exactly an information processing device is. Also, and correct me if I'm wrong, Sperber's conception still rests on the assumption that cognition is basically a computational system, whereas the perspectives I outlined above are sympathetic to there being alternatives: namely, a view of cognition as a non-computational dynamical system. To quote Van Gelder's (1995) paper What might cognition be, if not computation?:

    Cognitive systems may in fact be dynamical systems, and cognition the behavior of some (noncomputational) dynamical system. Perhaps, that is, cognitive systems are more relevantly similar to the centrifugal governor than they are similar to either the computational governor, or to that more famous exemplar of the broad category of computational systems, the Turing machine.
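
    Very roughly, and with entirely made-up equations and numbers (my own toy sketch, not anything from Van Gelder's paper), the contrast looks like this: the first governor follows an explicit measure-compare-adjust recipe over a stored target, while the Watt governor is just two coupled quantities evolving together, with no point at which the speed is read off and stored.

```python
import math

# A toy sketch of Van Gelder's contrast (my own simplified equations and
# numbers, not those in his 1995 paper).

DT, TARGET = 0.01, 10.0

def computational_governor_step(speed, valve):
    """Rule-following recipe: measure the speed, compare it with a stored
    target, adjust the throttle in proportion to the discrepancy."""
    error = TARGET - speed
    return min(1.0, max(0.0, valve + 0.05 * error * DT))

def watt_governor_step(theta, speed):
    """One Euler step of a heavily simplified, overdamped centrifugal governor:
    arm angle and engine speed continuously co-determine one another; nothing
    here stores or reads off a value of the speed."""
    theta += (math.atan(speed ** 2 / 40.0) - theta) * 2.0 * DT  # spin flings the arms out
    speed += (0.9 - theta) * 3.0 * DT                           # raised arms throttle the engine
    return theta, speed

# Let each run for five simulated seconds from the same starting speed.
speed_c, valve = 12.0, 0.5
theta, speed_w = 0.3, 12.0
for _ in range(int(5 / DT)):
    valve = computational_governor_step(speed_c, valve)
    speed_c += (20.0 * valve - speed_c) * DT   # toy engine responding to the throttle
    theta, speed_w = watt_governor_step(theta, speed_w)

print(f"rule-following governor speed after 5 s: {speed_c:.1f} (chasing a stored target of {TARGET})")
print(f"Watt governor speed after 5 s: {speed_w:.1f} (its operating point is baked into the physics)")
```

    Both keep an engine near a steady speed, but only the first does so by manipulating anything that looks like a stored, comparable representation of that speed.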

  5. On definitions, somewhere Thomas Kuhn remarked that, while it seems logical to pin down definitions at the beginning of an investigation, as a practical matter, good definitions only emerge once things have pretty well been worked out. As an example, consider poor old “memetics”. We’ve had over three decades of arguing about the definition of the term and no good science to speak of.

    As James has pointed out, saying that something is an information processing device isn’t very useful unless you say how the device works. All it really says is: “Well, we don’t know what’s going on, but we don’t think it’s supernatural.”

    As for computing, let me quote a passage I've recently quoted in my piece on computational linguistics and literary criticism. The remark is by Peter Gärdenfors (Conceptual Spaces, 2000, p. 253):

    On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

    Conceptual Spaces is a fascinating book and is an extended argument for that intermediate level of geometrical objects and operations. It’s full of examples and has a chapter on semantics that includes a motivated reanalysis of cognitive metaphor. There’s also a neat 2-D geometrical model of case relations.
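
    To give a flavour of what geometrical operations at the conceptual level can mean, here is a tiny sketch (my own toy example, not one from the book): concepts are prototype points in a quality space, similarity is distance, and categorisation is assignment to the nearest prototype, which carves the space into convex regions.

```python
import math

# A toy conceptual space for colour (hue, saturation, brightness); my own
# illustration of Gärdenfors's idea rather than an example from the book.
# Hue is treated as a straight line here purely for simplicity.

prototypes = {
    "red":    (0.0, 0.9, 0.5),
    "orange": (0.08, 0.9, 0.6),
    "pink":   (0.95, 0.4, 0.8),
}

def distance(a, b):
    """Euclidean distance in the quality space: dissimilarity as geometry."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorise(stimulus):
    """Nearest-prototype categorisation: a purely geometrical operation."""
    return min(prototypes, key=lambda name: distance(stimulus, prototypes[name]))

print(categorise((0.05, 0.85, 0.55)))  # falls in the 'orange' region
print(categorise((0.9, 0.5, 0.75)))    # closer to the 'pink' prototype
```

    Treating hue as a line rather than a circle is obviously a simplification, but the shape of the operation, distance and region rather than symbol and rule, is the intermediate level Gärdenfors is arguing for.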
