## The computational envelope of language – Once more into the breach

Time to saddle up and once more ride my current hobby horse, or one of them at least. In this case, the idea that natural language is the simplest aspect of human activity that is fundamentally and irreducibly computational in nature.

Let’s back into it.

* * * * *

Is arithmetic calculation computational in kind?

Well yes, of course. If anything is computation, that sure is.

Well then, in my current view, arithmetic calculation is language from which meaning has been completely removed, squeezed out as it were, leaving us with syntax, morphology, and so forth.

Elaborate.

First, let’s remind ourselves that arithmetic calculation, as performed by writing symbols on some surface, is a very specialized form of language. Sure, we think of it as something different from language…

All those years of drill and practice in primary school?

Yes. We have it drilled into our heads that arithmetic is one thing, over here, while language is something different, over there. But it’s obvious, isn’t it, that arithmetic is built from language?

OK, I’ll accept that.

So, arithmetic calculation has two kinds of symbols, numerals and operators. Both are finite in number. Numerals can be concatenated into strings of any length and in any order and combination.

OK. In the standard Arabic notation there are ten numerals, zero (0) through nine (9).

That’s correct.

And we’ve got five operators: +, -, * [times], ÷, and =. And, come to think of it, we should probably add left and right parentheses as well.

OK. What’s the relationship between these two kinds of symbols?

Hmmmm…. The operators allow us to specify various relationships between strings of numerals.

Starting with, yes, starting with a basic set of equivalences of the form NumStr Op NumStr = NumStr, where Op is one of +, -, *, and ÷, and NumStr is a string of numerals, restricted in these primitive equivalences to just one or two. [1]

Thus giving us those tables we memorized in grade school. Right!
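The purely syntactic character of those tables can be made concrete with a toy sketch, mine, not part of the dialogue. Once the tables are built, calculation is nothing but symbol lookup; Python's own `int()` arithmetic is used here only as a stand-in for the rote memorization that originally filled the tables.

```python
# Toy sketch: grade-school arithmetic as pure symbol manipulation.
# The memorized tables are lookup tables from pairs of numeral
# symbols to numeral strings; once built, no meaning is consulted.

DIGITS = "0123456789"

# Primitive equivalences of the form NumStr Op NumStr = NumStr.
# (int() stands in for the drill and practice that filled the tables.)
TABLES = {
    "+": {(a, b): str(int(a) + int(b)) for a in DIGITS for b in DIGITS},
    "*": {(a, b): str(int(a) * int(b)) for a in DIGITS for b in DIGITS},
}

def calc(left: str, op: str, right: str) -> str:
    """Rewrite a primitive expression to its equivalent numeral string
    by table lookup alone -- syntax in, syntax out."""
    return TABLES[op][(left, right)]

print(calc("7", "+", "5"))  # prints 12
print(calc("7", "*", "5"))  # prints 35
```

The point of the sketch is that `calc` never asks what "7" or "5" designate; the entire procedure runs on the shapes of the symbols and the memorized equivalences.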

What do you mean by semantics being removed?

Well, what are the potentially meaning-bearing elements in this collection?

That would be the numerals, no?

Yes. What do they mean?

Why, they don’t mean anything…

Well… But they aren’t completely empty, are they?

No.

Elaborate. What’s not empty about, say, 5?

5 could designate…

By “designate” you mean “mean”?

Yes. 5 could designate any collection with five members. 5 apples, 5 oranges, 5 mountains, 5 stars…

What about an apple, an orange, a mountain, a star, and a dragon?

Yes, as long as there’s five of them.

Ah, I see. The numerals, or strings of numerals, are connected to the world through the operation of counting. When we use them to count, they, in effect, become numbers. But, yes, that’s a very general kind of relationship. Not much semantics or meaning there. [2]

Right. And that’s what I mean by empty of semantics. All we’ve got left is syntax, more or less.

Sounds a bit like Searle in his Chinese Room.

Yes, it does, doesn’t it?

The idea is that the mental machinery we use to do arithmetic calculation is natural computation, computation performed by a brain, from which semantics has been removed. That machinery is there in ordinary language, or even extraordinary language. Language couldn’t function without it. That’s where language gets its combinatorial facility.

And THAT sounds like Chomsky, no?

Yes.

* * * * *

And so it goes, on and on.

When the intellectual history of the second half of the twentieth century gets written, the discovery of the irreducibly computational nature of natural language will surely be listed as one of the highlights. Just who will get the honor, that’s not clear, though Chomsky is an obvious candidate. He certainly played a major role. But he didn’t figure out how an actual physical system could do it (the question was of little or no interest to him), and surely that’s part of the problem. If so, however, then we still haven’t gotten it figured out, have we?

* * * * *

[1] Isn’t that a bit sophisticated for the Glaucon figure in this dialog? Yes, but this is a 21st century Glaucon. He’s got a few tricks up his sleeve.

[2] Sounds a bit like the Frege/Russell set theory definition of number: a natural number n is the collection of all sets with n elements.

## CfP: Construal and language dynamics (ICLC-15 workshop proposal)

What do we mean when we talk about the “cognitive foundations” of language or the “cognitive principles” behind linguistic phenomena? And how can we tap into the cognitive underpinnings of language? These questions lie at the heart of a workshop that Michael Pleyer and I are going to propose for the next International Cognitive Linguistics Conference. Here’s our Call for Papers:

Construal and language dynamics: Interdisciplinary and cross-linguistic perspectives on linguistic conceptualization
– Workshop proposal for the 15th International Cognitive Linguistics Conference, Nishinomiya, Japan, August 6–11, 2019 –

Convenors:
Stefan Hartmann, University of Bamberg
Michael Pleyer, University of Koblenz-Landau

The concept of construal has become a key notion in many theories within the broader framework of Cognitive Linguistics. It lies at the heart of Langacker’s (1987, 1991, 2008) Cognitive Grammar, but it also plays a key role in Croft’s (2012) account of verbal argument structure as well as in the emerging framework of experimental semantics (Bergen 2012; Matlock & Winter 2015). Indirectly it also figures in Talmy’s (2000) theory of cognitive semantics, especially in his “imaging systems” approach (see e.g. Verhagen 2007).

According to Langacker (2015: 120), “[c]onstrual is our ability to conceive and portray the same situation in alternate ways.” From the perspective of Cognitive Grammar, an expression’s meaning consists of conceptual content – which can, in principle, be captured in truth-conditional terms – and its construal, which encompasses aspects such as perspective, specificity, prominence, and dynamicity. Croft & Cruse (2004) summarize the construal operations proposed in previous research, arriving at more than 20 linguistic construal operations that are seen as instances of general cognitive processes.

Given the “quantitative turn” in Cognitive Linguistics (e.g. Janda 2013), the question arises how the theoretical concepts proposed in the foundational works of the framework can be empirically tested and how they can be refined on the basis of empirical findings. Much work in the domains of experimental linguistics and corpus linguistics has established a research cycle whereby hypotheses are generated on the basis of theoretical concepts from Cognitive Linguistics, such as construal operations, and then tested using behavioral and/or corpus-linguistic methods (see e.g. Hilpert 2008; Matlock 2010; Schönefeld 2011; Matlock et al. 2012; Krawczak & Glynn forthc., among many others).

Arguably one of the most important testing grounds for theories of linguistic construal is the domain of language dynamics. Recent years have seen increasing convergence between Cognitive-Linguistic theories on the one hand and theories conceiving of language as a complex adaptive system on the other (Beckner et al. 2009; Frank & Gontier 2010; Fusaroli & Tylén 2012; Pleyer 2017). In this framework, language can be understood as a dynamic system unfolding on the timescales of individual learning, socio-cultural transmission, and biological evolution (Kirby 2012, Enfield 2014). Linguistic construal operations can be seen as important factors shaping the structure of language both on a historical timescale and in ontogenetic development (e.g. Pleyer & Winters 2014).

Empirical studies of language acquisition, language change, and language variation can therefore help us understand the nature of linguistic construal operations and can also contribute to refining theories of linguistic construal. Interdisciplinary and cross-linguistic perspectives can prove particularly insightful in this regard. Findings from cognitive science and developmental psychology can contribute substantially to our understanding of the cognitive principles behind language dynamics. Cross-linguistic comparison can, on the one hand, lead to the discovery of striking similarities across languages that might point to shared underlying cognitive principles (e.g. common pathways of grammaticalization, see e.g. Bybee et al. 1994, or similarities in the domain of metaphorical construal, see Taylor 2003: 140), but it can also safeguard against premature generalizations from findings obtained in one single language to human cognition at large (see e.g. Goschler 2017).

For our proposed workshop, we invite contributions that explicitly connect theoretical approaches to linguistic construal operations with empirical evidence from e.g. corpus linguistics, experimental studies, or typological research. In line with the cross-linguistic outlook of the main conference, we are particularly interested in papers that compare linguistic construals across different languages. Also, we would like to include interdisciplinary perspectives from the behavioural and cognitive sciences.

The topics that can be addressed in the workshop include, but are not limited to,

• the role of construal operations such as perspectivation and specificity in language production and processing;
• the acquisition and diachronic change of linguistic categories;
• the question of whether individual construal operations that have been proposed in the literature are cognitively realistic (see e.g. Broccias & Hollmann 2007) and whether they can be tested empirically;
• the refinement of construal-related concepts such as “salience” or “prominence” based on empirical findings (see e.g. Schmid & Günther 2016);
• the relationship between linguistic construal operations and domain-general cognitive processes;
• the relationship between empirical observations and the conclusions we draw from them about the organization of the human mind, including the viability of concepts such as the “corpus-to-cognition” principle (see e.g. Arppe et al. 2010) or the mapping of behavioral findings to cognitive processes.

Please send a short abstract (max. 1 page excl. references) and a ~100-word summary to construal.iclc15@gmail.com by ~~August 31st, 2018~~ September 10th, 2018. We will inform all potential contributors in early September whether their papers can be included in our workshop proposal. If we are unable to accommodate your submission, you can of course submit it to the general session of the conference. The same applies if our workshop proposal as a whole is rejected.

References

Arppe, Antti, Gaëtanelle Gilquin, Dylan Glynn, Martin Hilpert & Arne Zeschel. 2010. Cognitive Corpus Linguistics: Five Points of Debate on Current Theory and Methodology. Corpora 5(1). 1–27.

Beckner, Clay, Richard Blythe, Joan Bybee, Morten H. Christiansen, William Croft, Nick C. Ellis, John Holland, Jinyun Ke, Diane Larsen-Freeman & Tom Schoenemann. 2009. Language is a Complex Adaptive System: Position Paper. Language Learning 59 Suppl. 1. 1–26.

Bergen, Benjamin K. 2012. Louder than Words: The New Science of How the Mind Makes Meaning. New York: Basic Books.

Broccias, Cristiano & Willem B. Hollmann. 2007. Do we need Summary and Sequential Scanning in (Cognitive) Grammar? Cognitive Linguistics 18. 487–522.

Bybee, Joan L., Revere Perkins & William Pagliuca. 1994. The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: University of Chicago Press.

Croft, William & Alan Cruse. 2004. Cognitive Linguistics. Cambridge: Cambridge University Press.

Enfield, N.J. 2014. Natural causes of language: frames, biases, and cultural transmission. (Conceptual Foundations of Language Science 1). Berlin: Language Science Press.

Frank, Roslyn M. & Nathalie Gontier. 2010. On Constructing a Research Model for Historical Cognitive Linguistics (HCL): Some Theoretical Considerations. In Margaret E. Winters, Heli Tissari & Kathryn Allan (eds.), Historical Cognitive Linguistics, 31–69. (Cognitive Linguistics Research 47). Berlin, New York: De Gruyter.

Fusaroli, Riccardo & Kristian Tylén. 2012. Carving language for social coordination: A dynamical approach. Interaction Studies 13(1). 103–124.

Goschler, Juliana. 2017. A contrastive view on the cognitive motivation of linguistic patterns: Concord in English and German. In Stefan Hartmann (ed.), Yearbook of the German Cognitive Linguistics Association 2017, 119–128.

Hilpert, Martin. 2008. New evidence against the modularity of grammar: Constructions, collocations, and speech perception. Cognitive Linguistics 19(3). 491–511.

Janda, Laura (ed.). 2013. Cognitive Linguistics: The Quantitative Turn. Berlin, New York: De Gruyter.

Kirby, Simon. 2012. Language is an Adaptive System: The Role of Cultural Evolution in the Origins of Structure. In Maggie Tallerman & Kathleen R. Gibson (eds.), The Oxford Handbook of Language Evolution, 589–604. Oxford: Oxford University Press.

Krawczak, Karolina & Dylan Glynn. forthc. Operationalising construal. Of / about prepositional profiling for cognition and communication predicates. In C. M. Bretones Callejas & Chris Sinha (eds.), Construals in language and thought. What shapes what? Amsterdam, Philadelphia: John Benjamins.

Langacker, Ronald W. 1987. Foundations of Cognitive Grammar. Vol. 1: Theoretical Prerequisites. Stanford: Stanford University Press.

Langacker, Ronald W. 1991. Foundations of Cognitive Grammar. Vol. 2: Descriptive Application. Stanford: Stanford University Press.

Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.

Langacker, Ronald W. 2015. Construal. In Ewa Dąbrowska & Dagmar Divjak (eds.), Handbook of Cognitive Linguistics, 120–142. Berlin, New York: De Gruyter.

Matlock, Teenie. 2010. Abstract Motion is No Longer Abstract. Language and Cognition 2(2). 243–260.

Matlock, Teenie, David Sparks, Justin L. Matthews, Jeremy Hunter & Stephanie Huette. 2012. Smashing New Results on Aspectual Framing: How People Talk about Car Accidents. Studies in Language 36(3). 700–721.

Matlock, Teenie & Bodo Winter. 2015. Experimental Semantics. In Bernd Heine & Heiko Narrog (eds.), The Oxford Handbook of Linguistic Analysis, 771–790. Oxford: Oxford University Press.

Pleyer, Michael & James Winters. 2014. Integrating Cognitive Linguistics and Language Evolution Research. Theoria et Historia Scientiarum 11. 19–43.

Schmid, Hans-Jörg & Franziska Günther. 2016. Toward a Unified Socio-Cognitive Framework for Salience in Language. Frontiers in Psychology 7. doi:10.3389/fpsyg.2016.01110 (31 March, 2018).

Schönefeld, Doris (ed.). 2011. Converging evidence: methodological and theoretical issues for linguistic research. (Human Cognitive Processing 33). Amsterdam, Philadelphia: John Benjamins.

Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge: MIT Press.

Taylor, John R. 2003. Linguistic Categorization. 3rd ed. Oxford: Oxford University Press.

Verhagen, Arie. 2007. Construal and Perspectivization. In Dirk Geeraerts & Hubert Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics, 48–81. Oxford: Oxford University Press.

## I know (1) that you think (2) it’s funny, and you know (3) that I know (4) that, too.

A large part of human humour depends on understanding that the intention of the person telling the joke might be different from what they are actually saying. The person needs to tell the joke so that you understand that they’re telling a joke, so they need to know that you know that they do not intend to convey the literal meaning of what they are about to utter… Things get even more complicated when we tell each other jokes that involve other people having thoughts and beliefs about yet other people. We call this knowledge nested intentions, or recursive mental attributions. We can already see, based on my complicated description, that this is a serious matter and requires scientific investigation. Fortunately, a recent paper by Dunbar, Launay and Curry (2015) investigated whether the structure of jokes is restricted by the number of nested intentions required to understand them. The authors make a couple of interesting predictions about the mental processing involved in humour, and about how this should be reflected in the structure and funniness of jokes. In today’s blogpost I want to discuss the paper’s methodology and some of its claims.

## What’s in a Name? – “Digital Humanities” [#DH] and “Computational Linguistics”

In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.

And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.

Digital Humanities

Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):

To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.

Just so, the way of the world.

Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:

A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.

## Chomsky, Hockett, Behaviorism and Statistics in Linguistics Theory

Here’s an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965 (Isis, vol. 107, #1, pp. 49–73, 2016), by Gregory Radick (abstract below). Commenting on it at Dan Everett’s FB page, Yorick Wilks observed: “It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer.”

Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.

## Future tense and saving money: no correlation when controlling for cultural evolution

This week our paper on future tense and saving money is published (Roberts, Winters & Chen, 2015).  In this paper we test a previous claim by Keith Chen about whether the language people speak influences their economic decisions (see Chen’s TED talk here or paper).  We find that at least part of the previous study’s claims are not robust to controlling for historical relationships between cultures. We suggest that large-scale cross-cultural patterns should always take cultural history into account.

Does language influence the way we think?

There is a longstanding debate about whether the constraints of the languages we speak influence the way we behave. In 2012, Keith Chen discovered a correlation between the way a language allows people to talk about future events and their economic decisions: speakers of languages which make an obligatory grammatical distinction between the present and the future are less likely to save money.
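Why controlling for historical relatedness matters can be shown with a toy illustration, my own invented numbers, not the paper's data or method. When related languages are treated as independent data points, a handful of language families can masquerade as many observations; in this sketch a near-perfect pooled correlation vanishes once the variables are centered within families.

```python
# Toy illustration of Galton's problem with invented numbers:
# (family, obligatory future-tense marking, household savings rate).
# Within each family the trait never varies, so the apparent effect
# rests entirely on a difference between two families.
data = [
    ("A", 1, 0.10), ("A", 1, 0.12), ("A", 1, 0.11),
    ("B", 0, 0.30), ("B", 0, 0.28), ("B", 0, 0.29),
]

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    """Pearson correlation; 0.0 when either variable has no variance."""
    mx, my = mean(xs), mean(ys)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # nothing left to correlate
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (vx * vy) ** 0.5

def center_within(values, groups):
    """Subtract each group's mean, removing between-group differences."""
    mus = {g: mean([v for gg, v in zip(groups, values) if gg == g])
           for g in set(groups)}
    return [v - mus[g] for g, v in zip(groups, values)]

fam = [f for f, _, _ in data]
tense = [t for _, t, _ in data]
savings = [s for _, _, s in data]

# Pooling all six languages as if independent: near-perfect correlation...
pooled = corr(tense, savings)
# ...which disappears once family-level differences are removed.
controlled = corr(center_within(tense, fam), center_within(savings, fam))

print(round(pooled, 2))  # prints -1.0
print(controlled)        # prints 0.0
```

The actual paper uses far more sophisticated methods on real data; the sketch only shows the logical shape of the problem: six languages from two families carry closer to two independent observations than six.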

## A Note on Dennett’s Curious Comparison of Words and Apps

I continue to think about Dan Dennett’s inadequate account of words-as-memes in his paper, The Cultural Evolution of Words and Other Thinking Tools (PDF), Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIV, pp. 1-7, 2009. You find the same account in, for example, this video of a talk he gave in 2011: “A Human Mind as an Upside Down Brain”. I feel it warrants (yet another) long-form post. But I just don’t want to wrangle my way through that now. So I’m just going to offer a remark that goes a bit beyond what I’ve already said in my working paper, Cultural Evolution, Memes, and the Trouble with Dan Dennett, particularly in the post, Watch Out, Dan Dennett, Your Mind’s Changing Up on You!.

In that article Dennett asserts that “Words are not just like software viruses; they are software viruses, a fact that emerges quite uncontroversially once we adjust our understanding of computation and software.” He then uses Java applets to illustrate this comparison. I believe he overstates the similarity between words and apps or viruses to the point where the comparison has little value. The adjustment of understanding that Dennett calls for is too extreme.

In particular, and here is my new point, it simply vitiates the use of computation as an idea in modeling mental processes. Dennett has spent much of his career arguing that the mind is fundamentally a computational process. Words are thus computational objects and our use of them is a computational process.

Real computational processes are precise in their nature and in the requirements of their physical implementation – and there is always a physical implementation for real computation. Java is based on a certain kind of computational object and process, a certain style of computing. But not all computing is like that. What if natural language computing isn’t? What happens to the analogy then?

## Languages adapt to their contextual niche (Winters, Kirby & Smith, 2014)

Last week saw the publication of my latest paper, with co-authors Simon Kirby and Kenny Smith, looking at how languages adapt to their contextual niche (link to the OA version and here’s the original). Here’s the abstract:

It is well established that context plays a fundamental role in how we learn and use language. Here we explore how context links short-term language use with the long-term emergence of different types of language systems. Using an iterated learning model of cultural transmission, the current study experimentally investigates the role of the communicative situation in which an utterance is produced (situational context) and how it influences the emergence of three types of linguistic systems: underspecified languages (where only some dimensions of meaning are encoded linguistically), holistic systems (lacking systematic structure) and systematic languages (consisting of compound signals encoding both category-level and individuating dimensions of meaning). To do this, we set up a discrimination task in a communication game and manipulated whether the feature dimension shape was relevant or not in discriminating between two referents. The experimental languages gradually evolved to encode information relevant to the task of achieving communicative success, given the situational context in which they are learned and used, resulting in the emergence of different linguistic systems. These results suggest language systems adapt to their contextual niche over iterated learning.

Background

Context clearly plays an important role in how we learn and use language. Without this contextual scaffolding, and our inferential capacities, the use of language in everyday interactions would appear highly ambiguous. And even though ambiguous language can and does cause problems (as hilariously highlighted by the ‘What’s a chicken?’ case), it is also considered to be communicatively functional (see Piantadosi et al., 2012).  In short: context helps in reducing uncertainty about the intended meaning.

If context is used as a resource in reducing uncertainty, then it might also alter our conception of how an optimal communication system should be structured (e.g., Zipf, 1949). With this in mind, we wanted to investigate the following questions: (i) To what extent does the context influence the encoding of features in the linguistic system? (ii) How does the effect of context work its way into the structure of language?  To get at these questions we narrowed our focus to look at the situational context: the immediate communicative environment in which an utterance is situated and how it influences the distinctions a speaker needs to convey.

Of particular relevance here is Silvey, Kirby & Smith (2014): they show that the incorporation of a situational context can change the extent to which an evolving language encodes certain features of referents. Using a pseudo-communicative task, where participants needed to discriminate between a target and a distractor meaning, the authors were able to manipulate which meaning dimensions (shape, colour, and motion) were relevant and irrelevant in conveying the intended meaning. Over successive generations of participants, the languages converged on underspecified systems that encoded the feature dimension which was relevant for discriminating between meanings.

The current work extends these findings in two ways: (a) we added a communication element to the setup, and (b) we further explored the types of situational context we could manipulate. Our general hypothesis, then, is that these artificial languages should adapt to the situational context in predictable ways, depending on whether or not a distinction is relevant in communication.
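The general shape of an iterated-learning simulation can be sketched in a few lines. This is my own toy, not the authors' experimental setup: the meaning space, the two-syllable signals, and the learner's generalization rule (reuse the syllable seen with each feature value) are all invented for illustration.

```python
import random

# Toy iterated-learning chain: a "language" maps meanings (shape, colour)
# to two-syllable signals. Each generation, a learner observes only a
# bottlenecked sample of meaning-signal pairs and generalizes by reusing
# the syllable it has seen paired with each feature value.

random.seed(0)  # reproducible syllable choices

SHAPES = ["square", "circle"]
COLOURS = ["red", "blue"]
MEANINGS = [(s, c) for s in SHAPES for c in COLOURS]
SYLLABLES = ["zo", "ki", "ma", "pu"]

def learn_and_produce(observed):
    """Associate each feature value with the first signal part it was
    seen with, then produce signals for the whole meaning space."""
    part_for = {}
    for (shape, colour), signal in observed.items():
        left, right = signal.split("-")
        part_for.setdefault(shape, left)
        part_for.setdefault(colour, right)
    # Unseen feature values get a random syllable (innovation).
    return {(s, c): part_for.get(s, random.choice(SYLLABLES)) + "-" +
                    part_for.get(c, random.choice(SYLLABLES))
            for s, c in MEANINGS}

# Start from a holistic language: signals share no parts systematically.
language = {("square", "red"): "zo-zo", ("square", "blue"): "ki-ma",
            ("circle", "red"): "pu-ki", ("circle", "blue"): "ma-pu"}

for generation in range(5):
    sample = dict(random.sample(sorted(language.items()), k=3))  # bottleneck
    language = learn_and_produce(sample)

# After a few generations the language is systematic: the left syllable
# is determined by shape, the right syllable by colour.
print(language)
```

In this sketch the transmission bottleneck plus a generalizing learner pushes an initially holistic language toward compositional structure; manipulating which meaning-signal pairs the learner sees would play the role of the situational context.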

## John Lawler on Generative Grammar

From a Facebook conversation with Dan Everett (about slide rules, aka slipsticks, no less) and others:

The constant revision and consequent redefining and renaming of concepts – some imaginary and some very obvious – has led to a multi-dimensional spectrum of heresy in generative grammar, so complex that one practically needs chromatography to distinguish variants. Babel comes to mind, and also Windows™ versions. Most of the literature is incomprehensible in consequence – or simply repetitive, except it’s too much trouble to tell which.

–John Lawler

## Why I Abandoned Chomskian Linguistics, with Links to 2 FB Discussions with Dan Everett

It wasn’t a matter of deep and well-thought-out principle. It was simpler than that. Chomsky’s approach to linguistics didn’t have the tools I was looking for. Let me explain.

* * * * *

Dan Everett’s kicked off two discussions on Facebook about Chomsky. This one takes Christina Behme’s recent review article, A ‘Galilean’ science of language, as its starting point. And this one’s about nativism, sparked by Vyv Evans’ The Language Myth.

* * * * *

I learned about Chomsky during my second year as an undergraduate at Johns Hopkins. I took a course in psycholinguistics taught by James Deese, known for his empirical work on word associations. We read and wrote summaries of classic articles, including Lees’s review of Syntactic Structures and Chomsky’s review of Skinner’s Verbal Behavior. My summary of one of them, I forget which, prompted Deese to remark that my summary was an “unnecessarily original” recasting of the argument.

That’s how I worked. I tried to get inside the author’s argument and then to restate it in my own words.

In any event I was hooked. But Hopkins didn’t have any courses in linguistics, let alone a linguistics department. So I had to pursue Chomsky and related thinkers on my own. Which I did over the next few years. I read Aspects, Syntactic Structures, Sound Patterns of English (well, more like I read at that one), Lenneberg’s superb book on biological foundations (with an appendix by Chomsky), found my way to generative semantics, and other stuff. By the time I headed off to graduate school in English at the State University of New York at Buffalo I was mostly interested in that other stuff.

I became interested in Chomsky because I was interested in language. While I was interested in language as such, I was a bit more interested in literature and much of my interest in linguistics followed from that. Literature is made of language, hence some knowledge of linguistics should be useful. Trouble is, it was semantics that I needed. Chomsky had no semantics and generative semantics looked more like syntax.

So that other stuff looked more promising. Somehow I’d found my way to Syd Lamb’s stratificational linguistics. I liked that for the diagrams, as I think diagrammatically, and for the elegance. Lamb used the same vocabulary of structural elements to deal with phonology, morphology, and syntax. That made sense to me. And the semantics work within his system actually looked like semantics, rather than souped-up syntax, though there wasn’t enough of it.