To the Editors:
If a nonlinguist reads nonspecialist articles about linguistics, such as the generally excellent piece by John Searle (NYR, June 29, 1972), he may get the impression that a linguist spends most of his time being concerned about rationalism (if he is a transformational grammarian) or behaviorism (if he is a structuralist). In fact, nothing could be further from the truth. The rationalism-behaviorism issue enters rarely if at all into actual linguistic analyses, whether transformational or structural. The reason for this is that the behaviorist vs. rationalist dispute has no necessary connection with the structuralist vs. transformationalist dispute.
In much of his writing, Noam Chomsky has conveyed the impression that structural linguistics was necessarily tied to behaviorism, while transformational grammar was necessarily rationalistic. This is false, though most popularizers of Chomsky’s views have let the matter pass without comment. Chomsky’s argument for rationalism, as Searle correctly observes, is based on the existence of linguistic universals (that is, features which are common to all human languages). Chomsky always cites examples of putative universals from transformational grammar, but the fact is that just about every other theory of grammar that has ever been seriously proposed has, either implicitly or explicitly, incorporated claims for extremely complex and sophisticated linguistic universals. This is true of structural linguistics, stratificational grammar, tagmemics, Montague grammar, generative semantics, etc.
In fact, contrary to what Chomsky suggests, the most extensive studies of complex linguistic universals have been carried out within the framework of structural linguistics. The classic works of the European structuralists Trubetzkoy and Jakobson in phonology and of Joseph Greenberg in American structuralist syntax have been the foundation for all of the more recent (and less extensive) studies of universals done in the tradition of transformational grammar. Chomsky’s claims in favor of rationalism over behaviorism do not rest upon his theory of transformational grammar being right and structuralism being wrong. One can make exactly the same argument using the structuralist universals instead, since the universals discovered in structural linguistics are more than complex enough for the purpose of the argument.
One should also be aware of the limitations of Chomsky’s arguments for the existence of an innate language acquisition mechanism in the human mind. As Searle points out, Chomsky has claimed that people possess innately not merely general learning mechanisms, but a specifically linguistic innate faculty. His argument is of this form: There are complex linguistic universals that everyone learns uniformly. There are at present no general learning theories that can account for this. It is hard to imagine what any such theories could be like. Therefore, it is plausible to assume that there can be no such theories. But the argument is fallacious: Nothing follows from a lack of imagination.
What Chomsky has shown is that either there is a specifically linguistic innate faculty or there is a general learning theory (not yet formulated) from which the acquisition of linguistic universals follows. The former may well turn out to be true, but in my opinion the latter would be a much more interesting conclusion. If I were a psychologist, I would be much more interested in seeing if there were connections between linguistic mechanisms and other cognitive mechanisms, than in simply making the assumption with the least possible interest, namely, that there are none.
As Searle notes, Chomsky has characterized structural linguistics as being fundamentally behavioristic and concerned solely with taxonomy. This is a misleading view of a broad, diverse, and interesting field, which happened not to be very good at dealing with the syntactical problems raised by Chomsky, and which showed little if any interest in formalized theories. Chomsky’s teacher, Zellig Harris, was an extreme case of a behavioristically oriented taxonomist.1 Bloomfield and Hockett, in their theorizing moods, also fit the mold, though one can argue that they did not always adhere to their theories in their linguistic analyses. Though these were prominent structural linguists, they were by no means typical of the wide range of European and American structuralists, either in their interests or in their commitment to behaviorism. Distinguished structuralists like Boas, Sapir, Jakobson, Pike, Weinreich, Bolinger and Greenberg never had much, if any, commitment to behaviorism. Their interests and their linguistic theories ranged far beyond mere taxonomy to such areas as linguistic universals, the relation between language and culture, dialectal variation, crosslinguistic interference, ritual language, poetics, and much much more. When transformational grammar eclipsed structural linguistics, it also eclipsed many of these concerns, much to the detriment of the field.
Chomsky’s account of so-called Cartesian linguistics is as inaccurate as his portrayal of structural linguistics. Searle has criticized Chomsky for inaccurately interpreting Descartes’s writings, but he ignores the devastating critiques of Chomsky’s treatment of the Port Royal grammarians and of Locke that have appeared in the linguistic literature. Chomsky claims in Cartesian Linguistics that Cartesian rationalism gave birth to a linguistic theory like transformational grammar in its essential respects. He bases his claims on the Grammaire Générale et Raisonnée by Antoine Arnauld (a disciple of Descartes’s) and Claude Lancelot (a language teacher), published in 1660. The Grammaire Générale followed a series of other grammars by Lancelot, the most extensive being his Latin grammar.
Chomsky appears not to have read this Latin grammar (an English translation of which was in Widener Library) but Robin Lakoff studied it and published her findings in the review mentioned in footnote 1. She discovered that in the introduction Lancelot credited all of his interesting findings to Sanctius (Francisco Sanchez de las Brozas), a Spanish grammarian of the previous century, whose work antedated Descartes by half a century. Checking into Sanctius, she found that Lancelot was not being modest. He had indeed taken all of his interesting ideas from Sanctius. In short, what Chomsky called Cartesian linguistics had nothing whatever to do with Descartes, but came directly from an earlier Spanish tradition. Equally inconsistent with Chomsky’s claims is the fact that the theories of Sanctius and the Port Royal grammarians differ from the theory of transformational grammar in a crucial way. They do not acknowledge the existence of a syntactic deep structure in Chomsky’s sense, but assume throughout that syntax is based on meaning and thought. Chomsky has steadfastly opposed this position from his earliest works straight through to his most recent writings.
The important results of transformational linguistics are very different from what one is led to believe in most popular articles and introductory textbooks. Many of those who worked out the details of transformational grammar in the 1960s, both in Chomsky’s group at MIT and elsewhere, found that, if one ignored a great many problems that at first seemed peripheral, then transformational grammar could account for far more facts than any previous theory of grammar and gave much deeper insights into language. The transformational rules formulated in this period are presented in most elementary textbooks as being essentially correct.
But the deeper results came later, in 1967 and after, when new facts were discovered and old facts that had previously been brushed aside were taken seriously. It was found that virtually no transformational rules that had been formulated could be made to handle the data; there is not a single rule of Chomsky’s syntax that can honestly be said to be well established. The difficulties form a pattern: Chomsky had drawn the syntax/semantics and performance/competence distinctions to try to preserve what Searle describes as his “peculiar and eccentric” view that it is possible to study the structure of language independently of its communicative function. In case after case, however, it has been found that rules of grammar have to take account of what Chomsky had arbitrarily ruled out of grammar as being semantics and performance.
The really deep results of transformational grammar are, in my opinion, the negative ones, the hosts of cases where transformational grammar fell apart for a deep reason: it tried to study the structure of language without taking into account the fact that language is used by human beings to communicate in a social context.
The detailed work that has been done in generative grammar tends to back up Searle’s philosophical judgment that the form of language cannot be studied independently of its function. But Searle is at best half-right when he claims that “the conflict [between transformational grammar and generative semantics] is being carried out entirely within the conceptual system that Chomsky created.” Generative semantics accepts such claims of Chomsky’s as that there exists nonsurface syntactic structure and that there are rules of grammar relating pairs of phrase-structure trees, just as it accepts the goal of accounting for the linguistic intuition of the native speaker in terms of formal rules and a general theory.
But those working in the area have found that many of the most basic assumptions of transformational grammar were inadequate and have rejected them, including the following of Chomsky’s fundamental assumptions: that syntax is independent of human thought and reasoning, that there exists a syntactic deep structure, that transformational rules are fundamentally adequate for the study of grammar, that syntactic categories are independent of the categories of human thought, that language use plays no role in grammar, that syntax is independent of the social and cultural assumptions of speakers, and many other central positions of Chomsky’s that many of us find inadequate, especially in the light of recent research.
Nor is Searle correct when he says, “Whoever wins, the old structuralism will be the loser.” No one in generative semantics is likely to adopt behaviorist taxonomy, but then that was only one of many trends in European and American structuralism. The concerns of generative semantics in the area of the communicative function of language overlap in many respects with nonbehaviorist and nontaxonomic structuralism. In addition, the conceptual framework of generative semantics derives much from outside of transformational grammar, for instance, model-theoretical semantics in the tradition of Tarski and Carnap, and more recently Kripke, Montague, Scott and others, the concern for language use that one finds in the writings of Wittgenstein, Austin, Grice, and Searle, Zadeh’s work on inexact concepts, recent sociolinguistics as represented in the work of Labov, Hymes, Gumperz, Bickerton, Bailey, and others, and trends in the sociology of small-group interactions as represented in the works of Goffman, Garfinkel, Sacks, and Schegloff. What we are trying to do is develop a linguistic theory that is rooted in the study of human thought and culture—the very antithesis of transformational grammar as narrowly construed by Chomsky.
George Lakoff
University of California, Berkeley
"We are neural beings," states Berkeley cognitive scientist George Lakoff. "Our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything - only what our embodied brains permit."
His new book Philosophy In The Flesh, coauthored by Mark Johnson, makes the following points: "The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical."
Lakoff believes that new empirical evidence concerning these findings of cognitive science has taken us over the epistemological divide: we are in a new place, and our philosophical assumptions are all up for grabs.
He and Johnson write: "When taken together and considered in detail, these three findings from the science of the mind are inconsistent with central parts of Western philosophy, and require a thorough rethinking of the most popular current approaches, namely, Anglo-American analytic philosophy and postmodernist philosophy."
According to Lakoff, metaphor appears to be a neural mechanism that allows us to adapt the neural systems used in sensory-motor activity to create forms of abstract reason. "If this is correct, as it seems to be," he says, "our sensory-motor systems thus limit the abstract reasoning that we can perform. Anything we can think or understand is shaped by, made possible by, and limited by our bodies, brains, and our embodied interactions in the world. This is what we have to theorize with."
He then raises the interesting question: "Is it adequate to understand the world scientifically?"