
The Role of Creativity in the Cognitive Turn in Linguistics

In: International Review of Pragmatics
Author:
István Kenesei Research Institute for Linguistics, MTA, and IEAS, University of Szeged, Hungary kenesei.istvan@nytud.mta.hu


The recent cognitive turn in linguistics is closely related to research into the creative nature of language. Formal creativity, or in other words, the recursive nature of language (with respect to both words, i.e., the basic units, and sentences, i.e., the end products) is what determines further domains of creativity, viz., at the level of meanings and in the theory of mind, providing for their unlimited and variable nature. Principles of the formal properties of language are presented at the levels of words and sentences, showing that recursion occurs both in words and sentences, indicating the local nature of syntactic relations, and demonstrating their neural correlates. Reference to neurolinguistic experiments is used to argue that metaphorical extensions of meanings are a natural phenomenon placing no burden on mental processing, even though literal meanings are not handled the same way as metaphors. It is claimed that sentential meanings have a primacy over word meanings, while words, and not sentences, are the basic units of the mental lexicon, i.e., long-term memory. In order to understand metaphors it is essential to have theory of mind (ToM), which develops in children in parallel with the acquisition of complex syntactic structures involving mental verbs, as is shown by false-belief tasks. The nature and limits of the complexity of ToM are related to the limits of syntactic complexity in natural language.

Introduction

Before I try to elaborate what I mean by the cognitive turn in linguistics, let me first recall a much older phrase, viz., the “linguistic turn” in philosophy starting at the end of the 19th century, which broke with the kinds of metaphysical questions that had determined the discipline before and placed epistemological issues in the foreground of attention, such as how human agents are capable of understanding the world around them and how language mediates that understanding.1

The change that has taken place in linguistics since the mid-20th century is comparable to the linguistic turn in philosophy: from the view that language subsists external to human beings at some sphere of Platonic ideas we have come to maintain that language is identified by means of mapping neural processes in the minds of individuals. I hasten to note here it is in this sense that the present paper refers to the cognitive aspects of language, although I am well aware that “cognitive” has become a catchphrase in linguistics with several incommensurate uses and meanings.

The phrase “cognitive turn in linguistics” has not had a long history. While its first mention, although in a different sense, is probably due to Ponterotto (1994), its best early description is by Winfried d’Avis:

So unscharf noch die theoretischen Grundlagen dieser neuen Wissenschaft sind, so einig ist man sich doch über ihr Geburtsjahr: 1956. Damals trafen sich Simon, Chomsky, Newell u.a. am MIT beim ‘Symposium on Information Theory’ und brachten den Stein für eine neue Art der Erforschung der Kognition ins Rollen. Das Neue bestand in dem Versuch, sehr alte philosophische Fragen nach dem Wesen des Geistes in die Frage nach seiner Funktionsweise zu überführen und interdisziplinär und auf empirischem Wege zu beantworten. Unstreitig wie das Geburtsjahr ist auch, daß folgende Teildisziplinen zur Kognitionswissenschaft gehören: Linguistik, Computerwissenschaft, Psychologie, Neurobiologie und Philosophie des Geistes.2

(d’Avis 1998: 37)

While various types of nonformal linguistic research, most of which concentrates on the symbolic-conceptual functions of language, have found “cognitive” an apt term to characterize their interests,3 the interaction of theoretical linguistics with the kinds of “hyphenated” linguistics such as neurolinguistics, psycholinguistics, and biolinguistics relates linguistic phenomena to cognitive processes proper, investigating and ultimately demonstrating the role of language in cognition, thus substantiating the use of the phrase “cognitive turn”.

The present paper illustrates the cognitive turn in linguistics through a set of basic theses. Whether language is taken to be the means of human communication or the expression of thought, it makes it possible to produce entirely novel items in three apparently unrelated fields:

  1. a)in linguistic forms or structures,
  2. b)in the area of metaphorical extensions of meanings,
  3. c)in the representation of the thoughts or mental states of others.

The first of these will be called here a) formal creativity, the second b) semantic creativity, and the third c) theory-of-mind (ToM) creativity. I will argue here that these three apparently independent properties are interrelated: semantic creativity as well as ToM creativity is derivable from formal creativity. There is a growing amount of evidence in this regard obtained from brain research, and if our assumptions are tenable, new research programs can be based upon them.

Formal Creativity

Saussure’s (1916) Cours determined the direction of linguistics in the 20th century in that its fundamental novelty was to regard language as form, not substance, in other words, to consider the relationships between its constituent elements, i.e., sounds/phonemes, morphemes, words/lexemes, etc., rather than the meanings expressed by (some of) them, as its defining property. The linguistic enterprise called structuralism has covered a fairly large territory, but a number of essential questions have remained unanswered. To recall just a few, let us first ask what Saussure and his followers understood by the term “linguistic sign” (signe linguistique). Sifting through Saussure’s examples and elucidations, it stands undoubtedly for “word”, or in subsequent periods of structuralism, for “morpheme”, since these are the smallest meaningful units of linguistic analysis.

Saussure did not develop his grand pioneering principles concerning the nature of language into precise methods of linguistic analysis; they were to become the remit of the structuralist (or descriptivist) school in the USA (Bloomfield, 1933; Wells, 1947; Harris, 1951; Hockett, 1958). But about half a century after Saussure another major breakthrough cast doubt upon the very principles everyone had taken for granted before and has since transformed the edifice of linguistics substantially. Noam Chomsky’s doctoral dissertation (1955) and the thin book published on its basis (1957) considered the sentence, and not the word or the morpheme, to be the fundamental unit of language. He showed that the number of possible sentences is unlimited and used a straightforward demonstration to illustrate the property of language that has come to be known as formal recursion, evidenced by simple examples, such as the following ones.

(1) a. The student said that the baker knew that the driver believed that …

b. The student saw the baker who knew the driver who met the man …

c. the lettering on the cover of the book on the destruction of the city in the country …

The examples are rather simple, especially as far as their contents are concerned, but the purpose of the illustration was precisely this: it is this simple formal recursion or formal creativity that underlies the unlimited variability of messages or meanings that any language is capable of conveying.4
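
To make the notion of formal recursion concrete, here is a minimal sketch, in Python, of the kind of rule that produces sentences like (1a): a single clause-building procedure that re-applies to its own output. The toy lexicon and the function name are illustrative assumptions, not part of any actual grammar.

```python
import random

# Toy lexicon; the particular nouns and verbs are illustrative only.
NOUNS = ["student", "baker", "driver", "teacher"]
VERBS = ["said", "knew", "believed", "thought"]

def clause(depth):
    """Build a clause of the type in (1a): 'the N V that [clause]'.
    One rule, applied to its own output, yields unboundedly many sentences."""
    subject = f"the {random.choice(NOUNS)}"
    verb = random.choice(VERBS)
    if depth == 0:
        return f"{subject} {verb} something"
    return f"{subject} {verb} that {clause(depth - 1)}"

# Any depth gives a well-formed sentence; the set of possible outputs is unlimited.
for d in range(4):
    print(clause(d))
```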

The generative enterprise of the past 50 or so years has tried hard to postulate universal principles from which to derive the properties of individual languages, regarding the human brain/mind as the locus of the implicit knowledge behind the universal principles, which are genetically given. That is, it posits evolutionary as well as cognitive-neurological hypotheses in the form of theoretically challenging claims. Let us ignore for the time being the reduction of the number and types of universal principles and concentrate on the advantages of the process.

Practitioners of the field have amassed a multitude of systematic observations within a single language or across several languages, countless new problems have been recorded, and every fresh answer has induced even more interesting questions that have pushed the vehicle of research forward. Besides, this progress of the discipline has brought along such lasting achievements as the revitalization of the formerly secluded field of linguistic semantics by logic and the philosophy of language, owing to the fact that linguistic semantics ceased to be confined to investigating problems of word meanings and extended its sphere over the meanings of sentences, thereby realizing new research programs.

How then is this formal system of language put to work when any one sentence or noun phrase can consist of a large number of items arranged in intricate structures? Where are the “rules”? Where and how is the basic “formula” of recursion encoded? In my answers below I will make use of grand simplifications and neglect a host of important, though in this connection minor, issues, primarily in order to present my claims as best as possible in the limited space available.

As we understand it today, all these properties are, with some simplification, encoded in “words”. To reformulate the question, what do we know when we know a word in our native language? Let us start with three simple words denoting consumption: eat, munch, and devour. If one has to answer a question such as What’s she doing?, why is (2c) unacceptable when (2a, b) are OK?

(2) a. Eating

b. Munching

c. *Devouring

Since devour is a transitive verb (= V), it cannot occur without an accompanying object, i.e., a nominal expression, or technically speaking, a noun phrase (NP), e.g., a piece of cake. This information is encoded into the verb devour the same way as eat has the information encoded in it that it is a “transitive/intransitive” verb, that is, it can occur with or without an object. What about munch? This verb can (optionally) occur with a prepositional phrase (PP), whose head, the preposition, is not freely selected: it must be on, cf., She is munching on a piece of cake. Neither *munching a cake nor *munching at a cake would qualify, in at least one dialect of English. In a similar fashion, neither *eating on a cake nor *devouring on a cake is possible. In other words, each verb has the information encoded in it that it can or must take an NP or a PP complement and, if the latter, what preposition the PP has in it. Note that there is no conceptual-semantic reason why the actions of eating or munching should not be represented by obligatorily transitive verbs, since it is hardly conceivable that someone is eating without eating something, and if these verbs can be used intransitively, there is no obvious reason why devour cannot be, when it simply means (according to common dictionaries) “eat greedily”.5
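
How such requirements could be “encoded into the verb” can be pictured with a small sketch. The frame notation and the checking function below are illustrative assumptions, not a claim about how the mental lexicon is actually implemented.

```python
# Each verb lists the complement frames it licenses; "P:on" stands for a PP headed by 'on'.
LEXICON = {
    "eat":    [(), ("NP",)],           # transitive or intransitive
    "devour": [("NP",)],               # object obligatory
    "munch":  [(), ("P:on", "NP")],    # bare, or a PP whose head must be 'on'
}

def acceptable(verb, frame):
    """A use of the verb is acceptable iff its complement frame is licensed."""
    return tuple(frame) in {f for f in LEXICON[verb]}

print(acceptable("eat", []))                  # True:  Eating.             (2a)
print(acceptable("devour", []))               # False: *Devouring.         (2c)
print(acceptable("munch", ["P:on", "NP"]))    # True:  munching on a cake
print(acceptable("munch", ["P:at", "NP"]))    # False: *munching at a cake
print(acceptable("eat", ["P:on", "NP"]))      # False: *eating on a cake
```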

To extend the above, let us now examine a somewhat more complex case and observe the behaviour of the two verbs think and ask, with the schematic structures shown.

(3) a. *I thought the price.

b. I asked the price.

(4) a. *I thought her to leave.

b. I asked her to leave.

(5) a. I thought that she was new.

b. *I asked that she was new.

(6) a. *I thought whether she was new.

b. I asked whether she was new.

The schematic frames involved are: (3) V + NP; (4) V + NP + INFINITIVE; (5) V + that-CLAUSE; (6) V + whether-CLAUSE.

The verb think is intransitive, it takes no infinitival or interrogative complement clause, but it can take a declarative that-clause. The verb ask, on the other hand, is transitive, takes infinitivals and question clauses, but cannot be accompanied by declaratives. All this information must be encoded in the respective verbs, and, moreover, in an extremely simple form. This is made possible by an overarching principle of language structure that has come to be called locality. The principle of locality ensures that all relations in language are strictly local, that is, they hold between adjacent elements.6 In the case of a transitive verb, the verb determines that it must have a nominal complement whose head is any one of the items determiner (D), quantifier (Q) or noun (N). A verb with an infinitival complement requires that it be accompanied by a nonfinite clause whose head is an overt or covert complementiser for, which in turn requires a nonfinite tense marker to. A verb that takes a finite clausal complement prescribes that the head of its complement clause be either that, if it requires a declarative complement clause, or whether, if it needs an interrogative complement clause. Both of the latter complementisers can again be overt or covert. Heads are in a position to determine the heads of their complements, thus producing a “chain” of local relations, as it were. In short, one word only has to “see” as far as the next head.7
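
The “chain” of local relations can be sketched by letting each head record nothing more than which heads it licenses as the head of its complement. The labels and the tiny licensing table below are illustrative assumptions.

```python
# Each head "sees" only the next head down; PAST/PRES stand for finite tense.
LICENSES = {
    "think":   {"that"},                 # only a declarative that-clause, cf. (5a)
    "ask":     {"N", "for", "whether"},  # NP (3b), infinitival (4b), question clause (6b)
    "that":    {"PAST", "PRES"},         # a finite tense follows
    "whether": {"PAST", "PRES"},
    "for":     {"to"},                   # (covert) for requires nonfinite to
}

def well_formed(chain):
    """A chain of heads is well formed iff every head licenses the next one."""
    return all(nxt in LICENSES.get(head, set()) for head, nxt in zip(chain, chain[1:]))

print(well_formed(["think", "that", "PAST"]))   # True:  I thought that she was new.  (5a)
print(well_formed(["ask", "that", "PAST"]))     # False: *I asked that she was new.   (5b)
print(well_formed(["ask", "whether", "PAST"]))  # True:  I asked whether she was new. (6b)
print(well_formed(["think", "N"]))              # False: *I thought the price.        (3a)
print(well_formed(["ask", "for", "to"]))        # True:  I asked her to leave.        (4b)
```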

Although all the items listed in the examples were indeed words, it is easy to see that it is not necessary that heads or complements always be (lexical) words. The formative carrying Tense, for instance, is sometimes a word, e.g., did or to, or a “mere” affix, as in ask-ed. The marker of the comparative degree in English can be either the lexical item more, or the affix -er, as in more unhappy, but happi-er.8 The complementisers that and whether, as was claimed above, can be overt or covert, i.e., visible or invisible, as in the following examples.

(7) a. I thought (that) she was new.

b. I asked what/whether she knew.

This is not the only distinction that these examples illustrate. The traditional difference between notional and grammatical words is carried over into the generative enterprise in distinguishing the roles they play in syntactic structure. Grammatical formatives, or functional categories, provide the frame of sentences on whose branches notional words, that is, items from the lexical categories of nouns, verbs, adjectives and adverbs are hung.

That such a distinction has a reflex in the mental representation of functional and lexical categories has been demonstrated, among others, by Setola and Reilly (2005), who argue that functional items are located in regions different from those characteristic of lexical/notional items.

Critically, different word categories are represented by cell assemblies with a different topography. On the one hand function words, whose meanings reflect their linguistic use rather than objects or actions, should be represented exclusively by strongly left-lateralised assemblies limited to the perisylvian cortices; on the other hand, content words should be represented by less lateralised assemblies, including neurons both within and outside the perisylvian regions.

(Setola and Reilly 2005: 252)

It is always a unit of two constituents that is formed at the next higher level as a result of merging a “word” with another item, whether another word or a complex unit, as determined by the so-called edge features in the word that is the head of the new construction thus formed. The rise of edge features, according to Chomsky, was a crucial moment in the history of evolution (Chomsky, 2005; Piattelli-Palmarini, 2008).
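
A schematic rendering of this binary operation is sketched below: each application of merge combines exactly two constituents, and an assumed edge attribute on the head decides what it may combine with. This is only an illustration of the idea, not an implementation of any particular minimalist system.

```python
from collections import namedtuple

# label: the item's form; category: its own category; edge: the category it selects, if any.
Item = namedtuple("Item", "label category edge")

def merge(head, comp):
    """Form a two-membered unit, labelled by the head, if the head's edge
    feature matches the category of the other constituent."""
    if head.edge != comp.category:
        raise ValueError(f"{head.label} cannot merge with {comp.label}")
    return Item(label=f"[{head.label} {comp.label}]", category=head.category, edge=None)

the    = Item("the", "D", "N")     # the determiner selects a noun
cake   = Item("cake", "N", None)
devour = Item("devour", "V", "D")  # the verb selects a nominal expression

np = merge(the, cake)              # [the cake]
vp = merge(devour, np)             # [devour [the cake]]
print(vp.label)
```

Each step yields exactly one new two-membered constituent, mirroring the incremental, strictly local build-up described above.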

Before the relationship between words and sentences is further examined, let us consider one property of words that relates them to sentences. Words can be almost as effortlessly multiplied as sentences, and native speakers can, throughout their lives, acquire or create new words, which may remind us of the parallel property of sentences. Moreover, word formation is just as well regulated as the construction of new sentences. There are productive processes in both derivation and compounding in languages like English, and speakers can recognize and understand new items with considerable ease.

The following examples will illustrate the point: cash mob “an event where people support a local retailer by gathering en masse to purchase the store’s products”, and its derivatives: cash mobber, cash mobbing; unsourcing “transferring company functions from paid employees to unpaid volunteers, particularly customers on social networks”, and its derivatives: unsourced, unsourceable, unsourceability, etc.9 Note that word formation rules always take two items, whether two words as in compounding, or a word and an affix as in derivation, to form a new unit, much like the rules applying to syntax as reviewed above.10 Finally, in addition to word formation, another frequent method of increasing the vocabulary of a language, i.e., of its speakers, is the use of “loanwords”, that is, items imported from other languages, e.g., curry, chutzpah, nuance.11

Yet, the number of words is unlimited in a different way from the number of sentences. Recursion in the lexicon has a boundary, however vague it may be, probably because words, unlike sentences, have to be remembered. Words have to be stored in long-term memory, that is, in the speaker’s mental vocabulary, while sentences are never stored, except for the (relatively) few proverbs, formulae, clichés, and the like, such as Early bird gets the worm or First come, first served.

Note that lexical creativity is not a prerequisite for the formal creativity of language. In principle, there could be languages without any word formation processes, since recursion in syntactic structures is perfectly sufficient to produce an unlimited number of sentences, although no clear example of such a language has been found.

Semantic Creativity

Words are secondary to sentences not only in a formal or structural sense. The meanings of functional (or grammatical) items, e.g., the past tense suffix -ed or the conjunction but, cannot be grasped as directly as those of notional words, as was mentioned above.

(8) a. The clock ticks too loud.

b. The clock ticked too loud.

(9) a. Jim is a weightlifter and very smart.

b. Jim is a weightlifter but very smart.

It is usually claimed that the meanings of functional items transpire from the sentence as a whole, in particular from the difference between the members of each pair (8a, b) and (9a, b), that is, from the difference between present and past tense in (8a, b), or between and and but in (9a, b), respectively. The members in each pair represent different states-of-affairs, or in other words, they are true under different conditions.

But then the meanings of notional words are also a function of the meanings of sentences; that is, they are interpreted in relation to the states-of-affairs that the sentences stand for. When a notional item is used, a physical object may be present in the given situation, or as an image in the speaker/hearer’s mind; this is due to the fundamental property of language that (some of) the expressions formed in it can be used to refer to objects or phenomena in the environment of the utterance, and it is these objects or phenomena that we recall when we identify the identical or similar ingredients of the referents of these expressions. To generalize, then, the meaning of a word is what it contributes to the meanings of the expressions, and in the last analysis, to the meanings of the sentences, that contain it. And this meaning is, most of the time, but not always, constant.

Within the conceptual framework ultimately deriving from Frege (1892/1952) and adopted in this paper, the meanings of words can be identified in two major constructions: in referring expressions that pick out things, on the one hand, and in sentences (propositions, utterances) containing such referring expressions, on the other. Once the referring expression is understood, one knows what object it refers to. Once the sentence is understood, one knows what state-of-affairs (or assertion, command, question, etc.) it represents or expresses. Thus, we know, or even find out, retroactively, as it were, what the words in them mean.12 In this approach there is no difference between “literal” and “extended” or “metaphorical” meanings as will be shortly discussed. Note here that whenever the term “sentence” is used here, it is used unequivocally to cover utterance, proposition (in the logical sense) or the speech act containing it, since in all instances words and sentences are attributed meaning in the same way.

To sum up, it is sentences that are meaningful or have meaning in a primary sense. But what is stored in long-term memory, i.e., in the mental lexicon, is words. The minimal unit of language, thought, or linguistic communication is the sentence. However, the basic unit of linguistic memory, the mental storage system, is the word. It is this dynamic opposition that makes it possible for (primarily notional) words not to have their meanings permanently fixed. The paradoxical situation can be summarized as follows:

While word form is invariable, word meaning is indeterminate; while sentence form is variable, sentence meaning is determinate.

Meaning extension is a traditional topic in linguistic semantics; it figures prominently in a large number of overviews of the field, as, e.g., in Ullmann (1959) or Lyons (1977), which devote considerable space to polysemy and homonymy, metonymy and metaphors, etc. What is clearly visible in more recent treatments of these issues is an unquestionable shift from lexico-centric accounts toward explanations that assign the context, whether phrasal, syntactic, or conceptual, an essential role, as is evidenced in the entire field of semantics. In particular, metaphor has, in the past 50 or so years, been shown to be far too widespread in language and thought to be regarded as a mere literary or stylistic device, and, what is more, it is no longer taken to characterize, or belong to the properties of, the word.

When, for instance, Lakoff and Johnson (1980/2003) determine one class of conceptual metaphors, that of ontological metaphors, they make use of a general frame or “concept” in their terminology, as in (10), which helps to interpret individual metaphorical statements, such as in (11), see Lakoff and Johnson (2003: 51).

(10) theories are buildings

(11) a. Is that the foundation for your theory?

b. The theory needs more support.

c. The argument is shaky.

Individual metaphors, as those in (11), are interpreted by making inferences from one conceptual domain to another (Lakoff and Johnson 2003: 246), in other words, by first taking an utterance like (11c) and identifying its subject with, or subsuming it under, the subject of (10), since arguments are parts of theories, and then subsuming the predicate in (11c) under that of (10), since buildings can be shaky. Of course there can be several alternative conceptual metaphors, as the list in (12) shows, but all are of the form “X is Y”.

(12) ideas are food, ideas are people, ideas are plants, ideas are products, ideas are commodities, ideas are resources, ideas are money, ideas are cutting instruments, ideas are fashions
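
The inference pattern behind (10) and (11) can be pictured schematically: the subject of the metaphorical statement is subsumed under the subject of a conceptual metaphor, and its predicate is checked against the source domain. The miniature knowledge base below (what counts as part of a theory, what can be said of a building) is an illustrative assumption.

```python
CONCEPTUAL_METAPHORS = {"theory": "building"}      # (10) theories are buildings
PART_OF = {"argument": "theory", "foundation": "building"}
CAN_BE  = {"building": {"shaky", "supported", "solid"}}

def interpret(subject, predicate):
    """Return the conceptual mapping that licenses the metaphor, if any."""
    target = PART_OF.get(subject, subject)         # arguments are parts of theories
    source = CONCEPTUAL_METAPHORS.get(target)      # theories are buildings
    if source and predicate in CAN_BE.get(source, set()):
        return f"'{subject} is {predicate}': {target} understood as {source}"
    return None

print(interpret("argument", "shaky"))   # (11c) The argument is shaky.
```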

With respect to physical experiences, metaphors correspond to neural structures, according to Lakoff and Johnson (2003: 256f.). In the case of a metaphor like affection is warmth, “[t]here is neuronal activation occurring simultaneously in two separate parts of the brain: those devoted to emotions and those devoted to temperature”. But most conceptual metaphors, including the ontological ones, e.g., (10)–(11), are not accompanied by physical events; therefore, it is reasonable to suppose that the inferences they posit have to appeal to some list or hierarchy of propositions, such as those illustrated in (10) and (12), whose general form corresponds to something like that in (13).

(13) abstractions are physical objects, events, animate beings

It is this type hierarchy that has to be, implicitly or explicitly, invoked in case difficulties of interpretation arise. Metaphors are the everyday ingredients of (the use of) language, as the following random passage shows, where all metaphors are marked by italics.

I did not have a bad time in the Army; nevertheless, being in a military is like being a citizen of a totalitarian state. Nobody gives a damn who you are and what you think. That was the problem with civilians, a major once explained to me. They have a perverse attachment to their own opinions. The true sense of the self can only come from obedience, from submitting to an organization like the army, which works for greater good, he told me. Since I was just a private first class he deigned to share this piece of wisdom with because I was his driver for the day, I nodded my head in agreement. I wish that fascist prick could see all these free people now, I thought as the taxi took me through the crowded, muggy, rubbish-strewn, smelly, and absolutely fabulous streets of Manhattan.13

It is clear that quite a few metaphorical expressions cannot be converted to a literal phrase, e.g., bad time or the taxi took me through … the streets. The approach characterized as “conceptual metaphors” is based on the idea that such metaphorical interpretations are always relative to (sets of) conceptual schemes, rendering it useless to recall the literal meanings of the metaphors.

The relevant literature is divided as to whether or not literal meanings are a necessary stage through which to reach metaphorical meanings. For instance, H. Paul Grice (1975) or John R. Searle (1979) claim that it is when the literal understanding is false or defective that the hearer is forced to look for a new interpretation for sentences, such as those in (14), which initiates an inference leading to the metaphorical meaning.

(14) a. Sally is a block of ice.

b. Richard is a gorilla.

Whether conceptual metaphors or inferences are to be preferred, both approaches are committed to an interpretation of metaphors that is more complex than the construal of literal meanings. However that may be, the message from our viewpoint is simple: if the meanings of words are realized only in sentences, then nonliteral meaning, whether metaphor, irony, litotes, etc., is even more closely tied to the sentence as its only possible locus of interpretation.

Take a relatively simple verb primarily expressing a physical action, like lie, which has the following dictionary entry (highly simplified to save space).14

(15) The dictionary entry for lie Vintr

1 a: to be or to stay at rest in a horizontal position; be prostrate; rest, recline <lie motionless> <lie asleep>

b: to assume a horizontal position—often used with down

c: to have sexual intercourse—used with with

d: to remain inactive (as in concealment) <lie in wait>

2: to be in a helpless or defenseless state <the town lay at the mercy of the invaders>

3 [of an inanimate thing]: to be or remain in a flat or horizontal position upon a broad support <books lying on the table>

4 to have direction; extend <the route lay to the west>

5 a: to occupy a certain relative place or position <hills lie behind us>

b: to have a place in relation to something else <the real reason lies deeper>

c: to have an effect through mere presence, weight, or relative position <remorse lay heavily on him>

d: to be sustainable or admissible

6: to remain at anchor or becalmed

7 a: to have place; exist <the choice lay between fighting or surrendering>

b: consist, belong <the success of the book lies in its direct style> <responsibility lay with the adults>

8: remain; especially: to remain unused, unsought, or uncared for.

It goes without saying that the meaning chosen from this list depends on the syntactico-semantic environment, that is, the sentence the word happens to be used in. And better dictionaries always illustrate the individual meanings by giving examples, i.e., some context or a sentence in which its use can be grasped. Note that all metaphorical meanings listed in a dictionary are conventionalized or “dead” metaphors. But the point we are making here is that new and old metaphors, or even non-metaphors are understood the same way: by relying on the context of the sentence, which is what we all understand in a primary sense. The linguistic mechanism of understanding is not different in any one of these cases.

This view is corroborated by at least some neurolinguistic experiments, which show that new metaphors require no more mental effort than conventionalized ones or literal meanings. Blasko and Connine (1993) applied a semantic priming method as summarized by Glucksberg (2003: 93, emphasis added):

The experimental participants listened to metaphors such as: ‘Jerry first knew that loneliness was a desert * when he was very young’. While listening, a letter string target would appear on a computer screen immediately after the metaphor (where * appears, above). When the visual target appeared, the participants had to decide, as quickly and accurately as they could, whether it was an English word or not. There were three types of word targets, defined in terms of their relation to the metaphorical phrase: metaphorical, literal and control. For the ‘loneliness is a desert’ metaphor, the metaphorical, literal and control targets were, respectively, isolate, sand and mustache. Faster lexical decisions to metaphorical or literal targets relative to control targets would indicate immediate activation of metaphorical or literal meanings, respectively. Both metaphorical and literal targets were faster than controls. The metaphorical meanings of these apt metaphors were understood as quickly as the literal meanings, even when the metaphors were relatively unfamiliar. These results are consistent with other studies of metaphor comprehension that have found no differences in the time taken to understand metaphorically- and literally-intended expressions.

Glucksberg had been skeptical of Lakoff and Johnson’s (1980) conceptual metaphors, cf. Glucksberg and Keysar (1990), Glucksberg and McGlone (1999), and in a subsequent experiment had subjects judge whether statements like the following are true or false.

(16) a) literally true: some fruits are apples

b) literally false: some fruits are tables

c) metaphors: some surgeons are butchers; some jobs are jails

d) scrambled metaphors: some roads are jails; some jobs are butchers

The results were quite clear-cut: “People had difficulty in rejecting metaphors as literally false; they could not inhibit their understanding of metaphorical meanings, even when literal meanings were acceptable in the context of our experiment” (Glucksberg 2003: 94). Thus, the diagram in Figure 1 shows that metaphorical meanings are understood just as automatically as literal meanings.


Figure 1. Glucksberg’s (2003) experiment. Reaction time of subjects making a ‘literal-false’ decision as a function of sentence type (LF, literal false; SM, scrambled metaphor; M, metaphor). Metaphorically true sentences are hard to judge as literally false.


There are, however, two further considerations that might shed new light onto the problems outlined here. On the one hand, conditions that result in the loss of certain cerebral functions, such as Alzheimer’s disease, coincide with an increased difficulty of understanding new metaphors (Amanzio et al., 2008). This phenomenon is similar to the difficulty observed with the processing of ‘garden path’ sentences, e.g., The horse raced past the barn—fell. That is, the mechanism to analyse metaphors is always in operation but it cannot be pinpointed in the healthy brain.

On the other hand, a number of researchers have reported that the process of metaphorical interpretation involves areas of the brain other than those involved in literal understanding. While literal meanings are processed solely in the left hemisphere, metaphors make use of both hemispheres (Giora et al., 2000). Evidence is provided by the experiments in which metaphorical interpretations were hindered whenever right hemisphere functions were depleted.

The variability of word meanings is a clear consequence of the options of meaning extension realized in sentences, that is, of words having or acquiring meanings only in the context of a sentence, and of the fact that only sentences can have unambiguous meanings.

But what, after all, guarantees that new metaphorical meanings are correctly identified? This is the question that leads us to the next territory of linguistic creativity.

Theory-of-Mind Creativity

In order for the hearer to determine what is understood by the new metaphor Some jobs are jails in (16c) above, s/he must have some idea as to what the speaker may have had in mind. This is the kind of phenomenon typically classified under the label theory of mind (ToM), that is, “the ability to attribute mental states (thoughts, knowledge, beliefs, emotions, desires) to oneself and others” (Sodian and Kristen 2010: 189). Humans are capable of representing the mental states of countless agents, or countless mental states of one and the same agent, even though they may be contradictory. In order then to be able to carry out this feat, it is necessary to invoke linguistic devices—at least in the case of some of the types of mental states involved. The relevant literature is divided with respect to whether language is an inherent aspect of children’s social interaction and provides children with information needed for the construction of ToM (Gopnik and Wellman, 1994), or ToM is innate but children cannot demonstrate its use until their language is sufficiently developed (Baron-Cohen, 1995), or understanding other minds depends on domain-general cognitive operations that rely on language for their implementation (Frye, Zelazo and Palfai, 1995), but few doubt the unparalleled role of language in ToM (Astington and Filipova, 2005).

Whether or not ToM is dependent on linguistic skills, a large number of research papers tend to support a close correlation between the development of language capacity and achievements in a crucial aspect of ToM, “false belief” tasks.

According to Happé (1995), there is a close correlation between verbal ability and false belief task performance in children of 3 to 4 years of age. Astington and Jenkins’ (1999) longitudinal study of children has also shown that there is a strong correlation of verbal ability and false belief understanding. Meta-analyses of hundreds of articles, e.g., Wellman et al. (2001) and Milligan et al. (2007), have concluded that language ability and false belief understanding develop in a parallel fashion. “The results of the present meta-analysis show that there is a strong relation between false-belief understanding and language ability, which holds across a variety of language ability measures and false-belief task types, both concurrently and longitudinally, with a stronger direction of effect from language to false-belief than the reverse. These findings provide support for the argument that language plays a vital role in the development of false-belief understanding, and thus in the development of theory of mind” (Milligan et al. 2007: 641).

Some of the most interesting results have arisen from surveys of mental predicates, such as say or think, whose complements can be true or false, unlike complements of other verbs, e.g., want or promise, which take irrealis complements. De Villiers has proposed a theory, which is “a very specific hypothesis about the emergence of false-belief understanding, namely, that it rests on the child’s mastery of the grammar (semantics and syntax) of complementation” (de Villiers and Pyers 2002: 1040). In one type of false-belief task (Wimmer and Perner, 1983) children have to follow a story in which an object is moved from one location to another while the story protagonist is off the scene (e.g., Maxi’s chocolate is moved from a cupboard to a drawer). When the protagonist returns to the scene, children are asked where he thinks the object is, or more simply, where he will look for it, e.g., Where does Maxi think the chocolate is? or Where will Maxi look for the chocolate? (Milligan et al., 2007).

Children start using these mental verbs together with their clausal complements as early as 2 years of age, at the beginning only a few of them, e.g., see, look, think, know, without comprehending the function of the complementation (Diessel and Tomasello, 2001). The clausal complement is mastered 1 to 2 years after the first appearance of the structure, around the time when children gain control over false-belief tasks.

Evolutionary psychologists and biologists have long debated the degree to which language is responsible for ToM phenomena. Most of them agree that the experiments that endorse attributing ToM to primates or birds are based on a misinterpretation of the data (Penn and Povinelli, 2007). The experiments show that these animals, just like children below the age of 3 to 4 years, represent only second-order intentionality (“I know that you know …”); the third level (“I know that you know that I know …”) is inaccessible to them (Call and Tomasello, 1999). Attributing false beliefs is thus of a different complexity than the recognition of intentions and emotions in another being.
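
The orders of intentionality at issue can be pictured as the depth to which attitude reports are embedded in one another; the representation below is a deliberately crude sketch under that assumption.

```python
def attitude(agent, verb, content):
    """A mental-state attribution: agent - attitude verb - embedded content."""
    return {"agent": agent, "verb": verb, "content": content}

def order(state):
    """The order of intentionality is the depth of embedding of attitude predicates."""
    if isinstance(state, dict):
        return 1 + order(state["content"])
    return 0

second = attitude("I", "know", attitude("you", "know", "p"))
third  = attitude("I", "know", attitude("you", "know", attitude("I", "know", "p")))

print(order(second))   # 2: the level attested in apes and very young children
print(order(third))    # 3: the level reported as inaccessible to them
```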

It is at this stage that findings in the philosophy of language, and in particular, Grice’s (1975) Cooperative Principle together with the maxims spelling it out, or Searle’s (1979) indirect speech acts, as well as the nonliteral meaning analysed with reference to metaphor and irony, indicate the significance of ToM. It is hardly possible to grasp Grice’s implicatures or Searle’s inferences without invoking ToM processes. Let me cite just one example from Grice (1975):

(17) A: I am out of petrol.

B: There is a garage round the corner.

In Grice’s analysis “B would be infringing the maxim ‘Be relevant’ unless he thinks, or thinks it possible, that the garage is open, and has petrol to sell; so he implicates that the garage is, or at least may be open, etc.” (Grice 1975: 311).

Metaphors and, in general, meaning extensions, such as irony, induce ToM processes, as is shown also by neurolinguistic data from brain imaging. It has long been common knowledge that pragmatic processes involve the right hemisphere. Although early studies concentrated solely on the right hemisphere, as seen in Giora’s (2007) overview, more recently the activity of frontal areas has also come into focus:

RHD [right hemisphere damaged] individuals may exhibit different patterns of executive dysfunction (lack of inhibition versus lack of flexibility), which co-occurred with different patterns of pragmatic impairments (metaphor and non-literal interpretation such as indirect request versus literal interpretation) concomitant with a ToM deficit. […] The results suggest that the ability to understand pragmatic aspects of language is closely associated with the ability to make inferences about other people’s intentions.

(Champagne-Lavau and Joanette 2009: 423)

Similar conclusions were reached by Gallagher and Frith (2003), who studied the activation of the medial prefrontal cortex during ToM tasks (for more on this, see Amodio and Frith, 2006). Ahrens et al. (2007) have found in their fMRI experiments that new metaphors induce activity in the temporal and frontal lobes of both hemispheres, as contrasted with sentences having literal meanings only.

ToM is a conditio sine qua non of language comprehension, and ToM inferences cannot be realized without the device of mental verbs with clausal complements. At the same time ToM opens up a new territory of creativity: the faculty of representing countless mental states of each of a countless number of “other minds”. The unlimited “parallel” representability, i.e., that the minds of countless individuals can be independently represented in one mind (or in one’s mind), is complemented by a “linear” representability, namely, the degree of embeddedness of mental states into one another, or, to put it metaphorically, the power to which ToM can be raised.

Dunbar (2005) argues that this number equals six, albeit on anecdotal evidence:

When the audience ponders Shakespeare’s Othello, for example, they are obliged to work at fourth order intentional levels: I (the audience) believe that Iago intends that Othello supposes that Desdemona wants [to love someone else]. When Shakespeare puts the play on stage before us, he will, in critical scenes, have four individuals interacting, thus obliging us to work at fifth order level—the very limits to which most of us can cope. But notice that Shakespeare himself is being forced to work at one level of intentionality higher, because he must intend that we (the audience) believe that Iago intends …, etc. And when he is putting the action on the stage with his typical cluster of four interacting characters, suddenly he is being pushed beyond the limits at which most normal adult humans can cope: with four characters’ mind states, plus that of the audience and his own, he is having to work at sixth order.

(Dunbar, 2005: 17, emphasis in the original)

This complexity is, however, not encountered in everyday affairs, where the maximum is the fourth order level, as has been demonstrated by experimental evidence in Stiller and Dunbar (2007).

The restriction on ToM complexity recalls the distinction between the principles of grammar and the constraints on language production, or performance, to cite an old term. What we are dealing with here is, I believe, similar to what Chomsky (1965) claimed was the case with the difference between the unlimited complexity of possible grammatical constructions and the “limitations on performance imposed by organizations of memory and bounds on memory” (Chomsky 1965: 10). His original examples still qualify as illustrations (bracketing added).

(18) a. [I [called up [the man who wrote [the book that you told me about]]]]

b. [I [called [the man who wrote [the book that you told me about]] up]]

Right-branching structures of any complexity can be easily processed, as in (18a), while nested constructions, as in (18b), are much more difficult to comprehend beyond two levels of embedding, although (18b) is constructed in full compliance with the rules of grammar. In other words, grammar allows for an unlimited number of embeddings, just as ToM allows for the ‘nesting’ of an unlimited number of mental states, but working memory imposes obvious restrictions on the number of embeddings that can be manipulated on-line.
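
One way to picture this asymmetry is to count, at each word, how many constituents have already been opened but are still waiting for material to their right, as with the particle up held pending in (18b). Both the measure and the hand-built bracketings below are illustrative assumptions, not a psycholinguistic model.

```python
def load(tree, pending=0, out=None):
    """For each word in a bracketed structure (nested lists), record how many
    enclosing constituents still have material to come after it."""
    if out is None:
        out = []
    if isinstance(tree, str):
        out.append(pending)
        return out
    for i, child in enumerate(tree):
        still_open = pending + (1 if i < len(tree) - 1 else 0)
        load(child, still_open, out)
    return out

# (18a) right-branching: 'up' is adjacent to the verb.
a = ["I", ["called", "up", ["the man who wrote", ["the book that you told me about"]]]]
# (18b) nested: 'up' only arrives after the whole relative clause.
b = ["I", ["called", ["the man who wrote", ["the book that you told me about"]], "up"]]

print(max(load(a)), max(load(b)))   # the nested version peaks at a higher load
```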

Summary

The three subtypes of creativity surveyed here logically, but possibly also in an evolutionary sense, derive from the same strictly local relationship that is encoded in “edge features”, which determine what complement structure a word in the mental lexicon can have, or in a more general sense, what properties the word prescribes that its immediate environment must satisfy as instantiated by the operation of Merge. Among them it is the functional categories, or in more traditional terminology, grammatical words, activated in the left hemisphere only, that play the role of connectives, mapping out the skeletal structures of sentences, while notional words, which are activated in both hemispheres, make it possible for meanings to be variable and allow for their metaphorical extension, as regulated by a linguistically determined theory of mind. Further evidence is provided by cognitive neuroscience:

Ultimately, the LH [left hemisphere] quickly focuses semantic activation on features related to the dominant, literal or contextually relevant meaning while inhibiting features related to the subordinate or contextually irrelevant meanings. […] By contrast, the RH [right hemisphere] maintains weak, diffuse semantic activation of a broader semantic field, including distant and unusual semantic features, features that seem irrelevant to the context, and secondary word meanings […]. These large semantic fields provide only a coarse interpretation, rife with ambiguity. For instance, if you were listening to a story and heard the word ‘foot’, but couldn’t determine whether it referred to twelve inches or a part of the body, you would quickly get bogged down, unable to follow rapidly unfolding natural language.

(Jung-Beeman 2005: 514)

The three areas of formal, semantic, and ToM creativity, and within the first one, syntactic and lexical creativity, have been shown to go back to simple fundamental properties of language, located in what we called “words”, which through strictly local relations underlie highly complex structures constructed incrementally, which in turn give rise to novel, mostly metaphorical meanings, interpreted against the representations of other minds, all crucially dependent on linguistic, and ultimately formal, processes and properties.

This paper has been meant to be an attempt at drawing logical consequences as to how formal creativity must serve as the foundation for any “higher” level of creativity. The cognitive turn in linguistics, as outlined in the Introduction, is precisely the context in which such claims and assumptions can be put to the test.

Acknowledgements

My thanks are due primarily to Zoltán Bánréti, with whom I have been discussing a wide variety of topics, related to and/or including those written about in this paper; to Csaba Pléh, for his encouragement and inspiration, and to audiences at Eötvös University, Budapest, the University of Szeged, and the Central European University. I am also grateful to Mark Newson and Peter Sherwood, who have helped with some of the English expressions and examples.

This paper is dedicated to the memory of Mike Harnish (1941–2011), friend and continuous source of ideas for over thirty years.

References

Ahrens, Kathleen, Ho-Ling Liu, Chia-Ying Lee, Shu-Ping Gong, Shin-Yi Fang and Yuan-Yu Hsu. 2007. Functional MRI of conventional and anomalous metaphors in Mandarin Chinese. Brain and Language 100: 163–171.

Amanzio, Martina, Giuliano Geminiani, Daniela Leotta and Stefano Cappa. 2008. Metaphor comprehension in Alzheimer’s disease: novelty matters. Brain and Language 107: 1–10.

Amodio, David M. and Chris D. Frith. 2006. Meeting of minds: the medial frontal cortex and social cognition. Nature Reviews Neuroscience 7: 268–277.

Astington, Janet Wilde and Eva Filipova. 2005. Language as the route into other minds. In Bertram F. Malle and Sara D. Hodges (eds.), Other Minds: How Humans Bridge the Divide Between Self and Others, 209–219. New York: Guilford Press.

Astington, Janet Wilde and Jennifer M. Jenkins. 1999. A longitudinal study of the relation between language and theory-of-mind development. Developmental Psychology 35: 1311–1320.

Baron-Cohen, Simon. 1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge: MIT Press/Bradford Books.

Blasko, Dawn G. and Cynthia M. Connine. 1993. Effects of familiarity and aptness on metaphor processing. Journal of Experimental Psychology: Learning, Memory, and Cognition 19: 295–308.

Bloomfield, Leonard. 1933. Language. New York: Holt.

Call, Josep and Michael Tomasello. 1999. A nonverbal false belief task: the performance of children and great apes. Child Development 70: 381–395.

Champagne-Lavau, Maud and Yves Joanette. 2009. Pragmatics, theory of mind and executive functions after a right-hemisphere lesion: different patterns of deficits. Journal of Neurolinguistics 22: 413–426.

Chomsky, Noam. 1955/1975. The Logical Structure of Linguistic Theory. New York: Plenum.

Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton.

Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge: MIT Press.

d’Avis, Winfried. 1998. Theoretische Lücken der Cognitive Science. Journal for General Philosophy of Science 29: 37–57.

de Villiers, Jill G. and Jenny Pyers. 2002. Complements to cognition: a longitudinal study of the relationship between complex syntax and false-belief understanding. Cognitive Development 17: 1037–1060.

Diessel, Holger and Michael Tomasello. 2001. The acquisition of finite complement clauses in English: a corpus-based analysis. Cognitive Linguistics 12: 97–142.

Dunbar, Robin I.M. 2005. Why are good writers so rare? An evolutionary perspective on literature. Journal of Cultural and Evolutionary Psychology 3: 7–21.

Evans, Nicholas and Stephen C. Levinson. 2009. The myth of language universals: language diversity and its importance for cognitive science. Behavioral and Brain Sciences 32: 429–492.

Evans, Vyvyan and Melanie Green. 2006. Cognitive Linguistics: An Introduction. Edinburgh: Edinburgh University Press.

Everett, Daniel L. 2005. Cultural constraints on grammar and cognition in Pirahã: another look at the design features of human language. Current Anthropology 46: 621–646.

Frege, Gottlob. 1892/1952. On sense and reference. In P. Geach and M. Black (eds.), Translations from the Philosophical Writings of Gottlob Frege, 56–78. Oxford: Basil Blackwell.

Frye, Douglas, Philip David Zelazo and Tibor Palfai. 1995. Theory of mind and rule-based reasoning. Cognitive Development 10: 483–527.

Gallagher, Helen L. and Christopher D. Frith. 2003. Functional imaging of ‘theory of mind’. Trends in Cognitive Science 7: 77–83.

Giora, Rachel. 2007. Is metaphor special? Brain and Language 100: 111–114.

Giora, Rachel, E. Zaidel, N.G. Soroker and Asa Kasher. 2000. Differential effect of right- and left-hemisphere damage on understanding sarcasm and metaphor. Metaphor and Symbol 15: 63–83.

Gopnik, Alison and Henry M. Wellman. 1994. The theory theory. In L. Hirschfield and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture, 257–293. New York: Cambridge University Press.

Glucksberg, Sam. 2003. The psycholinguistics of metaphor. Trends in Cognitive Science 7: 92–96.

Glucksberg, Sam and Boaz Keysar. 1990. Understanding metaphorical comparisons: beyond similarity. Psychological Review 97: 3–18.

Glucksberg, Sam and Matthew S. McGlone. 1999. When love is not a journey: what metaphors mean. Journal of Pragmatics 31: 1541–1558.

Grice, H. Paul. 1975. Logic and conversation. In P. Cole and Jerry L. Morgan (eds.), Syntax and Semantics, Vol. 3, Speech Acts, 41–58. New York: Academic Press.

Happé, Francesca G.E. 1995. The role of age and verbal ability in the theory of mind task performance of subjects with autism. Child Development 66: 843–855.

Harris, Zellig S. 1951. Methods in Structural Linguistics. Chicago: University of Chicago Press.

Hockett, Charles. 1958. A Course in Modern Linguistics. New York: Macmillan.

Jung-Beeman, Mark. 2005. Bilateral brain processes for comprehending natural language. Trends in Cognitive Science 9: 512–518.

Kertész, András. 2004. Cognitive Semantics and Scientific Knowledge. Amsterdam: John Benjamins.

Lakoff, George and Mark Johnson. 1980/2003. Metaphors We Live By. Chicago: University of Chicago Press.

Lyons, John. 1977. Semantics, Vols. I–II. Cambridge: Cambridge University Press.

Malle, Bertram F. and Sara D. Hodges (eds.). 2005. Other Minds: How Humans Bridge the Divide Between Self and Others. New York: Guilford Press.

Milligan, Karen, Janet Wilde Astington and Lisa Ain Dack. 2007. Language and theory of mind: meta-analysis of the relation between language ability and false-belief understanding. Child Development 78: 622–646.

Penn, Derek C. and Daniel J. Povinelli. 2007. On the lack of evidence that non-human animals possess anything remotely resembling a “theory of mind”. Phil. Trans. R. Soc. B. 362: 731–744.

Piattelli-Palmarini, Massimo. 2008. New tools in the service of old ideas. Biolinguistics 2: 237–246.

Ponterotto, Diane. 1994. Metaphors we can learn by: how insights from cognitive linguistic research can improve the teaching/learning of figurative language. English Teaching Forum 32(3), http://eca.state.gov/forum/vols/vol32/no3/index.htm.

Roeper, Tom. 2011. The acquisition of recursion: how formalism articulates the child’s path. Biolinguistics 5: 57–86.

Rorty, Richard (ed.). 1967. The Linguistic Turn: Recent Essays in Philosophical Method. Chicago: University of Chicago Press.

Saussure, Ferdinand de. 1916/1995. Cours de linguistique générale. Paris: Payot & Rivages.

Scalise, Sergio and Irene Vogel (eds.). 2010. Cross-disciplinary Issues in Compounding. Amsterdam: John Benjamins.

Searle, John R. 1979. Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press.

Setola, Patrizia and Ronan G. Reilly. 2005. Words in the brain’s language: an experimental investigation. Brain and Language 94: 251–259.

Sodian, Beate and Susanne Kristen. 2010. Theory of mind. In B. Glatzeder, V. Goel and A. von Müller (eds.), Toward a Theory of Thinking, 189–201. Berlin: Springer-Verlag.

Stiller, James and Robin I.M. Dunbar. 2007. Perspective-taking and memory capacity predict social network size. Social Networks 29: 93–104.

Tarasova, Elizaveta. 2012. Review of Scalise and Vogel (2010). Word Structure 5: 254–266.

Ullmann, Stephen. 1959. Principles of Semantics. Oxford: Blackwell.

Wellman, Henry M., David Cross and Julianne Watson. 2001. Meta-analysis of theory-of-mind development: the truth about false belief. Child Development 72: 655–684.

Wells, Rulon S. 1947. Immediate constituents. Language 23: 81–117.

Wimmer, Heinz and Joseph Perner. 1983. Beliefs about beliefs: representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition 13: 103–128.

1

See, e.g., Rorty (1967).

2

“However uncertain the theoretical foundations of this new discipline still are, its year of birth is quite firm: 1956. It was then that Simon, Chomsky, Newell, and others met at MIT at a “Symposium on Information Theory” and set the ball rolling for a new kind of research into cognition. The novelty consisted in the attempt to convert very old philosophical questions about the nature of the mind into questions about how it functions, and to answer them in an interdisciplinary and empirical way. Just as undisputed as its year of birth is the fact that cognitive science comprises the following subdisciplines: linguistics, computer science, psychology, neurobiology, and the philosophy of mind”.

My thanks to András Kertész, who has located this quote; see also Kertész (2004).

3

See, e.g., Evans and Green (2006).

4

Not everyone accepts the general formal principles of linguistic recursion, cf. Everett (2005) or Evans and Levinson (2009) and the debates that they induced in Vol. 85 of Language and in Behavioral and Brain Sciences.

5

Note that the dictionaries always add the particle up, defining the meaning as “eat up greedily”, probably only to show the novice reader that it must be used transitively.

6

Except when other factors intervene, such as in questions, whose discussion would lead us too far afield.

7

Again, this is a fairly simplified picture. In current grammatical theory it works the other way round: the operation Merge unites two items, of which one is a “word” and the other a phrase, whose head must match the locality requirements of the word.

8

See, e.g., Roeper (2011).

9

More examples: shtick lit, tweet seats, diarrheaist, success disaster, hackerazzi, hashtag activism, pink slime, expenditure cascade. Source: www.wordspy.com, accessed 10-10-2012.

10

Even though some have challenged this view, see, e.g., Tarasova (2012).

11

For more on the relationship of words and meanings, from a different viewpoint, see McConnell-Ginet (2008).

12

As is well-known, a word can on occasion have even the opposite of its customary sense as in irony. For more on this, see below.

13

Excerpt from “My Fourth of July” by Charles Simic, New York Review of Books, http://www.nybooks.com/blogs/nyrblog/2012/jul/03/my-fourth-july/, accessed 15-03-2013.

14

From Merriam-Webster online: http://www.merriam-webster.com/.
