2.2 Generativity and duality of patterning
Let us now reconsider the sentence you heard in the imaginary scenario at the beginning of this course. Here it is again.
(1) My dad's tutor's no joker, and he told me the TMA's going to hit home with a bang.
Before reading this section, try writing down what stages you think the brain might go through in turning the sound of sentence (1) above into its meaning. Think about what it is that makes the tasks difficult.
The vervet monkey call system, as we saw, involves a mapping between sounds and meanings, just as human language does. However, it differs from human language in two crucial respects. The first of these is called generativity. The three major vervet calls – snake, leopard and eagle – are meaningful units in their own right: you don't need to say anything else; you just call. The calls cannot be combined into higher-order complexes of meaning. A snake call followed by a leopard call could, as far as we understand it, express only the presence of a snake and a leopard. It could not express the proposition that a snake was at that moment being hunted by a leopard, or vice versa, or the idea that leopards are really much more of a nuisance than snakes. This means that the number of meanings expressible in the vervet system is closed, or finite. There are only as many meanings as there are calls. Human languages, by contrast, allow the recombination of their words into infinitely many arrangements, which have systematically different meanings by virtue of the way they are arranged.
In the vervet system, the monkeys can simply store in their memory the meaning associated with each call. Human language could not work this way. Consider example sentence (1) above. You have almost certainly never heard or read this exact sentence before. In fact, it is highly unlikely that anyone, in the entire history of humanity, has ever uttered this exact sentence before today. Yet we all understand what it means. We must therefore all possess some machinery for making up new meanings out of smaller parts in real time. This is what is known as the generative capacity of language: the ability to make new meanings by recombining units. The vervet system is not generative, whereas human language is.
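The contrast between a closed call system and a generative one can be made concrete in a short sketch. Everything below – the call glosses, the toy vocabulary, the single combination rule – is illustrative, not real linguistic data.

```python
import itertools

# A closed system: each signal maps to one stored meaning, and signals do not
# combine, so the number of expressible meanings equals the number of calls.
vervet_calls = {"snake": "snake nearby",
                "leopard": "leopard nearby",
                "eagle": "eagle nearby"}

# A generative system: six words and one combination rule (subject-verb-object)
# already yield 27 distinct sentences, none of which needs to be stored.
nouns = ["the cat", "the dog", "the snake"]
verbs = ["bit", "chased", "saw"]
sentences = [f"{s} {v} {o}"
             for s, v, o in itertools.product(nouns, verbs, nouns)]

print(len(vervet_calls))   # 3 meanings, and that is the whole system
print(len(sentences))      # 27 sentences from just 6 words
```

Allowing clauses to embed inside other clauses ('the cat that saw the dog bit the snake') would make the set of sentences unbounded, which is the sense in which human language permits infinitely many arrangements.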
Vervet calls are indivisible wholes; they cannot be analysed as being made up of smaller units. Words, by contrast, can be broken down into smaller sound units. Thus language exhibits what is known as duality of patterning (Figure 2). At the lowest level, there is a finite number of significant sounds, or phonemes. The exact number varies from language to language, but is generally in the range of a few dozen. The phonemes can be combined into words fairly freely; however, there are restrictions, known as phonological rules, about how phonemes can go together. Words in their turn combine into sentences. However, as we have just said, not all combinations of words are grammatical. Which combinations are allowed depends on syntactic rules.
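The lower level of patterning can be sketched as a membership test: a small inventory of phonemes plus a restriction on how they may combine. The inventory and the single rule below are drastically simplified assumptions for illustration, not a description of any real language's phonology.

```python
VOWELS = set("aeiou")
PHONEMES = VOWELS | set("bdkt")   # a toy inventory; real ones hold a few dozen

def obeys_phonology(word: str) -> bool:
    """Toy phonological rule: use only inventory phonemes, and never place
    two consonants next to each other. Real rules are far richer."""
    if any(p not in PHONEMES for p in word):
        return False
    return not any(a not in VOWELS and b not in VOWELS
                   for a, b in zip(word, word[1:]))

print(obeys_phonology("bed"))    # True: consonants and vowels alternate
print(obeys_phonology("bked"))   # False: the cluster bk is ruled out
print(obeys_phonology("xed"))    # False: x is not in this toy inventory
```

The point of the sketch is the architecture, not the particular rule: a finite inventory plus combination restrictions at one level, feeding a second level (words into sentences) governed by restrictions of its own.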
There are some differences between the higher and lower levels of patterning in language. Phonemes, the basic unit of the lower level, have no meaning at all, whereas words, the basic unit of the higher level, typically carry meaning. The meaning of the word bed has nothing at all to do with the fact that the phonemes making it up are /b/, /e/ and /d/. (See Box 1 for an explanation of the / / notation.) If you change one phoneme, for example the /d/ to a /t/, then you have a word that is not just different but completely unrelated in meaning – bet. You could imagine a hypothetical linguistic system in which particular phonemes had special relationships to meanings; for example, in which words for furniture all began with /b/, or words for body parts all contained an /i/. No human language is like that, however. You cannot predict the meaning of a word, even in the vaguest terms, from the phonemes that make it up.
The higher level of patterning is quite different. The meaning of a sentence is largely a product of the meanings of the individual words that it contains. Syntactic rules serve to identify which word in the sentence plays which role, and also to ‘glue together’ the relationships between the words. Consider these examples.
(5) The cat bit the dog.
(6) The cat which was bitten by the dog was thirsty.
In (5), we know that the cat was the biter and the dog the bitten because of a pattern in English syntax which says that the first noun is generally the subject of the sentence. In (6), there are two possible participants to which the state ‘thirsty’ could be attached – the dog could be thirsty or the cat could be thirsty. The syntax tells us that it must be the cat. Without syntax, no-one would be able to tell who was biting, who was bitten, and who was thirsty in (5) and (6), however much they knew about the behaviour of cats and dogs.
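The word-order rule at work in (5) can be written down directly. The function below is a deliberately naive sketch that handles only the fixed pattern 'The N1 V the N2'; the role labels are illustrative.

```python
def roles(sentence: str) -> dict:
    """Assign roles by position, following the toy rule that the first
    noun in a simple transitive English clause is the subject."""
    words = sentence.lower().rstrip(".").split()
    # Assumes exactly the five-word pattern: the N1 V the N2
    return {"biter": words[1], "bitten": words[4]}

print(roles("The cat bit the dog."))   # {'biter': 'cat', 'bitten': 'dog'}
```

Sentence (6) already breaks this positional rule – there the first noun, the cat, is the one bitten – which is one way of seeing why real syntactic knowledge must go well beyond linear order.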
You might say that there is nothing that remarkable about understanding a sentence of spoken English. You listen out for the phonemes; as they come in, you store them in short-term memory until you have enough to make a word. Then the word is passed on to the meaning centres of the brain, where its meaning is activated, and the phonemes making up the next word start coming through. You continue this process until the whole sentence is in. You use your knowledge of syntax to clear up any uncertainties about who did what to whom, and there you are: the meaning. Simple really.
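The step-by-step account just given can even be turned into a literal program. In the sketch below, ordinary spelling stands in for phoneme strings and capitalised labels stand in for activated meanings; the lexicon is a toy assumed for illustration.

```python
# A toy lexicon: spellings stand in for phoneme strings, capitalised
# labels stand in for meanings activated in memory.
LEXICON = {"the": "DEF", "cat": "CAT", "bit": "BITE", "dog": "DOG"}

def naive_comprehend(stream: str) -> list:
    """Buffer incoming 'phonemes' one at a time; whenever the buffer
    matches a lexical entry, activate its meaning and start a new word."""
    buffer, meanings = "", []
    for phoneme in stream:
        buffer += phoneme
        if buffer in LEXICON:
            meanings.append(LEXICON[buffer])
            buffer = ""
    return meanings

print(naive_comprehend("thecatbitthedog"))
# ['DEF', 'CAT', 'BITE', 'DEF', 'DOG']
```

Notice that the sketch works only because no entry in this lexicon is a prefix of another: add an entry like 'do' and the greedy match misparses 'dog'. Since real speech arrives without marked word boundaries, that fragility is a first hint of trouble for the simple account.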
This account underestimates the crucial complexity of linguistic processing in several ways. We will explore this complexity by considering in detail how a sentence (sentence (1) from the opening of this course) could be understood. We will see that there are three areas of really difficult problems that the brain's linguistic system solves effortlessly. These are the phonological problem, the semantic problem, and the syntactic problem.