From sound to meaning: hearing, speech and language


4 Conclusions

Activity 3

Read back over Section 3. Make two columns on a piece of paper, one headed ‘finding’ and one headed ‘evidence’. List the key findings we have established about the processing of language in the brain. For example, your first finding might be ‘Language localised to the left’, and the evidence might be ‘aphasia, brain scanning’, and so on. The findings might be of two sorts: findings about locations (i.e. where things happen) and findings about processes (i.e. the sequence in which they happen).

The first half of this course showed just how complex human language is; in the second half, we have established some preliminary findings about how the brain masters it. We clearly have a great deal of finely structured, specialised neural machinery for this complex task. We have hardly touched on where this machinery comes from (though see Box 2). Scientists have only just scratched the surface of language. It has been hard to identify the relevant processes in the brain because the precise identification of neural activity in space and time has only recently become possible. Moreover, even arriving at a correct characterisation of the behaviour whose neural bases we are seeking took a long time.

Box 2: The language instinct?

We have seen in this course that there is a great deal of apparently specialised neural machinery for language, from language-specific areas in the auditory cortex to Broca's area and its role in syntax. Where does this specialised machinery come from? Is it the effect of frequent practice from an early age that makes these structures become dedicated to language?

An influential school of thought in linguistics argues that the capacity for language is innate. Of course, this does not mean that English, German or Swahili is encoded genetically. Rather, the acquisition of a language is a genetically initiated and genetically guided process. It certainly seems like there is something in this view. The acquisition of language unfolds in a similar pattern across different cultures. It does not seem to matter whether parents attempt to explicitly teach their children or not; as long as there is some linguistic input around, children will acquire a language. At peak rate, children acquire something like one new word per hour. The linguist Noam Chomsky has argued for decades that this process is much more like growth in stature than learning to play the piano; as long as there is adequate nutrition, it will just happen because it is programmed to do so.

The main argument for Chomsky's position has always been logical rather than empirical. The child has to infer a grammar, quickly, on the basis of the sentences flying around above his head. These sentences contain positive examples of what a grammatical sentence is, but no negative evidence. Parents do not say *the cat sitting is mat the on followed by a small electric shock to condition children away from ungrammatical sentences. Rather, the child has to figure out the grammar with no examples of the sort of things it does not allow. Chomsky has argued that it would be impossible to do this without fairly strong innate principles to guide the process (for example, the innate knowledge that the language will contain nouns and verbs, subjects and objects, and that there are constraints on what sequences of sounds are possible, etc.). The data alone are not enough to allow the acquisition of language; you need some guiding principles already laid down.

The exact content of this ‘start-up pack’ is very hard to determine. So too is the adequacy of Chomsky's argument, which is called the argument from the poverty of the stimulus. We simply do not know nearly enough about the overall architecture of cognition, or the mental representation of language, to know how much needs to be specified beforehand and how much will just emerge in interaction with the environment and other cognitive systems. However, it does seem fairly clear that language is a species-specific behaviour, and thus part of our biology and evolution rather than our cultural history.

Interesting light has been shed recently on the innateness debate by the study of a genetic disorder called specific language impairment (SLI). Affected individuals have rather specific problems with grammar, like getting the inflection right in The man wants to go versus *The man want to go. In some families, the disorder is controlled by a single gene, called FOXP2 (Lai et al., 2001), and all the affected individuals in these families carry a particular allele that is not present in other families. Intriguingly, FOXP2 is present in other primates but the gene there differs from that in humans. Is FOXP2 one of the genes encoding Chomsky's innate linguistic principles?

The story seems unlikely to be a simple one. It has recently been shown that people affected by SLI have a wide range of other difficulties, from lowered IQ scores to problems with complex facial movements. If the gene involved is a gene for grammar, it is also a gene involved in other things too.

There are still many unanswered questions. For example, the binding problem is the question of how the meaning green gets stuck to the meaning dog, and not something else, in (14c). Our best guess about how this happens neurally is as follows. There is a localised circuit which encodes the meaning of green, and this starts firing in response to a sentence with the word green in it. There is another circuit for dog, which fires when the word dog is present. If the syntax says that these two meanings are bound together in the particular sentence, then their firing becomes synchronous. If they are in the same sentence but not bound together, they both fire, but asynchronously.
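This synchrony account can be made concrete with a toy sketch. The code below is purely illustrative (the phase-offset representation, function names, and numbers are our own assumptions, not a model from the course): each concept circuit fires a periodic spike train, and two circuits count as bound when their spikes coincide in time.

```python
# Toy illustration of binding-by-synchrony (illustrative only, not a neural model).
# Each concept "circuit" fires a periodic spike train at some phase offset;
# two circuits are treated as bound when their spike trains coincide.

def spike_times(phase, period=10, n_spikes=5):
    """Times (in ms) at which a circuit with the given phase offset fires."""
    return [phase + i * period for i in range(n_spikes)]

def bound(phase_a, phase_b):
    """Two circuits are 'bound' if their spike trains are synchronous."""
    return spike_times(phase_a) == spike_times(phase_b)

# For 'the green dog', syntax assigns green and dog the same phase, so they
# fire together; cat fires at a different phase and stays unbound.
phases = {"green": 0, "dog": 0, "cat": 3}

print(bound(phases["green"], phases["dog"]))  # True: green binds to dog
print(bound(phases["green"], phases["cat"]))  # False: green and cat fire apart
```

On this picture, binding is simply phase sharing: all and only the circuits whose meanings are bound together fire in step.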

Now this account may very well be correct, but it poses an obvious problem. How then do we process the meaning of (21)?

  • (21) The green cat is beside the blue cat, not the green dog.

The green circuit would have to be synchronous with both cat and dog. The cat circuit would have to be synchronous with both green and blue. Yet the meaning of the sentence is not a cat that is both green and blue. The bindings are kept separate. We have no real idea how this is done. This is the wonderful thing about human language. Like the pattern of an ice crystal, the closer you probe it, the more complexities of structure there are. The adventure is only just beginning …
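The difficulty can be spelled out with a small exhaustive check. The sketch below is our own illustration (the phase-assignment encoding is an assumption, not a model from the course): if each circuit carries exactly one phase, and ‘bound’ just means ‘same phase’, then no assignment of phases can satisfy all the bindings that sentence (21) requires.

```python
# Illustrative check: one phase per circuit cannot encode sentence (21),
# 'The green cat is beside the blue cat, not the green dog.'
# Required bindings: green-cat, blue-cat, green-dog.
# Forbidden binding: green-blue (nothing in the sentence is both green and blue).
from itertools import product

words = ["green", "blue", "cat", "dog"]
required = [("green", "cat"), ("blue", "cat"), ("green", "dog")]
forbidden = [("green", "blue")]

# Try every assignment of phases to circuits. Two phases suffice to show the
# contradiction: the required bindings force green = cat = blue, yet the
# forbidden binding demands green != blue.
consistent = []
for phase_choice in product(range(2), repeat=len(words)):
    assignment = dict(zip(words, phase_choice))
    ok_required = all(assignment[a] == assignment[b] for a, b in required)
    ok_forbidden = all(assignment[a] != assignment[b] for a, b in forbidden)
    if ok_required and ok_forbidden:
        consistent.append(assignment)

print(consistent)  # []: no single-phase-per-circuit assignment works
```

The empty result mirrors the point in the text: a scheme in which each circuit fires at a single phase cannot keep the bindings of (21) separate, so something richer than simple synchrony must be going on.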