From sound to meaning: hearing, speech and language

3.6 Summary of Section 3

Sound waves received by the ear are turned into neural activity by a complex mechanism involving the eardrum, the bones in the middle ear, and the hair cells within the cochlea. The auditory nerve carries the signal from the ears to the brainstem, from where it passes via the thalamus to the auditory areas of the cerebral cortex. In the cortex, speech sounds are extracted from the incoming signal. There are neural circuits in the auditory cortex that are specialised for speech and language as opposed to other types of sound.

Production and perception of speech take place predominantly in one hemisphere of the brain, usually the left, and several areas within that hemisphere are involved. Broca's area, in the frontal lobe, seems to be crucial for syntactic operations in both the production and perception of speech. Wernicke's area, in the temporal lobe, seems to be crucial for accessing the meanings of individual words. The evidence for this functional distinction between the anterior and posterior language areas comes from the study of aphasia, that is, problems with language resulting from brain injury, as well as from brain scanning.

Evidence from electrophysiological recording suggests that decoding a sentence involves several stages. First, the syntactic structure of the sentence is analysed. Then the meanings of the individual words are accessed. Finally, the word meanings and the sentence structure are integrated to produce a coherent overall meaning. This all happens within one second of the final word being uttered.