
Wednesday, December 15, 2010

HOW DO WE HEAR?


Since sound waves bend around obstacles rather than travelling only in straight lines, there is no need for a sound receptor to point in a unique direction in space in the way that photoreceptors do. This considerably simplifies the design of an ear. The outer part of the ear – the pinna – routes the sound waves in towards the passage leading to the eardrum. [pinna the structure made of skin and cartilage on the outer part of the ear] The incoming sound waves then set up mechanical vibrations of the eardrum, [eardrum a membrane between the outer and middle ear that vibrates when sound waves reach it] which is connected via a system of tiny bones to the oval window of an organ called the cochlea. [cochlea coiled structure in the inner ear responsible for transforming mechanical vibration (sound energy) into action potentials in the acoustic nerve] These bones function like a gear-box, transforming the amplitude [amplitude the difference between the peaks and troughs of a waveform] of the vibration to one which is usable by the cochlea.

The cochlea (so called because its structure resembles a snail’s shell) contains a membrane stretched along its length. This is the basilar membrane, and all parts of it are attached to very delicate elongated cells, called hair cells. [hair cells long, thin cells in the cochlea and the vestibular system, which, when bent, produce an action potential] When a given part of the basilar membrane vibrates, a deformation occurs in the group of hair cells attached there. This deformation is the stimulus for the production of action potentials in the neuron attached to the hair cell. The neurons are bundled together and become part of the acoustic nerve, [acoustic nerve conveys information from the cochlea to the auditory cortex] which transmits information to the auditory cortex. [auditory cortex a region of the cortex devoted to processing information from the ears]


Volume, pitch and timbre
Vibrations reaching the ear can differ in amplitude and frequency. Different frequencies cause the basilar membrane to vibrate in different places, stimulating different sub-populations of hair cells.
For low frequencies, the site of maximal vibration lies further from the oval window of the cochlea than for high frequencies.

[Georg von Bekesy (1899–1972) was a Hungarian physiologist working on hearing at Harvard University whose most famous discovery was that different parts of the basilar membrane in the cochlea are stimulated by different frequencies of sound. He won the Nobel Prize for medicine in 1961 for this type of work.]
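This place-frequency relationship can be sketched quantitatively. The function below is a minimal illustration using Greenwood's empirical fit for the human cochlea; the constants (165.4, 2.1, 0.88) come from that published fit, not from the text above, and are offered only as an assumption to make the tonotopic map concrete.

```python
# A sketch of the tonotopic (place-frequency) map using the Greenwood
# function, an empirical fit for the human cochlea:
#     F(x) = A * (10**(a*x) - k)
# with A = 165.4, a = 2.1, k = 0.88 (assumed constants from Greenwood's
# fit) and x the fractional distance along the basilar membrane,
# measured from the apex (x = 0) to the base near the oval window (x = 1).

def greenwood_hz(x):
    """Characteristic frequency (Hz) at fractional position x
    along the basilar membrane, measured from the apex."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# As the text describes: low frequencies peak near the apex
# (far from the oval window), high frequencies near the base.
apex_hz = greenwood_hz(0.0)   # roughly 20 Hz
base_hz = greenwood_hz(1.0)   # roughly 20,000 Hz
```

Note how the exponential form compresses the audible range (about 20 Hz to 20 kHz) onto a membrane only a few centimetres long, with each position responding maximally to one characteristic frequency.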


This is how the cochlea achieves frequency selectivity. [frequency selectivity the degree to which a system (e.g. a neuron) responds more to one frequency than another] Differences in the physical variable we refer to as sound frequency give rise to differences in the psychological attribute we refer to as pitch. [pitch auditory sensation associated with changes in frequency of the sound wave] The physical amplitude of the incoming wave translates into the sensation of loudness. This is encoded by a combination of (a) an increased firing rate in auditory neurons and (b) a greater number of neurons firing. Finally, acoustic signals can vary in their complexity. The same note played on a piano and on a violin sounds different, even though the fundamental frequencies are the same. This difference in sound quality gives rise to the sensation of timbre. [timbre the complexity of a sound wave, especially one emitted by a musical instrument, allowing us to distinguish the same note played on, say, a piano and a guitar] Of course, our auditory system did not evolve to hear musical instruments. More likely, it is there to make sense of naturally occurring complex signals. These different patterns are recognized as different sounds – in the case of speech, as different phonemes. A phoneme is a speech sound, such as the ‘sh’ in ‘rush’. [phonemes basic building blocks of speech: English contains around 40 different phonemes] The English language contains around 40 phonemes (Moore, 2003, p. 301). We can look more closely at the sounds in a word by plotting the relationship between the amplitude and frequency of the sound over time in a spectrogram – so called because it displays the spectrum of sound. [spectrogram a way of plotting the amplitude and frequency of a speech sound-wave as we speak individual phonemes]
