
Wednesday, December 15, 2010


The somatosenses, which detect things like pressure, vibration and temperature, are normally grouped into the skin senses, the internal senses and the vestibular senses. It is important for us to know which way up we are, and how we are moving (especially how we are accelerating). This is achieved by a part of the inner ear called the semicircular canals, which contain small lumps immersed in a viscous fluid. When we move, these lumps move within the fluid. The lumps are in contact with hair cells (like those in the cochlea), and the motion of the lumps in the fluid bends the hair cells and results in neural messages, which are relayed to the vestibular nuclei in the brainstem. This type of sense is referred to as the vestibular system, [vestibular system located in the inner ear, this responds to acceleration and allows us to maintain body posture] and without it we could not walk without staggering. You can see impairment of this system in someone who has consumed too much alcohol. Motion sickness and dizziness are associated with unusual output from the vestibular system. In the skin senses, the transducers are nerve endings located around the surface of the body. There are also inner senses that tell us, for example, about muscle tension and joint position, which have detectors distributed in all the muscles and joints of the body. These work together with our vestibular system to coordinate our movements and maintain balance and posture. Many people who have had limb amputations report that they still feel that they have the amputated limb, and often say that this ‘phantom limb’ is causing them great pain. Research by Ramachandran and Blakeslee (1999) on patients who have such feelings shows that the representation of touch by neurons in the brain can be changed, resulting in a remapping of the body’s tactile representation of itself. So, for example, touching the cheek of a patient elicited an apparent sensation in the phantom thumb.
The ‘sensory homunculus’ (see chapter 3) shows the sensory representations of different parts of the body in the cortex. The proximity of the representations of the different parts of the body in this mapping, and the remapping of neurons within it after a limb amputation, are the probable reason for these remarkable effects.


We know that information travels to our chemical senses (taste and smell) much more slowly and lingers after the stimulus has gone. Simple logic therefore suggests that the time-course of the transduction is less critical than for hearing and vision.

A matter of taste

Gustation (taste) is relatively simple in that it encodes only five dimensions of the stimulus: sourness, sweetness, saltiness, bitterness, and ‘umami’, the savoury taste associated with monosodium glutamate. The receptors for taste – our taste buds – are on the surface of the tongue. Different types of chemical molecules interact differently with the taste buds specialized for the five different taste sensations.
Salty sensations arise from molecules that ionize (separate into charged ions) when they dissolve in the saliva. Bitter and sweet sensations arise from large non-ionizing molecules. Sour tastes are produced by acids, which ionize to give a positively charged hydrogen ion. The umami taste is produced by specific salts such as monosodium glutamate.

The mystery of smell

Olfaction (smell), on the other hand, is shrouded in mystery. We do not understand much about how the receptors in the nose respond to the different trigger molecules carried in the air that we breathe. It seems a fair assumption that these airborne molecules interact with specific receptors to elicit certain sensations. Unlike the other senses, there is a vast array of receptor types – possibly up to 1000. Subjectively, smell is notorious for evoking memories, yet is very hard to describe verbally. The flavour of a food is conveyed by a combination of its smell and taste. Flavours are described by a complex set of references to substances that possess elements of these flavours. For example, wine experts talk about the ‘nose’ of a wine, meaning its smell; the early taste sensation is the ‘palate’, and the late taste sensation is the ‘finish’. The tasting terminology is esoteric and has a poetic quality, with words like ‘angular’, ‘lush’ and ‘rustic’.

Deducing sound direction

Although sound is less directional than light, information about direction can be deduced from it. The pinnae route sound towards the eardrums, but they also produce small echoes, which allow us to distinguish a source above us from one below us. Horizontal information is given by comparing the signals at the two ears. For low frequencies (up to about 1000 Hz), the time of arrival of each peak of the sound vibration carries this information. A sound wave coming from the right will reach our right ear about 1 ms before it reaches our left ear, and this 1 ms difference is detected by the auditory system. For higher frequencies, where there are too many wave crests per second for a 1 ms difference to be meaningful, it is the amplitude of the sound that matters. A source to our right will project a higher amplitude to the right ear than to the left ear, since the head attenuates sound (in other words, sound reduces in amplitude as it passes through the head by being partially absorbed). This, too, provides directional information.
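The arithmetic behind these two cues can be sketched in a few lines of Python. The figures used below (speed of sound ~340 m/s, an ear-to-ear path of ~0.25 m) are rough assumed values, not measurements from the text:

```python
# Rough sketch of the interaural timing cue. The speed of sound and the
# ear-to-ear path length below are approximate assumed values.
SPEED_OF_SOUND = 340.0   # metres per second
EAR_SEPARATION = 0.25    # metres, path between the two ears

def interaural_time_difference(separation=EAR_SEPARATION, speed=SPEED_OF_SOUND):
    """Maximum arrival-time difference, for a sound directly to one side."""
    return separation / speed  # seconds

def period(frequency_hz):
    """Duration of one cycle of a pure tone, in seconds."""
    return 1.0 / frequency_hz

itd = interaural_time_difference()
print(f"Maximum ITD: {itd * 1000:.2f} ms")  # on the order of 1 ms

# The timing cue is only unambiguous while one cycle outlasts the ITD;
# near 1000 Hz the period (1 ms) approaches the ITD, which is why higher
# frequencies must rely on amplitude differences instead.
print(f"Period at 1000 Hz: {period(1000) * 1000:.2f} ms")
```

The crossover near 1000 Hz falls out of the numbers: once the period of the tone is shorter than the interaural delay, matching up individual wave crests between the ears becomes ambiguous.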


Since sound is not constrained to travel in straight lines, it is not important to have a particular sound receptor pointing in a unique direction in space in the way that photoreceptors do. This considerably simplifies the design of an ear. The outer part of the ear – the pinna – routes the sound waves in towards the passage leading to the eardrum. [pinna the structure made of skin and cartilage on the outer part of the ear] The incoming sound waves then set up mechanical vibrations of the eardrum, [eardrum a membrane between the outer and middle ear that vibrates when sound waves reach it] which is connected via a system of tiny bones to the oval window of an organ called the cochlea. [cochlea coiled structure in the inner ear responsible for transforming mechanical vibration (sound energy) into action potentials in the acoustic nerve] These bones function like a gear-box, transforming the amplitude [amplitude the difference between the peaks and troughs of a waveform] of the vibration to one which is usable by the cochlea. The cochlea (so called because its structure resembles a snail’s shell) contains a membrane stretched along its length. This is the basilar membrane, and all parts of it are attached to very delicate elongated cells, called hair cells. [hair cells long, thin cells in the cochlea and the vestibular system, which, when bent, produce an action potential] When a given part of the basilar membrane vibrates, a deformation occurs in the group of hair cells that are attached there. This deformation is the stimulus for the production of action potentials in the neuron attached to the hair cell. The neurons are bundled together and become part of the acoustic nerve, [acoustic nerve conveys information from the cochlea to the auditory cortex] which transmits information to the auditory cortex. [auditory cortex a region of the cortex devoted to processing information from the ears]

Volume, pitch and timbre
Vibrations reaching the ear can differ in amplitude and frequency. Different frequencies cause the basilar membrane to vibrate in different places, stimulating different sub-populations of hair cells.
For low frequencies, the site of maximal vibration lies further from the oval window of the cochlea than for high frequencies.

[Georg von Bekesy (1899–1972) was a Hungarian physiologist working on hearing at Harvard University whose most famous discovery was that different parts of the basilar membrane in the cochlea are stimulated by different frequencies of sound. He won the Nobel Prize for medicine in 1961 for this type of work.]

This is how the cochlea achieves frequency selectivity. [frequency selectivity the degree to which a system (e.g. a neuron) responds more to one frequency than another] Differences in the physical variable we refer to as sound frequency give rise to differences in the psychological attribute we refer to as pitch. [pitch auditory sensation associated with changes in frequency of the sound wave] The physical amplitude of the incoming wave translates into the sensation of loudness. This is encoded by a combination of (a) an increased firing rate in auditory neurons and (b) a greater number of neurons firing. Finally, acoustic signals can vary in their complexity. The same note played on a piano and on a violin sounds different, even though the fundamental frequencies are the same. This difference in sound quality gives rise to the sensation of timbre. [timbre the complexity of a sound wave, especially one emitted by a musical instrument, allowing us to distinguish the same note played on, say, a piano and a guitar] Of course, our auditory system did not evolve to hear musical instruments. More likely, it is there to make sense of complex signals occurring naturally in the environment. These different patterns are recognized as different sounds – in the case of speech, as different phonemes. A phoneme is a speech sound, such as the ‘sh’ in ‘rush’. [phonemes basic building blocks of speech: English contains around 40 different phonemes] The English language contains around 40 phonemes (Moore, 2003, p. 301). We can look more closely at the sounds in a word by plotting the relationship between the amplitude and frequency of the sound over time in a spectrogram – so called because it displays the spectrum of sound. [spectrogram a way of plotting the amplitude and frequency of a speech sound-wave as we speak individual phonemes]
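A spectrogram of this kind can be computed with a short-time Fourier transform: break the signal into overlapping windowed frames and take the amplitude spectrum of each. The Python sketch below assumes only NumPy; the window and hop sizes are arbitrary illustrative choices:

```python
import numpy as np

def spectrogram(signal, sample_rate, window_size=256, hop=128):
    """Amplitude as a function of frequency (rows) and time (columns),
    via a short-time Fourier transform with a Hann window."""
    window = np.hanning(window_size)
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    freqs = np.fft.rfftfreq(window_size, d=1.0 / sample_rate)
    return freqs, np.array(frames).T

# One second of a 440 Hz tone: the energy should concentrate near 440 Hz.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
freqs, spec = spectrogram(tone, rate)
peak_freq = freqs[spec.mean(axis=1).argmax()]
print(f"Dominant frequency: {peak_freq:.1f} Hz")
```

Plotting `spec` with time on the x-axis and frequency on the y-axis gives the familiar spectrogram display; for a speech recording, the bands of energy trace out the individual phonemes.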

Seeing in colour

So far, we have looked at how the retina responds to rapid spatial changes in illumination. But it also selectively signals temporal changes, such as occur when there is a flash of lightning or (more usefully) when a tiger suddenly jumps out from behind a tree or moves against a background of long grass, so breaking its camouflage. There are various mechanisms involved in processes of adaptation to static scenes (see chapter 8). Perhaps the best-known form of adaptation occurs when we enter a dark room on a sunny day. At first we cannot see anything, but after a few minutes we begin to notice objects in the room, visible in the faint light there. This phenomenon occurs because our receptors become more sensitive when they are not stimulated for a while, and also because there is a change from cone vision to rod vision. You may have noticed that in a faint light all objects appear to have no colour, quite unlike our daylight vision. This is because there is only one type of rod but three different types of cone, and the cones have a dual function: they encode the amount of light present, but they also encode colour, since they are maximally sensitive to different wavelengths in the visible spectrum. It is important to realize that the outputs of the different cone types must be compared, since the output of a single cone cannot unambiguously encode wavelength. Suppose you have a cone maximally sensitive to light which has a wavelength of 565 nm. By using an electrode to measure the output of this cone (do not try this at home!), suppose you find that the cone is producing a ‘medium’ level of output. Can you deduce that the cone is being stimulated by a ‘medium’ amount of light whose wavelength is 565 nm? No – because precisely the same response would arise from stimulation by a larger quantity of light of a slightly different wavelength – say 600 nm. This is because cones do not respond in an ‘all or none’ manner to light of a given wavelength. Instead, they show a graded response profile.
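This ambiguity (sometimes called the principle of univariance) can be illustrated numerically. The Gaussian sensitivity curve below is a toy assumption for illustration, not a measured cone spectrum:

```python
import math

# Toy sensitivity curve for a cone peaking at 565 nm; the peak position,
# shape and bandwidth are illustrative assumptions only.
def cone_response(wavelength_nm, intensity, peak=565.0, bandwidth=60.0):
    sensitivity = math.exp(-((wavelength_nm - peak) ** 2) / (2 * bandwidth ** 2))
    return intensity * sensitivity

# A 'medium' amount of 565 nm light gives some response...
r1 = cone_response(565, intensity=1.0)

# ...and a larger amount of 600 nm light gives exactly the same response,
# so the cone's output alone cannot distinguish the two stimuli.
stronger = 1.0 / cone_response(600, intensity=1.0)
r2 = cone_response(600, intensity=stronger)
print(r1, r2)  # identical responses to two different lights
```

Because the cone's output is a single number (intensity multiplied by sensitivity), any dimmer light at the peak wavelength can be mimicked by a brighter light at a flanking wavelength, which is why at least two cone types must be compared to recover wavelength.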
We have three different types of cone in the retina. They are sometimes called ‘red’, ‘green’ and ‘blue’ cones. More strictly, we refer to them as ‘L’, ‘M’ and ‘S’ cones, reflecting the fact that the L cones respond most to long-wavelength light, M cones to medium-wavelength light and, of course, S cones to short-wavelength light. So the output of a single cone is fundamentally ambiguous, and for a meaningful colour sensation to arise, we must know how much one cone type responds compared to another cone type. This is achieved through chromatic opponency, [chromatic opponency a system of encoding colour information originating in retinal ganglion cells into red–green, yellow–blue and luminance signals; so, for example, a red–green neuron will increase its firing rate if stimulated by a red light, and decrease it if stimulated by a green light] a process that also explains why we can see four ‘pure’ colours – red, yellow, green and blue – even though there are only three kinds of cone. The ‘yellow’ sensation arises when L and M cones receive equal stimulation. Their combined output is then compared to that of the S cones. If L+M is much greater than S, we see yellow, and if less, we see blue. If L+M is about the same as S, we see white. Cone responses across a whole scene can be measured using a special kind of camera, first constructed by Parraga, Troscianko and Tolhurst (2002), which produces the different cone responses for each point in the scene (or pixel in the image). Parraga et al. found that the red–green system is suited to encoding not just the colour properties of images of red fruit on green leaves, but also the spatial properties of such images for a foraging primate. We know that the receptive fields for colour are different from the receptive fields for luminance. Specifically, they lack the ‘centre–surround’ structure, so the centre is effectively as big as the whole receptive field. As a result, we are less sensitive to fine detail in colour than in luminance.
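A minimal sketch of this opponent encoding follows; the channel weights are illustrative assumptions, not measured physiology:

```python
def opponent_channels(L, M, S):
    """Toy opponent encoding: red-green compares L with M, yellow-blue
    compares L+M with S, and luminance sums L and M. The weights are
    illustrative assumptions, not measured physiology."""
    return {
        "red_green": L - M,
        "yellow_blue": (L + M) - 2 * S,
        "luminance": L + M,
    }

# Equal L and M with little S: red-green silent, yellow-blue positive ('yellow')
print(opponent_channels(1.0, 1.0, 0.1))

# Strong S relative to L+M: yellow-blue negative ('blue')
print(opponent_channels(0.2, 0.2, 1.0))
```

The point of the recoding is visible in the first example: equal L and M stimulation silences the red–green channel entirely, while the comparison of L+M against S decides between yellow and blue.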
Early photographs were only in black and white, but the technique of using watercolours to paint the various objects (e.g. blue sky) on top of the photograph became quite popular. The interesting point is that the paint only needed to be added in approximately the right areas – some creeping across object boundaries did not seem to matter. About fifty years later, the inventors of colour TV rediscovered this fact. The trick is to find a way of transmitting as little information as possible. So only a sharp luminance image is transmitted. The two chrominance (colour) images are transmitted in blurred form, which means that less information needs to be transmitted without a perceived loss of picture quality (Troscianko, 1987). The main consequence of this ‘labour-saving’ trick in the brain is that the optic nerve can contain relatively few neurons. The optic nerve conveys the action potentials generated by the retina to other parts of the brain, principally the primary visual cortex, [primary visual cortex a region at the back of the visual cortex to which the optic nerves project, and which carries out an initial analysis of the information conveyed by the optic nerves] also known as Area V1, where the information is then analysed and distributed further to other visual areas.

[John Lythgoe (1937–92), a biologist at Bristol University, studied the relationship between the sense organs and visual apparatus of an animal, and between its surroundings and the tasks it has to perform within these surroundings. Lythgoe’s main research was on fish living at different depths of water, since the depth of water affects the wavelength composition of daylight reaching that point. He found a marked relationship between where the fish lived and what their cones were like. His research founded a flourishing new research discipline called ‘ecology of vision’ with the publication of his book in 1979 (The Ecology of Vision).]

Vision as an active process

Of course, our eyes are able to move in their sockets, and this allows the visual system to choose new parts of the image to look at. These rapid eye movements, called saccades, [saccades rapid eye movements in which the fovea is directed at a new point in the visual world] occur several times per second. We are mostly unaware of them and, during a saccade, vision is largely ‘switched off’ so that we do not see the world moving rapidly in the opposite direction to the saccade.

Two studies illustrate how these movements are used. One is a classic study by the Russian psychologist Yarbus (1967), which shows how we move our eyes when looking at a visual object. The other is a study by Gilchrist, Brown and Findlay (1997), which investigated similar scan patterns in a 21-year-old university undergraduate who had no ability to move her eyes, due to a condition called extraocular muscular fibrosis; instead, she moved her whole head using the neck muscles. There are strong similarities between the two sets of scan patterns, indicating that, even when the eyes cannot move, the image scan sequence needs to be broadly similar. All of this raises the question of exactly why the eye needs to be moved to so many locations. Presumably it is to resolve fine detail by using the fovea with its small receptive fields. Moving the eyes in this manner is usually associated with a shift of attention: when we move our eyes to a given location, we are more likely to process the information from that location in greater detail than information from elsewhere. This is the concept of selective attention (see chapter 8). The process is intimately related to physical action, in this case movement of the eyes (or head), which implies that vision is not just a passive process but an active one. In fact, there appear to be two streams of visual information in the cortex – ventral and dorsal. The former processes information about the nature of objects; the latter allows you to interact with ‘stuff out there’ – i.e. to plan actions – without a detailed representation of objects (see Milner & Goodale, 1995).


We know that light travels in straight lines. It therefore makes sense for a biological transducer of light to preserve information about the direction from which a particular ray of light has come. In fact, this consideration alone accounts for a large swathe of visual evolution. As creatures have become larger and therefore begun to travel further, they have developed an ever greater need to know about things that are far away – so eyes have developed an increasing ability to preserve spatial information from incident light. Where is each ray coming from? To achieve this, there must be some means of ensuring that light from a given direction strikes a particular photoreceptor – the name given to the smallest unit that transduces light. If a given photoreceptor [photoreceptor a cell (rod or cone) in the retina that transforms light energy into action potentials] always receives light coming from a given direction, then the directional information inherent in light can be preserved.

Pinhole cameras and the need for a lens

The simplest way to illustrate light transduction is to make a pinhole camera – a box with a small hole in it. From the geometry of rays travelling in straight lines, it is possible to see that a given place on the rear surface of the pinhole camera will only receive light from one direction. Of course, this is only true until you move the camera. But even then, the relative positional information is usually preserved – if something is next to something else out there, its ray will be next to its neighbour’s ray on the back surface of the camera. One of the drawbacks of a pinhole camera is that the image (the collection of points of light on the back of the box) is very dim, and can only be seen in very bright, sunny conditions. If you make the pinhole bigger to let more light through, the image becomes fuzzy, or blurred, because more than one direction of incident light can land on a given point on the back surface.
With this fuzziness we begin to lose the ability to encode direction. The solution is to place a lens over the now-enlarged hole. The lens refracts (bends) the light so that the sharpness of the image is preserved even if the pinhole is large. Add film and you have a normal camera. The same construction in your head is called an eye. Nature evolved lenses millions of years ago; we then reinvented them in Renaissance Italy in about the 16th century. Possibly the earliest description of the human eye as containing a lens was given by the Arab scholar Abu-’Ali Al-Hasan Ibn Al-Haytham, often abbreviated to Al Hazen, in the eleventh century AD. Al-Haytham was born in Basra – now an Iraqi town, which has had a sadly turbulent history recently.
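The similar-triangles geometry of the pinhole camera described above can be sketched in a few lines; the distances used are purely illustrative:

```python
def pinhole_image_height(object_height, object_distance, box_depth):
    """Height of the (inverted) image on the back of a pinhole camera,
    by similar triangles; the minus sign marks the inversion."""
    return -object_height * box_depth / object_distance

# A 2 m tall object 10 m away, imaged in a box 0.1 m deep:
h = pinhole_image_height(2.0, 10.0, 0.1)
print(h)  # -0.02: a 2 cm image, upside down
```

Because every ray must pass through the single hole, each point on the back surface corresponds to exactly one direction in the world, and neighbouring objects stay neighbours in the (inverted) image.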
[Abu-’Ali Al-Hasan Ibn Al-Haytham (965–1040) often abbreviated to Al Hazen, was born in Basra, Iraq. He studied the properties of the eye and light at a time when European science was singularly lacking in progress. He is remembered for the discovery that the eye has a lens which forms an image of the visual world on the retina at the back of the eyeball.]

Looking at the eye in detail

In human vision, there are two types of photoreceptor, called rods and cones. Rods are cells that only work at low levels of illumination, at night; cones are cells that give us our vision in normal daylight levels of illumination. [rods cells in the retina that transform light energy into action potentials and are only active at low light levels (e.g. at night)] There is only one kind of rod, but three different kinds of cone, [cones cells in the retina that transform light energy into action potentials, different kinds responding preferentially to different wavelengths] each responding preferentially to a different range of wavelengths of light – the basis of colour vision. Light entering the eye passes through the cornea and lens, eventually falling on the retina. When a ray of light hits a photoreceptor (a rod or a cone), it sets up a photochemical reaction, which alters the electrical potential inside the photoreceptor. This, in turn, produces a change in the firing rate of the neuron connected to that photoreceptor. There are four types of neuron in the retina – horizontal, bipolar, amacrine and ganglion cells. Now we meet a problem: there are about 100 million photoreceptors but only about one million neurons in the optic nerve. Nobody really knows why, but the most persuasive argument is that if you made the optic nerve thick, then the eye could not move! How can all the important information be squeezed into these few neurons? The only way is to discard a lot of redundant information. Think about how you would give instructions for someone to find your home. It is usually a waste of time to describe exactly how long they need to walk in a straight line. Instead, you might say, ‘turn left, then right, then second left’. What you are doing is noting the points of change in the route information. The retina does pretty much the same thing. It signals the points of change in the image – i.e.
the places where intensity or colour alter – and ignores regions where no changes occur, such as a blank uniform surface. Figure 7.11 shows how each retinal ganglion cell has a receptive field – a particular part of the visual world. If you change the amount of light in this field, you will produce a change in the cell’s activity. A neuron only changes its firing rate when there is an abrupt change in the amount of light falling on the receptive field – for example, the boundary between a white object and a dark background. The retina contains many such receptive fields in any one location, so there is a large degree of overlap between them. They are smallest in the area called the fovea, [fovea the central five degrees or so of human vision, particularly the central, high-acuity part of this area (about one degree in diameter)] the high-acuity part of which occupies approximately the central one degree of the visual field. This is the part of the retina that receives light rays from the direction you are looking in. Since a receptive field cannot distinguish between different locations within it, the smaller the receptive field is, the finer the spatial detail that can be resolved. So the fovea is able to resolve the finest detail. To convince yourself of this, try looking at the opposite page out of the corner of your eye and then try to read it. If you cannot do so, it is because the receptive fields in the periphery of your retina are larger and incapable of resolving the small print.
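The 'signal only the points of change' idea can be mimicked in one dimension with a centre-surround filter: an excitatory centre flanked by an inhibitory surround. The weights below are illustrative, chosen only so that they sum to zero:

```python
import numpy as np

# Excitatory centre, inhibitory surround; the weights sum to zero, so the
# response to any uniform region is exactly zero.
kernel = np.array([-0.5, 1.0, -0.5])

image = np.array([1.0] * 5 + [4.0] * 5)   # a dark region meeting a bright one
response = np.convolve(image, kernel, mode="valid")
print(response)  # non-zero only around the boundary between the two regions
```

Running this gives zeros everywhere except the two positions straddling the edge: the blank uniform surfaces cost nothing to transmit, which is exactly the economy the retina needs to squeeze 100 million photoreceptors into a million optic-nerve fibres.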

[Thomas Young (1773–1829) was a physicist who postulated that there are only three different kinds of photoreceptors in the retina, even though we can distinguish thousands of different colours. The basis of this argument was that, to have thousands of different photoreceptors would compromise the acuity of the eye, since the acuity is determined by the distance to the nearest neighbour of the same type. Later, Hermann von Helmholtz added the physiological basis of this argument. Thomas Young also studied the mechanical properties of materials, defining a number later known as Young’s Modulus to describe how stretchable a material is. In Young’s days, there was no distinction between the subjects which we now call physics, psychology and physiology.]


The role of our sense organs is to ‘capture’ the various forms of energy that convey information about the external world, and to change it into a form that the brain can handle. This process is called transduction. [transduction the process of transforming one type of energy (e.g. sound waves, which are mechanical in nature) into another kind of energy – usually the electrical energy of neurons] As a transducer, a sense organ captures energy of a particular kind (e.g. light) and transforms it into energy of another kind – action potentials, the neural system’s code for information. Action potentials are electrical energy derived from the exchange of electrically charged ions, which inhabit both sides of the barrier between the neuron and its surroundings (see chapter 3). So our eyes transduce electromagnetic radiation (light) into action potentials, our ears transduce the mechanical energy of sound, and so on. Transduction is a general term, which does not apply only to sense organs. A microphone is a transducer, which (rather like the ear) transduces mechanical sound energy into electrical potentials – but in a wire, not in a neuron. There are many other examples of transduction in everyday equipment. As we gradually move away from physics and into psychology, we pass through an area of physiology – how biological transducers work.


There are senses which we use to explore the world immediately around us – just outside our bodies and also on or within our bodies. Somatosenses respond to pressure, temperature, vibration, information signalling dangers to our bodies (e.g. cuts and abrasions, corrosive chemicals, extreme heat, electrical discharges), and possible problems inside our bodies.
Exploring through touch
Our skin contains nerve endings which can detect sources of energy. Some parts of our bodies, such as our fingers, have a higher density of nerve endings than other parts, and so fingers and hands are used in active exploration of the world immediately around us. Mostly, this is to corroborate information that is also provided by other senses, such as vision; but of course we can still touch things without seeing them. I recently played a game with some friends in New York, where there is a park with small statues of weird objects. We closed our eyes, were led to a statue, and had to say what it was. Through active exploration lasting many minutes, we were able to give a pretty precise description of the object, but it was still a big surprise to actually see it when we opened our eyes. This experiment shows that the sense of touch can be used to give a pretty good image of what an object is, but the information takes time to build up. Also, for the process to work efficiently, we need a memory for things that we have experienced before – in this case, a tactile memory.

Sensing pain and discomfort

The same nerve endings that respond to mechanical pressure and allow this kind of tactile exploration also respond to temperature and to any substances or events that cause damage to the skin, such as cuts, abrasions, corrosive chemicals or electric shock. The sensation of pain associated with such events usually initiates a response of rapid withdrawal from the thing causing the pain. There are similar nerve endings inside our bodies, which enable us to sense various kinds of ‘warning signals’ from within. An example of this is that dreadful ‘morning-after’ syndrome, comprising headache, stomach ache and all the other cues that try to persuade us to change our lifestyle before we damage ourselves too much!


Light travels virtually infinitely fast; sound travels more slowly but is still unable to linger in one spot for any length of time. In our efforts to gather useful information about the world out there, we really could use a source of information that sticks around for much longer. This is where the chemical senses – smell and taste – prove useful. Biological systems have developed an ability to detect certain types of molecule that convey information about sources of food, other animals and possible hazards and poisons. To appreciate this information, just watch a dog sniffing for a buried bone – or tracking the path taken by another dog. Here we have a source of information that comes with a level of persistence, a spatial memory of previous events. In humans, the sense of smell seems to be less developed than in other animals. But we do have a well-developed sense of taste, which tells us about the chemical composition of the food we eat and warns us (together with smell) of toxins, for example in food that is putrid. Clearly this is a very different type of information from that provided by light and sound. It requires physical contact or at least close proximity, but the information persists for much longer.


Dolphins use a similar echolocation mechanism, both for finding their way and for communication. In general, communication is the other main use for sound, since it is generally easier for animals to produce sound than light. Speech production and recognition in humans is a spectacularly impressive process, which is crucial to our success as a species. The fact that sound can travel around corners is an added bonus – the person you are talking with does not need to be in a direct line of sight.

Sound sensation

Those cicadas are still chirping outside the window, and the computer is whirring. These sensations are clearly conveyed to me by sound. But what is sound?

The mechanical nature of sound

Like light, sound is a form of physical energy, but this type of energy is mechanical. Sources of sound cause the air molecules next to them to vibrate at certain frequencies; these vibrations are transmitted to neighbouring molecules and cause waves of vibration to spread outwards from the source, just as waves spread on a calm pond if you throw a pebble into it. In this way, sound can travel around corners, unlike light. So sound conveys a very different form of information from light. Since it is not constrained to travel in straight lines, it can tell us about things that are out of sight – but at a price. The price is that sound cannot tell us about spatial location with as much precision as light can; this is a consequence of its physical properties, and nothing to do with our ears.

As sound travels through the air, the air pressure at a given point will change according to the frequency of the sound. We are sensitive to a range of frequencies [frequency the rate at which a periodic signal repeats, often measured in cycles per second or Hertz (Hz); the higher the frequency, the higher the perceived pitch] from about 30 Hz (Hertz in full, which means cycles per second) to about 12 kHz (or kiloHertz, meaning thousands of cycles per second). Figure 7.5 shows the patterns of waves reaching us from objects vibrating at a given frequency.
Using sound to locate objects
As we have already seen, sound travels much more slowly than light, at a speed of about 340 metres per second. Even though this is still pretty fast, it is slow enough for our brains to process time-of-arrival information. It takes sound just under one millisecond to travel from one side of the head to the other. This information can be encoded by neurons (Moore, 2003; see also chapter 3), giving information about which direction the sound is coming from. Sound also gets reflected or absorbed by surfaces. Think about echoes in a cave. These are extreme examples of a process that happens less spectacularly, but more usefully, in everyday life. Subtle echoes give us clues about the location of large objects, even in the absence of vision. Blind people tend to develop this skill to a high level, tapping the ground with sticks and listening for the resulting echoes. Bats use echolocation to fly at night.
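The time-of-arrival figure above is easy to verify with a little arithmetic. The ear-to-ear distance used here (~0.18 m) is a rough illustrative value, not from the text:

```python
# Maximum interaural time difference: how much later a sound arrives
# at the far ear when the source is directly to one side.
# Head width and sound speed are rough, assumed values.
HEAD_WIDTH_M = 0.18     # approximate ear-to-ear distance
SPEED_OF_SOUND = 343.0  # metres per second

def max_interaural_delay_ms():
    """Longest possible arrival-time difference between the two ears, in ms."""
    return HEAD_WIDTH_M / SPEED_OF_SOUND * 1000.0

print(f"Maximum interaural delay: about {max_interaural_delay_ms():.2f} ms")
```

The result, around half a millisecond (a little more once the path around the head is taken into account), matches the ‘just under one millisecond’ figure in the text.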

The benefits of colour

When light hits a solid object, it can either be reflected or absorbed. An object that absorbs all the light hitting it will look black. One that reflects all light will look white. Intermediate levels of reflectance [reflectance the relative proportion of each wavelength reflected by a surface: the higher the reflectance, the lighter the object will look] (the term given to the ratio of reflected to incident light) will elicit shades between black and white. Objects also reflect different amounts of light at different wavelengths. So the ability to distinguish between the amounts of different wavelengths of light reaching us from a given surface can convey a lot of information about the composition of the surface, without us having to come close to it. This is the basis of colour vision. It is possible to tell whether a fruit is ripe or not, or whether meat is safe to eat or putrid, using the colour information from the surface of the fruit or meat. Equally, it is possible to break camouflage. The ripe fruit is virtually invisible in the monochrome version because its luminance [luminance the intensity of light corrected for the degree to which the visual system responds to different wavelengths] (the amount of light that comes to us from it) is not sufficiently different from the canopy of leaves that surrounds it, the canopy serving as camouflage because it contains large random fluctuations in luminance. As soon as colour is added, we can see the fruit clearly. It has been argued (Osorio & Vorobyev, 1996; Sumner & Mollon, 2000) that the need to find fruit is the main reason for primates’ trichromatic colour vision. ‘Trichromatic’ simply means that there are three types of cone cells in the retina. Curiously, though, most other mammals have only dichromatic colour vision, which means they have just two cone types – one responding to medium-to-long wavelengths and another responding to short wavelengths.
As a result, they cannot discriminate between objects that look green or red to us. So a red rag does not look particularly vivid to a bull! Interestingly, many other animals (e.g. most birds and many insects) have four receptor types, one of which responds to UV radiation.
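The camouflage argument above can be illustrated with a toy computation. The ‘spectra’ below are made-up numbers, not measured data: the two surfaces reflect the same total amount of light, so a monochrome sensor cannot tell them apart, but they distribute that light differently across wavelength bands, so three separate cone types can:

```python
# Toy illustration of why colour vision breaks luminance camouflage.
# Each surface is given a made-up reflectance in three coarse
# wavelength bands: (short, medium, long).
leaf  = (0.1, 0.5, 0.2)   # reflects mostly medium wavelengths ("green")
fruit = (0.1, 0.2, 0.5)   # same total reflectance, shifted to long ("red")

def luminance(surface):
    """A monochrome sensor effectively sums light across wavelengths."""
    return sum(surface)

def cone_responses(surface):
    """A trichromat keeps the bands separate: one idealised cone per band."""
    return surface

print(luminance(leaf) == luminance(fruit))            # True: fruit is camouflaged
print(cone_responses(leaf) == cone_responses(fruit))  # False: colour reveals it
```

The luminances are identical, so in monochrome the fruit vanishes into the canopy; the per-band responses differ, so a trichromat sees it immediately.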


Arguably our most important perceptual ability is vision. We know that vision depends on light: when there is no light, we cannot see. What are the important characteristics of light, and how do these affect the kind of information it conveys to us? Light is a form of electromagnetic radiation. ‘Visible’ light forms just a small part of the full spectrum of this radiation. The sun emits radiation over a much larger part of the spectrum than the chunk of it that we can see. Why might this be so? To answer this question, it may help to consider why we do not see the two parts of the spectrum that border on the visible part.

Ultra-violet radiation

There is plenty of ultra-violet (UV) radiation about, especially as you get nearer to the equator and at high altitude. You will have heard about your skin being at risk of sunburn when there is a lot of UV radiation around you. Sunburn is the first stage of the process of the skin dying as a result of damage. So we know that UV radiation is damaging to our skin, and presumably to other biological tissue too. This is the most likely explanation for our eyes having an in-built filter to remove UV radiation. To put it simply, if we were able to see UV rays, they would be likely to damage our eyes. Some animals, especially insects and birds, do possess UV vision. It is thought that they are less vulnerable to this hazardous radiation because their lifespans are much shorter than ours; our eyes must function throughout a long lifetime. Other forms of short-wavelength radiation, such as X-rays and gamma rays, are even more damaging to tissue, but these are filtered out by the earth’s atmosphere.

Infra-red radiation

Why are we unable to see infra-red (IR) radiation? Would it be helpful if we could? The answer to the second question is certainly ‘yes’. IR radiation is given off in proportion to an object’s temperature.
This is why it is used in night-vision devices, which can locate a warm object, such as a living body, even in the absence of light. This information could be extremely useful to us. So why do we not see it? Precisely because we are warm creatures ourselves. Imagine trying to see while holding a strong light just below your eyes. The glare from the light prevents you from seeing other objects. In the same way, we would suffer from glare if we could see IR radiation. It would be like having light-bulbs inside your own eyes. Again, some animals do see IR radiation, but these are cold-blooded creatures, such as pit vipers, which do not suffer from this glare problem. The IR information is very useful in helping them to locate warm objects, such as the small mammals they hunt for food.
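The link between temperature and IR emission can be made quantitative with Wien’s displacement law – an aside not in the text: the wavelength at which a warm body radiates most strongly is inversely proportional to its temperature.

```python
# Wien's displacement law: peak emission wavelength = b / T,
# where b is Wien's constant (about 2.898e-3 metre-kelvins).
WIEN_B = 2.898e-3  # metre-kelvins

def peak_wavelength_um(temp_kelvin):
    """Wavelength of strongest emission, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

print(f"Human body (~310 K): {peak_wavelength_um(310):.1f} um -> far infra-red")
print(f"The sun (~5800 K):   {peak_wavelength_um(5800):.2f} um -> visible light")
```

A body at skin temperature radiates most strongly around 9–10 micrometres, deep in the infra-red – exactly the band thermal night-vision imagers are built to detect.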

Humans build devices that transform IR into visible light – useful for armies (and the psychopath in the movie Silence of the Lambs) needing to ‘see’ warm objects at night, such as vehicles with hot engines and living humans. More humane uses of this technology include looking for living earthquake victims. A Land Rover is clearly visible from its engine’s heat. A normal photo of this scene would simply look black.

Visible light – speed and spatial precision

Light travels extremely quickly, at a rate of about 300,000 km per second. In effect, this means that light transmission is instantaneous. So we cannot determine where light is coming from by perceiving differences in arrival time. No biological system could respond quickly enough to signal such tiny time intervals. One of the fastest neural systems in humans is the auditory pathway, which can sense differences in the time of arrival of sound waves at each side of the head. Such differences are of the order of 1 ms, or one-thousandth of a second. As light travels so much faster, the equivalent difference in time-of-arrival we would need to detect would be one millionth of a millisecond. This is impossible for neurons to resolve. Fortunately, the other major property of light means that we do not need time-of-arrival information to know where light is coming from. In transparent media such as air, light rays travel in straight lines, enabling light to convey information with high spatial precision. This means that two rays of light coming to me from adjacent leaves on the tree outside the window, or adjacent letters on this page, fall on different parts of the retina – the part of the eye that translates optical image information into neural signals. As a result of this simple property (travelling in straight lines), we can resolve these separate details. In other words, we have a high degree of directional sensitivity, [directional sensitivity similar to acuity] or a high acuity.
[acuity the finest detail that the visual (or other) system can distinguish] Without this property, the light from adjacent letters on this page would become irretrievably jumbled and we would not be able to resolve the letters.
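The ‘one millionth of a millisecond’ claim above can be checked directly. The head width used here (~0.18 m) is an assumed illustrative value:

```python
# Compare time-of-arrival differences across the head for sound vs light.
HEAD_WIDTH_M = 0.18        # assumed ear-to-ear / eye-to-eye scale
SPEED_OF_SOUND = 343.0     # m/s
SPEED_OF_LIGHT = 3.0e8     # m/s (about 300,000 km per second, as in the text)

sound_delay_s = HEAD_WIDTH_M / SPEED_OF_SOUND  # roughly half a millisecond
light_delay_s = HEAD_WIDTH_M / SPEED_OF_LIGHT  # well under a nanosecond

print(f"Sound: {sound_delay_s * 1e3:.2f} ms")
print(f"Light: {light_delay_s * 1e9:.2f} ns")
```

The light figure comes out below one nanosecond (one millionth of a millisecond), far too brief for any neuron to resolve – which is why vision relies on straight-line travel rather than timing.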

Monday, December 13, 2010



Our world is a complex place. Right now, I am sitting at a desk. I see the computer screen, and beyond that a window through which I see a garden and then a pine forest. I hear the faint whirring noise of the fan in the computer and the buzz of cicadas outside. I can smell the jasmine and pine sap from the garden – these smells become more potent if I concentrate on them. I can taste the coffee, which I recently sipped. My skin feels pleasantly warm in the summer heat, but my knee hurts where I grazed it a few days ago. I also feel the itching from some mosquito bites. How does all this information reach me? Examining the above description in more detail, especially the physical sources of information, will help to explain what is going on when we receive information from the world.

Sunday, December 12, 2010


The physiological changes associated with emotion are very familiar to us. It is hard to imagine even the mildest emotional experience without its attendant arousal. [arousal the fluctuating state of physiological activation of the nervous system] When we are happy or sad, afraid or angry, jealous or disgusted, the changes in our bodies are obvious. We might experience ‘butterflies in the stomach’, ‘a sinking feeling’, or ‘our heart in our mouth’. We feel ourselves blush, feel our heart race as we narrowly miss an accident, and feel the drooping depletion in our body that accompanies sadness or depression. We are more aware of the peripheral nervous system than we are of the central nervous system (CNS). We can feel our skin sweating or our muscles tensing, whereas most of us cannot feel our hypothalamus sending out signals, even though we might become aware of the result. We cannot feel our brain doing its work, emotional or otherwise. How do we learn to recognize the bodily changes that accompany our emotional states? Do they differ, depending on the emotion we are experiencing? Can there be emotion without physiological change?

Variation in patterns of arousal

Emotion is about coping with sudden changes in our environment, changes that have significance for our survival (physical or social). So the autonomic nervous system (ANS) prepares the body for action and helps it back to quiescence later. These are what we refer to when we talk about changes in arousal. Over the years, psychologists have proposed that the various emotions experienced in everyday life have their own specific response patterns, [response patterns particular patterns of physiological responses, in this case linked to various emotions] in terms of arousal. So, fear should have a different pattern from anger, which in its turn should be different from sadness and happiness, and so on.
These suppositions were endorsed by the much-quoted study of Wolf and Wolff (1947), who investigated a man who had had a gastric fistula (a pipe directly into the stomach) inserted for medical reasons. They found clear and consistent gastric differences between anxiety and anger. But further evidence demonstrating differential physiological response patterns for different emotions was scarce for many years. Lacey and Lacey (1970) found some evidence for emotion specificity in the cardiovascular system, but it was not until 1990 that Levenson, Ekman and Friesen offered clear support for emotional response patterning. By instructing people on which facial muscles to use, they asked them to hold various emotional expressions for ten seconds. They found that happiness, surprise and disgust (or, at least, the facial expressions associated with these emotions) are characterized by a different heart rate from anger, fear and sadness. Moreover, skin temperature was found to be lower in fear than in anger.

Action readiness

The behavioural view of emotion is clearly limited and does insufficient justice to its richness. It has provided some useful behavioural information but over time has given way to the physiological and cognitive approaches. A relatively recent and promising consideration of the behavioural aspects of emotion comes from Frijda (1996; Mesquita & Frijda, 1994), who proposes that the behaviour in emotion comes from action readiness, or action tendency. Frijda emphasizes potential behaviour rather than the behaviour itself. The central notion here is that emotion carries with it a readiness to behave in a general way, rather than necessarily being associated with particular behaviours. So, for example, fear might produce a tendency to run away or to hide, but there could be very many ways of running away or hiding. Also, as Frijda sees it, an action tendency might be suppressed or hidden behind some other behaviour, for social reasons. So we might feel like running away or hiding, but we do not because of the risk of looking foolish or cowardly. Clearly Frijda’s approach to emotion–behaviour links is very different from earlier ones. It is more subtle, more realistic and of more obvious relevance to human emotion.


Freud (1975a, 1975b) had two theories of neurotic anxiety, both suggesting that it is made up of an unpleasant feeling, a discharge process, and a perception of whatever is involved with this discharge. Freud believed that anxiety develops through the trauma of birth, the loss of the caregiver, early uncontrollable threats or impulses, and, more specifically, fears of castration. In contrast to Freud’s theoretical framework, subsequent work (e.g. Bowlby, 1973) has stressed the importance of separation from early attachments. For learning theorists (e.g. Mowrer, 1953), anxiety is a form of learned fear, particularly when the source of the fear is vague or repressed. Anxiety becomes a conditioned response that can then participate in new learning. Taking this a stage further, H. Eysenck (e.g. 1957) suggests that we inherit proneness to neurotic anxiety through the ANS, or learn it as conditioned fear. In searching for the physiological mechanisms that might underlie these processes, Gray (1982, 1987) states that the septo-hippocampal region of the brain mediates anxiety. This brain system functions to inhibit behaviour that is a threat to the organism. Some recent theories of anxiety stress cognition. For example, M. Eysenck (1988) shows that those who are high or low in anxiety also differ in their cognition. So someone with high trait anxiety (anxiety as a personality characteristic) is likely to have more worries stored in long-term memory than someone with low trait anxiety, and these worries will be much more easily accessed. One of the most telling contributions in this area comes from Barlow (e.g. 1991), who places anxiety and depression at the centre of emotional disorder. He argues that it is difficult to distinguish between anxiety and depression. However, whilst most depressed patients are also anxious, not all anxious patients are depressed. Barlow suggests that emotional disorders occur when chronic states of dysthymia (i.e.
lowered mood) interact with briefer episodes of panic and depression. This might lead a depressed patient to misinterpret a personal or environmental event as a sign of personal inadequacy, which simply makes matters worse. Barlow’s general argument is that stress, anxiety and dysthymia can interact with everyday emotions of excitement, anger, fear and sadness. When this happens, the result is one of four kinds of emotional disorder – mania, outbursts of temper, panic or depression. For fully fledged emotional disorders to occur, these emotions have to be experienced unexpectedly or inappropriately, and to be seemingly out of control. Finally, it is worth repeating that emotions can never be abnormal. Their expression or recognition may be awry, they might become too extreme for comfort, or they might contribute to mental illness, but even in these unfortunate circumstances, emotions always provide us with useful information. In the case of abnormalities, this is information that something is wrong and needs fixing.


In psychosomatic disorders, there are links between emotion, cognition and physical symptoms (including pain). Examples where such links have been established are asthma, peptic ulcers, hypertension and skin rashes, where psychological factors may exacerbate the condition even if they do not cause it per se. These disorders are usually mediated via organs or organ systems that are innervated by the autonomic nervous system (ANS). Furthermore, many physical illnesses are now thought to have a psychological, more particularly an emotional, component (e.g. Robinson & Pennebaker, 1991). Pennebaker (e.g. 1990) has also reported some fascinating research showing that communicating (by talking or writing) about our illnesses and negative emotional experiences may help to ameliorate them. Anxiety is thought to be at the root of many psychosomatic disorders. It is one of the most common emotions, and certainly contributes to many types of illness, physical or mental. At one level, it is a commonplace experience, and has had more theories offered to account for it than most other emotions (see McNaughton, 1996; Strongman, 1996). On another level, there is extreme anxiety. Imagine this. You suddenly start to tremble, shake and feel dizzy. Your heart is speeding up and slowing down uncontrollably and you have pains in your chest. You feel overwhelmingly hot and break out in a sweat, and then you start to shiver with the cold. Your hands and feet start to tingle. You seem to be losing touch with reality and worry that you are having a heart attack or a breakdown. [panic attack sudden and apparently inexplicable experience of terror characterized by extreme physiological reactions, such as heart palpitations and feelings of impending doom] This is a panic attack – the extreme form of acute anxiety – and it is most unpleasant and disturbing. Any of us might have a panic attack under severe circumstances.
We might ride it out and put it down to external factors that we are able to tackle. It then becomes an experience to look back on. But if we begin to worry about having more panic attacks, then we might be developing a panic disorder. This might lead us to start avoiding social situations that we believe might bring on an attack. We are then becoming agoraphobic.


As our emotional life develops, can it go wrong? Is it possible to be too happy or too sad or too angry? Is it useful to face life with a moderate degree of anxiety? Emotion is always normal. It can be extreme or unusual, but it is always providing information for whoever is experiencing it. It might be seen as inappropriate by other people, but for those who are experiencing the emotion, it is simply their experience. They might be able to limit its expression on the outside but unable to influence directly their own personal emotional reaction. Emotion is functional, both in the immediate sense of providing information, and in the true evolutionary sense of being adaptive (otherwise it would not have been preserved by natural selection). However, emotions have also been seen as contributing to most forms of mental illness, leading Oatley and Jenkins (1992) to ask: how can emotions malfunction?


Finally, the study of emotional intelligence [emotional intelligence the capacity to be sensitive to and regulate our own emotional state, and that of other people] and regulation is rapidly becoming an important area within emotional development research, highlighting the link between emotion and cognition.

Emotional intelligence refers to a set of skills that we use to deal with emotion-relevant information. Salovey, Hsee and Mayer (1993) suggest that it is concerned with:
1. the appraisal and expression of emotion;
2. the use of information based on emotion; and
3. the adaptive nature of emotion regulation.
Of particular importance is how we learn to regulate our own emotions. Salovey and colleagues argue that emotional self-regulation depends on two factors. The first is how disposed we are to regulate our own emotions. This in turn depends on emotional awareness and our thoughts about our own moods. Secondly, it depends on strategies that can be used to affect our own feelings. For example, we might manipulate what we feel by spending a day helping other people, or perhaps by completing the less pleasant tasks of the day early on, saving the more pleasant things for later. Thompson (1990, 1991) links changes in emotional self-regulation to the development of cognitive skills, allowing emotion to be seen as analysable and capable of change. No doubt, such capacities themselves depend on a mixture of genetic influences and the development of language and social behaviour. As emotional intelligence and the ability to self-regulate develop, so does a child’s own way of thinking about emotion. This, in turn, will be influenced by socialization. So emotional intelligence and self-regulation may to some extent depend on the attachment style the child experiences and how well socialized she becomes. Emotional intelligence is, of course, also concerned with accurately interpreting and dealing with others’ emotions.


A core part of early emotional development is attachment – the initial emotional bond that forms between an infant and caregiver. [attachment the close links formed between a human infant and caregiver, or the intimate bond that can form between adults] According to many theories, this forms the basis of both social and emotional development of the individual. The seminal work on attachment described two major types of attachment pattern – the secure and the insecure. Insecure attachment is further divided into two types – one defined by avoidance of the attachment figure (avoidant) and the other by anxiety and ambivalent feelings towards the attachment figure (anxious–ambivalent). In drawing attention to attachment, Bowlby placed great emphasis on the emotional relationship between the child and the caregiver during the first two years of life. His basic idea was that a warm, continuous relationship with a caregiver leads to psychological health and well-being throughout life. So the nature of the emotional bond of the initial social attachment has implications not only for future intimate relationships but also for potential psychopathology. Bowlby argued that the child’s relationship with the caregiver prompts the development of internal working models. These give the child a schema of how accessible and responsive a caregiver is and how deserving of care the child is. These models will then affect future relationships. A secure working model will prompt expectations of good relationships and an open, positive manner. By contrast, an insecure working model may lead to expectations of poor, unsupportive relationships and a distrustful, hostile manner. Of course, these differences in style will bring about obvious outcomes – what we might call self-fulfilling prophecies. The enormous amount of research linking initial attachment to later development has been reviewed by Thompson (1999).
He concludes that the relationship between early attachment and later relationships (including love relationships) is not straightforward. Rather than being fixed at an early age and then remaining unchanged, it is mediated by a continuing harmonious parent–child relationship and depends on the nature of other short-term relationships too. Internal working models of how people relate might be established on the basis of the initial attachment, but can be changed by later social experiences and even by psychotherapy. Thompson summarises the effects of early attachment to caregivers as providing children with answers to four questions:
1. What do other people do when I express negative emotion?
2. What happens when I explore?
3. What can I accomplish?
4. How do I maintain good relationships with others?
A great deal of research on attachment in children and adults documents its importance from both developmental and clinical perspectives. For a full coverage, see Cassidy and Shaver (1999).
It should by now be clear that, although early processes (such as attachment to the primary caregiver during infancy) are important, emotion goes on developing throughout the life-span. Indeed, some of the more fulfilling emotional experiences occur later on in life. For a review of this topic, see Strongman and Overton (1999).


Following these early beginnings, the study of emotional development was relatively quiescent until the 1980s, when new theories and more sophisticated empirical research began to appear. For example, Harris (e.g. 1989) carried out a series of studies on how children understand emotion, often using stories and asking children to make judgements about the characters’ emotions. Among other results, he found that children of about six cannot imagine people having an emotion without their expressing it, but by the age of ten children understand hidden feelings in others.
A clear theory of emotional development was developed by Izard and Malatesta (1987; Magai [formerly Malatesta] & McFadden, 1995). They suggest that emotion is a system that relates to life-support, and to behavioural and cognitive systems, but develops independently of them. They view emotions as generating much of the motivational force behind behaviour. Izard and Malatesta express their theory with formal postulates about the neurochemistry of emotion, and the expression and experience of emotion. Although they believe (like Bridges) that emotions become differentially associated with internal states very early in life, Magai and Hunziger (1993) suggest that individual emotional development hinges on life’s crises and transitions, such as puberty, marriage or retirement. However, they argue that everything begins with attachment (see below), and if an emotion overwhelms at age 17, it might still overwhelm at 77, despite the process of individual emotional development. In other words, although we might learn to express our emotions in different ways throughout our lifespan, the experience of the emotions remains constant. Moreover, some of the most compelling emotional experiences that many people have are concerned with their social attachments (romantic or otherwise). Lewis is another major theorist of emotional development (e.g. 1992, 1993). He regards emotional development as dependent on maturation, socialisation and cognitive development, through gradual differentiation of emotional states. Lewis argues that we have to be self-aware to truly experience emotion. So before the infant has developed self-awareness, according to Lewis, it could have an emotion but would not properly experience it. Like Bridges and Malatesta/Magai, Lewis believes that most emotions have appeared by about the age of three. Lewis argues that distress, interest and pleasure are there from birth.
Joy, sadness and disgust, then anger, appear from three to six months, followed by surprise and fear. In the second half-year of life, with self-awareness developing, Lewis argues that the self-conscious emotions of embarrassment, empathy and envy appear. Finally, Lewis states that the self-conscious evaluative emotions of pride, shame and guilt appear. These emotions depend on seeing the self as both subject and object, requiring a theory of mind – the understanding that other people have minds and hence separate viewpoints. In the end, for Lewis, the cornerstones of emotional development are cognition and socialization. From the two theories that we have reviewed above, it should be possible to ask what actually develops during emotional development. The answer takes us back to the five perspectives on emotion described earlier in this chapter. So we develop (a) emotional experience, (b) emotional behaviour and (c) physiological reactions. We also learn to express and recognize emotion in various social situations, depending on personal maturation and cognitive development


While individual differences in temperament seem to be there from birth, emotion, cognition and social behaviour appear to develop together and to be dependent on one another. However, some aspects of emotion must be built in or hard-wired. Studies by Watson and Rayner (1920) and Bridges (1932) dominated the early investigation of emotional development. From a behavioural perspective, Watson and Rayner were interested in emotional development through conditioning, and studied conditioned fear of rats in an 11-month-old boy.
Watson argued that our emotional lives build up around this type of conditioning, although he argued that the foundations for this are provided by what he saw as the three basic built-in emotions. Watson called them X, Y and Z, although they could be named fear, rage and joy. His observations of infants suggested that these reactions are elicited by, respectively, a sudden loss of support, a thwarting or hampering of physical movement, and a stroking or tickling of the body. Bridges’ (1932) approach to emotional development was based on observation rather than experiment. She believed that we have only one built-in emotional state – undifferentiated excitement. By about the age of three months, Bridges argued that this divides into positive (delight) and negative (distress). There follows increasing differentiation of the emotions, until, by about the age of two, we show a primitive form of all of the adult emotions. With respect to this proposed differentiation, Bridges argued that at about six months comes anger, then disgust, and then fear, and at 18 months or so jealousy breaks away from anger. As for positive emotions, it is proposed that elation develops at about seven or eight months, joy at about 20 months, affection for adults at about nine months, and affection for children at about 15 months. For many years, Bridges’ descriptions could be found in most psychological texts, even though it could be argued that her observations were very sketchy, her definitions inexact, and she had not dealt adequately with emotion in newborn infants.


There are, of course, many other discrete emotions. Jealousy and envy are sometimes confused with one another in everyday conversation, but are quite easily distinguished. We become jealous if we think that we might lose someone’s affections (usually those of a sexual partner) because a third person is involved. On the other hand, we envy someone who has something (a possession, a quality, etc.) that we would like. It makes little sense to be jealous of a friend’s car. There is also a class of self-conscious emotions – embarrassment, pride, shyness, shame and guilt. They all make reference in some way to the self, particularly the self in a social context. Most emotions are social, but the self-conscious emotions are distinctive insofar as they depend on other people’s opinions. Lewis (1993) describes shame, for example, as involving an evaluation of our actions in relation to our entire self (our character), following a transgression of standards, rules or goals. It is always very negative and painful, and disrupts both thought and behaviour. Shame is concerned with a fundamental failure of the self, a character flaw, and we have a very strong motivation to avoid or escape it.

Are we born with our emotions or do we learn them? What happens as we turn from the emotional excesses of childhood to the more inhibited world of the adult? How important are early relationships to emotional development?


Of the five fundamental discrete emotions, four are generally judged to be ‘negative’ – fear/anxiety, anger, sadness and disgust – and one to be ‘positive’ – happiness. Although there is only one positive emotion, the negative emotions are not always experienced as negative. In fact, the distinction between positive and negative emotions may not be altogether appropriate, as we shall see.
Anxiety will be discussed later in this chapter when we consider abnormalities in emotion. Fear is directed towards specific objects or events; it alerts us to danger and prompts us to escape or avoid. Anger, on the other hand, is quite different. In a perceptive analysis of anger, Averill (1982) argues that it is an emotion about conflict, and is inevitably linked to aggression. However, even though aggression might be biologically determined, Averill sees anger as largely socially constructed, aimed at correcting perceived wrongs and upholding standards of conduct. As such, the experience of anger is not necessarily negative. The third specific emotion, sadness, has a directness that makes it seem a little less negative than some of the other negative emotions. It is usually a reaction to loss that slows us down into discouragement, downheartedness and loneliness. Grief is an extreme and very complex form of sadness and always involves the loss of something, or more usually someone, of great importance to us. Izard (e.g. 1991) describes grief as including sadness, anger, disgust, contempt, fear, guilt and shyness, as well as shock, protest, despair and reorganisation. The last of the negative emotions, disgust, is very primitive. Its central concern is with the rapid expulsion from the body of any substance that might be toxic, noxious or harmful to it. Happiness, joy, elation, and so on, seem to be variations on a theme. In recent years, there has been an increasing emphasis on the study of ‘positive psychology’ (embracing constructs such as happiness), in contrast to the study of what might be termed ‘negative psychology’ – see the work of Martin Seligman and colleagues at the Positive Psychology Center, University of Pennsylvania. However, Averill and More (1993) argue that happiness is difficult to understand because it can take on so many different meanings.


So far we have considered emotion in general terms, but there have also been many attempts to study specific emotions. Izard (1977, 1993) is one psychologist who has discussed specific emotions in detail. He argues that there are discrete emotions, a view that makes good everyday sense. In Differential Emotions Theory, he suggests that emotions are motivational and organize perception, cognition and behaviour, helping us to adapt and cope with the environment, and to be creative. Like many other theorists, Izard links emotion with personality, believing that they develop together from the early years.


The fifth way of approaching emotion concerns its mainly social nature. This highlights the importance of emotional expression as well as personal characteristics, such as gender, that may be related to differences in emotional expression. Are emotional expressions universal? How do we recognise emotions in other people? How do we express emotion?
Body language – nonverbal expression of emotion

Although we can experience emotion when alone, emotion is mainly a social occurrence. Emotional expressions communicate a great deal, and we rely on recognising them in others to assist in the smooth running of our social interactions. Body language is central to emotional communication, which is essentially nonverbal. [body language expressions, gestures, movements, postures and paralinguistic aspects of speech that form the basis of nonverbal communication] While we communicate about the world verbally, there is a nonverbal subtext that relates to the interplay of our emotions. The interpretation of the emotional meaning of body language is a skill that we seem to acquire and use unconsciously, even automatically. Some people are better at it than others, just as some people are more openly expressive of their emotions than others. Our ability to suppress and moderate our emotional expression further complicates matters.
To find out more about emotional expression, we first have to decide whether to study it in everyday settings or in the laboratory. Both have their difficulties – the context of ordinary life is complicated by a multitude of influences, while the laboratory is essentially an artificial environment with respect to normal social interaction. Methods used in the laboratory to study the accuracy of emotional expression include photographs of real or posed expressions, actors, schematic drawings, emotional readings of the alphabet, and electronic filtering of voices (leaving only the manner rather than the content). For example, actors may be asked to express a range of emotions, with photographs of these expressions being shown to volunteers to determine if they can be recognized correctly. Or emotion-laden conversations may be recorded and then the actual words used filtered out electronically, with volunteers then being asked if they can recognize any emotions being expressed in the resultant sounds. Back in the 1970s, Ekman, Friesen and Ellsworth (1972) demonstrated that most people are able to judge emotional expressions reasonably accurately. In other words, we can correctly recognize the emotion being expressed on another person’s face. One way of studying this is to ask participants to identify the emotions portrayed in photographs posed by actors. Many of these expressions are universal, to the extent that they are present in all the cultures studied. Emotional expressions are also recognisable across cultures, including pre-literate cultures untouched by Western influence. Izard (1980) argues that there are ten basic emotions that are interpreted similarly across cultures, each with its own innate neural programme (that is, a programme defining how the nervous system is wired up, present from birth):
1. interest/excitement
2. joy
3. surprise/startle
4. distress/anguish
5. disgust
6. contempt
7. anger/rage
8. shame/humiliation
9. fear/terror
10. guilt

The possible universality of the facial expression of emotion and its recognition is another central debate in the study of emotion. In general, although there is a very widespread agreement across cultures, it is difficult to make a completely compelling generalization from this type of research. Without investigations into all cultures, universality cannot be finally concluded. There are also cultural and subcultural rules governing the display of emotional expression. Fear might be expressed in a similar way universally, but its expression might be more suppressed in some cultures than others. And in Western cultures, anger is usually more openly expressed by men than by women. Ekman (e.g. 1982, 1992) bases his theory of emotion on three assumptions:
1. Emotion has evolved to deal with the fundamental tasks of life.
2. To be adaptive in evolutionary terms, each emotion must have a distinct facial pattern.
3. For each emotion, a distinctive pattern exists between expression of that emotion and the physiological mechanisms associated with it, and this is linked to appraisal of the emotion.
Some of Ekman’s more fascinating work concerns what happens when we attempt to hide or suppress an emotion. Ekman and Friesen (1969) suggest that feelings ‘leak out’ nonverbally. Although we might successfully suppress our facial expression, our social anxiety might be expressed through movements of our hands and arms, and even our legs and feet. Ekman (1985) developed this research with respect to deception in general, mentioned earlier in this chapter in the context of lie detection. The expressive aspect of emotion has generated the facial feedback hypothesis (Tomkins, 1962). [facial feedback hypothesis the view that our experience of emotion is determined by physiological feedback from facial expressions] This suggests that the experience of emotion is intensified by the proprioceptive feedback we receive from its facial expression. So if you fix a smile or a frown on your face for some minutes, you should begin to feel happier or more irritable, respectively. Try holding a pen sideways between your front teeth for a few moments, a technique used by Strack, Martin and Stepper (1988), and you might begin to experience feedback effects such as you might experience if you were feeling happy and in good humour. Now compare holding the pen between your lips.

This provides an interesting link with the James–Lange theory – perhaps it is possible that we become irritable because we frown or happy because we smile?

Are Western women irrational and emotional, and Western men logical and non-emotional? Brody and Hall (1993) showed that women are generally more emotionally expressive than men. They are also better at expressing sadness and fear, whereas men have the edge on them with anger. Yet such gender differences are probably more dependent on cultural than genetic factors. In Western society, girls are usually brought up to be more emotionally accountable to society than boys, and also to be responsible for their own emotional lives and for the emotional lives of those around them. Relatively speaking, boys are often encouraged to deny their emotions. Whether these differences are currently changing in Western society is an open question. See Shields (2002) for a recent thorough exploration of the relationship between gender and emotion.

[Carroll Izard (1923– ), with his differential emotions theory, has been the main proponent of the study of individual, distinct emotions since the 1970s. He has stressed the importance of studying emotion from a developmental perspective. In Differential Emotions Theory, he suggests that emotions are motivational and organize perception, cognition and behaviour, helping us to adapt and cope with the environment, and to be creative. Arguing that there are several discrete emotions, Izard has proposed that the emotional system is independent of any other, although linked closely with motivation and personality, and that they develop together from the early years.]

[Paul Ekman (1934– ) has been the acknowledged expert on the expression and recognition of emotion from the early 1960s to the present. Among important research issues that he has investigated over his long and influential career, he has drawn attention to the importance of nonverbal behaviour, context, deception and many other aspects of emotional expression, particularly in the face. The findings of his work have generated considerable discussion of the possible universality of facial expression. Deriving from his research findings, his theory of emotion is based on three central assumptions: 1) emotion has evolved to deal with the fundamental tasks of life, 2) to be adaptive in evolutionary terms, each emotion must have a distinct facial pattern, 3) for each emotion, a distinctive pattern exists between expression of that emotion and the physiological mechanisms associated with it, and this is linked to appraisal of the emotion.]

The relationship between emotion and cognition

The question that remains is whether cognition, and in particular cognitive appraisal, is necessary for the perception of emotion. If someone lacks the cognitive capacity to make a particular appraisal of an event, can they experience the emotion that is normally associated with that event?
Lazarus (1982, 1984, 1991, 1993) has added greatly to our understanding of emotion and coping processes. [coping processes ways of dealing with stressors – usually a mixture of being problem-focused and emotion-focused] He believes that an event must be understood before emotion can follow. On the other hand, Zajonc (e.g. 1980, 1984) argues that cognition and emotion are independent, with emotion even preceding cognition in some cases. This debate about whether cognition necessarily precedes or follows emotion turns on the definition of cognition. It is clear that conscious thought is not involved in some rapid emotional reactions. A sudden screech of brakes tends to produce an unthinking, uncontrolled emotional reaction. But it can also be argued that some appraisals might occur unconsciously and immediately. If such appraisals are cognitions, then all emotion is preceded by and involves cognition. The alternative is that some emotions involve cognition and others do not. Perhaps this is an arid debate. In everyday life the interplay between emotion and cognition is very intricate. There is a huge difference between the internal lurch you would feel at a sudden loud noise in the middle of the night and the combination of thoughts and feelings you would experience if this turned out to be the precursor to your house going up in flames. In other words, a simple, immediate reflex action that might send a burst of adrenaline through the system is very different from the complexities of emotional reaction when the cortex is involved and specific hopes, fears, memories and expectations are implicated. The reflex system is primitive and very much centred on the ‘now’, whereas what might be termed ‘real’ emotion also involves the past and the future (through appraisals). It is clear that emotions can – or, as Lazarus would argue, must – result from appraisal. It is also clear that emotional states can affect thoughts and even subsequent emotions.
You have judged that your partner has been unfaithful to you (appraisal) and this makes you react jealously (emotion). But when you are jealous (emotion) this may in turn stop you thinking (cognition) as clearly as you normally would, and you may become anxious (emotion) about that.

The role of appraisal

Do we think before we experience an emotion, or do we experience the emotion and then reflect on it cognitively, or both? Compare these two situations:
1. You are sitting in the waiting room of a specialist, waiting for the results of some tests done to track down the cause of chest pains that have been bothering you. The receptionist comes over to you and apologizes that the doctor has been held up but asks you to wait because he would definitely like to see you.
2. You are crossing the street, lost in thought, when there is the sudden loud blare of a horn, the screech of locked wheels and the hiss of air brakes. You jump for your life and stand trembling as a truck rumbles past, the driver angrily shouting through the window.
These two situations both involve cognition and emotion, but in very different ways. Appraisal is the foundation stone on which the emotion–cognition structure is built. Theorists maintain that our evaluation – or appraisal – of the personal significance of an event leads to an emotional reaction. Such appraisals allow us to make fine distinctions between our emotional experiences and help us to determine the extent or intensity of the emotion. For example, being criticized privately is a very different experience from a public condemnation, and the appraisal of private criticism leads to a less intense emotional reaction (be it anxiety or anger). Attention was first drawn to the significance of appraisal for emotion by Arnold (1960), and the theme was pursued most strongly by Lazarus (1993), although its importance is assumed by many theorists who link emotion and cognition.
Ellsworth (1991; Smith & Ellsworth 1985) lists six dimensions of appraisal:
1. attention
2. pleasantness
3. certainty
4. anticipated effort
5. human agency
6. situational control
Each appraisal is considered to be unique, making each emotional experience unique, and the degree to which appraisals are similar determines the similarity between emotions.
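The claim that the similarity between appraisals determines the similarity between emotions can be made concrete with a small sketch. The appraisal profiles below are purely hypothetical values on a 0–1 scale, invented for illustration; they are not taken from Smith and Ellsworth’s data:

```python
import math

# Six appraisal dimensions from Ellsworth (1991; Smith & Ellsworth, 1985).
DIMENSIONS = ["attention", "pleasantness", "certainty",
              "anticipated_effort", "human_agency", "situational_control"]

# Hypothetical appraisal profiles (illustrative values only, not real data).
fear = [0.8, 0.1, 0.2, 0.9, 0.3, 0.1]
anger = [0.9, 0.1, 0.7, 0.8, 0.9, 0.4]
joy = [0.7, 0.9, 0.8, 0.2, 0.5, 0.7]

def appraisal_distance(a, b):
    """Euclidean distance between two appraisal profiles:
    a smaller distance means a more similar emotional experience."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# On these invented profiles, fear sits closer to anger than to joy.
print(appraisal_distance(fear, anger) < appraisal_distance(fear, joy))
```

On these made-up profiles, fear lies closer to anger than to joy in appraisal space, which is the kind of fine-grained distinction between emotional experiences that the theory is intended to capture.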


In recent years, research into emotion and cognition has positively exploded, coming to dominate the field, although it remains a controversial approach to the psychology of emotion.

Linking arousal and cognition

Schachter (1964, 1970) put forward a two-factor theory that had a profound influence on the way that psychologists think about emotion. Briefly, he argued that a necessary part of emotion is arousal of the sympathetic nervous system. The intensity of such arousal differs from situation to situation, and, according to Schachter, is interpreted according to our beliefs and/or knowledge about the situation. This means that our experience of emotion depends on two factors – physiological arousal and cognition.

Schachter derived three empirical predictions from his theory, which he tested in a cunningly devised experiment (Schachter & Singer, 1962). This work, conducted over 40 years ago, has provided the impetus for research on the relationship between emotion and cognition that continues up to the present day. Schachter’s work was partly based on a study by Maranon (1924), who had injected 120 patients with epinephrine (adrenaline) and asked them to say what it made them feel like. Adrenaline causes changes in sympathetic arousal reflected in rises in heart rate and blood pressure, respiration and blood sugar. Subjectively, this takes the form of palpitations, tremors, flushing, faster breathing, and so on. About 70 per cent of Maranon’s patients reported only physical effects while the other 30 per cent also mentioned emotional effects. Typically, participants in the latter group said that the injection made them feel ‘as if ’ they were afraid, rather than actually feeling afraid. Schachter (1959) believed that an epinephrine injection would produce a state of arousal that people would evaluate in terms of whatever they perceived around them, if they were unaware of the effects to expect from the injection. He made three propositions that, between them, show the necessity of both cognition and arousal to emotion:
1. If we are in a physiologically aroused state for which there is no obvious explanation, then we will label it by using whatever cognitions are available to us. The same state might be labelled in many different ways.
2. If we are in a physiologically aroused state for which the explanation is obvious, then we will not seek further explanations.
3. For emotion to occur, there must be physiological arousal.
To test these propositions, Schachter and Singer (1962) persuaded participants to agree to an injection of a ‘vitamin’ so that its effects on vision could be determined. In fact, they were injected either with epinephrine or a placebo (saline). For ethical reasons, participants would nowadays be debriefed after the completion of such a study concerning the misinformation they had received.
Participants were then given one of three ‘explanations’ of the effects of the injection. Epinephrine-informed participants were told that the ‘vitamin’ might have side effects lasting for about 20 minutes. The effects described to them were the actual effects of epinephrine. Epinephrine-ignorant participants were told that the injections would have no side effects. Epinephrine-misinformed participants were told to expect impossible side effects, such as numb feet, body itches and headaches. There was also a control group, injected with saline, which had the same instructions as the epinephrine-ignorant group. Following the injection, individual participants were taken to wait in a room with another person whom they believed to be another participant, but who was, in fact, a confederate of the experimenters. For some participants, the room was a mess and the confederate was friendly and extraverted (the ‘euphoric’ condition). The remaining participants waited with the confederate in a different room and were given personal and somewhat insulting questionnaires to complete; the confederate became steadily more angry with this and eventually stormed out (the ‘anger’ condition). Participants were observed through one-way mirrors and were given self-report questionnaires afterwards, the major questions concerning how angry or irritated, or how good or happy, they felt. In the euphoric condition, the epinephrine-misinformed or epinephrine-ignorant participants rated themselves as being significantly more euphoric than the epinephrine-informed participants.
The placebo control participants were less euphoric than either the misinformed or ignorant groups, but more euphoric than the informed group, although these differences were not significant. The epinephrine-misinformed and epinephrine-ignorant participants had no good explanation for their bodily state. Similarly, in the anger condition, epinephrine-ignorant participants were significantly angrier than the epinephrine-informed, with no differences between controls and the misinformed or ignorant groups. In this ingenious experiment, Schachter and Singer were convinced that they had supported Schachter’s three propositions by manipulating cognition and arousal. Schachter’s (1970) general conclusions were that there is little physiological differentiation between the emotions, the labelling of emotional states being largely a cognitive matter. Even though both Schachter’s ideas and his studies have been influential, they have also been criticized (see Cotton, 1981; Izard, 1972; Leventhal, 1974; Plutchik & Ax, 1967; Reisenzein, 1983). To take one example, Schachter did not prove that emotion depends on physiological arousal and cognition. It may be possible to induce physiological arousal through cognition, or to produce a sort of physiological tranquillization cognitively. For example, it is possible to speed up or slow down heart rate and respiration simply by imagining playing a vigorous sport or by visualizing a tranquil scene.
Leventhal (1974) goes further, arguing that Schachter has never shown exactly how arousal and cognition combine in emotion, particularly in children. From a Schachterian perspective, how would a young child be able to feel any emotion before knowing the linguistic label for that feeling? In the end, although Schachter’s ideas have not been disproved, neither have they stood up robustly to criticism. At present, it is reasonable to conclude that feedback from physiological arousal can directly intensify emotional states. Moreover, the arousal–emotion link is mediated, or at least affected, by causal attributions, or appraisals (see chapter 17), about the source of the arousal. Whether both physiological arousal and cognition are necessary for the perception of emotion remains an open question.

[Stanley Schachter (1922–97), an innovator in the 1960s, more than anyone else succeeded in introducing significant ideas emanating from the ‘cognitive revolution’ into the area of emotion. His influence has continued to the present day, especially regarding the reciprocal influence of emotion and cognition, and the particular significance of attribution. His ingenious study with Singer began many years of exploration into the relationship between physiological arousal and cognitions (especially appraisals) in the psychology of emotion. His general conclusions were that there is little physiological differentiation between different emotions, the labelling of emotional states being largely a cognitive matter. These conclusions have been vigorously challenged but his influence in the field remains.]

The limbic system and emotion

By now it should be clear that emotions have biological and evolutionary bases and involve both the CNS and the ANS. Although subcortical brain mechanisms are implicated in emotion – from the brain stem to the hypothalamus, thalamus and amygdala – cortical structures play an executive role. Animals with their cortex removed but with intact hypothalamus and thalamus show violent (sham) rage (Dusser de Barenne, 1920). Sham rage is so called because a weak stimulus can cause a release of autonomic responses (such as sweating and increased blood pressure) that are normally only elicited by strong stimuli, and the anger is not directed at any one particular entity. Electrical stimulation of the hypothalamus can also produce such rage. Subcortical structures alone, however, do not provide the physiological mediation of emotion. This is provided by the limbic system of the cortex, with its extensive connections to the subcortex. The long history of research in these regions includes work by MacLean (1954, 1957, 1993). This work suggests that the limbic system, throughout its evolution, has helped to refine the emotional feelings that influence self-preservation. More recently, Panksepp (1981, 1989, 1991, 1992, 1993) made a very significant theoretical contribution to the physiology of emotion. He agrees that emotion is centred in the limbic system and has provided evidence for four, or possibly five, hard-wired emotion-mediating circuits. Panksepp is certain about the emotions of
i) expectancy, ii) fear, iii) rage and iv) panic,
although his evidence is not quite as convincing for the fifth, ludic (play) system. Interestingly, Panksepp’s approach is not solely neurophysiological but also considers the subjective or experiential. So, not only are there structural similarities between mammalian limbic systems across species, but Panksepp further uses subjective experience as a guide for distinguishing between those human brain states of emotion that appear also to be differentiated neurophysiologically.

At the same time, Le Doux (1999) demonstrated convincingly that much of the CNS work in relation to emotion is performed by the amygdala. Le Doux argues that the amygdala acts as an ‘emotional computer’, analysing any incoming information for its significance. In right-handed people, the right side of the brain, with which the amygdala has more extensive connections, is more associated with emotion than the left side. Le Doux argues that the connections between the amygdala and the thalamus may be especially relevant in the perception of emotion. In left-handed people, it is likely that the converse holds – i.e. that there are left hemisphere–amygdala connections. It should be apparent from the discussion presented above that the physiological investigation of emotion has added much to our current knowledge. We now turn to a consideration of the complementary cognitive perspective.

The lie detector

The history of the lie detector is a practical reflection of the lack of firm ground in the psychophysiology of emotion. Determining whether someone is being truthful is important in all walks of life. Historically, the methods used have ranged from torture through interrogation to interview. At one time, suspected witches were ducked under water for some time. If they drowned, they were innocent. If they did not drown, they must be witches and so were put to death anyway. There was no way out of this test (what these days we may refer to as a ‘Catch-22’) – but the obvious way to deal with slightly less extreme methods is to tell people what you think they want to hear (that is, to lie, but in such a way that it ‘beats’ the test). The rationale behind the lie detector, or polygraph, is that the act of lying causes measurable psychophysiological changes in emotional arousal. The polygraph measures such responses as heart rate, blood pressure, respiration and the electrical conductivity of the skin (which changes with variations in sweating). Measures are taken from the person when relaxed and again when a mixture of critical and non-critical questions is put: ‘When did you last hold a gun?’ versus ‘When did you last hold a party?’, for example. Similar questions might be asked of an ‘innocent’ person and the patterns of response compared. Our psychophysiological responses are thought to give us away but, in practice, polygraph methods of lie detection are not reliable. Merely being asked about a gun might cause changes in psychophysiological measures, and, furthermore, it is very unlikely that there is a particular response pattern for lying (e.g. Lykken, 1984; Saxe, 1991). If a foolproof way to detect lying is ever devised, enormous ethical dilemmas will arise. Imagine taking a lie detector test at a job interview and then being told that you would not be employed because you had cheated once at school. Imagine parents giving such tests to their children.
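The comparison logic just described – a relaxed baseline, then critical versus non-critical questions – can be sketched as follows. All the numbers and the scoring rule are invented for illustration; as noted above, no reliable response pattern for lying is actually known, which is precisely why a simple score like this cannot be trusted:

```python
# Hypothetical polygraph-style comparison: change in skin conductance
# (arbitrary units) relative to a relaxed baseline. All values are invented.
baseline = 5.0
critical_responses = [7.9, 8.4, 7.6]   # e.g. 'When did you last hold a gun?'
control_responses = [5.3, 5.1, 5.6]    # e.g. 'When did you last hold a party?'

def mean(xs):
    return sum(xs) / len(xs)

def arousal_score(critical, control, base):
    """Difference between baseline-corrected arousal on critical
    and non-critical questions (a made-up scoring rule)."""
    return (mean(critical) - base) - (mean(control) - base)

score = arousal_score(critical_responses, control_responses, baseline)
print(round(score, 2))  # a large positive score would be read as 'deceptive'
```

The weakness the text identifies is visible in the sketch itself: merely being asked about a gun could inflate the critical-question readings, producing a large score from an entirely innocent person.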

Theories of emotion

The physiological arousal aspect of emotion has been responsible for many theoretical developments. The James–Lange theory of emotion has probably been referred to more than any other. It began with William James (1884) but was also propounded by Carl Lange (1885), and stressed the importance of physiological mechanisms in the perception of emotion. It is the following quotation from James that is most frequently cited: ‘the bodily changes follow directly the perception of the existing fact, and . . . our feeling of the same changes as they occur is the emotion’ (1884, p. 189). This theory drew attention to bodily changes occurring in response to environmental events, and suggested that emotion is our feeling of the bodily changes that follow perception. This reverses the commonsense idea that we perceive something that causes the emotional experience, which, in turn, causes the bodily changes. As shown in figure 6.3, the primary processing of environmental information occurs from the sensory receptors to the cerebral cortex, after which information is relayed back and forth between the cerebral cortex and the viscera (internal organs) and musculature. According to the James–Lange framework, it is the interpretation of these bodily changes that represents the perception of emotion. The first and most vociferous opposition to the James–Lange theory came from Walter Cannon (1915, 1927, 1931, 1932) in what has come to be known as the Cannon–Bard theory of emotion. Cannon emphasized the physiological foundations of emotion, including the CNS, and particularly the thalamus (see figure 6.4). According to the Cannon–Bard framework, environmental information is first relayed from the sensory receptor to the thalamus, after which it is sent to the cerebral cortex and to the internal organs and skeletal muscles, and then back and forth between the cerebral cortex and thalamus.
Note that there is no direct communication in this framework between the cerebral cortex and the viscera or muscles. Cannon also put forward some cogent criticisms of James’ theory.
The most important were that:
1. internal organs react too slowly to be a good source of information about emotional feelings;
2. a drug, whilst it might induce sympathetic arousal in the nervous system, does not in itself produce emotion (see our discussion of Maranon, later); and
3. bodily arousal patterns do not differ much from one emotion to the next.
This third point was certainly prophetic of the later lack of empirical success in finding clear, dissociable bodily response patterns in emotion.

Nevertheless, the psychophysiological analysis of peripheral mechanisms in emotion makes it abundantly clear that arousal is an integral part of emotion. It also seems that the various emotions might have some characteristic patterns of psychophysiological reactions associated with them. It is therefore possible that, as measurement techniques become more advanced in the future, patterns of psychophysiological responses might be found for the various emotions. But the current belief is that for any subtle emotional differentiation, the cognitive mechanisms underlying emotion need to be directly addressed.