The Divisions of the Nervous System
The specification states you need to know about the central nervous system and the peripheral nervous system. Each of these topics is further split into subsections.
The Central Nervous System
The central nervous system consists of the brain and the spinal cord. Its two main roles are the control of behaviour and psychological processes, and the regulation of physiological processes in the body to maintain life.
The brain divides into four main components which are the cerebrum, cerebellum, diencephalon and brain stem.
The largest component is the cerebrum, which is split into two halves called cerebral hemispheres. These further subdivide into four lobes: the frontal, parietal, temporal and occipital lobes.
Each lobe has different functions: the frontal lobe is involved in speech, thought and learning; the temporal lobe in hearing and memory; the parietal lobe processes sensory information; and the occipital lobe processes visual information.
The cerebellum is responsible for controlling motor skills, balance, coordination and muscle movement. The diencephalon splits into two substructures: the thalamus and the hypothalamus. The thalamus acts as a relay station for nerve impulses coming from the senses, routing them to the appropriate parts of the brain where they can be processed. The hypothalamus regulates body temperature, hunger and thirst. It also acts as the link between the nervous system and the endocrine system, triggering the release of hormones from the pituitary gland.
The brain stem regulates automatic functions which are essential for life such as breathing, heartbeat and swallowing. Motor and sensory neurons pass through the brain stem which allows impulses to pass between the brain and spinal cord.
The Spinal Cord
The spinal cord relays information between the brain and the body. This allows the brain to monitor and regulate bodily processes such as digestion, breathing and the coordination of movements.
The spinal cord connects to different parts of the body through spinal nerves, which connect to specific muscle groups and glands. Spinal nerves from the thoracic region carry messages to the chest and abdomen, and circuits of nerve cells enable simple involuntary movements without the brain's direction.
The Peripheral Nervous System
The peripheral nervous system is made up of all the nerves outside the central nervous system and relays nerve impulses back and forth between the body and the central nervous system, which comprises the brain and spinal cord.
The peripheral nervous system is subdivided into the somatic nervous system (SNS) and the autonomic nervous system (ANS).
The Somatic Nervous System
The somatic nervous system's main role is transmitting and receiving information from the senses, such as visual information from the eyes and auditory information from the ears. It also controls the movement and reaction of muscles, including reflex reactions that occur without the involvement of the brain.
The somatic nervous system comprises 12 pairs of cranial nerves, which emerge from the underside of the brain, and 31 pairs of spinal nerves emerging from the spinal cord. These nerves contain both sensory and motor neurons: sensory neurons relay messages to the central nervous system, while motor neurons relay information from the central nervous system to various parts of the body.
The Autonomic Nervous System
The role of the autonomic nervous system is to transmit information to and from internal organs to sustain life.
Involuntary actions such as the beating of the heart, the digestion of food by the intestines, body temperature regulation, heart rate and blood pressure are all regulated by the autonomic nervous system without conscious awareness or input.
To control homeostasis (maintain stable conditions necessary for survival), the autonomic nervous system has two branches which are the sympathetic nervous system and parasympathetic nervous system both of which have opposite effects to one another.
The sympathetic division is primarily involved in responses that help us deal with emergencies (think of the fight or flight response). Using the neurotransmitter noradrenaline to stimulate the body, it increases heart rate and blood pressure and lowers digestive activity.
The parasympathetic nervous system uses acetylcholine, which inhibits: it lowers heart rate and blood pressure and increases digestive activity. If we think of the sympathetic nervous system as being used for emergencies, the parasympathetic comes into play once the emergency has passed to restore normal functioning.
Neurons and Synaptic Transmission
The A-level Psychology Specification states you need to know the following for Neurons and Synaptic Transmission:
- The structure and function of neurons
- The process of synaptic transmission
- Excitatory & Inhibitory Neurotransmitters
The Structure and Function of Neurons
There are three types of neurons which are sensory neurons, relay neurons and motor neurons.
These are specialised cells consisting of a cell body, dendrites and an axon and carry neural information throughout the body.
Dendrites, located at one end of the neuron, receive signals from other neurons or sensory receptors. They connect to the cell body, the control centre of the neuron, which sends out impulses that travel along the axon and terminate at the axon terminal. The myelin sheath is an insulating layer that forms around the axon within the brain and spinal cord. It allows nerve impulses to travel more rapidly along the axon; if the myelin sheath becomes damaged, impulses slow down. Neurons vary in length from a few millimetres up to one metre.
Sensory receptors for vision, taste and touch transmit nerve impulses to the spinal cord and brain. Sensory receptors are located in the eyes, ears and tongue as well as the skin, and convert information into neural impulses which are translated into sensations (heat, pain, visual input, sound input) once they reach the brain. Some neural pathways terminate at the spinal cord, so not all information reaches the brain. This allows reflex actions to occur quickly without the delay of impulses needing to reach the brain, which is particularly helpful for self-preservation, such as moving a person's hand away from a hot surface.
Relay neurons are located within the brain and spinal cord and allow motor neurons and sensory neurons to communicate, bridging sensory input and motor output.
Motor neurons are located in the central nervous system and project their axons, directly or indirectly, to form synapses with muscles and control muscle contractions. When stimulated, motor neurons release neurotransmitters which bind to muscle receptors and trigger movement. As the axon of a motor neuron fires, the muscles it synapses with contract, with the strength of contraction dependent on the rate at which the axons of the controlling motor neurons fire.
The Process of Synaptic Transmission
Synaptic transmission is the process whereby messages from one neuron are passed to another neuron despite the two not being physically connected.
Between neurons is the “synapse”, a gap that is bridged using neurotransmitters. As a nerve impulse reaches the end of an axon and arrives at the axon terminal, it stimulates the release of neurotransmitter molecules into the synapse. These then diffuse across the synapse to the postsynaptic membrane of the neighbouring neuron, which has synaptic receptors that recognise and are activated by that particular neurotransmitter.
There are only a limited number of neurotransmitters, such as dopamine, serotonin, noradrenaline and acetylcholine, and their associated receptors. As a result, the receptor molecules produce either excitatory or inhibitory effects on the postsynaptic neuron.
Synaptic transmission takes only a fraction of a second, with the effects of neurotransmitters terminated by a process called “re-uptake”, where the neurotransmitter is taken back up by the pre-synaptic neuron and made available again later. The length of time before the neurotransmitter is taken back up determines how long its effects last: quicker re-uptake results in shorter effects and slower re-uptake in longer effects on the postsynaptic neuron.
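As a rough illustration of why re-uptake speed matters, the sketch below treats the neurotransmitter in the synapse as being removed a fraction at a time by re-uptake. The numbers, rates and threshold are invented purely to show the principle and are not physiological values:

```python
def time_active(amount=100.0, reuptake_rate=0.5, threshold=1.0):
    """Toy model: each time step, a fraction of the neurotransmitter in the
    synapse is taken back up by the pre-synaptic neuron. Returns how many
    steps the amount stays at or above the threshold needed to affect the
    post-synaptic neuron. All numbers are illustrative, not physiological."""
    steps = 0
    while amount >= threshold:
        amount *= (1.0 - reuptake_rate)  # re-uptake removes a fraction each step
        steps += 1
    return steps

# Faster re-uptake -> the effect on the post-synaptic neuron is shorter-lived
print(time_active(reuptake_rate=0.8))  # few steps
print(time_active(reuptake_rate=0.2))  # many steps
```

The only point of the model is the comparison: with everything else equal, a higher re-uptake rate always produces a shorter period of activity.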
Excitatory and Inhibitory Neurotransmitters
Neurotransmitters are classified as having either an excitatory or an inhibitory effect, depending on the “action potential” of the post-synaptic neuron and the type of message received at the post-synaptic receptors. Only certain neurotransmitters can activate a given channel: when the right neurotransmitter and receptor meet, a specific ion channel within the membrane opens, allowing ions to flow through the membrane along specific pathways. It is this flow of ions which produces a “potential” in the dendrites that is either excitatory or inhibitory.
Excitatory neurotransmitters include dopamine, noradrenaline and acetylcholine; they make a nerve impulse or action potential more likely to be triggered and cause the post-synaptic neuron to fire.
Inhibitory neurotransmitters such as GABA (gamma aminobutyric acid) and serotonin make it less likely that a neuron will fire and stop nerve impulses at the post-synaptic neuron. Inhibitory neurotransmitters generally have a calming effect on the body and mind while excitatory neurotransmitters have been linked to the opposite effect.
The likelihood of a cell firing is determined by adding together the excitatory post-synaptic potentials (EPSPs) and the inhibitory post-synaptic potentials (IPSPs), with the net result of this calculation known as summation. If excitatory synapses are more active, the cell fires at a higher rate; if inhibitory synapses dominate, the cell fires at a lower rate or not at all.
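The summation rule described above can be sketched as a toy model (the threshold and the input values are invented for illustration; they are not real membrane potentials):

```python
def summate(epsps, ipsps, threshold=5.0):
    """Toy model of synaptic summation: the cell fires only if the combined
    excitatory and inhibitory inputs reach the threshold. Values are
    arbitrary illustrative units, not physiological measurements."""
    net = sum(epsps) - sum(ipsps)  # excitatory inputs add, inhibitory subtract
    return net >= threshold

# Mostly excitatory input -> the neuron fires
print(summate(epsps=[3.0, 2.5, 1.0], ipsps=[1.0]))   # True

# Stronger inhibitory input -> the neuron stays silent
print(summate(epsps=[3.0, 2.5], ipsps=[2.0, 2.0]))   # False
```

The key idea the model captures is that firing depends on the net balance of inputs rather than on any single synapse.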
The Function of the Endocrine System
This topic focuses on the glands of the endocrine system and the hormones they secrete.
Working closely with the nervous system, the endocrine system is a network of glands consisting of the pituitary gland, the thyroid, the pineal gland, the adrenal glands (made up of the adrenal cortex and adrenal medulla) and the reproductive glands, which are the ovaries in women and the testes in men. These glands are located throughout the body and regulate its physiological processes through the release of hormones, which act as chemical messengers.
Hormones travel within the bloodstream and can influence behaviour, mood, sleep, metabolism, reproduction and even the fight or flight stress response. The pituitary gland is considered the master gland and is controlled directly by the hypothalamus which monitors and regulates physiological processes. The hypothalamus instructs the pituitary gland to release hormones which in turn affects other glands into releasing their hormones.
All the different glands produce different sets of hormones which affect the body differently.
The Pituitary Gland and Hormones
The pituitary gland is connected to and controlled by the hypothalamus within the cranial cavity, and is considered to be the master gland as the hormones it releases affect the release of hormones in other endocrine glands.
The hypothalamus receives information on the body's functioning and controls the release of hormones in other glands by instructing the pituitary gland to release hormones which cause other glands to release their hormones. Much like a thermostat, the hypothalamus recognises high levels of hormones and can stop the pituitary gland from releasing its hormones in a process known as negative feedback, which then stops the effect it has on other glands and physiological processes.
The pituitary gland is divided into two lobes, referred to as the posterior lobe and anterior lobe, and both release different hormones which target different parts of the body. The posterior pituitary releases oxytocin and vasopressin (also known as ADH or antidiuretic hormone). Oxytocin stimulates the contraction of the uterus during childbirth and facilitates bonding between a child and mother, while vasopressin regulates water balance within the body and its reabsorption by the kidneys.
The anterior pituitary targets the adrenal glands and releases ACTH in response to stress which causes the adrenal glands to release cortisol. The anterior pituitary also releases luteinising hormones (LH) and follicle-stimulating hormones (FSH) which are vital in controlling reproductive functioning and sexual characteristics.
Within females these hormones stimulate the ovaries to produce oestrogen and progesterone, while in men they cause the testes to produce testosterone and sperm.
The Adrenal Glands And Hormones
There are two adrenal glands which sit above the kidneys with each made up of two parts. The outer part is known as the adrenal cortex while the inner part is called the adrenal medulla.
Both have different functions and produce different hormones. A major distinction between them is that the adrenal cortex releases glucocorticoid hormones such as cortisone, cortisol and corticosterone, as well as mineralocorticoids, which help maintain life, while the adrenal medulla releases adrenaline and noradrenaline as part of the fight or flight response.
Glucocorticoids released from the adrenal cortex help the release of stored glucose and fats for energy as well as suppress the immune system and inflammatory response. Mineralocorticoids help regulate the water balance of the body through water and sodium reabsorption within the kidneys.
Hormones such as adrenaline and noradrenaline which are released from the adrenal medulla increase heart rate and blood flow to the muscles and brain as well as release stored glucose from fat during the fight or flight response to deal with stressful situations.
The Fight or Flight Response
What is The Fight or Flight Response?
The fight or flight response is an evolved survival mechanism in response to perceived stressful or threatening situations which causes the body to react in a particular way.
When faced with a perceived stressful situation the heart starts to beat faster, breathing is more rapid and muscles begin to tense allowing animals to react quickly to dangerous situations. It is called the fight or flight response as these physiological changes allow the animal or person to either stand and fight the perceived threat or run away to safety.
The Amygdala and Hypothalamus
In the modern world this response is routinely activated in humans during times of stress where neither fighting nor running away would be appropriate. The fight or flight response occurs in a series of steps: first the amygdala recognises sensory signals, such as what we see, hear, smell or feel, with emotions commonly associated with the fight or flight response, and sends a distress signal to the hypothalamus.
The hypothalamus recognises the threat and sends a message through the sympathetic nervous system (SNS) to the rest of the body. When the stressor is acute and sudden, the sympathetic nervous system sends a signal to the adrenal medulla which begins to release hormones in the bloodstream such as adrenaline and noradrenaline.
The Role of Adrenaline
Adrenaline circulates through the body causing physiological changes such as a faster heart rate, increased breathing, muscle tension, pupil dilation, the production of sweat to regulate temperature, and reduced functioning of the digestive and immune systems to conserve energy for running if required. Adrenaline also causes stored glucose to be released to supply energy to deal with the stressful situation.
The parasympathetic nervous system is a branch of the autonomic nervous system and dampens down the stress response once the threat has passed. It does this by reducing blood pressure, heart rate and resuming digestion once again.
If the stressor is ongoing and chronic, as the initial surge of adrenaline subsides the hypothalamus activates a stress response system known as the HPA axis which consists of the hypothalamus, pituitary gland and adrenal glands. In response to the continued stressor the hypothalamus releases corticotrophin-releasing hormone (CRH) into the bloodstream. Once detected by the pituitary gland, CRH causes it to release adrenocorticotrophic hormone (ACTH) which travels through the bloodstream and reaches the adrenal glands.
ACTH stimulates the adrenal cortex into releasing hormones such as cortisol, which is important for the fight or flight response, providing quick bursts of energy and lower sensitivity to pain; however, it also impairs cognitive functioning and suppresses the immune system.
The hypothalamus and pituitary gland have special receptors which monitor the level of cortisol. If this rises to a level above normal then they reduce ACTH and CRH to lower cortisol levels.
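The cortisol feedback loop described above can be sketched as a toy simulation. The hormone names are real, but the target level, clearance factor and update rule below are invented purely to show the negative-feedback principle:

```python
def hpa_step(cortisol, target=10.0, clearance=0.95):
    """One illustrative step of the HPA axis negative feedback loop.
    When cortisol is above the target level, the hypothalamus and pituitary
    cut back CRH/ACTH, so little new cortisol is released; below target,
    ACTH release drives cortisol back up. All numbers are invented."""
    acth_signal = max(0.0, target - cortisol)       # high cortisol suppresses CRH/ACTH
    cortisol = (cortisol + acth_signal) * clearance  # release, then steady clearance
    return cortisol

level = 25.0  # start elevated, as after a stress response
for _ in range(50):
    level = hpa_step(level)
print(round(level, 2))  # settles just below the target level
```

The point of the sketch is the thermostat-like behaviour: however high cortisol starts, suppressing CRH and ACTH when levels are high pulls it back down towards a stable set point.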
Response to Chronic Stressors
If the brain continues to perceive something as threatening for a prolonged period of time, a second system kicks into action.
As the initial surge of adrenaline subsides, the hypothalamus activates a stress response system known as the HPA axis.
This consists of the Hypothalamus, Pituitary gland and Adrenal glands.
- The Hypothalamus: The HPA axis relies on a series of hormonal signals to keep the sympathetic nervous system working. When faced with a perceived threat for a prolonged period, the hypothalamus releases a chemical messenger, corticotrophin-releasing hormone (CRH), which is released into the bloodstream.
- The Pituitary Gland: Once corticotrophin-releasing hormone reaches the pituitary gland, it causes the pituitary to produce and release adrenocorticotrophic hormone (ACTH). ACTH is transported from the pituitary in the bloodstream to its target site in the adrenal glands.
- The Adrenal Glands: Adrenocorticotrophic hormone stimulates the adrenal cortex to release various stress-related hormones, including cortisol. Cortisol is responsible for several effects in the body that are important in the fight or flight response. While some of these effects are positive (a quick burst of energy and lower sensitivity to pain), others are negative (e.g. impaired cognitive performance and a lowered immune system).
The Fight or Flight Response Evaluation
- A criticism of the fight or flight response is that it suffers from gender bias, as it describes only male behaviour. Taylor et al (2000) found that females dealt with stress using a “tend and befriend” response rather than fight or flight: they engaged in nurturing behaviour and formed protective alliances with other women to protect themselves and their children. The fight or flight response may therefore lack generalisability and external validity, as it may be an evolutionary response that applies mainly to men, who were historically the protective partner, while women evolved the tend and befriend response as primary caregivers needing to protect their children.
- Physically, women are less equipped than males to deal with threats, while fleeing in the face of danger would put vulnerable children at risk, which could explain the difference in stress response. Animal studies involving rats have found that females may have a physiological response, involving the release of oxytocin, which inhibits flight during times of stress. Oxytocin is a hormone which promotes relaxation and decreases stress, effects opposed to the male fight or flight response, suggesting women may have a completely different system for coping with stress. A limitation, however, is that rats differ biologically from humans, so the findings may have limited generalisability when explaining the fight or flight response in females.
- Lee and Harley (2012) found evidence to suggest there is a genetic basis for the fight or flight stress response being found mostly in males. For example, the SRY gene, found only on the male Y chromosome, is believed to be responsible for promoting aggression. This has been linked to the fight or flight stress response, and one explanation is that it primes males to respond to stress by releasing adrenaline and increasing blood flow. The lack of a Y chromosome, and thus the SRY gene, in females, coupled with oestrogen and oxytocin, may stop this response from occurring.
- Although during the environment of evolutionary adaptiveness (EEA) the fight or flight response provided a survival mechanism for dealing with threats, this stress response is maladaptive in the modern world, which rarely presents life-threatening situations that require fighting or running away. In the modern world the stress response is routinely activated by daily problems, which can cause serious health problems. For example, high blood pressure caused by routine activation of the sympathetic nervous system can damage blood vessels and ultimately lead to heart disease, while high levels of cortisol due to routine activation of the stress response can suppress the immune system, leaving people vulnerable to infections. Therefore, although adaptive in the past, the response has become almost maladaptive and detrimental to health in the modern world.
- Gray (1988) suggested that prior to the fight or flight response, people engage in actively avoiding confrontation, or a “freeze” response. This freeze response helps people stop, look and listen, becoming hypervigilant to signs of danger and more aware of their surroundings so they can try to find suitable solutions to the threat before them. This suggests there are other stress responses beyond only fight or flight.
- The fight or flight response being primarily a male trait could be argued to be due to socialisation and the expectation that male aggression is normal, rather than a genetic basis such as the SRY gene. Men are socialised through media influences and social learning from role models to handle situations aggressively, while females are encouraged to rely on the support of others. Therefore, the fact that men are commonly linked to aggression in the fight or flight response may actually be due to socialisation rather than the response itself. There is support for this perspective from Von Dawans et al (2012), who highlight that under stress men do not always resort to fighting or running away, nor do women always display the responses typically attributed to their gender. Their study found that acute stress during times of crisis, such as the 9/11 terrorist attacks, can even lead to cooperative behaviour among groups. This could be explained by humans being social creatures that are protective of social relationships, which have helped our species thrive into modern society.
- Assessing to what extent biology affects behaviour such as the fight or flight response is difficult, as it can be argued that biology merely creates the potential for behaviour rather than being its cause. Cause and effect between biology and behaviour cannot be established, as much of the research in this area is correlational and only relationships can be investigated. The argument that biology determines behaviour is also reductionist and deterministic: reductionist because it reduces human behaviour and cognitive processes to genetics or hormones, which is simplistic, and deterministic because it ignores the free will people have to make their own choices and override any biological urges should they wish to.
Localisation of Function In The Brain
Localisation of function in the brain examines the idea that different parts of the brain serve specific functions such as language, memory or even hearing.
We will look at the visual and auditory centres, motor and somatosensory areas and language centres. These are likely to be asked as small questions in the exam while a full essay question in the exam will likely focus on “localisation of function” which incorporates all the below theory.
The Visual Centres
The brain has two visual cortices, one in each hemisphere.
The primary visual cortex is located in the occipital lobe at the back of the brain and is thought to be the main visual centre, as individuals with damage to this area report a loss of vision.
Visual processing begins in the retina at the back of the eye, where light strikes the photoreceptors; nerve impulses from the retina are then transmitted to the brain through the optic nerve. Some nerve impulses travel to parts of the brain that regulate the sleep-wake cycle and the release of melatonin, but most impulses terminate at the thalamus.
The thalamus acts as a relay station which passes information to the visual cortex which spans both hemispheres. The right hemisphere receives input from the visual field on the left side while the left hemisphere receives information from the right visual field.
The Auditory Centres
The auditory centres are concerned with hearing. The brain has two auditory cortices located mostly in the temporal lobes, one in each hemisphere of the brain.
Sound waves which are converted into nerve impulses begin in the cochlea, in the inner ear. These travel via the auditory nerve to the auditory cortex located in the brain.
During the journey the impulses first stop at the brain stem which is responsible for basic decoding such as duration or the intensity of the sound. The second stop for impulses is the thalamus which acts as a relay station and conducts further processing of sound. Finally the impulse will reach the auditory cortex and with most of it decoded at this point, the sound is recognised and an appropriate response given.
The Motor Cortex
The motor cortex is located in both hemispheres of the brain in the frontal lobe. It is situated along a region known as the pre-central gyrus and is responsible for conscious, voluntary movements.
Each motor cortex in the hemisphere controls the movements and muscles on the opposing side; so the motor cortex on the right side controls the movements of the left side of the body and vice versa. It does this by sending messages to the muscles via the brain stem and spinal cord.
The Somatosensory Cortex
The somatosensory cortex is responsible for sensory and touch-based experiences and is located in the parietal lobe, in a region known as the post-central gyrus.
Both hemispheres of the brain have a somatosensory cortex, each receiving information from the opposing side of the body. The post-central gyrus processes touch-based information and produces sensations of touch using sensory input from the skin. It also produces sensations such as pain, pressure or temperature, which it localises to specific regions of the body.
Broca’s Area
Broca’s area is located in the frontal lobe of the dominant hemisphere (which is usually the left) and is believed to be primarily responsible for speech and language production.
Subsequent studies of damage to this area of the brain have found that some patients can still pronounce nouns and verbs but not prepositions and conjunctions. Other research has found Broca’s area to be involved in cognitive tasks and the calculation of maths problems (Fedorenko et al 2012).
Wernicke’s Area
Wernicke’s area was discovered by Carl Wernicke and is believed to be involved in understanding language.
This area is located in the posterior section of the temporal lobe and patients with damage to this area are able to speak but not understand language. Patients also tend to suffer with anomia which is when they struggle to find the words they need when talking.
Evaluating Localisation of Function
- The Holistic Theory of brain function undermines the idea that localisation of function occurs. Lashley (1950) conducted an animal study involving rats and found that there were no specific areas involved in memory; instead, memory was stored all over the brain. This undermines the idea that specific parts of the brain perform specific functions. Animal research using rats provides a credible way of understanding human behaviour, as rats share some mammalian genetics with humans which may generalise to us, and Lashley was able to conduct tests which would be unethical to conduct on humans and then draw comparisons. However, a weakness of his holistic theory is that there are still significant biological differences between rats and humans, and just because localisation is diffuse across the brain of a rat, it does not necessarily mean the same applies to humans.
- Other research does appear to support localisation of function: if function were spread throughout the brain, there would not be specific deficits, such as loss of speech, in people who suffer damage to specific areas. However, plasticity research challenges strict localisation, as individuals with brain damage have shown the ability to regain function using other parts of the brain. A famous case involved an Italian boy, referred to as EB, who had most of his left hemisphere removed when he was just under three years old. He initially lost his language ability, but through rehabilitation he showed dramatic improvements, to the point that there was no discernible difference from a typical individual. His right hemisphere appeared to compensate for the loss of his left hemisphere's functioning, although some deficits remained, suggesting other parts of the brain can compensate for one another and function does not necessarily have to be localised to fixed parts of the brain.
- Research from other studies undermines localisation of function as an explanation and suggests that how different brain areas communicate may be more important than localisation itself and what each brain area controls. Joseph Dejerine (1892) reported a case in which damage to the connection between the visual cortex and Wernicke’s area resulted in the patient losing the ability to read. This suggests that behaviours such as reading, movement and language can be impaired not only through damage to specific parts of the brain but also through damage to the connections that join different structures together. Therefore communication may be more important than localisation itself.
- Individual differences are also apparent between language areas, suggesting that function is not universally localised in the same part of the brain for everyone in the same way. Bavelier et al (1997) studied participants' patterns of brain activation when engaging in various language activities and found large variation between individuals. Activity was found in the right temporal lobe as well as the left frontal, temporal and occipital lobes, which supports the case for individual differences in localisation of function. Gender differences have also been found in the size of brain areas associated with language, with women having proportionally larger language areas, namely Broca's and Wernicke's areas, when compared to men.
Lateralisation and Split-Brain Research
For lateralisation and split-brain research, the A-level Psychology specification states you need to know the following topics:
- Hemispheric lateralisation
- Split-brain research
Hemispheric lateralisation refers to the fact that the two halves (hemispheres) of the brain are not identical, with each hemisphere having functions specialised to it.
For example research has found that functions such as language and speech have been found to be localised in the left hemisphere while visual-motor tasks and facial recognition have been linked to the right hemisphere.
Split-brain research has shown that the brain is not simply split into specific sections that deal with specific tasks; instead, the way different regions of the brain are connected is just as important as the functions of the regions themselves.
Paul Broca was able to verify this difference between hemispheres: he found that damage to the left-hemisphere area where the language and speech centres reside caused problems with language and speech, whereas the same damage to the corresponding area of the right hemisphere did not. The two hemispheres are able to communicate as they are connected by nerve fibres known as the corpus callosum, which allows information received by one hemisphere to be shared with the other.
Hemispheric Lateralisation Evaluation
- Brain lateralisation is believed to have the advantage of increasing neural processing capacity, as it leaves one hemisphere free to engage in another task; for example the left hemisphere could work on language and speech while the right hemisphere worked on visual-spatial tasks at the same time. However there is actually little empirical evidence to show that lateralisation confers an advantage for the functioning of the brain.
- Supporting evidence for brain lateralisation conferring an advantage comes from a study by Rogers et al (2004) on chickens. They found brain lateralisation was associated with an enhanced ability to perform two tasks at the same time, such as finding food while remaining alert for predators. This animal study lends support to the idea that brain lateralisation provides some efficiency in conducting cognitive tasks and aids survival; however, findings from chickens, whose biology differs vastly from humans’, may not generalise to humans.
- Lateralisation of the brain has been found to be associated with strengths in some areas as well as problems in others. Mathematicians and architects have been found to be more likely to be left-handed and to have superior right hemispheres, but also to be more prone to immune system problems. Tonnessen et al (1993) found a relationship between being left-handed and having an immune system problem, suggesting the genetic process which leads to lateralisation may also affect the immune system. This is supported by Morfit and Weekes (2001), who found higher rates of immune disorders in left-handed people and their immediate families compared with right-handed people.
- Other research suggests lateralisation of function actually changes with age and does not remain fixed throughout an individual’s life. Localised patterns observed in younger people appear to become bilateral across both hemispheres as people grow older. Szaflarski (2006) found that language localisation to the left hemisphere decreased with each decade of life after the age of 25. A possible explanation is that we begin to rely on the right hemisphere for extra processing as a way to compensate for age-related decline in functioning.
Split-brain research was pioneered by Roger Sperry and Michael Gazzaniga (1967).
They were able to study hemispheric lateralisation and the capabilities of each hemisphere by sending visual information to just one hemisphere. Their research found that when the corpus callosum was cut, information presented to one hemisphere could only be processed by that hemisphere, which had no way of sharing it with the other.
For example, when a patient was shown a picture of a dog in the right visual field, this information was received by the left hemisphere and the patient was able to state what they saw. However, when the patient was shown something only visible in their left visual field, this information was transferred to the right hemisphere, which has no language centre, so the patient was unable to express what they could see without the aid of the language centres in the left hemisphere.
Split-brain research has discovered a number of differences between the two hemispheres. For example, we have learnt that the left hemisphere is responsible for speech and language while the right hemisphere specialises in visual-spatial processing and facial recognition.
One thing split-brain research has not shown is that the brain is organised into specific regions responsible for specific tasks. Instead what appears to be the case is that the connectivity between the different regions is as important as the operation of the different parts themselves.
Split-Brain Research Evaluation
Split-brain research has been undermined in more recent years as case studies have emerged of people able to perform tasks which should not be possible according to the earlier findings.
For example, split-brain research had suggested that the right hemisphere was unable to handle any language and that this was done primarily by the left hemisphere. However, case studies have shown this is not necessarily true and language is not restricted only to the left hemisphere. A patient known as J.W. was found to show language ability from the right hemisphere and could speak about information presented to either side of the brain (Turk et al 2002).
Split-brain procedures are rarely carried out, so finding patients who have had the procedure (cutting of the corpus callosum) is difficult. This has made it hard to find sufficient numbers of participants for split-brain research and to draw useful conclusions.
Andrewes (2001) argued that many studies are presented with as few as three participants, with some having only a single participant. Conclusions have therefore been drawn from individuals who had confounding physical disorders which made the split-brain procedure necessary, or who had less complete sectioning of the two hemispheres than was originally believed. Andrewes claimed that these 'rogue' cases have only been identified when further research has failed to replicate the findings. The point here is that findings from such small samples and case studies may not generalise to the wider population and may lack internal validity themselves.
Brain Plasticity and Functional Recovery
This section focuses on brain plasticity and functional recovery with the A-level psychology specification stating you need to know about the following:
- Brain plasticity
- Functional recovery after trauma
What is Brain Plasticity?
Neuroplasticity, also known as brain plasticity, refers to the brain’s ability to adapt and change through the creation of new neural pathways and the alteration of existing ones in response to new experiences and learning. Research has found that neuronal organisation can change with experience, and that many types of experience, from life events to video games to even meditation, can cause this.
New experiences cause nerve pathways that are frequently used to become stronger in their connections, while neurons which are rarely used become weaker and eventually die out. The brain does this to keep adapting to an ever-changing environment; however, age can also explain decline in cognitive functioning within the brain. Life experiences have been found to reverse this effect through the teaching of new skills. Boyke et al (2008) found that when 60-year-olds were taught juggling, this resulted in an increase in grey matter within the visual cortex, highlighting how new life experiences can result in plasticity.
Research by Kuhn et al (2014) found that video games increased grey matter in various parts of the brain, including the cortex, hippocampus and cerebellum, compared with a control group. This was believed to be because video games involve complex cognitive and motor actions which create new synaptic connections in the brain areas responsible for spatial awareness and navigation, planning, working memory and motor performance, all important skills when playing various video games.
Meditation has also been found to affect neural activity and pathways. Davidson et al (2004) studied Tibetan monks meditating and found they had higher levels of gamma wave activity than a control group. Gamma waves are associated with increased coordination of neuron activity, and the conclusion drawn was that meditation could affect the workings of the brain in both the short and the long term, as the monks had naturally higher gamma wave activity than the control group even before they began to meditate.
Brain Plasticity Evaluation
- Brain plasticity and the brain’s ability to change with experience is supported not only through case studies of individuals but through animal studies. Kempermann et al (1998) studied rats to see whether enriched environments could alter neurons within the brain. When rats housed in complex environments were compared with a control group of rats in normal cages, those in the complex environment showed an increase in neurons within the hippocampus, the area associated with the formation of new memories and navigation. This supports the view that life experiences can cause brain plasticity.
- Research support also comes from human studies demonstrating brain plasticity after exposure to enriched environments. Maguire et al (2000) studied London taxi drivers to test whether their extensive experience of spatial navigation could result in changes in the brain. MRI scans found that the posterior hippocampi of the taxi drivers were significantly larger than those of a control group, and their size positively correlated with the amount of time spent as a taxi driver. This suggests life experience, given sufficient time and practice, can cause plasticity within the brain.
Functional Recovery After Trauma
Case studies of individuals suffering from strokes during the 1960s found some individuals were able to recover previous functioning to some degree after brain cells were damaged.
They found the brain was able to “re-wire” itself and create alternative neural pathways around damaged areas, with other parts of the brain taking over functions which were lost. The brain is able to do this because of its plasticity, through two mechanisms known as neuronal unmasking and stem cells.
Neuronal unmasking was first identified by Wall (1977) through what he called “dormant synapses” within the brain. These synaptic connections exist but are blocked and ineffective because the rate of neural input is too low to activate them. Damage to other pathways and structures causes these synapses to become “unmasked”, opening new routes for neural input and activating connections not normally used. This lateral spread of activation leads to the development of new structures which take over the functions of damaged areas.
Stem cells are unspecialised cells which have the potential to develop into various types of cells with different functions, including nerve cells. One possible treatment may be to inject stem cells into the brain to replace dead or dying cells, for example in people with degenerative disorders. Alternatively, stem cells can aid the regeneration of damaged cells by secreting growth factors, or help link an undamaged brain site with a damaged area by forming a neural network of new cells.
Functional Recovery after Trauma Evaluation
- Evidence for stem cells aiding recovery from brain damage comes from Tajiri et al (2013). Researchers assigned rats to one of two groups: one group had brain injuries while the other was an uninjured control group. Only the injured rats received stem cell implants into the injured areas of the brain, and after three months they showed clear development of neuron-like cells at the injury sites. The control group did not display this type of development, supporting the use of stem cells in aiding recovery after trauma. An issue with these findings, however, is that the research was based on rats, which have drastically different biology from humans, so the findings may lack external validity when generalised to humans.
- Educational attainment has also been linked to brain recovery, with patients with a college education found to be seven times more likely than those who did not finish high school to be disability-free a year after a moderate to severe brain injury (Schneider et al 2014). A retrospective study of 769 patients found 39.2% of those with 16 or more years of education had recovered after one year. Over 30% of those with 12-15 years of education had also shown improvement after one year, while only 9.7% of those with fewer than 12 years improved. Researchers concluded that cognitive reserve, which is associated with greater educational attainment, was linked to neural adaptation during recovery from brain injury.
- Research evidence suggests age differences play a role in functional recovery, with older age associated with reduced functional plasticity. Abilities which were thought to be fixed in childhood have been shown to be modifiable with enough intense training. Despite this, Elbert et al (2001) concluded that neural reorganisation and functional plasticity were still greater in children than in adults.
Ways of Studying the Brain
Ways of studying the brain for AQA A-level psychology focuses on the following scanning techniques:
- Functional magnetic resonance imaging (fMRI)
- Electroencephalograms (EEGs)
- Event-related potentials (ERPs)
- Post-mortem examinations
fMRI Scans - Functional Magnetic Resonance Imaging
fMRI measures changes in blood flow and oxygenation in particular areas of the brain while individuals engage in various tasks.
When a brain area becomes more active during a task, its demand for oxygen increases, and this is met through increased blood flow and red blood cells; fMRI detects this change. Tasks may involve alternating between activity and rest so that brain areas showing a matching pattern of change can be mapped to a particular stimulus.
For example, a participant might look at something for a period and then close their eyes for a period; the changes in the signal establish which areas are activated by a particular stimulus.
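The alternating task/rest design described above can be analysed with a simple subtraction: average the signal recorded during task periods and subtract the average recorded during rest. A minimal sketch with entirely invented numbers (real fMRI analysis uses dedicated software and proper statistics):

```python
# Sketch of a block-design subtraction analysis for one voxel.
# All numbers are invented for illustration only.

def block_contrast(signal, block_labels):
    """Average signal during 'task' blocks minus average during 'rest' blocks."""
    task = [s for s, b in zip(signal, block_labels) if b == "task"]
    rest = [s for s, b in zip(signal, block_labels) if b == "rest"]
    return sum(task) / len(task) - sum(rest) / len(rest)

# One voxel's signal over 8 time points: eyes open (task) vs eyes closed (rest)
signal = [102, 104, 98, 99, 103, 105, 97, 98]
labels = ["task", "task", "rest", "rest", "task", "task", "rest", "rest"]

contrast = block_contrast(signal, labels)
print(contrast)  # 5.5 -> more activity during the task blocks
```

A positive contrast suggests the voxel is more active during the task; repeating this across all voxels maps which areas respond to the stimulus.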
The electroencephalogram (EEG) was developed by Hans Berger (1929) and measures the general state of the brain. It does this by recording the brain’s electrical activity through electrodes placed on the scalp, which detect the combined electrical activity of millions of neurons.
The EEG is useful for detecting brain disorders such as epilepsy, or conditions such as Alzheimer’s disease, which influence brain activity. Four basic EEG patterns are alpha waves, beta waves, delta waves and theta waves.
The EEG is able to pick up the amplitude (size or intensity of electrical activity) as well as the frequency (speed or rapidity of electrical activity). Two distinctive states recognised using the EEG are synchronised patterns, where a recognisable waveform of a particular amplitude and frequency can be identified, and desynchronised patterns, where no recognisable waveform is visible although the frequency may still be determined.
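The four wave types are conventionally separated by frequency. As an illustration (band boundaries vary slightly between sources, so treat the cut-offs below as typical rather than definitive):

```python
def eeg_band(freq_hz):
    """Classify a dominant EEG frequency into a conventional band.
    Boundaries vary slightly between textbooks; these are typical values."""
    if freq_hz < 4:
        return "delta"   # deep (slow-wave) sleep
    elif freq_hz < 8:
        return "theta"   # drowsiness, light sleep
    elif freq_hz < 13:
        return "alpha"   # relaxed wakefulness, eyes closed
    else:
        return "beta"    # alert, active thinking

print(eeg_band(2))   # delta
print(eeg_band(10))  # alpha
```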
The electroencephalogram has commonly been used in sleep studies and to identify the different stages of sleep.
Event-Related Potentials (ERPs)
Event-related potentials use electrodes to measure very small voltage changes within the brain when patients are presented with a stimulus such as a picture or sound which requires cognitive processing.
As event-related potentials (ERPs) are difficult to identify from all the other background activity within the brain, to establish a direct response to the stimulus, it is presented numerous times which are then averaged together. Regular specific electrical responses to the stimulus gradually add together while background electrical “noise” is cancelled out allowing the event-related potential to emerge.
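The logic of this averaging can be shown with a toy simulation: the stimulus-locked response is identical on every trial while the background "noise" is random, so averaging many trials shrinks the noise and leaves the ERP. The waveform and noise level below are invented purely for illustration:

```python
import random

random.seed(42)

# A fixed ERP waveform buried in random background EEG "noise"
erp = [0.0, 0.5, 2.0, 1.0, 0.2]  # the true stimulus-locked response

def record_trial():
    """One trial: the ERP plus random noise far larger than the signal."""
    return [v + random.gauss(0, 5.0) for v in erp]

def average_trials(n):
    """Average n trials point by point; random noise tends to cancel out."""
    trials = [record_trial() for _ in range(n)]
    return [sum(samples) / n for samples in zip(*trials)]

# With many trials the noise cancels and the ERP emerges
avg = average_trials(2000)
print([round(v, 1) for v in avg])  # close to the true ERP shape
```

A single trial is dominated by noise, but the 2000-trial average lies close to the true waveform, which is why ERP studies repeat the stimulus so many times.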
Event-related potentials divide into two categories: waves that occur within the first 100 milliseconds after presentation of the stimulus (known as sensory ERPs), and those generated after the first 100 milliseconds, known as cognitive ERPs as they demonstrate some level of evaluation and cognitive processing by the patient.
Post-mortem examinations help establish the underlying neurobiology for people that display particular sets of behaviour.
For example, the brains of people who displayed irregular behaviour while alive can be examined after death to identify possible damage or abnormality. When compared with a normal brain, any abnormalities found during examination can help us better understand the cause of the behaviour and the areas of the brain which may be responsible.
Post-mortem studies have also helped identify the brain structures involved in memory, as well as establish links between brain abnormalities and psychopathological disorders such as schizophrenia and depression. For example, examinations have found evidence of reduced glial cells in the frontal cortex of patients with depression (Cotter et al 2001).
This section on biological rhythms for AQA A-level psychology requires you to know about the following:
- Circadian rhythms
- Infradian rhythms
- Ultradian rhythms
- The difference between these 3 biological rhythms
- The effect of endogenous pacemakers and exogenous zeitgebers on the sleep/wake cycle
What are Circadian Rhythms?
Circadian rhythms are biological cycles lasting around 24 hours, like the sleep/wake cycle, which is facilitated by time-checks and regular events such as meal times (external cues).
The main internal biological clock in mammals appears to be located in the hypothalamus, the region responsible for “motivation”, and is named the suprachiasmatic nucleus (SCN). The SCN has an inbuilt circadian firing pattern: when it is damaged in rats, their circadian rhythms of sleeping and feeding are disrupted (Zucker et al). The SCN regulates the secretion of melatonin by the pineal gland (another endogenous pacemaker, whose melatonin affects sleep) and is also connected to the retina of the eye through a separate pathway. This highlights the indirect link between exogenous zeitgebers such as light and how melatonin production from the pineal gland (an endogenous pacemaker) works together with the SCN to maintain a rhythm.
Light can also reach the brain via other means: Campbell et al demonstrated that the circadian rhythm could be reset by shining light on participants’ knees. This suggests other secondary oscillators exist throughout the body, maintaining circadian rhythms through the use of exogenous zeitgebers.
Core body temperature is another circadian rhythm, at its lowest point at around 4:30am (36 degrees) and highest at around 6pm (38 degrees). A slight trough occurs after lunch, and this dip occurs even when people do not eat.
Hormone production also follows a circadian rhythm, with cortisol at its lowest around midnight and peaking at around 6am. Cortisol plays a role in making us alert, which explains why we struggle to think clearly if woken at 4am. Melatonin and growth hormone also follow a circadian rhythm, with both peaking at midnight.
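The daily temperature rhythm can be roughly sketched as a 24-hour sinusoid. This is an illustration only, not a physiological model; note that a pure sinusoid forces the peak exactly 12 hours after the trough (about 4:30pm rather than the observed ~6pm):

```python
import math

def core_temperature(hour):
    """Rough sinusoidal model of core body temperature (deg C).
    Trough of 36 at 4:30am, peak of 38 twelve hours later; illustration only."""
    mean, amplitude, trough_hour = 37.0, 1.0, 4.5
    return mean - amplitude * math.cos(2 * math.pi * (hour - trough_hour) / 24)

print(round(core_temperature(4.5), 1))   # 36.0 (early-morning low)
print(round(core_temperature(16.5), 1))  # 38.0 (afternoon high)
```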
Evaluating Circadian Rhythms
- Aschoff and Wever placed participants in a bunker without any external cues and found participants to have circadian rhythms of between 24 and 25 hours, though some were as long as 29 hours. This demonstrated the existence of circadian rhythms and their endogenous pacemakers (internal clocks), which persisted even without exogenous zeitgebers to influence them. It also highlighted the importance of external cues, as these internal clocks were not accurate without them.
- Due to the lab setting, this study may have low ecological validity as it is not indicative of real-world settings for sleep behaviour. The artificial setup may also have affected the quality or quantity of participants’ sleep, lowering external validity. The sample was also small, making generalisation to the wider population more difficult, where differences may be more evident on a bigger scale. The participants were also volunteers who were aware of being monitored, which may have caused demand characteristics that affected sleep patterns, resulting in findings which lack validity and do not measure what the study intended to measure.
- Michel Siffre spent six months in a cave without external cues and found his circadian rhythm varied from 25 to 30 hours, again highlighting the existence of an internal circadian clock and the importance of exogenous zeitgebers in regulating it. However, this was a case study of a single individual, and generalisations may not apply to others due to individual differences. Age may also have been a factor, as other studies have shown sleep patterns vary among age groups, so the results may be limited to a similar age group. Other factors such as temperature, air pressure or even the monitoring equipment could have affected the quality of the results, meaning the study may lack internal validity. For example, artificial light was used and may have been a confounding variable: Campbell et al and Czeisler et al (1999) showed that even artificial light can manipulate circadian rhythms. Such a study may therefore have low external validity to real-world settings as well as low internal validity, not actually measuring what it intended to measure (the absence of light and external cues). Such experimental studies are important, however, as they allow us to demonstrate causal relationships.
- Duffy et al also found a case for individual differences in circadian rhythms. Morning people preferred to go to bed early and rise early (10pm and 6am) while evening people preferred to go to bed late and rise late (1am and 10am), showing people’s circadian rhythms can vary from one another.
- Zucker’s study, in which the SCN of rats was damaged to disrupt circadian rhythms, was an animal study and may not apply to humans due to differences in anatomy; it may therefore lack external validity and generalisation to humans. There are also ethical concerns about intentionally harming animals, although others may argue that the benefits gained in understanding animal biology may lead to further understanding of humans.
- Such studies are typical of the biological approach to understanding human behaviour, proposing that behaviour can be explained by biological structures in the brain or hormonal activity. In truth our behaviour is more complex and not as deterministic as such biological explanations propose. “Nurture” is evidently a strong factor too, with environmental influences and exogenous zeitgebers clearly playing a strong role in overriding internal biological clocks to some degree. On the other hand, Miles et al demonstrated how a blind man with a circadian rhythm of 24.9 hours was unable to shorten it no matter what exogenous zeitgebers were used, highlighting that some biological clocks may be more ingrained and resistant to external influence.
- The SCN is evidently not the only biological clock: other studies have shown that there are other oscillators in the body which appear to regulate biological rhythms through other means (temperature, light penetrating other parts of the body). Explaining circadian rhythms as simply dictated by the SCN and pineal gland connection oversimplifies the workings of human biology, which are far more complex.
- Understanding circadian rhythms has real-world applications, particularly in the field of chronotherapeutics, the study of how timing affects drug treatments. As the circadian rhythm affects digestion, heart rate and hormones among other functions, this can be taken into account when administering drugs. For instance, medicines that affect certain hormones may have no effect if taken when the target hormone level is low but be more effective when levels are high. Aspirin, for example, is most effective in treating heart attacks if taken in the late evening, as most attacks occur in the early hours of the morning.
Infradian rhythms are biological cycles lasting more than 24 hours, making them longer than circadian rhythms. Infradian rhythms can last days, weeks, months or even be annual, occurring once per year.
One example is the menstrual cycle in women, which is regulated by hormone secretions, namely oestrogen and progesterone secreted by the ovaries, and occurs monthly with an average cycle of 28 days, although this can vary. It was originally thought to be controlled by the hypothalamus acting as an endogenous pacemaker; however, evidence has shown that exogenous zeitgebers play a part too. Another example is PMS (pre-menstrual syndrome), which occurs a few days prior to the onset of bleeding and is characterised by loss of appetite, stress, irritability and poor concentration. Infradian rhythms don’t have to be monthly and can also apply to behaviours that occur once a year (annually), such as hibernation.
Some people suffer from a depressive condition called seasonal affective disorder (SAD) during the winter months and recover during the summer. SAD sufferers experience severe symptoms such as lowered mood and depression, thought to be brought about by reduced light and the increased melatonin production in darkness by the pineal gland. Serotonin is converted into melatonin, and low levels of serotonin (since it is being converted to melatonin) have been linked with chronic depression. The winter months are also linked to increased rates of heart attacks, with most deaths occurring in January, suggesting an annual rhythm to human deaths.
Infradian Rhythms Evaluation
- Reinberg (1967) reported on a woman who spent three months in a cave without natural lighting. Her menstrual cycle shortened to 25.7 days, implying that infradian rhythms are influenced by exogenous zeitgebers such as light cues and require them to remain synchronised.
- However, living in a cave has low ecological validity and is not representative of natural settings. Results may therefore not generalise, nor can we conclude that such changes were due to the absence of light, as temperature, smell or noise could all have contributed in some form as exogenous zeitgebers. As confounding variables could not be ruled out, this research into infradian rhythms lacks internal validity.
- Also, as this was a case study of a single individual, the results may lack external validity and generalisation to the wider population due to individual differences (age, health, level of fitness). Russell et al applied underarm sweat of donor women to the upper lips of female participants, finding that menstrual cycles became synchronised. This supported McClintock’s findings, which suggested pheromones act as exogenous zeitgebers and can influence the endogenous pacemakers which govern menstrual cycles.
- However, results from such studies, including McClintock’s findings on the synchronisation of menstrual cycles, can be explained as random occurrences and do not form a statistically significant difference. Moreover, women’s cycle lengths are not universal, which may invalidate the findings; what is needed is evidence that women with different cycle lengths show synchronisation. A study of a women’s basketball team over an extended period found no correlation between menstrual cycles, although dieting, exercise and stress could also have had an effect.
- Turke believed, however, that there is an evolutionary significance to synchronised periods, allowing women living together to share child-caring duties. This would fit with the evolutionary approach, which holds that we have adapted to increase the chances of survival for children. However, such a theory is post-hoc and cannot be proven or disproven for certain, so it remains purely speculative.
- In real-world applications, understanding of the role of darkness in SAD has led to effective therapies, most notably the use of phototherapy. This uses strong lights in the evening and/or early morning to change levels of melatonin and serotonin. SAD sufferers have reported that daily use of such light therapy is enough to relieve their feelings of lethargy, depression and other related symptoms. Eastman et al, however, found that placebo effects could also be at work, where simply believing the therapy will work leads sufferers to report improvement. One study found that 32% of participants reported improvement with the placebo alone, which questions the therapy’s effectiveness. The problem could therefore be more psychological for some than others, and other therapies may be more suitable.
- Rosenzweig et al found that SAD may bring about low moods in some individuals because darkness stimulates the production of melatonin, a hormone linked with the regulation of sleep, thus stressing the importance of light as an exogenous zeitgeber. SAD has also been explained as a natural outcome of infradian rhythms, but alternatively it could be the consequence of a disrupted circadian rhythm. In the UK, as the seasons change from summer to winter, circadian rhythms may be thrown out of phase: people continue to get up at about the same time but often go to bed earlier because it gets dark earlier. This means the biological system gets the impression that time is shifting, and the result is similar to jet lag.
Ultradian rhythms are biological cycles lasting less than 24 hours and one example is the cycle of brain activity during sleep (sleep stages).
Sleep has five stages occurring through the night, with a full cycle lasting about an hour in infancy and lengthening to around 90 minutes by adolescence. Sleep is a different state of consciousness in which responsiveness to the external environment is reduced. It occurs daily as a circadian rhythm and is composed of an ultradian cycle of separate stages within the sleep period itself.
With the invention of the electroencephalograph (EEG), psychologists could investigate the brain activity occurring during sleep, concluding that it is composed of identifiably different sequenced stages, with the first four known as NREM sleep and the fifth stage known as REM sleep.
- Stage one: Alpha waves disappear and are replaced by low-voltage slow waves. Heart rate declines and muscles relax. This is light sleep and people are easily woken.
- Stage two: A deeper state, from which people are still easily woken. Short bursts of sleep spindles are noticeable, as well as sharp rises and falls in amplitude known as K-complexes. Bodily functions slow down and blood pressure, metabolism and cardiac activity decrease.
- Stage three: Sleep becomes increasingly deeper and people are difficult to wake. Sleep spindles decline, being replaced by long, slow delta waves. Heart rate, blood pressure and temperature decline.
- Stage four: Deep sleep, where delta waves increase and metabolic rate is low. People are difficult to wake. Growth hormones are released and this is where most of the “repair work” is done. Incidences of sleep-walking and night terrors may occur. Stages 3 and 4 are called slow-wave sleep. The sleeper spends about 40 minutes in stage four, with about an hour passing in total from stage one to stage four. Stage three is then re-entered, followed by stage two, and then an active stage of sleep called rapid eye movement (REM), about 90 minutes after falling asleep.
- Stage five: Also known as REM sleep. After around 15 minutes of REM sleep, the sleeper re-enters stages two, three and four in that order, and another cycle begins. It is common to go through about five ultradian cycles in one night. As the night progresses, the sleeper spends more time in REM sleep and less time in the other stages. This pattern is fairly universal, though there are developmental differences.
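The timings above imply some simple arithmetic: at roughly 90 minutes per cycle, about five complete cycles fit into a typical night's sleep. A quick check (the cycle length is an approximation and varies between individuals):

```python
CYCLE_MINUTES = 90  # approximate adult sleep cycle length

def cycles_in_night(hours_asleep):
    """Number of complete ~90-minute ultradian cycles in a night's sleep."""
    return int(hours_asleep * 60 // CYCLE_MINUTES)

print(cycles_in_night(7.5))  # 5 complete cycles in 7.5 hours
print(cycles_in_night(8))    # 5 complete cycles, with 30 minutes left over
```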
Kleitman referred to the 90 minute sleep cycle as the Basic Rest Activity Cycle or BRAC.
Kleitman suggested this 90-minute cycle also occurs during the day while we are awake; however, instead of the sleep stages, people progress from a state of alertness to physiological fatigue roughly every 90 minutes. Kleitman suggested the mind can only concentrate for up to 90 minutes, after which the body runs out of resources, resulting in fatigue, loss of concentration and hunger.
The basic rest activity cycle (BRAC) isn’t as obvious according to Kleitman but he believed everyday observations of peoples behaviour prove its existence.
Ultradian Rhythms Evaluation
- Research evidence appears to support the case for individual differences in sleep stages being biologically determined. Differences in sleeping patterns are commonly attributed to non-biological factors such as room temperature or sleep hygiene; however, Tucker et al (2007) suggested these differences may be biologically determined. Studying 11 participants over 11 days and nights, they found that large individual differences were still apparent despite the strictly controlled laboratory settings used to assess sleep patterns. This suggests such individual differences exist and that ultradian rhythms may be unique to each individual through nature and biology.
- Research support for the basic rest activity cycle comes from Ericsson et al (2006). Having studied violinists, he found practice sessions were usually limited to 90 minutes at a time, with practice split into 90-minute segments throughout the day. Consistent with Kleitman's predictions, fatigue coincided with the end of one BRAC cycle, and this pattern was evident in musicians, athletes and various other professions, providing evidence for the cycle's existence and for its being biologically determined through nature.
- Klein et al found that participants tested on verbal and spatial tasks showed performance linked to a 96-minute cycle, similar to the ultradian rhythm of the sleep stages, again supporting the BRAC. This suggests a possible link between cognitive ability and the stages of sleep, implying sleep may be an important element for efficient brain function.
- Gerkema et al found that ultradian rhythms tended to correlate with brain and body size, with larger animals having longer sleep cycles. This suggests the purpose of such a cycle could be linked to restoration theories of sleep, with such cycles existing for the recuperation of the body in some form.
- The artificial settings of laboratory experiments in which sleep stages are measured with electrodes may mean the results lack ecological validity, as they are not representative of true sleeping environments; the quality and quantity of sleep may therefore be affected, skewing results. Because of this, as well as the risk of demand characteristics since participants were aware of being observed, the findings may lack external validity and application to the wider population.
- As ultradian rhythms of sleep exist in us all, this suggests such rhythms are part of our nature rather than a product of nurture, and may be governed by endogenous pacemakers. Animal studies have found that creating lesions in the brain areas that control circadian rhythms has no effect on ultradian rhythms, suggesting different controlling mechanisms for the two. However, this also raises ethical concerns, as many studies into ultradian rhythms have involved animal testing, leaving many animals with permanent damage. Such findings from animal studies may also not fully apply to humans due to differences in anatomy, making generalisation difficult.
Endogenous pacemakers are biological “clocks” that are generated from within organisms.
In mammals the main pacemaker is a tiny cluster of nerve cells called the suprachiasmatic nucleus (SCN), located in the hypothalamus just above where the optic nerves from each eye cross over. The SCN obtains information about light (an exogenous zeitgeber) via these optic nerves, and this occurs even when our eyes are closed, as light penetrates the eyelids. This produces a circadian rhythm which is reset by light entering the eyes. If the pacemaker is running slow, exogenous zeitgebers such as light shift it into alignment, keeping it synchronised with the outside world.
The SCN is divided into a ventral and a dorsal SCN, with the ventral SCN easily reset by external cues, whereas the dorsal SCN is less affected by exogenous zeitgebers (external cues) and more resistant to being reset. The SCN receives information on light and sends signals to the pineal gland, which then regulates the production of melatonin; melatonin induces sleep and inhibits brain mechanisms promoting wakefulness. Light inhibits melatonin production, with the SCN indirectly involved in this process, demonstrating how endogenous pacemakers interact with exogenous zeitgebers.
Evaluating Endogenous Pacemakers
- Morgan (1995) bred “mutant” hamsters so they had circadian rhythms of 20 hours instead of 24. He then transplanted the SCNs of the mutant hamsters into normal hamsters, which then displayed the 20-hour mutant rhythm. This demonstrates the key role the SCN plays as an internal biological clock. However, this study was based on animals, and the anatomy of a hamster is very different to that of humans, so such findings may lack external validity when generalised to humans. The study also raised ethical concerns, as animals were harmed and left permanently damaged.
- Michel Siffre demonstrated the existence of an endogenous pacemaker in his isolation study, in which he spent six months in a cave. His body settled into a sleep/wake cycle of 25 to 30 hours, highlighting that endogenous pacemakers exert an influence on circadian rhythms, although exogenous zeitgebers appear important in keeping them accurately synchronised. However, this is only a single case study, and age may be a confounding variable that influences endogenous pacemakers: when Siffre repeated the experiment at an older age, he found his internal clock ticked more slowly compared to when he was young. Such results into biological clocks may therefore lack external validity across the wider population. The study also lacks ecological validity as a measure of internal biological clocks, as Siffre was in a cave with electrical equipment hooked up to him, which is not indicative of real-world settings.
- There is also evidence to suggest there are other pacemakers and “oscillators” throughout the body, and that the SCN is not the only one. Folkard (1996) studied a young woman named Kate Aldcroft who spent 25 days in a cave. After this period her temperature rhythm was 24 hours while her sleep rhythm was on a 30-hour cycle. This demonstrated there are separate endogenous pacemakers controlling biological rhythms, and that these can become desynchronised. The issue here is that this was a single case study, so the findings may lack external validity and generalisation to the wider population. Due to hormonal differences, the findings for this woman may not apply to men either, and there is a risk of gender bias in the findings.
- Campbell et al demonstrated that the SCN may not be the main endogenous pacemaker by altering circadian rhythms through shining light on the backs of participants' knees. This shifted their rhythms, demonstrating that other oscillators must exist which help keep the biological clock in tune with the outside world. It may be that blood is the messenger that carries light signals across the body, but this highlights that the SCN is not the sole main endogenous pacemaker, as was previously assumed.
- Yamazaki et al found that circadian rhythms persisted in isolated lungs, livers and other tissues grown in culture dishes, not under the control of the SCN. This suggests most cells and body tissues are capable of activity on a circadian basis and have their own pacemakers in some form, which may operate at a cellular level.
- This biological approach to explaining our rhythms is also deterministic, as it ignores the role of free will in overriding these internal clocks. We are not wholly governed by such biological programming and can override it if we choose to. Nurture also evidently plays a strong role, as the environment interacts with such internal clocks, and the two appear equally important. Whereas our endogenous pacemakers can keep a rough track of our biological rhythms, the environment and its stimuli appear important in keeping them well synchronised.
Exogenous zeitgebers are external cues that play an important role in regulating biological rhythms helping to synchronise and reset them. Endogenous pacemakers interact with exogenous zeitgebers and this process is known as “entrainment”. The opposite of entrainment is “free-running” and here the biological clock operates without external cues.
Light is the most dominant zeitgeber and has been found to reset the body’s main pacemaker the Suprachiasmatic nucleus (SCN) as well as other oscillators located throughout the body. This is because a light sensitive protein known as CRY reacts to light and is present in the SCN and throughout peripheral oscillators.
Social cues were originally thought to be the main zeitgeber for circadian rhythms, as our daily rhythms appeared to be dictated by social convention. We would eat meals, go to sleep and wake up at times designated appropriate by our age and social factors. Although light has been found to be the main zeitgeber, other cells have been found to react to social cues such as meal times. For example, cells in the liver and heart appear to be reset by eating, highlighting the role the environment and nurture play, rather than some innate biological programming (nature) dictating their resetting.
Temperature has also been found to be an exogenous zeitgeber. An example of this is when trees' leaves change colour or fall as they react to the surrounding temperature or the length of the day. Temperature is also an important factor in the onset of hibernation in animals, and in the absence of light, temperature may become the dominant zeitgeber that resets biological rhythms.
Exogenous Zeitgebers Evaluation
- Campbell et al found evidence to support light acting as an exogenous zeitgeber. Fifteen volunteers slept in a laboratory and were awoken at separate times with light shone on the backs of their knees. This was found to shift circadian rhythms, highlighting the role light plays but also how the protein CRY reacts to light and resets biological rhythms. It also highlights that humans do not rely solely on light penetrating the eyes; blood may act as a messenger carrying light signals from the skin to the brain. Reducing the explanation to light alone is reductionist, as evidently more complex mechanisms are at work which we do not fully understand. Although the role of light in resetting biological rhythms has been established, the relative importance of internal and external cues is still unclear, as is the nature of their interaction.
- Folkard et al tested whether exogenous zeitgebers could override internal clocks. A group of 12 people lived in a cave for three weeks without natural light or other time cues and agreed to go to sleep and wake up at set times dictated by a clock. Unknown to the participants, the clock was gradually sped up. Initially participants' rhythms matched the clock, but as its speed increased their biological rhythms stopped following it and settled into a 24-hour rhythm. This highlights that circadian rhythms can only be guided to a certain extent by exogenous zeitgebers, and shows the strong role “nature” and biological programming play in overriding “nurture” and the environment. The criticisms here are, firstly, that the sample size was small, making wider generalisation difficult. Also, all participants were aware they were being monitored for a sleep study, so demand characteristics are possible, which may have affected sleep times and thus the rhythms being reset. The fact that it was a laboratory study also meant it lacked ecological validity, as the settings were not indicative of the real world. The strength of this lab study, however, was that it allowed researchers to identify causal relationships between external cues and biological rhythms without the risk of confounding variables affecting results.
- Michel Siffre spent six months in a cave to measure how biological rhythms are affected by the absence of exogenous zeitgebers such as light. He settled into a sleep/wake cycle of 25-30 hours, and after 179 days he believed only 151 days had passed. This supported previous research by Aschoff et al showing that exogenous zeitgebers play a role in keeping endogenous pacemakers synchronised. However, this case study was based on a single individual, so it may not generalise to the wider population due to individual differences such as age. There may also be gender bias, as it tells us nothing about how women may be affected by the lack of external cues, and other research has shown that biological rhythms in males and females may be affected differently by external cues (e.g. pheromones and hormones). The use of artificial light may also have been a confounding variable, as studies such as Campbell et al's highlighted how this can reset biological rhythms. The study also lacked ecological validity, as Siffre spent much of the time hooked up to electronic equipment, which may have affected the quality and quantity of his sleep and thus his biological rhythms; this is not fully considered. Temperature may also have affected the study, as we now know this is also an exogenous zeitgeber, further skewing results.
- The evolutionary perspective would state there is an advantage to this interaction between endogenous pacemakers and exogenous zeitgebers, as it keeps organisms in tune with seasonal changes. This suggests such a mechanism has an evolutionary advantage in aiding survival in some form. Relying solely on exogenous zeitgebers may threaten survival, so internal cues are important too. In truth, people are not wholly governed by biological programming or external cues, as we have the capacity for conscious thought and free will, which can override external or internal cues and alter sleep habits or social routines. Therefore our behaviour is not entirely deterministic, based on either internal or external cues, as free will factors in too.