Biological Basis of Behaviour


Table 3.1
Brain imaging and measurement techniques, with the decade in which each was first used

Technique | First used
X-rays | 1890s
↳ Computed Tomography (CT scan)ᵃ | 1970s
  ↳ Single-photon emission computed tomography (SPECT) | 1990s
Electroencephalography (EEG) | 1920s
↳ Brain Fingerprinting | 1980s
Positron Emission Tomography (PET) | 1960s
Magnetic Resonance Imaging (MRI) | 1970s
↳ functional Magnetic Resonance Imaging (fMRI) | 1990s

ᵃThe symbol ↳ is used to indicate family relationships, where one technology has evolved from another



First developed in the 1890s, X-rays made an almost immediate impact, finding use in both medical and legal contexts before the close of the 19th Century (Brogdon and Lichtenstein 1998). The first criminal case employing X-rays actually involved visualisation of a leg rather than a brain; George Holder of Montreal was convicted of attempted murder after an X-ray confirmed the presence of a bullet in the lower limb of Tolson Cunning.

When it comes to the study of the brain, imaging techniques can be divided into those that reveal structural information and those that purport to show correspondence between specific functions and particular subsection(s) of the organ. X-rays, Computed Tomography (CT or CAT scans) and Magnetic Resonance Imaging (MRI) fall into the former category, whereas Single-photon emission computed tomography (SPECT), Electroencephalography (EEG), Positron Emission Tomography (PET), functional Magnetic Resonance Imaging (fMRI) and Brain Fingerprinting report on the operation of the brain. Attempts have been made to introduce each of these approaches as evidence in court, with mixed rates of success (see Sect. 4.4).

Structural imaging approaches: X-rays can reveal information about the internal structure of the body because materials of different composition, density and/or thickness absorb electromagnetic radiation with varying efficiency. The X-rays passing through the body can be captured on appropriately sensitive film to produce a two-dimensional image.
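This relationship can be made explicit. For a narrow beam passing through a single homogeneous material, the transmitted intensity falls off exponentially, a textbook simplification of what a real scanner must model:

$$I = I_{0}\,e^{-\mu x}$$

where $I_{0}$ is the intensity of the incident beam, $\mu$ is the attenuation coefficient (reflecting the composition and density of the material) and $x$ is the thickness traversed. Dense material such as bone has a high $\mu$, absorbs more of the beam and therefore appears bright on the developed film.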

In truth, conventional X-rays reveal little about the detailed structure of the brain. However, CT scanning, whilst employing the same fundamental physics, improves considerably upon basic X-ray imaging by developing 3D representations of the brain. With the help of computers, a series of images can be produced as ‘slices’, which can then be reconstructed into a 3D view.
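As an illustration of this reconstruction step, the sketch below (a minimal outline using NumPy, with synthetic placeholder data rather than genuine scanner output) stacks a series of 2D slices into a single 3D volume:

```python
import numpy as np

# Minimal sketch: combine 2D CT "slices" into a 3D volume.
# The pixel data here is synthetic; real slices come from the scanner
# via tomographic reconstruction of many X-ray projections.
n_slices, height, width = 40, 256, 256
slices = [np.random.rand(height, width) for _ in range(n_slices)]

volume = np.stack(slices, axis=0)   # shape: (n_slices, height, width)
print(volume.shape)                 # (40, 256, 256): a 3D view built from slices
```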

Unlike X-rays and CT scans, which involve the use of potentially harmful ionising radiation, MRI exploits the inherent magnetic properties of atoms, specifically hydrogen atoms, due to their abundance in fats and water, which together constitute the majority of the human body. Left undisturbed, these atoms spin in random orientations. A patient (or other research subject) lies flat within the MRI machine, surrounded by a strong magnet. When switched on, the magnet causes all of the hydrogen atoms to spin in a coordinated direction, aligned head-to-toe or vice versa (Berger 2002).

Pulses of radio waves, at a frequency tuned to disrupt the spin of hydrogen atoms, are then directed to the relevant section of the body, in this case the head. Energy from the radio waves causes some of the hydrogen atoms to adopt a different orientation. When the radio signal is turned off, the hydrogen atoms return to their original orientation, as determined by the magnetic field. This process is known as “relaxation”. Different tissues, including tumours, allow relaxation to occur at different rates and/or may contain different percentages of water. With the aid of a computer, mathematical information about relaxation can be converted into three-dimensional brain images.
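The diagnostic value of these differing relaxation rates can be sketched numerically. The snippet below (Python; the T1 values are illustrative orders of magnitude only, and real values vary with tissue and magnet strength) shows how tissues with different relaxation times return different signal levels at a fixed measurement delay, which is the source of image contrast:

```python
import numpy as np

def t1_recovery(t_ms, t1_ms, m0=1.0):
    """Longitudinal (T1) relaxation: recovery of magnetisation after a pulse."""
    return m0 * (1 - np.exp(-t_ms / t1_ms))

# Illustrative T1 values (ms) only; real values depend on field strength.
tissues = {"white matter": 800, "grey matter": 1300, "cerebrospinal fluid": 4000}
t = 1000  # measure 1000 ms after the radio pulse is switched off
for name, t1 in tissues.items():
    print(f"{name}: signal = {t1_recovery(t, t1):.2f}")  # differing values = contrast
```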

In addition to the safety advantage noted above, MRI offers further benefits over X-rays and CT scans. In particular, MRI is better than X-ray-based systems at visualising soft tissues and gives sharper images. There are downsides, though: MRI machines are noisy, claustrophobic and very expensive.

Functional imaging approaches: Both PET and SPECT require the use of radioactive isotopes to report on metabolic activity within the brain. In PET, radioactive decay causes the release of a positron, a positively charged particle. When a positron encounters an electron in the tissue under investigation, the two particles interact and “cancel each other out”, leading to the generation of two gamma-rays. These rays are detected using scintillant material in the walls of the machine surrounding the patient’s head. The scintillant in turn emits light, which is analysed by computer.
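The energy of these gamma-rays is fixed by the physics of annihilation: each photon carries away the rest-mass energy of one of the two particles, and the pair are emitted in almost exactly opposite directions, which is what allows the detector ring to place the event along a line through the head:

$$E_{\gamma} = m_{e}c^{2} \approx 511\ \text{keV}$$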

The radio-labelled compound, typically the glucose analogue fludeoxyglucose (18F-FDG) or labelled water (H215O), is delivered intravenously. The level of radioactivity seen in different parts of the brain is a reflection of the blood flow and/or glucose metabolism in those regions which, in turn, is taken to be a proxy for brain activity.

For PET, the radioisotopes are short-lived; 18F has a half-life of 110 min, 15O just 2 min (Muehllehner and Karp 2006). In consequence, facilities utilising this technique must be geographically close to a laboratory producing the labelled metabolite, contributing to the uneven availability of this service. For SPECT, isotopes such as 99mTechnetium and 201Thallium are used. These radioactive elements do not naturally occur within biomolecules, so radiopharmaceuticals such as 99mTc-hexamethylpropyleneamine oxime must be specially synthesised (Ollinger and Fessler 1997). The isotopes used for SPECT do, however, have longer half-lives than those used for PET, meaning that machines capable of obtaining SPECT data are more readily accessible.
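The practical consequence of these half-lives is easy to quantify. The sketch below (Python, using the half-lives quoted above) shows how much activity survives a modest 30-minute delay between production and scanning:

```python
def remaining_fraction(elapsed_min, half_life_min):
    """Fraction of a radioisotope's activity remaining after a delay."""
    return 0.5 ** (elapsed_min / half_life_min)

# Half-lives from the text: 18F ~110 min, 15O ~2 min.
for label, half_life_min in [("18F", 110), ("15O", 2)]:
    frac = remaining_fraction(30, half_life_min)
    print(f"{label}: {frac:.1%} of activity remains after a 30 min delay")
# 18F retains ~83 %, whereas 15O is essentially gone, hence the need for
# an on-site (or very nearby) production facility.
```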

Like PET, SPECT involves production of gamma-rays but, despite use of a rotating gamma camera, SPECT scans result in lower-resolution images than PET. Nowadays both SPECT and PET can be conducted in combination with traditional CT scans to generate functional and structural data at the same time.

The use of fMRI represents an important advance in brain mapping because it does not require the use of radioactivity. Instead, fMRI maps active regions of the brain by exploiting natural differences in the magnetic properties of the haemoglobin in oxygenated versus deoxygenated blood. By comparing images of brains at rest with brains engaged in specific tasks, a blood oxygen level dependent (BOLD) contrast can be generated. In this way it becomes possible to visualise the areas of the brain requiring a greater supply of oxygen, which is taken to represent brain activity.
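The rest-versus-task comparison at the heart of BOLD contrast can be caricatured in a few lines. The sketch below (Python/NumPy, entirely synthetic data; real fMRI analysis additionally requires motion correction, smoothing and formal statistics) subtracts the mean “rest” image from the mean “task” image and thresholds the difference:

```python
import numpy as np

# Toy BOLD contrast: compare mean signal per voxel across rest and task scans.
rng = np.random.default_rng(0)
rest = rng.normal(100, 5, size=(50, 64, 64, 30))   # 50 resting volumes
task = rng.normal(100, 5, size=(50, 64, 64, 30))   # 50 task volumes
task[:, 30:34, 30:34, 15] += 5                     # synthetic "active" patch

contrast = task.mean(axis=0) - rest.mean(axis=0)   # mean signal difference
active = contrast > 4                              # crude threshold
print("voxels flagged as active:", int(active.sum()))
```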

Waveform approaches: EEG and Brain Fingerprinting differ from the methods discussed thus far. Rather than producing an image of the brain, these methods output measurements as a series of waves representing brain activity, known as a “montage”. EEG measures electrical activity in the brain arising from the coordinated response of thousands of neurons, recording signals at multiple electrodes positioned on the scalp.

Compared with most of the other imaging techniques, EEG offers poorer spatial resolution but better temporal resolution. It also requires less specialist equipment than many of the scan-based systems and, in consequence, is both cheaper to use and more flexible regarding location (e.g. it can genuinely be used for “bed-side” analysis in a way that fMRI cannot).

Brain Fingerprinting is a particular variation of EEG that seeks to compare brain responses to three kinds of stimuli (Farwell and Smith 2001). The first type of stimulus involves particular things that the subject has been asked to memorise. These are known as “targets” and serve as positive controls for the process. Secondly, there are stimuli with which the subject is not expected to have any connection. These are the “irrelevants” and serve as negative controls. Thirdly, there are the stimuli under test, the “probes”: things which the subject may know, for example, as a consequence of their presence at a crime scene. Such “guilty knowledge” might be considered to be incriminating (see Sect. 4.4.1).

Scientists employing this method look for evidence of Memory and Encoding-Related Multifaceted Electroencephalographic Responses (MERMERs). The most important MERMER is known as the P300 wave, an involuntary “event-related potential” (ERP) occurring some 300 ms after the trigger stimulus.
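Because any single EEG trace is dominated by noise, event-related potentials such as the P300 are extracted by averaging many epochs time-locked to the stimulus. The sketch below (Python/NumPy, with an entirely synthetic signal and noise) shows the averaging recovering a peak at roughly 300 ms:

```python
import numpy as np

# Toy ERP averaging: single trials are noisy, but the stimulus-locked
# average reveals a P300-like component. All numbers are synthetic.
rng = np.random.default_rng(1)
fs = 250                                  # sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)             # 0-800 ms after the stimulus
p300 = 5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # peak near 300 ms

epochs = p300 + rng.normal(0, 10, size=(100, t.size))   # 100 noisy trials
erp = epochs.mean(axis=0)                 # averaging suppresses the noise
print(f"ERP peak at ~{t[erp.argmax()] * 1000:.0f} ms")
```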



3.1.2 Functional Architecture of the Brain


The human brain is divided into six main parts: the midbrain, the pons and the medulla oblongata (these three collectively constituting the brain stem), the cerebellum, the diencephalon (which includes the thalamus and the hypothalamus), and the cerebrum (Kandel et al. 2000). During the early part of the 20th Century, painstaking anatomical work was conducted by the German anatomist Korbinian Brodmann. By examining variation in the organisation of different cell types, Brodmann defined 52 structurally-distinct areas within the human cerebral cortex (Kandel et al. 2000). Although more modern techniques have brought into question some of his original mapping of functions to particular regions of the brain, Brodmann’s cytoarchitectonic numbering of areas has nonetheless served for several decades as a valuable starting point for defining the substructure of the brain.

The cerebral hemispheres can initially be divided into four distinct lobes: occipital, parietal, temporal and frontal. As the name suggests, the frontal lobe is situated in the anterior part of the brain, i.e. on the facial side of the head. The frontal lobe is proportionately larger in primates than in other mammals, and all the more so in humans (Miller and Cohen 2001). It has a pivotal role in higher brain functions, i.e. not the innate, routine responses maintained by other regions. Several subsections within the frontal lobe have been closely identified with specific behaviours (see Table 3.2). In particular, the PreFrontal Cortex (PFC) and divisions thereof have crucial roles to play in cognitive control or “executive functions”, which can be defined as “those high-level processes that control and organise other mental processes” (Gilbert and Burgess 2008: p. R112).


Table 3.2
Correlation of certain behaviours with subsections of the frontal lobe of the human brain

Area of brain | Associated behaviour-related activity
Anterior Cingulate Cortex (ACC) | Inhibitory control, emotional processing (including empathy), detection of cognitive conflict
Orbital PreFrontal Cortex (OPFC) | Decision-making, emotional processing (including regret)
Ventromedial PFC (VMPFC) | Ethical decision-making, fear and processing of risk
Ventrolateral PFC (VLPFC) | Inhibition of behaviour
Dorsolateral PFC (DLPFC) | Reasoning, behavioural control, cognitive flexibility, impulse control
Amygdala | Emotional learning and memory, auditory and facial emotion recognition, fear conditioning
Supplementary Motor Complex (SMC) | Volitional (self-initiated) movements, inhibition of action and response to alteration in planned activity

Original concept for table inspired by Mobbs et al. (2007). Additional information drawn from: Balleine and Killcross (2006), Brierley et al. (2002), Bunge et al. (2001), Gilbert and Burgess (2008), Nachev et al. (2008), and Salat et al. (2002)

The PFC has been shown to play a crucial role in “working memory”, i.e. the retention of short-term information required to fulfil a goal, and in “behavioural inhibition”, which includes the active exclusion of information irrelevant to completion of the task and the wilful decision not to go through with an action, such as giving in to an inappropriate temptation (Bunge et al. 2001; Gilbert and Burgess 2008). Any notion of free will and “top-down control” will, of necessity, involve influences exerted via the PFC.



3.2 Genetics of Behaviour


We now move on to consider the potential role of specific genes in human behaviour. Before doing so, however, it should be noted that psychologists and other behavioural scientists have employed a variety of alternative approaches which give strong support to the idea that at least some of our behaviour stems from our genetic make-up.

Since as far back as the 1920s, scientists have sought to identify a potential role for heritable factors by comparing the extent to which characteristics are shared by “identical” or monozygotic (MZ) twins in contrast to “fraternal” or dizygotic (DZ) twins. The premise, which is known to have potentially confounding limitations, is that all twins (MZ and DZ) will have had a shared environment (both in utero and after birth). However, only MZ twins arise from the same fertilised egg and therefore have “identical” genetic heritage; DZ twins are no more genetically alike than other siblings. Thus traits found more commonly in MZ than DZ twins may be attributable to genetics.

As Andrews and Bonta (2010) have observed, the perfect opportunity for analysis would come if MZ twins were separated at birth and raised entirely separately in different families. For obvious reasons it would be unethical to conduct such an intervention purely to further our knowledge of genetics. However there are rare situations when, for other reasons, identical twins have been separated early in life and raised apart in adopted families.

This heralds the second non-molecular way in which the influence of genetics on behaviour has been investigated: the study of adoptees. Antisocial behaviour exhibited by an adoptee, for example, can be compared against the equivalent characteristic in his biological parents, his adoptive parents and/or genetically unrelated siblings in his adoptive family. This helps to delineate genetic versus environmental factors.

Via these various mechanisms, it has been possible to establish with some confidence that criminality and/or other antisocial behaviour is attributable, at least in part, to genetic factors. Calculations of the genetic contribution to behaviour vary significantly (not least due to methodological differences); however, it is not considered unreasonable to attribute approximately 30 % of antisocial behaviour to genetic factors (see Rhee and Waldman 2002; Andrews and Bonta 2010 for more detailed analysis).
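One widely used way of converting twin correlations into such a percentage is Falconer’s formula, which doubles the difference between the MZ and DZ correlations for a trait. The sketch below (Python) uses hypothetical correlations chosen purely to reproduce the ~30 % figure quoted above:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's classic estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations, picked to match the ~30 % figure in the
# text; real studies report a wide range of values.
h2 = falconer_heritability(r_mz=0.50, r_dz=0.35)
print(f"estimated heritability: {h2:.0%}")   # 30%
```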

Of course it is important to remember that studies of these types do not offer information about the specific contribution of genes to the criminal behaviour of any particular individual, such as the defendant in a murder trial (a scenario to which we will return in Chap. 4). A third approach would be to interrogate the pedigree of a person, for example our hypothetical defendant, to examine the inheritance of a feature of interest through the generations in their family tree. If undertaken appropriately, a study of this kind will likely generate a “genogram”, a pictorial representation of the relationships within the family with emphasis on the patterns of inheritance of the trait(s) under consideration. In the present context this could be a history of criminality within the family, but it might be somewhat more subtle than this, looking at family members with a history of mental illness, alcoholism or similar.

From our current vantage point, in which the complete genome1 of an individual can be sequenced in a few days for low thousands of dollars, it is becoming possible to add a molecular dimension to these more traditional approaches. It is to the potential for genetic analysis that we now turn.

Discovery of the structure of deoxyribonucleic acid (DNA), and the subsequent elucidation of the ways in which proteins are encoded by that DNA, opened up the potential for interrogation of the underlying molecular biology of behaviour. The identification of mutations within that coding sequence as the cause of various inheritable diseases has driven a view amongst some scientists that this “book of life” is all that we need. It is no surprise, for example, that it was Francis Crick, one of the authors of the seminal paper on the structure of DNA (Watson and Crick 1953), who made the bold assertion quoted in Sect. 1.3.3 regarding the deterministic character of nature [Crick’s co-author, James Watson, is not quite as forthright, but shares some of the same sentiment when he observes that “In large measure, our fate is in our genes” (quoted by Alper 1998)].

This reductionist2 model of life is not uniformly endorsed (e.g. see Noble 2006 for a well-reasoned critique). A good argument can be made that many of the most recent discoveries in genetics actually demonstrate that this view was overly simplistic. Evidence from a diverse range of biological disciplines, from botany to psychology, has progressively seen models involving Gene-environment (G × E) interactions replace naïve notions of Nature versus Nurture (Baum 2013). Not only do the data increasingly point to “both-and” explanations rather than “either-or”, but scientists are also beginning to understand the molecular mechanisms by which environmental factors can exercise influence on gene expression. These mechanisms include epigenetics via chromatin modification (Sect. 3.2.1) and the influence of non-coding RNAs (Sect. 3.2.2).


3.2.1 Epigenetics


Over the past fifteen years, there has been a paradigm shift in the understanding of gene expression. In particular, there has been recognition that mechanisms exist for the transmission of inheritable changes in DNA expression that do not involve mutation of the DNA coding sequence per se. This phenomenon, known as epigenetics, has been defined as “the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states” (Bird 2007: p. 398). Epigenetic alterations in gene expression are generally achieved by the attachment (or removal) of a range of small molecules to the DNA and/or to the proteins within the histone complexes around which the nucleic acid is wrapped. Methylation of DNA at so-called “CpG islands” located close to the transcription start site for a gene (i.e. the position where the “message” begins) can influence whether or not it is expressed in particular cells. Histone proteins can be altered via post-translational methylation, acetylation, ubiquitination or sumoylation3 (Gräff and Mansuy 2008). Some, perhaps all, of these changes may be reversible. Environmental factors, including in utero exposure to biomolecules, are now known to influence gene expression via epigenetic modifications.
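To make the idea of a “CpG island” concrete, the sketch below (Python, run on an invented sequence) counts CpG dinucleotides and computes the observed/expected CpG ratio, one of the standard screening statistics (used alongside GC content and sequence length) for candidate islands:

```python
# Toy CpG-island screen on an invented sequence. Real screens also check
# GC content and require a minimum length (typically a few hundred bases).
seq = "TTCGCGGCGATCGCGCATCGGCGC"

cpg_observed = seq.count("CG")
cpg_expected = seq.count("C") * seq.count("G") / len(seq)
print(f"CpG sites: {cpg_observed}, obs/exp ratio: {cpg_observed / cpg_expected:.2f}")
# A ratio well above ~0.6 (together with high GC content) flags a candidate island.
```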


3.2.2 Non-coding RNAs (NcRNAs)


Occasionally, surprises cause us to fundamentally re-examine areas of biology which we thought were already well understood. One such change has been the revelation regarding the roles played by non-coding RNAs (ncRNAs) in determining whether given protein-coding genes are switched on in particular tissues and/or at particular times.

In a few short years, science has gone from no knowledge of the existence of these small regulatory molecules to a detailed understanding of the roles played by a growing family of ncRNAs, including microRNAs (miRNA), Piwi-interacting RNAs (piRNA) and small-interfering RNAs (siRNA). In our current context, it is sufficient to recognise that these molecules have in common the ability to down-regulate or silence the expression of certain genes. Their actions can be influenced by environmental factors, making them important potential contributors to G × E interactions (Morris and Mattick 2014).
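A cartoon of how one such molecule finds its target may help. In the sketch below (Python; the sequences are purely illustrative, and real target recognition involves the RISC protein complex and much looser pairing rules), a microRNA “seed” region matches a message whose site is its reverse complement:

```python
# Toy microRNA target matching: the "seed" (bases 2-8 of the small RNA)
# pairs with a site in the target mRNA. Sequences are illustrative only.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[base] for base in reversed(rna))

mirna = "UGAGGUAGUACCGGAUCAAGGU"     # illustrative 22-nt microRNA
seed = mirna[1:8]                    # seed region, bases 2-8: "GAGGUAG"
mrna_site = "CUACCUC"                # candidate site in a target message

print(mrna_site == reverse_complement(seed))   # True: site would be silenced
```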

For example, the body must respond to changes in environmental stress, such as oxygen shortage and nutrient deprivation, and some of the mechanisms for so doing involve microRNAs (Spriggs et al. 2010). There is also growing evidence that environmental factors including cigarette smoke, bisphenol A and exposure to certain metals can lead to alterations in gene expression via pathways involving microRNAs (Hou et al. 2011).


3.2.3 Genes Do Influence Behaviour


Some role for environmental factors is, of course, entirely in keeping with the findings of the twin studies, adoption studies and pedigree analysis discussed previously. So too, however, is the expectation that some aspects of behaviour will be influenced by genetic factors. Experiments are starting to identify specific genes implicated in predisposition to different behavioural abnormalities. These include the genes for brain-derived neurotrophic factor (BDNF), neurogenic locus notch homolog protein 4 (NOTCH4), neural cell adhesion molecule (NCAM) and the serotonin transporter (5HTT) (Raine 2008). However, one example, the gene for monoamine oxidase A (MAOA), has been studied in far greater detail than any other, and it is to this that we now turn.

Monoamine oxidase A: Originally found in blood serum, serotonin is a biological molecule with a variety of functions, acting both as a hormone in the peripheral blood system and as a neurotransmitter within the Central Nervous System (Rang et al. 2007). Later identified chemically as 5-Hydroxytryptamine (5-HT), serotonin is structurally similar to the other signalling molecules noradrenalin4 and dopamine. Abnormally high or low concentrations of these compounds can have behavioural consequences; low concentrations of noradrenalin, for example, have been suggested to contribute to depression, whereas excessive concentrations are associated with manic behaviour (Rang et al. 2007).

Monoamine oxidase A (MAOA5) is an enzyme responsible for inactivation of this class of neurotransmitters, converting them into molecules which, after further processing, are excreted in the urine. Mutations which lead to low-activity or non-functional versions of MAOA will therefore cause these neuroactive compounds to persist for an extended period of time.

It has been claimed that, in any community, over 50 % of crime will be conducted by fewer than 10 % of the families (e.g. Moffitt 2005). The observation that antisocial behaviour appears to cluster within certain families added to the long-standing suspicion that genetic factors play a role in conduct of this kind.

This was brought into sharp relief via a landmark study of genetics and metabolism in a Dutch family (Brunner et al. 1993). Several male members of an extended family exhibited impulsive aggressive behaviour accompanied by borderline mental retardation. Analysis of compounds in the men’s urine and mapping of their DNA identified a single letter change (a “point mutation”6) within the gene for monoamine oxidase A (MAOA). This alteration introduced a premature “stop” signal within the sequence for the protein, leading to production of a shortened and non-functional version. The men with this mutation all had unusually high levels of neurochemicals in their urine and therefore, presumably, within the brain. Female carriers, with one copy of the mutant MAOA gene and one normal copy, were unaffected.7 This correlation of a specific mutation with a behavioural phenotype was hugely significant and has led on to many more nuanced experiments looking into the influence of that mutation, as well as a number of other specific changes, some of which are described below.
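The effect of such a premature “stop” can be illustrated with a toy translation routine. In the sketch below (Python; the sequence and the five-codon table are cut down for illustration and bear no relation to the real MAOA gene), a single-letter change truncates the protein:

```python
# Toy translation showing a point mutation that creates a premature stop.
# Tiny subset of the genetic code; the real code has 64 codons.
CODON_TABLE = {"ATG": "M", "TGG": "W", "AAA": "K", "GGC": "G", "TGA": "*"}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "*":        # stop codon: translation ends here
            break
        protein.append(amino_acid)
    return "".join(protein)

normal = "ATGTGGAAAGGC"   # M-W-K-G
mutant = "ATGTGAAAAGGC"   # single G->A change: TGG (Trp) becomes TGA (stop)
print(translate(normal), translate(mutant))   # MWKG vs M (truncated protein)
```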

Studies with transgenic mice: Before returning to consideration of human subjects, it is worth drawing attention to another study of MAOA mutation, this time in mice (Cases et al. 1995). Building upon the work of Brunner and colleagues, an international consortium produced transgenic mice in which the MAOA gene had been intentionally inactivated (“knocked out” in the jargon).8 This was achieved by insertion of a second gene, coding (as it happens) for interferon beta, within the coding region for monoamine oxidase A, thereby disrupting production of the latter.
