Simon R J Maxwell
The Characteristics of a Good Prescriber
Prescribing is an essential skill for most doctors and is the major tool used to restore or preserve the health of their patients. Prescribing is also associated with significant risks related to adverse drug reactions and interactions, often caused by inappropriate prescribing, as well as prescribing errors. Prescribing effectively and safely represents a significant intellectual challenge that combines knowledge, judgement and skills, and can be subdivided into ten basic components (Fig. 1.1).
Figure 1.1 The subcomponents of the prescribing process.
The process of prescribing can be conveniently divided into ten subcompetencies, each of which involves a mix of knowledge, judgement and skills. A ‘prescription’ is sometimes referred to more specifically as a ‘medication order’. Steps 2 and 3 should try to take the patient’s views into consideration to establish a therapeutic partnership. A drug is a single chemical substance that has pharmacological effects on the body and is administered as a ‘medicine’, a formulation containing one or more drugs mixed with other ingredients.
Taking into account the patient’s ideas, concerns and expectations; seeking to form a partnership with the patient when selecting treatments; and making sure that they understand and agree with the reasons for taking the medicine
It is an increasing challenge for junior prescribers to acquire these skills and put them into practice in the face of growing pressures related to the increasing number of drugs available, the greater complexity of treatment regimens taken by individual patients (‘polypharmacy’), and the greater number of elderly and vulnerable patients being treated. The following sections review in more detail some of the knowledge and skills that form the basis of good prescribing.
Basic Principles of Clinical Pharmacology
Clinical pharmacology is the study of the way that drugs act on and are handled by the human body, and it is the science that underpins rational prescribing. It can be divided very simply into two aspects (Fig. 1.2): pharmacodynamics, the study of what the drug does to the body, and pharmacokinetics, the study of what the body does to the drug.
Figure 1.2 Relationship between pharmacodynamics and pharmacokinetics.
Basic pharmacodynamic studies involve exposing cells or tissues to varying doses (concentrations) of a drug and observing the response, to describe a ‘dose–response’ curve. For prescribers, the situation is more complex, because tissue drug exposure depends on how effectively drug molecules are absorbed into the body, distributed to their site of action and subsequently eliminated from the body by metabolism and excretion. This process is pharmacokinetics and is described by plotting drug concentration over time.
Both have an important influence on the way in which an individual responds to a prescribed drug and provide an understanding of why there is so much potential inter-individual variation in the response to the same prescription.
Drugs act to restore normal function in diseased cells and tissues by acting on receptors or other target molecules in the affected organ (Table 1.1). Binding to target receptors exerts a biological effect, either by initiating new events (e.g. smooth muscle contraction, synthesis of new proteins) or by blocking the actions of endogenous substances (e.g. neurotransmitters, hormones).
Formation of the drug–receptor complex is usually reversible and the proportion of receptors occupied (and thus the response) is directly related to the concentration of the drug. This reversibility means that prescribers usually have to plan a series of repeated drug administrations to maintain the desired therapeutic outcome. Some drug–receptor interactions are so strong that they are effectively irreversible (e.g. aspirin acting on the enzyme cyclo-oxygenase). The extent of the reversibility is determined by the strength of the chemical bond that is formed, often referred to as the ‘affinity’ of the binding. Differences in affinity mean that some drug ligands may show ‘selectivity’ for different types of receptors, allowing receptors to be further divided into ‘subtypes’. For example, adrenoceptors can be subtyped into α, β1 and β2 on the basis of their binding and responsiveness to the endogenous agonists adrenaline and noradrenaline. Agonist or antagonist drugs that are considered to be ‘selective’ for one receptor subtype can still produce significant effects at other subtypes if a high enough dose is given. For instance, ‘cardioselective’ beta-blocking drugs have anti-anginal effects on the heart (β1) but may still cause bronchospasm in the lung (β2), and should therefore generally be avoided in asthmatic patients.
The relationship between the concentration of drug to which tissues are exposed and the response that is achieved can be plotted as a dose–response curve. When drug dose (x-axis) is plotted against drug response (y-axis) with dose on a base-10 logarithmic scale, a sigmoidal dose–response curve results (Fig. 1.3). Progressive increases in drug dose (which for most drugs is proportional to the plasma drug concentration) produce an increasing response, but this occurs over a relatively narrow part of the overall concentration range; further increases in drug dose beyond this range produce little extra effect. The clinical implication of this relationship is that simply increasing the drug dose may not produce any further benefit for patients but may cause adverse effects.
Figure 1.3 Dose–response curve.
A plot of drug response against drug dose on a log10 scale. The purple curve represents this relationship for the beneficial therapeutic effect of the drug. The maximum response on the curve is referred to as the Emax and the dose (or concentration) producing half this value (Emax/2) is the ED50 (or EC50). Clinical responses that might be plotted in this way include change in heart rate, blood pressure, gastric pH or blood glucose. It is also possible to plot a curve for the adverse effects of a drug, which are usually dose related as well. The grey curve illustrates the relationship for the most important adverse effect of this drug, which requires much higher doses to become manifest. The ratio between the ED50 values for the adverse and beneficial effects is known as the ‘therapeutic index’. This nominal figure expresses how much margin prescribers have when choosing a dosage that will provide beneficial effects without also causing adverse effects. Adverse effects that occur at doses beyond the therapeutic range (purple box) are normally called ‘toxic effects’, while those occurring within it and below it are known as ‘side effects’ and ‘hypersusceptibility effects’ respectively. In this example, the therapeutic index is 100/0.1 = 1000.
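The shape of this curve can be sketched with the Hill (Emax) equation, a standard way of modelling dose–response relationships. The Emax, EC50 and Hill coefficient values below are purely illustrative, not data for any particular drug:

```python
def response(dose, emax=100.0, ec50=1.0, hill=1.0):
    """Hill (Emax) model: E = Emax * D^n / (EC50^n + D^n).
    The response rises steeply around EC50 and plateaus at Emax."""
    return emax * dose**hill / (ec50**hill + dose**hill)

print(response(1.0))                     # a dose equal to EC50 gives the half-maximal effect (50.0)
print(response(10.0), response(100.0))   # ~90.9 then ~99.0: tenfold dose increases add little extra
```

Plotting `response` against the logarithm of dose reproduces the sigmoid shape of Fig. 1.3, including the plateau where further dose increases yield almost no additional effect.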
When drugs are used in clinical practice, the prescriber is unable to construct a careful dose–response curve for each individual patient. Therefore, most drugs are licensed for use within a recommended dose range that is expected to be close to the top of the dose–response curve for most patients. This ensures that most patients will achieve a good clinical response without the need for frequent review and dose increases. It also means that it is sometimes possible to achieve the desired therapeutic response at doses towards the lower end of the recommended range.
Agonists and antagonists
Agonists are drugs that bind to a receptor and initiate a biological response (e.g. adrenaline causing an increase in heart rate via β1 adrenoreceptors in the heart).
Antagonists are drugs that bind to a receptor but do not initiate any biological response. Their importance is that they can block the effect of agonists (e.g. atenolol antagonizing the effect of adrenaline at β1 adrenoreceptors in the heart).
‘Competitive antagonists’ will lead to a shift in the agonist dose–response curve to the right because higher agonist concentrations are now required to achieve a given percentage receptor occupancy (and therefore effect). Their effect can be overcome by giving the agonist at a sufficiently high concentration (i.e. it is surmountable). Examples of competitive antagonists used in clinical practice are atenolol, naloxone, atropine, and cimetidine.
‘Non-competitive’ antagonists inhibit the effect of an agonist in ways other than direct competition for receptor binding with the agonist (e.g. by affecting the second-messenger system). This makes it impossible to achieve the maximum response even at very high agonist concentrations. Irreversible antagonists can be considered a particular form of non-competitive antagonist, characterized by antagonism that persists even after the antagonist drug has been removed. Common examples are aspirin and omeprazole.
Efficacy and potency
‘Efficacy’ is the term used to describe the maximum response that a drug can achieve when all available receptors or binding sites are occupied. This is equivalent to Emax on the dose–response curve.
‘Therapeutic efficacy’ is a term used to describe the maximum response of drugs that produce the same therapeutic effects on the body but do so via different pharmacological mechanisms (e.g. loop diuretics have greater therapeutic efficacy than thiazide diuretics).
‘Potency’ is a term used to describe the amount of a drug required for a given response (i.e. more potent drugs produce biological effects at lower doses).
The adverse effects of drugs are usually dose-related in a similar way to the beneficial effects although the dose–response curve for these adverse effects is normally shifted to the right (see Fig. 1.3). The ED50 points for each curve indicate that the ratio between the doses that have similar proportionate effects on the two outcomes is 100/0.1=1000. This ratio is known as the ‘therapeutic index’. In reality, drugs have multiple potential adverse effects but the concept of therapeutic index is usually reserved for those requiring dose reduction or discontinuation. For most drugs, the therapeutic index is greater than 100 but there are some notable exceptions with therapeutic indices less than 10, which are in common use (e.g. digoxin, warfarin, insulin, phenytoin, opioids). The challenge for prescribers is to titrate doses carefully to establish the dose for individual patients that maximizes benefits but avoids adverse effects.
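The therapeutic index arithmetic can be made concrete with a short calculation. The ED50 values below are the illustrative figures from Fig. 1.3, in arbitrary dose units, not measurements for a real drug:

```python
def therapeutic_index(ed50_adverse, ed50_beneficial):
    """Ratio of the dose producing half-maximal adverse effects to the dose
    producing half-maximal beneficial effects (higher = wider safety margin)."""
    return ed50_adverse / ed50_beneficial

# The illustrative drug of Fig. 1.3: beneficial ED50 = 0.1, adverse ED50 = 100
print(therapeutic_index(100, 0.1))  # ~1000: a wide safety margin
# A hypothetical narrow-margin drug, where the adverse ED50 is only 4x the beneficial ED50
print(therapeutic_index(4, 1))      # 4.0: careful titration and monitoring needed
```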
Desensitization and withdrawal
Desensitization refers to the situation where the response to a drug diminishes when it is given continuously or repeatedly. The response may be restored by increasing the dose of the drug but, in some cases, the tissues may become completely refractory.
‘Tachyphylaxis’ describes desensitization that occurs very rapidly, sometimes due to depletion of chemicals that may be necessary for drug action (e.g. a stored neurotransmitter).
‘Tolerance’ describes a more gradual loss of response to a drug that occurs over days or weeks, which may be due to changes in receptor numbers or the development of counter-regulatory physiological changes that offset the actions of the drug (e.g. activation of the sympathetic and renin–angiotensin systems in response to diuretic therapy).
When desensitization arises because of chemical, hormonal and physiological changes that offset the actions of a drug, discontinuation may allow these changes to cause rebound ‘withdrawal effects’. Examples of drugs associated with desensitization and withdrawal effects are shown in Table 1.2.
Table 1.2 Examples of drugs associated with withdrawal effects

| Drug | Symptoms and signs |
| --- | --- |
| Alcohol | Anxiety, panic, paranoid delusions, hallucinations, agitation, restlessness, confusion, tremor, tachycardia, ataxia, reduced consciousness, disorientation, seizures |
| Benzodiazepines | As for alcohol |
| Opioids | Anxiety, lacrimation, rhinorrhoea, sneezing, yawning, abdominal cramping, leg cramping, nausea, vomiting, diarrhoea, dilated pupils |
| Selective serotonin reuptake inhibitors | Dizziness, sweating, nausea, insomnia, tremor, confusion, nightmares |
| Corticosteroids | Weakness, fatigue, decreased appetite, weight loss, nausea, vomiting, diarrhoea, abdominal pain, hypotension, hypoglycaemia |
| Nitrates | Chest pain, dyspnoea, palpitations |
Pharmacokinetics is the study of the rate and extent to which drugs are absorbed into the body and distributed to the body tissues and the rate and pathways by which drugs are eliminated from the body by metabolism and excretion (Fig. 1.4). An understanding of these processes is important for prescribers because they form the basis on which the optimal dose regimen is chosen and explain the majority of the inter-individual variation in the response to drug therapy.
Figure 1.4 Pharmacokinetics summary.
A simple view of the main phases of pharmacokinetics. Most drugs are taken orally. Drug molecules are subsequently absorbed from the intestinal lumen, enter the portal venous system and are conveyed to the liver, where they may be subject to first-pass metabolism. Those passing through unchanged enter the systemic circulation, from which they may diffuse out into the surrounding interstitial fluid and into the intracellular fluid. Drug that remains in circulating plasma will be continuously exposed to the possibility of liver metabolism and renal excretion. These processes will lead to the elimination of all of the drug with time (inset graph) unless further doses are administered. First-pass metabolism is avoided if drugs are administered via the buccal or rectal mucosa, or parenterally (e.g. by IV injection).
The term ‘bioavailability’ describes the proportion of an administered dose that reaches the systemic circulation unchanged.
Absorption is the process by which drug molecules gain access to the bloodstream from the site of drug administration. The rate and extent of drug absorption depend on the route of administration (see Fig. 1.4). The ‘enteral’ routes of absorption are those that involve administration via the gastrointestinal tract; other routes are ‘parenteral’. The commonly used routes are:
- Oral (PO) administration is the commonest route because it is simple, convenient and can readily be used by patients to self-administer their medicines. For successful oral administration it is necessary that the medicine is swallowed, the drug survives exposure to gastric acid, avoids unacceptable food binding, is absorbed across the small bowel mucosa into the portal venous system, and survives metabolism by gut wall or liver enzymes (‘first-pass metabolism’). Therefore, absorption is usually incomplete following oral administration.
- Buccal and sublingual (SL) administration enable rapid absorption into the systemic circulation without the uncertainties associated with the oral route (e.g. organic nitrates for angina pectoris, triptans for migraine, opioid analgesics).
- Intravenous (IV) administration enables all of a dose to enter the systemic circulation reliably, without any concerns about absorption or first-pass metabolism (i.e. the dose is 100% bioavailable), and to achieve a high plasma concentration rapidly. It is the ideal route when treating very ill patients in whom a rapid, certain effect is critical to outcome (e.g. benzylpenicillin for meningococcal meningitis).
- Subcutaneous (SC) administration is ideal for drugs that require parenteral administration, are absorbed well from subcutaneous fat, and might ideally be injected by patients themselves (e.g. insulin, heparin).
- Inhaled (INH) administration allows drugs to be delivered directly to a target in the respiratory tree, usually the small airways (e.g. salbutamol and beclomethasone for asthma). The most common mode of delivery is the metered-dose inhaler, but its success depends on some degree of manual dexterity and timing; patients who find these difficult may use a ‘spacer’ device to improve drug delivery. A special mode of inhaled delivery is via a ‘nebulized’ (NEB) solution, created by using pressurized oxygen or air to break up solutions and suspensions into small aerosol droplets that can be inhaled directly from the mouthpiece of the nebulizer.
- Topical (TOP) administration involves direct application to the site of action (e.g. skin, eye, ear). This has the advantage of achieving a sufficient concentration at this site while minimizing systemic exposure and the risk of unwanted adverse effects elsewhere.
Distribution is the process by which drug molecules move from their site of absorption into the bloodstream and then enter the extracellular fluid (and potentially cells) in various tissues around the body, including the site of action. The rate and extent at which distribution occurs will depend upon the drug’s molecular size and lipid solubility, and the extent to which it binds reversibly to plasma proteins. Most drug molecules diffuse passively across capillary walls down a concentration gradient into the extracellular (interstitial) fluid surrounding organs and tissues. There they will bind reversibly with target molecules and other cellular proteins. Distribution to the tissues will continue until the concentration of free drug molecules in the extracellular fluid and (if the drug can enter cells) the intracellular space is equal to the concentration in the plasma and equilibrium is achieved. The movement of drug molecules will then reverse, because the plasma drug concentration begins to fall as those that remain in the blood are subject to elimination by metabolism or excretion. This reverse movement of the drug away from the tissues will be prevented if further drug doses are administered and absorbed into the plasma.
Volume of distribution
The apparent volume of distribution (Vd) is the volume into which a drug appears to have distributed shortly after IV injection. It is calculated from the equation C0 = D/Vd (i.e. Vd = D/C0), where D is the amount of drug given and C0 is the initial plasma concentration. Drugs that are highly bound to plasma proteins may have a Vd < 10 L (e.g. warfarin, aspirin), while those that diffuse into the extracellular fluid but have low lipid solubility, and so do not enter cells, may have a Vd between 10 and 30 L (e.g. gentamicin, amoxycillin). Drugs that are lipid soluble and highly tissue bound may have a Vd > 100 L (e.g. digoxin, amitriptyline); Vd is an ‘apparent’ volume because such values can greatly exceed the actual volume of the body. For a given clearance, drugs with a larger Vd have longer elimination half-lives and take longer to be cleared from the body.
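The Vd calculation can be illustrated numerically. The doses and concentrations below are invented for illustration and are not real pharmacokinetic data:

```python
def apparent_vd(dose_mg, c0_mg_per_l):
    """Apparent volume of distribution: Vd = D / C0, where D is the IV dose and
    C0 the plasma concentration extrapolated back to the moment of injection."""
    return dose_mg / c0_mg_per_l

# A drug largely retained in plasma (highly protein bound): a 10 mg dose gives C0 = 1.25 mg/L
print(apparent_vd(10, 1.25))    # 8.0 L -- well below total body water
# A lipid-soluble, highly tissue-bound drug: a 1 mg dose gives C0 = 0.005 mg/L
print(apparent_vd(1, 0.005))    # ~200 L -- exceeds body volume, hence 'apparent'
```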
Metabolism is the process by which drugs are structurally altered from a lipid-soluble form suitable for absorption and distribution to a water-soluble form that is necessary for excretion. There are two phases of drug metabolism. ‘Phase I metabolism’ involves oxidation, reduction or hydrolysis to make drug molecules suitable for Phase II reactions or for excretion. Oxidation is much the commonest form of Phase I reaction and involves chiefly members of the cytochrome P450 family of membrane-bound enzymes in the smooth endoplasmic reticulum of the liver cells. ‘Phase II metabolism’ involves combining Phase I metabolites with an endogenous substrate (e.g. glucuronidation, sulphation, methylation) to form an inactive conjugate that is much more water soluble. This is necessary to enable excretion because lipid-soluble metabolites will simply diffuse back into the body.
Excretion is the process by which drugs and their metabolites are removed from the body. They may leave in the urine, bile, faeces or expired air. Renal excretion is the main route of elimination for low-molecular-weight drugs and metabolites that are sufficiently water soluble to avoid reabsorption from the renal tubule. For some drugs, active secretion from the peritubular capillaries into the proximal tubule, rather than glomerular filtration, is the predominant mechanism of excretion (e.g. methotrexate, penicillin). Faecal excretion is the predominant route of elimination for larger-molecular-weight drugs, including those that are excreted in the bile after conjugation with glucuronide in the liver, and for any drugs that are not absorbed. Some drugs or metabolites that are excreted in the bile are sufficiently lipid soluble to be reabsorbed through the gut wall and return to the liver via the portal vein (see Fig. 1.4). This recycling is known as the ‘entero-hepatic circulation’ and can significantly prolong the residence of drugs in the body. ‘Clearance’ is the term used to describe the volume of plasma that is completely cleared of drug per unit time (expressed in mL/min); it is usually the result of hepatic metabolism, renal excretion, or a combination of both.
For most drugs, elimination is a high-capacity process that does not become saturated even at high dosage. Within this range, the rate of elimination is proportional to the amount of drug in the body (i.e. the higher the drug concentration, the faster the rate of elimination). This results in so-called ‘first-order kinetics’, where a constant fraction of the drug remaining in the body is eliminated in a given time and the decline in concentration over time is exponential (Fig. 1.5). The importance of this relationship to prescribers is that it makes the effect of increasing doses on plasma concentration predictable: a doubled dose leads to a doubled concentration at all time points. Half-life (t½) is the time taken for the plasma concentration of a drug eliminated by first-order kinetics to halve. For a few drugs in common use (e.g. phenytoin, alcohol), elimination capacity is exceeded (saturated) within the therapeutic range of dosage, resulting in ‘zero-order (saturable) kinetics’, where a constant amount of the drug remaining in the body is eliminated in a given time. If the rate of administration exceeds the maximum rate of elimination, the drug will accumulate progressively, leading to serious toxicity.
Figure 1.5 First-order kinetics.
The decline of drug concentration with time when elimination is a first-order process. The time period required for the plasma drug concentration to halve (half-life, t½) remains constant throughout the elimination process. K=elimination rate constant, e=base of the natural logarithm, C0=concentration at time zero, t=time.
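The legend’s relationship can be sketched in a few lines of code. The starting concentration and rate constant are hypothetical values chosen only to show the behaviour:

```python
import math

def concentration(c0, k, t):
    """First-order elimination: C(t) = C0 * e^(-k*t)."""
    return c0 * math.exp(-k * t)

def half_life(k):
    """Half-life for first-order elimination: t1/2 = ln(2) / k."""
    return math.log(2) / k

c0, k = 100.0, 0.1               # mg/L and per hour -- hypothetical values
t_half = half_life(k)            # about 6.93 h
print(concentration(c0, k, t_half))       # ~50 mg/L: the concentration halves every half-life
print(concentration(c0, k, 2 * t_half))   # ~25 mg/L
# Dose-proportionality: doubling C0 doubles the concentration at every time point
print(concentration(2 * c0, k, 5.0) / concentration(c0, k, 5.0))  # 2.0
```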
For most drugs, it is necessary to maintain therapeutic effect beyond the first dose, often for several days (e.g. antibiotics) or even for months or years (e.g. antihypertensives, lipid-lowering drugs). This requires prescribers to plan a regimen of repeated doses, indicating the size of each individual dose, the frequency of dose administration and the overall duration of treatment. When doses are repeated, the drug will progressively accumulate in the body and its eventual plasma concentration will be considerably higher than that after a single dose (Fig. 1.6). Accumulation continues until a ‘steady state’ is reached, at which the rate of elimination has increased to equal the rate of administration. It takes five half-lives to achieve a plasma concentration that is 97% of the steady state level (irrespective of the dosing interval). This is important for prescribers because it means that the full effects of a new prescription, or dose titration, for a drug with a long half-life (e.g. digoxin, half-life approximately 36 hours) may not be apparent for several days. In these circumstances, the target plasma concentration can be achieved more rapidly by giving an initial ‘loading dose’ that is much larger than the maintenance dose and is equivalent to the amount of drug required in the body at steady state (Fig. 1.6). This achieves a peak plasma concentration close to the plateau concentration, which can then be maintained by successive maintenance doses.
Figure 1.6 Repeated drug doses.
The plasma drug concentration rises after successive daily oral doses. The drug’s half-life is 30 hours so each successive dose is administered at a time when there is still drug present in the body. The peak, average and trough concentrations steadily increase as drug accumulates in the body. Steady state is reached after approximately five half-lives when the concentration of drug in the body is sufficient to mean that the rate of elimination (the product of concentration and clearance) is equal to the rate of drug absorption (the product of rate of administration and bioavailability). The long half-life means that it takes 6 days for steady state to be achieved and, for most of the first 3 days of treatment, plasma drug concentrations are below the therapeutic range. This problem can be overcome if a larger loading dose is used to achieve the quantity of drug in the body at steady state more rapidly. It still requires five half-lives to get to steady state but therapeutic concentrations are present for most of this period.
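The accumulation shown in Fig. 1.6 can be simulated with a simple one-compartment model. Everything here is hypothetical (100 mg IV bolus once daily, 30-hour half-life, 50 L volume of distribution) and absorption is treated as instantaneous, so this is a sketch of the principle rather than a model of any real drug:

```python
import math

def trough_concentrations(doses_mg, interval_h, vd_l, t_half_h):
    """Trough plasma concentrations (mg/L) after each dose of a repeated IV bolus
    regimen, using a one-compartment model with first-order elimination."""
    k = math.log(2) / t_half_h                    # elimination rate constant (per hour)
    amount_mg = 0.0
    troughs = []
    for dose in doses_mg:
        amount_mg += dose                         # dose enters instantaneously
        amount_mg *= math.exp(-k * interval_h)    # first-order decay over one interval
        troughs.append(amount_mg / vd_l)
    return troughs

# Maintenance dosing only: troughs climb towards a plateau over ~5 half-lives
maint = trough_concentrations([100] * 10, 24, 50, 30)
print(f"trough after dose 1: {maint[0]:.2f} mg/L; after dose 10: {maint[-1]:.2f} mg/L")

# A loading dose approximating the steady-state body content reaches the plateau
# almost immediately (235 mg is back-calculated for this model, not a real regimen)
loaded = trough_concentrations([235] + [100] * 9, 24, 50, 30)
print(f"trough after the loading dose: {loaded[0]:.2f} mg/L")
```

In this model the sixth daily trough is already more than 95% of the eventual plateau, consistent with the five-half-lives rule of thumb, while the loading-dose regimen starts at the plateau from the first dose.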
The steady state achieved by regular drug administration actually involves fluctuations in drug concentration between peaks just after administration and troughs just before the next dose. Recommended dosing regimens are chosen to keep the troughs in the effective range while the peaks are not high enough to cause adverse effects. The dose interval is a compromise between convenience for the patient and a constant level of drug exposure. More frequent administration of the same total daily dose (e.g. 25 mg four times daily rather than 100 mg once daily) achieves a smoother plasma concentration profile but is much more difficult to sustain. For drugs with short half-lives that would otherwise need frequent administration, a solution is the use of ‘modified-release’ formulations. These allow drugs to be absorbed more slowly from the gastrointestinal tract and provide a smoother plasma concentration profile, which is especially important for drugs with a low therapeutic index (e.g. levodopa).
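The trade-off between dose interval and smoothness of exposure can be quantified at steady state. The drug parameters below (6-hour half-life, 40 L volume of distribution) and both regimens are hypothetical, and IV bolus dosing is assumed for simplicity:

```python
import math

def steady_state_peak_trough(dose_mg, interval_h, vd_l, t_half_h):
    """Steady-state peak and trough concentrations (mg/L) for repeated IV bolus
    dosing in a one-compartment model with first-order elimination."""
    r = math.exp(-math.log(2) * interval_h / t_half_h)  # fraction remaining per interval
    peak = (dose_mg / vd_l) / (1 - r)                   # geometric-series accumulation
    return peak, peak * r

# Same 100 mg total daily dose, split differently
for dose, interval in [(100, 24), (25, 6)]:
    peak, trough = steady_state_peak_trough(dose, interval, 40, 6)
    print(f"{dose} mg every {interval} h: peak {peak:.2f} mg/L, trough {trough:.2f} mg/L")
```

In this model the once-daily regimen swings about 16-fold between peak and trough, whereas 25 mg four times daily swings only 2-fold around the same average concentration.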
The recommendations of how to prescribe drugs safely and effectively (e.g. dose, route, frequency, duration) for specific indications are based on average dose–response data derived from observations in many individuals. Prescribers can never be certain about the response for particular patients that they treat. However, much of the variability in response is predictable and good prescribers are able to anticipate it and adjust their prescriptions accordingly to maximize the chances of benefit and minimize harm. Different responses may be due to several factors:
- Pharmacodynamic variation arises when an equivalent drug concentration results in a different response because of variation in receptor number and structure, receptor-coupling mechanisms and physiological changes in target organs resulting from differences in genetics, age and health. For example, the beneficial natriuresis produced by a given dose of the loop diuretic furosemide is often significantly reduced in patients with renal impairment, while confusion caused by opioid analgesics is more likely in the elderly.
- Pharmacokinetic variation arises when the drug concentrations achieved by equivalent doses vary; it is a much more important cause of the inter-individual variation in drug responses encountered in clinical practice.
- Pharmacogenetic variation arises because of genetic polymorphisms that influence both the pharmacodynamic response to and the pharmacokinetic handling of drugs. The impact of genetics on the response to specific drugs (‘pharmacogenetics’) is gradually being uncovered and will eventually enable prescribers to identify in advance the good responders and those who will suffer adverse effects. This move towards ‘personalized medicine’, based on genetic testing, has the potential to improve the benefit–risk ratio of the drugs that we already have, and also to improve the chances of drug discovery (many drugs are lost in development because they cause an unacceptable rate of adverse effects in average, unselected patient groups).
Important examples of variations in drug responses are given in Table 1.3. The uncertain outcome of drug therapy for individual patients emphasizes the need for prescribers to monitor the effects of treatment (see below).
Adverse Drug Reactions
The decision to prescribe a drug always involves a judgement of the balance of therapeutic benefit and risk of an adverse outcome. Both prescribers and patients tend to focus on the former but a truly informed prescribing decision requires consideration of both.
An adverse drug reaction (ADR) is an unwanted or harmful reaction experienced following the administration of a drug or combination of drugs under normal conditions of use, which is suspected to be related to the drug. ADRs can be conveniently divided into those that occur at concentrations beyond the normal therapeutic range (‘toxic effects’), those that occur at concentrations within the therapeutic range (‘side effects’) and those that occur at concentrations below the therapeutic range (‘hypersusceptibility effects’) (see Fig. 1.3). ADRs are important because they reduce the quality of life of patients, cause diagnostic confusion, undermine the confidence of patients in their healthcare, and consume scarce NHS resources because of the need for extra care of patients and the costs of litigation.
ADRs are a common cause of illness: they account for around 3% of consultations in primary care and 7% of emergency admissions to hospital, and affect around 15% of hospital inpatients. Many ‘disease’ presentations are eventually attributed to ADRs, emphasizing the importance of always taking a careful drug history from all patients (Box 1.1). ADRs appear to be increasing in prevalence because of the increasing age and vulnerability of patients, polypharmacy (with its higher risk of drug interactions), and the increasing trend towards self-medication (over-the-counter preparations, herbal or traditional medicines, and medicines obtained from internet pharmacies). Retrospective analyses of ADRs have shown that around half could have been avoided if the prescriber had taken more care in anticipating the potential hazards of drug therapy. Non-steroidal anti-inflammatory drug (NSAID) use alone accounts each year for 65 000 emergency admissions, 12 000 ulcer bleeding episodes and 2000 deaths in the UK. In many cases, the patients were at increased risk because of their age, interacting drugs (e.g. aspirin, warfarin) or a past history of peptic ulcer disease. Drugs that commonly cause ADRs are listed in Table 1.4 and well-recognized risk factors for the occurrence of ADRs in Box 1.2.
Table 1.4 Drugs that commonly cause adverse drug reactions

| Drug or drug class | Important ADRs |
| --- | --- |
| ACE inhibitors (e.g. lisinopril) | Renal impairment, hyperkalaemia |
| Antibiotics (e.g. amoxycillin) | Nausea, diarrhoea |
| Anticoagulants (e.g. warfarin, heparin) | Bleeding |
| Antipsychotics (e.g. haloperidol) | Falls, sedation, confusion |
| Aspirin | Gastrotoxicity (dyspepsia, gastrointestinal bleeding) |
| Benzodiazepines (e.g. diazepam) | Drowsiness, falls |
| Beta-blockers (e.g. atenolol) | Cold peripheries, bradycardia, lethargy |
| Calcium-channel blockers (e.g. amlodipine) | Ankle oedema |
| Digoxin | Nausea, anorexia, bradycardia |
| Diuretics (e.g. furosemide, bendroflumethiazide) | Dehydration, electrolyte disturbance (hypokalaemia, hyponatraemia), hypotension, renal impairment |
| NSAIDs (e.g. ibuprofen) | Gastrotoxicity (dyspepsia, gastrointestinal bleeding) |
| Opioid analgesics (e.g. morphine) | Nausea, vomiting, confusion, constipation |