
Chapter 17 Drug therapy and poisoning




Drug therapy





The patient


The prerequisite of any form of therapeutic intervention is a reliable diagnosis or, at least, an assessment of clinical need. An accurate diagnosis ensures that a patient is not exposed, unnecessarily, to the hazards or costs of a particular intervention. Nevertheless, there are some circumstances when treatment is used in the absence of a clear diagnosis, for example:



In some instances a particular medicine is only effective in subgroups of patients who have a particular disorder. Trastuzumab, for example, is only of value in women with breast cancer whose malignant cells express the HER2 epidermal growth factor receptor. Tailoring treatment, depending on an individual’s specific genetic characteristics or gene expression, is increasingly used. This promising approach has become known as ‘personalized medicine’.


Medicines are also given to otherwise healthy individuals. In such circumstances there must be a very clear imperative to ensure that the benefits to the individual outweigh the harm. Examples include:



Co-morbidity may also significantly alter the way in which conditions are treated, particularly in the elderly. Some examples are shown in Table 17.1.


Table 17.1 Examples of drugs to be avoided in people with co-morbidity

| Co-morbidity | Avoid | Effect |
| --- | --- | --- |
| Parkinson’s disease | Neuroleptics | Exacerbates parkinsonian symptoms (including tremor) |
| Hypertension | Non-steroidal anti-inflammatory drugs | Sodium retention |
| Asthma | Beta-blockers, adenosine | Bronchospasm |
| Respiratory failure | Morphine, diamorphine | Respiratory depression |
| Renovascular disease | ACE inhibitors/antagonists | Reduction in glomerular filtration |
| Chronic heart failure | Trastuzumab | Worsening of heart failure |
| Chronic infections (e.g. tuberculosis, hepatitis C, histoplasmosis) | Cytokine modulators (e.g. etanercept) | Increased risk of exacerbation |








The dose


Appropriate drug dosages will usually have been determined from the results of so-called ‘dose-ranging’ studies during the original development programme. Such studies are generally conducted as randomized controlled trials (RCTs) covering a range of potential doses. Drug doses and dosage regimens may be fixed or adjusted.




Pharmacokinetics


Pharmacokinetics is the study of what the body does to a drug. The intensity of a drug’s action, immediately after parenteral administration, is largely a function of its volume of distribution. This, in turn, is predominantly governed by body composition and regional blood flow. Dosage adjustments, for body weight or surface area, are therefore common (e.g. in cancer chemotherapy) in order to optimize treatment.
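The arithmetic of surface-area-based dosing can be sketched as follows. This illustration uses the Mosteller formula, one of several published formulas for body surface area (BSA); the function names and the figures in the example are illustrative, not taken from this chapter:

```python
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula:
    BSA = sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def bsa_adjusted_dose(dose_mg_per_m2: float, height_cm: float, weight_kg: float) -> float:
    """Scale a per-m^2 dose (as in much cancer chemotherapy) to an individual."""
    return dose_mg_per_m2 * mosteller_bsa(height_cm, weight_kg)

# A 170 cm, 70 kg adult has a BSA of about 1.82 m^2,
# so a 100 mg/m^2 dose corresponds to roughly 182 mg.
dose = bsa_adjusted_dose(100, 170, 70)
```

Other BSA formulas (e.g. Du Bois) give slightly different values; in practice the prescriber follows the formula specified by the relevant chemotherapy protocol.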


The main determinants of a drug’s plasma concentration after oral administration are its bioavailability (the proportion of the unchanged drug that reaches the systemic circulation) and its rate of systemic clearance (by hepatic metabolism or renal excretion). A drug’s oral bioavailability depends on the extent to which it is:



Liver drug metabolism occurs in two stages:

- Phase 1 reactions (oxidation, reduction or hydrolysis), largely mediated by the cytochrome P450 family of enzymes
- Phase 2 reactions, in which the products of phase 1 metabolism are conjugated (e.g. glucuronidation, sulphation or acetylation) to form water-soluble metabolites that can be excreted in urine or bile

Table 17.5 Some inducers and inhibitors of cytochrome P450

| Inducers | Inhibitors |
| --- | --- |
| Carbamazepine | Allopurinol |
| Hyperforin(a) | Amiodarone |
| Nifedipine | Cimetidine |
| Non-nucleoside reverse transcriptase inhibitors (NNRTIs) | Erythromycin, clarithromycin |
| Omeprazole | Fluoxetine, paroxetine |
| Phenobarbital | Grapefruit juice (contains flavonoids) |
| Phenytoin | Imidazoles |
| Rifampicin | Quinolones |
| Ritonavir (see p. 180) | Saquinavir |
|  | Sulphonamides |


a Hyperforin is one of the ingredients of the herbal product known as St John’s wort, used by herbalists to treat depression. Although it is not marketed as a licensed medicine, it is a reminder that drug interactions can occur with alternative, as well as conventional, medicines.



Genetic causes of altered pharmacokinetics

Both presystemic hepatic metabolism, and the rate of systemic hepatic clearance, may vary markedly between healthy individuals.
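The clinical consequence of variable clearance can be illustrated with the standard steady-state relationship C_ss = (F × dose)/(CL × τ). The sketch below is a minimal illustration with invented parameter values, not data from this chapter:

```python
def average_css(dose_mg: float, interval_h: float,
                bioavailability: float, clearance_l_per_h: float) -> float:
    """Average steady-state plasma concentration (mg/L) on repeated dosing:
    C_ss = (F * dose) / (CL * tau)."""
    return (bioavailability * dose_mg) / (clearance_l_per_h * interval_h)

# Identical regimen (200 mg every 12 h, F = 0.5) in two individuals whose
# clearances differ four-fold: the slow metabolizer's average concentration
# is four times higher.
normal = average_css(200, 12, 0.5, clearance_l_per_h=10)   # ~0.83 mg/L
slow = average_css(200, 12, 0.5, clearance_l_per_h=2.5)    # ~3.33 mg/L
```

The same arithmetic underlies why genetically slow metabolizers may need lower doses (or rapid metabolizers higher ones) of drugs cleared by a polymorphic enzyme.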


Variability in the genes that encode drug-metabolizing enzymes (Table 17.6) is a major determinant of the inter-individual differences in the therapeutic and adverse responses to drug treatment. The most common variants involve polymorphisms of the cytochrome P450 (CYP) family of enzymes. The first to be discovered was the polymorphism in the hydroxylation of the antihypertensive agent debrisoquine (CYP2D6). Defective catabolism was shown to be a monogenically inherited trait, present in 5–10% of Caucasian populations and leading to an exaggerated hypotensive response.


Table 17.6 Some genetic polymorphisms involving drug metabolism

| Enzyme | Drug |
| --- | --- |
| Cytochrome P450 CYP1A2 | Amitriptyline, clozapine |
| Cytochrome P450 CYP3A4 | Amlodipine, ciclosporin, nifedipine, sildenafil, simvastatin, protease inhibitors, tacrolimus |
| Cytochrome P450 CYP2C9 | Warfarin, glipizide, losartan, phenytoin |
| Cytochrome P450 CYP2D6 | Beta-blockers, codeine, risperidone, SSRIs, tramadol, venlafaxine |
| Cytochrome P450 CYP2C19 | Clopidogrel(a), cyclophosphamide, diazepam, lansoprazole, omeprazole |
| Plasma pseudocholinesterase | Succinylcholine, mivacurium |
| Thiopurine methyltransferase | Azathioprine, mercaptopurine |
| UDP-glucuronosyl transferase | Irinotecan |
| N-acetyl transferase | Isoniazid |

CYP, cytochrome P450; SSRIs, selective serotonin reuptake inhibitors.

a Clopidogrel is a prodrug and impaired metabolizers have a reduced response.


A substantial number of other drugs – estimated at 15–25% of all medicines in use – are substrates for CYP2D6. The frequencies of the variant alleles show racial variation and a small proportion of individuals may have two or more copies of the active gene. The phenotypic consequences of the defective CYP2D6 include the increased risk of toxicity with those antidepressants or antipsychotics that undergo metabolism by this pathway. Conversely, in individuals with multiple copies of the active gene, there are extremely rapid rates of metabolism and therapeutic failure at conventional doses.


Warfarin is predominantly metabolized by CYP2C9. In most populations, between 2% and 10% are homozygous for an allele that results in low enzyme activity. Such individuals will therefore metabolize warfarin more slowly leading to higher plasma levels, a greater risk of bleeding, and a requirement for lower doses if the international normalized ratio (INR) is to be maintained within the therapeutic range.


Cytochrome P450 (specifically CYP2C19) is inhibited by the proton pump inhibitors, but the consensus view is that co-prescribing with clopidogrel does not cause a significant increase in cardiovascular risk.


Individual differences in the activity of thiopurine methyltransferase (TPMT) determine the doses of mercaptopurine and azathioprine that are used. TPMT activity is therefore measured routinely in children undergoing treatment for acute lymphoblastic leukaemia and in people with Crohn’s disease (see p. 233).


Many drugs undergo metabolism by more than one member of the cytochrome P450 family. Individuals deficient in one enzyme may have normal, or over-expressed, activities of others. Current knowledge (and cost) does not therefore permit predictions of an individual’s dosage requirements for the wide range of drugs for which polymorphisms in metabolism have been identified.


This may, however, become possible in the future, and would contribute – in part – to the prospect of ‘personalized prescribing’ (see p. 899).






Monitoring the effects of treatment


The combination of pharmacokinetic and pharmacodynamic causes of variability makes monitoring of the effects of treatment essential. Three approaches are used:

- monitoring clinical end-points (e.g. blood pressure during antihypertensive treatment)
- monitoring surrogate (pharmacodynamic) markers of effect (e.g. the INR during warfarin treatment)
- monitoring the plasma drug concentration







Adverse drug reactions


Adverse drug reactions (ADRs), defined as ‘the unwanted effects of drugs occurring under normal conditions of use’, are a significant cause of morbidity and mortality. Around 5% of acute medical admissions are the result of ADRs, and around 10–20% of hospital inpatients suffer an ADR during their stay. Unwanted effects of drugs are five to six times more likely in the elderly than in young adults, and the risk of an ADR rises sharply with the number of drugs administered.





Classification

Two types of ADR are recognized.


Type A (augmented) reactions (Table 17.9) are:

- predictable from the drug’s known pharmacological properties
- usually dose-dependent
- common



Table 17.9 Examples of adverse drug reactions

| Type of reaction and drug | Adverse reaction |
| --- | --- |
| Type A (augmented) |  |
| ACE inhibitors | Hypotension; chronic cough |
| ACE antagonists | Hypotension |
| Anticoagulants | Gastrointestinal bleeding; intracerebral haemorrhage |
| Antipsychotics | Acute dystonia/dyskinesia; parkinsonian symptoms; tardive dyskinesia |
| Cytotoxic agents | Bone marrow dyscrasias; cancer |
| Erythromycin | Nausea, vomiting |
| Glucocorticosteroids | Hypoadrenalism; osteoporosis |
| Insulin | Hypoglycaemia |
| Tricyclic antidepressants | Dry mouth |
| Type B (bizarre) |  |
| Benzylpenicillin, radiological contrast media | Anaphylaxis |
| Broad-spectrum penicillins | Maculopapular rash |
| Carbamazepine, lamotrigine, phenytoin, sulphonamides | Toxic epidermal necrolysis; Stevens–Johnson syndrome |
| Carbamazepine, diclofenac, isoniazid, phenytoin, rifampicin | Hepatotoxicity |
| Isotretinoin, SSRIs(a) | Depression; suicidal ideation |

ACE, angiotensin-converting enzyme; SSRIs, selective serotonin reuptake inhibitors.

a In children and adolescents.


Whilst some reactions, such as hypotension with ACE inhibitors, may occur after a single dose, others may develop only after months (pulmonary fibrosis with amiodarone) or years (second malignancies with anti-cancer drugs).


Type B (idiosyncratic) reactions (Table 17.9) have no resemblance to the recognized pharmacological or toxicological effects of the drug. They are:

- unpredictable from the drug’s known pharmacology
- unrelated to dose
- rare, but often serious




Diagnosis

All ADRs mimic some naturally occurring disease and the distinction between an iatrogenic aetiology, and an event unrelated to the drug, is often difficult. Although some effects are obviously iatrogenic (e.g. acute anaphylaxis occurring a few minutes after intravenous penicillin), many are less so. There are six characteristics that can help distinguish an adverse reaction from an event due to some other cause:



- Appropriate time interval. The time interval between the administration of a drug and the suspected adverse reaction should be appropriate. Acute anaphylaxis usually occurs within a few minutes of administration, whilst aplastic anaemia will only become apparent after a few weeks (because of the life-span of erythrocytes). Drug-induced malignancy, however, will take years to develop.

- Nature of the reaction. Some conditions (maculopapular rashes, angio-oedema, fixed drug eruptions, toxic epidermal necrolysis) are so typically iatrogenic that an adverse drug reaction is very likely.

- Plausibility. Where an event is a manifestation of the known pharmacological properties of the drug, its recognition as a type A adverse drug reaction is straightforward (e.g. hypotension with an antihypertensive agent, or hypoglycaemia with an antidiabetic drug). Unless there have been previous reports in the literature, the recognition of type B reactions may be very difficult. The first cases of depression with isotretinoin, for example, were difficult to recognize as an ADR even though a causal association is now acknowledged.

- Exclusion of other causes. In some instances, particularly suspected hepatotoxicity, an iatrogenic diagnosis can only be made after the exclusion of other causes of disease.

- Results of laboratory tests. In a few instances, the diagnosis of an adverse reaction can be inferred from the plasma concentration (Table 17.8). Occasionally, an ADR produces diagnostic histopathological features. Examples include putative reactions involving the skin and liver.

- Results of dechallenge and rechallenge. An event that fails to remit when the drug is withdrawn (‘dechallenge’) is unlikely to be an ADR. The diagnostic reliability of dechallenge, however, is not absolute: if the ADR has caused irreversible organ damage (e.g. malignancy), dechallenge will yield a false-negative response. Rechallenge, involving re-institution of the suspected drug to see if the event recurs, is often regarded as an absolute diagnostic test. This is, in many instances, correct, but there are two caveats. First, it is rarely justifiable to subject a patient to further hazard. Second, some adverse drug reactions develop because of particular circumstances that may not necessarily be replicated on rechallenge (e.g. hypoglycaemia with an antidiabetic agent).




Evidence-based medicine


There is general acceptance that clinical practice should, as far as possible, be based on evidence of benefit rather than theoretical speculation, anecdote or pronouncement.


One of the main applications of ‘evidence-based medicine’ is in therapeutics. Treatments should be introduced into, and used in, routine clinical care only if they have been demonstrated to be effective in appropriate studies. Three approaches are used:




Randomized controlled trials


In this type of study, people with a particular condition are allocated to one of two (and sometimes more) treatments randomly. At the end of the study, the outcomes in the groups are compared. The purpose of the randomized controlled trial is to minimize bias and confounding. In order to minimize patient bias, the patients themselves are generally unaware of their treatment allocations (a ‘single-blind’ trial); and in order to reduce doctor bias, treatment allocations are also withheld from the investigators (a ‘double-blind’ trial). To recruit sufficient numbers of patients, and to examine the effects of treatment in different settings, it is often necessary to conduct the trial at several locations (a ‘multicentre’ trial).


Randomized controlled trials are designed to assess whether one treatment is better than another (a ‘superiority’ trial); or whether one treatment is similar to another (an ‘equivalence’ trial).



Although RCTs were originally introduced to investigate the efficacy of drugs, the methodology can be used for surgical (and other) procedures and medical devices.


There are a number of variants of the conventional randomized controlled trial including cross-over trials, cluster randomized controlled trials, non-inferiority trials and futility trials (see Further Reading).



Assessing randomized controlled trials

In assessing the relevance and reliability of an RCT a number of features need to be taken into account.








Analysis of a superiority trial

The analysis of a superiority trial is based on the premise – the ‘null hypothesis’ – that there is no difference between the treatments. The null hypothesis is rejected if the probability of the observed result occurring by chance, the p value, is less than 1 in 20 (i.e. p < 0.05). There are three caveats.



Scrutiny of the magnitude of the effect, and its 95% confidence intervals (CI), is a far better guide than the p value.
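The point can be illustrated numerically. The sketch below computes an absolute risk difference between two trial arms and its approximate 95% confidence interval using the normal approximation to the binomial; the trial figures are invented for illustration:

```python
import math

def risk_difference_ci(events_a: int, n_a: int, events_b: int, n_b: int,
                       z: float = 1.96):
    """Absolute risk difference between two arms with an approximate
    95% confidence interval (normal approximation to the binomial)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# 30/100 events vs 20/100: a 10-percentage-point difference, but the
# interval spans zero, so superiority has not been shown at the 5% level,
# and the interval also shows how large (or small) the true effect may be.
diff, lo, hi = risk_difference_ci(30, 100, 20, 100)
```

Reporting the interval conveys both the plausible magnitude of the effect and the uncertainty around it, which a bare p value does not.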






Controlled observational trials


Three types of observational study have been used to test the clinical effectiveness of therapeutic interventions:






Case–control studies

This type of study design compares people with a particular condition (the ‘cases’) with those without (the ‘controls’). The approach has predominantly been used to identify epidemiological ‘risk factors’ for specific conditions such as lung cancer (smoking) or sudden infant death syndrome (lying prone); or in the evaluation of potential adverse drug reactions (such as deep venous thrombosis with oral contraceptives).


A case–control design allows an estimation of the odds ratio (OR). The odds of an event are the ratio of the probability of it occurring to the probability of it not occurring; the OR compares the odds of exposure in cases with the odds in controls (Box 17.1).



An OR that is significantly greater than unity indicates a statistical association that may be causal. The OR for deep venous thrombosis and current use of oral contraceptives equals 2–4 (depending on the preparation): this indicates that the risk of developing a deep venous thrombosis on oral contraceptives is between 2 and 4 times greater than the background rate.
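As a worked illustration, the OR and an approximate 95% confidence interval (computed on the log scale, Woolf's method) can be obtained from a 2×2 table; the counts below are hypothetical, not drawn from this chapter:

```python
import math

def odds_ratio(exposed_cases: int, unexposed_cases: int,
               exposed_controls: int, unexposed_controls: int,
               z: float = 1.96):
    """Odds ratio from a 2x2 case-control table, with a 95% CI
    computed on the log scale (Woolf's method)."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)                      # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 40 of 100 cases exposed vs 20 of 100 controls exposed: OR ~2.67,
# and the whole CI lies above unity.
or_, lo, hi = odds_ratio(40, 60, 20, 80)
```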


In some studies, the OR for a particular observation has been found to be significantly less than unity, suggesting ‘protection’ from the condition under study. Some studies of women with myocardial infarction indicated protection in those using hormone-replacement therapies but it has been subsequently shown that the result was due to bias. On the other hand, case–control studies have consistently shown that aspirin and other non-steroidal anti-inflammatory drugs are associated with a reduced risk of colon cancer. This seems to be a causal effect.


Case–control studies claiming to demonstrate the efficacy of a drug need to be interpreted with great care: the possibility of bias and confounding is substantial as was seen in the studies of hormone-replacement therapy and myocardial infarction. Confirmation from one or more RCTs is usually essential.







Statistical analyses


The relevance of statistics is not confined to those who undertake research; it extends to anyone who wants to judge the relevance of research studies to their clinical practice.



The average


Clinical studies may describe, quantitatively, the value of a particular variable (e.g. height, weight, blood pressure, haemoglobin) in a sample of a defined population. The ‘average’ value (or ‘central tendency’ in statistical language) can be expressed as the mean, median or mode depending on the circumstances:

- The mean is the arithmetic average of the observed values.
- The median is the middle value when the observations are ranked in order.
- The mode is the value that occurs most frequently.


In a symmetrically distributed population, the mean, median and mode are the same.
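A quick illustration using Python's standard `statistics` module; the samples are invented:

```python
import statistics

skewed = [2, 3, 3, 5, 7]              # a small, right-skewed sample
mean = statistics.mean(skewed)         # 4: arithmetic average
median = statistics.median(skewed)     # 3: middle ranked value
mode = statistics.mode(skewed)         # 3: most frequent value

# For a symmetric sample the three measures coincide:
symmetric = [1, 2, 2, 3]               # mean = median = mode = 2
```

In a skewed sample the mean is pulled towards the long tail, which is why the median is often preferred for variables such as length of hospital stay.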


The average value of a sample, on its own, is of only modest interest. Of equal (and often greater) relevance is the confidence we can place on the sample average as truly reflecting the average value of the population from which it has been drawn. This is most often expressed as a confidence interval, which describes an interval around the sample mean within which the population mean is likely to lie. If, for example, the mean systolic blood pressure of 100 undergraduates is 124 mmHg, with a 95% confidence interval of ±15 mmHg, then, were the study replicated many times, the calculated interval (here 109–139 mmHg) would be expected to contain the true population mean on about 95% of occasions. It is intuitively obvious that the larger the sample, the smaller the confidence interval.
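A minimal sketch of the calculation, using the large-sample normal approximation (a t-based interval would be more appropriate for small samples); the readings are invented:

```python
import math
import statistics

def mean_ci(sample, z: float = 1.96):
    """Sample mean with an approximate 95% confidence interval
    (normal approximation: mean +/- z * SE, SE = SD / sqrt(n))."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, m - z * se, m + z * se

# Five systolic readings (mmHg): mean 124, 95% CI roughly 120.1 to 127.9
m, lo, hi = mean_ci([118, 122, 124, 126, 130])
```

Quadrupling the sample size halves the standard error, and hence halves the width of the interval.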



Correlation


In clinical studies two, or more, independent variables may be measured in the same individuals in a sample population (e.g. weight and blood pressure). The degree of correlation between the two can be investigated by calculating the correlation coefficient (often abbreviated to ‘r’). The correlation coefficient measures the degree of association between the two variables and may range from 1 to −1:

- r = +1 indicates a perfect positive (linear) association
- r = −1 indicates a perfect negative association
- r = 0 indicates no linear association between the two variables

Statistical tables are available to inform investigators as to the probability that r is due to chance. As in other areas of statistics, if the probability is less than 1 in 20 (p < 0.05) then by custom and practice it is regarded as statistically significant. There are, however, two caveats:

- A statistically significant correlation does not establish causation.
- With a large enough sample, even a very weak association (a small r) will be statistically significant.

Correlation analyses can become complicated. The simplest (least squares regression analysis) presumes a straight-line relationship between the two variables. More complicated techniques can be used to estimate r where a non-linear relationship is presumed (or assumed); where the distributions deviate from normal; where the scales of one or both variables are intervals or ranks; or where a correlation between three or more variables is sought.
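For the simple least-squares case, r can be computed directly from its definition; a minimal sketch (Python 3.10+ also provides `statistics.correlation` for the same calculation):

```python
import math

def pearson_r(xs, ys):
    """Pearson (least-squares) correlation coefficient between two
    equal-length samples; ranges from -1 to +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data give r = 1; reversing the trend gives r = -1.
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```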



Expressions of benefit and harm


There are three ways in which the outcomes, in clinical studies, are expressed:



Binary outcomes are often used in the design and analysis of RCTs. Such outcomes are dichotomous (such as alive or dead). The results are usually expressed as the relative risk (or risk ratio, RR). In a trial where the outcome is (say) mortality, the relative risk is the ratio of the proportion of treated patients dying to the proportion of control patients dying. An RR of <1 is suggestive of benefit; an RR of >1 is suggestive of harm. RRs are almost invariably reported with their 95% confidence intervals: if the boundaries of the 95% confidence interval do not cross unity, the result is generally statistically significant (at least at the 5% level).
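A sketch of the RR calculation, with the confidence interval computed on the log scale (a standard approximation); the trial counts are invented:

```python
import math

def relative_risk(events_t: int, n_t: int, events_c: int, n_c: int,
                  z: float = 1.96):
    """Relative risk (risk ratio) with an approximate 95% CI
    computed on the log scale."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of ln(RR)
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 50/500 deaths on treatment vs 100/500 on control: RR = 0.5, and the
# whole CI lies below unity, so the benefit is significant at the 5% level.
rr, lo, hi = relative_risk(50, 500, 100, 500)
```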


Survival analyses. In studies in which individuals are observed over a long(ish) period of time, and in which it is unreasonable (or erroneous) to assume that event rates are constant, the technique of survival analysis is used. This is most commonly reported as the hazard ratio (HR) and its 95% confidence interval. The HR is the probability that, if an event in question has not already occurred, it will happen in the next (short) time interval. It has, broadly, a comparable interpretation to the RR.
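Under the strong simplifying assumption of constant hazards in both arms, the HR reduces to a ratio of event rates per unit of person-time; real survival analyses use Kaplan–Meier estimates and Cox regression rather than this shortcut. A minimal sketch with invented figures:

```python
def rate_ratio(events_t: int, person_time_t: float,
               events_c: int, person_time_c: float) -> float:
    """Incidence-rate ratio: events per unit person-time in the treated
    arm divided by the same quantity in the control arm. Only under a
    constant-hazard assumption does this approximate the hazard ratio."""
    return (events_t / person_time_t) / (events_c / person_time_c)

# 30 events over 400 person-years vs 45 events over 300 person-years
# gives a rate ratio (approximate HR) of 0.5.
hr_approx = rate_ratio(30, 400, 45, 300)
```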


Continuous outcomes. Studies such as that in Table 17.10 may report outcomes using one or more continuous scales. In this study of the effects of prednisolone in the treatment of Bell’s palsy, the House–Brackmann measure of facial nerve function was used as the outcome measure. Conventional tests of statistical significance using Student’s t-test, for example, can be calculated to assess whether the null hypothesis should be rejected.
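As a sketch of such a test statistic, the Welch variant of the t-test (which, unlike the classical Student's t-test, does not assume equal variances) can be computed with the standard library; the samples are invented, and in practice the statistic would be referred to the t distribution to obtain a p value:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and approximate (Welch-Satterthwaite)
    degrees of freedom for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    sea, seb = va / na, vb / nb                  # squared standard errors
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sea + seb)
    df = (sea + seb) ** 2 / (sea**2 / (na - 1) + seb**2 / (nb - 1))
    return t, df

# t ~ -2.19 with ~6 degrees of freedom for these two small samples
t, df = welch_t([3, 4, 5, 6], [5, 6, 7, 8])
```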


Number needed to treat (NNT). As discussed earlier, the NNT is an estimate of the number of patients who need to be treated for one to benefit, compared with no treatment. If the probabilities of the end-point with the active drug and with no treatment (i.e. placebo) are p(active) and p(no treatment) respectively, then the NNT can be calculated thus:

NNT = 1/(p(active) − p(no treatment))


An analogous measure – the number needed to harm (NNH) – is the number of patients that need to be treated with a drug to cause one patient to be subject to a specific harm.
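Both quantities are simple arithmetic; a minimal sketch with invented response and harm rates:

```python
def nnt(p_active: float, p_no_treatment: float) -> float:
    """Number needed to treat: NNT = 1/(p_active - p_no_treatment).
    In practice the result is conventionally rounded up to a whole patient."""
    return 1 / (p_active - p_no_treatment)

def nnh(p_harm_active: float, p_harm_control: float) -> float:
    """Number needed to harm: the same arithmetic applied to an
    adverse outcome."""
    return 1 / (p_harm_active - p_harm_control)

# If 30% respond on the drug and 10% on placebo: NNT = 1/0.2 = 5.
# If 12% suffer the ADR on the drug and 2% on placebo: NNH = 1/0.1 = 10.
needed_to_treat = nnt(0.3, 0.1)
needed_to_harm = nnh(0.12, 0.02)
```

Comparing the NNT with the NNH for the same drug gives a crude but useful summary of the balance of benefit and harm.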
