Role of Human Laboratory Studies in the Development of Medications for Alcohol and Substance Use Disorders


Dr. Leggio’s work is supported by National Institutes of Health (NIH) intramural funding ZIA-AA000218 (Section on Clinical Psychoneuroendocrinology and Neuropsychopharmacology), and jointly supported by the Division of Intramural Clinical and Biological Research of the National Institute on Alcohol Abuse and Alcoholism (NIAAA) and the Intramural Research Program of the National Institute on Drug Abuse (NIDA).

The authors would like to thank Vignesh Sankar, BSc, from the NIAAA/NIDA Section on Clinical Psychoneuroendocrinology and Neuropsychopharmacology for bibliographic assistance.

The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.


The development of medications useful for the treatment of alcohol or substance use disorders requires the clinical and preclinical testing of existing and novel compounds in experimental models that can evaluate the mechanism, safety, and possible efficacy of the putative treatment. Medication development research evaluates both existing medications already on the market for other indications and novel compounds not yet tested in humans. Regardless of the stage of development of any particular medication, experimental human studies in controlled environments (i.e., “human laboratory” studies) will be required at some step of the process, for at least one of three possible reasons.

1. Phase I Safety Testing of Novel Compounds: For novel compounds not yet approved by the US Food and Drug Administration (FDA), Phase I clinical trials will be required to evaluate the safety and abuse liability of the new medications. Basic safety testing in healthy subjects is normally required for first-in-human studies, but Phase I safety testing will also be required in the drug-using target population before the FDA will allow Phase II and III treatment trials to proceed.

2. Phase I/II Safety Testing in the Target Population: If the medication is already approved by the FDA for another indication, development of that medication for addiction treatment will still require safety testing in drug-using or addicted populations. Safety evaluation includes both the biomedical safety of treatment in a drug-using population and an assessment of the abuse liability of the medication in a population likely to misuse substances. In addition, the FDA likely will require these studies to address the safety of the interaction between the treatment medication and the drug of abuse.

3. Evaluation of Pharmacokinetic and Pharmacodynamic Mechanisms: Although Phase III treatment trials will be required to demonstrate efficacy, human laboratory studies also can be helpful in evaluating the clinical pharmacology (both kinetics and dynamics) of a medication. These studies can evaluate the possible behavioral or neurochemical mechanism(s) of action or use human laboratory models to estimate the possible efficacy of new medications.

For many human laboratory studies, subjects are research volunteers not engaged in treatment. However, individuals who are “in treatment” also may be tested under controlled human laboratory conditions. The purpose of this review is to identify and highlight the role of and contributions made by human laboratory studies in the development of new medication treatments for addictions.

Pioneering studies conducted in the 1950s, 1960s, and 1970s at the Addiction Research Center of the Public Health Service Hospital in Lexington, Kentucky, developed the basic experimental approaches useful for understanding the clinical pharmacology of alcohol and drug dependence and their treatment. In many cases, early development studies may require the administration of alcohol or drugs to human subjects who have the alcohol or drug use disorder. The National Advisory Council on Drug Abuse and the National Advisory Council on Alcohol Abuse and Alcoholism have both recommended guidelines for the ethical and safe study of, respectively, drugs and alcohol given to human subjects. Broadly speaking, pharmacological approaches to the study of the behavioral effects of drugs of abuse and their treatment are characterized under the umbrella of abuse liability assessment. Abuse liability assessment involves estimating the likelihood that a substance will be used or self-administered and/or the liability or harmfulness of that use. Thus, abuse liability assessment approaches to human laboratory studies encompass all aspects necessary to evaluate both the safety (i.e., the abuse liability of the treatment agent and the harmfulness of the drug interaction) and the possible efficacy (i.e., does it reduce the likelihood of using the drug of abuse) of medications useful for treating alcohol and drug dependence.

Role of the Human Laboratory in Evaluating the Abuse Liability of New Medications

When medications are developed for human use, the FDA or Drug Enforcement Administration may require an assessment of the abuse potential of the new agent and this generally will require human laboratory studies. Typically, abuse liability assessment will be required when the medication under development shares pharmacological characteristics or planned indications with other drugs of known abuse potential. Broadly speaking, the abuse liability of a potential medication can be characterized in the human laboratory using one or more of three different behavioral approaches as described below.

To Characterize Adverse or Harmful Effects

Characterizing the effects of a new drug on various dimensions of physiological function and performance or other behavioral impairment can be valuable to understand how the drug might alter or impair important biobehavioral functions. For example, drugs could be examined for how they alter cognitive, psychomotor, or other behavioral performance or physiological functioning. Characterization of drug effects on each of these dimensions provides valuable information to assess the potential liability or harm that can occur with drug use. In the context of drug abuse, it also is important to know about the safety of the drug interaction should the new medication be combined with the drug of abuse. For this reason, many studies have been devoted to assessing the potential interactions between the new medication and alcohol—the most common drug for which potentially dangerous interactions might occur. The safety of drug interactions also is very important for FDA approval of potential treatments for alcohol or drug addiction because it is very likely that drug-dependent populations undergoing treatment with a medication will at some point at least sample their primary drug of dependence. Furthermore, the characterization of the drug interaction in the experimental laboratory may provide insight into the mechanism and possible effectiveness of that medication.

To Characterize Its Comparative Pharmacological Profile

The most common approach to abuse liability assessment is the pharmacological bioassay, which is a standard evaluation of the clinical and pharmacological profile of the new drug in comparison with another known drug from the same or similar pharmacological class. Necessarily, pharmacological profiling means evaluating the pharmacodynamic effects of the drug on a variety of dimensions, which could include assessment of performance or physiological effects, but for abuse liability also includes assessment of subjective effects or euphoria. An adequate evaluation of pharmacological profile requires the testing of a range of doses to construct a dose-response curve because the testing of a single dose fails to provide information on the dose-responsiveness of observed effects and is fraught with the potential for false-negative findings. Comparison of the new drug with a standard drug of known abuse potential is an essential element in the pharmacological comparison approach for at least three reasons. First, use of the standard drug establishes the positive control level of response to drugs of abuse under the standard conditions employed by the experiment. This is particularly important given that false-positive or false-negative results may occur due to variations in the assessments, population, or other study conditions. Second, relative potency or relative effect-size comparisons between the novel drug and the standard drug of abuse provide the basis for the most meaningful interpretation of data. Thus, the new drug may differ in the dose-response slope, the maximum effect size, or the relative potency on different dimensions of effect. Each of these variables has a different implication for abuse liability. 
Third, for purposes of estimating clinical advantage, the FDA and medical prescribers want to know how the efficacy of the new drug compares with that of a known drug, which may be a standard drug of abuse or a scheduled prescription medication with known abuse potential.
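The relative-potency comparison described above can be illustrated with a toy parallel-line bioassay. All drugs, doses, and subjective-effect ratings below are hypothetical, and the pooling of slopes is a simplification; a real assay would first test for parallelism and report confidence limits.

```python
import numpy as np

def relative_potency(log_doses_std, effects_std, log_doses_new, effects_new):
    """Parallel-line bioassay sketch: fit effect vs. log10(dose) for each
    drug, pool the slopes, and convert the horizontal shift between the
    lines into a dose ratio (relative potency)."""
    slope_s, int_s = np.polyfit(log_doses_std, effects_std, 1)
    slope_n, int_n = np.polyfit(log_doses_new, effects_new, 1)
    slope = (slope_s + slope_n) / 2.0  # a real assay tests parallelism first
    log_rp = (int_n - int_s) / slope   # shift along the log-dose axis
    return 10.0 ** log_rp

# Hypothetical "drug liking" ratings (0-100) at 1, 3, 10, and 30 mg
log_d = np.log10([1, 3, 10, 30])
std_eff = np.array([10.0, 25.0, 45.0, 60.0])  # standard drug of abuse
new_eff = np.array([5.0, 15.0, 30.0, 45.0])   # novel compound, right-shifted
rp = relative_potency(log_d, std_eff, log_d, new_eff)
# rp < 1 indicates the new drug is less potent than the standard
```

Because the comparison uses the horizontal displacement of whole dose-response lines rather than a single dose, it avoids the false-negative risk of single-dose testing noted above.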

To Evaluate Reinforcing Effects or Potential for Self-Administration

Numerous animal models of addiction, studied across a wide variety of drugs and species, have shown that drug taking is a drug-reinforced behavior controlled by operant contingencies and schedules of reinforcement. The same has been shown in humans, where several human laboratory models of drug reinforcement and self-administration have been established. Ultimately, the behavior we are interested in understanding, predicting, and treating is the likelihood that a drug/substance will be used or consumed in a pattern consistent with abuse or dependence. A simple yes/no determination of whether the drug is self-administered by the subject population may not be sufficient, because the environment and the availability of alternatives influence choice behavior. For example, the likelihood that a sedative or stimulant drug will be self-administered is influenced by how stimulating the experimental environment is. This phenomenon likely explains how even the sedating atypical antipsychotic quetiapine, with little intrinsic abuse liability, may become a highly preferred drug of abuse in a prison or psychiatric hospital, where access to other drugs is limited. Therefore, an all-or-none conclusion about whether a drug is self-administered under one set of conditions reveals little about its potential for self-administration under different circumstances. Thus, studies of the potential for reinforcement or self-administration are limited by the range of conditions (dose, circumstance, population, etc.) under which they are tested.

Issues in Human Laboratory Studies of Abuse Liability

There are several issues that need to be considered by any human laboratory study of abuse liability. The information below summarizes the issues that generally exist in the field and potentially limit any conclusions coming from human laboratory studies of medication effects on drugs of abuse.

Role of Subjective Effects

Since the earliest studies at the addiction research unit of the United States Public Health Service Hospital at Lexington, Kentucky, it has been observed that drugs of abuse as diverse as alcohol, barbiturates, opiates, and psychomotor stimulants all share a profile of psychoactive effects characterized as euphoria. It is generally accepted that euphoria is at least a partial explanation of why these drugs are abused. Because of the subjective and unobservable nature of this psychoactivity, self-report questionnaires are used to assess these subjective effects. One of the early questionnaires developed to measure the subjective effects of drugs of abuse was the Addiction Research Center Inventory, a multi-item questionnaire completed by human subjects during drug intoxication. Factor analysis was used to empirically derive subscales of items responsive to characteristic drugs of abuse, including amphetamine, Benzedrine, morphine, pentobarbital, alcohol, chlorpromazine, and lysergic acid diethylamide. Subsequently, the morphine and Benzedrine groups were combined to represent an opiate- or stimulant-type “euphoria” scale, the pentobarbital-chlorpromazine-alcohol group a distinctly “sedative” scale, and the lysergic acid diethylamide scale a “dysphoria” or unpleasantness scale. It is important to recognize that these scales actually were derived to measure subjective mood changes induced by pharmacologically distinct drugs of intoxication, not euphoria per se. The Profile of Mood States is a multi-item questionnaire originally developed to measure mood in normal healthy college students. Nonetheless, it has been used commonly to measure changes in depression-dejection, tension-anxiety, vigor, arousal, and other mood states in various populations under the influence of drugs.
Generalized mood measures are valuable for assessing the pharmacological profile of a drug and are sometimes presumed to predict abuse potential, under the assumption that positive mood states reflect an increased potential while negative mood states reflect a decreased potential. In alcoholism research, the Biphasic Alcohol Effects Scale was developed to measure the positive, disinhibiting arousal that may occur during the ascending limb of the blood-alcohol curve and the sedation-inhibition that occurs on the descending limb. Many other factor-analyzed and single-item rating scales have been used to evaluate the subjective effects of psychoactive drugs; enumerating them is beyond the scope of this review.

The psychoactive effects of psychotropic drugs are studied in animals using discriminative stimulus procedures, in which subjects are trained to discriminate the differences between drugs. Discriminative stimulus procedures also have been developed to train human subjects to discriminate the interoceptive stimulus effects of drugs. Whereas subjective rating scales take advantage of the verbal capacity of human subjects to quantitatively report the qualitative characteristics of their subjective experience, the discriminative stimulus approach uses a qualitative analysis of same/different comparisons between drugs. There is reasonable correspondence between conclusions drawn from subjective-effects studies and those from discriminative stimulus studies in humans. Because of differential reinforcement of behavior during discrimination training, it is possible to gain tighter discriminative control with this paradigm than with standard subjective questionnaires. However, the specificity and sensitivity of the procedure depend very much on the discrimination training conditions and are achieved only through lengthy training. Nonetheless, the ability to compare human results with preclinical data using discriminative stimulus analyses is a distinct advantage of this procedure. Although there is good correspondence between “positive” subjective effects and the likelihood of drug self-administration, it is certainly not true that either positive or negative subjective effects alone explain why drugs are or are not self-administered.

Role of Subjective Euphoria

The cardinal subjective effect commonly assumed to be important to abuse potential is the experience of psychoactive drug effects that are pleasant, preferred, or “euphoric.” A number of reviews of human abuse liability have discussed issues of drug-induced subjective euphoria and its measurement. a

a References 61, 64, 65, 83, 226, 229, 240.

Actually, most drug users do not refer to “euphoria” but rather describe the drug intoxication as a “high.” Although cocaine intoxication has been described as “intensely stimulating and pleasurable,” or “orgasmic,” it is clear that not all drugs of abuse produce such intense pleasurable sensations. For many drugs, including alcohol, the intoxication is more often described as a “buzz,” or “drunk,” or “high” that has “good” features and that people report “liking.” Consequently, most studies employ individual-item rating scales for subjects to rate the extent of “high” and “good” subjective effects and the extent to which subjects “like” the effect. There is no standard euphoria scale used by a majority of studies.

Importance of Measuring Self-Administration Behavior

Current conceptions of the disease condition recognize that the core feature of substance abuse or dependence is the pattern of drug self-administration that is harmful or compulsive. Consequently, most studies of abuse liability seek primarily to predict the likelihood of drug self-use for nonmedical purposes. Ample previous research clearly has demonstrated that drugs of abuse maintain the self-administration behavior of both humans and animals through the process of operant reinforcement. Ever since the earliest studies at the Addiction Research Center observing heroin self-administration in a heroin addict, a variety of different procedures have been developed to study self-administration behavior in human laboratory environments, and these have been described in previous reviews. b

b References 37, 84, 98, 106, 109, 244, 267.

These reviews describe the effects of variations in self-administration procedures such as:

1. the specific drug reinforcer, its route of administration, and whether or not dose was varied (higher doses and more rapid increases in blood level are more reinforcing);
2. whether the drug reinforcer was administered immediately or after a time delay (immediate drug delivery is more reinforcing);
3. whether the self-administered dose was a single high bolus dose or multiple smaller doses (multiple smaller doses yield more sensitive measures of reinforcement);
4. whether or not the drug reinforcer was “blinded” and placebo controls were employed (blinded procedures have greater validity);
5. whether the self-administration behavior was a verbal request or responses on a response instrument (responses on a manipulandum provide quantitative measures of behavior);
6. the extent to which behavioral “cost” was varied in the operant contingency (increasing “cost” decreases the probability of self-administration);
7. whether the self-administration procedure included choices among alternative reinforcers (choice between alternatives provides a better quantitative assessment of relative reinforcement); and
8. whether drug taking was quantified by measuring the amount consumed versus the proportion of subjects responding (amount measures are more sensitive).

Thus, validated operant models of drug reinforcement have been established for human laboratory studies, and their use has increased over the last two decades. Although pleasant subjective effects generally are correlated with the tendency of subjects to self-administer drugs in the human laboratory, c

c References 35, 45, 84, 126, 127, 229, 240.

drug-taking behavior does occur in the absence of measurable subjective effects. At times, the need to examine complete dose-response functions in making between-drug pharmacological comparisons may preclude self-administration studies. Nonetheless, direct observations of drug-taking behavior generally are preferred over measures of subjective effects alone.

Role of Environment and Cost in Controlling Self-Administration

Although this review does not discuss the specific advantages and disadvantages of different self-administration procedures, variations in procedure are likely to alter the sensitivity of the drug-taking measure. In fact, these procedural variables are likely to be important both in determining whether the drug is self-administered and in determining the measure’s sensitivity to increases or decreases in drug-taking behavior. One variable with an important influence on drug-taking behavior is the internal or external stimulus environment, which can increase or decrease the likelihood of self-use. For example, diazepam is not normally preferred by healthy controls, but preference increases under environmental conditions that increase anxiety. In addition, sedative drugs are preferred over stimulants in sedentary environments, while stimulants are preferred over sedatives when task performance contingencies require alertness. A stimulating environment may decrease the reinforcing effects of a sedative but enhance those of a stimulant, likely because of behavioral cost and alternative reinforcement. Understanding this phenomenon involves recognizing the behavioral economics of drug taking. In behavioral economics, choosing the drug involves a behavioral cost and may occur at the expense of access to alternative reinforcers. In human laboratory studies it is common to make monetary choices available as an alternative to drug taking; choices between increasing amounts of money versus drug result in reductions of drug self-administration. Griffiths and colleagues exploited this phenomenon in creating the “Multiple Choice Procedure,” a questionnaire in which, across a series of single-item questions, subjects choose between receiving the drug or a gradually increasing amount of money.
To establish the questionnaire responses as a true measure of choice/preference for drug, one of the items is selected at random, and the subject actually receives the consequence chosen on that item: the drug or the money.
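The logic of the Multiple Choice Procedure can be sketched in a few lines of code. The dollar amounts and choice pattern below are hypothetical, and real implementations use many more items; the sketch only shows how a crossover point and the random-item consequence are computed.

```python
import random

def crossover_point(drug_choices, money_values):
    """Given a subject's drug-vs-money choices over escalating amounts,
    return the smallest amount at which the subject switched to money
    (an index of the drug's monetary 'value')."""
    for chose_drug, value in zip(drug_choices, money_values):
        if not chose_drug:
            return value
    return None  # drug preferred over every amount offered

def reinforce_random_item(drug_choices, money_values, rng=random):
    """One item is drawn at random and the subject receives whatever
    consequence was actually chosen on that item (drug or money)."""
    i = rng.randrange(len(money_values))
    return ("drug", None) if drug_choices[i] else ("money", money_values[i])

money = [0.25, 0.50, 1, 2, 4, 8, 16]  # escalating amounts (USD)
choices = [True, True, True, True, False, False, False]  # drug chosen up to $2
```

Because one item is actually delivered, subjects have an incentive to answer every item honestly, which is the design rationale the paragraph above describes.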

Role of Subject Population Variables

One issue associated with subjective-effects assessment is that the extent to which subjective psychoactivity is considered pleasurable or “euphoric” varies across populations and is shaped and influenced by experience. For example, early studies by Beecher showed that normal healthy volunteers reported unpleasant experiences when given opiates or barbiturates, while drug-experienced users reported those same drug effects to be pleasant or euphoric. Balanced placebo designs control subject expectations in 2 × 2 factorial experiments in which subjects either are or are not told they are receiving drug, crossed with whether they actually do or do not receive it; such studies have shown that subjective reports of drug effects in normal populations are substantially influenced by expectation. Of course, expectations occur in drug-dependent populations as well. Compared with normal drinkers, heavy alcohol drinkers report greater expectations of euphoric responses and other positive or beneficial effects of alcohol. It is likely that some of the differences between drug-experienced and drug-naive populations are due to learned or acquired factors altering attribution or expectation. Generally, normal subjects who do not abuse drugs do not report high levels of drug liking or euphoric mood changes and do not self-administer most drugs of abuse. Strong evidence for the importance of drug abuse history and experience comes from patient-controlled analgesia studies: opiate analgesics with known addiction potential can be given to medically ill populations to self-administer, and yet those without a substance abuse history do not become drug abusers or addicts. Therefore, valid assessment of abuse liability must employ drug-experienced abuser populations in order to gauge what drug abusers will do with a drug of abuse.
This is not to say that certain drugs may not have some abuse liability even for normal healthy populations. In fact, studies of stimulant abuse liability among normal college populations observe that amphetamines tend to be preferred over placebo while sedative benzodiazepines are not preferred. Of course, caffeine clearly has reinforcing properties in healthy human populations worldwide. For these reasons, valid inferences about relative changes in abuse liability have to include experimental controls showing base response rates of the study population and study procedures as a point of comparison. For pharmacological studies comparing across drugs, the comparison drug may show greater or less abuse liability than a standard reference drug in the designated population under standard study conditions.
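As a toy illustration of the balanced placebo logic described above, the 2 × 2 cell means below (entirely hypothetical numbers) can be decomposed into an expectancy ("told") effect and a pharmacological ("received") effect, the two main effects the factorial design separates.

```python
import numpy as np

# Hypothetical mean "feel drug effect" ratings (0-100) from a balanced
# placebo design. Rows: told "drug" / told "placebo";
# columns: actually received drug / actually received placebo.
ratings = np.array([[62.0, 38.0],
                    [48.0, 12.0]])

grand_mean = ratings.mean()
# Main effect of instruction (expectancy): told drug minus told placebo
expectancy_effect = ratings[0].mean() - ratings[1].mean()
# Main effect of drug (pharmacology): received drug minus received placebo
pharmacology_effect = ratings[:, 0].mean() - ratings[:, 1].mean()
```

A nonzero expectancy effect in such data is what supports the conclusion that subjective drug reports are partly driven by what subjects believe they received.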

Population-related differences in drug response could be due in part to genetically controlled individual differences in innate sensitivity. One example is found in Asian populations, in whom the ALDH2*2 allele for aldehyde dehydrogenase is common; this allele increases levels of the ethanol metabolite acetaldehyde, producing an unpleasant flushing response that reduces the likelihood of experiencing alcohol-induced euphoria. Another example may be found in studies showing that young adult children of alcoholic parents may report greater euphoric responses and lesser negative, sedative effects of alcohol than do children without a family history of alcoholism.

Role of Craving

Many addicted individuals report that stimulus cues in the environment elicit powerful “cravings” and impulses to use drugs. However, there has been much debate about the meaning of the term “craving” and what role it plays in the risk of drug use. Early pioneering work in the human laboratory considered craving a conditioned-withdrawal-like motivational state. Within the operant model of drug dependence, it has been argued that “craving” refers primarily to the urge or impulse to use. Still others suggest that craving involves at least three dimensions: (1) withdrawal- and negative affect–related escape motivation, (2) reward-related conditioned impulses/urges, and (3) obsessive thoughts and/or cognitive-control mechanisms. Many human laboratory studies have examined cue-induced craving in addicted populations. These studies present visual, olfactory, auditory, and/or tactile stimuli historically associated with drug use; tactile procedures involving the handling of drug paraphernalia have been among the most effective cues. Idiosyncratic script-driven mental imagery techniques also can be used to guide the cue exposure session. Cue responses can be physiological (e.g., heart rate) or subjective (e.g., craving). Although there often is not a good correlation between the physiological and subjective measures, a meta-analysis concluded that subject ratings of craving were the most reliable and selective reaction to drug cues and showed the largest effect size across studies. Multi-item factor scales have been used in the human laboratory to measure craving for alcohol, marijuana, or cocaine, but many studies use only graded analog scales of single-item ratings such as “crave,” “desire,” “urge,” or “want.” Craving ratings sometimes have been correlated with drug use in outpatient studies and with risk of relapse in treatment seekers.
However, dissociation between craving ratings and drug-taking behavior has been demonstrated clearly in laboratory studies, and the extent of cue-induced craving observed in the laboratory has not always correlated with relapse to drinking among alcohol-dependent individuals. Thus, craving is neither a necessary nor a sufficient precursor to drug use or relapse. Rather, it appears to reflect a parallel cognitive process, as proposed by Tiffany, or a subjective state experienced as urge or impulse associated with drug-related environmental stimuli, as suggested by a consensus panel. On the other hand, cue-elicited craving procedures do seem sensitive to medication response; for example, naltrexone reduces craving for alcohol.

Human Laboratory Studies of Pharmacological Agonist and Antagonist Treatments

Human laboratory studies have been useful to help us understand the potential value of various pharmacological approaches to treatment. The potential of using pharmacological agonists or antagonists in the treatment of substance abuse is best illustrated through studies of opiate dependence as described below.

Utility of Evaluating Pharmacological Antagonist Treatments

Early studies of opiate antagonists at the Addiction Research Center showed that they could completely block the subjective and physiological effects of morphine and precipitate withdrawal in dependent individuals. Subsequent studies showed that oral naltrexone blocked heroin self-administration and subjective effects in human laboratory models of drug taking. The robustness of the observed pharmacological antagonism and the nearly complete blockade of any behavioral effects or abuse liability of heroin in these studies strongly suggested efficacy for the antagonist approach. However, outpatient treatment effectiveness with antagonists like naltrexone is poor because of poor medication compliance: heroin addicts find it too easy to discontinue antagonist therapy so as to recover the heroin effect they seek. These findings point to a significant weakness of human laboratory procedures in predicting efficacy with antagonist approaches. Specifically, even perfect blockade of abuse potential does not predict treatment efficacy, because medication noncompliance will nullify even complete pharmacological blockade. More recently, human laboratory studies have evaluated the depot formulation of naltrexone and shown that it, too, blocks heroin self-administration and subjective effects. Although there is reason to hope that depot formulations of naltrexone could improve the effectiveness of antagonist treatments, especially in conjunction with court-ordered treatment, the outcome data do not yet exist to support this. Notably, because of the diffuse mechanisms of action of alcohol, cocaine, and methamphetamine, direct, receptor-mediated pharmacological antagonists are unlikely to exist for those drugs. For nicotine dependence, human laboratory studies of the nicotinic antagonist mecamylamine have shown increased smoking or increased intravenous nicotine self-administration, consistent with a surmountable pharmacological blockade.
However, another human laboratory study found no effect of mecamylamine, and clinically there is no evidence for treatment efficacy with nicotinic antagonists in outpatient treatment. No efficacy trial has examined the cannabinoid-1 (CB1) receptor antagonist rimonabant for cannabis dependence, but early human laboratory studies have shown only partial or inconsistent blockade of the effects of smoked cannabis.

Utility of Evaluating Pharmacological Agonist Replacement Approaches

A study at the Addiction Research Center provided the first human laboratory demonstration that oral methadone produced dose-related decreases in the subjective effects, liking, and self-administration of hydromorphone. Thirty years later, a human laboratory study showed that short-term treatment with methadone doses of 50, 100, and 150 mg produced dose-related blockade of the subjective effects and self-administration of heroin. The authors of this later study used their human laboratory data to argue that clinical tendencies to use lower methadone doses for maintenance are counterproductive. It is notable that these findings exactly parallel the dose equivalence and clinical experience with methadone maintenance treatment. Previous reviews have described human abuse liability testing with a variety of opiate agonists, partial agonists, and mixed agonists/antagonists, which demonstrated unequivocally that agonist effects at the mu opiate receptor are responsible for the abuse potential of opiates. In the course of this work, human laboratory studies were critical to the ultimate development of buprenorphine as a partial-agonist pharmacotherapy with reduced abuse potential. Human laboratory studies were particularly important in demonstrating that buprenorphine reduced the reinforcing effects of heroin and that small doses of naloxone could be added to buprenorphine to further reduce its abuse potential without precipitating withdrawal in morphine-dependent subjects. These studies clearly illustrate a strong concordance between the human laboratory studies and clinical experience with buprenorphine. Furthermore, when compared with the studies and clinical experience with antagonist medications, they suggest that human laboratory studies seeking to antagonize the reinforcing effects of a drug of abuse might look for medications that have at least partial agonist-like activity.
Nicotine-replacement strategies for tobacco dependence have been very successful in reducing smoking behavior. Human laboratory studies have shown that pretreatment with cigarette smoking or nicotine gum each decreased subsequent cigarette smoking. In addition, transdermal nicotine patches decreased cue-induced craving, the discriminative stimulus and reinforcing effects of nicotine spray, and the reinforcing effects of intravenous nicotine. The partial nicotinic agonist varenicline is the first nicotinic agonist treatment for tobacco dependence approved by the FDA. Varenicline's efficacy in smoking cessation has been confirmed by a recent meta-analysis. Human laboratory studies showed that varenicline, as compared to placebo, reduced cigarette cue-elicited craving and produced parallel reductions in cigarette cue-elicited ventral striatum and medial orbitofrontal cortex responses assessed by functional magnetic resonance imaging (fMRI). Another human laboratory study with smokers showed that varenicline reduced cigarette craving in a manner correlated with blood varenicline concentrations, suggesting that acute agonist administration produces temporary relief of cigarette craving. A complex human laboratory study examined the effects of chronic varenicline treatment on self-administration of intravenous nicotine, intravenous cocaine, and intravenous nicotine and cocaine combined. Results showed that varenicline selectively attenuated the reinforcing effects of nicotine alone but not cocaine alone, and its effects on nicotine and cocaine combined depended on the dose of cocaine. Clearly, complex drug-drug interactions are likely in this specific population with comorbid nicotine and cocaine use disorders.

Role of Human Laboratory Studies in Developing Medications for Alcohol Dependence

A brief review of medications that have been or are being developed for alcoholism treatment is used to illustrate how pharmacological mechanisms other than agonist replacement or direct pharmacological antagonism of the drug of abuse can be exploited in medication development. Currently, there are three medications approved by the FDA for the treatment of alcohol dependence. In addition, we discuss human laboratory studies conducted with other medications that have shown promise in clinical treatment trials.


Disulfiram was the first medication approved by the FDA for the treatment of addiction. Human laboratory studies, as well as preclinical studies of biochemistry and toxicology, were included in the first report of the disulfiram-ethanol reaction that ensues upon alcohol exposure. Over a period of more than 40 years, human laboratory studies have been important in characterizing the nature, safety, and mechanism of the disulfiram-alcohol reaction. These studies were instrumental in showing that inhibition of aldehyde dehydrogenase and the subsequent accumulation of the acetaldehyde metabolite is responsible for the unpleasant effects of the disulfiram reaction, and that a hypotensive crisis is a serious medical risk. Either because disulfiram makes the effects of alcohol so unpleasant or because of the direct side effects of disulfiram itself, compliance with this medication is a serious problem limiting its utility and effectiveness for the treatment of alcohol dependence. Consequently, there is little ongoing research toward further development of disulfiram as a treatment for alcohol dependence.


The opiate antagonist, oral naltrexone, was the second medication approved by the FDA for the treatment of alcohol dependence. Subsequently, an intramuscular naltrexone formulation was approved by the FDA for the same indication. A recent meta-analysis supports naltrexone's efficacy in alcohol-dependent individuals. Based largely on preclinical studies showing that naltrexone reduced alcohol drinking in rodents, the first clinical trials were Phase III outpatient efficacy trials of a medication that had already been approved for narcotic addiction. Subsequently, human laboratory studies were useful for demonstrating that naltrexone can reduce alcohol self-administration in some paradigms but not others, and that it has a mixed profile in reducing some of alcohol's positive subjective effects and cue-reactive craving. Naltrexone also has been shown to reduce the behavioral-activating effects of alcohol as measured by heart rate increases, subjective liking, and corticotropin (ACTH)/cortisol elevations. This latter finding is interesting given that other studies have shown that parental family histories of alcoholism are associated with greater activation of the hypothalamic-pituitary-adrenal axis at baseline and in response to mu opioid receptor blockade by naloxone, and that these differences may predict naltrexone response. One study administered naltrexone versus placebo to 92 non–treatment-seeking, alcohol-dependent subjects for 6 outpatient days before bringing them into the human laboratory for a drink self-administration session. Study findings showed that naltrexone reduced alcohol self-administration in subjects with a positive family history of alcoholism and may actually have increased drinking in subjects without a family history.
More recently, the efficacy of naltrexone on alcohol cue-elicited craving and the subjective effects of alcohol has been replicated in adolescent problem drinkers, suggesting its potential use in underage populations with at-risk alcohol use. Although the genes associated with family history are not known, an earlier laboratory study identified a single nucleotide polymorphism of the mu-receptor conferring naloxone-reactive hypothalamic-pituitary-adrenal activation, and this same polymorphism recently was shown to predict naltrexone treatment response in Project COMBINE. Albeit not without inconsistencies, other studies have further confirmed these pharmacogenetic findings, including a human laboratory study that specifically enrolled Asian-American individuals. It is also important to note that variability in medication responses may depend on several factors, such as participants' readiness to seek treatment, how alcohol is administered, and how these factors interact with the medication itself. For example, a recent human laboratory study with treatment-seeking alcoholic inpatients indicated that naltrexone resulted in increased craving in response to cues and increased subjective effects of alcohol (feeling high and intoxicated) after an intravenous alcohol challenge. Overall, these human laboratory studies have shown results consistent with the outpatient treatment trials, concluding that naltrexone is modestly effective in reducing some of the reinforcing but not the subjective effects of alcohol, and that this action may block the alcohol-seeking or craving that is primed or cued by the initial doses of alcohol consumed during a binge. Finally, a recent meta-analysis confirmed that, overall, naltrexone reduces alcohol self-administration and craving under well-controlled human laboratory conditions.


Based largely upon three European treatment trials, the FDA approved the glutamate antagonist acamprosate as the third medication for the treatment of alcohol dependence. Prior to that approval, a human laboratory study examined the safety of the combination of acamprosate with naltrexone in alcohol-dependent subjects as a prelude to the larger outpatient treatment trial known as Project COMBINE, which tested the efficacy of acamprosate and naltrexone alone and in combination. Although meta-analyses of several clinical trials have supported the efficacy of acamprosate at preventing relapse in alcohol-dependent individuals, Project COMBINE did not demonstrate efficacy at reducing drinking in alcohol-dependent outpatients. Despite a large body of preclinical literature examining acamprosate's actions and mechanisms, only two human laboratory studies have been reported. One study found that acamprosate reduced the heart rate response, but not the subjective craving, induced by alcohol cues. Another study administered repeated doses of acamprosate to non–treatment-seeking heavy drinkers in an outpatient setting and then brought the subjects into a human laboratory, where acamprosate had no effect on the subjective or behavioral responses to challenge doses of alcohol.

Other Possible Medications for Alcohol Dependence

A few other medications have been reported to have efficacy in the outpatient treatment of alcohol dependence and have been examined in human laboratory studies evaluating possible mechanisms. The serotonin-3 antagonist ondansetron was initially reported to reduce the subjective effects of ethanol in social drinkers. Subsequently, a large clinical trial showed efficacy of ondansetron in reducing alcohol drinking in Early Onset Alcoholics, but not in Late Onset Alcoholics. Serotonergic abnormalities in "biologically predisposed" individuals have been suggested as the mechanism of this differential efficacy. A subsequent human laboratory study reported that the alcohol cue-induced craving of early onset alcoholics may differ as a function of genetic polymorphisms in the serotonin transporter. More recently, both human laboratory studies with non–treatment seekers and outpatient treatment clinical trials have provided further evidence for the role of serotonin transporter polymorphisms in the beneficial effects of ondansetron in reducing excessive alcohol drinking. Topiramate has been shown to have efficacy in reducing drinking in alcohol-dependent outpatients in two randomized controlled trials. Two subsequent human laboratory studies further confirmed the role of topiramate in affecting alcohol drinking, craving, and the subjective effects of alcohol. Of special note, in order to reduce some of the adverse cognitive side effects of topiramate, these studies included a gradual dose-escalation period of more than 5 weeks, during which subjects received placebo, or 200 or 300 mg per day, during outpatient treatment before they were brought into the laboratory.

Hutchison and colleagues have been studying olanzapine in the human laboratory and in the clinic as a medication having a mixed profile of actions as an antagonist at the D2, D4, and serotonin-2 receptors. An initial laboratory study of heavy social drinkers reported that 5 mg olanzapine reduced the urge to drink after exposure to alcohol cues and a priming dose of alcohol. However, a treatment trial in alcohol-dependent outpatients failed to show efficacy of 10–15 mg olanzapine. Subsequently, another laboratory study showed that a functional polymorphism in the dopamine D4 receptor (DRD4) gene mediates the cue-reactive effects of alcohol and that olanzapine was effective in reducing cue-reactivity only in the subgroup of subjects having the long (L) form of the variable number tandem repeat for the DRD4 gene. Finally, this investigative group studied a group of alcohol-dependent subjects given 2.5–5 mg olanzapine versus placebo during a 12-week treatment trial. These subjects were brought into the human laboratory before and after 2 weeks of double-blind treatment and were tested in the cue-reactivity paradigm. The study showed that olanzapine was effective only in the L-carriers, in whom it reduced cue-reactive craving observed in the laboratory and also reduced alcohol drinking in the outpatient treatment component of the study.

Another medication studied in alcohol human laboratory settings is baclofen. Some treatment clinical trials, but not others, have suggested its efficacy for alcoholic patients, especially those with significant liver disease and/or higher severity of alcohol dependence. Three alcohol human laboratory studies with baclofen have been conducted and converge on a similar conclusion: baclofen does not reduce alcohol craving, even though it alters the subjective effects of alcohol. The latter might represent a biobehavioral mechanism by which baclofen could reduce excessive alcohol drinking in some individuals.

Another promising medication, gabapentin, reported to be safe when combined with alcohol, did not alter alcohol effects but delayed the onset of heavy drinking. Consistent with a human laboratory study indicating that gabapentin may reduce alcohol cue-elicited craving, outpatient treatment clinical trials support a role for gabapentin in reducing excessive alcohol use and craving, especially in those patients with high baseline alcohol withdrawal symptoms.

Role of the Human Laboratory in Evaluating Medications for Cocaine Dependence

Many different potential medications with a variety of different pharmacological mechanisms have been tested in Phase II and III efficacy trials looking for a medication to treat cocaine dependence. Several recent reviews have described the different medications that have been evaluated for the treatment of cocaine dependence and so the reader is referred to those articles for further information. Although there have been sporadic positive findings in some of these studies, no medications have yet been proven effective or approved by the FDA. Cocaine acts to inhibit monoamine transporters, although the mechanism of action related to addiction is believed to be primarily through actions on the dopamine transporter to enhance dopamine activity in brain reward neurocircuitry. Consequently, many pharmacological studies have targeted dopamine synthesis, receptors, and the reuptake transporter. In addition, other medications targeting other neurochemical modulators of the brain reward pathways also have been studied.

Evaluation of Dopamine Agonists and Antagonists for Cocaine Treatment

Several human laboratory studies have examined the ability of dopamine antagonists to reduce cocaine-induced subjective effects or self-administration. In cocaine-dependent individuals, haloperidol antagonized cue-elicited craving. In subjects with cocaine abuse or dependence, risperidone reduced the subjective effects of cocaine, but flupenthixol had no effect on cocaine's subjective effects or self-administration. Again, in subjects with cocaine abuse or dependence, the D1/5 antagonist ecopipam reduced cocaine's subjective effects acutely; however, these effects were not replicated in a study employing repeated ecopipam dosing or in a study of smoked cocaine, where ecopipam actually increased the subjective and reinforcing effects of cocaine. These results suggest that, at best, dopamine antagonists produce variable and inconsistent reductions in the positive subjective effects of cocaine. The overall conclusion from these and other studies does not support the utility of dopamine antagonist treatments. Furthermore, they suggest that the direct and potentially unpleasant side effects of treatment with dopamine antagonists could actually enhance the reinforcing effects of cocaine, which could explain the increase in cocaine use observed in an outpatient treatment study using olanzapine.

Human laboratory studies also have examined the effects of direct-acting dopamine agonists. The D2 agonist bromocriptine was shown to reduce the blood pressure elevations but enhance the heart rate effects of cocaine, and it caused undesirable "fainting" without changing cocaine's subjective effects. Another D2 agonist, pergolide, reduced the subjective effects but did not alter cocaine self-administration. The D1 agonist ABT-431 also was reported to reduce the subjective effects and blood pressure elevations, but enhance the heart rate effects, of cocaine without altering cocaine self-administration. Two dopamine partial agonists also have been examined: amantadine had no effect on the cardiovascular or subjective effects of cocaine or on cocaine self-administration, and aripiprazole actually was reported to increase cocaine's subjective effects and self-administration. Although acting not directly upon the dopamine receptor, but rather indirectly upon the dopamine transporter, bupropion was found to produce only slight alterations in cocaine-related subjective effects. The general lack of positive results in these human laboratory studies is consistent with the lack of efficacy of dopamine agonists, partial agonists, and bupropion in the outpatient treatment of cocaine dependence.

Evaluation of Stimulant-Replacement Strategies for Cocaine

In contrast to the disappointment with dopamine agonist and antagonist approaches, studies examining the use of psychomotor stimulants in a stimulant "replacement"-type approach have been more encouraging. An intriguing 5-week inpatient human laboratory study showed that gradually increasing oral doses of cocaine (25–100 mg, four times daily) produced modest reductions in the subjective effects of intravenous challenge doses of cocaine without potentiating the cardiovascular effects of cocaine. Previous human laboratory studies have shown that cocaine binges are associated with substantial "acute" tolerance, whereby most of the subjective and cardiovascular effects of cocaine are seen with the initial dose, and subsequent doses only serve to maintain the initial effect without adding additional effect. When combined with data showing that speed of onset is an important determinant of euphoria, the efficacy of the oral cocaine pretreatment is likely due to the lesser euphoria resulting from the oral pretreatment dose of cocaine coupled with cross-tolerance to the acute effects of the additional cocaine challenge doses. This is exactly analogous to what is believed to occur with methadone maintenance, and is similar to that observed in a human laboratory study in which experimenter-administered doses of heroin given on top of methadone pretreatment showed diminished responses. Nonetheless, concerns about the ethics or social acceptance of cocaine-replacement approaches for cocaine addiction are likely to limit consideration of this approach. Thus, most studies of the agonist-like replacement approach have examined dopamine reuptake inhibitors and stimulant drugs other than cocaine.
Although human laboratory studies with cocaine have reported substantial tolerance to the cardiovascular acceleration that occurs within a cocaine binge, there still are substantial cardiovascular safety concerns regarding the possible drug-drug interactions between cocaine and other stimulant drugs.

A double-blind, placebo-controlled efficacy trial examined the effects of placebo and two doses of oral dextroamphetamine as a treatment for cocaine-dependent outpatients. That study included a human laboratory component that gave the outpatients their initial double-blind dose in a controlled environment as part of a safety assessment. In the laboratory assessment component, dextroamphetamine showed characteristic stimulant effects, including mild elevations of subjective effects and euphoria, and there were no limiting adverse events observed. Coupled with treatment findings showing dose-related increases in treatment retention and reduced cocaine use without evidence of abuse or diversion of dextroamphetamine, these data suggest that stimulant therapy for cocaine dependence may be a reasonable approach. In another study taking the same approach with methylphenidate, the human laboratory component found that methylphenidate produced adverse stimulant effects but not subjective euphoria in the cocaine-dependent population. Of interest, methylphenidate also was not efficacious in the main outpatient treatment trial. Thus, these two studies conducted in treatment-seeking individuals show a good correspondence between the human laboratory findings and treatment outcome, and further suggest that the positive subjective effects of dextroamphetamine may be an essential component of efficacy in the stimulant-replacement approach to the treatment of cocaine dependence. More recently, a human laboratory study indicated that the choice to use cocaine was significantly lower during d-amphetamine maintenance, as compared to placebo.

Still, the question remains about the safety of the cocaine-plus-stimulant drug interaction in cocaine-dependent populations. Several human laboratory studies have evaluated the cardiovascular safety and abuse liability of giving combinations of cocaine plus other stimulants. In one such study, acute dosing with mazindol did not substantially alter the acute subjective effects of cocaine, but it significantly enhanced the blood pressure and heart rate elevations produced by intravenous cocaine, leading the authors to suggest that mazindol would not be a desirable treatment. A follow-up clinical treatment trial in cocaine-dependent methadone maintenance participants did not find mazindol versus placebo differences in outcome, although it is important to note that there was no evidence for harmful or countertherapeutic effects of mazindol either. Another study gave up to 30 mg oral dextroamphetamine in combination with up to 96 mg intranasal cocaine to non–treatment-seeking cocaine abusers and reported no significant potentiating effects on cardiovascular measures, a finding that was generally supported in the outpatient trial of dextroamphetamine for cocaine dependence. In yet another study, modafinil blunted several subjective effects and even the systolic blood pressure increases produced by intravenous cocaine infusion. This human laboratory study was followed up by the National Institute on Drug Abuse in a clinical treatment trial, which found that modafinil was superior to placebo in reducing cocaine use among the subgroup of individuals without a comorbid alcohol use disorder, but it was not effective among the subgroup of individuals who had a comorbid alcohol use disorder.
Following up on this trial, Kampman and colleagues performed another treatment trial in which they specifically excluded cocaine use disorder individuals with alcohol use disorder comorbidity and found that modafinil was significantly more effective than placebo in increasing cocaine abstinence. Although subsequent treatment trials have yielded inconsistent findings, overall, these human laboratory data clearly predicted that stimulant medications with lesser abuse potential than cocaine could be given safely to cocaine-dependent populations, with a reasonable expectation that individuals would benefit from a stimulant-replacement approach to treatment.

Evaluation of Cocaine Treatments Affecting Other Neurochemical Systems

A number of other pharmacological approaches to treatment for cocaine dependence also have been evaluated in the human laboratory. Aside from dopamine, several studies have attempted to alter other monoamine neurotransmitter levels (i.e., norepinephrine and serotonin). Catecholamine depletion by means of consuming a tyrosine-depleting amino acid beverage was shown to reduce cue- and low-dose cocaine-induced craving for more cocaine, but did not alter cocaine-induced euphoria or self-administration. The monoamine oxidase-B inhibitor selegiline, which should increase catecholamine levels including dopamine, was reported to have no effect on, or to reduce, the subjective effects of cocaine. Two studies reported that the catecholamine reuptake inhibitor desipramine increased baseline blood pressures, decreased cocaine craving, and altered the positive subjective effects of cocaine without altering the high or self-administration of cocaine. Blockade of the serotonin transporter with fluoxetine was reported to reduce the subjective euphoria of cocaine in one study but not in another. These human laboratory studies indicate that, at best, medications that alter serotonin or norepinephrine activity in general do not have robust effects on cocaine euphoria or reinforcement, and so it is no surprise that outpatient treatment trials with these medications have not been positive either. In cocaine-using research volunteers, the γ-aminobutyric acid reuptake inhibitor tiagabine had no effect on the subjective or reinforcing effects of oral cocaine, and the γ-aminobutyric acid agonist gabapentin reduced the subjective effects but not self-administration in cocaine-dependent subjects. Each of these pharmacological approaches has been evaluated in clinical trials, and none has been found to be efficacious. More recent work has explored the potential role of progesterone for cocaine use disorder.
A human laboratory study did not find significant differences between progesterone and placebo on cocaine self-administration in women. By contrast, a preliminary treatment trial with postpartum women indicated that progesterone was superior to placebo in reducing self-reported cocaine use, although no difference was found on urine drug tests. Neither a human laboratory study nor two treatment trials support the potential use of the 5-HT1A receptor partial agonist buspirone for cocaine use disorder. Finally, a human laboratory study testing low and high doses of intravenous cocaine indicated that topiramate, as compared to placebo, reduced cocaine craving and the monetary value of high-dose cocaine; by contrast, the monetary value of low-dose cocaine was increased in the topiramate group. Consistent with this laboratory study, a treatment trial indicated the efficacy of topiramate, compared to placebo, in increasing cocaine-free days and cocaine-free urine tests and in decreasing cocaine craving.

Several human laboratory studies have examined the effects of antihypertensive calcium channel blockers in cocaine dependence. As cerebrovascular vasodilators, they have been suggested as possible treatments for vascular stroke and cognitive impairment related to cocaine dependence. In this regard, isradipine was shown to reduce the ischemic effects of cocaine infusion. In other laboratory studies in cocaine-dependent subjects, nifedipine, nimodipine, and isradipine were shown to block the blood pressure–elevating effects of cocaine, but not the stimulant or euphoric subjective responses. Following both acute and repeated dosing with isradipine, the reduction in cocaine-related pressor effects also was associated with an exacerbation of cocaine-related heart rate increases. In addition, repeated dosing with isradipine was shown to produce headaches and other unpleasant effects and to increase the positive and reinforcing effects of intravenous cocaine infusion. Given these laboratory results, it is not surprising that a 12-week trial of amlodipine for the treatment of cocaine-dependent outpatients was plagued by high drop-out rates and failed to reduce cocaine craving or cocaine use more than was seen with placebo treatment.

Two other medications have shown efficacy in human laboratory and outpatient treatment studies but are not likely to be pursued as treatments for primary cocaine dependence for safety reasons. The mu-receptor partial agonist, buprenorphine, was shown in two studies to reduce cocaine self-administration. One study in intravenous heroin and cocaine users reported that buprenorphine decreased intravenous cocaine self-administration, but it also potentiated several subjective effects, including euphoria and sedation. Another study in cocaine-dependent methadone maintenance participants found that substitution to buprenorphine was superior to continued methadone maintenance in decreasing desire ("I want") for cocaine and self-administration behavior without altering other subjective effects. Despite these positive results, the abuse potential of buprenorphine, coupled with its potential for physiological dependence, makes its use for primary cocaine dependence unlikely. Nonetheless, it still may be useful for decreasing cocaine use in buprenorphine-maintenance therapy for opioid dependence. A second medication shown to have efficacy in the outpatient treatment of cocaine dependence is the alcoholism treatment agent disulfiram. Several human laboratory studies have shown that disulfiram inhibits cocaine metabolism and increases cocaine blood levels and its cardiovascular effects. Although those initial studies reported no significant alteration of cocaine's subjective effects, a more recent study reported that disulfiram decreased cocaine-induced subjective high. The putative mechanism for the efficacy of disulfiram in the treatment of cocaine dependence is presumed to be its inhibition of dopamine beta-hydroxylase. However, because of disulfiram's inhibition of cocaine metabolism and its side-effect profile, there are concerns about its safety as a treatment for primary cocaine dependence.
Because alcohol may be consumed by a cocaine-intoxicated individual treated with disulfiram, the safety of a disulfiram-alcohol reaction was evaluated in subjects with cocaine abuse or dependence in a three-way drug interaction study. That study found that alcohol administration was associated with clinically significant hypotension and increased heart rate in subjects given 5–7 days of disulfiram (250–500 mg) pretreatment. Intravenous infusion of 30 mg cocaine under these conditions counteracted the hypotension but tended to potentiate the heart rate effects. However, safety stop-point criteria prevented the administration of cocaine in two of three subjects who were hypotensive due to a disulfiram-alcohol reaction after treatment with 500 mg disulfiram. This human laboratory study illustrates the safety concerns of using disulfiram in the treatment of cocaine dependence. Nonetheless, a review of the safety data from a number of published studies administering disulfiram to cocaine-dependent outpatients and to patients with dual cocaine-alcohol dependence has concluded that it can be safely used for cocaine treatment.
