Chapter Contents
What to Look for With Sequence Generation
Simple (Unrestricted) Randomisation
Restricted Randomisation
Blocking
Random Allocation Rule
Biased Coin and Urn Randomisation
Replacement Randomisation
Stratified Randomisation
Separation of Generation and Implementation
Conclusion
The randomised controlled trial sets the gold standard for clinical research. However, randomisation persists as perhaps the least understood aspect of a trial. Moreover, anything short of proper randomisation courts selection and confounding biases. Researchers should spurn all systematic, nonrandom methods of allocation. Trial participants should be assigned to comparison groups based on a random process. Simple (unrestricted) randomisation, analogous to repeated fair coin-tossing, is the most basic of sequence generation approaches. Furthermore, no other approach, irrespective of its complexity and sophistication, surpasses simple randomisation for prevention of bias. Investigators should, therefore, use this method more often than they do, and readers should expect and accept disparities in group sizes. Several other more complicated restricted randomisation procedures limit the likelihood of undesirable sample size imbalances in the intervention groups. The most frequently used restricted sequence generation procedure is blocked randomisation. If this method is used, investigators should randomly vary the block sizes and use larger block sizes, particularly in an unblinded trial. Other restricted procedures, such as urn randomisation, combine beneficial attributes of simple and restricted randomisation by preserving most of the unpredictability while achieving some balance. The effectiveness of stratified randomisation depends on use of a restricted randomisation approach to balance the allocation sequences for each stratum. Generation of a proper randomisation sequence takes little time and effort but affords big rewards in scientific accuracy and credibility. Investigators should devote appropriate resources to sequence generation in properly randomised trials and to reporting their methods clearly.
… having used a random allocation, the sternest critic is unable to say when we eventually dash into print that quite probably the groups were differentially biased through our predilections or through our stupidity.

Austin Bradford Hill (1954)
Until recently, investigators shunned formally controlled experimentation when designing trials (Panel 12.1). Now, however, the randomised controlled trial sets the methodological standard of excellence in medical research (Panel 12.2). The unique capability of randomised controlled trials to reduce bias depends on investigators being able to implement their principal bias-reducing technique—randomisation. Although random allocation of trial participants is the most fundamental aspect of a controlled trial, it unfortunately remains perhaps the least understood.
The controlled trial gained increasing recognition during the 20th century as the best approach for assessment of healthcare and prevention alternatives. R.A. Fisher developed randomisation as a basic principle of experimental design in the 1920s and used the technique predominantly in agricultural research. The successful adaptation of randomised controlled trials to healthcare took place in the late 1940s, largely because of the advocacy and developmental work of Sir Austin Bradford Hill while at the London School of Hygiene and Tropical Medicine. His efforts culminated in the first experimental and published use of random numbers to allocate trial participants. Soon after, randomisation emerged as crucial in securing unbiased comparison groups.
Proper implementation of a randomisation mechanism affords at least three major advantages:

1. It eliminates bias in treatment assignment.
Comparisons of different forms of health interventions can be misleading unless investigators take precautions to ensure that their trial comprises unbiased comparison groups relative to prognosis. In controlled trials of prevention or treatment, randomisation produces unbiased comparison groups by avoiding selection and confounding biases. Consequently, comparison groups are not prejudiced by selection of particular patients, whether consciously or not, to receive a specific intervention. The notion of avoiding bias includes eliminating it from decisions on entry of participants to the trial, as well as eliminating bias from the assignment of participants to treatment, once entered. Investigators need to properly register each participant immediately on identification of eligibility for the trial, but without knowledge of the assignment. The reduction of selection and confounding biases underpins the most important strength of randomisation. Randomisation prevails as the best study design for investigating small or moderate effects.
2. It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors, including the possible use of a placebo.
Such manoeuvres reduce bias after random assignment and would be difficult, perhaps even impossible, to implement if investigators assigned treatments by a nonrandom scheme.
3. It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely reflects chance.
In this chapter we describe the rationale behind random allocation and its related implementation procedures. Randomisation depends primarily on two interrelated but separate processes (i.e., generation of an unpredictable randomised allocation sequence and concealment of that sequence until assignment occurs [allocation concealment]). Here, we focus on how such a sequence can be generated. In Chapter 14 we address allocation concealment.
What to Look for With Sequence Generation
Nonrandom methods masquerading as random
Ironically, many researchers have decidedly nonrandom impressions of randomisation. They often mistake haphazard approaches and alternate assignment approaches as random. Some medical researchers even view approaches antithetical to randomisation, such as assignment to intervention groups based on preintervention tests, as quasirandom. Quasirandom, however, resembles quasipregnant, in that they both elude definition. Indeed, anything short of proper randomisation opens limitless contamination possibilities. Without properly done randomisation, selection and confounding biases seep into trials.
Researchers sometimes cloak, perhaps unintentionally, nonrandom methods in randomised clothing. They think that they have randomised by a method that, when described, is obviously not random. Methods such as assignment based on date of birth, case record number, date of presentation, or alternate assignment are not random, but rather systematic occurrences. Yet in a study that we did, in 5% (11 of 206) of reports, investigators claimed that they had randomly assigned participants by such nonrandom methods. Furthermore, nonrandom methods are probably used much more frequently than suggested by our findings, because 63% (129 of 206) of the reports did not specify the method used to generate a random sequence.
Systematic methods do not qualify as randomisation methods for theoretical and practical reasons. For example, in some populations, the day of the week on which a child is born is not entirely a matter of chance. Furthermore, systematic methods do not result in allocation concealment. By definition, systematic allocation usually precludes adequate concealment, because it results in foreknowledge of treatment assignment among those who recruit participants to the trial. If researchers report the use of systematic allocation, especially if masqueraded as randomised, readers should be wary of the results, because such a mistake implies ignorance of the randomisation process. We place more credence in the findings of such a study if the authors accurately report it as nonrandomised and explain how they controlled for confounding factors. In such instances, researchers should also discuss the degree of potential selection and information biases, allowing readers to properly judge the results in view of the nonrandom nature of the study and its biases.
Method of generation of an allocation sequence
To minimise bias, participants in a trial should be assigned to comparison groups based on some chance (random) process. Investigators use many different methods of randomisation, the most predominant of which we describe in the sections below.
With all these methods, different allocation ratios are possible. However, an allocation ratio of 1:1 (i.e., an equal probability of assignment to each group) is usually employed, leading to approximately equal group sizes. Although a 1:1 allocation ratio usually maximises trial power, ratios of up to 2:1 reduce power only minimally.

Good reasons argue for more common use of unequal allocation. The most obvious reasons relate to costs, because within a trial the cost of each treatment often differs. If the available trial funding is fixed, using an unequal allocation ratio to randomise more participants to the cheaper treatment group allows a larger sample size, thus increasing the power of the trial. With a fixed total sample size, an unequal allocation ratio can create large cost savings in a trial with minimal effect on statistical power. For example, with a fixed total sample size, using a 2:1 allocation ratio instead of a 1:1 ratio in the Scandinavian simvastatin study for preventing coronary heart disease would have led to a 34% cost saving with only a modest 3% loss in power.

Furthermore, unequal randomisation can help recruitment. Suppose one treatment arm is favoured by potential participants. If the trial is designed with unequal randomisation such that potential participants have a higher likelihood of being allocated to their favoured treatment, recruitment is likely to improve.
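To make the mechanics concrete, the following minimal sketch (our own illustration, not taken from the trials discussed; the 2:1 weighting, seed, and group labels are arbitrary choices) generates a simple unequal-allocation sequence in which each participant independently has a 2/3 chance of receiving the cheaper treatment:

```python
# Minimal sketch of simple randomisation with an unequal 2:1 allocation
# ratio. The seed, sample size, and labels are illustrative assumptions.
import random

rng = random.Random(2023)  # fixed seed gives a reproducible audit trail

def assign_2_to_1(n_participants):
    """Assign each participant to A with probability 2/3, else B."""
    return [rng.choices(["A", "B"], weights=[2, 1])[0]
            for _ in range(n_participants)]

sequence = assign_2_to_1(30)
print(sequence)
print("A:", sequence.count("A"), "B:", sequence.count("B"))  # roughly 20 vs 10
```

Because each assignment remains independent and random, this approach retains the unpredictability of simple randomisation; only the assignment probabilities change.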
Simple (Unrestricted) Randomisation
‘Elementary yet elegant’ describes simple randomisation (Panel 12.3). Although the most basic of allocation approaches, analogous to repeated fair coin-tossing, this method preserves complete unpredictability of each intervention assignment. No other allocation generation approach, irrespective of its complexity and sophistication, surpasses the unpredictability and bias prevention of simple randomisation. The unpredictability of simple randomisation, however, can also be a disadvantage. With small sample sizes, simple randomisation (1:1 allocation ratio) can yield highly disparate sample sizes in the groups by chance alone. For example, with a total sample size of 20, about 10% of the sequences generated with simple randomisation would yield a ratio imbalance of 3:7 or worse. This difficulty diminishes as the total sample size grows. Probability theory ensures that, in the long term, the sizes of the treatment groups will not be greatly imbalanced. For a two-arm trial, the chance of pronounced imbalance becomes negligible with trial sizes greater than 200. However, interim analyses with sample sizes of less than 200 might result in disparate group sizes.
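The quoted imbalance figure follows directly from the binomial distribution. As a quick check (our own sketch, not part of the chapter's examples), the probability of a 3:7 split or worse with 20 participants can be computed as follows:

```python
# Check of the imbalance claim: with simple 1:1 randomisation of 20
# participants, how often is the split 6:14 (a 3:7 ratio) or worse?
from math import comb

n = 20
# X ~ Binomial(20, 0.5) counts the participants assigned to group A;
# a 3:7 imbalance or worse means X <= 6 or X >= 14 (symmetric tails).
p_imbalance = 2 * sum(comb(n, k) for k in range(7)) / 2**n
print(f"P(3:7 split or worse) = {p_imbalance:.3f}")  # 0.115, i.e. about 10%
```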
An almost infinite number of methods can be used to generate a simple randomisation sequence based on a random-number table. For example, for equal allocation to two groups, predetermine the direction in which to read the table: up, down, left, right, or diagonally. Then select an arbitrary starting point (e.g., first line, 7th number):
56 99 20 20 52 49 05 78 58 50 62 86 52 11 88
31 60 26 13 69 74 80 71 48 73 72 18 60 58 20
55 59 06 67 02 …
For equal allocation, an investigator could equate odd and even numbers to interventions A and B, respectively. Therefore, a series of random numbers 05, 78, 58, 50, 62, 86, 52, 11, 88, 31, and so forth represents allocation to intervention A, B, B, B, B, B, B, A, B, A, and so forth. Alternatively, 00–49 could equate to A and 50–99 to B, or numbers 00–09 to A and 10–19 to B, ignoring all numbers greater than 19. Any of a myriad of options suffice, provided the assignment probabilities are fixed in advance and the investigator adheres to the predetermined scheme.
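The odd/even rule translates directly into code. This minimal sketch (our own illustration) applies it to the series of random numbers just read from the table:

```python
# The ten numbers read from the table, starting at the 7th number
# of the first line.
numbers = [5, 78, 58, 50, 62, 86, 52, 11, 88, 31]

# Odd numbers map to intervention A, even numbers to intervention B.
assignments = ["A" if x % 2 == 1 else "B" for x in numbers]
print(assignments)  # ['A', 'B', 'B', 'B', 'B', 'B', 'B', 'A', 'B', 'A']
```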
Coin-tossing, dice-throwing, and dealing previously shuffled cards represent reasonable approaches for generation of simple complete randomisation sequences. All these manual methods of drawing lots theoretically lead to random allocation schemes, but frequently become nonrandom in practice. Distorted notions of randomisation sabotage the best of intentions. Fair coin-tossing, for example, allocates randomly with equal probability to two intervention groups, but can tempt investigators to alter the results of a toss or series of tosses (e.g., when a series of heads and no tails are thrown). Many investigators do not really understand probability theory, and they perceive randomness as nonrandom. For example, the late Chicago baseball announcer Jack Brickhouse used to claim that when a 0.250 hitter (someone who would have a successful hit a quarter of the time) strolled to the plate for the fourth time, having failed the previous three times, the batter was ‘due’ (i.e., would surely get a hit). However, Jack’s proclamation ‘he is due’ portrayed a nonrandom interpretation of randomness. Similarly, a couple who have three boys and want a girl often think that their fourth child will certainly be a girl, yet the probability of them actually having a girl is still about 50%.
A colleague regularly demonstrated distorted views of randomisation with his graduate school class. He would have half his class develop allocation schemes with a proper randomisation method, and get the other half to develop randomisation schemes based on their personal views of randomisation. The students who used a truly random method would frequently have long consecutive runs of one treatment or the other. Conversely, students who used their own judgement would not. Class after class revealed their distorted impressions of randomisation.
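The classroom result is easy to reproduce by simulation. In this minimal sketch (ours, with arbitrary sequence length and seed), truly random 1:1 sequences of 50 assignments routinely contain runs of five or more of the same treatment:

```python
# Simulation of the classroom demonstration: long runs of one treatment
# are the norm in truly random sequences. Seed and lengths are arbitrary.
import random

rng = random.Random(7)

def longest_run(seq):
    """Length of the longest run of identical consecutive assignments."""
    best = current = 1
    for prev, nxt in zip(seq, seq[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

runs = [longest_run(rng.choices("AB", k=50)) for _ in range(10_000)]
print(sum(r >= 5 for r in runs) / len(runs))  # roughly 0.8
```

Students relying on intuition, by contrast, tend to alternate treatments too evenly and almost never produce such runs.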
Moreover, manual methods of drawing lots are more difficult to implement and cannot be checked. Because of threats to randomness, difficulties in implementation, and lack of an audit trail, we recommend that investigators avoid coin-tossing, dice-throwing, and card-shuffling, despite their theoretical acceptability. Whatever method is used, however, should be clearly described in a researcher’s report. If no such description is given, readers should treat the study results with caution. Readers should have the most confidence in a sequence generation approach if the authors mention referral to either a table of random numbers or a computer random number generator, because these options represent unpredictable, reliable, easy, reproducible approaches that provide an audit trail. Many statistical software packages include random number generators, and random number generators are also freely available on the internet.
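As a closing illustration (a minimal sketch assuming nothing beyond Python's standard library; the seed, sample size, and group labels are arbitrary), a computer-generated sequence can be made reproducible, and hence auditable, simply by recording the seed:

```python
# Computer-generated simple randomisation with an audit trail: anyone
# re-running the generator with the recorded seed obtains the identical
# sequence. Seed, sample size, and labels are illustrative assumptions.
import random

SEED = 42  # record this in the trial documentation
rng = random.Random(SEED)

allocation = [rng.choice(["A", "B"]) for _ in range(200)]  # 1:1 allocation

# Replaying from the recorded seed reproduces the sequence exactly,
# which is precisely the audit trail that manual drawing of lots lacks.
replay_rng = random.Random(SEED)
replay = [replay_rng.choice(["A", "B"]) for _ in range(200)]
assert replay == allocation
```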