Statistics: Setting the Stage



Fig. 3.1
Power curve. The curve shows the statistical power to detect differences (δ) between the control group and the intervention group, with a one-sided significance level of 0.05 and a total sample size of 2N (From: Sample Size. Friedman et al. [7]. Figure 7.1, page 97)





3.8 Sample Size Considerations


Clinical trials should have sufficient statistical power to detect clinically meaningful differences between groups. Calculating the sample size needed to achieve adequate levels of significance and power is an essential part of planning a clinical trial. Sample size estimates rest on multiple assumptions and should therefore be as conservative as can be justified. An overestimated sample size may make enrollment goals infeasible, while an underestimate may lead to incorrect conclusions at the end of the trial.

To illustrate how sample size calculations are made, consider a trial comparing one intervention group with one control group. If randomization allocates participants equally to the two groups (a 1:1 randomization schema), it is reasonable to assume that the variability in response is similar between groups, and the total sample size can be split equally between them.
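As a minimal sketch of such a calculation, the function below applies the standard normal-approximation formula for comparing two proportions. The function name and the example event rates (40% vs. 30%) are illustrative assumptions, not values from the text:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p_control, p_intervention,
                                alpha=0.05, power=0.80, two_sided=True):
    """Approximate sample size per group for detecting a difference
    between two proportions with a normal-approximation (z) test."""
    tails = 2 if two_sided else 1
    z_alpha = NormalDist().inv_cdf(1 - alpha / tails)  # critical value for significance level
    z_beta = NormalDist().inv_cdf(power)               # quantile corresponding to desired power
    variance = (p_control * (1 - p_control)
                + p_intervention * (1 - p_intervention))
    delta = p_control - p_intervention                 # clinically meaningful difference
    n_per_group = (z_alpha + z_beta) ** 2 * variance / delta ** 2
    return math.ceil(n_per_group)

# e.g. detecting a drop in event rate from 40% to 30%
# with 80% power and a two-sided alpha of 0.05
n = sample_size_two_proportions(0.40, 0.30)
```

Note that halving the significance level for a one-sided test (passing `two_sided=False`) yields a smaller sample size, which is why the choice of sidedness must be justified in advance.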

Differences can be tested with a one-sided or a two-sided test. In other words, the trialist must decide whether the study will measure differences in one direction only (i.e., whether the intervention improves on the control) or in either direction (i.e., whether the intervention is better or worse than the control). Because a new intervention could be either beneficial or harmful, two-sided tests are preferred. In the unusual circumstance in which a difference is expected in only one direction, the significance level used in calculating the sample size should be half of what would be used for a two-sided test.

Interim analyses may be employed in a clinical trial. If early data suggest that an intervention is harmful, the trial may have to be terminated prematurely for safety reasons. Further, if the results appear unlikely to show any difference by the end of the trial, early termination may be considered to spare the resources needed to continue. Conversely, if early data suggest that an intervention is clearly beneficial, the trial may likewise have to be terminated prematurely, because continuing to randomize patients to the control group may be unethical. One specific consequence of repeated data analysis is that the rate of incorrectly rejecting H₀ will be larger than the initially selected significance level unless the critical value for each interim analysis is appropriately adjusted. Importantly, the more interim analyses performed, the higher the probability of incurring a type I error.
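The inflation of the type I error rate can be illustrated with a rough calculation. The sketch below makes the simplifying assumption that the analyses are independent; real interim looks at accumulating data are correlated, so group-sequential methods (e.g., O'Brien–Fleming boundaries) are used in practice to compute the exact adjustment:

```python
def cumulative_type_i_error(alpha, n_looks):
    """Probability of at least one false-positive result across n_looks
    analyses, under the simplifying assumption that the looks are
    independent. This overstates the inflation for correlated interim
    looks, but conveys the direction of the effect."""
    return 1 - (1 - alpha) ** n_looks

# a single analysis keeps the error at the nominal level,
# while five unadjusted looks at alpha = 0.05 inflate it to roughly 0.23
one_look = cumulative_type_i_error(0.05, 1)
five_looks = cumulative_type_i_error(0.05, 5)
```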

Equipoise is the guiding principle behind RCTs; without true uncertainty about an intervention’s superiority, there is no justification for conducting a clinical trial. Sometimes the appropriate trial is a study of equivalency, also known as a trial with a positive control. Equivalency studies test whether a new intervention is as good as an established one. The study design includes a standard intervention (control) that is known to be better than placebo. The new intervention may be preferred because it is less expensive, has fewer side effects, or has other attractive features. Importantly, no trial can ever show that two interventions are the same; demonstrating complete equivalence (δ = 0) would require an infinite sample size. However, it is reasonable to specify a margin δ such that interventions differing by less than δ may be considered equally effective, or equivalent. The null hypothesis then states that the true difference is greater than δ; rejecting H₀ supports equivalence within the margin, whereas failure to reject H₀ merely indicates inadequate evidence that the interventions are equivalent.
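One common way to operationalize an equivalence margin is the two one-sided tests (TOST) procedure, which the text's framing corresponds to. The sketch below uses a normal approximation; the function name, the assumed standard error, and the example numbers are illustrative, not from the source:

```python
from statistics import NormalDist

def tost_equivalent(observed_diff, std_err, margin, alpha=0.05):
    """Two one-sided tests (TOST): declare equivalence only if the
    observed difference is significantly greater than -margin AND
    significantly less than +margin."""
    z_lower = (observed_diff + margin) / std_err  # tests H0: true diff <= -margin
    z_upper = (observed_diff - margin) / std_err  # tests H0: true diff >= +margin
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper) < alpha

# a difference estimated precisely near zero supports equivalence
# within a margin of 0.10; a noisy estimate near the margin does not
clearly_equivalent = tost_equivalent(0.00, 0.02, 0.10)
inconclusive = tost_equivalent(0.08, 0.05, 0.10)
```

Note how the procedure mirrors the text: equivalence is something one must demonstrate by rejecting H₀, and an imprecise estimate simply fails to do so.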


3.9 Conclusion


Proper attention to statistics when planning and conducting a clinical trial will help prevent an effective therapy from being disregarded because of false-negative results, or a relatively ineffective therapy from being applied improperly because of false-positive results.


References



1.

Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276:637–9.


2.

Tiruvoipati R, Balasubramanian SP, Atturu G, Peek GJ, Elbourne D. Improving the quality of reporting randomized controlled trials in cardiothoracic surgery. The way forward. J Thorac Cardiovasc Surg. 2006;132:233–40.
