CHAPTER 14 Power




Throughout our study of biostatistics, we have encountered the concept that in any experiment there is a chance of arriving at the wrong conclusion. We are quite comfortable with the fact that our data may lead us to an incorrect inference. We even have names for the types of errors we could make. However, as long as the chance of error is slim (such as less than 5%), we are willing to accept the risk.


We have carefully considered the possibility of making a Type I error. This is the situation in which the data do not support the null hypothesis and, even though there is no true treatment effect or relationship between the variables, we have “determined” that there is. We have rejected the null hypothesis of no effect and erroneously declared the alternative hypothesis to be true. We have not done this intentionally; we have simply accepted that, on rare occasions, the data will lead us astray.
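To make this concrete, here is a minimal simulation sketch (not from the text; the group mean, standard deviation, and sample sizes are arbitrary choices). When both groups are drawn from the same distribution, the null hypothesis is true, so every rejection at α = 0.05 is a Type I error, and the long-run rejection rate settles near 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
n_per_group = 30
false_positives = 0

# Both groups come from the SAME distribution, so the null hypothesis
# is true and any rejection is a Type I error.
for _ in range(n_experiments):
    control = rng.normal(loc=100, scale=15, size=n_per_group)
    treatment = rng.normal(loc=100, scale=15, size=n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        false_positives += 1

print(f"Type I error rate: {false_positives / n_experiments:.3f}")  # close to 0.05
```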


The way we could make an error like this is illustrated graphically in Figure 14-1 by a normally distributed sampling distribution of a sample statistic. If there truly is no treatment effect, then the null hypothesis is true, and our extreme test statistic, however unlikely under that hypothesis, did indeed occur. It was our bad luck to draw a sample that prompted us to reject the null hypothesis. If this did happen, we would likely be set straight in the future: the results of subsequent experiments would not support the original findings, or the results when the treatment is applied in the community would not be as positive.



If we were concerned about the possibility of making a Type I error, we could set our α level very low, say 0.01, to minimize it. However, when we do this, we are less likely to detect a true treatment difference if one does indeed exist. A test with a very small α is less useful because it rarely rejects H0, even when it should. When we decrease the chance of a Type I error, we increase the chance of a Type II error, in which we would incorrectly declare that there is no treatment effect. We want a test that keeps the chance of a Type I error small, but we also want a test that has the ability to detect a true treatment difference.
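This trade-off can be shown with a rough calculation. The sketch below (an illustrative assumption, not part of the text) uses the standard normal approximation for a two-sided, two-sample test: for the same true difference δ, common standard deviation σ, and sample size, lowering α from 0.05 to 0.01 lowers the power of the test.

```python
import numpy as np
from scipy import stats

def approx_power(delta, sigma, n_per_group, alpha):
    """Approximate power of a two-sided, two-sample z-test for a true mean
    difference delta, common standard deviation sigma, and n per group."""
    se = sigma * np.sqrt(2 / n_per_group)       # SE of the difference in means
    z_crit = stats.norm.ppf(1 - alpha / 2)      # critical value for this alpha
    # Probability the standardized difference lands beyond the critical value
    # (the lower rejection tail is negligible here and is ignored).
    return 1 - stats.norm.cdf(z_crit - delta / se)

# Same true effect and sample size, two different alpha levels:
print(approx_power(delta=5, sigma=15, n_per_group=50, alpha=0.05))  # roughly 0.38
print(approx_power(delta=5, sigma=15, n_per_group=50, alpha=0.01))  # roughly 0.18
```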


If the treatment really did make a difference, then there are really two different sampling distributions: one for the control group and one for the treatment (alternative) group. This is an important concept that captures the essence of statistical tests. The reason we get a difference in results (even after accounting for sampling variability) is that the distribution of the outcome variable now differs between the groups, because one of them has been changed by the intervention. This is a mathematical way of saying, quite simply, that there is a true difference in outcome due to the intervention.


When we compare two groups, one of which has received the intervention, we look at the difference in the means of the outcome variable between the two groups. This difference is expressed as δ. The null hypothesis states that the groups are the same, so δ = 0. If the treatment has an effect on the outcome, the two sampling distributions are pulled apart, as in Figure 14-2. The sample statistic (and the resulting test statistic) that we obtain is then unlikely to be compatible with the results we would expect if the null hypothesis were true.
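A power figure like this can also be estimated by simulation. In the hypothetical sketch below, the treatment group is drawn from a distribution whose mean is shifted by δ, so the proportion of experiments that reject H0 estimates the power of the test; the specific values of δ, σ, and n are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, delta, sigma, n_per_group = 0.05, 5, 15, 50
n_experiments = 5_000
rejections = 0

# Now the groups come from DIFFERENT distributions (true difference = delta),
# so each rejection is a correct decision; the rejection rate estimates power.
for _ in range(n_experiments):
    control = rng.normal(loc=100, scale=sigma, size=n_per_group)
    treatment = rng.normal(loc=100 + delta, scale=sigma, size=n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        rejections += 1

print(f"Estimated power: {rejections / n_experiments:.2f}")  # roughly 0.38 here
```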

