Small Sample Size Paradox

The small sample size illusion will eventually make it into my next iteration of the cognitive optical illusions slide set, but in the meantime, I am posting it on the blog.

Let me start with an example borrowed from Howard Wainer’s article in American Scientist, “The Most Dangerous Equation.” If you were to ask which counties in the U.S. had the highest rates of kidney cancer, you would find that rural, sparsely populated counties had the highest rates. You might think that perhaps this was due to pesticides, or lack of access to healthcare, or some other factor related to the rural lifestyle.

However, if you were to ask which counties had the lowest rates, you would find that rural, sparsely populated counties also had the lowest rates. In fact, the counties are often adjacent; see the map below. The red counties have the highest rates, and the teal counties the lowest.

This is because when a county has only a few people, the likelihood of seeing a very high or very low rate, due simply to chance, is high. For example, if there were only 2 people in the county and 1 of them got cancer, the rate would be 50%. If 0 out of 2 got cancer, it would be 0%.
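This chance-driven spread is easy to reproduce in a quick simulation (all populations and rates below are made up for illustration): give every county the same underlying cancer rate, draw case counts at random, and the extreme observed rates land almost entirely in the small counties.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_RATE = 1e-4  # assume the identical underlying rate in every county

# Hypothetical populations: 300 small rural counties, 30 large urban ones.
rural_pops = rng.integers(100, 2_000, size=300)
urban_pops = rng.integers(200_000, 2_000_000, size=30)
pops = np.concatenate([rural_pops, urban_pops])
labels = np.array(["rural"] * 300 + ["urban"] * 30)

# Case counts arise by chance alone; the observed rate is cases / population.
cases = rng.binomial(pops, TRUE_RATE)
rates = cases / pops

# Rank counties by observed rate; both tails are dominated by rural counties.
order = np.argsort(rates)
print("Lowest 10 observed rates:", labels[order[:10]])
print("Highest 10 observed rates:", labels[order[-10:]])
```

A tiny county needs only one unlucky case to post a rate far above anything a large county can reach by chance, and only zero cases to post a rate of exactly 0%.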

This is why the best (and the worst) hospitals in the country, or the best (and the worst) places to live, are often small hospitals and small towns. Statistically, the smaller the sample size, the greater the likelihood of seeing an outlier.

So how does this apply to clinical trial interpretation?

Often in a journal article, you will see a graph like the one below. This is a modified forest plot (the classic forest plot uses odds ratios, often on a semilog scale, but subgroup analyses frequently show point estimates on a linear scale).

Looking at this graph and the different subgroups, you might be tempted to conclude that patients under 30 derive the most benefit from the drug. In fact, perhaps you would even want to enrich the next study for younger patients.

Does the graph tell us that < 30 y.o. patients benefit much more from the drug than > 30 y.o. patients?

Probably not. The subgroup may benefit more, but the graph almost certainly overrepresents the magnitude of the benefit. For one, regression to the mean would tend to bring the outliers closer to the mean if you repeated the study.

But more importantly, note the size of the < 30 y.o. group: only 21 patients. With a sample that small, you are more likely to get an outlier. If you re-ran the trial with only < 30 y.o. patients, you would very likely see a smaller (or no) additional benefit compared to the overall group.
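A small simulation makes the point (the subgroup sizes and effect numbers here are hypothetical, not taken from any real trial): give every age subgroup the identical true treatment effect, and the 21-patient subgroup still ends up looking like the biggest winner far more often than any other, simply because its estimate is the noisiest.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.5   # assume the same true benefit in every subgroup
NOISE_SD = 1.0      # patient-level outcome variability
subgroup_sizes = {"<30": 21, "30-50": 400, "50-70": 600, ">70": 300}

# Run many simulated trials; in each, record which subgroup shows the
# largest estimated effect. The standard error of a subgroup's estimate
# shrinks with sqrt(n), so the smallest subgroup swings the widest.
n_trials = 10_000
winners = []
for _ in range(n_trials):
    estimates = {name: TRUE_EFFECT + rng.normal(0, NOISE_SD / np.sqrt(n))
                 for name, n in subgroup_sizes.items()}
    winners.append(max(estimates, key=estimates.get))

# Despite identical true effects, the 21-patient group "wins" most often.
for name in subgroup_sizes:
    print(name, winners.count(name) / n_trials)
```

If the four subgroups were equally informative, each would top the chart about a quarter of the time; instead the smallest group dominates the apparent winners despite having exactly the same true effect.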

It would behoove us, when looking at data like this, to remember that the smaller the group, the more likely it is that the effect size we’re seeing greatly overstates the actual magnitude of the benefit.

The graph is therefore misleading. A more accurate representation would be a plot (let’s call it a hilly forest plot) such as the one below, which conveys the likelihood of the actual benefit through the thickness of the confidence-interval line, after adjusting for regression to the mean and the small-sample effect.

Now, if you would indulge me, let me propose a hypothesis on a slightly different topic. One of the more hotly debated questions in our industry is whether small biotechs outperform big pharma, and, relatedly, whether small companies are more innovative. The hypothesis: could it be that small companies are, on average, no better than big pharma, but that the most productive (and the least productive) companies are small simply because small companies are more likely to be outliers? This might explain why some people insist that small companies are more productive even though the objective data suggest biotechs as a whole do not outperform big pharma.
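The hypothesis is easy to sanity-check in silico (the company counts and pipeline sizes below are invented for illustration): if every company draws project outcomes from the very same distribution, the best and worst performers in the resulting ranking are still almost always the small companies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assume every company samples project outcomes from one shared distribution;
# a company's "productivity" is the mean outcome over its pipeline.
small = rng.integers(1, 5, size=200)      # biotechs: 1-4 programs each
large = rng.integers(50, 200, size=20)    # big pharma: 50-199 programs each
sizes = np.concatenate([small, large])
labels = np.array(["biotech"] * 200 + ["pharma"] * 20)

productivity = np.array([rng.normal(0, 1, n).mean() for n in sizes])

# The ranking's extremes are small companies; the averages are comparable.
order = np.argsort(productivity)
print("Least productive 10:", labels[order[:10]])
print("Most productive 10:", labels[order[-10:]])
print("Mean productivity, biotech vs pharma:",
      productivity[labels == "biotech"].mean().round(3),
      productivity[labels == "pharma"].mean().round(3))
```

A company with a handful of programs can look brilliant or hopeless on the strength of one or two draws, while a large pipeline averages its luck away, which is exactly the kidney-cancer county effect wearing a business suit.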