Saturday, April 8, 2017

Statistical Errors in the Medical Literature

  1. Misinterpretation of P-values and Main Study Results
  2. Dichotomania
  3. Problems With Change Scores
  4. Improper subgrouping

As Doug Altman famously wrote in his Scandal of Poor Medical Research in BMJ in 1994, the quality of how statistical principles and analysis methods are applied in medical research is quite poor.  According to Doug and to many others such as Richard Smith, the problems have only gotten worse.  The purpose of this blog article is to maintain a running list of new papers in major medical journals that are statistically problematic, based on my random encounters with the literature.

One of the most pervasive problems in the medical literature (and in other subject areas) is misuse and misinterpretation of p-values as detailed here, and chief among these issues is perhaps the "absence of evidence is not evidence of absence" error written about so clearly by Altman and Bland.  The following thought will likely rattle many biomedical researchers, but I've concluded that most of the gross misinterpretation of large p-values, by falsely inferring that a treatment is not effective, is caused by (1) the investigators not being brave enough to conclude "We haven't learned anything from this study", i.e., they feel compelled to believe that their investments of time and money must be worth something, and (2) journals accepting such papers without demanding a proper statistical interpretation in the conclusion.  One example of proper wording would be "This study rules out, with 0.95 confidence, a reduction in the odds of death by more than a factor of 2."  Ronald Fisher, when asked how to interpret a large p-value, said "Get more data."

Adoption of Bayesian methods would solve many problems including this one.  Whether a p-value is small or large, a Bayesian can compute the posterior probability of similarity of outcomes of two treatments (e.g., Prob(0.85 < odds ratio < 1/0.85)), and the researcher will often find that this probability is not large enough to draw a conclusion of similarity.  On the other hand, what if even under a skeptical prior distribution the Bayesian posterior probability of efficacy were 0.8 in a "negative" trial?  Would you choose for yourself the standard therapy when it had a 0.2 chance of being better than the new drug? [Note: I am not talking here about regulatory decisions.]  Imagine a Bayesian world where it is standard to report the results for the primary endpoint using language such as the following (a computational sketch appears after the list):

  • The probability of any efficacy is 0.94 (so the probability of non-efficacy is 0.06).
  • The probability of efficacy greater than a factor of 1.2 is 0.78 (odds ratio < 1/1.2).
  • The probability of similarity to within a factor of 1.2 is 0.3.
  • The probability that the true odds ratio lies in the interval [0.6, 0.99] is 0.95 (a credible interval; unlike a confidence interval, this does not rely on the long-run tendency of the procedure to cover the true value in 0.95 of the intervals computed).
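
To make this concrete, here is a minimal sketch of how such statements could be computed, assuming the posterior distribution of the log odds ratio has already been obtained and is well approximated by a normal distribution.  The posterior mean and standard deviation below are made up for illustration and do not come from any real trial.

```python
# Minimal sketch: posterior probabilities for a treatment effect on the odds ratio scale.
# Assumes a normal approximation to the posterior of log(OR); the mean and SD are
# hypothetical, not taken from any study.
import numpy as np
from scipy import stats

post = stats.norm(loc=np.log(0.80), scale=0.15)   # hypothetical posterior for log(OR)

p_any_efficacy   = post.cdf(np.log(1.0))                              # Prob(OR < 1)
p_efficacy_1_2   = post.cdf(np.log(1 / 1.2))                          # Prob(OR < 1/1.2)
p_similarity_1_2 = post.cdf(np.log(1.2)) - post.cdf(np.log(1 / 1.2))  # Prob(1/1.2 < OR < 1.2)
cred_int_95      = np.exp(post.ppf([0.025, 0.975]))                   # 0.95 credible interval

print(f"P(any efficacy)             = {p_any_efficacy:.2f}")
print(f"P(efficacy > factor of 1.2) = {p_efficacy_1_2:.2f}")
print(f"P(similarity within 1.2)    = {p_similarity_1_2:.2f}")
print(f"0.95 credible interval for OR: [{cred_int_95[0]:.2f}, {cred_int_95[1]:.2f}]")
```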

In a so-called "negative" trial we frequently see the phrase "treatment B was not significantly different from treatment A" without thinking about how little information that phrase carries.  Was the power really adequate?  Is the author talking about an observed statistic (probably yes) or the true unknown treatment effect?  Why should we care more about statistical significance than clinical significance?  The phrase "was not significantly different" seems to be a way to avoid the real issues of interpretation of large p-values.

Since my #1 area of study is statistical modeling, especially predictive modeling, I pay a lot of attention to model development and model validation as done in the medical literature, and I routinely encounter published papers where the authors do not have basic understanding of the statistical principles involved.  This seems to be especially true when a statistician is not among the paper's authors.  I'll be commenting on papers in which I encounter statistical modeling, validation, or interpretation problems.

Misinterpretation of P-values and of Main Study Results

One of the most problematic examples I've seen is in the March 2017 paper Levosimendan in Patients with Left Ventricular Dysfunction Undergoing Cardiac Surgery by Rajendra Mehta in the New England Journal of Medicine.  The study was designed to detect a miracle - a 35% relative odds reduction with drug compared to placebo, and used a power requirement of only 0.8 (type II error a whopping 0.2).  [The study also used some questionable alpha-spending that Bayesians would find quite odd.]  For the primary endpoint, the adjusted odds ratio was 1.00 with 0.99 confidence interval [0.66, 1.54] and p=0.98.  Yet the authors concluded "Levosimendan was not associated with a rate of the composite of death, renal-replacement therapy, perioperative myocardial infarction, or use of a mechanical cardiac assist device that was lower than the rate with placebo among high-risk patients undergoing cardiac surgery with the use of cardiopulmonary bypass."   Their own data are consistent with a 34% reduction (as well as a 54% increase)!  Almost nothing was learned from this underpowered study.  It may have been too disconcerting for the authors and the journal editor to have written "We were only able to rule out a massive benefit of drug."  [Note: two treatments can have agreement in outcome probabilities by chance just as they can have differences by chance.]  It would be interesting to see the Bayesian posterior probability that the true unknown odds ratio is in [0.85, 1/0.85].
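
As a rough illustration (this is not the authors' analysis), one can approximate that posterior probability by reconstructing the normal likelihood for the log odds ratio from the reported point estimate and 0.99 confidence interval and combining it with a nearly flat prior, in which case the posterior is essentially the normal approximation itself:

```python
# Rough sketch, not the published analysis: approximate P(0.85 < OR < 1/0.85) for the
# primary endpoint, reconstructing the likelihood for log(OR) from the reported
# adjusted OR of 1.00 with 0.99 CI [0.66, 1.54] and assuming a nearly flat prior.
import numpy as np
from scipy import stats

or_hat, lo, hi = 1.00, 0.66, 1.54
se_log_or = (np.log(hi) - np.log(lo)) / (2 * stats.norm.ppf(0.995))  # 0.99 CI half-width / z

post = stats.norm(loc=np.log(or_hat), scale=se_log_or)
p_similar = post.cdf(np.log(1 / 0.85)) - post.cdf(np.log(0.85))
print(f"Approximate P(0.85 < OR < 1/0.85) = {p_similar:.2f}")   # roughly 0.68 here
```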

The primary endpoint is the union of death, dialysis, MI, or use of a cardiac assist device.  This counts these four endpoints as equally bad.  An ordinal response variable would have yielded more statistical information/precision and perhaps increased power.  And instead of dealing with multiplicity issues and alpha-spending, the multiple endpoints could have been dealt with more elegantly with a Bayesian analysis.  For example, one could easily compute the joint probability that the odds ratio for the primary endpoint is less than 0.8 and the odds ratio for the secondary endpoint is less than 1 [the secondary endpoint was death or use of an assist device and is harder to demonstrate because of its lower incidence, and is perhaps more of a "hard endpoint"].  In the Bayesian world of forward directly relevant probabilities there is no need to consider multiplicity.  There is only a need to state the assertions for which one wants to compute current probabilities.
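
The joint probability of such assertions is trivial to obtain once one has posterior draws for the two odds ratios.  The sketch below uses simulated draws from an arbitrary correlated normal posterior purely to show the mechanics; it does not use the trial's data.

```python
# Schematic only: given posterior draws of the two log odds ratios (simulated here from
# an invented correlated normal, not fitted to the trial), the joint probability of
# several assertions is just the fraction of draws satisfying all of them.
import numpy as np

rng = np.random.default_rng(1)
draws = rng.multivariate_normal(mean=[np.log(0.85), np.log(0.95)],
                                cov=[[0.02, 0.01], [0.01, 0.03]], size=100_000)
log_or_primary, log_or_secondary = draws[:, 0], draws[:, 1]

p_joint = np.mean((log_or_primary < np.log(0.8)) & (log_or_secondary < np.log(1.0)))
print(f"P(primary OR < 0.8 and secondary OR < 1) = {p_joint:.2f}")
```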

The paper also contains inappropriate assessments of interactions with treatment using subgroup analysis with arbitrary cutpoints on continuous baseline variables and failure to adjust for other main effects when doing the subgroup analysis.

This paper had a fine statistician as a co-author.  I can only conclude that the pressure to avoid disappointment with a conclusion of spending a lot of money with little to show for it was in play.

Why was such an underpowered study launched?  Why do researchers attempt "hail Mary passes"?  Is a study that is likely to be futile fully ethical?   Do medical journals allow this to happen because of some vested interest?

Similar Examples

Perhaps the above example is no worse than many.  Examples of "absence of evidence" misinterpretations abound.  Consider the JAMA paper by Kawazoe et al published 2017-04-04.  They concluded that "Mortality at 28 days was not significantly different in the dexmedetomidine group vs the control group (19 patients [22.8%] vs 28 patients [30.8%]; hazard ratio, 0.69; 95% CI, 0.38-1.22; P = .20)."  The point estimate was a reduction in hazard of death by 31% and the data are consistent with the reduction being as large as 62%!

Or look at this 2017-03-21 JAMA article in which the authors concluded "Among healthy postmenopausal older women with a mean baseline serum 25-hydroxyvitamin D level of 32.8 ng/mL, supplementation with vitamin D3 and calcium compared with placebo did not result in a significantly lower risk of all-type cancer at 4 years." even though the observed hazard ratio was 0.7, with a lower confidence limit corresponding to a whopping 53% reduction in the incidence of cancer.  And the 0.7 was an unadjusted hazard ratio; the hazard ratio could well have been more impressive had covariate adjustment been used to account for outcome heterogeneity within each treatment arm.

Dichotomania

Dichotomania, as discussed by Stephen Senn, is a very prevalent problem in medical and epidemiologic research.  Categorization of continuous variables for analysis is inefficient at best and misleading at worst.  This JAMA paper by VISION study investigators "Association of Postoperative High-Sensitivity Troponin Levels With Myocardial Injury and 30-Day Mortality Among Patients Undergoing Noncardiac Surgery" is an excellent example of bad statistical practice that limits the amount of information provided by the study.  The authors categorized high-sensitivity troponin T levels measured post-op and related these to the incidence of death.  They used four intervals of troponin, and there is important heterogeneity of patients within these intervals.  This is especially true for the last interval (> 1000 ng/L).  Mortality may be much higher for troponin values that are much larger than 1000.  The relationship should instead have been analyzed continuously, e.g., with logistic regression using a regression spline for troponin, a nonparametric smoother, etc.  The final result could be presented in a simple line graph with confidence bands.
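
A minimal sketch of such a continuous analysis is below, assuming a hypothetical data frame with columns troponin (ng/L) and death (0/1); the simulated data and the spline basis (a B-spline standing in for a restricted cubic spline) are placeholders, not the VISION data or model.

```python
# Sketch of a continuous analysis (not the authors' model): logistic regression for
# 30-day death with a regression spline in log troponin, on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"troponin": rng.lognormal(mean=4, sigma=1, size=500)})   # ng/L
logit = -6 + 0.9 * np.log(df["troponin"])              # invented dose-response
df["death"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.glm("death ~ bs(np.log(troponin), df=4)", data=df,
              family=sm.families.Binomial()).fit()

# Predicted mortality over a fine grid of troponin values, with confidence limits,
# ready to draw as a simple line graph with confidence bands.
grid = pd.DataFrame({"troponin": np.linspace(df.troponin.min(), df.troponin.max(), 200)})
pred = fit.get_prediction(grid).summary_frame()        # mean, mean_ci_lower, mean_ci_upper
```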

An example of dichotomania that may not be surpassed for some time is Simplification of the HOSPITAL Score for Predicting 30-day Readmissions by Carole E Aubert, et al in BMJ Quality and Safety 2017-04-17. The authors arbitrarily dichotomized several important predictors, resulting in a major loss of information, then dichotomized their resulting predictive score, sacrificing much of what information remained. The authors failed to grasp probabilities, reducing the risk of 30-day readmission to the crude labels "unlikely" and "likely". The categorization of predictor variables leaves demonstrable outcome heterogeneity within the intervals of predictor values. Then taking an already oversimplified predictive score and dichotomizing it is essentially saying to the reader "We don't like the integer score we just went to the trouble to develop." I now have serious doubts about the thoroughness of reviews at BMJ Quality and Safety.

Change from Baseline

Many authors and pharmaceutical clinical trialists make the mistake of analyzing change from baseline instead of making the raw follow-up measurements the primary outcomes, covariate-adjusted for baseline.  To compute change scores requires many assumptions to hold, e.g.:

  1. the variable must be perfectly transformed so that subtraction "works" and the result is not baseline-dependent
  2. the variable must not have floor and ceiling effects
  3. the variable must have a smooth distribution
  4. the slope of the pre value vs. the follow-up measurement must be close to 1.0

Details about problems with analyzing change may be found here.  A general problem with the approach is that when Y is ordinal but not interval-scaled, differences in Y may no longer be ordinal.  So analysis of change loses the opportunity to do a robust, powerful analysis using a covariate-adjusted ordinal response model such as the proportional odds or proportional hazards model.  Such ordinal response models do not require one to be correct in how to transform Y.
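
As a small simulated illustration of the alternative advocated above (covariate adjustment for baseline rather than subtracting it), the sketch below contrasts a change-score analysis with an ANCOVA on the raw follow-up values; the variable names and effect sizes are invented.

```python
# Simulated contrast of change-score analysis vs. ANCOVA on raw follow-up values.
# All numbers are invented; the point is the smaller standard error of the
# covariate-adjusted treatment effect when the slope on baseline is not 1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
baseline = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)
followup = 20 + 0.5 * baseline - 3 * treat + rng.normal(0, 8, n)   # slope on baseline < 1
df = pd.DataFrame({"baseline": baseline, "treat": treat, "followup": followup})

change_fit = smf.ols("I(followup - baseline) ~ treat", data=df).fit()
ancova_fit = smf.ols("followup ~ baseline + treat", data=df).fit()

print("change score:", change_fit.params["treat"], change_fit.bse["treat"])
print("ANCOVA:      ", ancova_fit.params["treat"], ancova_fit.bse["treat"])
```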

Regarding 4. above, often the baseline is not as relevant as thought and the slope will be less than 1.  When the treatment can cure every patient, the slope will be zero.  Sometimes the relationship between baseline and follow-up Y is not even linear, as in one example I've seen based on the Hamilton D depression scale.

The purpose of a parallel-group randomized clinical trial is to compare the parallel groups, not to compare a patient with herself at baseline.  Within-patient change is affected strongly by regression to the mean and measurement error.  When the baseline value is one of the patient inclusion/exclusion criteria, the only meaningful change score, even if the assumptions listed above are satisfied, requires one to have a second baseline measurement post patient qualification to cancel out much of the regression to the mean effect.  It is the second baseline that would be subtracted from the follow-up measurement.

Patient-reported outcome scales are particularly problematic.  An article published 2017-05-07 in JAMA (doi:10.1001/jama.2017.5103), like many other articles, makes the error of trusting change from baseline as an appropriate analysis variable.  Mean change from baseline may not apply to anyone in the trial.  Consider a 5-point ordinal pain scale with values Y=1,2,3,4,5.  Patients starting with no pain (Y=1) cannot improve, so their mean change must be zero.  Patients starting at Y=5 have the most opportunity to improve, so their mean change will be large.  A treatment that improves pain scores by an average of one point may average a two-point improvement for patients for whom any improvement is possible.  Stating mean changes out of context of the baseline state can be meaningless.
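
A toy calculation makes the floor effect explicit; the improvement distribution below is invented solely to show that mean change depends on where patients start.

```python
# Toy illustration of the floor effect on a 5-point ordinal pain scale: mean change
# from baseline depends on the starting value, so a single mean change may describe
# no actual patient. The improvement distribution is invented.
import numpy as np

rng = np.random.default_rng(3)
for start in (1, 3, 5):
    improvement = rng.integers(0, 3, 10_000)          # 0, 1, or 2 points, where possible
    follow = np.clip(start - improvement, 1, 5)       # pain cannot go below Y=1
    print(f"baseline Y={start}: mean change = {np.mean(follow - start):+.2f}")
```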

The NEJM paper Treatment of Endometriosis-Associated Pain with Elagolix, an Oral GnRH Antagonist by Hugh Taylor et al is based on a disastrous set of analyses, combining all the problems above. The authors computed change from baseline on variables that do not have the correct properties for subtraction, engaged in dichotomania by doing responder analysis, and in addition used last observation carried forward to handle dropouts. A proper analysis would have been a longitudinal analysis using all available data that avoided imputation of post-dropout values and used raw measurements as the responses. Most importantly, the twin clinical trials randomized 872 women, and had proper analyses been done, the required sample size to achieve the same power would have been far smaller. Besides the ethical issue of randomizing an unnecessarily large number of women to inferior treatment, the approach used by the investigators maximized the cost of these positive trials.
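
One possible form of such a longitudinal analysis is sketched below as a mixed-effects model on the raw repeated measurements, adjusted for baseline and using all available visits without imputing post-dropout values; the data, variable names, and model form are hypothetical, not those of the Elagolix trials.

```python
# Hedged sketch of a longitudinal analysis on raw measurements (simulated data):
# random-intercept model with baseline as a covariate and a treatment-by-visit effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, visits = 100, [1, 2, 3]
base = pd.DataFrame({"id": range(n),
                     "treat": rng.integers(0, 2, n),
                     "baseline": rng.normal(5, 1.5, n)})
long_df = (base.loc[base.index.repeat(len(visits))]
               .assign(visit=np.tile(visits, n))
               .reset_index(drop=True))
subject_effect = np.repeat(rng.normal(0, 0.7, n), len(visits))
long_df["pain"] = (0.6 * long_df["baseline"] - 0.8 * long_df["treat"]
                   - 0.2 * long_df["visit"] + subject_effect
                   + rng.normal(0, 1, len(long_df)))

fit = smf.mixedlm("pain ~ baseline + treat * C(visit)", data=long_df,
                  groups=long_df["id"]).fit()
print(fit.summary())
```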

The NEJM paper Oral Glucocorticoid–Sparing Effect of Benralizumab in Severe Asthma by Parameswaran Nair et al not only takes the problematic approach of using change scores from baseline in a parallel group design but also uses percent change from baseline as the raw data in the analysis. This is an asymmetric measure for which arithmetic doesn't work. For example, suppose that one patient increases from 1 to 2 and another decreases from 2 to 1. The corresponding percent changes are 100% and -50%. The overall summary should be 0% change, not +25% as found by taking the simple average. Doing arithmetic on percent change essentially involves adding ratios; ratios that are not proportions are never added; they are multiplied. What was needed was an analysis of covariance of raw oral glucocorticoid dose values adjusted for baseline after taking an appropriate transformation of dose, or using a more robust transformation-invariant ordinal semi-parametric model on the raw follow-up doses (e.g., proportional odds model).
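
The arithmetic in the example above is easy to verify; the sketch below also shows that working on the log-ratio scale (and then exponentiating) gives the sensible answer of no net change.

```python
# The 1 -> 2 and 2 -> 1 example: averaging percent changes gives +25% despite no net
# change, while averaging log ratios and exponentiating correctly gives a ratio of 1.
import numpy as np

before = np.array([1.0, 2.0])
after  = np.array([2.0, 1.0])

pct_change = 100 * (after - before) / before          # [100%, -50%]
print(pct_change.mean())                              # 25.0 -- misleading
print(np.exp(np.mean(np.log(after / before))))        # 1.0  -- no change on the ratio scale
```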

In Trial of Cannabidiol for Drug-Resistant Seizures in the Dravet Syndrome in NEJM 2017-05-25, Orrin Devinsky et al take seizure frequency, which might have a nice distribution such as the Poisson, and compute its change from baseline, which is likely to have a hard-to-model distribution. Once again, the authors failed to recognize that the purpose of a parallel group design is to compare the parallel groups. Then the authors engaged in improper subtraction, improper use of percent change, dichotomania, and loss of statistical power simultaneously: "The percentage of patients who had at least a 50% reduction in convulsive-seizure frequency was 43% with cannabidiol and 27% with placebo (odds ratio, 2.00; 95% CI, 0.93 to 4.30; P=0.08)." The authors went on to analyze the change in a discrete ordinal scale, where change (subtraction) cannot have a meaning independent of the starting point at baseline.
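
A sketch of one alternative (again, not the trial's analysis) is to model the raw follow-up seizure counts directly, adjusted for the baseline rate, for example with a negative binomial model and an offset for the length of the treatment period; the data and variable names below are simulated placeholders.

```python
# Sketch on simulated data: analyze raw follow-up seizure counts adjusted for the
# baseline rate, rather than percent change from baseline or a responder dichotomy.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 120
df = pd.DataFrame({"treat": rng.integers(0, 2, n),
                   "baseline_rate": rng.gamma(2.0, 6.0, n),   # seizures per 28 days
                   "weeks": np.full(n, 14.0)})                # treatment period length
mu = np.exp(0.8 + 0.9 * np.log(df["baseline_rate"]) - 0.4 * df["treat"]) * df["weeks"] / 4
df["seizures"] = rng.poisson(mu)

fit = smf.glm("seizures ~ treat + np.log(baseline_rate)", data=df,
              family=sm.families.NegativeBinomial(),
              offset=np.log(df["weeks"] / 4)).fit()
print(np.exp(fit.params["treat"]))   # estimated seizure rate ratio, treated vs. placebo
```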

Improper Subgrouping

The JAMA Internal Medicine paper Effect of Statin Treatment vs Usual Care on Primary Cardiovascular Prevention Among Older Adults by Benjamin Han et al makes the classic statistical error of attempting to learn about differences in treatment effectiveness by subgrouping rather than by correctly modeling interactions. They compounded the error by not adjusting for covariates when comparing treatments in the subgroups, and even worse, by subgrouping on a variable for which grouping is ill-defined and information-losing: age. They used age intervals of 65-74 and 75+. A proper analysis would have been, for example, modeling age as a smooth nonlinear function (e.g., using a restricted cubic spline) and interacting this function with treatment to allow for a high-resolution, non-arbitrary analysis that allows for nonlinear interaction. Results could be displayed by showing the estimated treatment hazard ratio and confidence bands (y-axis) vs. continuous age (x-axis). The authors' analysis avoids the question of a dose-response relationship between age and treatment effect. A full strategy for interaction modeling for assessing heterogeneity of treatment effect (AKA precision medicine) may be found in the analysis of covariance chapter in Biostatistics for Biomedical Research.
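
The sketch below illustrates the kind of model described, on simulated placeholder data: a Cox model with a smooth age effect (a B-spline standing in for a restricted cubic spline) interacted with treatment, from which the treatment hazard ratio can be plotted as a continuous function of age.  It is not the paper's data or analysis.

```python
# Illustrative sketch (simulated data, not the paper's analysis): treatment-by-age
# interaction with a smooth age effect, instead of comparing arbitrary age groups.
import numpy as np
import pandas as pd
from statsmodels.duration.hazard_regression import PHReg

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"age": rng.uniform(66, 95, n),
                   "statin": rng.integers(0, 2, n),
                   "time": rng.exponential(10.0, n),     # placeholder follow-up times
                   "event": rng.integers(0, 2, n)})      # placeholder event indicator

# No intercept in a Cox model; the age effect and its interaction with treatment are
# modeled smoothly with a spline basis.
fit = PHReg.from_formula("time ~ 0 + statin * bs(age, df=4)", data=df,
                         status=df["event"].values).fit()
print(fit.summary())
# From this fit one can compute and plot the estimated statin hazard ratio, with
# confidence bands, as a smooth function of continuous age.
```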

To make matters worse, the above paper included patients with a sharp cutoff of 65 years of age as the lower limit. How much more informative it would have been to have a linearly increasing (in age) enrollment function that reaches a probability of 1.0 at 65y. Assuming that something magic happens at age 65 with regard to cholesterol reduction is undoubtedly a mistake.