Saturday, April 8, 2017

Statistical Errors in the Medical Literature

  1. Misinterpretation of P-values and Main Study Results
  2. Dichotomania
  3. Problems With Change Scores
  4. Improper Subgrouping

As Doug Altman famously wrote in his Scandal of Poor Medical Research in BMJ in 1994, the quality with which statistical principles and analysis methods are applied in medical research is quite poor.  According to Doug and to many others, such as Richard Smith, the problems have only gotten worse.  The purpose of this blog article is to maintain a running list of new papers in major medical journals that are statistically problematic, based on my random encounters with the literature.

One of the most pervasive problems in the medical literature (and in other subject areas) is misuse and misinterpretation of p-values as detailed here, and chief among these issues is perhaps the "absence of evidence is not evidence of absence" error written about so clearly by Altman and Bland.  The following thought will likely rattle many biomedical researchers, but I've concluded that most of the gross misinterpretation of large p-values, by falsely inferring that a treatment is not effective, is caused by (1) the investigators not being brave enough to conclude "We haven't learned anything from this study", i.e., they feel compelled to believe that their investments of time and money must be worth something, and (2) journals accepting such papers without demanding a proper statistical interpretation in the conclusion.  One example of proper wording would be "This study rules out, with 0.95 confidence, a reduction in the odds of death of more than a factor of 2."  Ronald Fisher, when asked how to interpret a large p-value, said "Get more data."

Adoption of Bayesian methods would solve many problems, including this one.  Whether a p-value is small or large, a Bayesian can compute the posterior probability of similarity of outcomes of two treatments (e.g., Prob(0.85 < odds ratio < 1/0.85)), and the researcher will often find that this probability is not large enough to draw a conclusion of similarity.  On the other hand, what if even under a skeptical prior distribution the Bayesian posterior probability of efficacy were 0.8 in a "negative" trial?  Would you choose for yourself the standard therapy when it had a 0.2 chance of being better than the new drug? [Note: I am not talking here about regulatory decisions.]  Imagine a Bayesian world where it is standard to report the results for the primary endpoint using language such as the following (a small computational sketch follows the list):

  • The probability of any efficacy is 0.94 (so the probability of non-efficacy is 0.06).
  • The probability of efficacy greater than a factor of 1.2 is 0.78 (odds ratio < 1/1.2).
  • The probability of similarity to within a factor of 1.2 is 0.3.
  • The probability that the true odds ratio is in the interval [0.6, 0.99] is 0.95 (a credible interval, which unlike a confidence interval does not rely on the long-run tendency of the procedure to cover the true value in 0.95 of the intervals computed).
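
Here is a minimal sketch of how such probabilities are computed once a posterior distribution for the treatment effect is in hand.  It assumes, purely for illustration, that the posterior for the log odds ratio is approximately normal; the posterior mean and standard deviation below are hypothetical and do not come from any particular trial.

    import numpy as np
    from scipy import stats

    # Hypothetical posterior for the log odds ratio (treatment vs. control)
    post = stats.norm(loc=np.log(0.80), scale=0.12)

    p_any_efficacy = post.cdf(np.log(1.0))                               # P(OR < 1)
    p_efficacy_1_2 = post.cdf(np.log(1 / 1.2))                           # P(OR < 1/1.2)
    p_similarity   = post.cdf(np.log(1.2)) - post.cdf(np.log(1 / 1.2))   # P(1/1.2 < OR < 1.2)
    credible_95    = np.exp(post.ppf([0.025, 0.975]))                    # 0.95 credible interval for the OR

    print(p_any_efficacy, p_efficacy_1_2, p_similarity, credible_95)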

In a so-called "negative" trial we frequently see the phrase "treatment B was not significantly different from treatment A" without thinking through how little information that phrase carries.  Was the power really adequate?  Is the author talking about an observed statistic (probably yes) or the true unknown treatment effect?  Why should we care more about statistical significance than clinical significance?  The phrase "was not significantly different" seems to be a way to avoid the real issues of interpretation of large p-values.

Since my #1 area of study is statistical modeling, especially predictive modeling, I pay a lot of attention to model development and model validation as done in the medical literature, and I routinely encounter published papers where the authors do not have a basic understanding of the statistical principles involved.  This seems to be especially true when a statistician is not among the paper's authors.  I'll be commenting on papers in which I encounter statistical modeling, validation, or interpretation problems.

Misinterpretation of P-values and of Main Study Results

One of the most problematic examples I've seen is in the March 2017 paper Levosimendan in Patients with Left Ventricular Dysfunction Undergoing Cardiac Surgery by Rajendra Mehta in the New England Journal of Medicine.  The study was designed to detect a miracle - a 35% relative odds reduction with drug compared to placebo - and used a power requirement of only 0.8 (a type II error of a whopping 0.2).  [The study also used some questionable alpha-spending that Bayesians would find quite odd.]  For the primary endpoint, the adjusted odds ratio was 1.00 with 0.99 confidence interval [0.66, 1.54] and p=0.98.  Yet the authors concluded "Levosimendan was not associated with a rate of the composite of death, renal-replacement therapy, perioperative myocardial infarction, or use of a mechanical cardiac assist device that was lower than the rate with placebo among high-risk patients undergoing cardiac surgery with the use of cardiopulmonary bypass."  Their own data are consistent with a 34% reduction (as well as a 54% increase)!  Almost nothing was learned from this underpowered study.  It may have been too disconcerting for the authors and the journal editor to have written "We were only able to rule out a massive benefit of drug."  [Note: two treatments can have agreement in outcome probabilities by chance just as they can have differences by chance.]  It would be interesting to see the Bayesian posterior probability that the true unknown odds ratio is in [0.85, 1/0.85].
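
As a rough illustration of that last quantity (this is not the published analysis): under an essentially flat prior, the posterior for the log odds ratio is approximately normal, centered at the reported estimate with a standard error backed out of the reported 0.99 confidence interval, and the probability of similarity is a simple difference of tail areas.

    import numpy as np
    from scipy import stats

    or_hat, lo, hi = 1.00, 0.66, 1.54                               # reported adjusted OR and 0.99 CI
    se = (np.log(hi) - np.log(lo)) / (2 * stats.norm.ppf(0.995))    # CI half-width / z for 0.99 coverage
    post = stats.norm(loc=np.log(or_hat), scale=se)                 # approximate posterior under a flat prior

    # Posterior probability that the true OR lies in [0.85, 1/0.85]
    p_similar = post.cdf(np.log(1 / 0.85)) - post.cdf(np.log(0.85))
    print(round(p_similar, 2))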

The primary endpoint is the union of death, dialysis, MI, or use of a cardiac assist device.  This counts all four component endpoints as equally bad.  An ordinal response variable would have yielded more statistical information/precision and perhaps increased power.  And instead of dealing with multiplicity issues and alpha-spending, the multiple endpoints could have been dealt with more elegantly with a Bayesian analysis.  For example, one could easily compute the joint probability that the odds ratio for the primary endpoint is less than 0.8 and the odds ratio for the secondary endpoint is less than 1 [the secondary endpoint was death or assist device, is harder to demonstrate because of its lower incidence, and is perhaps more of a "hard endpoint"].  In the Bayesian world of forward, directly relevant probabilities there is no need to consider multiplicity.  There is only a need to state the assertions for which one wants to compute current probabilities.
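
A sketch of how such a joint probability falls out of posterior draws (the means, standard deviations, and correlation below are invented for illustration; in practice the draws would come from the fitted Bayesian model):

    import numpy as np

    rng = np.random.default_rng(1)
    mean = np.log([0.75, 0.90])                      # hypothetical posterior means of the two log ORs
    sd   = np.array([0.15, 0.20])
    rho  = 0.5                                       # hypothetical posterior correlation
    cov  = np.diag(sd) @ np.array([[1, rho], [rho, 1]]) @ np.diag(sd)
    draws = rng.multivariate_normal(mean, cov, size=100_000)

    # Joint probability: primary-endpoint OR < 0.8 AND secondary-endpoint OR < 1
    p_joint = np.mean((draws[:, 0] < np.log(0.8)) & (draws[:, 1] < np.log(1.0)))
    print(p_joint)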

The paper also contains inappropriate assessments of interactions with treatment, using subgroup analysis with arbitrary cutpoints on continuous baseline variables, and a failure to adjust for other main effects when doing the subgroup analysis.

This paper had a fine statistician as a co-author.  I can only conclude that the pressure to avoid the disappointment of concluding that a lot of money was spent with little to show for it was in play.

Why was such an underpowered study launched?  Why do researchers attempt "hail Mary passes"?  Is a study that is likely to be futile fully ethical?   Do medical journals allow this to happen because of some vested interest?

Similar Examples

Perhaps the above example is no worse than many.  Examples of "absence of evidence" misinterpretations abound.  Consider the JAMA paper by Kawazoe et al published 2017-04-04.  They concluded that "Mortality at 28 days was not significantly different in the dexmedetomidine group vs the control group (19 patients [22.8%] vs 28 patients [30.8%]; hazard ratio, 0.69; 95% CI, 0.38-1.22; P = .20)."  The point estimate was a reduction in hazard of death by 31% and the data are consistent with the reduction being as large as 62%!

Or look at this 2017-03-21 JAMA article in which the authors concluded "Among healthy postmenopausal older women with a mean baseline serum 25-hydroxyvitamin D level of 32.8 ng/mL, supplementation with vitamin D3 and calcium compared with placebo did not result in a significantly lower risk of all-type cancer at 4 years." even though the observed hazard ratio was 0.7, with a lower confidence limit representing a whopping 53% reduction in the incidence of cancer.  And the 0.7 was an unadjusted hazard ratio; the hazard ratio could well have been more impressive had covariate adjustment been used to account for outcome heterogeneity within each treatment arm.

Dichotomania

Dichotomania, as discussed by Stephen Senn, is a very prevalent problem in medical and epidemiologic research.  Categorization of continuous variables for analysis is inefficient at best and misleading at worst.  This JAMA paper by the VISION study investigators, "Association of Postoperative High-Sensitivity Troponin Levels With Myocardial Injury and 30-Day Mortality Among Patients Undergoing Noncardiac Surgery", is an excellent example of bad statistical practice that limits the amount of information provided by the study.  The authors categorized high-sensitivity troponin T levels measured post-op and related these to the incidence of death.  They used four intervals of troponin, and there is important heterogeneity of patients within these intervals.  This is especially true for the last interval (> 1000 ng/L); mortality may be much higher for troponin values that are far larger than 1000.  The relationship should have been analyzed continuously, e.g., with logistic regression using a regression spline for troponin, a nonparametric smoother, etc.  The final result could be presented in a simple line graph with confidence bands.
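
A sketch of that kind of continuous analysis on simulated data (the sample size, dose-response shape, and variable names are all invented): logistic regression with a spline in log troponin, which yields a smooth risk curve rather than four interval-specific rates.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"troponin": rng.lognormal(mean=4, sigma=1.2, size=2000)})
    true_logit = -6 + 0.8 * np.log(df["troponin"])                 # invented dose-response
    df["death"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

    # B-spline basis (patsy's bs()) lets 30-day mortality vary smoothly with troponin
    fit = smf.logit("death ~ bs(np.log(troponin), df=4)", data=df).fit()

    grid = pd.DataFrame({"troponin": np.linspace(df.troponin.min(), df.troponin.max(), 200)})
    grid["risk"] = fit.predict(grid)    # smooth curve; plot with confidence bands for the final display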

An example of dichotomania that may not be surpassed for some time is Simplification of the HOSPITAL Score for Predicting 30-day Readmissions by Carole E Aubert, et al in BMJ Quality and Safety 2017-04-17. The authors arbitrarily dichotomized several important predictors, resulting in a major loss of information, then dichotomized their resulting predictive score, sacrificing much of what information remained. The authors failed to grasp probabilities, reducing the risk of 30-day readmission to the categories "unlikely" and "likely". The categorization of predictor variables leaves demonstrable outcome heterogeneity within the intervals of predictor values. Then taking an already oversimplified predictive score and dichotomizing it is essentially saying to the reader "We don't like the integer score we just went to the trouble to develop." I now have serious doubts about the thoroughness of reviews at BMJ Quality and Safety.

Change from Baseline

Many authors and pharmaceutical clinical trialists make the mistake of analyzing change from baseline instead of making the raw follow-up measurements the primary outcomes, covariate-adjusted for baseline.  For change scores to be meaningful, many assumptions must hold, e.g.:

  1. the variable must be perfectly transformed so that subtraction "works" and the result is not baseline-dependent
  2. the variable must not have floor and ceiling effects
  3. the variable must have a smooth distribution
  4. the slope of the pre value vs. the follow-up measurement must be close to 1.0

Details about problems with analyzing change may be found here.  A general problem with the approach is that when Y is ordinal but not interval-scaled, differences in Y may no longer be ordinal.  So analysis of change loses the opportunity to do a robust, powerful analysis using a covariate-adjusted ordinal response model such as the proportional odds or proportional hazards model.  Such ordinal response models do not require one to be correct in how to transform Y.
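
A small simulation contrasting the two approaches (everything here is invented; the baseline/follow-up slope is deliberately set well below 1, which is common in practice). The treatment effect is estimated with a smaller standard error by ANCOVA on the raw follow-up value than by the change-score analysis:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 400
    baseline = rng.normal(50, 10, n)
    treat = rng.integers(0, 2, n)
    followup = 30 + 0.4 * baseline - 3 * treat + rng.normal(0, 8, n)  # true slope 0.4, true effect -3
    df = pd.DataFrame({"baseline": baseline, "treat": treat,
                       "followup": followup, "change": followup - baseline})

    ancova = smf.ols("followup ~ baseline + treat", data=df).fit()    # follow-up adjusted for baseline
    change = smf.ols("change ~ treat", data=df).fit()                 # change-score analysis

    print(ancova.params["treat"], ancova.bse["treat"])
    print(change.params["treat"], change.bse["treat"])                # larger standard error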

Regarding assumption 4 above, often the baseline is not as relevant as thought and the slope will be less than 1.  When the treatment can cure every patient, the slope will be zero.  Sometimes the relationship between baseline and follow-up Y is not even linear, as in one example I've seen based on the Hamilton-D depression scale.

The purpose of a parallel-group randomized clinical trial is to compare the parallel groups, not to compare a patient with herself at baseline.  Within-patient change is affected strongly by regression to the mean and measurement error.  When the baseline value is one of the patient inclusion/exclusion criteria, the only meaningful change score, even if the assumptions listed above are satisfied, requires a second baseline measurement taken after patient qualification to cancel out much of the regression to the mean effect.  It is the second baseline that would be subtracted from the follow-up measurement.

Patient-reported outcome scales are particularly problematic.  An article published 2017-05-07 in JAMA, doi:10.1001/jama.2017.5103, like many other articles makes the error of trusting change from baseline as an appropriate analysis variable.  Mean change from baseline may not apply to anyone in the trial.  Consider a 5-point ordinal pain scale with values Y=1,2,3,4,5.  Patients starting with no pain (Y=1) cannot improve, so their mean change must be zero.  Patients starting at Y=5 have the most opportunity to improve, so their mean change will be large.  A treatment that improves pain scores by an average of one point may average a two-point improvement for patients for whom any improvement is possible.  Stating mean changes out of the context of the baseline state can be meaningless.
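
A toy simulation of this floor effect (the distribution of baseline scores and the amount of improvement are made up): mean change is forced to be zero for patients starting at Y=1 and grows with the baseline value, so a single mean change describes almost no one.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    baseline = rng.integers(1, 6, n)                     # pain scale Y = 1..5 at baseline
    improvement = rng.integers(0, 5, n)                  # hypothetical points of improvement offered
    followup = np.maximum(baseline - improvement, 1)     # cannot go below Y = 1 (floor)
    change = followup - baseline

    for y in range(1, 6):
        print(y, change[baseline == y].mean())           # 0 at Y=1, increasingly negative as baseline rises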

The NEJM paper Treatment of Endometriosis-Associated Pain with Elagolix, an Oral GnRH Antagonist by Hugh Taylor et al is based on a disastrous set of analyses, combining all the problems above. The authors computed change from baseline on variables that do not have the correct properties for subtraction, engaged in dichotomania by doing responder analysis, and in addition used last observation carried forward to handle dropouts. A proper analysis would have been a longitudinal analysis using all available data that avoided imputation of post-dropout values and used raw measurements as the responses. Most importantly, the twin clinical trials randomized 872 women, and had proper analyses been done, the required sample size to achieve the same power would have been far smaller. Besides the ethical issue of randomizing an unnecessarily large number of women to inferior treatment, the approach used by the investigators maximized the cost of these positive trials.

The NEJM paper Oral Glucocorticoid–Sparing Effect of Benralizumab in Severe Asthma by Parameswaran Nair et al not only takes the problematic approach of using change scores from baseline in a parallel group design but they used percent change from baseline as the raw data in the analysis. This is an asymmetric measure for which arithmetic doesn't work. For example, suppose that one patient increases from 1 to 2 and another decreases from 2 to 1. The corresponding percent changes are 100% and -50%. The overall summary should be 0% change, not +25% as found by taking the simple average. Doing arithmetic on percent change can essentially involve adding ratios; ratios that are not proportions are never added; they are multiplied. What was needed was an analysis of covariance of raw oral glucocorticoid dose values adjusted for baseline after taking an appropriate transformation of dose, or using a more robust transformation-invariant ordinal semi-parametric model on the raw follow-up doses (e.g., proportional odds model).
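
The asymmetry in simple arithmetic, as described above: a doubling (1 to 2) and a halving (2 to 1) should cancel, yet the average percent change says +25%, whereas working on the ratio (log) scale gives the correct answer of no overall change.

    import numpy as np

    pre  = np.array([1.0, 2.0])
    post = np.array([2.0, 1.0])

    pct_change = 100 * (post - pre) / pre       # [+100., -50.]
    print(pct_change.mean())                    # +25.0, misleading

    log_ratio = np.log(post / pre)              # symmetric on the log scale
    print(np.exp(log_ratio.mean()))             # 1.0, i.e., no overall change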

In Trial of Cannabidiol for Drug-Resistant Seizures in the Dravet Syndrome in NEJM 2017-05-25, Orrin Devinsky et al take seizure frequency, which might have a nice distribution such as the Poisson, and compute its change from baseline, which is likely to have a hard-to-model distribution. Once again, authors failed to recognize that the purpose of a parallel group design is to compare the parallel groups. Then the authors engaged in improper subtraction, improper use of percent change, dichotomania, and loss of statistical power simultaneously: "The percentage of patients who had at least a 50% reduction in convulsive-seizure frequency was 43% with cannabidiol and 27% with placebo (odds ratio, 2.00; 95% CI, 0.93 to 4.30; P=0.08)." The authors went on to analyze the change in a discrete ordinal scale, where change (subtraction) cannot have a meaning independent of the starting point at baseline.

Improper Subgrouping

The JAMA Internal Medicine paper Effect of Statin Treatment vs Usual Care on Primary Cardiovascular Prevention Among Older Adults by Benjamin Han et al makes the classic statistical error of attempting to learn about differences in treatment effectiveness by subgrouping rather than by correctly modeling interactions. They compounded the error by not adjusting for covariates when comparing treatments in the subgroups, and even worse, by subgrouping on a variable for which grouping is ill-defined and information-losing: age. They used age intervals of 65-74 and 75+. A proper analysis would have been, for example, modeling age as a smooth nonlinear function (e.g., using a restricted cubic spline) and interacting this function with treatment to allow for a high-resolution, non-arbitrary analysis that allows for nonlinear interaction. Results could be displayed by showing the estimated treatment hazard ratio with confidence bands (y-axis) vs. continuous age (x-axis). The authors' analysis avoids the question of a dose-response relationship between age and treatment effect. A full strategy for interaction modeling for assessing heterogeneity of treatment effect (AKA precision medicine) may be found in the analysis of covariance chapter of Biostatistics for Biomedical Research.
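
A sketch of the suggested alternative on simulated data (ages, event rates, and effect sizes are all invented; this is not a reanalysis of the published trial): a proportional hazards model in which a spline in age interacts with treatment, so the treatment log hazard ratio is a smooth function of age rather than two subgroup estimates.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 3000
    age = rng.uniform(65, 90, n)
    treat = rng.integers(0, 2, n)
    log_hr = -0.4 * treat + 0.01 * treat * (age - 65)          # invented: effect attenuates with age
    rate = 0.05 * np.exp(0.03 * (age - 65) + log_hr)
    time = rng.exponential(1 / rate)
    event = (time < 10).astype(int)                            # administrative censoring at 10 years
    time = np.minimum(time, 10)
    df = pd.DataFrame({"time": time, "event": event, "age": age, "treat": treat})

    # Cox model: B-spline in age crossed with treatment (no intercept in a Cox model)
    fit = smf.phreg("time ~ 0 + bs(age, df=4) * treat", data=df,
                    status=df["event"].values).fit()
    print(fit.summary())
    # The age-specific treatment log HR is the treat coefficient plus the spline-by-treat
    # interaction terms evaluated over an age grid; plot it with confidence bands vs. age.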

To make matters worse, the above paper included patients with a sharp cutoff of 65 years of age as the lower limit. How much more informative it would have been to have a linearly increasing (in age) enrollment function that reaches a probability of 1.0 at 65y. Assuming that something magic happens at age 65 with regard to cholesterol reduction is undoubtedly a mistake.

31 comments:

  1. The study was severely underpowered for even the optimistic targeted effect size. What I find puzzling is they originally expected 201 events from 760 subjects. At an interim point of 600 enrolled, they adjusted the target enrollment to 880. How many endpoints they had at the interim time point is not known but it had to be less than 105 (final number). Why bother adjusting from 760 to 880 if you are observing less than half the expected number of events? Not sure what I would have recommended if I was on a DMB - maybe terminate immediately.

  2. Trial sequential analysis of one trial comparing levosimendan versus placebo on the primary endpoint in patients with left ventricular dysfunction undergoing cardiac surgery.
    Trial sequential analysis of one trial of levosimendan versus placebo in patients with left ventricular dysfunction undergoing cardiac surgery, based on the diversity-adjusted required information size (DARIS) of 2981 patients. This DARIS was calculated based upon a proportion of patients with the low cardiac output syndrome after cardiac surgery of 24.1% in the control group; a RRR of 20% in the experimental intervention group; an alpha (α) of 5%; and a beta (β) of 20%. The cumulative Z-curve (blue line) did not cross the conventional alpha of 5% (green line) after one trial. This implies that the result is compatible with random error. The cumulative Z-curve did not reach the futility area, which is not even drawn by the program. Presently, only 28.5% (849/2981) of the DARIS has been obtained. Had we calculated the DARIS based on a more realistic RRR such as <20%, the obtained evidence would represent a much smaller part of the DARIS. There is a need for more adequately powered randomised clinical trials before drawing reliable conclusions.
    Figure is available (arturo.marti.carvajal@gmail.com)

    Replies
    1. Sequential analysis in clinical trials is very important but for this particular trial we can make the needed points by mainly considering the final full-data analysis.

  3. This is a great idea. I'd love to see some sort of database collecting studies with poor methodology. I think this would be very valuable as a teaching resource.

    Replies
    1. They are so easy to find it's hard to know where to start :-)

  4. I'd like to add that, often even with a significant effect, not much is learned because the noise makes it so that the effect may range from plausible to implausible, e.g., d = 1.0, 95% CI [0.01, 2.0].

  5. Yes, you are right! But I see that everything is about the amount; I mean, the true difference is the amount. If I see a difference and it is small, it doesn't seem to matter, does it? If the difference is stronger, then it does matter. That's the problem and the solution: only the big differences matter!
    That's the solution: ignore the small differences.

    Replies
    1. But don't equate estimated differences with true differences.

  6. I too wish that the quality of reporting was better, and I think that Professor Harrell's wider work certainly improves the quality of research.
    However, I wonder, given the ubiquity of poorly analysed and reported research, if it is useful to single out particular papers. Researchers who would be embarrassed to see their work featured on this blog (much like they would be embarrassed to be featured as an example of poor-practice on BBC Radio 4's "More or Less") are likely to have agreed to a presentation of results either from weariness of battling with collaborators, a lack of power within the collaboration and/or job pressures.
    Researchers who are ignorant of how to usefully interpret a confidence interval will not recognise the validity of these criticisms. We only need to look in the letters pages to see how authors respond to valid criticisms.
    Hopefully the moves towards reproducible research, code sharing and open data will help with research quality.
    Nevertheless, surely the fundamental problem is the way in which academics are assessed for career progression. As Richard Smith argued when he spoke on this issue at an International Epidemiology Journal conference (I hope I am remembering his talk properly), universities are not fulfilling their duty to judge and maintain the quality of the work their staff are doing, and are instead outsourcing this responsibility to journal editors.
    David McAllister
    Wellcome Trust Intermediate Clinical Fellow and Beit Fellow,
    University of Glasgow

    Replies
    1. David these are great comments and open up a legitimate debate (which I hope we can continue) of the value and appropriateness of singling out individual papers and authors. Part of my motivation is based on the fact that in addition to universities not fully meeting their duty, journals are very much to blame and need to be embarrassed. Very few journals simultaneously get the points that (1) statistical design has a lot to do with study quality and efficiency and (2) statistical interpretation is nuanced and is clouded by the frequentist approach. I am doing this for journal editors and reviewers as much as for authors and for pointing out limitations of translating study findings to practice. More of your thoughts, and the thoughts of others, welcomed.

    2. It is helpful for me to see actual examples as opposed to a generic criticism of under-powered studies. I am guilty of allowing researchers to say similar things to the NEJM article in the past. This example forces me to examine my role as a consulting statistician. I hope he will continue to point out problems using real examples, and also offer potential remedies.

    3. We should definitely develop a proposal for an optimum way of reporting frequentist inferential results. Overall the solutions are (1) appropriate interpretations, (2) designing studies for precision, and (3) making studies simpler and larger. Regarding (1), confidence intervals should be emphasized no matter what the p-value. I too have been involved in many, many underpowered studies. They usually end up in second-line journals. I am especially concerned when first-line journals don't do their job.

    4. Hi Frank,
      Thanks for your very open-minded reply. I take the point about journals and agree that they do need to be embarrassed.
      I have a perhaps unrealistic suggestion on this issue, which it would be great to hear your views on.
      I think that peer-review should be separated into two stages, a methods stage and a results-and-discussion stage. I think that the introduction, the methods section, and those results which are a measure of the study's robustness (e.g., total number of participants, loss to follow-up, etc.) should be reviewed and given a score by each reviewer which reflects the quality of the methods. Only after having submitted this score should they be able to access the full paper. Journals ought to report the methods-quality scores for every published paper, as well as some metric which summarises the quality of all of their published papers, and those they reject.
      I realise that there are a lot of issues around definition and measurement, but at least the within-journal comparison between accepted and rejected manuscripts would be illuminating.
      best wishes,
      David

    5. David I think there is tremendous merit in this approach. It raises the question of exactly who reviewers should be. A related issue has been proposed by others: have all medical papers be reviewed without the results. Results bias reviews of methods. The likely reluctance of journals to adopt this approach will reveal the true motives of many journals. They stand not as much for science as they stand for readership and advertising dollars.

  7. This comment has been removed by a blog administrator.

  8. Interesting post; however, I still wonder what would be an appropriate wording / reporting of results. Take your example: "Levosimendan was not associated..." - would it be better to add "significantly" here (Levosimendan was not significantly associated...)? I mean, there is definitely an effect, it's just that the authors cannot say in which direction.
    Another question: you mentioned in your comments to focus on the CI instead of p - regarding the problems of CIs, especially for mixed models, would it not be even better to focus on the standard error rather than on confidence intervals, because standard errors are more "robust" than CIs (which assume a specific distribution for the test statistic)? But on this point I'm not sure, because I'm not a statistician.

  9. I don't get anything out of 'significantly'. Bayesian posterior probabilities of similarity, efficacy, and harm are to me the ultimate solutions. Within the frequentist world I suggest wording such as in this example: The Wilcoxon two-sample test yielded P=0.09 with 0.95 confidence limits for odds ratios from the proportional odds model of [0.7, 1.1]. Thus we were unable to assemble strong evidence against the null hypothesis of no treatment effect. A larger sample size might have been able to do so. The data are consistent with a reduction in odds of a lower level of response in treatment B by a factor of 0.7 as well as an increase by a factor of 1.1 with 0.95 confidence.

  10. One other thought for a study with very low information yield, e.g., the confidence interval for a hazard ratio is [0.25, 4.0]. Valid wording of a conclusion might be "The overall mortality observed in this study (both treatment groups combined) was 0.03 at 2 years. This will be useful in planning a future study that is useful, unlike this one. This study provides no information about the relative efficacy of the two treatments."

  11. Thanks! I prefer reporting also non-significant results, because in my opinion these are not meaningless - but it's not easy to convince co-authors sometimes. Their argument: focus on significant results, this makes the argumentation more clear to the reader (so, increasing readability).

  12. The study was registered at ClinicalTrials.gov
    https://clinicaltrials.gov/ct2/show/record/NCT02025621

    The study pre-specified a difference of medical relevance - 35% reduction in odds ratio - and type I and type II error rates (1% and 20% respectively).

    http://www.sciencedirect.com/science/article/pii/S0002870316301843

    "Statistical power and sample size considerations

    The sample size is based on an assumed composite primary end point event rate (death, MI, dialysis, mechanical assist) of 32% for placebo, a 35% relative reduction for levosimendan (20.8% event rate at 30 days), a significance level of .01, and 80% power. A total sample size of 760 should provide 201 events."

    They report measured outcomes estimates, confidence intervals, in addition to p-values, as recommended in the ASA statement on p-values.

    The trial finding was a null result, lack of statistical significance for a clinical trial powered to detect an a-priori stated difference of medical relevance, and the finding was published in a top tier journal. A null result, published!

    How many drugs that appeared promising in early tests and small trials end up failing to show useful effect when studied in a properly sized well controlled trial? This drug appears to be another example of this phenomenon. The review paper cited in this paper (Landoni et al., Crit Care Med 2012; 40:634–646) shows multiple small studies demonstrating a "significant" effect - now there are some papers with problems. Even the meta-analysis in the Landoni paper suggests a relative risk of 0.80, 95%CI (0.72, 0.89) which did not appear to be the case in this large clinical trial.

    Given all the a-priori specifications we can conclude that with 80% confidence, this drug is not doing what it was purported to do. Now we can argue the merits of setting the type II error rate to 20% when the type I error rate is 5% or 1%, or argue whether 35% reduction in odds ratio is too much or not enough of a medically relevant difference, but nonetheless we can conclude with 80% confidence based on this large trial that this drug is not doing much relative to the stated difference of medical relevance. How is the evidence provided by this trial not an improvement over all the little studies done previously, with who knows how many of which never came to publication so as to be available for the meta-analysis discussed above?

    So many things went right here, so I fail to see why this is a poor example. I have seen many poorly reported study findings, far poorer than this effort. I am surprised that for you this is "One of the most problematic examples I've seen".

  13. Steven you are right that this study is better done than many studies and it is good to see 'negative' studies published (only the more expensive multi-center clinical trials tend to be published when 'negative').  But why did you omit the confidence interval of [0.66, 1.54] from your comment?  That is the most important piece of frequentist evidence reported in the paper.  The confidence interval tells all.  We know little more after the study than we did before the study about the relative efficacy.  Notice that the trial was designed to detect a whopping 35% reduction and the lower confidence limit corresponded to only a 34% reduction.  A big part of the issue in this particular result is that the investigators thought that type I error was 20 times more important than type II error.  How does that make any sense?  After a study is completed, the error rates are not relevant and the data are.  Note also that we are not 80% confident that the drug is not doing much.  Not only are the data consistent with a 30% reduction in odds of a primary event, but the power is not relevant in the calculation of this probability.  What would be needed for you to make that statement is a Bayesian posterior probability of non-efficacy (one minus the probability of efficacy).  On the non-relevance of error rates see papers by Blume, Royall, etc.  One statistician (I wish I remembered to whom this should be attributed) gave this analogy: a frequentist judge is one who brags that in the history of her court she has convicted only 0.03 of innocent defendants.  Judges are supposed to maximize the probability of making correct decisions for individual defendants.  Long-run operating characteristics are not useful in interpreting results once they are in.  A side issue is that confidence intervals, though having a formal definition that will seem non-useful to most, have the nice property that they are equally relevant no matter whether the p-value is 'significant' or not.

    Replies
    1. "A big part of the issue in this particular result is that the investigators thought that type I error was 20 times more important than type II error. How does that make any sense?"

      Type I error: We declare the drug to be effective when in fact it is not.

      Type II error: We declare the drug to be not effective when in fact it is.

      The onus is on the drug developers to demonstrate a marked effect for the new drug. Declaring a sugar pill to be something marvellous is expensive. How much money are we spending right now on this drug? How much does a course of this drug cost? If it is not doing anything, we are wasting money that could be better spent on other treatments with more efficacious outcome. We are also wasting money and resources treating people for side effects for a drug that isn't offering much help. Adding more placebos to our formulary is bloating our health care budget unnecessarily. It yields more profits for drug companies, which they love, but yields a health care system that is a cancer on the national economy. So at a time when health care costs are out of control, we need tough tests of treatment effectiveness. We need to trim out any treatment that isn't showing a large benefit.

      Declaring a drug to be not effective when in fact it is is of course tragic. The cost then is in quality of life lost, and length of life lost. But the drug ought to yield a substantial benefit, and substantial benefits are not that difficult to demonstrate in large trials. Treatments should add years to lives, and improve quality of life substantially, which is why a "whopping 35% reduction" is not an unreasonable outcome to expect for a drug such as this one.

      Look at Tables 3 and 4 of the Mehta et al. paper in question here. In measure after measure, on hundreds of patients, this drug shows little if any effect. This drug has been around for nearly 20 years so there's been plenty of time to figure out which patients should benefit from this drug, yet this large clinical trial shows precious little.

      We have plenty of drugs and treatments that are cheap and bring years of high quality extra life to most patients, with many patients denied such access around the world. We would do far better to make that happen than the minor, if any, benefits that this drug is showing. This is one line of reasoning wherein type I errors are considerably more expensive than type II errors.

  14. You are right that the totality of evidence needs to be considered.  Frequentist methods don't help very much in that regard.  I am concentrating on the primary endpoint for simplicity.  I disagree that a 35% effect is realistic for powering a study.  When investigators choose a low-power binary endpoint they have the obligation of finding the money to make that endpoint 'work' by enrolling a much larger number of patients.  Relative efficacies of 15% or 25% are commonly used in power calculations, and we need at least 0.9 power, not 0.8.  Many papers have been written showing that for common disorders, relative effects smaller than 15% will still result in large public health benefits.  And your reasoning to justify type II errors being 20 times less important than type I is not compelling.  You've only justified that type II errors may be allowed to be larger than type I errors.  I feel that too many resources are wasted by studies being launched on "a wing and a prayer."

  15. Hi Frank,
    What do you think about the interpretation of reported treatment-covariate interactions?
    This is a big interest of mine as I am doing a 4-year project looking at heterogeneity of treatment effects (on various scales) according to the presence or absence of comorbidities.
    I think that while the dangers of sub-group reporting are appreciated by most, the potential that some treatment have lower efficacy (on some relative scale) is largely disregarded.
    Moreover, there is a tendency to report stratum-specific hazard ratios along with an NHST P-value for the interaction that is often close to one, which many readers take as evidence of no interaction. When I have used the published data to estimate confidence intervals for interactions, the lower and upper limits have been consistent with massive interactions in either direction.
    best wishes,
    David

  16. Great comments David and you raise a lot of good points. Stratum-specific treatment effects are inappropriate for many reasons. Interaction effect estimation is what is needed, and such assessment must be done in the presence of aggressive main effect adjustment. I have detailed this in my BBR notes - see http://www.fharrell.com/p/blog-page.html section 13.6. I slightly disagree with your statement 'is largely disregarded'. In my experience real interactions on an appropriate relative scale are not very common.

    Replies
    1. Frank, if real interactions are not very common, what does that mean for stratified medicine, in your view?

      By 'stratified medicine' I mean interactions with baseline characteristics, rather than 'responder analysis', which is obviously silly for reasons you've touched on in your discussion of change.

      Thanks.

    2. 'Stratified medicine' AKA 'precision medicine' AKA 'personalized medicine' is overhyped and based on misunderstandings. Stephen Senn has written eloquently about this. There are some real differential treatment effects out there (especially in cancer and bacterial infections) but true interactions with treatment are more rare than many believe. Clinicians need to first understand the simple concept of risk magnification that always exists (e.g., sicker patients have more absolute treatment benefit, no matter what is the cause of the sickness) and why that is different from differential treatment effectiveness. Any field that changes names every couple of years is bound to be at least partly BS. See my Links page here to get to the BBR notes that go into great detail in the analysis of covariance chapter.

    3. Thanks for taking the time to reply Frank. I feel like we (methodologists) have a bit of a conflict of interest when it comes to these medical fads. We get grants to develop 'new' precision medicine methodology, so have an interest in promoting the hype.

    4. Jack I believe you are exactly correct. I have seen statisticians switch to 'precision medicine' and invent solutions that are inappropriate for clinical use (e.g., they require non-available information) and I have seen many more statisticians do decent work but not question their medical leaders about either clinical practice or analysis strategy. We are afflicted with two problems: timidity and wanting to profit from the funding that NIH and other agencies make available with too little methodologic peer review.

  17. I don't think many of the dichotomisers understand what they're doing; they've collected data, only to throw a portion of it away by dichotomising continuous measurements (and then they'll throw more data away by randomly splitting their data into development and validation sets). The consequences of dichotomising are there for all to see: a substantial loss of predictive accuracy - both a drop in discrimination and poor calibration (see http://bit.ly/2pfbZiw). Plenty of systematic reviews show this poor behaviour is widespread (http://bit.ly/2qkjR2x, http://bit.ly/2ptKRIN). Prediction models are typically developed without statistical input, and peer review (again, often a statistician is not reviewing) is clearly failing to pick this up.

    BW
    Gary

  18. Spot on Gary.  The avoidance of statistical expertise is one of the most irritating aspects of this.  And researchers don't realize the downstream problems caused by their poor analysis, e.g., requiring more biomarkers to be measured because the information content of any one biomarker is minimized by dichotomization.  They also fail to realize that cutpoints must mathematically be functions of the other predictors to lead to optimum decisions.  I'm going to edit the text to point to my summary of problems of categorization on our Author Checklist.  I need to add your excellent paper to the list of references there too.
