Thursday, March 16, 2017

Subjective Ranking of Quality of Research by Subject Matter Area

Having been engaged in biomedical research for a few decades, and having watched the reproducibility of research as a whole, I've developed my own ranking of reliability/quality/usefulness of research across several subject matter areas.  This list is far from complete.  Let's start with a subjective list of what I perceive to be the areas in which published research is least likely to be both true and useful.  The list is in ascending order of quality, with the most problematic area listed first.  You'll notice that a vast number of areas are not listed because I have minimal experience with them.

Some excellent research is done in all subject areas.  This list is based on my perception of the proportion of publications in the indicated area that are rigorously scientific, reproducible, and useful.

Subject Areas With Least Reliable/Reproducible/Useful Research

  1. any area where there is no pre-specified statistical analysis plan and the analysis can change on the fly when initial results are disappointing
  2. behavioral psychology
  3. studies of corporations to find characteristics of "winners"; regression to the mean kicks in making predictions useless for changing your company
  4. animal experiments on fewer than 30 animals
  5. discovery genetics not making use of biology while doing large-scale variant/gene screening
  6. nutritional epidemiology
  7. electronic health record research reaching clinical conclusions without understanding confounding by indication and other limitations of data
  8. pre-post studies with no randomization
  9. non-nutritional epidemiology not having a fully pre-specified statistical analysis plan [few epidemiology papers use state-of-the-art statistical methods and have a sensitivity analysis related to unmeasured confounders]
  10. prediction studies based on dirty and inadequate data
  11. personalized medicine
  12. biomarkers
  13. observational treatment comparisons that do not qualify for the second list (below)
  14. small adaptive dose-finding cancer trials (3+3 etc.)

Subject Areas With Most Reliable/Reproducible/Useful Research

The most reliable and useful research areas are listed first.  All of the following are assumed to (1) have a prospective pre-specified statistical analysis plan and (2) use purposeful, prospective, quality-controlled data acquisition (yes, this applies to high-quality non-randomized observational research).
  1. randomized crossover studies
  2. multi-center randomized experiments
  3. single-center randomized experiments with non-overly-optimistic sample sizes
  4. adaptive randomized clinical trials with large sample sizes
  5. physics
  6. pharmaceutical industry research that is overseen by FDA
  7. cardiovascular research
  8. observational research [however only a very small minority of observational research projects have a prospective analysis plan and high enough data quality to qualify for this list]

Some Suggested Remedies

Peer review of research grants and manuscripts is done primarily by experts in the subject matter area under study.  Most journal editors and grant reviewers are not expert in biostatistics.  Every grant application and submitted manuscript should undergo rigorous methodologic peer review by methodologic experts such as biostatisticians and epidemiologists.  All data analyses should be driven by a prospective statistical analysis plan, and the entire self-contained data manipulation and analysis code should be submitted to journals so that reproducibility and adherence to the statistical analysis plan can be confirmed.  Readers should have access to the data in most cases and should be able to reproduce all study findings using the authors' code, as well as run their own analyses on the authors' data to check the robustness of the findings.

Medical journals are reluctant to (1) publish critical letters to the editor and (2) retract papers.  This has to change.

In academia, too much credit is still given to the quantity of publications and not to their quality and reproducibility.  This too must change.  The pharmaceutical industry has FDA to validate their research.  The NIH does not serve this role for academia.

Rochelle Tractenberg, Chair of the American Statistical Association Committee on Professional Ethics and a biostatistician at Georgetown University, said in a 2017-02-22 interview with The Australian that many questionable studies would not have been published had formal statistical reviews been done.  When she reviews a paper she starts with the premise that the statistical analysis was incorrectly executed.  She stated that "Bad statistics is bad science."

Wednesday, March 1, 2017

Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules

In this article I discussed the many advantages of probability estimation over classification.  Here I discuss a particular problem related to classification, namely the harm done by using improper accuracy scoring rules.  Accuracy scores are used to drive feature selection and parameter estimation, and to measure predictive performance of models derived using any optimization algorithm.  For this discussion let Y denote a no/yes false/true 0/1 event being predicted, with Y=0 denoting that the event did not occur and Y=1 that it did.

As discussed here and here, a proper accuracy scoring rule is a metric applied to probability forecasts.  It is a metric that is optimized when the forecasted probabilities are identical to the true outcome probabilities.  A continuous accuracy scoring rule is a metric that makes full use of the entire range of predicted probabilities and does not have a large jump because of an infinitesimal change in a predicted probability.  The two most commonly used proper scoring rules are the quadratic error measure, i.e., mean squared error or Brier score, and the logarithmic scoring rule, which is a linear translation of the log likelihood for a binary outcome model (Bernoulli trials).  The logarithmic rule gives more credit to extreme predictions that are "right", but a single prediction of 1.0 when Y=0 or of 0.0 when Y=1 results in an infinitely bad score no matter how accurate all the other predictions were.  Because of the optimality properties of maximum likelihood estimation, the logarithmic scoring rule is in a sense the gold standard, but we more commonly use the Brier score because of its easier interpretation and its ready decomposition into various metrics measuring calibration-in-the-small, calibration-in-the-large, and discrimination.
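As a concrete illustration, here is a minimal R sketch of the two rules (p and y are assumed to be vectors of predicted probabilities and 0/1 outcomes; lower values are better for both functions as written):

  # Quadratic (Brier) score: mean squared error of the probability forecasts
  brier <- function(p, y) mean((p - y)^2)
  # Logarithmic score: negative average log likelihood of a Bernoulli model
  logscore <- function(p, y) -mean(y * log(p) + (1 - y) * log(1 - p))

  p <- c(0.9, 0.2, 0.6); y <- c(1, 0, 0)
  brier(p, y)                            # about 0.137
  logscore(p, y)                         # about 0.41
  logscore(c(1, 0.2, 0.6), c(0, 0, 0))   # Inf: one certain-but-wrong forecast ruins the score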

Classification accuracy is a discontinuous scoring rule.  It implicitly or explicitly uses thresholds for probabilities, and moving a prediction from 0.0001 below the threshold to 0.0001 above the threshold results in a full accuracy change of 1/N.  Classification accuracy is also an improper scoring rule.  It can be optimized by choosing the wrong predictive features and giving them the wrong weights.  This is best shown by a simple example that appears in Biostatistics for Biomedical Research Chapter 18 in which 400 simulated subjects have an overall fraction of Y=1 of 0.57.  Consider the use of binary logistic regression to predict the probability that Y=1 given a certain set of covariates, and classify a subject as having Y=1 if the predicted probability exceeds 0.5.  We simulate values of age and sex and simulate binary values of Y according to a logistic model with strong age and sex effects; the true log odds of Y=1 are (age-50)*.04 + .75*(sex=m).  Fit four binary logistic models in order: a model containing only age as a predictor, one containing only sex, one containing both age and sex, and a model containing no predictors (i.e., it has only an intercept parameter).  The results are in the following table:

Both the gold standard likelihood ratio chi-square statistic and the improper pure discrimination c-index (AUROC) indicate that both age and sex are important predictors of Y.  Yet the highest proportion correct (classification accuracy) occurs when sex is ignored.  According to the improper score, the sex variable has negative information.  It is telling that a model that predicted Y=1 for every observation, i.e., one that completely ignores age and sex and has only an intercept, would be 0.573 accurate, only slightly above the accuracy of using sex alone to predict Y.
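A minimal R sketch of a simulation in this spirit follows (this is not the original code behind the table; the seed and data generation details are assumptions, so the numbers will differ slightly):

  set.seed(1)
  n   <- 400
  age <- rnorm(n, 50, 12)
  sex <- sample(c('f', 'm'), n, replace = TRUE)
  y   <- rbinom(n, 1, plogis(0.04 * (age - 50) + 0.75 * (sex == 'm')))

  acc  <- function(fit) mean((fitted(fit) > 0.5) == y)   # classification accuracy at the 0.5 cutoff
  fits <- list(age       = glm(y ~ age,       family = binomial),
               sex       = glm(y ~ sex,       family = binomial),
               'age+sex' = glm(y ~ age + sex, family = binomial),
               intercept = glm(y ~ 1,         family = binomial))
  sapply(fits, acc)                                       # proportion "classified correctly"
  sapply(fits, function(f) f$null.deviance - f$deviance)  # likelihood ratio chi-square vs. null model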

The use of a discontinuous improper accuracy score such as proportion "classified" "correctly" has led to countless misleading findings in bioinformatics, machine learning, and data science.  In some extreme cases the machine learning expert failed to note that their claimed predictive accuracy was less than that achieved by ignoring the data, e.g., by just predicting Y=1 when the observed prevalence of Y=1 was 0.98 whereas their extensive data analysis yielded an accuracy of 0.97.  As discussed here, fans of "classifiers" sometimes subsample from observations in the most frequent outcome category (here Y=1) to get an artificial 50/50 balance of Y=0 and Y=1 when developing their classifier.  Fans of such deficient notions of accuracy fail to realize that their classifier will not apply to a population with a much different prevalence of Y=1 than 0.5.

Sensitivity and specificity are one-sided or conditional versions of classification accuracy.  As such they are also discontinuous improper accuracy scores, and optimizing them will result in the wrong model.

Regression Modeling Strategies Chapter 10 goes into more problems with classification accuracy, and discusses many measures of the quality of probability estimates.  The text contains suggested measures to emphasize such as Brier score, pseudo R-squared (a simple function of the logarithmic scoring rule), c-index, and especially smooth nonparametric calibration plots to demonstrate absolute accuracy of estimated probabilities.
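For reference, a minimal sketch of how such quantities might be obtained with the rms package (assuming a data frame d with binary outcome y, continuous age, and sex; the number of bootstrap repetitions is arbitrary):

  require(rms)
  f <- lrm(y ~ rcs(age, 4) + sex, data = d, x = TRUE, y = TRUE)
  f                              # printed output includes the LR chi-square, c-index, R2, and Brier score
  cal <- calibrate(f, B = 300)   # bootstrap overfitting-corrected calibration curve
  plot(cal)                      # smooth nonparametric calibration plot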


Sunday, February 19, 2017

My Journey From Frequentist to Bayesian Statistics

If I had been taught Bayesian modeling before being taught the frequentist paradigm, I'm sure I would have always been a Bayesian.  I started becoming a Bayesian around 1994 because of an influential paper by David Spiegelhalter and because I worked in the same building at Duke University as Don Berry.  Two other things strongly contributed to my thinking: difficulties explaining p-values and confidence intervals (especially the latter) to clinical researchers, and the difficulty of learning group sequential methods in clinical trials.  When I talked with Don and learned about the flexibility of the Bayesian approach to clinical trials, and saw Spiegelhalter's embrace of Bayesian methods because of their problem-solving abilities, I was hooked.  [Note: I've heard Don say that he became a Bayesian after multiple attempts to teach statistics students the exact definition of a confidence interval.  He decided the concept was defective.]

At the time I was working on clinical trials at Duke and started to see that multiplicity adjustments were arbitrary.  This started with a clinical trial coordinated by Duke in which low dose and high dose of a new drug were to be compared to placebo, using an alpha cutoff of 0.03 for each comparison to adjust for multiplicity.  The comparison of high dose with placebo resulted in a p-value of 0.04 and the trial was labeled completely "negative" which seemed problematic to me. [Note: the p-value was two-sided and thus didn't give any special "credit" for the treatment effect coming out in the right direction.]

I began to see that the hypothesis testing framework wasn't always the best approach to science, and that in biomedical research the typical hypothesis was an artificial construct designed to placate a reviewer who believed that an NIH grant's specific aims must include null hypotheses.  I saw the contortions that investigators went through to achieve this, came to see that questions are more relevant than hypotheses, and that estimation is even more important than questions.  With Bayes, estimation is emphasized.  I very much like Bayesian modeling instead of hypothesis testing.  I saw that a large number of clinical trials were incorrectly interpreted when p>0.05 because the investigators involved failed to realize that a p-value can only provide evidence against a hypothesis.  Investigators are motivated by "we spent a lot of time and money and must have gained something from this experiment."  The classic "absence of evidence is not evidence of absence" error results, whereas with Bayes it is easy to estimate the probability of similarity of two treatments.  Investigators will be surprised to learn how little we have learned from clinical trials that are not huge when p>0.05.

I listened to many discussions of famous clinical trialists debating what should be the primary endpoint in a trial, the co-primary endpoint, the secondary endpoints, co-secondary endpoints, etc.  This was all because of their paying attention to alpha-spending.  I realized this was all a game.

I came to not believe in the possibility of infinitely many repetitions of identical experiments, as the frequentist paradigm requires one to envision.  When I looked more thoroughly into the multiplicity problem and sequential testing, and looked at Bayesian solutions, I became more of a believer in the approach.  I learned that posterior probabilities have a simple interpretation independent of the stopping rule and frequency of data looks.  I got involved in working with the FDA and then consulting with pharmaceutical companies, and started observing how multiple clinical endpoints were handled.  I saw a case in which a company was seeking a superiority claim for a new drug, and if there was insufficient evidence for such a claim, they wanted to seek a non-inferiority claim on another endpoint.  They developed a closed testing procedure that, when diagrammed, truly looked like a train wreck.  I felt there had to be a better approach, so I sought to see how far posterior probabilities could be pushed.  I found that with MCMC simulation of Bayesian posterior draws I could quite simply compute probabilities such as P(any efficacy), P(efficacy more than trivial), P(non-inferiority), P(efficacy on endpoint A and on either endpoint B or endpoint C), and P(benefit on more than 2 of 5 endpoints).  I realized that frequentist multiplicity problems come from the chances you give data to be more extreme, not from the chances you give assertions to be true.
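To make the last point concrete, here is a minimal R sketch of how such probabilities reduce to simple proportions of posterior draws (the draws below are simulated stand-ins with assumed means and SDs, not output from a real trial model):

  set.seed(2)
  effA <- rnorm(4000, 0.30, 0.15)   # hypothetical posterior draws of efficacy on endpoint A
  effB <- rnorm(4000, 0.10, 0.15)   # endpoint B
  effC <- rnorm(4000, 0.05, 0.15)   # endpoint C
  mean(effA > 0)                            # P(any efficacy on A)
  mean(effA > 0.2)                          # P(efficacy on A more than trivial)
  mean(effA > -0.1)                         # P(non-inferiority on A, margin 0.1)
  mean(effA > 0 & (effB > 0 | effC > 0))    # P(efficacy on A and on either B or C)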

I enjoy the fact that posterior probabilities define their own error probabilities, and that they count not only inefficacy but also harm.  If P(efficacy)=0.97, P(no effect or harm)=0.03.  This is the "regulator's regret", and type I error is not the error of major interest (is it really even an 'error'?).  One minus a p-value is P(data in general are less extreme than that observed if H0 is true) which is the probability of an event I'm not that interested in.

The extreme amount of time I spent analyzing data led me to understand other problems with the frequentist approach.  Parameters are either in a model or not in a model.  We test for interactions with treatment and hope that the p-value is not between 0.02 and 0.2.  We either include the interactions or exclude them, and the power for the interaction test is modest.  Bayesians have a prior for the differential treatment effect and can easily have interactions "half in" the model.  Dichotomous irrevocable decisions are at the heart of many of the statistical modeling problems we have today.  I really like penalized maximum likelihood estimation (which is really empirical Bayes), but once we have a penalized model all of our frequentist inferential framework fails us.  No one can interpret a confidence interval for a biased (shrunken; penalized) estimate.  On the other hand, the Bayesian posterior probability density function, after shrinkage is accomplished using skeptical priors, is just as easy to interpret as it would be had the prior been flat.  For another example, consider a categorical predictor variable that we hope is predicting in an ordinal (monotonic) fashion.  We tend to either model it as ordinal or as completely unordered (using k-1 indicator variables for k categories).  A Bayesian would say "let's use a prior that favors monotonicity but allows larger sample sizes to override this belief."

Now that adaptive and sequential experiments are becoming more popular, and a formal mechanism is needed to use data from one experiment to inform a later experiment (a good example being the use of adult clinical trial data to inform clinical trials on children when it is difficult to enroll a sufficient number of children for the child data to stand on their own), Bayes is needed more than ever.  It took me a while to realize something that is quite profound: A Bayesian solution to a simple problem (e.g., 2-group comparison of means) can be embedded into a complex design (e.g., adaptive clinical trial) without modification.  Frequentist solutions require highly complex modifications to work in the adaptive trial setting.

I met likelihoodist Jeffrey Blume in 2008 and started to like the likelihood approach.  It is more Bayesian than frequentist.  I plan to learn more about this paradigm. 

Several readers have asked me how I could believe all this and publish a frequentist-based book such as Regression Modeling Strategies.  There are two primary reasons.  First, I started writing the book before I knew much about Bayes.  Second, I performed a lot of simulation studies that showed that purely empirical model-building had a low chance of capturing clinical phenomena correctly and of validating on new datasets.  I worked extensively with cardiologists such as Rob Califf, Dan Mark, Mark Hlatky, David Prior, and Phil Harris who gave me the ideas for injecting clinical knowledge into model specification.  From that experience I wrote Regression Modeling Strategies in the most Bayesian way I could without actually using specific Bayesian methods.  I did this by emphasizing subject-matter-guided model specification.  The section in the book about specification of interaction terms is perhaps the best example.  When I teach the full-semester version of my course I interject Bayesian counterparts to many of the techniques covered.

There are challenges in moving more to a Bayesian approach.  The ones I encounter most frequently are:
  1. Teaching clinical trialists to embrace Bayes when they already do in spirit but not operationally.  Unlearning things is much more difficult than learning things.
  2. How to work with sponsors, regulators, and NIH principal investigators to specify the (usually skeptical) prior up front, and to specify the amount of applicability assumed for previous data.
  3. What is a Bayesian version of the multiple degree of freedom "chunk test"?  Partitioning sums of squares or the log likelihood into components, e.g., combined test of interaction and combined test of nonlinearities, is very easy and natural in the frequentist setting.
  4. How do we specify priors for complex entities such as the degree of monotonicity of the effect of a continuous predictor in a regression model?  The Bayesian approach to this will ultimately be more satisfying, but operationalizing this is not easy.
With new tools such as Stan and well-written, accessible books such as Kruschke's, it's getting easier to be Bayesian every day.  The R brms package, which uses Stan, makes a large class of regression models even more accessible.
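As a flavor of how little code is involved, here is a minimal sketch of a Bayesian binary logistic model with a skeptical prior in brms (the data frame d, outcome y, treatment variable tx, and the particular prior are all assumptions for illustration):

  require(brms)
  fit <- brm(y ~ tx, data = d, family = bernoulli(),
             prior = set_prior("normal(0, 0.5)", class = "b"))   # skeptical prior on the log odds ratio
  summary(fit)
  draws <- as_draws_df(fit)
  mean(draws$b_tx < 0)   # P(treatment lowers the odds of the outcome); coefficient name depends on how tx is coded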



Sunday, February 5, 2017

Interactive Statistical Graphics: Showing More By Showing Less

Version 4 of the R Hmisc package and version 5 of the R rms package interface with interactive plotly graphics; plotly is in turn an interface to the D3 JavaScript graphics library.  This allows various results of statistical analyses to be viewed interactively, with pre-programmed drill-down information.  More examples will be added here.  We start with a video showing a new way to display survival curves.

Note that plotly graphics are best used with RStudio Rmarkdown html notebooks, and are distributed to reviewers as self-contained (but somewhat large) html files. Printing is discouraged, but possible, using snapshots of the interactive graphics.

Concerning the second bullet point below, boxplots have a high ink:information ratio and hide bimodality and other data features.  Many statisticians prefer to use dot plots and violin plots.  I liked those methods for a while, then started to have trouble with the choice of a smoothing bandwidth in violin plots, and found that dot plots do not scale well to very large datasets, whereas spike histograms are useful for all sample sizes.  Users of dot charts have to have a dot stand for more than one observation if N is large, and I found the process too arbitrary.  For spike histograms I typically use 100 or 200 bins.  When the number of distinct data values is below the specified number of bins, I just do a frequency tabulation for all distinct data values, rounding only when two of the values are very close to each other.  A spike histogram approximately reduces to a rug plot when there are no ties in the data, and I very much like rug plots.

  • rms survplotp video: plotting survival curves
  • Hmisc histboxp interactive html example: spike histograms plus selected quantiles, mean, and Gini's mean difference - replacement for boxplots - show all the data!  Note bimodal distributions and zero blood pressure values for patients having a cardiac arrest.
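For reference, calls along the following lines produce these displays (a sketch only; the data frame d and variable names are assumptions, and argument details may differ across package versions):

  require(rms); require(Hmisc)
  # Interactive survival curves with drill-down (rms survplotp)
  f <- npsurv(Surv(futime, death) ~ treatment, data = d)
  survplotp(f)
  # Spike histograms with selected quantiles, mean, and Gini's mean difference (Hmisc histboxp)
  histboxp(x = d$sbp, group = d$treatment)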

A Litany of Problems With p-values

In my opinion, null hypothesis testing and p-values have done significant harm to science.  The purpose of this note is to catalog the many problems caused by p-values.  As readers post new problems in their comments, more will be incorporated into the list, so this is a work in progress.

The American Statistical Association has done a great service by issuing its Statement on Statistical Significance and P-values.  Now it's time to act.  To create the needed motivation to change, we need to fully describe the depth of the problem.

It is important to note that no statistical paradigm is perfect.  Statisticians should choose paradigms that solve the greatest number of real problems and have the fewest number of faults.  This is why I believe that the Bayesian and likelihood paradigms should replace frequentist inference.

Consider an assertion such as "the coin is fair", "treatment A yields the same blood pressure as treatment B", "B yields lower blood pressure than A", or "B lowers blood pressure by at least 5mmHg more than A."  Consider also a compound assertion such as "A lowers blood pressure by at least 3mmHg and does not raise the risk of stroke."

A. Problems With Conditioning

  1. p-values condition on what is unknown (the assertion of interest; H0) and do not condition on what is known (the data).
  2. This conditioning does not respect the flow of time and information; p-values are backward probabilities.

B. Indirectness

  1. Because of A above, p-values provide only indirect evidence and are problematic as evidence metrics.  They are sometimes monotonically related to the evidence we need (e.g., when the prior distribution is flat), but they are not properly calibrated for decision making.
  2. p-values are used to bring indirect evidence against an assertion but cannot bring evidence in favor of the assertion.  
  3. As detailed here, the idea of proof by contradiction is a stretch when working with probabilities, so trying to quantify evidence for an assertion by bringing evidence against its complement is on shaky ground.
  4. Because of A, p-values are difficult to interpret and very few non-statisticians get it right.  The best article on misinterpretations I've found is here.

C. Problem Defining the Event Whose Probability is Computed

  1. In the continuous data case, the probability of getting a result as extreme as that observed with our sample is zero, so the p-value is the probability of getting a result more extreme than that observed.  Is this the correct point of reference?
  2. How does "more extreme" get defined if there are sequential analyses and multiple endpoints or subgroups?  For sequential analyses, do we consider only planned analyses, or also analyses that were intended to be run even if they were not?

D. Problems Actually Computing p-values

  1. In some discrete data cases, e.g., comparing two proportions, there is tremendous disagreement among statisticians about how p-values should be calculated.  In a famous 2x2 table from an ECMO adaptive clinical trial, 13 p-values have been computed from the same data, ranging from 0.001 to 1.0.  And many statisticians do not realize that Fisher's so-called "exact" test is not very accurate in many cases.
  2. Outside of binomial, exponential, and normal (with equal variance) and a few other cases, p-values are actually very difficult to compute exactly, and many p-values computed by statisticians are of unknown accuracy (e.g., in logistic regression and mixed effects models). The more non-quadratic the log likelihood function the more problematic this becomes in many cases. 
  3. One can compute (sometimes requiring simulation) the type-I error of many multi-stage procedures, but actually computing a p-value that can be taken out of context can be quite difficult and sometimes impossible.  One example: one can control the false discovery probability (usually incorrectly referred to as a rate), and ad hoc modifications of nominal p-values have been proposed, but these are not necessarily in line with the real definition of a p-value.

E. The Multiplicity Mess

  1. Frequentist statistics does not have a recipe or blueprint leading to a unique solution for multiplicity problems, so when many p-values are computed, the way they are penalized for multiple comparisons results in endless arguments.  A Bonferroni multiplicity adjustment is consistent with a Bayesian prior distribution specifying that the probability that all null hypotheses are true is a constant no matter how many hypotheses are tested.  By contrast, Bayesian inference reflects the facts that P(A ∪ B) ≥ max(P(A), P(B)) and P(A ∩ B) ≤ min(P(A), P(B)) when A and B are assertions about a true effect.
  2. There remains controversy over the choice of 1-tailed vs. 2-tailed tests.  The 2-tailed test can be thought of as a multiplicity penalty for being potentially excited about either a positive effect or a negative effect of a treatment.  But few researchers want to bring evidence that a treatment harms patients; a pharmaceutical company would not seek a licensing claim of harm.  So when one computes the probability of obtaining an effect larger than that observed if there is no true effect, why do we too often ignore the sign of the effect and compute the (2-tailed) p-value?
  3. Because it is a very difficult problem to compute p-values when the assertion is compound, researchers using frequentist methods do not attempt to provide simultaneous evidence regarding such assertions and instead rely on ad hoc multiplicity adjustments.
  4. Because of A1, statistical testing with multiple looks at the data, e.g., in sequential data monitoring, is ad hoc and complex.  Scientific flexibility is discouraged.  The p-value for an early data look must be adjusted for future looks.  The p-value at the final data look must be adjusted for the earlier inconsequential looks.  Unblinded sample size re-estimation is another case in point.  If the sample size is expanded to gain more information, there is a multiplicity problem and some of the methods commonly used to analyze the final data effectively discount the first wave of subjects.  How can that make any scientific sense?
  5. Most practitioners of frequentist inference do not understand that multiplicity comes from chances you give data to be extreme, not from chances you give true effects to be present.

F. Problems With Non-Trivial Hypotheses

  1. It is difficult to test non-point hypotheses such as "drug A is similar to drug B".
  2. There is no straightforward way to test compound hypotheses coming from logical unions and intersections. 

G. Inability to Incorporate Context and Other Information

  1. Because extraordinary claims require extraordinary evidence, there is a serious problem with the p-value's inability to incorporate context or prior evidence.  A Bayesian analysis of the existence of ESP would no doubt start with a very skeptical prior that would require extraordinary data to overcome, but the bar for getting a "significant" p-value is fairly low.  Frequentist inference has a greater risk of getting the direction of an effect wrong (see here for more).
  2. p-values are unable to incorporate outside evidence.  As a converse to 1, strong prior beliefs cannot be handled by p-values either, and in some cases this results in a lack of progress.  Nate Silver in The Signal and the Noise beautifully details how the conclusion that cigarette smoking causes lung cancer was greatly delayed (with a large negative effect on public health) because scientists (especially Fisher) were caught up in the frequentist way of thinking, dictating that only randomized trial data would yield a valid p-value for testing cause and effect.  Even a Bayesian prior that is very strongly against the belief that smoking is causal would be obliterated by the incredibly strong observational data.  Only by incorporating prior skepticism could one make a strong conclusion with non-randomized data in the smoking-lung cancer debate.
  3. p-values require subjective input from the producer of the data rather than from the consumer of the data.

H. Problems Interpreting and Acting on "Positive" Findings

  1. With a large enough sample, a trivial effect can cause an impressively small p-value (statistical significance ≠ clinical significance); see the small sketch after this list.
  2. Statisticians and subject matter researchers (especially the latter) sought a "seal of approval" for their research by naming a cutoff on what should be considered "statistically significant", and a cutoff of p=0.05 is most commonly used.  Any time there is a threshold there is a motive to game the system, and gaming (p-hacking) is rampant.  Hypotheses are exchanged if the original H0 is not rejected, subjects are excluded, and because statistical analysis plans are not pre-specified as required in clinical trials and regulatory activities, researchers and their all-too-accommodating statisticians play with the analysis until something "significant" emerges.
  3. When the p-value is small, researchers act as though the point estimate of the effect is a population value.
  4. When the p-value is small, researchers believe that their conceptual framework has been validated.  
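As a quick demonstration of point 1, a trivial effect plus an enormous sample yields a tiny p-value (an illustrative R sketch; the 0.01 SD effect size and the sample size are arbitrary):

  set.seed(3)
  x <- rnorm(500000, 0, 1)       # control group
  y <- rnorm(500000, 0.01, 1)    # "treated" group with a clinically negligible 0.01 SD shift
  t.test(x, y)$p.value           # typically far below 0.001 despite the trivial effect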

I. Problems Interpreting and Acting on "Negative" Findings

  1. Because of B2, large p-values are uninformative and do not assist the researcher in decision making (Fisher said that a large p-value means "get more data").

Friday, January 27, 2017

Randomized Clinical Trials Do Not Mimic Clinical Practice, Thank Goodness

Randomized clinical trials (RCT) have long been held as the gold standard for generating evidence about the effectiveness of medical and surgical treatments, and for good reason.  But I commonly hear clinicians lament that the results of RCTs are not generalizable to medical practice, primarily for two reasons:
  1. Patients in clinical practice are different from those enrolled in RCTs
  2. Drug adherence in clinical practice is likely to be lower than that achieved in RCTs, resulting in lower efficacy.
Point 2 is hard to debate because RCTs are run under protocol and research personnel are watching and asking about patients' adherence.  But point 1 is a misplaced worry in the majority of trials.  The explanation requires getting to the heart of what RCTs are really intended to do: provide evidence for relative treatment effectiveness.  There are some trials that provide evidence for both relative and absolute effectiveness.   This is especially true when the efficacy measure employed is absolute as in measuring blood pressure reduction due to a new treatment.  But many trials use binary or time-to-event endpoints and the resulting efficacy measure is on a relative scale such as the odds ratio or hazard ratio.

RCTs of even drastically different patients can provide estimates of relative treatment benefit on odds or hazard ratio scales that are highly transportable.  This is most readily seen in subgroup analyses provided by the trials themselves - so-called forest plots that demonstrate remarkable constancy of relative treatment benefit.  When an effect ratio is applied to a population with a much different risk profile, that relative effect can still fully apply.  Only the absolute treatment benefit is likely to change, and it is easy to estimate the absolute benefit (e.g., risk difference) for a patient given the relative benefit and the absolute baseline risk for that patient.  This is covered in detail in Biostatistics for Biomedical Research, Section 13.6.
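For a binary outcome the conversion is a one-liner (a sketch using illustrative numbers; the baseline risk of 0.15 and odds ratio of 0.8 are assumptions):

  # Risk under treatment given a patient's baseline risk p0 and an odds ratio from an RCT
  risk_on_treatment <- function(p0, or) plogis(qlogis(p0) + log(or))
  p0 <- 0.15; or <- 0.8
  p1 <- risk_on_treatment(p0, or)   # about 0.124
  p0 - p1                           # absolute risk reduction, about 0.026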

Clinical practice provides anecdotal evidence that biases clinicians.  What a clinician sees in her practice is patient i on treatment A and patient j on treatment B.  She may remember how patient i fared in comparison to patient j, not appreciate confounding by indication, and suppose this provides a valid estimate of the difference in effectiveness of treatment A vs. B.  But the real therapeutic question is how the outcome of a patient, were she given treatment A, would compare to her outcome were she given treatment B.  The gold standard design is thus the randomized crossover design, when the treatment is short acting.  Stephen Senn eloquently writes about how a 6-period 2-treatment crossover study can even do what proponents of personalized medicine mistakenly think they can do with a parallel-group randomized trial: estimate treatment effectiveness for individual patients.

For clinical practice to provide the evidence really needed, the clinician would have to see patients and assign treatments using one of the top four approaches listed in the hierarchy of evidence below.  Entries are ordered from the strongest evidence requiring the fewest assumptions to the weakest evidence.  Note that crossover studies, when feasible, even surpass randomized studies of matched identical twins in the quality and relevance of the information they provide.

Let Pi denote patient i and let the treatments be denoted by A and B, so that P2B represents patient 2 on treatment B.  Comparisons marked "on avg" are valid only for the average outcome over a sample of patients from which the patient was selected.  HTE is heterogeneity of treatment effect.


Design                                    Patients Compared
6-period crossover                        P1A vs P1B (directly measures HTE)
2-period crossover                        P1A vs P1B
RCT in identical twins                    P1A vs P1B
Parallel-group RCT                        P1A vs P2B, P1=P2 on avg
Observational, good artificial control    P1A vs P2B, P1=P2 hopefully on avg
Observational, poor artificial control    P1A vs P2B, P1≠P2 on avg
Real-world physician practice             P1A vs P2B

The best experimental designs yield the best evidence a clinician needs to answer the "what if" therapeutic question for the one patient in front of her.

Much more needs to be said about how to handle treatment adherence and what should be the target adherence in an RCT, but overall it is a good thing that RCTs do not mimic clinical practice.  We are entering a new era of pragmatic clinical trials.  Pragmatic trials are worthy of in-depth discussion, but it is not a stretch to say that the chief advantage of pragmatic trials is not that they provide results that are more relevant to clinical practice but that they are cheaper and faster than traditional randomized trials.



Wednesday, January 25, 2017

Clinicians' Misunderstanding of Probabilities Makes Them Like Backwards Probabilities Such As Sensitivity, Specificity, and Type I Error

Imagine watching a baseball game, seeing the batter get a hit, and hearing the announcer say "The chance that the batter is left handed is now 0.2!"  No one would care.  Baseball fans are interested in the chance that a batter will get a hit conditional on his being right handed (handedness being already known to the fan), the handedness of the pitcher, etc.  Unless one is an archaeologist or medical examiner, the interest is in forward probabilities conditional on current and past states.  We are interested in the probability of the unknown given the known, and the probability of a future event given past and present conditions and events.

Clinicians are people trained in the science and practice of medicine, and most of them are very good at it.  They are also very good at many aspects of research.  But they are generally not taught probability, and this can limit their research skills.  Many excellent clinicians even let their limitations in understanding probability make them believe that their clinical decision making is worse than it actually is.  I have taught many clinicians who say "I need a hard and fast rule so I know how to diagnose or treat patients.  I need a hard cutoff on blood pressure, HbA1c, etc. so that I know what to do, and the fact that I either treat or do not treat the patient means that I don't want to consider a probability of disease but desire a simple classification rule."  This makes the clinician try to influence the statistician to use inefficient, arbitrary methods such as categorization, stratification, and matching.

In reality, clinicians do not act that way when treating patients.  They are smart enough to know that if a patient has cholesterol just over someone's arbitrary threshold they may not start statin therapy right away if the patient has no other risk factors (e.g., smoking) going against him.  They know that sometimes you start a patient on a lower dose and see how she responds, or start one drug and try it for a while and then switch drugs if the efficacy is unacceptable or there is a significant side effect.

So I emphasize the need to understand probabilities when I'm teaching clinicians.  A probability is a self-contained summary of the current information, except for the patient's risk aversion and other utilities.  Clinicians need to be comfortable with a probability of 0.5 meaning "we don't know much", rather than requesting a classification of disease/normal that does nothing but cover up the problem.  A classification does not account for gray zones or patient and physician utility functions.

Even physicians who understand the meaning of a probability often do not understand conditioning.  Conditioning is all important, and conditioning on different things massively changes the meaning of the probabilities being computed.  Every physician I've known has been taught probabilistic medical diagnosis by first learning about sensitivity (sens) and specificity (spec).  These are probabilities that are in backward time- and information-flow order.  How did this happen?  Sensitivity, specificity, and receiver operating characteristic curves were developed for radar and radio research in the military.  It was important to receive radio signals from distant aircraft, and to detect an incoming aircraft on radar.  The ability to detect something that is really there is definitely important.  In the 1950s, virologists appropriated these concepts to measure the performance of viral cultures.  Virus needs to be detected when it's present, and not detected when it's not.  Sensitivity is the probability of detecting a condition when it is truly present, and specificity is the probability of not detecting it when it is truly absent.  One can see how these probabilities would be useful outside of virology and bacteriology when the samples are retrospective, as in case-control studies.  But I believe that clinicians and researchers would be better off if backward probabilities were not taught or were mentioned only to illustrate how not to think about a problem.

But the way medical students are educated, they assume that sens and spec are what you first consider in a prospective cohort of patients!  This gives the professor the opportunity to teach Bayes' rule, and it requires the use of a supposedly unconditional probability known as prevalence, which is actually not very well defined.  The student plugs everything into Bayes' rule and fails to notice that several quantities cancel out.  The result is the following: the proportion of patients with a positive test who have disease, and the proportion with a negative test who have disease.  These are trivially calculated from the cohort data without knowing anything about sens, spec, and Bayes.  This way of thinking harms the student's understanding for years to come and influences those who later engage in clinical and pharmaceutical research to believe that type I error and p-values are directly useful.
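A tiny worked example in R shows the cancellation (the 2x2 cohort counts are hypothetical):

  #            disease   no disease
  # test +        90          30
  # test -        10         270
  a <- 90; b <- 30; c <- 10; d <- 270
  sens <- a / (a + c); spec <- d / (b + d); prev <- (a + c) / (a + b + c + d)
  sens * prev / (sens * prev + (1 - spec) * (1 - prev))   # Bayes' rule: P(disease | test +) = 0.75
  a / (a + b)                                             # direct forward probability from the cohort: also 0.75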

The situation in medical diagnosis gets worse when referral bias (also called workup bias) is present.  When certain types of patients do not get a final diagnosis, sens and spec are biased.  For example, younger women with a negative test may not get the painful procedure that yields the final diagnosis.  There are formulas that must be used to correct sens and spec.  But wait!  When Bayes' rule is used to obtain the probability of disease we needed in the first place, these corrections completely cancel out when the usual correction methods are used!  Using forward probabilities in the first place means that one just conditions on age, sex, and result of the initial diagnostic test and no special methods other than (sometimes) logistic regression are required.

There is an analogy to statistical testing.  p-values and type I error are affected by sequential testing and a host of other factors, but forward-time probabilities (Bayesian posterior probabilities) are not.  Posterior probabilities condition on what is known and do not have to imagine alternate paths to getting to what is known (as sens and spec must when workup bias exists).  p-values and type I errors are backward-information-flow measures, and clinical researchers and regulators come to believe that type I error is the error of interest.  They also very frequently misinterpret p-values.  The p-value is one minus spec, and power is sens.  The posterior probability is exactly analogous to the probability of disease.

Sens and spec are so pervasive in medicine, bioinformatics, and biomarker research that we don't question how silly they would be in other contexts.  Do we dichotomize a response variable so that we can compute the probability that a patient is on treatment B given a "positive" response?  On the contrary, we want to know the full continuous distribution of the response given the assigned treatment.  Again, this represents forward probabilities.