This article discusses issues related to alpha spending, effect sizes used in power calculations, multiple endpoints in RCTs, and endpoint labeling. Changes in endpoint priority are also addressed. Included in the discussion is how Bayesian probabilities more naturally allow one to answer multiple questions without all-too-arbitrary designations of endpoints as “primary” and “secondary”. And we should not quit trying to learn.
What are the major elements of learning from data that should inform the research process? How can we keep statistical analysis from giving us false confidence? Does a Bayesian approach result in more honest answers to research questions? Is learning inherently subjective anyway, so that we should stop criticizing Bayesians’ subjectivity? How important, and how feasible, is pre-specification? When should replication be required? These and other questions are discussed.
This article gives examples of the information gained by using ordinal rather than binary response variables. This is done by showing that, for the same sample size and power, smaller effects can be detected.
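This contrast can be sketched with a small Monte Carlo simulation, not taken from the article: the five-category outcome, the uniform control-arm probabilities, the proportional-odds shift of 0.8 on the log odds scale, and the dichotomization at category 4 are all illustrative assumptions. The sketch compares how often an ordinal analysis (Wilcoxon rank-sum) versus a dichotomized analysis (2×2 chi-square) detects the same underlying treatment effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)

def po_shift(base_probs, log_or):
    """Shift ordinal category probabilities by a proportional-odds log odds ratio."""
    cum = np.cumsum(base_probs)[:-1]           # cumulative P(Y <= k) for k < K-1
    logit = np.log(cum / (1 - cum)) - log_or   # lowering P(Y <= k) moves mass up
    cum_new = np.concatenate([1 / (1 + np.exp(-logit)), [1.0]])
    return np.diff(np.concatenate([[0.0], cum_new]))

def simulate_power(n=100, log_or=0.8, nsim=500, cutoff=4, alpha=0.05):
    """Estimate power of an ordinal vs a dichotomized analysis of the same trial."""
    cats = np.arange(5)
    base = np.full(5, 0.2)                     # assumed control-arm probabilities
    shifted = po_shift(base, log_or)           # treatment-arm probabilities
    hits_ord = hits_bin = 0
    for _ in range(nsim):
        a = rng.choice(cats, n, p=base)
        b = rng.choice(cats, n, p=shifted)
        # Ordinal analysis: Wilcoxon / Mann-Whitney rank-sum test on all 5 levels
        p_ord = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
        # Binary analysis: dichotomize at `cutoff`, then a 2x2 chi-square test
        tab = [[np.sum(a >= cutoff), np.sum(a < cutoff)],
               [np.sum(b >= cutoff), np.sum(b < cutoff)]]
        p_bin = stats.chi2_contingency(tab)[1]
        hits_ord += p_ord < alpha
        hits_bin += p_bin < alpha
    return hits_ord / nsim, hits_bin / nsim
```

Under these assumptions the ordinal analysis rejects more often than the dichotomized one at the same sample size, because collapsing five outcome levels to two discards the ranking information the Wilcoxon test uses; equivalently, the ordinal design reaches the same power with fewer patients or a smaller true effect.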
Professor of Biostatistics
Vanderbilt University School of Medicine
Professor of Psychiatry and, by courtesy, of Medicine (Cardiovascular Medicine) and of Biomedical Data Science
Stanford University School of Medicine
Revised July 17, 2017

It is often said that randomized clinical trials (RCTs) are the gold standard for learning about therapeutic effectiveness. This is because treatment is assigned at random, so no variables, measured or unmeasured, will be systematically related to treatment assignment.
What clinicians learn from clinical practice, unless they routinely do n-of-one studies, is based on comparisons of unlikes. Then they criticize like-vs-like comparisons from randomized trials for not being generalizable. This is made worse by not understanding that clinical trials are designed to estimate relative efficacy, and relative efficacy is surprisingly transportable. Many clinicians do not even track what happens to their patients to be able to inform their future patients.
There are many principles involved in the theory and practice of statistics, but here are the ones that guide my practice the most.
- Use methods grounded in theory or extensive simulation
- Understand uncertainty
- Design experiments to maximize information
- Understand the measurements you are analyzing, and don’t hesitate to question how the underlying information was captured
- Be more interested in questions than in null hypotheses, and be more interested in estimation than in answering narrow questions
- Use all information in the data during analysis
- Use discovery and estimation procedures not likely to claim that noise is signal
- Strive for optimal quantification of evidence about effects
- Give decision makers the inputs (other than the utility function) that optimize decisions
- Present information in ways that are intuitive, maximize information content, and are correctly perceived
- Give the client what she needs, not what she wants
- Teach the client to want what she needs