Datamethods

datamethods.org is a discussion site where data methodologists meet subject-matter experts, including clinical trialists and clinical researchers. Its development is documented here. Datamethods is provided by the Department of Biostatistics, Vanderbilt University School of Medicine. I have written some short articles on the site, listed below.

Responder analysis: Loser x 4
Problems with NNT
Should we ignore covariate imbalance and stop presenting a stratified ‘table one’ for randomized trials?

Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements

Researchers have used contorted, inefficient, and arbitrary analyses to demonstrate added value in biomarkers, genes, and new lab measurements. Traditional statistical measures have always been up to the task, and they are more powerful and more flexible. It’s time to revisit them, and to add a few slight twists to make them more helpful.
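One traditional measure for quantifying added predictive value is the likelihood ratio χ² from comparing a model with and without the new measurement. The sketch below is illustrative only, not taken from the article: the data are simulated, the predictor names and coefficients are invented, and the logistic fit uses a hand-rolled Newton iteration so the example is self-contained.

```python
import numpy as np
from scipy import stats

def fit_logistic(X, y, iters=30):
    """Fit a logistic regression by Newton's method; return (coefficients, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)                               # observation weights
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)              # established predictor (hypothetical)
m = rng.normal(size=n)              # candidate new marker (hypothetical)
eta = -0.5 + 0.8 * x + 0.8 * m      # assumed true linear predictor
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

ones = np.ones(n)
_, ll_base = fit_logistic(np.column_stack([ones, x]), y)      # without the marker
_, ll_full = fit_logistic(np.column_stack([ones, x, m]), y)   # with the marker
lr = 2.0 * (ll_full - ll_base)          # likelihood ratio chi-square, 1 d.f.
p_value = stats.chi2.sf(lr, df=1)
```

The likelihood ratio χ² directly measures the information the marker adds, and unlike changes in a classification accuracy or an arbitrary reclassification index, it uses the full data and has a standard sampling distribution.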

Information Gain From Using Ordinal Instead of Binary Outcomes

This article gives examples of the information gained by using ordinal rather than binary response variables. This is done by showing that, for the same sample size and power, smaller effects can be detected.
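The power gain can be seen in a small simulation. This sketch is not from the article; the five-level outcome distribution, the proportional-odds ratio of 2, the cutpoint used for dichotomization, and the group sizes are all invented for illustration. It compares a Wilcoxon (Mann-Whitney) test on the ordinal outcome with a χ² test on the same outcome dichotomized.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_ctl = [0.30, 0.25, 0.20, 0.15, 0.10]          # control probabilities, levels 0-4 (assumed)
# Treatment probabilities under a proportional-odds shift with odds ratio 2 (precomputed)
p_trt = [0.1765, 0.2029, 0.2206, 0.2182, 0.1818]
n, sims = 100, 400
hits_ord = hits_bin = 0
for _ in range(sims):
    y0 = rng.choice(5, size=n, p=p_ctl)
    y1 = rng.choice(5, size=n, p=p_trt)
    # Ordinal analysis: Wilcoxon two-sample test on all five levels
    if stats.mannwhitneyu(y0, y1, alternative="two-sided").pvalue < 0.05:
        hits_ord += 1
    # Binary analysis: dichotomize at >= 3 and run a chi-square test
    tbl = [[np.sum(y0 >= 3), np.sum(y0 < 3)],
           [np.sum(y1 >= 3), np.sum(y1 < 3)]]
    _, p_bin, _, _ = stats.chi2_contingency(tbl)
    if p_bin < 0.05:
        hits_bin += 1

power_ord = hits_ord / sims
power_bin = hits_bin / sims
```

Under these assumed settings the ordinal analysis rejects noticeably more often than the dichotomized one at the same sample size, which is another way of saying it could detect a smaller effect at the same power.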

How Can Machine Learning be Reliable When the Sample is Adequate for Only One Feature?

It is easy to compute the sample size N1 needed to reliably estimate how one predictor relates to an outcome. It is next to impossible for a machine learning algorithm entertaining hundreds of features to yield reliable answers when the sample size is less than N1.
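As one concrete version of the N1 calculation (my choice of target, not necessarily the article's): to estimate a single correlation coefficient near zero so that a 95% confidence interval has half-width about 0.1, the Fisher z approximation gives a standard error of 1/sqrt(n - 3), so we solve 1.96/sqrt(n - 3) = 0.1.

```python
import math

# Sample size so that a 95% CI for a correlation near 0 has half-width ~0.1,
# using the Fisher z standard error 1/sqrt(n - 3): solve 1.96/sqrt(n - 3) = 0.1
n1 = math.ceil((1.96 / 0.1) ** 2 + 3)   # -> 388
```

So nearly 400 observations are needed just to pin down one feature's association with the outcome; an algorithm screening hundreds of features with fewer observations than that cannot hope to do better for each of them.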