Some of my personal philosophy of statistics can be summed up in the list below:
- Statistics needs to be fully integrated into research; experimental design is all-important
- Don't be afraid of using modern methods
- Preserve all the information in the data; avoid categorizing continuous variables and predicted values at all costs
- Don't assume that anything operates linearly
- Account for model uncertainty and avoid it when possible by using subject matter knowledge
- Use the bootstrap routinely
- Make the sample size a random variable when possible
- Use Bayesian methods whenever possible
- Use excellent graphics, liberally
- To be trustworthy, research must be reproducible
- All data manipulation and statistical analysis must be reproducible (one ramification being that I advise against the use of point-and-click software in most cases)
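As a small illustration of "use the bootstrap routinely," here is a minimal sketch of a nonparametric bootstrap percentile interval for a median, assuming NumPy; the skewed sample and the choice of statistic are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # a skewed sample

# Nonparametric bootstrap: resample with replacement, recompute the statistic
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])

# Percentile interval: no normality assumption on the sampling distribution
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"sample median = {np.median(data):.3f}")
print(f"95% bootstrap percentile CI = ({lo:.3f}, {hi:.3f})")
```

The same recipe applies to almost any statistic, which is what makes the bootstrap a routine tool rather than a special-purpose one.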
- Statistics has been, and continues to be, taught in a traditional way, leading statisticians to believe that our historical approach to estimation, prediction, and inference is good enough.
- Statisticians do not receive sufficient training in computer science and computational methods, too often leaving those areas to others who become so adept at handling vast quantities of data that they assume they can be self-sufficient in statistical analysis and see no need to involve statisticians. Conversely, many people who analyze data lack sufficient training in statistics.
- Subject-matter experts (e.g., clinical researchers and epidemiologists) try to avoid statistical complexity by "dumbing down" the problem through dichotomization, and statisticians, always trying to be helpful, fail to argue that dichotomizing continuous or ordinal variables is almost never an appropriate way to view or analyze data. Statisticians in general are insufficiently involved in measurement issues.
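The cost of dichotomization is easy to demonstrate. A minimal simulation, assuming NumPy, with an invented linear signal and a median split (the classic attenuation case):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)            # continuous predictor
y = 0.5 * x + rng.normal(size=n)  # outcome with a linear signal in x

# Correlation using all the information in x
r_continuous = np.corrcoef(x, y)[0, 1]

# Correlation after splitting x at its median ("high" vs. "low")
x_binary = (x > np.median(x)).astype(float)
r_dichotomized = np.corrcoef(x_binary, y)[0, 1]

print(f"continuous predictor:   r = {r_continuous:.3f}")
print(f"dichotomized predictor: r = {r_dichotomized:.3f}")
```

For a normal predictor split at the median, the correlation is attenuated by a factor of roughly sqrt(2/pi) ≈ 0.8, i.e., the split throws away information no downstream analysis can recover.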
Complacency in the field of statistics and in statistical education has resulted in
- reliance on large-sample theory so that inaccurate normal distribution-based tools can be used, as opposed to tailoring the analyses to data characteristics using the bootstrap and semiparametric models
- belief that null hypothesis significance testing ever answered the scientific question and that p-values are useful
- avoidance of the likelihood school of inference (relative likelihood, likelihood support intervals, likelihood ratios, etc.)
- avoidance of Bayesian methods (posterior distributions, credible intervals, predictive distributions, etc.)
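To make the last point concrete, here is a minimal sketch of a Bayesian posterior, credible interval, and direct probability statement for the simplest conjugate case, assuming NumPy; the prior and the data (14 successes in 20 trials) are invented for illustration:

```python
import numpy as np

# Beta(1, 1) (uniform) prior on a success probability theta;
# suppose 14 successes are observed in 20 trials (illustrative numbers)
successes, trials = 14, 20
a_post = 1 + successes            # Beta posterior parameters (conjugacy)
b_post = 1 + trials - successes

rng = np.random.default_rng(1)
draws = rng.beta(a_post, b_post, size=100_000)  # samples from the posterior

lo, hi = np.percentile(draws, [2.5, 97.5])      # 95% credible interval
p_gt_half = (draws > 0.5).mean()                # P(theta > 0.5 | data)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
print(f"P(theta > 0.5 | data) = {p_gt_half:.3f}")
```

Unlike a p-value, the quantity `p_gt_half` answers the scientific question directly: the probability, given the data and prior, that the treatment works better than chance.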