HOW TO RIG STATISTICAL EXPERIMENTS.

Christopher Shea had an article in the December Atlantic about a psychologist who showed how psychologists can (and some do) rig the results of their experiments to make them publishable. Uri Simonsohn, with three colleagues, set out to follow some statistical procedures that are currently used by some social psychologists. They (1) measured a number of variables but reported only the few that gave the results they wanted; (2) didn’t fix the number of subjects in the experiment beforehand; and (3) looked at the data as they went along so they could stop the experiment when the results looked good. They were successful: where a significance test is nominally supposed to yield a positive result by chance only 5% of the time, these practices pushed the odds of a positive result up to close to 67%. The article tells how Simonsohn identified two academics who had published a number of fraudulent studies. (Here is an abstract of a paper describing what happened.) One of the academics who had been caught offered this defense: “many authors knowingly omit data to achieve significance, without stating this.” Simonsohn compares the sloppy use of statistics to the use of steroids: psychologists who use statistics rigorously will have less success getting papers published, just as baseball players who didn’t use steroids performed less well than those who did.
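The three practices above are easy to demonstrate by simulation. Here is a minimal sketch (my own illustration, not the authors' actual procedure), assuming a simple two-group t-test design: the null hypothesis is true by construction, yet measuring two dependent variables, peeking at the data in batches, and stopping as soon as any test crosses p < .05 inflates the false-positive rate well above the nominal 5%.

```python
# Sketch of how multiple outcome variables, no fixed sample size, and
# peeking at the data inflate false positives when no real effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_experiment(n_start=20, n_max=60, step=10, n_dvs=2, alpha=0.05):
    """Return True if any peek on any dependent variable reaches p < alpha.

    Both groups are drawn from the same distribution, so every
    "significant" result is a false positive.
    """
    a = rng.normal(size=(n_max, n_dvs))  # group A, no real effect
    b = rng.normal(size=(n_max, n_dvs))  # group B, same distribution
    n = n_start
    while n <= n_max:
        for dv in range(n_dvs):  # practice (1): try several variables
            p = stats.ttest_ind(a[:n, dv], b[:n, dv]).pvalue
            if p < alpha:
                return True      # practice (3): stop as soon as it "works"
        n += step                # practice (2): just collect more subjects
    return False

trials = 2000
false_positives = sum(hacked_experiment() for _ in range(trials)) / trials
print(f"false-positive rate: {false_positives:.1%}")  # well above 5%
```

The exact rate depends on how many variables are tried and how often the data are peeked at; adding more of either pushes it higher, which is the mechanism behind the inflated odds the post describes.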
