New publication | Computational Reproducibility in Finance: Evidence from 1,000 Tests
Dreber and Johannesson, with co-authors, analyzed the computational reproducibility of more than 1,000 tests of six research questions in finance provided by 168 research teams. Computational reproducibility means testing whether the data and code provided by researchers yield the same results as those they reported. The exact same results could be computationally reproduced for only 52% of the tests. Computational reproducibility is unrelated to the "academic quality" of the researchers or to peer-review ratings, but improves with better coding skills, more effort, and less complex code. Researchers are overconfident when assessing the computational reproducibility of their own work. The article also provides guidelines for finance researchers to improve computational reproducibility.
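To make the notion of a reproducibility test concrete, here is a minimal Python sketch of the kind of check described above: re-run the provided code, then compare each regenerated result against the reported one. The file names, the `test_id` and `estimate` columns, and the `reproduction_rate` helper are hypothetical illustrations, not the authors' actual pipeline.

```python
# Minimal sketch of a computational reproducibility check, assuming the
# reported and regenerated results live in CSV files sharing an
# identifier column ("test_id") and an estimate column ("estimate").
# These names are hypothetical, chosen for illustration only.
import pandas as pd

def reproduction_rate(reported_path: str, regenerated_path: str,
                      tol: float = 0.0) -> float:
    """Share of tests whose regenerated estimate matches the reported one.

    tol=0.0 demands exact agreement, mirroring the strict
    "exactly the same results" criterion; a small positive tol
    would tolerate rounding differences instead.
    """
    reported = pd.read_csv(reported_path)
    regenerated = pd.read_csv(regenerated_path)
    merged = reported.merge(regenerated, on="test_id",
                            suffixes=("_reported", "_regenerated"))
    matches = (merged["estimate_reported"]
               - merged["estimate_regenerated"]).abs() <= tol
    return matches.mean()

# Example usage (hypothetical file names):
# rate = reproduction_rate("reported.csv", "regenerated.csv")
# print(f"Exactly reproduced: {rate:.0%}")
```

Under this kind of exact-match criterion, the study found that only 52% of tests reproduced.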
Link to the publication
Abstract
We analyze the computational reproducibility of more than 1,000 empirical answers to 6 research questions in finance provided by 168 research teams. Running the researchers' code on the same raw data regenerates exactly the same results only 52% of the time. Reproducibility is higher for researchers with better coding skills and those exerting more effort. It is lower for more technical research questions, more complex code, and results lying in the tails of the distribution. Researchers exhibit overconfidence when assessing the reproducibility of their own research. We provide guidelines for finance researchers and discuss implementable reproducibility policies for academic journals.