The social sciences, and psychology in particular, are suffering from a replication crisis: a failure to reproduce effects that were observed in earlier studies and published in high-ranking journals. This failure evidently undermines the epistemic authority of the affected sciences. The phenomenon is often attributed to a research culture that sets the wrong incentives and rewards unreliable studies. According to this line of criticism, many reliable but unspectacular results disappear into the proverbial “file drawer”. In my talk, I trace the phenomenon to its roots in statistical methodology, argue against some quick and easy fixes (e.g., using confidence intervals instead of hypothesis tests), and propose a methodology for interpreting “insignificant” results in hypothesis tests.