tl;dr: great book. Read it.
The “Seven Sins” is concerned with the validity of psychological research. Can we be certain at all, or to what degree, about the conclusions reached in psychological research? Recently, replication efforts have cast doubt on our confidence in psychological research (1). In a similar vein, a recent paper states that in many research areas, researchers mostly report “successes”, in the sense that they report that their studies confirm their hypotheses - with psychology leading in the proportion of supported hypotheses (2). Too good to be true? In the light of all this unease, Chambers' book addresses some of the (possible) roots of the problem of (un)reliability of psychological science. Specifically, Chambers identifies seven “sins” that the psychological research community appears to be guilty of: confirmation bias, data tuning (“hidden flexibility”), disregard of direct replications (and related problems), failure to share data (“data hoarding”), fraud, lack of open access publishing, and fixation on impact factors.
Chambers is not alone in speaking out about some dirty little (or not so little) secrets or tricks of the trade. The discomfort with the status quo is gaining momentum (3, 4, 5, 6); see also the work of psychologists such as J. Wicherts, F. Schönbrodt, D. Bishop, J. Simmons, S. Schwarzkopf, R. Morey, or B. Nosek, to name just a few. For example, the German psychological association (DGPs) recently opened up (more) towards open data (7). However, a substantial number of prominent psychologists oppose this more open approach towards higher validity and legitimacy (8). Thus, Chambers' book has hit a nerve with many psychologists. True, a lot is at stake (9, 10, 11), and a train wreck may be looming. Chambers' book knits together the most important aspects of the replicability (or reproducibility) debate; the first “umbrella book” on that topic, as far as I know. Personally, I feel that only one point would merit more scrutiny: the unchallenged assumption that psychological constructs are metric (12, 13, 14). Measurement is the bedrock of any empirical science. Without precise measurement, it appears unlikely that any theory will advance. Still, psychologists sadly turn a deaf ear to this issue. Simply assuming that a sum score possesses metric (interval) scale level is not enough (15).
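To see why this matters, consider a minimal illustration (my own sketch, not Chambers'): suppose respondent A answers two Likert items with (1, 4) and respondent B with (2, 3). If the responses are merely ordinal, any strictly increasing rescaling, say \(f(x) = x^2\), is just as defensible as the raw coding. Then

\[
\sum A = 1 + 4 = 5 = 2 + 3 = \sum B,
\qquad
\sum f(A) = 1 + 16 = 17 \;\neq\; 4 + 9 = 13 = \sum f(B).
\]

The tie between the two sum scores breaks under a perfectly order-preserving transformation; comparing sums or means therefore presupposes interval-level measurement rather than establishing it.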
The book is well written and pleasurable to read, suitable for a number of couch evenings (as in my case). Although the book is methodologically sound, as far as I can tell, no special statistical knowledge is needed to follow and benefit from the exposition.
The last chapter is devoted to solutions (“remedies”); arguably, this is the most important chapter in the book. Again, Chambers succeeds in pulling together the most important trends, both concrete ideas and more general, far-reaching avenues. For him, the most important measures are a) preregistration of studies, b) judging journals by their replication quota and strengthening the replication effort as such, c) open science in general (see the Peer Reviewers' Openness Initiative and the TOP Guidelines), and d) novel ways of conceiving the job of journals. Well, maybe he does not focus so much on the last part, but I find that last point quite sensible. One could argue that publishers such as Elsevier have managed to suck way too much money out of the system, money that is ultimately paid by the taxpayers and by the research community. Basically, scientific journals do two things: hosting manuscripts and steering peer review. Remember that journals do not do the peer review; it is provided for free by researchers. As hosting is very cheap nowadays, and peer review is contributed without much input from the publishers, why not come up with new, more cost-efficient, and more reliable ways of publishing? One may think that money is not the primary concern of science; truth is. However, science, like most societal endeavors, is based entirely on the trust and confidence of the wider public. Squandering that trust destroys the funding base. Hence, science cannot afford to waste money, not at all. Among the ideas for updating publishing and journal infrastructure is the proposal to use open archives such as arXiv or osf.io as repositories for manuscripts. Peer review can be conducted on these non-paywalled manuscripts (a type of post-publication peer review), for instance organized by universities (5). “Overlay journals” may pick and choose papers from these repositories, organize peer review, and make sure that the peer review and the resulting paper are properly indexed (Google Scholar, etc.).
To sum up, the book taps into what is perhaps the most pressing concern in psychological research right now. It succeeds in pulling together the threads that together form the fabric of the unease in the zeitgeist of contemporary academic psychology. I feel that a lot is at stake. If we as a community fail to secure the legitimacy of academic psychology, the discipline may end up in a way similar to phrenology: once hyped, but then seen by some as pseudoscience, a view that gained popularity and is now commonplace. Let's work together for a reliable science. Chambers' book helps in that regard.
1 Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://doi.org/10.1126/science.aac4716
2 Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. http://doi.org/10.1007/s11192-011-0494-7
3 Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. http://doi.org/10.1371/journal.pone.0005738
4 Nuzzo, R. (2015). How scientists fool themselves – and how they can stop. Nature, 526(7572), 182–185. http://doi.org/10.1038/526182a
5 Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. http://doi.org/10.3389/fnhum.2013.00291
6 Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., … Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547. http://doi.org/10.1098/rsos.150547
7 Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. Psychologische Rundschau, 68(1), 20–25. http://doi.org/10.1026/0033-3042/a000341
8 Longo, D. L., & Drazen, J. M. (2016). Data Sharing. New England Journal of Medicine, 374(3), 276–277. http://doi.org/10.1056/NEJMe1516564
9 LeBel, E. P. (2017). Even With Nuance, Social Psychology Faces its Most Major Crisis in History. Retrieved from https://proveyourselfwrong.wordpress.com/2017/05/26/even-with-nuance-social-psychology-faces-its-most-major-crisis-in-history/.
10 Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., … Wong, K. M. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology, 113(1), 34.
11 Ledgerwood, A. (n.d.). Everything is F*cking Nuanced: The Syllabus (Blog Post). Retrieved from http://incurablynuanced.blogspot.de/2017/04/everything-is-fcking-nuanced-syllabus.html
12 Michell, J. (2005). The logic of measurement: A realist overview. Measurement, 38(4), 285–294. http://doi.org/10.1016/j.measurement.2005.09.004
13 Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88(3), 355–383.
14 Heene, M. (2013). Additive conjoint measurement and the resistance toward falsifiability in psychology. Frontiers in Psychology, 4.
15 Sauer, S. (2016). Why metric scale level cannot be taken for granted (Blog Post). http://doi.org/10.5281/zenodo.571356