Addressing reproducibility in the neurosciences and neuroimaging: Statistical, computational and sociological aspects

This series of journal club readings gives students a solid background in reproducibility and replication issues in the neurosciences and brain imaging, and introduces solutions that can be implemented for reproducible and replicable science. The statistical, methodological, computational, and sociological aspects of the topic are covered through these articles, along with the practical implementation of solutions. After this course, students should be able to identify reproducibility issues in the neuroscience literature and apply the principles of reproducible, reusable, and efficient research in their own work.

Reproducibility in life sciences: some background

Sept. 6: Landmark papers in the reproducibility crisis in life sciences

  • Begley and Ellis, 2012: Raise standards

  • Baggerly and Coombes, 2009: Forensic analysis

PRACTICAL: Environment set-up, introduction to bash (GK)

Sept. 13: General description of the issue and a first open science response

  • Academy of Medical Sciences: Reproducibility and Reliability of Biomedical Research, 2015

  • Open Science Collaboration (Nosek et al.), 2015: The Reproducibility Project

PRACTICAL: Finish bash, git for version control (JBP)

  • Any remaining bash questions
  • Introduction to git (concepts)
  • Repositories, commits, branches / tags, remotes, the working tree
  • Software Carpentry (SWC) Git intro

Sept. 20: Some sociological and general aspects

  • Smaldino et al., 2016: The natural selection of bad science

  • Allison et al., 2016: A tragedy of errors

PRACTICAL: GitHub for collaboration (JB)

  • Any remaining git questions
  • GitHub-flavored Markdown
  • Forking and pull requests
  • Issues / code review
  • Mozilla Open Leadership: Introduction to GitHub

Sept. 27: Some general solutions and principles

  • Wilkinson et al., 2016: The FAIR principles

  • Bosman et al., 2017: The Scholarly Commons principles

PRACTICAL: Introduction to Python; Git / GitHub, continued (ED)

  • More on git and GitHub: some specific exercises
  • Introduction to Python

Some statistical aspects

Oct. 4: Power issues: the initial reports

  • Ioannidis, 2005: Why most published research findings are false

  • Button et al., 2013: Power failure

PRACTICAL: Introduction to Python / statistics in Python (EMD)
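
The simulation sketch below is a possible warm-up for this practical; it is a minimal example assuming only numpy and scipy (the function name `simulated_power` is illustrative, not from the course materials). It estimates the power of a two-sample t-test by simulation, illustrating the "power failure" argument of Button et al. (2013): at the small sample sizes common in neuroscience, even a medium effect (d = 0.5) is detected only a minority of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n_per_group, effect_size=0.5, n_sim=10_000, alpha=0.05):
    """Fraction of simulated two-group experiments reaching p < alpha."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n_per_group)          # control group
        b = rng.normal(effect_size, 1.0, n_per_group)  # true effect of d = effect_size
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sim

# Power rises slowly with sample size: n = 20 per group is still badly underpowered.
for n in (10, 20, 50, 100):
    print(f"n = {n:3d} per group -> power ~ {simulated_power(n):.2f}")
```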

Oct. 11: Power issues: the more recent reports

  • Poldrack, 2017, Scanning the horizon (fMRI)

  • Dumas-Mallet et al., 2017: Three biomedical research domains

PRACTICAL: cf. FORCE11 conference (JB)

Oct. 18: Guest lecture by Paramita S. Chaudhuri: statistical aspects of reproducibility

  • Rosenthal, 1979: The file drawer problem

  • Simmons et al., 2011 and Simonsohn et al., 2014: "p-hacking" and the "p-curve"

PRACTICAL: To be defined; guest lecture

Oct. 25: Proposal for redefining p-values and response

  • Benjamin et al., 2018: Redefine statistical significance

  • Lakens et al., 2017: Justify your alpha

PRACTICAL: Introduction to p-hacking in Python (Willie)
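
A minimal sketch of the kind of demonstration this practical could build on, assuming only numpy and scipy (the helper `hacked_experiment` is illustrative, not from the course repository): with a true null effect, testing after every few added subjects and stopping as soon as p < .05 ("optional stopping", one of the flexible practices described by Simmons et al., 2011) inflates the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_experiment(n_start=10, n_max=60, step=5, alpha=0.05):
    """Return True if any interim t-test reaches p < alpha; the null is true."""
    a = list(rng.normal(size=n_start))   # both groups drawn from the same
    b = list(rng.normal(size=n_start))   # distribution: no real effect
    while len(a) <= n_max:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True                  # stop and "publish" at first significance
        a.extend(rng.normal(size=step))  # otherwise collect a few more subjects
        b.extend(rng.normal(size=step))
    return False

n_sim = 2_000
fp_rate = sum(hacked_experiment() for _ in range(n_sim)) / n_sim
print(f"False-positive rate with optional stopping: {fp_rate:.2f} (nominal: 0.05)")
```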


Neuroimaging-specific aspects

Nov. 1: Software and software-use issues

  • Eklund et al., 2016: Cluster failure (fMRI)

  • Varoquaux, 2017: Cross-validation failure
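
To make the cross-validation point concrete, here is a minimal simulation sketch assuming only numpy (the nearest-class-mean classifier and the dataset generator are illustrative choices, not taken from the paper): leave-one-out accuracy estimated on small two-class samples drawn from one and the same distribution varies widely from draw to draw, i.e. the error bars are large.

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                 # hold out sample i
        Xtr, ytr = X[mask], y[mask]
        means = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(((X[i] - means) ** 2).sum(axis=1))
        correct += pred == y[i]
    return correct / len(y)

def sample_dataset(n_per_class=15, dim=20, sep=0.5):
    """Two Gaussian classes whose means are sep apart in Euclidean distance."""
    X0 = rng.normal(0.0, 1.0, (n_per_class, dim))
    X1 = rng.normal(sep / np.sqrt(dim), 1.0, (n_per_class, dim))
    return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

# Same generating distribution every time; only the 30-sample draw changes.
scores = [loo_accuracy(*sample_dataset()) for _ in range(200)]
print(f"LOO accuracy across draws: mean {np.mean(scores):.2f}, std {np.std(scores):.2f}")
```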

PRACTICAL: Introduction to containers (Greg)

Nov. 15: Some computational aspects

  • Glatard et al., 2015: OS dependencies

  • Bowring et al., 2018: Same data, different results

  • Carp, 2012: Pipeline flexibility

PRACTICAL: Containers, continued (Greg)

Nov. 22: Examples of replications and of reproducible articles

  • Boekel et al., 2013: A purely confirmatory replication study

  • Waskom et al., 2014: An entirely reproducible article

PRACTICAL: Group projects: can you reproduce a paper?

  • Choosing an article to "reproduce"

Nov. 29: Community-based standards

  • Nichols et al., 2017: The COBIDAS report

  • Gorgolewski et al., 2016: The Brain Imaging Data Structure (BIDS) standard

PRACTICAL: Group projects, continued

Bibliography

Baggerly, Keith A., and Kevin R. Coombes. 2009. “Deriving Chemosensitivity from Cell Lines: Forensic Bioinformatics and Reproducible Research in High-Throughput Biology.” The Annals of Applied Statistics 3 (4): 1309–34. https://doi.org/10.1214/09-AOAS291.

Begley, C. Glenn, and Lee M. Ellis. 2012. “Drug Development: Raise Standards for Preclinical Cancer Research.” Nature 483 (7391): 531–533.

“Reproducibility and Reliability of Biomedical Research: Improving Research Practice.” 2015. The Academy of Medical Sciences. https://acmedsci.ac.uk/viewFile/56314e40aac61.pdf.

Open Science Collaboration. 2015. “Estimating the Reproducibility of Psychological Science.” Science 349 (6251): aac4716. https://doi.org/10.1126/science.aac4716.

Smaldino, Paul E., and Richard McElreath. 2016. “The Natural Selection of Bad Science.” Royal Society Open Science 3 (9): 160384. https://doi.org/10.1098/rsos.160384.

Allison, David B., Andrew W. Brown, Brandon J. George, and Kathryn A. Kaiser. 2016. “Reproducibility: A Tragedy of Errors.” Nature News 530 (7588): 27. https://doi.org/10.1038/530027a.

Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, et al. 2016. “The FAIR Guiding Principles for Scientific Data Management and Stewardship.” Scientific Data 3 (March): 160018. https://doi.org/10.1038/sdata.2016.18.

Bosman, Jeroen, Ian Bruno, Chris Chapman, Bastian Greshake Tzovaras, Nate Jacobs, Bianca Kramer, Maryann Martone, Fiona Murphy, Daniel Paul O’Donnell, and Michael Bar-Sinai. 2017. “The Scholarly Commons: Principles and Practices to Guide Research Communication.”

Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124.

Button, Katherine S., John P. A. Ioannidis, Claire Mokrysz, Brian A. Nosek, Jonathan Flint, Emma S. J. Robinson, and Marcus R. Munafò. 2013. “Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience.” Nature Reviews Neuroscience 14 (5): 365–76. https://doi.org/10.1038/nrn3475.

Poldrack, Russell A., Chris I. Baker, Joke Durnez, Krzysztof J. Gorgolewski, Paul M. Matthews, Marcus R. Munafò, Thomas E. Nichols, Jean-Baptiste Poline, Edward Vul, and Tal Yarkoni. 2017. “Scanning the Horizon: Towards Transparent and Reproducible Neuroimaging Research.” Nature Reviews Neuroscience 18 (2): 115–26. https://doi.org/10.1038/nrn.2016.167.

Dumas-Mallet, Estelle, Katherine S. Button, Thomas Boraud, Francois Gonon, and Marcus R. Munafò. 2017. “Low Statistical Power in Biomedical Science: A Review of Three Human Research Domains.” Royal Society Open Science 4 (2): 160254. https://doi.org/10.1098/rsos.160254.

Rosenthal, Robert. 1979. “The File Drawer Problem and Tolerance for Null Results.” Psychological Bulletin 86 (3): 638.

Simmons, J. P., L. D. Nelson, and U. Simonsohn. 2011. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22 (11): 1359–66. https://doi.org/10.1177/0956797611417632.

Simonsohn, Uri, Leif D. Nelson, and Joseph P. Simmons. 2014. “P-Curve: A Key to the File-Drawer.” Journal of Experimental Psychology: General 143 (2): 534–47. https://doi.org/10.1037/a0033242.

Benjamin, Daniel J., James O. Berger, Magnus Johannesson, Brian A. Nosek, E.-J. Wagenmakers, Richard Berk, Kenneth A. Bollen, et al. 2018. “Redefine Statistical Significance.” Nature Human Behaviour 2 (1): 6–10. https://doi.org/10.1038/s41562-017-0189-z.

Lakens, Daniel, Federico G. Adolfi, Casper Albers, Farid Anvari, Matthew A. J. Apps, Shlomo Engelson Argamon, Marcel A. L. M. van Assen, et al. 2017. “Justify Your Alpha.” PsyArXiv, September. https://doi.org/10.17605/OSF.IO/9S3Y6.

Eklund, Anders, Thomas E. Nichols, and Hans Knutsson. 2016. “Cluster Failure: Why fMRI Inferences for Spatial Extent Have Inflated False-Positive Rates.” Proceedings of the National Academy of Sciences 113 (28): 7900–7905. https://doi.org/10.1073/pnas.1602413113.

Varoquaux, Gaël. 2017. “Cross-Validation Failure: Small Sample Sizes Lead to Large Error Bars.” ArXiv:1706.07581 [q-Bio, Stat], June. http://arxiv.org/abs/1706.07581.

Glatard, Tristan, Lindsay B. Lewis, Rafael Ferreira da Silva, Reza Adalat, Natacha Beck, Claude Lepage, Pierre Rioux, et al. 2015. “Reproducibility of Neuroimaging Analyses across Operating Systems.” Frontiers in Neuroinformatics 9 (April). https://doi.org/10.3389/fninf.2015.00012.

Bowring, Alexander, Camille Maumet, and Thomas Nichols. 2018. “Exploring the Impact of Analysis Software on Task fMRI Results.” bioRxiv 285585.

Carp, Joshua. 2012. “On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments.” Frontiers in Neuroscience 6. https://doi.org/10.3389/fnins.2012.00149.

Boekel, Wouter, Eric-Jan Wagenmakers, Luam Belay, Josine Verhagen, Scott Brown, and Birte U. Forstmann. 2013. “A Purely Confirmatory Replication Study of Structural Brain-Behavior Correlations.” Journal of Neuroscience 12 (12): 4745–4765.

Waskom, M. L., D. Kumaran, A. M. Gordon, J. Rissman, and A. D. Wagner. 2014. “Frontoparietal Representations of Task Context Support the Flexible Control of Goal-Directed Cognition.” Journal of Neuroscience 34 (32): 10743–55. https://doi.org/10.1523/JNEUROSCI.5282-13.2014.

Nichols, Thomas E., Samir Das, Simon B. Eickhoff, Alan C. Evans, Tristan Glatard, Michael Hanke, Nikolaus Kriegeskorte, et al. 2017. “Best Practices in Data Analysis and Sharing in Neuroimaging Using MRI.” Nature Neuroscience, February 23, 2017. https://doi.org/10.1038/nn.4500.

Gorgolewski, Krzysztof J., Tibor Auer, Vince D. Calhoun, R. Cameron Craddock, Samir Das, Eugene P. Duff, Guillaume Flandin, et al. 2016. “The Brain Imaging Data Structure, a Format for Organizing and Describing Outputs of Neuroimaging Experiments.” Scientific Data 3 (June): 160044. https://doi.org/10.1038/sdata.2016.44.