Reproducibility: ensure that any analysis performed with the system can be reproduced precisely and practically
Transparency: facilitate communication of analyses and results in ways that are easy to understand, while still providing all details
Accessibility: eliminate barriers for researchers who want to use complex methods; make these methods available to everyone
An analysis is reproducible if it is described and captured in sufficient detail that it can be precisely reproduced. Reproducibility is not provenance, reusability/generalizability, or correctness: it is a minimum standard for evaluating analyses.
In Galaxy, the record of an analysis is the History. For each step, it captures the tool that was run, the input datasets (and the step that produced them), and the parameters. But can I take this to another Galaxy instance and ensure I have the same tool wrapper? The same version? The same versions of the underlying dependencies? The same environment?
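To make that concrete, here is a minimal sketch in Python of the per-step provenance a History captures; this is not Galaxy's actual data model, and all field names and example values are illustrative.

```python
# Minimal sketch of the provenance captured per History step: the tool,
# its version, the input datasets (with the steps that produced them),
# and the parameters. Not Galaxy's real schema; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class HistoryStep:
    tool_id: str                 # which tool was run
    tool_version: str            # wrapper version recorded at run time
    inputs: dict = field(default_factory=dict)      # input name -> producing step id
    parameters: dict = field(default_factory=dict)  # all parameter settings

step = HistoryStep(
    tool_id="bwa_mem",           # hypothetical tool id
    tool_version="0.7.17",
    inputs={"reads": 2, "reference": 1},
    parameters={"min_seed_length": 19},
)
print(step)
```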
Originally, tool management was very ad hoc: no tracking of wrapper version information in the Galaxy database, and no standard way to share. The ToolShed enables not just sharing, but global identifiers and versions across all Galaxy instances (see the sketch below). We also tried to deal with dependencies… less successfully. Packaging dependencies is a lot of work and a general need, one better handled by a broader community.
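For illustration, a hedged sketch of installing a specific ToolShed repository revision via the BioBlend client; the URL, API key, repository name/owner, and revision hash are all placeholders.

```python
# Sketch: install a ToolShed repository pinned to an exact changeset
# revision, using its global name/owner/revision identifier.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="ADMIN_API_KEY")
gi.toolshed.install_repository_revision(
    tool_shed_url="https://toolshed.g2.bx.psu.edu",
    name="bwa",                         # repository name (illustrative)
    owner="devteam",                    # repository owner (illustrative)
    changeset_revision="0123456789ab",  # hypothetical revision hash
    install_tool_dependencies=True,
)
```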
Conda: a system for "installing multiple versions of software packages and their dependencies and switching easily between them." Bioconda now has ~2000 recipes for software packages* (as of yesterday). All packages are built in a minimal environment to ensure isolation and portability. (*not even counting different versions!)
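As a sketch of what that buys us in practice: pinning an exact package version in a fresh Conda environment, so the same binary build can be recreated elsewhere. The package and version here are illustrative.

```python
# Create an isolated environment with an exactly pinned Bioconda package.
import subprocess

subprocess.run(
    ["conda", "create", "--yes", "--name", "repro-env",
     "--channel", "bioconda", "--channel", "conda-forge",
     "samtools=1.9"],   # pin the exact version used in the analysis
    check=True,
)
```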
Virtual machines give complete control over everything from the kernel level up. Containers are lightweight environments with isolation enforced at the OS level, with complete control over all software. Docker adds a complete ecosystem for sharing, versioning, and managing containers (e.g. Docker Hub).
For any package in Conda/Bioconda, we can build a container with just that software on a minimal base image. If we use the same base image, we can reconstruct exactly the same container (since we archive all binary builds of all versions). With automation, these containers can be built for every package with no manual modification or intervention (e.g. mulled); see the sketch below.
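For example, running such an automatically built container. This is a hedged sketch: the exact image tag (which encodes the package version and build string) is an assumption; check quay.io/biocontainers for real tags.

```python
# Run a single-package "mulled" BioContainers image via Docker.
import subprocess

image = "quay.io/biocontainers/samtools:1.9--h8571acb_11"  # hypothetical tag
subprocess.run(["docker", "run", "--rm", image, "samtools", "--version"],
               check=True)
```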
We need to create an expectation of reproducibility: require authors to make their work reproducible as part of the peer review process. We also need to educate analysts. The practices that lead to reproducibility are also essential to scientific integrity.
[Slide: first page of Leek JT, Peng RD, "Opinion: Reproducible research can still be wrong: Adopting a prevention approach," PNAS, February 10, 2015, doi:10.1073/pnas.1421412111.] Key points: reproducibility (the ability to recompute data analytic results given an observed dataset and knowledge of the data analysis pipeline) and replicability (the chance that an independent experiment targeting the same scientific question will produce a consistent result) are foundational characteristics of successful scientific research. Mere reproducibility of computational results is insufficient to address the replication crisis, because even a reproducible analysis can suffer from confounding, poor study design, or missing data. If problematic data analysis is a disease, reproducibility speeds diagnosis and treatment (screening and rejection of poor analyses by referees, editors, and the community), while education and evidence-based data analysis act as preventative measures.
We need more education on how to conduct computational analyses that are correct and transparent. Research should be subject to continuous, constructive, and open peer review. Mistakes will be made! We need to create an environment where researchers are willing to be open and transparent enough that these mistakes are found.
Can a workflow be exported from Galaxy and imported into another Galaxy (see the sketch below)? How concrete or abstract is a workflow? Can it generalize across different versions of a tool? Across different tools of a similar type? What about providing narrative context?
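A hedged sketch of the export/import round trip using the BioBlend client; the URLs, API keys, and workflow id are placeholders, and the import only succeeds if the target instance can resolve the same tool ids and versions.

```python
# Export a workflow from one Galaxy instance as a dict (its concrete JSON
# representation) and import it into another.
from bioblend.galaxy import GalaxyInstance

src = GalaxyInstance(url="https://galaxy-a.example.org", key="SRC_API_KEY")
dst = GalaxyInstance(url="https://galaxy-b.example.org", key="DST_API_KEY")

wf = src.workflows.export_workflow_dict("WORKFLOW_ID")  # concrete representation
dst.workflows.import_workflow_dict(wf)  # tool ids/versions must resolve on dst
```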
[Slide: Leveraging National Cyberinfrastructure: the Galaxy/XSEDE Gateway. Dedicated resources at TACC Austin: the Galaxy cluster (Rodeo; 256 cores, 2 TB memory) and Corral/Stockyard (20 PB disk). Shared XSEDE resources: Blacklight and Bridges. Also involved: PTI, IU Bloomington. Funded by National Science Foundation Award #ACI-1445604.]
[Slide: deployment architecture spanning IU and TACC: clusters (cluster 01 … cluster 16) and VMs (vm 01 … vm N) at each site, NFS and CVMFS for shared storage, job execution via Slurm + Pulsar, and the dedicated Rodeo resource. Funded by National Science Foundation Award #ACI-1445604.]
We need to manage datasets at the 100k scale. At some point users no longer care about seeing the individual history or workflow, just specific results. New: a many-workflow view, for monitoring the execution of many workflows in parallel (see the sketch below). New: reports, which generate continuously updated summaries of one or many executing workflows from user templates.
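A minimal sketch of the kind of summary the many-workflow view provides, polling invocation states via BioBlend (recent BioBlend versions expose an invocations client; the URL and key are placeholders).

```python
# Summarize the state of all workflow invocations visible to the user.
from collections import Counter
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="API_KEY")
states = Counter(inv["state"] for inv in gi.invocations.get_invocations())
print(states)   # e.g. Counter({'scheduled': 40, 'ready': 2, 'failed': 1})
```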
Dataset complexity, heterogeneity, and dimensionality are all only increasing. The analysis decision process requires more support for data exploration, both visual exploration and interactive data manipulation.
The future Galaxy embraces real-time and continuous communication: from exploratory analysis to batch job tracking to automatic reports, Galaxy needs to be responsive and informative. The future Galaxy is increasingly interactive. The future Galaxy better supports transitions between analysis modes.
Galaxy team: Martin Čech, John Chilton, Dave Clements, Nate Coraor, Carl Eberhard, Jeremy Goecks, Björn Grüning, Sam Guerler, Mo Heydarian, Jennifer Hillman-Jackson, Anton Nekrutenko, Eric Rasche, Nicola Soranzo, Marius van den Beek
JHU Data Science: Jeff Leek, Roger Peng, …
Jetstream: Craig Stewart, Ian Foster, Matthew Vaughn, Nirav Merchant
BioConda: Johannes Köster, Björn Grüning, Ryan Dale, Chris Tomkins-Tinch, Brad Chapman, …
Other lab members: Boris Brenerman, Min Hyung Cho, Peter DeFord, German Uritskiy, Mallory Freeberg
Funding: NHGRI (HG005133, HG004909, HG005542, HG005573, HG006620), NIDDK (DK065806), NSF (DBI 0543285, DBI 0850103)