Monday, June 29, 2015

Q: Tools for Reproducible Computing?

Thank you to all of the panelists for your introductions and initial comments.  In my first posting, I suggested that we need better tools, better habits, better repositories, and higher expectations in order to achieve reproducibility.  That seems like a useful way to structure the next few questions, so I will start with tools:

What currently-available tools do you recommend for enabling reproducible scientific computing?  Is there a tool that we ought to have, but do not?

P.S. I am using "reproducibility" as an easy shorthand for re-usability, re-creation, verification, and related tasks that have already seen some discussion.  Please interpret the question broadly.

Monday, June 22, 2015

Reproducibility, Correctness, and Buildability: the Three Principles for Ethical Public Dissemination of Computer Science and Engineering Research

Thank you to Douglas Thain for starting this blog on reproducible research! It is my hope that by bringing light to this topic, sharing ideas and success stories, documenting lessons learned, and stimulating ongoing discussion, we can help pave the way for increasing reproducibility in published engineering and computer science research.

My name is Kristin Yvonne Rozier and I am an Assistant Professor of Aerospace Engineering and Engineering Mechanics in the University of Cincinnati's College of Engineering and Applied Science. Before moving to UC in January 2015, I spent 14 years as a Research Scientist at NASA, first at NASA Langley and then at NASA Ames. My primary research area is formal methods, a class of mathematically rigorous techniques for the specification, design, validation, and verification of critical systems. In our lab, the Laboratory for Temporal Logic in Aerospace Engineering, we publish a website to accompany each research paper, containing the artifacts required to reproduce the published results wherever applicable.

When reading research papers, and especially when serving on Programme Committees to peer-review them, I am too often astonished by the low level of dedication to reproducibility in published research, not just among authors but among peer reviewers as well. Certainly there are many barriers to publishing in a truly reproducible way, but I believe that most of these are surmountable, and for the rest, one has to ask whether they are also ethical barriers to publishing at all.

I recently examined this topic in detail with another blogger on this site, Eric Rozier; our paper, Reproducibility, Correctness, and Buildability: the Three Principles for Ethical Public Dissemination of Computer Science and Engineering Research, was published at the IEEE International Symposium on Ethics in Engineering, Science, and Technology (Ethics'2014).

Together, we proposed a system of three principles that underlie public dissemination of computer science and engineering research, in essence the raison d'être of scientific publication. In addition to reproducibility, research is also published to show correctness and to enable buildability. We argue that consideration of these principles is a necessary step when publicly disseminating results in any evidence-based scientific or engineering endeavor. In our recent paper, we examined how these principles apply to the release and disclosure of the four elements associated with computational research: theory, algorithms, code, and data.

More precisely, reproducibility refers to the capability to reproduce fundamental results from released details. Correctness refers to the ability of an independent reviewer to verify and validate the results of a paper. Buildability indicates the ability of other researchers to use the published research as a foundation for their own new work. Buildability is broader than extensibility, as it requires that the published results have reached a level of completeness such that the research can be used for its stated purpose and has progressed beyond the level of a preliminary idea. We argue that these three principles are not being sufficiently met by current publications and proposals in computer science and engineering, and that they represent a goal toward which publishing should continue to aim.

In order to address the barriers to upholding these principles, we introduced standards for the evaluation of reproducibility, correctness, and buildability in relation to the varied elements of computer science and engineering research, and discussed how they apply to proposals, workshops, conferences, and journal publications, making arguments for appropriate standards for each principle in these settings. We also addressed modern issues including big data, data confidentiality, privacy, security, and privilege.

Given that it is possible for all publicly disseminated research to uphold these three principles, and that deviation from them is determined by factors that restrict public dissemination, such as intellectual property, secure or classified information, and time, it is unethical to publicly disseminate work that does not adhere to at least one of the principles of reproducibility, correctness, and buildability.

Friday, June 5, 2015

Interesting Replicability "Badge" for Journal Articles

I am writing to point out an interesting experiment that's starting up in the ACM Transactions on Mathematical Software (TOMS) computational journal.  There's a preprint of an editorial by Mike Heroux available that explains the process.

In brief, TOMS has created a "Replicated Computational Results (RCR)" review process, which is really a designation saying that the computational results published in an article are replicable.  As you might expect, this adds an extra burden on the authors to submit their work in a way that enables this RCR review, an extra burden on the editor to work with this part of the submission, and a completely new burden on the RCR reviewer.  If this process is successfully completed, the paper gets an RCR designation on its cover page, the journal editors hopefully get higher quality work, and the reviewer gets satisfaction in a job well done and a credited contribution to the field.

This seems like a really interesting experiment; I hope it succeeds.

P.S. I'm also personally happy that the first paper that will receive this designation is an output of an NSF-funded SI2 project: Field G. Van Zee and Robert A. van de Geijn. 2015. BLIS: A framework for rapidly instantiating BLAS functionality. ACM Trans. Math. Softw. 41, 3 (May 2015), with James Willenbring as the first RCR reviewer.

Disclaimer

Some work by the author was supported by the National Science Foundation (NSF) while the author was working at the Foundation; any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.