
Friday, July 24, 2015

Artifact Evaluation Committees!

It would seem we can all agree that reproducibility is a worthwhile goal, even a bar that every peer-reviewed publication should clear. However, the broader research community varies widely in its willingness to demonstrate or require (or even suggest!) reproducibility of peer-reviewed publications. Part of that variance certainly stems from the difficulty of defining precisely what reproducibility means for any given work, i.e. a specific research paper. Another part stems from the belief that the authors will surely have a) taken the time to determine what reproducibility means for their submission and b) ensured that all artifacts necessary for reproducibility are publicly available and in good working order before publication.

While I applaud the level of optimism about authors' commitments to reproducibility that seems to pervade the research community today, I also find it highly unrealistic.

Authors are frequently overworked researchers, academics, and students working long hours right up until the paper submission deadline, pushing for the best submission possible. It is unrealistic to expect that this crowd of tired authors, focused tightly on producing a submission that will pass a rigorous peer review, will also always take the time to think carefully about what it really means for their work to be reproducible, something that almost never influences the peer-review process.

Therefore, I argue that if we truly place any value on reproducibility as a goal of peer-reviewed research, we need to make it a required element of peer review. Until we do, we remain "all talk and no action." Sadly, we have yet to see a top peer-reviewed computer science or engineering venue willing to include reproducibility explicitly as a criterion for publication on the same level as the novelty and significance of the work.

However, there is (a little) hope. One of the best moves in this direction has come from the research area of software engineering. While there are certainly exceptions, for most publications in this area reproducibility of the published work involves, at some level, a) being able to access the software artifacts that comprise the novel contributions of the paper and b) being able to execute them. This led to a new movement called Artifact Evaluation, the motivation for which is quite compelling.

Here is an excerpt:
"We find it especially ironic that areas that are so centered on software, models, and specifications would not want to evaluate them as part of the paper review process, as well as archive them with the final paper. Not examining artifacts enables everything from mere sloppiness to, in extreme cases, dishonesty. More subtly, it also imposes a subtle penalty on people who take the trouble to vigorously implement and test their ideas."

Very importantly, artifact evaluation was assigned to a special committee of PhD students and new postdocs, not to the regular program committee (PC). This strategy worked brilliantly.

The evaluation was extremely conservative, in three major ways:

  • Evaluation of artifacts was conducted strictly after papers were unconditionally accepted for publication so that it did not affect peer review in any way.
  • Artifact evaluation was strictly limited to whether the available materials matched what was described in the paper. As Shriram Krishnamurthi notes, "We were only charged with deciding whether the artifact met the expectations set by the paper, no matter how low or incorrect those might be! This is of course a difficult emotional barrier to ignore."
  • The results of the evaluation were not publicly released. They were sent only to the authors of the paper, who could choose to describe the results in public as they wished, or not at all.

So, hats off to Shriram Krishnamurthi, Matthias Hauswirth, Steve Blackburn, Jan Vitek, Andreas Zeller, Carlo Ghezzi, and the others who have taken a stand on reproducibility of published research in software engineering! Andreas Zeller even secured a cash prize of USD 1000 from Microsoft Research for a Distinguished Artifact at ESEC/FSE 2011! Artifact evaluation committees (AECs) have now been included in several major conferences, most recently CAV 2015!

Now that we have a system that works, it would seem it is time to make it stick: make artifact evaluation standard, take it into account for paper acceptance, and make the results public.

So, how do we do that?

5 comments:

  1. I've been pleased to see artifact evaluation catch on in this area, but the fact that we dump the work on students and new postdocs seems to say that it's just a box we need checked, not something we see as an area for creativity and innovation. I hope the upcoming Dagstuhl meeting (http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=15452) will think about this.

  2. Shriram Krishnamurthi has sent in the following comment:

    Thanks for the very nice article. You've certainly gotten to the essence of the design.

    Responding to @Dan: I deeply dislike this characterization of “dumping” work onto students and new postdocs. (By the way, who said students and postdocs aren't capable of at least as much creativity as superannuated professors?) The justification for using younger people was clearly articulated in the first essay I wrote about artifact evaluation:

    http://cs.brown.edu/~sk/Memos/Conference-Artifact-Evaluation/

    Look for “AEC Composition”. It lays out several points in detail.

    But yes, the upcoming Dagstuhl meeting (which I'm co-organizing) will indeed be glad to talk about this if there's interest from the attendees. You're welcome to bring it up.

  3. This comment has been removed by the author.

  4. Sorry, I didn't read the linked post earlier. I do understand the reasoning for your choice, and shouldn't have said "dumping", but I still see something wrong with the idea of having 2 groups (classes) of reviewers, one that reviews the ideas/papers and one that reviews the artifacts.

  5. Well, first a conference steering committee needs to decide to mandate artifact evaluation. I am optimistic that this will happen sooner or later in a venue where common artifacts are valued but unusual hardware is not required. For example, this would be much easier to accomplish in ICSE or KDD, where the concern is the algorithms and the output, than in HPDC or SC, where the concern is usually performance on unusual hardware.

    If you really want to make it stick, the PC could require artifact evaluation to happen *before* technical evaluation, e.g. require brief instructions on how to start a VM, check out the code, download the data sets, and run an example (a minimal sketch of such a script appears below). If that can't be done, then return the paper without review...

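
As a concrete illustration of the "brief instructions" idea raised in the last comment, here is a minimal sketch of what a single reproduction script might look like. The repository URL, data-set URL, and example command are hypothetical placeholders, not the requirements of any particular conference or artifact.

# reproduce.py -- a hypothetical sketch of the "check out, download, run one example" steps.
import subprocess
import urllib.request

REPO_URL = "https://example.org/paper-artifact.git"      # hypothetical repository
DATA_URL = "https://example.org/datasets/sample.tar.gz"  # hypothetical data set

def run(cmd, cwd=None):
    # Echo and run a command, stopping immediately if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def main():
    # 1. Check out the code.
    run(["git", "clone", REPO_URL, "artifact"])
    # 2. Download and unpack the data set.
    urllib.request.urlretrieve(DATA_URL, "artifact/sample.tar.gz")
    run(["tar", "-xzf", "sample.tar.gz"], cwd="artifact")
    # 3. Run one example end to end (run_example.py is a placeholder name).
    run(["python", "run_example.py", "--input", "sample"], cwd="artifact")

if __name__ == "__main__":
    main()

If such a script (or the equivalent steps inside a provided VM) cannot be run to completion, the submission would be returned without review, exactly as the comment suggests.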