
Friday, August 14, 2015

Reproducibility Workshop Report


Final report of the Reproducibility@XSEDE workshop at XSEDE 2014:



Monday, August 10, 2015

Preserve the Mess or Encourage Cleanliness?

As part of the DASPOS project, my research group is developing a variety of prototype tools for software preservation.  Haiyan Meng and Peter Ivie have been doing the hard work of software development while I write the blog posts.  This recent paper gives an overview of what we are up to:
 
Douglas Thain, Peter Ivie, and Haiyan Meng, Techniques for Preserving Scientific Software Executions: Preserve the Mess or Encourage Cleanliness?, 12th International Conference on Digital Preservation (iPres), November, 2015.


Our basic idea is that preservation is fundamentally about understanding dependencies.  If you can identify (and name) all of the components necessary to run a piece of code (the data, the app, the libraries, the OS, the network, everything!), then you can reproduce it precisely.  The problem is that the dependencies are often invisible or irrelevant to the end user, so we need a way to make them explicit.

Given that, there are two basic approaches, and a continuum between them: 

  • Preserve the Mess.  Allow the user to do whatever they want using whatever programs, machines, and environments that they like.  As things run, observe the dependencies and record them for later use.  Once known, the dependencies can be collected for preservation.
  • Encourage Cleanliness. Require the user to run in a restricted space, in which all objects (data, programs, operating systems) must first be explicitly imported and named.  Once imported, the user may combine them together, and the system will accurately record the combination.
An example of preserving the mess is to use the Parrot virtual file system to run an arbitrary application.  When run in preservation mode, Parrot observes all of the system calls and records every file and network resource that the application accessed.  This list can be used to package up all of the dependencies and re-run the application using a virtual machine, container, or Parrot itself.  For a given application, the list can be surprisingly long and complex:

file:/etc/passwd
file:/usr/lib/libboost.so.1
file:/home/fred/mydata.dat
file:/usr/local/bin/mysim.exe
http://www.example.com/calibration.dat
git://github.com/myproject/code.git
...
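
To make this concrete, here is a rough sketch of the idea in Python. It is not Parrot itself: it simply runs a command under strace, collects the files that were successfully opened, and copies them into a package directory that mirrors their original paths. It ignores network resources and the many corner cases that Parrot handles, and the example command at the bottom is just a placeholder.

import re
import shutil
import subprocess
from pathlib import Path

def preserve_the_mess(command, package_dir="package"):
    # Run a command under strace, then copy every file it opened into
    # a package directory.  A crude stand-in for Parrot's preservation mode.
    trace_file = "trace.log"
    subprocess.run(["strace", "-f", "-e", "trace=open,openat",
                    "-o", trace_file] + command)
    opened = set()
    pattern = re.compile(r'open(?:at)?\(.*?"([^"]+)"')
    for line in Path(trace_file).read_text().splitlines():
        match = pattern.search(line)
        if match and "= -1" not in line:   # keep only successful opens
            opened.add(match.group(1))
    # Copy each dependency into the package, mirroring its absolute path.
    for path in sorted(opened):
        src = Path(path)
        if src.is_file():
            dest = Path(package_dir) / src.resolve().relative_to("/")
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
    return sorted(opened)

if __name__ == "__main__":
    # Placeholder command and input file; substitute your own application.
    print(preserve_the_mess(["./mysim.exe", "mydata.dat"]))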


This approach is easy to apply to any arbitrary program, but only for the sake of precise reproduction.  The resulting package is only guaranteed to run the exact command and input files given in the first place.  (Although one could edit the package by hand to change the inputs or make it more generally applicable.)  More importantly, it loses the context of what the user was attempting to do: it does not capture the source or version number of the components, making it challenging to upgrade or modify them.

If you prefer encouraging cleanliness, then Umbrella is an alternate approach to preservation.  Using Umbrella, the end user writes a specification that states the hardware, operating system, software packages, and input data necessary to invoke the desired program.  Umbrella picks the packages out of fixed repositories, and then assembles them all together using your preference of cloud services (Amazon), container technology (Docker), or a ptrace jail (Parrot).  An Umbrella file looks like this:

{
  "hardware": {
    "arch": "x86_64",
    "cores": "2",
    "memory": "2GB",
    "disk": "3GB"
  },
  "kernel" : {
    "name": "linux",
    "version": ">=2.6.32"
  },
  "os": {
    "name": "Redhat",
    "version": "6.5"
  },
  "software": {
    "cmssw-5.2.5-slc5_amd64": {
      "mountpoint": "/cvmfs/cms.cern.ch"
    }
  },
  "data": {
    "final_events_2381.lhe": {
      "mountpoint": "/tmp/final_events_2381.lhe",
      "action":  "none"
    }
  },
  "environ": {
    "CMS_VERSION": "CMSSW_5_2_5",
    "SCRAM_ARCH": "slc5_amd64_gcc462"
  }
}

When using Umbrella, nothing goes into the computation unless it is identified in advance by the user.  In principle, one can share a computation by simply passing the JSON file along to another person, checking it into version control, or archiving it and assigning a DOI.  Of course, that assumes that all the elements pointed to by the spec are also accurately archived!
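
To give a flavor of what the recipient of a spec can see, here is a small Python sketch (my illustration, not part of Umbrella) that walks a spec like the one above and lists everything that must be resolved before the computation can run. A fuller spec would also record where each component comes from and how to verify it; this sketch only looks at the fields shown in the example.

import json
import sys

def enumerate_dependencies(spec_path):
    # List every component named in an Umbrella-style spec.
    # Only the fields shown in the example above are inspected.
    with open(spec_path) as f:
        spec = json.load(f)

    hw = spec.get("hardware", {})
    print("hardware :", hw.get("arch"), "/", hw.get("cores"), "cores /",
          hw.get("memory"), "memory /", hw.get("disk"), "disk")
    print("kernel   :", spec["kernel"]["name"], spec["kernel"]["version"])
    print("os       :", spec["os"]["name"], spec["os"]["version"])

    for name, info in spec.get("software", {}).items():
        print("software :", name, "->", info["mountpoint"])
    for name, info in spec.get("data", {}).items():
        print("data     :", name, "->", info["mountpoint"])
    for var, value in spec.get("environ", {}).items():
        print("environ  :", var, "=", value)

if __name__ == "__main__":
    # Pass the path to a spec file, e.g. the JSON above saved as spec.json.
    enumerate_dependencies(sys.argv[1] if len(sys.argv) > 1 else "spec.json")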

Obviously, requiring cleanliness places a greater burden on the user to get organized and name everything.  And they have to learn yet another tool that isn't directly related to their work.  On the other hand, it makes it much easier to perform modifications, like trying out a new version of an OS, or the latest calibration data.

So, what's the right approach for scientific software: Preserving the mess, or encouraging cleanliness?  Or something else entirely?

 P.S. The third piece described in the paper is PRUNE, which I'll describe in a later post.


Monday, August 3, 2015

I had some pretty extensive notes for this post, but I lost them in the upgrade to Windows 10: I foolishly forgot to sync my notebook before I hit "ok" to the upgrade! So be warned: I am reconstructing this from memory!

In the context of particle physics, I have a very hard time talking about tools. After thinking about this for a while I realized why: this is part technical and part sociological. It is technical because either the tools aren't all there yet, or we are asking them to do too much. It is sociological because, given the current state of the technology, we are almost certainly going to have to ask people to do particle physics analysis in a different way in order to fully preserve the analysis steps.

I can explain a bit more.

First, the sociological part. The field of particle physics is an old one, and very individualistic. Organizing a large, 3000-person experiment like ATLAS is often referred to as herding cats for a reason. ‘Cats’ have many strengths: people with an idea pursue it, and sometimes that pays off big, even if frowned upon by the management of the experiment. However, there is also a downside: it is particularly hard to organize things. The only way to do it is to show that there are clear benefits to doing it way A rather than way B. Data preservation falls smack dab in the middle of this. Currently, the tools do not cover all of the operations, as I will briefly discuss below. However, if physicists were willing to change how they perform the steps of an analysis, we'd be most of the way to preserving it.

So they will move to a new system only if there are significant advantages, and I am nervous those aren't there yet. To talk about that, we should probably first classify the tools that are currently out there for data preservation, and the domain they have to cover.

A large physics experiment that runs at the LHC, like ATLAS, manipulates the data many times over to get it from raw data to published results. Much of that manipulation is automated by necessity – there is so much data that there is no other way to deal with it. This part of the data processing pipeline is relatively easy to preserve. The most interesting unsolved problem in data preservation is the "last mile", where the individual analyst or analysis group performs their work. This work is unique. It can include making selection cuts on the data, feeding the data into a neural network or boosted decision tree, using custom software to enhance the signal-to-background ratio, or just about anything else the physicist thinks might enhance the sensitivity of the analysis. You can already see the problem.

Almost none of this last mile is currently automated. The physicist will write scripts that automate small sections, but some portions will be submitted to a batch farm, other parts need hand tuning, and so on. While a workflow tool might help, the reality is that people tend to re-run a single step hundreds of times while developing it, and then not re-run it for months after that. Worse, these steps take place across a large range of machines and architectures. As a result, people don't perceive much value in a workflow system.

Preserving these steps is a daunting task (to me). One approach might be to require all physicists to use a single tool, and every single step would have to be done in that tool. The tool could then record everything. I believe this could technically work. But this tool would have to be a true kitchen sink, and an evolving one at that. It would need to understand an individual user's computer, it would need to understand clusters, it would need to understand GRID computing (and perhaps cloud computing soon too), and it would have to incorporate specialized analysis tools (neural networks, etc.). In short, I just don't see how restricting everyone to a single tool can make this happen. Further, I very much doubt the users – physicists – would be willing to adopt it.

The alternative is to build tools that can record each step. These exist in various forms now – you could run a command and have every file and network connection it makes archived and thus preserved. However, this is now an extra step that has to be run. Each time. And the analyzer never really knows the last time they will run a step. Sometimes during the final review a whole section needs to be re-run; often not. Any performance impact will be noticeable. It will be hard to get the analyzer to commit to something like this as well.

Is there a happy middle ground? Perhaps, but I’ve not seen it yet. In my own work I’ve been doing my best to move towards a continuous integration approach. Once I have a step “down”, I attempt to encode it into a Jenkins build script. This isn’t perfect – Jenkins doesn’t really understand the GRID, for example, but it is a start. And if everything can be written as a build script, perhaps that is a first-order preservation. Remaining, of course, is preserving the environment around it.
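
For illustration only (this is a sketch, not one of my actual scripts), a step wrapped for continuous integration might look something like the Python below: a thin wrapper that a CI job can invoke, which runs the step and writes a manifest of the command, input checksums, and surrounding environment next to the outputs. That gives at least a first-order record of each run; the command and file name are placeholders.

import hashlib
import json
import os
import platform
import subprocess
import sys
import time

def run_step(command, inputs, manifest_path="step_manifest.json"):
    # Run one analysis step and record what was run, on what inputs,
    # and in what environment.  A first-order preservation record only;
    # it does not capture the software stack itself.
    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    result = subprocess.run(command)
    manifest = {
        "command": command,
        "returncode": result.returncode,
        "inputs": {p: sha1(p) for p in inputs},
        "host": platform.node(),
        "platform": platform.platform(),
        "python": sys.version,
        "environ": dict(os.environ),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return result.returncode

if __name__ == "__main__":
    # Hypothetical step: apply selection cuts to an input file.
    sys.exit(run_step(["./apply_cuts", "final_events_2381.lhe"],
                      inputs=["final_events_2381.lhe"]))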

A more fruitful approach might be to work with much smaller experiments, and integrate the data preservation into the data handling layer. Perhaps if all data processing is always run as a batch job of some sort (be it on the GRID or in the cloud), that will provide a point of control that is still flexible enough for the user and enough of a known quantity for the tool authors. As long as the metadata system is sufficient to track the inter-job operations… which translates to: as long as the physicist does nothing outside of the data handling system. Hmmm… :-)