Thoughts on an Open Access Computational Journal I

The authors of this blog have been discussing what a truly open computational journal would look like for many years. It now seems likely that we will see some honest attempts in the near future. Thus, this seemed like a good time to lay out in a series of posts the key issues faced by such a journal. It is not hard to sketch out what such a journal might look like, and this has been done on numerous occasions. There is even a proof of concept, Open Research Computation. What we would like to address are very specific issues which determine whether the journal will accomplish its goal of enabling useful comparisons between different algorithms and codes, or be relegated to the dustbin of history.

In this first installment, I will discuss what I see as a key feature for a journal handling code: tools for code archiving and redistribution. I do not believe any current journal handles this in a satisfactory way, whereas there are numerous free, useful implementations on the web. I also believe these same tools are the right way to handle the paper source itself.

The author's first job will be to submit both the paper and the code to the journal for review. I expect papers would be screened initially, so that only papers sent to review would have code submissions, but this is a small point. Each paper would be assigned a repository into which both the manuscript and the source code are deposited. This organization has many advantages:

  • Administration using SSH public keys
  • Easy updating by authors
  • Full record of changes by both editors and authors
  • Reviewers could place comments directly in the source using a branch
  • Easy anonymous retrieval of source code by readers

We could, of course, use two repositories to prevent direct retrieval of the paper, but this seems superfluous for an open access journal. I think the advantage for reviewer comments is especially nice: the todonotes package for LaTeX could be used to insert reviewer comments directly into the manuscript source, without the cumbersome intermediary of emailed lists of page numbers and the like. Several other PDF markup technologies would also apply here.

To proceed with this organization, the journal must have a few tools in place. The first is version control, and for this purpose I think distributed version control is necessary; I recommend the usual suspects: Git, Mercurial, and Bazaar. The second is some key management architecture, which can range from something as simple as a wrapper script (hg-wrapper.sh) in the SSH authorized_keys file to the sophisticated schemes used by hosting sites. In fact, all of these services are currently provided, free of charge, by sites such as Bitbucket, GitHub, Launchpad, and Google Code. One can easily imagine a mashup of the arXiv and a hosting site that accomplishes exactly this. Without these code hosting capabilities, I do not believe any journal can truly facilitate open computational research; with them, I think the process of reviewing and updating work would become vastly more efficient.
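To make the repository-per-submission idea concrete, below is a minimal sketch in Python, assuming git is installed; the submission identifier, branch name, and paths are all hypothetical, and this is only the shape of the workflow, not a design for the journal's actual tooling: provision a bare repository for a submission, then give a reviewer a working copy with a dedicated branch for comments.

    import subprocess
    from pathlib import Path

    PAPER_ID = "oacj-2011-0042"           # hypothetical submission identifier
    REVIEW_BRANCH = "review/reviewer-1"   # reviewer comments live on their own branch

    def run(*cmd, cwd=None):
        """Run a git command and fail loudly if it fails."""
        subprocess.run(cmd, cwd=cwd, check=True)

    def provision(base):
        """Create the bare repository that authors push the paper and code to."""
        repo = Path(base) / (PAPER_ID + ".git")
        run("git", "init", "--bare", str(repo))
        return repo

    def open_review(workdir, repo_url):
        """Give a reviewer a clone with a dedicated comment branch."""
        run("git", "clone", repo_url, workdir)
        run("git", "checkout", "-b", REVIEW_BRANCH, cwd=workdir)

    if __name__ == "__main__":
        bare = provision("/tmp/journal-demo")
        open_review("/tmp/journal-demo-review", str(bare))

Any of the hosting sites mentioned above already provides equivalent operations through its web interface and access controls.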

Upcoming posts in this series will deal with:

  • Transparency of journal finances
  • Persistent anonymization of reviews and comments
  • A tiered pricing model for submissions

SIAM CSE 2011 Session

Our session at SIAM CS&E this year, Advanced Algorithms on GPUs, opened to a packed house. Matt Knepley (U Chicago) spoke first about work with Andy Terrel (UT/TACC) on GPU finite element integration based upon a high-level weak form specification. The weak form is expressed using FEniCS, code is generated using the Mako package, and kernels are compiled and launched through PyCUDA. This makes it possible to achieve high performance, 100 GF/s on the elasticity operator, from Python code. Further work will assemble these element matrices directly into a CUSP matrix on the GPU, and solve the systems using PETSc through petsc4py. This work is detailed in a paper on the arXiv.
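To make the generate-compile-launch pattern concrete, here is a minimal sketch, not the actual FEniCS-based integration code: a Mako template renders a toy per-element kernel with hypothetical quadrature weights baked into the source, and PyCUDA compiles and launches it. It assumes a CUDA-capable GPU with PyCUDA and Mako installed.

    import numpy as np
    from mako.template import Template
    import pycuda.autoinit                      # creates a CUDA context on import
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Mako template for a toy per-element "integration" kernel; the (made-up)
    # quadrature weights are baked into the generated CUDA source.
    kernel_template = Template("""
    __global__ void integrate(const float *coeffs, float *elemvec, int nelem)
    {
        int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= nelem) return;
        float sum = 0.0f;
        % for q, w in enumerate(weights):
        sum += ${w}f * coeffs[e * ${len(weights)} + ${q}];
        % endfor
        elemvec[e] = sum;
    }
    """)

    weights = [0.5, 1.0, 0.5]                   # stand-in quadrature weights
    source  = kernel_template.render(weights=weights)

    mod       = SourceModule(source)            # nvcc compile at run time
    integrate = mod.get_function("integrate")

    nelem   = 1024
    coeffs  = np.random.rand(nelem, len(weights)).astype(np.float32)
    elemvec = np.zeros(nelem, dtype=np.float32)
    integrate(drv.In(coeffs), drv.Out(elemvec), np.int32(nelem),
              block=(128, 1, 1), grid=((nelem + 127) // 128, 1))

The kernels generated from the weak form are of course far more elaborate, but the division of labor (templating in Python, compilation and launch through PyCUDA) is the same.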

Next, Andreas Klöckner (NYU) spoke on his work with Discontinuous Galerkin methods for hyperbolic problems. He uses a little language in Python to express the weak forms, and PyCUDA (of which he is the author) to build and execute the kernels. Extensive autotuning is performed by varying the granularity of partitions and the loop slicing. He achieves 250 GF/s for 4th-order elements on a single GPU, and more than 3 TF/s on a cluster of 16 GPUs, as shown in this paper. This methodology (define the parameter space, automate its traversal, augment it with heuristics, and test many kernels) allows the programmer to concentrate on the algorithm (DG) rather than the implementation or the architecture.
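The autotuning methodology is easy to sketch. The toy example below (not Klöckner's code) sweeps a single parameter, the thread block size of a trivial SAXPY kernel standing in for the real DG kernels, times each variant with CUDA events, and keeps the fastest; it assumes a CUDA-capable GPU with PyCUDA installed.

    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context on import
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void saxpy(float a, const float *x, float *y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }
    """)
    saxpy = mod.get_function("saxpy")

    n = 1 << 20
    x = drv.to_device(np.random.rand(n).astype(np.float32))
    y = drv.to_device(np.random.rand(n).astype(np.float32))

    best = None
    for block in (64, 128, 256, 512):           # the parameter space being swept
        grid = ((n + block - 1) // block, 1)
        start, stop = drv.Event(), drv.Event()
        start.record()
        saxpy(np.float32(2.0), x, y, np.int32(n),
              block=(block, 1, 1), grid=grid)
        stop.record()
        stop.synchronize()
        ms = start.time_till(stop)              # elapsed time in milliseconds
        if best is None or ms < best[1]:
            best = (block, ms)
    print("fastest block size: %d (%.3f ms)" % best)

The real parameter spaces are much larger (partition granularity, loop slicing, memory layout), which is exactly why the traversal has to be automated.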

Rio Yokota (BU) presented his work on a highly scalable multi-GPU Fast Multipole Method (FMM) for bioelectrostatic problems. He detailed the design of the direct particle-to-particle (P2P) and multipole-to-local transformation (M2L) kernels, which consume almost all the time in FMM. However, the balance between these kernels and the other operations changed greatly when moving to the GPU, since bandwidth-bound operations saw a smaller performance increase than compute-bound kernels such as P2P and M2L. Moreover, the GPU computing time far outweighed the CPU-GPU communication time. He demonstrated 40 TF/s on a problem of more than a billion particles, with 80% strong scaling efficiency on 256 GPUs (50% on 512), as shown in the recent paper.
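For readers unfamiliar with FMM internals, the P2P kernel is just the direct interaction sum. Below is a plain NumPy reference version, a sketch using a simple regularized 1/r potential rather than the kernels from the paper; this is exactly the kind of all-pairs operation the GPU implementation accelerates.

    import numpy as np

    def p2p(targets, sources, charges, eps=1.0e-12):
        """Direct O(N*M) sum of q_j / |x_i - y_j| at every target point."""
        d = targets[:, None, :] - sources[None, :, :]   # (N, M, 3) separations
        r = np.sqrt((d * d).sum(axis=-1) + eps)         # pairwise distances
        return (charges[None, :] / r).sum(axis=1)       # potential at each target

    rng = np.random.default_rng(0)
    sources = rng.random((512, 3))
    targets = rng.random((256, 3))
    charges = rng.random(512)
    phi = p2p(targets, sources, charges)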

Lastly, David Hardy (UIUC) discussed his Multilevel Summation Method (MSM) for solving electrostatic problems at each step of a molecular dynamics simulation. Unlike FMM, MSM delivers smooth gradients (the electric field), which are necessary to conserve energy in long-running simulations. The GPU implementation has several novel features, including the use of constant memory to efficiently communicate the interpolation weights. Moreover, the CPU is used to process the parts of the computation that do not fit nicely on the GPU, and this CPU work is overlapped with the GPU work. This paper provides details of the performance.
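The constant-memory idea is easy to illustrate with PyCUDA. The sketch below is hypothetical (a toy periodic stencil with made-up weights, not the MSM kernels): the weights are declared __constant__ in the CUDA source, uploaded once through get_global and memcpy_htod, and then read by every thread through the constant cache. It assumes a CUDA-capable GPU with PyCUDA installed.

    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context on import
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    NW = 8                                      # number of (made-up) weights
    mod = SourceModule("""
    #define NW 8
    __constant__ float weights[NW];             /* served from the constant cache */

    __global__ void apply_weights(const float *grid_in, float *grid_out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float s = 0.0f;
        for (int k = 0; k < NW; ++k)            /* toy stencil, not the MSM one */
            s += weights[k] * grid_in[(i + k) % n];
        grid_out[i] = s;
    }
    """)

    w = np.linspace(1.0, 0.125, NW).astype(np.float32)
    ptr, nbytes = mod.get_global("weights")     # address of the __constant__ symbol
    drv.memcpy_htod(ptr, w)                     # upload once; every thread reads it

    apply_weights = mod.get_function("apply_weights")
    n = 1 << 16
    grid_in  = np.random.rand(n).astype(np.float32)
    grid_out = np.zeros_like(grid_in)
    apply_weights(drv.In(grid_in), drv.Out(grid_out), np.int32(n),
                  block=(256, 1, 1), grid=((n + 255) // 256, 1))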


Do Computational Methods Belong in Applications Journals?

I recently submitted a paper entitled “Textbook multigrid efficiency for hydrostatic ice flow” to an applications journal, which I will call JXX for the sake of anonymity. I received a prompt and thoughtful response from the chief editor, starting with:

I have not yet sent this out for external review, because I wanted to chat with you first to see whether JXX is the most appropriate outlet for this work. I have read the manuscript and, while it is well-written in its present form, the focus is squarely on the computational details and performance of the model, rather than on its application to a geophysical problem. This seems to be at odds with the stated focus of the journal, […]

The subject of the model (if not the manuscript) is certainly very relevant to JXX and your manuscript certainly outlines what seems like a promising way forward. However, I am keen to keep the focus squarely on the geophysical problem at hand, and note that there are other journals more suited to a mathematical or computational exploration of a specific model.

He went on to compare it with material I had previously published in the journal, which investigated a physical process rather than a solution method and thus was entirely appropriate. I was presented with three options: take my chances with a risky JXX review, send the paper to a computational journal, or rework it to focus on an application instead of a method. I consulted with my coauthors, then wrote the following reply.

Thanks for your thoughtful response. The debate over where to publish practical computational methods so that they may influence applications is not going to be resolved here, but please consider the context below.

There are currently several projects in Europe and North America that are in relatively early stages of efforts to model ice stream and grounding line dynamics. History has shown that climate modeling components, as well as community models in many other fields of computational physics, tend to develop great inertia and thus have a very difficult time changing the underlying “dynamical core” even when it is deeply flawed. These flaws come in many forms including unstable space and time discretizations that need to be filtered (thus degrading accuracy often to less than first order), numerical artifacts from damped stiff waves, inability to retain accuracy at large time steps, incomplete stress balance, and the inability to (in a scalable way) perform useful analyses such as uncertainty quantification, sensitivity analysis, data assimilation, and optimization. Such limitations severely restrict the science that can be done using the numerical models that are available.

Development of robust numerical methods to perform useful analysis on complex physical systems is still a frontier in computational science. Methods are created and vetted on model problems in the computational science literature. Some methods emerge in a robust and usable form that can have an immediate impact on applications, if only people in those application areas would use them. Ice sheet modeling is currently less mature than other climate components and I think that now is an important time to influence the methods used in the community models that are rapidly gaining inertia.

Each person has a perspective on the computational landscape of available methods that influences what methods they are aware of and consider feasible to implement. On multiple occasions in the past year, since I first wrote the code used in this paper (in a week), I have had conversations with people from relatively young ice sheet modeling efforts along the lines of “We are planning to implement solver X within our existing model over the next 6 months, we think it will work because someone used it to solve <superficially-related problem> and it ‘seemed to work’.” Mostly due to PETSc’s excellent flexibility used by the model in this paper, I have been able to respond in minutes with scalability curves for several variants of method X and a variety of other methods, usually without even recompiling my code. Seeing the future is handy when developing numerical models, especially when it reveals a factor of 1000 performance difference.

As for your three options, I do not have immediate plans to write a paper using this model to solve a scientific problem in the spirit that you suggest. It is certainly capable of such with almost no new code aside from reading in bed geometry. The source code is readily available, many people already have it installed since it is distributed with PETSc, and I would be happy to assist someone who would like to use it. However, I am far more interested in developing methods and my opinion is that the hydrostatic equations are the wrong system to be solving—we should be solving Stokes because the most interesting places have steep and nearly discontinuous bed slopes. Yet there remains a large group of people that would like to solve the hydrostatic equations at high resolution in demanding physical regimes, but are currently unable to because their solver does not scale.

This leaves two options: (a) send it to review in JXX or (b) submit it to a computational journal such as SIAM J Scientific Computing (SISC). The downside of option (b) is that very few glaciologists read SISC; it would not surprise me if I am the only “glaciologist” in the world subscribed to their feed. So if we publish in SISC, only those glaciologists with whom I have personal contact or who obtain it through other informal channels (and are not intimidated by the journal title) will see it. In JXX, it would be far more visible and thus more likely to inform decisions made by other modeling efforts.

I believe that this paper is understandable and relevant to moderately computationally-inclined readers of JXX. I had a brief discussion about the suitability of JXX with [an associate editor] at the AGU Fall Meeting this year. He expressed interest and felt there were significant benefits to placing computational results that are immediately useful to existing modeling efforts in a place where they are likely to be seen. If you agree, then I am willing to take my chances with review and would appreciate it if you send it out.


The Purpose of Writing in Scientific Computing

Many people across the blogosphere are becoming more introspective as academia undergoes some major shifts. Just to name a few of these shifts: tenure is on its way out the door [0], journals are considered harmful [1], and collaborative environments are proving extremely useful [2]. This sudden gust of critical thinking has gotten me pondering what the purpose of writing is in Scientific Computing.

To set up the question, I need to say a few words about my personal perspective on what Scientific Computing really is. Numerous heavyweights of mathematics, computer science, and the application fields have tried to give an adequate definition of the field [3-5], but the very proliferation of these statements shows that my view will not be shared by all. As a computer scientist, I look at Scientific Computing as a rich field of problems that require the interaction of advanced mathematical skills, detailed architectural characterizations, and expert application knowledge. The programs written in the field are aimed at answering application questions, but usually not higher-level questions. This leads to the question: what is the point of a journal on scientific computing? Why not just write your article in the respective field of application?

First I want to dismiss several overly critical answers. They may very well be correct, but having started life in a philosophy department, I don't like simple, nihilistic answers. The first is the need for tenure: if a person does X amount of work, they need to have Y number of papers. Matt likes to point out that Riemann had only eleven papers in his whole career, but most academics' worth is judged by their paper count. The second overly critical answer is that it is easier to have your paper accepted by these journals. I would hope that this is not true, or at least that it is true because of submission numbers rather than merit.

I have the most sympathy for the idea that the interaction of ideas from various fields is a good thing. Perhaps a publication presents the use of a method that benefits an application but still requires some insight from both fields.

For example, suppose one wants to publish a new way of determining quadrature rules for a set of applications, but the method is only a slight modification of previous methods. Such a paper has no real place in a pure mathematics journal, since its contribution is insight into quadrature, a numerical concern, nor in an applied mathematics journal, because it is specialized to a particular application. The material might fit as a section of a paper reporting application results, but such details are often too technical for the application field or simply get lost. Thus the only viable place to put the article is somewhere that both applied mathematicians and computational researchers might look.

Another example is a computationalist who has created a large code for looking at new problems. The choices made in building the code may be the reason for its better performance or its wide acceptance among peers. Nevertheless, the code itself is just a well-engineered piece of software that uses practices long established by software researchers. In order to propagate these better practices for building codes, the computationalist still publishes in a place where many people writing similar codes may look.

Both of the examples above outline good reasons to publish in a scientific computing journal, but there is one fatal flaw. No matter what metrics an article uses to promote its methods, being able to view the actual implementation, run the code, and reproduce the results is far more valuable. Furthermore, this code should carry a license in the same spirit as a journal publication, i.e., one that gives people the right to build on top of it in exchange for nothing more than a citation.

Of course, many people apparently disagree with me, since so many publish papers but do not distribute their code, for a number of reasons [6]. If a publication were so earth-shattering that it changed the game and thus needed no implementation, then it would probably be ready for a higher-level paper that furthers the respective application field. The unfortunate part of writing in scientific computing without making your results reproducible is that it lends credence to those overly critical answers I dismissed above.

[0] Tenure, RIP; What The Vanishing Status Means for the Future of Education, Robin Wilson, Chronicle of Higher Education, July 4, 2010
[1] The Business of Academic Publishing: A Strategic Analysis of the Academic Journal Publishing Industry and its Impact on the Future of Scholarly Publishing, Glenn S. McGuigan, Robert D. Russell, Electronic Journal of Academic and Special Librarianship v. 9 no. 3, 2008
[2] Massively Collaborative Mathematics, Timothy Gowers, Michael Nielsen, Nature, 461, 879-881, October 15, 2009
[3] The Definition of Numerical Analysis, Nick Trefethen
[4] What’s in a Name? Numerical Analysis, Scientific Computing, and CSE, David Bindel
[5] Parallel Scientific Computing in C++ and MPI, George Karniadakis and Robert M. Kirby, Cambridge University Press, 2003
[6] The Scientific Method in Practice: Reproducibility in the Computational Sciences, Victoria Stodden, MIT Sloan Research Paper No. 4773-10
