PLFS: A Checkpoint File System for Parallel Applications

Colloq: Speaker: 
John Bent
Colloq: Speaker Institution: 
Los Alamos National Lab
Colloq: Date and Time: 
Tue, 2009-05-12 13:00
Colloq: Location: 
ORNL, Bldg. 5700, Room L-204
Colloq: Host: 
Philip Roth
Colloq: Host Email: 
rothpc@ornl.gov
Colloq: Abstract: 
Parallel applications running across thousands of processors must protect themselves from inevitable component failures. Many applications insulate themselves from failures by checkpointing, a process in which they save their state to persistent storage. Following a failure, they can resume computation from this saved state. For many applications, saving this state into a single shared file is most convenient. With such an approach, writes are often small and not aligned with file system boundaries. Unfortunately for these applications, this preferred data layout results in pathologically poor performance from the underlying file system, which is optimized for large, aligned writes to non-shared files.

To address this fundamental mismatch, we have developed a parallel log-structured file system, PLFS, which is positioned between the applications and the underlying parallel file system. PLFS remaps an application's write access pattern to be optimized for the underlying file system. Through testing on the Panasas ActiveScale Storage System and IBM's General Parallel File System at Los Alamos National Lab and on Lustre at the Pittsburgh Supercomputing Center, we have seen that this layer of indirection and reorganization can reduce checkpoint time by up to several orders of magnitude for several important benchmarks and real applications.

We expect that PLFS can improve checkpoint bandwidth for any large parallel application that writes to a single file. The expected improvement is especially large for applications doing unaligned or random I/O, patterns that have become increasingly prevalent due to the widespread adoption of complex formatting libraries such as NetCDF and HDF5.
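The remapping idea in the abstract can be illustrated with a minimal sketch: each process appends its writes sequentially to a private log and records where each logical (shared-file) offset landed in an index, so small, unaligned, strided writes to one shared file become large sequential appends. The class and function names below are illustrative assumptions, not PLFS's actual interface or on-disk format.

```python
class LogStructuredWriter:
    """Hypothetical per-process writer: appends data to its own log and
    records (logical_offset, length, physical_offset) index entries."""

    def __init__(self):
        self.log = bytearray()   # append-only per-process data log
        self.index = []          # maps logical offsets to log positions

    def write(self, logical_offset, data):
        # Append regardless of logical offset: unaligned or random
        # writes all become sequential appends to this log.
        physical_offset = len(self.log)
        self.log.extend(data)
        self.index.append((logical_offset, len(data), physical_offset))


def reconstruct(writers, total_size):
    """Rebuild the logical shared file by replaying every writer's
    index entries against its log (later entries win on overlap)."""
    out = bytearray(total_size)
    for w in writers:
        for logical, length, physical in w.index:
            out[logical:logical + length] = w.log[physical:physical + length]
    return bytes(out)


# Two "processes" doing a strided checkpoint into one logical 8-byte file.
w0, w1 = LogStructuredWriter(), LogStructuredWriter()
w0.write(0, b"AA")
w1.write(2, b"BB")
w0.write(4, b"CC")
w1.write(6, b"DD")
print(reconstruct([w0, w1], 8))  # b'AABBCCDD'
```

Each log here receives only sequential appends, the pattern the underlying parallel file system is optimized for; the index preserves enough information to materialize the shared-file view on read.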
Colloq: Speaker Bio: 
John Bent is a LANL storage researcher who has been heavily involved with the Roadrunner storage system from early planning through intensive troubleshooting during the current installation period. John is also leading LANL's HPC data-intensive computing effort, is developing a virtual interposition file system, is working closely with Panasas to debug and design their parallel file system, is collecting and releasing many parallel I/O traces, and is mentoring several graduate student projects. John received his PhD in computer science from Wisconsin in 2005 and his bachelor's in anthropology from Amherst College in 1995.