Abstractions for Parallel Graph Algorithms
Submitted by hebertem on Fri, 2017-02-24 16:15
Colloq: Date and Time:
Mon, 2016-09-12 11:00
Building 5700, Room MS-A104
Operations on large graphs have become increasingly important in the "big data" era, and in-memory computation on large-scale parallel systems is an effective way to achieve high performance on these workloads. However, implementing each algorithm separately on each type of system to get optimal performance is time-consuming, so a method that enables code reuse across algorithms and systems while still achieving high performance would benefit the computing community. Generic and generative programming techniques can provide these properties within existing programming languages. This talk will describe several software libraries and programming models, implemented primarily in C++ and at a variety of levels of abstraction, for writing graph algorithms that run efficiently on various types of parallel computers. Distributed memory is a particular focus of this work, although other forms of parallelism will also be discussed.
Colloq: Speaker Bio:
Until August of this year, Jeremiah Willcock was a System Software Engineer at Micron Technology, working on developing and optimizing software for a novel, massively parallel hardware architecture. He received his Ph.D. in Computer Science from Indiana University in 2007, advised by Andrew Lumsdaine. His research interests include high-performance computing, especially mapping applications and algorithms to advanced parallel architectures, and secondarily, the design of abstractions and optimized implementations for parallel algorithms, especially graph and other irregular algorithms.