ARES: Abstract Representations for the Extreme-Scale Stack
Project ARES is a joint effort between Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) to introduce a common compiler representation and toolchain for HPC applications. At the project's core is the High-Level Intermediate Representation (HLIR). HLIR is built on top of LLVM IR, using metadata to represent high-level parallel constructs.
- Oak Ridge National Laboratory: Jeffrey S. Vetter (Co-PI), Joel E. Denny, Jungwon Kim, and Seyong Lee
- Los Alamos National Laboratory: Pat McCormick (Co-PI), Kei Davis, and Nicholas Moss
ARES is sponsored by the DOE Office of Science's Advanced Scientific Computing Research (ASCR) program.
Achieving success in programming future high-performance systems should not depend on finding the single best approach, nor on committing to an evolutionary or revolutionary path. Instead, we should transform the software development toolchain to support the design and implementation of multiple, ideally interoperating, approaches that allow the community to explore and progressively move toward effective programming methodologies. To achieve this goal, our research effort breaks the tight coupling found in vertical, language- and API-centric software stacks and supplants it with a toolchain built around a new common set of abstract representations of programs in the form of a high-level intermediate representation (HLIR). This representation serves as an intermediary between the language-centric abstract syntax tree and the conventional lower-level intermediate representation (IR). In addition to encoding the usual serial execution semantics of a traditional IR, our higher-level IR will encode more abstract concepts such as concurrency, parallelism, communication, synchronization, and non-uniform memory structures. Like a traditional IR, but unlike an abstract syntax tree, this higher-level representation will be language independent, and thus capable of supporting a wide range of both mainstream and experimental languages.
Our ARES project is investigating these abstractions through several tasks:
- Design, implement, and refine the ARES high-level intermediate representation (HLIR) that captures the pertinent features of extreme-scale applications, architectures, and programming constructs. These activities will include developing tools for storing, manipulating, and verifying this HLIR.
- Develop prototype ARES front-ends for two languages that map advanced language concepts (e.g., those in OpenACC) onto our HLIR.
- Develop a prototype ARES optimization engine that tailors the HLIR to target architectures.
- Develop a prototype ARES back-end compilation system, based on LLVM, that converts the optimized HLIR into executable instructions.
Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter. NVL-C: Static Analysis Techniques for Efficient, Correct Programming of Non-Volatile Main Memory Systems. International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2016.
Jungwon Kim, Seyong Lee, and Jeffrey S. Vetter. IMPACC: A Tightly Integrated MPI+OpenACC Framework Exploiting Shared Memory Parallelism. International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2016.
Seyong Lee, Jungwon Kim, and Jeffrey S. Vetter. OpenACC to FPGA: A Framework for Directive-based High-Performance Reconfigurable Computing. IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2016.
Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter. FITL: Extending LLVM for the Translation of Fault-Injection Directives. LLVM-HPC2: Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, 2015.
Jungwon Kim, Seyong Lee, and Jeffrey S. Vetter. An OpenACC-based Unified Programming Model for Multi-accelerator Systems. Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Poster, 2015.
Seyong Lee, Dong Li, and Jeffrey S. Vetter. Interactive Program Debugging and Optimization for Directive-Based, Efficient GPU Computing. IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2014.