Seyong Lee
Computer Scientist in Future Technologies Group, Oak Ridge National Laboratory
CV : PDF
Address :
5100, MS 6173
P. O. Box 2008
Oak Ridge, TN 37831-9984
E-Mail Address:
lees2 AT ornl DOT gov

  • Research Interests

  • Parallel Programming and Compile-time/Runtime Performance Optimization on emerging hardware architectures including multi-cores and hardware accelerators
  • Program Analysis and Optimizing Compiler for high-performance computing
  • Internet Computing / Grid Computing and Sharing

  • Education

  • Ph.D., School of ECE, Purdue University, West Lafayette, IN (May 2011)
             Advisor: Professor Rudolf Eigenmann
  • M.S., School of ECE, Purdue University, West Lafayette, IN (May 2004)
             Advisor: Professor Rudolf Eigenmann
  • B.S., School of Electrical Engineering, Seoul National University, South Korea (Feb. 1999)
             Advisor: Professor Beom Hee Lee

  • Research Experience

  • OpenARC: Open Accelerator Research Compiler
    1. Develop an open-source, extensible compiler framework built on a high-level intermediate representation, which provides a full research environment for directive-based accelerator computing.
      1. Support the full feature set of OpenACC V1.0 and a subset of V2.0 (plus array reductions and function calls).
      2. Generate human-readable output code (either CUDA or OpenCL), which programmers can inspect and modify further if necessary.
      3. Support various heterogeneous architectures, ranging from NVIDIA/AMD GPUs to Intel Xeon Phis and Altera FPGAs.
      4. Equipped with various advanced analysis/transformation passes and built-in tuning tools.
      5. Offer device-aware OpenACC extensions, with which users can express architecture-specific features at a high level to achieve performance portability across diverse architectures.
  • Productive GPU Programming Environment
    1. Evaluate existing directive-based, high-level GPU programming models to gain insight into current research issues and future directions for productive GPU programming.
    2. The evaluated models (PGI Accelerator, HMPP, R-Stream, OpenACC, and OpenMPC) were tested using various benchmarks from diverse application domains.
  • OpenMP to GPU: Automatic translation and adaptation of OpenMP-based shared-memory programs onto GPUs
    1. Developed a compiler framework that translates OpenMP-based shared-memory programs into CUDA-based GPGPU programs and optimizes their performance automatically.
    2. Created a reference tuning framework, which can suggest applicable tuning configurations for a given input OpenMP program, generate CUDA code variants for each configuration, and automatically search for the best optimizations for the generated CUDA program.
  • ATune: Compiler-Driven Adaptive Execution
    1. Created a tuning system that adaptively optimizes MPI applications in a distributed system.
    2. This project is part of a larger effort aimed at creating a global information-sharing system in which resources, such as software applications, computer platforms, and information, can be shared, discovered, and adapted to local needs.
  • iShare: Internet-sharing middleware and collaboration
    1. Developed domain-specific ranking and content-search mechanisms for a P2P-based Grid environment.
    2. Developed a resource-availability prediction mechanism for a fine-grained cycle-sharing system.
  • MaRCO: MapReduce with Communication Overlap
    1. Developed efficient communication overlapping mechanisms to increase the performance of Google's MapReduce system.

  • Professional Service

  • Member of the OpenACC Technical Committee and Test-Suite Committee (OpenACC-standard.org)
  • Member of Science Council, Computer Science and Mathematics Division, Oak Ridge National Laboratory
  • Award Committee Member for 2017 IEEE CS TCHPC Award for Excellence for Early Career Researchers in High Performance Computing, 2017
  • Science and Innovation Culture Metric Committee, Computing and Computational Science Directorate, Oak Ridge National Laboratory, 2016
  • Program Committee Member: ASPLOS (2018), IPDPS (2017), Euro-Par (2017), ICPADS (2013, 2014, 2015, 2016, and 2017), PPoPP (2014), CCGrid (2015, 2016, and 2017), ADVCOMP (2017), ICPP (2013), CANDAR (2016), PLC (2015), WRAp (2015 and 2017), WACCPD (2014, 2015, 2016, and 2017), AsHES (2016 and 2017), LHAM (2016 and 2017)
  • External Reviewer (Journals, Conferences, Workshops, and research proposals)
  • Journals: IEEE Micro (2017), IJHPC (2012, 2015, and 2016), TPDS (2014 and 2016), ToMPECS (2015), ParCo (2013, 2015, and 2017), CyS (2015), JPDC (2009), ACM TACO (2013 and 2014), SOSYM (2011), SPE (2010), TWMS (2017), JES (2017), IJHPCN (2017), Computers (2017), TC (2017)
  • Conferences: PACT (2010 and 2012), PLDI (2011), IPDPS (2010 and 2013), ICS (2008, 2011, 2013, and 2016), SC (2007 and 2013), CGO (2013 and 2014), HiPC (2009 and 2010), ICDCS (2006), ICPE (2011), GPC (2007 and 2008), INPAR (2012)
  • Workshops: LCPC (2006, 2007, 2011, and 2014), IWOMP (2007, 2009, and 2011), APPT (2011), PCGrid (2008), EPHAM (2008 and 2009)
  • Research Proposals: The General Research Fund, the Research Grants Council of Hong Kong (2011), Department of Energy (DOE) Office of Science Small Business Innovation Research (SBIR) & Small Business Technology Transfer (STTR) program (2015)

  • Recent Publications (Full Publication List)

    Jungwon Kim, Seyong Lee, and Jeffrey S. Vetter. PapyrusKV: A High-Performance Parallel Key-Value Store for Distributed NVM Architectures, SC17: ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, to appear, Denver, Colorado, USA, November 2017. (61/327, 18.6%)

    Michael Wolfe, Seyong Lee, Jungwon Kim, Xiaonan Tian, Rengan Xu, Sunita Chandrasekaran, and Barbara Chapman, Implementing the OpenACC Data Model, The Seventh International Workshop on Accelerators and Hybrid Exascale Systems (AsHES) in conjunction with IPDPS17, 2017.

    Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter, Language-Based Optimizations for Persistence on Nonvolatile Main Memory Systems, 31st IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2017.

    Jungwon Kim, Kittisak Sajjapongse, Seyong Lee, and Jeffrey S. Vetter, Design and Implementation of Papyrus: Parallel Aggregate Persistent Storage, 31st IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2017.

    Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter, NVL-C: Static Analysis Techniques for Efficient, Correct Programming of Non-Volatile Main Memory Systems, Proceedings of the ACM Symposium on High-Performance and Distributed Computing (HPDC), 2016.

    Jungwon Kim, Seyong Lee, and Jeffrey S. Vetter, IMPACC: A Tightly Integrated MPI+OpenACC Framework Exploiting Shared Memory Parallelism, Proceedings of the ACM Symposium on High-Performance and Distributed Computing (HPDC), 2016.

    Seyong Lee, Jungwon Kim, and Jeffrey S. Vetter, OpenACC to FPGA: A Framework for Directive-Based High-Performance Reconfigurable Computing, 30th IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2016.

    Joel E. Denny, Seyong Lee, and Jeffrey S. Vetter, FITL: extending LLVM for the translation of fault-injection directives, Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (LLVM) in conjunction with SC15, 2015.

    Amit Sabne, Putt Sakdhnagool, Seyong Lee, and Jeffrey S. Vetter, Understanding Portability of a High-level Programming Model on Contemporary Heterogeneous Architectures, IEEE Micro, 2015.

    Seyong Lee, Jeremy S. Meredith, and Jeffrey S. Vetter, COMPASS: A Framework for Automated Performance Modeling and Prediction, ACM International Conference on Supercomputing (ICS15), 2015.

    Jungwon Kim, Seyong Lee, and Jeffrey S. Vetter, An OpenACC-based Unified Programming Model for Multi-accelerator Systems, Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), Poster, 2015.