6th Workshop on Resiliency in High Performance Computing (Resilience)
in Clusters, Clouds, and Grids
in conjunction with the
19th International European Conference on Parallel and
Distributed Computing (Euro-Par 2013)
Aachen, Germany, August 26-30, 2013
Clusters, Clouds, and Grids are three different computational paradigms with the intent or potential to support High Performance Computing (HPC). Currently, they consist of hardware, management, and usage models particular to different computational regimes, e.g., high-performance cluster systems designed to support tightly coupled scientific simulation codes typically utilize high-speed interconnects, while commercial cloud systems designed to support software as a service (SaaS) do not. However, in order to support HPC, all must at least utilize large numbers of resources, and hence effective HPC in any of these paradigms must address the issue of resiliency at large scale.
Recent trends in HPC systems have clearly indicated that future increases in performance, in excess of those resulting from improvements in single-processor performance, will be achieved through corresponding increases in system scale, i.e., using a significantly larger component count. As the raw computational performance of these HPC systems increases from today's tera- and peta-scale to next-generation multi-peta-scale capability and beyond, their number of computational, networking, and storage components will grow from the ten-to-one-hundred thousand compute nodes of today's systems to several hundreds of thousands of compute nodes and more in the foreseeable future. This substantial growth in system scale, and the resulting component count, poses a challenge for HPC system and application software with respect to fault tolerance and resilience.
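To put the scaling argument in concrete terms, the following is a minimal back-of-the-envelope sketch in Python (illustrative only; the 5-year node MTBF and the node counts are assumed figures, not measurements): under independent, exponentially distributed node failures, the mean time between failures (MTBF) of the full system shrinks roughly in proportion to 1/N.

    # Illustrative sketch: system MTBF under independent, exponentially
    # distributed node failures shrinks as node_mtbf / node_count.
    def system_mtbf_hours(node_mtbf_hours, node_count):
        return node_mtbf_hours / node_count

    NODE_MTBF_HOURS = 5 * 365 * 24  # assumed node MTBF of 5 years (~43,800 hours)
    for nodes in (10_000, 100_000, 500_000):
        mtbf = system_mtbf_hours(NODE_MTBF_HOURS, nodes)
        print(f"{nodes:>7} nodes -> system MTBF ~ {mtbf:.2f} hours")

Under these assumed numbers, a machine with half a million nodes experiences a failure, on average, every few minutes.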
Furthermore, recent experiences on extreme-scale HPC systems with non-recoverable soft errors, i.e., bit flips in memory, cache, registers, and logic, have added another major source of concern. The probability of such errors grows not only with system size, but also with increasing architectural vulnerability caused by employing accelerators, such as FPGAs and GPUs, and by shrinking nanometer technology. Reactive fault tolerance technologies, such as checkpoint/restart, are unable to handle high failure rates due to their associated overheads, while proactive resiliency technologies, such as migration, simply fail because random soft errors cannot be predicted. Moreover, soft errors may even remain undetected, resulting in silent data corruption.
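As a rough illustration of why checkpoint/restart overheads become prohibitive at high failure rates, the sketch below applies Young's first-order approximation for the optimal checkpoint interval, t_opt ~ sqrt(2 * C * M), where C is the time to write one checkpoint and M is the system MTBF; the 10-minute checkpoint time and the MTBF values are assumed for illustration.

    # Illustrative sketch: Young's approximation for the optimal checkpoint
    # interval, t_opt ~ sqrt(2 * C * M). As the system MTBF (M) shrinks,
    # the fraction of time spent writing checkpoints grows quickly.
    import math

    def optimal_checkpoint_interval_s(checkpoint_time_s, mtbf_s):
        return math.sqrt(2.0 * checkpoint_time_s * mtbf_s)

    CHECKPOINT_TIME_S = 600.0  # assumed: 10 minutes to write one checkpoint
    for mtbf_hours in (24.0, 4.0, 0.5):
        interval = optimal_checkpoint_interval_s(CHECKPOINT_TIME_S, mtbf_hours * 3600.0)
        overhead = CHECKPOINT_TIME_S / interval
        print(f"MTBF {mtbf_hours:>4.1f} h -> checkpoint every {interval / 60:.1f} min "
              f"(~{overhead:.0%} of wall time spent checkpointing)")

With a 30-minute MTBF, roughly 40% of wall time under these assumptions goes to writing checkpoints before any lost work is even recomputed, which motivates the alternative and complementary resiliency techniques solicited below.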
Important Web sites:
- Resilience 2013 at http://xcr.cenit.latech.edu/
- Euro-Par 2013 at http://www.europar2013.org
Prior conferences Web sites (2008-2012):
- Resilience 2008-2012 at http://xcr.cenit.latech.edu/
Important dates:
- Paper submission deadline on May 31, 2013
- Notification deadline on July 8, 2013
- Camera ready deadline on October 3, 2013
Submission guidelines:
Authors are invited to submit papers electronically in English in PDF format via EasyChair at <https://www.easychair.org/
Submissions will be checked for plagiarism; if plagiarism is detected, further action may be taken, including (but not limited to) notifications sent to the heads of the institutions of the authors and sponsors of the conference. Submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. The proceedings will be published in Springer's LNCS as post-conference proceedings. At least one author of an accepted paper must register for and attend the workshop for the paper to be included in the proceedings. Authors may contact the workshop program chairs for more information.
Topics of interest include, but are not limited to:
- Reports on current HPC system and application resiliency
- HPC resiliency metrics and standards
- HPC system and application resiliency analysis
- HPC system and application-level fault handling and anticipation
- HPC system and application health monitoring
- Resiliency for HPC file and storage systems
- System-level checkpoint/restart for HPC
- System-level migration for HPC
- Algorithm-based resiliency fundamentals for HPC (not Hadoop)
- Fault tolerant MPI concepts and solutions
- Soft error detection and recovery in HPC systems
- HPC system and application log analysis
- Statistical methods to identify failure root causes
- Fault injection studies in HPC environments
- High availability solutions for HPC systems
- Reliability and availability analysis
- Hardware for fault detection and recovery
- Resource management for system resiliency and availability
General Co-Chairs:
- Stephen L. Scott, Tennessee Tech University and Oak Ridge National Laboratory, USA
- Chokchai (Box) Leangsuksun, Louisiana Tech University, USA
Program Co-Chairs:
- Patrick G. Bridges, University of New Mexico, USA
- Christian Engelmann, Oak Ridge National Laboratory, USA
Program Committee:
- Vassil Alexandrov, Barcelona Supercomputing Center, Spain
- Patrick G. Bridges, University of New Mexico, USA
- Greg Bronevetsky, Lawrence Livermore National Laboratory, USA
- Franck Cappello, INRIA/UIUC, France/USA
- Zizhong Chen, University of California, Riverside, USA
- Andrew Chien, University of Chicago, USA
- Nathan DeBardeleben, Los Alamos National Laboratory, USA
- Christian Engelmann, Oak Ridge National Laboratory, USA
- Kurt B. Ferreira, Sandia National Laboratories, USA
- Cecile Germain, University Paris-Sud, France
- Paul Hargrove, Lawrence Berkeley National Laboratory, USA
- Larry Kaplan, Cray, USA
- Dieter Kranzlmueller, LMU/LRZ Munich, Germany
- Sriram Krishnamoorthy, Pacific Northwest National Laboratory, USA
- Chokchai (Box) Leangsuksun, Louisiana Tech University, USA
- Celso Mendes, University of Illinois at Urbana-Champaign, USA
- Christine Morin, INRIA Rennes, France
- Alexander Reinefeld, Zuse Institute Berlin, Germany
- Rolf Riesen, IBM Research, Ireland
- Stephen L. Scott, Oak Ridge National Laboratory, USA
--
Christian Engelmann, Ph.D.
System Software Team Task Lead / R&D Staff Scientist
Computer Science Research Group
Computer Science and Mathematics Division
Oak Ridge National Laboratory
Mail: P.O. Box 2008, Oak Ridge, TN 37831-6173, USA
Phone: +1 (865) 574-3132 / Fax: +1 (865) 576-5491
e-Mail: engelmannc@ornl.gov / Home: www.christian-engelmann.info