Special Issue on
New Trends in Data-Aware Scheduling and Resource Provisioning in Modern HPC Systems
Future Generation Computer Systems
*** Call for Papers ***
The Big Data era poses new challenges as well as significant opportunities for High-Performance Computing (HPC) systems: how to efficiently turn massive volumes of data into valuable information and meaningful knowledge. It is clear that new, computationally optimized, data-driven HPC techniques are required for processing Big Data in a rapidly increasing number of application domains, such as Life Sciences, Particle Physics, and Socio-economic Systems.
The strength of HPC systems lies in sharing "multi-core" hardware resources among software applications. Key characteristics of HPC systems include high processor density, high-speed input/output (I/O), and high-density cooling techniques. Before the Grid computing era (pre-2000), HPC was exclusively referred to as "supercomputing". In the Grid-based HPC era, multiple clusters from one or more organizations are used to run HPC applications. Grid technologies that played an important role in enabling this include Globus, Portable Batch System (PBS), Gridbus, and Platform LSF. PBS and Platform LSF implemented scheduling techniques and cluster resource provisioning mechanisms for allocating available compute resources to HPC applications.
However, with the emergence of modern HPC systems (e.g., Amazon EC2 Cluster Compute Instances, Univa Grid Engine, IBM HPC Cloud, the Aneka cloud application platform) powered by cloud computing and virtualization technologies, the job scheduling techniques implemented by traditional HPC schedulers (e.g., Platform LSF, PBS) face serious limitations. The main reason for this state of affairs is that modern HPC systems support on-demand scalability, strong performance guarantees, and improved fault tolerance, capabilities that traditional HPC schedulers are not able to cater for or take advantage of. In reality, traditional HPC schedulers were simply not designed for the cloud computing and virtualization era.
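To make the contrast concrete, the following is a minimal, hypothetical sketch (not part of the original call) of the kind of on-demand provisioning that cloud APIs expose, here using the boto library for Amazon EC2; the region, AMI ID, and instance counts are placeholders, and real use requires configured AWS credentials:

# Minimal sketch of on-demand cluster provisioning via the EC2 API
# using the boto library. The AMI ID, region, and counts below are
# placeholders chosen for illustration only.
import boto.ec2

# Connect to a region that offers Cluster Compute instance types.
conn = boto.ec2.connect_to_region('us-east-1')

# Request up to 4 Cluster Compute instances on demand -- the kind of
# elastic scale-out that a statically configured, traditional HPC
# scheduler was never designed to trigger by itself.
reservation = conn.run_instances(
    'ami-12345678',               # placeholder AMI with the HPC stack
    min_count=1,
    max_count=4,
    instance_type='cc2.8xlarge',  # EC2 Cluster Compute instance type
)

# Poll instance state; nodes join the virtual cluster once running.
for instance in reservation.instances:
    instance.update()
    print("%s %s" % (instance.id, instance.state))

A traditional scheduler, by contrast, assumes a fixed pool of nodes known at configuration time, which is precisely the gap this special issue addresses.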
Hence, this special issue solicits papers related to the following topics (but not limited to them):
Topics
- Case studies of novel HPC applications in the cloud
- Techniques for optimizing the performance of traditional HPC applications on new "multi-core" cloud systems
- Novel extensions to traditional HPC schedulers for Big Data application scheduling
- Dynamic resource provisioning for HPC applications on the cloud
- Techniques for optimizing HPC applications under application-specific performance and energy constraints
- Workflow scheduling techniques
Schedule
Submission due date: June 30, 2014
Notification of provisional acceptance: September 5, 2014
Revised paper submission: October 1, 2014
Notification of final acceptance: October 30, 2014
Submission of final manuscript: November 18, 2014
Submission & Major Guidelines
The special issue invites original research papers that make significant contributions to the state of the art in "New Trends in Data-Aware Scheduling and Resource Provisioning in Modern HPC Systems". Papers must not have been previously published or be under review for any other journal or conference. However, papers previously published at reputed conferences may be considered for the special issue if they are substantially revised from their earlier versions, with at least 30% new content or results, and comply with any applicable copyright regulations.
Submissions must be made via the online submission system of FGCS. Please select "SI- Data-Aware Scheduling" as your category. Submission opens on May 15.
http://www.journals.elsevier.com/future-generation-computer-systems/
Every submitted paper will receive at least three reviews. The editorial review committee will include well-known experts in the areas of Grid, Cloud, Autonomic, and HPC computing.
Selection and Evaluation Criteria:
- Significance to the readership of the journal
- Relevance to the special issue
- Originality of idea, technical contribution, and significance of the presented results
- Quality, clarity, and readability of the written text
- Quality of references and related work
- Quality of research hypotheses, assertions, and conclusions
Guest Editors:
Dr. Jie Tao, Karlsruhe Institute of Technology, Germany
Prof. Joanna Kolodziej, Cracow University of Technology, Poland
Dr. Rajiv Ranjan, Commonwealth Scientific and Industrial Research Organisation, Canberra, Australia
Dr. Prem Prakash Jayaraman, Commonwealth Scientific and Industrial Research Organisation, Canberra, Australia
Prof. Rajkumar Buyya, The University of Melbourne, Australia
Contact: jie.tao@kit.edu