[Apologies if you got multiple copies of this email. This message is
sent to . If you'd like to opt out of these
announcements, information on how to unsubscribe is available at the
bottom of this email.]
+++ Apologies for cross-postings +++
2nd COSH Workshop on Co-Scheduling of HPC Applications
Organised by
Carsten Trinitis and Josef Weidendorfer
Technische Universität München, Institut für Informatik
Stockholm, Sweden, January 24, 2017
http://wwwi10.lrr.in.tum.de/~trinitic/COSH2017/
Co-located with HiPEAC 2017, Stockholm, Sweden, January 23-25, 2017
DEADLINE EXTENDED: November 22, 2016
Overview
The task of a high performance computing system is to carry out its
calculations (mainly scientific applications) with maximum performance
and energy efficiency.
Until now, this goal could only be achieved by exclusively assigning an
appropriate number of cores/nodes to parallel applications. As a
consequence, applications had to be highly optimised to achieve even a
fraction of a supercomputer's peak performance, which required huge
effort on the programmer's side.
This problem is expected to become more serious on future Exa-Scale
systems with millions of compute cores. Many of today's highly scalable
applications will not be able to utilise an Exa-Scale system's extreme
parallelism due to node-specific limitations such as I/O bandwidth.
Therefore, to be able to efficiently use future supercomputers, it will
be necessary to simultaneously run more than one application on a node.
For co-scheduling to be efficient, applications must not slow each other
down; suitable candidates could, for example, be a memory-bound and a
compute-bound application.
Within this context, it might also be necessary to dynamically migrate
applications between nodes if, for example, a new application is
scheduled onto the system.
To monitor performance and energy efficiency during operation,
additional sensors are required. Their readings need to be correlated
with the running applications to deliver values for key performance
indicators.
Main topics:
Exa-Scale architectures, supercomputers, scheduling, performance
sensors, energy efficiency, task migration
Workshop papers must not exceed 6 single-spaced, double-column pages in
ACM style (including figures and references).
Submit your paper to https://easychair.org/conferences/?conf=cosh2017 .
Accepted papers will be published online in the TUM library. Upon
acceptance of the submission, at least one author is required to
register for the HiPEAC 2017 conference.
Important Dates
Paper Submission deadline: November 22, 2016
Notification: December 5, 2016
Camera ready: December 15, 2016
Workshop: January 24, 2017
********************************************************************************
If you do not remember your password (which is needed to change these options), you can reset it using the "Unsubscribe or Edit Options" button at the bottom of the page (https://lists.mcs.anl.gov/mailman/listinfo/hpc-announce).
hpc-announce-owner@mcs.anl.gov
********************************************************************************