International Workshop
SHPCloud-2014: Extreme Scale Data Cloud Computing Architectures
In Collaboration with Big Data Science 2014 and SocialCom2014 Conferences
August 4, 2014
Beijing, China
Call for Papers
Extreme-scale computing needs architectures without scalability limits.
Following the trajectory of existing architecture trends, however, such
architectures seem highly unlikely. Current computing architectures have
reached an inflection point marked by a gridlock of well-known
impossibilities, such as the limitations of circuit packaging technologies
in hardware and the CAP theorem in software. The most telling evidence is
the commonly accepted notion that it is impossible to gain application
performance and reliability at the same time as we scale up the processing
architecture.
SHPCloud-2014 solicits paradigm-shifting architecture proposals, preliminary
results, concepts, and ideas toward building next-generation extreme-scale
data cloud computing architectures that can handle compute-intensive and
data-intensive applications.
SHPCloud-2014 will provide a platform for academic architecture researchers,
industry practitioners, and government agencies to exchange evolving
requirements, ideas, and current results.
One feature that distinguishes Big Data processing applications from
traditional transaction data processing is that their computation and data
requirements grow at similar rates. They also differ from typical HPC (high
performance computing) applications in their high data requirements. Thus,
the architecture challenges must also include data storage, since it is an
integral part of the processing platform. The emergence of auction-based
computing clouds makes the architecture challenge even more interesting:
how can future applications gain the ability to harness increasingly
volatile resources for increasingly large datasets? In this context,
extreme-scale and cloud computing share the same characteristics.
Topics of Interest:
We encourage submissions on the theoretical foundations of programming for
volatile resources, as well as reports of practical experience using
compute- and data-intensive cloud resources. Topics include, but are not
limited to, the following areas:
* Theoretical foundations of programming volatile resources
* Application scalability analysis using volatile resources
* Investigative reports on delivered performance for computation-intensive
and data-intensive applications, with or without failure and recovery
* Investigative results on delivered cloud performance for
computation-intensive and data-intensive applications
* Experiences in non-conventional HPC programming paradigms
* Experiences in using auction-based cloud resources
* Experiences with virtualized GPUs for HPC applications
* Experiences with virtualized networks for HPC applications
* Innovative failure prevention and recovery methods
* HPC security considerations using cloud resources
* Communication infrastructure virtualization experiences
* Private cloud implementation experiences
* Innovative cloud auction pricing models
Paper Submission: https://www.easychair.org/conferences/?conf=shpc2014
Workshop Website: https://sites.google.com/a/temple.edu/shpcloud2014/
Workshop Organizers:
* Dr. Justin Y. Shi, CIS, Temple University (Workshop Chair)
* Dr. Boleslaw Szymanski, CS, Rensselaer Polytechnic Institute (Vice
Workshop Chair)
* Dr. Chiu Tan, CIS, Temple University
* Dr. Kreishah Abdallah, ECE, New Jersey Institute of Technology
* Dr. Ron Minnich, Google Inc. (TBD)
* Dr. Moussa Taifi, Cloudmize.com
* Dr. Thomas J. Hacker, Computer and Information Technology, Purdue
University (TBD)
* Dr. Dave Yuen, Earth Sciences, University of Minnesota
Important Dates:
* Submission Deadline: May 15, 2014
* Author Notification: June 1, 2014
* Final Paper Due: June 15, 2014
* Workshop Date: August 4, 2014