-------- Original Message --------
Subject: [computational.science] ISPA 2012: Workshops Call for Papers
Date: Wed, 28 Dec 2011 18:03:55 +0100
From: ISPA2012 <ispa2012@piojito.arcos.inf.uc3m.es>
Organization: "ICCSA"
To: Computational Science Mailing List <computational.science@lists.iccsa.org>
Dear Sir or Madam,
(We apologize if you receive multiple copies of this message)
=============================================================================
                     ISPA 2012 WORKSHOPS CALL FOR PAPERS
=============================================================================
ABOUT ISPA 2012 WORKSHOPS
ISPA 2012 workshops are a major part of the week-long ISPA family of events. They provide the ISPA community with an opportunity to explore special topics; the goal of the workshops is to present work that is more preliminary and cutting-edge, or that has more practical content, than the more mature research presented in the main symposium.
Proceedings of the workshops are published in the IEEE Digital Library and distributed at the conference. The day-by-day workshop schedule will be announced when the full workshop program is complete.
- Clouds for Business, Business for Clouds
- International Workshop on Cross-Stratum Optimization for Cloud Computing and Distributed Networked Applications
- International Workshop on AstroParticle Physics Advanced Computing (APPAC)
- International Workshop on Multicore Cache Hierarchies: Design and Programmability Issues
- International Workshop on Heterogeneous Architectures and Computing (HAC 2012)
- International Workshop on The Growing Problems with Scalable, Heterogeneous Infrastructures
- International Workshop on Stream Computing Applications
- Design for Ubiquitous Technology Enhanced Learning Methods, Applications, Languages and Tools
************************************************************************************************************
Clouds for Business, Business for Clouds
************************************************************************************************************
Cloud Computing has acquired enough maturity to expand its field of application to business. Institutions not only use this paradigm in their production lines; many also offer services through the Cloud.
This workshop intends to bring together the efforts of service producers and consumers so that Cloud Computing can add value to the economy of any kind of institution. Technologies, policies and heuristics will be shared, including those from other areas that could benefit Cloud Computing.
The workshop also intends to focus on how services are delivered through the cloud, a popular strategic technology choice for businesses that provides a flexible, ubiquitous and consistent platform accessible from anywhere at any time.
The interface between software services and cloud computing provides a rich area for research and experience, offering unique insight into how cloud-based services can be delivered in practice. We encourage submissions of research papers on work in the areas of Cloud Computing and Service Engineering, and especially welcome papers describing developments in cloud-enabled business process management and all related areas, such as deployment techniques, business models for cloud-based enterprises, and experience reports from practitioners.
++ Paper Submission Deadline: 01 February 2012 ++ See details at http://dsa-research.org/c4bb4c/
************************************************************************************************************
International Workshop on Cross-Stratum Optimization for Cloud Computing and Distributed Networked Applications
************************************************************************************************************
The current lack of interaction between networked applications and the underlying network during service provisioning causes inefficient use of network resources, which can negatively impact the quality expectations of the final consumers of those applications.
Typical networked applications are offered through Information Technology (IT) resources (such as computing and storage facilities) residing in data centers. Data centers thus provide the physical and virtual infrastructure in which applications and services are provided. Since data centers are usually distributed geographically around a network, many decisions made in the control and management of application services, such as where to instantiate another service instance, or which of several data centers is assigned to a new customer, can have a significant impact on the state of the network. In the same way, the capabilities and state of the network can have a major impact on application performance.
Cross-stratum optimization (CSO) refers to the combined optimization of the application and network components. It aims to provide, among other things, joint resource optimization, responsiveness to quickly changing demands between application and network, enhanced service resilience using cooperative recovery techniques, and quality-of-experience assurance through better use of existing network and application resources.
CSO involves the overall optimization of application-layer and network resources, envisioning a next-generation architecture for interactions and exchanges between the two layers that improves service continuity, performance guarantees, scalability and manageability. The goal of this workshop is to promote research interest in the optimal integration of application and network resources.
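To make the idea concrete, here is a minimal sketch of a cross-stratum placement decision, not taken from any published CSO proposal: a data center is chosen by jointly weighing the application stratum (server load) and the network stratum (path latency) instead of either metric alone. All names, weights and figures below are invented for illustration.

    /* Hypothetical illustration of the cross-stratum idea: pick a data
     * center by a joint cost over both strata. */
    #include <stdio.h>

    struct datacenter {
        const char *name;
        double cpu_load;        /* application stratum: 0.0 .. 1.0 */
        double net_latency_ms;  /* network stratum: path latency   */
    };

    /* Joint cost: a weighted sum of the two strata (the weights are
     * arbitrary assumptions, not a published CSO formula). */
    static double joint_cost(const struct datacenter *dc)
    {
        return 0.6 * dc->cpu_load + 0.4 * (dc->net_latency_ms / 100.0);
    }

    int main(void)
    {
        struct datacenter dcs[] = {
            { "dc-east", 0.30, 80.0 },  /* idle but far away         */
            { "dc-west", 0.85, 10.0 },  /* close but heavily loaded  */
            { "dc-mid",  0.55, 35.0 },  /* balanced on both strata   */
        };
        const int ndc = (int)(sizeof dcs / sizeof dcs[0]);
        int best = 0;
        for (int i = 1; i < ndc; i++)
            if (joint_cost(&dcs[i]) < joint_cost(&dcs[best]))
                best = i;
        /* Neither the least-loaded nor the closest site wins here;
         * the joint view picks dc-mid. */
        printf("joint choice: %s\n", dcs[best].name);
        return 0;
    }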
++ Paper Submission Deadline: 03 February 2012 ++ See details at http://www.cccso.net
************************************************************************************************************
International Workshop on AstroParticle Physics Advanced Computing (APPAC)
************************************************************************************************************
AstroParticle Physics (APP) and High Energy Physics (HEP) experiments are addressing the state of the art in the manipulation of large data sets over wide-area computing networks, developing tools and services for exchanging, delivering, processing and managing huge amounts of data.
Analysis and simulation applications involved in these experiments are deployed on a variety of architectures and usually demand high computational and storage capacities.
Different solutions, ranging from distributed computing (e.g. Grid, Cloud, P2P) to, more recently, GPUs, attempt to provide efficient scenarios for the AstroParticle and HEP community. In particular, job-scheduling optimization in distributed environments is an active research field that originated in previous worldwide successful HEP projects (ALICE, the Pierre Auger Observatory, etc.) and will be implemented in future space-based and ground-based APP projects such as the JEM-EUSO space mission.
The main aim of this workshop is to bring together researchers from Physics and Computer Science, developers and, in general, members of the AstroParticle and High Energy Physics communities, to identify and explore open issues regarding efficient solutions for HEP advanced computing. We also encourage proposals of solutions being developed for the applications of each experiment. The APPAC workshop will provide a forum for the free exchange of ideas.
We plan to have a variety of presentations in which the applications, middleware and computing models of several experiments are explained and discussed, providing ideas for the next generation of high-energy physics analysis software.
++ Paper Submission Deadline: 30 January 2012 ++ See details at http://spas.uah.es/appac
************************************************************************************************************
International Workshop on Multicore Cache Hierarchies: Design and Programmability Issues
************************************************************************************************************
Caches have played an essential role in the performance of single-core systems because of the gap between processor speed and main memory latency. First-level caches are strongly restricted by their access time, but current processors are able to hide most of that latency using out-of-order execution and miss-overlapping techniques. The last levels of the cache memory hierarchy, on the other hand, depend less on access time than on locality, since locality in the lower levels is filtered by the upper levels. As requests descend the memory hierarchy, they require a greater number of cycles to be satisfied, so it becomes more difficult to hide the latency of last-level caches (LLCs).

In multi-core systems their importance is even larger, due to the growing number of cores that share the bandwidth this memory can provide. In an attempt to use their caches more efficiently, the memory hierarchies of many Chip Multiprocessors (CMPs) feature LLCs that can be allocated across threads, with some parts private to a thread and others shared by multiple threads. Caching techniques will therefore continue to evolve over the next years to tackle the new challenges imposed by multicore platforms and workloads.
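As one concrete instance of the programmability issues in scope, here is a minimal sketch of cache-conscious programming: blocking (tiling) a matrix multiplication so that the working set of the inner loops stays resident in a given cache level. The matrix size and tile size are assumptions to be tuned per hierarchy; this is an illustration, not a technique proposed by the workshop organizers.

    /* Blocked (tiled) matrix multiplication: each tile triple reuses
     * BS x BS blocks of A, B and C while they are still in cache. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N   1024   /* assumed matrix edge; must be a multiple of BS */
    #define BS  64     /* assumed tile edge; tune to the target cache   */

    /* C += A * B, all N x N, row-major. */
    static void matmul_blocked(const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < N; ii += BS)
            for (size_t kk = 0; kk < N; kk += BS)
                for (size_t jj = 0; jj < N; jj += BS)
                    for (size_t i = ii; i < ii + BS; i++)
                        for (size_t k = kk; k < kk + BS; k++) {
                            double a = A[i * N + k];
                            for (size_t j = jj; j < jj + BS; j++)
                                C[i * N + j] += a * B[k * N + j];
                        }
    }

    int main(void)
    {
        double *A = calloc((size_t)N * N, sizeof *A);
        double *B = calloc((size_t)N * N, sizeof *B);
        double *C = calloc((size_t)N * N, sizeof *C);
        if (!A || !B || !C) return 1;
        for (size_t i = 0; i < (size_t)N * N; i++) A[i] = B[i] = 1.0;
        matmul_blocked(A, B, C);
        printf("C[0] = %g (expected %d)\n", C[0], N); /* ones dot ones */
        free(A); free(B); free(C);
        return 0;
    }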
The aim of this workshop is to strongly encourage the exchange of experience and knowledge about novel solutions that exploit and define new trends in multicore cache hierarchy design, also considering new programming techniques for taking full advantage of cache hierarchies in terms of performance.
The workshop will be held as a half-day meeting at the ISPA 2012 conference in Leganés, Madrid. The authors of the papers selected for the workshop will be invited to submit extended versions of their manuscripts to be considered for publication in a special issue of the Parallel Computing journal (tentative).
++ Paper Submission Deadline: 31 January 2012 ++ See details at http://www.ac.uma.es/ispa-wmch12/
************************************************************************************************************
HAC 2012: International Workshop on Heterogeneous Architectures and Computing
************************************************************************************************************
High performance computing (HPC) has evolved significantly during the last decades. The remarkable evolution of networks, the rise of multi-core technology and the use of hardware accelerators have made it possible to integrate up to hundreds of thousands of cores into current petaflop machines. This scenario has led to the emergence of massively parallel, heterogeneous systems composed of a variety of different types of computational units, such as distributed environments built from multicore nodes, some of which include hardware accelerators like GPUs or FPGAs.
However, the design and implementation of efficient parallel algorithms for heterogeneous systems remains a very important challenge. The diverse architectures, interconnection networks and parallel programming paradigms, as well as the presence of system heterogeneity, have a pervasive impact on algorithm performance and scalability.
Traditional parallel algorithms, programming environments, development tools and theoretical models are not directly applicable to the high-performance parallel heterogeneous systems currently available. We therefore need a thorough analysis of these new systems in order to propose new ideas, innovative algorithms and tools, and theoretical models for working properly and efficiently with heterogeneous clusters.
The workshop is intended to be a forum for researchers working on algorithms, programming languages, tools, and theoretical models aimed at efficiently solving problems on heterogeneous networks.
++ Paper Submission Deadline: 1 February 2012 ++ See details at http://www.atc.unican.es/hac2012/
************************************************************************************************************
International Workshop on The Growing Problems with Scalable, Heterogeneous Infrastructures
************************************************************************************************************
With the successful implementation of petaflop computing back in 2008 [ref], manufacturers strive to tackle the next barrier: exaflop computing. However, this figure is generally misleading, as the peak performance of a machine is basically calculated from the number and performance of the individual processing units in the system; even the sustained-performance test, LINPACK, essentially stresses the arithmetic units and not the interconnects between nodes. In other words, modern performance measurements essentially reflect the size of the system and not so much its efficiency at solving a specific class of problems. With the introduction of multi- and manycore processors, the scale of modern systems increases drastically, even though their effective frequency remains basically unchanged. It is therefore generally predicted that we will reach exaflop computing by the end of this decade.
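To illustrate the point with purely hypothetical figures: peak performance is essentially nodes x cores per node x operations per cycle x clock rate, e.g. 100,000 nodes x 16 cores x 8 flop/cycle x 2.5 GHz gives roughly 32 Pflop/s; that number says nothing about how quickly the nodes can exchange data with one another.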
Given the circumstances, the question arises whether it is worth reaching the exaflop mark in the first instance for anything other than research interest: already, only a few applications can actually exploit the scale of existing infrastructures, let alone handle the scale of an exaflop machine, if that means scale rather than clock rate. We can distinguish in particular between embarrassingly parallel applications, which benefit from the number of resources but place few requirements on their interconnects, and tightly coupled applications, which are highly restricted by interconnect limitations; moreover, the number of embarrassingly parallel applications, as well as their resource need, is itself typically limited. Problems that would really benefit from the scale, in order to improve the accuracy and speed of calculation, also frequently expose an exponential resource need, or at least growth by a power of n. This means that achieving an increment in efficiency requires an exponential number of additional resources; in other words, a linear growth in the number of resources does not offer the required increment in efficiency.
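The diminishing-returns argument for tightly coupled applications can be illustrated with Amdahl's law (our choice of illustration, not cited by the workshop itself): speedup(p) = 1 / (s + (1 - s)/p) for serial fraction s on p processors. A minimal sketch, with s = 0.02 assumed as a stand-in for synchronization and interconnect overheads:

    /* Amdahl's law: adding processors yields ever-smaller returns once
     * the serial fraction dominates. All figures are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double s = 0.02;  /* assumed non-parallelizable fraction */
        const long cores[] = { 1000, 10000, 100000, 1000000 };
        for (int i = 0; i < 4; i++) {
            double speedup = 1.0 / (s + (1.0 - s) / (double)cores[i]);
            printf("%8ld cores -> speedup %.1f (hard cap: %.0f)\n",
                   cores[i], speedup, 1.0 / s);
        }
        return 0;
    }

With these numbers, going from 1,000 to 1,000,000 cores raises the speedup only from about 48 to just under the cap of 50: a thousandfold growth in resources for a few percent of gain.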
Yet manufacturers still struggle with essential problems on both the hardware and software sides when trying to increase the efficiency of larger-scale systems at all: in particular, the limited scalability of the interconnect, memory, etc. poses issues that reduce the effective performance at larger scales rather than increase it. Promising approaches employ a mix of different scalability and consistency models, which however restrict the usage domain accordingly; multi-level applications in particular exploit such hybrid models, but their development is still difficult and very rarely supported.
This workshop focuses on the particular problems of increasing scale and examines potential means to address them. It is not restricted to the hardware side, but addresses the full scope from hardware limitations through algorithm and computing theory to new means for application development and execution. The workshop is aimed at experts from all fields related to high performance computing, including manufacturers, system designers, compiler and operating system developers, application programmers and users. Depending on the number of submissions, the workshop will be broken into multiple strands according to topic, such as hardware, theory of computation and software development.
++ Paper Submission Deadline: 15 February 2012 ++ See details at http://www.soos-project.eu/GSHI/
************************************************************************************************************
International Workshop on Stream Computing Applications
************************************************************************************************************
This workshop is devoted to a very easily parallelizable form of computing: streams.
Data volumes are expected to double every two years over the next decade. Existing tools and technologies for exploiting those volumes first require the data to be recorded on a storage device, with queries run after the fact to detect actionable insights.
Stream computing addresses this gap by providing technology that can detect insights within data streams still in motion, that is, before they are saved into databases.
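As a toy illustration of this processing-in-motion model (the alert rule and input format are invented for this sketch, not part of any particular stream platform):

    /* A minimal sketch of the store-nothing model described above:
     * insights are computed while records stream past, instead of
     * after loading them into a database. */
    #include <stdio.h>

    int main(void)
    {
        double x, mean = 0.0;
        long n = 0;

        /* One value per line on stdin; the stream is never stored. */
        while (scanf("%lf", &x) == 1) {
            n++;
            mean += (x - mean) / (double)n;  /* incremental running mean */
            if (n > 10 && x > 2.0 * mean)    /* assumed alert rule       */
                printf("insight at record %ld: %g exceeds 2x mean %g\n",
                       n, x, mean);
        }
        printf("processed %ld records, mean %g, stored none\n", n, mean);
        return 0;
    }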
The goal of the Stream Computing Workshop is to showcase breakthrough technologies that enable aggressive production and management of information and knowledge from relevant data, which must be extracted from enormous volumes of potentially unimportant data.
++ Paper Submission Deadline: 31 January 2012 ++ See details at http://www.wix.com/angelhbravo/stream/
************************************************************************************************************
Design for Ubiquitous Technology Enhanced Learning Methods, Applications, Languages and Tools
************************************************************************************************************
We are continuously witnessing the pervasive appearance of new kinds of software (e.g. widgets, Web services, cloud computing) and mobile devices (smart phones, tablets, ultrabooks) in users' hands. These new paradigms and technologies enable us to enact more engaging and innovative learning experiences, challenging existing instructional practices and pedagogies.

The workshop aims to provide a forum for deepened discourse and discussion around the development of new design models and tools for Ubiquitous Technology Enhanced Learning (UTEL) environments. The main question we propose to explore with workshop contributors is: "How can we facilitate design-level discourse among those involved in Ubiquitous TEL development?"

Designing and orchestrating materials (e.g. contents, activities, lesson plans), context information (e.g. location, time), devices and systems for UTEL is a complex task. It requires integrated thinking and the interweaving of state-of-the-art knowledge in computer science, human-computer interaction, pedagogy, instructional theory, psychology and curricular subject domains. This interdisciplinary workshop aims to bring together practitioners and researchers from these diverse backgrounds to share their proposals and findings related to the design of educational resources and systems in the UTEL context.
++ Paper Submission Deadline: 31 January 2012 ++ See details at http://dbis.rwth-aachen.de/d4utel/