-----------------------------------------------------------------------------------------------
Please accept our apologies if you receive multiple copies of this CFP
-----------------------------------------------------------------------------------------------
DIDL 2021: Fifth Workshop on Distributed Infrastructures for Deep Learning
Deep learning is a rapidly growing field of machine learning that has proven successful in many domains, including computer vision, language translation, and speech recognition. Training deep neural networks is resource intensive, requiring compute accelerators such as GPUs as well as large amounts of storage, memory, and network bandwidth. Additionally, preparing the training data requires substantial tooling for data cleansing, data merging, ambiguity resolution, etc. Sophisticated middleware abstractions are needed to schedule resources, manage distributed training jobs, and visualize how training is progressing. Likewise, serving large neural network models under low-latency constraints can require middleware to manage model caching, selection, and refinement.
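As a concrete illustration of the kind of distributed training job such middleware must launch and manage, the short sketch below uses PyTorch's DistributedDataParallel. It is not part of the original call; the model, data, backend, and rendezvous settings are placeholder assumptions.

    # Minimal sketch of a distributed data-parallel training job.
    # Illustrative only: model, data, backend, and rendezvous settings are assumptions.
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def train(rank: int, world_size: int) -> None:
        # Each worker joins a process group; in practice the middleware sets
        # MASTER_ADDR/MASTER_PORT and launches one process per device.
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("gloo", rank=rank, world_size=world_size)

        model = DDP(nn.Linear(10, 1))               # gradients are all-reduced across workers
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        for step in range(100):
            x, y = torch.randn(32, 10), torch.randn(32, 1)  # stand-in for sharded data
            opt.zero_grad()
            loss_fn(model(x), y).backward()          # backward pass triggers gradient sync
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        world_size = 2
        torch.multiprocessing.spawn(train, args=(world_size,), nprocs=world_size)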
All the major cloud providers, including Amazon, Google, IBM, and Microsoft, have started in the last year or so to offer cloud services to train and/or serve deep neural network models. In addition, there is a lot of activity in open source middleware for deep learning, including TensorFlow, Theano, Caffe2, PyTorch, and MXNet. There are also efforts to extend existing platforms such as Spark for deep learning workloads.
This workshop focuses on the tools, frameworks, and algorithms that support executing deep learning algorithms in a distributed environment. As new hardware and accelerators become available, the middleware and systems need to be able to exploit their capabilities and ensure they are utilized efficiently.
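For illustration only (not part of the original call), the following minimal sketch shows the sort of device-aware placement logic such middleware performs; the framework choice (PyTorch) and the toy workload are assumptions.

    # Illustrative sketch: discover an available accelerator and place work on it.
    import torch

    def pick_device() -> torch.device:
        # Prefer a CUDA GPU when present; fall back to CPU so the same job
        # definition runs on heterogeneous clusters.
        if torch.cuda.is_available():
            return torch.device("cuda", 0)
        return torch.device("cpu")

    device = pick_device()
    model = torch.nn.Linear(10, 1).to(device)   # place parameters on the chosen device
    x = torch.randn(32, 10, device=device)      # keep data co-located with the model
    print(device, model(x).shape)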
Authors are invited to submit research papers, experience papers, demonstrations, or position papers.
Topics
----------
This workshop solicits papers from both academia and industry on the state of practice and state of the art in deep learning infrastructures. Topics of interest include but are not limited to:
Resource scheduling algorithms for deep learning workloads
Advances in deep learning frameworks
Programming abstractions for deep learning models
Middleware support for hardware accelerators
Novel distribution techniques for training large neural networks
Case studies of deep learning middleware
Optimization techniques for inferencing
Novel debugging and logging techniques
Data cleansing and data disambiguation tools for deep learning
Data visualization tools for deep learning
Federated learning
Neural architecture search
Deep learning at the edge
Dates and location
---------------------------
Paper submissions: September 14, 2021
Notification to authors: October 12, 2021
Camera-ready copy due: October 16, 2021
Papers and Submissions
-----------------------------------
We are looking for the following types of submissions:
Research and industry papers (up to 8 pages): Reports on original results, including novel techniques, significant case studies, or surveys. Authors may include extra material beyond the page limit as a clearly marked appendix, which reviewers may read but are not obliged to.
Position papers (up to 4 pages): Reports identifying unaddressed problems and research challenges.
Abstracts (up to 1 page): An extended abstract on preliminary or ongoing work.
Papers must be written in English and submitted in PDF format. All papers should follow the ACM formatting instructions, specifically the ACM SIG Proceedings Standard Style. The author kit containing the templates for the required style can be found at http://www.acm.org/publications/proceedings-template.
Submissions should not be blinded for review. Please submit your papers via the submission site: https://didl21.hotcrp.com/
All accepted papers will appear in the Middleware 2021 companion proceedings, available in the ACM Digital Library. All accepted papers will also be presented at the workshop, and at least one author of each paper must register for the workshop.
Workshop Co-chairs
-------------------------------
Bishwaranjan Bhattacharjee, IBM Research
Vatche Isahagian, IBM Research
Vinod Muthusamy, IBM Research
Program Committee (Tentative)
--------------------------------------------
Parag Chandakkar, Walmart Labs
Ian Foster, Argonne National Laboratory and the University of Chicago
Matthew Hill, Dataminr
Mayoore Jaiswal, Nvidia
Gauri Joshi, Carnegie Mellon University
Jayaram K. R., IBM Research
Ruben Mayer, Technical University of Munich
Pietro Michiardi, Eurecom
Phuong Nguyen, eBay
Peter Pietzuch, Imperial College
Chuan Wu, University of Hong Kong