Subject: [WI] [CFP] Special Issue "Advances in Explainable Artificial Intelligence" - MDPI Information, open access
Date: Thu, 29 Oct 2020 15:24:30 +0100
From: Fulvio Frati <fulvio.frati@unimi.it>
Reply-To: Fulvio Frati <fulvio.frati@unimi.it>
To: wi@lists.uni-karlsruhe.de
Special Issue "Advances
in Explainable Artificial Intelligence"
[Apologies if you
receive multiple copies of this CFP]
****************************************************************
Special Issue "Advances
in Explainable Artificial Intelligence"
MDPI Information, open
access
Website:
https://www.mdpi.com/journal/information/special_issues/advance_explain_AI
****************************************************************
The following special issue will be published in Information (ISSN 2078-2489, https://www.mdpi.com/journal/information) and is now open to submissions of full research articles and comprehensive review papers for peer review and possible publication.
Papers will be published, after a standard peer-review procedure, in the open-access journal Information.
The official submission deadline is 31 May 2021; however, you may send your manuscript at any time before then. Accepted papers will be published shortly after acceptance.
** SPECIAL ISSUE INFORMATION
Machine Learning (ML)-based Artificial Intelligence (AI) algorithms can learn, from known examples, abstract representations and models that, once applied to unknown examples, can perform classification, regression, or forecasting tasks, to name a few.
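
As a minimal sketch of this learn-from-known, apply-to-unknown pattern (assuming Python with scikit-learn; the dataset and model here are illustrative choices, not something prescribed by this call):

    # Fit a model on known (labeled) examples, then apply it to
    # unseen examples -- here, a classification task.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Split into "known" examples used for learning and held-out
    # "unknown" examples the model has never seen.
    X_known, X_unknown, y_known, y_unknown = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_known, y_known)             # learn a model from known examples
    predictions = model.predict(X_unknown)  # classify unseen examples
    print("Accuracy on unseen examples:", (predictions == y_unknown).mean())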
Very often, these highly effective ML representations are difficult to understand; this holds true particularly for Deep Learning models, which can involve millions of parameters. However, for many applications, it is of the utmost importance that stakeholders understand the decisions made by the system in order to use them better. Furthermore, for decisions that affect an individual, legislation may in the future even mandate a “right to an explanation”. Overall, improving the algorithms’ explainability may foster trust and social acceptance of AI.
The need to make ML algorithms more transparent and more explainable has given rise to several lines of research that form an area known as explainable Artificial Intelligence (XAI).
Among the goals of XAI are: adding transparency to ML models by providing detailed information about why the system has reached a particular decision; designing ML models that are more explainable and transparent while maintaining high performance levels; and finding ways to evaluate the overall explainability and transparency of the models and to quantify their effectiveness for different stakeholders.
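
As one concrete instance of the first goal, providing information about why the system reached a decision, the sketch below uses permutation feature importance, a common post-hoc technique (again assuming Python with scikit-learn; the dataset and model are illustrative):

    # Post-hoc transparency sketch: permutation feature importance
    # scores how much the model's accuracy depends on each input feature.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy;
    # a large drop means the decision relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

Feature-level scores of this kind are only one explanation modality; the other goals above concern models that are interpretable by construction and ways of evaluating explanations themselves.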
The objective of this Special Issue is to explore recent advances and techniques in the XAI area.
Research topics of interest include (but are not limited to):
- Devising machine learning models that are transparent-by-design;
- Planning for transparency, from data collection up to training, test, and production;
- Developing algorithms and user interfaces for explainability;
- Identifying and mitigating biases in data collection;
- Performing black-box model auditing and explanation;
- Detecting data bias and algorithmic bias;
- Learning causal relationships;
- Integrating social and ethical aspects of explainability;
- Integrating explainability into existing AI systems;
- Designing new explanation modalities;
- Exploring theoretical aspects of explanation and interpretability;
- Investigating the use of XAI in application sectors such as healthcare, bioinformatics, multimedia, linguistics, human–computer interaction, machine translation, autonomous vehicles, risk assessment, justice, etc.