-------- Forwarded Message --------
Subject: [AISWorld] Final CFP: 2nd Italian Workshop on Explainable Artificial Intelligence (XAI.it) jointly held with AIxIA 2021
Date: Mon, 20 Sep 2021 17:36:06 +0200
From: Cataldo Musto <cataldo.musto@uniba.it>
To: aisworld@lists.aisnet.org


*** Apologies for cross postings ***


XAI.it Workshop @AIxIA 2021 - CALL FOR PAPERS

----------------------------------------------------------------------

2nd Italian Workshop on Explainable Artificial Intelligence (XAI.it)

December 1-3, 2021


co-located with AIxIA 2021 (https://aixia2021.disco.unimib.it/) - Milano, Italy


Twitter: https://twitter.com/XAI_Workshop

Web: http://www.di.uniba.it/~swap/xai-it

Submission: https://easychair.org/conferences/?conf=aixia2021 (track: Italian Explainable Artificial Intelligence Workshop)

For any information: cataldo.musto@uniba.it


=========

HIGHLIGHTS

=========

- DEADLINE: *** October 1, 2021 ***

- CEUR conference proceedings


=========

ABSTRACT

=========


Nowadays we are witnessing a new summer of Artificial Intelligence, since AI-based algorithms are being adopted in a growing number of contexts and application domains,
ranging from media and entertainment to medical, financial and legal decision-making.

While the very first AI systems were easily interpretable, the current trend shows the rise of opaque methodologies such as those based on Deep Neural Networks (DNN), whose (very good)
effectiveness is contrasted by the enormous complexity of the models, due to the huge number of layers and parameters that characterize them.


As intelligent systems become more and more widely applied (especially in very "sensitive" domains), it is not acceptable to adopt opaque or inscrutable black-box models or to ignore
the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics that are usually adopted to evaluate the effectiveness of the algorithms reward
very opaque methodologies that maximize the accuracy of the model at the expense of transparency and explainability.


This issue is even more pressing in light of recent initiatives, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI Project, which further emphasize
the need and the right for scrutable and transparent methodologies that can guide the user to a complete comprehension of the information held and managed by AI-based systems.


Accordingly, the question motivating the workshop is simple and straightforward: how can we deal with such a dichotomy between the need for effective intelligent systems and the
right to transparency and interpretability?


This question triggers several research lines that are particularly relevant for current research in AI. The workshop addresses these research lines and aims to provide a
forum for the Italian community to discuss problems, challenges and innovative approaches in the area.



======

TOPICS

======

Several research questions are triggered by this workshop:

1. How to design more transparent and more explainable models that maintain high performance?

2. How to allow humans to understand and trust AI-based systems and methods?

3. How to evaluate the overall transparency and explainability of the models?


Topics of interest:

- Explainable Artificial Intelligence

- Interpretable and Transparent Machine Learning Models

- Strategies to Explain Black Box Decision Systems

- Designing new Explanation Styles

- Evaluating Transparency and Interpretability of AI Systems

- Technical Aspects of Algorithms for Explanation

- Theoretical Aspects of Explanation and Interpretability

- Ethics in Explainable AI

- Argumentation Theory for Explainable AI


============

SUBMISSIONS

============

We encourage the submission of original contributions investigating novel methodologies to build transparent and scrutable AI systems and algorithms. In particular, authors can submit:


(A) Regular papers (max. 12 pages + references - Springer LNCS format);

(B) Short/Position papers (max. 6 pages + references - Springer LNCS format);


Submission site: https://easychair.org/conferences/?conf=aixia2021


All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance and technical quality. Papers should be formatted according to the Springer LNCS style.

Submissions should be single-blind, i.e., authors' names should be included in the submissions.
Submissions must be made through the EasyChair conference system prior to the specified deadline (all deadlines refer to GMT) by selecting XAI.it as the submission track.
At least one of the authors should register for and attend the conference to present the paper.

================

IMPORTANT DATES

================

* Paper submission deadline: October 1st, 2021 (11:59PM UTC-12)

* Notification to authors: November 8th, 2021

* Camera-Ready submission: November 22nd, 2021


================================

PROCEEDINGS AND POST-PROCEEDINGS

================================

All accepted papers will be published in the AIxIA series of CEUR-WS. Authors of selected papers accepted to the workshop will be invited to submit a revised and extended version of their work.


=============

ORGANIZATION

=============

Cataldo Musto - University of Bari, Italy
Riccardo Guidotti - University of Pisa, Italy

Anna Monreale - University of Pisa, Italy

Giovanni Semeraro - University of Bari, Italy


=======================

PROGRAM COMMITTEE (TBE)

=======================

Davide Bacciu, Università di Pisa

Matteo Baldoni, Università di Torino

Valerio Basile, Università di Torino

Federico Bianchi, Università Bocconi - Milano

Ludovico Boratto, EURECAT

Roberta Calegari, Università di Bologna

Roberto Capobianco, Università di Roma La Sapienza

Federica Cena, Università di Torino

Nicolò Cesa-Bianchi, Università di Milano

Roberto Confalonieri, Free University of Bozen-Bolzano

Luca Costabello, Accenture

Rodolfo Delmonte, Università Ca' Foscari

Mauro Dragoni, Fondazione Bruno Kessler

Stefano Ferilli, Università di Bari

Fabio Gasparetti, Roma Tre University

Alessandro Giuliani, Università di Cagliari

Andrea Iovine, Università di Bari

Antonio Lieto, Università di Torino

Francesca Alessandra Lisi, Università di Bari

Alessandro Mazzei, Università di Torino

Stefania Montani, Università del Piemonte Orientale

Daniele Nardi, Università di Roma la Sapienza

Andrea Omicini, Università di Bologna

Andrea Passerini, University of Trento

Roberto Prevete, University of Naples Federico II

Antonio Rago, Imperial College London

Amon Rapp, Università di Torino

Salvatore Rinzivillo, ISTI - CNR

Gaetano Rossiello, IBM Research

Salvatore Ruggieri, Università di Pisa

Giuseppe Sansonetti, Roma Tre University

Lucio Davide Spano, Università di Cagliari

Stefano Teso, Katholieke Universiteit Leuven

Francesca Toni, Imperial College London

Guido Vetere, Università Guglielmo Marconi
