-------- Forwarded Message --------
** Please forward to anyone who might be interested **
------------------------------------------------------
CALL FOR PAPERS
EXplainable AI in Law (XAILA) 2019
https://xaila.geist.re
Workshop co-located with Jurix 2019
(Legal Knowledge and Information Systems)
https://jurix2019.oeg-upm.net
Organizers: Grzegorz J. Nalepa, Martin Atzmueller, Michał
Araszkiewicz, Paulo Novais
Abstract:
=========
Humanized AI (HAI) emphasizes transparency and explainability in AI
systems. These perspectives have an important ethical dimension,
which is most often analyzed by philosophers. However, for this
analysis to be fruitful for AI engineers, it has to be properly
focused. It is the intersection of Law and AI that makes this
possible, as it provides a conceptual framework for ethical
concepts and values in AI systems. A significant part of AI and Law
research during the last two decades was devoted to the
operationalization of legal thinking with values. These results may
now be reconsidered in a broader context concerning the development
of HAI systems and their social impact. This is a timely issue for
the AI and Law community.
Motivation for the workshop and description
===========================================
Humanized AI (HAI) encompasses important perspectives on AI
systems, including transparency and explainability (XAI); another
is the affective computing paradigm. These perspectives have an
important ethical dimension. While the ethical discussion is
conducted by many philosophers, for it to be fruitful for AI
engineers it has to be focused on specific concepts and
operationalized.
We believe that it is the intersection of Law and AI that makes
such an endeavor possible, as together they lay the foundations and
provide a conceptual framework for ethical concepts and values in
AI systems. Therefore, when discussing the ethical consequences and
considerations of transparent and explainable AI systems, including
affective systems, we should focus on the legal conceptual
framework. A significant part of AI and Law research during the
last two decades was devoted to the operationalization of legal
thinking with values. These results may now be reconsidered in a
broader context concerning the development of XAI systems and their
social impact. As such, it is a very timely issue for the AI and
Law community.
Our objective is to bring together people from AI who are
interested in XAI/HAI topics (possibly with a broader background
than just engineering) and to create an ample space for discussion
with people from the field of legal scholarship and/or legal
practice. As many members of the AI and Law community combine both
perspectives, the JURIX conference is a perfect venue for the
workshop. Together, we would like to address questions such as:
* the notions of transparency, interpretability and explainability
in XAI
* non-functional design choices for explainable and transparent AI
systems
* legal consequences of black-box AI systems
* legal criteria and requirements for explainable and transparent
AI systems
* possible applications of XAI systems in the area of legal policy
deliberation, legal practice, teaching and research
* ethical and legal implications of the use of AI systems in
different spheres of societal life
* the notion of the right to explanation
* the relation of XAI and argumentation technologies
* XAI models, approaches and architectures
* XAI and declarative domain knowledge
* risk-based approach to analysis of AI systems and the influence
of XAI on risk assessment
* incorporation of ethical values into AI systems & its legal
interpretation and consequences
* XAI, privacy and data protection
* possible legal aspects and consequences of affective systems
* XAI, certification and compliance
Important dates:
================
Submission: 25.11.2019
Notification: 01.12.2019
Camera-ready: 06.12.2019
Workshop: 11.12.2019
--
-----------------------------------------------------------------------
Assoc. Prof. Dr. habil. Martin Atzmueller https://martin.atzmueller.net
Associate Professor, Tilburg University, CSAI/CSLab http://www.cslab.cc
Visiting Professor, Université Sorbonne Paris Cité, Machine Learning/A3
Email: m.atzmuller@uvt.nl | Phone: +31-(0)13 466 4736