-------- Forwarded Message --------
Subject: [AISWorld] HICSS 2021 [CFP] Explainable Artificial Intelligence (XAI) mini track
Date: Thu, 26 Mar 2020 08:29:05 +1100
From: Babak Abedin <babak.abedin@gmail.com>
To: aisworld@lists.aisnet.org
Dear Colleagues,
Please consider submitting to the HICSS 2021 Explainable Artificial Intelligence (XAI) minitrack ( https://hicss.hawaii.edu/tracks-54/decision-analytics-and-service-science/#e... ). Fast-track journal opportunity at *Information Systems Frontiers*.
*Description*:
The use of Artificial Intelligence (AI) in the context of decision analytics and service science has received significant attention in academia and practice alike. Yet much of the current effort has focused on advancing the underlying algorithms rather than on decreasing the complexity of AI systems. AI systems remain “black boxes” that are difficult to comprehend—not only for developers, but particularly for users and decision-makers. Moreover, the development and use of AI is associated with many risks and pitfalls, such as biases in data or predictions based on spurious correlations (the “Clever Hans” phenomenon), which may eventually lead to malfunctioning or biased AI and hence to technologically driven discrimination.
This is where research on Explainable Artificial Intelligence (XAI) comes in. Also referred to as “transparent,” “interpretable,” or “understandable” AI, XAI aims to “produce explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. XAI hence refers to “the movement, initiatives, and efforts made in response to AI transparency and trust concerns, more than to a formal technical concept”, and ultimately impacts users’ task performance.
With a focus on decision support, this minitrack aims to explore and extend research on how to establish the explainability of intelligent black box systems—machine learning-based or not. We especially welcome contributions that investigate XAI from either a developer’s or a user’s perspective. We invite submissions from all application domains, such as healthcare, finance, e-commerce, retail, public administration, and others. Technically oriented and method-oriented studies, as well as design science and behavioral science approaches, are welcome.
*Topics of interest include, but are not limited to:*
- *The developers’ perspective on XAI*
  - XAI to open, control and evaluate black box algorithms
  - Using XAI to identify bias in data
  - Explainability and human-in-the-loop development of AI
  - XAI to support interactive machine learning
  - Prevention and detection of deceptive AI explanations
  - XAI to discover deep knowledge and learn from AI
- *The users’ perspective on XAI*
  - Presentation and personalization of AI explanations for different target groups
  - XAI to increase situational awareness, compliance behavior and task performance
  - XAI for transparency and unbiased decision-making
  - Impact of explainability on the use and adoption of AI-based decision support systems
  - Explainability of AI in crisis situations
  - Potential harm of explainability in AI
We provide the opportunity for (extended) best papers of this minitrack to be fast-tracked to the *Information Systems Frontiers* (ISF) journal.

*Important Dates for Paper Submission*
- June 15, 2020: Paper Submission Deadline (11:59 pm HST)
- August 17, 2020: Notification of Acceptance/Rejection

*Minitrack Co-Chairs:*
Christian Meske (Primary Contact)
Freie Universität Berlin and Einstein Center Digital Future
christian.meske@fu-berlin.de
Babak Abedin
University of Technology Sydney
Babak.Abedin@uts.edu.au
Iris Junglas
College of Charleston
junglasia@cofc.edu
Fethi Rabhi
University of New South Wales
f.rabhi@unsw.edu.au