-------- Forwarded Message --------
Subject: [AISWorld] CfP HICSS-55: mini track on Explainable Artificial Intelligence (XAI) with publication opportunity in Electronic Markets
Date: Tue, 1 Jun 2021 08:52:04 +1000
From: Babak Abedin <babak.abedin@gmail.com>
To: aisworld@lists.aisnet.org
Dear colleagues,
We are happy to introduce the minitrack on “*Explainable Artificial Intelligence (XAI)*” at *HICSS 55* (submission deadline: *June 15th, 2021*). We provide the opportunity for (extended) best papers of this minitrack to be *fast-tracked* to the journal *Electronic Markets* (Special Issue on “*Explainable and Responsible Artificial Intelligence*”, CfP will follow).
If you have any questions, please do not hesitate to contact Christian Meske (christian.meske@fu-berlin.de).
Yours sincerely,
Christian Meske, Babak Abedin, Mathias Klier, Fethi Rabhi
*********************************************************************
*Call for Papers: “Explainable Artificial Intelligence (XAI)” Minitrack at the 55th Hawaii International Conference on System Sciences (HICSS)*
*********************************************************************

The use of Artificial Intelligence (AI) in the context of decision analytics and service science has received significant attention in academia and practice alike. Yet much of the current effort has focused on advancing the underlying algorithms rather than on reducing the complexity of AI systems. AI systems are still “black boxes” that are difficult to comprehend, not only for developers but particularly for users and decision-makers. In addition, the development and use of AI is associated with many risks and pitfalls, such as biases in data or predictions based on spurious correlations (the “Clever Hans” phenomenon), which may eventually lead to malfunctioning or biased AI and hence to technologically driven discrimination.
This is where research on Explainable Artificial Intelligence (XAI) comes in. Also referred to as “transparent,” “interpretable,” or “understandable AI,” XAI aims to “produce explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.
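As one concrete illustration of such post-hoc explainability, the short Python sketch below probes an opaque classifier with a model-agnostic attribution method. It is a minimal, illustrative example only (assuming scikit-learn and its permutation_importance utility) and is not prescribed by this call:

    # Illustrative sketch: "opening" a black-box model with a post-hoc,
    # model-agnostic XAI technique (permutation feature importance).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train an opaque ("black box") model.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Post-hoc explanation: how much does shuffling each feature degrade accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Report the five most influential features for this model's predictions.
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")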
With a focus on decision support, this minitrack aims to explore and extend research on how to establish explainability of intelligent black-box systems, whether machine learning-based or not. We especially look for contributions that investigate XAI from either a developer’s or a user’s perspective. We invite submissions from all application domains, such as healthcare, finance, e-commerce, retail, public administration, and others. Technical and method-oriented studies, case studies, as well as design science or behavioral science approaches are welcome.
Topics of interest include, but are not limited to:
- The developers’ perspective on XAI
  - XAI to open, control, and evaluate black box algorithms
  - Using XAI to identify bias in data
  - Explainability and Human-in-the-Loop development of AI
  - XAI to support interactive machine learning
  - Prevention and detection of deceptive AI explanations
  - XAI to discover deep knowledge and learn from AI
  - Designing and deploying XAI systems
  - Addressing user-centric requirements for XAI systems
- The users’ perspective on XAI
  - Theorizing XAI-human interactions
  - Presentation and personalization of AI explanations for different target groups
  - XAI to increase situational awareness, compliance behavior, and task performance
  - XAI for transparency and unbiased decision making
  - Impact of explainability on the use and adoption of AI-based decision support systems
  - Explainability of AI in crisis situations
  - Potential harm of explainability in AI
  - Identifying user-centric requirements for XAI systems
- The governments’ perspective on XAI
  - Explainability and transparency policy guidelines
  - Evidence-based benefits and challenges of XAI implementations in the public sector
  - XAI and compliance
*Submission Deadline:* June 15th, 2021 (Notification: by August 17th, 2021)
Further information for authors: https://hicss.hawaii.edu/authors/
Link to minitrack descriptions: https://hicss.hawaii.edu/tracks-54/decision-analytics-and-service-science/#e...
*Fast track:* We provide the opportunity for (extended) best papers of this minitrack to be fast-tracked to the journal *Electronic Markets* (Special Issue on “Explainable and Responsible Artificial Intelligence”, CfP will follow).
*Minitrack Co-Chairs:*
Christian Meske, Freie Universität Berlin
Babak Abedin, Macquarie University
Mathias Klier, University of Ulm
Fethi Rabhi, University of New South Wales

_______________________________________________
AISWorld mailing list
AISWorld@lists.aisnet.org