--- Apologies for cross-postings ---
Dear colleagues,
Electronic Markets is seeking submissions for a Special Issue on “Explainable and Responsible Artificial Intelligence (XAI)”. Please find further details below.
Call for Papers: “Explainable and Responsible Artificial Intelligence”
Submission deadline: April 30, 2022
Guest Editors
* Christian Meske, Freie Universität Berlin, Germany, christian.meske@fu-berlin.de
* Babak Abedin, Macquarie University, Australia, babak.abedin@mq.edu.au
* Mathias Klier, University of Ulm, Germany, mathias.klier@uni-ulm.de
* Fethi Rabhi, University of New South Wales, Australia, f.rabhi@unsw.edu.au
Theme
Today’s algorithms have already reached or even surpassed human task performance in various domains. In particular, AI plays a central role in the interaction between organizations and individuals, such as their customers, transforming, for instance, electronic commerce and customer relationship management. However, most AI systems are still “black boxes” that are difficult to comprehend, not only for developers but also for consumers and decision-makers (Meske, Bunde, Schneider and Gersch 2020). With regard to electronic markets, problems such as managing the risk and ensuring the regulatory compliance of electronic trading systems based on machine learning stem not only from their data-driven nature and technical complexity, but also from their black-box nature, where the “learning” creates non-transparent dependencies between inputs and outputs (Cliff and Treleaven 2010). This raises many challenges, such as ensuring data quality, managing the provenance information needed for transparency, and organizing metadata when combining data from multiple sources (Rabhi, Mehandjiev and Baghdadi 2020). Thus, responsible and more trustworthy AI is in demand (HLEG-AI 2019; Thiebes, Lins and Sunyaev 2020).
This is where research on Explainable Artificial Intelligence (XAI) comes in. Also referred to as “interpretable”, “responsible”, or “understandable AI”, XAI aims to “produce explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners” (DARPA 2017). XAI hence refers to “the movement, initiatives, and efforts made in response to AI transparency and trust concerns, more than to a formal technical concept” (Adadi and Berrada 2018, p. 52140). XAI is designed to be user-centric in that it empowers users to scrutinize AI (Förster, Klier, Kluge and Sigler 2020). Overall, XAI helps to evaluate, improve, learn from, and justify AI, in order to eventually be able to manage it (Meske, Bunde, Schneider and Gersch 2020).
With a focus on the transformation of electronic markets, this special issue intends to explore and extend research on how to establish explainability and responsibility in intelligent black-box systems, whether machine learning-based or not. To that end, we invite researchers to submit papers from all application domains, such as e-commerce, customer relationship management, healthcare, finance, retail, public administration, and others.
Central issues and topics
This special issue of the Electronic Markets journal will focus on new, innovative approaches to explainable and responsible AI systems that change or improve the interaction between organizations and individuals. Submissions should discuss how their approaches and solutions enable enhanced ways of information exchange, decision-making, and service science. Technical and method-oriented studies, case studies, and design science or behavioral science approaches are all welcome.
This special issue is not only intended for academics and researchers but will also be valuable for executives, managers, innovators, and project leaders who would like to implement explainable and responsible AI systems.
The (non-exclusive) list of topics includes:
* Designing and deploying XAI systems in electronic markets
* XAI to foster trust in AI-based buyer-seller interactions (e.g., chatbots, recommender systems)
* Addressing user-centric requirements for XAI systems
* Addressing the responsibility of AI systems
* Explainability as a prerequisite for responsible AI systems
* Impact of explainability on AI-based digital platform use and adoption
* Prevention and detection of deceptive AI explanations
* XAI to discover deep knowledge and learn from AI
* Presentation and personalization of AI explanations for different target groups
* XAI to increase situational awareness and compliance behavior
* XAI for transparency and unbiased decision making
* Potential harm of explainability in AI
* Explainability and responsibility policy guidelines
* XAI and ethics
Submission:
Electronic Markets is a Social Science Citation Index (SSCI)-listed journal (IF 2.981 in 2019) in the area of information systems. We encourage original contributions with a broad range of methodological approaches, including conceptual, qualitative, and quantitative research. Please also consider position papers and case studies for this special issue. All papers should fit the journal scope (for more information, see www.electronicmarkets.org/about-em/scope/) and will undergo a double-blind peer-review process. Submissions must be made via the journal’s submission system and comply with the journal’s formatting standards. The preferred average article length is approximately 8,000 words, excluding references. If you would like to discuss any aspect of this special issue, you may contact either the guest editors or the Editorial Office.
Special Note – HICSS55 (January 4-7, 2022; Submission Deadline: July 15, 2021). The guest editors of this special issue have organized a conference mini-track on “Explainable Artificial Intelligence (XAI)” at the Hawaii International Conference on System Sciences (HICSS) 55 (https://hicss.hawaii.edu/tracks-55/decision-analytics-and-service-science/#explainable-artificial-intelligence-xai-minitrack). Authors interested in this special issue are invited to consider this mini-track as an opportunity to receive developmental feedback. In fact, the best paper of the XAI mini-track will be fast-tracked to the special issue.
Keywords:
Explainable Artificial Intelligence, Responsible Artificial Intelligence, Explainability, Transparency, Managing AI
Important deadline
* Submission Deadline: April 30, 2022
References
Adadi, A. and Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.
Cliff, D. and Treleaven, P. (2010). Technology Trends in the Financial Markets: A 2020 Vision. UK Government Office for Science’s Foresight Driver Review on The Future of Computer Trading in Financial Markets – DR 3, October 2010.
Defense Advanced Research Projects Agency (DARPA) (2017). Explainable Artificial Intelligence (XAI). https://www.darpa.mil/program/explainable-artificial-intelligence. Accessed 7 April 2021.
Förster, M., Klier, M., Kluge, K. and Sigler, I. (2020). Fostering Human Agency: A Process for the Design of User-Centric XAI Systems. Proceedings of the 41st International Conference on Information Systems (ICIS).
HLEG-AI (2019). Ethics Guidelines for Trustworthy Artificial Intelligence. Brussels: Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission.
Meske, C., Bunde, E., Schneider, J. and Gersch, M. (2020). Explainable Artificial Intelligence: Objectives, Stakeholders and Future Research Opportunities. Information Systems Management (ISM), pp. 1-11. https://doi.org/10.1080/10580530.2020.1849465
Rabhi, F. A., Mehandjiev, N. and Baghdadi, A. (2020). State-of-the-Art in Applying Machine Learning to Electronic Trading. In International Workshop on Enterprise Applications, Markets and Services in the Finance Industry (pp. 3-20). Springer Lecture Notes in Business Information Processing.
Thiebes, S., Lins, S. and Sunyaev, A. (2020). Trustworthy Artificial Intelligence. Electronic Markets (EM), pp. 1-18. https://doi.org/10.1007/s12525-020-00441-4
Best regards,
Rainer Alt, Hans-Dieter Zimmermann, Ramona Coia
====================================================================
Electronic Markets - The International Journal on Networked Business
Editors-in-Chief: Rainer Alt, Leipzig University, and Hans-Dieter Zimmermann, FHS St.Gallen, University of Applied Sciences
Executive Editor: Ramona Coia, Leipzig University
Editorial Office:
c/o Information Systems Institute
Leipzig University
04109 Leipzig, Germany
Mail: editors@electronicmarkets.org
Phone: +49-341-9733600
http://www.electronicmarkets.org
https://www.facebook.com/ElectronicMarkets
https://twitter.com/journal_EM
https://www.springer.com/journal/12525
Journal Impact Factor 2019: 2.981