--- Apologies for cross-postings ---

Call for Papers: "Trust in AI for electronic markets"
in "Electronic Markets - The International Journal on Networked Business"

Submission deadline: December 15, 2021
Guest Editors

Wolfgang Maass, Saarland University and German Research Center for Artificial Intelligence (DFKI), Germany, wolfgang.maass(at)dfki.de
Roman Lukyanenko, HEC Montréal, Canada, roman.lukyanenko(at)hec.ca
Veda C. Storey, Georgia State University, USA, vstorey(at)gsu.edu
Theme

Electronic markets for trading physical as well as digital goods offer a wide variety of services based on Artificial Intelligence (AI) technologies, known as smart market services. Smart market services generate recommendations and predictions by applying AI technologies to data available and accessible in electronic markets. For instance, financial high-speed trading is only feasible through smart market services that autonomously execute transactions according to market signals, based on AI models trained with big data. Electronic marketplaces, including Amazon and Alibaba, use AI technologies to provide smart services to consumers, optimize logistics, analyze consumer behavior, and derive innovative product and service designs. Some business leaders even consider sophisticated AI solutions a major threat to society, while using AI extensively in their own businesses. Because AI systems elude human understanding and scrutiny, trust in AI is crucial for the success of smart market services, as well as of other AI- or machine learning-based systems. Gaining trust in AI begins with transparency in the review of (a) the data, so that biases and gaps in domain knowledge are controlled, (b) the AI models and their objective functions, (c) model performance, and (d) the results generated by AI models for decision making. Trust thus becomes an important factor for overcoming uncertainty about AI-based recommendations in general, and in electronic markets in particular.
The quality of smart market services depends on a shared understanding and conceptual models of the data used for training AI models; on data quality; on the selection and training of appropriate models; and on the embedding of models into smart market services. Providers of smart market services must build trust relationships with business and end customers despite limited possibilities for opening the "black boxes" of AI systems, owing to the increasing complexity of machine learning models. Empirical studies on trust in AI report heterogeneous results.
Companies and end users appreciate the benefits and opportunities provided by smart market services. At the same time, concerns are raised with respect to privacy issues and to biases in data, models, and algorithms. Overly optimistic customers might become disappointed if smart market services do not deliver as expected, and evidence of privacy leaks and biases might reinforce prejudices; both may decrease trust in AI. A challenging research question is to identify which methods, indicators, and experiences increase trust in AI. For instance, explainable AI is a technical means for opening the "black boxes" of AI systems in general and of smart market services in particular.
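As a minimal illustration, the following Python sketch applies one such post-hoc, model-agnostic explanation, permutation feature importance from scikit-learn, to a hypothetical purchase-prediction model; the data, feature names, and model choice are illustrative assumptions only:

# Illustrative sketch only: post-hoc explanation of a "black box" recommendation model.
# The data, feature names, and model are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["price", "rating", "delivery_days", "past_purchases"]

# Synthetic market data: does a customer follow a recommendation (1) or not (0)?
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")

Such feature-level explanations are one of several ways to make the behavior of smart market services inspectable by business and end customers.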
This special issue seeks contributions on trust in Artificial Intelligence in the context of electronic markets. Contributions that help to understand these challenges from an economic, legal, or technical perspective are invited.
Central issues and topics

Possible topics of submissions include, but are not limited to:

- Trust behavior and AI
- Mental models, conceptual models, and AI models
- Psychological and sociological factors for trust in AI
- Human-centric design of smart market services
- Explainable AI for smart market services
- Threats to trust in AI
- Frameworks for smart markets
- Business and legal aspects influencing trust in AI
- Relationships between trust and business models with smart market services
- Transparency of data, AI models, and recommendations
- Case studies on building trust in AI
Submission:

Electronic Markets is a Social Science Citation Index (SSCI)-listed journal (Impact Factor 2.981 in 2019) in the area of information systems. We encourage original contributions with a broad range of methodological approaches, including conceptual, qualitative, and quantitative research. Please also consider position papers and case studies for this special issue. All papers should fit the journal's scope (for more information, see www.electronicmarkets.org/about-em/scope/) and will undergo a double-blind peer-review process. Submissions must be made via the journal's submission system and comply with the journal's formatting standards. The preferred average article length is approximately 8,000 words, excluding references. If you would like to discuss any aspect of this special issue, you may contact either the guest editors or the Editorial Office.
Keywords: Trust, Interpretability, Mental Models, Conceptual Models, Explainable AI, Smart Market Services, Privacy, Fairness of Artificial Intelligence, Biases, Transparency
Important deadline
* Submission deadline: December 15, 2021
References

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78-87. doi.org/10.1145/2347736.2347755.

Dwivedi, Y. K. et al. (2019). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. doi.org/10.1016/j.ijinfomgt.2019.08.002.

Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 624-635. doi.org/10.1145/3442188.3445923.

Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937-947. doi.org/10.1287/mksc.2019.1192.

Maass, W., Parsons, J., Purao, S., Storey, V. C., & Woo, C. (2018). Data-driven meets theory-driven research in the era of big data: Opportunities and challenges for information systems research. Journal of the Association for Information Systems, 19(12), 1. doi.org/10.17705/1jais.00526.

Maass, W., Parsons, J., Purao, S., & Storey, V. C. (2021). Pairing conceptual modeling with machine learning. Data & Knowledge Engineering, forthcoming.

Maass, W., Storey, V. C., & Lukyanenko, R. (2021). From mental models to machine learning models via conceptual models. In Exploring Modeling Methods for Systems Analysis and Development (EMMSAD 2021), Melbourne, Australia, pp. 1-8. doi.org/10.1007/978-3-030-79186-5_19.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144. doi.org/10.1145/2939672.2939778.

Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47-53.

Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2). doi.org/10.1007/s12525-020-00441-4.
====================================================================
Electronic Markets - The International Journal on Networked Business
Editors-in-Chief: Rainer Alt, Leipzig University, and Hans-Dieter Zimmermann, FHS St.Gallen, University of Applied Sciences
Executive Editor: Ramona Coia, Leipzig University

Editorial Office
c/o Information Systems Institute
Leipzig University
04109 Leipzig, Germany
Mail: editors@electronicmarkets.org
Phone: +49-341-9733600
http://www.electronicmarkets.org
www.facebook.com/ElectronicMarkets
twitter.com/journal_EM
www.springer.com/journal/12525
Journal Impact Factor 2019: 2.981