— Apologies for cross-posting —
QUARE 2022: The 1st Workshop on Measuring the Quality of
Explanations in Recommender Systems, co-located with SIGIR 2022
(https://sigir.org/sigir2022/), July 11-15, 2022, in Madrid, Spain and Online
Workshop website: https://sites.google.com/view/quare-2022/home
Location: Hybrid - Madrid, Spain and Online
IMPORTANT DATES:
-----------------------------
Paper submission: 3 May 2022
Author notification: 15 May 2022
Final version deadline: 15 June 2022
Workshop date: 15 July 2022
CALL FOR PAPERS:
----------------------------
Recommendations are ubiquitous in many contexts and domains due to
the continuously growing adoption of decision-support systems.
Explanations may be provided alongside recommendations to convey
the reasoning behind suggesting a particular item. However,
explanations may also significantly affect a user's
decision-making process by serving a number of different goals,
such as transparency, persuasiveness, and scrutability.
While there is a growing body of research studying the effect of
explanations, the relationship between their quality and their
effect has not yet been investigated in depth.
For instance, at an institutional level, organisational values may
require different combinations of explanation goals; moreover, within
the same organisation, some combinations of goals may be more
appropriate for some use cases than for others. Conversely,
end-users of a recommender system may hold different values, and
explanations can affect them differently. Therefore,
understanding whether explanations are fit for their intended
goals is key to subsequently implementing them in production.
Furthermore, the lack of established, actionable methodologies to
evaluate explanations for recommendations, as well as of evaluation
datasets, hinders cross-comparison between different explainable
recommendation approaches, and is one of the issues hampering the
widespread adoption of explanations in industry settings.
This workshop aims to extend existing work in the field by
bringing together perspectives and solutions from industry and
academia and facilitating their exchange, and to bridge the
gap between academic design guidelines and industry best practices
regarding the implementation and evaluation of
explanations in recommender systems, with respect to their goals,
impact, potential biases, and informativeness. With this workshop,
we provide a platform for discussion among scholars,
practitioners, and other interested parties.
TOPICS AND THEMES:
--------------------------------
The motivation of the workshop is to promote discussion of
future research and practice directions for evaluating explainable
recommendations, by bringing together academic and industry
researchers and practitioners in the area. We focus in particular
on real-world use cases, diverse organisational values and
purposes, and different target users. We encourage submissions
that study different explanation goals and combinations thereof,
and how these fit various organisational values and use cases.
Furthermore, we welcome submissions that propose and make
available to the community high-quality datasets and benchmarks.
Topics include, but are not limited to:
Evaluation
Relevance of explanation goals for different use cases;
Soliciting user feedback on explanations;
Implicit vs. explicit evaluation of explanations and goals;
Reproducible and replicable evaluation methodologies;
Online vs. offline evaluations.
Personalisation
User-modelling for explanation generation;
Evaluation approaches for personalised explanations (e.g.,
content, style);
Evaluation approaches for context-aware explanations (e.g., place,
time, alone/group setting, exploratory/transaction mode).
Presentation
Evaluation of different explanation modalities (e.g., text,
graphics, audio, hybrid);
Evaluation of interactive explanations.
Datasets
Generation of datasets for evaluation of explanations;
Evaluation benchmarks.
Values
Evaluation of explanations in relation to organisational values;
Evaluation of explanations in relation to personal values.
SUBMISSIONS:
----------------------
We welcome three types of submissions:
Position or perspective papers (up to 4 pages in length, plus
unlimited pages for references): original ideas, perspectives,
research visions, and open challenges in the area of evaluation
approaches for explainable recommender systems;
Featured papers (title and abstract of the paper, plus the
original paper): already published papers, or papers summarising
existing publications in leading conferences and high-impact
journals, that are relevant to the topic of the workshop;
Demonstration papers (up to 2 pages in length, plus unlimited
pages for references): original or already published prototypes
and operational evaluation approaches in the area of explainable
recommender systems.
Page limits include diagrams and appendices. Submissions should be
single-blind, written in English, and formatted according to the
current ACM two-column conference format. Suitable LaTeX, Word,
and Overleaf templates
(https://www.overleaf.com/gallery/tagged/acm-official)
are available from the ACM website
(https://www.acm.org/publications/proceedings-template); use the
“sigconf” proceedings template for LaTeX and the Interim Template
for Word.
Submit papers electronically via EasyChair:
https://easychair.org/my/conference?conf=quare22.
All submissions will be peer-reviewed by the program committee, and
accepted papers will be published on the workshop website:
https://sites.google.com/view/quare-2022/home.
At least one author of each accepted paper is required to register
for the workshop and present the work.
ORGANISERS:
----------------------
Alessandro Piscopo, BBC
Oana Inel, University of Zurich
Sanne Vrijenhoek, University of Amsterdam
Martijn Millecamp, AE NV
Krisztian Balog, Google, University of Stavanger