[apologies for double posting]
QUARE 2022: The 1st workshop on Measuring the Quality of Explanations in Recommender
Systems, co-located with SIGIR 2022 (https://sigir.org/sigir2022/),
July 11-15, 2022, in Madrid, Spain and Online
Workshop website: https://sites.google.com/view/quare-2022/home
Location: Hybrid - Madrid, Spain and Online
IMPORTANT DATES:
-----------------------------
Extended paper submission deadline: 10 May 2022
Author notification: 17 May 2022
Final version deadline: 15 June 2022
Workshop date: 15 July 2022
WORKSHOP ORGANISERS:
----------------------------------------
- Alessandro Piscopo (BBC, UK) <alessandro.piscopo@bbc.co.uk>
- Oana Inel (University of Zurich, CH) <inel@ifi.uzh.ch>
- Sanne Vrijenhoek (University of Amsterdam, NL) <s.vrijenhoek@uva.nl>
- Martijn Millecamp (AE NV, BE) <martijn.millecamp@hotmail.com>
- Krisztian Balog (Google Research) <krisztianb@google.com>
CALL FOR PAPERS:
----------------------------
Recommendations are ubiquitous in many contexts and domains, owing to the continuously
growing adoption of decision-support systems. Explanations may be provided along with
recommendations to convey the reasoning behind suggesting a particular item. However,
explanations may also significantly affect a user's decision-making process by serving a
number of different goals, such as transparency, persuasiveness, and scrutability. While
there is a growing body of research studying the effect of explanations, the relationship
between their quality and their effect has not yet been investigated in depth.
For instance, at an institutional level, organisational values may require a different
combination of explanation goals; moreover, within the same organisation, some
combinations of goals may be more appropriate for certain use cases than for others.
Conversely, end-users of a recommender system may hold different values, and explanations
can affect them differently. Therefore, understanding whether explanations are fit for
their intended goals is key to subsequently deploying them in production.
Furthermore, the lack of established, actionable methodologies to evaluate explanations
for recommendations, as well as of evaluation datasets, hinders cross-comparison between
different explainable recommendation approaches and is one of the issues hampering the
widespread adoption of explanations in industry settings.
This workshop aims to extend existing work in the field by bringing together industry and
academia and facilitating the exchange of perspectives and solutions. It seeks to bridge
the gap between academic design guidelines and industry best practices regarding the
implementation and evaluation of explanations in recommender systems, with respect to
their goals, impact, potential biases, and informativeness. With this workshop, we provide
a platform for discussion among scholars, practitioners, and other interested parties.
TOPICS AND THEMES:
--------------------------------
The motivation of the workshop is to promote discussion of future research and practice
directions for evaluating explainable recommendations, by bringing together academic and
industry researchers and practitioners in the area. We focus in particular on real-world
use cases, diverse organisational values and purposes, and different target users. We
encourage submissions that study different explanation goals, and combinations thereof,
and how they fit various organisational values and use cases. Furthermore, we welcome
submissions that propose and make available to the community high-quality datasets and
benchmarks.
Topics include, but are not limited to:
* Evaluation
  * Relevance of explanation goals for different use cases;
  * Soliciting user feedback on explanations;
  * Implicit vs. explicit evaluation of explanations and goals;
  * Reproducible and replicable evaluation methodologies;
  * Online vs. offline evaluations.
* Personalisation
  * User modelling for explanation generation;
  * Evaluation approaches for personalised explanations (e.g., content, style);
  * Evaluation approaches for context-aware explanations (e.g., place, time,
    alone/group setting, exploratory/transaction mode).
* Presentation
  * Evaluation of different explanation modalities (e.g., text, graphics, audio,
    hybrid);
  * Evaluation of interactive explanations.
* Datasets
  * Generation of datasets for the evaluation of explanations;
  * Evaluation benchmarks.
* Values
  * Evaluation of explanations in relation to organisational values;
  * Evaluation of explanations in relation to personal values.
SUBMISSIONS:
----------------------
We welcome three types of submissions:
* position or perspective papers (up to 4 pages in length, plus unlimited pages for
references): original ideas, perspectives, research visions, and open challenges in the
area of evaluation approaches for explainable recommender systems;
* featured papers (title and abstract of the paper, plus the original paper): already
published papers, or papers summarising existing publications, in leading conferences and
high-impact journals that are relevant to the topic of the workshop;
* demonstration papers (up to 2 pages in length, plus unlimited pages for references):
original or already published prototypes and operational evaluation approaches in the area
of explainable recommender systems.
Page limits include diagrams and appendices. Submissions should be single-blind, written
in English, and formatted according to the current ACM two-column conference format.
Suitable LaTeX, Word, and Overleaf (https://www.overleaf.com/gallery/tagged/acm-official)
templates are available from the ACM website
(https://www.acm.org/publications/proceedings-template); use the “sigconf” proceedings
template for LaTeX and the Interim Template for Word.
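For instance, a minimal LaTeX skeleton using the “sigconf” option could look as follows
(the title, author, and affiliation below are placeholders):

  \documentclass[sigconf]{acmart}
  \begin{document}
  \title{Evaluating Explanations in Recommender Systems}
  \author{Jane Doe}
  \affiliation{%
    \institution{Example University}
    \country{Spain}}
  \begin{abstract}
  A short abstract goes here.
  \end{abstract}
  \maketitle
  Body text goes here.
  \end{document}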
Submit papers electronically via EasyChair: https://easychair.org/my/conference?conf=quare22.
All submissions will be peer-reviewed by the program committee, and accepted papers will
be published on the workshop website: https://sites.google.com/view/quare-2022/home.
At least one author of each accepted paper is required to register for the workshop
(attendance may be either remote or in-person) and present the work.
________________________________
Dr Alessandro Piscopo
Principal Data Scientist