We would like to invite you to submit to our workshop on socially assistive
robots as decision makers <https://sites.google.com/view/sar-decision-making>, which
will take place on Friday 28th April 2023 in Hamburg, Germany as part of the ACM CHI
Conference on Human Factors in Computing Systems (CHI '23).
*Socially Assistive Robots as Decision Makers: Transparency, Motivations, and Intentions*
is a one-day hybrid workshop held as part of CHI '23. It aims to discuss challenges,
current practices, and ethical implications of Socially Assistive Robot (SAR)
decision-making.
We welcome position or research contribution papers of up to 4 pages (excluding
references) which address challenges within the theme of SARs that make decisions; some
example challenges are listed below. Authors may also indicate their background and
motivations for attending the workshop, in addition to any research results. Papers
should be in the CHI Extended Abstract format and submitted through a Google Form.
Papers will be reviewed according to their suitability to the workshop and their
potential to contribute to an interesting set of discussions among participants.
Accepted papers will be presented as a poster or short talk on the day of the workshop,
and their authors will be invited to submit them to arXiv. At least one author of each
accepted submission must attend the workshop. More information can be found on our
website:
https://sites.google.com/view/sar-decision-making
- Which decisions should be made by the SAR and who decides this?
- How should SARs transparently communicate when reasoning is based on
complex processes?
- How do SARs smoothly integrate into existing relationships in
assistive scenarios to build trust with users?
- What is the role of robot embodiment in communicating explanations?
- How do SARs recover from unintended/incorrect actions through
communication with the user?
- How should SAR approaches be adapted for specific populations?
- Should SARs that make autonomous decisions also have an explicit moral
reasoning framework?
*Important Dates (all deadlines are at 11:59PM AoE):*
- Google Form Submission Opens: *Wednesday, January 18, 2023*
- Submission Deadline: *Thursday, February 23, 2023*
- Notification of Acceptance: *Monday, March 13, 2023*
- Camera-Ready Deadline: *Monday, March 20, 2023*
*Organisers:*
- *Emilyann Nault*, University of Edinburgh (UK) (*Main Point of Contact*)
- *Carl Bettosi*, Heriot-Watt University (UK)
- *Professor Lynne Baillie*, Heriot-Watt University (UK) (*Secondary Point of
Contact*)
- *Ronnie Smith*, University of Edinburgh (UK)
- *Professor Maja Mataric*, University of Southern California (USA)
- *Dr Vivek Nallur*, University College Dublin (IRL)
- *Professor Manfred Tscheligi*, University of Salzburg (AUT)
- *Dr Andreas Sackl*, Austrian Institute of Technology (AUT)
- *Professor Fabio Paternò*, National Research Council of Italy - Institute of
Information Science and Technologies (ITA)
- *Scott MacLeod*, Heriot-Watt University (UK)
- *Sara Cooper*, PAL Robotics (ESP)
Please feel free to reach out to Emilyann (en27(a)hw.ac.uk) with any questions you
may have.
Kind regards,
SARs: TMI Workshop Organizers