Call for Submissions to the "3rd Workshop on Explainability for Human-Robot Collaboration (X-HRI): Real-World Concerns" at HRI 2025
=====================================================================================
Website: https://sites.google.com/view/x-hri
Date: March 3, 2025 (Half-day workshop)
Location: Melbourne, Australia, as part of the 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2025)
Manuscript submission site: https://easychair.org/conferences?conf=xhri2025
Contact for submissions: dkonto@mit.edu and e.yadollahi@lancaster.ac.uk
IMPORTANT DATES
Submission deadline: February 12, 2025
Notification of acceptance: February 19, 2025
Camera-ready deadline: February 26, 2025
Workshop: March 3, 2025
All deadlines are at 23:59 Anywhere on Earth time.
=====================================================================================
AIM AND SCOPE
Robots powered by AI and machine learning are increasingly capable of collaboration and social interaction with humans, creating a demand for new approaches that ensure their transparency and explainable behaviour. While explainable AI (XAI) seeks to clarify AI decisions, its integration into physical robots often creates an illusion of explainability, raising questions about whether current approaches truly enhance understanding. The 3rd Workshop on Explainability in Human-Robot Collaboration aims to address the real-world concerns associated with developing explainable and transparent robots through a focused, multi-faceted panel discussion and a series of paper presentations. In this workshop, we will focus on refining when and how explanations should be provided, integrating human communication principles to enhance trust and transparency in human-robot collaboration through both technical and user-centred solutions.
TOPICS OF INTEREST
Topics of interest include, but are not limited to:
Using participatory design to achieve explainability
The downsides of explainability
The connection between explainability and trust
What makes an interaction explainable
Metrics to evaluate explainability
Deception in Human-Robot Interaction
Unintended biases in explainability & how to alleviate them
Transparency in trustworthy autonomous systems
Explanation generation as a model reconciliation process
Adapting explanations through mental model formation
Explanation generation
AI Ethics
Bias and misinformation in LLM-based explanations
Few-shot learning and adaptation for LLMs in explainable HRI
WORKSHOP SCHEDULE
The half-day workshop will be hybrid (in Melbourne and online) on March 3, 2025. It will consist of an ice-breaking activity, short keynote talks, an exciting panel, and presentations of the accepted papers. A more detailed schedule will be published on the website closer to the conference date.
SUBMISSION GUIDELINES
We invite scientific papers of 3 to 4 pages (including references and appendices). Submissions can encompass various types of work, including ongoing projects with preliminary findings, technical reports, case studies, opinion pieces, surveys, and cutting-edge research in the realms of explainability in robotics and AI. All submitted papers will undergo a thorough review process to assess their relevance, originality, and scientific and technical robustness. Authors are asked to adhere to the submission guidelines outlined by HRI 2025.
Submissions do not need to be anonymized for review. All manuscripts must be written in English and submitted electronically in PDF format via EasyChair: https://easychair.org/conferences?conf=xhri2025
Accepted papers will be published on the workshop website as well as on arXiv. Authors of accepted papers will present their work as lightning talks or posters during the workshop.
Authors should use the ACM SIG template files (US letter), with “sigconf” as the document class option instead of “manuscript,screen,review”: https://www.acm.org/publications/proceedings-template
Overleaf template (the same document-class note applies): https://www.overleaf.com/latex/templates/acm-conference-proceedings-primary-article-template/wbvnghjbzwpc
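For reference, a minimal preamble sketch for the ACM acmart class with the “sigconf” option might look like the following (title, author, and affiliation are placeholders to be replaced with your own):

```latex
% Sketch of an acmart preamble for this workshop's format.
% Use the "sigconf" option rather than "manuscript,screen,review".
\documentclass[sigconf]{acmart}

\begin{document}

\title{Your Paper Title}
\author{Author Name}
\affiliation{%
  \institution{Your Institution}
  \country{Your Country}}
\maketitle

% Paper body (3--4 pages) goes here.

\end{document}
```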
ORGANISERS
Elmira Yadollahi (Lancaster University, UK)
Fethiye Irmak Dogan (University of Cambridge, UK)
Marta Romeo (Heriot-Watt University, UK)
Dimosthenis Kontogiorgos (Massachusetts Institute of Technology, USA)
Peizhu Qian (Rice University, USA)
Yan Zhang (University of Melbourne, Australia)