CfP: Towards The Positronic Brain? Workshop on Situated and Embodied Language Processing and Multimodal Interaction at SLTC 2024, Linköping, Sweden

Simon Dobnik
Fri, Oct 18, 2024 12:19 PM

[apologies for x-posting]

Call for Participation and Extended Abstracts

Towards The Positronic Brain? Workshop on Situated and Embodied Language Processing and Multimodal Interaction

Date: Friday, 29 November (morning)
Location: SLTC 2024 (https://sltc2024.github.io), Linköping University, Sweden

Website: https://gu-clasp.github.io/language-and-perception/events/positronic-brain/

Important dates:

  • Submission of extended abstracts: 29 October 2024
  • Notification of acceptance: 7 November 2024
  • Camera Ready: 15 November 2024

All deadlines are 11:59PM UTC-12:00 ("anywhere on Earth").

Workshop description

When people hear "Artificial Intelligence", they often imagine something like Isaac Asimov's Positronic Brain: a brain built out of mathematics, logic, and wires that functions in many respects like a human brain and exhibits human-like behaviour. It is attached to a body and can perceive and interact with the world. It has sophisticated language capabilities, speaking about its perception, action and reasoning, and it follows human interactive conventions and social and ethical norms.

However, a common approach to Artificial Intelligence has been to treat language, perception, action and behaviour as separate fields with distinct tasks and goals. Consequently, most systems that we build do not live up to the beliefs and expectations of everyday users. Integrating several modalities and constructing agents that can act in sophisticated ways in the world is therefore our next big challenge if we want to approximate the human-like robotic agents that sci-fi authors love.

This workshop invites researchers in the fields of natural language processing, computer science, language technology, computational linguistics, computer vision, machine learning, AI, robotics, linguistics, cognitive science, and related fields to participate in an open, community-building forum where we discuss current work, challenges, and future directions related to multi-modality, interaction, embodiment, language technology, and AI in general.

In particular, we would like to explore the following questions: (i) what modalities are required for human linguistic and non-linguistic behaviour, how do they interact, and how can they be represented and their data collected for computational use; (ii) how can modalities be modelled and learned in artificial systems; (iii) to what degree do current systems use different modalities, and are they capable of multi-modal inference; (iv) what are the practical needs of applications of multi-modal systems: are all modalities always required, and do end-users want fully embodied agents; (v) what are the ethical considerations related to collecting multi-modal data, training on it, and the subsequent use of multi-modal systems?

We particularly encourage contributions on the following and related topics:

  • Grounded language understanding and generation
  • Multi-modal or embodied interaction
  • Incremental or online learning
  • Low resource learning and adaptation
  • Human-AI interaction
  • Multimodal dialogue
  • Interactive task learning
  • Multi-modal data collection
  • Ethical considerations for building and using AI systems

We foresee an open, interactive workshop with plenty of time for discussion, complemented with invited talks and presentations of ongoing or completed research.

Submissions

Submissions of up to 2 pages (excluding references) should follow the ACL formatting template (https://2023.aclweb.org/calls/style_and_formatting/), should be in English, and should contain full contact information for the presenter(s). Authors are also encouraged to submit any other information and materials that they would like to discuss and share with the community on these topics.

Please upload your submission (zipped in a file named name.surname.zip) together with your full contact details (names of authors, presenting author, affiliation(s), physical address, email and personal website) at https://sigmoid.flov.gu.se/index.php/s/cHEPCyncm99d2S6
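
As an illustration only (not part of the official submission instructions), a minimal Python sketch of packaging a submission as name.surname.zip is given below; the file names jane.doe and abstract.pdf are placeholders.

    # Hypothetical example: package a submission as name.surname.zip.
    # "jane.doe" and "abstract.pdf" are placeholders, not required names.
    import zipfile

    author = "jane.doe"           # replace with your own name.surname
    files = ["abstract.pdf"]      # abstract plus any supplementary materials

    with zipfile.ZipFile(f"{author}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            zf.write(path)        # add each file to the archive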

The abstracts will be published on the workshop webpage.

Registration

Please register for SLTC 2024 to attend the workshop at https://www.trippus.net/sltc2024_delegate

Workshop organisers

  • Mattias Appelgren (contact): https://www.gu.se/om-universitetet/hitta-person/mattiasappelgren
  • Simon Dobnik: https://www.gu.se/om-universitetet/hitta-person/simondobnik