CfP: HCII 2024 session on Behavioromics

AL
Andy Lücking
Fri, Sep 29, 2023 6:49 AM

Apologies for cross-posting

Call for papers: Behavioromics (see below how to contribute)

HCI International 2024 Session: "Semantic, artificial and
computational interaction studies: Towards a behavioromics of
multimodal communication"

A common theme that has emerged across different disciplines,
especially in recent years, is the transition from monomodal to
multimodal domains. This includes, for example, multimodal learning
(learning representations from multiple data modalities), multimodal
communication (the study of gestures, facial expressions, head
movements, and the like in addition to speech), and multimodal logic
(that is, modal logic with more than one primitive operator). While
the predominant use of "multimodal communication" (and its relatives)
refers to human-human interaction, other domains can also be conceived
in terms of interacting modes. The fields of HCI and HRI, with their
focus on interfaces that are natural for humans, have paid attention to
multimodality from an early stage. It is also an important topic in
conversation analysis and cognitive science, and is beginning to
percolate into information science and theoretical linguistics. At the
same time, due to the digital turn, work on multimodality is being
expanded by data analytics using machine learning tools for modelling,
detecting, analysing and simulating any form of communication.
Relatedly, but independently, symbol grounding approaches have been
multimodally extended, and advances in computer vision and multimodal
AI are prolific. However, while these fields share a common empirical
domain, there is little interaction between them. This session aims to
bring these branches together, a collaborative endeavour that we call
"behavioromics".

The session is open to, but not restricted to, topics such as the following:

  • multimodal generative AI
  • predictive multimodal computing
  • dialogue generation, dialogue systems and dialogue semantics
  • social robot interaction and adaptive behaviour
  • monitoring and processing in interaction
  • (big) multimodal data
  • multimodal data analytics
  • automatic multimodal annotation beyond written text
  • virtual reality and augmented reality applications
  • simulation-based learning in virtual environments
  • representation schemes for multimodal communication
  • verbal and non-verbal social signalling in humans and non-humans
  • the role of multimodality in 4E cognition
  • notions and theories of multimodality
  • multimodality in logics

We want to emphasize that conceptual contributions are highly welcome!

The conference session aims to provide a platform to bring together
computer scientists, linguists, psychologists and researchers in
related fields who are working on multimodal interaction. We are all
working on almost the same topic from different angles, but there are
far too few opportunities to interact. Yet sharing and seeing what
others are doing is crucial for the methodological, empirical and
theoretical challenges outlined above. The planned session will
support this collaboration.

The conference session will take place virtually in conjunction with
HCI International 2024 (https://2024.hci.international/).

Full papers will be published as part of the conference proceedings by
Springer.

If you want to contribute, please send a message to any of the
organizers by 26 October 2023:

Alexander Mehler (mehler@em.uni-frankfurt.de)
Andy Lücking (luecking@em.uni-frankfurt.de)
Alexander Henlein (henlein@em.uni-frankfurt.de)

Important dates:

  • by 26 October 2023: send an email message to one of the session organizers
  • 30 October 2023: upload abstract (up to 500 words)
  • 02 February 2024: full paper is due
  • 29 June--04 July 2024: HCI International conference (virtual)

Session organizers:

Alexander Mehler (https://www.texttechnologylab.org/team/alexander-mehler/)
Andy Lücking (https://www.texttechnologylab.org/team/andy-luecking/)
Alexander Henlein (https://www.texttechnologylab.org/team/alexander-henlein/)
