Final CFP: Multimodal Semantic Representations (MMSR 2021)

Kenneth Lai
Wed, Mar 17, 2021 10:30 PM

FINAL CALL FOR PAPERS - DEADLINE EXTENDED

Beyond Language: Multimodal Semantic Representations (MMSR 2021)

co-located with IWCS 2021

June 14-18, 2021

https://mmsr-workshop.github.io/

Important Dates

  • March 26, 2021: Submissions due (Deadline Extended)
  • April 16, 2021: Notification of acceptance decisions
  • May 7, 2021: Camera-ready papers due

The demand for more sophisticated, natural human-computer and human-robot
interaction is rapidly increasing as users become more accustomed to
conversation-like exchanges with AI and NLP systems. Such interactions
require not only the robust recognition and generation of expressions
through multiple modalities (language, gesture, vision, action, etc.), but
also the encoding of situated meaning.

This workshop intends to bring together researchers who aim to capture
elements of multimodal interaction such as language, gesture, gaze, and
facial expression with formal semantic representations. We provide a space
for both theoretical and practical discussion of how linguistic
co-modalities support, inform, and align with “meaning” found in the
linguistic signal alone.

We solicit papers on multimodal semantic representation, including but not
limited to the following topics:

  • Examination and interpretation of co-gestural speech and co-speech
    gesture;
  • Semantic frameworks for individual linguistic co-modalities (e.g.,
    gaze, facial expression);
  • Formal representation of situated conversation and embodiment;
  • Design and annotation of multimodal meaning representation (including
    extensions of existing semantic frameworks);
  • Challenges in cross-lingual or cross-cultural multimodal
    representation;
  • Challenges in semantic parsing of multimodal representation;
  • Challenges in aligning co-modalities in formal representation and/or
    NLP;
  • Discussion of criteria for evaluation of multimodal semantics;
  • Position papers on meaning, language, and multimodality;
  • Simulated agents that embody multimodal representations of common
    ground.

Submission Information

Two types of submissions are solicited: long papers and short papers. Long
papers should describe original research and must not exceed 8 pages,
excluding references. Short papers (typically system or project
descriptions, or ongoing research) must not exceed 4 pages, excluding
references. Both types will be published in the workshop proceedings and in
the ACL Anthology. Accepted papers will receive one additional page in the
camera-ready version.

We strongly encourage students to submit to the workshop and will consider
a student session depending on the number of submissions.

Papers should be formatted using the IWCS/ACL style files, available at:
https://iwcs2021.github.io/download/iwcs2021-templates.zip

Papers should be submitted in PDF format via the Softconf system at the
following link: https://www.softconf.com/iwcs2021/MMSR1/

Best regards,
Lucia Donatelli, Nikhil Krishnaswamy, Kenneth Lai, and James Pustejovsky
MMSR 2021 organizers
Email: mmsr.workshop.2021@gmail.com
Web page: https://mmsr-workshop.github.io/
