Workshop on People in Vision, Language And the Mind (P-VLAM) at LREC 2022, 3rd CfP

Patrizia Paggio
Mon, Apr 4, 2022 7:03 AM

*** With apologies for multiple postings ***

Third Call for Papers

P-VLAM: People in Vision, Language And the Mind

Workshop to be held at the 13th Edition of the Language Resources and Evaluation Conference, Palais du Pharo, Marseilles, France, June 2022.

https://p-vlam.github.io

We invite paper submissions for the second workshop on People in Vision, Language, and the Mind (formerly ONION 2020), which addresses how people, including their bodies, faces and mental states, are described in text with associated images and modelled in computational and cognitive terms. We are interested in contributions from diverse areas, including language generation, language analysis, cognitive computing, affective computing and multimodal (especially vision and language) modelling.

Detailed Workshop goals

The workshop will provide a forum to present and discuss current research on multimodal resources, as well as computational and cognitive models, that aim to describe people in terms of their bodies and faces, including their affective state as it is reflected physically. Such models might generate textual descriptions of people, generate images corresponding to descriptions of people, or more generally exploit multimodal representations for different purposes and applications. Knowledge of the way human bodies and faces are perceived, understood and described by humans is key to the creation of such resources and models. The workshop therefore also invites contributions in which the human body and face are studied from a cognitive, neurocognitive or multimodal communication perspective.

Human body postures and faces are being studied by researchers from different research communities, including those working with vision and language modelling, natural language generation, cognitive science, cognitive psychology, multimodal communication and embodied conversational agents. The workshop aims to reach out to all these communities to explore the many different aspects of research on the human body and face, including the resources that such research needs, and to foster cross-disciplinary synergy.

The ability to adequately model and describe people in terms of their body and face is relevant to a variety of language technology applications, e.g., conversational agents and interactive narrative generation, as well as forensic applications in which people need to be depicted or their images generated from textual or spoken descriptions. Such systems need resources and models in which images of human bodies and faces are coupled with linguistic descriptions, so the research needed to develop them sits at the interface between vision and language research. At the same time, this line of research raises important ethical questions, both from the perspective of data collection methodology and from the perspective of bias detection and avoidance in models trained to process and interpret human attributes.
By focusing on the modelling and processing of people, and bringing in relevant insights from the cognitive and neurocognitive fields, the workshop will explore and further develop a particular area within vision and language research.

Relevant topics

We are inviting short and long papers reporting original research, surveys, position papers, and demos. Authors are strongly encouraged to identify and discuss ethical issues arising from their work, insofar as it involves the use of image data or descriptions of people.
Relevant topics include, but are not limited to, the following:
−      Datasets of facial images, as well as body postures, gestures and their descriptions
−      Methods for the creation and annotation of multimodal resources dedicated to the description of people
−      Methods for the validation of multimodal resources for descriptions of people
−      Experimental studies of facial expression understanding by humans
−      Models or algorithms for automatic facial description generation
−      Emotion recognition by humans
−      Multimodal automatic emotion recognition from images and text
−      Subjectivity in face perception
−      Communicative, relational and intentional aspects of head pose and eye-gaze
−      Collection and annotation methods for facial descriptions
−      Coding schemes for the annotation of body posture and facial expression
−      Understanding and description of the human face and body in different contexts, including commercial applications, art, forensics, etc.
−      Modelling of the human body, face and facial expressions for embodied conversational agents
−      Generation of full-body images and/or facial images from textual descriptions
−      Ethical and data protection issues related to the collection and/or automatic description of images of real people
−      Any form of bias in models which seek to make sense of human physical attributes in language and vision.

Important dates

Paper submission deadline:    April 8, 2022
Notification of acceptance:      May 3, 2022
Camera-ready papers:            May 23, 2022
Workshop:                              June 20, 2022

Submission guidelines

Short paper submissions may consist of up to 4 pages of content, while long papers may have up to 8 pages of content. References and appendices do not count towards these page limits.
All submissions must follow the LREC 2022 style files, which are available for LaTeX (preferred) and MS Word and can be retrieved from the following address: https://lrec2022.lrec-conf.org/en/submission2022/authors-kit/

Papers must be submitted digitally, in PDF format, and uploaded through the START online submission system here:

https://www.softconf.com/lrec2022/P-VLAM/

The authors of accepted papers will be required to submit a camera-ready version for inclusion in the final proceedings. Further details will be sent to authors of accepted papers together with the notification of acceptance.

Identify, Describe and Share your LRs!

●      Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences). To continue the efforts initiated at LREC 2014 about “Sharing LRs” (data, tools, web-services, etc.), authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.
●      As scientific work requires accurate citations of referenced work so as to allow the community to understand the whole context and also replicate the experiments conducted by other researchers, LREC 2022 endorses the need to uniquely Identify LRs through the use of the International Standard Language Resource Number (ISLRN, www.islrn.org), a Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

Organisers

Patrizia Paggio, University of Copenhagen and University of Malta, paggio@hum.ku.dk
Albert Gatt, Utrecht University and University of Malta, a.gatt@uu.nl
Marc Tanti, University of Malta, marc.tanti@um.edu.mt

Programme Committee

See the workshop website.


Patrizia Paggio

Professor
University of Malta
Institute of Linguistics and Language Technology
patrizia.paggio@um.edu.mt

Senior Researcher
University of Copenhagen
Centre for Language Technology
paggio@hum.ku.dk
