CFP: Advanced Robotics Special Issue on Multimodal Processing and Robotics for Dialogue Systems

Ryuichiro Higashinaka
Thu, Nov 17, 2022 1:53 PM

Dear all,

It is my great pleasure to announce that we are organizing a special issue on "Multimodal Processing and Robotics for Dialogue Systems" in the journal Advanced Robotics (Taylor & Francis).
Please consider submitting your original paper to this special issue. The deadline is 31 Jan. 2023.
https://think.taylorandfrancis.com/special_issues/advanced-robotics-multimodal-processing/?utm_source=TFO&utm_medium=cms&utm_campaign=JPG15743
https://www.rsj.or.jp/content/files/pub/ar/CFP/CFP_37_21.pdf

I would also appreciate it if you could help distribute the CFP and encourage your colleagues to make submissions.

Best regards,
Ryuichiro Higashinaka
on behalf of guest editors

/////////////////////////////////////////////////////////////////////////
[Call for Papers]
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems

Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)

Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 Jan 2023

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, there has been remarkable progress in spoken dialogue system technology that allows systems to converse with users through human-like utterances. In the future, such dialogue systems are expected to support our daily activities in various ways. However, dialogue in daily activities is more complex than that with smart speakers; even with current spoken dialogue technology, it is still difficult to maintain a successful dialogue in various situations. For example, in customer service through dialogue, operators need to respond appropriately to the different ways of speaking and the requests of various customers. In such cases, we humans can adjust our manner of speaking to the type of customer and can carry out the dialogue successfully by using not only our voice but also our gaze and facial expressions.

This type of human-like interaction is far beyond the capability of existing spoken dialogue systems. Humanoid robots have the potential to realize such interaction, because they can recognize not only the user's voice but also facial expressions and gestures using various sensors, and can express themselves in various ways, such as through gestures and facial expressions, using their bodies. These many means of expression have the potential to sustain dialogue in a manner that conventional dialogue systems cannot.

The combination of such robots and dialogue systems can greatly expand the possibilities of dialogue systems while at the same time presenting a variety of new challenges. Various research and development efforts are currently underway to address these new challenges, including the "Dialogue Robot Competition" at IROS2022.
In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, their applications, and fundamental research. Prospective contributed papers are invited to cover, but are not limited to, the following topics on multimodal dialogue systems and robots:

*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation methods
*Ethics
*Dialogue systems and robots for competition

Submission:
The full-length manuscript (either a PDF or MS Word file) should be submitted by 31 January 2023 to the office of Advanced Robotics, the Robotics Society of Japan, through the journal's online submission system (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available on the journal's website.


Ryuichiro Higashinaka, Ph.D.
Professor, Graduate School of Informatics, Nagoya University
Nagoya University #391 South, IB Building
Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
Phone/Fax: +81-(0)52-789-5875
Email: higashinaka@i.nagoya-u.ac.jp

Ryuichiro Higashinaka
Thu, Dec 29, 2022 6:15 PM

Dear all,

This is a reminder about the special issue on "Multimodal Processing and Robotics for Dialogue Systems" in the journal Advanced Robotics (Taylor & Francis).
Please consider submitting your original paper to this special issue. The deadline is 31 Jan. 2023.
https://think.taylorandfrancis.com/special_issues/advanced-robotics-multimodal-processing/?utm_source=TFO&utm_medium=cms&utm_campaign=JPG15743
https://www.rsj.or.jp/content/files/pub/ar/CFP/CFP_37_21.pdf

I would also appreciate it if you could help distribute the CFP and encourage your colleagues to make submissions.

Best regards,
Ryuichiro Higashinaka
on behalf of guest editors

/////////////////////////////////////////////////////////////////////////
[Call for Papers]
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems

Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)

Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 Jan 2023

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, there has been remarkable progress in spoken dialogue system technology that allows systems to converse with users through human-like utterances. In the future, such dialogue systems are expected to support our daily activities in various ways. However, dialogue in daily activities is more complex than that with smart speakers; even with current spoken dialogue technology, it is still difficult to maintain a successful dialogue in various situations. For example, in customer service through dialogue, operators need to respond appropriately to the different ways of speaking and the requests of various customers. In such cases, we humans can adjust our manner of speaking to the type of customer and can carry out the dialogue successfully by using not only our voice but also our gaze and facial expressions.

This type of human-like interaction is far beyond the capability of existing spoken dialogue systems. Humanoid robots have the potential to realize such interaction, because they can recognize not only the user's voice but also facial expressions and gestures using various sensors, and can express themselves in various ways, such as through gestures and facial expressions, using their bodies. These many means of expression have the potential to sustain dialogue in a manner that conventional dialogue systems cannot.

The combination of such robots and dialogue systems can greatly expand the possibilities of dialogue systems while at the same time presenting a variety of new challenges. Various research and development efforts are currently underway to address these new challenges, including the "Dialogue Robot Competition" at IROS2022.
In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, their applications, and fundamental research. Prospective contributed papers are invited to cover, but are not limited to, the following topics on multimodal dialogue systems and robots:

*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation methods
*Ethics
*Dialogue systems and robots for competition

Submission:
The full-length manuscript (either a PDF or MS Word file) should be submitted by 31 January 2023 to the office of Advanced Robotics, the Robotics Society of Japan, through the journal's online submission system (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available on the journal's website.

Note that the word count includes references; captions and author bios are not included. For special issues, longer papers can be accepted with the editors' approval. Please contact the editors before submission if your manuscript exceeds the word limit.


Ryuichiro Higashinaka, Ph.D.
Professor, Graduate School of Informatics, Nagoya University
Nagoya University #391 South, IB Building
Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
Phone/Fax: +81-(0)52-789-5875
Email: higashinaka@i.nagoya-u.ac.jp
