DMR 2025
The 6th International Workshop on Designing Meaning Representations
To be held in beautiful Prague, Czechia, August 4-5, 2025, following ACL
2025 in Vienna, Austria.
DMR is accepting commitments for papers with reviews from ACL Rolling
Review (ARR)! Submit your papers and reviews on OpenReview at the following
link by May 30, 2025 (AOE):
https://openreview.net/group?id=DMR/2025_ARR_Commitment
DMR 2025 invites submissions of long and short papers on original work on
the design, processing, and use of meaning representations. While
deep learning methods have led to many breakthroughs in practical natural
language applications, there is still a sense among many NLP researchers
that we have a long way to go before we can develop systems that can
actually “understand” human language and explain the decisions they make.
Indeed, “understanding” natural language entails many different human-like
capabilities, including but not limited to the ability to track
entities in a text, understand the relations between these entities, track
events and their participants described in a text, understand how events
unfold in time, and distinguish events that have actually happened from
events that are planned or intended, are uncertain, or did not happen at
all. We believe a critical step in achieving natural language understanding
is to design meaning representations for text that have the necessary
meaning “ingredients” that help us achieve these capabilities. Such meaning
representations can also potentially be used to evaluate the compositional
generalization capacity of deep learning models.
There has been a growing body of research devoted to the design,
annotation, and parsing of meaning representations in recent years. In
particular, formal meaning representation frameworks such as Minimal
Recursion Semantics (MRS) and Discourse Representation Theory (DRT) were
developed with the goal of supporting logical inference in reasoning-based
AI systems and are therefore readily translatable into first-order logic,
while other frameworks such as Abstract Meaning Representation (AMR),
Uniform Meaning Representation (UMR), the Tectogrammatical Representation
(TR) of the Prague Dependency Treebanks, and Universal Conceptual Cognitive
Annotation (UCCA) put more emphasis on representing core predicate-argument
structure. The automatic parsing
of natural language text into these meaning representations and the
generation of natural language text from these meaning representations are
also very active areas of research, and a wide range of technical
approaches and learning methods have been applied to these problems.
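To make the contrast concrete, the sentence "The boy wants to go" might be annotated in AMR's PENMAN notation roughly as follows. This is an illustrative sketch only: the PropBank-style roleset labels (want-01, go-02) are assumptions for the example, not an official annotation.

```
# ::snt The boy wants to go.
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
```

Note the reentrancy: the variable b fills :ARG0 of both predicates, capturing that the wanter and the goer are the same entity. A logic-oriented framework in the MRS/DRT family would instead aim at a reading translatable into first-order logic, along the lines of ∃b. boy(b) ∧ want(b, go(b)).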
DMR aims to bring together researchers who are producers and consumers of
meaning representations and, through their interaction, to develop a deeper
understanding of the key elements of meaning representations that are most
valuable to the NLP community. The workshop will provide an
opportunity for meaning representation researchers to present new
frameworks and to critically examine existing frameworks with the goal of
using their findings to inform the design of next-generation meaning
representations. One particular goal is to understand the relationship
between distributed meaning representations trained on large datasets with
neural models and the symbolic meaning representations that are
carefully designed and annotated by NLP researchers, with an aim of gaining
a deeper understanding of areas where each type of meaning representation
is the most effective.
The workshop solicits papers that address one or more of the following
topics:
- Development and annotation of meaning representations;
- Challenges and techniques in leveraging meaning representations for
  downstream applications, including neuro-symbolic approaches;
- The relationship between symbolic meaning representations and
  distributed semantic representations;
- Issues in applying meaning representations to multilingual settings and
  lower-resourced languages;
- Challenges and techniques in automatic parsing of meaning
  representations;
- Challenges and techniques in automatically generating text from meaning
  representations;
- Meaning representation evaluation metrics;
- Cross-framework comparison of meaning representations and their formal
  properties;
- Any other topics that address the design, processing, and use of meaning
  representations.
Important dates:
- Direct submission deadline: April 28, 2025
- ARR commitment deadline: May 30, 2025
- Notification of acceptance: June 16, 2025
- Camera-ready papers due: July 1, 2025
- Workshop dates: August 4-5, 2025
All deadlines are 11:59pm UTC-12 ("anywhere on Earth").
Other Questions
If you have any questions, please feel free to contact the program
co-chairs (dmr.workshop.2025@gmail.com) or visit the workshop website
(dmr2025.github.io).