Call For Papers: 2025 ICDM Multimodal Search & Recommendations Workshop (MMSR '25)

Aditya Nandkishore Chichani
Thu, Aug 7, 2025 1:07 AM


Multimodal search and recommendation (MMSR) systems are at the forefront of
modern information retrieval, designed to integrate and process diverse
data types such as text, images, audio, and video within a unified
framework. This integration enables more accurate and contextually relevant
search results and recommendations, significantly enhancing user
experiences. For example, e-commerce platforms have started supporting
product searches through images to provide a streamlined shopping
experience. Recent innovations in LLMs have extended their capabilities to
handle multimodal inputs, allowing for a deeper and more nuanced
understanding of content and user preferences.

The deadline for paper submission is Sep 1, 2025 (11:59 P.M. AoE)

The special theme of MMSR '25 is "From Data to Discovery: Using Multimodal
Models for Smarter Search and Recommendations."

MMSR '25 is a half-day, in-person workshop taking place on November 12,
2025 in conjunction with ICDM 2025.


Important Details

Paper submission deadline - Sep 1, 2025 (11:59 P.M. AoE,
http://www.worldtimeserver.com/time-zones/aoe/)

Notification of acceptance - Sep 15, 2025

MMSR ’25 Workshop - Nov 12, 2025

Venue: ICDM 2025, Washington DC, USA
(https://www3.cs.stonybrook.edu/~icdm2025/)

Website: https://icdm-mmsr.github.io/

Organizers: Aditya Chichani, Surya Kallumadi, Tracy Holloway King, Yubin
Kim, Andrei Lopatenko

We invite high-quality contributions representing original research. All
submissions will be peer reviewed (single-blind) by an international
program committee of researchers of high repute. Accepted submissions will
be presented at the workshop.

Topics

Topics of interest include, but are not limited to:

From Data to Discovery: Using Multimodal Models for Smarter Search and
Recommendations (2025 Special Theme)

  - Strategies for building scalable multimodal discovery engines.
  - Lessons learned from productionizing MMSR models in real-world applications.
  - Handling discovery in cold-start scenarios and sparse multimodal data settings.
  - Balancing discovery and relevance in multimodal recommendation systems.
  - Evaluating business impact and user satisfaction of multimodal discovery systems.
  - Emerging trends in using LLMs for multimodal data exploration and discovery.
  - Personalization strategies tailored to multimodal discovery journeys.
  - Bridging research and practical deployment: overcoming challenges in scaling multimodal models for search and recommendation.

Cross-modal retrieval techniques

  - Efficiently indexing and retrieving multimodal data.
  - Handling large-scale cross-modal data.
  - Developing metrics to measure similarity across different modalities.
  - Zero-shot and few-shot retrieval across unseen modalities.
  - Adapting retrieval architectures (e.g., dual encoders vs. fusion models) for different multimodal tasks.

Applications of MMSR to Verticals (e.g., E-commerce, Healthcare, Real
Estate)

  - MMSR for image-based product search in e-commerce.
  - Multimodal conversational agents for healthcare, legal, and retail industries.
  - Augmented reality (AR) and multimodal discovery for shopping experiences.
  - Customer service optimization through multimodal search interfaces (e.g., support chat, help centers).
  - Personalized multimodal travel planning and recommendation systems.
  - Video+text based multimodal recommendations in media and entertainment domains.

User-centric design principles for MMSR interfaces

  - Designing user-friendly interfaces that support multimodal search.
  - Methods for evaluating the usability of MMSR systems.
  - Ensuring MMSR interfaces are accessible to users with disabilities.
  - Visualizations and interactive feedback mechanisms for multimodal search refinement.
  - A/B testing strategies specific to multimodal search UI/UX improvements.

Ethical and Privacy Considerations of MMSR

  - Identifying and mitigating biases in multimodal algorithms.
  - Ensuring transparency in how multimodal results are generated and presented.
  - Approaches for obtaining and managing user consent for using user data.
  - User perception studies of trust and explainability in multimodal search systems.
  - Privacy-preserving multimodal modeling: federated learning and differential privacy for MMSR.

Modeling for MMSR

  - Multimodal representation learning.
  - Utilizing pre-trained multimodal LLMs.
  - Dimensionality reduction techniques to manage multimodal complexity.
  - Fine-tuning pre-trained vision-language models.
  - Developing and standardizing metrics to evaluate the performance of MMSR models.
  - Alignment challenges in multimodal embeddings across diverse modalities.

Submission Instructions:

All papers will be peer reviewed (single-blind) by the program committee
and judged by their relevance to the workshop, especially to the main
themes identified above, and by their potential to generate discussion.
Submissions must describe work that is not previously published, not
accepted for publication elsewhere, and not currently under review
elsewhere. All submissions must be in English. We do not accept anonymous
submissions.

Please note that at least one of the authors of each accepted paper must
register for the workshop and present the paper. All accepted workshop
papers will be published in the dedicated ICDMW proceedings published by
the IEEE Computer Society Press. Non-archival submissions are not allowed,
i.e., all accepted papers will be included and published in the proceedings.

Long papers: 6-8 pages excluding references

Short papers: 3-5 pages excluding references

Submissions to MMSR '25 should be made through the workshop's submission
portal: https://wi-lab.com/cyberchair/2025/icdm25/scripts/submit.php

E-mail: icdm-mmsr-organizers@googlegroups.com
