Dear colleagues,
We are excited to announce the Vision Language Models For All: Building
Geo-Diverse and Culturally Aware Vision-Language Models
<https://sites.google.com/view/vlms4all/> (VLMs-4-All) workshop at CVPR 2025,
featuring an outstanding lineup of nine speakers. This workshop will bring
together researchers to discuss geo-diverse and culturally aware
vision-language models.
We invite you to:
1. Participate in our two challenges. We host two challenges on the
recently developed CulturalVQA <http://culturalvqa.org> and GlobalRG
<http://globalrg.github.io> benchmarks: the former evaluates the cultural
understanding of VLMs, while the latter evaluates the cultural diversity of
VLMs' outputs. Both challenges will be hosted on Hugging Face. The results
of the challenges and the winning entries will be presented at the workshop.
Participate here: https://sites.google.com/view/vlms4all/challenges
Start date: March 14, 2025
End date: April 15, 2025
2. Submit papers of up to 4 pages that identify effective evaluation tasks,
benchmarks, and metrics for assessing cultural awareness and alignment in
VLMs, or that propose new methodologies for improving the cultural awareness
of AI systems. For more details, see the Call for Papers:
https://sites.google.com/view/vlms4all/call-for-papers
Useful links:
- Workshop Website: https://sites.google.com/view/vlms4all/
- Call for Papers: https://sites.google.com/view/vlms4all/call-for-papers
- Challenges: https://sites.google.com/view/vlms4all/challenges
We encourage you to contribute and foster meaningful discussions in this
important area. Please feel free to share this with colleagues who may be
interested. We look forward to your participation.
For any workshop-related queries, please email vlms4all@gmail.com.
For inquiries about the CulturalVQA challenge, contact culturalvqa@gmail.com.
For GlobalRG-related questions, reach out to globalrg@gmail.com.
Yours sincerely,
Workshop organizers