Hello Researchers,
We are pleased to announce the launch of the **Commonsense Persona-grounded Dialogue Challenge (CPDC 2025)** — an opportunity for researchers and practitioners in NLP to advance the state of the art in persona-grounded, task-oriented dialogue systems.
Following the success of CPDC 2023, this year’s edition introduces a more complex and immersive game-world setting in which conversational agents must produce coherent, persona-consistent responses while also performing task-oriented actions informed by contextual and commonsense knowledge. The challenge builds on insights from PeaCoK (Persona Commonsense Knowledge for Consistent and Engaging Narratives), an ACL 2023 Outstanding Paper Award winner, which forms the foundation for the persona modelling in this year’s tasks. Join the challenge [here](https://www.aicrowd.com/challenges/commonsense-persona-grounded-dialogue-challenge-2025).
🔍 **Challenge Overview**
Participants may choose from three tasks:

- **Task 1: Task-Oriented Dialogue Response Generation** Generate persona-consistent dialogue and execute function calls based on goals, roles, and context. Evaluates both response quality and action accuracy.
- **Task 2: Commonsense Dialogue Response Generation** Produce natural, context-aware responses grounded in persona, long dialogue history, and shared world knowledge.
- **Task 3: A Hybrid Track evaluating performance across both tasks** Submit once to be evaluated on both Tasks 1 and 2. Build models that balance conversational fluency with task execution.
Each task features two tracks:

- **GPU Track**: Full flexibility to train and deploy your own models using any dataset
- **API Prompt Engineering Track**: Compete using the same API and models; only prompts may vary
Participants may train models using any dataset of their choice; a small reference dataset is also provided for Tasks 1 and 2. Starter kits and baseline models are available to facilitate onboarding. Explore the tasks in [more detail](https://www.aicrowd.com/challenges/commonsense-persona-grounded-dialogue-challenge-2025).
🧠 **Key Features**

- Rich, persona-driven dialogue scenarios grounded in game environments
- Role-specific knowledge, function definitions, and commonsense reasoning
- Unified evaluation framework across all tasks
- Leaderboards for each task and track, with independent prize pools
💰 **Total Prize Pool: $20,000**, distributed across six leaderboards
📅 **Timeline**

- Round 1: 20 April 2025
- Round 2: 25 May 2025
- Final Deadline: 30 June 2025
We encourage researchers interested in conversational AI, dialogue modelling, and LLM-based reasoning to take part. Full details, rules, and resources are available on the [challenge page](https://www.aicrowd.com/challenges/commonsense-persona-grounded-dialogue-challenge-2025).
We look forward to your participation.
Warm regards,