[CFP] IJCNN 2025 Special Session: Explainable Deep Neural Networks for Responsible AI (DeepXplain 2025)

Francielle Vargas
Sun, Dec 29, 2024 10:48 AM

*** First Call for Papers ***

We invite paper submissions to Explainable Deep Neural Networks for
Responsible AI: Post-Hoc and Self-Explaining Approaches (DeepXplain 2025),
a special session at IJCNN 2025 dedicated to innovative methodologies for
improving the interpretability of Deep Neural Networks (DNNs) while
addressing fairness and bias mitigation.

Website: https://deepxplain.github.io/

Important Dates:

Submission link: https://cmt3.research.microsoft.com/IJCNN2025/

Submission deadline: January 15, 2025

Notification date: March 15, 2025

Camera-ready submission: May 1, 2025

Contributions

This special session aims to foster interdisciplinary collaboration,
promote the ethical design of AI systems, and encourage the development of
benchmarks and datasets for explainability research. Our goal is to advance
both post-hoc and intrinsic interpretability approaches, bridging the gap
between the high performance of deep neural networks and their
transparency. By doing so, we seek to enhance human trust in these models
and mitigate the risks of negative social impacts.

Topics of interest include, but are not limited to:

Theoretical advancements in post-hoc explanation methods (e.g., LIME,
SHAP, Grad-CAM) for DNNs (a brief Grad-CAM sketch follows this list).

Development of inherently interpretable architectures using
self-explaining mechanisms, such as attention-based or saliency-based
models, prototype networks, and SENNs (Self-Explaining Neural Networks).

Post-hoc and self-explaining methods for Large Language Models (LLMs).

Application-driven explainability insights, particularly in Natural
Language Processing and Computer Vision.

Ethical evaluations of DNN-based AI models, with a focus on reducing bias
and negative social impact.

Methods and metrics for improving interpretability and fairness in DNNs.

Ethical discussions about the social impact of non-transparent AI models.

Datasets and benchmarking tools for explainability.

Explainable AI in critical applications: healthcare, governance,
misinformation, hate speech, etc.
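For authors new to post-hoc explanation, the snippet below is a minimal,
self-contained sketch of the Grad-CAM computation mentioned in the topics
above, written in Python with PyTorch and torchvision. It is illustrative
only and not part of the submission requirements; the untrained resnet18,
the choice of layer4 as the target layer, and the random input are
placeholder assumptions.

    # Minimal Grad-CAM sketch (illustrative; model, layer, and input are placeholders).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None).eval()   # any CNN works; untrained here
    target_layer = model.layer4                    # last convolutional block

    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out                 # feature maps from the target layer

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0]           # gradient of the score w.r.t. those maps

    target_layer.register_forward_hook(fwd_hook)
    target_layer.register_full_backward_hook(bwd_hook)

    x = torch.randn(1, 3, 224, 224)                # placeholder input image
    logits = model(x)
    logits[0, logits.argmax()].backward()          # backprop the top-class score

    # Weight each feature map by its average gradient, combine, and rectify.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

The resulting cam tensor can be overlaid on the input image as a heatmap to
indicate which regions most influenced the predicted class.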

Submission Information

We welcome submissions of academic papers (both long and short) across the
spectrum of theoretical and practical work, including research ideas,
methods, tools, simulations, applications or demonstrations, practical
evaluations, position papers, and surveys. Submissions must be written in
English, adhere to the IJCNN 2025 formatting guidelines, and be submitted
as a single PDF file.

Organizers

Francielle Vargas https://franciellevargas.github.io/, University of
São Paulo, Brazil

Roseli Romero https://sites.icmc.usp.br/rafrance/, University of São
Paulo, Brazil

Jackson Trager https://www.jacksonptrager.com/, University of Southern
California, USA
