WSDM Workshop on Benchmarking Causal Models (CausalBench'26)


Schedule

The workshop will take place during WSDM'26, on February 26, 2026, in Boise, Idaho, USA.

Overview

The WSDM Workshop on Benchmarking Causal Models (CausalBench) aims to promote scientific collaboration, reproducibility, and fairness in causal learning research by providing a dedicated venue for work on benchmarking data, algorithms, models, and metrics for causal learning. CausalBench addresses the growing need for unified, publicly available, and configurable benchmarks that support causal discovery, causal effect estimation, and more general causal inference and learning research problems (e.g., A/B testing, experimental design, mechanistic interpretability, causal reasoning, and causal RL) across diverse applications, such as web search, data mining, public health, and sustainability.

Standardized evaluation has historically driven progress in machine learning, as seen with the UCI ML and KDD repositories, by encouraging collaborative research and reproducible science. The causal learning community now faces similar challenges: a lack of unified benchmark datasets, algorithms, and metrics for reproducible evaluation. The CausalBench workshop aims to

  • help identify existing datasets and metrics for causal learning, integrate them into standardized evaluation protocols, and encourage coverage, calibration, and uncertainty reporting for causal estimates,
  • develop ontologies for benchmarking, improving transparency and collaboration,
  • address challenges of incomplete causal knowledge and integration of heterogeneous datasets, and
  • help define evaluation standards to scientifically quantify progress in causal learning.

The workshop will bring together researchers and practitioners to discuss new algorithms, datasets, and evaluation methodologies that help establish trust in causal learning innovation. Our goal is to foster discussion and community practices that make evaluation more transparent and comparable across different causal tasks—e.g., clarifying task taxonomies, surfacing assumption-linked metrics, and sharing accessible benchmark resources and artifacts.

By encouraging open exchange on datasets and metrics, the workshop aims to catalyze incremental, evidence-based improvements to causal evaluation.

Topics of Interest

CausalBench welcomes submissions in the following research and application areas:

  • Benchmarking and Evaluation: Software frameworks, datasets, standard workflows/pipelines, and metrics for evaluating causal learning algorithms.
  • Algorithmic Advances: Novel causal discovery and causal inference models/algorithms with reproducible benchmarking results.
  • Data and Systems: Open-source platforms for data exchange, (automatic) model evaluation, and reproducing results for any causality-related research problems, e.g., causal inference, causal discovery, causal representation learning, and causal recommendation.
  • Trustworthy AI: Causality-inspired methods, datasets, or metrics for benchmarking any aspect of trustworthiness of AI systems and methods, including interpretability, safety, robustness, bias, and fairness.
  • Applications: Real-world demonstrations of causal benchmarking in domains such as healthcare, finance, sustainability, and social systems, with a particular emphasis on applications in web search and data mining.

Additional thematic sessions (e.g., invited talks, panels) will be held on emerging challenges in causal benchmarking.

Submission Guidelines

  • Submission Site: https://easychair.org/my/conference?conf=causalbench26
  • Format: Submissions must be formatted according to the ACM SIG Proceedings Template double-column format, with a font size no smaller than 9pt.
  • Length: We invite submissions of extended abstracts (2-3 pages, excluding references) and research articles (4-6 pages, excluding references) that align with the workshop's themes.
  • File Type: PDF, maximum file size 10 MB.
  • Review Process: Single-blind review.
  • Accepted Papers: All accepted papers will be presented at the workshop and included in the official WSDM Companion Proceedings Volume.

Artifacts and Reproducibility

Submissions should emphasize reproducibility, benchmark availability, and evaluation methodology. Authors are encouraged to make their code, datasets, experimental setups, and other supporting materials publicly available and to include links to them in the paper. While not required, authors are also encouraged to share relevant artifacts (data, models, metrics, and benchmark runs) on CausalBench's repositories.

Important Dates

All deadlines are at 11:59 PM (Anywhere on Earth) unless otherwise noted.

Stage                       Date
Paper Submission Deadline   Nov 13, 2025
Author Notification         Dec 18, 2025
Camera-ready Deadline       TBD
Workshop Date               Feb 26, 2026

Organizers

General Chairs

K. Selçuk Candan, Arizona State University, candan@asu.edu
Huan Liu, Arizona State University, huanliu@asu.edu

Program Chairs

Ruocheng Guo, Intuit AI Research, ruocheng_guo@intuit.com
Paras Sheth, Amazon, parshet@amazon.com

Web Chair

Ahmet Kapkiç, Arizona State University, akapkic@asu.edu

Publicity Chair

Pratanu Mandal, Arizona State University, pmandal5@asu.edu

Duplicate Submissions and Novelty Requirements

All submissions will undergo a rigorous peer-review process to ensure quality and originality. Submissions must present original work that is not under review elsewhere; concurrent submission to other venues is not permitted. Papers must cite prior work appropriately, including the authors' own related publications, and must differ substantially from earlier workshop papers by the same authors.

Inclusion and Diversity

CausalBench embraces the values of diversity and inclusion in writing, participation, and representation. Authors should use inclusive language and examples that avoid stereotyping or marginalization of any group.

Conflicts of Interest

Authors must declare any conflicts of interest with organizers or reviewers (e.g., recent collaborations, shared affiliations, advisor/advisee relationships). Submissions with incorrect conflict declarations are subject to rejection.

ACM Publications Policy on Research Involving Human Participants and Subjects

As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects.

Policy on Authorship Requirements and GenAI

We follow the ACM policy on authorship requirements. Specifically on the use of generative AI tools and technologies, the guidelines note that: "The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work (including text, tables, graphs, code, data, citations, etc.). If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work."

Contact Information

For questions or clarifications, please contact: wsdm26@causalbench.org

Acknowledgements

We thank all the contributors and the community for their continued support and feedback in making CausalBench a reliable and valuable resource for causal learning research. This workshop is funded by NSF Grant 2311716, "CausalBench: A Cyberinfrastructure for Causal-Learning Benchmarking for Efficacy, Reproducibility, and Scientific Collaboration"; NSF Grants #2230748, "PIRE: Building Decarbonization via AI-empowered District Heat Pump Systems", and #2412115, "PIPP Phase II: Analysis and Prediction of Pandemic Expansion (APPEX)"; and USACE #GR40695, "Designing nature to enhance resilience of built infrastructure in western US landscapes".