Causal Inference & Machine Learning: Why now?
December 13th, 2021
WHY-21 @ NeurIPS
Motivation
Machine Learning has received enormous attention from the scientific community due to the successful application of deep neural networks in computer vision, natural language processing, and game-playing (most notably through reinforcement learning). However, a growing segment of the machine learning community recognizes that there are still fundamental pieces missing from the AI puzzle, among them causal inference. This recognition comes from the observation that even though causality is a central component found throughout the sciences, engineering, and many other aspects of human cognition, explicit reference to causal relationships is largely missing in current learning systems.
This entails a new goal of integrating causal inference and machine learning capabilities into the next generation of intelligent systems, paving the way towards higher levels of intelligence and human-centric AI. The synergy goes in both directions: causal inference benefiting from machine learning, and vice versa.
- Current machine learning systems lack the ability to leverage the invariances imprinted by the underlying causal mechanisms to reason about generalizability, explainability, interpretability, and robustness.
- Current causal inference methods, on the other hand, lack the ability to scale up to high-dimensional settings, where current machine learning systems excel.
All indications are that such a marriage can be extremely fruitful. For instance, initial results indicate that understanding and leveraging causal invariances is a crucial ingredient for achieving out-of-distribution generalization (transportability), something humans do much better than state-of-the-art ML systems. Also, causal inference methodology offers a systematic way of combining passive observations and active experimentation, allowing for more robust and stable construction of models of the environment. In the other direction, there is growing evidence that embedding causal and counterfactual inductive biases into deep learning systems can lead to the high-dimensional inferences needed in realistic scenarios.
This 2nd edition of the WHY workshop (1st edition: WHY-19) focuses on bringing together researchers from both camps to initiate principled discussions about the integration of causal reasoning and machine learning perspectives to help tackle the challenging AI tasks of the coming decades. We welcome researchers from all relevant disciplines, including but not limited to computer science, cognitive science, robotics, mathematics, statistics, physics, and philosophy.
Topics
We invite papers that describe methods for answering causal questions with the help of ML machinery, or methods for enhancing ML robustness and generalizability with the help of causal models (i.e., carriers of transparent structural assumptions). Authors are encouraged to identify the specific task their paper aims to solve and where on the causal hierarchy their contributions reside (i.e., associational, interventional, or counterfactual). Topics of interest include but are not limited to the following:
- Algorithms for causal inference and mechanism discovery.
- Causal analysis of biases in data science & fairness analysis.
- Causal and counterfactual explanations.
- Generalizability, transportability, and out-of-distribution generalization.
- Causal reinforcement learning, planning, and imitation.
- Causal representation learning and invariant representations.
- Intersection of causal inference and neural networks.
- Fundamental limits of learning and inference, the causal hierarchy.
- Applications of the 3-layer hierarchy (Pearl Causal Hierarchy).
- Evaluation of causal ML methods (accuracy, scalability, etc.).
- Causal reasoning and discovery in child development.
- Other connections between ML, cognition, and causality.
Invited Speakers
- David Blei (Columbia University)
- Victor Chernozhukov (MIT)
- Tobias Gerstenberg (Stanford University)
- Alison Gopnik (UC Berkeley)
- Aapo Hyvarinen (U Helsinki)
- Thomas Icard (Stanford University)
- Rosemary Nan Ke (Mila)
- Julius von Kügelgen (Max Planck Institute)
- Adèle Ribeiro (Columbia University)
- Uri Shalit (Technion)
- Ricardo Silva (UCL)
- Caroline Uhler (MIT)
Schedule (EST/NY time) [tentative]
Start | End | Description |
---|---|---|
10:00am | 10:10am | Opening Remarks |
Session 1 | | |
10:10am | 10:30am | Calibration, out-of-distribution generalization and a path towards causal representations (Uri Shalit) |
10:30am | 10:50am | Independent mechanism analysis, a new concept? (Julius von Kügelgen) |
10:50am | 11:10am | On the Assumptions of Synthetic Control Methods (David Blei) |
11:10am | 11:25am | Q&A panel |
Session 2 | | |
11:30am | 11:50am | The Road to Causal Programming (Ricardo Silva) |
11:50am | 12:10pm | Causal discovery by generative modelling (Aapo Hyvarinen) |
12:10pm | 12:30pm | Going beyond the here and now: Counterfactual simulation in human cognition (Tobias Gerstenberg) |
12:30pm | 12:45pm | Q&A panel |
Poster Session | | |
12:45pm | 1:45pm | Poster presentations at Gathertown |
Session 3 | | |
1:45pm | 2:05pm | A (topo)logical perspective on causal inference (Thomas Icard) |
2:05pm | 2:25pm | Optimal Design of Interventions (Caroline Uhler) |
2:25pm | 2:45pm | From "What" to "Why": towards causal learning (Rosemary Ke) |
2:45pm | 3:00pm | Q&A panel |
Keynote Speaker | | |
3:00pm | 3:45pm | The logic of Causal Inference (Judea Pearl) |
3:45pm | 4:00pm | Discussion Panel |
Contributed Talks | | |
4:00pm | 4:15pm | Causal Expectation-Maximisation (Marco Zaffalon, Alessandro Antonucci, Rafael Cabañas) |
4:15pm | 4:30pm | On the Adversarial Robustness of Causal Algorithmic Recourse (Ricardo Dominguez Olmedo, Amir Karimi, Bernhard Schölkopf) |
4:30pm | 4:45pm | Scalable Causal Domain Adaptation (Mohammad Ali Javidian, Om Pandey, Pooyan Jamshidi) |
4:45pm | 5:00pm | BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery (Chris Cundy, Aditya Grover, Stefano Ermon) |
Session 4 | | |
5:00pm | 5:20pm | Causal Learning in Children and Computational Models (Alison Gopnik) |
5:20pm | 5:40pm | Effect Identification in Cluster Causal Diagrams (Adèle Ribeiro) |
5:40pm | 6:00pm | Omitted Confounder Bias Bounds for Machine Learned Causal Models (Victor Chernozhukov) |
6:00pm | 6:15pm | Q&A panel |
6:15pm | 6:30pm | Closing Remarks |
Logistics
Registration: The WHY-21 workshop is part of the NeurIPS 2021 conference. All logistics, including registration, are handled by NeurIPS; for details, see the NeurIPS website.
Format: The workshop will include six one-hour sessions. Each session is expected to have two to four speakers and will include invited talks and presentations of accepted papers. We expect to leave ample time for discussion and exchange of ideas. We will also hold a poster session between the morning and afternoon sessions.
Submissions: We will accept paper submissions, including ongoing and more speculative work. Papers should be 6 to 9 pages long (excluding references and appendix), formatted in the NeurIPS style, and anonymized. All accepted papers will be presented as posters during the workshop, and some will be selected for oral presentations. Preference will be given to unexplored tasks and new connections with the current paradigm in the field. Submissions will be lightly reviewed; the overall goal is to provide a forum for open discussion rather than merely an alternative venue for already mature work. We accept submissions through the CMT system.
Important Dates
- Submissions (electronic submission, PDF) due: September 18th, 2021, AoE
- Notifications of acceptance: October 23rd, 2021, AoE
Program Committee
- David Arbour (Adobe Research)
- Sander Beckers (University of Tübingen)
- Guangyi Chen (Tsinghua University)
- Carlos Cinelli (University of Washington)
- Tom Claassen (Radboud University)
- Juan D Correa (Universidad Autónoma de Manizales)
- Alexander D'Amour (Google Brain)
- Frederick Eberhardt (Caltech)
- Robin Evans (University of Oxford)
- Tian Gao (IBM Research)
- Mingming Gong (University of Melbourne)
- Jason S Hartford (University of British Columbia)
- Tom Heskes (Radboud University)
- Duligur Ibeling (Stanford University)
- Thomas Icard (Stanford University)
- Shalmali Joshi (Harvard University)
- Yonghan Jung (Purdue University)
- Edward H Kennedy (Carnegie Mellon University)
- Murat Kocaoglu (Purdue University)
- Kun Kuang (Zhejiang University)
- Daniel Kumor (Purdue University)
- Thuc Duy Le (University of South Australia)
- Kai-Zhan Lee (Columbia University)
- Sanghack Lee (Seoul National University)
- Jiuyong Li (University of South Australia)
- David Lopez-Paz (FAIR)
- Sara Magliacane (IBM Research)
- Daniel Malinsky (Columbia University)
- Krikamol Muandet (Max Planck Institute)
- Yulei Niu (Columbia University)
- Pedro A Ortega (DeepMind)
- David Parkes (Harvard University)
- Emilija Perkovic (University of Washington)
- Drago Plecko (ETH Zürich)
- Jakob Runge (German Aerospace Center)
- Shohei Shimizu (Shiga University & RIKEN)
- Ricardo Silva (University College London)
- Dhanya Sridhar (University of Montreal)
- Adarsh Subbaswamy (Johns Hopkins University)
- Jin Tian (Iowa State University)
- Sofia Triantafillou (University of Pittsburgh)
- Tian-Zuo Wang (Nanjing University)
- Kevin M Xia (Columbia University)
- Marco Zaffalon (IDSIA and Artificialy)
- Jiji Zhang (Hong Kong Baptist University)
- Junzhe Zhang (Columbia University)
- Lu Zhang (University of Arkansas)
Workflow Managers
- Alexis Bellot (Columbia University)
- Adèle Ribeiro (Columbia University)
Organizers
- Elias Bareinboim (Columbia University)
- Bernhard Schölkopf (Max Planck Institute)
- Terry Sejnowski (Salk Institute & UCSD)
- Yoshua Bengio (U Montreal & Mila)
- Judea Pearl (UCLA)
For questions, please contact us at why21neurips@gmail.com.