Causal Inference & Machine Learning: Why now?



December 13th, 2021


WHY-21 @ NeurIPS


Motivation

Machine Learning has received enormous attention from the scientific community due to the successful application of deep neural networks in computer vision, natural language processing, and game-playing (most notably through reinforcement learning). However, a growing segment of the machine learning community recognizes that there are still fundamental pieces missing from the AI puzzle, among them causal inference. This recognition comes from the observation that even though causality is a central component found throughout the sciences, engineering, and many other aspects of human cognition, explicit reference to causal relationships is largely missing in current learning systems.

This suggests a new goal: integrating causal inference and machine learning capabilities into the next generation of intelligent systems, thus paving the way towards higher levels of intelligence and human-centric AI. The synergy goes in both directions: causal inference benefits from machine learning, and vice versa.

  • Current machine learning systems lack the ability to leverage the invariances imprinted by the underlying causal mechanisms to reason about generalizability, explainability, interpretability, and robustness.
  • Current causal inference methods, on the other hand, lack the ability to scale up to high-dimensional settings, where current machine learning systems excel.

All indications are that such a marriage can be extremely fruitful. For instance, initial results indicate that understanding and leveraging causal invariances is a crucial ingredient in achieving out-of-distribution generalization (transportability) -- something that humans do much better than state-of-the-art ML systems. Also, causal inference methodology offers a systematic way of combining passive observations and active experimentation, allowing more robust and stable construction of models of the environment. In the other direction, there is growing evidence that embedding causal and counterfactual inductive biases into deep learning systems can enable the high-dimensional inferences needed in realistic scenarios.

This 2nd edition of the WHY workshop (1st edition: WHY-19) focuses on bringing together researchers from both camps to initiate principled discussions about the integration of causal reasoning and machine learning perspectives to help tackle the challenging AI tasks of the coming decades. We welcome researchers from all relevant disciplines, including but not limited to computer science, cognitive science, robotics, mathematics, statistics, physics, and philosophy.

Topics

We invite papers that describe methods for answering causal questions with the help of ML machinery, or methods for enhancing ML robustness and generalizability with the help of causal models (i.e., carriers of transparent structural assumptions). Authors are encouraged to identify the specific task the paper aims to solve and where on the causal hierarchy their contributions reside (i.e., associational, interventional, or counterfactual). Topics of interest include but are not limited to the following:

  1. Algorithms for causal inference and mechanism discovery.
  2. Causal analysis of biases in data science & fairness analysis.
  3. Causal and counterfactual explanations.
  4. Generalizability, transportability, and out-of-distribution generalization.
  5. Causal reinforcement learning, planning, and imitation.
  6. Causal representation learning and invariant representations.
  7. Intersection of causal inference and neural networks.
  8. Fundamental limits of learning and inference, the causal hierarchy.
  9. Applications of the 3-layer hierarchy (Pearl Causal Hierarchy).
  10. Evaluation of causal ML methods (accuracy, scalability, etc.).
  11. Causal reasoning and discovery in child development.
  12. Other connections between ML, cognition, and causality.
Invited Speakers (Tentative)
Schedule (TBA)
Logistics

Registration The WHY-21 workshop is part of the NeurIPS 2021 Conference. All logistics, including registration, are handled by NeurIPS. For details, see here.

Format The workshop will include 6 one-hour sessions. Each session is expected to have 2-4 speakers and will include invited talks and presentations of selected accepted papers. We expect to leave ample time for discussion and exchange of ideas. We will also hold a poster session between the morning and afternoon sessions.

Submissions We will accept submissions for papers, including ongoing and more speculative work. Papers should be 6 to 9 pages long (excluding references and appendix), anonymized and formatted using the NeurIPS style. All accepted papers will be presented as posters during the workshop, and some will be selected for oral presentations. Preference will be given to unexplored tasks and new connections with the current paradigm in the field. Submissions will be lightly reviewed; the overall goal is to provide a forum for open discussion rather than merely an alternative venue for already mature work. We accept submissions through the CMT system.

Important Dates Submissions (electronic submission, PDF) due: September 18th, 2021, AoE
Notifications of acceptance: October 23rd, 2021, AoE

Program Committee (TBA)
Organizers

For questions, please contact us at why21neurips@gmail.com.