Estimating individual treatment effect: Generalization bounds and algorithms


Bibliographic Details
Main Authors: Sontag, David (Author), Shalit, Uri (Author), Johansson, Fredrik D. (Author)
Format: Article
Language: English
Published: 2021-11-03T14:31:57Z.
Subjects:
Online Access: Get fulltext
LEADER 01676 am a22001693u 4500
001 137194
042 |a dc 
100 1 0 |a Sontag, David  |e author 
700 1 0 |a Shalit, Uri  |e author 
700 1 0 |a Johansson, Fredrik D.  |e author 
245 0 0 |a Estimating individual treatment effect: Generalization bounds and algorithms 
260 |c 2021-11-03T14:31:57Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137194 
520 |a Copyright © 2017 by the author(s). There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar, and we give a novel and intuitive generalization-error bound showing the expected ITE estimation error of a representation is bounded by a sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art. 
546 |a en 
655 7 |a Article 
773 |t 34th International Conference on Machine Learning, ICML 2017
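Note: the abstract describes penalizing the distance between the treated and control distributions in representation space with an Integral Probability Metric such as MMD. As a rough illustration only (not the authors' implementation or their code), the sketch below estimates a squared MMD with an RBF kernel between two hypothetical sets of representation vectors; all names, sizes and values are invented for the example.

    # Sketch: RBF-kernel estimate of squared MMD between treated and control
    # representations. Hypothetical data only, for illustration of the idea.
    import numpy as np

    def rbf_kernel(a, b, sigma=1.0):
        # Pairwise kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
        sq_dists = (np.sum(a**2, axis=1)[:, None]
                    + np.sum(b**2, axis=1)[None, :]
                    - 2.0 * a @ b.T)
        return np.exp(-sq_dists / (2.0 * sigma**2))

    def mmd2(x_treated, x_control, sigma=1.0):
        # Biased empirical estimate of squared Maximum Mean Discrepancy.
        k_tt = rbf_kernel(x_treated, x_treated, sigma)
        k_cc = rbf_kernel(x_control, x_control, sigma)
        k_tc = rbf_kernel(x_treated, x_control, sigma)
        return k_tt.mean() + k_cc.mean() - 2.0 * k_tc.mean()

    rng = np.random.default_rng(0)
    phi_treated = rng.normal(0.5, 1.0, size=(100, 8))  # hypothetical Phi(x) for treated units
    phi_control = rng.normal(0.0, 1.0, size=(120, 8))  # hypothetical Phi(x) for control units
    print("MMD^2 between treated and control representations:", mmd2(phi_treated, phi_control))

In the framework the abstract outlines, a term of this kind would be added, with some weight, to the standard factual prediction loss; the sketch shows only the imbalance measure itself, not the full training objective.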