In observational studies, we do not know the treatment assignment mechanism, which leads to bias in naive estimates of the treatment effect, where "effect" is meant in a causal sense. A major source of this bias is covariate imbalance between treatment groups.
Randomization of treatment leads to average balance of both known and unknown covariates between treatment groups, facilitating unbiased estimates of the treatment effect. In cases where it is unethical or financially infeasible to assign treatment, we must rely on observational data, which are notoriously imbalanced.
One remedy is to balance the groups on all known influential covariates. If the completeness of this list can be reasonably justified, then the problem can be considered mitigated.
One common approach to balancing is matching individuals in the treatment and control groups (e.g., one-to-one or one-to-many matching) via minimization of some distance metric (e.g., a difference in propensity scores). The issue is that data from unmatched individuals are discarded. So, we would like to optimize our sample size as well: which subset of individuals leads to the best balance and the maximal sample size?
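To make the matching idea concrete, here is a minimal sketch of greedy one-to-one nearest-neighbor matching on precomputed propensity scores. This is an illustration, not the paper's method; the function name and inputs are my own, and the scores are assumed to come from some earlier model.

```python
def greedy_match(ps_treat, ps_ctrl):
    """Greedy 1-to-1 nearest-neighbor matching on precomputed propensity
    scores. Each treated unit grabs the closest still-available control;
    controls that are never chosen are simply discarded."""
    available = list(range(len(ps_ctrl)))
    pairs = []
    for i, p in enumerate(ps_treat):
        # closest remaining control by absolute difference in propensity score
        j = min(available, key=lambda j: abs(ps_ctrl[j] - p))
        pairs.append((i, j))
        available.remove(j)
    return pairs

# Two treated units, four controls: two controls end up unused.
pairs = greedy_match([0.3, 0.7], [0.65, 0.1, 0.32, 0.9])
```

The discarded controls are exactly the lost sample size that motivates the subset-selection view below.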
BOSS: Balance Optimization Subset Selection
Cho et al. have proposed a method called Balance Optimization Subset Selection (BOSS), which takes a more holistic view of matching. Instead of matching individuals, the problem is reframed as a best-subset-selection optimization problem. The treatment effect across different subsets of the treatment and control groups with identical balance is investigated. Considering the distribution of the treatment effect across these different possibilities is arguably more statistically sound (from a frequentist perspective) and yields a nice framework for standard error calculation. Below I summarize results from their paper.
The goal is to find a subset of the treatment pool and a subset of the control pool so that a measure of balance is maximized or, equivalently, some measure of distance is minimized. This measure of balance or distance is the objective function. Examples of common distance measures include Mahalanobis metric matching using the propensity score, Mahalanobis metric matching using calipers, or the propensity score itself.
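As one concrete example of such a distance, here is a minimal sketch (numpy assumed; the function name is my own) of the Mahalanobis distance between two units' covariate vectors, given a pooled covariance matrix:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between covariate vectors x and y, scaled by
    the inverse of a (pooled) covariate covariance matrix `cov`."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# With an identity covariance this reduces to ordinary Euclidean distance.
dist = mahalanobis([3.0, 0.0], [0.0, 4.0], np.eye(2))
```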
Description of the Method
Imagine creating a set of $B$ uniformly sized data bins and assigning each covariate value to the bin that contains it. A small $B$ leads to a simple optimization problem. A large $B$ leads to a more complex problem, but more similar covariate distributions between the treatment and control groups.
Consider a covariate $x$. Within the treatment group, this covariate takes values within a closed interval $[x_{\min}, x_{\max}]$. We can separate this range into $B$ bins with $B - 1$ interior breakpoints. This yields a covariate distribution (imagine a histogram).
The BOSS method then selects control units such that the control covariate distribution and the treatment covariate distribution are as similar as possible.
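A minimal sketch of the binning step (numpy assumed; `bin_counts` is a hypothetical helper, not from the paper): the treatment group's range fixes the bin edges, and controls are counted on that same grid.

```python
import numpy as np

def bin_counts(values, b=5, lo=None, hi=None):
    """Count how many values fall into each of `b` uniformly sized bins
    spanning [lo, hi] (defaults to the observed range of `values`)."""
    lo = np.min(values) if lo is None else lo
    hi = np.max(values) if hi is None else hi
    edges = np.linspace(lo, hi, b + 1)  # b bins -> b + 1 edges
    counts, _ = np.histogram(values, bins=edges)
    return counts

# Treatment-group values define the grid; controls are binned on it.
treat_age = np.array([34, 41, 29, 50, 45, 38, 33])
ctrl_age = np.array([28, 36, 44, 52, 31, 39, 47, 35])
t_counts = bin_counts(treat_age, b=5)
c_counts = bin_counts(ctrl_age, b=5, lo=treat_age.min(), hi=treat_age.max())
```

Note that controls falling outside the treatment group's range drop out of every bin, which is one simple way such units contribute nothing to the balance measure.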
For a set of $p$ covariates, the joint and marginal distributions one could balance include:
- $p$ marginal distributions
- $\binom{p}{2}$ joint distributions of 2 covariates
- $1$ joint distribution of all $p$ covariates
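To make the joint case concrete, here is a sketch (numpy assumed; variable names are mine) of binning the joint distribution of two covariates, where each unit now lands in a 2-D bin:

```python
import numpy as np

# Joint distribution of two covariates: balance is measured over a
# B x B grid of 2-D bins rather than B one-dimensional bins.
age = np.array([34, 41, 29, 50, 45])
bmi = np.array([22, 30, 27, 25, 31])
joint, age_edges, bmi_edges = np.histogram2d(age, bmi, bins=3)
# joint[i, j] counts units whose age falls in age-bin i and BMI in bmi-bin j.
```

The number of cells grows multiplicatively with each covariate added to a joint distribution, which is part of why one rarely balances on the full joint distribution directly.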
The BOSS method described above can be repeated for all, or any subset, of the possible covariate distributions. One usually doesn’t optimize over all distributions because of redundancy of information.
Let $B$ represent the fixed number of bins we're using, and let's arbitrarily order the bins from $1$ to $B$. For a set $S$, let $S_b$ represent the cardinality of $S$ restricted to values in bin $b$; let $T$ represent the treatment group of $n_T$ individuals, and $X$ the set of pre-treatment covariates to balance on. For each covariate in $X$, our objective function is

$$\min_{C} \sum_{b=1}^{B} \frac{(C_b - T_b)^2}{\max(T_b, 1)},$$

where
- $C_b$ represents the number of observations in bin $b$ from some subset $C$ of control observations,
- $T_b$ represents the number of observations in bin $b$ across all members of the treatment group,
- $\max(T_b, 1)$ in the denominator ensures we don't divide by zero when there are no observations in bin $b$ from the treatment group.
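The pieces above can be sketched as a toy brute-force search over control subsets, assuming the chi-square-style form implied by the bullet definitions (all names are mine; real implementations use integer programming rather than enumeration):

```python
import numpy as np
from itertools import combinations

def boss_objective(c_counts, t_counts):
    """Sum over bins of (C_b - T_b)^2 / max(T_b, 1); the max() guards
    against dividing by zero when a bin holds no treatment observations."""
    t = np.asarray(t_counts, dtype=float)
    c = np.asarray(c_counts, dtype=float)
    return float(np.sum((c - t) ** 2 / np.maximum(t, 1.0)))

def best_control_subset(control, t_counts, edges, k):
    """Brute force: among all size-k subsets of the control pool, return
    the one whose binned distribution best matches the treatment's."""
    best, best_val = None, np.inf
    for subset in combinations(control, k):
        c_counts, _ = np.histogram(subset, bins=edges)
        val = boss_objective(c_counts, t_counts)
        if val < best_val:
            best, best_val = subset, val
    return best, best_val
```

Enumerating subsets is exponential in the pool size, consistent with the NP-hardness noted below; this sketch is only meant to make the objective tangible.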
Limitations and Discussion
“Optimizing over subsets” can be difficult to explain to a collaborator, so one might instead report more familiar measures of difference, like the two-sample t-statistic for the difference in means.
However, this method is really nice in that one doesn’t have to choose a particular measure of distance between the two groups, and one doesn’t have to stress over a good model for the propensity score. “Human bias is replaced with computational constraints… instead, the quality of treatment effect estimation is now limited just by the complexity of an NP-Hard optimization problem and available computational power.”
Cho WKT, Sauppe JJ, Nikolaev AG, Jacobson SH, Sewell EC. An optimization approach to matching and causal inference.