Among other work, papers in political science, epidemiology, and statistics have highlighted that linear regression is a weighting estimator. Alastair Leyland and I showed that, given no effect modification, regression weights are equivalent to inverse probability weights for a binary exposure. A drawback of regression, then, is that without interaction terms it may not represent the target population. This body of work also shows that linear regression weights do not depend on the outcome, which connects linear regression to causal adjustment methods that emphasise controlling for confounding by modelling the exposure rather than the outcome.
One thing I struggled with was adding effect modification in the regression framework, as you then no longer have a single weight for the exposure. The solution, I think, is centring the confounders. The table below shows how two binary confounders and their interaction were unbalanced over levels of the exposure in the observed data. The target on which to balance was each confounder's average in the data, corresponding to the average treatment effect. A standard regression, where we enter the exposure and the two confounders on the right-hand side but not their interactions, does balance the confounders, but not at their target value, and does not balance their interaction. If I centre the confounders and their interaction and then interact all the variables on the right-hand side of my linear regression, I can balance the confounders and their interaction using the regression weight. The final two lines show that I can do the same using inverse probability weights derived from a regression with the exposure as the outcome and the confounders and their interaction on the right-hand side (a code sketch after the table illustrates both approaches).
Weights | Exposure | Confounder 1 | Confounder 2 | Their interaction
---|---|---|---|---
Observed data | 0 | 0.0885 | 0.1718 | 0.0156
Observed data | 1 | 0.1442 | 0.4286 | 0.0549
Target | NA | 0.0969 | 0.2106 | 0.0215
Regression weight | 0 | 0.1337 | 0.3689 | 0.0412
Regression weight | 1 | 0.1337 | 0.3689 | 0.0425
Reg weight interaction | 0 | 0.0969 | 0.2106 | 0.0215
Reg weight interaction | 1 | 0.0969 | 0.2106 | 0.0215
IPW | 0 | 0.0969 | 0.2106 | 0.0215
IPW | 1 | 0.0969 | 0.2106 | 0.0215
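To make the recipe concrete, here is a minimal sketch in R. The simulated data, the variable names (a, c1, c2, y) and the effect sizes are my own illustration rather than the blog's original simstudy code, and the IPW comparison uses a plain logistic regression rather than the WeightIt package; the balancing steps, though, follow the description above.

```r
# A minimal sketch, assuming simulated data (not the blog's original
# simstudy code); a, c1, c2 and y are illustrative names
library(dplyr)

set.seed(42)
n  <- 10000
c1 <- rbinom(n, 1, 0.1)                        # binary confounder 1
c2 <- rbinom(n, 1, 0.2)                        # binary confounder 2
a  <- rbinom(n, 1, plogis(-2 + c1 + 1.5 * c2)) # exposure depends on both
y  <- rnorm(n, a + c1 + c2)                    # outcome plays no role in the weights

df <- tibble(y, a, c1, c2, c12 = c1 * c2) %>%
  mutate(across(c(c1, c2, c12), ~ .x - mean(.x), .names = "{.col}_c"))

# exposure interacted with the centred confounders and their interaction
X <- model.matrix(~ a * (c1_c + c2_c + c12_c), data = df)

# (X'X)^{-1}X' holds one row of weights per coefficient; take the exposure's
W   <- solve(t(X) %*% X, t(X))
w_a <- W["a", ]

# weighted confounder means at each exposure level hit the sample means of
# c1, c2 and their interaction -- the ATE target
sum(w_a[df$a == 1] * df$c1[df$a == 1])    # = mean(df$c1)
-sum(w_a[df$a == 0] * df$c1[df$a == 0])   # = mean(df$c1)
sum(w_a[df$a == 1] * df$c12[df$a == 1])   # = mean(df$c12)

# inverse probability weights from a propensity model give the same balance
ps  <- glm(a ~ c1 * c2, family = binomial, data = df)$fitted.values
ipw <- ifelse(df$a == 1, 1 / ps, 1 / (1 - ps))
weighted.mean(df$c1[df$a == 1], ipw[df$a == 1])   # = mean(df$c1)
weighted.mean(df$c1[df$a == 0], ipw[df$a == 0])   # = mean(df$c1)
```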
You can represent a linear regression in terms of matrices. The X matrix contains your right-hand-side variables (exposure, confounders) while your outcome is in the Y matrix. We then obtain our regression coefficients via the equation \[(X'X)^{-1}X'Y\]
Without getting into the details of matrix algebra, if you exclude Y then \[(X'X)^{-1}X'\]
gives you a weight variable for each variable in X. Given a binary exposure, the exposure's weights will sum to -1 and 1 over the two exposure categories.
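Continuing the sketch above, both properties are easy to verify from the exposure's row of the weight matrix:

```r
# the exposure's weights from (X'X)^{-1}X' sum to 1 among the exposed
# and -1 among the unexposed
sum(w_a[df$a == 1])   # 1
sum(w_a[df$a == 0])   # -1

# so the exposure coefficient is just a weighted contrast of outcomes
sum(w_a * df$y)
coef(lm(y ~ a * (c1_c + c2_c + c12_c), data = df))["a"]   # identical
```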
Mega thanks to the tidyverse, simstudy, gt, and WeightIt packages used in this blog, and to knitr, distill, R, and RStudio, which allow me to produce the blog.
Code to reproduce this blog and analysis.