Summary: |
  Abstract
  Background: Statistical adjustment is often used to control confounding bias in observational studies, especially case–control studies. However, different adjustment strategies may affect the estimation of odds ratios (ORs) and, in turn, the results of their pooled analyses. Our study aimed to investigate how statistical adjustment should be handled in case–control studies to improve the validity of meta-analyses.
  Methods: Three types of adjustment strategies were evaluated: insufficient adjustment (not all preset confounders were adjusted for), full adjustment (all confounders were adjusted for under the guidance of causal inference), and improper adjustment (covariates other than confounders were adjusted for). We carried out a series of Monte Carlo simulation experiments based on predesigned scenarios and assessed the accuracy of effect estimates from meta-analyses of case–control studies that pooled ORs calculated under the different adjustment strategies. We then used data from an empirical review to illustrate the replicability of the simulation results.
  Results: Across all scenarios, regardless of the strength of the causal relations, pooling ORs that were fully adjusted for confounders yielded the most accurate effect estimates. By contrast, pooling ORs that were insufficiently adjusted for confounders, or improperly adjusted for mediators or colliders, readily introduced bias into the causal interpretation, especially when the true effect of the exposure on the outcome was weak or absent. The findings of the simulation experiments were further verified by the empirical analysis.
  Conclusions: Statistical adjustment guided by causal inference is recommended for effect estimation. When conducting meta-analyses of case–control studies, the causal relationships among the exposure, outcome, and covariates should therefore first be clarified with a directed acyclic graph, so that appropriate original ORs can be extracted and pooled using suitable methods.
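The simulation logic summarized in the Methods can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal example under assumed parameter values (a binary confounder C, a collider S, a true conditional log-OR of 0.4, ten studies of 2,000 subjects each, and prospective rather than case–control sampling for brevity) showing how pooled ORs diverge under insufficient, full, and improper adjustment. The function names (`simulate_study`, `log_or`, `pooled_or`) and all coefficients are illustrative assumptions; a mediator on the X → Y path could be added and (improperly) adjusted for in the same way.

```python
# Minimal sketch (not the authors' code): simulate several studies from a DAG with a
# confounder C (C -> X, C -> Y) and a collider S (X -> S, Y -> S); estimate the OR of
# X on Y under three adjustment strategies; pool the log-ORs by inverse-variance weighting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_study(n=2000, beta_xy=0.4):
    """Prospective sample from the assumed DAG; beta_xy is the true conditional log-OR."""
    c = rng.binomial(1, 0.5, n)                               # confounder
    x = rng.binomial(1, expit(-0.5 + 1.0 * c))                # exposure, affected by C
    y = rng.binomial(1, expit(-1.0 + beta_xy * x + 1.0 * c))  # outcome, affected by X and C
    s = rng.binomial(1, expit(-0.5 + 1.0 * x + 1.0 * y))      # collider, affected by X and Y
    return x, y, c, s

def log_or(y, x, covars):
    """Logistic regression of y on x plus covariates; return the log-OR of x and its SE."""
    design = sm.add_constant(np.column_stack([x] + covars))
    fit = sm.Logit(y, design).fit(disp=0)
    return fit.params[1], fit.bse[1]

def pooled_or(estimates):
    """Fixed-effect inverse-variance pooling of (log-OR, SE) pairs."""
    logors, ses = map(np.array, zip(*estimates))
    w = 1.0 / ses ** 2
    return np.exp(np.sum(w * logors) / np.sum(w))

strategies = {
    "insufficient (crude OR)":            lambda x, y, c, s: log_or(y, x, []),
    "full (adjust confounder C)":         lambda x, y, c, s: log_or(y, x, [c]),
    "improper (adjust C and collider S)": lambda x, y, c, s: log_or(y, x, [c, s]),
}

results = {name: [] for name in strategies}
for _ in range(10):                       # ten simulated studies to be meta-analysed
    x, y, c, s = simulate_study()
    for name, estimator in strategies.items():
        results[name].append(estimator(x, y, c, s))

print(f"true conditional OR of X on Y: {np.exp(0.4):.3f}")
for name, estimates in results.items():
    print(f"{name:36s} pooled OR = {pooled_or(estimates):.3f}")
```

In this sketch, only the fully adjusted pooled OR is expected to approximate the true conditional OR; the crude estimate carries residual confounding by C, and conditioning on the collider S opens a non-causal path between X and Y, mirroring the bias pattern described in the Results.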