Is model averaging the solution for addressing model uncertainty? Methodological insights, tools for assessment, and considerations for practical use
Banner, Katharine Michelle
Model averaging (MA) was developed as a way to combine predictions from many models, with the goals of reducing bias and incorporating model uncertainty into final predictive inferences. A new flavor of MA, focused on averaging partial regression coefficients over multiple models, has gained traction in fields such as ecology, biology, and political science, motivated by the concern that inferences based on a single model are too 'naive' (i.e., do not fairly reflect substantial sources of uncertainty). However, coefficients appearing in multiple models do not necessarily carry the same interpretation across models, and averaging over them can yield inferences that are difficult to interpret. A gap exists between the theoretical development of MA and its current use in practice, potentially leaving well-intentioned researchers with unclear inferences or with difficulty justifying decisions to use (or not use) MA. Furthermore, it is questionable whether the perceived benefit of accounting for an additional source of uncertainty is realized in terms of increased variance for quantities of interest.

In this work, we revisit relevant foundations of regression modeling, suggest more explicit notation and graphical tools, and discuss how individual model results are combined to obtain a MA result, with the goal of helping researchers make informed decisions about MA. We present a new package for the R statistical software that provides plotting functions for visualizing the components going into the MA posterior distribution. The package is intended to help assess the implicit assumptions made when using MA for regression coefficients, and comes with guidelines for use and examples.
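As background for how individual model results are combined (this is standard Bayesian model averaging; the notation here is illustrative, not the dissertation's own): the MA posterior for a coefficient $\beta$ is a mixture of the model-specific posteriors, weighted by the posterior model probabilities,

\[
p(\beta \mid y) \;=\; \sum_{m=1}^{M} p(\beta \mid y, \mathcal{M}_m)\, p(\mathcal{M}_m \mid y),
\]

so the components going into the MA posterior are the conditional posteriors $p(\beta \mid y, \mathcal{M}_m)$ and the weights $p(\mathcal{M}_m \mid y)$.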
We also design and conduct a simulation study to investigate how the variance for a partial regression coefficient of interest differs among three approaches used within multimodel inference: MA using all models, MA using a subset of models, and conditioning inferences on a single model. We assess whether the perceived benefit of accounting for model uncertainty is actually realized when more models are used for final inference, with the goal of helping researchers weigh the tradeoffs between using variants of MA and using one well-thought-out model.
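To illustrate the kind of comparison such a study makes, here is a minimal sketch (not the dissertation's actual design, and in Python rather than R): one simulated dataset, two nested OLS models, BIC-based weights as a common approximation to posterior model probabilities, and the mixture formula for the model-averaged variance of one coefficient. All function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    """OLS fit: coefficient estimates, their variances, and a BIC value
    (Gaussian log-likelihood, up to constants shared across models)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    var = np.diag(XtX_inv) * sigma2
    bic = n * np.log(resid @ resid / n) + p * np.log(n)
    return beta, var, bic

# Simulated data with two correlated predictors
n = 100
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

X_full = np.column_stack([np.ones(n), x1, x2])  # model 2: x1 + x2
X_red = X_full[:, :2]                           # model 1: x1 only

b_red, v_red, bic_red = ols(X_red, y)
b_full, v_full, bic_full = ols(X_full, y)

# BIC-based model weights (approximate posterior model probabilities)
bics = np.array([bic_red, bic_full])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Coefficient on x1 under each model, then model-averaged mixture moments:
# Var_MA = sum_m w_m * (Var_m + (est_m - est_MA)^2)
ests = np.array([b_red[1], b_full[1]])
vars_ = np.array([v_red[1], v_full[1]])
ma_est = w @ ests
ma_var = w @ (vars_ + (ests - ma_est) ** 2)

print(f"conditional on full model: est={b_full[1]:.3f}, var={v_full[1]:.4f}")
print(f"model-averaged:            est={ma_est:.3f}, var={ma_var:.4f}")
```

Repeating this over many simulated datasets, and over model sets of different sizes, is one way to check whether the model-averaged variance is actually larger than the variance conditional on a single model.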