Letter to the Editor

Should a Meta-Analyst Want the Likelihood or the Posterior from Each Study?


I read the recent commentary on the ethics of using priors by [Andrew] Gelman with great interest and considerable agreement. There is, however, one aspect of the commentary that I think needs serious elaboration: the use of posterior distributions, as opposed to likelihoods, from the individual study reports contributing to a review or meta-analysis.

Gelman’s commentary recalled earlier comments by Senn, especially the remark that meta-analysts want the likelihood from a study, not the posterior. In isolation, that remark seemed to ignore the fact (which both Gelman and Senn mention at other points in their commentaries) that the likelihood itself is built from priors. As George Box stressed, these priors usually go unrecognized as such because they take only the two extreme forms allowed in fixed-effect models: either point masses at a null value (e.g., for covariate coefficients dropped from a model) or uniform priors over an entire parameter space (e.g., for included coefficients). Because the covariates included can vary considerably across study reports, the priors underlying the final estimates can vary considerably across the reports.
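To put the point in symbols (a minimal sketch, in notation of my own choosing rather than anything from the commentaries): for a regression with coefficients \beta_1 and \beta_2, dropping the covariate carrying \beta_2 gives the same likelihood for \beta_1 as integrating the full likelihood against a point-mass prior at the null,

    L(\beta_1; \text{data}) = \int L(\beta_1, \beta_2; \text{data}) \, \delta_0(\beta_2) \, d\beta_2 = L(\beta_1, 0; \text{data}),

whereas retaining the covariate and profiling or integrating over \beta_2 corresponds, at least approximately, to a uniform prior over its entire range.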

Perhaps Gelman and Senn meant that one needs to remove study-specific priors about the effect under study before combination. These target-parameter priors should be contrasted with priors about study-specific features (design, execution, reporting) that are embedded in the data model (and hence in the likelihood), or with their representation in study-specific nuisance parameters such as the degree of confounding or selection bias expected in each study. Again, priors for those nuisance parameters are essential for data analysis, yet they are not without controversy, so they are usually portrayed as model features (covariate inclusions and exclusions) rather than as prior features. Senn summarized the problem succinctly in a 2008 article:

“To the degree that we agreed on the model (and the nuisance parameters) we could communicate likelihoods, but in practice all that may be left is the communication of data.”
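In symbols, the hoped-for communication would run as follows. If study i reports a posterior

    p_i(\theta \mid \text{data}_i) \propto \pi_i(\theta) \, L_i(\theta),

then a meta-analyst who knows the study’s prior \pi_i could in principle recover the likelihood by division, L_i(\theta) \propto p_i(\theta \mid \text{data}_i) / \pi_i(\theta), and combine the recovered likelihoods under her own prior \pi:

    p(\theta \mid \text{all data}) \propto \pi(\theta) \prod_i L_i(\theta).

But, as the quotation says, this presumes agreement on each study’s data model, and hence on its likelihood.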

In observational epidemiology, even when we get all the data, producing a pooled analysis is a lengthy and expensive research project, especially since the data formats, included variables and variable forms, and missing-data patterns are so disparate. The “judgments” needed to combine all that, if good, will amount to nothing less than separate (albeit not necessarily independent) priors about certain nuisance (or bias) parameters for each study, reflecting what can be gleaned about their design and data-collection processes from the available narratives.

More often, of course, expedient oversimplifications will be used to get the job done, and these may then be touted as “objective” precisely because they ignore important contextual information. If I trusted that the authors had used their best contextual judgment to construct and apply informative priors for nuisance parameters instead of implicit defaults (which, as I outlined some years ago, usually assign point masses of 1 to bias nulls), I might prefer to use the results of their semi-Bayes analyses (informative priors on study-specific nuisance parameters, but no prior on the study effect). That trust may be naïve, but perhaps not much more so than trusting their basic modeling or covariate-adjustment strategy.
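As a purely illustrative sketch of such a semi-Bayes analysis (the numbers and the normal bias prior are hypothetical, in the spirit of Monte Carlo bias analysis rather than any particular published result):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical study result: estimated log odds ratio and its standard error.
    log_or_hat, se = 0.40, 0.15

    n = 100_000
    # Informative prior on the nuisance (bias) parameter: net confounding on the
    # log-OR scale. The implicit default assigns a point mass of 1 to bias = 0.
    bias = rng.normal(loc=0.10, scale=0.10, size=n)

    # Propagate sampling error and bias uncertainty together; no prior is placed
    # on the target effect itself (the "semi-Bayes" feature).
    adjusted = rng.normal(loc=log_or_hat, scale=se, size=n) - bias

    lo, med, hi = np.percentile(adjusted, [2.5, 50, 97.5])
    print(f"bias-adjusted log-OR: {med:.3f} (95% interval {lo:.3f}, {hi:.3f})")

The point of the sketch is only that the bias prior does real work: widening it widens the adjusted interval, while the default point mass at zero bias reproduces the conventional analysis exactly.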

Returning to priors for the target effects under study: In reality, I might well have separate (if highly correlated) priors for study-specific effects that vary systematically across studies with features such as the age and sex composition of the study groups (e.g., I do not seriously expect the same effect of aspirin on thromboembolic risk in old men as in young women). So my reason for rejecting posteriors from Bayesian study reports when doing a meta-analysis is that I want to use my own priors rather than the priors of others, even if I trust the judgments behind the original single-study priors. One good reason for rejecting those priors is that they may have been contaminated by information from earlier studies that I am including directly; were I to use the posteriors reported by the single studies, that information would be counted twice.
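The double counting is easy to display. Suppose study 2’s analysts took study 1’s posterior as their prior, so their report is

    p_2(\theta \mid \text{data}_2) \propto [\pi_0(\theta) \, L_1(\theta)] \, L_2(\theta).

Combining that reported posterior with study 1’s own likelihood under my prior \pi then yields something proportional to \pi(\theta) \, \pi_0(\theta) \, L_1(\theta)^2 \, L_2(\theta): study 1’s data enter twice.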

All the above ignores the fact that the owners of the most critical data often won’t part with them in any meaningful sense (which fuels suspicions of selective reporting), leaving a terribly nonignorable meta-analytic missing-data problem; selection bias is the term Copas used in his studies of that problem. Because this problem is nonidentified without priors, we are forced to end with a sensitivity analysis or an informative Bayesian analysis whenever a meaningful proportion of the study data is missing from our meta-analysis.
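A minimal version of such a sensitivity analysis (hypothetical numbers and simple inverse-variance pooling, far cruder than Copas’s selection models) asks how the pooled estimate would move if unreported studies with a given mean effect were restored:

    import numpy as np

    # Hypothetical published results: log odds ratios and standard errors.
    y  = np.array([0.45, 0.30, 0.55, 0.25])
    se = np.array([0.20, 0.15, 0.25, 0.18])
    w  = 1.0 / se**2

    print(f"published studies only: pooled log-OR = {np.dot(w, y) / w.sum():.3f}")

    # Suppose m unreported studies of typical precision had an assumed mean
    # effect delta; recompute the inverse-variance pooled estimate.
    for m in (2, 5, 10):
        for delta in (0.0, -0.2):
            W = np.concatenate([w, np.full(m, w.mean())])
            Y = np.concatenate([y, np.full(m, delta)])
            print(f"m={m:2d}, assumed effect={delta:+.1f}: "
                  f"pooled log-OR = {np.dot(W, Y) / W.sum():.3f}")

Because m and delta are not identified by the observed data, all one can honestly report is how the conclusion varies over a plausible range of them, or average over that range with an informative prior.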

Further Reading

Box, G.E.P. 1980. Sampling and Bayes inference in scientific modeling and robustness. J R Stat Soc Ser A 143:383–430.

Copas, J.B. 1999. What works? Selectivity models and meta-analysis. J R Stat Soc Ser A 162:95–109.

Gelman, A. 2012. Ethics and the statistical use of prior information. CHANCE 25:52–54.

Greenland, S. 2005. Multiple-bias modeling for analysis of observational data (with discussion). J R Stat Soc Ser A 168:267–308.

Senn, S. 2008. Comment on article by Gelman. Bayesian Analysis 3:459–462.

Senn, S. 2011. You may believe you are a Bayesian, but you are probably wrong. Rationality, Markets, and Morals 2:48–66.

Gelman’s Response:

I agree!
