July 09, 2010
Meta-analysis: the wrong tool (wielded improperly)
By: Amy M. Romano, RN, CNM
A lot has been said about the new meta-analysis of home birth. (Here is an excellent summary from Jennifer Block.) Canadian physician Michael Klein has been widely quoted as saying that the meta-analysis, a potentially valuable statistical tool, was performed poorly because the researchers included studies using discredited methodology, as well as studies that are decades old. 'Garbage in, garbage out.' I totally agree with this assessment. I also take issue with the fact that the researchers did not display the standard 'forest plot' that customarily accompanies a meta-analysis to illustrate the relative magnitude of the observed differences in the individual studies and in the pooled analysis. And I'm perplexed by the use of a fixed-effects model for the analysis of neonatal death, since a fixed-effects model assumes that every included study is estimating the same underlying effect - a hard assumption to defend when the studies differ so much in age, setting, and methods.
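For readers who haven't seen one, a forest plot simply displays each study's estimate and confidence interval alongside the pooled result, so you can judge at a glance how consistent the studies are. The sketch below uses entirely made-up odds ratios (not the Wax data) to show how a fixed-effects and a random-effects pooled estimate, and a bare-bones forest plot, are typically produced:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study odds ratios and 95% CIs -- NOT the Wax data,
# just made-up numbers to illustrate the mechanics of pooling.
study_or = np.array([0.9, 1.4, 3.0, 1.1, 2.2])
ci_low   = np.array([0.5, 0.7, 1.2, 0.8, 0.9])
ci_high  = np.array([1.6, 2.8, 7.5, 1.5, 5.4])

# Work on the log scale; back out each study's standard error from its CI.
log_or = np.log(study_or)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

# Fixed-effect (inverse-variance) pooling: assumes one common true effect.
w_fe = 1 / se**2
pooled_fe = np.sum(w_fe * log_or) / np.sum(w_fe)
se_fe = np.sqrt(1 / np.sum(w_fe))

# DerSimonian-Laird random-effects pooling: allows the true effect to vary across studies.
q = np.sum(w_fe * (log_or - pooled_fe) ** 2)
tau2 = max(0.0, (q - (len(log_or) - 1)) / (np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)))
w_re = 1 / (se**2 + tau2)
pooled_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

for name, est, s in [("fixed", pooled_fe, se_fe), ("random", pooled_re, se_re)]:
    lo, hi = np.exp(est - 1.96 * s), np.exp(est + 1.96 * s)
    print(f"{name}-effects pooled OR = {np.exp(est):.2f} (95% CI {lo:.2f} to {hi:.2f})")

# A bare-bones forest plot: one row per study plus the pooled estimate.
labels = [f"Study {i + 1}" for i in range(len(study_or))] + ["Pooled (random)"]
points = np.append(study_or, np.exp(pooled_re))
lows   = np.append(ci_low, np.exp(pooled_re - 1.96 * se_re))
highs  = np.append(ci_high, np.exp(pooled_re + 1.96 * se_re))
y = np.arange(len(labels))[::-1]
plt.errorbar(points, y, xerr=[points - lows, highs - points], fmt="s", color="k", capsize=3)
plt.axvline(1.0, linestyle="--", color="gray")  # line of no effect
plt.yticks(y, labels)
plt.xscale("log")
plt.xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```

When the studies disagree as much as these made-up ones do, the fixed-effects interval comes out misleadingly narrow, which is exactly why the choice of model matters.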
But I want to take a step back and ask a larger question - is meta-analysis even appropriate for the study of home birth?
Meta-analysis is a statistical process that pools data from multiple studies. It is intended to achieve two related goals:
- have adequate statistical power to detect differences in rare but clinically important outcomes (such as perinatal mortality among babies of healthy women); the rough calculation after this list gives a sense of the sample sizes involved
- establish a definitive answer to an important clinical question, so that policies and practices can adapt to conform to the new 'truth' and other researchers don't have to study the issue anymore.
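To make the first rationale concrete, here is a rough back-of-the-envelope sketch (not a formal power analysis) of how many births per group a single study would need in order to detect a doubling of a baseline death rate of about 3.4 per 10,000. The baseline rate, target effect size, alpha, and power here are my illustrative assumptions, not figures taken from the Wax paper:

```python
from math import sqrt
from scipy.stats import norm

# Illustrative assumptions: a baseline neonatal death rate of 3.4 per 10,000,
# and we want 80% power at a two-sided alpha of 0.05 to detect a doubling of it.
p1 = 3.4 / 10_000          # rate in the comparison group
p2 = 2 * p1                # rate we want to be able to detect
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(power)           # ~0.84
p_bar = (p1 + p2) / 2

# Standard sample-size formula for comparing two proportions.
n_per_group = (
    (z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p1 - p2) ** 2
)
print(f"Roughly {n_per_group:,.0f} births needed in each group")
# => on the order of 70,000 births per group, far more than most single cohorts enroll
```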
Let's look at these two issues separately in the context of the Wax meta-analysis.
Statistical Power
Lack of statistical power could not possibly be the rationale for conducting a meta-analysis on the safety of home birth. That's because there already is a study large enough to detect differences in intrapartum and neonatal death. In fact, it contributed 94% of the data on planned home birth in the meta-analysis (321,307 of 342,056 planned home births). That study found virtually identical rates of neonatal death in both the planned home and planned hospital births*, with relatively narrow confidence intervals. Neonatal deaths on days 0-7 occurred in 3.4 per 10,000 births in each group, and when these were combined with intrapartum mortality and adjusted for confounding factors, the relative risk was 1.00 (95% CI 0.78 to 1.27). In other words, the data are consistent with anything from a 22% reduction to a 27% increase in intrapartum or neonatal mortality with planned home birth.
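To see where a confidence interval like that comes from, here is a minimal sketch of the standard relative-risk calculation. The event counts are illustrative round numbers chosen to match a rate of roughly 3.4 per 10,000 in two large groups; they are not the study's actual data, and the crude interval they produce will not exactly match the published, adjusted one:

```python
import math

# Illustrative counts only, chosen to give roughly 3.4 deaths per 10,000
# in two large groups; these are NOT the actual study data.
deaths_home, n_home = 109, 321_307
deaths_hosp, n_hosp = 56, 163_000

rr = (deaths_home / n_home) / (deaths_hosp / n_hosp)

# Standard error of log(RR) and a 95% confidence interval.
se_log_rr = math.sqrt(1/deaths_home - 1/n_home + 1/deaths_hosp - 1/n_hosp)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# With hundreds of events on each side, the interval stays fairly tight around 1.0;
# with only a handful of events, the same calculation produces a very wide interval.
```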
By adding a bunch of smaller, older, and flawed studies, excluding the intrapartum deaths (which may be affected by intrapartum events and are therefore potentially modifiable by the birth setting), and adding deaths that occurred between 8 and 28 days (which are less likely to be related to intrapartum events and are therefore less modifiable by the birth setting), we suddenly have nearly three times the neonatal mortality rate with planned home birth and a confidence interval you could drive a truck through (consistent with anything from a 32% to a 525% increase in the risk of neonatal death)? Hmmm...
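A quick note on translating ratios into percentages: a ratio R corresponds to a (R - 1) × 100 percent change in risk. Assuming the pooled estimate behind those figures was an odds ratio of roughly 2.87 with a 95% CI of about 1.32 to 6.25 (my inference from the 'nearly three times' and '32%' above, not a quotation from the paper), the conversion looks like this:

```python
def ratio_to_percent_change(ratio):
    """Express a risk or odds ratio as a percentage change relative to baseline."""
    return (ratio - 1) * 100

# Assumed pooled odds ratio and 95% CI (see the caveat above).
for label, r in [("lower 95% limit", 1.32), ("point estimate", 2.87), ("upper 95% limit", 6.25)]:
    print(f"{label}: ratio {r:.2f} -> {ratio_to_percent_change(r):+.0f}% change in risk")
# -> +32%, +187%, and +525%, respectively
```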
Definitive 'truth'
The other reason to undertake meta-analysis is to definitively settle a clinical question. Meta-analysis, after all, holds a privileged place atop the evidence pyramid, where it is considered the 'best evidence.' But is a deeply flawed meta-analysis really better than an adequately powered, methodologically sound study? The answer, of course, is no. All the meta-analysis does in such cases is separate the reader from the primary source of the data so that they can't assess it for themselves, while putting the evidence-based stamp of approval on whatever statistics the meta-analysis software spits out. But people with a political motivation to authoritatively declare a certain definitive truth may realize that most people don't bother to check to see if a meta-analysis is done appropriately or critically assess the quality of the included studies. They just go, 'Oh look, there's a meta-analysis of home birth and it said it's 3 times riskier than hospital birth. That settles that! It's a meta-analysis, after all!'
So if not a meta-analysis, then what?
OK, so if meta-analysis was not the right tool, what is? And can we stop studying the safety of home birth now that we have that large study that contributed 94% of the home birth data to the meta-analysis?
The way I see it, the large study that showed equivalent perinatal outcomes between home and hospital birth tells us definitively that home birth can be safe. But it doesn't tell us that home birth is intrinsically safe. We need to continue to study home birth using all of the tools in the research toolbox, qualitative and quantitative, to determine under what circumstances home birth is safe and how to optimize care and outcomes in all birth settings. And we need to stop pushing home birth underground in the United States, where it remains a fringe alternative, poorly integrated with the maternity care system, with no standard safety net in place for women who begin labor with the intention to birth at home but turn out to need hospitalization in order to birth safely. Shame on the American Journal of Obstetrics and Gynecology for making this task even more difficult than it already was, by publishing and publicizing a junk meta-analysis.
*edited 7/12/2010 to correct a (serious) error. Sentence previously read 'virtually identical rates of neonatal death in both the planned and unplanned home births.'
Tags
Home Birth Safety Newborns Systematic Review Maternal Infant Care Meta-Analysis