Tools for Those Who Summarize the Evidence Base
Hi everybody, I'm really happy to finally find somewhere my meta-analysis questions could find an answer. I'm new to meta-analysis, but I have read multiple books on the subject while starting my own, and I cannot find anything close to what I am trying to do. That might mean it is not okay, not correct according to statistical or meta-analytic standards, and if so, I hope somebody will be able to point it out to me.
The data I'm dealing with come from different experiment types, most of them appropriate for a d-type effect size. Some, however, can only be converted into an effect size by using odds ratios (OR). I know the conversion equation from OR to d from Borenstein et al. (2009). So at this point, everything is fine.
The problem is that some of the OR-type data (quite a lot of them, actually) contain a 0. I have tried a continuity correction factor, but as the variance skyrocketed, the weight of these effect sizes (I'm using random-effects weights, the inverse of the within-study plus between-study variance) dropped to 0. I tried to follow Sweeting et al. (2004, Statistics in Medicine), but they did not work with random-effects meta-analysis.
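To make the weighting issue concrete, here is a minimal sketch (my illustration, not code from the thread) of random-effects inverse-variance weights, assuming a known between-study variance tau²; a study whose variance explodes after a continuity correction ends up with almost no relative weight:

```python
# Sketch: random-effects inverse-variance weights, w_i = 1 / (v_i + tau2),
# where v_i is the within-study variance and tau2 the between-study variance.
# The variance values below are illustrative, not from any real dataset.

def random_effects_weights(variances, tau2):
    """Return normalized random-effects weights for within-study variances."""
    raw = [1.0 / (v + tau2) for v in variances]
    total = sum(raw)
    return [w / total for w in raw]

# Two precise studies plus one whose variance blew up to 2.15 after
# a continuity correction: the last study's weight is nearly zero.
weights = random_effects_weights([0.05, 0.08, 2.15], tau2=0.02)
print([round(w, 3) for w in weights])  # → [0.577, 0.404, 0.019]
```

This is exactly the behavior described above: the corrected study is not excluded, but its influence on the pooled estimate becomes negligible.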
Any clue or potential references that might help? I really wish I could mix the d-type and OR-type effect sizes in a single meta-analysis, as I'm doing a meta-regression and I would really like to have as many levels of the moderator as possible.
I thought that with ORs, 0 is as extreme as infinity, so it should not appear in practice. Yes, an OR of 0 would be astronomical in size! How could a report give a zero for an OR, unless it is a typographical error? (Rounding?)
Sorry, I was unclear. Some of the data used to compute the OR are 0. So, referring to the Borenstein nomenclature for the equation:
Table 5.1 Nomenclature for a 2 × 2 table of outcome by treatment.

            Events   Non-events   N
Treated     A        B            n1
Control     C        D            n2
One of my event or non-event cells (A, B, C, or D) is 0.
Oh, I see. There is something called the Haldane correction that solves the problem: essentially, you add 0.5 to each cell frequency, which gets rid of the zero-cell problem. Here's a link describing it.
Thanks. I tried it (partially, for the moment), but it still greatly increases the variance and brings the weight to 0. I wish there were something else to do.
I read a bit about the Peto method, which estimates the OR differently and can handle 0, but from what I've read, it's a different way of doing the entire meta-analysis, and it doesn't seem to fit with effect-size conversion or meta-regression...
Can you give me an example 2 × 2 table that exhibits the problem?
I realize that I was probably doing the correction wrong, so it might be okay after all (great!). Just to help anyone who finds this post, here's an example:
My data are percentages of plant individuals browsed by herbivores with or without treatment, so I usually have to create the 2 × 2 table myself. For that, you need to know the n in each cell.
              Events (browsed)   Non-events (unbrowsed)
Treatment     0                  40
Control       30                 10
Without correction: OR = (0 × 10) / (40 × 30) = 0, and its V = infinite.
With correction: OR = ((0 + 0.5) × (10 + 0.5)) / ((40 + 0.5) × (30 + 0.5)) ≈ 0.004, and its V (the variance of the log OR) ≈ 2.15,
and then I would transform it to a log OR.
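The steps above can be sketched in a few lines of Python (my sketch, not the poster's code), using the thread's own table and the standard 1/a + 1/b + 1/c + 1/d formula for the variance of the log OR:

```python
import math

def haldane_log_or(a, b, c, d, k=0.5):
    """Haldane correction: add k to every cell, then return
    (OR, log OR, Var(log OR)) for the 2 x 2 table [[a, b], [c, d]]."""
    a, b, c, d = a + k, b + k, c + k, d + k
    or_ = (a * d) / (b * c)
    var_log_or = 1 / a + 1 / b + 1 / c + 1 / d
    return or_, math.log(or_), var_log_or

# The thread's table: Treatment 0 browsed / 40 unbrowsed,
# Control 30 browsed / 10 unbrowsed.
or_, log_or, v = haldane_log_or(0, 40, 30, 10)
print(round(or_, 5), round(v, 2))  # → 0.00425 2.15, matching the post
```

Note that without the correction (k = 0) the zero cell would make the OR exactly 0 and the variance term 1/a infinite, which is the original problem.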
Thanks, Blair, for the help. If you see any mistake in what I've done, please let me know.
Right, it cannot be calculated with the zero cell. When I calculate the OR using dstat 2, I come up with the same OR (and implied variance). This OR of 0.0034 converts to an SMD of 3.278, with a 95% CI of 2.61 to 3.95, so the underlying variance still implies a fairly wide CI. These are extremely large effects you are documenting. You should see your variance estimates get smaller with larger studies, so the logic is correct. I see no mistake. You should go with the Haldane-corrected version!
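For readers wanting to reproduce the conversion, the Borenstein et al. (2009) logit transformation is d = log(OR) × √3/π with V_d = V_logOR × 3/π². A hedged sketch applying it to the corrected table above (the exact figures dstat 2 reports may differ slightly, depending on its internal correction and rounding, and the sign simply reflects which group is in the numerator):

```python
import math

def log_or_to_d(log_or, var_log_or):
    """Borenstein et al. (2009) conversion from log odds ratio to
    standardized mean difference: d = logOR * sqrt(3)/pi,
    V_d = V_logOR * 3/pi^2."""
    d = log_or * math.sqrt(3) / math.pi
    v_d = var_log_or * 3 / math.pi ** 2
    return d, v_d

# Corrected table above: OR ~ 0.00425, Var(log OR) ~ 2.15.
d, v_d = log_or_to_d(math.log(0.00425), 2.15)
print(round(d, 2), round(v_d, 2))  # → -3.01 0.65
```

The magnitude is comparable to the SMD quoted in the thread, confirming that these are very large effects even after the correction.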