Meta-Analysis Resources

Tools for Those Who Summarize the Evidence Base

Resources and networking for those who conduct or interpret meta-analyses related to any phenomenon that is gauged in multiple studies.

Computing the 95% CI of the I^2 statistic, and publication bias

Dear all,

I am working on a manuscript for my meta-analysis of the effectiveness of a program. As indicators of heterogeneity in the effect size estimates, I have obtained the Q statistic and the I^2 statistic from the program Comprehensive Meta-Analysis (CMA) 3.0. I am now using the formula on page 123 of Borenstein, Hedges, Higgins, and Rothstein (2009) to compute 95% confidence intervals for the I^2 statistics, but some problems have come up.
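For reference, here is roughly how I am doing the computation, sketched in Python. The function and variable names are my own, and I am assuming that the two-case standard error of ln(H) from Higgins and Thompson (2002) is the same formula as on page 123 of Borenstein et al.; please correct me if I have it wrong.

import math

def i2_with_ci(Q, df, z=1.96):
    # Point estimate: I^2 = (Q - df) / Q * 100; CMA reports 0.00 when Q < df
    i2 = max(0.0, (Q - df) / Q) * 100

    # Work on the log scale of H = sqrt(Q / df)
    ln_h = 0.5 * math.log(Q / df)

    # Standard error of ln(H); note the second case requires df > 1
    if Q > df + 1:
        se_ln_h = 0.5 * (math.log(Q) - math.log(df)) / (math.sqrt(2 * Q) - math.sqrt(2 * df - 1))
    else:
        se_ln_h = math.sqrt((1.0 / (2 * (df - 1))) * (1.0 - 1.0 / (3 * (df - 1) ** 2)))

    # Convert the CI for H back to the I^2 scale
    h_lo = math.exp(ln_h - z * se_ln_h)
    h_hi = math.exp(ln_h + z * se_ln_h)
    lower = (h_lo ** 2 - 1) / h_lo ** 2 * 100   # this can come out negative -- see question 2 below
    upper = (h_hi ** 2 - 1) / h_hi ** 2 * 100
    return i2, lower, upper

print(i2_with_ci(5.708, 4))   # the first problematic case listed below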

1) When there is only one study (k = 1), should we (or can we) still compute the Q and I^2 statistics? CMA still gives me 0.00 for both Q and I^2, but when I go on to compute the 95% CI of I^2, it cannot be computed. I suppose the issue of heterogeneity is simply not relevant when k = 1?

2) Is it possible and reasonable for the lower limit of the 95% confidence interval of I^2 to be negative? The resulting ranges seem far too wide. What does a negative value of I^2 imply?

The three problematic cases for this question:

Q = 5.708, df = 4, I^2 = 29.92, 95% CI [-81.34, 72.92]

Q = 3.527, df = 1, I^2 = 71.65, 95% CI [-26.04, 93.62]

Q = 1.17, df = 4, I^2 = 0.00, 95% CI [-1646.34, 33.07]

2b) Another problem: in the following case, the computed lower limit of the 95% CI is larger than the point estimate of I^2. Does this indicate a computation error on my part, or some other issue?

Q = 117.220, df = 12, I^2 = 71.65, 95% CI [84.34, 93.31]

The reference is:

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to Meta-Analysis. Wiley. http://onlinelibrary.wiley.com/book/10.1002/9780470743386

3) Regarding the article below: could anyone explain to me, in simpler terms, how to conceptualize, conduct, and interpret the testing of potential publication bias, alongside moderators, with methods such as PET-PEESE? I am required to conduct this test in my meta-analysis, but I don't quite understand the simulation analyses in this paper.

Stanley, T. D., & Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. Research Synthesis Methods, 5(1), 60-78.

Any help would be greatly appreciated. Many thanks in advance!

Regards,

Kevria


Replies to This Discussion

With 1), right: I^2 can only be calculated when k > 1.

With 2), according to Higgins and Thompson's (2002) formulation, I^2 values below zero are rounded up to zero.
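As a quick worked example using the third case listed above (Q = 1.17, df = 4) and the usual definition of I^2:

I^2 = (Q - df) / Q * 100% = (1.17 - 4) / 1.17 * 100% ≈ -242%,

which is negative simply because Q happens to fall below its degrees of freedom, and is therefore reported as 0.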

With 2b), it looks like a calculation error.

With 3), for PET-PEESE (a code sketch follows these steps): (a) Put the standard error (SE) into a meta-regression as the single predictor of the effect sizes. If the intercept is not significant, then that intercept, along with its 95% CI, is the estimate of your effect adjusted for small-studies effects.

(b) If the intercept is significant, run a second model that replaces SE with its square, SE^2. Now that intercept and its 95% CI are your estimate adjusted for small-studies effects.

(c) Compare the values to the naive mean (with no predictors). Obviously, estimates that are close together imply less "publication bias" (or small-studies effects).

(d) My stock advice: beware heterogeneity! Under heterogeneity, such estimates are prone to error; what is really needed are more complex models that account for the heterogeneity.
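To make (a) through (c) concrete, here is a minimal sketch in Python using statsmodels (not CMA or SPSS output). The effect sizes and standard errors are made up, and the inverse-variance weights 1/SE^2 reflect the usual weighted-least-squares setup for PET-PEESE, so treat this as an illustration rather than a recipe.

import numpy as np
import statsmodels.api as sm

# Hypothetical effect sizes and their standard errors (illustrative only)
d = np.array([0.30, 0.45, 0.12, 0.50, 0.22, 0.38])
se = np.array([0.10, 0.20, 0.08, 0.25, 0.12, 0.18])
w = 1.0 / se**2  # inverse-variance weights

# (a) PET: weighted meta-regression of effect size on SE
pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()
print("PET intercept:", pet.params[0], "95% CI:", pet.conf_int()[0], "p:", pet.pvalues[0])

# (b) PEESE: if the PET intercept is significant, replace SE with SE^2
peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()
print("PEESE intercept:", peese.params[0], "95% CI:", peese.conf_int()[0])

# (c) Naive mean with no predictors (intercept-only weighted model), for comparison
naive = sm.WLS(d, np.ones_like(d), weights=w).fit()
print("Naive mean:", naive.params[0], "95% CI:", naive.conf_int()[0])

The "const" term in each fit is the intercept referred to above; its test in the PET model is what decides whether you move on to PEESE.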





Sincere thanks for the informative and concise explanation! May I ask a few follow-up questions?

4) Should I^2 be computed when k = 2 but both studies come from the same article (e.g., two independent samples reported in the same article)? Should the criterion be k > 1, or degrees of freedom of Q > 1? I ask because the 95% CI of I^2 cannot be computed in the following two situations either, and I haven't been able to figure out the root problem.

1 article (2 studies): Cohen's d = 0.21, SE = 0.06, 95% CI [0.10, 0.32]; Q = 0.147, df = 1, I^2 = 0.00

2 articles (2 studies): Cohen's d = 0.26, SE = 0.08, 95% CI [0.10, 0.41]; Q = 0.167, df = 1, I^2 = 0.00

5) How should we conceptualize negative values of the I^2 statistic?

6) Moreover, when we round negative I^2 values up to zero, what does this mean conceptually? And if we round a value up to zero, do we need to note this anywhere in the manuscript?

Many thanks again for the explanation of PET-PEESE; it helped me a lot. May I further ask:

7) Does that mean we can directly enter the effect sizes as the DV and the standard error (or the variance, depending on which model applies, as you specified above) as the single predictor in a linear regression model in SPSS? And would the "constant" in the SPSS linear regression output then be the intercept we are looking for?

8) When we compare the naive mean with the estimates obtained from PET-PEESE, is there any rule of thumb or criterion for how close the two values must be to indicate the degree of small-studies effects? And what if the two estimates are close together, but the 95% CI of the PET-PEESE estimate crosses zero (suggesting non-significance)?

9) I'm a bit confused. As I understand it, publication bias could be one reason for small-studies effects, but the two terms sometimes seem to be used interchangeably. So for PET-PEESE, Egger's test, Begg's test, and the trim-and-fill method, what does each of these actually assess: publication bias or small-studies effects?

10) Referring to your stock advice about "complex models that account for heterogeneity": when there is high heterogeneity, should we try to employ statistical models that account for it, or instead select a relatively "homogeneous" subset of studies from the original batch and conduct a separate meta-analysis on it?

Sincere thanks again for the explanation and the great help!

Kevria
