Heterogeneity Discussions - Meta-Analysis Resources (forum archive: http://meta-analysis.ning.com/group/heterogeneity/forum)

Computing the 95% CI of the I^2 statistic; and publication bias (posted by Kevria, 2017-05-09)
<p>Dear all,</p>
<p>I am working on a manuscript of my meta-analysis on effectiveness of a program. As indicators of heterogeneity in the effect size estimates, I have obtained the Q statistic and the I^2 statistic from the program Comprehensive Meta-Analysis (CMA) 3.0. Now I am using the formula on page 123 of Borenstein, Hedges, Higgins, & Rothstein (2009) to compute the 95% confidence intervals of the I^2 statistics. But some problems occur.</p>
<p>1) When there is only 1 study (k=1), should we (or can we) still compute the Q and I^2 statistics? CMA still reports 0.00 for both Q and I^2, but when I go on to compute the 95% CI of I^2, it cannot be computed. I suppose the heterogeneity issue is simply irrelevant when k=1?</p>
<p>2) Is it possible and reasonable for the lower limit of the 95% confidence interval of I^2 to be negative? The range seems far too wide...? What does a negative value of I^2 imply?</p>
<p>The three problematic cases for this question:</p>
<p>Q=5.708, df=4, I^2=29.92 [-81.34, 72.92]</p>
<p>Q=3.527, df=1, I^2=71.65 [-26.04, 93.62]</p>
<p>Q=1.17, df=4, I^2=0.00 [-1646.34, 33.07]</p>
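<p>These cases can be checked with a short Python sketch of the test-based interval from Higgins &amp; Thompson (2002), which is the approach Borenstein et al. (2009) present (the function name here is made up). It shows why negative limits appear: the interval is computed on the H scale and can fall below H = 1, which maps to a negative I^2; by convention such values are truncated to 0. Left untruncated, the first case below reproduces the -81.34 lower limit above. It also shows why nothing can be computed for k = 1: Q then has 0 degrees of freedom, so I^2 and its CI are undefined.</p>

```python
import math

def i2_with_ci(Q, df, z=1.96):
    """I^2 (as a percentage) with a test-based 95% CI (Higgins & Thompson, 2002).

    Negative values on the I^2 scale are truncated to 0, which is why a
    'negative lower limit' is normally reported as 0 in practice.
    """
    if df < 1:
        # k = 1 study: Q has 0 degrees of freedom, so neither I^2 nor its CI
        # is defined -- heterogeneity is meaningless for a single study.
        raise ValueError("I^2 requires at least 2 studies (df = k - 1 >= 1).")
    k = df + 1
    H = math.sqrt(max(Q / df, 1.0))  # H is truncated at 1 (I^2 at 0)
    if Q > k:
        se_ln_h = 0.5 * (math.log(Q) - math.log(df)) / (
            math.sqrt(2 * Q) - math.sqrt(2 * k - 3))
    elif k > 2:
        se_ln_h = math.sqrt(1.0 / (2 * (k - 2)) * (1 - 1.0 / (3 * (k - 2) ** 2)))
    else:
        raise ValueError("CI undefined for k = 2 with Q <= k.")
    h_lo = math.exp(math.log(H) - z * se_ln_h)
    h_hi = math.exp(math.log(H) + z * se_ln_h)
    to_i2 = lambda h: max(0.0, (h * h - 1.0) / (h * h)) * 100.0
    return to_i2(H), to_i2(h_lo), to_i2(h_hi)

print(i2_with_ci(5.708, 4))   # point estimate 29.92, CI truncated at 0 below
```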
<p>3) Another problematic case: the computed lower limit of I^2 is larger than the point estimate of the I^2 statistic. Does this imply a computation error on my part, or some other issue?</p>
<p>Q=117.220, df=12, I^2=<strong>71.65</strong> [<strong>84.34</strong>, 93.31]</p>
<p>The reference is:</p>
<p>Borenstein, M., Hedges, L. V., Higgins, J. P. T., &amp; Rothstein, H. R. (2009). <em>Introduction to Meta-Analysis</em>. Wiley.</p>
<p><a href="http://onlinelibrary.wiley.com/book/10.1002/9780470743386">http://onlinelibrary.wiley.com/book/10.1002/9780470743386</a></p>
<p>4) Regarding the article below, could anyone explain, in simpler terms, how to conceptualize, conduct, and interpret tests of potential publication bias such as PET-PEESE? I am required to conduct this test in my meta-analysis, but I don't quite understand the simulation analyses in the paper.</p>
<p>Stanley, T. D., &amp; Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. <em>Research Synthesis Methods, 5</em>(1), 60-78.</p>
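<p>The core of PET-PEESE is simpler than the simulations suggest: PET regresses each study's effect size on its standard error (weighted by inverse variance), and the intercept is the estimated effect for a hypothetical study with SE = 0, i.e., "corrected" for small-study/publication bias; PEESE does the same but uses the variance (SE^2) as the predictor, and is preferred when the PET intercept is significantly nonzero. A minimal sketch with invented data (all numbers below are made up for illustration):</p>

```python
import numpy as np

# Hypothetical effect sizes and standard errors, for illustration only.
yi = np.array([0.55, 0.42, 0.30, 0.61, 0.25, 0.18, 0.35])
sei = np.array([0.25, 0.20, 0.12, 0.30, 0.10, 0.06, 0.15])

def wls_fit(y, x, w):
    """Weighted least squares of y on x; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return b[0], b[1]

# PET: intercept of ES ~ SE, weighted by 1/SE^2; a significant slope
# signals small-study (funnel-plot) asymmetry.
pet_intercept, pet_slope = wls_fit(yi, sei, 1 / sei**2)

# PEESE: intercept of ES ~ SE^2 (used when the PET intercept is significant).
peese_intercept, _ = wls_fit(yi, sei**2, 1 / sei**2)

print(f"PET intercept   = {pet_intercept:.3f} (slope {pet_slope:.2f})")
print(f"PEESE intercept = {peese_intercept:.3f}")
```

<p>In these invented data the effect sizes grow with their standard errors, so the slope is large and the bias-corrected intercepts sit well below the naive average effect, which is the pattern PET-PEESE is designed to detect.</p>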
<p>Any help would be greatly appreciated. Thanks in advance!</p>
<p>Regards,</p>
<p>Kevria</p>

Meta-regression question (posted by Yeshvanth JP, 2014-04-09)
<p>As I mentioned in the 'methodological quality' forum, I am currently working on a risk association meta-analysis involving case-control studies. </p>
<p>Not surprisingly, our meta-analysis has substantial heterogeneity (I-squared of 90%). We are trying to run a meta-regression and subgroup analysis that might explain some of the heterogeneity. </p>
<p>The question I have is: how do we form a regression equation of the form y = a + bx? Is y here the 90% heterogeneity? And are the regression coefficients (b1, b2, b3, etc.) the covariates that I will be using for the meta-regression? </p>
<p>Also, how does one run a meta-regression in Stata? I am currently using Comprehensive Meta-Analysis and it is working well, but I would like to learn to use the metareg command in Stata. </p>
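<p>In a meta-regression the outcome y is each study's effect size, not the I^2 value: you regress study-level effect sizes on study-level covariates, weighting each study by the precision of its estimate, and the slope b tells you how the expected effect shifts per unit of the covariate. A minimal fixed-effect sketch in Python with invented numbers (a random-effects meta-regression would add an estimate of tau^2 to each variance; Stata's user-written metareg command and R's metafor::rma fit that model properly):</p>

```python
import numpy as np

# Invented example data: five studies' log odds ratios, their sampling
# variances, and one study-level covariate (say, mean participant age).
yi = np.array([0.41, 0.95, 0.22, 1.30, 0.78])   # effect sizes
vi = np.array([0.04, 0.09, 0.05, 0.12, 0.07])   # sampling variances
xi = np.array([35.0, 52.0, 41.0, 60.0, 48.0])   # covariate values

# Weighted least squares: y = a + b*x with weights 1/vi.
# The outcome is the per-study EFFECT SIZE, not the I^2 statistic.
W = np.diag(1.0 / vi)
X = np.column_stack([np.ones_like(xi), xi])      # intercept column + covariate
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ yi)
print(f"intercept a = {a:.3f}, slope b = {b:.3f} per unit of the covariate")
```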
<p>Thanks so much. </p>
Methodology bibliography: Heterogeneity and related topics (posted by Adam Hafdahl, 2012-11-06)
<p>(I'm posting similar info about my bibliography to other groups as well. This seems more useful than making one general post somewhere on this site, because different topics and issues are relevant to each group.)<br/><br/>My bibliography on methodology for research synthesis includes numerous articles, chapters, conference papers, dissertations, and other types of work related to between-studies heterogeneity and related topics (e.g., moderators, random-effects techniques, clinical heterogeneity, homogeneity tests, between-studies/inter-study variance, apples-and-oranges criticism). I'll just mention a few ways to find relevant items. One strategy is to examine the <strong>tag cloud</strong> in the new CiteULike version of my bibliography:<br/><br/> <a href="http://www.citeulike.org/user/Meth4ReSyn/tags">http://www.citeulike.org/user/Meth4ReSyn/tags</a><br/><br/>From there you can simply click on a given tag to see a list of items that address that topic (though not always as a main topic). For example, clicking on 'heterogeneity,' 'moderator,' or 'random_effects' takes you to the following subsets of items:<br/><br/> <a href="http://www.citeulike.org/user/Meth4ReSyn/tag/heterogeneity">http://www.citeulike.org/user/Meth4ReSyn/tag/heterogeneity</a><br/> <a href="http://www.citeulike.org/user/Meth4ReSyn/tag/moderator">http://www.citeulike.org/user/Meth4ReSyn/tag/moderator</a><br/> <a href="http://www.citeulike.org/user/Meth4ReSyn/tag/random_effects">http://www.citeulike.org/user/Meth4ReSyn/tag/random_effects</a><br/><br/>Many items' records include an abstract, more tags, links to the full text, and other features. You can also search this library, export citations, and accomplish other useful reference management tasks; some tasks require (free) registration but most do not. 
The following blog page describes much more about this bibliography project, including links to tips for getting started and to other publicly available versions:<br/><br/> <a href="http://adamhafdahl.net/bibliography">http://adamhafdahl.net/bibliography</a><br/><br/>The above Meth4ReSyn library in CiteULike currently contains fewer than 10% of the 7,000+ items in my larger publicly available bibliography, the most recent version of which is associated with the forthcoming article:<br/><br/> Hafdahl, A. R. (in press). Article Alerts: Items from 2011, Part I. <em>Research Synthesis Methods</em>.<br/><br/>This 'Article Alerts' version exists as an Excel workbook, which is somewhat clumsy to use but can be searched or filtered using Excel utilities.<br/><br/>Even the largest version of my bibliography contains only about 20% to 30% of all the available methodological work relevant to research synthesis, by my admittedly crude estimate. I spend several hundred hours each year adding more items and making it more user-friendly (e.g., transporting most items to the Meth4ReSyn library in CiteULike). I'd welcome your suggestions for how to improve this resource.<br/><br/></p>

The Truth Wears Off (posted by Blair T. Johnson, 2011-04-14)
<p>Hi all,</p>
<p> </p>
<p>I'd noticed and browsed <a href="http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer" target="_blank">this very interesting piece</a> by Jonah Lehrer in the New Yorker in December, but had not read it carefully until yesterday. It discusses publication bias (a.k.a. reporting bias) and the frustrations of researchers trying to replicate their own effects. The story about Jonathan Schooler is vividly told, as is early work on ESP by J. B. Rhine. Yet of all the examples of debut curses the article lists, the one that piques my interest most is the effort by John Crabbe to conduct the exact same experiment at the same time in three different locations in the northern hemisphere (Albany, NY; Edmonton, Alberta; and Portland, Oregon). The experiment had to do with the effects of exposure to cocaine on the behavior of mice. They attempted to hold seemingly every variable constant across the three settings. As Lehrer states,</p>
<blockquote>The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of litter mates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.</blockquote>
<p>(I'm guessing even more things were held constant, too.) Yet how did the results look? Disturbingly different. Lehrer reports (emphasis added):</p>
<blockquote>In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn't follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.</blockquote>
<blockquote>The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are <strong>nothing but noise.</strong></blockquote>
<p>Of course, Lehrer reports these findings in a pop-science way that does not permit us to get a sense of the magnitude of the effects described (e.g., if standard deviations on the critical variables were large, then we are just seeing small wavering in the data), and I'd be curious how they measured movement and anxiety (surely questionnaires were not used!). It would be valuable to know whether the observed variation was within that expected by sampling error or exceeded it. (Perhaps the original Crabbe report addresses this issue?)</p>
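<p>Whether three site means differ by more than sampling error is exactly what Cochran's Q tests. With numbers loosely patterned on the figures Lehrer quotes (the standard errors here are pure guesses for illustration), the check would look like this:</p>

```python
import math

# Illustrative numbers in the spirit of the Crabbe three-site design:
# mean extra distance moved (cm) per site, with an INVENTED standard error each.
means = [600.0, 701.0, 5000.0]   # Portland, Albany, Edmonton
ses   = [150.0, 160.0, 400.0]    # standard errors (made up)

w = [1 / se**2 for se in ses]                         # inverse-variance weights
pooled = sum(wi * m for wi, m in zip(w, means)) / sum(w)
Q = sum(wi * (m - pooled)**2 for wi, m in zip(w, means))

# Under homogeneity, Q ~ chi-square with k - 1 = 2 df; the .05 critical
# value is 5.99, so a Q far above that flags more-than-sampling-error variation.
print(f"Q = {Q:.1f} (df = 2); heterogeneous: {Q > 5.99}")
```

<p>With Edmonton's mean so far from the other two, Q lands far beyond the critical value, so even generous guesses about the standard errors would not reduce this to mere sampling noise.</p>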
<p>Those of us who work with human populations--which seem even more complex than mice--can surely relate to seemingly random results appearing in even very carefully conducted investigations. But I wonder if what we label "noise" or "randomness" is just a convention for the longer but more precise meaning, "we know the results are heterogeneous and have no clue what is causing the deviations." Maybe it just takes a different expert (than the one who did the original research) to figure out why the results deviate. Just thinking out loud: Maybe the mice had to travel farther to the Alberta site than the other sites, or maybe there is something about the local ecologies at work. Maybe Einstein was right when he opined, "<strong>God does not play dice with the universe</strong>."</p>
<p>Any discussion? I'd love to hear similar reports, or hear from people who know the Crabbe or Schooler work well enough to inform the discussion.</p>
Spreadsheet to Convert Q into I^2 (posted by Blair T. Johnson, 2010-03-30)
<p>Following is a spreadsheet that my team and I developed to convert Q statistics into I2:<br/> <a target="_self" href="http://storage.ning.com/topology/rest/1.0/file/get/1802383003?profile=original">Q_k_to_I2with_CIs.xlsx</a></p>