Calculating Effect Sizes and Their Variances or Standard Errors Discussions - Meta-Analysis Resources
2022-01-24T03:01:16Z
https://meta-analysis.ning.com/group/EScalcs/forum?feed=yes&xn_auth=no
Dealing with effect sizes from 1 study reporting both group comparison and correlations? tag:meta-analysis.ning.com,2018-10-10:5367515:Topic:29071 2018-10-10T16:08:05.624Z Stephane De Brito https://meta-analysis.ning.com/profile/StephaneDeBrito
<p>Hi,</p>
<p>We are running a meta-analysis looking at the association between psychopathy and impulsivity, and some studies report both group comparisons (psychopaths vs. non-psychopaths) and correlations between psychopathy scores and impulsivity.</p>
<p>What do you do in those instances? Do you calculate the two effect sizes (i.e., from the group comparison and from the correlation) and average them to obtain an overall effect size? What would you recommend when the correlation is run across the two groups or just in the psychopathy group?</p>
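<p>One common approach (not the only defensible one) is to put both statistics on the same metric, e.g. convert r to d using the Borenstein et al. (2009) equation, and then average the two estimates within the study so each study contributes one effect size. A minimal sketch in Python, with entirely hypothetical input values:</p>

```python
import math

def r_to_d(r):
    """Convert a Pearson r to Cohen's d (Borenstein et al., 2009)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def pooled_d(n1, n2, m1, m2, sd1, sd2):
    """Cohen's d from a two-group comparison, using the pooled SD."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical values: d from the group comparison, r from the correlation
d_groups = pooled_d(40, 40, 25.0, 20.0, 8.0, 8.0)
d_from_r = r_to_d(0.30)

# Simple average, so the study contributes a single effect size
d_study = (d_groups + d_from_r) / 2
```

<p>Whether the correlation computed across both groups is comparable to the within-group correlation is a separate, substantive question; the conversion above does not resolve it.</p>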
<p>Thank you.</p>
<p>Stephane</p> Regression weights as effect sizes ?tag:meta-analysis.ning.com,2018-09-06:5367515:Topic:288492018-09-06T08:47:00.204ZEmpihttps://meta-analysis.ning.com/profile/Empi
<p>Dear all,</p>
<p> </p>
<p>In order to synthesize results from a set of multiple regression analyses from 6 separate cross-sectional samples, I would like to use the standardized regression slopes, or beta coefficients (standardized slope = unstandardized slope * sd_x / sd_y), in a (fixed- or random-effects) meta-analysis. All analyses use the same set of covariates. I am not sure whether it would be adequate</p>
<p> </p>
<ol>
<li>to use the squared standard errors of the unstandardized regression slopes as variances for weighting the standardized regression slopes in the meta-analysis, or whether one would need</li>
<li>the standard errors of the standardized regression slopes (as they are available, e.g., in Mplus or lavaan)?</li>
</ol>
<p> </p>
<p>A related question is whether it would be better to use other effect sizes than standardized regression slopes – what do you think?</p>
<p>Best,</p>
<p>Empi</p>
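<p>Whichever standard errors turn out to be appropriate, the mechanics of the weighting are the same. A minimal inverse-variance (fixed-effect) sketch in Python, assuming the standard errors of the <em>standardized</em> slopes are available (e.g., from Mplus or lavaan); all numbers are hypothetical:</p>

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

# Hypothetical standardized betas and their standard errors from 6 samples
betas = [0.21, 0.30, 0.18, 0.25, 0.27, 0.22]
ses   = [0.05, 0.08, 0.06, 0.07, 0.05, 0.09]
pooled, se = fixed_effect_pool(betas, ses)
```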
<p></p> Help with Master's dissertation - Meta-analysis on Private Equity Performancetag:meta-analysis.ning.com,2018-06-05:5367515:Topic:284102018-06-05T10:27:07.197ZSean Mckeonhttps://meta-analysis.ning.com/profile/SeanMckeon
<div dir="ltr">Hi!</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I am currently completing my master's dissertation on private equity performance versus public stock markets.</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I want to use a meta-analysis to study prior literature, on whether private equity outperforms in terms of investment return versus public stock markets. There are many papers on this topic, all…</div>
<div dir="ltr">Hi!</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I am currently completing my master's dissertation on private equity performance versus public stock markets.</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I want to use a meta-analysis to study prior literature, on whether private equity outperforms in terms of investment return versus public stock markets. There are many papers on this topic, all with conflicting conclusions.</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">The Question I need Help with;</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I am unsure of what effect sizes to use for the analysis. The papers generally give you a binary/dichotomous answer (yes/no - whether private equity outperforms stock markets). Problem with this I have noticed is that in previous examples there is a treatment and control group and with the papers I want to analyse, this does not exist. </div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">Or the papers give a percentage value (e.g. Private equity outperforms by 3.79% per annum versus stock markets). Can the difference in percentage values given by the papers be used to calculate an effect size? I have not seen examples that use percentages to get effect sizes. </div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">What effect size should I use? Are this values applicable to get effect sizes or should I try something else? Is this possible to calculate for a meta-analysis? </div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">I look forward to any help anyone can provide! Thanks!</div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">Kind regards, </div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr"></div>
<div dir="ltr">Seán</div> Estimating the standard error for d, VR, and Proportion ratios.tag:meta-analysis.ning.com,2018-02-07:5367515:Topic:274182018-02-07T17:54:45.635ZMajidhttps://meta-analysis.ning.com/profile/Majid
<p>Hi everybody,</p>
<p><br/>I want to calculate the standard error of <strong><em>d</em></strong>, <strong>VR</strong>, and <strong>proportion ratios</strong>, and I am looking for the standard error formula for each of them (if possible, for all of them using the jackknife).<br/>(There are two independent groups.)<br/>1. <strong><em>d</em></strong> = standardized mean difference<br/>2. <strong>VR</strong> (variance ratio)<br/>VR = variance of group A / variance of group B<br/>3. <strong>Proportion ratios</strong>: for example, the ratio of the top 10% of subjects<br/>The ratio of the top 10% of subjects = their frequency in group A / their frequency in group B<br/>Please let me know if there are any mistakes in my question.</p>
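<p>A generic delete-one jackknife for a two-sample statistic might look like the sketch below (Python, hypothetical data). The replicates are computed separately per group and combined as in the standard grouped jackknife; the same function works for any of the three statistics once it is written as a function of the two samples:</p>

```python
import math

def jackknife_se(xs, ys, stat):
    """Grouped delete-one jackknife SE for a two-sample statistic stat(xs, ys)."""
    def group_term(reps):
        n = len(reps)
        m = sum(reps) / n
        return (n - 1) / n * sum((r - m) ** 2 for r in reps)
    reps_x = [stat(xs[:i] + xs[i + 1:], ys) for i in range(len(xs))]
    reps_y = [stat(xs, ys[:j] + ys[j + 1:]) for j in range(len(ys))]
    return math.sqrt(group_term(reps_x) + group_term(reps_y))

def var(z):
    """Sample variance (n - 1 denominator)."""
    m = sum(z) / len(z)
    return sum((v - m) ** 2 for v in z) / (len(z) - 1)

def vr(xs, ys):
    """Variance ratio: var(group A) / var(group B)."""
    return var(xs) / var(ys)

# Hypothetical data for the two independent groups
a = [4.1, 5.0, 6.2, 5.5, 4.8, 5.9]
b = [3.9, 4.4, 5.1, 4.0, 4.7, 4.2]
se_vr = jackknife_se(a, b, vr)
```

<p>For VR in particular, many would jackknife the log of the ratio instead, since its sampling distribution is closer to normal; the function above accepts that variant unchanged.</p>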
<p><br/>thank you in advance,<br/>Majid</p> Relative high weights in pre-post scores studiestag:meta-analysis.ning.com,2018-02-07:5367515:Topic:276182018-02-07T10:18:38.811ZMichael Köhlerhttps://meta-analysis.ning.com/profile/MichaelKoehler
<p>Hi there,</p>
<p>I'm working on a meta-analysis where I combine different designs: independent groups (treatment and control group) on the one hand, and dependent pre-post designs without a control group on the other. For both study types I computed Hedges' g according to Borenstein et al. (2009).</p>
<p>As prescribed for studies that use pre-post scores, I used the correlation between pre- and post-scores to compute the standard deviation within groups and the variance of d. To compute the weight for each study I used the inverse of that study's variance.</p>
<p>And here is my problem/question:</p>
<p>If the pre/post-score correlation is relatively high (e.g., .96), the weight of this study shoots up extremely. One study showed a pre-post correlation of .98, and its weight is more than 20 times that of the most precise study with independent groups. I tried using an averaged pre-post correlation, but even then some of these studies have a much higher weight than the independent ones. I read a few papers that addressed this problem, but I didn't find a solution. I also read the chapter in Borenstein et al. about the factors that affect precision. But I'm not sure how to deal with that.</p>
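<p>The behaviour described above follows directly from the Borenstein et al. (2009) variance formula for matched (pre-post) designs, V_d = (1/n + d^2/(2n)) * 2(1 - r): as r approaches 1, the variance shrinks toward zero and the inverse-variance weight explodes. A small illustration with hypothetical d and n:</p>

```python
# Variance of d for a pre-post (matched) design, Borenstein et al. (2009)
def var_d_prepost(d, n, r):
    return (1.0 / n + d ** 2 / (2.0 * n)) * 2.0 * (1.0 - r)

# Variance of d for two independent groups of size n1 and n2
def var_d_independent(d, n1, n2):
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2.0 * (n1 + n2))

d, n = 0.5, 30
# Inverse-variance weight of the pre-post study at increasing correlations
weights = {r: 1.0 / var_d_prepost(d, n, r) for r in (0.5, 0.9, 0.96, 0.98)}

# Comparison: weight of an independent-groups study with 30 per arm
w_ind = 1.0 / var_d_independent(0.5, 30, 30)
```

<p>With these numbers the pre-post study at r = .98 gets a weight roughly 45 times that of the independent-groups study, which reproduces the imbalance described above.</p>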
<p>Thanks for any reply</p>
<p>Michael</p> calculating Cohen's d for pre-post changes among paired groups?tag:meta-analysis.ning.com,2018-02-06:5367515:Topic:275152018-02-06T11:47:45.520ZKevriahttps://meta-analysis.ning.com/profile/GloriaMa
<p class="p1"><span class="s1">Dear all,</span></p>
<p class="p1"><span class="s1">I am using the programme "Comprehensive Meta-Analysis" (CMA) to meta-analyze the effectiveness of a particular intervention. I want to meta-analyze whether there are effective changes in some hypothesized outcomes after joining the intervention. I would like to seek your advice on which effect size measure(s) to use appropriately.</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">I wanted to use…</span></p>
<p class="p1"><span class="s1">Dear all,</span></p>
<p class="p1"><span class="s1">I am using the programme "Comprehensive Meta-Analysis" (CMA) to meta-analyze the effectiveness of a particular intervention. I want to meta-analyze whether there are effective changes in some hypothesized outcomes after joining the intervention. I would like to seek your advice on which effect size measure(s) to use appropriately.</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">I wanted to use Cohen’s d as the effect size measure using r.<span class="Apple-converted-space"> </span> However, my whole batch of those included studies got two types — one type with both intervention and control groups (A or B below); and another type with only intervention group (C below). It means now I mainly got three types of statistical data for effect size calculations:</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">A) pre-post change scores of BOTH intervention group AND control group</span></p>
<p class="p1"><span class="s1">B) post-ONLY data of both intervention group and control group</span></p>
<p class="p1"><span class="s1">C) paired pre-post changes of ONLY intervention group</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">For A and B, I am using Cohen’s d. But for C, should I use Cohen’s dz, which is the standardized paired difference?</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">However, my principal question is, Is it statistically and conceptually acceptable to lump studies of A, B, and C together as a batch and use Cohen’s d throughout?</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">Second, some the built-in formula in CMA require the "mean change" or "difference in means". Is "mean change" here refers to only the change in scores between two time-points within the same group? Is "difference in means" here referring to only the difference in scores between two independent groups?</span></p>
<p class="p2"></p>
<p class="p1"><span class="s1">Some studies reported the difference / change in means directly, while some other studies only reported the scores (e.g. means with SD) at each time-points respectively. For example, if I calculate the different in means between post-test and pre-test manually myself, does this difference equal to the "mean change" or "difference in means" as required by CMA mentioned above?</span></p>
<p class="p1"></p>
<p class="p1"><span class="s1">Any advice or recommended resources for relevant effect size calculation issues here would be highly appreciated. Thanks a lot in advance!</span></p>
<p class="p1"></p>
<p class="p1"><span class="s1">Regards,</span></p>
<p class="p1"><span class="s1">Kevria</span></p> inter-study variancetag:meta-analysis.ning.com,2017-05-07:5367515:Topic:243202017-05-07T17:02:57.082ZNoha Ghallabhttps://meta-analysis.ning.com/profile/NohaGhallab
<p>Hi Everyone</p>
<p>I need to know how I can calculate the inter-study variance in a fixed-effect model.</p>
<p>I need the variance so that I can calculate the weight of each study, which is the inverse of its variance.</p>
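<p>For what it's worth, in a fixed-effect model the between-study (inter-study) variance is assumed to be zero, so each study's weight is simply the inverse of its within-study variance. A minimal sketch with made-up numbers:</p>

```python
# Fixed-effect weighting: the between-study variance is assumed to be zero,
# so each study's weight is 1 / (its within-study variance).
effects = [0.40, 0.15, 0.55]      # hypothetical study effect sizes
variances = [0.04, 0.09, 0.02]    # their within-study variances

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = (1.0 / sum(weights)) ** 0.5
```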
<p>thanks in advance</p> calculating variance of mean proportions for meta-analysistag:meta-analysis.ning.com,2016-06-03:5367515:Topic:226182016-06-03T13:40:09.188ZAnne Fleurhttps://meta-analysis.ning.com/profile/AnneFleur
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3">We aim to do a meta-analysis on mean proportions using. Thus, individual proportions (accuracy scores) within a study are already averaged to a mean proportion when I distract them from studies.</font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3"> </font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3">We now struggle with the following: We want to calculate the variance but the denominator in…</font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3">We aim to do a meta-analysis on mean proportions using. Thus, individual proportions (accuracy scores) within a study are already averaged to a mean proportion when I distract them from studies.</font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3"> </font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3">We now struggle with the following: We want to calculate the variance but the denominator in our proportions reflect the number of math calculations attempted not the number of people in a study sample. Thus, denominators in calculations for e.g. Risk difference reflect the number of math calculations attempted. However, for calculation of the variance, this isn’t right as the variance will be used to weight the effect size and thus needs to reflect the number of people in a study sample. Normally one would calculate the variance of proportions as p(1-p) but I doubt whether this is appropriate as our distribution of mean proportions is more likely to be normally distributed than binomially distributed. Variance is generally not given in studies included as those studies report on the number attempted and the number correct (with accompanying variance), not on the proportion.</font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3"> </font></p>
<p style="margin: 0cm 0cm 0pt;"><font face="Calibri" size="3">Do you have any advice on how to obtain the variance necessary to weight the studies in our situation?</font></p>
<p></p> Converting from r to dtag:meta-analysis.ning.com,2014-07-14:5367515:Topic:206532014-07-14T19:09:15.502ZEmilie Champagnehttps://meta-analysis.ning.com/profile/EmilieChampagne
<p>Hi everybody,</p>
<p>I am looking for references concerning the transformation of r-type effect size into d-type.</p>
<p>My meta-analysis encompasses multiple types of data, some presented as differences of means, others as Pearson's r. As d-type effect sizes are more frequent, I converted all effect sizes into d using the Borenstein et al. (2009) equations. But as this is rarely done, I would like to back myself up with good references.</p>
<p>Do you know of anything like that? Would you advise against converting?</p>
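<p>For reference, the Borenstein et al. (2009) conversion is d = 2r / sqrt(1 - r^2), with the variance of d obtained from the variance of r as V_d = 4 * V_r / (1 - r^2)^3. A minimal sketch, using the usual approximation V_r = (1 - r^2)^2 / (n - 1):</p>

```python
import math

def r_to_d_with_var(r, n):
    """Convert a Pearson r (sample size n) to d and its variance
    (Borenstein et al., 2009)."""
    d = 2 * r / math.sqrt(1 - r ** 2)
    v_r = (1 - r ** 2) ** 2 / (n - 1)       # approximate variance of r
    v_d = 4 * v_r / (1 - r ** 2) ** 3
    return d, v_d

d, v = r_to_d_with_var(0.40, 50)
```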
<p>Thanks !</p>
<p>Emilie</p> Complex effect size conversiontag:meta-analysis.ning.com,2014-06-05:5367515:Topic:206292014-06-05T14:47:25.830ZEmilie Champagnehttps://meta-analysis.ning.com/profile/EmilieChampagne
<p>Hi everybody, I am really happy to finally find somewhere my meta-analysis questions can find an answer. I'm new to meta-analysis, but I have read multiple books on the subject while starting my own, and I cannot find anything close to what I am trying to do. That might mean that it is not "OK" (not correct according to statistics or to meta-analysis standards), and if so, I hope somebody will be able to point it out to me.</p>
<p>The data I'm dealing with comes from different experiment types, most of them appropriate for a d-type effect size. Some, however, can only be computed into an effect size by using odds ratios (OR). I know the conversion equation from OR to d from Borenstein et al. (2009). So at this point, everything is fine.</p>
<p>The problem is that some of the OR-type data (actually, quite a lot of them) contain zero cells. I have tried a continuity correction factor, but as the variance skyrocketed, the weight (I'm using random-effects weights, the inverse of the within-study plus between-study variance) of these effect sizes dropped to 0. I tried to understand Sweeting et al. (2004, Statistics in Medicine), but they did not work with random-effects meta-analysis.</p>
<p>Any clues or potential references that might help? I really wish I could mix d-type and OR-type effect sizes in a single meta-analysis, as I'm doing a meta-regression and would like to have as many moderator levels as possible.</p>
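<p>A sketch of the usual constant (0.5) continuity correction followed by the logistic conversion from log odds ratio to d (d = ln(OR) * sqrt(3) / pi, V_d = V_lnOR * 3 / pi^2), per Borenstein et al. (2009); the 2x2 counts below are hypothetical. Sweeting et al. (2004) discuss alternative corrections (e.g., treatment-arm based) that may inflate the variance less:</p>

```python
import math

def log_or_to_d(a, b, c, d_cell, cc=0.5):
    """Log odds ratio from a 2x2 table (0.5 added to all cells when any cell
    is zero), converted to Cohen's d via the logistic approximation."""
    cells = [a, b, c, d_cell]
    if 0 in cells:                       # apply the correction only when needed
        cells = [x + cc for x in cells]
    a, b, c, d_cell = cells
    ln_or = math.log((a * d_cell) / (b * c))
    v_ln_or = 1 / a + 1 / b + 1 / c + 1 / d_cell
    d_es = ln_or * math.sqrt(3) / math.pi
    v_d = v_ln_or * 3 / math.pi ** 2
    return d_es, v_d

# Hypothetical 2x2 table with a zero cell: events / non-events per group
d_es, v_d = log_or_to_d(0, 20, 8, 12)
```

<p>The large v_d for zero-cell tables is exactly the weight-collapse described above; the choice of correction, not the OR-to-d conversion, is what drives it.</p>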
<p>Thanks !</p>
<p>Emilie</p>