Meta-Analysis Resources

Tools for Those Who Summarize the Evidence Base

Resources and networking for those who conduct or interpret meta-analyses related to any phenomenon that is gauged in multiple studies.

The Hunter & Schmidt approach - is it really a Random Effects Model?

I am currently working on a meta-analysis in the field of work psychology. In this field it is quite common to use the method of Hunter & Schmidt (2004) to combine corrected correlations (mostly corrected for measurement error, with Cronbach's alpha as the reliability estimate).
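
To make the correction concrete: each observed correlation is divided by the square root of the product of the two measures' reliabilities. A minimal sketch in Python, assuming Cronbach's alpha is used as the reliability of both variables (the function name is just for illustration, not from any package):

    # Minimal sketch of the individual correction for attenuation.
    # Assumes Cronbach's alpha serves as the reliability of each measure;
    # the function name is illustrative, not taken from any software package.
    def correct_for_attenuation(r_observed, alpha_x, alpha_y):
        attenuation_factor = (alpha_x * alpha_y) ** 0.5
        return r_observed / attenuation_factor

    # Example: r = .30 with reliabilities .80 and .75 is corrected to about .387.
    print(round(correct_for_attenuation(0.30, 0.80, 0.75), 3))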

I want to conduct my meta-analysis within the Random Effects Model (REM), because my data are quite heterogeneous and I am aiming for high generalizability. Hunter & Schmidt (2004) state that their method is a REM. I doubt that. Because I want to analyze uncorrected as well as corrected correlations, I started working with different meta-analysis programs. To check for any inconsistency, I ran a test analyzing the same small set of test data with different programs: Comprehensive Meta-Analysis by Biostat (CMA), the Hunter-Schmidt Meta-Analysis Programs 1.1, and an Excel sheet from M. Brannick using the Hunter & Schmidt formulas (http://luna.cas.usf.edu/~mbrannic/files/meta/metadir.htm).

The official Hunter-Schmidt program and the Excel sheet from M. Brannick produce the same results, which is good, because it means I do not have to work with the Hunter-Schmidt program, whose user interface I find rather basic and unwieldy. But the results of the Hunter-Schmidt program and the CMA program differ in quite an interesting way:

I took the Hunter & Schmidt individually corrected correlations from the M. Brannick Excel sheet and entered them into CMA to combine the effect sizes there. In CMA you can choose either the REM or the FEM to combine correlations. Using the Hunter-Schmidt individually corrected correlations in CMA gives me exactly the same result as the official Hunter-Schmidt program - but only when using the Fixed Effects Model in CMA. When using the REM to combine the corrected correlations, I get a smaller overall effect size than the Hunter-Schmidt program gives me. I draw the conclusion that Hunter & Schmidt (2004) is not really a REM. What do you think about that? Is that conclusion correct?

I really do not know which technique I should use now for my meta-analysis. On the one hand, I would like to use the REM in CMA with individually corrected Hunter-Schmidt correlations, because that seems the most logical approach to me. On the other hand, I want to publish results that are comparable to other meta-analyses in my field of research, and those mostly use the classical Hunter & Schmidt (2004) approach (except for one using CMA). Please help!

Thank you very much.

Anna



Replies to This Discussion

Interesting, Anna! Would you please post output from the models in question? It might make it easier to answer your query. -Blair

Please find attached the output files from the different programs I used for the analysis.

Short version of the results:

Hunter & Schmidt: r(uncorrected) = .302, r(corrected) = .342

Excel: r(uncorrected) = .302, r(corrected) = .342

CMA (Fixed Effects): r(uncorrected) = .308, r(corrected) = .343

CMA (Random Effects): r(uncorrected) = .355, r(corrected) = .362

Interestingly, the results of the programs differ considerably in places (e.g., in the standard deviations; see the output files), but the combined effect sizes are nevertheless quite similar. The problem is that with the meta-analysis software packages (CMA and Hunter-Schmidt) I just put raw data in and get the most important results out, but I have no information about the intermediate steps or the exact formulas that were applied. I can only check that in my Excel sheet, which gives me results that are quite similar to, but not exactly the same as, those from the programs (you can only see this in the more detailed output tables attached).

Note:

1. The data analyzed here is only a small part of my data set (8 out of 246 correlations)

2. The first two pages of the file 'CMA Data uncorrected and corrected' show the results when the raw correlations are entered; pages three and four show the results when using the Hunter & Schmidt corrected correlations (from the Excel sheet).

Attachments:

I found an explanation for the results and want to share it here just in case someone else is facing the same issue.

In fact, Hunter & Schmidt state that their model is a Random Effects Model, but some authors are critical of that claim - see p. 165 in Field (2001) and p. 493 in Hedges & Vevea (1998).

Apparently, the Hunter-Schmidt approach in effect assumes that tau squared (the estimate of the between-studies variance used in a REM) is zero, and may therefore underestimate the variance. Because this assumption is normally what defines a FEM, the Hunter & Schmidt approach leads to results similar to the FEM in CMA.
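
To spell this out in the usual inverse-variance notation (a generic sketch, not CMA's exact implementation): the fixed-effects weight for study i is w_i = 1 / v_i, where v_i is the within-study sampling variance, whereas the random-effects weight is w_i = 1 / (v_i + tau^2). If tau^2 is set to zero, the two weights coincide, which is why a procedure that effectively ignores tau^2 in its weighting reproduces fixed-effects results.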

In consequence, I do not consider Hunter & Schmidt to be a full Random Effects Model. I will use the Hunter-Schmidt approach to individually correct the correlations for attenuation due to unreliability, and then combine the corrected correlations in a true REM using CMA instead of Hunter & Schmidt. However, I will also compute the results of the original Hunter-Schmidt procedure and report them in the Appendix, to allow comparison of my results with those of other meta-analyses in this field.

Hedges, L. V., & Vevea, J. L. (1998). Fixed- and random-effects models in meta-analysis. Psychological Methods, 3, 486-504.

Field, A. P. (2001). Meta-analysis of correlation coefficients: A Monte Carlo comparison of fixed- and random-effects methods. Psychological Methods, 6, 161-180. doi:10.1037/1082-989X.6.2.161

 

(Below is a reply that Prof. Brannick, Professor at USF, emailed me:)

There is a difficult core to this question. What is the difference between fixed-effects and random-effects models in meta-analysis? The conventional statistical answer to the fixed- vs. random-effects question is that in a fixed-effects analysis, all the levels or treatments of the independent variable are included in the study, whereas in a random-effects analysis, only a sample of the levels is included. In a study of the effects of automobile commercials on consumer preferences, you would conduct a fixed-effects study if you were a car company trying to decide which of two commercials you have developed is more effective, so you can choose which to put on TV. If you were a researcher interested in the general question of how much car commercials influence attitudes, you would sample some commercials from the population of such advertisements and test those to see how much variability in attitudes is associated with commercials (and this would be a random-effects study). By analogy, a meta-analysis meant to describe only the studies I have in hand, or studies exactly like them, would call for a fixed-effects analysis. A sample of studies representing a larger population would call for a random-effects analysis. This is why most people prefer a random-effects model in meta-analysis.

A second way to answer the question is whether the effect sizes in the population are expected to vary or not.  Is there really only a single correlation between cognitive ability (g) and first year GPA in college, or are there many different values that depend upon the college in question?  The fixed-effects programs typically assume only a single underlying population value, but the random-effects programs typically estimate the variance of the distribution of underlying effect sizes.  In practice, authors rarely distinguish between these. However, if you read articles by Whitener (1990) and by Douglas Bonett (e.g., 2010), you can flesh out the distinctions. 
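
A standard way to write this distinction formally (the general model, not either program's specific formulas): each observed effect size is r_i = rho_i + e_i, where e_i is sampling error with variance v_i and the study-specific true effects are rho_i ~ N(mu, tau^2). A fixed-effects analysis sets tau^2 = 0, so there is a single underlying value; a random-effects analysis estimates tau^2 from the data and interprets mu as the mean of a distribution of true effects.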

Closer to your question, why do the Hunter-Schmidt and CMA results differ? The short answer is that the methods used to compute the average correlation and the variance between studies are different. However, both are what I would call random-effects methods, because both attempt to generalize to a superpopulation of studies, and both estimate the variance of an underlying distribution of effect sizes from which the group you have in hand is assumed to be a random sample. In other words, they are both trying to do the same thing, but their calculations differ. A more technical discussion would get into the math and the weights, but this is not the place for that. You would need to read the Hunter & Schmidt (2004) book and the article by Hedges & Vevea (1998). You can find a fuller nontechnical discussion in Kepes et al. (cited below).
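
As a rough illustration of "same goal, different calculations" (a generic sketch with hypothetical function names, not the exact computations of either program, which may, among other things, work with Fisher's z and different variance estimators): the Hunter-Schmidt "bare-bones" mean weights each correlation by its sample size, while a Hedges-type random-effects mean weights by the inverse of the within-study variance plus tau^2, with tau^2 estimated here by the DerSimonian-Laird method.

    # Generic sketch only; helper names are hypothetical, not from either program.
    def hs_bare_bones_mean(r, n):
        # Hunter-Schmidt: weight each observed correlation by its sample size N.
        return sum(ri * ni for ri, ni in zip(r, n)) / sum(n)

    def inverse_variance_re_mean(y, v):
        # Fixed-effect (inverse-variance) weights and pooled mean.
        w = [1.0 / vi for vi in v]
        mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        # Heterogeneity statistic Q and DerSimonian-Laird estimate of tau^2.
        q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)
        # Random-effects weights add tau^2 to every within-study variance.
        w_re = [1.0 / (vi + tau2) for vi in v]
        return sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

When the studies are homogeneous, tau^2 is estimated near zero and the random-effects mean stays close to the fixed-effects mean; with heterogeneous data the two can diverge.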

In your case there is an additional complication introduced by correcting for unreliability of measures. When you do this, you not only increase the effect size in absolute value, but you also increase the uncertainty about its actual value. If you analyze the corrected or disattenuated effect sizes in CMA without accounting for the extra uncertainty, you will get a misleading result. Therefore, in my opinion, either use the Hunter and Schmidt approach, or, if you want to use CMA, consult the book by Borenstein, Hedges, Higgins, & Rothstein (2009), Introduction to Meta-Analysis, and follow their instructions for CMA.
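
To make the point about extra uncertainty concrete (a simplified sketch that treats the reliabilities as known constants, which is itself an approximation): if a correlation is corrected as r_c = r / A, where A = sqrt(alpha_x * alpha_y) is the attenuation factor, then its sampling variance is inflated to roughly var(r_c) = var(r) / A^2. Entering corrected correlations into software as if they were ordinary observed correlations, with variances based only on sample size, ignores this inflation and understates the uncertainty of the pooled estimate.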

Bonett, D. G. (2010). Varying coefficient meta-analytic methods for alpha reliability. Psychological Methods, 15(4), 368-385.

Kepes, S., McDaniel, M. A., Brannick, M. T., & Banks, G. C. (2013). Meta-analytic reviews in the organizational sciences: Two meta-analytic schools on the way to MARS (the Meta-Analytic Reporting Standards). Journal of Business and Psychology, 28, 123-143.

