Tools for Those Who Summarize the Evidence Base
Resources and networking for those who conduct or interpret meta-analyses related to any phenomenon that is gauged in multiple studies.
I have some meta-analytic data that I would like to analyze using multilevel techniques. In particular, teams are my focal level of analysis: I have individuals nested within these teams and some higher-level influences as well (e.g., influences of external leaders). Specifically, I'd like to be able to look at the influence of team functions (e.g., leadership, team processes) on both team performance and individual outcomes.
Does anyone know of a macro for SPSS, or a way to work through it in HLM? Has anyone tried this type of analysis?
Thanks,
Lauren
Dear Lauren,
While a fixed-effects meta-analysis can be regarded as a one-level meta-analysis (there is only one source of variation: sampling variation, i.e., variation within studies due to sampling participants rather than studying the whole population), a traditional random-effects (or mixed-effects) meta-analysis can be regarded as a two-level meta-analysis, with variation at two levels: sampling variation within studies and, on top of that, 'systematic' variation between studies. The random-effects model indeed includes two residuals: the deviation of the observed effect size from the population effect size for that study, and the deviation of the population effect size in that study from the mean population effect size. The latter residuals are also called the study random effects.
In a three-level analysis accounting for teams, we include another kind of random effect: the observed effect size deviates from the population effect for its study, that study effect deviates from the population effect for its team, and the team effect deviates from the overall effect.
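In symbols (my own notation, not from the thread, with k indexing teams and j indexing studies within teams), the three-level model just described can be sketched as:

```latex
d_{jk} = \gamma_0 + v_k + u_{jk} + e_{jk}, \qquad
v_k \sim N(0, \sigma^2_v), \quad
u_{jk} \sim N(0, \sigma^2_u), \quad
e_{jk} \sim N(0, \sigma^2_{e_{jk}})
```

where d_{jk} is the observed effect size, gamma_0 the overall mean effect, sigma^2_v the between-team variance, sigma^2_u the between-study (within-team) variance, and sigma^2_{e_{jk}} the (known) sampling variance of each effect size.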
I use MLwiN or SAS to perform such three-level meta-analyses, because both programs allow you to constrain the sampling variance to the values you estimated beforehand (which, as far as I know, is not possible in SPSS!) and to define more than one kind of random effect. This is also true for HLM, but I do not really have experience with HLM.
Here is the SAS code (the effect size variable is called 'ES', the study and team indicators are called 'study' and 'team', I also used a moderator called 'x', and the dataset includes a variable called 'precision', which is equal to the inverse of the sampling variance):
proc mixed data= <name dataset> ;
  class study team;
  weight precision; /*weight = inverse of the sampling variance*/
  model ES = x / solution;
  random intercept / sub = study(team); /*between-study (within-team) variation*/
  random intercept / sub = team; /*between-team variation*/
  parms (1) (1) (1) / hold = 3; /*hold the residual variance at 1, so the residual variance for each effect size equals its sampling variance*/
run;
good luck,
Wim
(Wim.vandennoortgate@kuleuven-kortrijk.be)
Thank you, Wim! Another question for you: What is the current best source discussing such analyses? -Blair
Dear Blair and Lauren,
I am sorry for my late reaction.
Excellent introductions to multilevel modeling that all include a discussion of multilevel meta-analysis (two-level meta-analysis, but extensions to three-level meta-analysis are straightforward) are:
Hox, J. (2002). Multilevel analysis. Techniques and applications. Mahwah, NJ: Erlbaum.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Snijders, T. A. B. & Bosker, R. J. (1999). Multilevel analysis. An introduction to basic and advanced multilevel modeling. London: Sage.
I have a couple of manuscripts regarding three-level meta-analyses in the pipeline, and hope they will be published soon.
As an example of an application of the three-level meta-analysis, see:
Van den Bussche, E., Van den Noortgate, W., & Reynvoet, B. (2009). Mechanisms of masked priming: A meta-analysis. Psychological Bulletin, 135, 452-477.
I hope this helps,
Wim
Dear all,
As a follow-up to Wim's suggestions, I would recommend Konstantopoulos (2011), which provides a detailed explanation of three-level meta-analysis.
I have written an R package to conduct meta-analyses. One function, meta3(), conducts three-level meta-analysis with maximum likelihood estimation. For example, the following code conducts a three-level meta-analysis with and without a predictor.
## Load the library
library(metaSEM)
## 3-level meta-analysis
summary(meta3(y = y, v = v, cluster = District, data = Cooper03))
## 3-level meta-analysis with "Year" as a predictor
summary(meta3(y = y, v = v, cluster = District, x = Year, data = Cooper03))
Example on 3-level meta-analysis: https://dl.dropbox.com/u/25182759/3level.html
Introductory paper on the metaSEM package: https://dl.dropbox.com/u/25182759/metaSEM.pdf
Source package: https://dl.dropbox.com/u/25182759/metaSEM_0.7-1.tar.gz
Windows binary package: https://dl.dropbox.com/u/25182759/metaSEM_0.7-1.zip
Since it is still being developed, comments and suggestions are highly appreciated.
Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods, 2(1), 61–76. doi:10.1002/jrsm.35
Regards,
Mike
Thank you, Wim and Mike. These are great suggestions and resources. I appreciate your help.
Best,
Lauren
Apropos of this discussion, an article that Wim led:
Abstract
Although dependence in effect sizes is ubiquitous, commonly used meta-analytic methods assume independent effect sizes. We describe and illustrate three-level extensions of a mixed effects meta-analytic model that accounts for various sources of dependence within and across studies, because multilevel extensions of meta-analytic models still are not well known. We also present a three-level model for the common case where, within studies, multiple effect sizes are calculated using the same sample. Whereas this approach is relatively simple and does not require imputing values for the unknown sampling covariances, it has hardly been used, and its performance has not been empirically investigated. Therefore, we set up a simulation study, showing that also in this situation, a three-level approach yields valid results: Estimates of the treatment effects and the corresponding standard errors are unbiased.
Building on the references Wim, Mike, and Blair recommended, I'll take this opportunity to suggest how a person might find relevant methodological work about either using multilevel models for meta-analysis or meta-analyzing multilevel data -- two related but distinct topics. Finding this work can be challenging even for veteran meta-analysts, for several reasons.
At any rate, here's one multiple-step strategy we might use to find work that combines meta-analysis and multilevel models or data. We might start by checking my nascent CiteULike library on methodology for research synthesis (Meth4ReSyn):
http://www.citeulike.org/user/Meth4ReSyn
To search this library we could select 'Search' from the "Meth4ReSyn's CiteULike" menu or go to this URL:
http://www.citeulike.org/search/user/Meth4ReSyn
A basic search of this library for the term "multilevel" turns up three items, of which only one seems relevant after inspecting their titles and abstracts. (We could also search for this or other terms in specific fields such as the title or abstract -- click the 'Help' link for syntax tips.) Alternatively, we could use this library's rudimentary controlled vocabulary by checking for items with relevant tags; here's the library's tag cloud (also accessible by selecting 'Tags' from the "Meth4ReSyn's CiteULike" menu):
http://www.citeulike.org/user/Meth4ReSyn/tags
This list of tags includes 'multilevel_model,' which seems pertinent, and clicking that yields a list of six items displayed at this URL:
http://www.citeulike.org/user/Meth4ReSyn/tag/multilevel_model
Why did this yield more items than our search for "multilevel"? Mainly because the tag system identifies items that address a particular topic even if their authors used different terminology (e.g., "hierarchical [linear] model"). Although most of these six items seem relevant, a few address topics that don't pertain directly to situations like the one Lauren posted (e.g., longitudinal rather than simply clustered data, Bayesian hierarchical models). This potential source of confusion might be reduced by a more sophisticated controlled vocabulary (e.g., a taxonomy or thesaurus), which I'm planning for this library; for now, we'd need to choose the most relevant items based on their titles, abstracts, or full text.
What to do now? One option is to read the few relevant items we've found to get a sense of the key concepts, procedures, and issues and perhaps identify additional terminology to use in future searches. Another option is to conduct more searches right now.
(Aside: Wouldn't it be cool if we could use the Meth4ReSyn library to immediately find items that cite or are cited by the relevant ones we've already found [i.e., citation network]? Although that's possible to implement using CiteULike's CiTO feature, it's a hugely time-consuming task that's low on my priority list.)
One source for more searches is the larger public version of my bibliography -- an Excel file provided as online supporting information with the most recent published installment of the 'Article Alerts' feature section that I edit:
Hafdahl, A. R. (2011). Article Alerts: Items from 2010, Part II. Research Synthesis Methods, 2, 279-286. doi:10.1002/jrsm.56
(A more recent installment with about 1,200 more items is in press.)
This Excel file is clumsier to use and includes less metadata than the Meth4ReSyn library in CiteULike (e.g., no abstracts, fewer DOI names, far fewer tags/keywords). Nevertheless, searching for terms such as "multilevel" and "hierarchical" -- mostly in the title -- yields several more promising items. (I'm gradually transferring this larger version's items to the Meth4ReSyn library, but it's a time-consuming process that involves adding metadata, creating "trusted"/"authenticated" entries for CiteULike, and other tasks.)
At this point we might want to use the items we've collected as a basis for conducting searches in other databases. For instance, we could search for work by the same authors, use search terms we've identified, or search for citations of or by the most relevant items we've collected. These strategies all have strengths and limitations. For example, searching for citations of a target methodological article -- such as with Google Scholar or Web of Science -- can be a great way to find related methodological work, but it can also entail sifting through scores of substantive applications that use the method in question; that said, sometimes substantive examples can be useful, provided their authors didn't misunderstand, misuse, or misreport the analyses or results.
Depending on our interests, we might also want to search for work on other methodology topics related to meta-analysis and multilevel models or data. Here are some that come to mind:
* multi-site or multi-center (or -centre) studies/trials, whose data often have a structure similar to that of meta-analytic data and can be analyzed using the same or similar techniques (e.g., sites play the same role as studies)
* group- or cluster-randomized studies (e.g., place-based randomization); how these relate to meta-analysis depends partly on the level(s) at which data are available and how clustering has been handled in producing those data
* dependent effect sizes, and in particular hierarchical dependence whereby a study includes two or more interchangeable effect sizes (e.g., effect sizes from different samples that might differ more or less than those from different studies)
* mixed models, such as general(ized) linear mixed models; this is related but not equivalent to conventional "mixed-effects" meta-analysis models (e.g., meta-regression with random effects), and it's distinct from other uses of "mixed" in methodological contexts (e.g., mixed between- and within-subjects factorial designs, mixed-methods research)
* Bayesian hierarchical (linear) models, which can be used readily for meta-analysis of multilevel data but involve additional complications (beyond frequentist approaches) such as specifying prior distributions on the highest level's parameters and implementing computations to obtain posterior distributions (e.g., Markov chain Monte Carlo [MCMC])
Finally, there's the tricky problem of deciding how accurate, credible, comprehensive, or otherwise valuable any given piece of methodological work is. We shouldn't believe everything we read, even when it's written by a well-known methodologist and published in a "good" peer-reviewed journal. That's an issue I won't try to address here.
Dear Wim,
I would like to use the code you mentioned above, but I am not sure whether my data are suitable for the analysis. Could you provide an example file that I can practice with?
Hi James,
Below is code that simulates a three-level meta-analytic dataset. Observed effect sizes are simulated with an expected value of .50, a between-team variance of .05, a between-study (within-team) variance of .15, and a sampling variance of .10 (of course, in reality studies typically vary in size, so the sampling variance also differs between studies).
If you do the analysis for the simulated data, you should get values close to these chosen parameter values.
good luck!
/*simulating random team effects*/
data team;
  do team = 1 to 50;
    v = normal(0);
    output;
  end;
run;
/*simulating random study effects*/
data study;
  do team = 1 to 50;
    do study = 1 to 3;
      u = normal(0);
      output;
    end;
  end;
run;
/*simulating sampling residuals*/
data sample;
  do team = 1 to 50;
    do study = 1 to 3;
      r = normal(0);
      output;
    end;
  end;
run;
data all;
  merge team study;
  by team;
run;
data all;
  merge all sample;
  by team study;
  v = v*sqrt(.05); /*assuming a between-team variance of .05*/
  u = u*sqrt(.15); /*assuming a between-study variance of .15*/
  r = r*sqrt(.10); /*assuming two groups of size 20; the sampling variance of the SMD is roughly 4/N*/
  d = .50 + v + u + r; /*calculating the observed SMDs with an expected value of .50*/
  precision = 10; /*the inverse of the sampling variance*/
  keep team study d precision; /*dropping the simulated residuals, which you also do not have in reality*/
run;
/*performing the three-level meta-analysis*/
proc mixed data= all ;
  class study team;
  weight precision; /*weight = inverse of the sampling variance*/
  model d = / solution;
  random intercept / sub = team; /*between-team variation*/
  random intercept / sub = study(team); /*between-study (within-team) variation*/
  parms (1) (1) (1) / hold = 3; /*hold the residual variance at 1*/
run;
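For readers without SAS, here is a rough Python sketch of the same simulation (my own translation, not Wim's code), using only NumPy; the variable names v, u, r, and d mirror the SAS code above. With 50 teams of 3 studies each, the simple mean of the simulated effects should land near the true value of .50, and their empirical variance near the sum of the three variance components (.05 + .15 + .10 = .30).

```python
import numpy as np

rng = np.random.default_rng(42)

n_teams, n_studies = 50, 3                       # 50 teams, 3 studies each
var_team, var_study, var_sampling = 0.05, 0.15, 0.10
true_mean = 0.50

# Simulate observed SMDs: d = overall mean + team effect + study effect + sampling error
v = rng.normal(0, np.sqrt(var_team), size=n_teams)                # between-team
u = rng.normal(0, np.sqrt(var_study), size=(n_teams, n_studies))  # between-study
r = rng.normal(0, np.sqrt(var_sampling), size=(n_teams, n_studies))
d = true_mean + v[:, None] + u + r

# With equal sampling variances, the precision-weighted grand mean reduces to
# the simple mean; it should be close to the chosen true value of .50.
est = d.mean()
print("estimated mean:", round(est, 3))

# The empirical variance of the observed effects should be close to the sum
# of the three variance components (.05 + .15 + .10 = .30).
print("empirical variance:", round(d.var(ddof=1), 3))
```

This only checks the simulated data; the actual variance-component estimation is what PROC MIXED (or metaSEM's meta3() in R) does above.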