Topic 43

An automated meta-analysis of 624 studies

Top-loading terms for this topic:

emotional, neutral, processing, pictures, affective, emotion, valence, arousal, negative, aversive, unpleasant, emotionally, picture, positive, pleasant, images, content, enhanced, ratings, viewing, emotions, viewed, participants, valenced, presentation, behavioral, scenes, female, mood, visual, arousing, perception, system, evaluation, reactivity, implicated, half, stimulus, rated, elicited

Meta-analytic tests of uniformity and association


Studies associated with Topic 43

(Table of associated studies, with columns: Title, Authors, Journal, Loading.)

Topic-based meta-analyses: Frequently Asked Questions

What is a "topic" in Neurosynth?

Topics reflect an effort to move beyond individual term-based analyses by modeling the covariance between different terms in article abstracts. Instead of generating separate maps for, say, "working memory" and "cognitive control", a topic modeling approach might extract a single topic that assigns a large weight to each of these closely related terms (as well as many others). You can think of a Neurosynth topic as a cluster of semantically related words that tend to occur together in article abstracts. A meta-analysis is then performed to identify the neural correlates of each topic by searching for brain regions that are consistently more likely to be reported as active in articles that load highly on the topic than in articles that do not. For further details, please see Poldrack et al. (2012).
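As a loose illustration of the idea (not the actual Neurosynth pipeline), the sketch below fits an LDA topic model to a few placeholder abstracts with scikit-learn; the corpus, topic count, and variable names are all assumptions made for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus: in Neurosynth, one entry per article abstract
abstracts = [
    "emotional pictures elicited enhanced amygdala activation during viewing",
    "working memory load modulated dorsolateral prefrontal activity",
    "cognitive control demands recruited anterior cingulate cortex",
]

# Bag-of-words term counts for each abstract
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

# Fit a topic model; the number of topics here is an arbitrary placeholder
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic_loadings = lda.fit_transform(counts)  # rows: studies, cols: topics

# Inspect the top-loading terms for each topic (cf. the term list above)
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(topic_idx, top)
```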

What do the "uniformity test" and "association test" maps mean?

For a detailed explanation, please see our Nature Methods paper. In brief, the association test map (formerly known as the reverse inference map, though it should never have been labeled as such) displays brain regions that show a statistically significant association with the topic in question. A positive value at a given voxel means that studies tagged with Topic 43 are more likely than other studies to report activation at that voxel; a negative value means that studies tagged with Topic 43 are _less_ likely to report activation at that voxel. The underlying statistical test is a simple two-way chi-square test on a 2 x 2 contingency table of topic presence (mentioned or not mentioned in a study) crossed with voxel activation (active or not active in a study). The values displayed are z-scores corresponding to the p-values from the chi-square test. Note that the association test maps license no probabilistic statement: one cannot conclude, for instance, that if activity is observed at a voxel with a particularly large z-score, the topic in question must be very likely to be engaged in that study. The latter is a claim about effect sizes that requires inspection of probability maps (which are not available on this website, but can be generated using the Python code available here).
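To make the mechanics concrete, here is a hedged sketch of that test at a single voxel using SciPy; the counts in the table are invented for illustration, and the sign convention is our assumption about how a direction would be attached to the z-score.

```python
import numpy as np
from scipy import stats

# Hypothetical 2 x 2 contingency table at one voxel:
#                  voxel active   voxel inactive
# topic present         40              80
# topic absent          60             444
table = np.array([[40, 80],
                  [60, 444]])

chi2, p, dof, expected = stats.chi2_contingency(table)

# Convert the two-tailed p-value to a |z|, then sign it by whether
# topic-present studies activate the voxel more often than expected
# under independence
z = stats.norm.isf(p / 2)
sign = np.sign(table[0, 0] - expected[0, 0])
print(sign * z)
```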

The uniformity test map (formerly known as the forward inference map) displays brain regions that are activated more consistently for Topic 43 than one would expect if activation were distributed uniformly throughout the brain. One can think of this as a consistency test: are there brain regions where activation for this topic tends to cluster, relative to a null of no spatial structure at all? In practice this map is typically not very informative, because some brain regions are consistently reported across many different kinds of studies (again, see our paper). As a general rule of thumb, we don't recommend paying much attention to uniformity test maps.
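For intuition only, the toy example below approximates the uniformity logic at a single voxel with a binomial test against a uniform null; the actual Neurosynth implementation differs, and every number here is a made-up placeholder.

```python
from scipy import stats

n_topic_studies = 120   # studies loading on the topic (placeholder)
k_active = 30           # how many of them report this voxel (placeholder)
p_uniform = 0.01        # activation rate expected under a uniform null,
                        # e.g. reported foci per study / voxels in the brain

# Is this voxel reported more often than the uniform null predicts?
res = stats.binomtest(k_active, n_topic_studies, p_uniform,
                      alternative="greater")
print(res.pvalue)
```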

How do you determine which studies to include in an analysis?

We use a predefined binary cut-off: for all topic-based meta-analyses, studies with a loading greater than 0.05 on a given topic are treated as "active" for that topic, and all other studies as inactive. Although the choice of threshold is somewhat arbitrary, in practice, varying it within a fairly broad range of values has minimal influence on the results, and adopting a continuous approach instead of dichotomizing the dataset also has a negligible effect.
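The binarization rule itself is a one-liner; the loadings array below is a placeholder standing in for one loading per study.

```python
import numpy as np

loadings = np.array([0.001, 0.12, 0.04, 0.30, 0.06])  # one value per study
active = loadings > 0.05  # the cut-off described above
print(active)  # [False  True False  True  True]
```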

Are these maps corrected for multiple comparisons?

Yes and no. We use a false discovery rate (FDR) approach to correct for multiple comparisons, with an expected FDR of 0.01. In other words, all of the values you see in these maps have survived correction in the sense that the nominal false positive rate is controlled over the long run. However, note that the values themselves have not been transformed: you are seeing only the voxels that survive the correction, but the z-scores displayed still correspond to the original, uncorrected p-values.
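As a sketch of what that means in practice, assuming a standard Benjamini-Hochberg procedure (our assumption; the page does not specify the exact FDR variant): voxels whose p-values survive at q = 0.01 keep their original, uncorrected z-scores, and everything else is masked out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z_scores = rng.normal(size=10000)               # one z-score per voxel
p_values = 2 * stats.norm.sf(np.abs(z_scores))  # two-tailed p-values

def bh_fdr_mask(p, q=0.01):
    """Boolean mask of p-values passing Benjamini-Hochberg FDR at level q."""
    order = np.argsort(p)
    ranked = p[order]
    n = len(p)
    passed = ranked <= q * np.arange(1, n + 1) / n
    if not passed.any():
        return np.zeros(n, dtype=bool)
    cutoff = ranked[np.nonzero(passed)[0].max()]
    return p <= cutoff

mask = bh_fdr_mask(p_values, q=0.01)
thresholded = np.where(mask, z_scores, 0.0)  # survivors keep uncorrected z
```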

If you want to know exactly how things work, we encourage you to clone the Neurosynth Python tools from the GitHub repository and work through the examples and code provided in the package. Everything you see on this page was generated using the default processing stream, so you should be able to generate exactly the same images yourself (unless the underlying database has grown or changed in the meantime).