aversive

An automated meta-analysis of 238 studies

Studies associated with aversive

[Interactive table listing the title, authors, and journal of each study included in this meta-analysis.]

Term-based meta-analyses: Frequently Asked Questions

This page displays information for an automated Neurosynth meta-analysis of the term aversive. The meta-analysis was performed by automatically identifying all studies in the Neurosynth database that loaded highly on the term, and then performing meta-analyses to identify brain regions that were consistently or preferentially reported in the tables of those studies.

What do the "uniformity test" and "association test" maps mean?

For a detailed explanation, please see Yarkoni et al. (2011). In brief, the uniformity test map displays brain regions that are consistently active in studies that load highly on the term aversive. Voxels with large z-scores are reported more often in studies whose abstracts use the term aversive than one would expect if activation everywhere in the brain were equally likely. Note that this is typically not very interesting in itself, because some brain regions are consistently reported across many different kinds of studies (again, see our paper). As a general rule of thumb, we therefore don't recommend paying much attention to uniformity test maps.

Association test maps are, roughly, maps displaying brain regions that are preferentially related to the term aversive. The association test map for aversive displays voxels that are reported more often in articles that include the term aversive in their abstracts than in articles that do not. Most of the time this is a more useful way of thinking about things, since association test maps tell you whether there is a non-zero association between activation of a given voxel and the use of a particular term in a study.
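To make the two tests concrete, here is a minimal sketch (not Neurosynth's actual implementation) of how each could be computed at a single voxel with SciPy, assuming NumPy boolean arrays marking which studies report activation at the voxel and which studies use the term in their abstracts; the 1% uniform-activation baseline is an illustrative assumption, not the value used on this site:

    import numpy as np
    from scipy import stats

    def uniformity_test_z(active_in_term_studies, baseline=0.01):
        # One-sided binomial test: is the voxel reported in term studies
        # more often than a uniform-activation baseline would predict?
        k = int(np.sum(active_in_term_studies))
        n = len(active_in_term_studies)
        p = stats.binomtest(k, n, baseline, alternative='greater').pvalue
        return stats.norm.isf(p)  # convert the p-value to a z-score

    def association_test_z(active, uses_term):
        # Two-way contingency test: is activation at the voxel more frequent
        # in studies whose abstracts use the term than in those that do not?
        table = [[np.sum(active & uses_term), np.sum(~active & uses_term)],
                 [np.sum(active & ~uses_term), np.sum(~active & ~uses_term)]]
        chi2, p, _, _ = stats.chi2_contingency(table)
        return stats.norm.isf(p / 2)  # two-sided p to an unsigned z-score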

Note that these two maps were previously referred to as forward and reverse inference maps, respectively, which was a misnomer. We don't show true forward and reverse inference maps on the website (for reasons explained in the FAQs), but you can generate them with the Python core tools.

How do you determine which studies to include in an analysis?

For all term-based meta-analyses visible on this website, we consider a study to load on a given term if the term is used at least once anywhere in the article's abstract. We have tried various other modeling approaches in the past (e.g., raising the cut-off, using continuous-valued weights, and using the full text of articles rather than just the abstract), but within a fairly broad range of parameter variation the effect on the results is surprisingly small.
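As an illustration, this inclusion rule amounts to a simple filter over abstracts. The sketch below assumes a list of study records with an 'abstract' field; the record layout and example data are hypothetical, not the actual database schema:

    import re

    def loads_on_term(abstract, term):
        # A study loads on the term if the term occurs at least once
        # anywhere in the abstract (case-insensitive whole-word match).
        return re.search(r'\b' + re.escape(term) + r'\b',
                         abstract.lower()) is not None

    # Hypothetical study records for illustration only.
    studies = [
        {'id': 1, 'abstract': 'We examined aversive conditioning using fMRI.'},
        {'id': 2, 'abstract': 'A study of verbal working memory.'},
    ]

    selected = [s for s in studies if loads_on_term(s['abstract'], 'aversive')]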

Are these maps corrected for multiple comparisons?

Yes, they're corrected using a false discovery rate (FDR) approach, with an expected FDR of 0.01.
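For reference, here is a minimal sketch of a Benjamini-Hochberg FDR threshold at q = 0.01 applied to a vector of voxelwise p-values; this illustrates the general procedure, not Neurosynth's exact code:

    import numpy as np

    def fdr_threshold(p_values, q=0.01):
        # Benjamini-Hochberg step-up: find the largest p-value satisfying
        # p(i) <= q * i / m; all p-values at or below it are significant.
        p = np.sort(np.asarray(p_values))
        m = p.size
        below = p <= q * np.arange(1, m + 1) / m
        return p[below].max() if below.any() else None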

I need more details! How exactly were these maps and data generated?

If you want to know exactly how things work, we encourage you to clone the Neurosynth Python tools from the GitHub repository and work through the examples and code provided in the package. Everything you see on this page was generated using the default processing stream, so you should be able to generate exactly the same images yourself (unless the underlying database has grown or changed in the meantime).
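For example, a term-based meta-analysis like the one on this page can be reproduced along roughly these lines. The method names and data file paths below follow the repository's README examples; the API may have changed in current versions, so treat this as a sketch and consult the repository documentation:

    from neurosynth.base.dataset import Dataset
    from neurosynth.analysis import meta

    # Build the dataset from the activation database and the per-study
    # term features (file names follow the README examples).
    dataset = Dataset('data/database.txt')
    dataset.add_features('data/features.txt')

    # Select studies that load on the term, run the default meta-analysis,
    # and write out the resulting uniformity and association test images.
    ids = dataset.get_studies(features='aversive')
    ma = meta.MetaAnalysis(dataset, ids)
    ma.save_results('aversive_maps')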