Hi:

By using an atlas with 130 scouts, I calculated the Z-score of each vertex in the scouts. If I want to run a t-test or permutation test between two subject groups, I choose the option "Use scouts", and there is a box for the scout function. I know the usual choice is "mean". Is it possible that the mean of the Z-scores of all vertices in a scout serves as the overall Z-score of that scout? In other words, if I want to calculate a single amplitude value for a scout and use this value in a t-test between subject groups, what should I do with the amplitude of each vertex, and how can I get this done? Obviously, Z-scores are much better than current density values for t-tests/permutation tests. This comes back to the original question: the Z-score of each vertex versus the Z-score of a scout. I think it may be useful to call the function "sources > downsample to atlas" to get a single value for the scout and then use this value for the between-group comparison. But I don't know how this function does the downsampling. Mean? Max? Or PCA? There is no choice on the panel of "downsample to atlas".

In addition, after I run the t-test with the "mean" scout function, how can I know which group has larger values than the other? The t-test results in 130 p-values (one per scout). However, if I use the "Difference: (A-B)" pipeline, I get a table of 15000 values for the difference. This goes back to the original question: values per vertex versus values per scout.

I searched on this community and did not find an exact answer.

Is it possible that the mean of the Z-scores of all vertices in a scout serves as the overall Z-score of that scout?

It simply computes the arithmetic mean of whatever values are available for all the vertices of the scout.
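As an illustration, here is a minimal Python sketch of what the "mean" scout function computes, with hypothetical vertex values (this is not Brainstorm's code, just the arithmetic it performs):

```python
import numpy as np

# Hypothetical Z-scores for the 5 vertices of one scout
vertex_z = np.array([2.1, 1.8, 2.5, 1.9, 2.2])

# The "mean" scout function is simply the arithmetic mean across vertices
scout_value = vertex_z.mean()  # the scout's single value (~2.1 here)
```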

if I want to calculate a single amplitude value for a scout and use this value in a t-test between subject groups, what should I do with the amplitude of each vertex, and how can I get this done?

You simply check the "Use scouts" option in the statistics process of your choice, and select the scouts of interest.

Obviously, Z-scores are much better than current density values for t-tests/permutation tests.

Indeed, for comparing between subjects, it is advised to normalize the amplitudes for each subject first:

https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Statistics:_Group_analysis.2C_between_subjects-1

This might not be necessary when using dSPM.

I think it may be useful to call the function "sources > downsample to atlas" to get a single value for the scout and then use this value for the between-group comparison. But I don't know how this function does the downsampling. Mean? Max? Or PCA? There is no choice on the panel of "downsample to atlas".

The process "downsample to atlas" uses the function defined for each scout, i.e. the function you can see and edit in the Scout tab when the surface is displayed.

It is not really recommended to use this process in your context: the statistics processes already do this on the fly, with the additional possibility of redefining the scout function directly in the process options.

In addition, after I run the t-test with the "mean" scout function, how can I know which group has larger values than the other? The t-test results in 130 p-values (one per scout). However, if I use the "Difference: (A-B)" pipeline, I get a table of 15000 values for the difference. This goes back to the original question: values per vertex versus values per scout.

I'm not sure I understand how you obtain different numbers of values.

In general, both approaches are possible: 1) computing the 15000 tests for each source independently and then reporting the significant effects that overlap your ROI, or 2) averaging the source signals by ROI and then testing. The first approach will lead to source maps that you can display on the cortex surface but with a lot of multiple comparisons to correct for, the second approach leads to results that you can't display on a brain, but possibly with more significant effects (no need to correct for multiple comparisons with only one ROI).
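A minimal Python sketch of approach 2, with made-up group sizes and values and a hand-rolled permutation test (Brainstorm performs the equivalent internally when "Use scouts" is checked; none of these numbers come from real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject Z-scores for the 82 vertices of one ROI
group_a = rng.normal(1.0, 0.5, size=(12, 82))  # 12 subjects in group A
group_b = rng.normal(0.5, 0.5, size=(12, 82))  # 12 subjects in group B

# Step 1: average across vertices -> one value per subject for this ROI
a = group_a.mean(axis=1)
b = group_b.mean(axis=1)

# Step 2: permutation test on the difference of group means
observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])
n_perm = 5000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:len(a)].mean() - perm[len(a):].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = (count + 1) / (n_perm + 1)
```

The sign of `observed` also answers the "which group is larger" question: a positive value means group A has the larger mean for that ROI.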

Thank you for your kind answer. When I use the permutation t-test, what is the meaning of the option "exclude the zero values from computation"? If I average the source signals over an ROI, does Brainstorm put the mean value on the seed and set the other vertices in this ROI to zero?

In brief, if I use the following pipeline, how should I choose the two options on the permutation t-test panel, namely "average selected time window" and "exclude the zero values from computation"?

My pipeline:

average the 0-120 ms time window at the source level; now we have an ROI with 82 vertices

--> downsample to atlas, and now the 82 values of the ROI are shrunk to a single mean value for this scout --> using the baseline window -200 to 0 ms, we calculate the Z-score of the aforementioned mean value --> we use these Z-scores for the group analysis between subjects.

When I use the permutation t-test, what is the meaning of the option "exclude the zero values from computation"?

When this option is selected, the values that are strictly zero are considered bad and are not averaged together with the other input values. On source maps, all the values are typically non-zero and there are no "bad signals", so this option would have no effect.
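A tiny sketch of what the option does, with hypothetical numbers:

```python
import numpy as np

values = np.array([0.8, 0.0, 1.2, 0.0, 1.0])

# Option off: zeros are included in the average
mean_all = values.mean()                   # (0.8 + 0 + 1.2 + 0 + 1.0) / 5 = 0.6

# Option on: strictly-zero values are treated as bad and dropped
mean_nonzero = values[values != 0].mean()  # (0.8 + 1.2 + 1.0) / 3 = 1.0
```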

average the 0-120 ms time window at the source level; now we have an ROI with 82 vertices

--> downsample to atlas, and now the 82 values of the ROI are shrunk to a single mean value for this scout --> using the baseline window -200 to 0 ms, we calculate the Z-score of the aforementioned mean value

I'm not sure I completely understand what you do here, but I think I might organize this differently:

- Normalize with a Z-score
- Extract scouts time series
- Compute the average over your time window of interest
- Non-parametric t-test.

Steps #2 and #3 can be done within the options of step #4, no need to create new files in the database for this.
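The four steps above can be sketched as follows for a single subject and a single scout. The signal, sampling rate, and windows are all hypothetical (simulated data, not Brainstorm output); `value` is the one number per subject that step #4 would then compare across groups:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scout time series: 350 samples from -200 ms to +149 ms at 1000 Hz
time = np.arange(-200, 150) / 1000.0  # time axis in seconds
signal = rng.normal(0.0, 1.0, time.size)
signal[time >= 0] += 3.0  # simulated evoked response after stimulus onset

# Step 1: Z-score normalization against the baseline (-200 to 0 ms)
baseline = signal[time < 0]
z = (signal - baseline.mean()) / baseline.std(ddof=1)

# Steps 2-3: extract the scout series and average over the 0-120 ms window
window = (time >= 0) & (time <= 0.120)
value = z[window].mean()  # one value per subject, fed into the t-test (step 4)
```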

Thank you for your kind answer. Here you mentioned computing the average over my time window of interest after getting the Z-scores. However, in the tutorials, there is a recommendation to "Avoid averaging normalized maps (or computing any additional statistics)" (https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation). I cannot reconcile these two statements.

The recommendation is about averaging multiple files together.

Averaging your normalized results over a short time window is OK.

A summary of all the recommendations is available here:

https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows

Thanks a lot!