[Mne_analysis] permutation clustering test using TFCE

Laetitia Grabot laetitia.grabot at gmail.com
Mon Aug 4 10:42:07 EDT 2014

>
> >> Thanks Laetitia,
> >>
> >>
> >> On Thu, Jul 31, 2014 at 10:03 AM, Laetitia Grabot <
> >> laetitia.grabot at gmail.com>
> >> wrote:
> >> >
> >> > Hi Denis,
> >> >
> >> > I tried the spatio-temporal clustering with TFCE
> >> > (spatio_temporal_cluster_1samp_test) on alpha power data (epoch length:
> >> > 2 s; decimation = 4, so 501 time points) with n_jobs = 6 and the default
> >> > TFCE parameter (dict(start=0, step=0.2)).
> >>
> >>
> >> I think we need to improve the documentation on TFCE a bit. A good
> >> default range is probably
> >>
> >> dict(start=2, step=0.2)
> >>
> >> >
> >> > According to the script output, 48 thresholds were used from 0 to 9.4.
> >> > After 5 h (!!), 10,262,484 clusters were found, and after some more
> >> > hours the script crashed before the end ("cannot allocate memory")...
> >> >
> >>
> >> For TFCE, the number of clusters equals the number of features. However,
> >> if you do not scan the entire range of the test statistic, you won't have
> >> to wait that long.
> >>
> >>
> > I tried dict(start=2, step=0.2) and dict(start=2, step=0.5), but I still
> > had 5,120,000 clusters (15 thresholds for step=0.5). How can I not scan
> > the entire range of the test, as you suggested? I didn't understand what
> > you mean by "features"...
> >
> >
>
> samples: subjects, trials, or observations
> features: measured values at time t, location l, condition c, frequency f,
> etc.
>
> dict(start=2, step=0.2) would not scan the entire range since you start at
> a value of 2. It depends on your effect size and your test statistic. With
> an F-test you might want to start at 4 (roughly the point where values from
> an F-distribution are considered significant). Also, if the maximum of your
> primary test statistic is rather high, e.g. 80, you might want to jump in
> steps of 0.5 or even 1.
>
> I'm currently using it like that (dict(start=4, step=0.5)) in sensor-space
> analysis with 17,640 clusters and 7 jobs, and I'm waiting about 6-7 minutes
> for a result using a repeated-measures ANOVA as the stat function (slower
> than a t-test).
>
>
Ok thanks, it's clearer! So, I tried dict(start=4, step=0.5): I got 5,120,000
clusters and 14 thresholds, but no significant clusters. The computation
lasted 84 min, so that remains quite long. And I didn't find the cluster I
had found with p = 0.01 in the classical analysis...
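(For anyone counting along: the number of TFCE thresholds follows directly from start, step, and the maximum of the primary statistic. A minimal pure-Python sketch, not MNE's internal code; the 9.4 maximum is taken from the script output quoted earlier in this thread:)

```python
import math

def tfce_thresholds(start, step, stat_max):
    """Enumerate the TFCE integration thresholds implied by
    threshold=dict(start=start, step=step), given the maximum of the
    primary test statistic. A back-of-the-envelope sketch, not MNE's
    internal implementation."""
    # Small epsilon guards against float rounding at the last threshold.
    n = int(math.floor((stat_max - start) / step + 1e-9)) + 1
    return [start + i * step for i in range(n)]

# Defaults from the first message: start=0, step=0.2, max t-stat 9.4.
print(len(tfce_thresholds(0.0, 0.2, 9.4)))  # -> 48 thresholds
# Starting at 2 with step 0.5 over a similar range: far fewer.
print(len(tfce_thresholds(2.0, 0.5, 9.0)))  # -> 15 thresholds
```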


> >
> >> > I tried on shorter data (1 s, so 250 time points), but it was also too
> >> > long and too memory-demanding. Then I tried to change the step parameter
> >> > to decrease the number of thresholds to test. I took step = 0.5: 17
> >> > thresholds were used from 0 to 8, 5,121,000 clusters were found, and my
> >> > script also ended up crashing. So... what do you suggest to get an
> >> > acceptable computation time?
> >> >
> >>
> >> See above. Btw, for roughly 10,000 clusters with 15-20 thresholds I'm
> >> waiting roughly 15-20 minutes per iteration (multiple iterations with
> >> step_down_p=0.05).
> >>
> >> >
> >> > By the way, with p_threshold = 0.001, I got no cluster; with
> >> > p_threshold = 0.01, I got one occipito-parietal cluster (p-value =
> >> > 0.026) lasting around 500 ms, after about 10 min of computation.
> >> >
> >>
> >> I'm not sure what you refer to by `p_threshold`. Either you pass a dict
> >> or a float. The latter will be a classical cluster permutation analysis,
> >> the former TFCE.
> >> You can, however, use the p_threshold as the start value for TFCE.
> >>
> >>
> > Yes I was talking about classical permutation cluster analysis.
> >
> >
>
> Ah ok, got it now.
>
>
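(To make the float-vs-dict distinction concrete, here's a small pure-Python mimic of how the `threshold` argument selects the analysis. This is a sketch under my reading of the thread, not MNE source code; the real dispatch lives inside mne.stats.spatio_temporal_cluster_1samp_test:)

```python
def describe_threshold(threshold):
    """Mimic (an assumption, not MNE's actual code) of how the
    `threshold` argument picks the analysis: a dict with start/step
    requests TFCE, while a plain float is the primary cluster-forming
    threshold of a classical cluster permutation test."""
    if isinstance(threshold, dict):
        return "TFCE, scanning from start=%g in steps of %g" % (
            threshold["start"], threshold["step"])
    return "classical cluster permutation at primary threshold %g" % threshold

print(describe_threshold(dict(start=2, step=0.2)))
print(describe_threshold(4.1))
```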
> >  >
> >> > Hoping that it is useful for you,
> >> > Best,
> >> > Laetitia G.
> >>
> >>
> >> Yes, thanks!
> >> Best,
> >> Denis
> >>
> >>
> > Thanks a lot for your advice!
> > Laetitia
> >

