[Mne_analysis] Mne_analysis post from ccushing1 at mgh.harvard.edu requires approval

Alexandre Gramfort alexandre.gramfort at telecom-paristech.fr
Tue Apr 12 15:13:35 EDT 2016

Hi Cody,

> I've got one final probe for you on the topic, if you don't mind.  I've been reading up on the estimation of the noise covariance matrix, since that was the main player in activation differences. It seems the standard approach for event-related studies is to compute the noise covariance based on a pre-stimulus baseline if "enough samples are available", but without any explicit suggestions of what is enough.

you might be interested in our recent papers with Denis et al.:

http://www.ncbi.nlm.nih.gov/pubmed/25541187

https://hal-institut-mines-telecom.archives-ouvertes.fr/hal-01183551/document

and some related MNE examples:

http://martinos.org/mne/dev/auto_examples/visualization/plot_evoked_whitening.html

http://martinos.org/mne/dev/auto_examples/datasets/plot_spm_faces_dataset.html
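The papers linked above motivate regularized ("shrunk") covariance estimators when the number of baseline samples is small relative to the number of channels. A toy numpy sketch of the idea (not MNE code; the fixed shrinkage weight `alpha` is my choice for illustration, whereas the papers select the regularization automatically by cross-validated likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 100  # few baseline samples relative to channels
X = rng.standard_normal((n_samples, n_channels))

# Empirical covariance: noisy and poorly conditioned with few samples
Xc = X - X.mean(axis=0)
emp = Xc.T @ Xc / (n_samples - 1)

# Shrink toward a scaled identity: (1 - alpha) * emp + alpha * mu * I,
# where mu is the average eigenvalue (trace / n_channels)
alpha = 0.1
mu = np.trace(emp) / n_channels
shrunk = (1 - alpha) * emp + alpha * mu * np.eye(n_channels)

# Shrinkage pulls the extreme eigenvalues toward the mean, which
# improves the conditioning of the inverse used for whitening
print(np.linalg.cond(emp), np.linalg.cond(shrunk))
```

With real data you would let mne-python pick the estimator and its regularization for you (the whitening example above shows how to check the result visually).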

> Do you have an opinion on enough samples?  And also, you are correct about me computing the noise covariance over my whole trial in mne-c, but that is what seems to make my activations higher (in dspm values) which seems counter-intuitive to me?

Hard to tell without looking at the data; I never ran this experiment myself.

> Or perhaps it is my understanding that is inverted.  And, also, I know the simple answer to this question is "just use mne-python",

yes :)

> but the parameters in the .cov text file for mne-c now leave me confused.  If tmin/tmax defines the time period to compute the noise covariance over, why is there a bmin/bmax option, i.e. why do I want to baseline-correct a baseline?  Should those values (tmin/tmax, bmin/bmax) be equivalent to get the "best" result?  Thanks for any input.

I guess it's a design decision. I am sure Matti had a good reason for this :)
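One plausible reading of the bmin/bmax option (my own interpretation, not a transcription of the mne-c source): subtracting each epoch's baseline mean before accumulating the covariance removes per-epoch DC offsets and slow drifts, which would otherwise inflate the estimated noise variance even when the covariance window and the baseline window coincide. A toy numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_channels, n_times = 50, 8, 200
times = np.linspace(-0.5, 1.5, n_times)

# Toy epochs: unit-variance noise plus a random per-epoch DC offset
# (standing in for drifts that differ from trial to trial)
epochs = rng.standard_normal((n_epochs, n_channels, n_times))
epochs += 5.0 * rng.standard_normal((n_epochs, n_channels, 1))

def noise_cov(data, times, tmin, tmax, bmin=None, bmax=None):
    """Sample covariance over [tmin, tmax]; if bmin/bmax are given,
    first subtract each epoch's mean over [bmin, bmax] (my reading of
    the .cov parameters, illustrative only)."""
    if bmin is not None:
        bmask = (times >= bmin) & (times <= bmax)
        data = data - data[:, :, bmask].mean(axis=2, keepdims=True)
    tmask = (times >= tmin) & (times <= tmax)
    X = data[:, :, tmask].transpose(0, 2, 1).reshape(-1, data.shape[1])
    X = X - X.mean(axis=0)
    return X.T @ X / (X.shape[0] - 1)

cov_plain = noise_cov(epochs, times, tmin=-0.5, tmax=0.0)
cov_bc = noise_cov(epochs, times, tmin=-0.5, tmax=0.0,
                   bmin=-0.5, bmax=0.0)

# The per-epoch offsets inflate the uncorrected estimate; baseline
# correction removes them before the covariance is accumulated
print(np.diag(cov_plain).mean(), np.diag(cov_bc).mean())
```

Under this reading, using the same window for tmin/tmax and bmin/bmax is a perfectly sensible choice, not a redundancy.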

Alex


