[Mne_analysis] appropriate way to combine intrasubject dSPMs?

Eric Larson larson.eric.d at gmail.com
Thu Feb 7 10:22:41 EST 2013

Hey Andy,

In terms of the dSPM scaling (and thresholded area) being larger for the
dSPM calculated across all runs than for the average of the per-run dSPMs,
I would expect this: the dSPM values are derived by scaling each current
estimate by a spatially dependent factor that is proportional to the
square root of the "nave" parameter (the number of trials used in the
average/evoked file when calculating the dSPM). Try manually setting the
"nave" parameter in mne_analyze and you'll see what I mean -- try 20 trials
and 80 trials, for example, and you should find that a factor-of-2 scaling
is needed to make their dSPM estimates look equivalent.
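To make that sqrt(nave) dependence concrete, here is a minimal numeric sketch (not mne_analyze itself; the amplitude and noise level are made-up values for illustration): the same evoked amplitude noise-normalized with nave = 20 versus nave = 80 differs by exactly a factor of 2.

```python
import numpy as np

# Hypothetical numbers for illustration -- not from any real dataset.
evoked_amplitude = 2.0  # current estimate at one source location
noise_sd = 1.0          # noise standard deviation at that location

def dspm_value(nave):
    # dSPM scales the current estimate by sqrt(nave) / noise SD.
    return np.sqrt(nave) * evoked_amplitude / noise_sd

ratio = dspm_value(80) / dspm_value(20)
print(ratio)  # sqrt(80 / 20) = 2.0
```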

The average of the dSPMs from each run will thus have lower overall
amplitudes than a single dSPM calculated across all runs. Consider a case
where you have four runs, each with exactly 10 trials. In averaging dSPMs,
each run's dSPM is scaled by sqrt(10), and averaging preserves this, so
the mean dSPM carries an overall scaling of sqrt(10). When all runs are
treated together to calculate a single dSPM, that dSPM is scaled by
sqrt(40), i.e., 2 * sqrt(10). In other words, the scaling is greater by a
factor of 2 than in the averaging case. Make sense? You should be able to
compensate for this difference by using a weighted summation followed by
an appropriate scale factor. Something like this pseudocode might be
appropriate:

    dSPM_tot = sum[sqrt(T_i) * dSPM_i] / sqrt[sum(T_i)]

This un-does the sqrt(nave) scaling for each dSPM_i calculated using
"nave" T_i (for i in 1..4 in my example), forms the trial-weighted average
of the underlying estimates, and then re-applies the scaling based on the
total trial count -- but it is early in the morning here in Seattle and I
might not be thinking of this 100% correctly :)
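As a sanity check on that recombination, here is a self-contained numeric sketch (simulated data at a single source location, made-up signal and noise values): each per-run "dSPM" is modeled as sqrt(T_i) times the run's evoked amplitude over the noise SD, and recombining them as sum(sqrt(T_i) * dSPM_i) / sqrt(sum(T_i)) reproduces the dSPM computed from all trials pooled, while naive averaging comes out a factor of 2 low for four equal-sized runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial amplitudes at one source location: a fixed
# signal plus unit-variance noise (made-up values for illustration).
signal, noise_sd = 1.0, 1.0
trials_per_run = [10, 10, 10, 10]
runs = [signal + noise_sd * rng.standard_normal(t) for t in trials_per_run]

# Per-run "dSPM": the run's evoked (trial average) scaled by sqrt(nave) / SD.
dspm_runs = np.array([np.sqrt(len(r)) * r.mean() / noise_sd for r in runs])

# Reference: a single dSPM computed from all 40 trials pooled together.
all_trials = np.concatenate(runs)
dspm_pooled = np.sqrt(len(all_trials)) * all_trials.mean() / noise_sd

# Recombination: un-do each run's sqrt(T_i) scaling, take the
# trial-weighted average, and re-apply sqrt(sum(T_i)); this reduces to:
T = np.array(trials_per_run, dtype=float)
dspm_combined = np.sum(np.sqrt(T) * dspm_runs) / np.sqrt(T.sum())

print(np.isclose(dspm_combined, dspm_pooled))             # True
print(np.isclose(np.mean(dspm_runs), dspm_combined / 2))  # True (4 equal runs)
```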

In any case, I would expect that by averaging dSPMs without compensating
in some such manner, you are probably hurting yourself by effectively
under-estimating the "nave" used and thus lowering your dSPM values...

Cheers,
Eric



On Thu, Feb 7, 2013 at 3:12 AM, Dykstra, Andrew <
Andrew.Dykstra at med.uni-heidelberg.de> wrote:

> Thanks Martin.
>
> I've tried A and something very similar to B, the only difference being
> that I used the gain matrix from a single run (the first) instead of
> averaging the gain matrices across runs.  The solutions I obtained were
> different in two related ways: (1) the overall values were larger in B
> and (2) the activity meeting a certain dSPM threshold was more
> widespread.  I'll try averaging the gain matrices to see how much of a
> difference that makes in the computation of a single dSPM for the grand
> average.  I guess this will depend on the magnitude of the difference in
> head position.
>
> Cheers,
> Andy
>
> --
> Andrew R. Dykstra, PhD
> Auditory Cognition Lab
> Neurologie und Poliklinik
> Universitätsklinikum Heidelberg
> Im Neuenheimer Feld 400
> 69120 Heidelberg
>
> "How small the cosmos.  How paltry and puny compared to human
> consciousness . . . to a single individual recollection." - Vladimir Nabokov
>
> On 02/06/2013 05:45 PM, Martin Luessi wrote:
> > Hi Andy,
> >
> > It seems to me that from a theoretical point of view you are right:
> > averaging the dSPMs is incorrect due to the noise normalization. However,
> > that being said, I recently did a test with data that has 15 runs and
> > ~180 epochs/run:
> >
> > A) Compute forward solutions, noise cov, inverse operator, dSPM for
> > each run, average the dSPMs (what you are asking).
> >
> > B) Create a single evoked response by averaging all epochs from all
> > runs, noise cov using all epochs, average the forward solutions across
> > runs, compute a single inverse operator, compute dSPM.
> >
> > At least qualitatively the solutions obtained using A and B are almost
> > identical for the data I have. Maybe in situations where you have
> > significantly different head positions between runs it would be better
> > to use A, but as you said, technically it is incorrect (unless the
> > same noise cov and fwd operator are used for each run).
> >
> > I hope this helps,
> >
> > Martin
> >
> > On 02/06/13 05:41, Dykstra, Andrew wrote:
> >> Hi all,
> >>
> >> What is the most appropriate way to combine intrasubject dSPMs, e.g.,
> >> multiple runs of the same task within the same subject, in which I cannot
> >> assume that the gain and noise covariance matrices are equivalent across
> >> runs?  Is it as simple as averaging the resulting dSPMs from each run?
> >> This would seem to make sense for the raw current estimates, but I'm
> >> unclear on the noise-normalized estimates.
> >>
> >> Thanks in advance,
> >> Andy
> >>
> >
> >
>