[Mne_analysis] Baseline correcting pre-stimulus segments for covariance estimation

Graham Flick grahamflick00 at gmail.com
Wed May 10 05:49:23 EDT 2017

Hi All,

I have a set of MEG data collected in a sentence processing paradigm, where
the critical words occur 6-7 words into the sentence. I'd like to look at
source-level evoked responses to these words via minimum-norm estimates,
without applying baseline correction.

In this scenario, should I still apply baseline correction to the
pre-stimulus intervals that I use to estimate the noise covariance? Note
that in this design, pre-stimulus is actually pre-sentence, meaning that
there is about 4 seconds of data between these windows and the onset of the
epochs that will be inverted to source space.
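For concreteness, the baseline correction in question is just per-epoch, per-channel subtraction of the mean over the baseline window. A toy sketch in plain NumPy (made-up channel count and sampling rate, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_times = 4, 100
sfreq = 100.0                               # hypothetical sampling rate (Hz)
times = np.arange(n_times) / sfreq - 0.5    # -0.5 s to +0.5 s

# toy epoch: noise riding on a DC offset
epoch = rng.standard_normal((n_ch, n_times)) + 5.0

# baseline = (-0.5, -0.4): average over that window, subtract per channel
mask = (times >= -0.5) & (times <= -0.4)
corrected = epoch - epoch[:, mask].mean(axis=1, keepdims=True)
```

After correction, each channel's mean over the baseline window is exactly zero; the question is whether the window the covariance is estimated from should be treated this way.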

In an attempt to address this question, I've plotted whitened evoked responses
from the start of the sentence to the target words using different methods
of covariance estimation, with and without baseline correction applied to
the 100 ms windows from which I estimated the covariance. I've attached an
example from one subject, and the pattern shown there is consistent across
quite a few subjects in the sample.

In general, it looks like if I apply baseline correction to the window from
which I estimate covariance, the global field power of the whitened
response never reaches 1, even in the window in which the covariance was
estimated. In contrast, the GFP in the whitened response without baseline
correction looks more like what I'd expect to see. This pattern seems
unusual to me, but does it imply that I should not be applying baseline
correction here? Or are there other factors that should be considered?
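To make concrete what I mean by the expected GFP: if the covariance matches the noise process, whitening (multiplying by the inverse matrix square root of the covariance) should leave noise with unit variance per channel, so the GFP (std across channels at each time point) should fluctuate around 1. A purely illustrative NumPy sketch of that check, with simulated noise rather than MEG data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_times, n_ep = 20, 50, 400

# simulate noise epochs with a known, non-trivial channel covariance
A = rng.standard_normal((n_ch, n_ch))
L = np.linalg.cholesky(A @ A.T / n_ch + np.eye(n_ch))
epochs = L @ rng.standard_normal((n_ep, n_ch, n_times))

# empirical covariance pooled over epochs and time points
X = epochs.transpose(1, 0, 2).reshape(n_ch, -1)
cov = X @ X.T / X.shape[1]

# whitener = inverse matrix square root of the covariance
evals, evecs = np.linalg.eigh(cov)
whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T

# whiten fresh noise from the same process; the GFP (std across
# channels at each time point) should hover around 1
fresh = L @ rng.standard_normal((n_ch, n_times))
gfp = (whitener @ fresh).std(axis=0)
```

A whitened GFP that stays well below 1 even inside the covariance window, as I see with the baseline-corrected estimate, is what strikes me as unusual.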

Thanks!

Graham


Here is a sample of the code used to generate the whitened responses for
the empirical estimator with/without baseline correction:

import mne

raw = mne.io.read_raw_fif(fname_raw, preload=True)
events = mne.read_events(fname_event)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
                       exclude=bads)

# epochs spanning sentence onset (-4.4 s) through the target word
epochstargetFull = mne.Epochs(raw, events, event_id=event_id,
                              tmin=-4.4, tmax=1.2, decim=5,
                              reject=dict(mag=2e-12), baseline=None,
                              picks=picks, on_missing='ignore')
evokedtargetFull = epochstargetFull.average()

method = 'empirical'

# covariance with baseline correction applied
epochscov = mne.Epochs(raw, events, event_id=event_id, tmin=-4.4,
                       tmax=-4.3, decim=5, reject=dict(mag=2e-12),
                       baseline=(-4.4, -4.3), picks=picks,
                       on_missing='ignore')
cov = mne.compute_covariance(epochscov, tmin=-4.4, tmax=-4.3,
                             method=method)
tmp = evokedtargetFull.plot_white(cov, show=False)
tmp.savefig('topright_empirical_Baselined.png')
del epochscov, cov, tmp

# covariance without baseline correction applied
epochscov = mne.Epochs(raw, events, event_id=event_id, tmin=-4.4,
                       tmax=-4.3, decim=5, reject=dict(mag=2e-12),
                       baseline=None, picks=picks, on_missing='ignore')
cov = mne.compute_covariance(epochscov, tmin=-4.4, tmax=-4.3, method=method)
tmp = evokedtargetFull.plot_white(cov, show=False)
tmp.savefig('topleft_empirical_NoBaseline.png')
[Attachment: CovarianceComparison.png — http://mail.nmr.mgh.harvard.edu/pipermail/mne_analysis/attachments/20170510/a1b6e6b8/attachment-0001.png]

