[Mne_analysis] 'shrunk' vs 'empirical' for the rest alpha power

Denis A. Engemann denis-alexander.engemann at inria.fr
Tue Mar 31 11:13:42 EDT 2020

Hi Elena,

This is hard to tell just by looking at the code.

Would you be happy to share (at least parts of) the raw & epochs files with me privately?

I could then investigate the issue.

Denis

> On Mar 31, 2020, at 4:38 PM, Elena Orekhova <orekhova.elena.v at gmail.com> wrote:
> 
> Hi,
> I am trying to localize resting-state alpha-band power using sLORETA.
>  
> I calculated noise covariance from the empty room data using:
>  
> noise_cov = mne.compute_raw_covariance(raw_noise, method=['shrunk', 'empirical'])
>  
> The 'shrunk' method was said to be a better choice. However, the result looks weird with 'shrunk' and much more normal with 'empirical'. (This subject has a prominent occipital alpha peak at the sensor level.)
>  
> The stc power plots for both methods are attached; the only parameter changed was method=['shrunk'] vs method=['empirical'].
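
A minimal sketch, not from the original message, of one way to compare the two estimators directly: with return_estimators=True, compute_raw_covariance returns one Covariance per method (sorted by cross-validated log-likelihood), so both can be inspected side by side. It assumes the same raw_noise object as in the code below.

import mne

# Fit both estimators on the empty-room recording and keep both results;
# the returned list is sorted from best to worst log-likelihood.
covs = mne.compute_raw_covariance(raw_noise, method=['shrunk', 'empirical'],
                                  return_estimators=True)
for cov in covs:
    print(cov['method'], cov['loglik'])
    # Plot the covariance matrix and its singular-value spectrum.
    cov.plot(raw_noise.info, proj=True)
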
> 
> Elena
>  
> p.s. 
> The code is below.
> I use:
>  
> Platform:      Darwin-18.7.0-x86_64-i386-64bit
> Python:        3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)  [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
> Executable:    /Users/elena/anaconda3/envs/mne/bin/python
> CPU:           i386: 4 cores
> Memory:        16.0 GB
>  
> mne:           0.20.0
> numpy:         1.16.2 {blas=mkl_rt, lapack=mkl_rt}
> scipy:         1.2.1
> matplotlib:    3.0.3 {backend=Qt5Agg}
>  
> sklearn:       0.20.3
> numba:         Not found
> nibabel:       2.4.0
> cupy:          Not found
> pandas:        0.24.2
> dipy:          0.16.0
> mayavi:        4.7.0.dev0 {qt_api=pyqt5, PyQt5=5.9.2}
> pyvista:       Not found
> vtk:           8.1.2
> 
> ##########
> epo_rest = mne.read_epochs(raw_fname, proj=False, verbose=None)
> raw_noise = io.read_raw_fif(raw_fname, preload=True)
> raw_noise.filter(2, 40, fir_design='firwin') 
> noise_cov = mne.compute_raw_covariance(raw_noise, method=['shrunk', 'empirical'])
> inverse_operator = make_inverse_operator(epo_rest.info, fwd, noise_cov, loose=0.2, depth=0.8, verbose=True)  
> method = 'sLORETA' 
> # snr and n_epochs_use are set elsewhere (not shown in this excerpt)
> lambda2 = 1. / snr ** 2
> bandwidth = 'hann'
> stcs_rest = compute_source_psd_epochs(epo_rest[:n_epochs_use], inverse_operator,
>                                       lambda2=lambda2,
>                                       method=method, fmin=8, fmax=13,
>                                       bandwidth=bandwidth,
>                                       return_generator=True, verbose=True)
> 
> psd_avg = 0.
> for i, stc_rest in enumerate(stcs_rest):
>     psd_avg += stc_rest.data
> psd_avg /= n_epochs_use
> freqs = stc_rest.times  # the frequencies are stored here
> stc_rest.data = psd_avg
> 
> clim = dict(kind='percent', lims=[50, 75, 100])
> kwargs = dict(initial_time=10.0, clim=clim, hemi='both', subjects_dir=subjects_dir,
>               time_unit='s', size=(600, 600))
> 
> brain = stc_rest.plot(figure=1, **kwargs)
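
For reference, a hedged variant of the averaging step above, not from the original message: with return_generator=False, compute_source_psd_epochs returns a plain list of SourceEstimate objects, so the per-epoch PSDs can be averaged directly. It assumes the same epo_rest, inverse_operator, lambda2, method, bandwidth and n_epochs_use as in the code above.

import numpy as np
from mne.minimum_norm import compute_source_psd_epochs

# Non-generator variant: returns a list with one source PSD per epoch.
stcs_list = compute_source_psd_epochs(epo_rest[:n_epochs_use], inverse_operator,
                                      lambda2=lambda2, method=method,
                                      fmin=8, fmax=13, bandwidth=bandwidth,
                                      return_generator=False, verbose=True)
# Average across epochs and reuse the last stc as a container for plotting.
psd_avg = np.mean([stc.data for stc in stcs_list], axis=0)
stc_avg = stcs_list[-1]
stc_avg.data = psd_avg
freqs = stc_avg.times  # the frequencies (Hz) are stored in the "times" attribute
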
> <empirical vs shrunk.pdf>



