[Mne_analysis] [External] Re: Source Modeling HCP data in MNE
mattwint at iu.edu
Mon Sep 28 16:58:09 EDT 2020
External Email - Use Caution
So I do still need to ask: is there anything special I need to do to the raw data? I notice that when I plot the PSD, the MEG sensors on the head model are all crunched together on the left side. Should I just ignore that? I applied a correction with hcp.preprocessing.map_ch_coords_to_mne(), but the documentation says that after this step the source localization will be wrong. So the big question is whether I should skip that step.
I have tried read_epochs(), but I keep getting an epochs.times error that I can't seem to fix. read_evokeds() works fine, but I do want the epochs so I can compute coherence measures. That's why I was trying to use the tmegpreproc files (the preprocessed MEG files); I have had better luck recreating them as MNE-readable structures. I didn't realize read_epochs() pulls in preprocessed data; is that loaded from the tmegpreproc files, then? I'd prefer to use the preprocessed data if possible, so I can run all the participants through a more automated pipeline.
Thanks for all your help.
On Sep 28, 2020 3:57 AM, Denis-Alexander Engemann <denis.engemann at gmail.com> wrote:
The warnings only concern the fact that, for the Magnes WH3600 MEG
system, reference channels can be tricky to handle and some details
can depend on the site. Neither MNE nor the mne-hcp toolbox includes
the reference channels in the source modeling. This is pretty standard
for certain sites (e.g. Jülich) and in my experience has no practical
impact on the results. You would hit the same gotcha when using
standard MNE code for such MEG data.
If you are not sure, look at the examples and perhaps run your own
benchmarks (visual, motor, etc.).
Everything else works fine; only the I/O is more complicated, due to
the way the HCP data is shipped.
You can process inputs at different levels of processing; the mne-hcp
examples include source localization of preprocessed evokeds.
You should be able to do the same with preprocessed epochs (though I
found that reprocessing from scratch can give cleaner results).
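For what it's worth, loading the preprocessed epochs can be wrapped in a small helper. This is only a sketch against mne-hcp's documented readers; the subject ID, paths, data_type, and onset values below are placeholders that must match your local HCP download:

```python
def load_hcp_epochs(subject, hcp_path, data_type='task_motor', onset='stim'):
    """Sketch: load HCP preprocessed (tmegpreproc) epochs via mne-hcp.

    All arguments are placeholders; adjust them to your HCP layout.
    """
    import hcp  # pip install mne-hcp

    # hcp.read_epochs reads from the tmegpreproc outputs shipped with
    # HCP, so the epochs arrive already preprocessed
    epochs = hcp.read_epochs(subject=subject, data_type=data_type,
                             onset=onset, hcp_path=hcp_path)
    # NOTE: avoid hcp.preprocessing.map_ch_coords_to_mne() on data you
    # intend to source-localize; it remaps sensor positions for
    # sensor-space plotting only
    return epochs
```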
As for alternatives, there is not much to be done: the way HCP data is
shipped does not allow standard processing, and it requires a specific
choreography of transformations to use the coregistration information
together with the head models and the MEG outputs.
But once more, this is not arbitrary; it is just a different way of
parsing otherwise correct inputs.
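As a rough sketch of that choreography (untested here; all paths are placeholders, and the helper names are taken from the mne-hcp documentation), mne-hcp bundles the HCP-specific coregistration steps for you:

```python
def make_hcp_forward(subject, hcp_path, subjects_dir, recordings_path):
    """Sketch: build anatomy and a forward model for one HCP subject.

    All paths are placeholders; hcp.make_mne_anatomy and
    hcp.compute_forward_stack apply the HCP-specific coordinate
    transformations internally.
    """
    import hcp  # pip install mne-hcp

    # convert the HCP anatomy outputs into a FreeSurfer-style
    # subjects_dir and write the head<->MRI transform for this subject
    hcp.make_mne_anatomy(subject, subjects_dir=subjects_dir,
                         hcp_path=hcp_path,
                         recordings_path=recordings_path)
    # source space, BEM, and forward solution in one call, using the
    # transform written above
    return hcp.compute_forward_stack(subject=subject,
                                     subjects_dir=subjects_dir,
                                     hcp_path=hcp_path,
                                     recordings_path=recordings_path)
```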
Let me know if you have any questions and how it goes.
On Mon, Sep 28, 2020 at 1:49 AM Winter, Matthew <mattwint at iu.edu> wrote:
> Hi everyone,
> I am trying to figure out the better route to source modeling activity with the HCP data. Should I use the mne-hcp toolbox and start from the raw instance, or load only what I need from the tmegpreproc files (the preprocessed MEG files) and the raw files into MNE? My hesitation about the mne-hcp toolbox is its whole 'Gotchas' section, which seems to say it does not produce accurate source models, and it is not clear to me what the workarounds are, if any. I am assuming the tmegpreproc files cannot be brought over with the mne-hcp toolbox, so please correct me if I am wrong.
> Mne_analysis mailing list
> Mne_analysis at nmr.mgh.harvard.edu