[Mne_analysis] TFCE parameters

Cushing, Cody CCUSHING1 at mgh.harvard.edu
Mon Jul 18 17:20:49 EDT 2016

Oh, definitely.  It seems like unless you have serious computational needs, using start=0 is the best way to go (and certainly the only place to start if you want to know your results are legit).  My confusion stemmed from not understanding the parameters given the scarce documentation: I took that "good default" of start=2 and step=0.2 that I stumbled upon in the archives, then stepped back to 0 once I learned what these parameters were actually controlling.  I was exploring in the context of single-source phase-locking plots, where that change made quite a big difference.  In the specific context of mne-python's use of TFCE, the smaller each of those parameters, the more accurate the result, so any adjustment upward in pursuit of significance would be hugely unsound since these numbers are very non-arbitrary.  I didn't mean to imply I would be tweaking those values to find whatever p-value I liked best; I just wanted to understand what the tweaking was doing.

You have hit on a huge problem in our current multiple-comparisons-concerned landscape: all of these solutions have parameters that are arbitrary, and we simply trust the experimenter to use them responsibly (I've yet to see a paper that Bonferroni-corrected its parameter tweaks).  Everybody picks exactly the right settings the first time, right?  I would say an important step in learning these methods is to play with self-constructed fake data.  That's the only way to truly know where the signal is (since you put it there), and then to learn how to extract it using whatever new method Nichols releases on the neuroimaging world.
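
For concreteness, here's the kind of toy check I mean.  A minimal sketch, assuming mne-python's permutation_cluster_1samp_test (the shapes and effect size are made up for illustration): plant a signal in fake data and verify that TFCE with start=0, step=0.2 recovers it, and nothing else.

    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    # Fake data: 20 subjects x 50 time points (made-up sizes)
    rng = np.random.RandomState(0)
    X = rng.randn(20, 50)
    X[:, 20:30] += 0.8  # the planted signal, so we know where it is

    # TFCE mode: the threshold is a dict of start/step
    t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
        X, threshold=dict(start=0, step=0.2),
        n_permutations=1000, tail=1, seed=0)

    # With TFCE every sample gets its own p-value; only the planted
    # window should come out significant
    print(np.flatnonzero(cluster_pv.reshape(t_obs.shape) < 0.05))

If the significant samples line up with the window you injected (and nowhere else), you know the pipeline and the parameters are behaving before you touch real data.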
________________________________
From: mne_analysis-bounces at nmr.mgh.harvard.edu [mne_analysis-bounces at nmr.mgh.harvard.edu] on behalf of Natalie Klein [neklein at andrew.cmu.edu]
Sent: Monday, July 18, 2016 4:20 PM
To: Discussion and support forum for the users of MNE Software
Subject: Re: [Mne_analysis] TFCE parameters

Thanks for the discussion; I am using TFCE myself, and it is helpful to see what others think about it. It seems safest to me to always start at 0 and, if needed, increase the step size to ease the computational burden, as changing the start value seems much more likely to have drastic effects on the overall result than changing the step size.
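
For what it's worth, a cheap way to check that a coarser step is safe on a given dataset is to compute the p-values at a fine and a coarse step and compare them. A minimal sketch, assuming mne-python's permutation_cluster_1samp_test (the data, effect, and step values are made up for illustration):

    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    # Hypothetical helper: TFCE p-values at a given step size
    def tfce_pvals(X, step):
        _, _, pv, _ = permutation_cluster_1samp_test(
            X, threshold=dict(start=0, step=step),
            n_permutations=1000, tail=1, seed=0)
        return pv

    rng = np.random.RandomState(0)
    X = rng.randn(20, 50)  # fake data with a known effect
    X[:, 20:30] += 0.8

    # If coarse and fine agree, the cheaper step is safe here
    coarse, fine = tfce_pvals(X, 0.5), tfce_pvals(X, 0.05)
    print(np.abs(coarse - fine).max())

Note that this is a numerical-accuracy check, not threshold shopping: if the two disagree, you have to keep the finer (more expensive) step.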

Also, sorry to be a bit pedantic, but while I completely understand wanting to play with the method to see how the parameters might affect the results, please keep in mind that going back and changing the start/step values to get different (or possibly 'more significant') p-values is problematic. To me, one of the main benefits of using TFCE is to avoid having to choose a single threshold ahead of time, as trying several thresholds and picking the 'best one' results in a new multiple comparisons issue that is typically not accounted for.




On Mon, Jul 18, 2016 at 3:49 PM, Cushing, Cody <CCUSHING1 at mgh.harvard.edu> wrote:
Ah, alright, beautiful.  Thanks for the explanation, Eric.  I imagine using a start of 2 on a whole-brain analysis would not make much of a difference, which is presumably where that number was coming from.

Cheers,
Cody
________________________________
From: mne_analysis-bounces at nmr.mgh.harvard.edu [mne_analysis-bounces at nmr.mgh.harvard.edu] on behalf of Eric Larson [larson.eric.d at gmail.com]
Sent: Monday, July 18, 2016 3:38 PM
To: Discussion and support forum for the users of MNE Software
Subject: Re: [Mne_analysis] TFCE parameters

> Is there any motivation behind start>0 other than trying to reduce computation time?

Not as far as I know.

> Why are these the two parameters we have control over for TFCE?  What's to be gained/lost?

Think of it as a way to approximate an integral where each function value going into the summation takes a long time to compute (the clustering step). Ideally we would start at zero and go in infinitesimal steps up to the largest statistic value, but numerically that's infeasible and practically it would take forever. So the idea is to compute as few values as possible (highest start and biggest step) without affecting the result. If using a smaller start and/or step affects the output, then you should use the smaller start and/or step, because it should provide a better approximation to the integral. Without looking back, I would assume a start of 2 was suggested because it usually doesn't affect the result (at least in the suggester's experience).
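
Concretely, the quantity being approximated is the TFCE score of Smith & Nichols (2009): each point p accumulates e(h)^E * h^H * dh over heights h from start up to its own statistic value, where e(h) is the extent of the suprathreshold cluster containing p at height h (published defaults E=0.5, H=2). A minimal sketch of the discretized sum for a 1D statistic map, using a toy connected-components pass rather than mne-python's actual implementation:

    import numpy as np

    def tfce_1d(stat, start=0.0, step=0.2, E=0.5, H=2.0):
        """Discretized TFCE for a 1D map of positive statistics:
        each point accumulates extent**E * height**H * step over
        every height at which it remains suprathreshold."""
        tfce = np.zeros_like(stat, dtype=float)
        for h in np.arange(start, stat.max(), step):  # one clustering pass per height
            idx = np.flatnonzero(stat >= h)  # suprathreshold points
            if idx.size == 0:
                break
            # contiguous runs of indices = 1D clusters
            for cluster in np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1):
                tfce[cluster] += cluster.size ** E * h ** H * step
        return tfce

    # A wide, tall bump scores far higher than an isolated spike
    stat = np.array([0.1, 2.6, 0.2, 2.4, 2.5, 2.7, 2.6, 0.3])
    print(tfce_1d(stat).round(2))

Halving step (or lowering start) only refines this Riemann sum, at the cost of more clustering passes, which is why smaller values can only make the approximation more accurate.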

Eric

