[Homer-users] Problematic baselining in hmrBlockAvg

Schneider Christoph christoph.schneider at epfl.ch
Sun Jul 17 09:37:24 EDT 2016

Hello,

While working through the code of hmrBlockAvg I realized that the baselining of the trials is done class-wise (using the average of all points with t < 0), meaning that each class (based on the corresponding stimulation) has a different baseline. I have two questions about this:

1) I come from a background in EEG processing. Baselining, as I know it, is always done with respect to each individual trial, i.e., baseline_k (the average of all samples with t < 0 in trial_k) is subtracted from trial_k. I fail to see the practical value of baselining all trials to the class-average baseline (subtracting the same value from all trials of the same class), either for grand averages or for single-trial analysis. Could someone please explain the rationale behind it?
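
To make the difference concrete, here is a minimal MATLAB sketch of the two schemes (my own illustration, not HOMER code; it assumes 'trials' is an nSamples x nTrials matrix for one channel and one class, and 't' is the nSamples x 1 time vector):

    tPre      = (t < 0);                             % pre-stimulus samples
    % Trial-wise (EEG-style): each trial gets its own baseline
    baseTrial = mean(trials(tPre,:), 1);             % 1 x nTrials
    trialsTW  = trials - repmat(baseTrial, size(trials,1), 1);
    % Class-wise (as I read hmrBlockAvg): one scalar per class
    classAvg  = mean(trials, 2);                     % nSamples x 1 class average
    baseClass = mean(classAvg(tPre));                % scalar class baseline
    trialsCW  = trials - baseClass;                  % same value for every trial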

2) Doesn't this way of baselining carry the risk of inducing spurious differences (false positives) between two classes simply because their class baselines differ considerably?

Thank you very much in advance for your help!

Christoph


P.S.: I also have a rather technical bonus question: in the paper "HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain" (Huppert TJ, Diamond SG, Franceschini MA, Boas DA), the a-priori estimate of the covariance of the measurement error, R, is introduced in equation 6. In the corresponding HOMER function, hmrOD2Conc, this matrix R is set to the identity matrix I. Could someone explain this assumption to me? I can see why the off-diagonal elements are assumed to be zero for uncorrelated noise on different wavelengths (is that even true?), but I would have thought that the respective variances might well differ between wavelengths.
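
For clarity, here is how I understand equation 6 as a weighted least-squares inversion, as a minimal MATLAB sketch (my own notation, not the actual hmrOD2Conc code; E is the nLambda x 2 extinction-coefficient matrix for HbO/HbR, dOD the nLambda x nSamples pathlength-corrected optical-density changes, and nLambda, var690, var830 are hypothetical names):

    R  = eye(nLambda);                      % HOMER's choice: equal, uncorrelated
                                            % noise at every wavelength
    Ri = inv(R);                            % trivial for the identity matrix
    dc = (E' * Ri * E) \ (E' * Ri * dOD);   % weighted LS; with R = I this
                                            % reduces to ordinary least squares
    % With unequal per-wavelength variances one could instead use, e.g.:
    % R = diag([var690 var830]);            % hypothetical variance estimates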

