# [Mne_analysis] calculate adaptive mean amplitude

Marijn van Vliet w.m.vanvliet at gmail.com
Mon Feb 19 18:05:30 EST 2018

Hi Aaron,

here is a super fast solution that vectorises everything. To do this, we have to use two super handy NumPy features.

I hope you can follow along with the comments. I encourage you to go through it line by line and carefully inspect all the arrays that are created.
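Two features this kind of vectorisation typically leans on are fancy indexing and broadcasting (my guess at the pair in question); here they are in miniature:

```python
import numpy as np

x = np.array([[10, 20, 30, 40],
              [50, 60, 70, 80]])

# Fancy indexing: index with integer arrays instead of scalars or slices.
# Each row picks its own column:
rows = np.array([0, 1])
cols = np.array([2, 0])
picked = x[rows, cols]          # -> array([30, 50])

# Broadcasting: arrays with compatible shapes combine without loops.
# A (2, 1) column of start indices plus a (3,) row of offsets gives a
# (2, 3) block of indices -- exactly how per-epoch windows can be built:
starts = np.array([[1], [0]])
offsets = np.arange(3)
windows = starts + offsets      # -> [[1, 2, 3], [0, 1, 2]]
```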

You can find the code example here:
https://gist.github.com/wmvanvliet/5cc013ef0f9b18561c74a4d6c1d130b7
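The gist itself is the reference; as a rough sketch of the idea (hypothetical function name `adaptive_mean`, and assuming the data is already an `(n_epochs, n_channels, n_samples)` array, e.g. from `epochs.get_data()`):

```python
import numpy as np

def adaptive_mean(data, win_start, win_stop, half_width):
    """Mean amplitude in a window centred on the per-epoch/channel minimum.

    data       : array, shape (n_epochs, n_channels, n_samples)
    win_start, win_stop : search window for the peak, in samples
    half_width : half the averaging window, in samples
                 (assumes win_start >= half_width and
                  win_stop + half_width <= n_samples)
    """
    # Sample index of the minimum within the search window, per epoch/channel
    peaks = data[:, :, win_start:win_stop].argmin(axis=2) + win_start

    # Broadcasting: offsets (2*half_width+1,) + peaks (n_epochs, n_channels, 1)
    # -> window indices of shape (n_epochs, n_channels, 2*half_width+1)
    offsets = np.arange(-half_width, half_width + 1)
    windows = peaks[:, :, np.newaxis] + offsets

    # Fancy indexing: each epoch/channel gathers its own window of samples
    ep = np.arange(data.shape[0])[:, np.newaxis, np.newaxis]
    ch = np.arange(data.shape[1])[np.newaxis, :, np.newaxis]
    return data[ep, ch, windows].mean(axis=2)
```

No Python-level loops remain: the peak search, window construction, and averaging all happen inside NumPy.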

best,
Marijn.

--
Marijn van Vliet
Postdoctoral Researcher
Department of Neuroscience and Biomedical Engineering
Aalto University

> On 20 Feb 2018, at 04:25, Aaron Newman <Aaron.Newman at dal.ca> wrote:
>
> Hi all
>
> This is perhaps more of a python question than MNE-specific, but it is needed to solve an MNE-specific problem based on an MNE data structure. I’m trying to compute ‘adaptive mean amplitude’ for individual epochs - i.e., the mean amplitude centred around the minimum (or max) value within a specified time window. To do this, I find the time point with the minimum value, then compute the mean including x number of points +/- that time point.
>
> I have a working solution, but it’s very slow because it uses nested for loops to iterate through every cell in the dataframe manually. I’m wondering if anyone can suggest a vectorized or otherwise more efficient solution?
>
> Aaron
>
> # Create pandas DataFrame from MNE epochs
> import numpy as np
> import pandas as pd
> df = epochs.to_data_frame(scaling_time=scaling_time, index=['epoch', 'time'])
>
> idx = pd.IndexSlice
>
> epoch_start, epoch_end = 150, 250
> time_win = slice(epoch_start, epoch_end)  # search window for the peak, in ms
> ama_width = 30  # tw around peak to get mean of, in ms
>
> # find time points with peak minimum values within time window
> # (skip the non-numeric 'condition' column, which idxmin cannot handle)
> t_indices = df.loc[idx[:, time_win], df.columns[1:]].groupby('epoch').idxmin()
> # note: this produces tuples within each cell, of (epoch, time_point)
>
> # create empty df with rows = epochs and columns for each electrode
> peak_times = pd.DataFrame(columns=list(df.columns[1:].values))
>
> # fill empty df with just time_point from each tuple
> for electrode in list(df.columns[1:].values):
>     peak_times.loc[idx[:],idx[electrode]] = t_indices.loc[idx[:],idx[electrode]].str[1]
>
> # create empty df for output
> out_data = pd.DataFrame(columns=list(df.columns[:].values))
>
> out_data.loc[idx[:],idx['condition']] = list(df.loc[idx[:,0], idx['condition']])
>
> # Slow step - needs to iterate through each cell of array (epoch and electrode) individually
> for electrode in list(out_data.columns[1:].values):
>     for trial in list(df.index.levels[0]):
>         # define time window around peak value
>         peak_tp = peak_times.loc[idx[trial], idx[electrode]]
>         amawin_start = peak_tp - ama_width/2
>         amawin_end = peak_tp + ama_width/2
>         amawin = list(np.arange(amawin_start, amawin_end))
>         # compute mean amplitude around peak value and write to output df
>         out_data.loc[idx[trial], idx[electrode]] = float(df.loc[idx[trial, amawin], idx[electrode]].groupby(['epoch']).mean())
>
>
> _______________________________________________
> Mne_analysis mailing list
> Mne_analysis at nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/mne_analysis
>
