Back to the main page.

Bug 2101 - Provenance very big, makes plotting and executing functions very slow

Reported 2013-04-11 10:53:00 +0200
Modified 2019-08-10 12:32:58 +0200
Product: FieldTrip
Component: plotting
Version: unspecified
Hardware: PC
Operating System: Windows
Importance: P3 critical
Assigned to:
Depends on: 2255
See also:

Simone Heideman - 2013-04-11 10:53:10 +0200

Created attachment 446: timelocked grand-average EEG data from 2 different conditions

I think MATLAB cannot handle the cfg.previous information present in my variables (the provenance is very big). Plotting and loading of variable information take a lot of time, and MATLAB often shuts down, especially when plotting with cfg.interactive = 'yes', even with 16 GB of RAM. I already discussed this problem with Robert and he asked me to report it as a bug.

Roemer van der Meij - 2013-04-18 18:04:00 +0200

I've noticed the same with a colleague here in my office. Though the initial data is very small, a stat structure of several tens of MB, it causes very severe memory issues. Calling ft_clusterplot on it, after a few plots are produced, memory use suddenly goes up to 10+ GB (crashing in our case). This makes me think it is related to the postamble cleanup. Robert, I CC'ed you as well. Simone, which plotting function did you use? (just to make it a little easier)

Roemer van der Meij - 2013-04-18 18:04:56 +0200

(I also changed this to critical, as it could affect many functions)

Roemer van der Meij - 2013-04-18 18:06:41 +0200

Also, possibly related to bug 2121

Simone Heideman - 2013-04-18 20:27:01 +0200

(In reply to comment #1) Hi Roemer, it is slowest when I use ft_multiplotER, especially when I then try to zoom in on a single channel; MATLAB sometimes cannot handle this. However, other plotting functions are slow as well, and making changes in a plot, e.g. navigating around or flipping the axes, is almost impossible because it reacts so slowly. I have TFR grand averages of the same data, and plotting those seems to go a bit faster, but it still takes a lot of effort.

Roemer van der Meij - 2013-04-19 11:51:44 +0200

Hmmm, reconstructing the analysis pipeline from one of the datasets shows why it is so slow (using ft_analysispipeline([],Grandavg_LLR)); I attached the figure. Though it looks very complicated, I don't think this is very uncommon: many subjects with many previous steps. When the pipeline becomes this complicated, the cfg-tracking becomes the bulk of the computation time when interacting with the data, which is undesirable, especially because the cfg-tracking is on by default. My guess is that the slowness is caused by the deeply nested structure-arrays. Comparing the uncompressed vs the compressed size of the data (160MB vs 5MB on disk for one of the datasets) makes me lean towards this. Still thinking about how to approach this problem. Referring to my previous comment, the cfg structure of my office mate is about 2GB, but is much less complicated than the ones I attached from Simone.
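
The growth described above can be sketched in a language-agnostic way. The following Python toy model (the names run_step, count_nodes and subject_pipeline are invented for illustration; this is not FieldTrip code) shows how a grand average over many subjects ends up carrying every subject's full analysis history under cfg.previous:

```python
# Hypothetical sketch of FieldTrip-style provenance: each analysis step
# stores the full cfg of every input under "previous", so a grand average
# over N subjects, each with D prior steps, nests all of their histories.

def run_step(input_cfgs, params):
    """The output cfg embeds the complete cfg of every input."""
    return {**params, "previous": input_cfgs}

def count_nodes(cfg):
    """Count cfg structures in the nested provenance tree."""
    return 1 + sum(count_nodes(p) for p in cfg.get("previous", []))

def subject_pipeline(depth):
    """One subject: a linear chain of 'depth' analysis steps."""
    cfg = run_step([], {"step": 0})
    for i in range(1, depth):
        cfg = run_step([cfg], {"step": i})
    return cfg

# 24 subjects with 5 steps each, combined into one grand average:
subjects = [subject_pipeline(depth=5) for _ in range(24)]
grandavg_cfg = run_step(subjects, {"step": "grandaverage"})
print(count_nodes(grandavg_cfg))  # 24 subjects * 5 steps + 1 = 121
```

Every cfg in this tree has to be walked whenever the structure is serialized or copied, which is why the cfg-tracking can dominate the interaction time.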

Roemer van der Meij - 2013-04-19 11:52:40 +0200

Created attachment 458: ft_analysispipeline on Simone's data

Roemer van der Meij - 2013-04-19 11:53:01 +0200

Created attachment 459: ft_analysisprotocol on Simone's data

Robert Oostenveld - 2013-04-19 13:41:02 +0200

(In reply to comment #7) looks impressive, but is indeed not uncommon considering group studies.

Roemer van der Meij - 2013-04-19 14:21:41 +0200

I've tracked down some of the slowdowns to calls like ft_preamble_provenance varargin, which on line 51 does mxSerialize on varargin (containing the data and the complicated cfg); the output is used for computing the md5 hash. I guess the serialization has difficulties with the nested structure-arrays? Maybe we could circumvent this by basing the hash only on the data minus the cfg?

Robert Oostenveld - 2013-04-19 14:42:28 +0200

(In reply to comment #9) At this moment the input and output hashes are both computed on the data including the cfg. What is important is that the input and output hashes remain comparable, so I think it would be fine to switch both to data-without-cfg.

Roemer van der Meij - 2013-04-19 17:16:23 +0200

Alright, fixes applied. ft_preamble/ft_postamble provenance now first tries to remove the cfg field from each of the inputs, and then calculates the md5 checksum on those. This creates a copy of each of the data structures of course, but the reference to the data in memory doesn't change (at least, in MATLAB R2012a it doesn't), so no additional memory is needed. Simone, all of the plotting should be much more responsive now. Could you play around and confirm? If not, it might be that the md5 still takes a lot of time to calculate. PS: Robert, I used svn to commit it. But I yearn for git! ;)
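
For reference, the idea of the fix can be sketched as follows. This is a hypothetical Python analogue (FieldTrip itself does this in MATLAB with mxSerialize and an md5 routine; the function data_hash is invented for illustration): strip the cfg field before serializing, so the checksum cost no longer scales with the provenance size.

```python
import hashlib
import pickle

def data_hash(data):
    """md5 over the serialized data structure, excluding the 'cfg' field.
    A shallow copy suffices: the large numeric arrays are still shared,
    only the top-level container is duplicated."""
    stripped = {k: v for k, v in data.items() if k != "cfg"}
    return hashlib.md5(pickle.dumps(stripped, protocol=2)).hexdigest()

# Two structures with identical data but wildly different provenance:
small = {"avg": [1.0, 2.0, 3.0], "cfg": {"previous": ["huge nested history"]}}
also_small = {"avg": [1.0, 2.0, 3.0], "cfg": {"previous": []}}
print(data_hash(small) == data_hash(also_small))  # True: cfg is ignored
```

Because both the input and the output hashes are computed the same way, they remain comparable, which was the constraint Robert mentioned above.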

Simone Heideman - 2013-04-23 15:57:58 +0200

I downloaded the newest version of FieldTrip and tried to do some plotting after adding a new participant. However, MATLAB shuts down when I try to plot the ERP grand averages. MATLAB also gives warnings when I try to calculate the TFRs for this new participant; it gives the following warning for every trial (so a lot of warnings!):

  Warning: output time-bins are different from input time-bins
    > In ft_specest_mtrial 34, frequency 25 (50.00 Hz), 1 tapers

When I try to calculate the TFR grand averages it gives the following error:

  computing average powspctrm over 14 subjects
  ??? Error using ==> plus
  Array dimensions must match for binary array op.
  Error in ==> ft_freqgrandaverage at 190
  tmp = tmp + varargin{s}.(cfg.parameter{k})./Nsubj; % do a weighted running sum

I never saw this error before. Is it possible that this (and perhaps also MATLAB shutting down after plotting ERPs) is caused by the changes you made, or is it likely that I did something wrong myself? I think I used exactly the same procedure as for the other participants.

Roemer van der Meij - 2013-04-23 17:10:47 +0200

Hi Simone, no, I'm afraid the changes I made for this bug cannot be the cause of your current issues. The time-bin warning has been added recently, and serves to make it more obvious when the time-bins you request are not present in the data (the closest ones will be picked). This means your cfg.toi specified time-points that were not found in your data.time{i}. The error and the shutting down (you mean a crash to desktop?) look like something different. Could you make a separate bug out of this, with a copy-paste of the cfg you used, and maybe some data? Were you able to reach a point where you could plot data and see whether it was faster?

Robert Oostenveld - 2013-05-17 08:44:12 +0200

Arjen has provided me with another example structure that has a very large provenance history. I will add this data structure (like the one from Simone) to /home/common/matlab/fieldtrip/data/test/bug2101 for future reference and testing. On 16 May 2013, at 13:58, Arjen Stolk wrote: The mat-file contains the structure named 'SP'.

  K>> whos
    Name      Size            Bytes          Class     Attributes
    SP        1x1             2234455688     struct

  K>> SP = rmfield(SP,'cfg')
  K>> whos
    Name      Size            Bytes          Class     Attributes
    SP        1x1             632540         struct

Roemer van der Meij - 2013-05-17 12:02:14 +0200

I had a look at the cfg from Arjen's mat-file; it should give us some food for thought. For every subject, multiple copies of the grads, vols and grids are kept, many in nested user-cfgs. Single large fields (like vol.tri) are removed by checkconfig, but there are many others that are present very often (grad, grid, vol). Examples:

  vol.pnt                         4000x3  double
  grad.chanori                     302x3  double
  grad.chanpos                     302x3  double
  grad.chantype                    302x1  cell
  grad.coilori                     595x3  double
  grad.coilpos                     595x3  double
  grad.label                       302x1  cell
  grad.balance.G1BR.labelorg       281x1  cell
  grad.balance.G1BR.labelnew       281x1  cell
  grad.balance.GX1BR...
  grid.inside                     1x2998  double
  grid.inside                     1x2782  double

For all 24 subjects these are present at least several times, and some of the fields are present inside themselves as well (e.g. grid.inside and grid.cfg.grid.inside).
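
A quick way to quantify this kind of duplication is to walk the nested cfg and count how often a heavy field recurs. The sketch below is a hypothetical Python illustration (count_field and the toy cfg are invented; in MATLAB one would recurse over struct fields instead), mimicking the grad/vol/grid repetition described above:

```python
# Count how many times a field name occurs anywhere in a nested cfg tree,
# including copies buried under "previous" histories and nested user-cfgs.
def count_field(cfg, name):
    hits = 0
    if isinstance(cfg, dict):
        for key, value in cfg.items():
            if key == name:
                hits += 1
            hits += count_field(value, name)
    elif isinstance(cfg, list):
        for item in cfg:
            hits += count_field(item, name)
    return hits

# Toy cfg: the same grad description recurs at several nesting levels.
grad = {"chanpos": "302x3 double", "coilpos": "595x3 double"}
cfg = {
    "grad": grad,
    "previous": [{"grad": grad, "previous": [{"grad": grad}]}],
}
print(count_field(cfg, "grad"))  # 3 copies referenced in the tree
```

In memory the copies may even be shared, but once the structure is serialized or saved to disk each occurrence is written out in full, which matches the compressed-vs-uncompressed size gap seen earlier in this thread.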

Robert Oostenveld - 2019-08-10 12:32:58 +0200

This closes a whole series of bugs that have been resolved (either FIXED/WONTFIX/INVALID) for quite some time. If you disagree, please file a new issue on