Back to the main page.

Bug 2151 - MFF might have discontinuous data

Reported 2013-05-01 17:05:00 +0200
Modified 2019-08-10 12:03:49 +0200
Product: FieldTrip
Component: fileio
Version: unspecified
Hardware: PC
Operating System: Linux
Importance: P3 major
Assigned to: Gio Piantoni
Depends on:
See also:

Gio Piantoni - 2013-05-01 17:05:02 +0200

Created attachment 473 [NetStation image]

Hi, I found an interesting behavior in the EGI MFF file format. We start the recording on EGI, then we start E-Prime. When E-Prime starts, EGI does not acquire or store data for half a second. When reading the recording in NetStation, the time goes from 0 s to 1 s, then from 1.5 s to the end of the recording. See the attached screenshot: the green line shows the discontinuity. The discontinuity is not obvious in the time scale at the top of the screenshot, but note that the ticks above "Add Clip Event" are interrupted.

This is a problem when reading the data into MATLAB using FieldTrip. FieldTrip currently reads the data as continuous, in other words, it does not take the discontinuity into account. Because the markers are in "absolute" EGI time, they end up shifted in time by the length of the discontinuity. I'm using the egi_mff_v1 approach, because I didn't get the Java reader to work; I suspect the bug would be there as well.

We found out that the information is stored in "epochs.xml", which in our case reads (times in nanoseconds):

  <epochs>
    <epoch>
      <beginTime>0</beginTime>
      <endTime>1351000000</endTime>
      <firstBlock>1</firstBlock>
      <lastBlock>1</lastBlock>
    </epoch>
    <epoch>
      <beginTime>1801000000</beginTime>
      <endTime>573676000000</endTime>
      <firstBlock>2</firstBlock>
      <lastBlock>141</lastBlock>
    </epoch>
  </epochs>

meaning that the signal runs from 0 to 1351 ms (sampling rate = 1000 Hz) and from 1801 to 573676 ms. I can provide the data if needed (~580 MB).

How does FieldTrip deal with a discontinuity like this? Either we change the marker times, or we add some NaNs for the period between the markers. I can change ft_read_data between lines 657-668 to handle this by adding NaNs, if you think it's a good idea. Cheers, g
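To make the epoch bookkeeping concrete, here is a small sketch (in Python for illustration only; FieldTrip itself is MATLAB) that parses the epoch boundaries from the epochs.xml above and converts the nanosecond times to sample indices at 1000 Hz. The epoch/beginTime/endTime tag names follow the usual MFF convention and should be treated as an assumption if your file differs.

```python
# Sketch: convert MFF epoch boundaries (nanoseconds) to sample indices
# and measure the gap between epochs. Assumes MFF-style tag names.
import xml.etree.ElementTree as ET

FS = 1000                    # sampling rate reported for this recording, Hz
NS_PER_SAMPLE = 1e9 / FS     # nanoseconds per sample

# The two epochs from the report, as a minimal XML snippet.
xml_text = """<epochs>
  <epoch><beginTime>0</beginTime><endTime>1351000000</endTime></epoch>
  <epoch><beginTime>1801000000</beginTime><endTime>573676000000</endTime></epoch>
</epochs>"""

epochs = []
for ep in ET.fromstring(xml_text).iter("epoch"):
    begin_ns = int(ep.findtext("beginTime"))
    end_ns = int(ep.findtext("endTime"))
    epochs.append((int(begin_ns / NS_PER_SAMPLE), int(end_ns / NS_PER_SAMPLE)))

print(epochs)                       # [(0, 1351), (1801, 573676)]
gap = epochs[1][0] - epochs[0][1]
print(gap)                          # 450 samples (0.45 s) missing
```

The 450-sample gap is exactly the half-second pause described above, and it is the amount by which markers after the gap would be shifted if the data were read as continuous.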

Gio Piantoni - 2013-05-01 18:06:55 +0200

I just noticed that ft_read_data, l. 657-668, should throw an error in cases like mine. However, it didn't, because in my case the xml file is called "epochs.xml", not "epoch.xml".

Ingrid Nieuwenhuis - 2013-05-01 20:25:07 +0200

Hi Gio,

Yeah, I remember having the same problem with my data at some point. FYI, there is a setting in the E-Prime package to prevent this fragmented data from happening in the first place. I had the same issue and they told me the following:

-------------
The first small segmentation that you saw is indeed the work of NSInit in the E-Prime experiment. When the NSInit object calls Net Station to establish a socket communication, it also manipulates Net Station to start recording, and when the communication handshake finishes, the recording stops, which leaves a small chunk of epoch. To disable this function, add "False" at the end of the call parameter line in NSInit, e.g.: c, "on", CellList, "socket", "", "False".
----------------

But that does not solve your problem with the data you have. I remember having tried to make the v1 implementation work with this, by adding some info to the header I think, but I can't remember if I solved it. I do remember it was a headache indeed. So I suggest that you first look through the v1 implementation in detail, to see what's already there, and then try to solve it in a way you think makes sense. Adding NaNs for the missing time sounds like a smart idea. So I'd look into the read-header part, and see if you can already tell from the hdr that there are discontinuous epochs. If so, add some info to the hdr (if this is not implemented already). Then, when reading in the data, the code should check if there are epochs and, if so, fill the gaps with NaNs.

Good luck! I feel your pain, been there a thousand times ;)

Ingrid
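The NaN-filling idea Ingrid describes can be sketched as follows (Python/NumPy for illustration; the actual fix would live in FieldTrip's MATLAB code). It assumes the header already exposes each epoch's absolute begin/end position in samples, and places each epoch's data at its absolute position, leaving NaNs in the gaps so that marker times in absolute EGI time line up with the data.

```python
# Sketch of gap-filling: stitch discontinuous epochs into one array,
# with NaN wherever no data was recorded. Toy data, not real MFF I/O.
import numpy as np

def stitch_epochs(epoch_data, epoch_bounds, n_channels):
    """Place each epoch's samples at its absolute position, NaN elsewhere.

    epoch_data   : list of (n_channels, n_samples) arrays, one per epoch
    epoch_bounds : list of (begin_sample, end_sample) pairs, absolute time
    """
    total = max(end for _, end in epoch_bounds)
    out = np.full((n_channels, total), np.nan)
    for data, (begin, end) in zip(epoch_data, epoch_bounds):
        out[:, begin:end] = data
    return out

# Toy example: 1 channel, two epochs with a 3-sample gap (samples 4..6).
bounds = [(0, 4), (7, 10)]
data = [np.ones((1, 4)), 2 * np.ones((1, 3))]
x = stitch_epochs(data, bounds, n_channels=1)
print(np.isnan(x[0, 4:7]).all())   # True: the gap is NaN
```

With this layout, an event at absolute sample 8 indexes directly into the stitched array without any per-epoch offset correction.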

Gio Piantoni - 2013-05-01 22:08:21 +0200

Hi Ingrid, Thanks a lot for the detailed answer! You're right about ft_read_header: the code you wrote there already gives an elegant solution to the problem. Let's see if I can use it in my case. I missed the warning at l. 902 and the error in ft_read_data at l. 657, because in my case the file is called "epochs.xml" instead of "epoch.xml", so FieldTrip did not throw an error. Can you (and other EGI users) check whether your datasets have "epochs.xml" or "epoch.xml"? Cheers, g

Ingrid Nieuwenhuis - 2013-05-01 22:16:10 +0200

I also have epochs.xml, and I only have 1 epoch for sure, so epoch seems to be a typo. Cheers, Ingrid

Gio Piantoni - 2013-05-01 23:02:58 +0200

Hi Ingrid, You did an amazing job with the code! I didn't realize that ft_read_header and ft_read_event take care of the shift due to the multiple epochs; it's a very elegant solution and exactly what I needed. I thought the "magic" would happen in ft_read_data, but since I'm only looking at event-related data I don't need to implement the NaN part, and I like the current solution a lot. I'm just surprised it didn't work out of the box because of the epochs.xml/epoch.xml typo. I think we should use epochs.xml, but allow for the case where epoch.xml is in the folder. I'll write the code for that, then close the bug. Thanks! g
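The filename fallback Gio proposes here is simple to sketch (again in Python purely for illustration; the FieldTrip fix is in MATLAB, and the helper name below is hypothetical): prefer "epochs.xml", but accept the apparently mistyped "epoch.xml" if that is what the MFF folder contains.

```python
# Sketch of the proposed lookup: try epochs.xml first, fall back to
# epoch.xml, return None if neither exists. Function name is illustrative.
import os

def find_epoch_file(mff_dir):
    for name in ("epochs.xml", "epoch.xml"):
        candidate = os.path.join(mff_dir, name)
        if os.path.exists(candidate):
            return candidate
    return None

# Usage: path = find_epoch_file("/data/subject01.mff")
```

Checking both names keeps old datasets readable while treating "epochs.xml" as the canonical spelling.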

Robert Oostenveld - 2013-05-02 07:39:33 +0200

thanks guys!

Gio Piantoni - 2013-05-02 09:25:14 +0200

I added a pull request to the git repo with the patch: But, Robert, do you still have problems syncing git and svn? How should I submit patches?

Gio Piantoni - 2013-09-19 17:31:49 +0200

(In reply to comment #7) new PR:

Robert Oostenveld - 2013-09-20 12:05:16 +0200

(In reply to comment #8) it has been integrated in the main branch, see

Gio Piantoni - 2013-09-20 16:13:15 +0200

Thanks. Sorry that git gives you so much trouble: I'm also still learning a lot about it...

Robert Oostenveld - 2013-09-20 19:23:00 +0200

(In reply to comment #10) git by itself is very nice; we use it in another project without problems. It is just the bidirectional combination with svn that I still don't understand how to make work. Luckily, the unidirectional svn->git sync now works automatically and robustly.

Robert Oostenveld - 2019-08-10 12:03:49 +0200

This closes a whole series of bugs that have been resolved (as FIXED/WONTFIX/INVALID) for quite some time. If you disagree, please file a new issue describing the problem on