
Bug 1490 - Problem with reading Neuroscan 16bit file (again)

Status REOPENED
Reported 2012-05-28 17:39:00 +0200
Modified 2014-01-21 12:43:37 +0100
Product: FieldTrip
Component: fileio
Version: unspecified
Hardware: PC
Operating System: Windows
Importance: P3 normal
Assigned to: Robert Oostenveld
URL:
Tags:
Depends on:
Blocks: 1359
See also: http://bugzilla.fcdonders.nl/show_bug.cgi?id=1412

Vladimir Litvak - 2012-05-28 17:39:51 +0200

Created attachment 268: problematic file

I got the attached file at the last SPM course. It doesn't convert properly with either the 16-bit or the 32-bit setting, but it does convert OK in EEGLAB with the 16-bit setting. Vladimir


Roemer van der Meij - 2012-05-29 11:17:00 +0200

Hi Vladimir, what do you mean by improper conversion? The data loads in fine when I do ft_read_data(file), and the same goes for the header and the events. It does look extremely noisy, though... is that what you are referring to? (CC: Robert, I assigned it to myself in case it is related to my changes to loadcnt.m)


Vladimir Litvak - 2012-05-29 11:22:36 +0200

Created attachment 269: eeglab


Vladimir Litvak - 2012-05-29 11:22:48 +0200

Created attachment 270: spm


Vladimir Litvak - 2012-05-29 11:23:51 +0200

(In reply to comment #1)

Hi Roemer, I have attached PDFs of what the data look like when converted in EEGLAB and in SPM. I hope you can appreciate the difference. Best, Vladimir


Roemer van der Meij - 2012-05-29 12:04:05 +0200

Hi Vladimir, thanks for the attachments. It took me a while, but I found the source of the problem: we enforce a 'blockread' of 1 in the low-level read function from EEGLAB, whereas normally this is determined automatically. I need a bit of an executive decision on this, as I don't fully oversee the purpose of enforcing the blockread to be 1. Robert, are you aware of this being necessary in some cases? Maybe it was necessary for the old version of loadcnt but no longer?

*** Just for documentation purposes, the following did not affect the issue:
- enforcing a blockread of 1 in the header as well (now automatically determined)
- enforcing 16-bit/32-bit/auto read of header and data
- using the old version of loadcnt and:
  * enforcing a blockread of 1 for the header as well
  * enforcing 16-bit/32-bit/auto read of header and data
All of which of course makes sense, since the issue is the blockread.
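For background on why a forced blockread matters: Neuroscan .cnt files store samples channel-multiplexed in fixed-size blocks, and the blockread value tells the reader how many samples per channel each on-disk block holds. The following is an illustrative Python sketch (not loadcnt itself; encode/decode are hypothetical names) of how decoding with a forced block size of 1 scrambles a file that was written with a larger block size:

```python
def encode(data, blocksize):
    """Flatten per-channel sample lists into on-disk order:
    blocks of `blocksize` samples per channel, channels interleaved per block."""
    nchan, nsamp = len(data), len(data[0])
    flat = []
    for b in range(0, nsamp, blocksize):
        for ch in range(nchan):
            flat.extend(data[ch][b:b + blocksize])
    return flat

def decode(flat, nchan, blocksize):
    """Inverse of encode -- but only if `blocksize` matches the one used to write."""
    data = [[] for _ in range(nchan)]
    i = 0
    while i < len(flat):
        for ch in range(nchan):
            data[ch].extend(flat[i:i + blocksize])
            i += blocksize
    return data

chans = [[1, 2, 3, 4], [10, 20, 30, 40]]   # two channels, four samples each
disk = encode(chans, 2)                    # "written" with block size 2
good = decode(disk, 2, 2)                  # matching block size: channels recovered
bad = decode(disk, 2, 1)                   # forced block size 1: channels scrambled
```

Here `good` equals the original `chans`, while `bad` mixes the two channels together, which would indeed look like extreme noise when plotted.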


Robert Oostenveld - 2012-05-29 12:35:07 +0200

(In reply to comment #5)

I think the blockread was set to 1 to deal with the most common formats in the old implementation. Probably it can be disposed of now.

I suggest that you make a test script that includes the problematic file and some other files. The test script could read the first part of each file and compare it to the known correct values. The reference values that serve as the correct answer can also be stored in a file (but then a mat file). The correctness of the reference values should be verified by visual inspection.

I have copied the available test files that I have to the shared test directory, see below.

manzana> pwd
/home/common/matlab/fieldtrip/data/test/bug1490
manzana> ls -l
total 203672
-r-xr--r-- 1 roboos staff 19583149 May 29 12:24 0500.cnt
-rwx------ 1 roboos staff 15127153 May 29 12:28 CS14_Sess1_V1_short-block.cnt
-rw-r--r-- 1 roboos staff 45069561 May 29 12:25 Subject1_MP.cnt
-rw-r--r-- 1 roboos staff 12803322 May 29 12:29 cba1ff01.cnt
-rwxr-xr-x 1 roboos staff 11683897 May 29 12:25 test.cnt

The test script could look like

path = '/home/common/matlab/fieldtrip/data/test/bug1490';
filelist = { ...};
% this file contains the reference solution that has been visually checked for correctness
reference = load('/home/common/matlab/fieldtrip/data/test/bug1490.mat');
hdr = {};
dat = {};
for i=1:5
  filename = fullfile(path, filelist{i});
  hdr{i} = ft_read_header(filename);
  dat{i} = ft_read_data(filename, ....); % only 10 seconds
  assert(isequal(hdr{i}.nSamples, reference.hdr{i}.nSamples)); % idem for nchannels
  assert(isequal(dat{i}, reference.dat{i}));
end

Please see also test_bug1412.m and bug #1412.


Vladimir Litvak - 2012-05-29 12:42:02 +0200

(In reply to comment #6) Here are two more files from my archives: http://dl.dropbox.com/u/7732885/nscan.zip Vladimir


Robert Oostenveld - 2012-05-29 12:46:21 +0200

(In reply to comment #7) I have added them to the home/common test directory.


Roemer van der Meij - 2012-05-30 21:44:31 +0200

Transferring to Robert as I'm leaving to China tomorrow for a month, and this deserves a quicker fix :)


Robert Oostenveld - 2012-06-05 22:53:46 +0200

I have added a test script for bug 1490 based on 8 different Neuroscan cnt datasets. The test datasets are now:

'0500.cnt'
'cba1ff01.cnt'
'dronba4dh.cnt'
'Subject1_MP.cnt'
'1pas102_working_memory.cnt'
'CS14_Sess1_V1_short-block.cnt'
'sub1E3a.cnt'
'test.cnt'

These names should be recognizable to the external bug reporters. Based on that test script and data, I have made the following two changes:
1) don't use blockread=1, but let loadcnt decide
2) use nums instead of numsamples

roboos@mentat001> svn commit fileio/ft_read_* test/test_bug1490.m
Sending fileio/ft_read_data.m
Sending fileio/ft_read_header.m
Adding test/test_bug1490.m
Transmitting file data ...
Committed revision 5913.


Vladimir Litvak - 2012-08-22 14:22:16 +0200

Hi, I tried to convert the same file and I get an error in SPM. Try:

hdr = ft_read_data('cba1ff01.cnt');
dat = ft_read_data('cba1ff01.cnt', 'header', hdr, 'begsample', 105704, 'endsample', 206440,...
  'chanindx', 1:hdr.nChans, 'checkboundary', 1);

It should work, as the samples are in bounds. I get:

??? Subscripted assignment dimension mismatch.
Error in ==> loadcnt at 474
  dat(:, counter*h.channeloffset+1:counter*h.channeloffset+h.channeloffset) = ...
Error in ==> ft_read_data at 815
  tmp = loadcnt(filename, 'sample1', sample1, 'ldnsamples', ldnsamples);

Nothing obvious I could figure out. Vladimir


Robert Oostenveld - 2012-08-22 15:01:23 +0200

(In reply to comment #9)
> Transferring to Robert as I'm leaving to China tomorrow for a month,
> and this deserves a quicker fix :)

Hmm, I did not live up to the expectation of fixing it more quickly :-(


Robert Oostenveld - 2012-08-22 15:23:53 +0200

(In reply to comment #11)

There is a one-sample difference around

tmp = loadcnt(filename, 'sample1', sample1, 'ldnsamples', ldnsamples);

K>> dbstack
  In fileio/private/loadcnt at 457
  In ft_read_data at 815
K>> endsample-begsample
ans =
  100736
K>> ldnsamples
ldnsamples =
  100737

Looking in loadcnt, it seems the data is represented in 40-sample blocks.

------------------------

Another issue is with

dat1 = ft_read_data(filename, 'begsample', 1, 'endsample', 40);
dat2 = ft_read_data(filename, 'begsample', 2, 'endsample', 41);
dat3 = ft_read_data(filename, 'begsample', 1, 'endsample', 80);

When plotting these, dat2 is not what you would expect.
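The one-sample difference follows from inclusive sample indexing: a segment running from begsample to endsample, with both endpoints included, contains endsample - begsample + 1 samples. A trivial Python check with the numbers from the request in comment #11:

```python
begsample, endsample = 105704, 206440   # the request from comment #11
nrequested = endsample - begsample      # off by one: excludes one endpoint
ldnsamples = endsample - begsample + 1  # inclusive count, as the reader needs it
```

This reproduces exactly the 100736 vs 100737 discrepancy seen in the debugger.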


Robert Oostenveld - 2012-08-22 15:46:26 +0200

loadcnt line 469:

dat = zeros( h.nchannels, r.ldnsamples, 'single'); % FIXME see fieldtrip bug 1490
dat(:, 1:h.channeloffset) = fread(fid, [h.channeloffset h.nchannels], r.dataformat)';
counter = 1;
while counter*h.channeloffset < r.ldnsamples
  dat(:, counter*h.channeloffset+1:counter*h.channeloffset+h.channeloffset) = ...
    fread(fid, [h.channeloffset h.nchannels], r.dataformat)';
  counter = counter + 1;
end

This does not have the expected effect. If r.ldnsamples is smaller than 40, the first line, in which dat is preallocated, is overruled by the second line (where the fread returns 40 samples). In case more than 40 samples are desired, the while loop reads the remainder of the data in chunks of 40 samples. So the reading will fail either
1) if the beginning of the requested segment is not aligned with the 40-sample blocks, or
2) if the requested segment is not a multiple of 40 samples long.
Vladimir's case is 'begsample', 105704, 'endsample', 206440, which fails on both accounts.
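Given those two failure modes, one way to make the blocked reading robust (a Python sketch with hypothetical names aligned_read and read_blocks, not loadcnt's actual API) is to read whole 40-sample blocks covering the requested segment and trim to the exact range afterwards, so that neither the alignment of begsample nor the segment length matters:

```python
BLOCK = 40  # block size observed in loadcnt (h.channeloffset)

def aligned_read(read_blocks, begsample, endsample):
    """Return the samples from begsample to endsample (1-based, inclusive)
    by reading whole covering blocks and trimming the excess."""
    first_block = (begsample - 1) // BLOCK      # 0-based index of first covering block
    last_block = (endsample - 1) // BLOCK       # 0-based index of last covering block
    dat = read_blocks(first_block, last_block)  # per-channel lists of whole blocks
    offset = (begsample - 1) - first_block * BLOCK
    n = endsample - begsample + 1
    return [ch[offset:offset + n] for ch in dat]

# Demo: a fake single-channel recording of 400 samples (10 full blocks)
samples = list(range(1, 401))
read_blocks = lambda fb, lb: [samples[fb * BLOCK:(lb + 1) * BLOCK]]

# A start that is not block-aligned AND a length that is not a multiple of 40
segment = aligned_read(read_blocks, 2, 45)
```

For Vladimir's request ('begsample' 105704, 'endsample' 206440) this would read blocks 2642 through 5160 and trim 23 samples from the front, sidestepping both failure modes.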


Robert Oostenveld - 2012-08-22 16:12:49 +0200

Not yet fixed, I have to go now, but to summarize: the problem with 'cba1ff01.cnt' is in loadcnt and has to be solved there in consultation with Arno. In that solution (which has to be merged into eeglab), FT bugfix #1412 also needs to be incorporated. I have added some test cases to the script.

manzana> svn commit test_bug1490.m
Sending test_bug1490.m
Transmitting file data .
Committed revision 6395.


Vladimir Litvak - 2013-05-21 23:19:56 +0200

Created attachment 475: new problematic file


Vladimir Litvak - 2013-05-21 23:23:03 +0200

(In reply to comment #16)

Hi Robert, I see this bug was left unfixed. I got a crash again this year with the line:

dat = ft_read_data('family1.cnt', 'begsample', 156039, 'endsample', 215140);

because the actual size of dat was different from the requested size. I attach the file. There was also another strange cnt file that looked weird, with some kind of pulse artefacts. The strange thing was that it looked the same no matter whether it was converted with the 16-bit or the 32-bit setting. I also have that file if you want to look. Vladimir


Robert Oostenveld - 2013-05-24 14:00:40 +0200

with Vladimir over skype) is to move fieldtrip/fileio/private/loadcnt.m to fieldtrip/external/eeglab and merge it with the "official" eeglab version. That ensures that the flow of the code and changes to the code can be better tracked.


Robert Oostenveld - 2013-05-24 14:01:38 +0200

(In reply to comment #18)

Here is a correction of the copy-and-paste. There are two bugs related to Neuroscan cnt: bug 1412 and bug 1490. My proposed solution (discussed with Vladimir over skype) is to move fieldtrip/fileio/private/loadcnt.m to fieldtrip/external/eeglab and merge it with the "official" eeglab version. That ensures that the flow of the code and changes to the code can be better tracked.


Robert Oostenveld - 2013-09-24 17:06:08 +0200

(In reply to comment #19)

I have implemented the solution as suggested; see svn commits 8519 and 8526. Note that commit 8519 includes loadcnt from the latest svn eeglab version. I also split the test_bug1490 script into two parts. The first part runs fine; the second (now disabled) needs to be fixed in eeglab and cannot be fixed here.


Vladimir Litvak - 2014-01-21 12:43:37 +0100

(In reply to Robert Oostenveld from comment #20)

Just a reminder that part b of the bug is still not fixed. I just got another complaint. Vladimir