
Bug 2990 - source and sensor level results do not match: left and right hemispheres are swapped

Status CLOSED FIXED
Reported 2015-10-20 17:04:00 +0200
Modified 2016-06-14 16:14:55 +0200
Product: FieldTrip
Component: core
Version: unspecified
Hardware: PC
Operating System: Windows
Importance: P5 normal
Assigned to: Robert Oostenveld
URL:
Tags:
Depends on:
Blocks:
See also:

Lin Wang - 2015-10-20 17:04:05 +0200

Created attachment 743: The source and sensor level results of one subject. The N400m effect was shown on the left hemisphere at the sensor level, but the effect was localized on the right temporal lobe at the source level.


Arjen Stolk - 2015-10-20 17:11:00 +0200

Just flip the image. Jokes aside, how are the images created? Is the source reconstruction of an ERP, or of signal variance?


Robert Oostenveld - 2015-10-20 17:12:23 +0200

The problem seems to stem from the original MRI (directly after reading it from the DICOM files) having this left-handed coordinate system:

>> mri.transform
ans =
         0         0   -1.0000   88.3897
   -1.0000         0         0  171.3241
         0   -1.0000         0  111.4144
         0         0         0    1.0000

Lin and I already checked the realignment procedure (ft_volumerealign); that is all going OK. I received some test data and copied it to /home/common/matlab/fieldtrip/data/test/bug2990. I suspect it to be due to ft_prepare_sourcemodel, which is used to create an MNI-realigned source model. I will make a test script to test further.
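For readers unfamiliar with the convention: the handedness of a voxel-to-head transform can be read off the sign of the determinant of its upper-left 3x3 part, with a negative determinant indicating left-handed voxel axes. A minimal sketch (the file name is a placeholder, not part of the test data):

  mri = ft_read_mri('single_dicom_of_the_series.IMA');  % placeholder file name
  R   = mri.transform(1:3, 1:3);                        % rotation/scaling part of the 4x4 transform
  if det(R) < 0
    fprintf('left-handed voxel axes (det = %g)\n', det(R));
  else
    fprintf('right-handed voxel axes (det = %g)\n', det(R));
  end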


Robert Oostenveld - 2015-10-20 17:29:07 +0200

I made a test script, but have not been able to detect anything inconsistent with ft_prepare_sourcemodel.

mac011> svn commit test_bug2990.m
Adding test_bug2990.m
Transmitting file data .
Committed revision 10803.


Robert Oostenveld - 2015-10-20 17:30:31 +0200

Lin will redo the analysis (single subject and perhaps also group-wise), switching to the resliced MRI early in the pipeline.
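A hedged sketch of that reordering, assuming the standard FieldTrip volume functions and placeholder file names (reslice to a right-handed, isotropic voxel grid right after realignment, before segmentation and source-model construction):

  mri           = ft_read_mri('single_dicom_of_the_series.IMA');  % placeholder file name
  cfg           = [];
  cfg.method    = 'interactive';             % mark fiducials to realign to head coordinates
  mri_realigned = ft_volumerealign(cfg, mri);
  cfg           = [];
  mri_resliced  = ft_volumereslice(cfg, mri_realigned);  % isotropic, right-handed output
  % ... continue with ft_volumesegment, ft_prepare_headmodel, ft_prepare_sourcemodel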


Arjen Stolk - 2015-10-20 17:58:29 +0200

Thanks for this clarification.


Lin Wang - 2015-10-21 10:23:20 +0200

Thanks for all your suggestions. I've tried three things so far:
1. Create a subject-specific grid using the template grid and the individual MRI without reslicing.
2. Create a subject-specific grid using the template grid and the individually resliced MRI.
3. Create a subject-specific grid without normalizing to the template grid.
All three analyses gave me the strong right-hemispheric activation. Now I'm trying to group the subjects based on their sensor level results (more left-dominant ones vs. less left-dominant ones) and then compare their source level results to see where the difference is. I think another way of testing the script is to use the tutorial data (the N400 dataset) to see whether it gives us the same flipped results. Lin
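For reference, a sketch of option 2 above (template-warped grid from the resliced MRI) along the lines of the standard FieldTrip recipe; the template file is the one shipped in fieldtrip/template/sourcemodel, other variable names are placeholders, and the exact cfg option names have shifted between FieldTrip versions:

  load('standard_sourcemodel3d10mm.mat', 'sourcemodel');   % MNI-space template grid shipped with FieldTrip
  template_grid      = sourcemodel;
  cfg                = [];
  cfg.grid.warpmni   = 'yes';
  cfg.grid.template  = template_grid;
  cfg.grid.nonlinear = 'yes';
  cfg.mri            = mri_resliced;        % the individually resliced anatomical
  subject_grid       = ft_prepare_sourcemodel(cfg);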


Robert Oostenveld - 2015-10-21 13:17:24 +0200

(In reply to Lin Wang from comment #6) Since I have not been able to find an error in the FT code so far, and since all suggested work-arounds have failed to resolve the matter, I suggest we also consider the possibility that what you get is not due to a left-right flip, but due to some other effect (e.g. one subject driving the results in the sensor level analyses, whereas the group stats at source level are probably computed differently). Do you have a single "representative" subject that has a very strong effect? If so, I suggest we investigate that subject in more detail.


Lin Wang - 2015-10-21 13:33:20 +0200

Created attachment 744: Topos of ERF effects for all subjects


Lin Wang - 2015-10-21 13:36:29 +0200

(In reply to Robert Oostenveld from comment #7) I attached the topographies of the ERF effects of all participants. I think most of them showed the left-dominant effect at the sensor level. Which one do you think we should start with? I used a relatively small time window (0.3-0.6 s) to calculate the common filter. Jim suggested that I use a longer time window (including the baseline period). I'll see whether this makes a difference. Lin


Robert Oostenveld - 2015-10-21 13:39:35 +0200

I suggest number 17. It seems to have the strongest effect. Note that overall the effect is more on the left, but not exclusively. I don't know how the group stats were computed, but consistency (e.g. group t-scores) might look different than averaged amplitudes.


Lin Wang - 2015-10-21 13:43:28 +0200

(In reply to Robert Oostenveld from comment #10) I haven't done any stats on the group level data. I only calculated the grand averaged amplitudes. I'll focus on subject 17 and see what happens. Thanks! :-) Lin


Arjen Stolk - 2015-10-21 16:42:29 +0200

(In reply to Lin Wang from comment #11) Brings me back to asking how the source level results are computed. It wouldn't be the first time I see LCMV results (signal variance) giving a totally different picture than the strongly event-locked ERPs. Or is this an MNE projection of the ERP?


Lin Wang - 2015-10-21 17:07:56 +0200

(In reply to Arjen Stolk from comment #12) These are the LCMV results of the ERF data. Do you think I can trust the source localization of oscillatory activities using DICS?


Arjen Stolk - 2015-10-21 18:01:15 +0200

(In reply to Lin Wang from comment #13) Let's deal with the lateralization issue first. :) What did you feed, in terms of data, to the LCMV algorithm? Did you use a certain part of the data (e.g. a latency window within each trial) to create the data covariance matrix that underlies the spatial filter? What I'm after is to know whether you may have constructed an analysis pipeline that is more sensitive to loosely timelocked/induced neural activity, as compared to the strict timelocking sensitivity of the standard ERP analysis.


Lin Wang - 2015-10-21 19:39:28 +0200

(In reply to Arjen Stolk from comment #14) I fed the single-trial data to the LCMV algorithm. The time latency for calculating the common filter is relatively short (0.3-0.6 s). Jim suggested that I use a longer time window (e.g., -1 to 1 s). I'll try it and see whether this improves the results. Lin
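To make the covariance-window discussion concrete, a hedged sketch of the common-filter step; the data variable names are placeholders and the actual cfg used in this analysis is not part of the report:

  cfg                  = [];
  cfg.covariance       = 'yes';
  cfg.covariancewindow = [0.3 0.6];         % the short window; the longer alternative would be e.g. [-1 1]
  tlck_all             = ft_timelockanalysis(cfg, data_allconditions);

  cfg                  = [];
  cfg.method           = 'lcmv';
  cfg.grid             = subject_grid;      % cfg.sourcemodel in recent FieldTrip versions
  cfg.headmodel        = headmodel;
  cfg.lcmv.keepfilter  = 'yes';             % keep the common spatial filter for later reuse
  source_common        = ft_sourceanalysis(cfg, tlck_all);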


Arjen Stolk - 2015-10-21 20:13:41 +0200

(In reply to Lin Wang from comment #15) OK, I'm not saying this is the explanation for your sensor vs. source level mismatch, but there is a discrepancy in terms of what aspects of the data these analyses focus on (as explained above, a potential difference in timelockedness to the event), and that is worth keeping in mind when interpreting the data. One way to check is to replicate your source-level estimate of neural activity at the sensor level, for instance by computing the signal variance in the latency window you mentioned, followed by averaging over trials.
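A minimal sketch of this sensor-level check, assuming single-trial raw data is at hand (data and window names are placeholders):

  cfg         = [];
  cfg.latency = [0.3 0.6];                  % the window used for the source analysis
  datasel     = ft_selectdata(cfg, data_singletrials);
  ntrial      = numel(datasel.trial);
  nchan       = numel(datasel.label);
  trialvar    = zeros(ntrial, nchan);
  for k = 1:ntrial
    trialvar(k, :) = var(datasel.trial{k}, 0, 2)';   % per-channel variance over time within this trial
  end
  meanvar = mean(trialvar, 1);   % average over trials: one value per channel, comparable to the LCMV picture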


Lin Wang - 2015-10-26 14:53:57 +0100

(In reply to Arjen Stolk from comment #16) Hi Arjen, sorry for my late response. I think your suggestions make great sense. I've now tried applying the common filter to the covariance of the averaged trials (i.e., first average the trials and then calculate the covariance for the two conditions). This gives me results in the bilateral temporal lobes as well as the left frontal lobe, which is relatively in line with the sensor level result. I think calculating the covariance from the averaged trials is more sensitive to the strictly timelocked ERF effect, while calculating the average covariance over the single trials is more sensitive to loosely timelocked/induced neural activity. I also used the same leadfield to localize oscillatory activities and there is no left-right flipping at the source level. Therefore, I think there is nothing wrong with the construction of the source model and leadfield; I only need to be more careful with the data that are used for the source localization. Thanks again for all your help. :-) Lin
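A hedged sketch of this averaging-first approach, reusing the common filter from the earlier step; all variable names are placeholders and this is one way to set it up, not necessarily the exact code used:

  avgA = ft_timelockanalysis([], dataA);      % trial average (ERF), condition A
  avgB = ft_timelockanalysis([], dataB);      % trial average (ERF), condition B

  cfg                  = [];
  cfg.covariance       = 'yes';
  cfg.covariancewindow = [0.3 0.6];
  avgA = ft_timelockanalysis(cfg, avgA);      % covariance of the averaged waveform, not of the single trials
  avgB = ft_timelockanalysis(cfg, avgB);

  cfg             = [];
  cfg.method      = 'lcmv';
  cfg.grid        = subject_grid;             % cfg.sourcemodel in recent FieldTrip versions
  cfg.grid.filter = source_common.avg.filter; % apply the previously computed common filter
  cfg.headmodel   = headmodel;
  sourceA = ft_sourceanalysis(cfg, avgA);
  sourceB = ft_sourceanalysis(cfg, avgB);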


Robert Oostenveld - 2015-11-10 16:04:28 +0100

This has been resolved; there was no swap in the code.


Robert Oostenveld - 2016-06-14 16:14:55 +0200

Hereby I am closing multiple bugs that have been resolved for some time now. If you don't agree with the resolution, please reopen.