Bug 2300 - make side-by-side documentation comparing FT to BrainStorm

Status NEW
Reported 2013-09-25 10:22:00 +0200
Modified 2014-08-22 09:07:51 +0200
Product: FieldTrip
Component: documentation
Version: unspecified
Hardware: PC
Operating System: Mac OS
Importance: P3 normal
Assigned to:
URL:
Tags:
Depends on:
Blocks:
See also:

Robert Oostenveld - 2013-09-25 10:22:06 +0200

This is something I discussed per email with Francois. See below the suggested course of action.

On 24 Sep 2013, at 17:00, François Tadel wrote:

For the side-by-side comparison of the two analysis pipelines, this is easy to do. I would suggest we use a simple stimulus-based experiment with obvious evoked results (visual, auditory or somatosensory). I think the goal is not to show that one gives better results than the other, but to illustrate how to perform equivalent operations with the two programs. It could stay quite basic, to be used as an entry point, and feature the following operations:
- importing the subject anatomy (FreeSurfer?)
- MEG/MRI registration
- reviewing continuous recordings
- 60 Hz removal
- band-pass filtering
- artifact removal (blinks and heartbeats)
- epoching
- DC removal or detrending of the epochs
- averaging
- source modeling
- definition of regions of interest / activity of those ROIs over time
- time-frequency analysis of the activity of those ROIs (trial by trial)
- projection of individual sources on a template for group analysis

Any other suggestions? Do you have some nice data we could distribute freely for this purpose? The only high-quality dataset I could offer now is the median nerve experiment on which I have based most of the recent Brainstorm tutorials, but I would be glad to use something else.
http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawViewer
http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp
http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg
http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawScript
We could have two wiki pages with the same structure, one on each website, illustrating the same pipeline in the two environments.
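
For orientation, a minimal FieldTrip sketch of the core of the pipeline listed above could look as follows (the dataset name and the trigger type/value are placeholders, not a commitment to the final tutorial data):

  % define trials around the stimulus triggers
  cfg = [];
  cfg.dataset             = 'Subject01.ds';
  cfg.trialdef.eventtype  = 'backpanel trigger';
  cfg.trialdef.eventvalue = 3;     % placeholder condition code
  cfg.trialdef.prestim    = 0.5;   % seconds before the trigger
  cfg.trialdef.poststim   = 1.0;   % seconds after the trigger
  cfg = ft_definetrial(cfg);

  % read the epochs with band-pass filtering, line-noise and DC removal
  cfg.channel   = 'MEG';
  cfg.demean    = 'yes';           % DC removal per epoch
  cfg.bpfilter  = 'yes';
  cfg.bpfreq    = [1 40];          % band-pass filter
  cfg.dftfilter = 'yes';           % sinusoid subtraction of line noise
  cfg.dftfreq   = [60 120 180];    % 60 Hz and harmonics
  data = ft_preprocessing(cfg);

  % averaging over trials
  cfg = [];
  avg = ft_timelockanalysis(cfg, data);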


Robert Oostenveld - 2013-09-25 15:57:26 +0200

On 25 Sep 2013, at 15:27, Francois Jean Tadel, Mr wrote:

Hi Robert, Do you have any simple dataset we could use and distribute freely (auditory or visual evoked responses)? If we use the median nerve dataset from the Brainstorm website, the Brainstorm part is already almost done. But even if it is more work for me, I would prefer another experiment:
1) to introduce a different type of recordings on the Brainstorm website,
2) to better validate the software (some of the functions were tested mainly with this median nerve dataset),
3) to write something shorter than the four basic Brainstorm tutorials (less detail, no explanations about the interface).

Are you suggesting that we use the bug tracker as a forum, to exchange messages about a specific topic between a list of subscribed users? Should I have posted this message on the bug tracker?


François Tadel - 2013-09-26 00:23:39 +0200

Suggestions for the tutorial datasets:
1) Brainstorm tutorial: median nerve stimulation, CTF 275
2) FieldTrip tutorial: Subject01.ds, auditory responses, CTF 151
3) We'll try to find other options in what we've acquired at the MNI in the past 2 years. Or we'll discuss the possibility of recording an auditory session of "beep beep beep beep beep beep buuuuuup beep beep beep..." or something similar, specially for this tutorial.

About #2:
- Do you have the continuous (_AUX.ds) files for those recordings? Subject01.ds contains data that is already epoched, with trials that are not contiguous in time, but from which the 50 Hz contamination has not been removed. It would be better to have a continuous file to illustrate the best possible way to apply frequency filters and sinusoid subtraction functions (filtering first, then epoching). In the end it would make no difference in the tutorial, but it's always good to illustrate the best possible practice.
- Do you have a Polhemus file that goes with it? I could not find any additional 3D points collected at the same time as the coil positions. I think it would be nice to illustrate the alignment process using a fully digitized head shape, as it gives much more precise results than just the CTF coil positions. Example: http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawViewer#Access_the_raw_file
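
The filter-then-epoch order recommended here would look roughly like this on the FieldTrip side (a sketch assuming a continuous CTF recording; the file name and trigger code are placeholders):

  % read the whole recording as one continuous segment and remove the
  % line noise on the continuous data
  cfg = [];
  cfg.dataset    = 'Subject01_AUX.ds';
  cfg.continuous = 'yes';
  cfg.channel    = 'MEG';
  cfg.dftfilter  = 'yes';          % sinusoid subtraction
  cfg.dftfreq    = [50 100 150];   % 50 Hz line noise and harmonics
  datacont = ft_preprocessing(cfg);

  % only afterwards cut the filtered data into trials
  cfg = [];
  cfg.dataset             = 'Subject01_AUX.ds';
  cfg.trialdef.eventtype  = 'trigger';
  cfg.trialdef.eventvalue = 1;     % placeholder condition code
  cfg.trialdef.prestim    = 0.5;
  cfg.trialdef.poststim   = 1.0;
  cfg  = ft_definetrial(cfg);
  data = ft_redefinetrial(cfg, datacont);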


Robert Oostenveld - 2013-09-26 10:18:47 +0200

(In reply to comment #2) I don't think we have an AUX recording of this data, and if we do, I would have no idea where to find it. It was recorded ~10 years ago, using experimental procedures that have since been abandoned. The reason for sticking to this particular dataset for the tutorial is that it is a relatively small file (151 channels, 300 Hz sampling, epoched trials only), which makes it easier to process on random computers (like laptops) at workshops.

Regarding coregistration: back then, and still today, we do not make Polhemus head shapes by default. We use precision ear molds with small vitamin-E capsules. Occasionally we do track the head shape with the Polhemus, but that is rather rare (e.g. for kids who, according to the specific ethical approval, are not allowed to go in the MRI scanner). See
http://fieldtrip.fcdonders.nl/faq/how_are_the_lpa_and_rpa_points_defined and
http://fieldtrip.fcdonders.nl/faq/how_can_i_convert_an_anatomical_mri_from_dicom_into_ctf_format
FieldTrip can use Polhemus head shapes for coregistration, but we don't have a tutorial on that; we might not even have any documentation on it on the wiki.

The Subject01.ds dataset is clearly suboptimal, as it deviates from the best-practice dataset and it has already been processed too much (e.g. the coregistered MRI is already available in CTF format). I think that one aspect of the comparison could be how to deal with deviations from "best practices": you have your best practices, I have mine, but an end-user might have yet another set of best practices according to his/her specific lab conventions. How about a dataset from SPM, MNE, Nutmeg or another external source?
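
Since Polhemus-based coregistration in FieldTrip indeed has no tutorial, here is a minimal sketch of how it can be done (file names are placeholders):

  % read the anatomical MRI and the digitized head shape
  mri   = ft_read_mri('subject01.mri');        % placeholder file name
  shape = ft_read_headshape('subject01.pos');  % Polhemus points

  % approximate interactive alignment to the CTF coordinate system
  cfg = [];
  cfg.method   = 'interactive';
  cfg.coordsys = 'ctf';
  mri_aligned = ft_volumerealign(cfg, mri);

  % refine by fitting the scalp surface to the digitized points
  cfg = [];
  cfg.method    = 'headshape';
  cfg.headshape = shape;
  mri_aligned = ft_volumerealign(cfg, mri_aligned);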


Robert Oostenveld - 2013-09-26 10:52:41 +0200

I asked around; Arjen, one of my coworkers, responded:
> For the head movement compensation study, I recorded 4 sessions x 3 oddball datasets (visual, auditory, and tactile stimulation) in 2 subjects. Pretty good data.

See http://www.sciencedirect.com/science/article/pii/S1053811912011597 and click on "materials" in the navigation bar on the left for the relevant details. The 2x2 design (for the movements) is not relevant, but the interesting aspect is that (by design) the data contains a wide range of sensory activity (evoked and induced) in three modalities. In that published study we did not analyze the oddballs; they were included to keep the subjects naive to the purpose of the experiment and to make it not too boring. So besides looking at (evoked versus induced) * (auditory, somatosensory, visual) activity, we could even extend the analysis into the oddball contrast. The two subjects are still around and likely to formally agree with releasing their data for tutorial purposes. We could also get them back for a Polhemus head surface scan (suboptimal as a replacement for the fiducials, of course, but it would not be too far off, as the ear molds do not allow for too much movement).


Jörn M. Horschig - 2013-09-26 10:58:21 +0200

Ana Todorovic (a roommate of mine) has been doing auditory MMN studies and has primarily looked into ERFs. As far as I remember, the paradigm she used is something along the lines that a high tone is usually followed by a low tone, but a low tone is usually followed by no tone (it's at least roughly like that). I'm not sure if the dataset is neutral, because analyses have already been conducted on it using FieldTrip. If you want to go with visual/covert attention, I could offer our covert-attention BCI data (4 sessions per subject, 20 trials per side/session of 10 seconds each) or the corresponding training data (Posner cueing task, 4 sessions per subject, 48 trials per side/session). I did not record ECG or EOG, though (Ana did). About SPM: well, at least I followed the SPM MEEG course a few years back, but I don't remember any details anymore...


François Tadel - 2013-09-27 01:06:21 +0200

After discussing with the MEG group at McGill, we thought it would be a good occasion to acquire one simple auditory task on a very good subject for training purposes. We briefly discussed the specifications of the experiment. It could be something like this:
- 1 acquisition run = 200 regular beeps + 40 easy deviant beeps
- Random inter-stimulus interval: between 0.7 s and 1.7 s, uniformly distributed
- The subject presses a button when detecting a deviant
- We would record three runs (each ~5 min long), asking the subject to move a bit between the runs (not too much), in case we later want to test some sensor-level co-registration algorithms
- Auditory stimuli generated with the Matlab Psychophysics Toolbox
- Only one subject, with a good 1.5T MRI
- Acquisition at 2400 Hz, with a CTF 275 system at the MNI, subject in seated position
- Online 300 Hz low-pass filter, files saved with the 3rd-order gradient compensation applied
- Saving both the continuous _AUX.ds files and the epoched .ds files
- Additional channels:
  * some EEG electrodes
  * horizontal EOG
  * vertical EOG
  * ECG
  * stim channel (audio trigger)
  * response channel (button press)
  * audio channel (recording of what is actually sent to the subject)
- Number and positions of EEG electrodes to be discussed with audition specialists
- 3D digitization using a Polhemus Fastrak device driven by Brainstorm (http://neuroimage.usc.edu/brainstorm/Tutorials/TutDigitize)
- The output file is copied to each .ds folder and contains the following entries:
  * the position of the center of the CTF coils
  * the position of the anatomical references we use in Brainstorm (nasion and the tragus/helix connections - the red point I placed on that ear image we have on both websites)
  * around 150 head points distributed on the hard parts of the head (no soft tissues)

We will record this around mid-October. Please let us know by October 7th if you would like to make any adjustments to this experiment, or if you have other suggestions.
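
With triggers like those specified above, separating standards and deviants in FieldTrip might look like this (the file name and the trigger codes 1/2 are hypothetical; the actual codes depend on the stimulation script):

  % epoch standards and deviants separately; trigger codes 1 (standard)
  % and 2 (deviant) are hypothetical
  cfg = [];
  cfg.dataset             = 'S01_AEF_run01.ds';
  cfg.trialdef.eventtype  = 'trigger';
  cfg.trialdef.prestim    = 0.1;
  cfg.trialdef.poststim   = 0.5;

  cfg.trialdef.eventvalue = 1;              % standard beeps
  cfg_std  = ft_definetrial(cfg);
  standard = ft_preprocessing(cfg_std);

  cfg.trialdef.eventvalue = 2;              % deviant beeps
  cfg_dev = ft_definetrial(cfg);
  deviant = ft_preprocessing(cfg_dev);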


Robert Oostenveld - 2013-09-27 09:27:58 +0200

(In reply to comment #6) I would like to have MRI-visible markers placed on the MEG fiducials for the MRI scan, and an additional marker somewhere near the right ear. The marker on the right is to rule out left/right swaps in whatever MRI format conversions people do, and to rule out left/right confusion when visualizing in radiological or neurological conventions. The markers on the fiducial locations correspond to how we do the coregistration by default (i.e. without the head surface).

We don't do AUX recordings in Nijmegen (*), but record in pseudo-continuous mode by specifying in the acquisition software trials of 10 seconds that are not aligned to any triggers. Would that work for you, or would you want the data to be epoched, locked to stimuli and with gaps in between?

*) This is nowadays, not in the good old days of Subject01.ds.
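
For reference, FieldTrip reads such a pseudo-continuous CTF recording as if it were truly continuous; a minimal sketch (the file name is a placeholder):

  % stitch the trigger-unaligned 10 s pseudo-trials back together and
  % treat the recording as continuous
  cfg = [];
  cfg.dataset    = 'subject01_continuous.ds';  % placeholder file name
  cfg.continuous = 'yes';
  data = ft_preprocessing(cfg);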


Beth Bock - 2013-10-22 17:17:41 +0200

Hi Robert, Francois and I are working on the details of a shared tutorial set. I understand you would like to have an MRI marker on the T1 for coregistration. I had a look at the tutorial on how you use the ear molds, and I am planning to try out the same method. Just to confirm:
- The subject wears the mold during the MEG, with the HLC coil on the audio tube, against the mold.
- The subject wears the mold during the MRI, with a marker in the tube instead.
I do not have those fancy markers you use, so I will use a tube in the mold filled with an agar solution instead of vitamin E. I can fill a tube with the solution and place it into the tube opening of the mold. Thoughts? Beth


Robert Oostenveld - 2013-10-22 20:50:31 +0200

(In reply to comment #8) Hi Beth, You are right about the procedure for the important parts, except that in the MEG there is an audio tube around which the localizer coil goes, whereas in the MR there is no tube and the MR marker takes the place of the tube. Your proposed solution with the agar in the tube sounds fine to me. Could you also add an extra marker (like a fatty vitamin pill) on the right side? It isn't needed per se, but it makes the scan more compatible with our standard procedure. We can, and sometimes do, use the Polhemus head shape, so the procedure is not so much a FieldTrip thing (as FT can do either one) but a DCCN thing. best Robert


Beth Bock - 2013-11-19 15:49:25 +0100

Hi Robert, I have tried to create an MRI marker with the agar, as discussed, but it does not show well on the 1.5T scan. I am wondering if it will be possible to do this tutorial with only the polhemus points for the HPI coils and anatomical landmarks? Does that work well for FT users? Beth


Robert Oostenveld - 2013-11-19 15:56:59 +0100

(In reply to comment #10) I guess that the volume of agar is too small. FieldTrip can work fine without the markers and do the coregistration based on the head shape surface points, so please go ahead without the MEG fiducial visible in the MRI. Having something in the scanner to distinguish right from left remains preferable to me (that can be a large vitamin E capsule, or fish-oil, or some other fatty substance in a capsule).


Beth Bock - 2013-11-19 15:59:35 +0100

OK. We will use the head points only, and I will place a vitamin-E capsule on the right side near the ear when we do the MRI scan. Beth


Beth Bock - 2013-12-17 18:36:04 +0100

Hi Robert and Francois, I am planning to do this tutorial recording tomorrow. Robert, you have suggested we use a pseudo-continuous mode; we also do this for recording resting state, for instance. Actually, this is what the AUX file records. Do you think we should have both the epoched .ds file and the continuous AUX file, or is just a continuous file enough? Beth


Robert Oostenveld - 2013-12-18 12:17:21 +0100

(In reply to Beth Bock from comment #13) Just the continuous file would be enough for me, but it would not hurt to have both. Actually, it would be good if http://fieldtrip.fcdonders.nl/getting_started/ctf had a section on the relevance of the various choices that can be made here with the CTF system. Since we have a certain DCCN procedure that "just works", I had forgotten about these confusing aspects of the recording modes (and still don't have a sharp picture of them).


François Tadel - 2014-03-10 19:17:57 +0100

Dear all, We have finally put this auditory experiment online. You can get the dataset from the download page of the Brainstorm website (sample_auditory). Let us know if it works for you or if you would need some adaptation in the way it is packaged. I have started drafting the structure of the Brainstorm analysis tutorial. It will be very similar to other tutorials that are already online, but with less detail. http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory


François Tadel - 2014-08-21 19:12:24 +0200

Hello, The first part of the auditory tutorial on the Brainstorm website is now finished: importing anatomy and recordings, artifact cleaning, epoching, averaging, source modeling, regions of interest. http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory Do you agree with what has been written so far? It would be nice to have some input from the FieldTrip perspective before we move on to the next topics: time-frequency, connectivity, phase-amplitude coupling, etc. Cheers, Francois


Robert Oostenveld - 2014-08-22 09:07:51 +0200

(In reply to François Tadel from comment #16) Hi Francois, Great! Let's discuss this on Sunday or anytime next week in Halifax. best Robert