Brainstorm Walkthrough
Stephen Whitmarsh, Grey Box Research Innovations
By request of the University of Amsterdam

Background
The Neurnav system recorded the 3D locations of the EEG sensors and the fiducial points
These will all be combined to permit localization of EEG activity on the MRI and 3D surfaces
EEG recordings were made with a custom 32+16 channel cap
From a high-resolution MRI scan, 3D tessellated surfaces were created
We are now ready to import the EEG dataset.
We’ll start with aligning the 3D surfaces
Note: You can rotate the surfaces by opening the Camera Toolbar in the Envelope Viz (View > Camera Toolbar) and selecting the leftmost option (orbit camera). Click somewhere on the Envelope Viz and drag the mouse to rotate the camera angle.
We will put the MRI scan in the same coordinate system by assigning the same fiducials.
Finally, the electrode positions will be referenced in the CTF coordinate system.
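To see conceptually what this referencing involves, here is a small Python sketch of the standard CTF head-coordinate convention (origin midway between the auricular points, x-axis through the nasion, y-axis toward the left ear, z-axis up). This is an illustration of the convention, not Brainstorm's own MATLAB code; the function name is mine.

```python
import numpy as np

def ctf_transform(nas, lpa, rpa):
    """Build a rigid transform into CTF head coordinates.

    nas, lpa, rpa: 3-vectors for the nasion and the left/right
    auricular fiducials. Returns (R, origin) such that a point p
    is expressed in CTF coordinates as R @ (p - origin).
    """
    nas, lpa, rpa = map(np.asarray, (nas, lpa, rpa))
    origin = (lpa + rpa) / 2.0          # origin midway between the ears
    x = nas - origin                    # x-axis points through the nasion
    x = x / np.linalg.norm(x)
    y = lpa - origin                    # y-axis toward the left ear,
    y = y - x * (x @ y)                 # orthogonalized against x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                  # z-axis points up
    R = np.vstack([x, y, z])
    return R, origin
```

With the three fiducials digitized by the Neurnav system, the same transform can be applied to every electrode position.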
Note that most electrodes will still not lie on the scalp surface, since the shape of the electrode cap differs from the shape of the Phantom surface. For that reason we will now 'warp' the Phantom surfaces, together with the accompanying MRI scan, to the electrode positions.
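Brainstorm's warp is a proper surface deformation driven by the digitized electrode positions. As a much cruder stand-in that only conveys the idea, the sketch below rescales the template scalp so its overall radius matches the electrodes; the function name and the uniform-scaling approach are my own simplification, not Brainstorm's algorithm.

```python
import numpy as np

def radial_scale_to_electrodes(scalp_vertices, electrode_positions, center):
    """Uniformly scale the template scalp about `center` so its mean
    radius matches the mean distance of the electrodes from `center`.
    A crude stand-in for a real template-to-subject warp."""
    v = np.asarray(scalp_vertices, float) - center
    e = np.asarray(electrode_positions, float) - center
    scale = np.linalg.norm(e, axis=1).mean() / np.linalg.norm(v, axis=1).mean()
    return center + v * scale
```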
We now have to check whether we succeeded in aligning the 3D surface with the MRI image.
TIP: since we will be interested in the posterior part of the brain, at least that part of the alignment should be rather accurate. If the yellow boundaries appear too high with respect to the MRI, the MRI fiducials (left and right auricular points) can be moved downwards. You can probably leave the nasion in place. See the next slide for an example (I worked hard on that one; for now it doesn't have to be perfect).
Now a simple but visually appealing presentation of the data can be made.
We will start with the 3 Spheres Headmodel
A way to make this spherical head model more realistic is to first warp the MRI scan and 3D surfaces to a sphere. Since MEG sensors are positioned on a sphere anyway, this approach is especially suitable for MEG.
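The spherical head model rests on fitting a sphere to the head shape. As a hedged illustration (not Brainstorm's code), here is the classic algebraic least-squares sphere fit, which turns the fit into a linear system:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Rewrites |p - c|^2 = r^2 as 2 p.c + (r^2 - |c|^2) = |p|^2,
    which is linear in the unknowns (c, r^2 - |c|^2).
    Returns (center, radius)."""
    p = np.asarray(points, float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Applied to the scalp vertices (or the sensor positions), this gives the center and radius of the best-fitting sphere.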
Brainstorm’s source localization method is called RAP-MUSIC. MUSIC stands for MUltiple SIgnal Classification. It can be applied to the whole time range, yielding fixed dipole locations, or to separate time epochs, for each of which a different solution can be found. Because of the large number of sources that could be located (up to the number of sensors), Brainstorm uses the Recursively Applied and Projected (RAP) approach: after each successive source is found, its contribution to the signal is ‘projected away’, leaving the next source to explain the remaining signal. Moreover, one can specify the number of components with which the search starts, based on a manual inspection of how many components are present in the data.
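The recursion described above can be sketched in a few lines of Python. This is a minimal fixed-orientation illustration of the RAP-MUSIC idea (signal subspace from an SVD, subspace correlation scan, then projecting away each found topography), not Brainstorm's implementation; all names are mine.

```python
import numpy as np

def rap_music(X, L, n_sources):
    """Minimal RAP-MUSIC sketch for fixed-orientation dipoles.

    X: (n_sensors, n_times) data; L: (n_sensors, n_dipoles) lead
    fields; n_sources: number of components to search for.
    Returns the indices of the located dipoles, one per recursion.
    """
    found = []
    # Signal subspace: leading left singular vectors of the data
    Us = np.linalg.svd(X, full_matrices=False)[0][:, :n_sources]
    P = np.eye(X.shape[0])                 # projector away from found sources
    for it in range(n_sources):
        dim = n_sources - it               # remaining subspace dimension
        PUs = np.linalg.svd(P @ Us, full_matrices=False)[0][:, :dim]
        PL = P @ L
        # Subspace correlation of each projected lead field
        num = np.linalg.norm(PUs.T @ PL, axis=0)
        den = np.linalg.norm(PL, axis=0)
        den = np.where(den < 1e-9, np.inf, den)  # skip annihilated dipoles
        found.append(int(np.argmax(num / den)))
        # 'Project away' the topographies found so far
        A = L[:, found]
        P = np.eye(X.shape[0]) - A @ np.linalg.pinv(A)
    return found
```

On noiseless synthetic data generated from two dipoles, the scan recovers exactly those two, each successive source explaining what the previous projection left behind.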
How would you interpret these results?
Next up is the BEM head model. We will use it to look at cortical sources of recurrent processing from a masking study with which you will be familiar by now.
The aligned surfaces for the BEM computation that will show up automatically should look something like this:
Try to formulate what the difference wave would look like.
We will now import the data from the masking experiment in the same way as we did before.
We are now able to extract the mean signal from the dipoles.
We will include several vertices (each containing a dipole).
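Averaging over a set of vertices is what Brainstorm does when you extract the signal of a region; as a hedged sketch (plain averaging, ignoring the sign-flipping Brainstorm may apply for opposing dipole orientations):

```python
import numpy as np

def region_mean(source_timecourses, vertex_indices):
    """Mean time course over a set of vertices, each carrying one dipole.

    source_timecourses: (n_vertices, n_times) dipole time series;
    vertex_indices: the selected vertices.
    """
    return np.asarray(source_timecourses)[vertex_indices].mean(axis=0)
```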