
Session One: Preparations & Pre-processing (GUI)

Installing toolboxes

1. Moving files

  • Unzip the contents of the downloaded zip file.
  • Toolboxes (and their plug-ins) can go anywhere on your drive, but it may make more sense to put them in the MATLAB folder (usually located in User/Documents) and to create a “toolboxes” folder where all your toolboxes can live (that is just some OCD housekeeping, however; MATLAB doesn't care).
  • Move the EEGLAB folder into the MATLAB folder (or the folder of your choice).
    • Open the EEGLAB plug-in folder (EEGLAB>plugins) and place the ADJUST & ERPLAB folders from the unzipped file in there.
  • Move the MassUnivariate folder into the MATLAB folder (or the folder of your choice).

2. Updating MATLAB path

  • Moving the files into the MATLAB folder isn't enough for the application. The MATLAB path must be updated so that you can type eeglab and directly load the toolbox without having to specify the current folder every time.
    • In MATLAB, click “Set Path”.
    • Click “Add Folder” (do not use “Add with Subfolders”, as it may create duplicates in your path).
    • Locate the EEGLAB folder (User/Documents/MATLAB/toolboxes, on Mac OS) and hit OK. This adds the toolbox as well as the plug-ins we placed in its plugins folder earlier.
    • Repeat the process for the Mass Univariate Toolbox.
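The same path setup can be done from the command window instead of the Set Path dialog. This is a minimal sketch; the folder names below are assumptions based on the layout suggested above, so adjust them to wherever you unzipped the toolboxes.

```matlab
% Add EEGLAB and the Mass Univariate Toolbox to the MATLAB path.
% EEGLAB adds its own subfolders when it launches, so only its root is needed.
addpath('~/Documents/MATLAB/toolboxes/eeglab');             % EEGLAB root (example path)
addpath(genpath('~/Documents/MATLAB/toolboxes/mass_uni'));  % Mass Univariate Toolbox (example path)
savepath;  % persist the path across MATLAB sessions
```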

3. Testing the installation

  • Simply typing eeglab in the command window should launch EEGLAB. If it loads, you're fine! If it doesn't, hit your head against the desk…
  • The Mass Univariate Toolbox doesn't have a GUI that you can load without data.
    • To test it, type help gui_erp. If a help file loads and no error message appears, you're fine! If it doesn't, hit your head against the desk… again!
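Both checks fit on two lines in the command window:

```matlab
% Sanity check: both lines should complete without errors.
eeglab;        % should open the EEGLAB main window
help gui_erp   % should print the Mass Univariate Toolbox help text
```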

Getting to know EEGLAB: GUI

1. Importing NeuroScan data

Loading datasets from other applications could not be easier in EEGLAB. There are tons of pre-installed plug-ins that import datasets from most formats.
(File formats recap page from EEGLAB developers.)

The tutorial dataset was recorded in NeuroScan so we will use that particular plugin. Fear not! If you use another format, the GUI has plug-ins for other systems and will guide you through.

  1. To import a dataset, simply click File>Import data>Use EEGLAB functions>From Neuroscan CNT file.
  2. Locate the *.cnt file and hit OK. MATLAB will load the file and display information about the process in the command window.
  3. Loaded datasets appear in the EEGLAB main window, which also displays information about the dataset that is currently active (number of channels, epochs, ICA weights, etc.).

  • By the way, you have just met EEGLAB's face! Ugly bugger, I know (which is why, later on, we'll try to deal with the ugly face as little as possible).
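The same import can be done from a script, which is where we are headed later in the workshop. This is a sketch, not the definitive recipe; the filename is a placeholder for your own recording.

```matlab
% Script equivalent of File > Import data > ... > From Neuroscan CNT file.
[ALLEEG, EEG, CURRENTSET] = eeglab;                        % start EEGLAB, get empty structures
EEG = pop_loadcnt('subject01.cnt', 'dataformat', 'auto');  % import the NeuroScan file (placeholder name)
EEG.setname = 'subject01_raw';                             % give the dataset a name
[ALLEEG, EEG, CURRENTSET] = eeg_store(ALLEEG, EEG);        % register it with the GUI
eeglab redraw;                                             % refresh the main window
```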

2. Pre-processing pipeline & steps

Pipe-what?

Any analysis of your data will involve a fair amount of repetitive steps (for every subject, every condition, etc.). We also end up repeating these steps over the same experiment with a few changes (maybe our data is too noisy and we want to use a more powerful setting, sometimes it may be that we want to have different artefact rejection criteria, or we might just decide to group certain trials/conditions together although they have different triggers).

The ultimate goal is to be able to batch those (eternally boring) steps, as they do not require any human supervision. In addition, MATLAB isn't the quickest kid on the block. EEG datasets, unlike spreadsheets, are starting to be on the heavy side, and all this means that processing time and efficiency are probably things we need to keep in mind. Because computers are dumb, we have to do a fair amount of thinking for them and make decisions early on (i.e. before pre-processing) that can make our lives easier by reducing the time we waste waiting on our dumb computers.

We can do this in two ways:

  1. Write scripts
  2. Think of an efficient sequence, reduce file size, avoid having to repeat computing-heavy steps & write scripts.
HACK #1



Knowing that we have to do all of the following steps before we can do pretty much anything meaningful on our data, and knowing the computing time the steps take, can you think of the most efficient sequence of pre-processing?

Steps (in no particular order):

  • filtering [KINDA SLOW]
  • loading/applying electrodes positions (useful for plotting for example) [FAST]
  • re-referencing [FAST]
  • dealing with artefacts (ICA) [REALLY ENORMOUSLY ENDLESSLY TEAR-GENERATING SLOW]
  • epoching & baseline correction (cutting data into analysable chunks) [FROM FAST TO SLOW]

“Ideal” pipeline order

  1. Electrode positions: a super fast step which could be done at any point, but it is quite essential from the beginning, for ICA decomposition and for any plotting, so why not get it out of the way first.
  2. Re-referencing: same as above, super fast and could in theory be done at any point. Since referencing is a linear operation, it will not impact ICA decomposition (and can be done before or after it) or any step before averaging but, again, it is quite essential to even look at the data, so let's also put that in the bag first!
  3. !Epoching! (or rather pre-epoching): here we will not necessarily epoch directly in preparation for averaging, but mainly to reduce file size. Indeed, from a 30-minute recording comprising 500 trials of about 1 second each, we are left with less than 9 minutes of data: that is a 10:3 reduction (i.e. 70% of the data is actually useless). That is a great advantage, because it will take 70% less time to filter your data, with about the same reduction for your ICA decomposition!
  4. Filtering: whether this step should happen before or after ICA decomposition (or even half before, half after) is a matter of intense debate in the methodology literature. Some say ICA should always be done on clean data, some say it should be done on unfiltered data because filters create artefacts that could impact the ICA, and some say that high-pass filtering should be done before ICA and low-pass after. Ultimately, it probably doesn't matter so much, and you can play around or read the papers and choose the option you think is best.
  5. ICA decomposition: this could be done after filtering or before, but probably not much earlier, since ICA takes ages to run (approx. 30 minutes per 9-minute dataset on a quad-core 2.66 GHz iMac with 4 GB RAM).

"Ideal" pipeline & GUI Steps

1. Electrode positions

The dataset that we load only contains information about the recorded activity, the channels it was recorded from, and events (triggers, answer codes, etc.). In order to plot, as well as for any step that requires information about topographies, it is important to add information about the locations of the electrodes on the scalp.

You can load your own electrode positions (i.e. the manufacturer's), but EEGLAB also has a few handy files from BESA with prototypical electrode positions, which match your electrode labels to XYZ co-ordinates.

To load electrode positions into a dataset, go into the channel location menu (Edit>Channel Locations). In the window that appears, select “Use BESA file for 4-shell dipfit spherical model”. The channel location window will pop up and all fields should be populated with information from the BESA file. That's it! We now have pretty locations loaded into that dataset, and we can even plot the locations of each electrode for fun if we want to. For this we can hit plot 2D or even plot 3D and we should get sexy stuff like that:
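Scripted, the BESA look-up is a single call. The .elp path below is the standard file shipped with the dipfit plug-in; the exact relative path may vary with your EEGLAB version, so treat it as an assumption.

```matlab
% Script equivalent of Edit > Channel locations with the BESA look-up.
EEG = pop_chanedit(EEG, 'lookup', 'standard-BESA/standard-10-5-cap385.elp');

% Plot the electrode layout in 2-D, labels included (the "plot 2D" button).
figure; topoplot([], EEG.chanlocs, 'style', 'blank', 'electrodes', 'labelpoint');
```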

2. Re-Referencing

While we're in the channel business, now is a good time to do a few more channel operations, such as re-referencing. Since we recorded from a single reference electrode, we need to re-reference our entire dataset to more than one electrode to avoid biases such as topographical shifts and other spurious effects. This operation is super quick, so we could leave it for later, but since we should NOT look at non-re-referenced data, why not get this step out of the way from the beginning.

While the operations behind re-referencing are fairly simple to understand, EEGLAB is for some reason a bit strange when it comes to this and requires a lot of rather repetitive steps. It took me a few hours to get it to work the first time, reading it from the EEGLAB re-reference help page.

Perhaps the reason this is less intuitive than we expect is that we recorded the data (at the time of acquisition) in reference to an electrode that is on the scalp, and therefore quite precious to us, and we will need to put it back into the data. That involves a few strange steps in EEGLAB, so let's just put our seat-belts on and get through this.

  • “Physically” add the channel to the data (i.e. create a place holder for the reconstructed reference in the dataset's structure).
  • Open channel locations (Edit>Channel Locations).
  • Jump through to the last channel (here 66) using the » arrow.
  • Once you're at the end (it is important to be at the end of the numbers), hit Append chan. This adds a new channel (here #67).
  • In the Channel label field, write the name of the electrode you want to reconstruct (in our case Cz).
  • In the same window, hit Look up locs (choose BESA 4-shell again). This will add the spatial info to Cz (unsurprisingly, you get a lot of 0s, because Cz is right in the middle).
  • Because, so far, that channel is the reference (and EEGLAB does not know which channel is the reference), we will hit Set reference. Channel indices should contain the entire array (since that reference applies to ALL channels), and we will just declare that the reference channel to which these channels are referred is Cz.

So far, all we have done is tell EEGLAB that there is a “hidden” electrode (Cz) that should be added to the dataset, and that this channel is in fact the (original) reference. No transformations have been done at this point; it's just a matter of creating space in the dataset so that the re-constructed Cz can be added later on.

Notice how the reference information in the dataset went from “unknown” to “Cz”.

  • Now that the correct amount of channels is in the dataset we can proceed to re-referencing per se: Go to the re-referencing interface (Tools>Re-reference). This brings up the main window shown in the next figure.

If we did not want to re-create the online channel (say we recorded in reference to the tip of the nose, or a mastoid in which we are not interested), we could go straight into the re-referencing interface, choose common average and hit OK. Since we would not wish to include the online reference electrode, we would not need to create a place holder for it via the channel locations interface in the first place.

  • We will compute a common average reference (i.e. re-reference the data to the average of all the channels). Tick Compute average reference & click on the menu button next to the Add current reference channel back to the data and select Cz (it will be the only channel).
  • Hit OK and see how the reference info has now changed to “average”.

  • If we wanted our dataset to be re-referenced to the average of the two mastoids (common for setups with fewer channels, or for other reasons), we would have to add a step where we bring up the re-reference interface again, list the specific channels, and hit OK again.
  • Again, if your reference was outside the scalp, or if your system records reference-free (like BioSemi), you could go straight into re-referencing, since you would not have to (a) specify the reference channel's location and existence (it would already be there) nor (b) re-calculate the activity at the reference.
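In a script, the whole sequence collapses into one pop_reref call: passing an empty channel list computes the common average, and the 'refloc' option adds the original reference back. This is a sketch; the struct fields shown are the minimal ones, and the GUI route fills them in from the BESA look-up for you.

```matlab
% Common average reference, re-creating the online reference (Cz) in one call.
% Cz sits at the vertex, hence theta = 0 and radius = 0.
EEG = pop_reref(EEG, [], 'refloc', struct('labels', 'Cz', 'theta', 0, 'radius', 0));

% Mastoid reference variant: pass the channel indices instead, e.g.
% EEG = pop_reref(EEG, [33 43]);   % example indices for the two mastoids
```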
3. "Pre"-Epoching

As we saw before, this step isn't really the classical pre-average epoching in the sense we are probably used to thinking of it, but merely a way to reduce our data (remember, 70%!). At this point, we don't need to think about which trials go together, which ones are followed by a correct or incorrect response, etc. We will therefore create epochs of [-104 1004] ms for each of the experimental triggers we have sent (whether we have only two or 20 different codes).

-104 ms? WHY? As we will see in the next session, we will need the epochs created here to be a little longer, so that when we prepare our sets for ERPLAB, some of the transformations it performs on the EVENTLIST do not impact our window length. More on this in the next session!

  • Open the Extract epochs interface (Tools>Extract Epochs).
  • Select the triggers you want to create time-locked epochs for (in this dataset, select all and de-select “1” & “2”; these are responses).
  • Set the lower and upper time limits in seconds: [-0.104 1.004].
  • Hit OK.

  • A second interface pops up to ask whether you want to correct the baseline. We will set this to [-100 0] (yes, here it's in milliseconds…).

  • You will see that the info window now displays 500 epochs (while there are 995 events; that is because the triggers we sent for the responses are also included in that 1-second window).
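The same epoching and baseline correction, scripted. The trigger codes are examples; use the event types that actually appear in your file. Note the unit mismatch the GUI also has: pop_epoch takes seconds, pop_rmbase takes milliseconds.

```matlab
% Cut [-104 1004] ms epochs around the experimental triggers
% (codes '11'...'22' are placeholders for your own trigger codes).
EEG = pop_epoch(EEG, {'11' '12' '21' '22'}, [-0.104 1.004]);  % limits in SECONDS

% Remove the pre-stimulus baseline.
EEG = pop_rmbase(EEG, [-100 0]);                              % limits in MILLISECONDS
```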

4. Filtering

We will now filter our data to get rid of electrical noise and slow waves or offsets. For this step I prefer to use the filtering tool implemented in ERPLAB, because it gives a nice graphical visualisation of the filter, but the same type of filter is available through EEGLAB functions too.

  • Go to ERPLAB>Filter & Frequency Tools>Filters for EEG data.
  • Select the type of filter (IIR Butterworth in our case).
  • Set the high-pass cut-off (0.1 Hz in our case).
  • Set the low-pass cut-off (30 Hz in our case).
  • Set the filter order to determine the slope or roll-off (“harshness”) of the filter (4 in our case [24 dB/oct]).
  • Hit APPLY and MATLAB will go through the file and apply the filter.
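The scripted equivalent uses ERPLAB's pop_basicfilter. The option names below follow the versions I have used; they have changed between ERPLAB releases, so check help pop_basicfilter for yours.

```matlab
% Band-pass 0.1-30 Hz, IIR Butterworth, order 4, on all channels
% (the GUI filter dialog with the settings from the steps above).
EEG = pop_basicfilter(EEG, 1:EEG.nbchan, ...
        'Filter', 'bandpass', ...  % combined high- and low-pass
        'Design', 'butter',   ...  % IIR Butterworth
        'Cutoff', [0.1 30],   ...  % half-amplitude cut-offs in Hz
        'Order',  4);              % roughly 24 dB/oct roll-off
```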

5. ICA decomposition

There are many reasons why we would want to run an ICA on our dataset. The main one is that it is extremely good at correcting artefacts like eye-blinks, vertical and horizontal eye-movements, muscle artefacts, electrode “jumps” and even line noise.
Sadly, it is very slow. Luckily, we made some clever decisions early on and reduced our file size while keeping all the interesting bits of data we may want to analyse, so (in theory) we should never have to run it again on this dataset.

Needless to say, if we were ever going to automate a step in our analysis, it would be this one!

  • Graphically, we run ICA by clicking Tools>Run ICA.
  • We will have to choose which ICA algorithm to run. The most popular one is runica, and we will choose that one.
  • No need to set anything else; the default values are good. This will run ICA on all channels (we could choose to exclude some).

  • Unless you have a super-fast computer or you are running the decomposition on a really small dataset, go grab a cup of coffee or, in most cases, a pillow. While the decomposition is happening, MATLAB will keep you informed with somewhat cryptic messages about how far along it is, as shown below:
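Since this is the step most worth batching overnight, here is a minimal scripted sketch. The output filename is a placeholder; saving immediately after the decomposition means you never have to run it twice on the same dataset.

```matlab
% Script equivalent of Tools > Run ICA with the default algorithm.
EEG = pop_runica(EEG, 'icatype', 'runica');  % decomposition on all channels
% To exclude channels, add e.g. 'chanind', 1:64 (example indices).

% Save the dataset with its ICA weights so the slow step is never repeated.
pop_saveset(EEG, 'filename', 'subject01_ica.set');  % placeholder filename
```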

Good night!

DISCLAIMER: The tutorials provided on this wiki are not intended to compete with the tutorials written by the developers of EEGLAB (and other toolboxes) (EEGLAB website). They are intended to be a condensed version, for educational/reference purposes, for members of the lab/department to which this workshop was given.

eeglabsesh1.txt · Last modified: 2014/10/08 03:14 (external edit)