SMART was developed during my dissertation work at the University of Texas at Austin. The goal was to preprocess electroencephalography (EEG) data collected while participants meditated with their eyes closed. EEG is notoriously susceptible to corruption by muscular and other artifactual sources of noise. We therefore used a two-step preprocessing pipeline. First, we applied second-order blind source separation to find sources that are uncorrelated in time (hence "second-order"). Second, we developed SMART, a novel semi-automatic web-based tool, to identify non-neural sources of artifact (e.g., muscular artifact, ocular artifact, and power-line interference).
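To illustrate the second-order idea, here is a minimal sketch in the spirit of AMUSE-style second-order blind source separation: whiten the channel data, then diagonalize a time-lagged covariance matrix so that the recovered sources are uncorrelated at that lag. This is an illustrative stand-in, not the exact algorithm used in our pipeline; the function name and single-lag choice are assumptions.

```python
import numpy as np

def amuse(X, lag=1):
    """Second-order blind source separation (AMUSE-style sketch).

    X   : (channels, samples) array of EEG data.
    lag : time lag (in samples) at which sources are decorrelated.
    Returns (S, W) with estimated sources S = W @ X.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate channels and scale them to unit variance.
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
    Z = V @ X
    # Symmetrized lagged covariance of the whitened data.
    C1 = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    C1 = (C1 + C1.T) / 2
    # Sources uncorrelated at this lag diagonalize C1, so its
    # eigenvectors give the (orthogonal) unmixing in whitened space.
    _, U = np.linalg.eigh(C1)
    W = U.T @ V
    return W @ X, W
```

Because only second-order (covariance) statistics are used, sources with distinct temporal autocorrelation can be separated even when their marginal distributions are Gaussian, which is where this family of methods differs from higher-order ICA.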
SMART identifies artifacts using three key pieces of information about each component: its topographic, temporal (mainly the autocorrelation function), and spectral structure. Based on simple rules, SMART first labels each source as a particular kind of artifact. It then generates a web-based report so that the researcher can quickly and efficiently review every component and its corresponding properties, and judge whether SMART correctly classified it as noise. If not, the researcher can override the classification with a single radio-button click. The screenshot below illustrates the point.
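As a rough illustration of what such simple rules can look like, the sketch below labels a component from its power spectrum and a summary of its scalp map. The thresholds, band edges, and the `frontal_weight` summary are all illustrative assumptions, not the rules implemented in SMART.

```python
import numpy as np

def classify_component(sig, fs, frontal_weight):
    """Rule-of-thumb artifact label from spectral and topographic cues.

    sig            : 1-D component time course.
    fs             : sampling rate in Hz.
    frontal_weight : fraction of the component's topographic weight on
                     frontal channels (hypothetical scalp-map summary).
    Thresholds are illustrative, not the ones used in SMART.
    """
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    total = power.sum()

    def band(lo, hi):
        # Fraction of total power in the [lo, hi) frequency band.
        return power[(freqs >= lo) & (freqs < hi)].sum() / total

    # Line noise: power concentrated near the mains frequency (50/60 Hz).
    if band(48, 52) + band(58, 62) > 0.5:
        return "line noise"
    # Muscle artifact: broadband high-frequency power dominates.
    if band(20, fs / 2) > 0.6:
        return "muscle"
    # Ocular artifact: slow activity weighted toward frontal channels.
    if band(0.1, 4) > 0.5 and frontal_weight > 0.5:
        return "ocular"
    return "likely neural"
```

A rule set like this is deliberately conservative: it only proposes a label, and the web interface leaves the final accept-or-override decision to the researcher.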
The source code is provided here. For more information on the SMART tool, please see the references below. If you use this tool in your research, please cite them as well.
- Saggar, M., et al. (2012) Intensive training induces longitudinal changes in meditation state-related EEG oscillatory activity. Front. Hum. Neurosci. 6:256. doi: 10.3389/fnhum.2012.00256
- Saggar, M. (2011) Computational analysis of meditation. Retrieved from University of Texas Digital Repository: Electronic Theses and Dissertations, (Austin, TX), Pages: 24-44.
NOTE: SMART for other imaging modalities (fMRI and fNIRS) is under testing and will be released very soon.