Signal Processing

Event-related and evoked responses in MEG and EEG are typically quite small compared to ongoing neural activity, other physiological signals, or environmental noise. Effective signal processing strategies are therefore essential for isolating the signals of interest from the rest of the observed activity.

Temporal filtering is a ubiquitous initial step in the signal processing chain. Virtually all MEG and EEG systems employ some form of analog filtering on the front end, if for no other reason than to limit signal bandwidth and avoid aliasing during digitization. Many conventional EEG systems provide an extensive set of analog filtering capabilities, including high-pass and low-pass filters as well as notch filters (often tuned to the 50- or 60-Hz frequency of the power grid), to eliminate major sources of environmental interference. If the frequency content of the targeted signal is known or can be determined empirically, reducing the measurement bandwidth reduces noise and can increase the efficiency of signal acquisition. However, as high-density sensor systems and digital acquisition and processing become more standard, instruments are increasingly designed with fixed analog front ends. Signal processing is instead performed by digital signal processing subsystems, general-purpose computers embedded in the data system, or workstations used by investigators for subsequent analysis.

Digital filtering is cheaper and in many respects technically superior to analog filtering. Methods based on the fast Fourier transform (FFT) greatly simplify the algorithms required to implement digital temporal filters. Related techniques are used for frequency-domain analysis of the major rhythms characteristic of spontaneous EEG or MEG recordings or for the extraction of steady-state responses at the stimulation frequency. Short-time analyses based on wavelets or specialized FFT algorithms may be used to identify periods of transient synchronization (or desynchronization) that are associated with (but not time-locked to) external stimuli or perceptual states. Such analyses are often performed trial by trial on continuously acquired data. The averaging techniques used for most event-related response work would attenuate or eliminate such responses because they are not time-locked to the stimulus and may vary in time and phase from one response to another.
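
As a rough illustration of frequency-domain filtering, the Python sketch below zeroes the FFT bins outside a chosen pass band and inverts the transform. The sampling rate, band edges, and test signal are illustrative assumptions, and a practical filter would taper the band edges to limit ringing rather than cut them off abruptly.

    import numpy as np

    def fft_bandpass(signal, fs, f_lo, f_hi):
        """Keep only FFT bins between f_lo and f_hi, then invert the transform."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        return np.fft.irfft(spectrum * mask, n=signal.size)

    # Example (assumed values): isolate the 8-13 Hz alpha band from one
    # channel sampled at 500 Hz.
    fs = 500.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    channel = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    alpha = fft_bandpass(channel, fs, 8.0, 13.0)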

Eye blinks or other movements of facial muscles can produce strong signals in MEG or EEG. The startup of an electric motor or power-hungry instrument in the vicinity (or sharing a power circuit) can also introduce large artifacts. A single large-amplitude transient may leave significant residue in an averaged event-related response even after many trials. For this reason, artifact rejection is another common step applied in the analysis of MEG and EEG data. The digital data stream is monitored for values that exceed some criterion threshold. If the threshold is exceeded, the offending channels or, more commonly, the entire trial is excluded from further analysis. For some clinical applications, artifact rejection is typically accomplished by visual inspection. In epilepsy, the pathophysiological responses of interest may exceed the size of signals considered artifacts under other circumstances, and inspection by a trained clinician may be the most efficient and effective way to identify both artifacts and epileptiform activity. However, as data streams become denser and algorithms become more sophisticated, there is increasing reliance on software that can categorize events in the experimental record on the basis of temporal waveforms, spatial topographies, or both. Such systems are often used to preprocess an extensive record, bringing interesting or suspicious events to the attention of the reviewer.
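
A minimal sketch of threshold-based trial rejection is given below, assuming the data have already been cut into an array of trials; the array shape and the threshold value are assumptions chosen for illustration, not recommendations.

    import numpy as np

    def reject_trials(epochs, threshold):
        """Drop any trial whose peak absolute amplitude exceeds the threshold.

        epochs is assumed to have shape (n_trials, n_channels, n_samples);
        the threshold is in the same units as the data (e.g., microvolts
        for EEG or femtotesla for MEG).
        """
        peak_per_trial = np.abs(epochs).max(axis=(1, 2))
        keep = peak_per_trial <= threshold
        return epochs[keep], keep

    # e.g., discard EEG trials containing any sample beyond 100 microvolts:
    # clean_epochs, kept = reject_trials(epochs, threshold=100.0)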

Time-locked selective averaging is the mainstay of most existing work with event-related or evoked responses. As a first step, epochs in the data are identified and characterized according to the nature of the stimulus, the response, or the task and its performance. Averaging can even be undertaken relative to a reproducible endogenous transient, such as an epileptiform spike. Segments of the waveform data across all channels are selected relative to the timing event, and within each channel the corresponding time points in the segment are averaged. In general, averages are segregated according to the particulars of the trial, e.g., the identity of the stimulus or the accuracy of task performance, although in many cases averages are collapsed across conditions that are considered irrelevant for a particular experimental question. Many studies report "grand averages" constructed by averaging responses across many subjects in order to increase the power of statistical inference, although this practice probably precludes reliable source localization.
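
The sketch below illustrates this epoching-and-averaging step under simple assumptions: a continuous multichannel recording, the sample index of each timing trigger, and a condition label for each trigger. All variable names and shapes are assumptions made for the example.

    import numpy as np

    def epoch_and_average(data, fs, event_samples, conditions, tmin, tmax):
        """Cut epochs around each timing event and average them per condition.

        data has shape (n_channels, n_samples); event_samples gives the
        sample index of each trigger, and conditions a parallel label
        (e.g., stimulus identity) for each trigger. Returns a dict mapping
        condition -> averaged (n_channels, n_epoch_samples) response.
        """
        start, stop = int(round(tmin * fs)), int(round(tmax * fs))
        averages = {}
        for cond in set(conditions):
            epochs = [
                data[:, s + start : s + stop]
                for s, c in zip(event_samples, conditions)
                if c == cond and s + start >= 0 and s + stop <= data.shape[1]
            ]
            averages[cond] = np.mean(epochs, axis=0)
        return averages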

The construction of difference waves, by subtracting a control condition from a particular response, is another time-honored method for the analysis of event-related response data. For example, such methods clearly disclose the increase in response amplitude associated with selective attention to a particular location in the visual field and isolate responses associated with certain endogenous cognitive responses. In many cases such records are reduced to a single response waveform, for example, by averaging (or summing power) across all channels or a selected subset. However, an increasing number of investigators construct difference topographies by subtracting control responses from the corresponding signal channels. These distributions are often analyzed in the same way as the underlying event-related signals, for example, subjected to source localization procedures.
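
At the level of the averaged data, a difference wave or difference topography is simply a channel-by-channel subtraction, as in the sketch below; the optional channel selection and the collapse to a single waveform mirror the reductions described above, and all names and indices are illustrative.

    import numpy as np

    def difference_wave(avg_target, avg_control, picks=None):
        """Channel-by-channel difference of two averaged responses.

        Both inputs are (n_channels, n_samples) averages produced by the
        same epoching procedure; picks optionally restricts the result to
        a subset of channels before collapsing to a single waveform.
        """
        diff = avg_target - avg_control           # difference topography
        if picks is not None:
            diff = diff[picks]
        return diff, diff.mean(axis=0)            # per-channel and collapsed

    # e.g., attention effect at a few posterior sensors (indices assumed):
    # topo, wave = difference_wave(avg["attended"], avg["unattended"], picks=[20, 21, 22])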

Such analyses should be approached with caution. The methods will work, in principle, if the only differences between conditions are due to the strength of the underlying sources or the appearance of a new source under particular experimental conditions. Unlike PET and fMRI, which in effect produce an estimate of the distribution of source activity, EEG and MEG produce topographic maps with a complex relationship to source activity, driven by the physical properties of the measurement instrument and the system under study. Minor changes in the location or extent of activation (especially in cortical regions of high curvature) can produce big changes in the observed field topography and, thus, significant changes in computed difference fields.

Single-pass analytical methodologies are increasingly applied to the analysis of event-related response data. Frequency decomposition techniques (described previously) have been used to explore the putative role of transient phase locking of oscillatory activity in certain perceptual processes. Correlation techniques can be applied to continuous evoked response data to identify consistent features in a manner analogous to signal averaging. Spatial filtering techniques compute a linear transform (based on a computed or assumed source model) that can be applied to the data to estimate the activation time course of the source. Several investigators suggest that techniques in this class (such as minimum variance beamforming) are most useful when applied to single-trial data rather than averaged responses. Some newer methods, such as independent component analysis (ICA), synthetic aperture magnetometry, and magnetic field tomography, are by the nature of their algorithms most effectively applied to single-trial data.
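
As one concrete example of a spatial filter in this class, the sketch below computes unit-gain minimum-variance (LCMV) beamformer weights from a sensor covariance matrix and the forward field of an assumed source; the regularization fraction and the variable names are illustrative assumptions rather than recommended settings.

    import numpy as np

    def lcmv_weights(cov, leadfield, reg=0.05):
        """Unit-gain minimum-variance (LCMV) spatial filter for one source.

        cov is the (n_channels, n_channels) sensor covariance estimated
        from the data, leadfield the (n_channels,) forward field of the
        assumed source, and reg a fraction of the average sensor power
        added to the diagonal for numerical stability.
        """
        n = cov.shape[0]
        cov_reg = cov + reg * (np.trace(cov) / n) * np.eye(n)
        cov_inv = np.linalg.inv(cov_reg)
        return cov_inv @ leadfield / (leadfield @ cov_inv @ leadfield)

    # The estimated source time course is the filter applied to the data,
    # single trial or averaged: s = w @ data, for data of shape
    # (n_channels, n_samples).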
