Bayesian Methods

Bayesian analysis techniques provide a formal method for integrating prior knowledge drawn from other imaging modalities. In pure form, Bayesian techniques estimate a posterior probability distribution (a form of solution) based on the experimental data and prior knowledge expressed as a probability distribution. In addition to providing a flexible mechanism for multimodal integration, these techniques allow rigorous assessment of the consequences of prior knowledge or assumptions about the nature of the preferred solution. Several investigators have explored traditional Bayesian methods, seeking a single "best" solution that satisfies some criterion, such as the maximum likelihood, maximum a posteriori (MAP), or maximum entropy solution. However, any single solution is effectively guaranteed to be inaccurate, at least in its details.
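In generic notation (not drawn from the original text), the relationship underlying this approach is simply Bayes' rule: with source parameters \(\theta\) and measured data \(d\),
\[
p(\theta \mid d) \;\propto\; p(d \mid \theta)\, p(\theta),
\]
so the posterior combines the likelihood of the measurements under the forward model with the prior derived from other imaging methods.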

A technique for Bayesian inference has been developed that addresses this concern by explicitly sampling the posterior probability distribution. The strategy is essentially to conduct a series of numerical experiments and determine which solutions best account for the data. To make the method efficient, a Markov chain Monte Carlo (MCMC) technique is employed.
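As a rough sketch of the sampling idea, the following Python fragment applies a generic random-walk Metropolis-Hastings step to a vector of source parameters. It is not the specific sampler used in this work; the forward model, data vector, noise level, and log-prior are hypothetical placeholders.

```python
import numpy as np

def log_posterior(theta, y, forward, log_prior, sigma=1.0):
    """Unnormalized log-posterior: Gaussian likelihood of the measured
    field y given the field predicted by forward(theta), plus the log-prior."""
    residual = y - forward(theta)
    log_like = -0.5 * np.sum(residual ** 2) / sigma ** 2
    return log_like + log_prior(theta)

def metropolis_hastings(theta0, y, forward, log_prior,
                        n_samples=10_000, step=0.05, rng=None):
    """Draw samples from the posterior with a random-walk proposal."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta, y, forward, log_prior)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        logp_new = log_posterior(proposal, y, forward, log_prior)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)
```

Early iterations wander broadly over parameter space; as the chain finds high-posterior regions, accepted moves increasingly cluster there, which is the behavior described in the next paragraph.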

Once this stochastic search identifies regions of the source-model parameter space that account for the data, the algorithm effectively concentrates its sampling in those regions. Thus, in the end, the samples are distributed according to the posterior probability distribution, a probability distribution over solutions upon which subsequent inferences are based. The Bayesian inference method does not employ optimization procedures and does not produce an estimate of a single best-fitting solution. Instead, it builds a probability map of activation. This distribution provides a means of identifying and characterizing probable current sources from surface measurements while explicitly acknowledging the multiple solutions that can account for any given set of surface EEG-MEG measurements.

This method for Bayesian inference uses a general neural activation model that can incorporate prior information on neural currents, including location, orientation, strength, and spatial smoothness. Instead of equivalent current dipoles, the method uses an extended parametric model to define sources. An active region is assumed to consist of a set of voxels identified as part of the cortex, located either within a sphere centered on the cortex or within a patch generated by a series of dilation operations about some point on the cortex. In a typical analysis, 10,000 samples are drawn from the posterior distribution using the MCMC algorithm. Despite the variability among the samples, several sources common to nearly all of them are often apparent; such features are associated with a high degree of probability. By keeping track of the number of times each voxel is involved in an active source over the set of samples, it is possible to build a probability map for neural activation and to quantify confidence intervals. In addition to information about the locations of probable sources, the Bayesian inference approach also yields probabilistic information about the number and size of active regions. Figure 7 illustrates several aspects of this approach to Bayesian inference.
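To make the counting step concrete, a minimal sketch follows, under the assumption (not specified in the original text) that each posterior sample is stored as a boolean mask over cortical voxels marking which voxels belong to an active source in that sample:

```python
import numpy as np

def activation_probability_map(sample_masks):
    """Given an (n_samples, n_voxels) boolean array, where each row marks
    the voxels active in one posterior sample, return the fraction of
    samples in which each voxel participates in an active source."""
    masks = np.asarray(sample_masks, dtype=bool)
    return masks.mean(axis=0)

def credible_interval(values, level=0.95):
    """Percentile-based interval for a scalar quantity computed over the
    posterior samples, e.g., the number or size of active regions."""
    lo = (1.0 - level) / 2.0
    return np.quantile(values, [lo, 1.0 - lo])
```

Voxels with high values in the resulting map correspond to the sources that appear in nearly all samples, while the interval function expresses the probabilistic statements about the number and size of active regions mentioned above.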
