Biologically Based Neural Networks

The complex behavior of neural systems perhaps has less to do with the specifics of the individual units and more to do with the connection of units into networks, leading to emergent computation. Computational neuroscience focuses on the modeling of biologically based neural networks; it developed into its own field in the 1980s, largely due to advances in neurophysiological recording and the continued development of low-cost digital computers. More recent advances in noninvasive neuroimaging, for example, the development of functional magnetic resonance imaging (fMRI), high-density electroencephalography (EEG), and magnetoencephalography (MEG), have led to a new set of tools and data for supporting the construction and validation of biologically based neural network models.

In terms of neural modeling, the hippocampus is a brain structure that has received much attention. The hippocampus plays an important role in memory, seemingly acting as a buffer for the storage of short-term memories, as well as being involved in spatial navigation tasks. Roger Traub and colleagues have constructed a neural network model of the mammalian hippocampus consisting of roughly 10,000 neurons, each built using realistic models of several types of ionic channels. Consistent with anatomical data, the neurons in the network are sparsely connected (less than 5% connectivity). Network simulations produce a population-based rhythmic activity of 4–8 Hz known as the θ rhythm, which is observed in the normal hippocampus. The activity is population-based because individual neurons do not fire regularly at the θ rhythm; only by considering the population or network response does one see this emergent behavior. The model reproduces a variety of responses observed in vivo; specifically, the network can generate "seizures" similar to those seen in humans and monkeys. Because the hippocampus is involved in a variety of neurological diseases, including epilepsy, Alzheimer's disease, and Down's syndrome, one promising area of research is the development of computational models for evaluating the efficacy of various types of treatment.
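
The sparse-connectivity constraint in such a model is easy to make concrete. Below is a minimal Python sketch; the connection probability is an assumption, since the text states only that connectivity is below 5%.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.03                # n kept small for illustration; p assumed (< 0.05)
conn = rng.random((n, n)) < p    # conn[i, j] is True if neuron j synapses onto i
np.fill_diagonal(conn, False)    # disallow self-connections
print(conn.mean())               # realized connectivity, approximately 3%
```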

Other neural network models of the hippocampus, including those by John Lisman and colleagues, have exploited two types of rhythms observed in vivo: the θ rhythm and the γ rhythm (40–60 Hz). In Lisman's model, a short-term memory is stored in each cycle of a γ oscillation, and all of the short-term memories are packaged within a single cycle of the θ rhythm. The model predicts that roughly seven short-term memories can be stored at any given time, consistent with the well-known psychological finding that human short-term memory holds about seven items.
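
Lisman's capacity estimate follows from simple arithmetic: the number of available memory "slots" is the number of γ cycles that fit within one θ cycle. A back-of-envelope check using only the frequency ranges quoted above:

```python
# Number of gamma cycles per theta cycle, over the quoted ranges:
# theta = 4-8 Hz, gamma = 40-60 Hz.
theta_hz = (4, 8)
gamma_hz = (40, 60)
slots = sorted(g / t for t in theta_hz for g in gamma_hz)
print(slots)  # [5.0, 7.5, 10.0, 15.0] -- roughly seven in the middle
```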

Neural networks have been used to model oscillatory and rhythmic phenomena in areas other than the hippocampus. For example, several network models have been developed that hypothesize a role for oscillations and temporal patterns of spikes in solving the so-called binding problem. The binding problem in audition is the well-known "cocktail party problem." When attending a noisy cocktail party with multiple speakers and noise sources, it is relatively easy to pick out the voice of a single speaker and understand what he or she is saying. This is despite the fact that the signals from the sound sources in the room are mixed together, often in very complicated ways, due to reverberation and attenuation. The brain ultimately must deal with this mixture, separating the neural signals that belong to the target speaker from those that belong to the background noise. In vision, the problem is compounded by the fact that the first thing the visual system appears to do is decompose objects into their associated color, motion, depth, position, and so on. In fact, this decomposition occurs in two different streams, termed the "what" and "where" streams by Robert Desimone and Leslie Ungerleider. These streams run through different parts of the cortex (the temporal and parietal lobes, respectively), and no region has been identified in the brain in which all of the information converges, implying that the representation remains distributed.

Charles Gray and Wolf Singer have found evidence that temporal patterns may play a role in binding a distributed neural representation. On the basis of their neurophysiological recordings, they propose that neurons representing the same object fire in synchrony. In a network having both excitatory and inhibitory neurons, this synchronous firing can also lead to oscillations. Several neural network models have been developed that use synchrony and/or oscillatory activity as a coding scheme for representing and binding objects. John Hopfield and Carlos Brody have developed a neural network model, constructed from integrate-and-fire neurons, that is selective for specific spatiotemporal patterns in the input stimulus. In their model, the presence of a stimulus is signaled via transient synchrony among a population of neurons in the network. One important element of the model is that the network easily and rapidly desynchronizes when no recognized spatiotemporal pattern is present; desynchronization is just as important as synchronization if the network is to use these temporal patterns as a coding mechanism for objects. The recognition event, the moment of transient synchrony, is detected by a neuron that acts as a coincidence detector.
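
A minimal sketch can make the coincidence-detection idea concrete. The leaky integrate-and-fire neuron below (all parameters are illustrative assumptions, not values from the Hopfield-Brody model) fires when its inputs arrive nearly synchronously but stays silent when the same inputs are spread out in time.

```python
import numpy as np

def lif_response(spike_times, t_end=0.1, dt=1e-4, tau=0.005,
                 w=1.2, v_thresh=2.0):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters).
    Returns the times at which the membrane potential crosses threshold."""
    v, fired = 0.0, []
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    for step in range(int(t_end / dt)):
        v *= np.exp(-dt / tau)      # passive leak toward rest
        if step in spikes:
            v += w                  # synaptic kick from an input spike
        if v >= v_thresh:
            fired.append(step * dt)
            v = 0.0                 # reset (no refractory period in this sketch)
    return fired

# Near-synchronous inputs sum past threshold...
print(lif_response([0.010, 0.0105]))  # two spikes 0.5 ms apart -> fires
# ...while the same inputs spread in time leak away before summing.
print(lif_response([0.010, 0.040]))   # 30 ms apart -> silent
```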

Whether the brain explicitly solves the binding problem in order to recognize objects is still under much debate. A neural network model developed by Maximilian Riesenhuber and Tomaso Poggio illustrates that impressive visual object recognition performance is achievable using a purely feedforward model, which does not explicitly require binding or segmentation. The model is based on the neurophysiological experiments of Keiji Tanaka and colleagues and their recordings from the anterior portion of the inferior temporal cortex (IT) in the monkey. IT is believed to be an important visual area for object recognition, and recordings in IT have shown cells with high specificity for individual objects, such as faces. Tanaka and his group developed a set of complex visual stimuli (e.g., starbursts, junctions of lines, bull's-eyes), which are quite different from the bars and edges traditionally used as stimuli by neurophysiologists. They found that individual IT neurons respond preferentially to these features: one neuron fires only for bull's-eyes, another only for T junctions. One hypothesis is that these neurons function as a large set of complex feature detectors. Riesenhuber and Poggio constructed a neural network that builds up a large set of complex feature detectors using a hierarchy of linear and nonlinear combination rules. In their model, neurons at higher levels in the hierarchy have response properties very similar to those observed by Tanaka in vivo. Riesenhuber and Poggio argue that these responses are a good basis for representing visual objects because individual objects tend to cluster uniquely in small regions of this high-dimensional feature space. By feeding these representations into an artificial neural network, they showed that the representation captures the information necessary for robust, rotation-invariant object recognition, a difficult visual recognition problem.
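
The alternation of linear and nonlinear combination rules can be sketched compactly. The toy hierarchy below is only in the spirit of the Riesenhuber-Poggio architecture; the filters, sizes, and use of a single stage of each type are assumptions. A linear template-matching layer is followed by a nonlinear MAX-pooling layer, which confers tolerance to small shifts in a feature's position.

```python
import numpy as np

def simple_layer(image, filters):
    """Linear stage: each unit takes a weighted sum (template match)
    over a local patch of its input."""
    h, w = image.shape
    k = filters.shape[-1]
    out = np.zeros((len(filters), h - k + 1, w - k + 1))
    for f, filt in enumerate(filters):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[f, i, j] = np.sum(image[i:i+k, j:j+k] * filt)
    return out

def complex_layer(maps, pool=2):
    """Nonlinear stage: MAX pooling over a spatial neighborhood,
    giving tolerance to small shifts of the preferred feature."""
    f, h, w = maps.shape
    out = np.zeros((f, h // pool, w // pool))
    for i in range(h // pool):
        for j in range(w // pool):
            patch = maps[:, i*pool:(i+1)*pool, j*pool:(j+1)*pool]
            out[:, i, j] = patch.max(axis=(1, 2))
    return out

# Alternating linear and nonlinear stages build increasingly complex,
# increasingly position-tolerant feature detectors.
rng = np.random.default_rng(1)
image = rng.random((16, 16))
filters = rng.standard_normal((4, 3, 3))  # 4 hypothetical 3x3 templates
c1 = complex_layer(simple_layer(image, filters))
print(c1.shape)  # (4, 7, 7)
```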

Further evidence against the necessity of oscillatory network behavior for solving the binding problem comes from models of the piriform cortex by Matt Wilson and James Bower. The piriform cortex is the primary cortical area for olfaction (i.e., the sense of smell). It is well-known that the EEG of the piriform cortex in rats exhibits γ oscillations when the rat sniffs an odor. Wilson and Bower constructed a compartmental model, consisting of over 4000 neurons, to investigate the nature of these oscillations. They found that the γ oscillations persisted even when the model was driven with random inputs. This suggests that the oscillations are not involved in olfactory computation and odor recognition; rather, they are an epiphenomenon of the network architecture. The debate over the computational role of oscillations in the cortex and subcortical areas is thus not resolved.

In addition to neurophysiology, anatomical data have been important in the development of biologically based neural networks. Many regions of the neocortex are organized topographically, with precise connectivity mapping the sensory world into the brain. For example, much of the visual system is retinotopically mapped, meaning that adjacent regions in the visual cortex are stimulated by adjacent positions in the visual environment, as projected onto the retina. Other prominent areas having topographic representations are the motor cortex and somatosensory cortex. A notable property of many of these topographic maps is that they have been shown to be adaptive. Leif Finkel, Gerald Edelman, and John Pearson developed a network model of somatosensory cortex, consisting of approximately 1500 neural units, that exhibited this adaptive behavior, often termed plasticity. Stimulating the network model with repeated tactile input on one of the simulated fingers results in an increase in the representation of that finger within the network model. Simulated transection of afferent nerves, eliminating input from that finger, causes the corresponding region in the model to shrink and be taken over by regions receiving active tactile input. The phenomenon of a phantom limb, in which an amputee experiences the sensation of the missing limb shrinking into the body and disappearing, is the analog of the dynamic remapping observed in this model.
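
The competitive character of this remapping can be illustrated with a deliberately simplified model. The sketch below uses a Kohonen-style self-organizing map rather than the Finkel-Edelman-Pearson model itself, and all parameters are assumptions; over-stimulating one region of the input expands its territory in the map at the expense of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 50
prefs = np.linspace(0.0, 1.0, n_units)        # preferred skin positions

def train(stimuli, prefs, eta=0.05, sigma=2.0, steps=5000):
    """Kohonen-style competitive learning: the best-matching unit and
    its neighbors shift their preferences toward each stimulus."""
    prefs = prefs.copy()
    for _ in range(steps):
        x = rng.choice(stimuli)               # tactile input location
        win = np.argmin(np.abs(prefs - x))    # best-matching unit
        d = np.arange(n_units) - win
        h = np.exp(-d**2 / (2 * sigma**2))    # neighborhood function
        prefs += eta * h * (x - prefs)        # pull neighbors toward x
    return prefs

# Over-stimulating one "finger" (inputs near 0.2) enlarges its territory:
stimuli = np.concatenate([rng.uniform(0.0, 1.0, 200),
                          rng.uniform(0.15, 0.25, 800)])
new_prefs = train(stimuli, prefs)
# Fraction of the map devoted to the over-stimulated finger;
# it grows well above the uniform baseline of about 0.1.
print(np.mean((new_prefs > 0.15) & (new_prefs < 0.25)))
```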

The learning rule governing plasticity in many biologically based neural network models is based on the classic Hebb rule. In 1949, Donald Hebb formulated a learning rule describing how the strengths of synapses are modified on the basis of pre- and postsynaptic activities. The basic notion was that, if two neurons' activities are correlated or coincident, then the synaptic strength between them should be increased. In mathematical terms, one of the simplest forms of the rule is

$$\Delta w_{ij} = \eta \cdot O_i \cdot O_j$$

where $\Delta w_{ij}$ is the change in synaptic strength, $\eta$ is the learning rate, and $O_i$ and $O_j$ are the outputs or firing rates of the presynaptic and postsynaptic neurons, respectively. In this case, the synaptic strength is increased if the presynaptic cell and the postsynaptic cell fire at the same time. A symmetric rule allows the synapse to be weakened if the cells fire at different times. Computationally, the Hebb rule strengthens synapses between neurons that have correlated activity and weakens synapses between neurons having uncorrelated firing patterns. One issue with the classic Hebb rule is that it is a local rule based on the generation of action potentials by the pre- and postsynaptic neurons. Because this adaptation occurs directly at the synapse, which can be far removed from the axon hillock and axon, it is unclear how information about whether the postsynaptic neuron generated action potentials would be communicated back to the synapse. Several researchers have reformulated the Hebb rule so that the change in synaptic strength is based on pre- and postsynaptic potentials (local voltages) rather than firing rates. However, as mentioned earlier, more recent neurophysiological evidence has shown that action potentials can quickly back-propagate from the soma to the synapse, serving as a means to communicate the firing of the postsynaptic cell to the local synapse. Regardless of which formulation is used, the Hebb rule has provided modelers with a biologically plausible, activity-dependent mechanism for constructing highly specific connection patterns, leading to emergent computation.
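
As a concrete illustration, the following sketch implements the rate-based rule above together with a covariance-style variant, which also weakens synapses between cells whose firing is uncorrelated. The learning rate, mean-activity estimates, and toy simulation are illustrative assumptions, not part of any particular published model.

```python
import numpy as np

def hebb_update(W, pre, post, eta=0.01):
    """Plain Hebb rule: dW[j, i] = eta * O_j * O_i.
    Coincident pre-/postsynaptic firing strengthens the synapse."""
    return W + eta * np.outer(post, pre)

def covariance_update(W, pre, post, pre_mean, post_mean, eta=0.01):
    """Covariance variant: correlated activity strengthens a synapse;
    uncorrelated or anticorrelated activity weakens it."""
    return W + eta * np.outer(post - post_mean, pre - pre_mean)

# Two presynaptic cells: one correlated with the postsynaptic cell,
# one firing at random. Only the correlated synapse should grow.
rng = np.random.default_rng(0)
W = np.zeros((1, 2))
for _ in range(1000):
    driver = rng.random()                    # shared drive in [0, 1)
    pre = np.array([driver, rng.random()])   # cell 0 correlated, cell 1 not
    post = np.array([driver])
    W = covariance_update(W, pre, post, pre_mean=0.5, post_mean=0.5)
print(W)  # first weight grows steadily; second hovers near zero
```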

The specificity of the connections in the visual cortex has led to the development of several models hypothesizing a computational role for these connections. Leif Finkel, Shih-Cheng Yen, and Elliot Menschik, for example, have developed a biologically based network model that reproduces several interesting quantitative psychophysical results of contour perception. The computation in the model is largely mediated by the specificity of long-range horizontal connections in layers 2 and 3 of the visual cortex. Anatomically, these connections extend over several millimeters and subtend over 10° of visual field. In their model, horizontal connections mediate facilitation among neurons representing collinear or cocircular contour elements, as shown in Fig. 2.
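
The flavor of this connection specificity can be sketched with a simple "association field" rule, in which the strength of a horizontal connection between two oriented elements grows with their cocircularity and falls off with distance. The functional form and parameters below are illustrative assumptions, not those of the published model (orientations are treated as full directions for simplicity).

```python
import numpy as np

def facilitation(dx, dy, theta_i, theta_j, sigma_d=2.0, sigma_a=0.5):
    """Hypothetical association-field weight between two oriented
    elements: strongest when they are nearby and cocircular, i.e.,
    their orientations are consistent with a common smooth contour."""
    d = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)                 # direction of separation
    # Cocircularity: element j's orientation should mirror element i's
    # about the line joining them: theta_j ~= 2*phi - theta_i.
    mismatch = np.angle(np.exp(1j * (theta_j - (2 * phi - theta_i))))
    return (np.exp(-d**2 / (2 * sigma_d**2)) *
            np.exp(-mismatch**2 / (2 * sigma_a**2)))

# Collinear neighbors facilitate strongly; orthogonal ones barely at all.
print(facilitation(1.0, 0.0, 0.0, 0.0))        # collinear pair, ~0.88
print(facilitation(1.0, 0.0, 0.0, np.pi / 2))  # orthogonal pair, ~0.006
```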

Figure 2 Cortical simulations of horizontal connections between columns of neurons in the visual cortex. Spike traces show the degree of synchronization in response to a six-element contour. Each column of neurons "sees" just one contour element; the spacing between elements determines the salience of the contour relative to background clutter. As the salience increases (top trace most salient, bottom trace least salient), the degree of synchronization decreases. Six hypercolumns are simulated with eight pyramidal cells and eight interneurons per orientation column. Each pyramidal cell is a 64-compartment model, and interneurons are 51-compartment models (reprinted with permission from L. Finkel).

Biologically based network modeling has as its goal understanding how biophysics, neurophysiology, and neuroanatomy give rise to complex behavior and computation within the brain. One challenge is that mathematical analysis of biologically based networks is difficult, owing to the complexity of the detailed neurobiology. This complexity has also limited the size of the models, in terms of the number of neurons in the network, and therefore limits their evaluation with more complex and realistic input. As a result, some researchers have developed neural network models that trade off biological realism in favor of larger network size and whose overall behavior is tractable to mathematical analysis.
