The Neuron as a Basic Computational Element

In some sense, neurons in a neural network play the same basic computational role that transistors play in a digital computer, though the two are far from isomorphic: the method of computation and the representation of information are quite different. The concept of the neuron as a basic unit in the nervous system has its origins in the "neuron doctrine," which was formulated by Wilhelm Waldeyer in the 1890s and is based largely on the seminal neuroanatomical work of Santiago Ramón y Cajal and the nerve cell stain discovered by Camillo Golgi, who shared the Nobel Prize in 1906. The neuron doctrine states that the nervous system is composed of discrete cells, called neurons, that are both structurally and functionally distinct, each having its own cell membrane and functioning as a fundamental signaling unit.

Further, the doctrine states that the connections between these discrete units are highly specific. At the time, this stood in contrast to competing theories that hypothesized the nervous system to be a syncytium, an amorphous collection of cell bodies that essentially share a common cell membrane, implying no real connection specificity. The neuron doctrine has been important in establishing the currently accepted definition of a neural network: discrete units connected in highly specific ways to enable complex behaviors in biological and artificial nervous systems.

There are several hundred types of excitable nerve cells in the cerebral cortex, some of which differ in their biophysics of computation. Most classes, however, share similarities in the nature of information flow within and between neurons, in their input-output characteristics (transfer function), and in the coding and representation of information.

The biophysical basis of information flow within a neuron rests in the dynamics of ionic concentration gradients. Biological neurons are described as having a set of neural processes, which include their dendrites and axon, as shown in Fig. 1a. Ionic gradients across the cell membrane produce voltage changes that propagate down the neural processes. The voltage changes are typically graded until they reach the soma where, at the axon hillock, they sum and generate, with some probability, an all-or-none response, termed a spike or action potential. Action potentials travel down the axon until they reach synapses, where they induce the release of a neurotransmitter. There are many types of neurotransmitters; two of the most prominent in the brain are glutamate and gamma-aminobutyric acid (GABA). Neurotransmitters are often characterized by whether they excite or inhibit the postsynaptic neuron, with glutamate being excitatory and GABA inhibitory. Neurotransmitters diffuse across the synaptic cleft and bind to the cell membrane of a neighboring neuron. Binding of neurotransmitter changes the conductance of ionic channels so that a voltage change is induced in the postsynaptic cell. At the synapse, the transduction of the electrical signal (voltage) to a chemical signal (neurotransmitter) and back to an electrical signal can amplify or attenuate the signal received postsynaptically, depending on the strength of the synapse. This "synaptic weighting" plays an important role in the input-output characteristics of a neuron and is also important for learning and adaptation. The postsynaptic voltage travels down the dendrites of the second neuron and, in this way, information is communicated between the two neurons.

Figure 1 Modeling a neuron: (a) biological neuron, including neural processes; (b) artificial neuron.

Information flow is thought to be primarily in the direction from the dendrites to the soma (cell body), down the axon, and finally across synapses to other neurons. The concept of unidirectional information flow in a single neuron has been an important element of many artificial neural network models, although more recent experimental evidence has shown that the neurobiology is more complex. For example, there is evidence that action potentials propagate back through the dendrites. This backpropagating signal can be important for different types of adaptation and learning, including long-term potentiation (LTP) and Hebbian learning, as will be discussed later.

Neurons are interesting computational elements because of the nonlinear way in which they transform their inputs. From a purely computational point of view, complex behaviors and functions can be computed only if there is a source of nonlinearity in the information-processing flow; implementing multiplication, for example, requires a nonlinearity. In 1943, Warren McCulloch and Walter Pitts formulated a nonlinear model of the neuron as a binary device, with the output of the neuron being either 1 (the "on" state) or 0 (the "off" state). The state of the McCulloch-Pitts neuron, as it came to be called, is determined by the synaptic input mediated by both excitatory and inhibitory synapses. The neuron transitions to the "on" state if the sum of its excitatory synaptic input is greater than a threshold value and no inhibitory synapses are active; otherwise it transitions to the "off" state. The work of McCulloch and Pitts was one of the first attempts to understand the computational properties of the nervous system through consideration of the nonlinear properties of single neurons.
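
As a concrete illustration, the following Python sketch implements the binary unit just described, with inhibition acting as an absolute veto; the function and parameter names are ours, chosen for clarity rather than taken from the original formulation.

# A minimal sketch of a McCulloch-Pitts neuron: output 1 if the summed
# excitatory input exceeds the threshold and no inhibitory input is
# active; otherwise output 0.

def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Binary threshold unit with absolute (veto) inhibition.

    excitatory: iterable of 0/1 excitatory inputs
    inhibitory: iterable of 0/1 inhibitory inputs
    threshold:  firing threshold
    """
    if any(inhibitory):  # a single active inhibitory synapse vetoes firing
        return 0
    return 1 if sum(excitatory) > threshold else 0

# Example: a two-input AND gate (threshold 1, no inhibition).
assert mcculloch_pitts([1, 1], [], 1) == 1
assert mcculloch_pitts([1, 0], [], 1) == 0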

The McCulloch-Pitts neuron, though it has some interesting computational properties, is far removed from the neurobiology and biophysics of real neurons. In biological systems, nonlinearities are the norm, so it is important to consider which nonlinearities in real biological neurons are used for computation. In the 1950s, Alan Hodgkin and Andrew Huxley's Nobel Prize-winning work describing the properties of excitable cells began to shed light on this question. Using the squid giant axon as their preparation, Hodgkin and Huxley demonstrated how action potentials are generated by the nonlinear properties of voltage-dependent ion channels in the cell membrane. They modeled this complex physiological behavior with a set of coupled differential equations, fitting the parameters of their model to experimental data. The Hodgkin-Huxley model is still widely used for modeling biologically based spiking neurons.
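
For reference, the coupled equations take the following standard form, with a capacitive membrane carrying voltage-dependent sodium, potassium, and leak conductances (the rate functions and maximal conductances are the quantities fit to data):

$$C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}) - \bar{g}_{\mathrm{L}}\, (V - E_{\mathrm{L}}),$$

where each gating variable evolves as

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}.$$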

Though there are other sources of nonlinearity in biological neurons, the Hodgkin-Huxley model is important for establishing the nonlinearity of action potential generation. The relationship between a neuron's postsynaptic potential (i.e., its input) and its generation of action potentials is often termed the neuron's transfer function. The transfer function determines how a neuron maps its input to an output. To simplify the mathematics and to focus on the computation of entire networks rather than of individual neurons, the biophysically based model of Hodgkin and Huxley has been abstracted in various ways to yield a variety of transfer functions. One such abstraction is the "integrate-and-fire" neuron. The basic biophysical mechanism governing the behavior of an integrate-and-fire neuron is the change in membrane voltage due to injection of a current, for example, at a synapse. In the integrate-and-fire neuron, the membrane essentially acts like a capacitor: if enough current is injected into the cell, the voltage increases until it reaches a threshold, at which time an action potential is generated, the membrane potential resets, and the accumulated charge is dissipated. The integrate-and-fire neuron can be made more realistic by adding a resistance to the membrane equation, allowing for the leakage of current observed in real neurons. These models are termed leaky integrate-and-fire neurons.
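
A minimal Python sketch of the leaky variant follows, assuming the usual membrane equation C dV/dt = -(V - V_rest)/R + I(t) integrated with Euler steps; all parameter values here are illustrative.

# Leaky integrate-and-fire neuron: the membrane charges like a leaky
# capacitor and emits a spike (then resets) on reaching threshold.

def simulate_lif(current, dt=0.1, C=1.0, R=10.0, v_rest=0.0,
                 v_thresh=15.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    current: injected current values, one per time step of size dt (ms)
    Returns the indices of the time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, i_inj in enumerate(current):
        # Leak pulls V back toward rest; input current charges the membrane.
        v += (-(v - v_rest) / R + i_inj) * dt / C
        if v >= v_thresh:  # threshold crossing: spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold input yields a regular spike train.
print(simulate_lif([2.0] * 1000))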

Even simpler models ignore the fine temporal structure of individual spikes and condense a sequence of spikes, the spike train, into a single number called the firing rate, which represents the number of spikes generated by the neuron over a given time interval. The advantage of this abstraction is that the transfer function, which captures the relationship between input and firing rate, can be represented accurately by classes of functions with convenient mathematical properties; for example, they are continuously differentiable. One such class of functions is the sigmoid. For these model neurons, shown in Fig. 1b and often used in artificial neural network models, the firing rate encodes the information represented by the neuron, and all information related to spike timing, e.g., when a spike occurred relative to some other spike, is lost. In biological neural networks, the firing rates of a population of neurons are believed to be used for encoding movement direction. In the population vector response of the primate motor cortex, first observed in the monkey by Apostolos Georgopoulos, each neuron encodes a movement direction; the monkey's intended movement can be predicted from the sum of the neurons' direction vectors, each weighted by its relative firing rate.
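
The rate-based artificial neuron of Fig. 1b can be sketched in a few lines of Python: a synaptically weighted sum of input rates passed through a sigmoid transfer function. The particular weights and bias below are illustrative only.

import math

def sigmoid(x):
    """Continuously differentiable transfer function mapping input to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def rate_neuron(inputs, weights, bias):
    """Firing-rate output: sigmoid of the weighted sum of input rates."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(activation)

# Example: two excitatory inputs and one inhibitory input (negative weight).
print(rate_neuron([0.8, 0.3, 0.9], [1.5, 2.0, -1.0], bias=-0.5))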

In more recent years there has been considerable debate over whether firing rate or spike timing is the better coding and representation strategy. For example, humans can recognize familiar objects in 150 ms, which corresponds to less than 25 ms per processing stage between the retina and cortical recognition areas. The biology thus dictates that each processing stage must be capable of integrating and responding to the initial wave of arriving spikes without requiring additional processing iterations. The biology also indicates that the computation cannot depend on traditional rate coding, since a 40-Hz firing rate corresponds to 25 ms between spikes. One mechanism that allows sufficiently fast analog computation for recognition is "space-rate coding." In space-rate coding, stimulus information is encoded by the fraction of neurons in a population that are active within a short time window (e.g., 5 ms). Because the fraction of active neurons can change on a millisecond time scale, space-rate coding allows a rapid and high-resolution readout of network computations.
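
The following Python sketch illustrates the space-rate idea under a simplifying assumption of our own: each neuron spikes within the short window with probability equal to the (normalized) stimulus value, so the active fraction tracks the stimulus in a single window.

import random

def encode(stimulus, n_neurons=1000):
    """Population response in one short window: each neuron spikes with
    probability equal to the stimulus value (assumed normalized to 0..1)."""
    return [1 if random.random() < stimulus else 0 for _ in range(n_neurons)]

def decode(spikes_in_window):
    """Read out the stimulus as the fraction of neurons that were active."""
    return sum(spikes_in_window) / len(spikes_in_window)

# A single 5-ms window suffices for a graded (analog) readout.
window = encode(stimulus=0.37)
print(decode(window))  # close to 0.37 for a large population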
