This monograph offers a new perspective on how brains may use signal spikes to compute, perceive, and support cognition. It presents and simulates simple, novel numerical models of neurons and discusses the relationship between their performance and known neural behavior. Modest mathematical skills, combined with the included brief review of neural signaling, should enable most readers to understand the basic issues while skimming the more specialized portions.
Novel contributions include:
- A simple binary neuron model (the “cognon model”) that learns and recognizes complex spike excitation patterns in less than one second while requiring no unexpected neural properties or external training signals. An excitation pattern has no more than one spike per input synapse and typically lasts less than about 20 milliseconds.
- A Shannon mutual information metric (recoverable bits/neuron) that assumes: 1) each neural spike indicates only that the nearly simultaneous input excitation pattern responsible for it had probably been seen earlier while that neuron was “learning ready”, and 2) the information is stored in synapse strengths. This focus on recallable learned information differs from most prior metrics, such as pattern-classification performance or metrics relying on specific training signals other than the normal input spikes.
- Derivation of an equation for the Shannon metric that suggests such neurons can recall useful Shannon information only if their probability of firing randomly is lowered between learning and recall, where an increase in firing threshold before recall is one likely mechanism.
- Extensions and analyses of cognon models that also use spike timing, dendrite compartments, and new learning mechanisms in addition to spike-timing-dependent plasticity (STDP).
- Simulations that show how simple neuron models having between 200 and 10,000 binary-strength synapses can recall up to ~0.16 bits/synapse by optimizing various neuron and training parameters.
- Translations of these simulation results into estimates for parameters like the average bits/spike, bits/neuron/second, maximum number of learnable patterns, optimum ratios between the strengths of weak and strong synapses, and probabilities of false alarms (output spikes triggered by unlearned patterns).
- Concepts for how multiple layers of neurons can be trained sequentially in approximately linear time, even in the presence of rich feedback, and how such rich feedback might permit improved noise immunity, learning and recognition of pattern sequences, compression of data, associative or content-addressable memory, and development of communications links through white matter.
- A report of a seemingly novel and rare human waking visual anomaly (WVA) that lasts less than a few seconds and appears consistent with these discussions of spike processing and rich feedback; surprisingly, it also suggests video-like memories resembling prior observations of accelerated, time-reversed maze-running memories in sleeping rats.
- Discussions of the possible relationship between the new neural spike feedback model and new experiments in the initiation and termination of transient weak hallucinations while the subjects were aware of their true surroundings. Some of these experiments may have useful medical implications.
Few grand challenges are more daunting than the intertwined mathematical, neurological, and cognitive mysteries of the brain. This monograph addresses all three, but focuses primarily on the mathematical performance limits of neuron models that appear consistent with observations.
The most basic of these neuron models (the basic “cognon model”) utilizes only two well-known properties of cortical neurons: 1) they produce a ~1-millisecond, ~100-millivolt electrochemical output “spike” only when the synapse-weighted sum of their nearly simultaneous input spikes exceeds a variable “firing threshold”, and 2) the weight applied at each input terminal (the synapse strength) can increase long-term if that synapse is excited when the neuron fires. In addition, the only information conveyed by a cognon output spike is that the cognon was probably exposed to the responsible excitation pattern during a prior period of learning readiness.
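These two properties can be captured in a few lines of code. The following Python sketch is purely illustrative: the class name, the parameter values, and the two-level (weak/strong) synapse weights are our assumptions for the sketch, not the monograph's specification.

```python
class Cognon:
    """Minimal sketch of the basic binary cognon model: the neuron fires when
    the synapse-weighted sum of an excitation pattern exceeds its threshold,
    and synapses excited when it fires while "learning ready" are strengthened.
    All names and numeric values here are illustrative assumptions."""

    def __init__(self, n_synapses, threshold, weak=1.0, strong=2.0):
        self.weights = [weak] * n_synapses   # every synapse starts weak
        self.threshold = threshold
        self.strong = strong

    def fires(self, pattern):
        """pattern: indices of excited synapses (at most one spike each)."""
        return sum(self.weights[i] for i in pattern) >= self.threshold

    def learn(self, pattern):
        """While "learning ready": if the pattern makes the neuron fire,
        strengthen the synapses that contributed (an STDP-like rule)."""
        if self.fires(pattern):
            for i in pattern:
                self.weights[i] = self.strong
            return True
        return False


# Raising the firing threshold between learning and recall -- one mechanism
# the derivation suggests -- lets the taught pattern still fire while equally
# large unlearned patterns of weak synapses stay silent.
neuron = Cognon(n_synapses=100, threshold=10.0)
taught = set(range(10))      # a 10-spike excitation pattern
neuron.learn(taught)         # fires, so these 10 synapses become strong
neuron.threshold = 15.0      # raise threshold before recall
```

After the threshold is raised, `neuron.fires(taught)` remains `True` (the ten strong synapses sum to 20.0), while a disjoint 10-spike pattern of weak synapses sums to only 10.0 and does not fire, illustrating why lowering the probability of random firing before recall is essential to recalling useful information.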
Extensions of this basic cognon model allow precisely dispersed spike delays within a single pattern, multiple dendrite compartments, and alternative learning rules for synaptic strength. Because real neurons are far more complex, they should perform better than cognons. However, neuron model performance must be defined in a neurologically relevant way before it can be quantified and optimized. The Shannon information metric for the recallable taught information follows directly from our simplifying assumption that a spike implies only that a cognon’s responsible current excitation pattern resembles one that it saw when it was “learning ready.” This information metric (recallable bits/neuron) is simply the mutual information between the ensembles of taught and recalled patterns, and it depends on the teaching strategy. It differs substantially from most prior information measures, which do not restrict the metric to recallable learned information.
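In standard information-theoretic notation (our symbols, not necessarily those used later in the monograph), this metric is the mutual information between the ensemble $T$ of taught patterns and the ensemble $R$ of recalled responses:

```latex
I(T;R) \;=\; \sum_{t,\,r} p(t,r)\,\log_2\!\frac{p(t,r)}{p(t)\,p(r)}
\qquad \text{[bits/neuron]}
```

The dependence on teaching strategy enters through the joint distribution $p(t,r)$, since the teaching strategy determines which patterns are presented during learning readiness and hence which patterns later elicit output spikes.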
The ultimate test of this modeling strategy is whether these rudimentary neuron models can successfully recall large amounts of information when scaled up to reflect neurologically expected firing thresholds, numbers of synapses per neuron, spreads in synaptic strength, and spike frequencies. Our result of ~0.16 bits/synapse for cognons with up to 10,000 synapses rivals or exceeds most estimates obtained for other neuron models, despite our minimalist and arguably plausible assumptions. Real neurons can probably perform even better, since evolution has produced great neuron complexity involving many special-purpose neurotransmitters and other components. This physical complexity presumably evolved to enable neurons and neural networks to utilize optimum signal processing techniques that we have yet to identify.
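For a sense of scale, combining the two figures quoted above (the per-synapse recall rate and the largest simulated synapse count) gives, per neuron:

```latex
0.16\ \tfrac{\text{bits}}{\text{synapse}} \times 10{,}000\ \text{synapses}
\;\approx\; 1600\ \text{recallable bits}
```

This is only an order-of-magnitude illustration; the achievable figure depends on the neuron and training parameters being optimized as described above.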
With this encouraging performance of ~0.16 bits/synapse, we then extended the model to include substantial signal feedback and to show how feedback could arguably facilitate signal detection, storage, and content-addressable memory functions for both static and time-sequential patterns. This discussion then presents an apparently newly observed but very rare “waking visual anomaly” (WVA) that typically lasts less than a few seconds and seems consistent with the feedback model for spike processing. Most intriguing is WVA evidence that humans can recall movies played backward at high speed, as do some rats when dreaming of running mazes (Davidson, Kloosterman & Wilson, 2009). Additional evidence from traditional hallucinations is also considered, along with potential medical applications.
This monograph represents the product of the lead author’s lifetime interest in brain function, supported by the second author’s development and use of a series of spike-processing neural simulators. The hope of the authors is that the results and insights presented here will motivate and enable others to design new neuroscience experiments and to explore and extend these spike-processing neuron models at both the neuron and system level so as to improve understanding of brain function in ways that lead to useful medical and educational applications.
David H. Staelin and Carl H. Staelin
October 25, 2011