Abstract

This monograph addresses in a new way how brains may use signal spikes to compute, perceive, and support cognition. It presents and simulates simple novel numerical models of neurons and discusses the relationship between their performance and known neural behavior. Modest mathematical skills, combined with the brief review of neural signaling included here, should enable most readers to understand the basic issues while skimming the more specialized portions.

Novel contributions include:

- A simple spike-processing “cognon” neuron model based on only two well-established properties of cortical neurons, together with extensions that add dispersed spike delays, multiple dendrite compartments, and alternative synaptic learning rules.
- A neurologically relevant performance metric: the Shannon mutual information between the ensembles of taught and recalled excitation patterns (recallable bits per neuron).
- Simulation results of roughly 0.16 recallable bits per synapse for cognons with up to 10,000 synapses.
- A feedback extension of the model suggesting how spikes could support signal detection, storage, and content-addressable memory for both static and time-sequential patterns, together with apparently new observations of rare “waking visual anomalies” (WVA) consistent with that model.

Preface

Few grand challenges are more daunting than the intertwined mathematical, neurological, and cognitive mysteries of the brain. This monograph addresses all three, but focuses primarily on the mathematical performance limits of neuron models that appear consistent with observations.

The most basic of these neuron models (the basic “cognon model”) utilizes only two well-known properties of cortical neurons: 1) they produce a ~1-millisecond, ~100-millivolt electrochemical output “spike” only when their synapse-weighted input spikes simultaneously sum to exceed a variable “firing threshold” for the instantaneous excitation pattern, and 2) the weight applied by each input terminal (the synapse strength) can increase long-term if that terminal is excited when the neuron fires. In addition, the only information conveyed by a cognon output spike is that the cognon was probably exposed to the responsible excitation pattern during a prior period of learning readiness.
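
As an illustration only, the short Python sketch below mimics these two properties; the class, parameter names, and default values (n_synapses, threshold, strength_boost) are our own assumptions rather than the authors’ simulator.

    import numpy as np

    class BasicCognon:
        """Minimal sketch of the basic cognon model: fire when the synapse-weighted
        sum of simultaneous input spikes exceeds a firing threshold, and strengthen
        the excited synapses whenever the neuron fires while it is learning ready."""

        def __init__(self, n_synapses=100, threshold=10.0, strength_boost=1.5):
            self.weights = np.ones(n_synapses)      # initial synapse strengths
            self.threshold = threshold              # firing threshold
            self.strength_boost = strength_boost    # long-term strengthening factor
            self.learning_ready = False             # synapses change only while learning ready

        def expose(self, excitation):
            """excitation: boolean array marking which synapses receive simultaneous spikes.
            Returns True if the cognon emits an output spike."""
            fires = self.weights[excitation].sum() > self.threshold
            if fires and self.learning_ready:
                self.weights[excitation] *= self.strength_boost   # strengthen the excited synapses
            return fires

During teaching, learning_ready would be set True while excitation patterns are presented; during recall it would be set False, and a pattern would be deemed recognized if it still elicits a spike.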

Extensions of this basic cognon model allow precisely dispersed spike delays within a single pattern, multiple dendrite compartments, and alternative learning rules for synaptic strength. Because real neurons are far more complex, they should perform better than cognons. However, neuron model performance must be defined in a neurologically relevant way before it can be quantified and optimized. The Shannon information metric for the recallable taught information follows directly from our simplifying assumption that a spike implies only that the current excitation pattern responsible for it resembles one that the cognon saw when it was “learning ready.” This information metric (recallable bits/neuron) is simply the mutual information between the ensembles of taught and recalled patterns and depends on the teaching strategy. It differs substantially from most prior information measures, which do not restrict the metric to recallable learned information.
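
In standard form (our notation, not necessarily that used later in the monograph), this metric is the mutual information between the taught-pattern ensemble T and the recalled-pattern ensemble R:

    I(T;R) = \sum_{t,r} p(t,r) \log_2 \frac{p(t,r)}{p(t)\,p(r)}   [recallable bits per neuron]

Because p(t,r) reflects which patterns were presented during learning readiness and which later elicit spikes, the value of this metric depends on the teaching strategy, as noted above.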

The ultimate test of this modeling strategy is whether or not these rudimentary neuron models can successfully recall large amounts of information when scaled up to reflect neurologically expected firing thresholds, numbers of synapses per neuron, spreads in synaptic strength, and spike frequencies. Our result of ~0.16 bits/synapse for cognons with up to 10,000 synapses rivals or exceeds most estimates obtained for other neuron models, despite our minimalist and arguably plausible assumptions. Real neurons can probably perform even better, since evolution has produced great neuronal complexity involving many special-purpose neurotransmitters and other components. This physical complexity presumably evolved to enable neurons and neural networks to utilize optimal signal-processing techniques that we have yet to identify.
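
For scale, taking the ~0.16 bits/synapse figure at face value, a cognon with 10,000 synapses would correspond to roughly 0.16 × 10,000 ≈ 1,600 recallable bits per neuron.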

With this encouraging performance of ~0.16 bits/synapse, we then extended the model to include substantial signal feedback and to show how feedback could arguably facilitate signal detection, storage, and content-addressable memory functions for both static and time-sequential patterns. This discussion then presents an apparently newly observed but very rare “waking visual anomaly” (WVA) that typically lasts less than a few seconds and seems consistent with the feedback model for spike processing. Most intriguing is WVA evidence that humans can recall movies played backward at high speed, as do some rats when dreaming of running mazes (Davidson, Kloosterman, & Wilson, 2009). Additional evidence from traditional hallucinations is also considered, along with potential medical applications.

This monograph is the product of the lead author’s lifetime interest in brain function, supported by the second author’s development and use of a series of spike-processing neural simulators. The authors hope that the results and insights presented here will motivate and enable others to design new neuroscience experiments and to explore and extend these spike-processing neuron models at both the neuron and system levels, thereby improving understanding of brain function in ways that lead to useful medical and educational applications.

David H. Staelin and Carl H. Staelin
October 25, 2011