Locality is a property of neurons that describes how well a neuron is spatially integrated into its neural network. The term is not used seriously in neuroscience; it matters only for the theory-based information-science approach to understanding neural processing. Neurons close to sensory inputs and responsible for sensory processing tend to be more localized than neurons used for associative/thought processing.

A neuron's fanout is the number of other neurons that receive a signal when it fires. CNS neurons usually have a fanout between 1,000 and 10,000, though order-of-magnitude differences in either direction are possible. Neural length is how spatially separated the neuron's ends are: whether the fanout is onto neurons spatially close to the neuron body or far away. Locality is simply the fanout multiplied by the length, that is, how well connected over space the neuron is to the network it's a part of.

For instance, a neuron that starts near the cortex, goes through the white matter, and rejoins the cortex at another point might have a length of 0.08 meters and a fanout of only 400, giving a locality of 32. In comparison, a neuron used for processing on the cerebral cortex could be only 0.008 meters long but relay action potentials to 9,000 other neurons, for a locality of 72. By measuring locality we can tell that even though the cortical neuron is dramatically shorter than the connective one, it is more than twice as well spatially integrated.
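The arithmetic above can be sketched directly; this is a minimal illustration using the figures from the example, with the function name `locality` as my own label for the fanout-times-length product:

```python
def locality(fanout: int, length_m: float) -> float:
    """Locality metric: fanout multiplied by neural length (in meters)."""
    return fanout * length_m

# Connective neuron through the white matter: 0.08 m long, fanout of 400
connective = locality(400, 0.08)    # ~32

# Short cortical processing neuron: 0.008 m long, fanout of 9,000
cortical = locality(9000, 0.008)    # ~72

# The cortical neuron is more than twice as well spatially integrated
print(round(cortical / connective, 2))
```

Running this prints 2.25, matching the "more than twice" comparison in the example.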

When describing purely theoretical neural networks, the length is just an arbitrary number that is consistent across all of the neurons in the network. This doesn't matter for computing locality, which is most useful as a ratio rather than as a standard measurement, so it needn't rely on any particular quantum of length.
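The unit-independence claim can be checked with a short sketch: scaling every length by the same factor scales every locality by that factor, so pairwise ratios are unchanged. The neuron figures reused here are the ones from the earlier example; the scale factors are illustrative assumptions (e.g. meters versus millimeters):

```python
def locality(fanout: int, length: float) -> float:
    """Locality: fanout multiplied by length, in whatever unit is chosen."""
    return fanout * length

# (fanout, length) pairs from the earlier example
neurons = [(400, 0.08), (9000, 0.008)]

# Recompute the locality ratio under two different length units
for scale in (1.0, 1000.0):  # e.g. meters vs. millimeters (assumed units)
    a, b = (locality(f, length * scale) for f, length in neurons)
    print(round(b / a, 2))  # the ratio is the same regardless of scale
```

Both iterations print 2.25: the choice of length unit cancels out of the ratio.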