A Universal Model of Single-Unit Sensory Receptor Action
KENNETH H. NORWICH AND WILLY WONG
Institute of Biomedical Engineering, Department of Physiology, and Department
of Physics, University of Toronto, Toronto, Ontario, Canada M5S 1A4
Received 22 January 1994; revised 20 April 1994
ABSTRACT
We present a model governing the operation of all "isolated" sensory receptors
with their primary afferent neurons. The input to the system is the intensity of a
sensory signal, be it chemical, optical etc., and the output is the rate of neural
impulse transmission in the afferent neuron. The model is based on the premise that
information is conserved when transmitted by an efficient sensory channel. Earlier
studies on this informational or entropic model of receptor function were constrained to the cases where the sensory stimulus assumed the form of an infinite step
function. The theory was quite successful in treating mathematically all responses to
such elementary sensory stimuli--both neural, and by extension, psychophysical. For
example, the theory unified the logarithmic and power laws of sensation. However,
the earlier model was incapable of predicting responses to time-varying stimuli. The
generalized model, which we present here, accounts for both steady and time-varying
signals. We show that more intense stimuli are remembered by sensory receptors for
longer periods of time than are less intense stimuli.
1. INTRODUCTION
We propose here an approach to modeling the function of a single, isolated sensory receptor and its primary afferent neuron. Ultimately, we intend the model to hold universally. That is, it is intended to be of sufficient generality to govern all sensory receptors--chemoreceptors, stretch receptors, light receptors, etc.--in all organisms. It is to be capable of predicting quantitatively or semi-quantitatively the neural outputs corresponding to all possible single stimulus inputs, where the stimulus takes the form of a signal intensity below the physiological saturation level. Moreover, the model is to be capable of handling stimuli which are either steady or time-varying in nature. It must be capable of simulating with substantial fidelity all empirical or phenomenological laws, all rules of thumb, and all experimental data ever collected dealing with the single, dissociated sensory unit (receptor +
primary afferent neuron). Data of the latter type have been collected
either by dissecting out isolated receptor-neuron preparations, or by
isolating single units in vivo as well as was possible. Clearly, it is not to
be a model of the mechanism of action of the receptor because, for
example, the mechanism of olfaction is very different from the mechanism of hearing. The model utilizes a property that all sensory receptors
share, the capacity to transmit information. The model is, therefore, a
model of information transmission.
All modalities of sensation conduct information from the outside world to the nervous system of the sensing organism. The amount of information received cannot exceed that which is transmitted, and in a highly efficient sensory receptor, the two quantities will be equal. That is, at maximum efficiency,

Information transmitted by the stimulus = Information received by the system.    (1)
We utilize the principle that transmission of information cannot
occur instantaneously, but takes place over a finite period of time. We
also utilize Shannon's doctrine that information represents uncertainty
or entropy which has been dispelled. That is, immediately following the
start of a sensory stimulus, the receptor is maximally uncertain about
the intensity of its stimulus signal and, hence, has received very little
information. As time proceeds, the receptor loses uncertainty (that is,
decreases its information theoretical entropy), and, therefore, gains
information. Explicitly,
Entropy = monotone decreasing function of stimulus duration. (2)
We regard the stimulus as a signal with clearly defined statistical
properties. For example, for the chemical senses, the stimulus is the
density of a gas or the concentration of a solution, which, on the
microscopic scale, will fluctuate in accordance with the laws of statistical mechanics.
Entropy or uncertainty tends to increase with stimulus signal variance. That is, variance, which represents the spread of intensity values
of the stimulus, lends uncertainty to the stimulus and, hence, increases
stimulus entropy; that is,
Entropy = monotone increasing function of stimulus variance. (3)
Since pure stimulus signal is always compounded with "noise," the
combined stimulus received by the receptor is, in fact, a convolution of
the pure stimulus with noise. For stimuli which are governed by the Gaussian or by the Poisson distributions, the variances of signal $\sigma_S^2$ and of noise $\sigma_R^2$ will simply add. That is,

Total variance = \sigma_R^2 + \sigma_S^2.    (4)

When the signals are Gaussian or normal to a close approximation, the entropy or uncertainty is found simply from the expression

Original entropy = \tfrac{1}{2}\ln\!\left[2\pi e\left(\sigma_R^2 + \sigma_S^2\right)\right],    (5)

where e is the base of natural logarithms, as shown by Shannon [16]. The reduction in uncertainty accompanying the process of sensory perception may be expressed as the difference between the original entropy, given above, and the residual entropy of the noise:

Residual entropy = \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma_R^2\right).    (6)

Subtracting the residual from the original entropy, given above, we obtain for the entropy H of the Gaussian distribution,

H = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R^2 + \sigma_S^2}{\sigma_R^2}\right).    (7)
The above ideas were put forth in more detail by Norwich [11, 12]. When the stimulus signal is small, the Poisson distribution may be more appropriate than the Gaussian. However, there is no simple expression for the entropy of the Poisson distribution. When summing the necessary series, we find that for small signals we obtain an expression of the form

H_{\mathrm{Poisson}} = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R'^2 + \sigma_S^2}{\sigma_R^2}\right),    (8)

where

\sigma_R'^2 \geq \sigma_R^2.    (9)

The symbols are defined in Appendix 1 and these equations are demonstrated in Appendix 2. Equations (7) and (8) incorporate the idea of (3)--that entropy is a monotone increasing function of stimulus intensity--but not yet of (2). The effect of stimulus duration has yet to be built into (7) and (8).
We take the receptor to be sampling its stimulus signal. If the receptor regards its stimulus as providing a stationary time series of signal amplitudes over some interval of time during which it has made m samplings of the stimulus intensity, then, by the central limit theorem, the variance of the mean stimulus will be $\sigma_S^2/m$. So (7) will become

H = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R^2 + \sigma_S^2/m}{\sigma_R^2}\right)    (10)

or, for signals of low intensity,

H = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R'^2 + \sigma_S^2/m}{\sigma_R^2}\right).    (11)

We note that the noise or "reference" variance $\sigma_R^2$ is not depreciated or reduced by the central limit theorem in the same manner as the signal variance. This property is best regarded as an assumption of the model.
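As a concrete numerical illustration of (10), the short Python sketch below evaluates H for a fixed noise variance and signal variance as the number of samplings m grows; the particular numerical values are illustrative assumptions and are not taken from the paper.

```python
import math

def entropy_after_m_samplings(var_noise, var_signal, m):
    """Eq. (10): H = (1/2) ln[(sigma_R^2 + sigma_S^2 / m) / sigma_R^2]."""
    return 0.5 * math.log((var_noise + var_signal / m) / var_noise)

# Illustrative values only: unit noise variance, signal variance ten times larger.
for m in (1, 2, 5, 10, 100):
    print(m, round(entropy_after_m_samplings(1.0, 10.0, m), 4))
# The printed entropy decreases monotonically as the receptor accumulates samplings.
```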
The fundamental assumption of the entropy theory of sensation is that the impulse frequency F in the primary afferent neuron issuing from a sensory receptor is in direct proportion to the entropy H, that is,

F = kH.    (12)

Thus, (10) gives rise to

F = \tfrac{1}{2}k\ln\!\left(\frac{\sigma_R^2 + \sigma_S^2/m}{\sigma_R^2}\right).    (13)

If Equation (13) is to be of use in the analysis of experimental data, its principal independent variables $\sigma_S^2$ and m must be related to quantities that can be measured experimentally. We shall treat $\sigma_R^2$ as a constant parameter.
For physical signals (such as energy, pressure, concentration, and density), by and large, the variance $\sigma_S^2$ can be related to the mean intensity I using a relation of the form

\sigma_S^2 \propto I^n.    (14)

For example, the molecular density of a dilute gas is a fluctuating quantity governed by the Poisson distribution, whose variance is equal to its mean, that is,

\sigma_S^2 = I,    (15)
which is an instance of (14) with n = 1. The mean-variance relationship
is discussed at some length by Norwich [10]. Please see also Appendix 3.
In the absence of any definitive knowledge to the contrary, the
number of samplings m made by a receptor of its stimulus signal in
time t was taken to be a linear function of t, that is,
m=at.
(16)
With $\sigma_R^2$ held constant, (14) and (16), when introduced into (13), produce the seminal equation

F = \tfrac{1}{2}k\ln\!\left(1 + \beta' I^n / t\right),    (17)

where $t \geq t_0$ = time required for one sampling, and $\beta'$ is a constant parameter of unknown value but greater than zero. We refer to (17) as the elementary or restricted entropy equation of sensation. Equation (17) is valid, strictly speaking, only when the sensory signal is applied as a step function, that is, as a constant, steady signal.
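The elementary entropy equation (17) is straightforward to evaluate. The sketch below, with illustrative parameter values only (the values of k, $\beta'$, and n are assumptions, not fitted values from the paper), computes F for a steady stimulus of intensity I observed for a time $t \geq t_0$.

```python
import math

def firing_rate_elementary(I, t, k=30.0, beta_prime=1.0, n=1.0):
    """Elementary entropy equation (17): F = (k/2) ln(1 + beta' * I^n / t)."""
    return 0.5 * k * math.log(1.0 + beta_prime * I**n / t)

# Adaptation: for a fixed intensity, F falls as the observation time t grows.
for t in (0.1, 0.5, 1.0, 5.0, 20.0):
    print(t, round(firing_rate_elementary(I=2.0, t=t), 2))
```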
2. UTILITY OF THE ENTROPY EQUATION
This equation has enjoyed a certain success in describing neural
phenomena (as well as behavioral effects). For a full description of the
capabilities of the elementary equation, the reader is referred to Norwich [10, pp. 248-249], where a detailed "genealogy chart" is shown.
However, some of the simpler properties of this equation will be shown
very briefly below for two types of experiment called Type A and Type B.
2.1. TYPE A EXPERIMENTS

For signals applied to the sensory receptor for a constant interval of time t', we can condense $\beta'/t'$ into a single constant $\gamma$ so that (17) becomes

F = \tfrac{1}{2}k\ln\!\left(1 + \gamma I^n\right).    (18)

When $\gamma I^n \gg 1$, (18) can be approximated closely by

F = \tfrac{1}{2}kn\ln I + \tfrac{1}{2}k\ln\gamma,    (19)
which we recognize as Fechner's law. This law, while stated originally for behavioral experiments, has been used to describe impulse
frequency in a primary afferent since the paper of Hartline and
Graham [6].
When $\gamma I^n \ll 1$, then (18) may be represented by its Taylor expansion, which, retaining terms of the first order only, becomes

F = \tfrac{1}{2}k\gamma I^n.    (20)

We recognize here the "power law of sensation," championed by S. S. Stevens, and used by many people to describe sensory data [15].
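A brief numerical check of these two limiting forms can be made directly from (18). The sketch below compares the exact expression with the Fechner-type approximation (19) at large $\gamma I^n$ and with the Stevens-type power approximation (20) at small $\gamma I^n$; the parameter values are illustrative assumptions only.

```python
import math

k, gamma, n = 20.0, 1.0, 0.8   # illustrative values only

def F_exact(I):
    return 0.5 * k * math.log(1.0 + gamma * I**n)                  # Eq. (18)

def F_fechner(I):
    return 0.5 * k * n * math.log(I) + 0.5 * k * math.log(gamma)   # Eq. (19), gamma*I^n >> 1

def F_stevens(I):
    return 0.5 * k * gamma * I**n                                  # Eq. (20), gamma*I^n << 1

print(F_exact(1e3), F_fechner(1e3))    # near agreement at high intensity (logarithmic regime)
print(F_exact(1e-3), F_stevens(1e-3))  # near agreement at low intensity (power-law regime)
```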
2.2. TYPE B EXPERIMENTS

Consider now a single, constant stimulus signal I held in place for a variable period of time. Then (17) can be simplified by setting $\beta' I^n = \lambda'$ = constant, so that

F = \tfrac{1}{2}k\ln\!\left(1 + \lambda'/t\right).    (21)

In this form, the elementary equation describes the process of sensory adaptation: as t increases, F falls progressively.
The informational underpinning of (21) is clear: As the sensory
neuron makes repeated samplings of its stimulus signal, its uncertainty
about the mean signal decreases, and hence the entropy H decreases.
H is "mapped" onto the biological variable F (12), and, hence, we
observe the impulse rate to fall.
Equation (21) predicts that F will continue to fall as t increases
without limit. If such were the case, all receptor-neuron units would
adapt completely, which is not always observed. We have salvaged (21)
in the past by setting tmax, a maximum possible value for t, corresponding to mmax, the greatest number of samplings that can be recorded in
the local memory of the receptor. This device has been quite successful,
and (21) has been used to describe many types of adaptation data.
However, in our new, generalized equation to be developed below, we
refine this model of local receptor memory.
Most striking is the ability of the single equation (17) to fit two or
three sets of adaptation data derived from the same receptor-neuron
preparation, with a single set of three parameter values (see data in [3, 5,
9] as analyzed in Norwich [10, Figs. 11.5, 11.6, and 11.7; p. 182 et seq.]).
The fit of the data in [9] is reproduced here in Figure 1. This unifying
power of (17) was shown by Norwich and McConville [13].
Perhaps the most interesting feature of (17) is its capability of
predicting the amount of information transmitted by a sensory stimulus.
From (12) and (21) it is seen that
H(t) = \tfrac{1}{2}\ln\!\left(1 + \lambda'/t\right),  t \geq t_0.    (22)
FIG. 1. The data of Matthews [9] are fitted with the classical F-equation (17). The three sets of data correspond to the response of a small frog muscle to steady loads of varying weight (5 g, 2 g, and 1 g). The most important feature of this figure is that a common set of three parameters, k, $\beta'$, and n, was used to fit three sets of data from the same experimental preparation, or an average of one parameter per curve.
Since $t_{\max}$ is the time taken by the neuron to adapt to its equilibrium level, the reduction in entropy over this time interval is given by $H(t_0) - H(t_{\max})$. But from the definition of information, the reduction in entropy is equal to the gain in information. So the information gained from the stimulus is given by

J = \tfrac{1}{2}\ln\!\left[\frac{1 + \lambda'/t_0}{1 + \lambda'/t_{\max}}\right].    (23)
Since adaptation data can be curve-fitted to (21), we can obtain
values for the parameter A', and in this way measure the information
transmitted.
From such measurements, the transmitted information is almost
always found to be in the order of 2.0-2.5 bits per stimulus, identical to
the values obtained by behavioral studies using measurements of categorical judgments.
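The information estimate of (23) is easy to reproduce numerically. The sketch below uses illustrative values of $\lambda'$, $t_0$, and $t_{\max}$ (assumed here, not fitted values from any of the cited experiments) and converts the result from natural units to bits.

```python
import math

def information_gained(lam_prime, t0, t_max):
    """Information J of Eq. (23), in natural units (nats)."""
    return 0.5 * math.log((1.0 + lam_prime / t0) / (1.0 + lam_prime / t_max))

# Illustrative parameters only.
J_nats = information_gained(lam_prime=2.0, t0=0.1, t_max=20.0)
print(J_nats / math.log(2.0), "bits")   # roughly 2 bits with these assumed values
```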
There are many other applications of the elementary form of the
entropy equation which may be found in the literature.
3. LIMITATION OF THE ELEMENTARY MODEL
In principle, a completely adequate model of receptor-neuron function should serve as a "transfer function," whereby the neural output
frequency will be totally predictable by specifying the sensory input
signal as a function of intensity and time. However, the elementary
equation (17), for example, was derived only for the special case of a
constant stimulus signal and will not give valid results when I varies
with time. Let us call this Restriction 1. This restriction has prevented
our use of the entropy model to describe various simple laboratory
experiments that utilize, for example, two sequential step inputs in
signal. Neither will the elementary model describe the neural output
when a step input is terminated abruptly (i.e., a positive step input is
followed immediately by zero input). It will not describe the process of
"de-adaptation" during a period of sensory quiescence.
A second limitation of the elementary model is the need to define a value for $t_{\max}$, representing $m_{\max}$, the greatest number of samplings that the receptor memory can retain. If we fail to provide some finite value for $t_{\max}$, then all adaptation processes governed by (21) would proceed to extinction (i.e., F would approach zero as $t \to \infty$). Let us call this Restriction 2.
Any generalization of the elementary model to allow for time-varying
stimuli (relaxation of Restriction 1), or incomplete adaptation (relaxation of Restriction 2), must leave the basic operation of the elementary
model intact. That is, the generalization must retain the ability of the
parent model to account quantitatively for the dozens of sensory phenomena tabulated in the "genealogy chart." Moreover, the generalized
model equations must reduce to the elementary model equations when the signal is constant and when t is small ($t \ll t_{\max}$).
The above are severe requirements, and we made many attempts
over a period of years to find a suitable generalization. In the following
section, we shall review briefly two of our earlier attempts at generalization, which laid the groundwork for our current model.
4. THE LINEAR AND ASSEMBLY-LINE MODELS

4.1. LINEAR MODEL
A simple, linear systems analysis of the problem must fail because
only under certain special conditions will the log function governing
information transmission reduce to a linear function of I, that is, no
simple transfer function can be completely successful although this
approach has been tried by others (e.g., [18]). Nonetheless, since we
know the appropriateness response of the sensory system to brief step
inputs, we tried analyzing the general input into Walsh functions, which provide an orthonormal basis (similar to Fourier analysis), and superimposing the corresponding neural output signals. This approach gave results that were approximately correct for time-varying input signals, thus obviating Restriction 1 in part, but did nothing for Restriction 2.
4.2. ASSEMBLY-LINE MODEL
We then decided to relax our requirement for mathematical rigor and formulated a fanciful model of receptor memory, which we called the Assembly-Line model or just Box model. We imagined the receptor memory to consist of p boxes arrayed in series as in Figure 2a. Each box would hold the result of one sampling by the receptor of its stimulus signal.
FIG. 2. (a) Each box in this figure represents a single "memory unit" which would
hold the result of one sampling by the receptor of the stimulus. In this case, there
are p such boxes. For convenience, we have numbered the boxes from left to right.
(b) With each new sampling, the contents of the boxes are displaced one box to the
right. The contents of the pth box are removed and forgotten; box 1 now contains
the latest sampling.
At each sampling time, new values would be added at the left-hand box; the number previously stored in each box would move
one box to the right; and the number in the right-hand box would vanish
as in Figure 2b. We could introduce time-varying stimuli ad libitum on
the left-hand side and calculate the receptor output using (13). Remember, though, we must retain continuity with the elementary model. The
problem was to obtain a value for m, which represents the number of
samplings made and retained by the receptor.
We found, for a given input, that the output H (or F/k) qualitatively
matched the output that would be actually observed in the laboratory
when the value of m was taken to be equal to the number of boxes occupied
by samples taken of the most intense signal. We termed this value of m
the depreciation of the stimulus because its use in (10) or (13) was to
decrease or depreciate the variance of the stimulus signal. In effect, the
depreciation represented a change in local receptor memory that was
governed by the stimulus intensity.
The results of the Assembly-Line model were quite encouraging
because they predicted the correct trend of the neural output. But a
trend is qualitative and we were after quantitative predictions. For
example, the Assembly-Line model predicted that if a constant stimulus
of low intensity is applied for a long time and then is suddenly removed
and replaced by a signal of low or zero intensity, the neural frequency
will drop transiently to near zero. This rather natural effect can be seen
easily from (13): setting $\sigma_S^2 = 0$ gives $F = \tfrac{1}{2}k\ln 1 = 0$. In the recovery or de-adaptation phase that follows, the Assembly-Line model predicts that F
will rise with time, making a curve that is concave upwards, as the
reader can verify by working through the mathematical details. This
effect can also be seen with reference to (13): The depreciation m
becomes progressively smaller, as more and more boxes become filled
with samples from the less intense stimulus. Remember that depreciation is equal to the number of boxes containing samples of the greatest
stimulus.
However, appeal to experiment shows us that the recovery curve is
convex upwards as in Figure 3.
It was clear that m, which is the effective local memory of the
sensory receptor, did seem to change with the intensity of the stimulus,
but the rate of change of m must be initially greater than that given by
the Assembly-Line model so that we obtain an upwards-convex curve rather than a concave one.
5. THE RELAXATION MODEL
In this model, as in the previous one, the variable m represents the
number of samplings made that are retained by the receptor, that is, m
is a measure both of the number of samplings and of the occupied portion of the receptor memory.

FIG. 3. This figure demonstrates the difference between the recovery curve as predicted by the Assembly-Line model (dotted line), and the recovery curve as observed experimentally (solid line).
We refined the Assembly-Line model by introducing m as the solution to a linear differential equation. In this model, each stimulus intensity is associated with a local receptor memory m(t), which "relaxes" toward its equilibrium value $m_{\mathrm{eq}}$, in accordance with the equation

\frac{dm}{dt} = -a\,(m - m_{\mathrm{eq}}),  a > 0.    (24)

The value of $m_{\mathrm{eq}}$ will be a monotonically increasing function of intensity. Initially, we shall consider only sequences of step functions in intensity so that $m_{\mathrm{eq}}$ takes on discrete, constant values. With $m_{\mathrm{eq}}$ taken as constant, we can easily solve (24):

m(t) = m(0)\,e^{-at} + m_{\mathrm{eq}}\left(1 - e^{-at}\right),    (25)

where m(0) is the receptor memory at the beginning of a new stimulus step function. Later, we shall couple $m_{\mathrm{eq}}$ to intensity I and let I change with time.
For the case where the receptor is initially de-adapted (or adapted to a very small stimulus), m(0) = 0 and (25) becomes

m(t) = m_{\mathrm{eq}}\left(1 - e^{-at}\right).    (26)

In retrospect, it took rather a long time to realize that the simple equation (26) was the one that permitted the generalization of the entropy equation (17) to include the case of multiple input stimuli. Introducing m(t), from (26), into (13) in place of (16), we emerge with

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\beta'' I^n}{m_{\mathrm{eq}}\left(1 - e^{-at}\right)}\right),    (27)

where $\beta''$ is constant. This is our new, generalized entropy equation for the single, step stimulus, replacing the previous (17).
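To see how (25)-(27) behave for a single step of intensity I, the fragment below evaluates m(t) analytically and then the firing rate F(t). All parameter values, including the power-law dependence of $m_{\mathrm{eq}}$ on I introduced in (28) below with $m_{\mathrm{eq}}^0 = 1$ and q = 1, are illustrative assumptions rather than fitted values.

```python
import math

def memory(t, m0, m_eq, a):
    """Relaxation of the local receptor memory, Eq. (25)."""
    return m0 * math.exp(-a * t) + m_eq * (1.0 - math.exp(-a * t))

def firing_rate(t, I, m0=0.0, k=30.0, beta2=1.0, m_eq0=1.0, q=1.0, n=1.0, a=0.1):
    """Generalized entropy equation (27): F = (k/2) ln(1 + beta'' I^n / m(t))."""
    m_eq = m_eq0 * I**q                      # equilibrium memory grows with intensity
    return 0.5 * k * math.log(1.0 + beta2 * I**n / memory(t, m0, m_eq, a))

for t in (1.0, 5.0, 20.0, 100.0):
    print(t, round(firing_rate(t, I=2.0), 2))   # adapts toward a nonzero plateau
```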
We can now inspect (27) to see if it retains its "heritage," so to
speak. That is, does (27) reduce to (17) for single, brief stimuli?
Since $m_{\mathrm{eq}}$ increases monotonically with I, we might represent it by the power function

m_{\mathrm{eq}} = m_{\mathrm{eq}}^0\, I^q,    (28)

where q is constant and, from our experience with the Assembly-Line model, greater than zero. If I is a function of time, $m_{\mathrm{eq}}$ will also be a function of time. Introducing (28) into (27) gives, finally,

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\beta I^p}{1 - e^{-at}}\right),    (29)

where $\beta$ and p are constants.
For stimuli held for constant duration t', (29) again retrieves the two
fundamental laws of sensation given by (19) and (20).
When we consider a single stimulus held for a variable time t, (29) reduces to

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\lambda}{1 - e^{-at}}\right).    (30)

Expanding the exponential $e^{-at}$ in a Taylor series, we have by approximation for small t,

1 - e^{-at} \approx at,    (31)
so that (30) becomes identical to (21). Thus the generalized equation
does reduce to its more restricted forerunner.
We have, in fact, run (27) through each of the sensory functions in
the "genealogy chart" cited above. Although the mathematics becomes
more complicated, the generalized equation (30) seems to carry out
each function of the more elementary equation (17).
The generalized equation does overcome both Restrictions 1 and 2. It overcomes Restriction 1 by permitting us to use multiple step inputs in succession, rather than just a single step input, or we may permit I to vary continuously as a function of time. Moreover, it overcomes Restriction 2, because when t becomes large (specifically in (30)), $e^{-at} \to 0$, so that

F \to \tfrac{1}{2}k\ln(1 + \lambda) = \text{constant}.    (32)

That is, we can now allow explicitly for the property of incomplete adaptation. It is no longer necessary to restrict our entropy function to $t \leq t_{\max}$.
The extent to which the generalized equation is capable of simulating
experimental data will now be shown.
6. SIMULATIONS USING THE NEW EQUATION
We can generalize (27) to include the case of multiple, successive step inputs of different amplitudes, by introducing (25):

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\beta'' I^n}{m(0)\,e^{-at} + m_{\mathrm{eq}}\left(1 - e^{-at}\right)}\right).    (33)
Using this equation, we can simulate the cases of successively applied
step functions in stimulus intensity. For these simulations, we avail
ourselves of the apparent spontaneous firing rate that occurs when
$I \approx 0$ as a consequence of the entropy of the Poisson distribution
(Appendix 2).
6.1. CASE (A): Stimulation of the receptor with three stimuli of intensities 1, 2, and 5 units, permitting complete de-adaptation between stimuli

Inserting values of k, $\beta''$, n, and a of 32, 1.4, 1.0, and 0.080, respectively, into (33), we obtain the three graphs shown in Figure 4. We find that the simulated curves match the type of experimental curves shown by Matthews [9], Bohnenberger [3], and Dethier and Bowdan [5].
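A minimal simulation of Case (A) with (33), using the parameter values quoted above (k = 32, $\beta''$ = 1.4, n = 1.0, a = 0.080) and m(0) = 0 for a fully de-adapted receptor. The values $m_{\mathrm{eq}}^0$ = 1 and q = 1 are assumptions made here purely for illustration, since the text does not quote them for this case.

```python
import math

K, BETA2, N, A = 32.0, 1.4, 1.0, 0.080   # parameter values quoted in the text
M_EQ0, Q = 1.0, 1.0                      # assumed here for illustration only

def response(t, I, m0=0.0):
    """Generalized entropy equation (33) for a step of intensity I applied at t = 0."""
    m_eq = M_EQ0 * I**Q                                             # Eq. (28)
    m_t = m0 * math.exp(-A * t) + m_eq * (1.0 - math.exp(-A * t))   # Eq. (25)
    return 0.5 * K * math.log(1.0 + BETA2 * I**N / m_t)

# De-adapted receptor (m(0) = 0) driven by separate steps of 1, 2, and 5 units.
for I in (1.0, 2.0, 5.0):
    print(I, [round(response(t, I), 1) for t in (2, 10, 40)])
```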
6.2. CASE (B): Initial step signal of intensity I = 1 unit, followed immediately by a second step signal of intensity I = 2 units
FIG. 4. Using the new F-equation (33), the response to step inputs of varying
intensity was simulated. Notice that the new equation predicts a nonzero steady
firing rate, unlike the classical F-equation (17) which predicts complete adaptation.
The parameters and intensity values used in the simulation are given in the text.
The results are shown in Figure 5. For the second step function, at t = 40, we take $m_2(0) = m_1(40)$; that is, we take the m-value at the beginning of the second step input equal to the m-value at the end of the first step input. We observe results such as these in the experiments of Smith and Zwislocki [17]. The transient term $m_2(0)\,e^{-at}$ explains why the second F-curve does not rise to twice the amplitude of the first: The second step stimulus must overcome the residual adaptation due to the first.
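Case (B) can be sketched in the same way by carrying the local memory forward across the step boundary, that is, by setting $m_2(0) = m_1(40)$ as described above. The timing and the values of $m_{\mathrm{eq}}^0$ and q below are again illustrative assumptions.

```python
import math

K, BETA2, N, A = 32.0, 1.4, 1.0, 0.080   # values quoted for Case (A)
M_EQ0, Q = 1.0, 1.0                      # illustrative assumptions

def memory_at(t, I, m0):
    """Eq. (25), with the equilibrium memory of Eq. (28)."""
    m_eq = M_EQ0 * I**Q
    return m0 * math.exp(-A * t) + m_eq * (1.0 - math.exp(-A * t))

def F(t, I, m0):
    """Eq. (33): t is measured from the onset of the current step."""
    return 0.5 * K * math.log(1.0 + BETA2 * I**N / memory_at(t, I, m0))

m1_40 = memory_at(40.0, I=1.0, m0=0.0)                    # memory at the end of step 1
print([round(F(t, 1.0, 0.0), 1) for t in (2, 20, 40)])    # first step, I = 1
print([round(F(t, 2.0, m1_40), 1) for t in (2, 20, 40)])  # second step, I = 2, m2(0) = m1(40)
```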
6.3. CASE (C): Step input of arbitrary amplitude, followed by near-zero step input (spontaneous firing rates)

Using I' to represent a "virtual" intensity corresponding to the spontaneous activity (see Appendix 2), (33) becomes

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\beta'' (I')^n}{m_2(0)\,e^{-at} + m_{\mathrm{eq}}'\left(1 - e^{-at}\right)}\right),    (34)

as illustrated by Figure 6.
FIG. 5. This figure shows the prediction of (33) for a step input, followed by
another step input of higher intensity. The response to the second input adapts to a
plateau which is higher than the plateau of the first step input. Furthermore, the
early portion of the second curve does not rise to a level as high as the first curve.
This is what one observes experimentally [17].
Since $I' \ll 1$ and $m_2(0) \gg 1$, we can understand why there is a sudden fall, then rise, with time. This enigmatic fall to near zero when the step input is removed has been observed many times for many sensory modalities (for example, Ratliff, Hartline, and Miller [14], Cleland, Dubin, and Levick [4], and Adrian and Zotterman [1]), and the theoretical recovery or de-adaptation function now matches the type observed experimentally.
6.4. CASE (D): Sinusoidal input

Slit sensilla are arachnid mechanoreceptors which measure minute changes to exoskeletal structure. Small deflections of these mechanoreceptors may be considered as simple sensory stimuli, and the mean discharge rate in the associated primary afferent neurons as the corresponding response. Single slits on the leg tibia of Cupiennius salei were investigated by electrophysiological methods in the paper by Bohnenberger [3]. Both step and sinusoidal inputs were used to investigate the transfer characteristics of the sensilla. The step input is analyzed as in Case (A), using (33) with I = 0.25 and m(0) = 0.
FIG. 6. Equation (33) is used to simulate recovery from adaptation. If a step input
is suddenly removed, the F-function falls to a near-zero value and then slowly rises
to the spontaneous firing level. The sudden fall and upwards-convex recovery curve
corresponds to experimental observation.
The sinusoidal input may be analyzed by solving for m(t) from (24), with $m_{\mathrm{eq}}$ now expressed by (28):

\frac{dm}{dt} + a\,m = a\, m_{\mathrm{eq}}^0\, I^q.    (35)

For the intensity of the stimulus, I, we use Bohnenberger's sinusoidal deflection function,

I = 0.25\,\sin(2\pi f t),    (36)

where 0.25 is the maximum amplitude of deflection measured in degrees, f = 0.38 Hz is the frequency of stimulation, and t is time measured in seconds. The organism only responded to the "positive" half-cycle of stimulation; thus I remained non-negative for the duration of the simulation. The initial value m(0) = 0. We set q equal to 1. The value of m(t) obtained from the solution to the differential equation (35) is introduced into (13), in place of (16), giving

F = \tfrac{1}{2}k\ln\!\left(1 + \frac{\beta'' I^n}{m(t)}\right).    (37)
The sets of four parameters k, $\beta''$, n, and a required to fit the two receptor outputs--to a step input and to a sinusoid--are very similar:

                     k      $\beta''$                     n      a
Step input:          101    1.50 $m_{\mathrm{eq}}^0$      3.29   0.0211 Hz
Sinusoidal input:    111    1.68 $m_{\mathrm{eq}}^0$      3.12   0.0211 Hz
The results, along with the experimental data, are plotted in Figures 7
and 8. In Figure 8, a 40-ms advance was introduced into the plotting of
the theoretical function in order to match the time delay between
stimulus onset and biological response.
The characterizing feature of the sinusoidal fit is that both the early
and the later portions of the curve rise and fall faster than a sine curve
of the same amplitude. The fit of the data with a sine curve is shown in
Figure 9.
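For the sinusoidal case, (35) has to be integrated numerically because I varies continuously with time. The sketch below does this with a simple Euler step, using the sinusoidal-input parameters quoted above (k = 111, n = 3.12, a = 0.0211 Hz, q = 1, f = 0.38 Hz); taking $m_{\mathrm{eq}}^0$ = 1 (so that $\beta''$ = 1.68) fixes an arbitrary scale and is an assumption of this sketch, as is the choice of time step.

```python
import math

K, BETA2, N, A_RATE, Q, FREQ = 111.0, 1.68, 3.12, 0.0211, 1.0, 0.38
M_EQ0 = 1.0                  # assumed scale; only the ratio beta''/m_eq0 matters

def intensity(t):
    """Half-wave rectified sinusoidal deflection, Eq. (36), in degrees."""
    return max(0.0, 0.25 * math.sin(2.0 * math.pi * FREQ * t))

dt = 1e-3
m = 1e-9                     # tiny initial memory avoids division by zero at t = 0
for step in range(1500):     # simulate 1.5 s, roughly the span of Figure 8
    t = step * dt
    I = intensity(t)
    m += dt * A_RATE * (M_EQ0 * I**Q - m)               # Euler step of Eq. (35)
    if step % 100 == 0 and I > 0.0:
        F = 0.5 * K * math.log(1.0 + BETA2 * I**N / m)  # Eq. (37)
        print(round(t, 2), round(F, 1))
```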
FIG. 7. Theoretical prediction (smooth curve) of response by arachnid mechanoreceptor to step input. Data is from experiment of Bohnenberger [3] (histogram).
The step input amplitude is equal to the maximal deflection of the sinusoidal input,
0.25°. Almost identical parameters were used to fit both the step input response and
the sinusoidal input response (please see Figure 8 and main text).
FIG. 8. Comparison between the data obtained from the experiment of Bohnenberger [3] (histogram), and the theoretical prediction as obtained from Equation (37) (smooth curve). The experiment consisted of a sinusoidal deflection of an arachnid mechanoreceptor. The organism only responded to the positive cycle of the input. Almost identical parameters were used to fit both the sinusoidal and the step input response (please see Figure 7 and main text). Notice that the response to a sinusoidal input is not sinusoidal. The early portion rises, and the later portion falls, faster than a sine curve of the same amplitude (please see Figure 9).
6.5. OTHER CASES

We can also simulate the output to an increasing ramp-and-hold function of the type obtained by Awiszus and Schäfer [2], to progressively increasing and decreasing step inputs, or to very brief stimuli of varying intensities.
7. DISCUSSION
We began with an equation of sensory entropy which is capable of
predicting the neural impulse rate of a primary sensory afferent neuron
in the de-adapted state, in response to any single step input in signal
intensity. We then generalized this equation to deal with any time-varying sensory input. Entropy is dependent upon the memory of previous
events. The key to the generalization was to append an assumption
regarding the maximum duration of the local receptor memory for
recent stimulus events: More intense stimuli produce an expansion of local memory and less intense stimuli involve contraction in local memory span. The reason for adopting this assumption was purely because it works.

FIG. 9. Same data as shown in Figure 8. The corresponding fit to the data of Bohnenberger [3] with a sine curve of the same amplitude, demonstrating that the response to a sinusoidal input is not, in general, sinusoidal. Equation (37) accounts for the non-symmetrical shape of the neural output.
We tested the generalized model and found first that it retained the
explanatory and predictive capabilities of the more elementary model.
We then simulated the case of various sequential step inputs to a
sensory receptor. The elementary or restricted model predicted that if t
were permitted to increase without limit, neural discharge rates would
approach zero (total adaptation). The generalized model predicted the
attainment of a steady discharge rate for prolonged stimulation of the
receptor (partial adaptation).
The generalized model was also capable of describing the known
behavior of the receptor neuron to successive step inputs. In the case
where a step input was suddenly removed, the generalized model
allowed for a drop in firing rate to near zero, followed by partial
recovery. This property of the receptor-neuron complex had been
observed for many modalities. It has also been observed in the intact
retina, where it was attributed to lateral inhibition by neighboring
retinal units [7], and this property was, accordingly, "built in" to the
silicon retina [8]. We see it now emerging as a property of the single
unit, as a consequence of the flexible memory of the receptor.
Calculations of the quantity of information transmitted per stimulus may be made quite naturally. For a single step input, we use (30) with (12):

H(t) = \tfrac{1}{2}\ln\!\left(1 + \frac{\lambda}{1 - e^{-at}}\right),    (38)

J = H(t_0) - H(\infty).
The difference between the elementary or restricted model and the
more general model is, essentially, the replacing of (16) by (25) and (28).
These equations describe a new view of receptor function, whereby
local receptor memory can expand and contract depending on the input
state of the receptor. The differential equation (24) stands in analogy to
a capacitor discharging through a resistance. Memory and electric
charge are analogs. Another analog is the washout or clearance curve.
Memory and solute are analogs. These analogs may, indeed, suggest
that local memory is mediated via, say, membrane capacitance or by the
concentration of a humoral substance within some compartment in the
receptor cell.
Finally, a brief word on the operation of the generalized equation on
the behavioral scale. The original or restricted theory was applicable not
only to single sensory receptors, which have been our focus in this
paper, but also to global behavioral responses. Thus, for example, the
logarithmic and power laws of Type A neural experiments are also
applicable to behavioral or psychophysical experiments. The generalized
equation (29) may continue to operate at the behavioral as well as the
receptor level. As we have seen using (31) for small t, the generalized
equation (I is any function of t) reduces to the older, restricted
equation (I is a step function). For larger t, that is, stimuli of greater
duration, (29) and (30) provide the relief we were seeking. For example,
since the denominator $1 - e^{-at}$ cannot exceed unity for any value of
at > 0, we now have a built-in constraint which prevents complete
adaptation when dealing with the senses of audition and vision.
The entropic or informational model of sensation replaces the older
concept of energy summation (integration) with that of information
summation. For example, on the behavioral scale, a perceiver cannot
react to a stimulus until he or she has accumulated a "quantum" of
information [10], that is, simple reaction time is calculated from the
time needed to gather $\Delta H$ bits of information. The function $1 - e^{-at}$
describes the rate at which a receptor samples its sensory environment.
The constant a then governs the rate at which sensory information is
gathered, that is, a is a rate constant modulating the process of
temporal summation.
8. CONCLUSIONS
A generalized equation of entropy is capable of predicting neural
outputs corresponding to sensory stimulus inputs. The equation functions for constant and for time-varying sensory inputs. The results
presented lend support to the thesis that sensory receptors respond to
their uncertainty (entropy) about the physical world. The results also
suggest that sensory receptors expand and contract their local memory
of events; they tend to remember more when receiving intense stimuli
and vice versa.
APPENDIX 1: NOMENCLATURE
$\beta$, $\beta''$   Constants. When stimulus intensity is expressed in dimensionless units (e.g., multiples of threshold intensity), then $\beta$ and $\beta''$ have the dimensions of inverse time.
$\beta'$   When stimulus intensity is dimensionless, $\beta'$ is unitless.
e   Base of natural logarithms.
F   Firing rate of a primary afferent neuron.
$\gamma$   Constant set equal to $\beta'/t'$. When stimulus intensity is unitless, $\gamma$ is dimensionless.
H   Information theoretical entropy = potential information = "uncertainty" of receptor about the intensity of the stimulus.
I   Intensity of a stimulus. May be constant or changing with time.
J   Total information gained from adaptation.
k   Proportionality constant relating F to H.
$\lambda$   Dimensionless constant.
$\lambda'$   Constant with dimensions of time.
m   For a constant stimulus, m is the number of samplings made by a receptor of its stimulus input since the onset of the stimulus.
$m_{\mathrm{eq}}$   Equilibrium value of the local receptor memory, which is a function of intensity. The number of samplings retained at equilibrium.
$m_{\mathrm{eq}}^0$   Constant factor of proportionality, relating $m_{\mathrm{eq}}$ to stimulus intensity.
$\mu$   Mean of a Poisson distribution. Variable used only in the Appendixes.
$\sigma_S^2$   Population variance of the stimulus signal.
$\sigma_R^2$   Variance of "noise" affecting receptor performance.
$\sigma_R'^2$   Variance of "noise" $\geq \sigma_R^2$. When Poisson data are approximated by the normal or Gaussian distribution, the value of $\sigma_R^2$ that appears in the numerator of (8) should be replaced by $\sigma_R'^2$, which is slightly larger.
t   Time. For the steady stimulus, t is the time since onset of the stimulus.
APPENDIX 2: ENTROPY OF THE POISSON DISTRIBUTION
The Poisson probability distribution is given by the well-known discrete probability function

P_x = \frac{e^{-\mu}\mu^x}{x!},    (37)

where $P_x$ is the probability of obtaining x successes, x is, of course, an integer, and $\mu$ = mean = variance. If both pure signal and noise are governed by (37), the summed distribution of signal (mean $\mu_S$) + noise (mean $\mu_R$) is obtained by convolving two expressions of the form of (37) to obtain a third Poisson distribution of the form of (37) with $\mu$ replaced by $\mu_S + \mu_R$.

The Gaussian or normal distribution can be used to approximate the Poisson distribution under certain limiting conditions. The normal distribution is a continuous probability distribution defined by

p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2/2\sigma^2},    (38)

where $\mu$ is the mean and $\sigma^2$ the variance. When both signal and noise are Gaussian, the summed distribution is again Gaussian with mean and variance equal to the sum of the constituent means and variances, respectively.

The information theoretical entropy of the continuous distribution given by (38) is

-\int_{-\infty}^{\infty} p(x)\ln p(x)\,dx = \ln\left(2\pi e\,\sigma^2\right)^{1/2}.    (39)

If we subtract the entropy after an observation (entropy of noise alone) from the entropy before the observation, we obtain

H_{\mathrm{Gauss}} = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R^2 + \sigma_S^2}{\sigma_R^2}\right).    (40)
The corresponding expression for the entropy of a discrete probability function such as the Poisson distribution is defined by

-\sum_{x=0}^{\infty} P_x \ln P_x.    (41)

When the value of $P_x$ from (37) is substituted into (41), we do not obtain a simple expression in closed form. However, we can obtain a numerical value for the entropy to any desired accuracy by evaluating expression (41) using any desired value of the parameter $\mu$. Moreover, it may be shown (Table 1) that the entropy of the Poisson distribution is approximated closely by the expression $\tfrac{1}{2}\ln(2\pi e\mu)$, particularly for larger values of $\mu$.
If, for the Poisson distribution, we again subtract the entropy after an observation (entropy of noise alone) from the entropy before the observation, we obtain the expression

H_{\mathrm{Poisson}} = \tfrac{1}{2}\ln\!\left(\frac{\mu_R + \mu_S}{\mu_R}\right) = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R^2 + \sigma_S^2}{\sigma_R^2}\right),    (42)

corresponding to (40), since the variance of the Poisson distribution is equal to $\mu$.
TABLE 1
Comparing the Exact Poisson Entropy to the Approximate Gaussian Entropy of a Poisson Distribution for Small Mean Values

Mean    Poisson entropy    Gaussian entropy
 1          1.305               1.419
 2          1.704               1.766
 3          1.931               1.968
 4          2.087               2.112
 5          2.204               2.224
 6          2.299               2.315
 7          2.379               2.392
 8          2.447               2.459
 9          2.508               2.518
10          2.561               2.570
11          2.610               2.618
12          2.654               2.661
13          2.695               2.701
14          2.732               2.738
15          2.767               2.773
The approximation (42) is quite good for signals of larger variance $\mu_S$, but for smaller variances the approximation weakens, as
seen from Table 1. Therefore--and this is the point we wish to make here--we may use an expression similar in form to (42) by replacing $\sigma_R^2$ in the numerator by $\sigma_R'^2$, where

\sigma_R'^2 \geq \sigma_R^2,    (43)

that is,

H_{\mathrm{Poisson}} = \tfrac{1}{2}\ln\!\left(\frac{\sigma_R'^2 + \sigma_S^2}{\sigma_R^2}\right),    (44)

which is the form used in (8). From (44), we see that as $\sigma_S^2$ becomes small, so that $\sigma_S^2 \ll \sigma_R'^2$,

H_{\mathrm{Poisson}} \to \ln\left(\sigma_R'/\sigma_R\right) > 0.    (45)
We would like to rewrite (45) in a more useful form. Since $\sigma_R'^2 \geq \sigma_R^2$, we can decompose $\sigma_R'^2$ into a sum of two terms to obtain

\sigma_R'^2 = \sigma_R^2 + \sigma_V^2,    (46)

where $\sigma_V^2$ is the residual variance. By analogy with (14), we can introduce a "virtual" intensity I', which is related to the residual variance. Equation (45) can now be rewritten as

H_{\mathrm{Poisson}} \to \tfrac{1}{2}\ln\!\left(1 + \zeta (I')^n\right),    (47)

where $\zeta$ is a constant.
Since $F_{\mathrm{Poisson}} = kH_{\mathrm{Poisson}}$ from (12), we see that the neural impulse rate will not approach zero as rapidly as if approximation (42) held rigorously. The result may be to produce the illusion of a "spontaneous neural firing rate," as used during our discussion of the Assembly-Line model. However, to be true to the equations which define the model, when the stimulus is truly equal to zero, a neural impulse rate of zero is predicted.
To see numerically how (43) and (44) hold, we can refer to Table 1. Let us take the case where $\mu_S = \mu_R = 1$. The convolution of signal + noise gives $\mu = \mu_S + \mu_R = 2$. Using the first two rows of Table 1 for $\mu = 1$ and $\mu = 2$, we find that the exact value for $H_{\mathrm{Poisson}}$, given by entropy($\mu = 2$) minus entropy($\mu = 1$) from column 2, is

H_{\mathrm{Poisson}} = 1.705 - 1.305 = 0.400 natural units.
However, the approximate value for $H_{\mathrm{Poisson}}$, evaluated from $\tfrac{1}{2}\ln(2\pi e\mu)$, corresponding to the use of (42), is given by entropy($\mu = 2$) minus entropy($\mu = 1$) from column 3, or

H_{\mathrm{Poisson}} = 1.766 - 1.419 = 0.347 natural units,

which is smaller than the exact value. Therefore, if we wish to use the approximate expression (42) for smaller values of $\mu = \sigma^2$, we must "adjust" it, which we do by replacing $\sigma_R^2$ by the slightly larger quantity $\sigma_R'^2$ in the numerator.
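The comparison made in Table 1 and in the numerical example above is easy to reproduce. The sketch below evaluates the exact Poisson entropy of (41) by direct summation, together with the Gaussian approximation $\tfrac{1}{2}\ln(2\pi e\mu)$, and then forms the two entropy differences discussed in the text.

```python
import math

def poisson_entropy(mu, terms=200):
    """Exact entropy of Eq. (41): -sum_x P_x ln P_x, truncated when terms become negligible."""
    h, log_px = 0.0, -mu                             # ln P_0 = -mu
    for x in range(terms):
        px = math.exp(log_px)
        if px > 0.0:
            h -= px * log_px
        log_px += math.log(mu) - math.log(x + 1)     # ln P_{x+1} = ln P_x + ln(mu) - ln(x+1)
    return h

def gaussian_entropy(mu):
    """Approximation (1/2) ln(2 pi e mu)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * mu)

print(poisson_entropy(1), poisson_entropy(2))        # ~1.30, ~1.70 (Table 1, column 2)
print(gaussian_entropy(1), gaussian_entropy(2))      # ~1.42, ~1.77 (Table 1, column 3)
print(poisson_entropy(2) - poisson_entropy(1))       # exact entropy gain, ~0.40 nats
print(gaussian_entropy(2) - gaussian_entropy(1))     # Gaussian approximation, ~0.35 nats
```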
APPENDIX 3: VARIANCE IN THE ENTROPY THEORY

Information, in the sense expressed by Shannon, arises from the uncertainty in the values of measured signals. In our current application, there is uncertainty in the mean value of a signal, which arises as a consequence of the fluctuation in densities of an ensemble of particles (e.g., molecules or photons). Such fluctuations are a consequence of the particulate composition of matter and are not related to nonlinear effects or initial conditions of measurement. Fluctuations in density, represented by the population variance in density, are related in a known manner to the mean density, usually but not invariably by means of an equation such as (14). Admittedly, what a receptor experiences is a sample variance $s^2$, rather than the population variance $\sigma^2$ directly. It is, therefore, more accurate to introduce $s^2$, rather than $\sigma^2$, into our equations. Since $s^2$ is, in itself, a fluctuating quantity (vis-a-vis $\sigma^2$, which is constant), the altered equations would reflect the statistics of intra-subject variability. For example, we could account quantitatively for the variations in loudness experienced by a single perceiver in response to a tone of the same intensity and for the changes in simple reaction time from trial to trial in response to a light of the same intensity. However, the resulting equations would become much more complex. At the present time, we chose to approximate $s^2$ by $\sigma^2$ in order to work with simpler equations. The use of the constant quantity $\sigma^2$, rather than the random variable $s^2$, gives the impression that we are dealing with a purely deterministic system, which is, of course, false.
Within the entropy theory, variance, rather than mean, is a primary
independent variable. If the variance of a signal is increased while at
the same time keeping the mean constant, perception of the signal will
be enhanced (e.g., the phenomenon of "brightness enhancement"). If
the variance of the stimulus signal is decreased while keeping the mean
constant, perception of the signal will be diminished (e.g., the phenomenon of disappearance of a stabilized retinal image).
This work has been supported by an operating grant from the Natural
Sciences and Engineering Research Council of Canada. We also thank
Professor Lester Krueger and Sheldon Opps for their helpful comments.
REFERENCES
1 E. D. Adrian and Y. Zotterman, The impulses produced by sensory nerve
endings. Part 3. Impulses set up by touch and pressure, J. Physiol. 61:465-483
(1926).
2 F. Awiszus and S. S. Schäfer, Subdivision of primary afferents from passive cat muscle spindles based on a single slow-adaptation parameter, Brain Res. 612:110-114 (1993).
3 J. Bohnenberger, Matched transfer characteristics of single units in a compound
slit sense organ, J. Comp. Physiol. A 142:391-402 (1981).
4 B. G. Cleland, M. W. Dubin, and W. R. Levick, Sustained and transient
neurones in the cat's retina and lateral geniculate nucleus, J. Physiol. 217:
473-496 (1971).
5 V. G. Dethier and E. Bowdan, Relations between differential threshold and
sugar receptor mechanisms in the blowfly, Behav. Neurosci. 98:791-803 (1984).
6 H. K. Hartline and C. H. Graham, Nerve impulses from single receptors in the eye, J. Cellular and Comparative Physiol. 1:277-295 (1932).
7 H.K. Hartline and F. Ratliff, Inhibitory interaction of receptor units in the eye
of Limulus, J. Gen. Physiol. 40:357-376 (1957).
8 M. A. Mahowald and C. Mead, The silicon retina, Sci. Amer. May: 76-82
(1991).
9 B . H . C . Matthews, The response of a single end organ, J. Physiol. 71:64-110
(1931).
10 K. H. Norwich, Information, Sensation, and Perception, Academic Press, Orlando, 1993.
11 K.H. Norwich, The magical number seven: Making a "bit" of "sense," Perception and Psychophys. 29:409-422 (1981).
12 K.H. Norwich, On the information received by sensory receptors, Bull. Math.
Biol. 39:453-461 (1977).
13 K.H. Norwich and K. M. V. McConville, An informational approach to sensory
adaptation, J. Comp. Physiol. A 168:151-157 (1991).
14 F. Ratliff, H. K. Hartline, and W. H. Miller, Spatial and temporal aspects of
retinal inhibitory interaction, J. Optical Soc. Amer. 53:110-120 (1963).
15 R.F. Schmidt, Somatovisceral sensibility, in Fundamentals of Sensory Physiology,
3rd ed., R. F. Schmidt, ed., Springer-Verlag, New York, 1986, p. 89.
16 C.E. Shannon, A mathematical theory of communication, Bell System Tech. J.
27:379-423 (1948).
17 R. L. Smith and J. J. Zwislocki, Short-term adaptation and incremental responses of single auditory-nerve fibers, Biol. Cybernet. 17:169-182 (1975).
18 J. Thorson and M. Biederman-Thorson, Distributed relaxation processes in
sensory adaptation, Science 183:161-172 (1974).