Part 4 :   Encoding stimuli to neural code


4.1.  Factors to consider in Sensory Stimuli Encoding

1.     Mapping :  Sensory neurons of the same type share a common pathway and terminate in the same region of the brain (e.g., sound stimuli signals are sent to the auditory cortex). Neighbouring sensory neurons of the same type are mapped to correspondingly neighbouring neurons in that brain region. In some cases, the afferent neurons that carry signals to the brain perform lateral inhibition or temporal sharpening to increase the contrast of the signal over space and time, prior to sending the signal to the brain.

2.     Intensity :  Each incoming sensory stimulus signal has a corresponding 'intensity' property.

     For example :

     Sensory type      Corresponding intensity variable
     Vision            Brightness of light
     Hearing           Loudness of sound
     Tactile           Pressure of touch
     Smell / Taste     Concentration of the chemical



4.2.   Stimuli encoding : Objectives

Sensory encoding involves activating specific neurons in response to specific input stimuli. The same input stimulus should activate the same set of neurons every time.

The process of encoding a sensory stimulus should facilitate the following :

  • Map stimuli of the same type to the corresponding neural region.
  • Estimate the approximate intensity of the incoming stimulus within a set range
  • Determine the specific sensory neuron's activation state (ON or OFF state)
  • Detect changes in the sensory neuron's activation state
  • Perform lateral inhibition and temporal sharpening where applicable
  • Determine the duration of the current activation (to enable attention/habituation)
  • Detect spatial/temporal patterns in incoming stimulus where applicable



4.3.   Proposed encoding model

  • In this encoding model, every input signal to a sensory neuron is just a scalar, i.e., the intensity of the stimulus.
  • Various downstream neural circuits participate in encoding, so that the stimuli encoding objectives listed above are achieved.
  • Examples of encoding are illustrated below.




4.4.   Encoding scalar value : Intensity estimation


A neural circuit for signal intensity estimation
The strength/intensity of the input signal can be estimated within two timesteps using this circuit.
Step 1: Determine the minimum and maximum possible values of the incoming signal's intensity.
Step 2: Divide the range into a number of intervals, depending on the precision required.
Step 3: In the second layer (B1, B2, ...), set thresholds that match the starting value of each interval.
Step 4: Add inhibitory connections: B2 inhibits C1, B3 inhibits C2.

Example :
•   Let the input signal to A be a scalar with a value between 1 and 30.
•   Let the number of intervals chosen be 3, so the range is split into three chunks: (1..9), (10..19), (20..30).
•   The threshold of each second-layer neuron Bx is chosen to be the first number of its interval chunk.
•   i.e., thresholds θB1 = 1 ; θB2 = 10 ; θB3 = 20 ; all other thresholds = 1.
•   If the input intensity to A, iA, is greater than zero, exactly one of the Cx neurons will fire, depending on that intensity:

  Input to A at time (t)      Result at time (t+2)
  iA = {1, 2, ..., 9}         C1 fires @ (t+2)
  iA = {10, 11, ..., 19}      C2 fires @ (t+2)
  iA >= 20                    C3 fires @ (t+2)
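The two-layer circuit above can be sketched in a few lines of Python. This is a minimal simulation, not the demo's actual implementation; the names (`estimate_intensity`, `THRESHOLDS`) are illustrative, and the two propagation timesteps are collapsed into one function call.

```python
# Sketch of the two-timestep intensity-estimation circuit from Section 4.4.
THRESHOLDS = [1, 10, 20]          # theta_B1, theta_B2, theta_B3

def estimate_intensity(i_a):
    """Return the index (1-based) of the C neuron that fires, or None."""
    # Timestep t+1: each B neuron fires if the input meets its threshold.
    b_fires = [i_a >= theta for theta in THRESHOLDS]
    # Timestep t+2: each C neuron fires if its B fired and the next B
    # (the inhibitory connection: B2 inhibits C1, B3 inhibits C2) did not.
    for i, b in enumerate(b_fires):
        inhibited = i + 1 < len(b_fires) and b_fires[i + 1]
        if b and not inhibited:
            return i + 1
    return None

# estimate_intensity(7)  -> 1   (C1 fires)
# estimate_intensity(15) -> 2   (C2 fires)
# estimate_intensity(25) -> 3   (C3 fires)
```

Because each B neuron with a higher threshold inhibits the C neuron below it, only the C neuron for the highest satisfied threshold fires, which is exactly the interval the input falls into.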




4.5.   Encoding a grayscale pixel input : (with change detection )


To run this demo, place the mouse pointer over the grayscale gradient canvas below.
This will send the corresponding pixel's grayscale intensity value to the sensor neuron A1.

Description:
•  The sensory neuron A1 gets an input intensity value between 0 and 255.
•  The intensity of the input is grouped into one of five ranges: (1 to 49), (50 to 99), (100 to 149), (150 to 199), (200 to 255).
•  An intensity estimation circuit is created using the B1, C* and D_* neurons by setting these thresholds: θC1 = 1 ; θC2 = 50 ; θC3 = 100 ; θC4 = 150 ; θC5 = 200.
•  Neuron E is the 'ON state' detection neuron, which fires at time (t+4) when the input > 0 at time t.
•  Similarly, neuron F is the 'OFF state' detection neuron, which fires at time (t+4) when the input = 0 at time t. This uses the "Inverse neuron" circuit shown in the earlier Section 3.5.
•  Temporal summation connections (t+1, t+2, t+3) exist from E to G and from F to H, wherein G and H accumulate inputs over three timesteps.
•  Further downstream, intensity estimation circuits estimate the values accumulated in G and H.

  Scenario at time (t)                                      Result
  Input to A1 = {1, 2, ..., 49}                             D1 fires @ time (t+3)
  Input to A1 = {50, 51, ..., 99}                           D2 fires @ time (t+3)
  Input to A1 = {100, 101, ..., 149}                        D3 fires @ time (t+3)
  Input to A1 = {150, 151, ..., 199}                        D4 fires @ time (t+3)
  Input to A1 = 200 and above                               D5 fires @ time (t+3)
  Input to A1 > 0                                           E fires @ time (t+4)
  Input to A1 = 0                                           F fires @ time (t+4)
  Input to A1 changes from 0 to positive                    K1 fires @ time (t+7)
  Input to A1 is positive three or more times in a row      K3 fires @ time (t+7)
  Input to A1 changes from positive to zero                 L1 fires @ time (t+7)
  Input to A1 is zero three or more times in a row          L3 fires @ time (t+7)
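The change-detection behaviour in the table above can be sketched as a simple stream simulation. This is a functional abstraction, not the neural circuit itself: the propagation delays (t+4, t+7) are collapsed away, the G/H accumulators become run-length counters, and the function name `detect_events` is illustrative.

```python
# Sketch of the ON/OFF and change-detection events from Section 4.5.
def detect_events(stream):
    """Return (timestep, event) pairs for a sequence of intensity values."""
    events = []
    run_on = run_off = 0      # consecutive ON / OFF counts (the G / H accumulators)
    prev_on = None
    for t, x in enumerate(stream):
        if x > 0:
            run_on += 1; run_off = 0
            if prev_on is False:
                events.append((t, "K1"))   # input changed from 0 to positive
            if run_on == 3:
                events.append((t, "K3"))   # positive three times in a row
        else:
            run_off += 1; run_on = 0
            if prev_on is True:
                events.append((t, "L1"))   # input changed from positive to 0
            if run_off == 3:
                events.append((t, "L3"))   # zero three times in a row
        prev_on = x > 0
    return events

# detect_events([0, 5, 9, 3, 0, 0, 0])
#   -> [(1, 'K1'), (3, 'K3'), (4, 'L1'), (6, 'L3')]
```

The run-length counters play the role of the temporal summation into G and H: an event like K3 fires only once the activation has persisted for three timesteps, which is the hook for attention/habituation mentioned in the objectives.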









4.6.   Encoding an RGB pixel input


To run this demo, place the mouse pointer over the picture below.
This will send the corresponding pixel's RGB intensity values to the three sensors (R_A1 neuron for the red input, G_A1 neuron for the green input, B_A1 neuron for the blue input).


Description:
•  Each of the sensory neurons (R_A1, G_A1, B_A1) gets an input intensity value between 0 and 255.
•  The intensity of each input is grouped into one of three ranges: (1 to 85), (86 to 170), (171 to 255).
•  An intensity estimation circuit exists for each input (using the *_C* and *_D_* neurons), wherein the intensity of the input is determined to be in one of the three ranges.
•  Each channel's E neuron is the 'ON state' detection neuron, which fires at time (t+4) when the input > 0 at time t.
•  Similarly, each channel's F neuron is the 'OFF state' detection neuron, which fires at time (t+4) when the input = 0 at time t. This uses the "Inverse neuron" circuit shown in the earlier Section 3.5.
•  E.g., if the input red value to R_A1 >= 1, R_E will fire (at t+4). Similar behaviour will be seen in the G_* (green) and B_* (blue) circuits.

  Scenario at time (t)                                           Result
  RED input to R_A1 = {1, 2, ..., 85}                            R_D1 fires @ time (t+3)
  RED input to R_A1 = {86, 87, ..., 170}                         R_D2 fires @ time (t+3)
  RED input to R_A1 = 171 and above                              R_D3 fires @ time (t+3)
  Colour input is white, i.e., (R_A1=255, G_A1=255, B_A1=255)    R_D3, G_D3, B_D3 fire @ time (t+3)
  Colour input is black, i.e., (R_A1=0, G_A1=0, B_A1=0)          R_F, G_F, B_F fire @ time (t+4)
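Per channel, the RGB encoding reduces to binning one scalar into three ranges (plus the OFF case). A minimal sketch, with illustrative function names and the circuit's timing omitted:

```python
# Sketch of the per-channel RGB binning from Section 4.6.
def encode_channel(value):
    """Return the D neuron (1-3) that fires for one channel, or 'F' when input is 0."""
    if value == 0:
        return "F"        # OFF-state neuron fires
    if value <= 85:
        return 1          # *_D1 fires (range 1..85)
    if value <= 170:
        return 2          # *_D2 fires (range 86..170)
    return 3              # *_D3 fires (range 171..255)

def encode_rgb(r, g, b):
    """Apply the same binning independently to the R, G and B channels."""
    return {"R": encode_channel(r), "G": encode_channel(g), "B": encode_channel(b)}

# encode_rgb(255, 255, 255) -> {'R': 3, 'G': 3, 'B': 3}       (white: all D3 fire)
# encode_rgb(0, 0, 0)       -> {'R': 'F', 'G': 'F', 'B': 'F'} (black: all F fire)
```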


 


4.7.   Encoding a one-dimensional array of scalars (with lateral inhibition)


To run this demo, place the mouse pointer over the grayscale gradient below.
This will send the grayscale intensity values of the corresponding set of 5 neighbouring pixels to the neurons 1_A1, 2_A1, 3_A1, 4_A1, 5_A1.


Description:
•  Consider a vision sensor with a one-dimensional strip of five light sensors.
•  Each sensor gets an input grayscale intensity value between 0 and 255.
•  For each sensor input (*_A1), an intensity estimation circuit exists similar to Section 4.5 above, but with the range split into three groups (*_D1, *_D2, *_D3) instead of five.
•  In senses such as vision or touch, lateral inhibition is present, wherein neighbouring neurons inhibit each other's output to sharpen the contrast in the signal (see Mach bands).
•  In the circuit below, such lateral inhibition is achieved by creating inhibitory connections (with weight = -0.2) from each *_B1 neuron to its neighbouring *_B1 neurons.
•  For example, when all inputs are sent as 255 (by hovering the mouse over the white area of the canvas), the *_B1 neurons fire with a less intense output signal (less than 255).
•  A comprehensive visual processing neural circuit would have a two-dimensional array of sensors instead of this one-dimensional array, with corresponding lateral inhibition between neighbouring neurons in two dimensions.
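The lateral-inhibition step above can be sketched as follows. This assumes each *_B1 neuron subtracts 0.2 times each immediate neighbour's raw input from its own, with output clamped at zero; the function name is illustrative and the demo's exact wiring may differ.

```python
# Sketch of lateral inhibition over a 1-D strip of five sensors (Section 4.7).
INHIBITORY_WEIGHT = -0.2

def lateral_inhibition(inputs):
    """Return the *_B1 outputs after immediate neighbours inhibit each other."""
    out = []
    for i, x in enumerate(inputs):
        # Immediate left and right neighbours (edge sensors have only one).
        neighbours = inputs[max(0, i - 1):i] + inputs[i + 1:i + 2]
        out.append(max(0.0, x + INHIBITORY_WEIGHT * sum(neighbours)))
    return out

# A uniform white input is attenuated overall, and interior sensors (two
# neighbours) are inhibited more strongly than the edge sensors (one neighbour):
# lateral_inhibition([255] * 5) -> [204.0, 153.0, 153.0, 153.0, 204.0]
```

The uneven attenuation at the edges is the same mechanism that produces Mach bands: the circuit exaggerates differences at boundaries while damping uniform regions.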




4.8.   Encoding sound



Sound encoding :
•  Sound is a superposition of many frequencies, and each incoming frequency has an intensity (loudness). This concept is best visualized in a spectrogram.
•  Sound sensation can be thought of as a continuous stream of one-dimensional-array input, wherein each element corresponds to the loudness at one particular frequency.
•  Encoding sound in this way would also require lateral inhibition in the encoding neural circuit, similar to Section 4.7 above.

In this example :
•  Just one frequency sensor (tuned to 6000 Hz) is illustrated, instead of an array of frequency sensors.
•  The loudness of sound arriving at 6000 Hz is fed into the intensity estimation circuit (6000_A, 6000_B* and 6000_C*).
•  6000_C3 will fire only if the sound loudness at 6000 Hz is high.
•  The loudness range is set from 0 to 255; this range is split into three chunks.
•  Thresholds θ6000_B1 = 1 ; θ6000_B2 = 255/3 ; θ6000_B3 = (255*2)/3 ; all other thresholds = 1.
•  When a short beep is played at 6000 Hz by clicking the button, the loudness of the sound (at 6000 Hz) will cause the 6000_C3 neuron to fire for a short time.
•  A full audio encoding circuit (not shown here) would have hundreds of such *_A sensors, tuned to proximal frequencies, with lateral inhibition connections to neighbouring frequency neurons.
•  Note that since we can extract the time duration of sound at a specific frequency (see neurons K1..K3 in Section 4.5), complex audio sequences can be detected, which would be useful in speech recognition.
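The single 6000 Hz channel is just the Section 4.4 intensity-estimation circuit applied to loudness. A minimal sketch under that assumption (names are illustrative; a real circuit would have one such function per frequency channel):

```python
# Sketch of the single-frequency loudness channel from Section 4.8.
THRESHOLDS_6000 = [1, 255 / 3, (255 * 2) / 3]   # theta_6000_B1..B3

def fire_6000_c(loudness):
    """Return which 6000_C neuron fires (1-3), or None for silence."""
    b_fires = [loudness >= theta for theta in THRESHOLDS_6000]
    # As in Section 4.4, a higher B neuron inhibits the lower C neuron,
    # so only the highest satisfied threshold's C neuron fires.
    for i in range(len(b_fires) - 1, -1, -1):
        if b_fires[i]:
            return i + 1
    return None

# fire_6000_c(250) -> 3   (a loud 6000 Hz beep fires 6000_C3)
# fire_6000_c(0)   -> None
```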
