
Engineering a sequence machine

Posted by joyboseroy on October 30, 2007

My PhD research can be presented in the form of a dialogue, which goes something along these lines:

The goal of my research is to engineer a high-level system using spiking neurons. We have taken a sequence machine as our example of a high-level task that can be modelled. A sequence machine is an automaton that can store and recognise sequences of symbols. We chose this particular application for three reasons: it is close to what biological systems do, many people have already approached the problem but none from an engineering perspective, and our research group is trying to model neurons in hardware.

So, having decided on the goal, the question is how. We decided to use an associative memory as the high-level neuron model, with a separate neural layer to dynamically store the context, which the machine uses to reconstruct the state of the sequence and so predict the next symbol with maximum accuracy, based on its past knowledge. The memory we chose was Kanerva's sparse distributed memory (SDM), in a variant using N-of-M codes with rank order.

We used an associative memory because it is more biologically plausible, and easier to model both with spiking neurons and in hardware. It is also the natural choice, given that we are using a neural network as a memory that can learn and predict sequences.

We used the N-of-M SDM variant because it was already in use in our research group, and we were building on already published research using such a memory.

We used a separate context neural layer because it was modular: it enabled us to keep the long-term and short-term components of the memory separate, and we could use the N-of-M SDM as a pluggable module.
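To make the idea concrete, here is a minimal sketch of an N-of-M SDM in Python. The sizes, the purely binary treatment of the codes (ignoring rank order for the moment), and all of the names are my own assumptions for illustration, not the parameters of our actual system:

    import numpy as np

    rng = np.random.default_rng(1)

    M, N = 256, 11    # symbols are N-of-M codes: 11 active lines out of 256 (assumed sizes)
    W, w = 512, 20    # the address decoder fires a w-of-W code: 20 locations out of 512

    A = (rng.random((W, M)) < 0.1).astype(float)  # fixed random address connections
    D = np.zeros((W, M))                          # data memory, modified by writes

    def n_of_m(activation, n):
        """Binary vector with 1s at the n most strongly activated positions."""
        v = np.zeros_like(activation)
        v[np.argsort(activation)[-n:]] = 1.0
        return v

    def write(addr, data):
        rows = n_of_m(A @ addr, w).astype(bool)   # the w best-matching locations
        D[rows] = np.maximum(D[rows], data)       # binary Hebbian update

    def read(addr):
        rows = n_of_m(A @ addr, w)
        return n_of_m(rows @ D, N)                # sum the active rows, keep the top N

    x, y = n_of_m(rng.random(M), N), n_of_m(rng.random(M), N)
    write(x, y)
    print(np.array_equal(read(x), y))             # True while the memory is lightly loaded

Reading with a slightly corrupted version of x would still recover y, as long as enough of the w decoder rows still match; that error tolerance is what makes the memory associative.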

In the high-level system we also had to decide on the precise formulation of the problem and the protocol to be followed. We decided on the following protocol, in order of time within a single time step, to be followed for each symbol:

1. Associate the previous output with the input.
2. Form the next context from the input and the previous output.
3. Form the new output as a function of the new context and the input.

This particular order of operations is necessary to preserve well-defined boundaries between when to expect which part of the data. It is a bit counterintuitive.
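To make the protocol concrete, here is a toy sequence machine in Python that follows the three steps literally. The dict-based memory and tuple-based context are crude stand-ins for the N-of-M SDM and the context layer, and all the names are mine:

    from dataclasses import dataclass, field

    @dataclass
    class ToySequenceMachine:
        memory: dict = field(default_factory=dict)  # stand-in for the SDM
        context: tuple = ()                         # stand-in for the context layer
        prev_output: str = ""

        def step(self, symbol: str) -> str:
            # 1. Associate the previous output (via the stored context) with the input.
            self.memory[self.context] = symbol
            # 2. Form the next context from the input and the previous output.
            self.context = (symbol, self.prev_output)
            # 3. Form the new output from the new context.
            self.prev_output = self.memory.get(self.context, "?")
            return self.prev_output

    m = ToySequenceMachine()
    for s in "abc" * 6:
        print(s, "->", m.step(s))   # after a few passes, predictions settle on the next symbol

Because the output here feeds back into the context, the toy takes a few presentations of the sequence before its predictions stabilise; the real machine's context layer plays the same role of disambiguating where in the sequence the machine currently is.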

Now came the question of spiking neurons. Which model to use? Which coding to use? We had already decided on rank order coding.

We used rank order codes because:

1. It is possible to model them using spiking neurons, as Thorpe et al. showed.
2. They are an interesting and biologically plausible coding scheme.
3. As Thorpe showed, a rank order code admits both a high-level vector view and a low-level spiking neuron view; the spiking neuron view is implemented using feedforward shunt inhibition.
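A sketch of the two views, with an illustrative significance ratio of my own choosing: the vector view assigns each neuron a significance that falls geometrically with its firing rank, and feedforward shunt inhibition realises the same thing in spikes by attenuating the layer's sensitivity by that same ratio after every spike.

    alpha = 0.9                  # significance ratio per rank (assumed value)
    spike_order = [3, 0, 2]      # neuron indices, earliest spike first

    significance = [0.0] * 5     # the high-level vector view over 5 neurons
    for rank, neuron in enumerate(spike_order):
        significance[neuron] = alpha ** rank

    print(significance)          # [0.9, 0.0, 0.81, 1.0, 0.0]

    # The low-level view: a feedforward shunt inhibition neuron multiplies the
    # layer's sensitivity by alpha after each spike, so a downstream neuron
    # accumulates weight * alpha**rank per input spike, i.e. exactly the dot
    # product of its weights with the significance vector above.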

We also had to use feedback reset inhibition (apart from the feedforward shunt inhibition that implements the rank order code):

1. To implement N-of-M coding.
2. As a design decision, because of the problem of interference between spiking wavefronts: we found that spike bursts would otherwise either explode or die out.
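A sketch of how the feedback reset works, with hypothetical numbers: the first N neurons in a layer to cross threshold get to fire, and then the feedback inhibitory neuron resets the whole layer, so every wavefront carries exactly N spikes rather than exploding or dying out.

    def fire_layer(activations, n, threshold=0.5):
        """Emit at most n spikes from a layer, earliest (most activated) first."""
        fired = []
        for neuron in sorted(range(len(activations)),
                             key=lambda i: -activations[i]):
            if len(fired) == n or activations[neuron] < threshold:
                break            # feedback reset inhibition cuts the burst off
            fired.append(neuron)
        return fired             # an ordered n-of-m (rank order) code

    print(fire_layer([0.9, 0.2, 0.7, 0.8, 0.6], n=3))   # [0, 3, 2]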

We used an RDLIF model of spiking neuron, and a custom-built spiking neuron simulator. The RDLIF model is similar to the popular LIF model, but has more complex, 2nd order dynamics.

Yet we found that the RDLIF model cannot support functionality equivalent to the temporal abstraction of a rank order code as a vector of significances. We therefore had to switch to a simpler neural model, called the wheel model, with only 1st order dynamics.
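The difference in dynamics can be sketched as follows, with constants and step sizes that are mine rather than the real simulator's: in the RDLIF sketch an input spike adds to the activation's rate of change, and both the rate and the activation leak away (2nd order), whereas in a 1st order model the spike drives the activation directly.

    dt, steps = 0.001, 50
    tau_a, tau_r = 0.02, 0.01            # decay time constants (assumed)

    def rdlif_trace(spike_times):
        a, r, trace = 0.0, 0.0, []
        for t in range(steps):
            if t in spike_times:
                r += 1.0                 # a spike bumps the rate of change
            a += (r - a / tau_a) * dt    # the activation integrates the rate...
            r -= (r / tau_r) * dt        # ...while the rate itself leaks away
            trace.append(a)
        return trace

    def first_order_trace(spike_times):
        a, trace = 0.0, []
        for t in range(steps):
            if t in spike_times:
                a += 1.0                 # a spike bumps the activation directly
            trace.append(a)
        return trace

    print(max(rdlif_trace({0, 10})), max(first_order_trace({0, 10})))

In the 1st order sketch the final activation depends only on which spikes arrived and their significances, not on their detailed timing; that is the property that lets a simpler model match the vector-of-significances abstraction exactly.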
