Joy Bose’s Weblog

Just another WordPress.com weblog

Archive for the ‘Research’ Category

Computational neuroscience

Posted by joyboseroy on November 8, 2007

The idea of computational neuroscience is that the brain is a deterministic system, and computer modelling can help us understand how it works. The benefit runs both ways: computer scientists and engineers can get insights from the brain on how to build better, faster and more error-tolerant systems (made out of unreliable components), and neuroscientists can get validation for their theories, because without theories neuroscience is just a mass of data with no understanding of systems-level functioning.

The hippocampus (see Rolls and Treves' book) is one of the best studied systems. It is implicated in short-term memory (the CA3/CA1 areas) before memories are consolidated into long-term memory in the neocortex and higher brain regions. People and animals with hippocampal lesions (where this part of the brain is damaged) have anterograde amnesia (they find it difficult to form new memories, but old memories and procedural memories remain). Many have postulated this area as being responsible for spatial maps (especially in the rat hippocampus), LTP/LTD plasticity, episodic memory, and so on.

A typical comp-neuro experiment is illustrated beautifully in the hippocampus chapter of Rolls and Treves. First, there needs to be a falsifiable hypothesis. Rolls studied the current theories and found their defects: they rested on mathematical calculations of the number of connections between the different layers, the nature of those connections (recurrent and so on), how they could be modelled, the expected memory capacity and the constraints on connectivity, and they made falsifiable predictions that turned out to be wide of the mark because of their many assumptions. Instead, Rolls proposed a simple model based on multiple layers of associative or recurrent neurons, each of whose capacities could be modelled. He then tested the theory by simulation: giving a random pattern to a layer and expecting it to learn and recall the pattern by Hebbian learning, keeping the connection ratios roughly the same as in the biological data.
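To make that concrete, here is a minimal sketch in Python of this kind of simulation: store a few random sparse patterns in a recurrent layer by Hebbian learning, then test recall from a degraded cue. The sizes, sparsity and winners-take-all recall rule are my illustrative choices, not figures from Rolls and Treves.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_patterns, sparsity = 200, 10, 0.1

    # Random sparse binary patterns to store
    patterns = (rng.random((n_patterns, n_units)) < sparsity).astype(float)

    # Hebbian (covariance rule) learning on the recurrent weights
    W = np.zeros((n_units, n_units))
    for p in patterns:
        W += np.outer(p - sparsity, p - sparsity)
    np.fill_diagonal(W, 0.0)          # no self-connections

    def recall(cue, steps=5):
        """Iterate the recurrent dynamics with k-winners-take-all."""
        k = int(sparsity * n_units)
        state = cue.copy()
        for _ in range(steps):
            h = W @ state             # recurrent input to each unit
            state = np.zeros(n_units)
            state[np.argsort(h)[-k:]] = 1.0
        return state

    # Degrade a stored pattern by deleting half its active units, then recall
    cue = patterns[0] * (rng.random(n_units) < 0.5)
    out = recall(cue)
    print("overlap with stored pattern:", (out @ patterns[0]) / patterns[0].sum())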

There is a lot of speculation involved in neuroscience ("this is one way it could happen", and so on), and based on it scientists make falsifiable predictions which are then verified. Rolls and Treves' book starts with some standard types of neural nets, including pattern association memories, autoassociative memories, competitive nets and recurrent nets trained with error backpropagation, and then goes through the different brain regions and how they can be modelled using these four or five basic types of memory.
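For flavour, the first of these, a pattern associator, fits in a few lines. This sketch (the sizes and sparsity are my own choices) stores input-output pairs by one-shot Hebbian learning and recalls by keeping the most strongly driven output units:

    import numpy as np

    rng = np.random.default_rng(3)
    m_in, m_out, n_pairs = 64, 32, 5
    x = (rng.random((n_pairs, m_in)) < 0.2).astype(float)    # input patterns
    y = (rng.random((n_pairs, m_out)) < 0.2).astype(float)   # target patterns

    # One-shot Hebbian learning: sum of outer products over the pairs
    W = sum(np.outer(yi, xi) for xi, yi in zip(x, y))

    # Recall: project an input through the weights and keep the top-k
    # output units, with k matched to the target sparsity
    k = int(0.2 * m_out)
    h = W @ x[0]
    recalled = np.zeros(m_out)
    recalled[np.argsort(h)[-k:]] = 1.0
    print("correct bits:", int(recalled @ y[0]), "of", int(y[0].sum()))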



Engineering a sequence machine

Posted by joyboseroy on October 30, 2007

My PhD research can be presented in the form of a dialogue, which goes something along these lines:

The goal of my research is to engineer a high-level system using spiking neurons. We have taken a sequence machine as the example of a high-level task that can be modelled. A sequence machine is an automaton that can store and recognise sequences of symbols. We chose this particular application because of its closeness to biology, because many people have already tried to approach this problem but none from an engineering perspective, and because in our research group we are trying to model neurons in hardware.

So having decided on the goal, the question is how. We decided to use an associative memory as the high-level neuron model, with a separate neural layer to dynamically store the context, which the machine uses to reconstruct the state of the sequence in order to predict the next symbol with maximum accuracy based on its past knowledge. The memory we chose was Kanerva's sparse distributed memory (SDM), using N-of-M codes together with rank order codes.
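The idea of an N-of-M code is simple to state: of M lines, exactly N are active, and when rank order codes are layered on top, the order in which those N lines fire carries information too. A toy sketch (the names and sizes are mine, not from the published N-of-M SDM work):

    import numpy as np

    def n_of_m_encode(drive, n):
        """Return the indices of the n most strongly driven of m lines,
        from most to least significant (an ordered n-of-m code)."""
        order = np.argsort(drive)[::-1]     # most strongly driven first
        return order[:n].tolist()

    rng = np.random.default_rng(1)
    m, n = 16, 4
    code = n_of_m_encode(rng.random(m), n)
    print(f"ordered {n}-of-{m} code:", code)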

We used an associative memory because it is more biologically plausible, and easier to model using spiking neurons and in hardware. It is also the natural choice, given that we use the neural network as a memory that can learn and predict sequences.

We used the N-of-M SDM variant because it was already in use in our research group, and we were building on already published research using such a memory.
We used a separate context neural layer because it was modular: it let us keep the long-term and short-term components of the memory separate, and we could use the N-of-M SDM unchanged as a pluggable module.

For the high-level system we also had to decide on the precise formulation of the problem and the protocol to be followed. We decided on the following protocol, in order of time within a single time step, to be followed for each symbol:

1. Associate the previous output with the input
2. Form the next context from the input and the previous output
3. Form the new output as a function of the new context and the input

This particular order of operations is necessary to preserve well-defined boundaries between when to expect which part of the data. It is a bit counter-intuitive.
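In toy Python, one time step looks roughly like the sketch below. The memory and context layer here are trivial stand-ins of my own (an exact-match lookup table and a tuple), and my reading of step 1 is that the symbol now arriving is the target the previous step's cue should map to; the point is only to make the order of the three operations concrete, the real system uses the N-of-M SDM and a neural context layer.

    class ToyMemory:
        """Stand-in associative memory: an exact-match lookup table."""
        def __init__(self):
            self.store = {}
        def associate(self, cue, value):
            self.store[cue] = value
        def recall(self, cue):
            return self.store.get(cue)      # None when the cue is unseen

    def form_context(symbol, prev_output):
        # Stand-in context layer: any deterministic combination will do here
        return (symbol, prev_output)

    def step(memory, state, symbol):
        """One time step of the protocol, for the symbol now arriving."""
        prev_cue, prev_output = state
        # 1. Associate the previous output with the input: the arriving
        #    symbol is what the previous step's cue should have mapped to.
        if prev_cue is not None:
            memory.associate(prev_cue, symbol)
        # 2. Form the next context from the input and the previous output.
        context = form_context(symbol, prev_output)
        # 3. Form the new output (the prediction of the next symbol)
        #    from the new context and the input.
        cue = (context, symbol)
        output = memory.recall(cue)
        return (cue, output), output

    memory, state = ToyMemory(), (None, None)
    for symbol in "abcabcabcabcabc":
        state, prediction = step(memory, state, symbol)
        print(symbol, "->", prediction)

With a repeating sequence, the predictions settle down after a few passes and start matching the next symbol.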

Now came the question of spiking neurons. Which model to use? Which coding to use? We had already decided on rank order coding.

We used rank order codes because:

1. It is possible to model them using spiking neurons, as Thorpe et al. showed.
2. They are an interesting and biologically plausible coding scheme.
3. As Thorpe showed, a rank order code admits both a high-level vector view and a low-level spiking neuron view; the spiking neuron view is obtained using feedforward shunt inhibition.
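A small sketch of those two views (the decay factor alpha is illustrative): the vector view assigns each input line a significance that falls off geometrically with its firing rank, while the spiking view reaches the same number by attenuating the receiving neuron's sensitivity with a shunt term after each incoming spike.

    import numpy as np

    def significance_vector(spike_order, m, alpha=0.9):
        """Vector view: the line that fires r-th gets significance alpha**r."""
        v = np.zeros(m)
        for rank, line in enumerate(spike_order):
            v[line] = alpha ** rank
        return v

    def shunted_activation(spike_order, weights, alpha=0.9):
        """Spiking view: process spikes in arrival order; feedforward shunt
        inhibition multiplies down the sensitivity after each spike."""
        activation, shunt = 0.0, 1.0
        for line in spike_order:
            activation += shunt * weights[line]
            shunt *= alpha
        return activation

    m = 8
    weights = np.arange(m, dtype=float)
    order = [3, 1, 6, 0]                    # a rank order code: 4-of-8 lines
    print(shunted_activation(order, weights))         # 8.76
    print(significance_vector(order, m) @ weights)    # 8.76, the same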

We also had to use feedback reset inhibition (in addition to the feedforward shunt inhibition that implements the rank order code):

1. To implement N-of-M coding.
2. As a design decision, because of the problem of interference between spiking wavefronts: we found that spike bursts would otherwise either explode or die out.
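Putting the two inhibition mechanisms together, a layer update might look roughly like the following sketch (the weights and constants are illustrative, not parameters from the thesis): the shunt term scales down later inputs, and feedback reset inhibition silences the whole layer once N output spikes have been emitted, so each wavefront yields at most an N-of-M burst.

    import numpy as np

    def propagate(input_order, W, n_out, alpha=0.9, threshold=2.0):
        """Feed one input wavefront through a layer; return the output
        spikes in firing order (fewer than n_out if the burst dies out)."""
        act = np.zeros(W.shape[0])          # output neuron activations
        shunt = 1.0
        out_spikes = []
        for line in input_order:            # input spikes in arrival order
            act += shunt * W[:, line]       # drive, scaled by the shunt
            shunt *= alpha                  # feedforward shunt inhibition
            for j in np.flatnonzero(act >= threshold):
                out_spikes.append(int(j))
                act[j] = -np.inf            # a neuron fires only once
                if len(out_spikes) == n_out:
                    act[:] = 0.0            # feedback reset inhibition
                    return out_spikes       # exactly an n-of-m burst
        return out_spikes

    rng = np.random.default_rng(2)
    W = rng.random((10, 8))                 # 8 input lines, 10 output neurons
    print(propagate([3, 1, 6, 0], W, n_out=3))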

We used an RDLIF model of spiking neuron, and a custom-built spiking neuron simulator. The model is similar to the popular LIF model, but has more complex second-order dynamics.

Yet we found that the RDLIF model cannot support functionality equivalent to the temporal abstraction of a rank order code as a vector of significances. We therefore had to switch to a simpler neural model, called the wheel model, with only first-order dynamics.
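The contrast can be sketched in a pair of Euler update steps. This is my illustrative reading of "second-order" versus "first-order", with made-up time constants, not the model definitions from the thesis:

    def rdlif_step(act, rate, inp, dt=1.0, tau_a=10.0, tau_r=5.0):
        """Second-order (RDLIF-like) dynamics: the input feeds a rate
        variable, and the rate in turn drives the activation; both leak."""
        rate = rate + dt * (inp - rate / tau_r)
        act = act + dt * (rate - act / tau_a)
        return act, rate    # the neuron fires when act crosses a threshold

    def wheel_step(act, inp, dt=1.0):
        """First-order (wheel-model-like) dynamics: the activation simply
        advances with the input drive, with no decay."""
        return act + dt * inp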


Some tools for research in CS

Posted by joyboseroy on October 19, 2007

Learn how to use gnuplot in 3D for plotting graphs. Learn CVS in Unix for keeping track of version changes. IEEE Xplore is a very useful database of IEEE papers across all its conferences and journals, as is the ACM Digital Library. Google Books is sometimes useful. If possible, get an Athens account to access these. Matlab is the most useful software for coding in academic environments, best for writing experiments where efficiency is not the top priority; its added advantage is the number of toolboxes available and the integrated plotting and maths tools, especially in versions 7.0 and higher. If Windows is your thing, learn to code inside an IDE such as Eclipse or Visual Studio.

To submit a paper to a journal or conference, first look at the calls for papers in your field. Once you have decided where to submit, go to the venue's website, download its LaTeX template (such as IEEEtran.cls and a .tex template), and stick to that style. Springer provides another template popular with many conferences. Once you have written the .tex file, compile it using latex abc.tex and then dvipdf abc.dvi. There is a website (IEEE PDF eXpress) to check IEEE paper submissions and make sure the format is ok.


Advice on writing a paper

Posted by joyboseroy on October 18, 2007

Ask yourself the following questions (Jon's advice: any good paper should answer all of them): What is the problem? Is the problem important? Has it been solved? Have you solved it?

Gavin's advice: the shorter the title and abstract, the better. The trick is to find related papers, group them in chronological order, build a storyline, and establish the gap your paper fills and what makes it unique. Write a draft outline of the paper, with methodology, experiments, analysis and conclusion, plus an abstract, before even starting to write the paper itself.


My research, study and further work

Posted by joyboseroy on October 18, 2007

I attended the following conferences, along with assorted workshops and tutorials: NIPS in Vancouver and Whistler in December 2003, BICS in Stirling in August 2004, IJCNN in Montreal in August 2005, ICANN in Warsaw and Torun in September 2005, and a mathematical neuroscience workshop in Edinburgh in 2004.

At this moment I am working as a software engineer, hoping to get back into academia after, say, a couple of years in industry. I am writing a research paper with my co-authors in an area combining sequence learning, spiking neurons, rank order codes, associative memories and neural engineering, and wondering how best to focus the paper.

In the coming months I also hope to do some research experiments (as a hobby), studying the feasibility of the experiments I mentioned in the future work section of my PhD thesis, and to think about whether the work can be commercially deployed at all (say in applications at companies like Google and Rolls-Royce), or at least how to reproduce the state of the art.


Resources in Computational Neuroscience

Posted by joyboseroy on October 18, 2007

Computational neuroscience is a fascinating field. If you are interested in getting into it (or into a related field such as neural networks), congratulations, and get busy. Join the comp-neuro@neuroinf.org and connectionists@cs.cmu.edu mailing lists. If you are in the UK, you can join the following societies: NCAF (the Natural Computing Applications Forum); the IEEE Computational Intelligence Society, which publishes the IEEE Transactions on Neural Networks and sponsors IJCNN; and INNS, the International Neural Network Society, which publishes the journal Neural Networks.

Read the free online encyclopedia at Scholarpedia, an initiative led by Izhikevich. Search the internet for past conference websites of IJCNN, NIPS, ICANN, IWANN etc. and look at their tutorials and workshops sections; do the same for the MIT, UCL Gatsby and Redwood Neuroscience Institute journal clubs. Also look at the syllabi, and maybe the exams, lectures and other course material, for course modules in computational neuroscience or the wider fields of artificial intelligence, machine learning and neural networks.

For books, read perhaps Fundamentals of Computational Neuroscience (Trappenberg), the Scientific American book of the brain, Principles of Neural Science by Kandel, Schwartz and Jessell, Theoretical Neuroscience by Dayan and Abbott, Neural Networks by Simon Haykin, the pattern recognition book by Christopher Bishop, and the Handbook of Brain Theory and Neural Networks edited by Arbib. Read different books on the subject, especially ones that combine the biology/physiology side with computer science.

Keep yourself updated on the latest research by watching for journal issue announcements on the mailing lists and going to the journals' websites, or to the authors' websites, and reading whichever of their listed papers you can access for free. Download and learn to use common simulators like SNNS and the Matlab NN toolbox, and spiking neural simulators like GENESIS and NEURON. Good search engines for resources in this field are Google Scholar and http://citeseer.ist.psu.edu/

For those interested in careers in the area (postdocs and lectureships), first look through the job adverts on the mailing lists and also in Nature and New Scientist jobs, on jobs.ac.uk and, in the US, in the Chronicle of Higher Education. Make a list of the skills they are looking for, decide where your interests lie and what future path you want to take, and write a proper grant and research proposal.
