V1 Cortex and Machine Learning

Recently I met with a young neuroscience PI (Principal Investigator) at CUHK. His name is Owen Ko; he completed a PhD in Neuroscience at UCL, then an MBChB at CUHK. Back in his years in the Mrsic-Flogel lab, his research focused on functional and synaptic analysis of the primary visual cortex. The lab is known for new techniques for large-scale two-photon imaging of calcium signals in the mouse cortex. To learn more about the techniques, this video (04:30-06:00) has a good explanation by Thomas Mrsic-Flogel himself. With this new technology, we can view real-time footage of neurons firing at an unprecedented scale and level of detail. (This also leads to another discussion about the tradeoffs between basic science research and technology development, about which I heard lots of arguments back at the Broad Institute. But I will leave my thoughts for another post.)

"Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex", Nature Neuroscience, 2011 Jul 17;14(8):1045-52.

Screenshot of Owen (as you can see, he has a good sense of humor):


His new lab's research interests now include mouse visual cortex visualization, stereo sensory localization in zebrafish (another model organism often used in neuroscience), and other cool stuff. He has kindly offered to let me know about new research developments later on. I might help out with some tasks of setting up a confocal microscope in early April.

But our conversation prompted me to revisit some of the connections between machine learning and neuroscience that I learnt back at MIT. Although the neural networks in deep learning are inspired by, but only loosely based on, the brain at the neuronal level, scientists are finding more links and clues between these two distinct and exciting academic fields.

  • To learn more about visual cortex in neuroscience, check out this wiki page

  • To learn more about convolutional neural networks, check out this link and the paper we read in the reading group.

CNN Receptive Fields & Primate Visual Connections

The image recognition field spent years trying to find good statistical tools to classify, segment, generate, and recognize images. But it struggled with the problem of scale, orientation, and noise invariance, i.e., the sheer diversity of images that can represent a 'cat' or any other object. The field's recent tremendous breakthrough can be partially attributed to the capability of Convolutional Neural Networks (CNNs), an architecture introduced by Yann LeCun, to handle this invariance.
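As a rough illustration of why convolution helps with this (a minimal sketch in plain NumPy; the filter and test image here are made up): a convolutional layer shares one small filter across every position of the input, so shifting the input simply shifts the feature map, and the same detector fires wherever the pattern appears.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation in a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector, loosely analogous to an orientation-tuned V1 cell.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

image = np.zeros((10, 10))
image[3, 3] = 1.0        # a single bright spot
shifted = np.zeros((10, 10))
shifted[5, 3] = 1.0      # the same spot, moved down 2 pixels

a = conv2d(image, kernel)
b = conv2d(shifted, kernel)

# Away from the borders, shifting the input just shifts the feature map:
# the shared filter detects the pattern at its new location for free.
assert np.allclose(np.roll(a, 2, axis=0), b)
```

A fully connected layer would have to relearn separate weights for every position; the shared filter gets translation equivariance (and, combined with pooling, approximate invariance) by construction.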

Similarity in Neuromorphology

This is an image of the human visual cortex:
human_visual_cortex visual_cortex
Note the similarity in the hierarchical and scaling structures.

When CNNs were first invented in the 1980s, the convolutional filter design was mainly aimed at solving the high-dimensionality problems that computers couldn't handle back then. As the field evolved, researchers found that this architecture is also good at capturing the localized, interconnected relations that are often stored in the pixels of images. On the neuroscience side, I also believe (though without proof) that the primary visual cortex (V1) was not fully mapped out until recently. So this coincidence might be a clue that invites us to speculate about underlying patterns and connections between the two fields.
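The dimensionality point can be made concrete with a back-of-the-envelope parameter count (a sketch; the layer sizes here are invented purely for illustration):

```python
# Parameters needed to map a 224x224 grayscale image to a same-sized
# feature map: one fully connected layer vs. one shared 3x3 conv filter.
h = w = 224
dense_params = (h * w) * (h * w)  # every input pixel connects to every output unit
conv_params = 3 * 3               # one 3x3 filter, reused at every position

print(dense_params)  # ~2.5 billion weights
print(conv_params)   # 9
```

Weight sharing is what made these models trainable on 1980s hardware, well before the high-dimensional dense alternative was remotely feasible.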

Inspiration from Nature

neuron_paper This is a review paper published by the head of DeepMind back in 2017, calling for more interdisciplinary work between computer science and neuroscience.

If I were to work on research in Artificial Intelligence, I would closely follow the neuroscience field and draw inspiration for new architectures that could open up new research avenues. One of the major takeaways I learnt in the Zhang lab is to look for inspiration in nature. Don't reinvent the wheel, because Nature has probably already evolved something functionally similar to what you want to build. By studying the mechanics, structures, and functions of the brain, we can better understand how learning and memory work, and build learning networks that approximate the most efficient implementation in the world, a.k.a. the human brain.

Institutions around the world have already looked into setting up interdisciplinary faculty between AI and neuroscience. NYU Shanghai held a conference last week called 'Joint Future of Neuroscience and Artificial Intelligence'. Canada is pushing AI research clusters:
canada
Among these, MILA has highlighted publications such as:

- Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity [arXiv:1612.03214]
- From STDP towards Biologically Plausible Deep Learning

It is an exciting time to be in either field. By transferring insights from neuroscience, we can accelerate advances in AI research. One of my personal life goals is to understand my 'intentionality', a term from the philosophy of mind, and whether my thoughts are deterministic. To end this article, I would like to use a quote from Demis Hassabis in the Neuron paper:

we believe that the quest to develop AI will ultimately also lead to a better understanding of our own minds and thought processes. Distilling intelligence into an algorithmic construct and comparing it to the human brain might yield insights into some of the deepest and the most enduring mysteries of the mind, such as the nature of creativity, dreams, and perhaps one day, even consciousness.