Cracking the Mind's Code: How Neural Networks and Mice Brains Unveil Our Inner Workings

Hey there, fellow engineers!

Today, we’re diving deep—like "Alice in Wonderland"-rabbit-hole deep—into the fascinating world of neural networks. But we’re adding a twist. Ever heard of Locally Competitive Algorithms (LCAs) with accumulator neurons? No? Buckle up!

The Journey Begins: What Are LCAs?

Before we get into the nitty-gritty, let's unpack what LCAs are. Simply put, LCAs perform sparse coding: they represent an input using only a handful of active neurons drawn from a learned dictionary of features. If you're intrigued by this concept, you're not alone; the field owes much to Rozell and colleagues' seminal paper, "Sparse Coding via Thresholding and Local Competition in Neural Circuits" [1]. Think of LCAs as the 'Marie Kondos' of the neural network world: they keep what sparks joy (i.e., the essential features) and toss the rest. LCAs have been around the block, but what's really rocking the boat is integrating them with accumulator neurons.
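
To make that concrete, here's a minimal NumPy sketch of the LCA dynamics from the Rozell et al. paper: each neuron integrates its feed-forward drive, active neurons inhibit competitors in proportion to how much their dictionary atoms overlap, and a soft threshold keeps only the features that earn their keep. The function names and parameter values here are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold activation: zero below lam, shrunk toward zero above it."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_encode(x, Phi, lam=0.1, tau=100.0, dt=1.0, n_steps=500):
    """Infer a sparse code `a` such that Phi @ a approximates x.
    Phi is an (n_inputs, n_atoms) dictionary with unit-norm columns."""
    b = Phi.T @ x                            # feed-forward drive for each atom
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # atom overlaps = lateral competition
    u = np.zeros(Phi.shape[1])               # internal (membrane) potentials
    a = soft_threshold(u, lam)
    for _ in range(n_steps):
        # Leaky integration: driven by the input, inhibited by active competitors.
        u += (dt / tau) * (b - u - G @ a)
        a = soft_threshold(u, lam)           # only the winners stay nonzero
    return a
```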

The Special Sauce: Accumulator Neurons

Accumulator neurons act like tiny accountants, meticulously keeping a running total of their input. When paired with LCAs, they become the dynamic duo we never knew we needed. These neurons store and process information through spikes, much like biological neural networks do, even though, unlike biological neurons, they don't leak charge between spikes. They make LCAs not just good but great.
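
Here's a minimal sketch of that accountant behavior, assuming a simple non-leaky integrate-and-fire model (the function name and threshold value are my own illustrative choices): the neuron adds up its input with no decay, fires when the total crosses a threshold, and carries the remainder forward.

```python
import numpy as np

def accumulator_neuron(drive, threshold=1.0):
    """Non-leaky integrate-and-fire: add up input with no decay, spike when
    the running total crosses the threshold, and carry the remainder over."""
    v = 0.0
    spikes = np.zeros(len(drive), dtype=int)
    for t, x in enumerate(drive):
        v += x                   # integrate; note there is no leak term
        if v >= threshold:
            spikes[t] = 1
            v -= threshold       # the 'accountant' keeps the change
    return spikes

# A constant drive of 0.3 fires roughly every 3-4 steps, so the spike
# count tracks the integral of the input (0.3 * 10 = 3 spikes).
print(accumulator_neuron(np.full(10, 0.3)).sum())  # -> 3
```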

The Real Deal: Mice & Vision

Here comes the fun part: applying this techie marvel to actual, living, breathing mice. One dictionary in our model learns one-dimensional features from the fluorescent calcium traces recorded in the mice's primary visual cortex (V1). The other dictionary learns three-dimensional features, space plus time, from the visual stimuli, such as a movie or a GIF, that the mice watch while those traces are being recorded. This particular model is inspired by the research paper "Dictionary Learning with Accumulator Neurons" [2].
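
To picture how the two data streams differ, here's a rough sketch of the shapes involved; all sizes and dictionary widths below are illustrative guesses, not values from the paper:

```python
import numpy as np

# Illustrative sizes only; the real recordings and patches differ.
n_neurons, n_frames, height, width = 200, 16, 32, 32

# Dictionary 1: one-dimensional atoms over the fluorescent traces.
trace = np.random.rand(n_neurons)              # calcium activity at one time step
D_trace = np.random.randn(n_neurons, 64)       # 64 one-dimensional atoms
D_trace /= np.linalg.norm(D_trace, axis=0)     # unit-norm columns

# Dictionary 2: spatiotemporal (3D) atoms over the video stimulus, flattened.
clip = np.random.rand(n_frames, height, width).ravel()
D_video = np.random.randn(clip.size, 128)      # 128 space-plus-time atoms
D_video /= np.linalg.norm(D_video, axis=0)
```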

Why Two Dictionaries?

Why stop at one dictionary when you can have two, each serving a specific purpose? The first dictionary focuses on the 1D features, making the activity in the mice's visual cortex easier to interpret and analyze. The second dictionary dives into the 2D or 3D features of the stimulus, letting us reconstruct both the model neurons' activities and the corresponding image or video input with high fidelity. It's like having two superheroes in one comic book!
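
Both dictionaries can be trained the same way: run an LCA encode step (like the `lca_encode` sketch above) to get a sparse code, then nudge the dictionary toward reconstructing its input. The update below is a textbook gradient step on the reconstruction error, offered as a sketch rather than the paper's exact learning rule:

```python
import numpy as np

def dictionary_update(D, x, a, lr=0.01):
    """One gradient step on the reconstruction error ||x - D @ a||^2,
    then renormalize atoms so the sparse code's scale stays meaningful."""
    residual = x - D @ a                  # what the current code fails to explain
    D += lr * np.outer(residual, a)       # Hebbian-flavored: residual times activity
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D
```

The same loop serves both superheroes: feed it trace vectors for the first dictionary and flattened video clips for the second.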

The Future is Bright

The future work is nothing short of exciting. Imagine linking two dictionaries to the same input! This paves the way for creating more accurate and flexible models that can adapt to various types of neural data. We’re talking next-level neuroengineering, folks.

Wrapping Up

So there you have it—LCAs, accumulator neurons, and a couple of mice later, we've got ourselves a model that’s a game-changer in the realm of neural networks and neuroengineering. Stay curious, keep exploring, and who knows what we'll unravel next!


Footnotes

[1] Rozell, C. J., Johnson, D. H., Baraniuk, R. G., & Olshausen, B. A. (2008). Sparse Coding via Thresholding and Local Competition in Neural Circuits. Neural Computation, 20(10), 2526–2563.

[2] Parpart, G., Gonzalez, C., Watkins, Y., Kenyon, G. T., Stewart, T. C., Kim, E., Rego, J., Nesbit, S. C., & O'Brien, A. (2022). Dictionary Learning with Accumulator Neurons.