Marcus Ghosh
marcusghosh.bsky.social
Computational neuroscientist.

Research Fellow @imperialcollegeldn.bsky.social and @imperial-ix.bsky.social

Funded by @schmidtsciences.bsky.social
Interested in #neuroscience + #AI and looking for a PhD position?

I can support your application @imperialcollegeldn.bsky.social

✅ Check your eligibility (below)
✅ Contact me (DM or email)

UK nationals: www.imperial.ac.uk/life-science...

Otherwise: www.imperial.ac.uk/study/fees-a...
November 4, 2025 at 9:47 AM
Are #NeuroAI and #AINeuro equivalent?

@rdgao.bsky.social draws a nice distinction between the two.

And introduces Gao's second law:
“Any state-of-the-art algorithm for analyzing brain signals is, for some time, how the brain works.”

Part 1: www.rdgao.com/blog/2024/01...
September 25, 2025 at 12:19 PM
Being part of this grassroots 🌱 neuroscience collaboration was a great experience!

Keep an eye out for our next collaborative effort
September 4, 2025 at 3:17 PM
We’re excited about this work as it:

⭐ Explores a fundamental question: how does structure sculpt function in artificial and biological networks?

⭐ Provides new models (pRNNs), tasks (Multimodal mazes) and tools, in a pip-installable package:

github.com/ghoshm/Multi...

🧵9/9
August 1, 2025 at 8:27 AM
Third, to explore why different circuits function differently, we measured 3 traits from every network.

We find that different architectures learn distinct sensitivities and memory dynamics which shape their function.

E.g. we can predict a network’s robustness to noise from its memory.

🧵8/9
August 1, 2025 at 8:27 AM
Second, to isolate how each pathway changes network function, we compare pairs of circuits which differ by one pathway.

Across pairs, we find that pathways have context-dependent effects.

E.g. here hidden-hidden connections decrease learning speed in one task but accelerate it in another.

🧵7/9
August 1, 2025 at 8:27 AM
First, across tasks and functional metrics, many pRNN architectures perform as well as the fully recurrent architecture.

Despite having fewer pathways and as few as ¼ the number of parameters.

This shows that pRNNs are efficient, yet performant.

🧵6/9
August 1, 2025 at 8:27 AM
To compare pRNN function, we introduce a set of multisensory navigation tasks we call *multimodal mazes*.

In these tasks, we simulate networks as agents with noisy sensors, which provide local cues about the shortest path through each maze.

We add complexity by removing cues or walls.

🧵4/9
August 1, 2025 at 8:27 AM
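The noisy-sensor setup above can be sketched in a few lines. This is purely illustrative (the function name, cue encoding, and noise model are assumptions, not the paper's task code): each sensory channel gives the agent a one-hot hint about the shortest-path direction, corrupted by Gaussian noise.

```python
# Illustrative sketch, NOT the paper's code: an agent at a maze cell
# receives one noisy cue per sensory channel, each hinting at the
# direction of the shortest path.
import numpy as np

rng = np.random.default_rng(0)

def noisy_cues(true_direction, n_channels=2, noise_sd=0.5):
    """One-hot direction (N, E, S, W) corrupted by Gaussian noise,
    independently per sensory channel."""
    onehot = np.eye(4)[true_direction]
    return onehot + rng.normal(0.0, noise_sd, size=(n_channels, 4))

cues = noisy_cues(true_direction=2)   # shortest path heads "south"
print(cues.shape)                     # (2, 4): channels x directions
```

Removing a channel's cues or a wall (as in the post) would then vary task difficulty while keeping the same network interface.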
This allows us to interpolate between:

Feedforward - with no additional pathways.
Fully recurrent - with all nine pathways.

We term the 126 architectures between these two extremes *partially recurrent neural networks* (pRNNs), as signal propagation can be bidirectional, yet sparse.

🧵3/9
August 1, 2025 at 8:27 AM
We start from an artificial neural network with 3 sets of units and 9 possible weight matrices (or pathways).

By keeping the two feedforward pathways (W_ih, W_ho) and adding the other 7 in any combination,

we can generate 2^7 distinct architectures.

All 128 are shown in the post above.

🧵2/9
August 1, 2025 at 8:27 AM
How does the structure of a neural circuit shape its function?

@neuralreckoning.bsky.social & I explore this in our new preprint:

doi.org/10.1101/2025...

🤖🧠🧪

🧵1/9
August 1, 2025 at 8:27 AM
1. Frame your scientific question (🖼️)

Before diving into research, you need to consider your aim and any data you may have.

This will help you to focus on relevant methods and consider if AI methods will be helpful at all.

@scikit-learn.org provide a great map along these lines!
July 25, 2025 at 10:58 AM
How can we best use AI in science?

9 other research fellows from @imperial-ix.bsky.social and I use AI methods in domains from plant biology (🌱) to neuroscience (🧠) and particle physics (🎇).

Together we suggest 10 simple rules @plos.org 🧵

doi.org/10.1371/jour...
July 25, 2025 at 10:58 AM
Had a great time discussing multisensory integration @imrf.bsky.social!

And really enjoyed sharing our new work too
July 21, 2025 at 8:14 AM
Off to my first @imrf.bsky.social conference!

I'll be giving a talk on Friday (talk session 9) on multisensory network architectures - new work from me & @neuralreckoning.bsky.social.

But say hello or DM me before then!
July 15, 2025 at 9:22 AM
Fellows from @imperial-ix.bsky.social, including myself, recently visited AIMS Cape Town!

We ran tutorials to show the students how we apply methods from AI to different scientific domains, from particle physics to public health (@ojwatson.bsky.social) and #neuroscience.
May 7, 2025 at 12:39 PM
In the same way, evolutionary algorithms can be used to evolve neural network models for neuroscience-style tasks.

In my own work, we found that these perform well, but interpreting the discovered networks (graphs with arbitrary topologies) is very challenging.

🧵9/10
March 13, 2025 at 10:48 AM
So what can we learn from this approach?

Well, these models:

Fit experimental data (from humans, rats and flies) better than current models, though here the difference is small (< 5%).

Trade off performance (x-axis) and complexity (y-axis).

🧵6/10
March 13, 2025 at 10:48 AM
For example, here is (part of) one of the evolved models.

🧵5/10
March 13, 2025 at 10:48 AM
@pcastr.bsky.social & co use a new algorithm to explore models in the form of Python code.

Their algorithm combines an LLM + an evolutionary process + parameter fitting.

So, it (iteratively) tests a set of models, then varies the best ones (by changing their code) to make new ones.

🧵4/10
March 13, 2025 at 10:48 AM
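The test-then-vary loop described above can be sketched generically. In the real system the "mutation" step is an LLM editing model code and the "fitness" step fits parameters to behavioural data; here a toy numeric stand-in illustrates only the loop's shape.

```python
# Generic evolve-and-select loop (toy stand-ins, not the authors' system).
import random

random.seed(0)

def fitness(model):
    # Stand-in for fitting a model to experimental data.
    return -(model - 3.14) ** 2

def mutate(model):
    # Stand-in for the LLM proposing a change to a model's code.
    return model + random.gauss(0.0, 0.3)

population = [random.uniform(-5, 5) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # keep the best models
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]    # vary them into new ones

best = max(population, key=fitness)
```

After a few dozen generations the population concentrates near the fitness optimum, mirroring how the algorithm iteratively refines its best candidate models.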
How should we design experiments in computational neuroscience?

This great paper focuses on "empirical design in reinforcement learning", but the ideas are generally applicable!

Here are some of their suggestions with a running example:
arxiv.org/abs/2304.01315
January 30, 2025 at 2:20 PM
New preprint from me, @swathianil.bsky.social & @neuralreckoning.bsky.social.

We consider how animals should combine multisensory signals in the naturalistic case where:
✅ Signals are sparse.
✅ Signals arrive in bursts.
✅ Sensory channels are correlated.

To do so, we compare several models (🧠):
January 14, 2025 at 3:16 PM
Join us in Zambia for the third TReND-CaMinA course: computational neuroscience & machine learning in Africa.

📆Applications open until 15.01.

🧠🧪

trendinafrica.org/trend-camina/
January 9, 2025 at 9:12 AM
I managed to find a physical copy of the conference proceedings, with the name Paul Rozin written inside. Maybe from @upenn.bsky.social Psychology?

Though you can read the full transcript here:
cyberneticzoo.com/wp-content/u...
December 16, 2024 at 11:30 AM