Cole Hurwitz
colehurwitz.bsky.social
AI Architect, Core AI, IBM | Agentic AI & AgentOps - find my posts on LinkedIn
After nearly a decade in academia, I am thrilled to share my next chapter: I am joining IBM as an AI Architect in the new Core AI group.

We are building an AgentOps platform to observe, evaluate, and optimize enterprise AI agents, and we are hiring. DM me if interested.
November 17, 2025 at 7:32 PM
We evaluate NEMO on brain region localization by predicting the region of individual neurons (and nearby groups) using only the extracted features, and compare it to baseline methods.

NEMO again outperforms both the VAE-based and supervised approaches.
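As a minimal sketch of this kind of evaluation (hypothetical feature and label arrays, and a simple nearest-centroid classifier standing in for the actual probe used in the paper):

```python
import numpy as np

def nearest_centroid_region(train_feats, train_regions, test_feats):
    """Assign each held-out neuron to the region whose mean training
    feature vector is closest in Euclidean distance."""
    regions = sorted(set(train_regions))
    labels = np.asarray(train_regions)
    # one centroid per region, computed from the training features
    centroids = np.stack([train_feats[labels == r].mean(axis=0) for r in regions])
    # distance from every test neuron to every centroid
    dists = np.linalg.norm(test_feats[:, None, :] - centroids[None], axis=-1)
    return [regions[i] for i in dists.argmin(axis=1)]

# toy example: two well-separated "regions" in a 3-d feature space
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (20, 3)) + [5, 0, 0],
                   rng.normal(0, 0.1, (20, 3)) + [0, 5, 0]])
labels = ["CA1"] * 20 + ["VISp"] * 20
preds = nearest_centroid_region(feats, labels, np.array([[5., 0, 0], [0, 5., 0]]))
```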
April 21, 2025 at 5:34 PM
We scale NEMO to the full IBL Brain-Wide Map dataset: 675 insertions from over 100 animals, yielding 37,017 high-quality neurons.

Without using any labels, NEMO's features align closely with anatomical regions and are consistent across labs.
April 21, 2025 at 5:34 PM
We benchmark NEMO against two SOTA cell-type classification methods, PhysMAP and a VAE (Beau et al., 2025), using two optotagged datasets from the mouse cerebellum and visual cortex.

NEMO outperforms all baselines, including fully supervised models, with minimal fine-tuning.
April 21, 2025 at 5:34 PM
We construct a paired dataset of spike trains and waveforms for all neurons, transforming each neuron's spiking activity into an ACG image (Beau et al., 2025) that captures its autocorrelation across firing rates.

NEMO is trained contrastively to align ACGs and waveforms in a shared embedding space.
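A minimal sketch of the alignment objective, assuming a symmetric CLIP-style InfoNCE loss over precomputed ACG and waveform embeddings (the actual NEMO architecture and loss details are in the paper):

```python
import numpy as np

def contrastive_align_loss(acg_emb, wf_emb, temperature=0.1):
    """Symmetric InfoNCE loss: the matching ACG/waveform pair for each
    neuron (row i with row i) should score higher than mismatched pairs."""
    # L2-normalize so similarity is cosine similarity
    a = acg_emb / np.linalg.norm(acg_emb, axis=1, keepdims=True)
    w = wf_emb / np.linalg.norm(wf_emb, axis=1, keepdims=True)
    logits = a @ w.T / temperature  # (N, N); positives on the diagonal
    idx = np.arange(len(a))

    def xent(lg):
        # cross-entropy with the diagonal entry as the correct class
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average over both directions: ACG -> waveform and waveform -> ACG
    return 0.5 * (xent(logits) + xent(logits.T))

# paired embeddings should score a lower loss than shuffled ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(16, 8))
loss_aligned = contrastive_align_loss(emb, emb)
loss_shuffled = contrastive_align_loss(emb, emb[rng.permutation(16)])
```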
April 21, 2025 at 5:34 PM
Thrilled to share our state-of-the-art method for in vivo cell-type classification and brain region localization, NEMO, which is now a spotlight at @iclr-conf.bsky.social!

We use NEMO to characterize the electrophysiological diversity of cell-types across the entire mouse brain. 🐭 🧪 🧠
April 21, 2025 at 5:34 PM
Certainly, our work at IBL shows the benefits of scale (thanks for the shoutout @tyrellturing.bsky.social).

However, scaling laws are most useful when defined by performance as a function of model size, data, and compute. The POYO paper comes closest, showing scaling with both model and data size.
April 18, 2025 at 2:54 PM
Major takeaways from this project: (1) scale is important. We need to keep scaling up these approaches to new brain regions, tasks, and animals. (2) Model comparison is hard! We need more community benchmarks for evaluation (like FALCON: snel-repo.github.io/falcon/).
April 15, 2025 at 5:12 PM
Excitingly, NEDS’s learned embeddings exhibit emergent properties: even without explicit training, they are highly predictive of the brain regions in each recording.
April 15, 2025 at 5:12 PM
Compared to two state-of-the-art neural decoding models (POYO+ and NDT2), NEDS achieves superior performance when predicting behavior from trial-aligned neural activity (we hyperparameter-tune all models on a subset of 10 training animals).
April 15, 2025 at 5:12 PM
By pretraining across 70+ animals, we see a large improvement in both encoding and decoding performance after fine-tuning on 10 held-out animals.
April 15, 2025 at 5:12 PM
We tokenize neural activity and continuous and discrete behaviors at each time step, then feed them into our shared transformer-based encoder.

We train NEDS by masking out and reconstructing neural activity and behavior, both within and across modalities.
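A minimal sketch of the two masking regimes, assuming a boolean mask over (time step, modality) tokens and illustrative modality names; the actual NEDS masking ratios and schedules are in the paper:

```python
import numpy as np

def sample_mask(n_timesteps, modalities, scheme, rng):
    """Build a boolean mask over (timestep, modality) tokens.
    True = token is masked and must be reconstructed.
    'within': mask a span of one modality, reconstruct it from the rest
              of that same modality.
    'across': mask one full modality, reconstruct it from the others
              (e.g. decode behavior from neural activity, or vice versa)."""
    mask = np.zeros((n_timesteps, len(modalities)), dtype=bool)
    if scheme == "within":
        m = rng.integers(len(modalities))
        start = rng.integers(n_timesteps)
        mask[start:start + n_timesteps // 4, m] = True
    elif scheme == "across":
        mask[:, rng.integers(len(modalities))] = True
    return mask

rng = np.random.default_rng(1)
mods = ["spikes", "wheel_speed", "choice"]  # hypothetical modality names
m_across = sample_mask(16, mods, "across", rng)
m_within = sample_mask(16, mods, "within", rng)
```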
April 15, 2025 at 5:12 PM
Another step toward a foundation model of the mouse brain: "Neural Encoding and Decoding at Scale (NEDS)"

Trained on neural and behavioral data from 70+ mice, NEDS achieves state-of-the-art prediction of behavior (decoding) and neural responses (encoding) on held-out animals. 🐀
April 15, 2025 at 5:12 PM
The International Brain Laboratory reproducibility platform paper is now published at eLife. An amazing effort by many different labs and individuals to understand and improve the reproducibility of electrophysiological measurements in mice. 🧠
March 17, 2025 at 3:22 PM
Come check out our poster at #NeurIPS2024 (Thursday 11am PST in the East Exhibit Hall A-C). Excited to get feedback on our new direction for neurofoundation models. :-)
December 10, 2024 at 8:27 PM
Our fine-tuned, 34-animal pre-trained MtM models can generalize to unseen tasks including behavior decoding from individual brain regions!
December 2, 2024 at 10:22 PM
We scale our MtM approach to train on up to 34 animals to demonstrate that it is a suitable recipe for large-scale pre-training. Performance improves with the number of animals for all metrics after fine-tuning!
December 2, 2024 at 10:22 PM
MtM leads to qualitative and quantitative improvements over previous self-supervised baselines, especially for predicting neural activity across brain regions and within brain regions.
December 2, 2024 at 10:22 PM
To address this, we introduce a transformer-based multi-task masking (MtM) approach. The model alternates between four "tasks": neuron masking, causal masking, inter-region masking, and intra-region masking. We prompt the model to solve each task during training and inference.
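The four masking tasks can be sketched as mask generators over a (neurons, timesteps) activity matrix; this is an illustrative reimplementation under assumed region labels, not the paper's code:

```python
import numpy as np

def mtm_mask(scheme, n_neurons, n_timesteps, regions, rng):
    """Boolean (neurons, timesteps) mask; True = held out for reconstruction.
    'neuron':       mask random neurons at all timesteps
    'causal':       mask the final timesteps (predict the future)
    'intra_region': mask part of one region, predict from the rest of it
    'inter_region': mask one whole region, predict from other regions"""
    mask = np.zeros((n_neurons, n_timesteps), dtype=bool)
    regions = np.asarray(regions)
    if scheme == "neuron":
        mask[rng.random(n_neurons) < 0.3, :] = True
    elif scheme == "causal":
        mask[:, n_timesteps // 2:] = True
    elif scheme == "inter_region":
        mask[regions == rng.choice(np.unique(regions)), :] = True
    elif scheme == "intra_region":
        idx = np.flatnonzero(regions == rng.choice(np.unique(regions)))
        mask[rng.choice(idx, size=max(1, len(idx) // 2), replace=False), :] = True
    return mask

rng = np.random.default_rng(2)
region_labels = ["CA1"] * 4 + ["VISp"] * 4  # hypothetical region labels
m_causal = mtm_mask("causal", 8, 10, region_labels, rng)
m_inter = mtm_mask("inter_region", 8, 10, region_labels, rng)
```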
December 2, 2024 at 10:22 PM
Ported over from X!

What will a foundation model for the brain look like? 🧠

We argue that it must be able to solve a diverse set of tasks across multiple brain regions and animals.

Check out our NeurIPS paper which introduces a multi-region, multi-animal, multi-task model arxiv.org/abs/2407.14668
December 2, 2024 at 10:22 PM