Taku Ito
@takuito.bsky.social

Computational Neuroscience + AI @ IBM Research | 📍NYC | https://ito-takuya.github.io

Pinned
What complexity of algorithms can AI compute? In a new paper with colleagues at IBM Research, we explore how circuit complexity theory can help quantify the degree of algorithmic generalization in AI systems. www.nature.com/articles/s42...
@natmachintell.nature.com
#ML #AI #MLSky
1/n
This is the most astonishing graph of what the Trump regime has done to US science. They have destroyed the federal science workforce across the board. The negative impacts on Americans will be felt for generations, and the US might never be the same again.

www.nature.com/immersive/d4...
Trump has been in office for one year. We at @nature.com did a deep dive looking at the administration's disruption of science in numbers.

Take a look—the numbers are staggering. By me, @dangaristo.bsky.social, Jeff Tollefson, @kimay.bsky.social, & help from @noamross.net @scott-delaney.bsky.social
US science after a year of Trump: what has been lost and what remains
A series of graphics reveals how the Trump administration has sought historic cuts to science and the research workforce.
www.nature.com

Reposted by Taku Ito

Oh wow, deepseek is starting to make serious progress on LLMs that offload memory to external storage: github.com/deepseek-ai/...
github.com
Introducing DroPE: Extending Context by Dropping Positional Embeddings

We found that positional embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity.

arxiv.org/abs/2512.12167
pub.sakana.ai/DroPE
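
For readers unfamiliar with RoPE, here is a rough, self-contained sketch of the mechanism under discussion. It is not DroPE's implementation; the `use_rope` flag is just an assumption to illustrate what "dropping" the positional embedding means (skip the rotation entirely).

```python
# Minimal sketch of rotary positional embeddings (RoPE), NOT DroPE itself.
# The use_rope flag is a hypothetical switch illustrating the scaffold-then-drop idea.
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, use_rope: bool = True) -> np.ndarray:
    """x: (seq_len, dim) with even dim; positions: (seq_len,) integer positions."""
    if not use_rope:  # positional embedding "dropped": identity, no position information injected
        return x
    d = x.shape[-1]
    inv_freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))       # (d/2,) rotation frequencies
    angles = positions[:, None] * inv_freq[None, :]             # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                             # interleaved dimension pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                          # rotate each pair by its angle
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.random.randn(8, 4)
print(rope(q, np.arange(8)).shape, rope(q, np.arange(8), use_rope=False).shape)
```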

Reposted by Taku Ito

Excited to see our paper with @mwcole.bsky.social finally out in peer-reviewed form @natcomms.nature.com! We examine how the human brain learns new tasks and optimizes representations over practice…1/n

Reposted by Taku Ito

Our work with @pawa-pawa.bsky.social is out in Nature Machine Intelligence! The choice of activation function affects the representations, dynamics, and circuit solutions that emerge in RNNs trained on cognitive tasks. Activation matters!
www.nature.com/articles/s42...

Reposted by Taku Ito

Did you know that AI can figure out its own way to learn, and that its way is better than one designed by humans? Read more in a @nature.com N&V (and the original paper is in the comment) 🧪 www.nature.com/articles/d41...
AI discovers learning algorithm that outperforms those designed by humans
An artificial-intelligence algorithm that discovers its own way to learn achieves state-of-the-art performance, including on some tasks it had never encountered before.
www.nature.com
(repost welcome) The Generative Model Alignment team at IBM Research is looking for interns for next summer! Two positions, two topics:

🍰 Reinforcement Learning environments for LLMs

🐎 Speculative and non-autoregressive generation for LLMs

Interested/curious? DM or email ramon.astudillo@ibm.com

Reposted by Taku Ito

Michael X Cohen on why he left academia/neuroscience.
mikexcohen.substack.com/p/why-i-left...
Why I left academia and neuroscience
Don't worry, this isn't yet another story of rage-quitting.
mikexcohen.substack.com

Reposted by Taku Ito

Nature @nature.com · Sep 26
Nature research paper: Arousal as a universal embedding for spatiotemporal brain dynamics

go.nature.com/4nMUgYz
Arousal as a universal embedding for spatiotemporal brain dynamics - Nature
Reframing of arousal as a latent dynamical system can reconstruct multidimensional measurements of large-scale spatiotemporal brain dynamics on the timescale of seconds in mice.
go.nature.com
Lab’s latest is out in Imaging Neuroscience, led by Kirsten Peterson: “Regularized partial correlation provides reliable functional connectivity estimates while correcting for widespread confounding”, where we demonstrate a major improvement over standard fMRI functional connectivity (correlation) estimates. 1/n
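
For intuition on what a regularized partial correlation computes, here is a minimal sketch (not the paper's exact estimator): ridge-regularize the region-by-region covariance, invert it, and convert the resulting precision matrix into partial correlations. The regularization scheme and the `alpha` value are illustrative assumptions.

```python
# Minimal sketch, assuming a simple ridge-regularized precision matrix
# (the paper's regularization scheme may differ).
import numpy as np

def regularized_partial_correlation(ts: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """ts: (timepoints, regions) array of fMRI time series."""
    cov = np.cov(ts, rowvar=False)                                   # regions x regions covariance
    precision = np.linalg.inv(cov + alpha * np.eye(cov.shape[0]))    # regularized inverse covariance
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)                              # precision -> partial correlation
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))            # 200 timepoints, 10 regions (toy data)
print(regularized_partial_correlation(ts).shape)   # (10, 10)
```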

Formalizing AI computation in terms of algorithmic complexity offers a formal way to quantify AI systems and a principled foundation for building more algorithmically capable systems in the future.
Blog: research.ibm.com/blog/ai-algo...
arXiv: arxiv.org/abs/2411.05943
Can AI generate truly novel algorithms?
A decades-old approach to measuring algorithmic complexity could provide a window into better understanding how AI systems compute.
research.ibm.com

While using AI models to generate code is commonplace these days, we still do not fully understand the limits on the complexity of the code these models can formulate.
3/n

Using circuits to formalize algorithmic problems for AI models (e.g., circuit depth as time complexity, circuit size as space complexity), we can quantify the complexity of the circuit computations (i.e., the algorithmic complexity) that an AI model can perform.
2/n
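
To make the depth/size framing above concrete, here is a toy sketch (not from the paper): a boolean circuit represented as a gate list, with its size (gate count, the space-complexity proxy) and depth (longest input-to-output path, the time-complexity proxy) computed directly. The gate set and example circuit are illustrative assumptions.

```python
# Toy boolean circuit with size and depth measurements (illustrative, not the paper's code).
GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b, "XOR": lambda a, b: a ^ b}

# A circuit is a topologically ordered list of (output_wire, gate, input_a, input_b).
# Example: 2-bit parity followed by an AND with a third input.
circuit = [
    ("w0", "XOR", "x0", "x1"),
    ("y",  "AND", "w0", "x2"),
]

def evaluate(circuit, inputs):
    wires = dict(inputs)
    for out, gate, a, b in circuit:
        wires[out] = GATES[gate](wires[a], wires[b])
    return wires

def size(circuit):
    return len(circuit)                 # number of gates ~ space complexity

def depth(circuit):
    d = {}                              # wire -> longest path from an input (assumes topological order)
    for out, _, a, b in circuit:
        d[out] = 1 + max(d.get(a, 0), d.get(b, 0))
    return max(d.values())              # longest path ~ time complexity

wires = evaluate(circuit, {"x0": 1, "x1": 0, "x2": 1})
print("output:", wires["y"], "size:", size(circuit), "depth:", depth(circuit))
```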

Reposted by Taku Ito

Mental health research is at a turning point—breakthroughs can transform lives, but only with bold action, investment, and open collaboration. The time for action is now. Read our full statement here: childmind.org/blog/can-sci...
Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N

Reposted by Taku Ito

New preprint! Ziyan and I explore how task order impacts continual learning in neural networks and how to optimize it. Our analysis highlights two key principles for better task sequencing.
Check it out: arxiv.org/pdf/2502.03350
arxiv.org
The entire website for the NIH Office of Research on Women's Health (ORWH) is very nearly stripped bare. This is so, so devastating. orwh.od.nih.gov/research/fun...
orwh.od.nih.gov

Reposted by Taku Ito

New paper in @brain1878.bsky.social: Healthy people under S-ketamine, an NMDAR antagonist, and people living with schizophrenia, a disorder associated with NMDAR hypofunction, spend more time in an external mode of perception - where noisy sensory signals override knowledge about the world.

Reposted by Taku Ito

Quantifying Differences in Neural Population Activity With Shape Metrics https://www.biorxiv.org/content/10.1101/2025.01.10.632411v1

Reposted by Taku Ito

Paper shows very small LLMs can match or beat larger ones through 'deep thinking' - evaluating different solution paths - and other tricks. Their 7B model beats o1-preview on complex math by exploring 64 different solutions & picking the best one.

The test-time-compute paradigm seems really fruitful.
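
A minimal best-of-N sketch of the test-time-compute idea described above (not the paper's method): sample N candidate solutions from a small model and keep the one a scorer prefers. Here `generate` and `score` are hypothetical placeholders for a model's sampling call and a verifier/reward model.

```python
# Best-of-N sampling sketch; generate() and score() are hypothetical stand-ins.
import random

def generate(prompt: str) -> str:
    """Placeholder for sampling one candidate solution from a small LLM."""
    return f"candidate answer {random.randint(0, 9)}"

def score(prompt: str, candidate: str) -> float:
    """Placeholder for a verifier or reward model scoring a candidate."""
    return random.random()

def best_of_n(prompt: str, n: int = 64) -> str:
    candidates = [generate(prompt) for _ in range(n)]      # explore n solution paths
    return max(candidates, key=lambda c: score(prompt, c)) # keep the highest-scoring one

print(best_of_n("Solve: 2x + 3 = 11"))
```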

Reposted by Taku Ito

New paper out! 🚨 📰 With @batuhanerkat.bsky.social, John McClure, @hussainyk1.bsky.social, @polacklab.bsky.social, we reveal how discretized representations in V1 predict suboptimal orientation discrimination. 🧪🧠🐭 This work reconciles neurometric and psychometric curves.
www.nature.com/articles/s41...
Discretized representations in V1 predict suboptimal orientation discrimination - Nature Communications
How animals generate perceptual decisions remains poorly understood. Here, the authors show that during a discrimination task, the mouse visual cortex does not encode the orientations of the cues but ...
www.nature.com

Reposted by Taku Ito

New results for a new year! “Linking neural population formatting to function” describes our modern take on an old question: how can we understand the contribution of a brain area to behavior?
www.biorxiv.org/content/10.1...
🧠👩🏻‍🔬🧪🧵
#neuroskyence
1/
Linking neural population formatting to function
Animals capable of complex behaviors tend to have more distinct brain areas than simpler organisms, and artificial networks that perform many tasks tend to self-organize into modules (1-3). This sugge...
www.biorxiv.org

Reposted by Taku Ito

And relatedly, Felix wrote a good piece on the stress and anxiety currently affecting many people who work in AI due to the current climate in the industry:

docs.google.com/document/d/1...

If only more folks in AI were gentle and introspective like this...
AI and Stress
200Bn Weights of Responsibility The Stress of Working in Modern AI Felix Hill, Oct 2024 The field of AI has changed irrevocably in the last 2 years. ChatGPT is approaching 200m monthly users. Gemin...
docs.google.com

Reposted by Taku Ito

What was the most important machine learning paper in 2024?

My Famous Deep Learning Papers list (that I use in teaching) does not include any new ideas from the last year.

papers.baulab.info

Which single new paper would you add?
Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark

aiguide.substack.com/p/did-openai...
Did OpenAI Just Solve Abstract Reasoning?
OpenAI’s o3 model aces the "Abstraction and Reasoning Corpus" — but what does it mean?
aiguide.substack.com

Reposted by Taku Ito

📌 Poster Session:
⏰ When: TODAY, Thu, Dec 12, 4:30 p.m. – 7:30 p.m. PST
📍 Where: East Exhibit Hall A-C, #3705
📄 What: Geometry of Naturalistic Object Representations in Recurrent Neural Network Models of Working Memory

Hope to see you there!
@bashivan.bsky.social @takuito.bsky.social
🚨We're very excited to share our latest study, by Pablo Diego and team:

"A polar coordinate system represents syntax in large language models",

📄: Paper arxiv.org/abs/2412.05571
🪧: Poster tomorrow: neurips.cc/virtual/2024...
🧵: Thread 👇