@nathanleroux.bsky.social
Reposted
📢 Out now: a study by @nathanleroux.bsky.social and colleagues from @fz-juelich.de demonstrates that gain-cell-based in-memory computing can accelerate attention in LLMs, training a 1.5B model with up to 70,000× energy savings and 100× speedup over GPUs. www.nature.com/articles/s43...
Analog in-memory computing attention mechanism for fast and energy-efficient large language models - Nature Computational Science
Leveraging in-memory computing with emerging gain-cell devices, the authors accelerate attention—a core mechanism in large language models. They train a 1.5-billion-parameter model, achieving up to a ...
www.nature.com
September 15, 2025 at 9:43 PM
I’m proud to share our new paper in @natcomputsci.nature.com: www.nature.com/articles/s43...
We show that attention in LLMs can be accelerated with analog in-memory computing using gain-cell circuits. Simulating a 1.5B-parameter model, we achieve up to 70,000× lower energy and 100× speedup vs. GPUs.
September 11, 2025 at 4:02 PM
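For readers unfamiliar with what "accelerating attention" means here: the two matrix multiplications inside dot-product attention (query–key scores and the weighted sum over values) are exactly the multiply-accumulate workloads that an analog in-memory array can perform in place. The sketch below is a minimal NumPy stand-in for that computation, not the authors' circuit model or training setup; the low-bit quantization step is an assumption added only to mimic finite analog precision, and the function and parameter names are illustrative.

```python
# Minimal NumPy sketch of causal dot-product attention, the operation the
# paper maps onto analog gain-cell arrays. Illustrative only: the quantizer
# below is an assumed stand-in for limited analog precision, not the
# authors' hardware model.
import numpy as np


def quantize(x, bits=4):
    # Crude uniform quantizer mimicking finite analog precision (assumption).
    scale = np.max(np.abs(x)) + 1e-9
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale


def causal_attention(q, k, v, bits=4):
    # q, k, v: (seq_len, d_head). The two matmuls below are the
    # multiply-accumulate workloads an in-memory-computing array performs
    # in the analog domain, with K and V held as stored charge.
    scores = quantize(q, bits) @ quantize(k, bits).T / np.sqrt(q.shape[-1])
    # Causal mask: each token attends only to itself and earlier tokens.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return quantize(weights, bits) @ quantize(v, bits)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_head = 8, 16
    q, k, v = (rng.standard_normal((seq_len, d_head)) for _ in range(3))
    out = causal_attention(q, k, v)
    print(out.shape)  # (8, 16)
```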