Felix Petersen
@petersen.ai
Machine learning researcher @Stanford. https://petersen.ai/
Pinned
Excited to share our #NeurIPS 2024 Oral, Convolutional Differentiable Logic Gate Networks, leading to a range of inference efficiency records, including inference in only 4 nanoseconds 🏎️. We reduce model sizes by factors of 29x-61x over the SOTA. Paper: arxiv.org/abs/2411.04732
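The core idea of a differentiable logic gate network can be illustrated with a small sketch (this is not the authors' released code): during training, each gate outputs a softmax-weighted mixture over real-valued relaxations of the 16 two-input Boolean functions, so the choice of gate is learnable by gradient descent.

```python
import torch

def soft_gate(a, b, logits):
    """One differentiable logic gate.

    a, b: input tensors with values in [0, 1].
    logits: 16 learnable parameters, one per two-input Boolean function.
    """
    # Real-valued relaxations of all 16 two-input Boolean functions.
    ops = torch.stack([
        torch.zeros_like(a),      # FALSE
        a * b,                    # AND
        a - a * b,                # A AND NOT B
        a,                        # A
        b - a * b,                # NOT A AND B
        b,                        # B
        a + b - 2 * a * b,        # XOR
        a + b - a * b,            # OR
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT B
        1 - b + a * b,            # A OR NOT B
        1 - a,                    # NOT A
        1 - a + a * b,            # NOT A OR B
        1 - a * b,                # NAND
        torch.ones_like(a),       # TRUE
    ])
    # Softmax over gate types: a learnable distribution over the 16 ops.
    w = torch.softmax(logits, dim=0)
    return (w.view(-1, *([1] * a.dim())) * ops).sum(dim=0)
```

After training, each gate is discretized to its argmax operation, leaving a network of plain Boolean gates that maps directly onto hardware — which is what enables the nanosecond-scale inference the post mentions.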
I'm excited to share that our work on Convolutional Differentiable Logic Gate Networks was covered by MIT Technology Review. 🎉

www.technologyreview.com/2024/12/20/1...
@hildekuehne.bsky.social
The next generation of neural networks could live in hardware
Researchers have devised a way to make computer vision systems more efficient by building networks out of computer chips’ logic gates.
www.technologyreview.com
December 27, 2024 at 8:40 PM
Reposted by Felix Petersen
Convolutional Differentiable Logic Gate Networks @FHKPetersen
December 13, 2024 at 8:24 PM
Reposted by Felix Petersen
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms @FHKPetersen
December 12, 2024 at 1:23 AM
Join us at our poster session today, 11am-2pm, at East Exhibit Hall A-C *#1502*.
December 12, 2024 at 6:41 PM
Reposted by Felix Petersen
Most innovative paper at #NeurIPS imho. Can we create a network that becomes the physical chip instead of running on a chip? Inference speedups and energy savings are through the roof!

Oral on Friday at 10am PT

neurips.cc/virtual/2024...
NeurIPS Poster: Convolutional Differentiable Logic Gate Networks (NeurIPS 2024)
neurips.cc
December 12, 2024 at 5:38 PM
Join us on Wednesday, 11am-2pm for our poster session on Newton Losses in *West Ballroom A-D #6207*. neurips.cc/virtual/2024...
December 10, 2024 at 7:55 PM
Have you ever wondered how training dynamics differ between LLMs 🖋️ and Vision 👁️ models? We explore this and close the gap between VMs and LLMs in our #NeurIPS2024 paper "TrAct: Making First-layer Pre-Activations Trainable".
Paper link 📜: arxiv.org/abs/2410.23970
Video link 🎥: youtu.be/ZjTAjjxbkRY
🧵
December 4, 2024 at 6:39 PM
I'm excited to share our NeurIPS 2024 paper "Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms" 🤖.
Paper link 📜: arxiv.org/abs/2410.19055
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms
When training neural networks with custom objectives, such as ranking losses and shortest-path losses, a common problem is that they are, per se, non-differentiable. A popular approach is to continuou...
arxiv.org
November 28, 2024 at 1:49 AM
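A hedged sketch of the Newton-loss idea described above: rather than backpropagating the (often ill-conditioned) gradient of a relaxed loss directly, take a Tikhonov-regularized Newton step on the loss with respect to the network output to obtain a target, then regress the network onto that target. The function name and details here are illustrative, not the paper's code.

```python
import torch

def newton_target(loss_fn, z, tikhonov=1e-1):
    """Newton step on the loss w.r.t. the network output z.

    Returns z* = z - (H + lambda * I)^{-1} g, where g and H are the
    gradient and Hessian of loss_fn at z (treated as a constant).
    """
    z0 = z.detach().requires_grad_(True)
    g = torch.autograd.grad(loss_fn(z0), z0)[0]
    H = torch.autograd.functional.hessian(loss_fn, z.detach())
    n = z.numel()
    # Tikhonov regularization keeps the Newton system well-conditioned.
    H = H.reshape(n, n) + tikhonov * torch.eye(n)
    step = torch.linalg.solve(H, g.reshape(n)).reshape(z.shape)
    return z.detach() - step
```

A training step would then look like: `z = model(x); target = newton_target(loss_fn, z); surrogate = 0.5 * ((z - target) ** 2).sum(); surrogate.backward()` — so the curvature of the custom loss shapes the targets while the network itself only sees a well-behaved squared error.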
Excited to share our #NeurIPS 2024 Oral, Convolutional Differentiable Logic Gate Networks, leading to a range of inference efficiency records, including inference in only 4 nanoseconds 🏎️. We reduce model sizes by factors of 29x-61x over the SOTA. Paper: arxiv.org/abs/2411.04732
November 17, 2024 at 4:34 PM