Iñigo Lara
@inigoliz.bsky.social
Computational Physicist and Maker. I grow ideas 🪴

Puzzled about the emergence of intelligence.

Physics. ML. Photonics. Software. Electronics.

📔 inigoliz.dev

- Just keep moving -
(Don't take my word as having a thorough scientific basis - I'm just brainstorming from some thoughts I had recently.)
October 31, 2025 at 8:47 AM
Therefore, either:
- Context results from cyclic excitation patterns in the brain's network.
- Context results from temporary (and vanishing) changes to the structure of the network (neuron plasticity/synaptic plasticity).

A combination is also an option.
October 31, 2025 at 8:47 AM
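A rough way to picture the two options (my own toy sketch in PyTorch, not a claim about how the brain actually works): in (a) the context lives in a recurrent activation that keeps circulating while the weights stay fixed; in (b) it lives in short-lived, decaying "fast weight" changes layered on top of the slow weights.

```python
# Toy illustration of the two hypotheses above; names and sizes are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 16

# (a) Cyclic excitation: a recurrent hidden state carries the context.
rnn = nn.RNNCell(d, d)
h = torch.zeros(1, d)                       # the "context" is this activation
for token in torch.randn(5, 1, d):
    h = rnn(token, h)                       # weights fixed, state keeps updating

# (b) Temporary plasticity: a Hebbian "fast weight" carries the context and decays.
slow_w = torch.randn(d, d) * 0.1            # long-term structure
fast_w = torch.zeros(d, d)                  # short-lived, context-dependent changes
decay, lr = 0.9, 0.1
for token in torch.randn(5, d):
    out = token @ (slow_w + fast_w).T
    fast_w = decay * fast_w + lr * torch.outer(out, token)   # Hebbian-style update

print("context held in activations, norm:", h.norm().item())
print("context held in fast weights, norm:", fast_w.norm().item())
```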
Another rather puzzling divergence between current LLMs and our brain:

In LLMs, since computation is separate from memory, context doesn't actually change the weights of the network.

However, in our brain, there's no separate bank of memory (afaik).
October 31, 2025 at 8:44 AM
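To make the LLM side concrete, here is a minimal sketch (a single linear layer standing in for a frozen model, purely illustrative): feeding in more context changes the activations, but the weights come out bit-for-bit identical.

```python
# In a standard transformer LLM, processing context changes activations
# (and the KV cache), never the weights. Toy stand-in below.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 16)                    # hypothetical frozen "LLM"
model.eval()

weights_before = model.weight.detach().clone()

context = torch.randn(128, 16)               # a long "prompt"
with torch.no_grad():
    activations = model(context)             # inference only, no gradient step

weights_after = model.weight.detach().clone()

# Bit-for-bit identical: the context never touched the weights.
print("weights unchanged:", torch.equal(weights_before, weights_after))
print("activations shape:", activations.shape)
```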
Or cleaning your context window.

What if:

Day of operation -> Context window accumulates information.
Sleeping -> Context window distils into learning, and the window clears.

There is, ofc, some sort of learning happening as the day goes on as well.
October 31, 2025 at 8:41 AM
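A hedged sketch of that wake/sleep loop (my own toy framing in PyTorch; the day/sleep functions and the gradient-step "distillation" are illustrative stand-ins, not a real consolidation mechanism):

```python
# During the "day", experience only accumulates in a context buffer;
# during "sleep", the buffer is distilled into the weights and then cleared.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 8)                        # stand-in for the network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
context_buffer = []                            # the "context window"

def day(observations):
    """Wake phase: experiences pile up in context, weights stay put."""
    context_buffer.extend(observations)

def sleep():
    """Sleep phase: distil the accumulated context into the weights, then forget it."""
    for x, y in context_buffer:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
    context_buffer.clear()                     # the context window is wiped

# One simulated day of (input, target) experiences.
day([(torch.randn(8), torch.randn(8)) for _ in range(32)])
print("context size before sleep:", len(context_buffer))
sleep()
print("context size after sleep:", len(context_buffer))   # 0: distilled into weights
```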
Reposted by Iñigo Lara
Peter Michael, Zekun Hao, Serge Belongie, and Abe Davis, “Noise-Coded Illumination for Forensic and Photometric Video Analysis,” ACM Transactions on Graphics, 2025.

NCI project page: peterfmichael.com/nci (2/2)
July 30, 2025 at 3:56 PM
Reposted by Iñigo Lara
The limiting factor to modern CPU performance, other than thermal and power constraints, is the area taken up by the PMOS part of the cache's SRAM. That's where a lot of the performance gain of a node shrink comes from, allowing for either more cache (AMD) or more and stronger cores (Intel).
July 4, 2025 at 11:23 PM