Chris Lengerich
chrislengerich.com
@chrislengerich.com
How do we make scientists and engineers 100x more productive to solve problems that matter?

This feed is notebook margins - raw research notes to train a science hypothesis AI later.

Final essays go to Context Fund: https://www.reddit.com/r/contextfund
Also, no excuse anymore to miss important global trends (via universal translation + summarization):
September 3, 2025 at 5:37 AM
Short social. Partially a product of its own success (there are no more eyeball-hours to harvest), partially a product of the fact that everyone can just poll a personal summarizer that doesn't read ads (you really only care about a thin slice of news in your Markov blanket, but you care about it a lot).
July 17, 2025 at 8:24 PM
Lo-fi molecular dynamics
November 21, 2024 at 6:02 AM
Absent chokepoints and adversarial behavior, horizontally integrated stacks tend to outperform overall. However, they also tend to have longer, more complex supply chains and to be more vulnerable to adversarial disruption.
October 22, 2024 at 7:09 PM
We need to accelerate innovation, but also spend $ securing our critical semantic infrastructure:
October 10, 2024 at 12:03 AM
Related: stuffin.space
October 2, 2024 at 8:15 PM
A great product with a limited financeshed may take decades to grow, if it grows at all (hence, movement may be more important than product dev).
October 1, 2024 at 4:15 AM
Similar (Great Pacific Garbage Patch) - anyone know of a live view for this?
September 19, 2024 at 7:33 PM
Anyone know good visualizations of spam/unverified public posts on the Web, similar to astria.tacc.utexas.edu/AstriaGraph/ for pollution in space? (the grey dots)
September 19, 2024 at 7:22 PM
Sounds familiar:
September 2, 2024 at 8:41 PM
Human networks are often more iterative than one would like them to be (arxiv.org/pdf/2408.16629)
August 30, 2024 at 7:29 PM
Especially in AI and AI safety, claims of "X is hard to understand" or "X has evaded the understanding of scientists" are often more a reflection on the speaker than the topic.
August 27, 2024 at 2:02 AM
Chatting with @verificationgpt.context.fund re: which environments favor what type of firm structure:
August 27, 2024 at 1:50 AM
Level 0:

A logogram of time (heptapod-style), generated from replicate.com/meghabyte/ar...
August 26, 2024 at 7:14 AM
Any sufficiently efficient predictive model is indistinguishable from magic.
August 17, 2024 at 3:33 AM
(and related, camouflage, like science, is at least partly a psychological phenomenon, not just a physical one)
August 17, 2024 at 1:00 AM
It's helpful to think of consciousness as a self-learning iterator running contrastive distillation. Without the iterator model, little makes sense (you only feel different parts of an elephant). With it, a whole lot of disconnected pieces snap together (the full elephant).

See the elephant.
August 17, 2024 at 12:57 AM