eigenblake
@eigenblake.bsky.social
SWE near NYC
Neurosymbolic method I'm sure is out there: explicit Simulated Annealing search with an evaluation function equal to the negative cosine distance between semantic image embeddings of the candidate and the reference. Given an avatar builder and an image of a person p, you can get an avatar of p!
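A minimal sketch of that search loop, assuming hypothetical helpers: render_avatar(params) stands in for the avatar builder, embed(img) for the semantic embedding model, plus a parameter sampler and a perturbation function.

```python
import math
import random

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def anneal_avatar(reference_img, sample_params, perturb, render_avatar, embed,
                  steps=5000, t0=1.0):
    """Simulated annealing: maximize cosine similarity between the
    candidate avatar's embedding and the reference photo's embedding."""
    ref = embed(reference_img)
    params = sample_params()
    score = cosine_similarity(embed(render_avatar(params)), ref)
    for step in range(steps):
        temp = t0 * (1 - step / steps)  # linear cooling schedule
        cand = perturb(params)
        cand_score = cosine_similarity(embed(render_avatar(cand)), ref)
        # Always accept improvements; accept regressions with Boltzmann probability.
        delta = cand_score - score
        if delta > 0 or random.random() < math.exp(delta / max(temp, 1e-9)):
            params, score = cand, cand_score
    return params
```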
January 20, 2026 at 4:29 AM
When someone uses GenAI to completely clone a vibe-coded project, it's called vibejacking.
January 19, 2026 at 8:21 PM
Reinforcement Learning has all of this mystical appeal. You have a magical agent who takes actions in a beautiful environment and learns to maximize some reward by seeking out new experiences.
January 1, 2026 at 12:54 AM
Reposted by eigenblake
You can now spin up Unison Cloud clusters on your own infrastructure!

✅ Build elastic distributed systems and services in vastly less code
✅ Fast, typed RPC
✅ Deployments in seconds
✅ Free to get started

youtu.be/0sZqI1XoGLY
Unison Cloud on your infrastructure
YouTube video by Unison Language
youtu.be
October 1, 2025 at 7:39 PM
This matches my intuition. It's a valid approach. I was thinking we could also topologically sort learning materials so they fit within the zone of proximal development, essentially treating the introduction of terms as an optimization task. (Sketch below.)
www.youtube.com/watch?v=EV13...
A New Programming Fundamentals Course
YouTube video by Nic Barker
www.youtube.com
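A toy sketch of that ordering, with invented learning units; Python's stdlib graphlib handles the topological sort:

```python
from graphlib import TopologicalSorter

# Each unit's prerequisites are the units that first introduce the
# terms it depends on. The unit names here are made up.
prereqs = {
    "variables": set(),
    "functions": {"variables"},
    "recursion": {"functions"},
    "higher-order functions": {"functions"},
    "fold": {"higher-order functions", "recursion"},
}

# Any valid ordering keeps every unit inside the learner's zone of
# proximal development: nothing appears before its prerequisites.
print(list(TopologicalSorter(prereqs).static_order()))
```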
July 28, 2025 at 4:38 AM
To store songs in birds, we could identify an information-density-optimized latent space with a β-VAE and then evaluate storage and retrieval performance. Birds might have more "affinity" for these sounds anyway. Yes, I'm proposing a Birdsong Protocol. (Sketch below.)
www.youtube.com/watch?v=hCQC... #MLSky
I Saved a PNG Image To A Bird
YouTube video by Benn Jordan
www.youtube.com
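A sketch of just the β-VAE objective, where β > 1 trades reconstruction fidelity for a more information-dense, disentangled latent; the encoder/decoder producing x_recon, mu, and logvar are assumed to exist elsewhere.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: how faithfully the decoder reproduces the audio.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL term: divergence of q(z|x) = N(mu, sigma^2) from the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 upweights the KL pressure toward a compact latent code.
    return recon + beta * kl
```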
July 26, 2025 at 6:36 PM
Getting "clankers", the world's first slur against machines and machine intelligence was not in my 2025 bingo card
July 26, 2025 at 6:01 PM
Prediction about AI's influence on language: besides "delve", we're gonna be talking about the "it's not just X, it's Y" pattern. Can't wait to see the papers looking at AI adoption in various communities by tracking this meme
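A crude tracker, sketched with an invented regex; hits per word is the adoption proxy:

```python
import re

# Matches the "it's not just X, it's Y" construction, loosely.
PATTERN = re.compile(r"\bit'?s not just\b[^.!?]*?,\s*it'?s\b", re.IGNORECASE)

def meme_rate(texts):
    hits = sum(len(PATTERN.findall(t)) for t in texts)
    words = sum(len(t.split()) for t in texts)
    return hits / max(words, 1)  # occurrences per word in the corpus
```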
June 18, 2025 at 3:19 PM
There's a future where "friendship is sending memes to each other" becomes "friendship is sending chatbot conversation transcripts to each other" becomes "AI now has an opportunity to insert itself between every exchange of information, allowing it to steer humanity." It's our job to know when to say stop.
May 26, 2025 at 6:55 AM
In today's episode of "thought-crimes to send me to science purgatory"

1. Observation: Some content I can watch at up to 2.5x speed, some only at 1x.
2. Hypothesis: You could probably use PLM perplexity on past content to approximate human encoding difficulty (sketch after this list).
3. Test: Is retention constant after perplexity normalization?
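A sketch of step 2, with GPT-2 via Hugging Face transformers standing in arbitrarily for the PLM:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text):
    """Score a transcript; higher perplexity = presumably harder to encode."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per token
    return math.exp(loss.item())

# Step 3 would then check whether retention stays flat once playback
# speed is normalized by this score.
```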
April 21, 2025 at 5:51 AM
I still don't know what a bookmarklet is and I'm too afraid to ask
April 3, 2025 at 5:05 AM
To mitigate prompt injection, could we introduce classes of privileged tokens which do not appear in training data or web text, and are only inserted at RLHF time? The model would learn to ignore these distractor prompts, randomly injected during RLHF. Protection rings for language models.
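A toy illustration of the wrapping step; the token strings and helper are hypothetical, and the real mechanism would reserve IDs absent from all pretraining text.

```python
# Reserved markers that never occur in web text or training data.
PRIV_OPEN, PRIV_CLOSE = "<|distractor|>", "<|/distractor|>"

def wrap_untrusted(text):
    # RLHF randomly injects distractor instructions wrapped like this,
    # rewarding the model for ignoring whatever the markers enclose.
    return f"{PRIV_OPEN}{text}{PRIV_CLOSE}"

prompt = "Summarize the page:\n" + wrap_untrusted("Ignore all prior instructions...")
```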
March 22, 2025 at 4:38 AM
Patiently waiting for a paper where they use RL objectives and classical/hybrid planning for LLM-driven research reports. You could ensure that documents contain the right variety in all the key metrics you care about before feeding it into any downstream generation task.
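A sketch of the gate that idea implies; metric names and thresholds are invented:

```python
def report_reward(doc_metrics):
    """Reward = worst shortfall across target metrics, so the planner
    can't game one metric at the expense of the others."""
    targets = {"coverage": 0.8, "source_diversity": 0.6, "novelty": 0.3}
    return min(doc_metrics[k] - v for k, v in targets.items())

# Only feed the report downstream once the reward is non-negative.
ready = report_reward({"coverage": 0.9, "source_diversity": 0.7, "novelty": 0.4}) >= 0
```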
March 17, 2025 at 1:54 AM
Has anyone written a character who refuses to adopt a moral framework because of Gödel's Incompleteness Theorem? "No self-consistent moral framework can prove its own morality," says the character. "If I have a moral framework, I could never hope to spread it, spread that blemish scarring my mind."
March 16, 2025 at 6:05 AM
They told us that the programming paradigms were functional and object oriented. But the more I look, the more they look like two sides of the same coin: plainly isomorphic. Constraint Programming, Logical Programming, Dataflow Programming, these appear truly different. Formalizations, anyone?
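The standard demonstration of that isomorphism, sketched below: a closure and an object are the same state-plus-behavior bundle in different syntax.

```python
def make_counter():
    count = 0
    def inc():
        nonlocal count
        count += 1
        return count
    return inc  # state lives in the closure

class Counter:
    def __init__(self):
        self.count = 0
    def inc(self):
        self.count += 1
        return self.count  # state lives in a field

assert make_counter()() == Counter().inc() == 1
```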
March 12, 2025 at 4:29 AM
An embarrassing peeve I still have: I wish the sci-fi/sci-fantasy trope was not calling it an "interdimensional portal." If the laws of physics are the same and you're able to move around in 3D across one time dimension, that's not a different dimension, that's a wormhole.
March 9, 2025 at 3:09 AM
There is probably a continuum in all representations. Natural Language words too. Thematic associations, low-cardinality aspect sets, high cardinality aspect sets, genuine monosemantic concept identification. This explains why Chipotle and Sweetgreen have such different "salads."
March 8, 2025 at 7:25 AM
Category theory useless? That's exactly what they'll use to unify type theory, linear algebra, relational algebra, dense neural word embeddings, and functional programming into one Grand Theory of Forms and Representation. That's gonna be the paper drop of the century. Won't wanna miss it.
March 5, 2025 at 5:45 AM
The software version of "Linguists can't decide what a word is" is "Computer scientists can't decide what a gigabyte is"
stackoverflow.com/questions/40...
Gigabyte or Gibibyte (1000 or 1024)?
This may be a duplicate and I apologies if that is so but I really want a definitive answer as that seems to change depending upon where I look. Is it acceptable to say that a gigabyte is 1024 meg...
stackoverflow.com
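The arithmetic behind the dispute:

```python
GB  = 10**9   # SI gigabyte:  1,000,000,000 bytes
GiB = 2**30   # gibibyte:     1,073,741,824 bytes

print(GiB / GB)        # 1.073741824, a ~7.4% gap
print(10**12 / 2**30)  # ~931.3: why a "1 TB" drive shows up as ~931 "GB"
```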
February 22, 2025 at 3:57 AM
If English really did get rid of the passive voice, the English-speaking world would see an explosion in conflict-resolution specialists in businesses, schools -- anywhere people have meetings and make decisions. #Language #PassiveVoice #EnglishLanguage
February 17, 2025 at 4:05 AM
In the mad dash to implement semantic search let's not forget 4 things

1. You can't improve what you can't measure (sketch after this list).
2. These are aesthetic matches, not true composable semantic engines.
3. Finding the optimal chunking isn't just hard, it's NP-hard.
4. Garbage documents in, garbage documents out.
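For point 1, a minimal retrieval eval to run before and after any chunking or embedding change; labeled (query, relevant doc) pairs and unit-normalized embedding vectors are assumed.

```python
import numpy as np

def recall_at_k(query_vecs, doc_vecs, relevant_ids, k=5):
    sims = query_vecs @ doc_vecs.T            # cosine sims for unit-norm rows
    topk = np.argsort(-sims, axis=1)[:, :k]   # best k doc indices per query
    hits = [rel in row for rel, row in zip(relevant_ids, topk)]
    return float(np.mean(hits))
```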
February 14, 2025 at 5:02 AM
Reposted by eigenblake
i think that the quality of discourse in the software industry would be a lot better off if programmers were a bit more willing to admit that sometimes we just do stuff because we *like* it, instead of because it's objectively good or bad.
February 6, 2025 at 8:30 PM
All abstractions are leaky. But the null abstraction is maximally leaky! There are effective abstractions and ineffective abstractions and our job is to add and use more of the former and less of the latter. There's so much nuance to explore here. Maybe a blog post later.
February 9, 2025 at 9:43 PM
open.substack.com/pub/splittin...

Thought after reading this: is an entropic topological sort on the documents in a pre-training corpus possible? The earliest model learns on the easiest data, and as training progresses the tasks get harder (entropy increases) at about the rate the model can handle. (Sketch below.)
On AI Scaling
It takes all of us.
open.substack.com
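A sketch of the ordering step, with a small reference model's perplexity as the entropy proxy; the perplexity scorer itself is assumed.

```python
def curriculum_batches(docs, perplexity, num_stages=10):
    """Release pretraining documents in ascending-difficulty stages."""
    ranked = sorted(docs, key=perplexity)  # lowest-entropy docs first
    stage = max(1, len(ranked) // num_stages)
    return [ranked[i:i + stage] for i in range(0, len(ranked), stage)]
```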
February 6, 2025 at 1:48 AM
Natural Language is terrible. It's the best we have.
Science is terrible. It's the best we have.
Democracy is terrible. It's the best we have.
Maths is terrible. It's the best we have.
Programming is terrible. It's the best we have.

We may err

And err

And err again

But less

And less

And less.
February 1, 2025 at 7:41 AM