#encoders
Finally got the encoders in my studio to be stable, thanks to Genlock. Still some stuff to tweak, but on the right track for doing a show next week. Gonna snuggle up with a partner in another city this weekend and dabble in writing some documentation in the meantime. ^.^
November 13, 2025 at 4:48 AM
Got a little MIDI/DAW controller to play around with and… uh… seriously?
November 12, 2025 at 6:59 PM
For the next project, I'm trying to make sure the color balance is accurate, which is how I noticed the rendering issue. After messing with the rendering output settings and the encoders, I think I may have fixed it 💡
November 7, 2025 at 4:10 AM
I bought an iMac Studio for editing when my last PC died. But there's a weird quirk on Macs with certain rendering software: QuickTime encoders handle colors differently
November 7, 2025 at 4:08 AM
Are you talking about custom designing and sourcing a PCB, switches, rotary encoders, keycaps, and a case, and then programming a microcontroller? Or looking for suggestions for a mostly prebuilt keyboard with those features?
November 5, 2025 at 1:07 AM
OK this thing is actually really fascinating, by @efog.tech. Higher end spec than the Nano/Adept. Comes with a BTU option if you want. Shipping is apparently rough but it's open source. Dual encoders. BLUETOOTH. DIY/semi-DIY kit only. Might have to make this or get some PCBs printed.
November 2, 2025 at 6:38 PM
I forget which encoding-speed preset I used for the video encoders, but it could always be something slower that makes better use of the bits. Also, the HLS streams are all CBR, so it can't do any kind of VBR stuff to use bits where they're more important for retaining visual fidelity
November 3, 2025 at 1:54 AM
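A toy sketch of the CBR-vs-VBR point in the post above: with CBR every segment gets the same bit budget regardless of complexity, while VBR can shift bits toward the hard scenes. All numbers here are made up for illustration.

```python
# illustrative only: 5 segments, one of them much harder to encode
complexity = [1, 1, 8, 1, 1]      # relative difficulty per segment
budget_total = 50                  # total bits to spend

# CBR: flat budget per segment, no matter what's on screen
cbr = [budget_total // len(complexity)] * len(complexity)

# VBR: budget split in proportion to segment complexity
total_c = sum(complexity)
vbr = [budget_total * c // total_c for c in complexity]

print(cbr)  # -> [10, 10, 10, 10, 10]
print(vbr)  # -> [4, 4, 33, 4, 4]
```

The hard middle segment gets over 3x its CBR allocation under VBR, which is exactly the "use bits where they're more important" effect a CBR-only HLS ladder gives up.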
PII detection for 15x-500x cheaper

Goodfire and Rakuten used sparse autoencoders (SAEs, a mech interp thing) to detect PII

an SAE is a secondary model trained on the primary LLM's activations; it tells you which "features" were activated. On top of those features they trained a random forest probe

www.goodfire.ai/research/rak...
Deploying Interpretability to Production with Rakuten: SAE Probes for PII Detection
www.goodfire.ai
October 30, 2025 at 11:51 AM
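A minimal numpy sketch of the SAE part, with made-up sizes: a wide ReLU encoder turns one LLM activation into mostly-zero features, and a decoder reconstructs the activation. The probe (e.g. a random forest) would then be trained on those features, not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 64, 512           # hypothetical sizes (real SAEs are much wider)
W_enc = rng.normal(0, 0.05, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.05, (d_sae, d_model))

def sae_features(activation):
    """Encode one residual-stream activation into sparse (ReLU) features."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)

def sae_reconstruct(features):
    """Decode features back toward the original activation."""
    return features @ W_dec

x = rng.normal(size=d_model)       # stand-in for an LLM activation
f = sae_features(x)                # which "features" fired, and how hard
x_hat = sae_reconstruct(f)
# a downstream probe can be trained on `f` to flag PII-bearing tokens
```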
fwiw my mental model (not great) is that encoders are great at compressing complex inputs (specialized, as you say), decoders are general, and encoder-decoders are the best of both. i.e. you don’t have to go through the encoder half on every single output token
October 21, 2025 at 1:42 PM
The reason why frogs are so effective at resisting the Trump regime is because the frog is a latent (in neurology it means a deep signal in your brain; in machine learning it's the middle layers in autoencoders) representation of your own gut reaction, which does not believe propaganda.

🐸🐸🐸
October 22, 2025 at 1:07 AM
New F1 sim racing mod. iPhone running SIM-Dashboard as a secondary info screen showing flags, tire temps, fuel status, revs, engine mode and current diff and brake balance settings (and a clock and a nice TV-style graphic).
October 21, 2025 at 12:11 PM
DeepSeek-OCR

a tiny 3B-A0.5B MoE OCR model that runs fast on a single A100 40GB with very high precision and excellent compression

why it's cool: they use images as a way to compress text and get around the O(n^2) attention cost of long contexts

huggingface.co/deepseek-ai/...
October 20, 2025 at 11:12 AM
i think this is the crux of DeepSeek-OCR

1. (text) context gets longer as you add words
2. long context is quadratic
3. you can fit lots of words in an image
4. if you use encoder-decoder architecture, your tokens encode a ton of information
October 20, 2025 at 12:28 PM
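A back-of-the-envelope version of steps 2–4 above: attention cost scales with the square of the token count, so packing the same words into fewer vision tokens pays off quadratically. The numbers below are illustrative, not DeepSeek-OCR's real figures.

```python
def attention_cost(n_tokens):
    """Relative cost of self-attention over a sequence (quadratic)."""
    return n_tokens ** 2

text_tokens = 4000     # a long passage tokenized as plain text
vision_tokens = 400    # same passage as an image, assuming ~10x compression

# a 10x reduction in tokens is a 100x reduction in attention cost
print(attention_cost(text_tokens) // attention_cost(vision_tokens))  # -> 100
```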
What makes an image memorable? And can we predict image memorability using pretrained vision encoders? We explored activations, attention distributions, image patch uniformity and sparse autoencoder losses using image representations across the layers of CLIP, DINOv2 and SigLIP2.
October 15, 2025 at 9:10 AM
Replace the Variational Autoencoder (VAE) with pretrained representation encoders (e.g., DINO, SigLIP, MAE) paired with trained decoders, which they term Representation Autoencoders (RAE).
October 15, 2025 at 3:49 AM
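A toy sketch of the RAE split described above: the pretrained encoder stays frozen, and only the decoder is trained to map its features back to pixels. Every shape and weight here is a made-up stand-in, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for a pretrained encoder (DINO/SigLIP/MAE) -- never updated
W_frozen = rng.normal(0, 0.1, (48, 16))

def frozen_encoder(image_flat):
    """Fixed representation from the pretrained encoder."""
    return image_flat @ W_frozen

# the decoder is the only trainable part of an RAE
W_dec = rng.normal(0, 0.1, (16, 48))

def decoder(z):
    """Map representation back to (flattened) pixel space."""
    return z @ W_dec

img = rng.normal(size=48)          # stand-in for a tiny flattened image
z = frozen_encoder(img)
recon = decoder(z)
# training would update W_dec (and not W_frozen) to shrink ||recon - img||^2
```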
In “Words That Make Language Models Perceive,” we find that if you ask an LLM to “imagine seeing,” then how it processes text becomes more like how a vision system would represent that same scene.

If you ask it to “imagine hearing,” its representation becomes more like that of an auditory model.

3/9
October 10, 2025 at 10:13 PM
I'm only familiar with image input in LLMs on a technical level, but in that case, while text and image inputs use different encoders to create the embeddings, as they flow through the layers of the model they're mostly treated the same way with some exceptions like different positional encoding.
October 6, 2025 at 4:46 AM
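A toy numpy sketch of the flow described above: separate per-modality encoders project into one shared embedding width, then the sequences are just concatenated and handed to the same transformer layers. All shapes and the random "encoders" are stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 32                       # hypothetical shared embedding width

def text_encoder(n_tokens):
    """Stand-in for a token embedding lookup -> (n_tokens, d_model)."""
    return rng.normal(size=(n_tokens, d_model))

def vision_encoder(n_patches):
    """Stand-in for a ViT patch encoder + projection -> (n_patches, d_model)."""
    return rng.normal(size=(n_patches, d_model))

text_emb = text_encoder(10)
img_emb = vision_encoder(16)

# once embedded, both modalities live in the same space and are
# concatenated into one sequence for the shared transformer layers
sequence = np.concatenate([img_emb, text_emb], axis=0)
# positional encodings differ per modality (e.g. 2D for image patches),
# but attention treats the combined sequence uniformly after that
```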
Squashed some validation errors and implemented a ton today. Now Niobium can do texture uploads as easy as Metal, and render encoders are in (but no pipelines so no shaders lol, enjoy the clear color)
October 4, 2025 at 1:29 AM
She lives!!! The Frankenstein of quilting parts that are barely compatible with each other is sewing in unison and not breaking thread. Thank god they still make encoders for this machine or I would have been screwed!
October 4, 2025 at 5:04 AM
This is a *very* charitable interpretation of encoders. Where did the data that generated the matrix and weights come from?

Even if the images themselves are not directly stored inside the model itself, their data was directly used to generate the model.

It is, at the very least, derivative.
October 1, 2025 at 2:49 PM
It's a different codec per format: MozJPEG for JPEG, but just the regular encoders for WebP and AVIF; we just expose all the relevant options compared to other apps.
September 30, 2025 at 6:55 PM
Just added the table and some additional thoughts and information to the Snippets section on the website:
notblackmagic.com/snippets/hd-...
Feedback is welcome!
In the future, with time, I want to test the encoders out and compare performance and quality.
September 25, 2025 at 8:51 AM
colbert-muvera-micro a 4M(!!) late interaction model

late interaction models do embedding-vector index queries and reranking at the same time, leading to far higher accuracy

huggingface.co/NeuML/colber...
September 19, 2025 at 11:15 AM
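The scoring trick behind ColBERT-style late interaction can be sketched in a few lines: keep one vector per token, and score a document by summing, over query tokens, the max similarity against any document token (MaxSim). Toy dimensions and random vectors below, just to show the mechanism.

```python
import numpy as np

def normalize(x):
    """L2-normalize rows so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def maxsim_score(query_vecs, doc_vecs):
    """Late interaction: per query token, take the max similarity over
    all document tokens, then sum across query tokens."""
    sims = query_vecs @ doc_vecs.T        # (n_query, n_doc) cosines
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
q = normalize(rng.normal(size=(4, 8)))    # 4 query tokens, toy 8-dim vectors
d1 = normalize(rng.normal(size=(20, 8)))  # a random 20-token document
d2 = q.copy()                             # a doc containing the query verbatim

# the verbatim match scores a perfect 1.0 per query token (4.0 total),
# so it can't lose to the random document
print(maxsim_score(q, d2) >= maxsim_score(q, d1))  # -> True
```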
Super happy that QuARI: Query Adaptive Retrieval Improvement was accepted at #NeurIPS2025. You can significantly boost retrieval performance for very hard retrieval tasks by learning query-specific transformations of your encoders. w/ @jacobsn.bsky.social @pless.bsky.social arxiv.org/pdf/2505.21647
September 18, 2025 at 6:55 PM
I remember when Dattebayo was pirating Naruto 11 years ago. They had the finest translators, the best encoders and the sharpest subtitlers. There was no finer release group, and they wanted only one thing in return to cease pirating:

A legitimate way for people in the US to watch Naruto.

/ 1
February 8, 2024 at 5:07 AM