Roger L. Cauvin
rcauvin.bsky.social
♟️ #prodmgmt | Positioning | Strategy
👨‍🔬 Consumer Science | ML | AI
🏢 Downtown Austin dweller
🏘️ Advocate for inclusive neighborhoods
Sometimes we need to fundamentally rethink the existing journey instead of selectively fixing points of frustration within it. Either way, breaking down the larger aspirational vision into visionettes makes a lot of sense.
September 29, 2025 at 1:25 PM
Me: "I start walking north at a constant forward pace of 2 mph. I continue walking at the same pace, but each minute, I turn 1 degree clockwise. After 360 minutes, where do I end up relative to my starting point?"

Gemini: "After 360 minutes, you will end up at your starting point."
September 19, 2025 at 10:41 PM
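A quick simulation bears this out (a sketch, assuming the turn happens discretely after each one-minute straight segment; continuous turning traces a circle and also closes). The 360 segments point in 360 equally spaced directions, so their vector sum is zero:

```python
import math

# Walk at 2 mph in one-minute straight segments, turning 1 degree
# clockwise after each segment, for 360 minutes.
step = 2.0 / 60.0  # miles covered per one-minute segment
x = y = 0.0
heading = 90.0     # degrees; 90 = due north
for minute in range(360):
    x += step * math.cos(math.radians(heading))
    y += step * math.sin(math.radians(heading))
    heading -= 1.0  # 1-degree clockwise turn

displacement = math.hypot(x, y)
print(f"displacement from start: {displacement:.9f} miles")
```

The displacement comes out as zero up to floating-point error, so on this reading the answer checks out.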
Concerns about "timing" are also frustrating. I remember a time when, in a meeting, everyone supported a position, but a few people said it was "too early" to publicly express our support. Then, at the next meeting, the same people said it was "too late" to express our support.
August 29, 2025 at 2:33 PM
I hope pro-housing advocates will embrace and apply this mindset to density bonus programs. Inclusionary zoning and density bonus programs are part of a scarcity mindset, not an abundance agenda. We should advocate for better alternatives.
August 19, 2025 at 11:31 PM
"It combines the deep language understanding capabilities of a large language model with the speed and reliability of a dedicated classification head. This leads to a more accurate, consistent, and cost-effective solution."
August 10, 2025 at 4:59 PM
I asked Gemini (in the context of classification problems), and it agrees:

"For most practical classification problems, the embedding-based approach is unequivocally better."
August 10, 2025 at 4:59 PM
An approach that leverages the LLM's first three layers to extract the meaning of the input, bypasses the final output layer, and instead applies task-specific machine learning layers to make the final predictions is likely to perform better.
August 10, 2025 at 4:59 PM
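A minimal sketch of the idea: freeze the embeddings and train only a small task-specific head on top. Here `embed` is a hypothetical stand-in for the LLM's embedding layers (in practice you would call an embedding model), and the head is plain logistic regression trained with gradient descent on toy data:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for the LLM's embedding layers.
    # Derives a deterministic pseudo-embedding from the text bytes
    # so the sketch is self-contained; a real system would call an
    # embedding model here.
    seed = sum(text.encode())
    return np.random.default_rng(seed).standard_normal(64)

# Toy training data: 1 = complaint, 0 = praise.
texts = ["refund please", "love this product", "broken on arrival", "works great"]
labels = np.array([1, 0, 1, 0])

X = np.stack([embed(t) for t in texts])

# Task-specific "classification head": logistic regression.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
    grad = p - labels
    w -= 0.1 * (X.T @ grad) / len(texts)
    b -= 0.1 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds)
```

The head is cheap to train and gives consistent, calibrated outputs, which is the "speed and reliability" half of the trade-off the quote above describes.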
The fourth "layer" merely outputs ("predicts") a likely "next token". It doesn't directly predict an outcome. You can certainly ask the LLM to output "next tokens" that effectively predict or classify, but it's not what the output layer is designed to do.
August 10, 2025 at 4:59 PM
A typical LLM (1) tokenizes the input, (2) encodes the tokens into "embeddings" (semantic representations), (3) transforms the tokens and encodings to further capture the meaning of the input, and (4) outputs the most relevant and likely "next token".
August 10, 2025 at 4:59 PM
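The four stages can be illustrated with a toy sketch (random weights standing in for learned ones; a real model's "transform" stage applies many attention and MLP layers, not a simple average):

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
d = 8  # toy embedding size

# (1) tokenize: map words to vocabulary indices
def tokenize(text):
    return [vocab.index(w) for w in text.split()]

# (2) encode tokens into embeddings (random table as a stand-in
#     for a learned embedding matrix)
E = rng.standard_normal((len(vocab), d))

# (3) "transform": collapse the context into one vector; a real LLM
#     uses stacked transformer layers to capture the input's meaning
def transform(embs):
    return embs.mean(axis=0)

# (4) output layer: project onto the vocabulary and emit the most
#     likely "next token"
W_out = rng.standard_normal((d, len(vocab)))

def next_token(text):
    ids = tokenize(text)
    h = transform(E[ids])
    logits = h @ W_out
    return vocab[int(np.argmax(logits))]

print(next_token("the cat sat on"))
```

Stage (4) only ever scores vocabulary tokens, which is the point of the thread above: to classify, you either coax the next-token layer into emitting label-like tokens, or you cut in after stage (3) and attach a head built for the task.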