Keith Kerr
@keefkerr.bsky.social
Product designer working in AI. Formerly Datadog, Enigma, Foursquare.
https://www.keithkerr.com/
Felt like we were all talking like this during the AR/VR hype cycle and then ended up with windows… in space
January 31, 2025 at 6:02 AM
“What’s different this time is that the company that was first to demonstrate the expected cost reductions was Chinese.” - Dario Amodei, CEO of Anthropic

darioamodei.com/on-deepseek-...
Dario Amodei — On DeepSeek and Export Controls
darioamodei.com
January 30, 2025 at 2:37 AM
“DeepSeek-V3 is not a unique breakthrough or something that fundamentally changes the economics of LLMs; it’s an expected point on an ongoing cost reduction curve”
January 30, 2025 at 2:37 AM
If you follow the scaling curve of how folks are finding ways to increase their model effectiveness/compute-cost ratio, then V3 is actually on par with the recent generational changes we’ve seen between big models over the past few years
January 30, 2025 at 2:37 AM
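As a rough illustration of the cost-reduction curve described above: if the cost to reach a fixed capability level falls by some steady factor each year, then a model trained far more cheaply a year or two later sits on the curve rather than off it. The numbers below (a $100M starting cost, a ~4x/year efficiency gain) are assumptions for the sketch, not figures from these posts.

```python
def cost_on_curve(initial_cost_usd: float, yearly_reduction: float, years: float) -> float:
    """Cost to reach the same capability after `years`, assuming a steady reduction factor."""
    return initial_cost_usd / (yearly_reduction ** years)

# Hypothetical: a capability that cost $100M to train, with an assumed ~4x/year efficiency gain.
for years in (0, 1, 2):
    print(f"after {years} year(s): ~${cost_on_curve(100e6, 4.0, years) / 1e6:.0f}M")
```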
However, it’s worth noting that DeepSeek’s V3 model isn’t the huge economic breakthrough folks are hyping it up to be.
January 30, 2025 at 2:37 AM
More access to better models will lead to less reliance on closed-source companies and explode the general usage of gen AI. Better machines for burning coal -> more coal used
January 30, 2025 at 2:18 AM
Think industry-specific models, company-specific models, role-specific agents, etc., all trained on these sharper, open-source (soon to be lighter-weight) models and adapters.
January 30, 2025 at 2:18 AM
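The adapter idea in the post above can be sketched mechanically: freeze the open-source base weights and train only a small low-rank delta on domain data. A minimal, self-contained toy follows (a linear layer rather than a real LLM; every dimension and constant is made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 16, 2, 256                       # hidden size, adapter rank, toy dataset size

W = rng.normal(size=(d, d))                # frozen "open-source base" weight
B = rng.normal(scale=0.01, size=(d, r))    # low-rank adapter factors:
A = np.zeros((r, d))                       # only A and B are trained

X = rng.normal(size=(n, d))                        # toy "company-specific" inputs
Y = X @ (W + 0.1 * rng.normal(size=(d, d))).T      # behaviour the adapter should capture

lr = 1e-2
for step in range(500):
    delta = B @ A                          # rank-r update instead of a full d x d fine-tune
    err = X @ (W + delta).T - Y
    grad_delta = (err.T @ X) / n           # gradient w.r.t. the effective weight; W stays frozen
    grad_B = grad_delta @ A.T
    grad_A = B.T @ grad_delta
    B -= lr * grad_B
    A -= lr * grad_A

print("trainable adapter params:", A.size + B.size, "vs full fine-tune:", W.size)
```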
This could lead to rapid advancement in model training and a further push for companies to harness fine-tuning techniques like RL to focus these models on their business use cases
January 30, 2025 at 2:18 AM
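To make the "RL to focus a model on a business use case" idea concrete, here is a toy REINFORCE-style loop: a softmax policy over a few canned responses is nudged toward whatever a hypothetical business reward prefers. The responses and the reward rule are invented purely for illustration; real RL fine-tuning operates on an LLM's token distribution rather than a three-way choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical responses a support "agent" could give; the reward encodes a made-up business rule.
responses = ["escalate to human", "answer from docs", "upsell premium plan"]

def reward(idx: int) -> float:
    return 1.0 if responses[idx] == "answer from docs" else -0.2

logits = np.zeros(len(responses))    # policy parameters
lr = 0.5

for step in range(300):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(len(responses), p=probs)
    # REINFORCE: increase log-prob of the sampled action in proportion to its reward.
    grad_logp = -probs
    grad_logp[a] += 1.0
    logits += lr * reward(a) * grad_logp

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print({resp: round(float(p), 2) for resp, p in zip(responses, probs)})
```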
So now everyone in the AI world is thinking these training efficiency tricks will lead to gains in making amazing models smaller and easier to train
January 30, 2025 at 2:18 AM
DeepSeek’s model subverted this belief. So back to Jevons paradox. When the Industrial Revolution led to advances in how efficiently the energy from coal could be harnessed, people didn’t use less coal… we used more, a lot more.
January 30, 2025 at 2:18 AM
In GenAI, most frontier labs have agreed on the idea of the scaling law: basically, more compute dollars spent on training leads to better model performance
January 30, 2025 at 2:18 AM
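The scaling law mentioned above is usually described as a rough power law: loss (a proxy for model quality) falls smoothly as training compute grows. A tiny sketch with made-up constants; the exponent and scale below are assumptions, not published values from any lab.

```python
def toy_loss(compute_flops: float, a: float = 1e3, alpha: float = 0.05) -> float:
    """Illustrative power-law curve: loss ~ a * C^(-alpha); constants are invented."""
    return a * compute_flops ** (-alpha)

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute {c:.0e} FLOPs -> loss {toy_loss(c):.1f}")
```

Efficiency gains like the ones discussed in this thread effectively shift that curve, so the same quality becomes reachable with less compute.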
This week DeepSeek released their best model, which rivaled the performance of OpenAI’s best model. The kicker: it was trained for a fraction of the price
January 30, 2025 at 2:18 AM
Yeah you too! Been a while.

Ahh I see. I know Atlassian is making a push here for auto-reviewing PRs. Datadog also made an announcement about pushing for an SRE agent. But yeah not aware of any great live examples yet.
January 30, 2025 at 1:27 AM
Perplexity or other answer engines can be considered agents because they use agentic RAG to break your question down into intents and match them to vectors across different sources. Not quite mass-adopted yet though
January 30, 2025 at 1:10 AM
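As a sketch of the agentic-RAG flow described in the post above: split a question into sub-intents, embed each one, and match it against vectors drawn from different sources. In a real answer engine the decomposition is an LLM call and the matching hits a vector database; both are stubbed with toy stand-ins here, and every name and corpus below is hypothetical.

```python
import numpy as np

SOURCES = {  # hypothetical corpora standing in for web results, docs, etc.
    "web":  ["DeepSeek released a new model this week", "OpenAI updated its pricing"],
    "docs": ["RAG retrieves relevant chunks before generation", "Agents plan multi-step tool use"],
}

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashing embedding; a real system would call an embedding model."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def decompose(question: str) -> list[str]:
    """Stub for the 'break the question into intents' step (an LLM call in practice)."""
    return [p.strip() for p in question.replace("?", "").split(" and ") if p.strip()]

def answer_engine(question: str, top_k: int = 1):
    results = []
    for intent in decompose(question):
        q = embed(intent)
        scored = sorted(
            ((float(q @ embed(passage)), source, passage)
             for source, passages in SOURCES.items()
             for passage in passages),
            reverse=True,
        )
        results.append((intent, scored[:top_k]))
    return results

for intent, matches in answer_engine("what did DeepSeek release and how does RAG work?"):
    print(intent, "->", matches)
```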
Also get a shake or ice cream from Braum’s.

guides.apple.com?ug=CghPS0MgV...
Apple Maps · OKC Tips
12 places
guides.apple.com
January 22, 2025 at 5:30 PM
Hey Dennis! I live in OKC if you need any recommendations
January 22, 2025 at 5:20 PM