Rick Lamers
@ricklamers.bsky.social
Tinkering with ML systems at Groq 🧙‍♂️
Pinned
"if you can compress data well, you understand its patterns"

Low training loss + a highly compressible hypothesis = strong generalization
Reposted by Rick Lamers
Haven't heard of web applets? It's a web-based protocol for building software that both humans & AI can understand and use together.

github.com/unternet-co/...
GitHub - unternet-co/web-applets: The home of the Web Applets spec, demo and SDK
github.com
December 9, 2024 at 6:46 AM
Reposted by Rick Lamers
(1/5) "The reports of the LLM scaling laws' demise have been greatly exaggerated."

techcrunch.com/2024/12/06/m...
December 6, 2024 at 6:11 PM
"Babe, I'm curve fitting to my hand drawn loss curve, look, look"
November 29, 2024 at 1:23 PM
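Joking aside, fitting a scaling-law-shaped curve to a few loss points really is a one-liner. A minimal SciPy sketch, with made-up points standing in for the hand-drawn curve:

```python
import numpy as np
from scipy.optimize import curve_fit

# (step, loss) points eyeballed off a hand-drawn curve -- made up for illustration
steps = np.array([100, 500, 1000, 5000, 10000, 50000], dtype=float)
losses = np.array([4.2, 3.1, 2.8, 2.2, 2.0, 1.7])

# The usual scaling-law form: L(n) = a * n^(-b) + c
def power_law(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, steps, losses, p0=(5.0, 0.1, 1.0))
print(f"L(n) ≈ {a:.2f} * n^(-{b:.3f}) + {c:.2f}")
```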
I slept on Windsurf Editor longer than I should have, try it, it's amazing. Much more agentic than Cursor 🤖

codeium.com/windsurf
November 29, 2024 at 1:15 PM
I was inspired to create a web app that demonstrates the universal approximation power of neural networks using TensorFlow(.js). Try it and hack around with the source code!
November 28, 2024 at 7:02 PM
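Not the TF.js app itself, but the same idea in a few lines of Python Keras: a single hidden layer fitting a continuous function on an interval. The target function and layer width are arbitrary choices here:

```python
import numpy as np
import tensorflow as tf

# One hidden layer can approximate any continuous function on an interval
# (the Universal Approximation Theorem); width buys accuracy.
x = np.linspace(-np.pi, np.pi, 512).reshape(-1, 1).astype("float32")
y = np.sin(3 * x)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=500, verbose=0)
print("final MSE:", model.evaluate(x, y, verbose=0))
```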
The Universal Approximation Theorem for neural networks by Michael Nielsen in his custom Magic Paper app, just brilliant.

youtu.be/Ijqkc7OLenI
The Universal Approximation Theorem for neural networks
YouTube video by Michael Nielsen
youtu.be
November 28, 2024 at 5:38 PM
Reposted by Rick Lamers
(1/5) The question I get asked a lot is, “Should I be afraid of AI?”

There was this guy who got in a lot of trouble once, his name was Galileo.
November 28, 2024 at 3:43 PM
Reposted by Rick Lamers
Anthropic released an interesting thing today: an attempt at a standard protocol for LLM tools to talk to services that provide tools and extra context to be used by the models. modelcontextprotocol.io
Introduction - Model Context Protocol
Get started with the Model Context Protocol (MCP)
modelcontextprotocol.io
November 25, 2024 at 4:37 PM
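What an MCP tool server looks like in code, as a rough sketch: this assumes the Python SDK's FastMCP-style interface (`pip install mcp`), so exact names may differ by version, and the tool body is placeholder logic.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_temperature(city: str) -> str:
    """Return a (fake) temperature reading for a city."""
    return f"It is 21°C in {city}"

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client can discover and call the tool
```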
You probably haven't heard of PydanticAI yet, but I'm sure you have heard of Pydantic. Check it out, it's a VERY neat way to build agentic LLM-backed programs using all the goodness that types and structured outputs give you.

The agent framework you won't hate?

Groq + PydanticAI = 🚀
add Groq client support by ricklamers · Pull Request #84 · pydantic/pydantic-ai
Added GroqModel class to interact with Groq API. Implemented structured and text-based responses for Groq models. Added example and tests for Groq model integration. Updated dependencies to include...
github.com
November 24, 2024 at 8:37 PM
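A minimal sketch of what that combination looks like, roughly following the PydanticAI API around the time of the PR; the model id and the `result_type` parameter name are assumptions that may differ in newer releases:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

agent = Agent(
    "groq:llama-3.1-70b-versatile",  # any Groq-hosted model id works here
    result_type=CityInfo,            # structured output, validated by Pydantic
)

result = agent.run_sync("Tell me about the city where the Eiffel Tower is.")
print(result.data)  # e.g. CityInfo(city='Paris', country='France')
```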
Karazhan (raid) according to FLUX.1 Schnell
November 24, 2024 at 7:42 PM
Image generation is just TOO MUCH FUN!

Fast prompt generation with Groq ✅
Fast image generation with Fal.ai ✅
Open Source (MIT) ✅

⚙️ pip install pyimagen
November 23, 2024 at 7:28 PM
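Not pyimagen's actual API, just a sketch of the same pipeline with the underlying clients: Groq writes a detailed prompt fast, Fal.ai renders it. Model ids are assumptions, and GROQ_API_KEY / FAL_KEY must be set.

```python
import fal_client
from groq import Groq

# Step 1: have a fast Groq-hosted model expand a short idea into a rich prompt.
groq = Groq()
chat = groq.chat.completions.create(
    model="llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "Write a vivid one-sentence image prompt: Karazhan raid."}],
)
prompt = chat.choices[0].message.content

# Step 2: render it with FLUX Schnell on Fal.ai.
result = fal_client.subscribe("fal-ai/flux/schnell", arguments={"prompt": prompt})
print(result["images"][0]["url"])
```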
Reposted by Rick Lamers
Evals are "too damn expensive" until you:

• can't migrate underlying models safely
• can't add new features with confidence
• can't ship without HITL evals, which take >100x longer
• product development and iteration grind to a halt
• lose customer trust due to poor user experience
November 23, 2024 at 4:57 AM
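Even a tiny automated harness beats none. A minimal sketch, where `ask_model` is a stub standing in for your real LLM call:

```python
# Each case pairs an input with a substring the answer must contain.
cases = [
    ("What is 2+2?", "4"),
    ("What is the capital of France?", "Paris"),
]

def ask_model(question: str) -> str:
    # Stub: swap in your real LLM API call here.
    return "4" if "2+2" in question else "Paris"

def pass_rate() -> float:
    passed = sum(expected.lower() in ask_model(q).lower() for q, expected in cases)
    return passed / len(cases)

# Gate model migrations on a threshold instead of vibes:
assert pass_rate() >= 0.95, "eval regression -- don't ship"
```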
Power of open! Groq-hosted open-source models + open Bsky platform data (through their search API).

Search for your favorite topic on Bsky and get instant answers plus post links!

Link: groq-bsky.vercel.app
November 19, 2024 at 2:42 PM
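The whole thing fits in a short script. A sketch of the idea using Bluesky's public search endpoint plus a Groq-hosted model (model id is an assumption; needs GROQ_API_KEY):

```python
import requests
from groq import Groq

# Bluesky's search endpoint is public -- no auth needed.
resp = requests.get(
    "https://public.api.bsky.app/xrpc/app.bsky.feed.searchPosts",
    params={"q": "machine learning", "limit": 10},
)
posts = [p["record"]["text"] for p in resp.json()["posts"]]

# Summarize the hits with a fast open model on Groq.
client = Groq()
answer = client.chat.completions.create(
    model="llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "Summarize these posts:\n\n" + "\n---\n".join(posts)}],
)
print(answer.choices[0].message.content)
```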
Important reason to move to Bluesky: you can hack on it because the data is much easier to access 🔥
November 19, 2024 at 11:50 AM
This meme holds up surprisingly well. AWS ushered in an era where infra was no longer a differentiating factor; all startups built on the cloud, and moats continued to exist (network effects, momentum, proprietary data, etc.). AI APIs = the new cloud layer.
November 19, 2024 at 11:19 AM
"if you can compress data well, you understand its patterns"

Low training loss + a highly compressible hypothesis = strong generalization
November 19, 2024 at 11:13 AM
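That's the Minimum Description Length intuition, and you can see it in two calls to zlib: data with structure compresses, noise doesn't.

```python
import os
import zlib

patterned = b"0123456789" * 1000   # 10 KB with an obvious repeating pattern
noise = os.urandom(10_000)         # 10 KB of randomness -- no pattern to find

print(len(zlib.compress(patterned)))  # tiny: the pattern is the whole description
print(len(zlib.compress(noise)))      # ~10 KB: incompressible, nothing understood
```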
I love how bolt.new invites you to try new stacks, like building with Remotion for programmatic video generation.
November 19, 2024 at 10:53 AM
Reposted by Rick Lamers
How can you not be jealous when you copy-paste a long passage of text into GPT-4, ask it to e.g. summarize, and see tokens flowing moments later
May 27, 2023 at 3:56 AM
OpenAI has been slow to roll out ChatGPT Code Interpreter access to everyone (including me). So I thought it would be fun to code my own version, you can use it today and it's Open Source! https://github.com/ricklamers/gpt-code-ui
May 29, 2023 at 3:10 AM
Great example of why mathematical understanding of e.g. matrix decompositions is key for optimizing existing methods. You might not need math to utilize other people's work in machine learning, but be prepared to bring a mathematical toolkit if you're looking to innovate.
Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) - Lightning AI
In the rapidly evolving field of AI, using large language models in an efficient and effective manner is becoming more and more important. In this article, you will learn how to tune an LLM with Low-Rank Adaptation (LoRA) computationally efficiently!
lightning.ai
May 25, 2023 at 11:32 PM
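The math in question fits in a few lines: LoRA freezes the pretrained weight W and learns a low-rank update BA, so the effective weight is W + (α/r)·BA. A minimal numpy sketch (dimensions are arbitrary):

```python
import numpy as np

d, r, alpha = 768, 8, 16                 # r << d is the whole trick
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))              # frozen pretrained weights
A = rng.normal(size=(r, d)) * 0.01       # trainable, r*d params
B = np.zeros((d, r))                     # trainable, zero-init so the update starts at 0

W_eff = W + (alpha / r) * (B @ A)        # what the adapted layer actually applies
print(f"trainable params: {A.size + B.size:,} vs full finetune: {W.size:,}")
```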