Bram Zijlstra
@bramzijlstra.com
bramzijlstra.com
Machine Learning Engineer with a background in Philosophy and AI.
I live in Amsterdam. Currently working at the Dutch Chamber of Commerce (KVK). Also founder of a boutique consulting firm.
Every now and then I read posts claiming that 90% of coding will be done by AI this year, and I can't lie, it makes me a bit nervous. Then a study like this drops and I can relax a bit.
You’ve probably heard about how AI/LLMs can solve Math Olympiad problems ( deepmind.google/discover/blo... ).

So naturally, some people put it to the test — hours after the 2025 US Math Olympiad problems were released.

The result: They all sucked!
April 2, 2025 at 8:06 AM
There are plenty of cases where survivorship bias doesn’t apply. We just don’t remember them.
March 29, 2025 at 11:45 AM
If I had to bring an institution down from the inside, suggesting a rewrite of the entire codebase in a different language would probably be my first idea. Not sure if I would need more ideas after that one.
March 28, 2025 at 3:18 PM
Do you think vibe coding is a gradual or fundamental change in technology?
In a way, we've always been vibe coding (but also not at all)
Blog by Bram Zijlstra
bramzijlstra.com
March 27, 2025 at 9:13 AM
Errors with a 200 status code are like gift-wrapping a turd
status codes mean things! stop returning errors with a 200!!
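To make the gripe concrete, a minimal sketch of doing it right, assuming Flask (the route and the USERS store are made up for illustration):

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store, just for illustration.
USERS = {1: {"id": 1, "name": "Ada"}}

@app.route("/users/<int:user_id>")
def get_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        # Anti-pattern: returning jsonify({"error": ...}) alone implies 200.
        # Let the status code carry the meaning instead:
        return jsonify({"error": "user not found"}), 404
    return jsonify(user)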
March 26, 2025 at 9:35 PM
An infinite number of monkeys with an LLM can write the complete works of Shakespeare
March 25, 2025 at 12:03 PM
My doctor told me he's into vibe surgeries lately
March 20, 2025 at 6:08 PM
I think the reason LLMs are overconfident is that we keep telling them "You are an expert in" literally anything
March 18, 2025 at 2:35 PM
"Why flake8? I use Ruff"
I'm just a dog.

Sat in front of your code.

Silently judging it.
March 15, 2025 at 11:47 AM
Electron apps are the Monkey's Paw for someone wishing for cheaper RAM
March 3, 2025 at 3:27 PM
Anyone know best practices / tips for improving LLM quality on classifying long texts? For shorter inputs, few-shot learning and finding the best examples work well, but this is not very practical with longer texts. #databs
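One workaround (a sketch, not a settled answer): chunk the text, classify each chunk, and majority-vote. Here call_llm is a hypothetical stand-in for whatever client you use, and the prompt wording is only illustrative.

from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM client you use.
    raise NotImplementedError

def classify_long_text(text: str, labels: list[str], chunk_size: int = 2000) -> str:
    # Split the document into fixed-size character chunks.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    votes = []
    for chunk in chunks:
        prompt = (
            f"Classify the following passage as one of {labels}. "
            f"Answer with the label only.\n\n{chunk}"
        )
        votes.append(call_llm(prompt).strip())
    # Majority vote across chunk-level predictions.
    return Counter(votes).most_common(1)[0][0]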
February 28, 2025 at 2:10 PM
We'll have AI alignment before MS Word alignment
You all talk AI meanwhile I am still stuck in this world where MS Word refuses to do simple alignment and font tasks
February 24, 2025 at 1:32 PM
An agent is an LLM that uses tools.
A tool is someone who keeps saying '2025 is the year of the agents'
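Taking the first line at face value, a toy sketch of that definition. Every name here is hypothetical: call_llm stands in for any model client, and the TOOL:/DONE: reply protocol is made up.

from typing import Callable

def call_llm(prompt: str) -> str:
    # Hypothetical client; expected to reply "TOOL:<name>:<arg>" or "DONE:<answer>".
    raise NotImplementedError

def run_agent(task: str, tools: dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply.startswith("DONE:"):       # the model says it is finished
            return reply.removeprefix("DONE:")
        _, name, arg = reply.split(":", 2)  # the model asked for a tool call
        context += f"\n{name} returned: {tools[name](arg)}"
    return context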
February 18, 2025 at 1:53 PM
Today was a day where all emails found me well
February 7, 2025 at 4:18 PM
Interesting benchmark from Adyen on multi-step reasoning. New benchmarks are great for establishing a baseline for historic models; any model released after the benchmark should be treated with suspicion, since it may have seen the test data during training. #databs
huggingface.co/blog/dabstep
February 7, 2025 at 4:12 PM
Really want to like Cursor but I am completely PyCharm-brained, it seems. Anyone else have the same? Which copilot did you go for?
February 7, 2025 at 1:27 PM
I'm always surprised that ChatGPT suggests old-school NLP / string-manipulation tricks instead of suggesting a call to OpenAI. Seems like upselling 101 to me.
February 6, 2025 at 3:44 PM
“Our company has moat”
The moat:
February 4, 2025 at 4:39 PM
DeepSeek is not refusing the Tiananmen Square question, though it gets the answer wrong.
February 3, 2025 at 1:09 PM
Reposted by Bram Zijlstra
So, look. I'm sure I'm in the minority here on Bluesky in believing that training AI systems isn't copyright infringement.

But, also. Dude.

There's no way OpenAI can make this argument without looking very, very silly.
January 29, 2025 at 6:55 AM
Will copilots (eventually) negatively impact open-source library development? Major updates will break any copilot, which disincentivizes making them.
January 27, 2025 at 7:18 PM
Reposted by Bram Zijlstra
Baffling to see the disclosure around DeepSeek

Last I checked the tech industry, we celebrate small teams pushing the industry forward, coming up with novel ways to build software more efficiently. And sharing it.

Yet now there's a class of folks who think this is some bad thing?
January 27, 2025 at 4:17 PM
If I were a dictator for a day, I would ban the word 'syncing' outside the computer domain.
January 27, 2025 at 1:17 PM
Reposted by Bram Zijlstra
DeepSeek released a whole family of inference-scaling / "reasoning" models today, including distilled variants based on Llama and Qwen

Here are my notes on the new models, plus how I ran DeepSeek-R1-Distill-Llama-8B on my Mac using Ollama and LLM

simonwillison.net/2025/Jan/20/...
DeepSeek-R1 and exploring DeepSeek-R1-Distill-Llama-8B
DeepSeek are the Chinese AI lab who dropped the best currently available open weights LLM on Christmas day, DeepSeek v3. That model was trained in part using their unreleased R1 …
simonwillison.net
January 20, 2025 at 3:22 PM
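For reference, a minimal sketch of the local workflow described above, going through Ollama's REST API. The model tag deepseek-r1:8b is an assumption; check the Ollama library and Simon's post for the exact names.

import requests

# Assumes the Ollama server is running locally and the model has been pulled,
# e.g. with `ollama pull deepseek-r1:8b` (tag is a guess).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",
        "prompt": "Why is the sky blue?",
        "stream": False,  # one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])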