philmod
@philmod.bsky.social
Rocket scientist, converted to software. Team lead at H company. Ex-Googler. Belgian. Traveller. Musician. Egg Sommelier. Opinions are my own.
Lego announces Smart Brick, the ‘most significant evolution’ in 50 years share.google/e62z1cBDD6sr...
Lego announces Smart Brick, the ‘most significant evolution’ in 50 years
Starting with Lego Star Wars.
share.google
January 6, 2026 at 8:49 AM
Companies will lose money and trust using LLM chatbots.

I recently bought a sensor for my water meter. It wasn't clear whether the sensor was compatible with my meter. I wanted to email the company, but there was only a chatbot, which said it was compatible. After receiving it, I couldn't figure out how to install it.
December 23, 2025 at 10:48 AM
AI vs human code gen report: AI code creates 1.7x more issues www.coderabbit.ai/blog/state-o...
AI vs human code gen report: AI code creates 1.7x more issues
We analyzed 470 open-source GitHub pull requests, using CodeRabbit’s structured issue taxonomy and found that AI generated code creates 1.7x more issues.
www.coderabbit.ai
December 22, 2025 at 5:09 PM
"And while agent swarms running in the cloud feels like the "AGI endgame", we live in an intermediate and slow enough takeoff world of jagged capabilities that it makes more sense to run the agents directly on the developer's computer." karpathy.bearblog.dev/year-in-revi...
2025 LLM Year in Review
2025 Year in Review of LLM paradigm changes
karpathy.bearblog.dev
December 22, 2025 at 12:41 PM
After moving our dev workload to the cloud (at Google we dropped our beloved workstations circa 2020 for more powerful and accessible cloud instances), are we going to move it back to a powerful machine with a GPU, in order to run LLM inference ourselves (maybe cheaper and definitely more private)?
November 21, 2025 at 2:30 PM
Every airport should have a post box in the departure area. Really!
November 3, 2025 at 9:45 AM
Disposable Code Is Here to Stay, but Durable Code Is What Runs the World www.honeycomb.io/blog/disposa...

The difficulty, imho, is how to move from disposable to trusted code. How do you take a huge bunch of code and make it production-ready?
Disposable Code Is Here to Stay, but Durable Code Is What Runs the World
Every day I seem to run into yet another post with someone solemnly opining that “writing code has never been the hardest part of software engineering. And hey, that’s smashing.
www.honeycomb.io
October 15, 2025 at 4:09 PM
Reposted by philmod
Since 2005 in the USA, 93% of murders committed by extremists have been the work of the far right. Everywhere, anti-wokeism is taking hold to render these crimes invisible, crimes it is complicit in through its silences and its excesses.
September 13, 2025 at 8:03 AM
Why do LLM coding models add so many emojis in the code/logs? Is it widely done in public repos?
September 6, 2025 at 3:30 PM
Less experienced engineers rely too heavily on LLMs without understanding the output, adding more work for reviewers. metr.org/blog/2025-07...
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
We conduct a randomized controlled trial (RCT) to understand how early-2025 AI tools affect the productivity of experienced open-source developers working on their own repositories. Surprisingly, we…
metr.org
July 28, 2025 at 4:10 PM
I was searching on Google for children's wildlife documentaries on Netflix. Not sure options 1 and 3 are appropriate ...
July 23, 2025 at 1:03 PM
The quality of software engineering is worrisome these days. Less experienced engineers rely so much on LLMs that they don't always check, or even understand, the generated code. Reviews therefore take much longer: I need to ask more questions, and sometimes even help them understand their own code.
July 2, 2025 at 8:22 AM
The Ingredients of a Productive Monorepo blog.swgillespie.me/posts/monore...
The Ingredients of a Productive Monorepo
misguided thoughts
blog.swgillespie.me
June 25, 2025 at 10:09 PM
Understanding and Coding the KV Cache in LLMs from Scratch magazine.sebastianraschka.com/p/coding-the...
Understanding and Coding the KV Cache in LLMs from Scratch
KV caches are one of the most critical techniques for efficient inference in LLMs in production.
magazine.sebastianraschka.com
June 25, 2025 at 4:13 PM
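The linked article builds a KV cache from scratch; as a rough illustration of the idea (my own sketch, not the article's code), here is a minimal numpy version where each decoding step appends its key/value pair to the cache and attends over the stored history, instead of recomputing projections for all past tokens:

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector q
    # over keys K (t, d) and values V (t, d).
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

class KVCache:
    """Append-only cache of past key/value vectors for one attention head."""
    def __init__(self, d):
        self.K = np.empty((0, d))
        self.V = np.empty((0, d))

    def step(self, k, v, q):
        # Store this token's key/value, then attend over the whole history.
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])
        return attention(q, self.K, self.V)
```

The cached incremental result matches recomputing attention over all tokens at every step; the saving is that K and V projections for past tokens are computed once, turning each decode step from O(t) projection work into O(1).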
Surfer-H Meets Holo1: Cost-Efficient Web Agent Powered by Open Weights arxiv.org/abs/2506.02865
Surfer-H Meets Holo1: Cost-Efficient Web Agent Powered by Open Weights
We present Surfer-H, a cost-efficient web agent that integrates Vision-Language Models (VLM) to perform user-defined tasks on the web. We pair it with Holo1, a new open-weight collection of VLMs…
arxiv.org
June 4, 2025 at 4:11 PM
Some recruiter called my phone and said they got my number through LinkedIn? Is that possible??
April 25, 2025 at 3:37 PM
Lots of people trying to guess the future of AI. Interesting story. ai-2027.com
AI 2027
A research-backed AI scenario forecast.
ai-2027.com
April 8, 2025 at 7:42 AM
AI Coding in 2024 be like
YouTube video by Programmers are also human
youtube.com
February 12, 2025 at 10:15 AM
It always surprises me that GitHub's status page isn't automatically updated with issues in real time. There are clearly more issues than the ones listed. www.githubstatus.com
January 30, 2025 at 2:50 PM
"A rich world model cannot be acquired from language alone"
Ingredients of understanding
Thoughts on how human understanding is different from LLM "understanding"
buff.ly
January 29, 2025 at 5:09 PM
One of the reasons I left the USA last year was to avoid going back to that weird, constant feeling of anxiety from when Trump was last president. Today, I want to avoid having those feelings return, and I hope that doesn't mean having to leave Bluesky too. youtu.be/t7pH36-5UBo?...
Trump is back. What now?
YouTube video by Hasan Minhaj
youtu.be
January 21, 2025 at 2:37 PM