dmewes.com
Daniel Mewes
@dmewes.com
Computer scientist. Interested in technology, artificial and natural intelligence, emergent complexity, among other things. Blogging at amongai.com.

Currently at Imbue. Previously Ambient.ai, Stripe, RethinkDB, Max Planck Institute.
Congratulations on the raise!
Big fan of your open-endedness work here. We've been getting a lot of inspiration from it at @imbue-ai.bsky.social. Looking forward to seeing what else Sakana will be developing with the new funding!
November 17, 2025 at 3:43 AM
It is off by default. You have to enable it from what I've read.
October 23, 2025 at 1:10 PM
Duh, if she enabled the browser memories feature, of course it will do that. It's not like they're hiding it, it's one of the advertised features and afaik can be turned off.
October 22, 2025 at 9:33 PM
If you can read this, that means I tried and it worked.
October 10, 2025 at 4:36 AM
The fact that I can point a coding agent at an arbitrary open-source project and have it implement some missing feature in a few minutes is wild. The one-shot implementation quality might not be sufficient for merging it upstream, but it's quite often enough for my one-off use.
October 8, 2025 at 12:08 AM
We just released a beta of such a product last week 🤞 https://imbue.com/sculptor-announce/
Runs Claude Code in Docker containers with some special collaboration sauce.
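For the curious, here's a rough sketch of what running a coding agent in a container can look like. This is my illustration only, not Sculptor's actual code; the image name and mount layout are made up.

```python
# Hypothetical sketch, not Sculptor's implementation: launch a coding agent
# inside a throwaway Docker container with the target repo mounted.
import subprocess

def run_agent_in_container(repo_path: str, image: str = "agent-sandbox") -> None:
    """Start an interactive agent session isolated in a container."""
    subprocess.run(
        [
            "docker", "run", "--rm", "-it",
            "-v", f"{repo_path}:/workspace",   # mount the repo into the container
            "-w", "/workspace",                # work from the repo root
            image,                             # assumed image with the agent CLI installed
            "claude",                          # Claude Code's CLI entry point
        ],
        check=True,
    )
```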
October 5, 2025 at 2:38 PM
We might open source Sculptor in the future, but aren't quite ready for it yet. (between us though, it's written in Python and not obfuscated, so you can also just look at its code)
October 3, 2025 at 9:57 PM
Ah, interesting. I hadn't thought of that.
October 2, 2025 at 3:14 PM
Unfortunately I don't think they're made uncomfortable by the rituals, but by Leo criticizing Trump's deportation policy. They just have to latch on to random things now to present the pope in a bad light.
October 2, 2025 at 3:10 PM
Interesting that Sonnet 4.5 is a lot more expensive than Sonnet 4 in this eval. I assume it generates more tokens for the same problem?
October 1, 2025 at 2:19 AM
Yeah, I feel like LLMs are such a cool tool for diving into unfamiliar languages, both for answering questions and for generating example code at various levels of complexity. I've recently been trying to learn some Lisp again, including by vibe coding some. bsky.app/profile/dmew...
I used AI vibe coding tools to port Anthropic's API client to Common Lisp: github.com/danielmewes/...
It was a very quick and fun mini project that taught me a thing or two about Lisp. Use it at your own risk.
GitHub - danielmewes/anthropic-sdk-cl-port: An AI-written port of the Anthropic client SDK to Common Lisp.
September 29, 2025 at 2:21 PM
Ah yeah. I have never written Cobol myself, but happened to have listened to this podcast just a week ago. youtu.be/Rdm3fgxbLOE?... So I recognized some of the structures when I looked at your webserver.
Episode 60 - COBOL Never Dies (YouTube video by Advent Of Computing)
September 29, 2025 at 2:15 PM
Oh, I see you used Claude to write it... Either way, maybe you still have a sense of the above from reviewing its code?
September 29, 2025 at 1:39 PM
Very cool! Was there anything in the structure of Cobol that you found made it challenging, or just a lack of suitable libraries? (I'd guess it's not a great language for parsing, but then neither is C, is it?)
September 29, 2025 at 1:38 PM
When I first read your summary, I thought you had changed the log prob sampling so it could never sample the stop token for 10 hours. I wonder if that approach would give similar behavior or something different. In that case, there would be no countdown timer or user messages at all.
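Something like this is what I had in mind. A minimal sketch, assuming a PyTorch-style sampling loop; this is my guess at the variant, not the original experiment's code:

```python
# Assumed sketch: make the stop/EOS token impossible to pick by masking its
# logit before sampling the next token.
import torch

def sample_without_stop(logits: torch.Tensor, eos_token_id: int) -> int:
    """Sample a next token while giving the stop token zero probability."""
    masked = logits.clone()
    masked[eos_token_id] = float("-inf")       # stop token can never be sampled
    probs = torch.softmax(masked, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))
```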
September 29, 2025 at 1:30 PM
Cool experiment! I agree that it might reveal something about a model's ability (tendency?) to seek novelty when it can't make progress with one approach.
September 29, 2025 at 1:28 PM