Michelangelo D’Agostino
mdagost.bsky.social
AI/ML Practitioner. Formerly VP of ML @ Tegus. Cameo, ShopRunner, Civis Analytics, Braintree/Venmo, Obama 2012. Views are all mine.
As a child of the ’90s, I’d rather have her smoke a cigarette than vape.
January 12, 2025 at 5:43 PM
But agree it’s not helpful when Kevin Scott says “Despite what other people think, we're not at diminishing marginal returns on scale-up…I try to help people understand there is an exponential here.” arstechnica.com/information-...
December 11, 2024 at 6:58 PM
I have no inside information, only what @natolambert.bsky.social wrote in that post:
December 11, 2024 at 6:56 PM
The nuance I was trying to capture is this: www.interconnects.ai/p/scaling-re...
December 11, 2024 at 6:52 PM
Fair point.
December 11, 2024 at 6:45 PM
Isn’t the claim that the pretraining perplexity should be exponential, which doesn’t necessarily imply that the task evals should also be exponential?
December 11, 2024 at 6:37 PM
For years I would practice “calendar zero” and delete all my past events as they happened. But I stopped because my team constantly harangued me about all those emails.
December 11, 2024 at 6:34 PM
To be clear, I agree with you that the definition is foggy. I think one difference is that a true agent, in addition to “going off,” isn’t following a predetermined plan. It’s triggered like a cron job or a set of APIs, but once it “goes off” it determines its own plan for how to use its tools via the LLM.
December 1, 2024 at 6:20 PM
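The distinction in the post above can be sketched in a few lines: the loop structure is fixed, but the LLM, not a predetermined script, picks the next tool at each step. This is a toy illustration, and `llm_decide` is a hypothetical stand-in for a real model call.

```python
# Minimal agent-loop sketch. Unlike a cron job, which runs a fixed
# script, the planner chooses each action at runtime.
def llm_decide(goal, history):
    # Hypothetical planner: a real agent would call an LLM here.
    if not history:
        return ("search", goal)
    return ("finish", history[-1])

# Toy tool registry; a real agent might have web search, a scraper, etc.
TOOLS = {"search": lambda q: f"results for {q!r}"}

def run_agent(goal):
    history = []
    while True:
        action, arg = llm_decide(goal, history)
        if action == "finish":
            return arg  # the agent, not the caller, decides when it's done
        history.append(TOOLS[action](arg))
```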
I’m not so sure. I think the idea is that it goes away from your attention, which is really important, and then resurfaces when it needs something or is done.
December 1, 2024 at 5:40 PM
I’ve been using Firecrawl and LLMs to build a searchable dataset of childcare programs offered by Chicago Public Schools. Really fun side project!

www.cps-care.info
November 23, 2024 at 2:51 AM
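The "searchable dataset" half of a project like this can be as simple as keyword matching over extracted records. A minimal sketch, assuming pages have already been scraped (e.g. with Firecrawl) and turned into structured records by an LLM extraction step; the field names and sample data are made up for illustration.

```python
# Toy keyword search over extracted program records. A record matches
# when every query term appears in its name or description.
def search(records, query):
    terms = query.lower().split()
    return [
        r for r in records
        if all(t in (r["name"] + " " + r["description"]).lower()
               for t in terms)
    ]

# Hypothetical sample records, standing in for LLM-extracted data.
programs = [
    {"name": "Morning Care at Lincoln Elementary",
     "description": "Before-school childcare, ages 5-10"},
    {"name": "After School Matters",
     "description": "Evening enrichment programs"},
]
```

A production version would more likely use full-text or embedding search, but the data shape is the same.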