Alejandro
@acompa.bsky.social
ML / AI at Not Diamond
- prompt adaptation capabilities with outstanding results for a Fortune 100 company

We’re now looking to fill three technical roles to build more of this and to support our partners. If you want to build impactful #genai products, see the link for more details.

notdiamond.notion.site/Not-Diamond-...
Not Diamond open roles | Notion
Not Diamond is a multi-model AI infrastructure platform used by Fortune 100s and leading startups, backed by folks like Jeff Dean, Julien Chaumond, and Ion Stoica. We are a small, elite team over-inde...
February 22, 2025 at 6:54 PM
AGI is ruined!!
December 21, 2024 at 11:29 PM
But “meaningful [to the market]”, when the market has yet to see that threshold, begs the question.

If you believe the market doesn’t want o1, and ask me to demonstrate otherwise, then I don’t have a shot at convincing you. Even if I point to multiple quarters of Meta’s earnings calls, right?
December 21, 2024 at 11:10 PM
Yeah that’s fair. At a minimum we’re seeing strong adoption (ranging from prototypes to production) across customer service contexts, data annotation / summarization, software development, and operational process automation.
December 21, 2024 at 11:02 PM
Well, clearly we’ve reached AGI here
December 21, 2024 at 10:07 PM
😂
December 21, 2024 at 7:17 PM
Strongly agreed. I’ve seen some embarrassing (at best!) medical failures from o1. machine-learning-made-simple.medium.com/openai-is-ly...
OpenAI is lying about o-1’s Medical Diagnostic Capabilities
Uncovering critical issues with the model + suggestions on how to improve it for medical diagnosis
machine-learning-made-simple.medium.com
December 21, 2024 at 5:20 PM
(I’m arguing from a perspective where (1) AGI claims are unrealistic, (2) most AI marketing deserves skepticism, and yet (3) we can still develop meaningful apps / workflows around these models.)
December 21, 2024 at 5:19 PM
And you’re concluding this because of

> the researchers' hypothesis that LLMs look for patterns in reasoning problems, rather than innately understand the concept

right?

This is absolutely a failure from an AGI perspective. But could that pattern-matching still be useful for identifying generalized reasoning patterns?
December 21, 2024 at 5:17 PM
I get it: my industry absolutely has a terrible track record with product hype. I personally hate it.

But the people I know engaging in this work *aren’t* the OAIs of the world. They’re uni lab startups quietly working with hospitals and researchers.
December 21, 2024 at 5:12 PM
Hugs on your pops. That’s fucking terrible.
December 21, 2024 at 5:06 PM
People are! And they’re fine-tuning models atop Meta’s Llama etc. to work with clinical notes and scans! But that’s way less exciting to talk about than OpenAI palace intrigue.
December 21, 2024 at 5:05 PM
(From www.wheresyoured.at/subprimeai/, for context for anyone else reading.)
December 21, 2024 at 4:16 PM
I’ll share one:

> “a big, stupid magic trick” in the form of OpenAI's (rushed) launch of its "o1 (codenamed: strawberry") model

You quoted yourself re: “a big, stupid magic trick.” So: why does o1 qualify as one?
December 21, 2024 at 4:15 PM
No but seriously lol
December 21, 2024 at 3:44 PM
IMO it depends on your goal as a media professional. “LLMs got my answers wrong, so they’re bad” is both factually correct and superficial. You can certainly run that, or you can explore _why_ they’re wrong in order to enrich your findings.
December 21, 2024 at 3:31 PM