We’re now looking to fill three technical roles to build more products and support our partners. If you want to build impactful #genai products, see the link for more details.
notdiamond.notion.site/Not-Diamond-...
If you believe the market doesn’t want o1 and ask me to demonstrate otherwise, then I don’t have a shot at convincing you. Even if I point to multiple quarters of Meta’s earnings calls, right?
> the researchers' hypothesis that LLMs look for patterns in reasoning problems, rather than innately understand the concept
right?
This is absolutely a failure from an AGI perspective. But could it be useful to identify generalized reasoning patterns?
But the people I know engaging in this work *aren’t* the OAIs of the world. They’re uni lab startups quietly working with hospitals and researchers.
> “a big, stupid magic trick” in the form of OpenAI’s (rushed) launch of its “o1” (codenamed “Strawberry”) model
You quoted yourself re: “a big, stupid magic trick.” So why does o1 qualify as one?