Formerly: Google. VP AI Development @ DataRobot. Founder of Decision AI. Economist @ FTC
Currently: Using LLMs to predict future events
Instead of learning from a static text, I'd rather learn ideas/insights through interactive conversation. I'll pay for a long prompt that an LLM can talk through with me.
I really want this!
Making a custom video game was faster than picking up a birthday gift from Target.
2025 will be the year we fundamentally rethink how custom apps fit into our lives
I'm using conventional ML ideas (iterating based on validation scores). But it's >10X more broadly useful than conventional ML prediction.
Who else is trying LLMs for pred markets?
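A minimal sketch of what "iterating based on validation scores" could look like for LLM forecasting, assuming the OpenAI Python client; the prompt template, helper names, and validation questions here are illustrative stand-ins, not from the post.

```python
# Sketch: score an LLM forecaster on resolved questions, then iterate on the prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical validation set: resolved questions with known outcomes (1 = yes, 0 = no).
VALIDATION_SET = [
    {"question": "Will the Fed cut rates at its March meeting?", "outcome": 0},
    {"question": "Will SpaceX attempt a Starship launch in Q2?", "outcome": 1},
]

PROMPT_TEMPLATE = (
    "You are a careful forecaster. Question: {question}\n"
    "Reply with only a probability between 0 and 1 that the answer is yes."
)

def forecast(question: str) -> float:
    """Ask the model for a probability and parse the single-number reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(question=question)}],
    )
    return float(response.choices[0].message.content.strip())

def brier_score(validation_set) -> float:
    """Mean squared error between forecast probabilities and resolved outcomes."""
    errors = [(forecast(q["question"]) - q["outcome"]) ** 2 for q in validation_set]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    # Edit PROMPT_TEMPLATE, re-run, and keep whichever version scores lowest:
    # the same train/validate loop as conventional ML, with the prompt as the model.
    print(f"Brier score: {brier_score(VALIDATION_SET):.3f}")
```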
You can see the problem on all the social sites.
Train-time scaling favors a winner-take-all outcome. Whoever spends the most upfront will have the best model.
Test-time scaling means more people can experiment and innovate... lower barriers to entry.
Asking a person a question can influence how they'd subsequently respond to variations on that question
So you can't experiment with any precision
With LLMs, set the temp to 0 and you can experiment very precisely
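A minimal sketch of that kind of controlled experiment, assuming the OpenAI Python client; the question variants are illustrative. Each variant runs in a fresh conversation, so, unlike a human subject, the model isn't primed by the earlier phrasing.

```python
# Sketch: compare answers to two phrasings of the same question at temperature 0.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two phrasings of the same underlying question (illustrative).
VARIANTS = [
    "Is a hot dog a sandwich? Answer yes or no.",
    "Would you classify a hot dog as a type of sandwich? Answer yes or no.",
]

def ask(prompt: str) -> str:
    """Each call is an independent conversation, so one variant can't influence the other."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,  # greedy decoding: near-deterministic, repeatable answers
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

for variant in VARIANTS:
    print(f"{variant!r} -> {ask(variant)}")
```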
It looks very fun
Or a trick to improve Cursor for notebooks?