📄Paper: arxiv.org/abs/2510.01171
🌐Blog: verbalized-sampling.com
Team: Jiayi Zhang @simon-ycl.bsky.social @derekch.bsky.social Anthony Sicilia, Michael Tomz, @chrmanning.bsky.social @shi-weiyan.bsky.social
@stanfordnlp.bsky.social × Northeastern × WVU
✍️ Creative writing → 2.1× diversity
💬 Dialogue → Matches human behavior
📊 Synthetic training data → +18% better
Emergent trend: Big models gain more than small ones
Tested w/ @stanfordnlp.bsky.social on thousands of outputs
Just paste one line (the verbalized-sampling instruction) before any creative task. That's it!
Instead of the same "safe" answer five times, you get five genuinely different ones from a single call.
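For anyone calling a model through an API rather than a chat window, here is a minimal sketch of the idea. To be clear about assumptions: the instruction text below is a paraphrase of the verbalized-sampling prompt (see the paper/blog for the exact wording), the model name is a placeholder, and the OpenAI Python client is just one example backend.

```python
# Minimal sketch of verbalized sampling over a chat API.
# NOTE: VS_PREFIX paraphrases the prompt from verbalized-sampling.com;
# the exact wording in the paper/blog may differ. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

VS_PREFIX = (
    "Generate 5 different responses to the task below, each in its own "
    "<response> tag with an estimated probability. Sample them from the full "
    "distribution of plausible answers, not just the single most likely one.\n\n"
)

task = "Write an opening line for a blog post about remote work."

completion = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": VS_PREFIX + task}],
)

print(completion.choices[0].message.content)
```

One call now returns several candidates with verbalized probabilities, so you can pick, re-rank, or sample among them instead of settling for the single mode-collapsed reply.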
Ever notice how LLMs all sound the same?
They know 100+ jokes but only ever tell one.
Every blog intro: "In today's digital landscape..."
We figured out why – and how to unlock the rest 🔓
Copy-paste prompt: 🧵