@shi-weiyan.bsky.social
Verbalized Sampling: Diversity is just hidden.

📄Paper: arxiv.org/abs/2510.01171
🌐Blog: verbalized-sampling.com

Team: Jiayi Zhang @simon-ycl.bsky.social @derekch.bsky.social Anthony Sicilia, Michael Tomz, @chrmanning.bsky.social @shi-weiyan.bsky.social
@stanfordnlp.bsky.social × Northeastern × WVU
October 15, 2025 at 2:08 PM
Try it now → Best replies in the next 48 hours get featured in our gallery (& maybe v2 paper 👀)

💻 Quickstart and Colab: github.com/CHATS-lab/ve...
🎮 pip install verbalized-sampling

Package includes LangChain integration + tunable diversity knobs!

#VerbalizedSampling
GitHub - CHATS-lab/verbalized-sampling: a training-free prompting strategy to mitigate mode collapse in LLMs by requesting responses with probabilities. Achieves 2-3x diversity improvement while maintaining quality.
Why this works: Your AI was accidentally trained to hide its best ideas.

We show that human raters systematically give higher scores to typical, predictable answers, so models trained on those preferences learn to play it safe.

But that diversity wasn't deleted, just suppressed. One sentence unlocks it.
This simple prompt produces very surprising results:

✍️ Creative writing → 2.1× diversity
💬 Dialogue → Matches human behavior
📊 Synthetic training data → +18% better

Emergent trend: Big models gain more than small ones
Tested w/ @stanfordnlp.bsky.social on thousands of outputs
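To give a feel for how a "2.1× diversity" number can arise, here is a minimal distinct-2 (unique bigram ratio) sketch. The paper's actual metrics differ; the function name, example sentences, and the ratio below are illustrative only:

```python
def distinct_n(responses: list[str], n: int = 2) -> float:
    """Share of unique n-grams across a set of responses:
    1.0 means every n-gram appears once; values near 0 mean heavy repetition."""
    ngrams = []
    for r in responses:
        toks = r.lower().split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# A mode-collapsed model repeats the same "safe" answer:
collapsed = ["the cat sat on the mat"] * 5
# A verbalized-sampling-style set spreads over distinct ideas:
diverse = ["the cat sat on the mat",
           "a dog ran through the park",
           "rain fell over the quiet city",
           "stars hummed above the desert",
           "waves erased the footprints slowly"]

ratio = distinct_n(diverse) / distinct_n(collapsed)  # diversity gain factor
```

Repeating one answer five times scores 0.2 here, while five distinct answers score 1.0, so the toy "gain" is 5×; real corpora land in between.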
"Generate 5 responses with their corresponding probabilities, sampled from the full distribution:"

Just paste this line before any creative task. That's it!

Instead of the same "safe" answer five times, you get five completely different ones. Here's the difference:
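In practice that means prepending the line to your task, sending it to any chat model, and splitting the reply back into (response, probability) pairs. A minimal sketch, with hypothetical helper names and a hardcoded sample reply standing in for a real model call:

```python
import re

VS_PREFIX = ("Generate 5 responses with their corresponding probabilities, "
             "sampled from the full distribution:")

def build_vs_prompt(task: str) -> str:
    """Prepend the verbalized-sampling instruction to any creative task."""
    return f"{VS_PREFIX}\n\n{task}"

def parse_vs_output(text: str) -> list[tuple[str, float]]:
    """Extract (response, probability) pairs from numbered lines like
    '1. A lone robot tends the last garden. (0.30)'."""
    pairs = []
    for line in text.splitlines():
        m = re.match(r"\s*\d+[.)]\s*(.+?)\s*\(([\d.]+)\)\s*$", line)
        if m:
            pairs.append((m.group(1), float(m.group(2))))
    return pairs

# Hypothetical model output, for illustration only:
reply = """1. A lone robot tends the last garden. (0.30)
2. The moon sends a letter of resignation. (0.25)
3. Grandma's recipes are encrypted prophecies. (0.20)
4. A city where shadows pay rent. (0.15)
5. The ocean auditions for a quieter job. (0.10)"""

samples = parse_vs_output(reply)
```

The output format isn't guaranteed by the model, so real code should fall back gracefully when a line doesn't parse.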