DinSpin Official
@dinspin.app
🍴Answering “Where should we eat?!” since 2020

https://dinspin.app (coming soon)
github.com/deepseek-ai/...

No doubt, it would help people understand that there are tangible ways to experience the bias in an LLM’s training data…ways they probably won’t enjoy.
Political bias · Issue #51 · deepseek-ai/DeepSeek-LLM
This language model has a strong political bias, covering up some facts to support the Chinese government's propaganda. Here are some examples: Here's an example with a similar American event: This...
github.com
January 27, 2025 at 2:23 PM
WALL-E, here we come
January 6, 2025 at 4:08 PM
Wait, you don’t do that? 😂
January 5, 2025 at 8:26 PM
Am I the only one who doesn’t keep my heads of lettuce on a plate in my fridge?
January 5, 2025 at 8:16 PM
Let’s hope it doesn’t take them long to find their way over here
January 3, 2025 at 3:10 AM
I think it’s sometimes overlooked that one of the things developers do WITH code is have fun. No-code doesn’t tickle the brain the same way. That doesn’t mean it’s bad, it just makes it boring for devs imo 🤷‍♂️
December 25, 2024 at 6:13 PM
Fascinating. Thanks for providing that excerpt. Seems that, at least for now, prompting technique remains a crucial component of a performant LLM system.
December 19, 2024 at 4:43 PM
I didn’t get a chance to read the entire paper, but it would be interesting to see the difference in results if the scratchpad weren’t provided, i.e., does the model only exhibit alignment faking when given the opportunity to reason?
December 19, 2024 at 4:27 PM
Thanks for the reply! Super interesting to hear the inside rationale behind these decisions 🔥
December 17, 2024 at 11:39 PM
Really curious to know why it was ∞ in the first place?
December 17, 2024 at 6:30 PM