Be Stiff
@be-stiff.bsky.social
With jambonbonbon!!
October 19, 2025 at 11:29 AM
Haha yes maybe. Also people who benefit from rights that were born out of political violence in the past.
September 12, 2025 at 3:02 AM
(They can all fuck off)
September 11, 2025 at 8:09 PM
Can you imagine sinking your hand into that fluffy belly? This is a joy I am privileged to experience daily (for about 15 seconds before the violence begins).
August 31, 2025 at 6:58 PM
David completed this a few weeks ago - said it’s the best game he’s ever played (even though he played it on the Switch, which is buggy af). Are you loving it?
March 16, 2025 at 1:44 PM
Report it! That’s totally unacceptable.
March 13, 2025 at 10:08 AM
Ha, that’s good - I was wondering if there was yet another new term I had to get to grips with! 😄
March 4, 2025 at 8:49 AM
I don’t know if you saw my other message (threads are still a bit messed up on here) but we aren’t tackling that particular issue with prompting (which I assume you meant when you said “promoting”?)
March 4, 2025 at 8:15 AM
Re sources, we have two in this instance: our knowledge base and the LLM’s own (no deep searching). Sometimes we can see the info has come from our knowledge base but the answer is off; it seems the LLM will sometimes add hallucinated info to its answer. Prob need to look at temp settings / clean up the db.
March 4, 2025 at 6:37 AM
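A rough sketch of the kind of fix being described here: pin the model to the retrieved knowledge-base text and turn the temperature down. This assumes an OpenAI-style chat API; the model name, function name, and prompt wording are illustrative, not the actual stack.

```python
# Minimal sketch, not the real pipeline: constrain answers to retrieved
# knowledge-base passages and lower the temperature so the model is less
# likely to embellish. Assumes the OpenAI Python SDK is installed and an
# API key is configured; "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

def answer_from_kb(question: str, passages: list[str]) -> str:
    context = "\n\n".join(passages)  # text already retrieved from the KB
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.0,  # low temperature = fewer creative additions
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using ONLY the context below. If the context "
                    "does not contain the answer, say you don't know.\n\n"
                    f"Context:\n{context}"
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```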
Ah, when I’m talking about prompts here it’s not as the user - I’m creating system prompts, so changing the behaviour of the LLM. Mostly prompt-based safeguarding stuff, but also ensuring the content is accurate. We’re using a mixture of safeguarding tools, orchestration tools, and an LLM.
March 4, 2025 at 6:19 AM
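For anyone unfamiliar with the distinction: a system prompt is set by the developer and shapes every response, and the end user never sees it. A minimal sketch of prompt-based safeguarding along these lines, again assuming an OpenAI-style chat API; the policy wording is invented for illustration.

```python
# Minimal sketch of prompt-based safeguarding. The system prompt is
# developer-controlled and applied to every conversation; the policy
# text here is illustrative, not a real deployment's wording.
from openai import OpenAI

client = OpenAI()

SAFEGUARDING_PROMPT = (
    "You are a study assistant for school-age children. Keep answers "
    "age-appropriate, never ask for personal details, and if a message "
    "suggests the child may be at risk, reply with the agreed "
    "signposting text instead of answering the question."
)

def safe_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Developer-set behaviour, invisible to the end user:
            {"role": "system", "content": SAFEGUARDING_PROMPT},
            # Whatever the user actually typed:
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```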
Yes! There are always loads of syntax issues with ChatGPT. I suppose it’s handy if you know what you’re doing and can edit the output well, but like… most people using it for coding are not at that level. The new version of Claude that came out last week(ish?) is meant to be great with code tho!
March 3, 2025 at 10:10 PM
Wow that is bad, was that a real example? What LLM was it? Tbh obvious wrong answers are one thing but the subtle hallucinations are so much more problematic. Especially when kids are using them to study (which is what I’m working on right now).
March 3, 2025 at 9:52 PM
Me, the prompt engineer, pleading with it to be reasonable.
March 3, 2025 at 12:55 PM