6wredmage.bsky.social
@6wredmage.bsky.social
What I've read to combat this is a retrieval system (RAG), where instead of relying on its own knowledge, the LLM consults the PDF version of the DSM and pulls the answer from there (rough sketch of the idea below). Hope this helps?
www.intel.com/content/www/...
How to Implement Retrieval-Augmented Generation (RAG) – Intel
Learn how to build a RAG pipeline to deploy customized LLM applications in less time.
www.intel.com
February 18, 2025 at 4:49 AM
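A minimal Python sketch of that retrieval idea, assuming the DSM (or any reference document) has already been extracted to plain text. ask_llm is a hypothetical stand-in for whatever model you run, and the word-overlap scoring is a placeholder where a real pipeline (like the Intel tutorial's) would use embeddings and a vector index.

# Minimal RAG sketch: pull the most relevant chunks from a local document,
# then have the model answer only from those chunks instead of its own memory.
# "ask_llm" is a hypothetical hook; the scoring is deliberately crude.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your local or hosted model")

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split the document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def score(chunk_text: str, question: str) -> int:
    """Count question words that also appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk_text.lower().split() if w in q_words)

def answer_with_rag(question: str, document_text: str, top_k: int = 3) -> str:
    """Retrieve the best-matching chunks, then answer only from them."""
    best = sorted(chunk(document_text), key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

Usage would look something like answer_with_rag("What are the criteria for X?", dsm_text) once the document text is loaded and ask_llm points at a real model.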
So I'm learning about this, and my current understanding is that it depends on how the LLM was trained. If you just run the LLM on its own, there's a high chance of hallucinations. If the LLM wasn't trained on the DSM or other medical and scientific research, it very well could hallucinate.
February 18, 2025 at 4:49 AM
So I really want to put this into my 5c artificer energy deck, but I'm not sure how to curve into it.
February 8, 2025 at 7:58 PM
TEAM SAHEELI all the way and forevermore!
February 7, 2025 at 5:07 AM
I can't say that with a straight face as I look at the table of cards from three TCGs.
February 2, 2025 at 1:34 AM
So I have a genuine question: if I curate the data myself, with a feedback loop to check for hallucinations, does that still count as part of the "regurgitation engine"? I'm thinking of building a home-lab AI for my own data (rough sketch of the loop below). Tangent, I know, but I'd like to know how to use AI for the better.
February 2, 2025 at 1:32 AM
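One way to read that "feedback loop" idea, as a rough Python sketch: draft an answer from the curated sources, have the model check every claim against those sources, and retry with the critique if anything is unsupported. ask_llm and retrieve are hypothetical hooks into a home-lab stack, not a specific library.

# Self-checking loop over curated data: draft, verify against sources, retry.
# Both helpers below are placeholders for your own model and document index.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("your model call goes here")

def retrieve(question: str) -> str:
    raise NotImplementedError("your curated-document lookup goes here")

def verified_answer(question: str, max_rounds: int = 3) -> str:
    sources = retrieve(question)
    critique = ""
    draft = ""
    for _ in range(max_rounds):
        draft = ask_llm(
            f"Sources:\n{sources}\n\nQuestion: {question}\n"
            f"Fix these issues from the last attempt, if any: {critique}"
        )
        critique = ask_llm(
            "Does every claim in the answer below appear in the sources? "
            "Reply OK if yes, otherwise list the unsupported claims.\n\n"
            f"Sources:\n{sources}\n\nAnswer:\n{draft}"
        )
        if critique.strip().upper().startswith("OK"):
            return draft
    return draft + "\n\n[could not verify against sources after retries]"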
This is a dope collection! I have lots of FFTCG to trade if you like. I got waaaay in over my head last summer 😅
January 25, 2025 at 1:11 AM
Rebel Bluesky when? You've already got a Twitch account, why not socials lol
December 27, 2024 at 12:27 AM
This is amazing lmao
October 23, 2024 at 11:11 PM
It's safe here. We like it here
October 23, 2024 at 11:56 AM