cyrilzakka.github.io
- fewer low-hanging proof-of-concept studies (yes, it works for your specialty/organ/classification, too)
- fewer head-to-head model comparisons (yes, llama 3.1 was better than llama 2 for your use case, but peer review took 8 months and now there's llama 4)
- instead: RCTs and clinically relevant outcomes
1) Open a file in a supported app, summon HFChat, and have it pre-populate the context window. No more copy-pasting. /cc @hf.co