cyrilzakka.github.io
If there’s anything you need help with on that side, please feel free to reach out!
huggingface.co/blog/KeighBe...
www.youtube.com/watch?v=YtIQ...
huggingface.co/blog/ethics-...
Our analyses found:
- There's a spectrum of "agent"-ness
- *Safety* is a key issue, leading to many other value-based concerns
Read for details & what to do next!
We’re looking for folks able to share 5+ de-identified medical cases for general AI benchmarking across all specialties. Contributors will be credited.
Retweets appreciated!
Please DM or reach out at firstName.lastName@huggingface.co
The goal here was to help patients gain a deeper understanding of their own records, with the eventual aim of enabling continuous health monitoring and personalized health suggestions.
- fewer low-hanging proof-of-concept studies (yes, it works for your specialty/organ/classification, too)
- fewer head-to-head model comparisons (yes, Llama 3.1 beat Llama 2 for your use case, but peer review took 8 months and now there’s Llama 4)
- instead: RCTs and clinically relevant outcomes
Hopefully we’ll be seeing more healthcare workers involved in the development, deployment, and evaluation of these systems.
1) Open a file in a supported app, summon HFChat, and it pre-populates the context window. No more copy-pasting. /cc @hf.co
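For the curious, here’s a minimal Python sketch of what "pre-populating the context window" amounts to: grab the contents of the file you have open and send them along with your question. The file path and model name are illustrative placeholders, not the app’s actual implementation; the app pulls the document from the frontmost window with no script needed.

```python
# Minimal sketch of file-based context pre-population (not HFChat's actual code).
# Assumptions: the model name and file path are illustrative placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed model

def ask_about_file(path: str, question: str) -> str:
    # Read the currently open document and place it in the system context,
    # so the question is answered against that file.
    context = open(path, encoding="utf-8").read()
    messages = [
        {"role": "system", "content": f"The user has this file open:\n\n{context}"},
        {"role": "user", "content": question},
    ]
    return client.chat_completion(messages, max_tokens=256).choices[0].message.content

print(ask_about_file("notes.md", "Summarize the key points of this document."))
```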
Here’s an open, free, at-your-own-pace course:
🔗 About www.nasa.gov/using-ai-ml-...
🔗 Register: canvas.instructure.com/enroll/8JYKD7
Cheers to NASA for the Transform to Open Science Training grant that funded this work 🎉🤗
- Support for document/media upload
- System-wide, on-device dictation (works with every EHR by default) with injection into ANY text field (see the sketch after this list)
- Context injection based on currently active app
If you squint, you can guess where medical AI is headed 🤗
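A minimal Python sketch of the dictation piece, assuming a local Whisper checkpoint for speech-to-text and `pyautogui` for keystroke injection; both are illustrative stand-ins, not the app’s actual implementation:

```python
# Minimal sketch of system-wide, on-device dictation: transcribe audio locally,
# then "type" the transcript into whichever text field currently has focus.
# Assumptions: the Whisper checkpoint and pyautogui are illustrative choices.
from transformers import pipeline
import pyautogui

# Speech-to-text runs fully on-device; no audio leaves the machine.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")

def dictate(audio_path: str) -> None:
    transcript = asr(audio_path)["text"]
    # Injection into ANY text field: emit the transcript as keystrokes, so it
    # works in an EHR, a browser, or any other focused app.
    pyautogui.write(transcript, interval=0.01)

dictate("dictation_sample.wav")
```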
The same 99% will happen here too, but if AI researchers keep getting perma-banned for releasing the datasets needed to filter it, this platform will become unusable.
1/2
Soon, it'll be "on-chip" LLM. Or LLM cores. The system default local LLM. The coding framework's default local LLM.
I find this incredibly exciting. A privacy-first, self-contained, user-owned AI—a 24/7 agent for action, insights & feedback.
github.com/huggingface/...
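To make that concrete, here’s a minimal sketch of a fully local setup with `transformers`; the checkpoint name is an assumption, and any small instruction-tuned model works:

```python
# Minimal sketch of a user-owned, on-device assistant: weights are downloaded
# once, and all generation happens locally. The checkpoint is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed small local model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize yesterday's sleep and heart-rate trends."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```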
Link: github.com/cyrilzakka/EMG
Great initiative so that everyone can build their own open-source AI health tracker for $12: huggingface.co/blog/cyrilza...
huggingface.co/blog/cyrilza...
Using an $11 smart ring, I'll show you how to build your own private health monitoring app. From basic metrics to advanced features like:
- Activity tracking
- HR monitoring
- Sleep analysis
and more!
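As a taste of the HR piece, here’s a minimal Python sketch using `bleak`, assuming the ring exposes the standard Bluetooth Heart Rate service; many cheap rings speak a vendor-specific protocol instead, in which case the UUID and payload parsing will differ:

```python
# Minimal sketch of live heart-rate streaming from a BLE ring with `bleak`.
# Assumption: the ring exposes the standard GATT Heart Rate service; many cheap
# rings use a vendor-specific protocol, so the UUID and parsing may differ.
import asyncio
from bleak import BleakClient, BleakScanner

HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"  # standard HR characteristic

def on_heart_rate(_, data: bytearray) -> None:
    # GATT format: flags byte first, then heart rate as uint8 or uint16.
    hr = data[1] if data[0] & 0x01 == 0 else int.from_bytes(data[1:3], "little")
    print(f"Heart rate: {hr} bpm")

async def main(name_fragment: str = "Ring") -> None:  # assumed advertised name
    devices = await BleakScanner.discover()
    ring = next(d for d in devices if d.name and name_fragment in d.name)
    async with BleakClient(ring.address) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, on_heart_rate)
        await asyncio.sleep(30)  # stream notifications for 30 seconds
        await client.stop_notify(HR_MEASUREMENT_UUID)

asyncio.run(main())
```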
If there’s anything you need help with on that side, please feel free to reach out!