Try it out by running some of our examples with the `--features metal` flag.
#Candle #RustLang #macOS #Metal #HuggingFace
#tensors #machine-learning #ml #ai
Take a look here:
huggingface.co/blog/KeighBe...
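For instance, from a clone of the Candle repo (a minimal sketch; it assumes the `whisper` example in candle-examples, but any other example works the same way):
```
cargo run --example whisper --release --features metal
```
The `--features metal` flag only switches the build to the Metal backend; each example's own arguments stay the same.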
Run the Qwen3 4B model locally with:
```
cargo run --example qwen --release -- --model 3-4b --prompt 'The capital of France is '
```
On macOS, enable Metal for faster inference:
```
--features metal
```
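Putting the two together (a sketch, run from the root of the Candle repository; note the feature flag goes to cargo, before the `--` that separates the example's own arguments):
```
cargo run --example qwen --release --features metal -- --model 3-4b --prompt 'The capital of France is '
```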
Clone the repo and test it out. github.com/huggingface/...
- LLMs are 3x less likely to ask for clarification than humans
- 16x less likely to make follow-up requests
- Early failures predict later breakdowns
- Includes preliminary intervention strategies
huggingface.co/datasets/mic...
Day-zero support for multiple frameworks, including transformers, MLX, llama.cpp, and more! 💼 🚀
Read more here:
huggingface.co/blog/gemma3
huggingface.co/spaces/libra...
youtu.be/0eMzc-WnBfQ?...
In which we attempt to figure out MoE, o1, scaling, tech reporting, modern semiconductors, microeconomics, and international geopolitics.
Hongzhi Huang, Defa Zhu, Banggu Wu, Yutao Zeng, Ya Wang, Qiyang Min, Xun Zhou
tl;dr: increasing the input vocabulary is always good; increasing the output vocabulary is good for bigger models.
arxiv.org/abs/2501.16975
For the next month, we invite all members of the AI community to participate in one of our 3 AI for Climate tasks, with the goal of developing a highly accurate model while consuming as little energy as possible ⚡
1) Open a file in a supported app, summon HFChat, and it pre-populates the context window. No more copy-pasting. /cc @hf.co
On the On-Device team at Hugging Face, we've been profiling energy usage for CoreML models. Here’s some data I collected: