Download: https://tiles.run
Read blog: https://www.blog.tiles.run/
Join our community: https://go.tiles.run/discord
Exo Labs latest work: blog.exolabs.net/nvidia-dgx-s...
www.ushareit.com/m/
www.youtube.com/watch?v=PQqu...
www.project-syndicate.org/commentary/a...
It’s compatible with Ollama Modelfile syntax, and in the beta builds we will use @hf.co Xet Core for model-layer deduplication.
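For context, Ollama’s Modelfile format (the syntax the post says is supported) looks roughly like this; the model name and parameter values below are illustrative, not taken from the post:

```
FROM llama3.2:1b
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."
```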
www.inkandswitch.com/universal-ve...
- deepseek.ai/blog/deepsee...
- deepmind.google/models/gemin...
Memory layers are definitely worth studying given their parameter efficiency and applicability to smaller models; the authors report promising results with a 1.3B model paired with a 1B-parameter memory pool.
jessylin.com/2025/10/20/c...
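The core idea behind a memory layer can be sketched as a sparse key-value lookup: a large pool of trainable slots stands in for an FFN block, but each token only touches its top-k nearest keys, so most of the pool’s parameters are inactive per token. This toy version is my own illustration, not code from the linked post or paper; all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_slots, k = 64, 4096, 8               # embedding dim, memory slots, top-k

keys = rng.standard_normal((n_slots, d))   # trainable keys in a real model
values = rng.standard_normal((n_slots, d)) # trainable values in a real model

def memory_lookup(query):
    scores = keys @ query                   # similarity to every key
    top = np.argpartition(scores, -k)[-k:]  # indices of the k best slots
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                            # softmax over the k winners only
    return w @ values[top]                  # weighted sum of k value rows

out = memory_lookup(rng.standard_normal(d))
print(out.shape)  # (64,)
```

Because only k of the n_slots rows participate in each lookup, the pool can grow large without a matching growth in per-token compute, which is what makes the 1.3B-model-plus-1B-memory setup plausible.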
github.com/mzau/mlx-knife
Thanks to Tom Dörr for highlighting this project.
It’s the walls around our data.
A thread on walled gardens, economic incentives, and why even the best models can’t help us if they can’t see us. 🧵