LM Studio
@lmstudio-ai.bsky.social
Download and run local LLMs on your computer 👾 http://lmstudio.ai
👀
March 13, 2025 at 6:36 PM
💯! @mcuban.bsky.social you should try it out :-)
February 18, 2025 at 12:32 AM
🔥
January 7, 2025 at 9:01 PM
Thanks. A few more data points will help:

1. Which quantization of the model are you using in each case?
2. When you load the model in LM Studio, what % of GPU offload do you set?

It might be easier to go back and forth in a github issue: github.com/lmstudio-ai/...

Thanks!
January 7, 2025 at 7:10 PM
Should be as fast or faster. Pls share more details and we’ll help debug
January 7, 2025 at 6:20 PM
Try a very small model like llama 3.2 1B lmstudio.ai/model/llama-...
November 30, 2024 at 10:12 PM
Very cool! 🦾 Did you try using local LLM calls as part of the numerical / analytical process itself, or for R code gen?
November 25, 2024 at 2:43 PM
We’re hoping for as much feedback as possible from devs who have been using the OpenAI Tool Use API 🙏
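For context on the feedback request above: LM Studio exposes an OpenAI-compatible local HTTP server, so tool use follows the OpenAI Chat Completions request shape. The sketch below just builds such a request payload offline; the base URL (`http://localhost:1234/v1` is LM Studio's default, but the port is configurable), the `"local-model"` name, and the `get_weather` tool are illustrative assumptions, not a definitive client.

```python
import json

# A tool-use request in the OpenAI Chat Completions schema: messages plus a
# "tools" list of function descriptions. LM Studio's local server accepts
# this shape at its /v1/chat/completions endpoint (base URL assumed below).
payload = {
    "model": "local-model",  # placeholder; the server routes to the loaded model
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration only
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize for POSTing to <base_url>/chat/completions with any HTTP client.
body = json.dumps(payload)
print(body[:20])
```

If the model decides to call the tool, the response carries a `tool_calls` entry whose arguments you execute locally, then append as a `"tool"` role message for the follow-up turn.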
November 23, 2024 at 4:04 PM