I'm so here for it. 4/4
1. Home server: Gemma3:27b (fits nicely on my RTX 3090)
2. Laptop (Framework 13): Qwen3-30B-A3B (runs shockingly fast on CPU only, and fits comfortably in <32GB of RAM. Also has a think/no_think option)
3. Phone (OnePlus Nord N30): qwen3-1.7b (not super smart, but runs OK on a phone!) 3/
- Offline LLM support, so I can still do good work with AI without internet.
- Secure: if I accidentally paste an API key or secret into the AI, it stays on my device and doesn't get sent to a third party.
- Open source is good for everyone 2/