2x bigger than GLM-4.6
Train gpt-oss locally on 12.8GB VRAM.
In collab with @hf.co, Unsloth trains DeepSeek, Qwen3, GLM faster.
Repo: github.com/unslothai/un...
Blog: unsloth.ai/docs/new/fas...
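A minimal QLoRA-style sketch of what that looks like, assuming Unsloth's FastLanguageModel API; the checkpoint id is a placeholder, so check the repo/blog above for the exact gpt-oss name and supported settings:

```python
from unsloth import FastLanguageModel

# Model id is an assumption for illustration; see the Unsloth repo/blog
# linked above for the actual gpt-oss checkpoint name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # hypothetical id
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization is what keeps VRAM in the ~12.8GB range
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, pass `model` and `tokenizer` to e.g. trl's SFTTrainer as usual.
```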
Obviously don't go asking them about setting up Wi-Fi on OpenBSD, but for basic stuff they're extremely capable for 1.2 GB of data.
verdict: "they're charging you HOW little for this?"
GLM-5, Ming-flash-omni from Ant Group, MiniCPM-SALA from OpenBMB, and the upcoming MiniMax M2.5 keep the heat on 🔥
Spring Festival is around the corner, no one’s sleeping!
It is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.
https://z.ai/blog/glm-5
https://news.ycombinator.com/item?id=46977210
And there are still "experts" who believe that a superintelligence (AGI) is achievable on the basis of current AI systems (LLMs).
LLMs will always hallucinate. That is inherent to the system.
LLMs are powerful linguistic heuristic machines. Nothing more.
Not that anyone here can really run GLM-5 locally anyway.
Hell, not even Claude Max; I mostly use GLM-4.7.
huggingface.co/zai-org/GLM-...
✨ 0.9B
✨ MIT licensed
✨ Multimodal GLM-V architecture
✨ #1 on OmniDocBench v1.5 (94.62)
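The Hugging Face link above is truncated, so the checkpoint id below is a placeholder; still, a small GLM-V-style multimodal model on the Hub should load through the standard transformers image-text-to-text path, roughly like this sketch:

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

# Hypothetical id -- substitute the real checkpoint from the zai-org Hub page.
model_id = "zai-org/GLM-OCR-placeholder"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Ask the model to transcribe a document page.
messages = [{"role": "user", "content": [
    {"type": "image", "url": "page.png"},
    {"type": "text", "text": "Transcribe this document."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(out[0], skip_special_tokens=True))
```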
Details ao3.org/works/76528861/chapters/208145256
Thoughts: There are many ways to skin a variable...
#rstats #regression #modelling #tutorial #r #glm #ols #coding
thomvolker.github.io/blog/2506_re...
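The linked tutorial is in R; as a quick language-agnostic sketch of the same point (OLS is just a GLM with a Gaussian family and identity link), here it is in Python with statsmodels:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: y = 1 + 2x + noise
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(100, 1)))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)

# Two ways to skin the same variable:
ols = sm.OLS(y, X).fit()
glm = sm.GLM(y, X, family=sm.families.Gaussian()).fit()  # identity link by default

# The coefficient estimates agree.
print(ols.params)
print(glm.params)
```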