The Hybrid Group
@hybridgroup.com
We're the software company that makes your hardware work.
https://hybridgroup.com
Pinned
yzma 1.1.0 has just been released.

Tool parsing, streamlined installation, and more download customization. What are you waiting for? Go get it right now.

github.com/hybridgroup/...

#golang #llama #local
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
github.com
Reposted by The Hybrid Group
v1.9.1 of Kronk #golang

What's Changed:
* Model catalog system for easy model access
* Jinja template support integrated in catalog system
* Auth service w/ rate limiting
* Tooling to create JWTs for endpoint/rate limiting
* SDK improvements
December 24, 2025 at 4:16 PM
Reposted by The Hybrid Group
TinyGo Bluetooth 0.14 is out just in time for your holiday hacking.

Runs on Linux, macOS, and Windows.
Runs baremetal on @nordicsemi.com or using HCI interface.

Go get it right now!

github.com/tinygo-org/b...

#golang #tinygo #bluetooth
Release 0.14.0 · tinygo-org/bluetooth
core mac: Add MAC accessor for returning the MAC in the usual format nordic semi sd: send the correct response to BLE_GAP_EVT_PHY_UPDATE_REQUEST hci fix: HCI should not read data past the en...
github.com
December 24, 2025 at 11:12 AM
Reposted by The Hybrid Group
Kronk v1.5.3 is the best version yet. #golang

- OpenAI-compatible model server supporting chat/completions, embeddings, images, audio
- Go API providing full access to local models
- System management via CLI and BUI
- Token-based security system

github.com/ardanlabs/kr...
GitHub - ardanlabs/kronk: This project lets you use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications via the yzma module. Kronk provides a high-lev...
This project lets you use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications via the yzma module. Kronk provides a high-level API that feels simil...
github.com
December 19, 2025 at 10:50 PM
Reposted by The Hybrid Group
TinyGo 0.40.1 is out with some critical fixes and improvements that just could not wait. Thank you very much to our global team of humans who joined together so quickly to get this point release done for all of us!

github.com/tinygo-org/t...

#golang #tinygo
Release 0.40.1 · tinygo-org/tinygo
An important point release with some critical fixes and improvements that just could not wait. Thank you very much to our global team of humans who joined together so quickly to get this out for al...
github.com
December 19, 2025 at 12:34 PM
Reposted by The Hybrid Group
So @netlify.com has a bug in their billing system. Every month for most of the last year, I have had to re-enter the same info and file a support request.
This time, however, all the sites like @tinygo.org are down. So I am pretty unhappy right about now...
December 13, 2025 at 9:42 AM
Reposted by The Hybrid Group
Epsilon is a pure Go WebAssembly runtime with zero dependencies.

This is very exciting.

github.com/ziggy42/epsi...

#golang #wasm #tinygo
GitHub - ziggy42/epsilon: A WASM virtual machine written in Go with 0 dependencies
github.com
December 9, 2025 at 1:43 PM
yzma 1.1.0 has just been released.

Tool parsing, streamlined installation, and more download customization. What are you waiting for? Go get it right now.

github.com/hybridgroup/...

#golang #llama #local
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
github.com
December 6, 2025 at 8:01 PM
yzma 1.0 is out!

Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.

- CLI tool to install libraries and download models
- Linux, macOS, and Windows support
- Supports the very latest llama.cpp
- No CGo required

github.com/hybridgroup/...
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
github.com
December 4, 2025 at 6:11 PM
yzma 1.0 beta 3 is out!

Write Go applications that directly integrate llama.cpp for hardware accelerated local inference.

- new installation CLI
- Jinja chat templates
- new llama.cpp download binary formats
- more benchmarks

Go get it right now!

github.com/hybridgroup/...

#golang #llama
Release 1.0.0-beta.3 · hybridgroup/yzma
What's Changed feature: use runtime.GOARCH for download to support arm64 on Linux/Windows by @deadprogram in #111 feature: add download location for llama.cpp prebuilt binaries for Linux arm64 Vul...
github.com
December 2, 2025 at 2:30 PM
Reposted by The Hybrid Group
🚀 Just published: Getting started with Kronk
Updated with Kronk v0.25.0
new URL:
k33g.hashnode.dev/baby-steps-w...
December 1, 2025 at 6:29 AM
Reposted by The Hybrid Group
🎉 Yzma now works on NVIDIA Jetson Orin Nano!

Today, thanks to Ron Evans (@deadprogram.com), I'm running local LLM models with hardware acceleration directly on the Jetson!

Your first Yzma program on Jetson: k33g.hashnode.dev/installing-a...

My weekend is going to be busy 🤓
k33g.hashnode.dev
November 28, 2025 at 8:48 AM
We're moving at the speed of thought, so yzma v1.0 beta2 is out!

Better, faster, and more benchmarks to show it too.

Run local models using Go with your CPU, CUDA, or Vulkan.

You know what to do!

github.com/hybridgroup/...

#golang #llama #llamacpp
GitHub - hybridgroup/yzma: Go for hardware accelerated local inference with llama.cpp directly integrated into your applications
github.com
November 24, 2025 at 2:51 PM
yzma 1.0 beta1 is out!

Use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications. No external model servers or CGo.

Go get it right now!

github.com/hybridgroup/...

#golang #llama #vlm #llm #local #gpu
GitHub - hybridgroup/yzma: Go for hardware accelerated local inference with llama.cpp directly integrated into your applications
github.com
November 20, 2025 at 9:14 PM
Reposted by The Hybrid Group
Thanks to @deadprogram.com and his Yzma project, you don't need to deploy model servers anymore, you can run GGUF models directly in your #golang code.

I have cool examples, including a full RAG app using DuckDB. I will have more complex examples soon.

github.com/ardanlabs/ai...
ai-training/cmd/examples/example13 at main · ardanlabs/ai-training
Provide examples for Go developers to use AI in their products - ardanlabs/ai-training
github.com
November 16, 2025 at 3:36 PM
Reposted by The Hybrid Group
Preparing the zero-kb02 boards for the TinyGo Keeb Tour (a soldering + software workshop) being held at #BuriKaigi 2026. Everyone, come solder with us! Right now there is only TinyGo firmware, but we are also looking for people to write firmware with Vial, zmk, or prk.
#tinygo_keeb
November 14, 2025 at 12:21 AM
Reposted by The Hybrid Group
"Captions With Attitude" in your browser from your webcam generated by a Vision Language Model (VLM) from a Go program running entirely on your local machine using llama.cpp!

github.com/hybridgroup/...

#golang #vlm #openCV #llama #yzma
November 11, 2025 at 8:24 PM
Life comes at you fast, and so do new releases of yzma!

Use pure Go for hardware accelerated local inference on Vision Language Models & Tiny Language Models.

0.9.0 out now with API improvements, model downloading, & more.

github.com/hybridgroup/...

#golang #llama #vlm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
github.com
November 7, 2025 at 4:59 PM
Reposted by The Hybrid Group
Ult. Software Design LIVE Schedule

Join @goinggo.net & @kenriquezcodes.bsky.social in this week's streams:
Tue. 11/4 & Thu.11/6 from 11am - 1pm EST

Tomorrow we'll have a special guest: @deadprogram.com

Stay tuned 1hr before the LIVE show for the stream link!😎
📽️Last episodes here: bit.ly/3CShDOS
November 3, 2025 at 8:31 PM
yzma 0.8.0 is out, now with over 87% coverage of the llama.cpp API from pure Go! More robust, more examples.

Go get it right now!

github.com/hybridgroup/...

#golang #llamacpp #vlm #slm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
github.com
November 3, 2025 at 10:38 AM
Reposted by The Hybrid Group
On October 12th, the first-ever TinyGo Conf happened in Tokyo, Japan. Report from team member Daniel Esteban aka "Conejo" tells all!

#tinygo #tinygoconf #golang #japan

madriguera.me/tinygo-conf-...
TinyGo Conf 2025 JAPAN
On October 12th, the first-ever TinyGo Conf happened in Tokyo, Japan. I planned to write about it sooner, but I did so many things during my trip that I didn't have the time nor strength to do it unti...
madriguera.me
October 28, 2025 at 6:04 PM
Reposted by The Hybrid Group
"That Machine Always Lies: Truth and Fiction in the Age of Artificial Intelligence"

thatmachinealwayslies.com
That Machine Always Lies
Truth and Fiction in the Age of Artificial Intelligence
thatmachinealwayslies.com
October 21, 2025 at 9:59 AM
Reposted by The Hybrid Group
I'm so excited about the YZMA project from @deadprogram.com. I've taken 3 of his examples and cleaned them up. Next step is to build a mini version of the Ollama service to show the real power of YZMA. #golang

github.com/ardanlabs/ai...
ai-training/cmd/examples/example13 at main · ardanlabs/ai-training
Provide examples for Go developers to use AI in their products - ardanlabs/ai-training
github.com
October 20, 2025 at 6:23 PM