The Hybrid Group
hybridgroup.com
@hybridgroup.com
We're the software company that makes your hardware work.

https://hybridgroup.com
Reposted by The Hybrid Group
Updated "Captions With Attitude" to use the new yzma release, and now it runs without CGo, using just a webcam, a browser, and a pure Go server.

Have a fun weekend!

github.com/hybridgroup/...

#golang #llama #vlm #yzma
GitHub - hybridgroup/captions-with-attitude: Display Captions With Attitude in your browser from your webcam generated by a Vision Language Model (VLM) from a Go program running entirely on your local...
Display Captions With Attitude in your browser from your webcam generated by a Vision Language Model (VLM) from a Go program running entirely on your local machine using llama.cpp - hybridgroup/cap...
github.com
February 13, 2026 at 7:59 PM
Just released yzma 1.8 with what Go coders need:
- latest llama.cpp features/models such as ModelFitParams
- @raspberrypi.com & #nvidia Jetson Orin quick installs
- more benchmarks

go get it right now!

github.com/hybridgroup/...

#golang #ml #llama #llamacpp #cuda #vulkan #raspberrypi #nvidia
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
February 13, 2026 at 5:26 PM
Reposted by The Hybrid Group
yzma 1.7 is out! It supports the very latest llama.cpp features and models, with hardware acceleration, all from Go without needing CGo.

You should go get it right now!

github.com/hybridgroup/...

#golang #llama #ml #vlm #inference #cuda #vulkan
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
February 5, 2026 at 6:17 PM
Reposted by The Hybrid Group
"Systems Programming: Lessons from Building a Networking Stack for Microcontrollers" - Patricio Whittingslow

Video from @fosdem.org @gophers.love

cuddly.tube/w/4dqznedAvq...

#golang #tinygo #fosdem
Systems Programming: Lessons from Building a Networking Stack for Microcontrollers
Developing Go for microcontrollers with 32kB of RAM requires a big shift in thinking, more so if you are trying to get a complete networking stack with Ethernet, TCP/IP, and HTTP to run on said device. O...
cuddly.tube
February 5, 2026 at 4:12 PM
Reposted by The Hybrid Group
Now hearing from @danicat93.bsky.social about "Making of GoDoctor: an MCP server for Go development" here at @gophers.love @fosdem.org

#golang #fosdem
February 1, 2026 at 1:15 PM
Reposted by The Hybrid Group
Here at @gophers.love @fosdem.org at last, and it's a nice crowd!

#fosdem #golang #tinygo
February 1, 2026 at 12:43 PM
Reposted by The Hybrid Group
January 29, 2026 at 8:03 AM
We just released yzma 1.6 in time for @fosdem.org

Use llama.cpp for high-performance hardware accelerated local inference!

github.com/hybridgroup/...

#golang #llama #local #cuda #vulkan
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
January 28, 2026 at 4:43 PM
yzma 1.5 has been released with many small fixes and improvements for your development wants and needs.

Go get it now!

github.com/hybridgroup/...

#golang #llama #cuda #vulkan #metal
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
January 12, 2026 at 2:04 PM
Reposted by The Hybrid Group
GoCV 0.43 has just been released!

- support for the new @opencv.bsky.social 4.13 release
- CUDA 13
- Updated Windows installation

github.com/hybridgroup/...

#golang #opencv
Release 0.43.0 · hybridgroup/gocv
all: update to OpenCV 4.13; core: add Copy method to Mat (#1346), improve implementation for NewPointVectorFromPoints; cuda: add implementations for more arith functions; imgproc: added in missing...
github.com
January 5, 2026 at 9:25 PM
yzma 1.4.1 has just been released for compatibility with llama.cpp b7628+

Available now!

github.com/hybridgroup/...

#golang #llama #llamacpp #llm #vlm #tlm #slm #vla
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
January 5, 2026 at 9:23 AM
Reposted by The Hybrid Group
Yesterday we had our last TinyGo monthly meeting for 2025 and it was one of our most attended so far! Thank you very much to all the humans who help make this project Go. And just wait until you all see what we have in mind for next year!
#tinygo #golang
December 30, 2025 at 12:27 PM
yzma 1.4 is out, the last release of the year. It adds support for split models, plus a few new features that just landed in llama.cpp. Enjoy high-performance local inference from Go!

github.com/hybridgroup/...

#golang #llamacpp
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
December 29, 2025 at 6:46 PM
Reposted by The Hybrid Group
v1.9.1 of Kronk #golang

What's Changed:
* Model catalog system for easy model access
* Jinja template support integrated in catalog system
* Auth service w/ rate limiting
* Tooling to create JWTs for endpoint/rate limiting
* SDK improvements
December 24, 2025 at 4:16 PM
Reposted by The Hybrid Group
TinyGo Bluetooth 0.14 is out just in time for your holiday hacking.

Runs on Linux, macOS, and Windows.
Runs bare metal on @nordicsemi.com hardware or using an HCI interface.

Go get it right now!

github.com/tinygo-org/b...

#golang #tinygo #bluetooth
Release 0.14.0 · tinygo-org/bluetooth
core: mac: Add MAC accessor for returning the MAC in the usual format; nordic semi: sd: send the correct response to BLE_GAP_EVT_PHY_UPDATE_REQUEST; hci: fix: HCI should not read data past the en...
github.com
December 24, 2025 at 11:12 AM
Reposted by The Hybrid Group
Kronk v1.5.3 is the best version yet. #golang

- OpenAI-compatible model server supporting chat/completions, embeddings, images, and audio
- Go API providing full access to local models
- System management via CLI and BUI
- Token-based security system

github.com/ardanlabs/kr...
GitHub - ardanlabs/kronk: This project lets you use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications via the yzma module. Kronk provides a high-lev...
This project lets you use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications via the yzma module. Kronk provides a high-level API that feels simil...
github.com
December 19, 2025 at 10:50 PM
Reposted by The Hybrid Group
TinyGo 0.40.1 is out with some critical fixes and improvements that just could not wait. Thank you very much to our global team of humans who joined together so quickly to get this point release done for all of us!

github.com/tinygo-org/t...

#golang #tinygo
Release 0.40.1 · tinygo-org/tinygo
An important point release with some critical fixes and improvements that just could not wait. Thank you very much to our global team of humans who joined together so quickly to get this out for al...
github.com
December 19, 2025 at 12:34 PM
Reposted by The Hybrid Group
So @netlify.com has a bug in their billing system. Every month for most of the last year, I have had to re-enter the same info and file a support request.
This time, however, all the sites like @tinygo.org are down. So I am pretty unhappy right about now...
December 13, 2025 at 9:42 AM
Reposted by The Hybrid Group
Epsilon is a pure Go WebAssembly runtime with zero dependencies.

This is very exciting.

github.com/ziggy42/epsi...

#golang #wasm #tinygo
GitHub - ziggy42/epsilon: A WASM virtual machine written in Go with 0 dependencies
A WASM virtual machine written in Go with 0 dependencies - ziggy42/epsilon
github.com
December 9, 2025 at 1:43 PM
yzma 1.1.0 has just been released.

Tool parsing, streamlined installation, and more download customization. What are you waiting for? Go get it right now.

github.com/hybridgroup/...

#golang #llama #local
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
December 6, 2025 at 8:01 PM
yzma 1.0 is out!

Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.

- CLI tool to install libraries and download models
- Linux, macOS, and Windows support
- Supports the very latest llama.cpp
- No CGo required

github.com/hybridgroup/...
GitHub - hybridgroup/yzma: Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration. - hybridgroup/yzma
github.com
December 4, 2025 at 6:11 PM
yzma 1.0 beta 3 is out!

Write Go applications that directly integrate llama.cpp for hardware accelerated local inference.

- new installation CLI
- Jinja chat templates
- new llama.cpp download binary formats
- more benchmarks

Go get it right now!

github.com/hybridgroup/...

#golang #llama
Release 1.0.0-beta.3 · hybridgroup/yzma
What's Changed: feature: use runtime.GOARCH for download to support arm64 on Linux/Windows by @deadprogram in #111; feature: add download location for llama.cpp prebuilt binaries for Linux arm64 Vul...
github.com
December 2, 2025 at 2:30 PM
Reposted by The Hybrid Group
🚀 Just published: Getting started with Kronk
Updated with Kronk v0.25.0
new URL:
k33g.hashnode.dev/baby-steps-w...
December 1, 2025 at 6:29 AM