ArkDevLabs
@arkdevlabs.com
arkdevlabs.com
New GGUF drop! 📦 Qwen3-0.6B ready to download 👉 huggingface.co/Open4bits/Qw...
Open4bits/Qwen3-0.6b-gguf · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
January 29, 2026 at 6:46 PM
Introduction to Artificial Intelligence
New blog post is live! Learn what AI is and why it matters.

arkdevlabs.com/global/blog/...

#AI #ArtificialIntelligence #ArkDevLabs
ArkDevLabs
Building secure, scalable software, automation, and AI-driven platforms.
arkdevlabs.com
January 27, 2026 at 4:09 PM
New release!
Gemma-3-270M (GGUF) is now available for local AI workflows.

Grab it here 👇
huggingface.co/Open4bits/ge...

#LocalAI #GGUF #HuggingFace
Open4bits/gemma-3-270m-gguf · Hugging Face
January 25, 2026 at 7:20 PM
New release!
Gemma-3-270M-IT (GGUF) is now live. The IT (instruction-tuned) variant is great for chat-style prompting and local inference.

👉 huggingface.co/Open4bits/ge...

#LocalAI #GGUF #HuggingFace
Open4bits/gemma-3-270m-it-gguf · Hugging Face
January 25, 2026 at 7:20 PM
We’ve just released a new blog post on GGUF 🚀
Learn what it is, why it matters, and how it’s shaping local AI models.
👉 arkdevlabs.com/global/blog/...

#AI #GGUF #MachineLearning #ArkDevLabs
January 25, 2026 at 7:17 PM
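For readers curious about the format itself: a GGUF file begins with a small fixed little-endian header, which (per the llama.cpp GGUF spec, as I read it) is the ASCII magic `GGUF`, a `u32` version, a `u64` tensor count, and a `u64` metadata key-value count. This sketch writes and re-reads such a header with Python's `struct`; treat the layout as illustrative rather than a full parser.

```python
import struct

GGUF_MAGIC = b"GGUF"

def write_gguf_header(version: int, n_tensors: int, n_kv: int) -> bytes:
    # Magic (4 bytes) + u32 version + u64 tensor count + u64 metadata KV count,
    # all little-endian, per the GGUF header layout.
    return GGUF_MAGIC + struct.pack("<IQQ", version, n_tensors, n_kv)

def read_gguf_header(blob: bytes) -> tuple[int, int, int]:
    magic = blob[:4]
    if magic != GGUF_MAGIC:
        raise ValueError(f"not a GGUF file: magic={magic!r}")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", blob, 4)
    return version, n_tensors, n_kv

# The counts here are made up for illustration.
header = write_gguf_header(version=3, n_tensors=291, n_kv=24)
print(read_gguf_header(header))  # (3, 291, 24)
```

After this 24-byte header, a real GGUF file continues with the metadata key-value pairs and tensor descriptors, which is what makes the format self-describing for local runtimes.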
Open4bits/Granite-4.0-H-Micro-FP4 is now available on Hugging Face.

The FP4 variant packs weights into 4-bit floats, the most aggressive compression in the lineup, for highly memory-constrained inference environments.

Download:
huggingface.co/Open4bits/gr...
Open4bits/granite-4.0-h-micro-fp4 · Hugging Face
January 23, 2026 at 1:45 PM
Open4bits/Granite-4.0-H-Micro-NF4 is now available on Hugging Face.

The NF4 variant uses 4-bit NormalFloat quantization to maximize memory efficiency with minimal quality loss.

Download:
huggingface.co/Open4bits/gr...
Open4bits/granite-4.0-h-micro-nf4 · Hugging Face
January 23, 2026 at 1:45 PM
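The idea behind NF4 (from the QLoRA paper) is that trained weights are roughly normally distributed, so the 16 representable 4-bit values are placed at quantiles of a normal distribution rather than evenly spaced. The sketch below builds illustrative normal-quantile levels with the standard library's `statistics.NormalDist` and quantizes a block by absmax scaling plus nearest-level lookup. It is a toy model of the technique: the real NF4 codebook differs (it reserves an exact zero, for one), and this is not the bitsandbytes implementation.

```python
import statistics

def normal_levels(n: int = 16) -> list[float]:
    """Illustrative levels at evenly spaced normal quantiles, rescaled to [-1, 1]."""
    nd = statistics.NormalDist()
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    m = max(abs(q) for q in qs)
    return [q / m for q in qs]

LEVELS = normal_levels()

def quantize_block(weights: list[float]) -> tuple[list[int], float]:
    # Absmax scaling maps the block into [-1, 1]; each weight then snaps to
    # the index of the nearest level (4 bits per weight + one scale per block).
    scale = max(abs(w) for w in weights) or 1.0
    idx = []
    for w in weights:
        x = w / scale
        idx.append(min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - x)))
    return idx, scale

def dequantize_block(idx: list[int], scale: float) -> list[float]:
    return [LEVELS[i] * scale for i in idx]

w = [0.31, -0.07, 0.0, 1.2, -0.9]
idx, scale = quantize_block(w)
approx = dequantize_block(idx, scale)
```

Storing only a 4-bit index per weight plus one scale per block is where the memory savings come from; the non-uniform levels are what keep the quality loss small for normal-ish weight distributions.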
Open4bits/Granite-4.0-H-Micro-INT8 is now available on Hugging Face.

The INT8 variant offers a balanced trade-off between performance, memory efficiency, and inference speed.

Download:
huggingface.co/Open4bits/gr...
Open4bits/granite-4.0-h-micro-int8 · Hugging Face
January 23, 2026 at 1:44 PM
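The INT8 trade-off fits in a few lines: symmetric absmax quantization stores one float scale per tensor (or block) plus an 8-bit integer per weight, roughly halving FP16 memory, and the reconstruction error is bounded by half a quantization step. A minimal sketch of the scheme in general, not the Granite pipeline itself:

```python
def int8_quantize(weights: list[float]) -> tuple[list[int], float]:
    # Symmetric absmax scheme: map [-absmax, absmax] onto [-127, 127].
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def int8_dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 0.99]
q, scale = int8_quantize(w)
approx = int8_dequantize(q, scale)
# Reconstruction error is bounded by half a step (scale / 2).
```

Because integer matrix multiplies are fast on most hardware, this scheme tends to speed up inference as well as shrink it, which is the "balanced trade-off" the post refers to.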
Open4bits/Granite-4.0-H-Micro-FP16 is now available on Hugging Face.

The FP16 variant provides a high-fidelity baseline suitable for accurate inference and further experimentation.

Download:
huggingface.co/Open4bits/gr...
Open4bits/granite-4.0-h-micro-fp16 · Hugging Face
January 23, 2026 at 1:41 PM
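FP16 as a "high-fidelity baseline" is easy to demonstrate: IEEE 754 half precision has 10 mantissa bits, so round-tripping a weight through 16 bits perturbs it by at most about one part in 2048. Python's `struct` supports the half-precision `'e'` format, so no extra libraries are needed:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

x = 0.1234567
y = to_fp16(x)
# 10 mantissa bits => relative rounding error at most 2**-11 (about 0.05%).
assert abs(y - x) / abs(x) < 2 ** -11
```

That sub-0.1% perturbation is why an FP16 dump makes a good reference point for judging the lower-precision variants, and a good starting tensor for further quantization experiments.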
Open4bits/Granite-4.0-H-Micro-Quantized models are now available on Hugging Face.

Multiple quantized variants (FP16, FP8, INT8, NF4, FP4) are provided to support efficient inference across diverse hardware and deployment environments.

Download:
huggingface.co/Open4bits/gr...
Open4bits/granite-4.0-h-micro-quantized · Hugging Face
January 23, 2026 at 1:39 PM
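To make the variant list concrete, here is back-of-envelope weight memory at each precision. This counts bits per parameter only (activations, KV cache, and per-block quantization scales add overhead), and the 3-billion-parameter count is an illustrative assumption, not an official spec for Granite 4.0 H Micro:

```python
BITS_PER_PARAM = {"FP16": 16, "FP8": 8, "INT8": 8, "NF4": 4, "FP4": 4}

def weight_gib(n_params: int, fmt: str) -> float:
    """Approximate weight memory in GiB, ignoring scales and metadata."""
    return n_params * BITS_PER_PARAM[fmt] / 8 / 2**30

n = 3_000_000_000  # illustrative parameter count
for fmt in BITS_PER_PARAM:
    print(f"{fmt:>5}: {weight_gib(n, fmt):5.2f} GiB")
```

The pattern is the point: each halving of bits per parameter halves the weight footprint, which is what lets the 4-bit variants fit on hardware the FP16 baseline cannot.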
Open4bits/LFM2.5-1.2B-Instruct-Quantized is now available on Hugging Face.

Multiple quantized variants (FP16, FP8, INT8, NF4) are provided for efficient inference across diverse hardware.

Download:
huggingface.co/Open4bits/LF...
Open4bits/LFM2.5-1.2B-Instruct-Quantized · Hugging Face
January 21, 2026 at 5:08 PM
Open4bits/LFM2.5-1.2B-Base-Quantized is now available on Hugging Face.

Quantized variants (FP16, FP8, INT8, NF4) are provided for efficient inference and deployment.

Download:
huggingface.co/Open4bits/LF...
Open4bits/LFM2.5-1.2B-Base-Quantized · Hugging Face
January 21, 2026 at 5:07 PM