Piotr Skalski
@skalskip92.bsky.social
Open-source Lead @roboflow. VLMs. GPU poor. Dog person. Coffee addict. Dyslexic. | GH: https://github.com/SkalskiP | HF: https://huggingface.co/SkalskiP
that's all the code you need to run detection and tracking
"how to track objects with SORT tracker" notebook: colab.research.google.com/github/robof...
April 25, 2025 at 1:03 PM
it's built on top of the supervision package, letting you take advantage of all the tools we already created
April 25, 2025 at 1:03 PM
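for reference, roughly what that looks like end to end; this sketch swaps in supervision's built-in ByteTrack (the linked notebook uses the SORT tracker) and assumes an ultralytics YOLO checkpoint for detections:

```python
# a minimal sketch, not the notebook code: ultralytics YOLO for detections,
# supervision's built-in ByteTrack standing in for the SORT tracker
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumed checkpoint, swap in your own
tracker = sv.ByteTrack()
box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

def callback(frame, index):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)
    labels = [f"#{tracker_id}" for tracker_id in detections.tracker_id]
    frame = box_annotator.annotate(scene=frame.copy(), detections=detections)
    return label_annotator.annotate(scene=frame, detections=detections, labels=labels)

sv.process_video(source_path="input.mp4", target_path="output.mp4", callback=callback)
```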
object detection example project: bsky.app/profile/skal...
PaliGemma2 for object detection on custom dataset

- used google/paligemma2-3b-pt-448 checkpoint
- trained on A100 with 40GB VRAM
- 1h of training
- 0.62 mAP on the validation set

colab with complete fine-tuning code: colab.research.google.com/github/robof...
December 11, 2024 at 4:58 PM
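for context, loading the checkpoint and running a detection prompt with transformers looks roughly like this; the class list in the prompt is made up, and the full fine-tuning loop lives in the colab:

```python
# a minimal inference sketch; the fine-tuning code itself is in the linked colab
import torch
from PIL import Image
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration

MODEL_ID = "google/paligemma2-3b-pt-448"
processor = PaliGemmaProcessor.from_pretrained(MODEL_ID)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

image = Image.open("example.jpg")   # hypothetical image path
prompt = "detect car ; person"      # PaliGemma-style detection prefix; classes made up

inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = inputs.to(torch.bfloat16).to(model.device)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# the output contains <locXXXX> tokens encoding box coordinates plus class names
print(processor.decode(generation[0][input_len:], skip_special_tokens=True))
```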
image to JSON example project: bsky.app/profile/skal...
PaliGemma2 for image to JSON data extraction

- used google/paligemma2-3b-pt-336 checkpoint; I tried to make it happen with 224, but 336 performed a lot better
- trained on A100 with 40GB VRAM
- trained with LoRA

colab with complete fine-tuning code: colab.research.google.com/github/robof...
December 11, 2024 at 4:58 PM
you need to prepare your dataset in JSONL format; the dataset includes three subsets: train, test, and valid

each subset contains images and an annotations.jsonl file, where each line is a valid JSON object; each JSON object has three keys: image, prefix, and suffix
December 11, 2024 at 4:58 PM
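for illustration, a single annotations.jsonl line might look like this; the file name, prefix, and suffix below are made-up examples (the suffix holds location tokens for detection, or a JSON string for data extraction):

```python
# a sketch of the expected layout; values are made up for illustration
import json

example_line = {
    "image": "0001.jpg",                        # image file name inside the subset folder
    "prefix": "detect car ; person",            # the prompt fed to the model
    "suffix": "<loc0123><loc0456><loc0789><loc0987> car",  # the target text the model should generate
}
print(json.dumps(example_line))

# reading a subset back: one JSON object per line
with open("train/annotations.jsonl") as f:
    entries = [json.loads(line) for line in f]
```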
to limit memory (VRAM) usage during training, we can use LoRA, QLoRA, or freeze parts of the graph
December 11, 2024 at 4:58 PM
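a minimal sketch of both options with peft and transformers; the rank, alpha, and target modules below are assumptions, not the exact colab configuration:

```python
# a sketch, not the exact colab setup
from peft import LoraConfig, get_peft_model
from transformers import PaliGemmaForConditionalGeneration

model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma2-3b-pt-336")

# option 1: freeze parts of the graph, e.g. the vision tower
for param in model.vision_tower.parameters():
    param.requires_grad = False

# option 2: attach LoRA adapters so only low-rank updates are trained
lora_config = LoraConfig(
    r=8,                    # assumed rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights stays trainable
```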
fine-tuning large vision-language models like PaliGemma 2 can be resource-intensive. to put this into perspective, the largest variant of the recent YOLOv11 object detection model (YOLOv11x) has 56.9M parameters. in contrast, PaliGemma 2 models range from 3B to 28B parameters.
December 11, 2024 at 4:58 PM
PG2 offers 9 pre-trained models with sizes of 3B, 10B, and 28B parameters and resolutions of 224, 448, and 896 pixels.

to pick the right variant, you need to take into account the vision-language task you are solving, available hardware, the amount of data, and inference speed
December 11, 2024 at 4:58 PM
PG2 combines a SigLIP-So400m vision encoder with a Gemma 2 language model to process images and text. the vision encoder turns the image into a sequence of image tokens; these tokens are then linearly projected and combined with the input text tokens. the Gemma 2 language model processes the combined tokens and generates output text tokens.
December 11, 2024 at 4:58 PM
the paper suggests some nice strategies to increase the model's detection accuracy using fake boxes and a <noise> special token; I plan to explore those in the coming days.
December 8, 2024 at 4:26 PM
PG2 offers 9 pre-trained models with sizes of 3B, 10B, and 28B parameters and resolutions of 224, 448, and 896 pixels.

we can see that PaliGemma2's object detection performance depends more on input resolution than model size. 3B 448 seems like a sweet spot.
December 8, 2024 at 4:26 PM
PG2 performs worse on the object detection task than specialized detectors; you can easily train a YOLOv11 model with 0.9 mAP on this dataset.

compared to PG1, it performs much better; datasets with a large number of classes were hard to fine-tune with the previous version
December 8, 2024 at 4:26 PM
also take into account that Gemini and Gemma are two different models; Gemma is a lot smaller, open-source, and can run locally
December 8, 2024 at 3:03 PM
totally agree; it's not perfect! but
- there are still much bigger versions of the model, both in terms of parameters and input resolution
- I only trained it for 1 hour
December 8, 2024 at 3:02 PM
a multimodal dataset I used to fine-tune the model.

link: universe.roboflow.com/roboflow-jvu...
December 6, 2024 at 4:18 PM
PG2 offers 9 pre-trained models with sizes of 3B, 10B, and 28B parameters and resolutions of 224, 448, and 896 pixels.

it looks like OCR-related metrics (ST-VQA, TallyQA, TextCaps, ...) benefit more from increased resolution than from model size. that's why I went from 224 to 336.
December 6, 2024 at 4:18 PM
how to prevent this in open-source projects?

- never allow github actions from first-time contributors.
- always require review for new contributors.
- never run important actions automatically via bots.
- protect release actions with unique cases and selected actors.
December 5, 2024 at 9:21 PM
what happened?

malicious code was injected into the pypi deployment workflow (github action).

the source code itself wasn't infected. however, the resulting tar/wheel files were compromised during the build process.
December 5, 2024 at 9:21 PM
smart parking systems are just the beginning. roboflow workflows can be used for so much more. check out my clothes detection + sam2 + stabilityai inpainting workflow
November 27, 2024 at 5:43 PM
custom python blocks in roboflow workflows are powerful. built a telegram bot connector for real-time alerts.
November 27, 2024 at 5:43 PM
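just to sketch the alert side (the workflows custom-block wrapper is omitted, and the token, chat id, and message are placeholders), the connector boils down to a call to the Telegram Bot API:

```python
# a sketch of the alert side only; BOT_TOKEN and CHAT_ID are placeholders,
# and the roboflow workflows custom python block wrapper is not shown
import requests

BOT_TOKEN = "<your-bot-token>"
CHAT_ID = "<your-chat-id>"

def send_alert(text: str) -> None:
    # the Telegram Bot API exposes sendMessage for plain-text notifications
    response = requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )
    response.raise_for_status()

send_alert("parking spot 12 is now occupied")  # example alert text
```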