Karl Weinmeister
@kweinmeister.bsky.social
Cloud Developer Advocacy @ Google. AI/ML/Data, Blue Devil & Longhorn, wanna-be at home improvement. Opinions are my own.
I put antigravity.google to the test as my daily editor. Here's how building a full-stack app with it went.
November 30, 2025 at 6:05 PM
Now you can see two websites in a single tab, with Chrome's new Split View.

Enable it with the "Split View" flag at chrome://flags.

⚠️ As an experimental feature, it could change or be removed at any time.

#browser #googlechrome #techtips
November 2, 2025 at 10:22 AM
Has Gemini ever felt like it's losing focus as your conversation goes on? Naturally, more context means more topics to cover.

Use the /compress command to keep the Gemini CLI on track. It condenses the conversation history into a summary, no full restart needed.

Get started today with the Gemini CLI: npx @google/gemini-cli
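
A rough sketch of the flow (the ">" here just marks input typed inside the CLI session):

# Start the CLI (same command as above)
npx @google/gemini-cli
# Then, inside the interactive session, once responses start to drift:
> /compress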
October 7, 2025 at 5:03 PM
The all-new Gemini 2.5 Flash Image model is built different. I had a blast mixing together multiple images and text!

👉 Try it out in AI Studio: ai.dev
📖 Read all about it: developers.googleblog.com/en/introduci...
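
If you'd rather call the API than the UI, here's a minimal curl sketch. The model id "gemini-2.5-flash-image" and the prompt are my assumptions (check the blog post for the current name), and GEMINI_API_KEY is your Gemini API key; the generated image comes back base64-encoded inside the JSON response.

# Text-to-image request against the Gemini API
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "A photo of a corgi astronaut planting a flag on the moon"}]}]}'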
August 26, 2025 at 2:15 PM
How can you spot clues that content might be AI-generated?

Here are the top words and patterns I found in #Gemini 2.5 Flash output:
🧭 Roadmapping the answer to come: "Let's breakdown"
🧐 Definitive answers: "crucial" and "comprehensive"
🙌 Enthusiastic language: "great question"
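
For fun, a quick-and-dirty grep to flag drafts for these tells (draft.txt is a placeholder for whatever file you're checking; the phrase list is just the words above, nothing scientific):

# Count the lines containing common AI "tell" phrases, case-insensitive
grep -E -i -c "let's break ?down|crucial|comprehensive|great question" draft.txt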
June 5, 2025 at 3:46 PM
Yek prioritizes files within a token budget, using Git history & other rules. github.com/bodo-run/yek

Command: yek --tokens 1024k .
Result: ~253k tokens! 🎉 Room to spare.
May 14, 2025 at 9:03 PM
Still using Repomix, this time adding the --compress flag to strip comments & whitespace from the filtered set.

Command: repomix . --ignore "..." --compress
Result: Down to ~1.8M tokens. Getting closer!
May 14, 2025 at 9:03 PM
Used Repomix again, this time filtering out file types that aren't essential for LLM understanding (data files, static assets, etc.).

Command: repomix . --ignore "**/*.ipynb,**/*.csv,..."
Result: Down to ~2.6M tokens! Massive cut.
May 14, 2025 at 9:03 PM
🚀 Qwen3 has dropped!

📐 0.6B - 235B model sizes: “Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct”
🧠 Hybrid thinking and non-thinking modes
🤖 Improved agentic capabilities
🌎 Multilingual support for 119 languages

Blog: qwenlm.github.io/blog/qwen3/

It’s working great on Google Cloud!
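
If you want to kick the tires locally, one option (my sketch, not from the Qwen team) is serving the 4B checkpoint from Hugging Face with vLLM; assumes a recent vLLM release with Qwen3 support and a GPU with enough memory:

# Install vLLM and expose Qwen3-4B behind an OpenAI-compatible endpoint on localhost:8000
pip install -U vllm
vllm serve Qwen/Qwen3-4B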
April 29, 2025 at 2:25 AM