Enable with "Split View" flag at chrome://flags.
⚠️ As an experimental feature, it could change or be removed at any time.
#browser #googlechrome #techtips
Use the /compress command to keep the Gemini CLI on track. It prunes the history without a full reboot.
Get started today with the Gemini CLI: npx @google/gemini-cli
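A rough sketch of the idea behind /compress (this is an illustration, not the Gemini CLI's actual implementation): collapse older turns into a short summary placeholder so the context shrinks while the session keeps going.

```python
def compress_history(messages, keep_last=4):
    """Keep the most recent `keep_last` turns and replace everything
    older with a single summary placeholder, shrinking the context
    without restarting the session."""
    if len(messages) <= keep_last:
        return list(messages)
    summary = f"[summary of {len(messages) - keep_last} earlier turns]"
    return [summary] + list(messages[-keep_last:])

history = [f"turn {i}" for i in range(10)]
print(compress_history(history))
```

In the real CLI the pruned turns are summarized by the model rather than replaced with a static placeholder, but the shape of the operation is the same: old detail out, short summary in.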
👉 Try it out in AI Studio: ai.dev
📖 Read all about it: developers.googleblog.com/en/introduci...
I found top words and key insights from #Gemini 2.5 Flash:
🧭 Roadmapping the answer to come: "Let's break down"
🧐 Definitive answers: "crucial" and "comprehensive"
🙌 Enthusiastic language: "great question"
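A minimal sketch of how a tally like this can be made (the sample responses here are made up for illustration): collect model outputs, normalize the words, and count them.

```python
from collections import Counter

# Hypothetical sample responses standing in for real model output.
responses = [
    "Great question! Let's break down the crucial steps.",
    "Let's break down this comprehensive topic.",
    "That's a crucial point, great question.",
]

# Lowercase each word and strip surrounding punctuation before counting.
words = Counter(
    w.strip(".,!?").lower()
    for text in responses
    for w in text.split()
)
print(words.most_common(5))
```

Real analysis would use many more responses and probably count multi-word phrases, but a plain Counter already surfaces the repeated tics.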
Command: yek --tokens 1024k .
Result: ~253k tokens! 🎉 Room to spare.
Command: repomix . --ignore "..." --compress
Result: Down to ~1.8M tokens. Getting closer!
Command: repomix . --ignore "**/*.ipynb,**/*.csv,..."
Result: Down to ~2.6M tokens! Massive cut.
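The --ignore flag takes glob patterns like these and drops matching files before packing. A quick Python sketch of the same filtering idea (the file names and the simplified patterns here are illustrative, not repomix's internals):

```python
from fnmatch import fnmatch

# Simplified stand-ins for the repomix ignore globs above.
ignore_patterns = ["*.ipynb", "*.csv"]
files = ["src/app.py", "notebooks/eda.ipynb", "data/train.csv", "README.md"]

# Keep only files that match none of the ignore patterns.
kept = [
    f for f in files
    if not any(fnmatch(f, pat) for pat in ignore_patterns)
]
print(kept)
```

Notebooks and CSVs are usually the bulkiest text in a data-heavy repo, which is why excluding them cuts the token count so dramatically.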
📐 0.6B - 235B model sizes: “Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct”
🧠 Hybrid thinking and non-thinking modes
🤖 Improved agentic capabilities
🌎 Multilingual support with 119 languages
Blog: qwenlm.github.io/blog/qwen3/
It’s working great on Google Cloud!