Luhui Dev
@luhuidev.bsky.social
A lifelong-learning female developer, now the founder of Dino-GSP.
🚀 AI & Mathematics: From Theory to Practice
This article dives into how large models are reshaping education, research, and applications in math.

📚 Read more to see how AI is changing the game!

#AI #Mathematics #EdTech #AIforMath

luhuidev.medium.com/ai-and-mathe...
luhuidev.medium.com
February 6, 2026 at 1:10 PM
People keep comparing MCP, Skills, and Agents SDK.

They’re not competing standards.
They’re not even the same kind of thing.
This article explains why that comparison is wrong — and what each one is actually for. 👇

luhuidev.medium.com/mcp-skills-a...
MCP, Skills, and Agents SDK Are Not Competing Standards
MCP, Skills, and Agents SDK are often compared — but they solve problems at different layers. This article explains why treating them as
luhuidev.medium.com
January 28, 2026 at 3:15 PM
In 2025, AI became my second brain and my second production system.

AI is now embedded in every step: reading, thinking, designing, coding, writing.

If you are building seriously with AI, I highly recommend thinking in systems.

luhuidev.medium.com/my-favorite-...
My Favorite AI Tools of 2025
How an Engineering-Driven Creator Built a Second Brain and a Second Production System
luhuidev.medium.com
January 23, 2026 at 12:05 PM
Just shipped a 4-layer “honesty immune system” for LLM agents—confession channel, uncertainty gate, rollback tx, classifier shields, all in one repo.

Copy, paste, stay safe.
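For a sense of how the four layers could compose, here is a minimal sketch in plain Python. All names, thresholds, and signatures are illustrative only; the repo's actual API may differ.

```python
# Illustrative composition of the four layers described above.
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    plan: str
    confidence: float                          # model-reported confidence in [0, 1]
    confession: str = ""                       # free-text channel for admitting doubts
    effects: list = field(default_factory=list)  # side-effecting steps to run

def uncertainty_gate(action: AgentAction, threshold: float = 0.7) -> bool:
    """Layer 2: refuse to act on low-confidence plans."""
    return action.confidence >= threshold

def classifier_shield(action: AgentAction, banned=("delete all", "transfer funds")) -> bool:
    """Layer 4: an independent check vetoes obviously unsafe plans."""
    return not any(term in action.plan.lower() for term in banned)

def run_with_rollback(action: AgentAction, execute, undo) -> None:
    """Layer 3: run the steps as a transaction; undo completed steps on failure."""
    done = []
    try:
        for step in action.effects:
            execute(step)
            done.append(step)
    except Exception:
        for step in reversed(done):
            undo(step)
        raise

def handle(action: AgentAction, execute, undo) -> str:
    if action.confession:                      # Layer 1: surface what the model admits to
        print("confession:", action.confession)
    if not uncertainty_gate(action) or not classifier_shield(action):
        return "escalated to human review"
    run_with_rollback(action, execute, undo)
    return "ok"
```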

luhuidev.medium.com/engineering-...
Engineering View: How to Actually Ship Honest Agents in the Age of Action-Oriented LLMs
Preamble: In the Agent era, “dishonesty” is no longer an occasional hallucination — it’s a system-level …
luhuidev.medium.com
January 18, 2026 at 11:13 AM
Most AI failures aren’t hallucinations.
They’re strategic choices.
Reinforcement-trained models can know the right behavior—and still choose what maximizes reward.

I break down:
• Reward hacking (toy sketch below)
• Sleeper agents
• Sandbagging & covert violations
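A toy illustration of the reward-hacking failure mode, with made-up numbers: the learner only ever sees the proxy reward, so it converges on gaming the grader even though the true objective never improves.

```python
# Toy reward hacking: optimize the proxy the grader measures, not the true goal.
import random

ACTIONS = ["solve_task", "game_the_grader"]

def proxy_reward(action):   # what the training signal actually measures
    return 1.0 if action == "solve_task" else 1.5

def true_reward(action):    # what we actually wanted
    return 1.0 if action == "solve_task" else 0.0

q = {a: 0.0 for a in ACTIONS}
for _ in range(1000):
    # epsilon-greedy bandit updated on the PROXY reward only
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    q[a] += 0.1 * (proxy_reward(a) - q[a])

best = max(q, key=q.get)
print("learned policy:", best)
print("proxy reward:", proxy_reward(best), "| true reward:", true_reward(best))
```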

luhuidev.medium.com/when-models-...
When Models Know They’re Cheating: A Technical Dissection of Scheming and Reward Hacking
Reframing the Problem: Error, or Deception? In several previous essays, I have distinguished hallucination …
luhuidev.medium.com
January 10, 2026 at 12:59 PM
I made a dynamic visualization of the Power of a Point Theorem using Dino-GSP.

Watching the invariant stay constant as the circle and points move makes the theorem feel obvious in a way static diagrams never do.
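For readers who prefer numbers to animation, here is a standalone numeric check of the invariant (plain Python, not Dino-GSP code): every secant through an external point P meets the circle at A, B with the same product PA·PB, equal to PO² - r².

```python
# Numeric sanity check of the Power of a Point theorem.
import math, random

def chord_through(px, py, cx, cy, r, angle):
    """Intersect the line through P with direction `angle` with the circle."""
    dx, dy = math.cos(angle), math.sin(angle)
    fx, fy = px - cx, py - cy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * c
    if disc < 0:
        return None                       # this line misses the circle
    t1 = (-b - math.sqrt(disc)) / 2
    t2 = (-b + math.sqrt(disc)) / 2
    return (px + t1 * dx, py + t1 * dy), (px + t2 * dx, py + t2 * dy)

cx, cy, r = 0.0, 0.0, 3.0                 # circle center and radius
px, py = 5.0, 1.0                         # external point P
power = (px - cx) ** 2 + (py - cy) ** 2 - r ** 2

for _ in range(5):
    pts = chord_through(px, py, cx, cy, r, random.uniform(0, math.pi))
    if pts is None:
        continue
    (ax, ay), (bx, by) = pts
    pa = math.hypot(ax - px, ay - py)
    pb = math.hypot(bx - px, by - py)
    print(f"PA*PB = {pa * pb:.6f}  (power = {power:.6f})")
```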

▶️ Watch the animation here
January 7, 2026 at 10:00 AM
You can memorize Simson’s Theorem.
Or you can watch it happen.
I built this dynamic visualization with Dino-GSP:
the point moves, the feet of perpendiculars move,
and the line is always there.

Geometry feels different when it’s alive.
▶️ Watch the video.
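A quick standalone numeric check of the statement (independent of Dino-GSP): for any point P on the circumcircle, the three feet of the perpendiculars to the side lines are collinear up to floating-point error.

```python
# Numeric check of Simson's theorem on the unit circle.
import math, random

def on_circle(theta, r=1.0):
    return (r * math.cos(theta), r * math.sin(theta))

def foot(p, a, b):
    """Foot of the perpendicular from p onto line AB."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

A, B, C = (on_circle(t) for t in (0.3, 2.1, 4.4))   # triangle on the unit circle
P = on_circle(random.uniform(0, 2 * math.pi))       # moving point on the circle

f1, f2, f3 = foot(P, A, B), foot(P, B, C), foot(P, C, A)
# Collinear iff the cross product of (f2 - f1) and (f3 - f1) is ~0.
cross = (f2[0] - f1[0]) * (f3[1] - f1[1]) - (f2[1] - f1[1]) * (f3[0] - f1[0])
print("collinearity residual:", cross)   # ~1e-16, i.e. zero up to rounding
```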

dajiaoai.com?inviter=685b...
January 5, 2026 at 12:47 PM
Struggling to visualize the Law of Sines? 🤔 My new video breaks it down with a crystal-clear, dynamic demonstration built with my software, Dino-GSP. Unlock a new way of understanding trigonometry. Watch now!
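The relation the video animates, in its standard form (a, b, c are the sides opposite angles A, B, C, and R is the circumradius):

```latex
\[
\frac{a}{\sin A} \;=\; \frac{b}{\sin B} \;=\; \frac{c}{\sin C} \;=\; 2R
\]
```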

#Trigonometry #EdTech #MathHelp #DinoGSP #STEM
December 29, 2025 at 12:13 PM
I wrote this to capture a real shift in 2025:
open-source models are no longer just cheaper APIs; they are foundations for reasoning systems, agents, and production AI.

medium.com/p/a-2025-ret...
A 2025 Retrospective on the Open-Source Large Language Model Ecosystem
From “Following” to “Running in Parallel”: Open Source Enters the Frontier
medium.com
December 26, 2025 at 2:14 PM
“Let’s reflect.”
“Check your answer.”
“Are you sure?”
But they rarely make models more honest.

This piece explains why most self-reflection techniques fail to address deception, reward hacking, or hidden objectives.

luhuidev.medium.com/the-illusion...
The Illusion of Self-Reflection: Why Asking Models to “Reflect” Often Doesn’t Work
This article argues that self-reflection is usually just a second round of generation, optimizing fluency and plausibility rather than…
luhuidev.medium.com
December 24, 2025 at 11:53 AM
Visualizing Ceva’s Theorem with a constraint-based geometry board.

Points move, ratios stay invariant, and concurrency emerges naturally.

This is how geometry proofs should be explored.
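The invariant being visualized is Ceva's standard statement: for cevians AD, BE, CF of triangle ABC (with D on BC, E on CA, F on AB), the three cevians are concurrent exactly when

```latex
\[
\frac{BD}{DC}\cdot\frac{CE}{EA}\cdot\frac{AF}{FB} \;=\; 1 .
\]
```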
December 24, 2025 at 8:14 AM
Just created a visual of the Area Ratio Theorem for Triangles with Common Angles using my Dino-GSP geometry tool!

📐 Check out how the theorem relates the area ratio of two triangles with shared angles.
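Written out, the relation in the video (as the theorem is usually stated): if triangles ABC and AB'C' share the angle at A, then from area = ½·AB·AC·sin A the ratio of areas reduces to a ratio of side products:

```latex
\[
\frac{[ABC]}{[AB'C']} \;=\; \frac{AB\cdot AC}{AB'\cdot AC'} .
\]
```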

#Math #Geometry #DinoGSP #AI #EdTech
December 22, 2025 at 12:43 PM
OpenAI’s Confession experiment reframes a core assumption in AI safety: instead of expecting agents to never misbehave, we should design systems that can detect and surface misbehavior when it happens.

luhuidev.medium.com/openai-confe...
OpenAI Confession: Why “Admitting to Cheating” Matters More Than “Not Cheating”
If you care about agent reliability, model behavior, safety boundaries, and long-term alignment, feel free to follow @LuhuiDev. I’ll keep…
luhuidev.medium.com
December 19, 2025 at 1:43 PM
Ever heard of the Swallowtail Theorem?

🦋 It makes tricky triangle problems so much easier! Check out my new video for a clear, visual explanation.
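For reference, the statement as it is commonly given (naming varies between sources): if P is a point inside triangle ABC and line AP meets BC at D, then the two "wings" sharing the cevian AP have areas in the same ratio as the segments D cuts on BC:

```latex
\[
\frac{[ABP]}{[ACP]} \;=\; \frac{BD}{DC} .
\]
```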

#MathVideo #MathTricks #Geometry #Student
December 19, 2025 at 6:59 AM
I've published a new article exploring how OpenAI is reframing the reliability problem in large language models.

The paper introduces honesty as a separate dimension, focusing on whether models accurately report their own behavior.

medium.com/p/3f904c430f27
OpenAI’s Provocation: The Real LLM Problem Is Dishonesty, Not Hallucination
Most discussions about unreliable LLMs focus on hallucinations as a capability problem.
medium.com
December 18, 2025 at 10:05 AM
A beautiful geometry invariant:

In an equilateral triangle, no matter where a point moves, the sum of its perpendicular distances to the three sides never changes.

I made a short visual demo of Viviani’s Theorem that shows why this works at a glance.
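The one-line reason the demo makes visible: connect P to the three vertices and compare areas. With side length a, altitude h, and distances d₁, d₂, d₃ from P to the sides,

```latex
\[
\tfrac{1}{2}a\,d_1 + \tfrac{1}{2}a\,d_2 + \tfrac{1}{2}a\,d_3 \;=\; \text{area} \;=\; \tfrac{1}{2}a\,h
\quad\Longrightarrow\quad d_1 + d_2 + d_3 = h .
\]
```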

▶️ Watch the animation here
December 17, 2025 at 11:47 AM
Menelaus’ Theorem = geometry’s hidden shortcut.
One line, clean ratios, no extra work.
The trick that turns high scores into perfect ones.
🎥 Inverse Theorem demo ↓
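For reference, the standard statement: a line crossing the sides BC, CA, AB of triangle ABC (or their extensions) at D, E, F satisfies

```latex
\[
\frac{BD}{DC}\cdot\frac{CE}{EA}\cdot\frac{AF}{FB} \;=\; 1
\]
```

(unsigned lengths; the product is −1 with directed ratios). The converse (the "inverse theorem" in the demo) turns the same product into a collinearity test.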

#Geometry #MathEducation #STEM #MathTips #MathTricks #MathIn60Seconds
December 15, 2025 at 1:37 PM
If LLMs keep getting bigger and smarter, why do they still hallucinate?

Because scaling improves pattern completion, not truth.

Zero hallucination isn’t a realistic goal — trustworthy AI is.

medium.com/p/hallucinat...
Hallucination Is Not a Bug, but the Destiny of Intelligence
From Scaling Laws to the Fundamental Causes of LLM Hallucination
medium.com
December 12, 2025 at 6:00 AM
How do you turn reasoning from a model trick into a system capability?

GPT-5 Unified shows how to make AI reasoning controllable, schedulable, and scalable.

Here’s what developers can learn from its architecture.

medium.com/p/a-deep-div...
A Deep Dive into GPT-5’s Reasoning Capabilities
How to Turn “Reasoning Ability” into a Controllable, Scalable, and System-Level Capability
medium.com
November 28, 2025 at 8:10 AM
I just wrote a deep dive into Kimi K2-Thinking, a groundbreaking open-source reasoning model that is changing the game for AI tasks by breaking the traditional boundaries of large language models. 📈

medium.com/p/understand...
Understanding Reasoning Models: Exploring Kimi K2-Thinking and Its Breakthrough
Introduction: Why Reasoning Models Deserve Our Attention
medium.com
November 21, 2025 at 9:09 AM
Just published a deep dive on why world models matter — and why Fei-Fei Li says they’re the key to real AI progress.
World models aren’t “better video models.” They’re the state representation layer AGI has been missing.
Read here 👇
medium.com/p/why-fei-fe...
Why Fei-Fei Li Says the Future of AI Progress Depends on World Models
Introduction
medium.com
November 19, 2025 at 9:37 AM
I just broke down Claude's groundbreaking insights on how to build powerful, real-world multi-agent systems.
Curious? Check out and learn how you can apply these lessons to your projects!
medium.com/p/key-insigh...
Key Insights from Claude Multi-Agent Architecture (For Engineering and Product Teams)
Unlocking the Power of Multi-Agent Systems: Real-World Insights from Claude’s Engineering Team
medium.com
November 13, 2025 at 11:17 PM
DeepSeek-OCR wants to “see” geometry.
I tested it — it can copy a diagram, but not reason about it.
A small step for OCR, a big question for AI understanding.

medium.com/p/deepseek-o...
DeepSeek OCR and Geometry Recognition — Does It Actually Work?
Recently, DeepSeek has sparked another wave of excitement.
medium.com
November 12, 2025 at 9:43 AM
Ready to push AI beyond just talking?
CodeAct turns language models into executing agents, allowing them to generate, run, and correct code on their own. If you’re into building smarter, self-improving agents, you don’t want to miss this!
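The core loop is small enough to sketch. This is a generic, hypothetical version (`llm` stands in for whatever model client you use), not the article's actual implementation:

```python
# Minimal CodeAct-style loop: the model's action is Python code, the runtime
# executes it, and any error is fed back so the model can self-correct.
import io, contextlib, traceback

def run_python(code: str) -> str:
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})                     # sandbox this in a real system!
        return buf.getvalue() or "(no output)"
    except Exception:
        return "ERROR:\n" + traceback.format_exc()

def codeact_loop(task: str, llm, max_turns: int = 3) -> str:
    history = f"Task: {task}\nRespond with Python code only."
    for _ in range(max_turns):
        code = llm(history)                    # the model's "action" is code
        observation = run_python(code)
        if not observation.startswith("ERROR"):
            return observation                 # success: return the result
        history += f"\n\nYour code failed:\n{observation}\nTry again."
    return "gave up after max_turns"
```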

#AI #CodeAct #MachineLearning

medium.com/p/when-geome...
When Geometry Meets CodeAct: A New Agent Product I’ve Created
An Exploration of Engineering and Paradigms for Agent Developers
medium.com
November 7, 2025 at 3:10 PM
Exploring the future of AI models: the PEER architecture offers a new approach to scaling intelligence, breaking the limitations of traditional MoE systems. Instead of bigger models, it’s about smarter, self-growing networks. A game changer for multi-domain AI agents!
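A back-of-the-envelope sketch of what product-key expert retrieval looks like, based on my reading of PEER (heavily simplified; the actual design differs in details such as multi-head retrieval and learned keys):

```python
# Simplified PEER-style layer: a large pool of single-neuron experts indexed by
# product keys; each input retrieves and mixes only the top-k experts.
import numpy as np

d, n, k = 64, 32, 4                          # hidden dim, sub-keys per half, experts used
rng = np.random.default_rng(0)
sub_keys1 = rng.normal(size=(n, d // 2))     # product keys index n*n = 1024 experts
sub_keys2 = rng.normal(size=(n, d // 2))
w_in = rng.normal(size=(n * n, d)) * 0.02    # each expert is one hidden unit
w_out = rng.normal(size=(n * n, d)) * 0.02

def peer_layer(x):
    # Score each half of the query against its own sub-key set.
    s1 = sub_keys1 @ x[: d // 2]
    s2 = sub_keys2 @ x[d // 2 :]
    top1, top2 = np.argsort(-s1)[:k], np.argsort(-s2)[:k]
    # Candidate experts are the k*k product combinations of the two shortlists.
    cand = np.array([(i * n + j, s1[i] + s2[j]) for i in top1 for j in top2])
    order = np.argsort(-cand[:, 1])[:k]
    experts = cand[order, 0].astype(int)
    gates = np.exp(cand[order, 1])
    gates /= gates.sum()
    # Each retrieved expert: one ReLU unit, mixed by its gate weight.
    h = np.maximum(w_in[experts] @ x, 0.0)   # (k,)
    return (gates * h) @ w_out[experts]      # (d,)

print(peer_layer(rng.normal(size=d)).shape)  # (64,)
```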

luhuidev.medium.com/breaking-the...
Breaking the Limitations of MoE: How PEER Architecture Drives the Future of Superintelligence
PEER architecture redefines AI scaling: from static MoE to self-growing networks, enabling smarter, multi-domain agents.
luhuidev.medium.com
November 5, 2025 at 3:51 PM