JasonC
jasonfromatlanta.bsky.social
JasonC
@jasonfromatlanta.bsky.social
I am someone that likes puzzles...I am always working one. I am someone that connects things across domains. I see patterns.
You can't factor through intermediate times because there's no fact of the matter at resolutions below the limit.

Complex numbers then emerge not just as embedding machinery, but as the structure you get when finite distinguishability meets consistency requirements.

@philsci-archive.bsky.social
February 1, 2026 at 5:19 PM
Falsifiable predictions:

DM/baryon ratio varies by environment (voids ≠ clusters)
Ratio evolves with redshift
Cosmic jerk at z ≈ 1.5

Standard ΛCDM says constant. I say it varies. One of us is wrong.
January 31, 2026 at 12:46 AM
The universe is a 2D ledger. Physics is the audit.

The key insight: dark matter isn't a particle. It's the topological tax of maintaining 3D structure on a 2D scaffold. Pentagons can't tile 3D space efficiently—the frustration IS the missing mass.
January 31, 2026 at 12:46 AM
Reposted by JasonC
Prompt injection is a structural property of how we process information. Until we harden the resolution boundary through architectural change, the cycle continues .

Full analysis here: medium.com/@jasonrconne...
Breaking AI’s Vicious Security Cycle: Why AI Security Guardrails Keep Failing
A Constraint-Based Analysis of Prompt Injection
medium.com
January 11, 2026 at 6:30 PM
One data point that stuck with me while writing this: Chinese open-source models went from 1.2% of global AI usage to 30% (weekly peak) in twelve months. That's not incremental growth; that's a phase transition.

Genuinely curious what others are seeing.
January 15, 2026 at 4:17 PM
It is no joke. Long COVID left me with Dysautonomia...and the Long COVID cohort isn't getting smaller; it is growing. Gonna be a problem.
January 14, 2026 at 10:49 PM
The piece from Sept if you feel like reading: medium.com/@jasonrconne...
Strategic AI Competition Analysis: Understanding System Patterns Through China’s Neuromorphic…
I use Ai for research, but the thoughts and connections seen are mine
medium.com
January 14, 2026 at 1:43 PM
We're not just failing to maintain capability-driven procurement. We're actively selecting for systems with documented failures because those failures are ideologically coded as virtues.
January 14, 2026 at 1:43 PM
Two days ago, Hegseth announced at SpaceX HQ that Grok...an AI currently under international investigation for generating deepfakes of children...gets classified Pentagon network access. Because it...wait for it... "won't be woke."
January 14, 2026 at 1:43 PM
The guardrail approach fails because it's post-hoc; the model has already decided to comply before it kicks in. The fix isn't filters; it's moving trust eval upstream of instruction processing. Make the model verify the source has authority to issue instruction before it reasons about how to comply.
January 14, 2026 at 12:04 AM
You are welcome! I hope to spark discussion; the solution I offer comes from physics, and I think that sort of cross-domain transfer...it is ripe for picking. Science likes to stay in their silos...they often miss when another discipline has solved the issue in front of them. Cheers!
January 13, 2026 at 10:09 PM
The issue is in how we approach the problem. Today's approach attacks the issue AFTER the model has decided to follow the instructions. I have proposed a solution to this here..it's about trust:
medium.com/@jasonrconne...
Breaking AI’s Vicious Security Cycle: Why AI Security Guardrails Keep Failing
A Constraint-Based Analysis of Prompt Injection
medium.com
January 13, 2026 at 9:04 PM