Ron Itelman
@ronitelman.bsky.social
O'Reilly Author, "Unifying Business, Data, and Code" (2024), and Apress author, "The Language of Innovation" (2025)
The compounding negative ROI of downstream errors has to be scary with agents...
October 21, 2025 at 9:02 PM
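A rough way to see the compounding (illustrative numbers, not measurements):

```python
# Back-of-the-envelope: per-step reliability compounds across an agent pipeline.
per_step = 0.95          # assume each agent step is right 95% of the time
steps = 10               # ten chained steps
print(f"{per_step ** steps:.1%}")  # ~59.9% -- ten "pretty good" steps, near coin-flip output
```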
The VC passed on funding us.
Their portfolio companies are shipping AI agents into hospitals right now.
Sleep tight.
October 21, 2025 at 7:34 PM
This is why we built TrustLoop. We don't wait for AI to "figure it out." We build the guardrails that validate context, catch ambiguity, and prevent miscommunication before it reaches a patient.
October 21, 2025 at 7:34 PM
You get garbage in, garbage through, garbage out—with patient lives in the balance.
Smarter models don't fix this. ChatGPT-10 won't magically understand hospital context, regional variations, or clinical workflows. The fundamental problem isn't intelligence—it's interpretation.
October 21, 2025 at 7:34 PM
A nurse in Boston searches for a patient's temperature. Another in Phoenix does the same. One hospital uses Celsius. One uses Fahrenheit. The AI has no idea which is which.
Now multiply that ambiguity across every query, every hospital system, every database, every regional protocol.
October 21, 2025 at 7:34 PM
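A minimal sketch of the guardrail idea (hypothetical names, not TrustLoop's actual code): an unlabeled unit is an error to escalate, never a value to guess.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    value: float
    unit: Optional[str]  # None means the source system never labeled the unit

def normalize_temp(reading: Reading) -> float:
    """Return degrees Celsius; refuse to guess when the unit is missing."""
    if reading.unit == "C":
        return reading.value
    if reading.unit == "F":
        return (reading.value - 32) * 5 / 9
    # An unlabeled 38.5 is almost certainly Celsius, but "almost certainly"
    # is not a clinical standard. Surface the ambiguity instead of guessing.
    raise ValueError(f"Ambiguous or unknown unit {reading.unit!r} for value {reading.value}")

print(normalize_temp(Reading(101.2, "F")))  # 38.44...
normalize_temp(Reading(38.5, None))         # raises ValueError: escalate, don't infer
```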
This is gold, thanks for explaining the concept blocks. Definitely have things I want to experiment with now. Thanks for sharing, Aaron.
July 21, 2025 at 2:04 PM
Don't know what the rest of this thread was, but thank you for sharing about SPLADE
July 17, 2025 at 12:31 PM
A heuristic, perhaps: I can't expend the time and energy to collect, understand, and integrate new information to update my beliefs, so it is more efficient to believe what others I align with believe. So parsimony in one direction, but risk in another. Perhaps...
May 29, 2025 at 1:41 AM
Because typically you assume Bayesian belief updates are based on rational thinking, versus motivated reasoning, and yet what you are pointing out is normal human behavior
May 28, 2025 at 11:44 PM
I'm trying to imagine what a Bayesian model analogy would be to this...
May 28, 2025 at 11:43 PM
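One possible sketch of that analogy (my assumption, not an established model): keep the Bayes update, but let a desirability term U(h), weighted by a parameter lambda, tilt the posterior toward what one wants to be true.

```latex
% A sketch, not an established model: U(h) scores how much one wants h true,
% and \lambda tilts the update from evidence toward desire.
\[
\text{Rational: } P(h \mid d) \propto P(d \mid h)\, P(h)
\qquad
\text{Motivated: } P_m(h \mid d) \propto P(d \mid h)^{1-\lambda}\, U(h)^{\lambda}\, P(h), \quad \lambda \in [0,1]
\]
% \lambda = 0 recovers the rational update; \lambda = 1 believes whatever is desired.
```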
I'm afraid I'm going to go into some deep meta spiral of philosophical pondering on this ;)
May 28, 2025 at 11:43 PM
The updating of beliefs not on perception, but updating beliefs on what is desired to be perceived?
May 28, 2025 at 11:41 PM
How would you define motivated cognition?
May 28, 2025 at 11:38 PM