Woojin Kim, MD
@woojinkim.com
woojinkim.com
CSO & CMIO @ HOPPR | CMO @ ACR DSI | MSK Radiologist | Imaging Informaticist | AI Enthusiast | Entrepreneur | AI artist | Travel Photographer. Posts are my own.
Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer

"Moreover, with only 1 hour of fMRI data from a new subject, we achieve results comparable to current methods trained on full 40 hour recordings."

amitzalcher.github.io/Brain-IT/
Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer
amitzalcher.github.io
November 8, 2025 at 7:51 PM
The Real Tech Stack Behind AI Startups: A 200-Company Analysis

"Build cool products. Solve real problems. Use whatever tools work.

Just don’t call your prompt engineering a 'proprietary neural architecture.'"

pub.towardsai.net/i-reverse-en...
The Real Tech Stack Behind AI Startups: A 200-Company Analysis
Three weeks of network monitoring revealed the truth: 73% of funded AI startups are running $33M valuations on $1,200/month in OpenAI…
pub.towardsai.net
November 5, 2025 at 10:11 PM
🚨 New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands

thehackernews.com/2025/10/new-...
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands
Researchers uncover a CSRF flaw in ChatGPT Atlas letting attackers inject persistent malicious code.
thehackernews.com
October 28, 2025 at 11:50 PM
Sakana AI's CTO says he's 'absolutely sick' of transformers, the tech that powers every major AI model

"You should only do the research that wouldn't happen if you weren't doing it."

venturebeat.com/ai/sakana-ai...
Sakana AI's CTO says he's 'absolutely sick' of transformers, the tech that powers every major AI model
Llion Jones, co-creator of the transformer technology powering ChatGPT, warns AI research has become too narrow and says he's moving on from his own invention.
venturebeat.com
October 28, 2025 at 2:03 AM
I tried OpenAI’s new Atlas browser but I still don’t know what it’s for

"The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing."

www.technologyreview.com/2025/10/27/1...
I tried OpenAI’s new Atlas browser but I still don’t know what it’s for
My impression is that it is little more than cynicism masquerading as software.
www.technologyreview.com
October 27, 2025 at 6:51 PM
A parody website mocks the hype and dangers of the current large language model boom the-decoder.com/a-parody-web...
A parody website mocks the hype and dangers of the current large language model boom
A new billboard in San Francisco is using sharp satire to highlight the risks of unregulated AI.
the-decoder.com
October 22, 2025 at 6:51 PM
Technology architect builds his own AI testing tool and confirms my “Chain of Babble” theory works!

🤔 Fascinating.
Independent validation: “Chain of Babble” beats “Chain of Thought.”

generativeai.pub/research-met...
October 19, 2025 at 6:51 PM
AI chatbots aren't giving patients safety warnings for imaging exams

This needs to change.

www.auntminnie.com/imaging-info...
AI chatbots aren't giving patients safety warnings for imaging exams
The inclusion of medical disclaimers in AI responses regarding imaging exams declined significantly between 2022 and 2025.
www.auntminnie.com
October 9, 2025 at 6:51 PM
✨ In 2017, I gave my first lecture at #NIIC.
This week, I'll be back to speak on "Generative and Agentic AI in Medical Imaging."

👥 NIIC has been part of my professional journey for nearly a decade, and it is my way of giving back to the field that has given me so much!
September 23, 2025 at 4:08 PM
You can't eval GPT5 anymore — LessWrong

"The GPT-5 API is aware of today's date (no other model provider does this)... Once the model knows that it is in a simulation, it starts questioning other parts of the simulation."

www.lesswrong.com/posts/DLZokL...
You can't eval GPT5 anymore — LessWrong
The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simul…
www.lesswrong.com
September 20, 2025 at 6:51 PM
Magical Thinking on AI

Friedman is not wrong to worry about what's going to happen vis-à-vis the U.S., China, and AI. However, we need less magical thinking and more realism.

aiguide.substack.com/p/magical-th...
Magical Thinking on AI
A Response to Thomas Friedman's Recent AI Columns in the New York Times
aiguide.substack.com
September 15, 2025 at 11:50 PM
😊 My "Frozen Shoulder" submission from last year's #RSNA24 "The Art of Imaging Art Contest" was just published in the @rsnasky.bsky.social Radiology Advances journal. academic.oup.com/radadv/artic...
September 8, 2025 at 11:50 PM
After studying 1,500 papers, this post says common prompt-engineering advice is often wrong. Top companies use short, structured prompts, automate and continuously optimize them, and match techniques to the task.
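
As a rough illustration of what "short, structured" can mean in practice (my own sketch, not an example taken from the article), compare a padded free-form prompt with one that states role, task, labels, and output format explicitly:

```python
# Illustration only -- not taken from the article. The point is that a
# compact prompt makes role, task, label set, and output format explicit
# instead of burying them in filler prose.

verbose_prompt = (
    "You are the world's best analyst. Please think very carefully and "
    "deeply about the following customer review and try, if at all "
    "possible, to figure out whether it is positive or negative..."
)

structured_template = """\
Role: support-ticket triager
Task: classify the review's sentiment
Labels: positive | negative | mixed
Output: one label, then one short supporting quote

Review:
{review_text}
"""

# The structured version is shorter and easier to optimize automatically,
# e.g. by swapping the label set or output format per task.
print(structured_template.format(review_text="Battery died after two days."))
print(len(verbose_prompt), len(structured_template))
```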

aakashgupta.medium.com/i-studied-1-...
I Studied 1,500 Academic Papers on Prompt Engineering. Here’s Why Everything You Know Is Wrong.
The $50M+ ARR companies are doing the exact opposite of what everyone teaches
aakashgupta.medium.com
September 8, 2025 at 6:51 PM
Silicon Valley’s AI deals are creating zombie startups: ‘You hollowed out the organization’

"...it’s a trend that threatens to thwart innovation as founders abandon their ambitious projects to work for the biggest companies in the world."

www.cnbc.com/2025/08/19/h...
Silicon Valley's AI deals are creating zombie startups: 'You hollowed out the organization'
Unable to make big AI purchases because of regulatory hurdles, tech giants have spent billions to buy top talent from startups, leaving behind shell companies.
www.cnbc.com
September 8, 2025 at 12:16 AM
Why language models hallucinate

Hallucinations occur partly because standard evaluation methods reward guessing over acknowledging uncertainty, so models learn to answer confidently rather than say "I don't know."
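
That incentive is easy to see with a bit of expected-value arithmetic; the sketch below is my own illustration (not code from the OpenAI post) of why accuracy-only grading favors guessing while a wrong-answer penalty favors abstaining.

```python
# Minimal arithmetic sketch (my illustration, not from the OpenAI post):
# under accuracy-only grading, a guess is always worth more in expectation
# than abstaining, so evaluations tuned for accuracy push models to guess.

def expected_score(p_correct: float, abstain: bool,
                   wrong_penalty: float = 0.0, abstain_credit: float = 0.0) -> float:
    """Expected score on one question for a model that is p_correct sure."""
    if abstain:
        return abstain_credit
    return p_correct - (1.0 - p_correct) * wrong_penalty

p = 0.2  # model is only 20% confident in its best guess

# Accuracy-only grading: guessing earns 0.2 in expectation, abstaining earns 0.
print(expected_score(p, abstain=False))                     # 0.2
print(expected_score(p, abstain=True))                      # 0.0

# Grading that penalizes wrong answers: guessing earns about 0.2 - 0.8 = -0.6,
# so "I don't know" becomes the rational choice for an unsure model.
print(expected_score(p, abstain=False, wrong_penalty=1.0))  # about -0.6
print(expected_score(p, abstain=True, wrong_penalty=1.0))   # 0.0
```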

openai.com/index/why-la...
Why language models hallucinate
OpenAI’s new research explains why language models hallucinate. The findings show how improved evaluations can enhance AI reliability, honesty, and safety.
openai.com
September 7, 2025 at 6:51 PM
🎨 @rsnasky.bsky.social is again conducting "The Art of Imaging Art Contest", and top picks will be displayed at #RSNA25.

🗳️ Vote for your favorite art entry by September 18th!

🔗 to each of my artworks is in this 🧵👇

Anna’s Hand 👉 rsna.wishpond.com/art-of-imagi...

#AIart #RadiologyArt #NanoBanana
September 4, 2025 at 11:31 PM
🇨🇳 Alibaba's Tongyi Lab Open-Sources WebWatcher: A Breakthrough in Vision-Language AI Agents

Very nice opinion piece:
OPINION: Deep vs. Shallow: Why today’s LLMs hit a wall

www.rohan-paul.com/p/alibabas-t...
🇨🇳 Alibaba's Tongyi Lab Open-Sources WebWatcher: A Breakthrough in Vision-Language AI Agents
Alibaba open-sources WebWatcher, Tencent launches 3D world model HunyuanWorld-Voyager, plus an opinion on why LLMs hit limits between deep and shallow reasoning.
www.rohan-paul.com
September 4, 2025 at 6:51 PM
Therapists are secretly using ChatGPT. Clients are triggered.

www.technologyreview.com/2025/09/02/1...
Therapists are secretly using ChatGPT. Clients are triggered.
Some therapists are using AI during therapy sessions. They’re risking their clients’ trust and privacy in the process.
www.technologyreview.com
September 3, 2025 at 6:51 PM
Clinician Perspectives on AI-Generated Drafts of Test Result Explanations

AI-generated draft comments show promise to reduce clinician burden and improve patient communication, but need further refinement before broad implementation.

jamanetwork.com/journals/jam...
Clinician Perspectives on AI-Generated Drafts of Test Result Explanations
This quality improvement study evaluates clinician perspectives on the usability and utility of generative artificial intelligence (AI)–based large language model tool to draft result comments for…
jamanetwork.com
August 25, 2025 at 9:11 PM
MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers

❗ GPT‑5 43.72%, Grok‑4 33.33%, Claude‑4.0‑Sonnet 29.44%.

arxiv.org/abs/2508.14704
MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers
The Model Context Protocol has emerged as a transformative standard for connecting large language models to external data sources and tools, rapidly gaining adoption across major AI providers and…
arxiv.org
August 25, 2025 at 6:51 PM
AI ranks scientific papers that have been secretly rewritten by AI higher than human-written ones

⚠️ Why there is a growing incentive to lie about AI use in academia

medium.com/the-generato...
AI ranks scientific papers that have been secretly rewritten by AI higher than human-written ones
Why there is a growing incentive to lie about AI use in academia
medium.com
August 22, 2025 at 1:11 AM
MIT breaks its own policies on AI generated research

Why writing research papers with AI threatens scientific integrity

generativeai.pub/mit-breaks-i...
MIT breaks its own policies on AI generated research
Why writing research papers with AI threatens scientific integrity
generativeai.pub
August 21, 2025 at 9:11 PM
MIT report: 95% of enterprise GenAI pilots fail—success hinges on targeted adoption, smart partnerships, and deep workflow integration. Biggest ROI? Back-office automation, not sales tools.

fortune.com/2025/08/18/m...
MIT report: 95% of generative AI pilots at companies are failing
There’s a stark difference in success rates between companies that purchase AI tools from vendors and those that build them internally.
fortune.com
August 21, 2025 at 2:03 AM