Stephan Ellmann
@stephanellmann.bsky.social
Organ-playing hobby radiologist (or is it the other way around? 🤔) 🎹⛪🏥🩻
To make the confusion complete: I believe Easter Sunday is the Sunday after the first full moon of spring. That can produce constellations in which Good Friday falls before the first spring full moon in one year and after it in another 🧐☝️😇
April 18, 2025 at 1:58 PM
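(For anyone curious how that rule plays out in dates, here is a minimal sketch of the Anonymous Gregorian (Meeus/Jones/Butcher) computus, which encodes the "Sunday after the first spring full moon" rule using the ecclesiastical rather than the astronomical full moon; the function name and sample years are my own illustration, not part of the post.)

```python
def easter_sunday(year: int) -> tuple[int, int]:
    """Return (month, day) of Gregorian Easter Sunday (Anonymous Gregorian algorithm)."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # ecclesiastical "full moon" term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

for y in (2024, 2025, 2026):
    print(y, easter_sunday(y))  # 2024 -> (3, 31), 2025 -> (4, 20), 2026 -> (4, 5)
```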
An alternative chancellor (Wüst or Günther). Resignations of Merz and Söder. A permanent ministerial post for Habeck despite being in the opposition.
March 10, 2025 at 5:46 PM
And one of them is black, at that.
March 1, 2025 at 4:29 PM
Incredible! Congratulations and the utmost respect!
January 23, 2025 at 9:10 PM
I'm in the club too 👍😷 Here's to a safe 2025!
December 31, 2024 at 4:43 PM
#Medicine #ArtificialIntelligence #Radiology #DigitalHealth #MedicalAI #Healthcare #ResearchMethodology #PeerReview #AcademicPublishing
December 24, 2024 at 7:32 AM
https://www.bmj.com/content/387/bmj-2024-081948
December 24, 2024 at 7:32 AM
What's your take on maintaining scientific rigor in AI healthcare publications? How can we better evaluate AI capabilities in medicine? 📚 Source: BMJ 2024;387:e081948
December 24, 2024 at 7:32 AM
capabilities ✅ Focus on practical, validated assessment methods for AI in healthcare
December 24, 2024 at 7:32 AM
𝐌𝐨𝐯𝐢𝐧𝐠 𝐅𝐨𝐫𝐰𝐚𝐫𝐝: We need:
✅ More 𝚛̲𝚒̲𝚐̲𝚘̲𝚛̲𝚘̲𝚞̲𝚜̲ ̲𝚙̲𝚎̲𝚎̲𝚛̲ ̲𝚛̲𝚎̲𝚟̲𝚒̲𝚎̲𝚠̲ for AI-related medical research
✅ Clear distinction between human cognition and AI processing
December 24, 2024 at 7:32 AM
limitations. The anthropomorphization of AI systems through human cognitive testing frameworks doesn't advance our understanding of these tools.
December 24, 2024 at 7:32 AM
𝐖𝐡𝐲 𝐓𝐡𝐢𝐬 𝐌𝐚𝐭𝐭𝐞𝐫𝐬: This publication raises concerns about peer review standards in AI healthcare research. The conclusions could mislead healthcare professionals about AI capabilities and
December 24, 2024 at 7:32 AM
🔍 Results simply confirm the expected: newer models perform better across ALL tasks
⚠️ Drawing parallels between human cognitive decline and model versions is scientifically unsound
December 24, 2024 at 7:32 AM
Newer versions are complete replacements, not aged iterations
December 24, 2024 at 7:32 AM
𝐂𝐫𝐢𝐭𝐢𝐜𝐚𝐥 𝐈𝐬𝐬𝐮𝐞𝐬:
🧩 Fundamentally flawed premise: applying human cognitive tests to AI systems lacks scientific validity
📊 The "aging" comparison misrepresents how AI models evolve.
December 24, 2024 at 7:32 AM
⁉️ Authors suggest AI shows "cognitive aging" similar to humans ⁉️
December 24, 2024 at 7:32 AM
🔹 Only ChatGPT-4o achieved a "normal" cognitive score (26/30)
🔹 All models showed difficulties with visuospatial tasks
🔹 "Older" models performed worse than newer versions
December 24, 2024 at 7:32 AM
The authors administered the Montreal Cognitive Assessment (MoCA) to leading AI models (ChatGPT-4, Claude, Gemini), reporting that:
December 24, 2024 at 7:32 AM
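(For readers wondering what "administering the MoCA" to a chatbot might look like in practice, below is a minimal sketch of sending text-based, MoCA-style items to a chat model and keyword-scoring the replies. The ask_model helper, the example items, and the grading are placeholders of my own for illustration; they are not the prompts or scoring protocol of the BMJ paper.)

```python
# Sketch: putting text-based, MoCA-style items to a chat model and tallying a score.
# ask_model is a placeholder for whichever chat API is used; items and the naive
# keyword grading are illustrative only, not the BMJ authors' protocol.

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a chat model and return its text reply."""
    raise NotImplementedError("wire this up to a chat-completions client")

# The real MoCA awards 30 points across domains such as visuospatial/executive,
# naming, attention, language, abstraction, delayed recall and orientation;
# a total of 26 or more is conventionally read as "normal".
ITEMS = [
    {"domain": "attention",   "prompt": "Repeat these digits backwards: 7 4 2.",       "expect": "2 4 7",     "points": 1},
    {"domain": "abstraction", "prompt": "In what way are a train and a bicycle alike?", "expect": "transport", "points": 1},
    {"domain": "orientation", "prompt": "What is today's date?",                        "expect": None,        "points": 1},  # needs manual grading
]

def administer() -> int:
    score = 0
    for item in ITEMS:
        reply = ask_model(item["prompt"]).lower()
        if item["expect"] is not None and item["expect"] in reply:
            score += item["points"]  # crude keyword match; real scoring needs a human rater
    return score                     # compare with the 26/30 cutoff only with great caution
```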
𝐒𝐭𝐮𝐝𝐲 𝐎𝐯𝐞𝐫𝐯𝐢𝐞𝐰:
December 24, 2024 at 7:32 AM