Melissa McCradden
@mdmccradden.bsky.social
Bioethicist specializing in ethical and evidence-based integration of technologies in health care. AI Director, Women’s and Children’s Health Network | THRF Clinical Research Fellow, Australian Institute for Machine Learning. Adelaide, South Australia.
I move for immediate approval to cancel phrases such as “AI pioneers/godfathers,” and all variants
a judge sitting in front of an american flag with the words not interested on her face
November 7, 2025 at 7:35 AM
Oh wow. This is the very definition of facepalm 🤦🏼‍♀️
October 22, 2025 at 3:37 AM
The worst part, though, is that as their costs go up for adding these nonsense features, they inevitably offload it onto us, the consumers. 😡 I’m now looking into old-school options again to get away from this stuff.
October 12, 2025 at 9:21 PM
Not arrogant - 100% accurate. The very fact that these folks can switch up their buzzwords on a dime shows they don’t understand the words in the first place.
October 12, 2025 at 2:34 AM
Reposted by Melissa McCradden
I've yet to see a single discussion of "Canadian digital sovereignty" include any Indigenous experts, at best we're given a token mention that at some point they'll "consult with Indigenous Peoples" like we're some sort of monolith. Umm, y'all need to consider all our lands, and our distinctiveness.
October 11, 2025 at 3:33 PM
Gonna pump my own paper here which is basically saying it’s the wrong problem to focus on for making good decisions in medicine 😅

www.sciencedirect.com/science/arti...
Explaining decisions without explainability? Artificial intelligence and medicolegal accountability
October 10, 2025 at 1:34 AM
We’ve also pre-registered our study on OSF to encourage transparency and sharing with the scientific community

osf.io/p6tm5
October 2, 2025 at 12:01 AM
This shared accountability is important for limiting the expanding problem of moral crumple zones, per:

estsjournal.org/index.php/es...
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction | Engaging Science, Technology, and Society
October 2, 2025 at 12:01 AM
This is a feasible, generalizable approach to evaluating commercial scribe tools: making an informed decision about where they fit (and don’t fit) & building clinical procedures that share accountability between physicians and their health institution.
October 2, 2025 at 12:01 AM
I like the argument glimpsed in @abeba.bsky.social’s work: we can oppose using robots in this way without the anthropomorphising that is dehumanising. Curious, @abeba.bsky.social, if you’re expanding some of the argumentation in section 2.3 of the paper?

arxiv.org/abs/2001.05046
Robot Rights? Let's Talk about Human Welfare Instead
The 'robot rights' debate, and its related question of 'robot responsibility', invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with h...
September 23, 2025 at 7:17 AM
Do you know what I find wild tho? Nearly a decade of being an ethicist - raising concerns, questions, challenges in a plethora of contexts has been my job. This is different. I’ve never felt so at risk personally for doing this as I do for LLMs. People get personally offended, get nasty even.
September 22, 2025 at 10:41 PM
“is it just everybody is scared to say this and pleased I did?”

^^^ YES, we are!!!
September 22, 2025 at 8:29 AM
I think it’s kinda fascinating to see the epistemic struggle in this write-up between the beliefs espoused about LLM “capabilities” versus the considerations re ethics violations… 🤷🏼‍♀️
August 27, 2025 at 12:08 AM
I hate that I believe you are 100% correct on this prediction 🫠
August 24, 2025 at 1:48 AM
Always stunning how the scientific standards for study designs, and the claims one can make from them, get thrown out the window when it involves AI. Any study that is not longitudinal is meaningless, given we are now seeing the long-term effects of AI use: worse learning and performance.
August 11, 2025 at 1:01 AM
Please get in touch if you do or are interested in the silent evaluation phase - we want to hear from you!
August 6, 2025 at 2:01 AM
We are also SO excited to release our preprint for our scoping review reporting on current #silenttrial practices for #HealthAI

@lanatikhomirov.bsky.social did an amazing job leading this work all the way through ❤️

osf.io/preprints/os...
August 6, 2025 at 2:01 AM