Lana Tikhomirov
@lanatikhomirov.bsky.social
PhD Student at @aimlofficial. Using cognitive psychology to understand and safeguard clinician-AI decision-making 🥼🩻. She/her. AI ethics, human factors, algorithmic safety.
Pinned
Hey everyone! Thought I'd introduce myself to any new followers here. I'm a PhD student at the University of Adelaide with a background in cognitive psychology, specialising in high-risk decision-making with AI algorithms. I also use perspectives from bioethics and algorithmic safety in my work.
My jaw is not on the floor! 🙄
A researcher found over 130,000 chats with Claude, Grok, and ChatGPT archived publicly. This highlights how user-enabled sharing settings can lead to private conversations being indexed and stored on services like the Internet Archive.
#MLSky
More than 130,000 Claude, Grok, ChatGPT, and Other LLM Chats Readable on Archive.org
The issue of publicly saving shared LLM chats is bigger than just Google.
www.404media.co
August 9, 2025 at 9:13 AM
Our main findings from the scoping review are:
1. Most silent trials do not report model metrics outside of AUC (rarely reporting bias testing, failure modes, and data drift)
2. The evaluator of the silent trial is often unnamed or underspecified; human factors and stakeholder engagement are rare
Our preprint is up! Ever heard of the silent phase of AI evaluation for medical AI? Well, now we’ve summarised the current state of research!
We are also SO excited to release our preprint for our scoping review reporting on current #silenttrial practices for #HealthAI

@lanatikhomirov.bsky.social did an amazing job leading this work all the way through ❤️

osf.io/preprints/os...
August 6, 2025 at 5:39 AM
Interested in AI from the perspective of cognitive science? Listen to Nesh Nikolic from Strategic Psychology interview me on the Better Thinking Podcast as we discuss AI and Psychology! neshnikolic.com/podcast/lana...
April 30, 2025 at 2:30 AM
Reposted by Lana Tikhomirov
Funny to see Western AI executives say the hype around DeepSeek is exaggerated when their entire industry is built on exaggeration of the capabilities and possible returns of Western generative AI.
Deepseek’s AI model is ‘the best work’ out of China but the hype is 'exaggerated,' Google Deepmind CEO says
Deepseek's AI model "is probably the best work" out of China, Demis Hassabis said on Sunday, but added it was not a scientific advancement.
www.cnbc.com
February 9, 2025 at 10:14 PM
Reposted by Lana Tikhomirov
openAI’s data = data they illegally harvested from each and every one of us without consent, awareness or compensation
January 29, 2025 at 5:53 PM
Reposted by Lana Tikhomirov
moar memes
January 29, 2025 at 7:10 PM
Reposted by Lana Tikhomirov
i can’t fix anything that’s going on right now, but i can show you some beauty that’s out there… 🌌✨
January 28, 2025 at 9:19 PM
Reposted by Lana Tikhomirov
this fucking sucks
January 28, 2025 at 7:21 PM
Reposted by Lana Tikhomirov
Holy moly. I'm trying to write an academic paper, and nearly every application I'm using is not only offering Generative AI as an option for writing, but *pushing it* -- pervading the design to the point where a simple misclick would make my content AI-generated. Here's why that's a problem. 🧵
January 27, 2025 at 6:38 PM
Reposted by Lana Tikhomirov
The fact that Deepseek R1 was released three days /before/ Stargate means these guys stood in front of Trump and said they needed half a trillion dollars while they knew R1 was open source and trained for $5M.

Beautiful.
January 28, 2025 at 3:02 AM
So happy to be a part of this project. 🐥
CANAIRI - the Collaboration for Translational AI Trials - takes flight! 🪽
We will work together to define how to take a #sociotechnical approach to responsible #AI integration in healthcare. Spearheaded by my awesome co-lead @mdmccradden.bsky.social 🦜
nature.com/articles/s41...
CANAIRI: the Collaboration for Translational Artificial Intelligence Trials in healthcare - Nature Medicine
Nature Medicine - CANAIRI: the Collaboration for Translational Artificial Intelligence Trials in healthcare
nature.com
January 8, 2025 at 10:32 PM
Reposted by Lana Tikhomirov
Mark Zuckerberg: I built this platform to give the people a voice
Me: okay but actually you built it to rate how hot the women at your school were
January 7, 2025 at 5:09 PM
Reposted by Lana Tikhomirov
Abstract submissions for the Global Indigenous Data Sovereignty (GIDSov) 2025 conference close on January 17th!

See the conference website for themes and submission details

gidsov.com.au/abstracts

#IDSov #IDGov
January 7, 2025 at 4:09 AM
Reposted by Lana Tikhomirov
Huge thanks to @aimlofficial.bsky.social and CIHR for their generous funding support!
January 6, 2025 at 10:37 PM
Reposted by Lana Tikhomirov
Lesley-Anne Farmer Bobby Greer Anna Goldenberg Yvonne Ho @shalmalijoshi.bsky.social Jennie Louise Muhammad Mamdani Abdu Mohamud Lyle Palmer Antonios Peperidis Stephen Pfohl Mandy Rickard Carolyn Semmler @kdpsingh.bsky.social Devin Singh Seyi Soremekun @lanatikhomirov.bsky.social
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
Super grateful to our amazing steering team, our patient/consumer partners, and our partners at the Aboriginal Health Unit at Women’s and Children’s. 🙏🏻

@unityofvirtue.bsky.social Judy Gichoya Mark Sendak Lauren Erdman @istedman.bsky.social Lauren Oakden-Rayner Ismail Akrout James Anderson
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
So reach out to me and @xiaoliu.bsky.social if you want to get involved with #CANAIRI to build the practices around translational trials for better AI translation!
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
We propose that ethical governance for health institutions should be grounded in these local evaluations (not just AI vibes 😎) to ensure that when we say a tool ‘works,’ it means it works for US, for OUR patients, for OUR staff, and we have the evidence to say that.
ALT: a woman is standing on a balcony holding a piece of paper and saying "I have the receipts."
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
What does this look like?

It will be different for each tool, depending on many things including how much work has been done before on similar tools, how different the local context is, etc.

How do we decide? That’s what we want to figure out, and we need your help!
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
Fairness evaluations, human factors, cognitive science, patient engagement, environmental considerations (+++ to this one!), economics, and more ➡️ taking this global perspective from the get-go will, we think, reduce wasteful translation and optimize the benefit!
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
Many know this already. But we can be doing better with how silent trials are practiced. It’s not just about the model - so we introduce ‘translational trial’ to signal the need for a more holistic evaluation during this silent stage, recognizing that AI is a sociotechnical tool.
ALT: a man in a suit is standing in a doorway and saying "what you're thinking but way more."
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
📣 CANAIRI: the Collaboration for Translational AI Trials! Co-lead @xiaoliu.bsky.social @naturemedicine.bsky.social

Perhaps most important to AI translation is the local silent trial. Ethically, and from an evidentiary perspective, this is essential!

url.au.m.mimecastprotect.com/s/pQSsClx14m...
url.au.m.mimecastprotect.com
January 6, 2025 at 10:36 PM
Reposted by Lana Tikhomirov
"When trying to develop a measure of intelligence, it’s essential to avoid Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” As a community of AI researchers, we really need to figure that one out." @melaniemitchell.bsky.social
aiguide.substack.com/p/did-openai...
Did OpenAI Just Solve Abstract Reasoning?
OpenAI’s o3 model aces the "Abstraction and Reasoning Corpus" — but what does it mean?
aiguide.substack.com
December 23, 2024 at 9:32 AM