Joe Alderman
jaldmn.bsky.social
NIHR clinical lecturer in AI & digital health | Anaesthesia & critical care doctor | Triathlete (kinda).
https://www.birmingham.ac.uk/staff/profiles/inflammation-ageing/alderman-joseph
Agree. Really important in general, and particularly for healthcare models

www.bmj.com/content/388/...
Uncertainty of risk estimates from clinical prediction models: rationale, challenges, and approaches
March 13, 2025 at 5:58 PM
December 18, 2024 at 5:38 PM
@unisouthampton.bsky.social @who.int @moorfieldsbrc.bsky.social

Special thanks to our funders & supporters: The NHS AI Lab, The Health Foundation and the NIHR @healthfoundation.bsky.social @nihr.bsky.social

/end.
December 18, 2024 at 5:35 PM
Last thing to say is an enormous THANK YOU to all who have contributed their time, energy and expertise to this work.

Thanks for STANDING Together with us these last few years 🥹

(@ing a few people below, but I don't have everyone added on BSky. Sorry if I missed anyone out)

12/
December 18, 2024 at 5:35 PM
We hope STANDING Together helps everyone across the AI development lifecycle to make thoughtful choices about the way they use data, reducing the risk that biases in datasets feed through to biases in algorithms and downstream patient harm.

10/
December 18, 2024 at 5:35 PM
These recommendations are the culmination of nearly 3 years of work by an international group of researchers, healthcare professionals, policy experts, funders, medical device regulators, AI/ML developers, and many more besides.

9/
December 18, 2024 at 5:35 PM
STANDING Together = STANdards for data Diversity, INclusivity and Generalisability.

We have worked with >350 stakeholders from 58 countries to agree a set of recommendations to improve the documentation and use of health datasets.

8/
December 18, 2024 at 5:35 PM
Key point: there is (probably) no such thing as a perfect dataset!

Knowing a dataset's limitations is not a negative - it is actually a positive, because steps can then be taken to mitigate any issues. Not knowing about issues ≠ there being no issues...

7/
December 18, 2024 at 5:35 PM
Those using datasets should carefully appraise the suitability of the dataset for their purpose, and consider how they might mitigate any biases or limitations contained within.

6/
December 18, 2024 at 5:35 PM
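One way the kind of appraisal described in 6/ might look in practice is a simple representation check against a reference population. This is purely an illustrative sketch, not part of the STANDING Together recommendations: the attribute names, age bands, and the 50% tolerance threshold below are all invented for the example.

```python
from collections import Counter

def flag_underrepresented(records, attribute, reference_shares, tolerance=0.5):
    """Flag groups whose share of the dataset falls below
    `tolerance` x their share of a reference population.
    Illustrative only: the threshold is arbitrary, and real
    appraisal needs far more than a head count."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if share < tolerance * ref_share:
            flags[group] = {"dataset_share": round(share, 3),
                            "reference_share": ref_share}
    return flags

# Hypothetical example: age bands in a critical-care dataset
records = ([{"age_band": "18-39"}] * 10
           + [{"age_band": "40-64"}] * 60
           + [{"age_band": "65+"}] * 30)
reference = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}
print(flag_underrepresented(records, "age_band", reference))
# → {'18-39': {'dataset_share': 0.1, 'reference_share': 0.35}}
```

Here the 18-39 band makes up 10% of the dataset against a 35% reference share, so it is flagged; a developer could then weigh whether that gap matters for their intended use.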
To prevent this happening, it's really important that those creating datasets also supply documentation. This should transparently explain what the dataset contains, and describe any limitations or related issues which those using the data should be aware of.

5/
December 18, 2024 at 5:35 PM
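In minimal machine-readable form, dataset documentation of the kind 5/ describes might look something like the sketch below. Every field name and value here is invented for illustration; this is not the STANDING Together schema or any real dataset.

```python
# Hypothetical dataset documentation record (fields invented for illustration)
dataset_documentation = {
    "name": "example-icu-vitals",
    "description": "Vital-sign time series from adult ICU admissions",
    "collection_period": "2015-2020",
    "setting": "Single tertiary hospital, UK",
    "known_limitations": [
        "Patients under 18 excluded",
        "Ethnicity missing for ~20% of records",
        "Device-specific artefacts in some SpO2 readings",
    ],
    "intended_uses": ["Model development with external validation"],
    "cautioned_uses": ["Deployment outside comparable ICU settings"],
}

print(dataset_documentation["known_limitations"])
```

The point is not the exact fields but that limitations are stated explicitly, so anyone reusing the data can judge fitness for their own purpose.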
There are lots of reasons why algorithms can be biased. One key driver is the data used to develop or evaluate them.

Biases in data can pass along the chain and drive biases in algorithms, leading to downstream issues which can be hard to predict in advance.

4/
December 18, 2024 at 5:35 PM
BUT: these benefits are not guaranteed. In fact, there is growing evidence that medical AI works better for certain groups than others. This may contribute to health inequity and cause patients harm.

3/
December 18, 2024 at 5:35 PM
The world of medical artificial intelligence is moving at a remarkable pace, with a dizzying range of AI/ML tools already available for use in patients' care today.

These tools are undoubtedly cool, and have great potential to improve health!

2/
December 18, 2024 at 5:35 PM