Cuong Nguyen
@probabilita.bsky.social
Lecturer at Centre for Vision, Speech and Signal Processing, University of Surrey
Our implementation in JAX can be found on GitHub at github.com/cnguyen10/pl2d
February 21, 2025 at 11:26 AM
This research is a part of the PecMan project (sites.google.com/view/pecmanp...) funded by EPSRC - UKRI, led by Professor Gustavo Carneiro from CVSSP - University of Surrey, and in collaboration with Dr Toan Do from @monashuniversity.bsky.social
February 21, 2025 at 11:26 AM
A workload constraint is also integrated into the formulation, allowing the system to distribute workload evenly across all experts (otherwise, the system would learn to defer all samples to the best team member, causing unfairness and burnout).
February 21, 2025 at 11:26 AM
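One common way to encode such a workload constraint is to penalise the gap between the batch-average deferral distribution and a uniform distribution over experts. This is a hypothetical sketch in JAX, not the repo's actual loss; the function name and the choice of a KL penalty are my assumptions.

```python
# Hypothetical workload-balancing penalty, assuming the deferral network
# outputs a per-sample probability distribution over experts.
import jax.numpy as jnp


def workload_penalty(gating_probs: jnp.ndarray) -> jnp.ndarray:
    """KL divergence between the batch-average deferral distribution
    and the uniform distribution over experts.

    gating_probs: (batch, num_experts), each row sums to 1.
    Returns 0 when workload is perfectly balanced, grows as
    deferrals concentrate on a few experts.
    """
    avg = gating_probs.mean(axis=0)  # expected workload per expert
    uniform = jnp.full_like(avg, 1.0 / avg.shape[0])
    return jnp.sum(avg * (jnp.log(avg) - jnp.log(uniform)))
```

Adding a term like `lambda_w * workload_penalty(gating_probs)` to the training objective discourages the collapse where every sample is deferred to the single best expert.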
This paper addresses the problem of missing annotations made by human experts, meaning that each human expert annotates only a part of the training dataset.
February 21, 2025 at 11:26 AM
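A standard way to train under such missing annotations is to attach a binary mask to each expert's labels and average each expert's loss only over the samples that expert actually annotated. This is an illustrative sketch in JAX under that assumption; the function name and signature are hypothetical, not taken from the paper or the repo.

```python
# Hypothetical masked loss for one expert, assuming each expert's labels
# come with a binary mask marking which samples they annotated.
import jax.numpy as jnp


def masked_expert_loss(log_probs: jnp.ndarray,
                       expert_labels: jnp.ndarray,
                       mask: jnp.ndarray) -> jnp.ndarray:
    """Cross-entropy against one expert's labels, averaged only over
    the samples that expert annotated.

    log_probs:     (batch, num_classes) model log-probabilities
    expert_labels: (batch,) integer labels (arbitrary where mask == 0)
    mask:          (batch,) 1 where the expert annotated the sample, else 0
    """
    # negative log-likelihood of each expert-provided label
    nll = -jnp.take_along_axis(log_probs, expert_labels[:, None], axis=1)[:, 0]
    # zero out unannotated samples; guard against an all-zero mask
    return jnp.sum(nll * mask) / jnp.maximum(jnp.sum(mask), 1.0)
```

Summing this term over all experts lets each expert contribute gradients only on the subset of the training set they labelled.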