Oisin Mac Aodha
oisinmacaodha.bsky.social
Reader in Computer Vision and Machine Learning @ School of Informatics, University of Edinburgh.
https://homepages.inf.ed.ac.uk/omacaod
The goal of semantic correspondence estimation is to establish semantically meaningful matches across different images of an object. It turns out, however, that recent supervised methods generalize poorly beyond the annotated keypoints seen during training. #NeurIPS2025
December 5, 2025 at 4:05 PM
Today at #NeurIPS2025, Bingchen Zhao will be presenting our new LLM Speedrunning Benchmark, which evaluates an LLM agent's ability to reproduce scientific findings in the context of LLM training.

Stop by poster #3313 from 4:30pm to 7:30pm at #NeurIPS2025 today in San Diego to learn more.
December 5, 2025 at 3:53 PM
Leonie @bossemel.bsky.social will be presenting CleverBirds, our new human visual learning dataset, at #NeurIPS2025 in San Diego today.

Stop by poster #2012 from 11am-2pm to learn more.
December 4, 2025 at 2:15 PM
We will be presenting Elle Miller's (@elle-miller.bsky.social) work on Enhancing Tactile-based Reinforcement Learning for Robotic Control at #NeurIPS2025 in San Diego today.

Stop by poster #2317 from 11am to 2pm PT today to learn more.

Full paper:
arxiv.org/abs/2510.21609
December 3, 2025 at 3:58 PM
Traditional clustering methods aim to group unlabelled data points based on their similarity to each other. However, clustering, in the absence of additional information, is an ill-posed problem as there may be many different, yet equally valid, ways to partition a dataset.
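This ill-posedness is easy to demonstrate with a toy example. The sketch below (entirely hypothetical data, not from the paper) partitions the same four unlabelled points two equally coherent ways, one by size and one by brightness; nothing in the data itself says which grouping is "correct":

```python
# Toy illustration: the same unlabelled data admits multiple equally
# valid partitions, so clustering without extra information is ill-posed.
points = [
    {"size": 1.0, "brightness": 0.1},
    {"size": 1.1, "brightness": 0.9},
    {"size": 5.0, "brightness": 0.2},
    {"size": 5.2, "brightness": 0.8},
]

# Partition 1: group by size (small vs large).
by_size = [
    [p for p in points if p["size"] < 3],
    [p for p in points if p["size"] >= 3],
]

# Partition 2: group by brightness (dark vs bright).
by_brightness = [
    [p for p in points if p["brightness"] < 0.5],
    [p for p in points if p["brightness"] >= 0.5],
]

# Both partitions split the data into two tight, well-separated groups,
# yet they disagree; choosing between them requires side information
# that is absent from the data itself.
```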
November 24, 2025 at 9:05 AM
We have a PhD opportunity (start date Sep 2026) at the University of Edinburgh at the intersection of biodiversity mapping and zoonotic disease prediction.

It is part of the UKRI AI Centre for Doctoral Training in Biomedical Innovation based in the School of Informatics:
ai4bi-cdt.ed.ac.uk
November 21, 2025 at 1:48 PM
Interested in doing a PhD in machine learning at the University of Edinburgh starting Sept 2026?

My group works on topics in vision, machine learning, and AI for climate.

For more information and details on how to get in touch, please check out my website:
homepages.inf.ed.ac.uk/omacaod
October 16, 2025 at 9:15 AM
FS-SINR is efficient. At test time, it can take an arbitrary number of observations (i.e., context locations) as input, along with optional metadata, and generate a predicted range in a single forward pass of the model.
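The single-forward-pass idea can be sketched as follows. This is a minimal illustrative stand-in, not the actual FS-SINR architecture: all names, the mean-pooling of context embeddings, and the readout are assumptions chosen only to show how a variable-length set of observations plus optional metadata can produce range scores in one pass:

```python
import numpy as np

# Illustrative sketch of a feedforward few-shot range predictor:
# a variable-length set of context observations is embedded and pooled
# into a species representation, then query locations are scored in a
# single forward pass. (Hypothetical architecture, not FS-SINR itself.)
rng = np.random.default_rng(0)
D = 16  # embedding dimension

W_loc = rng.normal(size=(2, D))  # location encoder weights (lon, lat -> D)
W_out = rng.normal(size=(D,))    # readout weights

def encode(locs):
    """Embed an (N, 2) array of lon/lat coordinates."""
    return np.tanh(np.asarray(locs) @ W_loc)

def predict_range(context_locs, query_locs, metadata=None):
    """Score query locations given any number of context observations."""
    species = encode(context_locs).mean(axis=0)      # permutation-invariant pooling
    if metadata is not None:                         # optional metadata vector
        species = species + metadata
    scores = encode(query_locs) @ (species * W_out)  # query/species interaction
    return 1.0 / (1.0 + np.exp(-scores))             # presence probabilities

# The same model handles one observation or many, without retraining:
probs_1 = predict_range([[0.1, 0.2]], [[0.1, 0.2], [3.0, -1.0]])
probs_5 = predict_range(rng.normal(size=(5, 2)), [[0.1, 0.2], [3.0, -1.0]])
```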
July 18, 2025 at 12:30 PM
We obtain better performance in the few-shot setting, i.e., where we have very limited observations for a species. On the x-axis of this plot we vary the number of observations provided to each model for a set of different species, and on the y-axis we measure the quality of the range predictions.
July 18, 2025 at 12:30 PM
We observe improved range prediction performance compared to existing methods, e.g., SINR from Cole et al. at ICML 2023 or LE-SINR from Hamilton et al. at NeurIPS 2024.

Top row: Gabar Goshawk
Bottom row: Black-naped Monarch
July 18, 2025 at 12:30 PM
In this example, we see a prediction for FS-SINR using a single presence observation as input, shown as a white dot (left). Conditioning the model with text (e.g., middle and right) can dramatically change the range predictions.
July 18, 2025 at 12:30 PM
FS-SINR can be conditioned on in-situ presence observations for a species not seen during training, in addition to text descriptions of its range or images of the species, if available.
July 18, 2025 at 12:30 PM
This week at #ICML we are presenting our new work titled Feedforward Few-shot Species Range Estimation.

TL;DR:
* Our model, FS-SINR, can estimate a species' range from a few observations
* It does not require any retraining for previously unseen species
* It can integrate text and image information
July 18, 2025 at 12:30 PM
CrossSDF: 3D Reconstruction of Thin Structures From Cross-Sections

We will be presenting our work on thin structure reconstruction at the final poster session (4-6pm) at #CVPR2025 today.

Stop by poster #457 to learn more.
June 15, 2025 at 12:55 PM
DepthCues: Evaluating Monocular Depth Perception in Large Vision Models

Do automated monocular depth estimation methods rely on the same visual cues as humans?

To learn more, stop by poster #405 in the evening session (17:00 to 19:00) today at #CVPR2025.
June 14, 2025 at 12:48 PM
MVSAnywhere: Zero-Shot Multi-View Stereo

Looking for a multi-view stereo depth estimation model which works anywhere, in any scene, with any range of depths?

If so, stop by our poster #81 today in the morning session (10:30 to 12:20) at #CVPR2025.
June 14, 2025 at 12:38 PM
You can find Room 104E on Level 1 (i.e. street level).
June 11, 2025 at 12:56 PM
We have a fantastic line-up of speakers.
June 11, 2025 at 12:56 PM
Come join us for the 12th Workshop on Fine-Grained Visual Categorization (FGVC). Starting today at 9am at @cvprconference.bsky.social.

We will be in room 104E.

#FGVC #CVPR2025
@fgvcworkshop.bsky.social
June 11, 2025 at 12:56 PM
Tomorrow in poster session 2 at #ICLR2025 Neehar (@therealpaneni.bsky.social) will be presenting his work on comparing neural networks via concepts.

Representational Similarity via Interpretable Visual Concepts
arxiv.org/abs/2503.15699
nkondapa.github.io/rsvc-page/
April 23, 2025 at 4:43 PM
We show that even without any ecological fine-tuning the commercial LLMs tested outperform naive baselines. However, they still exhibit significant limitations, particularly in generating spatially accurate range maps and classifying threats.
February 11, 2025 at 5:42 PM
Do Large Language Models (LLMs) possess ecological knowledge?

For example, can they do tasks such as:
(1) predict the presence of species at a location
(2) generate range maps
(3) list critically endangered species
(4) perform threat assessment
(5) estimate species traits
February 11, 2025 at 5:42 PM
Interested in doing a PhD in machine learning at the University of Edinburgh? If so, check out the ML-Systems PhD Programme:
mlsystems.uk

Application deadline is 22nd January (next week).
January 17, 2025 at 5:40 PM
Also today at #NeurIPS2024 Eddie will be presenting our work on fine-grained text-to-image retrieval:

INQUIRE: A Natural World Text-to-Image Retrieval Benchmark

East Exhibit Hall A-C #4510
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
arxiv.org/abs/2411.02537
December 13, 2024 at 11:25 AM
Today at #NeurIPS2024 Max @max-ham.bsky.social will be presenting our work on species range estimation:

Combining Observational Data and Language for Species Range Estimation

East Exhibit Hall A-C #3903
Fri 13 Dec 11 a.m. PST
arxiv.org/abs/2410.10931
December 13, 2024 at 11:23 AM