mehmetaygun.bsky.social
@mehmetaygun.bsky.social
Computer Vision/Machine Learning PhD student at The University of Edinburgh - mehmetaygun.github.io
Reposted by mehmetaygun.bsky.social
Interested in doing a PhD in machine learning at the University of Edinburgh starting Sept 2026?

My group works on topics in vision, machine learning, and AI for climate.

For more information and details on how to get in touch, please check out my website:
homepages.inf.ed.ac.uk/omacaod
October 16, 2025 at 9:15 AM
Reposted by mehmetaygun.bsky.social
🌍 Excited to announce our Workshop on AI for Climate & Conservation (AICC) at #EurIPS2025 in Copenhagen! 🎉

📢 Call for Participation: sites.google.com/g.harvard.ed...

Confirmed speakers from Mistral AI, DeepMind, ETH Zurich, LSCE & more.

Looking forward to meeting and discussing in Copenhagen!
September 19, 2025 at 10:37 AM
Reposted by mehmetaygun.bsky.social
DepthCues: Evaluating Monocular Depth Perception in Large Vision Models

Do automated monocular depth estimation methods use visual cues similar to those used by humans?

To learn more, stop by poster #405 in the evening session (17:00 to 19:00) today at #CVPR2025.
June 14, 2025 at 12:48 PM
Reposted by mehmetaygun.bsky.social
Come join us for the 12th Workshop on Fine-Grained Visual Categorization (FGVC). Starting today at 9am at @cvprconference.bsky.social.

We will be in room 104E.

#FGVC #CVPR2025
@fgvcworkshop.bsky.social
June 11, 2025 at 12:56 PM
Reposted by mehmetaygun.bsky.social
DepthCues: a benchmark for evaluating the monocular depth perception of large vision models via linear probing.
It consists of six depth-related tasks that assess sensitivity to different visual cues.
They test 20 pretrained models.
As expected, DAv2, DUSt3R, and DINOv2 do well, but SigLIP is not bad either.
danier97.github.io/depthcues/
November 28, 2024 at 12:12 AM
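The linear-probing recipe in the post above is simple enough to sketch: freeze a pretrained backbone, extract features, and train only a linear head on a depth-cue task. A minimal sketch assuming a DINOv2 backbone and an illustrative binary closer/farther task; this is not the benchmark's actual interface:

```python
# Minimal linear-probe sketch: frozen pretrained encoder + trainable linear head.
# The backbone choice and the binary "which point is closer" framing are
# illustrative assumptions, not DepthCues' actual API.
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # only the probe is trained

probe = nn.Linear(768, 1)  # ViT-B/14 feature dim -> one logit
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images, labels):
    # images: (B, 3, H, W) with H, W multiples of 14; labels: (B,) in {0, 1}
    with torch.no_grad():
        feats = backbone(images)  # (B, 768) pooled features
    logits = probe(feats).squeeze(-1)
    loss = loss_fn(logits, labels.float())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```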
Reposted by mehmetaygun.bsky.social
The submission deadline has been extended to **March 07**.
The CMT page is now online, and we look forward to your submissions on AI systems for expert tasks and fine-grained analysis.
More info at: sites.google.com/view/fgvc12/...

#CVPR #CVPR2025 #AI
@cvprconference.bsky.social
FGVC12 Workshop accepted to CVPR 2025, Nashville!
CALL FOR PAPERS: sites.google.com/view/fgvc12/...
We discuss domains where expert knowledge is typically required and investigate artificial systems that can efficiently distinguish a large number of very similar visual concepts.
#CVPR #CVPR2025 #AI
March 3, 2025 at 3:36 PM
Reposted by mehmetaygun.bsky.social
Reminder that the deadlines for submitting papers to the FGVC workshop at #CVPR2025 are coming up soon.

The scope of the workshop is quite broad, e.g. fine-grained learning, multi-modal learning, human-in-the-loop methods, etc.

More info here:
sites.google.com/view/fgvc12/...

@cvprconference.bsky.social
FGVC12 Workshop accepted to CVPR 2025, Nashville!
CALL FOR PAPERS: sites.google.com/view/fgvc12/...
We discuss domains where expert knowledge is typically required and investigate artificial systems that can efficiently distinguish a large number of very similar visual concepts.
#CVPR #CVPR2025 #AI
March 1, 2025 at 9:48 AM
Reposted by mehmetaygun.bsky.social
Amazing achievement from iNaturalist!

Also really proud that our SINR species range estimation models serve as the underlying technology for this.

Spatial Implicit Neural Representations for Global-Scale Species Mapping
arxiv.org/abs/2306.02564
🎉 Celebrating 100,000 Modeled Taxa in the iNaturalist Open Range Map Dataset!

To mark this milestone, we're making model-generated distribution data even more accessible. Explore, analyze, and use this data to power biodiversity research! 🌍🔍
www.inaturalist.org/posts/106918
Celebrating 100,000 Modeled Taxa with the iNaturalist Open Range Map Dataset
A Major Milestone for Biodiversity Mapping: We passed a major milestone with today's update to iNaturalist's Computer Vision and Geomodel - 100,000 modeled taxa! To mark this achievement, we're excited t...
www.inaturalist.org
February 26, 2025 at 8:47 AM
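The SINR paper linked above treats a species range map as an implicit function: a small network maps a geographic coordinate to presence probabilities for many species at once. A rough sketch of that idea; the sin/cos coordinate encoding follows the paper, but the layer sizes here are illustrative:

```python
# Sketch of a spatial implicit neural representation for range mapping:
# (lon, lat) -> per-species presence probability. Sizes are illustrative.
import torch
import torch.nn as nn

class SpeciesRangeModel(nn.Module):
    def __init__(self, num_species=100_000, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_species),
        )

    def forward(self, lonlat):
        # Encode coordinates periodically so the input space is smooth
        # and wraps correctly at the dateline.
        x = torch.deg2rad(lonlat)
        enc = torch.cat([torch.sin(x), torch.cos(x)], dim=-1)  # (B, 4)
        return torch.sigmoid(self.mlp(enc))  # (B, num_species)

model = SpeciesRangeModel()
probs = model(torch.tensor([[-3.19, 55.95]]))  # query at Edinburgh (lon, lat)
```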
Reposted by mehmetaygun.bsky.social
Prof. Gabriel Brostow at University College London has some really cool-sounding computer vision PhD openings starting mid to late 2025.

Gabe is a fantastic PhD advisor!
Interested in doing a PhD in Computer Vision Super-Tools in my group at UCL? If you count as a "UK Home" resident, see my PhD student vacancies here: www0.cs.ucl.ac.uk/staff/G.Bros....
Gabriel Brostow Homepage at UCL University College London
Professor of Computer Vision/AI and Chief Research Scientist
www0.cs.ucl.ac.uk
January 20, 2025 at 9:35 AM
Reposted by mehmetaygun.bsky.social
Interested in doing a PhD in machine learning at the University of Edinburgh? If so, check out the ML-Systems PhD Programme:
mlsystems.uk

Application deadline is 22nd January (next week).
January 17, 2025 at 5:40 PM
Reposted by mehmetaygun.bsky.social
FGVC12 Workshop accepted to CVPR 2025, Nashville!
CALL FOR PAPERS: sites.google.com/view/fgvc12/...
We discuss domains where expert knowledge is typically required and investigate artificial systems that can efficiently distinguish a large number of very similar visual concepts.
#CVPR #CVPR2025 #AI
January 9, 2025 at 5:06 PM
Reposted by mehmetaygun.bsky.social
Interested in doing a PhD in Computer Vision Super-Tools in my group at UCL? If you count as a "UK Home" resident, see my PhD student vacancies here: www0.cs.ucl.ac.uk/staff/G.Bros....
Gabriel Brostow Homepage at UCL University College London
Professor of Computer Vision/AI and Chief Research Scientist
www0.cs.ucl.ac.uk
January 5, 2025 at 8:16 PM
Reposted by mehmetaygun.bsky.social
Interested in species distribution modelling? If so, check out our recent work introducing a new model that can estimate the spatial range of a species from just a text description.

Max Hamilton @max-ham.bsky.social will be at #NeurIPS this week to present it.
❓How can we predict where a species may be found when observations are limited?

✨Introducing Le-SINR: A text-to-range-map model that can enable scientists to produce more accurate range maps with fewer observations.

Thread 🧵
December 9, 2024 at 3:25 PM
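One plausible way to read the text-to-range idea: a frozen text encoder turns the species description into an embedding, which is combined with a SINR-style location feature to score presence anywhere on the globe. A hypothetical sketch under those assumptions; names and sizes are not from the paper's code:

```python
# Hypothetical text-to-range sketch: dot a text-derived species vector with
# a SINR-style location feature. All names and dimensions are assumptions.
import torch
import torch.nn as nn

class TextToRange(nn.Module):
    def __init__(self, text_dim=512, loc_dim=256):
        super().__init__()
        self.loc_net = nn.Sequential(  # location -> feature, as in SINR
            nn.Linear(4, loc_dim), nn.ReLU(),
            nn.Linear(loc_dim, loc_dim),
        )
        self.text_proj = nn.Linear(text_dim, loc_dim)  # text emb -> species weights

    def forward(self, lonlat, text_emb):
        x = torch.deg2rad(lonlat)
        enc = torch.cat([torch.sin(x), torch.cos(x)], dim=-1)  # (B, 4)
        loc_feat = self.loc_net(enc)          # (B, loc_dim)
        w = self.text_proj(text_emb)          # (1, loc_dim) for one species
        return torch.sigmoid((loc_feat * w).sum(-1))  # (B,) presence probs
```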
Reposted by mehmetaygun.bsky.social
Check out Eddie and Omiros' work introducing our new vision-language retrieval dataset INQUIRE, which will be presented at #NeurIPS2024 next week.
🎯 How can we empower scientific discovery in millions of nature photos?

Introducing INQUIRE: A benchmark testing if AI vision-language models can help scientists find biodiversity patterns - from disease symptoms to rare behaviors - hidden in vast image collections.

Thread👇🧵
December 6, 2024 at 10:20 PM
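For context, the kind of query INQUIRE evaluates is text-to-image retrieval over a large collection. A stand-in sketch using open_clip (not the benchmark's own loaders or metrics):

```python
# Stand-in retrieval sketch: rank images against a natural-language query
# with a CLIP-style model via open_clip. Not INQUIRE's actual evaluation code.
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

@torch.no_grad()
def rank_images(query, pil_images):
    t = model.encode_text(tokenizer([query]))
    t = t / t.norm(dim=-1, keepdim=True)                  # (1, 512)
    feats = torch.cat([model.encode_image(preprocess(im).unsqueeze(0))
                       for im in pil_images])             # (N, 512)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    scores = (feats @ t.T).squeeze(-1)                    # cosine similarity
    return scores.argsort(descending=True)                # best matches first
```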