Adriano D'Alessandro
@adrian-dalessandro.bsky.social
| Computer vision researcher
| Computer science PhD candidate @ SFU
| More: https://dalessandro.dev/

I like to count things, and I periodically work on applications in plant agriculture + ecology.

Follow for stale political hot takes.

Free Palestine 🇵🇸
This is an interesting failure mode. It's obvious with context that the segmented object is a pistachio. Yet it doesn't use that surrounding context and just generates a featureless bean.
November 24, 2025 at 5:22 PM
This one is fun, though! The desired outcome is unclear because the billboard itself is flat, but people are not. It generates a (very oddly shaped) 3D person.
November 24, 2025 at 5:17 PM
I was shocked it was able to generate something reasonable here. The source image is an aerial view of pelicans. It generated something reasonably pelican-esque!
November 24, 2025 at 5:16 PM
I find LLMs help with executive dysfunction. It's easier to start by criticizing and re-writing what an LLM wrote than it is to just start writing myself. It's related to Cunningham's Law: "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."
November 17, 2025 at 7:39 PM
"(30) Detailed license information for datasets is missing; the authors only state that datasets are publicly available but do not specify license types (e.g., MIT, CC BY-SA) or any restrictions on use, raising potential copyright concerns."

The reviewer is REALLY claiming they wrote this?
November 15, 2025 at 5:42 AM
Lol why "rightfully"? What did zombies ever do to the folks at Leger 🧟
November 5, 2025 at 1:55 AM
What a great expression of the sublime
October 24, 2025 at 5:22 PM
This will be useful for crowd counting in very dense scenes where CNNs still seem to dominate. I'm also reminded of an "older" 2021 work on differentiable patch selection for image recognition.
October 23, 2025 at 4:01 PM
it's trivial appeals to "novelty" or "dataset size". It's very easy to be a critic and very little is learned by low effort criticism. It's often much harder to be a champion by default, which encourages more thorough reviews.
October 22, 2025 at 6:21 AM
human reviewer. The AI struggled to find trickier conceptual issues and challenges. Given this, I don't know that it solves the BIG problem: the astronomical number of paper submissions. However, the AI summary was still useful.
October 20, 2025 at 8:40 PM
2. Authors did not seem to engage with the AI feedback in the rebuttal. This might be due to the 2500 character limit for the rebuttal.
3. Through the reviewer discussion period, the most important issues raised were entirely missed by the AI.

So far as I can tell, you simply cannot replace a good
October 20, 2025 at 8:40 PM
Video essays are just easier to listen to in the background. Travel vlogs are more visual and experiential and require more attentiveness to get the full experience.
September 24, 2025 at 4:52 PM
That's interesting! I sometimes use this puzzle as a litmus test for mental rotation in multimodal language models. I didn't expect that they could learn this skill spontaneously, but it's interesting to see the information is there in some models!
September 22, 2025 at 2:53 PM
I just reviewed for AAAI and I wasn't sold on the AI reviewer. It provides a decent breadth but it's a bit shallow. What I think would be superior is an LLM that audits the reviewer's actual review, attempts to identify weak arguments by the reviewer, and tries to get the reviewer to correct that.
September 22, 2025 at 8:17 AM
questions specific to the review, with the goal of improving the overall quality of the review. Perhaps when the reviewer submits, the LLM generates a list of questions that the reviewer should answer before finalizing.
September 22, 2025 at 7:46 AM
One idea is having the reviewers work with an LLM. For example, a reviewer on a paper claimed the work wasn't novel because prior work existed. What prior work? They never said! I ended up pressing them on it because I wanted to champion the paper. An LLM in the loop could probe reviewers with key
September 22, 2025 at 7:46 AM
The total number of submissions could increase while still representing a reduction in the rate of submission growth. So I'm curious to see if EMNLP has such a drastic jump. But this might also just be a new normal, with LLMs accelerating the speed of research, and now there are simply more papers.
September 22, 2025 at 6:53 AM
I disagree with the idea that accepting more papers inherently leads to more papers being submitted. If I submit a paper, I'm not just sitting on my hands. I'm polishing the submitted paper and starting a new paper. If the submitted paper is rejected, I'm sending 2 papers to the next conference.
September 21, 2025 at 10:04 PM
Just accept more papers? I'm curious what % of the total submission volume is borderline/accept papers that are resubmitted with minimal changes.
September 21, 2025 at 5:02 PM
On the other hand, I suppose having access to it would just lead to authors gaming the system.
September 10, 2025 at 7:38 PM