Emily A-Izzeddin
@emilya-izzeddin.bsky.social
Postdoccing with the Flemingos at JLU Giessen
Doing vision things 👀🦩🪼🦘
(she/her)
As this is my final PhD paper to be published, I want to give a special shout-out to my supervisors, @willjharrison.bsky.social and Jason. There’s no way to do them justice in 300 characters, so I’ve attached an excerpt from my thesis acknowledgements that still rings true today.

11/11
October 8, 2025 at 7:13 AM
As always, I couldn't have done it without the team: @tsawallis.bsky.social, Jason Mattingley, and @willjharrison.bsky.social. This project (almost) never felt like work and was (mostly) pure joy — in no small part because of them.

10/11
Overall, our results suggest that judgements about naturalistic stimuli are well predicted by very simple features, with no need to invoke more complex visual properties.

9/11
To be clear, we're not suggesting that these are the two magic features that explain all judgements for naturalistic images. We by no means conducted an exhaustive investigation of all possible predictors, and welcome the possibility that others could be just as useful, or even more so.

8/11
If we distort the patches (e.g., by thresholding pixel values or reducing the patch to edges), we can still predict participants' responses.

In this case, after heavily altering the pixel values, only the structural similarity predictor remained significant.
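For illustration, here is a minimal sketch of the kinds of distortions described above. The threshold level and the gradient-magnitude edge method are assumptions for the example; the paper's exact operations aren't specified in this thread:

```python
import numpy as np

def threshold_patch(patch, level=0.5):
    """Binarise pixel values, heavily distorting the raw luminances."""
    return (np.asarray(patch, dtype=float) >= level).astype(float)

def edge_patch(patch):
    """Reduce a patch to edges via gradient magnitude (an assumed
    stand-in for whatever edge extraction the paper used)."""
    gy, gx = np.gradient(np.asarray(patch, dtype=float))
    return np.hypot(gx, gy)

rng = np.random.default_rng(1)
patch = rng.random((32, 32))
binary = threshold_patch(patch)  # only 0s and 1s survive
edges = edge_patch(patch)        # local luminance changes survive
```

The point of distortions like these is that they destroy some image information (exact luminances, or everything but contours) while leaving other structure intact, so you can ask which predictors still track behaviour.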

7/11
We computed similarity using two metrics, comparing the standard to the target and foil separately:

1. Pixel-wise luminance RMS error - how well matched the luminance values are

2. Phase-invariant structural similarity between the patches - how well matched the amplitude spectra are
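As a rough sketch of the two metrics (assuming grayscale patches and RMS-style comparisons; the paper's exact formulas may differ):

```python
import numpy as np

def rms_luminance_error(a, b):
    """1. Pixel-wise luminance RMS error between two equal-sized patches."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))

def amplitude_spectrum_error(a, b):
    """2. Phase-invariant comparison: RMS error between amplitude spectra,
    which discards where structure sits in the patch and keeps only how
    much energy each orientation/spatial frequency carries."""
    amp_a = np.abs(np.fft.fft2(np.asarray(a, dtype=float)))
    amp_b = np.abs(np.fft.fft2(np.asarray(b, dtype=float)))
    return np.sqrt(np.mean((amp_a - amp_b) ** 2))

rng = np.random.default_rng(0)
standard = rng.random((64, 64))
target = standard + 0.05 * rng.standard_normal((64, 64))  # resembles standard
foil = rng.random((64, 64))                               # unrelated patch

# A lower error means the patch is a better match to the standard.
assert rms_luminance_error(standard, target) < rms_luminance_error(standard, foil)
```

Computing each metric for the standard-vs-target and standard-vs-foil pairs separately gives the predictors that can then be entered into a choice model.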

6/11
We fit a GLMM to participants' responses and found they could be explained by assuming participants selected the patch most similar to the standard.

Here, average participant responses are the datapoints and solid lines show the GLMM predictions.
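The paper fit a GLMM (with random effects across participants); stripped to a fixed-effects sketch, the choice rule amounts to a logistic function of how much more similar the target is to the standard than the foil. The coefficients below are illustrative, not fitted values:

```python
import numpy as np

def p_choose_target(similarity_advantage, beta0=0.0, beta1=2.0):
    """Probability of picking the target under a logistic (GLM) link,
    given the target's similarity advantage over the foil.
    beta0/beta1 are made-up values for illustration."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * similarity_advantage)))

# No advantage -> chance performance; a large advantage -> near-certain choice.
assert p_choose_target(0.0) == 0.5
assert p_choose_target(3.0) > p_choose_target(1.0) > 0.5
```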

5/11
Oh, and the foil was always taken from the same spatial location as the target, but from its own photo.

4/11
The standard patch was always taken from the centre of a broader photo, with the target taken from one of 33 possible locations that varied in distance and azimuth offset from the standard.

Note: participants were never told about these conditions and never saw the full photos

3/11
We had participants tell us which of two image patches they thought was more likely to belong to the same scene as a preceding standard.

One patch (the target) always came from the same broader photograph as the standard, and the other (the foil) came from an entirely different photograph.

2/11
Thanks for reading if you've made it this far! I've had to skip a lot of the details, so if you're interested in learning more, feel free to have a read of the paper - here's the link again:

www.nature.com/articles/s41...

10/10
Investigating orientation adaptation following naturalistic film viewing - Scientific Reports
September 29, 2025 at 8:27 AM
Massive shout out to the team: @reubenrideaux.bsky.social, Jason Mattingley & @willjharrison.bsky.social.

This project was the problem child of my PhD and has frequently come face to face with abandonment. Its publication truly wouldn't have been possible without their unwavering support.

9/10
Overall, we were pretty surprised, and we discuss some potential reasons for our basically-null result in the paper. Ultimately, however, we feel our results strongly demonstrate the need for more thorough exploration of adaptation under naturalistic viewing conditions.

8/10
Overall biases didn't shift significantly in response to the adaptors. Remember the standard white bar shown in post 3? Its orientation relative to the adaptor is plotted on the x-axis. We also looked at whether the biases changed over the course of the session - short answer: nope.

7/10
After collecting a bunch of data (and then re-collecting it after finding an error), counterbalancing the combinations of adaptor orientations, subtracting out individuals’ baseline biases, and fitting the data with a GLMM, we found… not a whole lot…

6/10
For Session 3, participants saw the second half of Casablanca with a different adaptor orientation to what they experienced in the first session. Across sessions, each participant saw one cardinal and one oblique adaptor. Otherwise, the clip/trial structure was the same as session 2.

5/10
In session 2, participants saw the first half of Casablanca - we assigned them to experience one of the four potential adaptor orientations above for that session. The movie was shown in 30 second clips, separated by 5 trials of the same perceptual task as in session 1.

4/10
The study itself had participants come in for three sessions - in the first, we just got their baseline performance at our perceptual task: is the central grating tilted to the right or left of the peripheral standard white bar?

3/10
The movie itself was filtered frame by frame to have contrast at a specified adaptor orientation at low spatial frequencies. We completed this process four times, so we could test different adaptor orientations (0, 45, 90, and 135).
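Frame-wise orientation filtering like this is typically done in the Fourier domain. Here's a minimal sketch; the orientation bandwidth, falloff shape, and low-spatial-frequency cutoff are all assumptions for illustration, not the paper's actual parameters:

```python
import numpy as np

def filter_orientation(frame, adaptor_deg, ori_bw_deg=20.0, sf_cutoff=0.1):
    """Keep Fourier energy near one orientation and at low spatial
    frequencies (all parameters illustrative)."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation of each frequency component; orientations wrap at 180 deg.
    theta = np.rad2deg(np.arctan2(fy, fx))
    d = np.abs((theta - adaptor_deg + 90.0) % 180.0 - 90.0)
    ori_mask = np.exp(-(d ** 2) / (2.0 * ori_bw_deg ** 2))
    sf_mask = (np.hypot(fx, fy) <= sf_cutoff).astype(float)  # low-pass
    mask = ori_mask * sf_mask
    mask[0, 0] = 1.0  # preserve mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * mask))

rng = np.random.default_rng(2)
frame = rng.random((128, 128))
filtered_90 = filter_orientation(frame, 90)  # one of the four adaptors
filtered_45 = filter_orientation(frame, 45)
```

Because the mask is symmetric about the origin of frequency space, the filtered frame stays real-valued; running the same pipeline with each of the four adaptor orientations yields the four movie versions.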

2/10