Jonathan F. Kominsky
@jfkominsky.bsky.social
(he/him) Assistant Professor of Cognitive Science at Central European University in Vienna, PI of the CEU Causal Cognition Lab (https://ccl.ceu.edu) #CogSci #PsychSciSky #SciSky

Personal site: https://www.jfkominsky.com
The results showed that people really were better at detecting lures related to coherent videos. When it came to rejecting fragmented lures, they actually performed at chance level! (Exp. 3 also replicated our order-related findings from Exp. 1 and 2.) (21/24)
September 16, 2025 at 7:28 PM
Basically, only half of the images participants saw were actual stills from the video (originals); the rest (lures) had been doctored in a manner that misrepresented causally relevant details. Here, the red burger and the barrel switched position. (20/24)
September 16, 2025 at 7:28 PM
In Experiment 1, we tested the hypothesis regarding recall of event order. It was a within-subjects study, meaning each participant saw both coherent and fragmented videos. As predicted, people were better at remembering the order of coherent events. (16/24)
September 16, 2025 at 7:28 PM
We should probably note that all the previous GIFs were just snippets – here’s a complete fragmented video from Experiment 1 (these videos were the shortest), in case you are curious: (11/24)
September 16, 2025 at 7:28 PM
Coherent videos also had cuts, but they only changed the point of view (i.e., the position of the camera) and did not interfere with the causal structure of the event. We hypothesized that uninterrupted causal links would lead to more accurate event recall. (10/24)
September 16, 2025 at 7:28 PM
Importantly, fragmented videos were not completely devoid of causal structure. Instead, they contained causal discontinuities: After each cut in a fragmented video, objects would abruptly be in different positions and move at different speeds. (9/24)
September 16, 2025 at 7:28 PM
But back to causal structure: In all three of our experiments, we varied the causal structure of these unfamiliar events. Videos with a high degree of causal structure we called coherent; those with interrupted causal structure we called fragmented. (8/24)
September 16, 2025 at 7:28 PM
As we were not interested in specific content – on the contrary, we wanted to eliminate its influence – we did not present movies or TikTok clips to our participants, but rather fairly odd videos of arbitrary objects hitting each other. (6/24)
September 16, 2025 at 7:28 PM
This opens up a huge set of critical questions for future work, first about how exactly the perceptual processing of these "causal" event categories works, but also about the nature of causal *representations*.

For now, I'll just close on a screenshot of my favorite sentence from the paper.

/end
August 31, 2025 at 7:18 AM
For test events we used the same launch/push event as in Exp 2.

The results were spectacularly clear:

We perfectly replicated the entraining adaptation effect from Exp 2.

Adapting to the one-object event did *nothing*.

There is specialized perceptual processing for *causal* entraining.

17/22
August 31, 2025 at 7:18 AM
Experiment 3 replicated the entraining adaptation condition from Experiment 2 and compared it to adaptation to a nearly identical event in which there was no object B: A just moves from one side of the screen to the other continuously. We called these "one-object" events.

16/22
August 31, 2025 at 7:18 AM
So adapting to launching should lead to people seeing more "push" events, but adapting to entraining might lead people to see more "launch" events. If both effects are retinotopically specific, it's clear evidence for two independent perceptual categories.

And that's exactly what we found!

14/22
August 31, 2025 at 7:18 AM
Entraining doesn't generate an adaptation effect for launch/pass displays.

However, launching and entraining are also endpoints of a continuous feature dimension: How long A and B stay in contact.

We can make a new kind of ambiguous test event using this feature dimension, "launch/push".

11/22
August 31, 2025 at 7:18 AM
To my surprise, all four events generated the same retinotopically specific visual adaptation effect! That's the green-highlighted area in the graph below (read the paper for a full description of what these graphs mean).

9/22
August 31, 2025 at 7:18 AM
The problem is, there's more than one event that Michotte and others described as "causal perception", like entraining (seen here).

In 2020, Brian Scholl and I found that adapting to entraining does *not* generate the same adaptation effect on these launch/pass displays.

4/22
August 31, 2025 at 7:18 AM
In 2013, Martin Rolfs and colleagues showed that if you adapt people to hundreds of launching events and then ask them to categorize ambiguous "launch/pass" events, events that looked ambiguous before adaptation look more like "pass" afterward, but only when presented at the same place on the retina.

3/22
August 31, 2025 at 7:18 AM
Starting with Michotte, people have argued that, for some types of cause-and-effect interactions (notably launching events), we don't *infer* causality, we actually *perceive* it directly.

One of the best pieces of evidence for this is that you can get visual adaptation to launching events.

2/22
August 31, 2025 at 7:18 AM
We were interested in understanding how kids represent descriptive and prescriptive information. We tested three hypotheses:

1. Kids mostly represent prescriptive information
2. Kids mostly represent descriptive information
3. Kids have an undifferentiated representation that combines both (3/10)
June 17, 2025 at 8:35 AM
I mean this is on their front page
June 3, 2025 at 2:24 PM
Meanwhile, DNC vice-chair David Hogg just sent a fundraising email that starts with this.

There are people in the US and among the dems who do get it, they just don't have real decision-making power. That needs to change *very* soon.
April 27, 2025 at 4:47 PM
It's 5pm and Nova knows when it's time to clock out from her job (of running around the house and screaming her head off while I tried to work).
April 22, 2025 at 3:11 PM
And now a commentary on *extremely* local politics.

The Vienna state elections are next week. On my walk home today I got waved down by some volunteers from SPÖ who handed me a pen, flyer, and Easter egg, seen here. I said I'm an immigrant, I can't vote, they said no problem, take it anyway (1/2)
April 18, 2025 at 2:35 PM
I did not expect making the bed to be the most challenging thing I did this week but here we are.
December 4, 2024 at 9:19 PM
I just hit 1,000 followers! Amazing. Let me reintroduce myself:

- I'm an assistant professor at CEU in Vienna, Austria
- I study how the human mind identifies, represents, and reasons about cause and effect from infancy through adulthood
- I made PyHab (github.com/jfkominsky/P...)
- My cat is cute
November 21, 2024 at 11:00 AM
This much.
November 19, 2024 at 4:27 PM