Jonathan F. Kominsky
@jfkominsky.bsky.social
(he/him) Assistant Professor of Cognitive Science at Central European University in Vienna, PI of the CEU Causal Cognition Lab (https://ccl.ceu.edu) #CogSci #PsychSciSky #SciSky

Personal site: https://www.jfkominsky.com
We should probably note that all the previous GIFs were just snippets – here’s a complete fragmented video from Experiment 1 (these videos were the shortest), in case you are curious: (11/24)
September 16, 2025 at 7:28 PM
Coherent videos also had cuts, but these only changed the point of view (i.e., the position of the camera) and did not interfere with the causal structure of the event. We hypothesized that uninterrupted causal links would lead to more accurate event recall. (10/24)
September 16, 2025 at 7:28 PM
Importantly, fragmented videos were not completely devoid of causal structure. Instead, they contained causal discontinuities: After each cut in a fragmented video, objects would abruptly be in different positions and move at different speeds. (9/24)
September 16, 2025 at 7:28 PM
As we were not interested in specific content – on the contrary, we wanted to eliminate its influence – we did not show our participants movies or TikTok clips, but rather fairly odd videos of arbitrary objects hitting each other. (6/24)
September 16, 2025 at 7:28 PM
Experiment 3 replicated the entraining adaptation condition from Experiment 2 and compared it to adaptation to a nearly identical event in which there was no object B: object A simply moves continuously from one side of the screen to the other. We called these "one-object" events.

16/22
August 31, 2025 at 7:18 AM
The problem is, there's more than one event that Michotte and others described as "causal perception", like entraining (seen here).

In 2020, Brian Scholl and I found that adapting to entraining does *not* generate the same adaptation effect on these launch/pass displays.

4/22
August 31, 2025 at 7:18 AM
In 2013, Martin Rolfs and colleagues showed that if you adapt people to hundreds of launching events and then ask them to categorize ambiguous "launch/pass" events, events that looked ambiguous before adaptation look more like "pass" afterward, but only when presented at the same location on the retina.

3/22
August 31, 2025 at 7:18 AM
Starting with Michotte, people have argued that, for some types of cause-and-effect interactions (notably launching events), we don't *infer* causality, we actually *perceive* it directly.

One of the best pieces of evidence for this is that you can get visual adaptation to launching events.

2/22
August 31, 2025 at 7:18 AM