Martin Zettersten
@mzettersten.bsky.social
Asst Prof UCSD Cognitive Science
language development | cognitive development | learning
https://mzettersten.github.io/
(he/his)
Also, check out the supplement, either at the journal or in the preprint (osf.io/preprints/ps...), for a ton more on looking time data and measurement, including one of my favorite figures, showing how test-retest reliability depends on both the number of trials and the sample size.
July 22, 2024 at 3:04 PM
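Not from the paper, just a minimal sketch of the two levers that figure is about: the Spearman-Brown prophecy formula for how reliability grows with the number of trials, and a Fisher-z interval for how the precision of a test-retest correlation grows with the number of infants. The single-trial reliability of 0.05 below is an assumed placeholder, not an estimate from the data.

```python
import math

def spearman_brown(single_trial_reliability: float, n_trials: int) -> float:
    """Predicted reliability of an n_trials-long measure (Spearman-Brown prophecy)."""
    r = single_trial_reliability
    return n_trials * r / (1 + (n_trials - 1) * r)

def ci_half_width(r: float, n_infants: int) -> float:
    """Approximate 95% CI half-width for a correlation via the Fisher z transform."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n_infants - 3)
    lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
    return (hi - lo) / 2

# Reliability grows with trials; precision of the estimate grows with sample size.
for k in (4, 8, 16):
    for n in (50, 150, 500):
        rel = spearman_brown(0.05, k)  # 0.05 = assumed single-trial reliability
        print(f"trials={k:2d} infants={n:3d}  predicted r={rel:.2f} "
              f"+/- {ci_half_width(rel, n):.2f}")
```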
At the same time, the two evidence sources still do not fully agree. There are key differences in the effects of moderators. E.g., ManyBabies1 found that the IDS preference was larger for older infants and in head-turn-based designs; we did not find similar effects in the meta-analysis. 7/
April 18, 2024 at 10:48 PM
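For readers curious what a moderator check like this looks like computationally, here is a toy fixed-effect meta-regression written as inverse-variance weighted least squares. Every value (effect sizes, variances, the age and head-turn moderators) is simulated, and the actual analyses in the paper may use different models and software.

```python
import numpy as np
import statsmodels.api as sm

# Toy data: per-study effect sizes (d), their sampling variances, and moderators.
rng = np.random.default_rng(0)
n_studies = 30
mean_age_months = rng.uniform(3, 12, n_studies)   # hypothetical age moderator
is_headturn = rng.integers(0, 2, n_studies)       # hypothetical method moderator
d = 0.35 + rng.normal(0, 0.2, n_studies)          # simulated effect sizes
var_d = rng.uniform(0.01, 0.05, n_studies)        # simulated sampling variances

# Fixed-effect meta-regression as inverse-variance weighted least squares.
X = sm.add_constant(np.column_stack([mean_age_months, is_headturn]))
fit = sm.WLS(d, X, weights=1 / var_d).fit()
print(fit.params)  # intercept, age slope, head-turn contrast
```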
In the end, this led us to a pretty satisfying result: the meta-analysis and the multi-site replication almost *perfectly* agree on the average effect size (d~0.35). Check out the squint-inducing plot below with *many* effect sizes. 6/
April 18, 2024 at 10:48 PM
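As an illustration of what "average effect size" means here, a minimal inverse-variance weighted (fixed-effect) pooling of study-level d values; the numbers below are toy values, not the ManyBabies1 or meta-analytic data.

```python
import numpy as np

def pooled_effect(d: np.ndarray, var_d: np.ndarray) -> tuple[float, float]:
    """Fixed-effect (inverse-variance weighted) pooled effect size and its SE."""
    w = 1 / var_d
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return pooled, se

# Toy numbers for illustration only.
d = np.array([0.2, 0.4, 0.5, 0.3, 0.35])
var_d = np.array([0.02, 0.03, 0.05, 0.01, 0.04])
print(pooled_effect(d, var_d))
```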
3 tidbits:
(1) The effect is bigger in adults than kids - and rule-based categories are quite hard for kids.
(2) Kids' verbal knowledge of specific feature names correlates with learning (suggesting a connection w/ language)
(3) Love these goofy aliens in the kid-friendly version of the task. (2/2)
September 25, 2023 at 4:36 PM
Was anything stable across test sessions? It turns out the answer is YES. Infants’ average *overall* looking times (and also how many trials they contributed) were robustly correlated across sessions. It’s just that *preferential* looking was not consistent (5/6)
September 5, 2023 at 8:03 PM
Interestingly, as we increased the number of trials required for inclusion, test-retest correlations increased somewhat (more trials per kid helps!), but even the largest correlations were quite small (and likely too small to be practical for studying individual differences) (4/6)
September 5, 2023 at 8:02 PM
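A small simulation of the "more trials per kid helps" point: if each infant has a stable underlying preference but single trials are very noisy, the test-retest correlation of session means rises as trials per session increase, yet stays modest when trial noise dwarfs the stable signal. All parameter values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_test_retest(n_infants: int, n_trials: int, trait_sd: float = 1.0,
                          noise_sd: float = 5.0) -> float:
    """Correlation between two session means when each infant has a stable trait
    plus large trial-to-trial noise (all values are invented for illustration)."""
    trait = rng.normal(0, trait_sd, n_infants)
    session = lambda: trait + rng.normal(0, noise_sd, (n_trials, n_infants)).mean(axis=0)
    return np.corrcoef(session(), session())[0, 1]

for k in (4, 8, 16, 32):
    print(f"trials per session={k:2d}  simulated test-retest r="
          f"{simulated_test_retest(158, k):.2f}")
```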
So, as part of ManyBabies1, a number of labs brought in babies for a second test session. Despite a large infant sample (N=158), we saw no evidence of test-retest reliability in preregistered analyses. The correlation between looking time preference in sessions 1 and 2 was small (r=.09) (3/6)
September 5, 2023 at 8:00 PM
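For completeness, the computation behind a number like r = .09 is just a Pearson correlation between per-infant preference scores from the two sessions; the data below are simulated stand-ins (the real preregistered analysis is in the paper).

```python
import numpy as np
from scipy import stats

# Hypothetical session-1 and session-2 preference scores for 158 infants,
# simulated to have only a weak relation; used solely to show the computation.
rng = np.random.default_rng(2)
session1 = rng.normal(0, 1, 158)
session2 = 0.09 * session1 + rng.normal(0, 1, 158)
r, p = stats.pearsonr(session1, session2)
print(f"test-retest r = {r:.2f}, p = {p:.3f}")
```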