Guilherme Almeida
@almeida2808.bsky.social
Interestingly, LLMs diverge here. GPT-4o and Llama 3.2 90B were not affected by the time-pressure manipulation, but Claude 3 and Gemini Pro were. Moreover, the latter resembled humans in that they relied more on text under forced delay than under time pressure. 12/14
March 11, 2025 at 9:23 PM
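Since an LLM can't literally experience time, it may help to see how such a manipulation could be operationalized. Below is a minimal sketch, assuming the pressure/delay conditions are delivered as instructions prepended to each vignette; the wording and function names are illustrative placeholders, not the study's actual materials.

```python
# Illustrative sketch only: one way a time-pressure vs. forced-delay
# manipulation could be delivered to an LLM via prompt instructions.
# Wording and names are assumptions, not the study's materials.

TIME_PRESSURE = (
    "Answer immediately with your first impression. "
    "Do not stop to deliberate.\n\n"
)
FORCED_DELAY = (
    "Take your time. Deliberate carefully before answering.\n\n"
)

def build_prompt(vignette: str, condition: str) -> str:
    """Prefix a legal vignette with the instruction for its condition."""
    prefix = TIME_PRESSURE if condition == "pressure" else FORCED_DELAY
    return prefix + vignette
```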
Even more surprisingly, the same was true of every model we tested: all were less textualist on the new stimuli than on the old ones. We interpret this as evidence of conceptual mastery: current LLMs track even subtle differences between stimuli. 8/14
March 11, 2025 at 9:23 PM
The human data was surprising in that it revealed a significant difference between old and new vignettes. We didn't expect any difference, but participants relied less on the text for the new vignettes than for the old ones. 7/14
March 11, 2025 at 9:23 PM
To address issue (1), we created new vignettes designed to match those in an earlier paper (doi.org/10.1037/lhb0...) exactly, changing only the specific words used. If models were just memorizing, they wouldn't be able to generalize to the new stimuli (although that's debatable). 5/14
March 11, 2025 at 9:23 PM
We find that lay intuitions about legal change are most in line with the precedents-as-rules account. However, we also find some support for the precedents-as-analogies model. The paper discusses the implications of these findings. Feedback is welcome!
February 18, 2025 at 10:55 PM
Precedents-as-rules: the law changes only when a new case is decided differently;
Precedents-as-reasons: the law changes with every new case;
Precedents-as-analogies: the law changes when similar cases are decided differently, but stays the same when dissimilar cases are decided differently (see the sketch below).
February 18, 2025 at 10:55 PM
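To make the contrast between the three accounts concrete, here is a toy formalization (mine, not the paper's) as predicates over a precedent and a new case; the Case type and the similarity test are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Case:
    facts: str
    outcome: str  # e.g., "liable" vs. "not liable"

def changed_under_rules(precedent: Case, new_case: Case) -> bool:
    # Precedents-as-rules: the law changes only when the new case
    # is decided differently from the precedent.
    return new_case.outcome != precedent.outcome

def changed_under_reasons(precedent: Case, new_case: Case) -> bool:
    # Precedents-as-reasons: every new decision shifts the law.
    return True

def changed_under_analogies(precedent: Case, new_case: Case, similar) -> bool:
    # Precedents-as-analogies: the law changes only when a *similar* case
    # is decided differently; a dissimilar case decided differently
    # leaves the law unchanged.
    return similar(precedent, new_case) and new_case.outcome != precedent.outcome

# Tiny usage example with a placeholder similarity test:
same_facts = lambda a, b: a.facts == b.facts
p = Case("dog bite in a public park", "liable")
n = Case("dog bite in a public park", "not liable")
assert changed_under_rules(p, n) and changed_under_analogies(p, n, same_facts)
```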
Hoping this network takes off!
September 16, 2024 at 11:46 PM