Basile Garcia
@bsgarcia.bsky.social
Cognitive Science postdoc. University of Geneva. Ex-HRL team (DEC, ENS, Paris)
human behavior/reinforcement learning/decision-making/computational modeling
If people unknowingly prefer AI judgments (but reject them once labeled), it demonstrates the disconnect between perceived neutrality and the reality of algorithmic influence, highlighting a loss of autonomy to an opaque system and an imbalance of information.
April 23, 2025 at 10:19 AM
Why does this matter?

As AI enters legal, medical, and personal decision-making, we need to understand how people perceive its moral reasoning.
💡 Agreement was driven by deeper semantic content.
Terms tied to cost-benefit logic (“save,” “lives”) often triggered disagreement, especially in personal moral dilemmas.
So: detection relied on surface cues, but judgment aligned with meaning.
📝 Formal language cues (like length, typos, and first-person usage) helped participants detect AI-generated justifications.
But they had little to no effect on agreement.
People spotted the machine from the style, not the substance.
✍️ We tried “humanizing” the AI (dv2h; purple): shorter responses, occasional typos, a more human tone.
This reduced detection—but not the belief-based bias.
People still agreed with content they believed was human, even when it wasn’t.
🕺 But here’s the twist:
In complex moral dilemmas (the personal moral condition), participants preferred AI-generated justifications—but only when they didn’t know they came from AI.
When they thought a justification was from AI, they agreed less.
So: pro-AI content, anti-AI belief.
🔍 First, detection.
People could spot AI-generated moral justifications better than chance—especially in morally difficult scenarios.
Still, accuracy stayed below 70%, and many AI responses passed as human.
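“Better than chance” claims like this are typically checked with a one-sided binomial test. Here is a minimal sketch of that logic in plain Python; the trial counts are hypothetical illustrations, not the study’s data:

```python
import math

def p_above_chance(correct, n, p_chance=0.5):
    """Exact one-sided binomial test: the probability of getting at least
    `correct` out of `n` trials right if the participant were purely
    guessing with success probability `p_chance`."""
    return sum(math.comb(n, k) * p_chance**k * (1 - p_chance)**(n - k)
               for k in range(correct, n + 1))

# Hypothetical example: 65% accuracy over 200 trials — above chance,
# yet still under a 70% ceiling.
p = p_above_chance(130, 200)
print(p)  # very small p-value: guessing is an implausible explanation
```

A participant exactly at chance (100/200) would instead yield a large p-value, so this test separates genuine detection from lucky guessing.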
Participants were presented with justifications for moral dilemmas, either human- or AI-generated. They had to detect the source and say whether they agreed or not ⚖️
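The two-response paradigm above (source judgment + agreement) might be scored like this. A sketch with assumed field names, not the study’s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    justification: str   # moral-dilemma justification shown to the participant
    true_source: str     # ground truth: "human" or "ai"
    judged_source: str   # participant's source judgment
    agreed: bool         # did the participant endorse the justification?

def detection_accuracy(trials):
    """Share of trials where the judged source matched the true source."""
    return sum(t.judged_source == t.true_source for t in trials) / len(trials)

def agreement_rate(trials, judged_source):
    """Agreement rate conditioned on what participants *believed* the source was."""
    subset = [t for t in trials if t.judged_source == judged_source]
    return sum(t.agreed for t in subset) / len(subset)
```

Conditioning agreement on the *judged* source rather than the true one is what lets a design like this separate belief-driven bias from content-driven preference.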