Hadi Hosseini
@hadihoss.bsky.social
Associate professor @Penn_State | PhD @UofWaterloo | postdoc @SCSatCMU.
AI/CS/Econ.
Teaching machines to behave socially & #sapiens to behave optimally!
His rules/intuitions are based on simple observations and are often elegant.
Not sure this time; stock market value is primarily based on overall expectations, not necessarily on the actual value created.
November 5, 2025 at 5:00 PM
Read paper: arxiv.org/abs/2506.04478

This is joint work with two of my great students, Samarth and Ronak!
Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and stru...
arxiv.org
October 22, 2025 at 3:51 AM
Verdict:
Even top-tier reasoning models struggle when:
- generating stable solutions,
- detecting blocking pairs (instabilities; sketched after this list),
- repairing unstable matchings, or
- scaling reasoning to larger markets.
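For context, a blocking pair is a pair of agents who both prefer each other to their assigned partners; a matching is stable exactly when no such pair exists. Here is a minimal Python sketch of blocking-pair detection for a one-to-one market (toy preference lists for illustration, not the paper's benchmarks):

```python
# Minimal sketch: detect blocking pairs in a one-to-one matching.
# Preference lists are ordered best-to-worst; data is illustrative.

def find_blocking_pairs(matching, men_prefs, women_prefs):
    """Return all pairs (m, w) who both strictly prefer each other
    to their current partners."""
    blocking = []
    for m, prefs in men_prefs.items():
        # Every w that m ranks strictly above his current partner...
        for w in prefs[:prefs.index(matching[m])]:
            partner = next(x for x, y in matching.items() if y == w)
            # ...blocks if w also ranks m strictly above her partner.
            if women_prefs[w].index(m) < women_prefs[w].index(partner):
                blocking.append((m, w))
    return blocking

men_prefs = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women_prefs = {"w1": ["m2", "m1"], "w2": ["m2", "m1"]}

# m1-w1 / m2-w2 is unstable: m2 and w1 prefer each other.
print(find_blocking_pairs({"m1": "w1", "m2": "w2"}, men_prefs, women_prefs))
# -> [('m2', 'w1')]
```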
October 22, 2025 at 3:51 AM
Why matching markets?

1. They require reasoning over ranked preferences (who prefers whom) while following iterative algorithms (a deferred-acceptance sketch follows this list). They’re the ideal stress test for LLM reasoning.
2. They’re everywhere: assigning students to schools, workers to jobs, riders to drivers, and even content to users.
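The canonical iterative algorithm here is Gale-Shapley deferred acceptance, which reaches a stable matching through rounds of proposal and tentative acceptance. A minimal sketch on a toy instance (not the paper's setup):

```python
# Minimal sketch of Gale-Shapley deferred acceptance (proposer-optimal).
# Toy preference lists; lower index = more preferred.

def gale_shapley(proposer_prefs, receiver_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # proposers with no match yet
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                         # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:             # r accepts any first proposal
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])      # r trades up, jilting its partner
            engaged[r] = p
        else:
            free.append(p)               # r rejects; p proposes again later
    return {p: r for r, p in engaged.items()}

prefs_p = {"a": ["x", "y"], "b": ["x", "y"]}
prefs_r = {"x": ["b", "a"], "y": ["b", "a"]}
print(gale_shapley(prefs_p, prefs_r))    # -> {'b': 'x', 'a': 'y'}
```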
October 22, 2025 at 3:51 AM
2️⃣ Algorithmic, step-by-step reasoning. In computer science education, we call this algorithmic thinking, i.e. the capacity to follow and adapt structured procedures. It’s what allows us to detect errors, repair them, and maintain logical consistency.
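To make detect-and-repair concrete for matchings: one textbook dynamic repeatedly satisfies a blocking pair by swapping partners until none remains. A hedged sketch, reusing find_blocking_pairs from the earlier sketch; this naive swap dynamic can cycle in general, hence the step cap:

```python
# Minimal sketch of a detect-and-repair loop: find a blocking pair,
# satisfy it by swapping partners, repeat. Illustrates stepwise error
# detection and repair; not guaranteed to terminate, so we cap steps.
import random

def repair(matching, men_prefs, women_prefs, max_steps=100):
    matching = dict(matching)
    for _ in range(max_steps):
        blocking = find_blocking_pairs(matching, men_prefs, women_prefs)
        if not blocking:
            return matching                 # stable: nobody wants to deviate
        m, w = random.choice(blocking)
        old_m = next(x for x, y in matching.items() if y == w)
        # Satisfy (m, w); their jilted partners pair up with each other.
        matching[old_m], matching[m] = matching[m], w
    return matching                         # may still be unstable
```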
October 22, 2025 at 3:51 AM
🤖 Two abilities are essential:

1️⃣ Understanding and reasoning about preferences.
They must grasp which option is more desirable: Is a 9 AM meeting better than 7 AM? Is candidate A preferred to B? This is about comprehending relative value and priority, not just data.
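Concretely, a ranked preference is just an ordering, and "is A preferred to B?" reduces to comparing ranks. A tiny sketch with made-up rankings:

```python
# Minimal sketch: ranked preferences as ordered lists, with a pairwise
# "is a preferred to b?" query. Rankings are hypothetical.

def prefers(ranking, a, b):
    """True if `a` is ranked strictly above `b` (lower index = better)."""
    return ranking.index(a) < ranking.index(b)

meeting_prefs = ["9 AM", "10 AM", "7 AM"]       # best to worst
print(prefers(meeting_prefs, "9 AM", "7 AM"))   # -> True

candidate_prefs = ["A", "B", "C"]
print(prefers(candidate_prefs, "A", "B"))       # -> True
```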
October 22, 2025 at 3:51 AM
3️⃣ Selection from a menu aligns better than generation. When asked to select from a preselected menu of options, some models (GPT-4o and Claude-3.5) display more human-like fairness, prioritizing equitability.
October 4, 2025 at 11:39 PM
1️⃣ LLMs are often misaligned with human notions of fairness. They seldom minimize inequality (equitability), instead favoring envy-freeness or efficiency (all three metrics are sketched below).

2️⃣ When money is available, humans use it to restore balance, equalizing utilities; most LLMs do not.
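For concreteness, here is a minimal sketch of the three notions this thread contrasts, assuming additive utilities and toy numbers (not the study's data):

```python
# Minimal sketch of three fairness notions under additive utilities.
# utilities[i][j] = agent i's value for item j; numbers are illustrative.

def bundle_value(utilities, agent, bundle):
    return sum(utilities[agent][item] for item in bundle)

def is_envy_free(utilities, allocation):
    """No agent values another's bundle above their own."""
    return all(
        bundle_value(utilities, i, allocation[i])
        >= bundle_value(utilities, i, allocation[j])
        for i in allocation for j in allocation
    )

def inequality(utilities, allocation):
    """Equitability gap: spread between best- and worst-off agents,
    each judged by their own utility. 0 means perfectly equitable."""
    vals = [bundle_value(utilities, i, allocation[i]) for i in allocation]
    return max(vals) - min(vals)

def welfare(utilities, allocation):
    """Utilitarian efficiency: total utility across agents."""
    return sum(bundle_value(utilities, i, allocation[i]) for i in allocation)

utilities = {"alice": {"cake": 6, "book": 2}, "bob": {"cake": 3, "book": 3}}
allocation = {"alice": ["cake"], "bob": ["book"]}
print(is_envy_free(utilities, allocation))   # -> True
print(inequality(utilities, allocation))     # -> 3 (6 vs 3)
print(welfare(utilities, allocation))        # -> 9
```

In this toy instance, a side payment of 1.5 from alice to bob would equalize utilities at 4.5 each, exactly the balance-restoring transfer humans tend to make and most LLMs do not.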
October 4, 2025 at 11:39 PM
As LLMs increasingly act in social and economic domains, normative alignment—ensuring their outputs reflect human values—becomes critical. Our findings:
October 4, 2025 at 11:39 PM
Haha, I am waiting for the day we can train an AI to model your young self. Then we get to use the sharp mind (and argue all day about dumb choices), lol
September 18, 2025 at 6:27 PM
Are you secretly hiding inside the code? Lol
September 18, 2025 at 1:14 AM
If only they knew that about contrapositives!
September 7, 2025 at 1:08 PM