Hadi Hosseini
@hadihoss.bsky.social
Associate professor @Penn_State | PhD @UofWaterloo | postdoc @SCSatCMU.
AI/CS/Econ.
Teaching machines to behave socially & #sapiens to behave optimally!
His rules/intuitions are based on simple observations and are often elegant.
Not sure this time; stock-market value is based primarily on overall expectations, not necessarily on the actual value created.
November 5, 2025 at 5:00 PM
Read paper: arxiv.org/abs/2506.04478
This is joint work with two of my great students, Samarth and Ronak!
Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and stru...
arxiv.org
October 22, 2025 at 3:51 AM
Verdict:
Even top-tier reasoning models struggle when:
- generating stable solutions,
- detecting blocking pairs (instabilities),
- repairing unstable matchings, or
- scaling reasoning to larger markets.
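For concreteness, here is a minimal sketch of the blocking-pair check these tasks revolve around (my illustration, not code from the paper), assuming a one-to-one market with strict preference lists on both sides:

```python
# Minimal sketch (not from the paper): finding blocking pairs in a proposed
# matching. prefs_m[m] / prefs_w[w] list partners from most to least preferred.

def blocking_pairs(matching, prefs_m, prefs_w):
    """Return all (m, w) pairs who prefer each other to their assigned partners."""
    rank_w = {w: {m: i for i, m in enumerate(p)} for w, p in prefs_w.items()}
    partner_of_w = {w: m for m, w in matching.items()}
    blocking = []
    for m, w_assigned in matching.items():
        for w in prefs_m[m]:
            if w == w_assigned:
                break  # everyone below this point is less preferred by m
            if rank_w[w][m] < rank_w[w][partner_of_w[w]]:
                blocking.append((m, w))  # both prefer each other: an instability
    return blocking

prefs_m = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_w = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(blocking_pairs({"m1": "w1", "m2": "w2"}, prefs_m, prefs_w))  # [('m2', 'w1')]
```

A matching is stable exactly when this returns an empty list; "repairing" an unstable matching means editing it until it does.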
October 22, 2025 at 3:51 AM
Why matching markets?
1. They require reasoning over ranked preferences (who prefers whom) while following iterative algorithms. They’re the ideal stress test for LLM reasoning.
2. They’re everywhere: assigning students to schools, workers to jobs, riders to drivers, and even content to users.
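As a reference point, below is a minimal sketch of the classic deferred-acceptance (Gale-Shapley) procedure that produces a stable matching; this is my illustration of the textbook algorithm, not code from the paper:

```python
# Minimal illustration (not from the paper): proposer-optimal deferred
# acceptance on strict, complete preference lists of equal-sized sides.

def deferred_acceptance(prefs_proposers, prefs_receivers):
    """Return a stable matching {proposer: receiver}."""
    rank = {r: {p: i for i, p in enumerate(lst)} for r, lst in prefs_receivers.items()}
    next_choice = {p: 0 for p in prefs_proposers}  # next index each proposer will try
    engaged_to = {}                                # receiver -> current proposer
    free = list(prefs_proposers)
    while free:
        p = free.pop()
        r = prefs_proposers[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged_to.get(r)
        if current is None:
            engaged_to[r] = p                      # r was unmatched: tentatively accept
        elif rank[r][p] < rank[r][current]:
            engaged_to[r] = p                      # r trades up; old partner re-enters
            free.append(current)
        else:
            free.append(p)                         # r rejects p; p will propose again
    return {p: r for r, p in engaged_to.items()}

prefs_m = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_w = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(prefs_m, prefs_w))  # {'m2': 'w1', 'm1': 'w2'}
```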
October 22, 2025 at 3:51 AM
2️⃣ Algorithmic, step-by-step reasoning. In computer science education, we call this algorithmic thinking, i.e. the capacity to follow and adapt structured procedures. It’s what allows us to detect errors, repair them, and maintain logical consistency.
October 22, 2025 at 3:51 AM
🤖 Two abilities are essential:
1️⃣ Understanding and reasoning about preferences.
They must grasp which option is more desirable: Is a 9 AM meeting better than 7 AM? Is candidate A preferred to B? This is about comprehending relative value and priority, not just data.
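A toy sketch of what this looks like computationally (my illustration, reusing the meeting-time example above): once a ranked list is turned into ranks, "is A preferred to B?" is a simple lookup; the hard part for an LLM is extracting and maintaining that structure from text.

```python
# Toy illustration (not from the paper): a ranked preference list and the
# pairwise queries it answers. A lower rank index means more preferred.
preference_order = ["9 AM meeting", "10 AM meeting", "7 AM meeting"]
rank = {option: i for i, option in enumerate(preference_order)}

def prefers(a, b):
    """True if option a is ranked above option b."""
    return rank[a] < rank[b]

print(prefers("9 AM meeting", "7 AM meeting"))  # True
```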
October 22, 2025 at 3:51 AM
Read the full paper: arxiv.org/abs/2502.00313
#NeurIPS2025 #AIAlignment #Fairness #LLMs #EthicsInAI #HumanValues
Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values
The growing interest in employing large language models (LLMs) for decision-making in social and economic contexts has raised questions about their potential to function as agents in these domains. A ...
arxiv.org
October 4, 2025 at 11:39 PM
3️⃣ Selection from a menu aligns better than generation. When asked to select from a preselected menu of options, some models (GPT-4o and Claude-3.5) display more human-like fairness, prioritizing equitability.
October 4, 2025 at 11:39 PM
1️⃣ LLMs are often misaligned with human notions of fairness. They seldom minimize inequality (equitability), instead favoring envy-freeness or efficiency.
2️⃣ When money is available, humans use it to restore balance, equalizing utilities; most LLMs do not.
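To make the terms concrete, here is a small sketch with made-up utilities (my illustration, not the paper's setup): equitability compares the agents' realized utilities, envy-freeness asks whether anyone prefers another agent's bundle, and total utility stands in for efficiency.

```python
# Small sketch (not from the paper; utilities are made up): fairness notions
# for a 2-agent division of indivisible goods. utils[agent][good] = value.
utils = {"A": {"g1": 6, "g2": 1, "g3": 3}, "B": {"g1": 2, "g2": 5, "g3": 4}}
allocation = {"A": ["g1"], "B": ["g2", "g3"]}

def value(agent, bundle):
    return sum(utils[agent][g] for g in bundle)

own = {a: value(a, allocation[a]) for a in utils}           # A: 6, B: 9
equitability_gap = max(own.values()) - min(own.values())    # 3 (smaller = more equitable)
envious = [a for a in utils for b in utils
           if a != b and value(a, allocation[b]) > own[a]]   # [] -> envy-free
total_utility = sum(own.values())                            # 15 (efficiency proxy)

# With money available, a transfer of 1.5 from B to A equalizes utilities
# (6 + 1.5 = 9 - 1.5): what human subjects tend to do and most LLMs do not.
print(own, equitability_gap, envious, total_utility)
```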
October 4, 2025 at 11:39 PM
As LLMs increasingly act in social and economic domains, normative alignment—ensuring their outputs reflect human values—becomes critical. Our findings:
October 4, 2025 at 11:39 PM
Haha, I am waiting for the day we can train an AI to model your young self. Then we get to use that sharp mind (and argue all day about dumb choices), lol
September 18, 2025 at 6:27 PM
Are you secretly hiding inside the code? Lol
September 18, 2025 at 1:14 AM
Only if they knew that about contrapositives!
September 7, 2025 at 1:08 PM