Karolina Stańczak
@karstanczak.bsky.social
#NLP Postdoc at Mila - Quebec AI Institute and McGill University | Former PhD @ University of Copenhagen (CopeNLU)
🌐 karstanczak.github.io
Reposted by Karolina Stańczak
6/ 🤝 Thanks to our steering committees and co-organizers for their hard work in making the VLMs4All Workshop possible!
@meharbhatia.bsky.social @rabiul.bsky.social @spandanagella.bsky.social @sivareddyg.bsky.social @svansteenkiste.bsky.social @karstanczak.bsky.social
March 14, 2025 at 3:55 PM
5/ Read our full paper here: arxiv.org/abs/2503.00069
Let’s discuss! How should AI align with society? 🤝💡
Societal Alignment Frameworks Can Improve LLM Alignment
Recent progress in large language models (LLMs) has focused on producing responses that meet human expectations and align with shared values - a process coined alignment. However, aligning LLMs remains...
arxiv.org
March 4, 2025 at 4:08 PM
4/ We also discuss the role of participatory alignment, where diverse stakeholders help shape AI behavior rather than deferring solely to designers.
March 4, 2025 at 4:08 PM
3/ Instead of perfecting rigid alignment objectives, we explore how LLMs can navigate uncertainty—a feature, not a flaw!
March 4, 2025 at 4:08 PM
2/ We propose leveraging societal alignment frameworks to guide LLM alignment:
🔹 Social alignment: Modeling norms, values & cultural competence
🔹 Economic alignment: Fair reward mechanisms & collective decision-making
🔹 Contractual alignment: Legal principles for LLMs
March 4, 2025 at 4:08 PM
1/ LLM alignment remains a challenge because human values are complex, dynamic, and often conflict with narrow optimization goals.
Existing methods like RLHF struggle with misspecified objectives.
March 4, 2025 at 4:08 PM