Tereza Blazkova
@terezablazek.bsky.social
PhD Student in Social Data Science, University of Copenhagen
AI & Society | Algorithmic Fairness | ML | Education Data Science
https://tereza-blazkova.github.io/
Reposted by Tereza Blazkova
Join us for a Data Discussion on Friday, November 7! 📅
Daniel Juhász Vigild will start by exploring how government use of AI impacts its trustworthiness, while Stephanie Brandl will examine whether LLMs can identify and classify fine-grained forms of populism.
Event🔗: sodas.ku.dk/events/sodas...
SODAS Data Discussion 3 (Fall 2025)
SODAS is delighted to host Daniel Juhász Vigild and Stephanie Brandl for the Fall 2025 Data Discussion series!
sodas.ku.dk
October 31, 2025 at 1:12 PM
Reposted by Tereza Blazkova
time to implement it into healthcare systems www.reddit.com/r/ChatGPT/co...
From the ChatGPT community on Reddit: ChatGPT asked if I wanted a diagram of what’s going on inside my pregnant belly.
Explore this post and more from the ChatGPT community
www.reddit.com
August 26, 2025 at 5:40 AM
Cheers from Learning@Scale poster 033 😋
July 22, 2025 at 11:56 AM
While human behavior and the data describing it evolve over time, fairness is often evaluated at a single snapshot. Yet, as we show in our newly published paper, fairness is dynamic. We studied how fairness evolves in dropout prediction across enrollment and found that it shifts over time.
July 21, 2025 at 8:02 AM
Reposted by Tereza Blazkova
Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)
Check out the amazing (original) paper here: www.nature.com/articles/s43...
Large language models act as if they are part of a group - Nature Computational Science
An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased conten...
www.nature.com
January 2, 2025 at 2:11 PM