Pippa Sterk
@pippasterk.bsky.social
Lecturer in Sociology at the University of Bristol.

Interested in queer/LGBT+ pedagogies 🌈

Occasional creative writer, podcaster, film curator, and events producer.
If we want students to make informed decisions, we need to actually INFORM them of the consequences of their decisions and the context in which they make them. Not just go with what's popular and easy now, because the alternative would be too difficult.
September 12, 2025 at 10:42 AM
Students are understandably confused about what AI is and how they can/can't/should/shouldn't use it. It doesn't help them to tell half the story, give definitions that misrepresent what AI does, and ignore all the inequalities that are fundamental to these technologies.
September 12, 2025 at 10:42 AM
Nobody can ever give me a good answer to how 'ethical' use of AI takes into account environmental and labour concerns. It's always 'something to consider' but nobody ever genuinely considers it! Nobody is willing to admit that they just don't CARE enough about these concerns to reject convenience.
September 12, 2025 at 10:42 AM
If you think that extraction is worth the convenience it brings, you can make that decision. Everyone makes these decisions all the time, that's what happens under capitalism. But don't dress it up as 'ethical' use, when it's just a matter of choosing convenience.
September 12, 2025 at 10:42 AM
Furthermore, these technologies are made by for-profit companies. Striving towards ethical AI use is like striving towards ethical Coca-Cola consumption, or ethical iPhone ownership. These are products that are created within systems of EXTRACTION. This extraction has material consequences!
September 12, 2025 at 10:42 AM
AI doesn't autonomously 'reason' or 'think' or 'make decisions', it responds to input in the way that it is programmed to. Those responses may change over time, or be so complex that it is difficult to trace how and why these changes occur, but they still change through HUMAN decision-making.
September 12, 2025 at 10:42 AM
one of the most difficult things about teaching in a world of AI is getting students (but also fellow scholars!!) to understand that AI is firstly an entirely poorly-defined 'field', and secondly has profoundly human input in the form of datasets and code.
September 12, 2025 at 10:42 AM
Calling a decision 'informed' or 'ethical' doesn't make it so! You have to consider why you are making your decision, what its impact is, and what alternatives exist, AND THEN re-evaluate your decision in light of this information. You can't just do what you wanted to do anyway and call it a day.
September 12, 2025 at 10:42 AM
It is so frustrating to see so many alleged initiatives for AI 'literacy' / ethical AI use in academia essentially equate to 'the robot is your trusted friend/pet/servant who's here to help you 😀 talk to it, and see what it says 😀'
September 12, 2025 at 10:42 AM
'ESEA' events with only people from one country / the Sinosphere / East Asia / your buddies / light-skinned people / people who were part of the upper class in their home countries, moved in their 20s, and are now shocked to find that structural oppression exists... It's embarrassing!!
September 6, 2025 at 2:26 PM
Nobody is forcing you to call something ESEA when it isn't! If your panel is 5 people from China just say it's about China! If your event presumes that nobody Asian was raised in Europe, don't say it includes the diaspora! Just say what you're actually doing 🙏🏽
September 6, 2025 at 2:26 PM
What is desirable about an interaction that you can turn on or off at will, with no long-term consequences? What is interesting about an entity that you can continue accessing no matter how badly you treat it? Who will always tell you what you want to hear? Is this not just a desire for control?
August 26, 2025 at 10:27 AM
People having 'real' connections with AI does not mean that these connections are healthy or outside of politics. If anything, it should cause us to think twice about the way we think about human connections.
August 26, 2025 at 10:27 AM
'people were truly sad when ChatGPT updated, we need to take this grief seriously'

yeah and I feel uneasy when Sainsbury's changes the layout of the aisles. That doesn't mean the bok choy basket is actually sentient. How I feel about something is unrelated to whether the thing itself has feelings.
August 26, 2025 at 10:27 AM
If you enjoyed my presentation, I literally JUST had an article come out about my research with LGBT+ volunteers in Higher Education, and how the rise of academic transphobia lays bare the unjust foundations of Higher Ed altogether

Please have a read here:

www.emerald.com/insight/cont...
“We just can’t afford to be separated on that” – affective solidarities among university-based lesbian, gay, bi and trans volunteers | Emerald Insight
May 19, 2025 at 8:23 AM
And as educators, is 'realistic' really the best we can do?
May 1, 2025 at 9:35 AM
Many students are disappointed by fellow coursemates using genAI to write their assignments for them. Many students WANT to learn the skills that genAI supposedly makes redundant. Students WANT to learn how to engage with and create complex texts. Acknowledging this drive is also being 'realistic'.
May 1, 2025 at 9:35 AM