Joseph Seering
josephseering.bsky.social
Assistant Prof at KAIST School of Computing. HCI, AI, T&S.
papers.ssrn.com/sol3/papers....

What I think a lot of people who haven't done qualitative research may not understand is that the intended outcome of qualitative research is not only the analysis produced but also the growth of the researcher performing that analysis.
We reject the use of generative artificial intelligence for reflexive qualitative research
We write as 416 experienced qualitative researchers from 38 countries, to reject the use of generative artificial intelligence (GenAI) applications for Big Q Qu
papers.ssrn.com
November 12, 2025 at 4:39 AM
Hello Bluesky! I've been quiet for a while on the social media front -- it's been a very busy two years with an international move and setting up a new lab at KAIST -- but I wanted to take a moment to highlight some of my students' fantastic new work in the domain of online safety.
April 16, 2025 at 11:14 AM
The 2025 Jang Young Sil Fellow Program is open for applications for 1 year postdoc positions at KAIST. Deadline is 3/13 at 4PM (KST). If you are interested in applying, please reach out and I can provide more details.
March 5, 2025 at 5:13 AM
I had an interesting conversation a couple of years ago about whether ~AI-generated content creators should be handled the same as human content creators from a T&S perspective. At the time, it was an academic conversation, but it seems to be increasingly relevant now.
February 26, 2025 at 5:29 AM
Generally speaking, if community moderators want a feature enough to build it themselves, it's often worth considering for wider deployment. Many of the most powerful user-facing moderation tools on platforms started as third party concepts built by users themselves to meet their specific needs.
February 14, 2025 at 4:42 AM
This is a great feature idea, and FWIW very similar features are used in community moderation where moderators can leave notes about particular users to remind themselves and other mods. Mostly this is done via third party tools, but some first party too. No reason it wouldn't work on bsky.
February 14, 2025 at 4:39 AM
I wonder whether there was any serious discussion about not implementing this. It may seem like a no-brainer, but there's a serious discussion to be had about value added vs increased safety costs.
Bluesky @bsky.app · Dec 26
Merry Christmas from us to you 🎄🎁💙 We launched Trending Topics today, and you can find it by tapping the search icon on the bottom bar of the app or the right sidebar on desktop.
December 29, 2024 at 3:56 PM
Side note, I was trying not to get too much into the details of that specific case, but off-service conduct policies are really interesting. I think people don't often realize how much policies are shaped by the technical capacity to enforce them.
December 14, 2024 at 2:09 PM
The question of whether to boot Singal is one of what will be an increasingly large number of decisions that Bluesky as an organization really does not want to make. It's important to remember that Bluesky was created with an ethos directly opposed to central authority making these decisions.
Moderation decisions draw intense public scrutiny from many places. That’s why Bluesky's architecture is designed to enable communities to control their online spaces, independent of the company or its leadership. We will continue to work on empowering people with more control over their experience.
December 14, 2024 at 1:33 PM
Riding in a taxi this morning, the driver was listening to a popular radio program that teaches English through references to news articles and current events. The segment ended by teaching the words "martial law", "declare", and "lift."
December 3, 2024 at 11:55 PM
So I really like a lot about what this paper is doing, and I hope we can see more of this.
New @acm-cscw.bsky.social paper, new content moderation paradigm.

Post Guidance lets moderators prevent rule-breaking by triggering interventions as users write posts!

We implemented PG on Reddit and tested it in a massive field experiment (n=97k). It became a feature!

arxiv.org/abs/2411.16814
December 1, 2024 at 1:52 AM
Proud to announce the first successful MS defense from my lab! Yubin Choi presented on her work studying users' perceptions of privacy issues when disclosing health information to LLMs. She is applying to PhD programs in CS/HCI this cycle, so keep an eye out for her application!
November 27, 2024 at 1:52 AM