Joseph Seering
@josephseering.bsky.social
Assistant Prof at KAIST School of Computing. HCI, AI, T&S.
Ah I see, that's interesting. I probably can't contribute anything useful on the legal side, but the hard part from my perspective is that I can't think of a standard that couldn't be gamed through a relatively small platform redesign.
November 19, 2025 at 8:08 AM
This matters for, e.g., research ethics; in my work, I've stopped using data from small communities on Twitch because even though everything is technically public, those communities have a greater expectation of privacy.
November 18, 2025 at 8:51 AM
I can't speak to the legal perspective, but functionally I think "public" is a spectrum rather than a binary. In a subreddit with a couple dozen active users, the expectation is that most conversations will be seen by very few other people.
November 18, 2025 at 8:51 AM
I don't see a way to draw a clear line there, on either Reddit or Facebook.
November 18, 2025 at 1:01 AM
Slightly off topic, but Reddit is actually kind of an interesting case. For small subreddits, it makes some sense to consider them mutually communicating groups, but what about, e.g., /r/funny, with tens of millions of subscribers and content algorithmically distributed to many more through the feed?
November 18, 2025 at 1:01 AM
Personally, I'm not ready to say that LLMs shouldn't ever be used in any part of qualitative research, but I am confident in saying that it isn't really qualitative research anymore if there isn't a human doing the reflexive analysis. It's a different thing, which may have different value.
November 12, 2025 at 4:39 AM
(Obviously there have been lots of papers published over the last couple of years proposing/testing systems to do this in various ways.)
November 12, 2025 at 4:39 AM
I generally find the linked short essay reasonable. Full disclosure: as someone who's done a good amount of qualitative research, I've also been investigating in my lab whether there's value in integrating LLMs somewhere in the qualitative research process.
November 12, 2025 at 4:39 AM
If we try to replace that human reflexive process with an LLM-driven analysis, this outcome is different. We could debate the value of ~increasing the expertise of the LLM in that area~ (to whatever extent that is actually possible), but clearly a human's expertise is not increasing in the same way.
November 12, 2025 at 4:39 AM
We don't talk about this enough, I think, but when I see published qualitative work, I see the contribution not only as what is stated in the paper or the talk but also as the fact that the world now has one more expert in that broader area. This is a public good.
November 12, 2025 at 4:39 AM
By immersing yourself in the data and reflecting on it (through a variably structured process), your expertise grows, and you are (ideally) able to build connections and ideas that go beyond specifically what is in the data.
November 12, 2025 at 4:39 AM
These five papers are starting points for the work we’re doing, and our next round of work is already well underway. I’m excited to be able to share our successes so far, but equally excited for what’s still to come!
April 16, 2025 at 11:14 AM
Even when platforms do not provide tools that support restorative processes, creative users will build them themselves. This paper shows how user-created appeals systems are constructed, what goals the users have, and what these processes can accomplish.
April 16, 2025 at 11:14 AM
This work, also led by Juhoon Lee @juhoonlee.bsky.social with support from Bich Ngoc (Rubi) Doan and Jonghyun Jee, maps the complex and impressive systems that users have built in order to incorporate custom appeals processes into their Discord servers.
April 16, 2025 at 11:14 AM
A final paper in this line of work — also to be presented at CSCW 2025 — offers some hope in this regard, looking at user-created and managed appeals systems in Discord communities. joseph.seering.org/papers/Lee_e...
April 16, 2025 at 11:14 AM
It is deeply concerning that the spaces where today’s young people are developing social skills have been designed without any clear place for apologies. We need young people to be learning conflict resolution skills that are more nuanced than just "ban or block and move on".
April 16, 2025 at 11:14 AM
A number of Discord moderators gave feedback on the bot and some tested it in their servers, but a major takeaway was how alien apologies seem to have become to the process of online safety.
April 16, 2025 at 11:14 AM
Bich Ngoc (Rubi) Doan built “ApoloBot”, a Discord bot designed to facilitate apologies as part of the restorative processes in Discord servers. This system supports moderators throughout the process of initiating and monitoring apology-giving.
April 16, 2025 at 11:14 AM
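For readers curious what "initiating" such a process might look like in code, here is a minimal, hypothetical sketch using the discord.py library. The command name, permission check, and pending-apology store are my own illustrative assumptions rather than ApoloBot's actual design, and the review and monitoring steps the paper describes are omitted.

import discord
from discord.ext import commands

# Enable the intents needed to look up members and send them DMs.
intents = discord.Intents.default()
intents.members = True

bot = commands.Bot(command_prefix="!", intents=intents)

# Illustrative in-memory store of pending apology processes:
# offender user ID -> recipient user ID. (Assumed; not ApoloBot's schema.)
pending_apologies: dict[int, int] = {}

@bot.command()
@commands.has_permissions(moderate_members=True)
async def request_apology(ctx: commands.Context,
                          offender: discord.Member,
                          recipient: discord.Member):
    """A moderator initiates an apology process between two members."""
    pending_apologies[offender.id] = recipient.id
    # DM the offending member to invite (not force) an apology.
    await offender.send(
        f"A moderator of {ctx.guild.name} has asked whether you would like "
        f"to apologize to {recipient.display_name}. Reply here, and a "
        "moderator will review your apology before it is delivered."
    )
    await ctx.send(f"Started an apology process for {offender.display_name}.")

bot.run("YOUR_BOT_TOKEN")  # placeholder; a real bot token is required

Routing the apology through a moderator-review step mirrors the framing in the post above: the bot supports moderators in initiating and monitoring apology-giving rather than automating the apology itself.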
On this note, a third paper to be presented at CHI 2025 tackles this issue more broadly from a design perspective, noting how modern social platforms rarely provide features to support one of the most fundamental human communication processes: apologies. joseph.seering.org/papers/Doan_...
April 16, 2025 at 11:14 AM
Together, these two papers argue that online child safety cannot be understood solely as the process of preventing harm to children, but rather must be seen as the process of developing better opportunities for young users to learn and grow online.
April 16, 2025 at 11:14 AM
Teens we interviewed were learning organization, management, and conflict resolution skills that they might otherwise have few opportunities to practice, and they took deep, genuine pride in the communities they had helped to build.
April 16, 2025 at 11:14 AM
This may at first seem concerning -- and the paper outlines some of the potential risks -- but it should also be understood as an incredible growth opportunity for young people if they are sufficiently well supported.
April 16, 2025 at 11:14 AM
Though we usually think of moderation as a role for adults, a striking number of Discord servers (and likely other online social spaces) are moderated in part by teens. We found servers with many thousands of users that had 14- and 15-year-olds on their volunteer moderation teams.
April 16, 2025 at 11:14 AM
Another fantastic paper focused on the safety experiences of young users, led by Jina Yoon in collaboration with @axz.bsky.social, will be presented at CSCW 2025. joseph.seering.org/papers/Yoon_...
April 16, 2025 at 11:14 AM