Ozgur Can Seckin
ozgurcanseckin.bsky.social
Indiana University Bloomington - Informatics PhD. previously: Plaid MLE, Glassdoor MLE, Sabanci University - Data Science MSc., Galatasaray University - Economics BSc.
Thank you for the article! AI "resonating" with people is another problem we should definitely think more about www.nytimes.com/2025/08/08/t...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
October 2, 2025 at 6:10 PM
Kudos to my wonderful advisors and coworker who made this paper possible! @baottruong.bsky.social @alessandroflammini @fil.bsky.social 🙏
October 2, 2025 at 3:49 PM
We propose that "constructive conflicts" can model healthier, "bridging" content 🌉 "Destructive conflicts," however, shouldn't be ignored; they should be approached with careful linguistic choices that transform toxic arguments into productive dialogue.
October 2, 2025 at 3:49 PM
Since destructive conflicts are too important to ignore, we analyze their language 🔍 We find that how something is said matters immensely: civil language (asking questions, providing detail, and using hedges) makes posts far more resilient to toxicity, while negative language has the opposite effect.
October 2, 2025 at 3:49 PM
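The civility cues named in the post could be operationalized roughly like this. This is a hypothetical sketch, not the paper's actual feature set: the hedge list, the question heuristic, and the length-as-detail proxy are all invented for illustration.

```python
# Illustrative civility-feature extractor (hypothetical; the paper's
# linguistic features may differ).

HEDGES = {"maybe", "perhaps", "might", "could", "seems", "possibly"}

def civility_features(text: str) -> dict:
    """Crude proxies for the three cues: questions, hedges, detail."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return {
        "asks_question": "?" in text,                 # asking questions
        "n_hedges": sum(w in HEDGES for w in words),  # using hedges
        "length": len(words),                         # proxy for detail
    }

print(civility_features("Perhaps we could look at the data first?"))
```

On this toy sentence the extractor flags a question and two hedge words ("perhaps", "could"); a real pipeline would use a richer lexicon and a parser.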
- Destructive conflicts 🪖 (high C & high TA, panel c) focus on polarizing identity issues like abortion and LGBTQ+ rights.
- Constructive conflicts 🕊️ (high C & low TA, panel d) spark civil debate on policy topics like student loans, AI, and marijuana legalization.
October 2, 2025 at 3:49 PM
To identify constructive conflict, we train two models to score posts: one on their likelihood of attracting toxic comments, which we call the toxicity attraction (TA) model, and one on their controversiality, the (C) model 💻 Plotting the scores inferred by these models shows clear patterns.
October 2, 2025 at 3:49 PM
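The quadrant logic behind the two-score plot might be sketched like this. The scorers, thresholds, and the two low-C labels are placeholders; only the high-C quadrants ("destructive" and "constructive" conflict) come from the post above.

```python
# Illustrative quadrant labeling from two (hypothetical) model scores.
# In the real pipeline, ta_score and c_score would come from trained
# classifiers; here they are hand-picked numbers to show the logic.

def quadrant(ta_score: float, c_score: float,
             ta_thresh: float = 0.5, c_thresh: float = 0.5) -> str:
    """Map a post's toxicity-attraction (TA) and controversiality (C)
    scores to one of four quadrants."""
    if c_score >= c_thresh and ta_score >= ta_thresh:
        return "destructive conflict"    # high C, high TA
    if c_score >= c_thresh:
        return "constructive conflict"   # high C, low TA
    if ta_score >= ta_thresh:
        return "toxicity magnet"         # low C, high TA (hypothetical label)
    return "low engagement"              # low C, low TA (hypothetical label)

posts = {
    "polarizing identity post": (0.9, 0.8),  # placeholder (TA, C) scores
    "student loan policy post": (0.2, 0.7),
}
for text, (ta, c) in posts.items():
    print(text, "->", quadrant(ta, c))
```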
We find that a post doesn't need to be toxic to attract toxic comments ☢️ Our Reddit data shows that 47% of non-toxic submissions still attract at least one toxic reply, while only 6% of toxic submissions do. The initial post's content, therefore, is a poor predictor of a comment section's health.
October 2, 2025 at 3:49 PM
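The "at least one toxic reply" statistic can be computed with a simple pass over threads. The data structure below is invented for illustration; in the study, toxicity labels come from a classifier over real Reddit submissions and comments.

```python
# Sketch: fraction of non-toxic submissions attracting >= 1 toxic reply.
# `threads` is a toy stand-in for labeled Reddit data.

def frac_with_toxic_reply(threads):
    """threads: list of (submission_is_toxic, [reply_is_toxic, ...])."""
    non_toxic = [replies for is_toxic, replies in threads if not is_toxic]
    if not non_toxic:
        return 0.0
    hit = sum(any(replies) for replies in non_toxic)
    return hit / len(non_toxic)

toy = [
    (False, [False, True]),   # non-toxic post, one toxic reply
    (False, [False, False]),  # non-toxic post, clean thread
    (True,  [True]),          # toxic post, excluded from the denominator
]
print(frac_with_toxic_reply(toy))  # 0.5 on this toy data
```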
In our research, we argue that the key lies in identifying constructive conflict — controversial posts that are toxicity resilient. We define a "toxicity resilient" post as one that is less likely to attract toxic responses from other users. See our #ICWSM2026 paper here arxiv.org/abs/2509.18303 📖
Identifying Constructive Conflict in Online Discussions through Controversial yet Toxicity Resilient Posts
Bridging content that brings together individuals with opposing viewpoints on social media remains elusive, overshadowed by echo chambers and toxic exchanges. We propose that algorithmic curation coul...
arxiv.org
October 2, 2025 at 3:49 PM
A simple solution, prioritizing only "feel-good" content, is flawed: it avoids important societal topics that are inherently negative and can devolve into toxic debates. After all, how often does thinking about wars, viruses, or economic policy put a smile on your face?
October 2, 2025 at 3:49 PM
#ic2s2 let's connect! If we had a conversation and I haven't followed you yet, please do follow!
July 24, 2025 at 11:01 PM
Reposted by Ozgur Can Seckin
The Effects of Outgroup Agreement and Ingroup Dissent on Political Polarization
📍 Talk | Jul 24, 11:00 AM | Troselli

Scaling of Community Rules Across Mastodon Servers
📍 Talk | Jul 24, 11:00 AM | Vingen 3+4
#ic2s2
July 21, 2025 at 10:14 AM
Hi Jaycee, The plots reflect numbers for all users - we haven’t distinguished between bots and real people.

That said, given that generative AI can now create highly realistic personas and images, there seems to be a growing need for sophisticated bot-detection algorithms to catch bots online.
April 25, 2025 at 6:07 PM
All is done by my wonderful co-authors and advisors here! @filipisilva.bsky.social, @baottruong.bsky.social, Sangyeon Kim, Fan Huang, Nick Liu, Alessandro Flammini, @fil.bsky.social, @osome.iu.edu
April 18, 2025 at 10:19 PM
As Bluesky continues to mature, influential accounts are emerging, posing familiar risks of misinformation, abuse, and toxicity on the platform. Understanding these dynamics can help inform effective governance and moderation strategies moving forward. Toxic posts: >0.5 score from the OpenAI moderation endpoint.
April 18, 2025 at 9:16 PM
All is done by my wonderful co-authors here! @filipisilva.bsky.social @baottruong.bsky.social @sangyeonkim @fanhuang.bsky.social @nickliu @alessandroflammini @fil.bsky.social
April 18, 2025 at 9:10 PM
Our analysis reveals that Bluesky rapidly developed a dense, highly clustered network structure. This "friend-of-a-friend" connectivity, characterized by strong hubs, enables swift and viral information diffusion, similar to established platforms such as Twitter/X and Weibo.
April 18, 2025 at 9:10 PM
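The "friend-of-a-friend" connectivity mentioned above is what the local clustering coefficient measures: the fraction of a node's neighbor pairs that are themselves connected. A minimal hand-rolled sketch on a toy graph (the real analysis runs on the Bluesky follow network with a graph library at scale):

```python
# Sketch: local clustering coefficient for a tiny undirected graph,
# stored as an adjacency dict. Toy data, not Bluesky network data.

def clustering(adj, node):
    """Fraction of `node`'s neighbor pairs that are directly connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbors: no pairs to close
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:]
                if v in adj[u])
    return 2 * links / (k * (k - 1))

# A triangle (a-b-c) with a pendant node d attached to a.
adj = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b"],
    "d": ["a"],
}
print(clustering(adj, "a"))  # only the b-c pair is closed -> 1/3
```

High average clustering plus strong hubs is the combination the post describes as enabling fast, viral diffusion.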