Ugur Kursuncu
ugurkursuncu.bsky.social
Assistant Professor, Georgia State University, Social Computing, Artificial Intelligence, Neurosymbolic AI, Knowledge-Infused Learning (K-IL), Cyber Social Threats, SWAN AI Group.
🚨 We’re #hiring Postdoctoral Researchers in AI/ML at Georgia State University!

Join us to work on novel and exciting ideas in machine/deep learning, #agenticAI / #generativeAI, #neurosymbolicAI, and #knowledgegraphs, applied to problems in social, business, and health domains.

1/4 🧵
August 1, 2025 at 7:54 PM
Proud to see my PhD student Trilok @trilokpadhi.bsky.social present our work at #ACL2025. Exciting to see our collaborative efforts recognized w/ Rahul Garg, Hemang Jain, @pkprofgiri.bsky.social

Say hi to Trilok if you are at ACL in Vienna.

#NeuroSymbolicAI #CyberSocialSafety
#AdvisorProud
July 31, 2025 at 3:28 AM
📢 Deadline Extended for submissions to #CySoc2025 to ensure broader participation.
💡 Share your research on generative AI, online safety, harms and threats, or political conflict on online platforms with the leading minds in the field. We’d love to see your submissions.

📆New Deadline: April 10 AoE
April 1, 2025 at 3:32 PM
🚨 #CySoc2025 Program Shaping Up Nicely! We’re excited to announce key highlights from the International Workshop on Cyber Social Threats at #ICWSM2025 @icwsm.bsky.social

📢 and Call for Papers is still open!

📆 Submission Deadline: March 31, 2025
🔗 More: cy-soc.github.io/2025/

For more: 🧵 1/6 👇
March 21, 2025 at 5:36 PM
🚨 Call for Papers! 🚨
We’re excited to announce the 6th International Workshop on Cyber Social Threats #CySoc2025 that will be held at #ICWSM2025 in Denmark! 🎉

🌍 Spotlight Topic: "Political Conflicts in Online Platforms in the Era of Gen-AI."

📅 Submission Deadline: March 31, 2025
February 11, 2025 at 6:58 PM
Our findings show improved decision boundaries, with clearer separations between toxic and non-toxic content. ✅

📄 Read more in our preprint: arxiv.org/pdf/2411.12174
January 14, 2025 at 7:46 PM
Utilizing KGs, e.g., #ConceptNet, our model bridges semantic gaps in harmful content, surfacing explicit contextual cues often missed by other approaches. Our approach integrates text, visuals, and commonsense knowledge to detect subtle toxicity with greater recall and precision. 💡🚀
January 14, 2025 at 7:46 PM
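A rough sketch of the knowledge-infusion idea described above (not the paper's implementation): look up ConceptNet-style triples for tokens in a meme caption and append them as extra context before classification. The tiny `TOY_KG` dictionary and `infuse_context` helper are illustrative stand-ins, not real ConceptNet data or the KID-VLM code.

```python
# Toy stand-in for a commonsense KG such as ConceptNet: each entry maps a
# concept to (relation, neighbor) edges. Purely illustrative data.
TOY_KG = {
    "snowflake": [("RelatedTo", "fragile"), ("UsedFor", "insult")],
    "ok_sign": [("RelatedTo", "approval"), ("RelatedTo", "hate_symbol")],
}

def infuse_context(caption_tokens, kg=TOY_KG, max_edges=2):
    """Return KG triples for matched tokens as extra context strings."""
    context = []
    for tok in caption_tokens:
        for rel, concept in kg.get(tok, [])[:max_edges]:
            context.append(f"{tok} {rel} {concept}")
    return context

enriched = infuse_context(["what", "a", "snowflake"])
# -> ["snowflake RelatedTo fragile", "snowflake UsedFor insult"]
```

These retrieved triples would then be concatenated with the caption (and visual features) as input to the classifier, giving it explicit cues like "snowflake UsedFor insult" that a text-only model might miss.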
We introduce KID-VLM, our novel #Neurosymbolic AI approach that combines commonsense knowledge graphs (KGs) with knowledge distillation from Large Vision-Language Models (LVLMs) to detect toxicity online. 📖🖼️🔍
"Just KIDDIN’: Knowledge Infusion and Distillation for Detection of INdecent Memes"
January 14, 2025 at 7:46 PM
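The distillation half of KID-VLM can be sketched with the standard temperature-scaled KL objective: a small student is trained to match the softened output distribution of a large teacher LVLM. This is a generic knowledge-distillation sketch under that assumption, not the paper's actual loss; `kd_loss` and the logit values are hypothetical.

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(x / temp) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temp=2.0):
    """KL(teacher || student) at temperature T, scaled by T^2,
    as in standard knowledge distillation."""
    p = softmax(teacher_logits, temp)  # soft teacher targets
    q = softmax(student_logits, temp)  # student predictions
    return temp ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Loss is zero when the student matches the teacher exactly,
# and grows as their toxic/non-toxic distributions diverge.
```

In practice this term would be combined with a supervised cross-entropy loss on labeled memes, so the student learns both from ground-truth labels and from the teacher LVLM's softer judgments.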
Detecting #OnlineToxicity is a challenging problem because it often hides behind sarcasm, dark humor, and cultural references. 🧩 To address this problem, we need models that capture explicit contextual cues and provide culturally informed decision criteria to separate #harmful from #harmless. 🧠📊
January 14, 2025 at 7:46 PM