Communication and Intelligence @ UChicago
@uchicagoci.bsky.social
A team of three human PIs (Ari Holtzman, Mina Lee, and Chenhao Tan) studying and building the new information ecosystem of humans and machines. https://substack.com/@cichicago, https://ci.cs.uchicago.edu/
Reposted by Communication and Intelligence @ UChicago
Excited that we are having the first talk in the AI & Scientific Discovery online seminar on Friday at 12pm ET/11am CT/9am PT, by the awesome Lei Li from CMU!

🧪 Generative AI for Functional Protein Design 🤖

#artificialintelligence #scientificdiscovery

ai-scientific-discovery.github.io
September 29, 2025 at 5:57 PM
Reposted by Communication and Intelligence @ UChicago
Totally agree! Not sure a fully automated AI scientist will be the most effective approach. That said, the role of scientists will certainly be very different in the future.
My skepticism of LLM-as-scientist stems from how imbalanced the literature is. The median paper is a mildly negative result presented as positive; it's unclear how to RLHF on a good hypothesis vs. a bad one; etc. We barely know how to teach this skill, so how can we RLHF it?
September 29, 2025 at 6:10 PM
Reposted by Communication and Intelligence @ UChicago
🚀 We’re thrilled to announce the upcoming AI & Scientific Discovery online seminar! We have an amazing lineup of speakers.

This series will dive into how AI is accelerating research, enabling breakthroughs, and shaping the future of science across disciplines.

ai-scientific-discovery.github.io
September 25, 2025 at 6:28 PM
Reposted by Communication and Intelligence @ UChicago
As AI becomes increasingly capable of conducting analyses and following instructions, my prediction is that the role of scientists will shift toward identifying and selecting important problems to work on ("selector") and effectively evaluating analyses performed by AI ("evaluator").
September 16, 2025 at 3:07 PM
Reposted by Communication and Intelligence @ UChicago
We are proposing the second workshop on AI & Scientific Discovery at EACL/ACL. The workshop will explore how AI can advance scientific discovery. Please use this Google form to indicate your interest (corrected link):

forms.gle/MFcdKYnckNno...

More in the 🧵! Please share! #MLSky 🧠
Program Committee Interest for the Second Workshop on AI & Scientific Discovery
August 29, 2025 at 4:00 PM
Reposted by Communication and Intelligence @ UChicago
⚡️Ever asked an LLM-as-Marilyn Monroe about the 2020 election? Our paper calls this concept incongruence, which is common both in AI and in how humans create and reason.
🧠Read my blog to learn what we found, why it matters for AI safety and creativity, and what's next: cichicago.substack.com/p/concept-in...
July 31, 2025 at 7:06 PM
Reposted by Communication and Intelligence @ UChicago
Prompting is our most successful tool for exploring LLMs, but the term evokes eye-rolls and grimaces from scientists. Why? Because prompting as scientific inquiry has become conflated with prompt engineering.

This is holding us back. 🧵and new paper with @ari-holtzman.bsky.social .
July 9, 2025 at 8:07 PM
Reposted by Communication and Intelligence @ UChicago
When you walk into the ER, you could get a doc:
1. Fresh from a week of not working
2. Tired from working too many shifts

@oziadias.bsky.social has been both and thinks that they're different! But can you tell from their notes? Yes we can! Paper in @natcomms.nature.com: www.nature.com/articles/s41...
July 2, 2025 at 7:22 PM
Reposted by Communication and Intelligence @ UChicago
I am glad that you found our paper entertaining! This is a great point for my follow-up thread on the implications of concept incongruence. Our main goal is to raise awareness and provide clarity around it.
Highly entertaining paper and writeup, but does it really matter? Is it important that models can't abstain on counterfactuals?
Or that they leak information?
🚨 New paper alert 🚨

Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️

1/n 🧵
May 28, 2025 at 12:56 PM
Reposted by Communication and Intelligence @ UChicago
🚨 New paper alert 🚨

Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️

1/n 🧵
May 27, 2025 at 1:59 PM
Reposted by Communication and Intelligence @ UChicago
1/n 🚀🚀🚀 Thrilled to share our latest work 🔥: HypoEval - Hypothesis-Guided Evaluation for Natural Language Generation! 🧠💬📊
There’s a lot of excitement around using LLMs for automated evaluation, but many methods fall short on alignment or explainability — let’s dive in! 🌊
May 12, 2025 at 7:23 PM
Reposted by Communication and Intelligence @ UChicago
🧑‍⚖️How well can LLMs summarize complex legal documents? And can we use LLMs to evaluate those summaries?

Excited to be in Albuquerque presenting our paper this afternoon at @naaclmeeting 2025!
May 1, 2025 at 7:25 PM
Reposted by Communication and Intelligence @ UChicago
Although I cannot make #NAACL2025, @chicagohai.bsky.social will be there. Please say hi!

@chachachen.bsky.social GPT ❌ x-rays (Friday 9-10:30)
@mheddaya.bsky.social CaseSumm and LLM 🧑‍⚖️ (Thursday 2-3:30)
@haokunliu.bsky.social @qiaoyu-rosa.bsky.social hypothesis generation 🔬 (Saturday at 4pm)
April 30, 2025 at 8:19 PM