Zining Zhu
@zhuzining.bsky.social
Asst Prof @ Stevens. Working on NLP and explainable, safe, and trustworthy AI. https://ziningzhu.github.io
Let's bring more formal reasoning properties into commonsense reasoning datasets! Introducing ACCORD arxiv.org/abs/2406.02804, to be presented at #NAACL2025 w/ François Roewer-Després, Jinyue Feng and Frank Rudzicz. 1/n
$\texttt{ACCORD}$: Closing the Commonsense Measurability Gap
We present $\texttt{ACCORD}$, a framework and benchmark suite for disentangling the commonsense grounding and reasoning abilities of large language models (LLMs) through controlled, multi-hop counterf...
arxiv.org
February 6, 2025 at 3:12 PM
A uniquely interesting book with a lot of new information, and I feel the urge to take notes (either to echo or to debate) while reading. Highly recommend.
December 22, 2024 at 4:57 AM
Reposted by Zining Zhu
Nature Biotechnology

Behind the graduate mental health crisis in science
www.nature.com/articles/s41...
Behind the graduate mental health crisis in science - Nature Biotechnology
Survey results identify how scientific research and teaching contribute to the graduate student mental health crisis.
www.nature.com
November 28, 2024 at 1:02 PM
Reposted by Zining Zhu
I know there are already plenty of tips out there on how to write an effective rebuttal, but I thought I'd share mine as well. I'm not claiming to be an expert or to have a perfect success rate, but I hope these suggestions are helpful to anyone who can use them.
November 27, 2024 at 4:30 AM
What are some recent papers that show making models explainable can also make them safer?
November 19, 2024 at 10:08 PM
Hi I'm starting to use Bluesky!
November 19, 2024 at 9:59 PM