Nathaniel Hudson
@nathaniel-hudson.bsky.social
👨🏽‍💻 Ph.D. Computer Scientist
🏫 Assistant Professor at Illinois Tech
🛜 https://nathaniel-hudson.github.io/
Reposted by Nathaniel Hudson
Our work in collaboration with the Gagliardi group: “Cartesian Equivariant Representations for Learning and Understanding Molecular Orbitals” accepted to the Proceedings of the National Academy of Sciences (PNAS)
chemrxiv.org/engage/api-g...

@danielgrzenda.bsky.social @nathaniel-hudson.bsky.social
October 27, 2025 at 11:00 PM
Reposted by Nathaniel Hudson
We’re welcoming 5 new faculty members to our computer science department this semester—bringing expertise in AI, cybersecurity, HPC, cryptography, and more.
https://bit.ly/46pyymr
Cutting-Edge Researchers Join Department of Computer Science
The Department of Computer Science at Illinois Tech’s College of Science welcomes five new members to its faculty this semester who will help provide the rigorous education needed to gui...
www.iit.edu
September 23, 2025 at 5:11 PM
Reposted by Nathaniel Hudson
📢 Exciting news! Our paper on Flight—a novel hierarchical federated learning framework built on Globus Compute, Parsl, and ProxyStore—is now officially published in Future Generation Computer Systems! ✈️🤖

Use the link below to check out the paper on FGCS:
www.sciencedirect.com/science/arti...
July 22, 2025 at 3:17 PM
It should have always been obvious that relying on LLMs to stochastically regurgitate text on behalf of users is catastrophic for learning outcomes. I hope to see continued research in this direction as more educational institutions choose to surrender to AI hype.
New research from MIT found that those who used ChatGPT can’t remember any of the content of their essays.

Key takeaway: the product doesn’t suffer, but the process does. And when it comes to essays, the process *is* how they learn.

arxiv.org/pdf/2506.088...
June 19, 2025 at 6:23 AM
Reposted by Nathaniel Hudson
@mansisakarvadia.bsky.social and Aswathy (both PhD students), and @nathaniel-hudson.bsky.social (postdoc) presented their work on identifying and ablating memorization in #LLMs at the 2024 MSLD workshop! 🎉 Their research has also been accepted to ICLR 2025; check it out: mansisak.com/memorization/
April 29, 2025 at 3:31 AM
Reposted by Nathaniel Hudson
1/🧵ICLR 2025 Spotlight Research on LM & Memorization!
Language models (LMs) often "memorize" data, leading to privacy risks. This paper explores ways to reduce that!
Paper: arxiv.org/pdf/2410.02159
Code: github.com/msakarvadia/...
Blog: mansisak.com/memorization/
March 4, 2025 at 6:15 PM
Super proud to have had the opportunity to mentor Jordan on this project. He's bound for great things.
Our first ever post celebrates @globuslabs.bsky.social undergraduate intern Jordan Pettyjohn winning 1st place at the Student Research Competition at SC24 for his work "Mind Your Manners: Detoxifying Language Models" sc24.supercomputing.org/proceedings/...
November 25, 2024 at 7:33 PM