Noga Zaslavsky
@nogazs.bsky.social
Computational cognitive scientist, developing integrative models of language, perception, and action. Assistant Prof at NYU.

More info: https://www.nogsky.com/
Super excited about our new paper!! Check this out 👇🧵
Human speech is continuous, and many meaning spaces (like color) are continuous too. Yet we use discrete words like “blue” and “green” that carve these spaces into categories.

In our new paper, we ask: How do people turn continuous spaces into structured, word-like systems for communication? (1/8)
Discrete and systematic communication in a continuous signal-meaning space
Abstract. Human spoken language uses a continuous stream of acoustic signals to communicate about continuous features of the world, by using discrete forms
academic.oup.com
November 26, 2025 at 7:55 PM
Reposted by Noga Zaslavsky
ICoN Center alumna @nogazs.bsky.social offers a fresh take on how the brain compresses visual information to guide intelligent behavior in @thetransmitter.bsky.social article: “The visual system’s lingering mystery: Connecting neural activity and perception”.

www.thetransmitter.org/the-big-pict...
Connecting neural activity, perception in the visual system
Figuring out how the brain uses information from visual neurons may require new tools. I asked nine experts to weigh in.
www.thetransmitter.org
October 15, 2025 at 4:33 PM
Reposted by Noga Zaslavsky
Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.

#neuroskyence

www.thetransmitter.org/the-big-pict...
October 13, 2025 at 1:23 PM
Reposted by Noga Zaslavsky
Disclaimer

xkcd.com/3126/
August 12, 2025 at 2:34 AM
If you missed us at #cogsci2025, my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵

1/n
August 9, 2025 at 1:46 PM
Super excited to have the #InfoCog workshop this year at #CogSci2025! Join us in SF for an exciting lineup of speakers and panelists, and check out the workshop's website for more info and a detailed schedule.
sites.google.com/view/infocog...
July 22, 2025 at 7:18 PM
Reposted by Noga Zaslavsky
#Workshop at #CogSci2025
Information Theory and Cognitive Science

🗓️ Wednesday, July 30
📍 Pacifica C - 8:30-10:00
🗣️ Noga Zaslavsky, Thomas A Langlois, Nathaniel Imel, Clara Meister, Eleonora Gualdoni, and Daniel Polani
🧑‍💻 underline.io/events/489/s...
July 16, 2025 at 8:32 PM
📣 I'm looking for a postdoc to join my lab at NYU! Come work with me on a principled, theory-driven approach to studying language, learning, and reasoning, in humans and AI agents.
Apply here: apply.interfolio.com/170656
And come chat with me at #CogSci2025 if interested!
July 21, 2025 at 10:28 PM
Reposted by Noga Zaslavsky
Nathaniel Imel, Jennifer Culbertson, @simonkirby.bsky.social & @nogazs.bsky.social:
Iterated language learning is shaped by a drive for optimizing lossy compression (Talks 37: Language and Computation 3, 1 August @ 16:22; blurb below) (2/)
July 17, 2025 at 4:15 PM
This month I'm celebrating a decade (!!) since my first paper was published, which now has over 2,000 citations 🥹

"Deep learning and the information bottleneck principle" with the late, great Tali Tishby
ieeexplore.ieee.org/document/713...
Deep learning and the information bottleneck principle
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the ...
ieeexplore.ieee.org
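For context, the information bottleneck principle referenced in the abstract above trades off compression against prediction. In the standard textbook formulation (this is the general IB objective, not a quotation from the paper):

```latex
\min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y)
```

where $T$ is a compressed representation of the input $X$, $Y$ is the relevance variable to be predicted, and $\beta$ controls the compression-prediction trade-off.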
June 6, 2025 at 6:04 PM
Reposted by Noga Zaslavsky
🔆 I'm hiring! 🔆

There are two open positions:

1. Summer research position (best for master's or graduate student); focus on computational social cognition.
2. Postdoc (currently interviewing!); focus on computational social cognition and AI safety.

sites.google.com/corp/site/sy...
Sydney Levine - Open Positions
Summer Research Position I am seeking a part-time or full-time researcher for the summer (starting asap) to bring a project to completion. The project asks the question: do people around the world u...
sites.google.com
June 6, 2025 at 5:02 PM
Reposted by Noga Zaslavsky
Because we must build good things while we scream about the bad, I have started a "Data for Good" team @data-for-good-team.bsky.social that partners with organizations needing short-term data science help. We have three projects ongoing & will add more as our capacity grows.
data-for-good-team.org
May 10, 2025 at 3:33 PM
Excited to share our new paper "Towards Human-Like Emergent Communication via Utility, Informativeness, and Complexity"
direct.mit.edu/opmi/article...
@rplevy.bsky.social

And looking forward to speaking about this line of work tomorrow at @nyudatascience.bsky.social!
Towards Human-Like Emergent Communication via Utility, Informativeness, and Complexity
Abstract. Two prominent, yet contrasting, theoretical views are available to characterize the underlying drivers of language evolution: on the one hand, task-specific utility maximization; on the othe...
direct.mit.edu
April 24, 2025 at 1:39 PM
Reposted by Noga Zaslavsky
Congratulations to Rich Sutton and Andrew Barto on receiving the Turing Award in recognition of their significant contributions to ML. I also stand with them: Releasing models to the public without the right technical and societal safeguards is irresponsible.
www.ft.com/content/d8f8...
Turing Award winners warn over unsafe deployment of AI models
Two pioneers of reinforcement learning have won the $1mn prize from the Association for Computing Machinery
www.ft.com
March 5, 2025 at 1:43 PM
Reposted by Noga Zaslavsky
New preprint! In arxiv.org/abs/2502.20349 “Naturalistic Computational Cognitive Science: Towards generalizable models and theories that capture the full range of natural behavior” we synthesize work from AI and cognitive science into a perspective on seeking a generalizable understanding of cognition. Thread:
Naturalistic Computational Cognitive Science: Towards generalizable models and theories that capture the full range of natural behavior
Artificial Intelligence increasingly pursues large, complex models that perform many tasks within increasingly realistic domains. How, if at all, should these developments in AI influence cognitive sc...
arxiv.org
February 28, 2025 at 5:14 PM
Reposted by Noga Zaslavsky
Why do diverse ANNs resemble brain representations? Check out our new paper with Colton Casto, @nogazs.bsky.social, Colin Conwell, Mark Richardson, & @evfedorenko.bsky.social on “Universality of representation in biological and artificial neural networks.” 🧠🤖
tinyurl.com/yckndmjt
Universality of representation in biological and artificial neural networks
Many artificial neural networks (ANNs) trained with ecologically plausible objectives on naturalistic data align with behavior and neural representations in biological systems. Here, we show that this...
tinyurl.com
December 27, 2024 at 8:14 PM