@pentagonalize.bsky.social
The FLaNN Workshop submission deadline has been extended to Feb 19!

Invited talks + posters (non-archival): expressivity, computation, and learning in neural nets/LLMs. Previous work welcome. Graduate students encouraged to submit!

📍 Yale University
🗓️ May 11-13, 2026
February 12, 2026 at 8:05 PM
📣 FLaNN 2026 at Yale 🍮

Invited talks+posters (non-archival): expressivity, computation, and learning in neural nets/LLMs

Speakers: Pablo Barceló, David Chiang, Will Merrill, Naomi Saphra, Gail Weiss

Abstracts due Feb 12, 2026
Details: flann.cs.yale.edu
February 4, 2026 at 3:24 PM
Deadline in just under two weeks!
CFP for the First Workshop on Formal Languages and Neural Networks!

"We welcome posters dicussing the formal expressivity, computational properties, and learning behavior of neural networks!"

Call for posters: flann.cs.yale.edu/cfp.html
Deadline: February 12, 2026

FLaNN Workshop 2026
flann.cs.yale.edu
January 31, 2026 at 12:14 AM
Announcing the first Workshop on Formal Languages and Neural Networks (FLaNN)!

We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs).
December 19, 2025 at 2:59 AM
We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!

Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!
The Transformer Cookbook
We present the transformer cookbook: a collection of techniques for directly encoding algorithms into a transformer's parameters. This work addresses the steep learning curve of such endeavors, a prob...
arxiv.org
October 3, 2025 at 4:24 PM
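
A minimal sketch of the flavor of construction the cookbook post above is about: hand-setting an attention head's behavior so that uniform (all-equal) attention scores average +1/-1 token values, which yields bracket depth and suffices to check Dyck-1 membership. This is an illustrative toy in NumPy under assumed tokenization and value assignments, not a recipe taken from the paper.

```python
import numpy as np

def uniform_attention_depth(tokens):
    """One causal attention head with identical scores: softmax of zeros gives
    uniform weights, so the head outputs the mean of the prefix's value vectors.
    With values +1 for '(' and -1 for ')', the output at position i equals
    depth(prefix) / (i + 1)."""
    vals = np.array([1.0 if t == "(" else -1.0 for t in tokens])  # assumed value encoding
    outputs = []
    for i in range(len(tokens)):
        scores = np.zeros(i + 1)                      # all-equal scores -> uniform attention
        weights = np.exp(scores) / np.exp(scores).sum()
        outputs.append(float(weights @ vals[: i + 1]))
    return outputs

def is_dyck1(tokens):
    """A bracket string is in Dyck-1 iff every prefix average is >= 0
    and the final average is exactly 0."""
    outs = uniform_attention_depth(tokens)
    return all(o >= 0 for o in outs) and abs(outs[-1]) < 1e-9

print(is_dyck1(list("(()())")))   # True
print(is_dyck1(list("())(")))     # False
```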
Reposted
New paper and two not-so-new papers on arXiv about transformer expressivity: (1) With @pentagonalize and Dana Angluin, "Simulating Hard Attention Using Soft Attention" arxiv.org/abs/2412.09925
Simulating Hard Attention Using Soft Attention
We study conditions under which transformers using soft attention can simulate hard attention, that is, effectively focus all attention on a subset of positions. First, we examine several variants of ...
arxiv.org
December 23, 2024 at 10:55 PM
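
A minimal numeric sketch of the intuition behind the paper above: scaling the attention logits by a large constant drives softmax toward a one-hot (argmax) distribution, so soft attention can approximate hard attention on the maximizing position. The scores and scale factors below are made up for illustration; the paper's actual conditions and constructions are more involved.

```python
import numpy as np

def softmax(x):
    x = x - x.max()            # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

# Toy attention scores for one query over 6 positions; position 3 has the unique max.
scores = np.array([0.2, 1.0, 0.7, 1.5, 0.9, 1.1])

for scale in [1, 10, 100]:
    weights = softmax(scale * scores)
    print(scale, np.round(weights, 3))
# As the scale grows, the soft-attention weights concentrate on position 3,
# approaching the one-hot distribution that hard (argmax) attention would produce.
```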