Benjie Wang
@benjiewang.bsky.social
Postdoc @ UCLA StarAI Lab, PhD in CS from Oxford. Probabilistic ML, Tractable Models, Causality
Also check out the awesome paper "Sum of Squares Circuits" (arxiv.org/pdf/2408.11778) by @loreloc_, Stefan Mengel, and @tetraduzione, which concurrently showed the separation between monotone and squared circuits. It's also at AAAI 2025 today, poster #840!
February 27, 2025 at 2:57 PM
Inception PCs strictly subsume monotone and squared PCs, and are strictly more expressive than both. We show this leads to improved downstream modeling performance when normalizing for FLOPs:
February 27, 2025 at 2:57 PM
To overcome these limitations, we propose Inception PCs, a novel tractable probabilistic model representing a deep *sum-of-square-of-sums*.

Inception PCs explicitly introduce two types of latent variables into the circuit for the mixtures encoded at sum nodes.
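
A one-level toy sketch of the sum-of-square-of-sums structure (real Inception PCs are deep circuits; all parameters below are made up for illustration): an outer non-negative mixture over a latent Z wraps squared inner sums over a latent W, whose weights may be negative.

```python
# Toy sketch: p(x) proportional to sum_z pi_z * (sum_w theta_{z,w} f_w(x))^2
# over a single binary variable. Hypothetical parameters, not the paper's code.
import numpy as np

f_theta = np.array([0.8, 0.25])        # Bernoulli components f_w(x)
A = np.array([[1.0, -0.5],             # inner weights theta_{z,w}; may be negative
              [0.3,  0.9]])
pi = np.array([0.4, 0.6])              # non-negative outer mixture over Z

def f(x):
    return np.where(x == 1, f_theta, 1.0 - f_theta)   # shape (W,)

def unnormalized(x):
    inner = A @ f(x)                   # inner sum over W, one value per z
    return np.dot(pi, inner ** 2)      # square, then mix over Z

Z_const = sum(unnormalized(x) for x in (0, 1))   # brute force here; done in
                                                 # closed form for real circuits
p = np.array([unnormalized(x) / Z_const for x in (0, 1)])
print(p, p.sum())                      # a valid distribution
```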
February 27, 2025 at 2:57 PM
We show that the reverse also holds (!!) - some tractable distributions expressed as monotone circuits cannot be compactly expressed as a square.
February 27, 2025 at 2:57 PM
On the other hand, squared circuits (arxiv.org/abs/2310.00724) allow use of arbitrary real parameters by *squaring* the circuit output. It was previously proven that squared circuits can be exponentially more expressive than monotone circuits!
Subtractive Mixture Models via Squaring: Representation and Learning
Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of c...
arxiv.org
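
As a toy illustration of the squaring trick (my own sketch, not the paper's construction): a two-component mixture over one binary variable with a negative weight, normalized via the pairwise cross-term expansion that keeps squared circuits tractable.

```python
# Minimal sketch of a squared (subtractive) mixture with Bernoulli components.
# All parameters are made up for illustration.
import numpy as np

thetas = np.array([0.9, 0.3])          # component parameters
w = np.array([1.0, -0.6])              # weights may be NEGATIVE

def components(x):
    # f_k(x) for a Bernoulli(theta_k) component
    return np.where(x == 1, thetas, 1.0 - thetas)

def unnormalized(x):
    # q(x) = (sum_k w_k f_k(x))^2  -- non-negative by construction
    return np.dot(w, components(x)) ** 2

# The normalizer expands into pairwise cross terms,
#   Z = sum_{i,j} w_i w_j * sum_x f_i(x) f_j(x),
# each of which is cheap to compute.
cross = sum(np.outer(components(x), components(x)) for x in (0, 1))
Z = w @ cross @ w

p = np.array([unnormalized(x) / Z for x in (0, 1)])
print(p, p.sum())                      # a valid distribution despite w[1] < 0
```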
February 27, 2025 at 2:57 PM
Probabilistic circuits are deep *tractable* probabilistic models that allow efficient and exact computation of marginals.

Traditionally, monotone circuits enforce non-negativity by using non-negative weights.

Paper: arxiv.org/abs/2408.00876
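
To make this concrete, here is a minimal sketch (not from the paper; toy structure and parameters) of a monotone PC over two binary variables: non-negative weights at the sum node, and exact marginals obtained by setting the leaves of marginalized variables to 1.

```python
# Toy monotone probabilistic circuit over binary variables X1, X2.

def bernoulli_leaf(theta, value):
    # Leaf returns p(x) for an observed value, or 1.0 when its variable is
    # marginalized out (the key trick behind tractable marginals).
    if value is None:
        return 1.0
    return theta if value == 1 else 1.0 - theta

def evaluate(x1, x2):
    # Two product nodes over (X1, X2), mixed by a sum node whose weights
    # are non-negative and sum to 1 (monotonicity).
    w = [0.3, 0.7]
    comp1 = bernoulli_leaf(0.9, x1) * bernoulli_leaf(0.2, x2)
    comp2 = bernoulli_leaf(0.1, x1) * bernoulli_leaf(0.8, x2)
    return w[0] * comp1 + w[1] * comp2

print(evaluate(1, 0))                    # joint p(X1=1, X2=0)
print(evaluate(1, None))                 # exact marginal p(X1=1)
print(evaluate(1, 0) + evaluate(1, 1))   # sanity check: equals p(X1=1)
```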
February 27, 2025 at 2:57 PM
Thanks Devendra!
December 14, 2024 at 6:03 PM
Thanks to my amazing co-authors Denis Mauá, @yjchoi1.bsky.social, @guyvdb.bsky.social. Hope to see you at the poster session!
December 13, 2024 at 7:10 PM
Along the way we also show a bunch of other cool results, like:
- More efficient algorithms for causal inference on circuits
- New circuit properties
- Separation/hardness results
December 13, 2024 at 7:10 PM
Building upon the prior PC atlas (proceedings.neurips.cc/paper_files/... ), our algebraic atlas provides a comprehensive approach for deriving **efficient algorithms** and **tractability conditions** for arbitrary compositional queries.

Try our atlas the next time you come across a new query!
December 13, 2024 at 7:10 PM
Just as circuits serve as a unifying representation of models, we show how you can express many queries as compositions of just a few basic operations: aggregation (marginalization, max, etc.), product, and elementwise mappings.
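
As a toy illustration of the compositional view (my own sketch on plain arrays over a tiny discrete domain, not circuit code): KL divergence decomposes into exactly these primitives.

```python
# KL(p || q) built from the three primitives: elementwise mapping, product,
# and aggregation. The atlas performs the same operations directly on circuits.
import numpy as np

p = np.array([0.2, 0.5, 0.3])          # distribution p(x)
q = np.array([0.4, 0.4, 0.2])          # distribution q(x)

log_ratio = np.log(p) - np.log(q)      # elementwise mapping: log of each output
integrand = p * log_ratio              # product: combine pointwise with p(x)
kl = integrand.sum()                   # aggregation: marginalize (sum) over x

print(kl)
```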
December 13, 2024 at 7:10 PM
Circuits are a unifying representation of probability distributions as a computation graph of sums and products. Here we consider the more general algebraic circuits, where the sum and product operations are replaced by those of an arbitrary semiring (think e.g. OR and AND for Boolean circuits).
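
A toy sketch of the semiring view (not from the paper): the same little circuit evaluated under the sum-product, max-product, and Boolean semirings, just by swapping the two operations.

```python
# Evaluate one toy circuit, (x1 AND x2) OR (NOT x1 AND x3), under different
# semirings by swapping the "add" and "mul" operations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Semiring:
    add: Callable   # replaces "sum"
    mul: Callable   # replaces "product"

sum_product = Semiring(add=lambda a, b: a + b, mul=lambda a, b: a * b)
max_product = Semiring(add=max, mul=lambda a, b: a * b)
boolean = Semiring(add=lambda a, b: a or b, mul=lambda a, b: a and b)

def evaluate(sr, leaves):
    # leaves maps literal names to semiring values (weights or booleans)
    left = sr.mul(leaves["x1"], leaves["x2"])
    right = sr.mul(leaves["not_x1"], leaves["x3"])
    return sr.add(left, right)

weights = {"x1": 0.6, "not_x1": 0.4, "x2": 0.7, "x3": 0.5}
bools = {"x1": True, "not_x1": False, "x2": False, "x3": True}

print(evaluate(sum_product, weights))  # weighted-model-count style value
print(evaluate(max_product, weights))  # value of the most likely branch
print(evaluate(boolean, bools))        # SAT-style truth value
```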
December 13, 2024 at 7:10 PM
Hi! I work on prob ML & tractable models.
December 4, 2024 at 8:22 PM