Kareem Ahmed
@kareemyousrii.bsky.social
Postdoc @ University of California, Irvine | PhD from CS@UCLA

Neuro-Symbolic AI, Tractable Probabilistic Reasoning, Generative Models

kareemahmed.com
More importantly, we can efficiently condition this approximate distribution on our constraint, so that any sample provably satisfies it. We then reweight our samples using the LLM to correct for any bias introduced by the approximation.
December 11, 2024 at 12:20 AM
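
A minimal sketch of these two steps in toy form: conditioning a factorized approximation q on a very simple constraint (here, "token 13 never appears"; real constraints such as banned multi-token expressions need more machinery), then computing importance weights against the true LLM. The shapes, BANNED_ID, and the log_p_llm stand-in are all illustrative assumptions, not the authors' code.

    import torch

    torch.manual_seed(0)
    q = torch.softmax(torch.randn(8, 50), dim=-1)  # toy factorized approximation:
                                                   # 8 positions, vocabulary of 50
    BANNED_ID = 13  # hypothetical constraint alpha: this token never appears

    # Condition q on alpha: zero out the banned token at every position and
    # renormalize. Samples from q_alpha satisfy alpha by construction.
    q_alpha = q.clone()
    q_alpha[:, BANNED_ID] = 0.0
    q_alpha = q_alpha / q_alpha.sum(dim=-1, keepdim=True)

    samples = torch.distributions.Categorical(q_alpha).sample((4,))  # 4 sequences

    def log_q_alpha(x):
        return torch.log(q_alpha[torch.arange(len(x)), x]).sum()

    def log_p_llm(x):
        # Stand-in for the true LLM's log-probability of the full sequence x.
        return -1.0 * len(x)

    # Self-normalized importance weights, proportional to p(x) / q(x | alpha),
    # correct for the bias introduced by the approximation.
    log_w = torch.stack([log_p_llm(x) - log_q_alpha(x) for x in samples])
    weights = torch.softmax(log_w, dim=0)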
To do so, we construct a first-order approximation of the LLM centered at the unconstrained sample. This approximation is naturally a weaker language model than the LLM itself, but it lets us efficiently represent a distribution over all sentences of bounded length.
December 11, 2024 at 12:20 AM
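
Roughly, in code (a sketch with a stand-in model; llm_logits and the toy sizes are assumptions): a single forward pass at the unconstrained sample x yields a distribution at every position, and taking their product gives a tractable, fully factorized distribution over all sequences of that length.

    import torch

    def llm_logits(tokens):
        # Stand-in for one LM forward pass returning logits of shape
        # (seq_len, vocab_size); a real model would be called here instead.
        torch.manual_seed(0)
        return torch.randn(tokens.shape[0], 50)

    x = torch.randint(0, 50, (8,))     # the unconstrained sample, length 8
    logits = llm_logits(x)             # the approximation is centered at x
    q = torch.softmax(logits, dim=-1)  # q(x') = prod_i q_i(x'_i): positions independent

    def log_q(tokens):
        # Any sequence of length 8 gets a closed-form probability under q.
        return torch.log(q[torch.arange(len(tokens)), tokens]).sum()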
Now imagine we want to ban a bad expression, say "full of sh!t". We start by taking a sample from the LLM. The sample, shown in red, violates the constraint. What we want to do now is project the sample onto the support of the LLM distribution satisfying the constraint, m(alpha).
December 11, 2024 at 12:20 AM
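
In symbols: alpha is the constraint ("the text never contains the banned expression") and m(alpha) is its support, the set of sequences that satisfy it. A toy check of whether a sample needs projecting (string-level for readability, though the method operates over token sequences; BANNED and the sample string are made up):

    BANNED = "full of sh!t"

    def alpha(text):
        # The constraint: True iff the banned expression does not occur.
        return BANNED not in text

    # m(alpha) = {x : alpha(x)} is the support we project violating samples onto.
    sample = "that argument is full of sh!t"  # hypothetical unconstrained sample
    needs_projection = not alpha(sample)      # True -> project onto m(alpha)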