Somnath Basu Roy Chowdhury
@somnathbrc.bsky.social
(9/n) Finally, I would like to thank all my amazing co-authors: Avinava, @abeirami.bsky.social, Rahul, Nicholas, Amr, Snigdha.
cc @unccs.bsky.social
April 2, 2025 at 4:03 PM
(8/n) Here is a blog post with a simplified overview of our work: www.cs.unc.edu/~somnath/blo...
Code: github.com/brcsomnath/pef
Paper link: arxiv.org/abs/2503.20098
April 2, 2025 at 4:03 PM
(7/n) We would like to highlight prior work such as LEACE, which perfectly erases concepts to protect against linear adversaries. In our work, we build on this method and present a technique that can protect against any adversary.
x.com/norabelrose/...
April 2, 2025 at 4:03 PM
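For readers unfamiliar with linear concept erasure, here is a much-simplified sketch of the idea (this is not LEACE's actual algorithm, which finds a least-squares-optimal affine eraser; we only project out the direction separating the two group means, so a linear probe can no longer exploit the mean gap):

```python
import numpy as np

def fit_mean_projection_eraser(X, z):
    """Rank-1 projection removing the group-mean direction.

    Simplified stand-in for linear erasure: after applying it, the two
    groups' empirical means coincide exactly.
    """
    d = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    d = d / np.linalg.norm(d)
    # Project out the direction d.
    return np.eye(X.shape[1]) - np.outer(d, d)

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5)) + 2.0 * z[:, None]  # group signal in every coordinate
P = fit_mean_projection_eraser(X, z)
Xe = X @ P.T
gap = np.linalg.norm(Xe[z == 1].mean(0) - Xe[z == 0].mean(0))  # ~0 after erasure
```

Unlike LEACE, this toy eraser guards only against mean-based linear probes; it is meant purely to convey what "protecting against linear adversaries" means.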
(6/n) We also visualize the learned representations from different erasure methods. We observe that PEF perfectly erases group (or concept) information without losing other information (i.e., without collapsing the representation space).
April 2, 2025 at 4:03 PM
(5/n) Empirically, we observe that PEF reaches the theoretical limits of erasure even in challenging settings where other methods struggle, including both linear (INLP, LEACE) and non-linear (FaRM, KRaM) techniques.
April 2, 2025 at 4:03 PM
(4/n) When the distributions are unequal, we still achieve perfect erasure, but with slightly reduced utility. The erasure function for this setting is shown below.
April 2, 2025 at 4:03 PM
(3/n) From the above limits, we show that optimal perfect concept erasure is feasible only when the underlying distributions are equal up to a permutation. In that setting, the erasure function is shown in the diagram.
April 2, 2025 at 4:03 PM
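To make the "equal up to permutations" condition concrete, here is a toy illustration of our own (not a construction from the paper): when the two groups' conditional distributions over a discrete alphabet are permutations of each other, a per-group bijection onto a shared canonical alphabet makes the output distributions identical across groups (perfect erasure), while remaining invertible within each group (no other information is lost).

```python
import random
from collections import Counter

# Toy setting: p(x | z=0) and p(x | z=1) have the same multiset of masses,
# just on permuted symbols.
p0 = {"a": 0.5, "b": 0.3, "c": 0.2}
p1 = {"c": 0.5, "a": 0.3, "b": 0.2}

def make_eraser(p):
    # Send the k-th most likely symbol of each group to a shared symbol s_k.
    ranked = sorted(p, key=p.get, reverse=True)
    return {x: f"s{k}" for k, x in enumerate(ranked)}

f0, f1 = make_eraser(p0), make_eraser(p1)  # per-group bijections

random.seed(0)
def sample(p, n):
    xs, ws = zip(*p.items())
    return random.choices(xs, weights=ws, k=n)

n = 100_000
out0 = Counter(f0[x] for x in sample(p0, n))
out1 = Counter(f1[x] for x in sample(p1, n))
# out0 and out1 match up to sampling noise: the erased output carries no
# group information, yet each group's map is a bijection.
```

If the two histograms were not permutations of each other, no within-group bijection could equalize the outputs, which is the intuition for why perfect erasure at full utility fails in that case.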
(2/n) We study the fundamental limits of concept erasure. Building on the work of @FlavioCalmon et al. in the information-theory literature, we characterize the erasure capacity and the maximum utility that can be retained during concept erasure.
April 2, 2025 at 4:03 PM
Please stop by our posters if you’re interested. Feel free to reach out if you're interested in AI safety or efficiency, or if you just want to chat!
CC: @unccs.bsky.social
December 6, 2024 at 7:24 PM
(3/3) 𝐓𝐨𝐰𝐚𝐫𝐝𝐬 𝐒𝐜𝐚𝐥𝐚𝐛𝐥𝐞 𝐄𝐱𝐚𝐜𝐭 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐔𝐧𝐥𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐔𝐬𝐢𝐧𝐠 𝐏𝐄𝐅𝐓
I’m also presenting my ongoing unlearning work at the SafeGenAI Workshop. It uses a novel PEFT training approach to improve the efficiency of exact unlearning.
arxiv.org/abs/2406.16257
December 6, 2024 at 7:24 PM
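The paper's actual method is not reproduced here, but as background, exact unlearning is often made cheaper with a shard-and-retrain design (SISA-style): train an independent small module per data shard, so deleting a point retrains only that shard's module. A PEFT variant would freeze a shared backbone and train one adapter per shard. A toy sketch with a stand-in "adapter" (just the shard mean; all names here are hypothetical):

```python
# SISA-style exact unlearning sketch (not the paper's algorithm):
# one cheap "adapter" per data shard; deleting a point retrains only
# the adapter whose shard contained it.
from statistics import mean

NUM_SHARDS = 4

def train_adapter(shard):
    # Stand-in for PEFT training: the "adapter" is just the shard mean.
    return mean(shard) if shard else 0.0

data = {i: [] for i in range(NUM_SHARDS)}
for v in range(20):                      # toy dataset: integers 0..19
    data[v % NUM_SHARDS].append(v)

adapters = {i: train_adapter(s) for i, s in data.items()}

def forget(value):
    """Exact unlearning: drop the point, retrain only its shard's adapter."""
    sid = value % NUM_SHARDS
    data[sid].remove(value)
    adapters[sid] = train_adapter(data[sid])
    return sid

touched = forget(6)                      # only shard 2 is retrained
```

The exactness comes from the design: the retrained adapter is identical to one trained from scratch without the deleted point, while the other shards' adapters are untouched.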
(2/3) 𝐅𝐚𝐬𝐭 𝐓𝐫𝐞𝐞-𝐅𝐢𝐞𝐥𝐝 𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐨𝐫
An efficient method for graph field integration (a special case of matrix-vector multiplication) using integrator trees. FTFI enables polylog-linear-time multiplication, with performance boosts in vision transformers.
arxiv.org/abs/2406.15881
December 6, 2024 at 7:24 PM
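As intuition for why tree structure makes such products cheap (this is the classic two-pass "rerooting" dynamic program, not the FTFI algorithm itself): for a kernel w**dist(i, j) on a tree, y[i] = Σ_j w**dist(i, j) · x[j] can be computed for all i in O(n) instead of the naive O(n²).

```python
# Illustration (not FTFI): on a tree, the dense product
# y[i] = sum_j w**dist(i, j) * x[j] collapses to O(n) via two passes.
def tree_field_product(children, x, w, root=0):
    # Pass 1 (leaves -> root): down[i] = contribution of i's subtree to i.
    order, stack = [], [root]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(children[u])
    down = [0.0] * len(x)
    for u in reversed(order):
        down[u] = x[u] + w * sum(down[c] for c in children[u])
    # Pass 2 (root -> leaves): add contributions from outside each subtree.
    y = [0.0] * len(x)
    y[root] = down[root]
    for u in order:
        for c in children[u]:
            y[c] = down[c] + w * (y[u] - w * down[c])
    return y

# Path graph 0 - 1 - 2, unit signal, w = 0.5:
y = tree_field_product({0: [1], 1: [2], 2: []}, [1.0, 1.0, 1.0], 0.5)
# y == [1.75, 2.0, 1.75]
```

FTFI generalizes far beyond this geometric kernel, but the same principle applies: tree structure lets distance-based interactions be aggregated hierarchically rather than pairwise.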