Results show that as few as 250 malicious documents can be used to “poison” a language model, even as model size and training data grow: bit.ly/4n0mH4t
How can we get around this? 👇
medium.com/@turinghut23...