Just 250 malicious training documents can poison a 13B parameter model - that’s 0.00016% of a whole dataset

Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
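For a sense of scale, here's the back-of-the-envelope arithmetic the headline implies (a minimal sketch; the excerpt above doesn't say whether the 0.00016% is counted in documents or tokens, so take the units as an assumption):

```python
# Rough arithmetic behind the headline figure (assumption: the percentage
# is measured in documents; the excerpt doesn't specify the unit).
poisoned_docs = 250
poisoned_fraction = 0.00016 / 100  # 0.00016% as a plain fraction

# Invert the percentage to get the implied size of the whole training set.
implied_corpus_size = poisoned_docs / poisoned_fraction
print(f"Implied training-set size: ~{implied_corpus_size:,.0f} documents")
# -> Implied training-set size: ~156,250,000 documents
```

In other words, the poison is a rounding error next to the rest of the corpus, which is what makes the finding notable.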
They should tell us how to do it so we can make sure we don’t do it
Whatever you do, do not run your image files through Nightshade (and Glaze). That would be bullying and it makes techbros cry.
I think this could pop the bubble if we do it enough
My man, it’s near the start of the article:
Anthropic, of all people, wouldn’t be telling us about it if it could actually affect them. They’re constantly pruning that stuff out; I don’t think the big companies just toss raw data into training anymore.