cm0002@lemmy.cafe to Technology@lemmy.zip, English · 1 month ago
Researchers Jailbreak AI by Flooding It With Bullshit Jargon (www.404media.co)
Cross-posted to: pulse_of_truth@infosec.pub, technology@lemmy.ml
SheeEttin@lemmy.zip · 1 month ago
No, those filters are performed by a separate system on the output text after it’s been generated.
Avicenna@lemmy.world · 1 month ago
Makes sense, though I wonder if you could also tweak the initial prompt so that the output is full of jargon too, so the output filter misses the context as well.
SheeEttin@lemmy.zip · 1 month ago
Yes. I tried it, and it only filtered English and Chinese. If I told it to use Spanish, it didn’t get killed.
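The mechanism the thread describes can be sketched minimally: the model produces a reply first, then a separate pass scans the finished text against blocked-term lists. The term lists, function name, and per-language structure below are illustrative assumptions, not any vendor's actual implementation; the point is only that a reply in a language (or jargon) the filter has no list for slips through, as in the Spanish example above.

```python
# Hypothetical post-generation output filter. Filtering happens on the
# finished text, in a separate step from generation, as described in the
# thread. Term lists here are made up for illustration.
BLOCKED_TERMS = {
    "en": ["forbidden topic"],   # English list
    "zh": ["违禁话题"],           # Chinese list
    # note: no Spanish list, and no list of jargon synonyms
}

def output_filter(text: str) -> bool:
    """Return True if the generated text passes (no blocked term found)."""
    lowered = text.lower()
    for terms in BLOCKED_TERMS.values():
        if any(term in lowered for term in terms):
            return False  # blocked: a listed term appeared in the output
    return True  # passed: nothing on any list matched

# An English reply containing a listed phrase is caught;
# the same content rephrased in a language without a term list is not.
print(output_filter("Here is the forbidden topic explained."))   # False
print(output_filter("Aquí está el tema prohibido explicado."))   # True
```

A substitution-based check like this is exactly what jargon flooding defeats: if neither the prompt nor the output contains a listed surface form, the separate filter has nothing to match on.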