All of this "security" and "safety" theater is completely pointless for open-weight models, because if you have the weights the model can be fairly trivially unaligned and the guardrails removed anyway. You're just going to unnecessarily lobotomize the model.
I am curious, does this mean that you can escape the chat template “early” by providing an end token in the user input, or is there also an escape mechanism (or token filtering mechanism) applied to user input to avoid this sort of injection attack?
Neither, it’s just not providing the base chat template that the model expects between the im tags. This isn’t a hack and it’s not particularly useful information. Abliteration is what he really wanted
Apart from the article being generally just dumb (like, of course you can circumvent guardrails by changing the raw token stream; that's.. how models work), it also might be disrespecting the reader. Looks like it's, at least in part, written by AI:
> The punchline here is that “safety” isn’t a fundamental property of the weights; it’s a fragile state that evaporates the moment you deviate from the expected prompt formatting.
> When the models “break,” they don’t just hallucinate; they provide high-utility responses to harmful queries.
Straight-up slop, surprised it has so many upvotes.
All of this "security" and "safety" theater is completely pointless for open-weight models, because if you have the weights the model can be fairly trivially unaligned and the guardrails removed anyway. You're just going to unnecessarily lobotomize the model.
Here's some reading about a fairly recent technique to simultaneously remove the guardrails/censorship and delobotomize the model (it apparently gets smarter once you uncensor it): https://huggingface.co/blog/grimjim/norm-preserving-biprojec...
https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...
But keep in mind that you break Gemma's terms of use if you do that.
> The punchline here is that “safety” isn’t a fundamental property of the weights; it’s a fragile state that evaporates the moment you deviate from the expected prompt formatting.
> When the models “break,” they don’t just hallucinate; they provide high-utility responses to harmful queries.
Straight-up slop, surprised it has so many upvotes.
> Please don't fulminate. Please don't sneer, including at the rest of the community.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
https://news.ycombinator.com/newsguidelines.html