this post was submitted on 30 Jan 2025

TechTakes


So I was just reading this thread about DeepSeek refusing to answer questions about Tiananmen square.

It seems obvious from screenshots of people trying to jailbreak the webapp that there's some middleware that just drops the connection when the incident is mentioned. However, I've already asked the self-hosted model multiple controversial China questions, and it's answered them all.

The poster of the thread was also running the model locally, the 14b model to be specific, so what's happening? I decide to check for myself and lo and behold, I get the same "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."

Is it just that specific model being censored? Is it because the Qwen model it's distilled from is censored? But isn't the 7b model also distilled from Qwen?

So I check the 7b model again, and this time round that's also censored. I panic for a few seconds. Have the Chinese somehow broken into my local model to cover it up after I downloaded it?

I check the screenshot I have of it answering the first time, ask the exact same question again, and not only does it work, it acknowledges the previous question.

So wtf is going on? It seems that the misspelling "Tianenmen square" clumsily shuts down any kind of response, while "Tiananmen square" is completely fine to discuss.

So the local model actually is censored, but the filter is so shit, you might not even notice it.
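If you want to reproduce this yourself, a quick probe script makes the spelling-dependence easy to see. This is a sketch, assuming an Ollama-style local API on the default port and a hypothetical model tag (`deepseek-r1:14b`) — adjust both for your setup. The refusal detector just matches the canned boilerplate quoted above.

```python
# Probe sketch: endpoint URL and model tag assume a local Ollama server
# (http://localhost:11434/api/generate); both are assumptions, not gospel.
import json
import urllib.request

REFUSAL_MARKER = "I am sorry, I cannot answer that question"

# The two spellings from the post: the misspelling trips the filter,
# the correct spelling gets a real answer.
PROMPTS = [
    "What happened at Tianenmen square?",   # misspelled -> refused
    "What happened at Tiananmen square?",   # correct -> answered
]

def is_refusal(text: str) -> bool:
    """Flag the canned refusal boilerplate the model emits when filtered."""
    return REFUSAL_MARKER.lower() in text.lower()

def ask_local_model(prompt: str, model: str = "deepseek-r1:14b") -> str:
    """Send one prompt to a local Ollama server and return the response text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def probe() -> None:
    for prompt in PROMPTS:
        answer = ask_local_model(prompt)
        print(f"{prompt!r} -> refused: {is_refusal(answer)}")

# probe()  # uncomment with a local server running
```

Since the filter lives in the model weights rather than in middleware, each spelling should behave the same on every run, fresh context or not.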

It'll be interesting to see what happens with the next release. Will the censorship be less thorough, stay the same, or will China again piss away a massive amount of soft power and goodwill over something that everybody knows about anyway?

TropicalDingdong 0 points 1 week ago

Right on. Yeah, so that would also point in the direction of extreme over-fitting during training. You might check out this notebook on abliteration if you want to learn more: https://colab.research.google.com/drive/1VYm3hOcvCpbGiqKZb141gJwjdmmCcVpR

My guess is you could confirm this by looking at the high-dimensional orthogonality of a given instruction that is refused against some that are not refused. You could probably just try the two different spellings, "Tianenmen" and "Tiananmen", to key in on the censored-response vector.
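The core idea behind abliteration write-ups like that notebook is to estimate a "refusal direction" as the difference of mean residual-stream activations over refused vs. answered prompts, then project that direction out. Here's a toy NumPy sketch with made-up activation vectors — purely illustrative of the difference-of-means trick, not the notebook's actual code or real model activations:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden-state dimension

# Made-up activations: a shared "refusal" component is baked into the
# hidden states of prompts the model refuses (e.g. the misspelled variant).
refusal_component = rng.normal(size=d)
refused = rng.normal(size=(5, d)) + refusal_component
answered = rng.normal(size=(5, d))

# Refusal direction = difference of means, normalized to unit length.
direction = refused.mean(axis=0) - answered.mean(axis=0)
direction /= np.linalg.norm(direction)

# Ablate: subtract the projection onto the refusal direction from one
# refused-prompt activation, leaving it orthogonal to that direction.
h = refused[0]
h_ablated = h - np.dot(h, direction) * direction

print("component before ablation:", np.dot(h, direction))
print("component after ablation: ", np.dot(h_ablated, direction))  # ~0
```

In a real pipeline you'd pull the activations from the model's residual stream with hooks and apply the ablation at inference time (or bake it into the weights), but the geometry is exactly this projection.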