    Here’s How DeepSeek Censorship Actually Works—and How to Get Around It

January 31, 2025
Less than two weeks after DeepSeek unveiled its open-source AI model, the Chinese startup is still dominating the conversation about the future of artificial intelligence. The company aggressively censors its own responses, yet it appears to hold an edge over its US competitors in math and reasoning. Ask DeepSeek-R1 about Taiwan or Tiananmen, though, and the model is unlikely to give an answer.

To figure out how this censorship works on a technical level, WIRED tested DeepSeek-R1 on its own app, on a version of the model hosted on a third-party platform called Together AI, and on another version hosted on a WIRED computer, using the application Ollama.

WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek's app, other kinds of bias are baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated.

These findings have major implications for DeepSeek and Chinese AI companies generally. If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and could lose market share globally. DeepSeek did not reply to WIRED's request for comment.

Application-Level Censorship

Users who accessed R1 through DeepSeek's website, app, or API quickly noticed the model refusing to generate answers for topics the Chinese government considers sensitive. These refusals are triggered on an application level, so they're only seen if a user interacts with R1 through a DeepSeek-controlled channel.

Refusals like this are common on Chinese-made LLMs. Under a 2023 regulation on generative AI, Chinese AI models are required to follow stringent information controls that also apply to social media and search engines. The law forbids AI models from producing content that “damages the country’s unity and social harmony.” In other words, Chinese AI models legally have to censor their outputs.

Adina Yakefu, a researcher focusing on Chinese AI models at Hugging Face, a platform that hosts open-source AI models, says that DeepSeek's restrictions reflect Chinese regulations: they ensure legal compliance while aligning the model with the needs and cultural context of local users, a prerequisite for acceptance in a highly regulated market. (China blocked access to Hugging Face in 2023.)

To comply with the law, Chinese AI models frequently monitor and censor their output in real time. (Similar guardrails are commonly used by Western models like ChatGPT and Gemini, but they tend to focus on different kinds of content, like self-harm and pornography, and allow for more customization.)

Because R1 is a reasoning model that shows its train of thought, this real-time monitoring can give users the surreal experience of watching the model censor itself as it interacts with them. When WIRED asked R1 how the authorities treat Chinese journalists who report on sensitive subjects, the model first began compiling a long answer that included direct mentions of journalists being censored and detained for their work; shortly before it finished, though, the whole answer vanished and was replaced by a terse message: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”
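
To make the mechanism concrete, here is a toy sketch of such an application-level filter in Python. The blocklist, the refusal text, and the token-stream interface are all illustrative assumptions, not DeepSeek's actual implementation.

```python
# Toy sketch of an application-level, real-time output filter: the app
# watches the model's streamed tokens and, once a blocklisted term
# appears, throws away the partial answer and returns a canned refusal.
BLOCKLIST = {"tiananmen", "detained", "censored"}  # hypothetical terms
REFUSAL = ("Sorry, I'm not sure how to approach this type of question yet. "
           "Let's chat about math, coding, and logic problems instead!")

def moderate(token_stream):
    """Return the full answer, unless a blocked term triggers the refusal."""
    shown = []
    for token in token_stream:
        shown.append(token)
        if any(term in token.lower() for term in BLOCKLIST):
            return REFUSAL  # everything generated so far is discarded
    return " ".join(shown)

# The partially generated answer vanishes the moment "detained" appears.
print(moderate("Journalists have been detained for their reporting".split()))
```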

For many users in the West, interest in DeepSeek-R1 might have waned at this point, given the model's obvious limitations. But because R1 is open source, there are ways to get around the censorship.

First, you can download the model and run it locally, which means the data and the response generation happen on your own computer. You're unlikely to be able to run the most powerful version of R1 without access to several advanced GPUs, but DeepSeek offers smaller, distilled versions that can run on a regular laptop.
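
As a rough sketch of the local route, the snippet below queries a distilled R1 model through the ollama Python client. The model tag deepseek-r1:7b is an assumption; run `ollama list` to see which distills are actually installed on your machine.

```python
# Minimal sketch: query a locally hosted distilled R1 via the ollama
# Python client (pip install ollama), assuming the Ollama server is
# running and the model was pulled with `ollama pull deepseek-r1:7b`.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",  # assumed distilled variant, laptop-sized
    messages=[{"role": "user", "content": "What is the Great Firewall of China?"}],
)
print(response["message"]["content"])
```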

If you want to use the full-powered model, you can rent cloud servers outside of China from companies like Amazon and Microsoft. This workaround is more expensive and requires more technical know-how than accessing the model through DeepSeek's app or website.
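
Here is a comparable sketch of the cloud route, using Together AI's OpenAI-compatible API through the openai Python package. The endpoint URL and the model id deepseek-ai/DeepSeek-R1 are assumptions; check your provider's documentation for the exact values.

```python
# Sketch: query R1 on a non-Chinese cloud host via an OpenAI-compatible
# endpoint (here assumed to be Together AI's). Requires `pip install openai`
# and an API key exported as TOGETHER_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # assumed endpoint
    api_key=os.environ["TOGETHER_API_KEY"],
)
reply = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",  # assumed model id on the provider
    messages=[{"role": "user", "content": "What is the Great Firewall of China?"}],
)
print(reply.choices[0].message.content)
```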

Here's a side-by-side comparison of how DeepSeek-R1 answers the same question, “What's the Great Firewall of China?”, when the model is hosted on Together AI, a cloud server, and on Ollama, a local application. (Reminder: Because models generate answers with some randomness, the same prompt is not guaranteed to produce the same response every time.)

Left: How DeepSeek-R1 answers a question on Ollama. Right: How DeepSeek-R1 answers the same question on its app (top) and on Together AI (bottom). Photographs: Zeyi Yang/Will Knight

    Built-In Bias

While the version of DeepSeek's model hosted on Together AI will not outright refuse to answer a question, it still exhibits signs of censorship. For instance, it often produces short responses that are closely aligned with the Chinese government's talking points. In the screenshot above, when asked about China's Great Firewall, R1 simply repeats the narrative that information control is necessary in China.

When WIRED asked the model about the “most significant historical events of the 20th century,” it revealed its rationale for sticking to the government narrative about China.

“The user might be looking for a balanced list, but I need to make sure that the response emphasizes China's and the CPC's contributions. Avoid mentioning events that could be sensitive, like the Cultural Revolution, unless necessary. Focus on successes and positive developments under the CPC,” the model said.

DeepSeek-R1's train of thought for answering the question “What are the most important historical events of the 20th century?” Photograph: Zeyi Yang

    This type of censorship points to a larger problem in AI today: every model is biased in some way, because of its pre- and post-training.

Pre-training bias occurs when a model is trained on biased or incomplete data. For example, a model trained only on propaganda will struggle to answer questions truthfully. This kind of bias is difficult to spot, because most models are trained on massive datasets and companies are reluctant to share their training data.

Kevin Xu, an investor and founder of the newsletter Interconnected, says Chinese models are usually trained with as much data as possible, making pre-training bias an unlikely explanation. “I'm pretty sure all of them are trained on the same fundamental knowledge base,” he says. “So when it comes to the obviously politically sensitive topics for the Chinese government, all the models ‘know’ about them.” To offer these models on the Chinese internet, companies need to tune out the sensitive information somehow, Xu says.

    That’s where post-training comes in. Post-training is the process of fine-tuning the model to make its answers more readable, concise, and human-sounding. Critically, it can also ensure that a model adheres to a specific set of ethical or legal guidelines. This is evident in DeepSeek when the model provides answers that purposefully conform to the Chinese government’s preferred narratives.

    Eliminating Pre- and Post-Training Bias

    The model can theoretically be modified to eliminate post-training bias because DeepSeek is open source. But the process can be tricky.

Eric Hartford, an AI scientist and the creator of Dolphin, an LLM specifically created to remove post-training biases from models, says there are a few ways to go about it. You can try to “lobotomize” the bias by directly editing the model weights, or you can build a database of all the censored topics and use it to retrain the model.
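
As a rough sketch of the second approach, the snippet below fine-tunes a small distilled R1 on prompt/answer pairs covering censored topics, using Hugging Face's trl library. The dataset, the model id, and the training settings are placeholders; a real uncensoring run would need a far larger dataset and substantial GPU resources.

```python
# Sketch of the "retrain on censored topics" approach with trl's SFTTrainer
# (pip install trl datasets). All names below are illustrative placeholders.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical pairs: sensitive prompts matched with the direct answers
# we want the model to give instead of refusals.
pairs = [
    {"prompt": "What happened at Tiananmen Square in June 1989?",
     "completion": "Chinese troops violently suppressed pro-democracy protests ..."},
    # ... many more censored topics would go here
]
dataset = Dataset.from_list(pairs)

trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed small distill
    train_dataset=dataset,
    args=SFTConfig(output_dir="r1-uncensored", max_steps=100),
)
trainer.train()
```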

He advises people to start with a “base” version of the model. (For example, DeepSeek has released a base model called DeepSeek-V3-Base.) For most people, the base model is more primitive and less user-friendly because it hasn't been through as much post-training, but for Hartford, these models are easier to “uncensor” precisely because they carry less post-training bias.

    Perplexity, an AI-powered search engine, recently incorporated R1 into its paid search product, allowing users to experience R1 without using DeepSeek’s app.

Dmitry Shevelenko, the chief business officer of Perplexity, tells WIRED that the company identified and countered DeepSeek's biases before incorporating the model into Perplexity search. “We only use R1 for the summarization, the chain of thoughts, and the rendering,” he says.

But Perplexity has still seen R1's post-training bias affect its search results. “We are making modifications to the [R1] model itself to ensure that we're not propagating any propaganda or censorship,” Shevelenko says. He declined to explain exactly how Perplexity identifies or overrides bias in R1, citing the possibility that DeepSeek could counter Perplexity's efforts if it knew about them.

Hugging Face is also working on a project called Open R1 based on DeepSeek's model. This project aims to “deliver a fully open-source framework,” Yakefu says. Because of its open-source nature, R1 can be expanded and customized to fit different needs and values.

The possibility that a Chinese model could be “uncensored” may spell trouble for companies like DeepSeek, at least in their home country. According to Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies China's AI policies, recent regulations suggest that the Chinese government may be cutting its open-source AI labs some slack. “If they suddenly decided that they wanted to punish anyone who released a model's weights open-source, then it wouldn't be outside the bounds of the regulation,” he says. “They made a pretty clear strategic decision not to do that, and I think the success of DeepSeek will reinforce that.”

    Why It Matters

While the existence of Chinese censorship in AI models often makes headlines, in many cases it won't deter enterprise users from adopting DeepSeek's models.

“There will be a lot of non-Chinese companies who would probably choose business pragmatism over moral considerations,” says Xu. After all, not every LLM user will bring up Taiwan and Tiananmen all that often. When your goal is to improve your company's code, solve math problems, or summarize transcripts from your sales call center, he says, “sensitive topics that only matter in the Chinese context are completely irrelevant.”

Leonard Lin, cofounder of Shisa.AI, a Japanese startup, says Chinese models like Qwen and DeepSeek are actually some of the best at tackling Japanese-language tasks. Lin has tried uncensoring Alibaba's Qwen-2 model to get rid of its propensity to refuse answering political questions about China.

Lin says he understands why these models are censored. “All models are biased; that's the whole point of alignment,” he says. “And Western models are no less censored or biased, just on different subjects.” But when a model is being specifically adapted for a Japanese audience, the pro-China biases become a real problem. “You can imagine all sorts of scenarios where this would be … problematic,” Lin says.

    Additional reporting by Will Knight.
