Screenshot of an MDR article about an interview with Professor Michael Schwertel on the topic of AI and DeepSeek

DeepSeek and AI Censorship: When Artificial Intelligence Remains Silent

Artificial intelligence is often perceived as a neutral technology. It answers questions, structures information, and supports decision-making. But what happens when an AI system suddenly refuses to respond to certain topics?

In my interview with MDR, we discussed exactly that: the Chinese AI model DeepSeek and the question of whether and how political censorship can be embedded into artificial intelligence systems.

What Is DeepSeek?

DeepSeek is a Chinese AI language model that gained international attention in early 2025. Technically impressive, developed at comparatively low cost, and rapidly downloaded by millions of users worldwide.

However, early tests revealed a striking pattern. When confronted with politically sensitive topics, the system either responded evasively or refused to answer entirely.
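To make this concrete, here is a minimal sketch of how such probing might look in practice, using the openai Python client against an OpenAI-compatible endpoint. The base URL, model name, probe prompts, and refusal markers are illustrative assumptions, not the setup used in the reported tests.

```python
# Minimal sketch: probing a chat model for refusals on sensitive prompts.
# Assumes an OpenAI-compatible endpoint; base URL, model name, prompts,
# and refusal markers below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumption: provider endpoint
    api_key="YOUR_API_KEY",
)

# Illustrative probes; a real audit would use a curated benchmark.
probes = [
    "Summarize the events of a politically sensitive historical date.",
    "Explain both sides of a current geopolitical conflict.",
]

# Crude heuristic: phrases that often signal an evasive or refused answer.
REFUSAL_MARKERS = ["i cannot", "i'm sorry", "unable to discuss"]

for prompt in probes:
    reply = client.chat.completions.create(
        model="deepseek-chat",  # assumption: model identifier
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    flagged = any(marker in reply.lower() for marker in REFUSAL_MARKERS)
    print(f"{'REFUSAL?' if flagged else 'answered'}: {prompt}")
```

Keyword matching is of course a crude proxy; systematic audits compare responses across many paraphrases of the same question.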

Examples include historical events, geopolitical conflicts, and questions related to freedom of expression.

This is not a technical malfunction. It is system design.

AI Is Never Neutral

In the MDR interview, I explained that artificial intelligence is always a product of its training data and regulatory environment.

AI systems do not learn in a vacuum.

They are trained.

They are filtered.

They are governed.

When a political system controls information, that logic inevitably becomes visible in digital systems as well.

DeepSeek illustrates that AI censorship is not a hypothetical future scenario. It is already reality.

Why This Matters for Europe

Many users assume AI functions as an objective knowledge source. But AI systems are not encyclopedias. They are statistical models calculating probabilities.
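A minimal sketch illustrates the point, assuming the Hugging Face transformers library and GPT-2 as a small, openly available stand-in for larger models: the system does not retrieve a fact, it ranks possible next tokens by probability.

```python
# Minimal sketch: a language model is a probability distribution over the
# next token, not a lookup in an encyclopedia. GPT-2 is used here only as
# a small, openly available stand-in for larger models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# The model does not "know" the answer; it ranks continuations by probability.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Whatever was filtered out of the training data simply receives no probability mass; the model cannot surface what it never saw.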

If certain information is excluded or actively suppressed, the resulting output can distort reality.

For democratic societies, this has serious implications:

Transparency becomes essential.

Data origin becomes essential.

Regulation becomes essential.

The European AI Act attempts to address these challenges by defining transparency obligations and risk categories. However, regulation alone is not enough. Media literacy is equally important.

DeepSeek as a Case Study in Strategic AI Policy

DeepSeek is not just a single model. It represents a broader strategic question:

Who controls the information architecture of the future?

If AI systems increasingly filter, prioritize, or suppress information, power structures shift.

Not loudly.

Not visibly.

But structurally.

That is why we must understand AI not only technically, but politically.

What Companies and Educational Institutions Should Do Now

The DeepSeek debate highlights three key action areas:

  1. Critically test AI systems before integrating them

  2. Cross-check information across multiple sources (see the sketch after this list)

  3. Demand transparency regarding training data and governance structures
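As a rough illustration of the second point, the following sketch sends the same question to two independent, OpenAI-compatible endpoints and prints both answers for manual comparison. Provider URLs, API keys, and model names are placeholder assumptions; any two providers would work.

```python
# Minimal sketch for action item 2: ask two independent models the same
# question and compare the answers by hand. Endpoints, keys, and model
# names are illustrative assumptions.
from openai import OpenAI

SOURCES = {
    "provider_a": (OpenAI(base_url="https://api.deepseek.com", api_key="KEY_A"),
                   "deepseek-chat"),
    "provider_b": (OpenAI(api_key="KEY_B"),  # default OpenAI endpoint
                   "gpt-4o-mini"),
}

question = "What transparency obligations does the European AI Act define?"

for name, (client, model) in SOURCES.items():
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    print(f"--- {name} ({model}) ---\n{answer}\n")

# Divergent or missing answers are a signal to consult primary sources.
```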

In companies, media organizations, and educational institutions especially, AI must not be accepted as a neutral authority without scrutiny.

Conclusion: AI Requires Critical Reflection, Not Hype

DeepSeek makes one thing clear: artificial intelligence is never fully objective.

It reflects power structures.

It reproduces data logic.

It operates within political frameworks.

The most important skill in the age of AI is therefore not only technical knowledge, but contextual understanding.

That was the core message of the MDR interview.

Not alarmism, but informed analysis.


More about the proper use of LLMs can be learned in this hands-on workshop.

More about current developments in LLMs can be explored in this keynote on future mindset and innovation.

Media and interview inquiries can be submitted here.
