Artificial intelligence is changing the way we work, the media landscape, and society as a whole, but above all, it is doing so at a speed that leaves us almost no time to think. The German media journal mediendiskurs interviewed me on exactly these topics. Here are the key ideas from our conversation, along with a link to the full interview.
The Real Problem Is Speed
Technological innovations that take work away from people have always existed. But AI is different in one crucial way: in the past, we had at least 25 years to adapt and create new jobs. Today, we are confronted with new innovations every week, leaving almost no time for reflection. My image for this: it is as if the internet had been invented just one day after Gutenberg created the printing press.
AI Is Not Intelligence. It Is a Simulation
Many people feel a vague fear of an all-powerful AI, shaped by science-fiction stories. But a more realistic danger is this: AI is a simulation of intelligence, not the real thing. It has no human experience, no common sense. If it misunderstands an instruction, the consequences can be serious. Not because AI is evil, but because it lacks context. A simple example: you can know everything about water and still never have felt what it is like to jump into waves.
This becomes especially risky when AI is used for purposes its training data was never meant to cover. Take psychotherapy: German therapy data is not publicly available and is therefore not part of AI training datasets. This can lead to wrong and, in the worst case, dangerous results.
It Is Not Just Young People Who Need an “AI License” – Society as a Whole Does
The debate about banning TikTok for those under 16 misses the bigger picture. Many 14-year-olds handle social media more confidently than 40-year-olds who fall for conspiracy theories. This is not an age problem – it is an education problem. In Indonesia, there is already a requirement for people over 40 to complete an AI literacy course. What we need is a lifelong learning initiative for everyone on how to use AI and digital media responsibly.
Platforms Can Regulate. They Often Just Don’t Want To
Platform operators often defend themselves by claiming that controlling all content is technically impossible. I think that argument is weak. Platforms already use AI extensively to guide user behavior and maximize attention. The same technology could be used for content moderation. What is missing is not the technology. It is a clear political framework. Society, through its elected representatives, must define where the boundaries are.
Digital Sovereignty Is Europe’s Opportunity
Europe talks a lot about digital sovereignty. But the first step is simple: use open-source software instead of sending billions to US tech companies. The data generated in the process is one of the most valuable resources of our time. It makes no sense to give it away.
The full interview, with all questions and answers, has been published in mediendiskurs — one of Germany’s leading academic journals on responsibility in audiovisual media. A longer version will appear in the JMS-Report (01/2026).
👉 Read the full interview on mediendiskurs.online
Want to explore these topics in depth? As a keynote speaker and workshop facilitator, I bring exactly these perspectives to companies, conferences, and media organizations. Get in touch.