Artificial intelligence (AI) has significantly changed the way web content is created. Texts, images, and even entire video sequences can be generated in seconds, saving website operators and agencies an enormous amount of time. Even Google now displays AI-generated short answers directly in search results, without users having to visit the original source. But this progress brings profound problems with it – for independent media as well as for AI itself.
AI can create content quickly, cheaply, and in large quantities. This enables new business models, saves editorial resources, and allows multilingual communication with minimal effort. But these benefits have a downside: much of this automated content is based on human-created journalism – and it is precisely that journalism that is in danger of disappearing under the weight of automation.
Independent media are usually financed through advertising, subscriptions, or sponsorship. When AI models summarize their content and platforms such as Google or Microsoft display it directly in search results, the original providers lose reach – and with it their most important source of income. For large media companies with strong brands or paywalls this is a problem, but a survivable one. For small newsrooms, specialist blogs, or freelance authors, however, it threatens their very existence. Without income, quality journalism cannot be produced. The result: the editorial substance on which AI models depend dries up.
Another effect is even more dangerous in the long term – including for AI models themselves: the more generated content dominates the internet, the more often new AI systems will draw on existing AI-generated sources when producing text. Quality declines as a result. Errors, oversimplifications, and incorrect summaries are copied and amplified with each iteration. The outcome is an erosion of substance: instead of well-researched material, increasingly redundant, formulaic, and meaningless texts are produced, drawing on ever emptier streams of data. AI loses its foundation. It is a self-reinforcing cycle.
This development affects not only the market but also the formation of public opinion. Small, specialized newsrooms and independent information platforms make an important contribution to the diversity of public discourse. If they disappear, dangerous information monopolies will emerge: only content that is aggregated, filtered, and distributed by large corporations will achieve visibility. Critical, investigative, and non-mainstream voices will be lost, as will regional and specialized reporting.
Regulatory responses are lagging far behind this development. Even ambitious legislative projects such as the EU AI Act will not take full effect until 2026 at the earliest. By then, many independent providers could already have disappeared. At the same time, more and more AI products are entering the market that access current content, automatically summarize it, and monetize it – without involving the creators. Google, Microsoft, Apple, and specialized providers such as DeepSeek benefit directly from this content without adequately compensating the media concerned.
The responsibility lies with the large platforms.
A purely moral appeal will not suffice. Concrete, binding measures are needed: rules on how creators are remunerated when their content is used, transparency about which sources AI-generated answers draw on, and fair participation in the revenue that platforms earn from that content.
Artificial intelligence has the potential to make content production more efficient and accessible, and there is no reason not to use AI to process your own content. But this progress does not come for free – it poses an acute threat to the economic basis of independent media. The big platform operators benefit twice: they save on their own editorial staff and earn money from the content of others. Without binding rules on remuneration, transparency, and fairness, media diversity on the internet will be decimated in no time – and AI will end up standing on a foundation it has itself destroyed.