Kurier Europejski

European Democracy & Institutions

Synthetic Media and the 2024-2029 Parliament: Regulating AI in European Democracy

Isabelle Mercier
Director, Digital Democracy Programme, Sciences Po Paris

European legislators spent years avoiding the question of how to regulate AI-generated political content. The AI Act, now being implemented in 2025-2026, forces the issue. Its overlap with the Digital Services Act and national electoral laws is producing a mess of inconsistent rules.

The EU Artificial Intelligence Act, adopted in 2024 and taking effect in stages through 2025 and 2026, classifies AI systems that generate or manipulate image, audio, or video content as 'limited risk' systems subject to transparency obligations. Article 50 requires providers of such systems, including general-purpose AI models, to mark their outputs as artificially generated in a machine-readable, detectable format, and deployers of deepfakes must disclose that the content is synthetic. The drafters had consumer protection and intellectual property in mind. Political speech and election campaigning were an afterthought, handled mostly in recitals with few operational rules.

The 2024 European Parliament elections showed why this matters. Fact-checking organisations logged hundreds of AI-generated images and audio clips targeting candidates across member states. Days before the vote, a deepfaked audio recording of a Slovak candidate discussing vote-buying spread on social media. Fact-checkers caught it, but not before millions had heard it. The incident laid bare the gaps in existing law. The Digital Services Act's content moderation provisions were not built for real-time electoral disinformation, and national defamation laws move too slowly to help before ballots are cast.

Since then, member states have layered their own rules on top of the EU framework. Belgium, which held the Council presidency in early 2024, tried to fast-track harmonised electoral transparency rules. The effort stalled when governments disagreed over whether labelling requirements should cover only paid political ads or all political content on platforms. France and Germany have both passed national laws criminalising the spread of deceptive synthetic media during 'election silence' periods, but their definitions of 'deceptive' and the scope of 'silence' vary significantly. A deepfake targeting a French MEP can be illegal in Paris and legal on a server in another member state.

The technical challenges are equally messy. Watermarking and metadata tagging, the two main approaches to synthetic content detection, are both easy to strip or spoof. Open-source models released without built-in safeguards can be run on local hardware, bypassing platform detection entirely. Researchers at the University of Amsterdam demonstrated in late 2025 that they could remove watermarks from popular image generation models with a simple script, rendering the AI Act's transparency requirements practically meaningless for determined bad actors. The arms race between detection and evasion is already tilting toward evasion.
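The fragility the Amsterdam researchers exploited is easy to demonstrate in principle. The toy Python sketch below (an illustration, not any real provider's scheme) embeds a watermark in the least-significant bits of raw pixel bytes, then simulates a lossy re-encode with coarse re-quantization; the watermark does not survive.

```python
# Toy illustration of watermark fragility (not any production scheme).
# A mark hidden in the least-significant bits of pixel bytes is erased
# by the kind of re-quantization that aggressive re-encoding performs.

def embed_lsb(pixels: bytes, mark_bits: list[int]) -> bytes:
    out = bytearray(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite least-significant bit
    return bytes(out)

def extract_lsb(pixels: bytes, n: int) -> list[int]:
    return [pixels[i] & 1 for i in range(n)]

def requantize(pixels: bytes, step: int = 8) -> bytes:
    # Coarse quantization, similar in effect to heavy lossy compression.
    return bytes((b // step) * step for b in pixels)

image = bytes(range(64))            # stand-in for raw pixel data
mark = [1, 0, 1, 1, 0, 0, 1, 0]     # 8-bit watermark

marked = embed_lsb(image, mark)
assert extract_lsb(marked, 8) == mark   # survives a bit-exact copy

laundered = requantize(marked)
print(extract_lsb(laundered, 8))    # → [0, 0, 0, 0, 0, 0, 0, 0]
```

Real provider watermarks are more robust than this toy, but the Amsterdam result suggests the same dynamic holds: any mark applied after generation can, with enough effort, be removed after distribution.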

Platform responses have been hesitant and inconsistent. Meta and Google have both announced policies requiring labels on AI-generated political content, but enforcement remains patchy. A study by the European Digital Media Observatory found that fewer than 40 percent of obviously synthetic political images posted on major platforms in early 2026 carried any visible label. The platforms blame users for failing to disclose, while civil society groups blame the platforms for designing systems that make disclosure optional and penalties negligible. Neither side is fully wrong, which makes regulation harder.

Industry self-regulation has produced more press releases than results. The Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, Microsoft, and several broadcasters, has developed technical standards for content credentials that would trace an image's origin through the production chain. Uptake has been limited. Fewer than 15 percent of newsrooms in the EU have implemented C2PA workflows, according to a 2025 Reuters Institute survey. The reasons are familiar: cost, technical complexity, and uncertainty about whether the standards will become legally mandated or remain voluntary. Without regulatory backing, voluntary standards rarely achieve critical mass.
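For readers unfamiliar with how content credentials travel with a file: the C2PA specification embeds its manifest store in JUMBF boxes carried, for JPEG images, inside APP11 marker segments. The sketch below is a deliberately simplified presence check, it only detects the carrier segment and does not parse the JUMBF box or cryptographically validate the manifest, which is where the real verification work lies.

```python
# Simplified presence check for a C2PA carrier segment in a JPEG.
# C2PA manifests live in JUMBF boxes inside APP11 (0xFFEB) segments;
# this detects the segment only and performs NO manifest validation.

SOI = b"\xff\xd8"
APP11 = 0xEB

def has_app11_segment(data: bytes) -> bool:
    if not data.startswith(SOI):
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                  # not at a marker; this toy parser bails
        marker = data[i + 1]
        if marker == 0xD9:         # EOI: end of image
            break
        if marker == APP11:
            return True
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length            # skip marker byte pair plus payload
    return False

# Minimal fabricated byte string: SOI, one APP11 segment, EOI.
payload = b"\x00" * 10             # stand-in payload, not a real JUMBF box
fake = SOI + b"\xff\xeb" + (len(payload) + 2).to_bytes(2, "big") + payload + b"\xff\xd9"
print(has_app11_segment(fake))     # → True
```

Even this trivial check hints at why newsroom uptake is slow: presence is cheap to detect, but trust requires validating signatures across every tool in the production chain, and that is where the cost and complexity sit.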

The transatlantic dimension complicates matters further. American tech companies, which control the platforms where most synthetic content is distributed, are subject to US law and US political pressure. The second Trump administration has made clear that it views European content regulation as a trade barrier, and several Republican senators have introduced legislation that would prohibit American companies from complying with foreign labelling requirements. If this legislation passes, European regulators would face the unenviable choice of accepting unlabelled synthetic content from major platforms or risking a trade dispute with Washington.

The European Electoral Authority, created in 2024, has proposed a common framework for platform transparency in European elections. It would require pre-election audits of recommendation algorithms and real-time disclosure of political ad spending on AI-generated content. Whether member states will cede enough authority to make it work is doubtful. The AI Act's transparency obligations are a baseline, not a solution. Without electoral-specific rules, European democracy will keep losing the race between synthetic deception and informed deliberation.

Looking beyond the immediate regulatory scramble, the deeper problem is epistemic. Synthetic media does not just deceive; it undermines the shared factual foundation that makes democratic deliberation possible. When voters cannot trust what they see and hear, they retreat into tribalism or apathy. The 2024 Slovak deepfake was debunked quickly, but surveys showed that a significant minority of voters still believed the audio was genuine even after correction. The lie travels faster than the fact-check, and the damage to trust often outlasts the specific deception. Regulating synthetic media is necessary. Rebuilding the trust it destroys is harder, and no EU directive can do it.