The European Commission has set out a strategy to harden the EU’s information space against foreign interference and large-scale manipulation, enlisting major online platforms and content creators while creating new rapid-response and coordination mechanisms across the bloc.
Presented in Brussels on 12 November, the “European Democracy Shield” builds on existing obligations in the Digital Services Act (DSA) and introduces additional, largely non-legislative measures aimed at faster detection, coordinated response, and broader public awareness. It follows months of debate about election integrity, the misuse of generative artificial intelligence, and increasingly sophisticated campaigns attributed to foreign governments.
Under the plan, the Commission will operationalise a DSA “incidents and crisis protocol” to improve how national authorities and EU bodies coordinate when a major disinformation or influence operation is detected. While the DSA already requires very large online platforms to assess and mitigate systemic risks, the Shield is intended to link tools and actors across institutions so that interventions—such as adjustments to recommender systems, demotions, labelling, or access to trusted flaggers—can be deployed more swiftly in a cross-border setting.
In parallel, the Commission signalled sharper expectations for companies that are signatories to the EU’s voluntary Code of Practice on Disinformation. Platforms including Google, Microsoft, Meta and TikTok could be asked to step up detection and labelling of AI-generated and manipulated media, an area where platform policies remain uneven and where enforcement is complicated by cross-posting among services. The voluntary commitments sit alongside enforceable DSA duties; the Commission’s approach is to use both tracks to close operational gaps that emerge during fast-moving information incidents.
A new European Centre for Democratic Resilience will be established to act as a focal point for expertise, training and information-sharing among member states, EU institutions and civil society. According to the Commission, the Centre will support common situational awareness, develop playbooks for recurring threat patterns, and help align national responses with platform actions under the DSA when cross-border coordination is required. Responsibility for the Shield sits with the Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection, Michael McGrath.
The strategy also addresses the role of influencers in political communication. The Commission plans a voluntary network to brief creators on relevant EU rules (including transparency obligations and limits around paid political content) and to encourage responsible amplification practices during sensitive periods such as election campaigns. Officials argue that creators—often outside formal party structures—shape audience behaviour at scale and therefore need clearer guidance on compliance and provenance standards for political and civic-process content.
Announcing the package, Mr McGrath said the Democracy Shield “connects the dots, making sure Europe’s tools and actors work together effectively in defence of our shared values,” framing the risks in terms of both election security and societal trust. His portfolio, created in the current Commission, combines democracy and rule-of-law files with consumer protection and elements of digital policy, reflecting the cross-cutting nature of the threat landscape.
Today’s proposals follow preparatory steps earlier this year, including a public consultation on the Shield and calls to support fact-checking networks. The Commission has previously indicated it would develop guidance on responsible AI use in electoral contexts and will continue funding media literacy and independent media resilience initiatives. The broader policy track links the Shield to recent EU rules on political advertising transparency and to risk-mitigation duties for very large platforms under the DSA.
Several operational questions remain. First, how the crisis protocol will be triggered, and how information will flow between national regulators, EU bodies and platforms, will determine speed and effectiveness during cross-border incidents. Second, labelling and detection of AI-generated content require interoperable technical solutions and cooperation among platforms to avoid whack-a-mole effects. Third, engagement with influencers is voluntary; uptake and adherence will need monitoring to ensure the network reaches the audiences most exposed to mis- and disinformation. Finally, while voluntary measures can be deployed quickly, the Commission will rely on the DSA's enforcement architecture for any binding interventions, including risk assessments, audits and potential sanctions where systemic risk mitigation falls short.
The Commission’s move comes against a backdrop of heightened concern about foreign information manipulation and interference (FIMI) targeting EU states and institutions. Earlier in the year, Mr McGrath warned about intensifying efforts by hostile actors and flagged the need for more coherent defences, signalling that the Shield would be a central plank of the EU’s response.
Next steps include standing up the European Centre for Democratic Resilience and testing the DSA crisis protocol with national authorities and platforms. The Commission is expected to detail implementation timelines and governance for the Centre, along with metrics to evaluate the effectiveness of platform labelling, content provenance, and other mitigations during election periods and major crises.