On this page we will publish the project's main results, including core publications and deliverables.
The primary aim of Deliverable 1.1 is to establish a comprehensive theoretical framework for the project. It endeavors to bridge empirical research on social media with insights from the history of political thought and democratic theory, addressing the central question of whether the rise of social media is precipitating a crisis of democracy. This debate is not new; ever since the advent of political systems identifying as democratic, there have been persistent predictions of their imminent crisis. Yet, this raises pressing questions: Is the current diagnosis of crisis justified, and, if so, what distinguishes it from previous crises in the history of democracy? Democracy often appears as a "moving target," its intellectual history in the West characterized by a discordant array of claims about its essence, prerequisites, and dynamics. Fundamental questions remain contested: Who qualifies as a citizen? What conditions are essential for democracy to thrive? What institutions are indispensable for its sustainability? However, in much of the existing research on the interplay between social media and democracy, the concept of democracy is left largely undefined.
Deliverable 1.1 aims to engage with this "moving target" by identifying the core elements of liberal democracy necessary to evaluate the impact of social media on contemporary democratic systems. To do so, the analysis unfolds in two steps. First, it traces the historical evolution of democracy, exploring the tensions and debates that have shaped its development. By revisiting historical arguments—both critical and supportive—about democracy, the deliverable seeks to contextualize contemporary challenges. In this step, we assess the relevance of Robert Dahl’s concept of polyarchy as a framework for understanding liberal democracy, complemented by the perspectives of Jürgen Habermas. Second, these theoretical insights are connected to empirical studies examining how social media influences democratic societies and institutions. This dual approach—combining historical, theoretical, and empirical perspectives—aims to provide a nuanced understanding of whether the current challenges posed by social media signify a distinct crisis of democracy or a continuation of its perennial tensions.
This deliverable contributes to the project’s theoretical framework for analyzing digital citizenship. SoMe4Dem is dedicated to examining how social media transform democratic public spheres, particularly by identifying the main causal mechanisms at work in democracies. The conceptual, theoretical, and systematic foundation for this approach is laid out in this deliverable. This task is addressed by conceptually rearticulating the core dimensions of the public sphere—participation, polarization, and trust—as interrelated dynamic concepts that define democracy in the social media age. Rather than assuming that social media either simply democratize or undermine democracy, the deliverable demonstrates how platformed communication can both enable and constrain citizenship. For example, it shows how social media can expand new forms of engagement while simultaneously generating conflicts and eroding institutional trust. In doing so, Deliverable 1.2 advances the project’s agenda by moving beyond simplistic binaries and clarifying the conditions under which digital infrastructures affect democratic citizenship. The paper highlights the contingent interplay of the three dimensions: how participation can be both empowering and merely performative; how polarization can threaten deliberative cohesion yet also foster contestation; and how trust can sustain legitimacy even while being undermined by disinformation.
The second part of the deliverable introduces the novel notion of "memetic participation", a model of viral, remix-driven political engagement on social media. By emphasizing the relational and iterative nature of digital activism—for instance, through TikTok trends or hashtag campaigns—this concept reframes understandings of online participation and points to new hypotheses about how networked content creation can spur real-world mobilization and shape public opinion.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This final conceptual Deliverable 1.3 takes as its starting point the structural crisis of democracy resulting from the expansion of digital platforms (analyzed in Deliverables 1.1 and 1.2), arguing that social media’s transformation of the public sphere undermines the key conditions for democratic participation and discourse. This deliverable focuses on conceptual and practical solutions and recommendations. Firstly, the deliverable deals with the limitations of binary, optimistic-versus-pessimistic narratives about social media’s democratic impact, suggesting a more nuanced analysis. To this end, the study employs a conceptual framework that aligns the Participation-Polarization-Trust (PPT) triad with an input-throughput-output legitimacy model to systematically assess how digital infrastructures restructure civic participation, intensify or reconfigure polarization, and reshape the foundations of public trust in democratic outcomes. Secondly, the deliverable combines one regulatory anchor case with one empirical bridge case. The Australian Online Safety Amendment Bill 2024 illustrates a regulatory attempt to safeguard the digital public sphere by restricting youth exposure and enforcing accountability of digital platforms. In parallel, the empirical work reported in D2.4 serves as an empirical diagnostic probe that primarily maps legitimacy-relevant conditions by analyzing discursive polarization and discourse quality in hybrid public spheres. Together, these two cases reveal key paradoxes of digital deliberation: participation often becomes a performative spectacle rather than genuine empowerment; polarization intensifies without any meta-consensus to ground constructive debate; and perceived legitimacy may erode when online engagement is experienced as superficial or orchestrated.
Thirdly, the deliverable introduces the concept of “legitimacy feedback loops” as a reflexive theoretical device, highlighting how efforts to enhance input legitimacy (inclusive participation) can unintentionally affect deliberative quality (throughput) and public acceptance of outcomes (output). This dynamic perspective underscores the need for integrated, multi-dimensional strategies to recalibrate democratic legitimacy in the digital era. Finally, considerations regarding possible recommendations and the implementation of the theoretical insights conclude the conceptual and theoretical framework of the SoMe4Dem project.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Cultural memory provides a shared set of historical markers that communities rely on over the long term to justify their epistemic foundations and collective identity. Cultural memory can function integratively, symbolically stabilizing public discourse and identity, but it can also fuel conflict and polarization when interpreted through competing ideological frames. While cultural memory concerns the institutionalized, long‑term layer of remembrance, communicative memory denotes a different modus operandi of cultural memory: the living memory of three to four generations, circulating in everyday communication and highly sensitive to shifts in media practices.
In this deliverable, we focus in particular on transformations triggered by social media. Accordingly, we examine how these media-driven transformations recalibrate the integrative versus polarizing roles of cultural/communicative memory within the public sphere. This issue is particularly salient in contemporary politics, where populist actors selectively mobilise cultural memory to legitimise policy choices and frame conflicts.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable provides the first version of a dataset of attitudinally positioned populations, together with the conceptual and methodological framework required to leverage attitudinal inference for large populations of social media users in Europe. Building on recent advances in large-scale, multi-dimensional political attitude inference from social networks and text, we show how to create a European sample of attitudinally positioned users along a Left-Right and an Anti-elite dimension, the latter measuring attitudes towards elites and trust in institutions. These two dimensions are shown to be relevant both for traditional political analysis on social media and for analyses accounting for new forms of polarization related to democratic backsliding. This dataset of users will serve as a frame of reference for case studies in other tasks of the project that explore links between activity on online platforms and its evidence and impacts in politics.
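As a minimal illustration of what positioning users on two attitudinal dimensions can look like (the account names and coordinates below are invented, and the deliverable's actual inference methods are far more sophisticated than this sketch), a user's position can be approximated by averaging the known positions of the political accounts they follow:

```python
# Illustrative sketch only — NOT the deliverable's actual pipeline.
# Position users on (Left-Right, Anti-elite) axes as the mean position of the
# political accounts they follow, a common simplification of network-based
# ideological scaling. All reference positions here are hypothetical.
from statistics import mean

# Hypothetical elite accounts with (left_right, anti_elite) coordinates in [-1, 1]
ELITE_POSITIONS = {
    "party_a": (-0.8, 0.1),
    "party_b": (0.7, 0.2),
    "populist_c": (0.4, 0.9),
}

def position_user(followed):
    """Average the known positions of followed elite accounts.

    Returns None when no followed account has a known position, since the
    user cannot be placed from this signal alone.
    """
    known = [ELITE_POSITIONS[a] for a in followed if a in ELITE_POSITIONS]
    if not known:
        return None
    return (mean(p[0] for p in known), mean(p[1] for p in known))

print(position_user(["party_b", "populist_c"]))  # ≈ (0.55, 0.55)
```

A real pipeline would instead estimate latent positions jointly from the full follower graph and from text, but the sketch conveys the core idea: a user's attitudinal coordinates are inferred from their observable connections rather than asked directly.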
This deliverable presents an empirical investigation into political participation and engagement on social media, focusing on the role of individuals’ attitudes and behaviors towards political issues and events. By leveraging, among other sources, data from Deliverable 2.1 (D2.1), we operationalize key dimensions of the public sphere, particularly political participation and identity, polarization, and trust, to analyze how these factors influence political activity online.
Our analysis is grounded in the dimensions of political attitudes, including ideological alignment, trust in institutions and representatives, and engagement with political content. Using spatially and attitudinally situated populations, we explore how these attitudes drive various forms of social media participation, ranging from self-curation of political identity to active dissemination of political content and petition-related engagement in digital mobilization.
Through the SoMe4Dem framework, we explore the intersection of social media affordances, online behaviors, and political participation, investigating how individuals’ positions on major issues, such as immigration, nationalism, and traditional values, shape their political activity and engagement with political content and processes. This work also connects citizens’ political participation with political representatives and party systems across EU countries, offering insights into how digital engagement and activism relate to more traditional political structures.
The deliverable further contributes to conceptualizing and measuring online political participation, particularly in the context of European politics. Additionally, it aims to relate political attitudes to political discourse and socio-political identity narratives in social media, using advanced natural language processing techniques, machine-learning algorithms, and (structural) statistical models for categorizing textual data. The aim is to qualify and quantify online socio-political narratives, their reach, the engagement they trigger, and their relevance in public discourse, mobilization, and political conflicts.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable advances the SoMe4Dem project’s goal of critically assessing the relationship between social media and democracy by placing (political) narratives at the center of analysis. Based on five papers, preprints, and paper drafts, it develops and applies a narrative-based framework to better understand how polarization and political conflict are structured, amplified, and transformed on social media. Conceptually, the deliverable argues that political discourse on social media is not only shaped by isolated statements, toxic language, or individual arguments, but by broader narrative structures. These narratives contribute to both ideological polarization (e.g. through issue alignment and coherent worldviews) and affective polarization (e.g. through antagonistic constructions of opponents). Methodologically, the work introduces a computational tool that makes narratives empirically tractable in large-scale social media datasets. It thereby proposes scalable approaches for detecting and analyzing narrative signals, enabling systematic investigation of how political meaning is constructed and circulated online. Empirically, the deliverable examines how narratives operate across different platforms and contexts. It shows that platform affordances, such as virality dynamics, retweet structures, broadcast architectures,
and moderation regimes, shape the visibility, alignment, and amplification of antagonistic and state-aligned narratives. By linking narrative dynamics to platform governance and political economy, the work highlights how digital infrastructures can facilitate polarization, strategic amplification, and the blurring of boundaries between state and non-state actors. Within the project, this deliverable constitutes a core contribution to the study of affective polarization, while also connecting to research on cultural memory, platform affordances, and regulatory implications. Overall, it provides a theoretically grounded and methodologically innovative framework for analyzing democratic discourse on social media through the lens of narratives, polarization, and platform structures.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable translates the theoretical framework developed in D1.4 (Communicative Memory) into a reproducible measurement architecture for assessing how cultural and communicative memory shape democratic contestation in hybrid public spheres. For this purpose, we operationalize our central theoretical move from memory regimes (e.g. nation-based, human-rights-based, cosmopolitan/transnational repertoires) to discursive modes (antagonistic, agonistic, deliberative) and integrate these components into an input–throughput–output framework. Empirically, D2.4 applies an LLM-assisted approach to detect narrative configurations, assess interactional modes (antagonistic-agonistic-deliberative; AAD) and evaluate deliberative quality using DQI-style indicators, while explicitly addressing validity constraints through human evaluation.
The framework is tested across three case-study fields of memory covering four commemorations and documented through five linked preprint articles attached to this deliverable. Case Study 1 (Slovenia’s Day of Resistance) serves as the main methodological demonstrator, linking narrative structures to discursive modes and deliberative functions across social media and online news, and adding an interpretive account of how platform affordances and multimodality shape agonistic memory practices. Case Study 2 (Giorno del ricordo / foibe, Italy–Slovenia) triangulates LLM-assisted cross-media comparisons (X vs online news, 2022–2024) with a survey on citizens’ agonistic orientations to diagnose whether, and where, agonistic potential persists under polarisation. Case Study 3 (Europe Day and the fall of the Berlin Wall, France/Germany/Italy/Slovenia) tests a three-step design on Twitter/X (conflict presence; antagonistic vs non-antagonistic tone; deliberative quality) and uses topic modelling to identify hotspots of low-quality conflict and “pockets” where deliberative signals remain visible.
Finally, D2.4 adds a forward-looking extension from measurement to intervention. An article on fine-tuned LLMs and AI agents shows how discourse-quality indicators can improve detection and, under controlled conditions, shift generated outputs toward desired properties such as civility and constructiveness.
Overall, D2.4 delivers a transferable, empirically tested framework for measuring how contested memories shape discourse quality and democratic potentials in hybrid public spheres.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This survey examines how the empirical work done in the literature on trust and the role of language can help us understand how social media contribute to the creation of trust in institutions, among citizens and representatives, and, more generally, in democratic processes (elections, referendums, etc.). By relying on a large body of literature developed in contexts that reach beyond social media, the survey targets regularities and key aspects of the trust-building process on social media, focusing on causal mechanisms and identifying the appropriate tools for the measurement of trust. This collection of causal triggers of trust may also help identify which features and dynamics within social media are responsible for the emergence of trust, for a focus on cooperation, or for a resort to conflictual patterns.
This review will highlight that the different disciplines that have been concerned with measuring trust have used different methodological frameworks and tools, such as laboratory and field experiments, and surveys, each with unique strengths and weaknesses.
This report presents a road-map for building reliable computational models of online platforms in accordance with the procedures established through the Digital Services Act (DSA). It develops basic terminology to facilitate communication between scientists, policymakers and platform providers about systemic risk assessment using computational models. It outlines the different steps to be undertaken in DSA auditing processes, enhancing their accountability and reliability. In turn, following these procedures will lead the way to empirically reliable computational models for assessing the impact of digital policies and interventions.
We complement this general framework with a work-in-progress report on a real platform which serves as a use case for developing the methodology involved.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Building on the literature review developed in Deliverable 3.1, this deliverable brings together experimental, survey-based, and conversational analyses to examine how trust and conflictual language jointly shape online interactions. Across the three empirical studies, we demonstrate that trust and conflict online are not merely content-level phenomena, but emerge from the interaction between heuristic decision-making under uncertainty, platform governance structures, and the temporal and structural organization of online conversations—highlighting how trust erosion and conflict escalation are fundamentally socio-technical and interactional processes.
Study 1 demonstrates that in fast-paced social media environments, users rely heavily on heuristic cues—such as perceived similarity or aggregated judgments—when evaluating and sharing content under time pressure. While frequent Twitter/X users display higher output and confidence, their improved raw performance reflects greater speed and engagement rather than better accuracy. Social cues can improve accuracy for less familiar users, but they do little to
change the sharing behavior of frequent users, who remain more habit-driven. These findings reveal how interpersonal trust cues, platform familiarity, and cognitive load interact to shape both information accuracy and the circulation of misinformation.
Study 2 shifts to a collaborative setting by examining trust in Wikipedia as a form of platform-level institutional trust. Using a large-scale survey of active contributors, it shows that sustained trust depends not on interpersonal familiarity but on procedural affordances—transparent norms, open governance, dispute resolution mechanisms, and visible accountability structures. Wikipedia functions as a “self-organizing bureaucracy,” illustrating how platform design choices can maintain long-term trust even in open, volunteer-driven systems. This stands in contrast with fast-paced social platforms and highlights how institutional trust can be produced, not merely assumed, through governance architectures.
Study 3 provides the missing interactional link by analyzing the micro-dynamics of conflictual language within threaded climate-change discussions. Using LLM-based multi-dimensional tone annotations, it shows that conflict emerges not solely from ideological differences but from the temporal rhythm and structural organization of conversations. Slower reply dynamics foster more respectful and less nasty language, while temporal distance from a parent post influences emotional framing. Posts systematically align with both their parent and sibling contributions, and early replies in a branch disproportionately shape subsequent tone. The study identifies how local conversational climates, alignment effects, and early-branch cues create self-reinforcing pathways through which conflict spreads.
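A toy sketch of the kind of timing-based measure Study 3 works with (the post data, toxicity scores, and the 30-minute threshold below are invented for illustration; the study itself relies on LLM-based multi-dimensional tone annotations and richer statistical models):

```python
# Illustrative sketch only — hypothetical thread data, not the study's corpus.
# Each post: (post_id, parent_id, timestamp_minutes, toxicity in [0, 1]).
posts = [
    ("a", None, 0, 0.1),    # root post
    ("b", "a", 2, 0.6),     # fast reply, nastier tone
    ("c", "a", 120, 0.2),   # slow reply, calmer tone
]

def mean_toxicity_by_latency(posts, threshold=30):
    """Compare average toxicity of fast vs. slow replies.

    A reply is "fast" when it arrives less than `threshold` minutes after
    its parent post; returns (mean_fast, mean_slow), with None for empty bins.
    """
    times = {pid: t for pid, _, t, _ in posts}
    fast, slow = [], []
    for pid, parent, t, tox in posts:
        if parent is None:
            continue  # root posts have no reply latency
        latency = t - times[parent]
        (fast if latency < threshold else slow).append(tox)
    avg = lambda xs: sum(xs) / len(xs) if xs else None
    return avg(fast), avg(slow)

print(mean_toxicity_by_latency(posts))  # (0.6, 0.2)
```

In the toy data the fast reply is nastier than the slow one, mirroring the study's finding that slower reply dynamics go together with more respectful language; the actual analysis conditions on far more structure (parent and sibling alignment, branch position) than this binary split.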
Taken together, these studies reveal that trust erosion and conflict escalation online are not only content problems but structural and interactional problems. Trust acts as a cognitive shortcut under uncertainty; institutional trust is enabled or constrained by platform governance; and conflict is shaped by the geometry of conversation—timing, branching, alignment, and user-level participation patterns.
This integrated perspective yields concrete implications for platform governance. Interventions should not only target content accuracy but also the conditions under which users make decisions and the interactional structures that guide conversational trajectories. Design strategies that modulate pacing, highlight early-branch cues, embed procedural transparency, or provide lightweight friction can help counteract the mechanisms through which mistrust and conflict become self-reinforcing.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Online discussions are dominated by a highly active minority of users—a dynamic that can severely distort perceptions of public opinion and contribute to political polarization. Despite the longstanding nature of this phenomenon, little is known about what distinguishes the silent majority from those who do engage publicly, and whether there are effective strategies to reduce unequal levels of participation. The prominent original “Spiral of Silence” theory describes a dynamic that leads individuals to withhold their opinions if they perceive they are in the minority and fear social sanctions for voicing their opinions. Our findings challenge the uniformity of this effect as we find that perceptions of discussion environments serve as catalysts for participation inequality—inhibiting the silent while further encouraging the most active.
In this collective field experiment with 520—fully informed and consenting—US participants, we created large customized discussion groups on Reddit that allowed us to test these questions in a controlled but ecologically valid environment. We examined how personal characteristics and perceptions of online discussions shaped individual participation, and we tested how two interventions (namely, content moderation norms and financial incentives) influenced participation and downstream consequences. Consistent with prior research, we found that frequent commenters were predominantly male, highly interested in politics, and undeterred by perceived polarization or toxicity. However, we also found that silent users were more likely to speak up when they perceived the discussion to be respectful and constructive. Those who remained silent had the highest perceptions of the environment’s toxicity and polarization.
We found evidence for positive reinforcement of behavior through social feedback over time. Incentivizing participants to write comments decreased participation inequality, whereas highlighting moderation norms seemed to “backfire” and increased comment toxicity. Understanding what drives participation in online discussions matters. Our manuscript makes a significant contribution to understanding the origins and drivers of participation inequality, and provides evidence-based ideas for re-designing digital environments to foster more equitable democratic discourse.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable presents an empirically-informed framework that connects social media affordances to the functions of the public sphere in liberal democracies. This framework will function as a basis for further systematic approaches to the study of how social media impact the public sphere. The deliverable reports on three papers or drafts of papers respectively in which aspects of this framework are developed.
The report “Social media and the public sphere—An empirically-informed taxonomy of platform affordances” introduces the framework’s central concepts of technical affordances vs. social affordances, and illustrates these with examples from a large set of social media platforms. The report likewise includes an empirically-informed discussion of how these affordances might be measured.
The published paper “A computational analysis of Telegram’s narrative affordances” offers an empirical investigation of how technical and social affordances shape the (politically extreme) narratives that propagate on the messaging platform Telegram.
Finally, the draft paper “A dynamical model of platform choice” (see here for the more recent preprint) offers a first step towards a model of usage dynamics of competing social media platforms with respect to two social affordances: news consumption and identity expression, and their corresponding user preferences for more or less diversity. Combined, these contributions constitute a conceptual and empirical foundation for future work in the project, notably with regard to the empirical analysis of social media narratives, work on digital citizenship, and principles for evidence-based regulation of social media platforms.
This study evaluates two strategies to improve individuals’ ability to navigate online environments more competently to mitigate potential harms of online misinformation: a lateral reading intervention that trains users to assess the trustworthiness of online news sources by consulting external sources and a claim-focused search strategy that encourages searching for evidence behind specific claims made in online news articles. Conducted with a nationally representative German sample (N = 2,666), the randomized controlled experiment found that both interventions modestly improve the ability to discern trustworthy sources and credible claims from untrustworthy sources and false claims without backfire effects. Lateral reading proved effective for supporters of populist and far-right parties, while both strategies were particularly beneficial for younger and less-educated individuals. Behavioral data revealed that lateral reading increased on-topic browsing as well as online searches to identify untrustworthy sources, whereas claim-based search boosted immediate search activity and trust in credible information, indicating complementary mechanisms. However, neither intervention showed lasting effects two weeks later, highlighting the need for sustained engagement to maintain improvements. These findings suggest that combining the strategies and targeting specific demographics could enhance media literacy and resilience against misinformation.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable reports on the project’s work on reading protocols for online platforms, and reflects on how these protocols can inform the construction of social media interfaces and methods for facilitating pluralistic online discourse. Combining empirical results from machine-guided analyses of social media interfaces with perspectives from computational simulations, the report concretely covers the results of two papers produced in the project.
First, the published paper “Dark metrics and the mainstreaming of political extremism on Dutch-speaking Telegram: A comparative reading of platform affordances” (published in Platforms and Society, 2026) offers a comparative analysis of Telegram’s human- vs. algorithmically-curated features for the discovery of political channels on the messaging platform Telegram. Based on large datasets snowballed from seed Telegram channels of political parties in Flanders and the Netherlands, this paper develops a “centrifugal reading” approach that brings into view qualitative and quantitative differences in the political spaces made visible in Telegram’s interface
to users choosing to follow either 1) Telegram’s human-curated links to other channels or 2) Telegram’s recent, but opaque algorithmic recommender engine for “similar” channels.
The study “Lateral reading beyond Google: Comparing search results, chatbots and Wikipedia for assessing source credibility” examines how different source-checking strategies, including Google search, chatbots, and Wikipedia, support credibility judgments. Analyzing 874 U.S. and German news domains, it finds that all strategies similarly distinguish credible from non-credible sources. A single search result often conveys most credibility cues, and search results rely partly on source recognition, unlike chatbots. Effectiveness depends on both user behavior and the information each channel provides. Speaking to the over-arching concept of “reading protocols” for social media users, this manuscript further introduces an LLM-based framework to audit source-checking tools at scale. Taken together, both papers foreground how specific platform design choices around discovery, recommendation, and (eventually) evaluation actively shape users’ political and epistemic environments online. These observations, and the methods used to surface them, constitute foundations for the future development of social media interfaces and third-party “middleware” that facilitates pluralistic online (political) discourse.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Background and Objectives
This report presents a systematic computational analysis examining how social media algorithms impact democratic discourse and information quality. The research addresses a critical challenge: social media platforms’ attention-driven business models inadvertently amplify negative, divisive, and low-quality content, contributing to democratic backsliding, polarization, and misinformation spread. The study develops an agent-based model (ABM) to test whether content-neutral “exogenous cues”—quality signals based on context rather than content analysis—can improve information ecosystems without triggering censorship controversies.
Methodology
The research team developed an empirically-grounded ABM of information sharing on Twitter/X using a dataset of 833,648 tweets from 3,441 U.S. users, covering 3,678 news domains rated by NewsGuard for reliability. Exogenous cues examined included over 40 context-based quality signals such as audience political diversity, sharing inequality (Gini coefficient), and cognitive network centrality—all computed without analyzing content. The model simulates how users create and share content within ideologically-clustered networks, testing four algorithmic ranking scenarios:
1. Ranking by domain quality, as measured by NewsGuard score.
2. Ranking by domain audience diversity (previous approach from the literature).
3. Ranking by partisan-neutral exogenous cues (includes diversity plus cues considering political leaning without differentiating partisans).
4. Ranking by partisan-blind exogenous cues (no consideration of political orientation).
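The core comparison behind these scenarios can be caricatured in a few lines. This is an illustrative sketch under invented assumptions (the domain names, quality scores, engagement values, and attention weights are all hypothetical), not the deliverable's empirically calibrated ABM:

```python
# Toy feed-ranking simulation — hypothetical data, NOT the project's ABM.
import random

random.seed(0)
# Hypothetical domains: (name, NewsGuard-like quality score, baseline engagement)
DOMAINS = [
    ("reliable.example", 95, 1.0),
    ("mid.example", 70, 2.0),
    ("junk.example", 20, 5.0),  # low quality but highly engaging
]

def simulate(rank_by_quality, n_shares=10_000):
    """Mean quality of shared content under a given feed ranking.

    Users share mostly from the top of the feed (attention decays by rank),
    so whatever the ranking key promotes dominates what circulates.
    """
    key = (lambda d: d[1]) if rank_by_quality else (lambda d: d[2])
    feed = sorted(DOMAINS, key=key, reverse=True)
    weights = [0.6, 0.3, 0.1]  # attention share for ranks 1, 2, 3
    shared = random.choices(feed, weights=weights, k=n_shares)
    return sum(d[1] for d in shared) / n_shares

print(f"baseline (engagement) ranking: {simulate(False):.1f}")
print(f"quality ranking:               {simulate(True):.1f}")
```

Under these toy numbers the engagement-driven feed puts the low-quality domain on top and drags the mean quality of shares down, while ranking by quality raises it; the deliverable's actual simulations test the subtler question of whether content-neutral exogenous cues can achieve a similar lift without reading the content at all.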
Key Findings
All four ranking interventions substantially increased the average NewsGuard score of shared content compared to baseline. Quality improvements occurred across the entire political spectrum, including left-leaning, neutral, and right-leaning domains. Rankings using combined exogenous cues (Simulations 3 and 4) maintained quality improvements throughout the simulation period. In contrast, diversity alone (Simulation 2) showed initial gains that gradually declined toward baseline levels due to amplification of low-quality sites with diverse audiences. Despite being content-neutral, all interventions shifted overall content sharing slightly leftward on the political spectrum. This effect reflects an empirical reality rather than algorithmic bias: right-leaning news sources in the corpus (and in broader research) demonstrate systematically lower quality scores. The algorithms equalized quality across political orientations, but because fewer high-quality right-wing sources exist in the ecosystem, boosting quality naturally reduced right-wing content share.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Open Source Investigations (OSINV) and Open Source Intelligence (OSINT) represent emerging forms of digital citizenship, where skilled individuals leverage publicly available data to verify contested events and counter misinformation through the construction of narratives of truth. This report examines how these practices have evolved into consequential truth-production mechanisms that challenge traditional epistemic authorities. Organized around three central questions, the report provides empirical insights into OSINT/V's transformative impact on truth production in the post-truth era.
Who constitutes the OSINV community? Russia's invasion of Ukraine catalyzed a dramatic expansion of OSINV practices across independent investigators, institutional organizations like Bellingcat, legal accountability actors, and cybersecurity communities. These actors negotiate continuous tensions between democratizing investigative capacities and establishing new forms of expertise.
What methodologies characterize OSINV practice? Practitioners deploy three distinct approaches: digital verification (establishing correspondence between digital objects), augmented investigations (computational reconstruction of environments), and legal research (translating digital evidence for juridical frameworks). Each methodology embodies specific relationships to verification, representational ethics, and truth claims.
How does OSINV acquire epistemic authority? A 23-parameter Trustworthiness Evaluation Matrix, developed during the Digital Methods Initiative 2023 Winter School, demonstrates how authority stems from methodological transparency rather than institutional prestige. In an invited forum co-edited by the author, OSINV experts reflect on how the practice is strengthening its legitimacy through institutionalization.
In addition to examining OSINV as a community and methodological field, the report extends open-source approaches to the structural mapping of Russian disinformation production. Rather than focusing solely on the verification of events or the debunking of misleading content, this research applies employment-trace analysis and publicly available organisational data to reconstruct how influence operations are institutionally organised. By analysing workforce composition, role differentiation, and inter-organisational linkages, the project demonstrates how OSINT methods can be used to map the infrastructures that sustain disinformation practices.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
This deliverable presents an empirical investigation into political participation and engagement on social media, focusing on the role of individuals’ attitudes towards political issues and events. It mainly leverages two data sources developed earlier in the project. The first is a panel of users of X (formerly Twitter) in several EU countries, for whom we inferred positions on political issues and ideology dimensions using ideology scaling procedures calibrated with political survey data, as part of the work leading to Deliverable D2.1. The second is a collection of posts on X linking to web pages that gather signatures for petitions and collect donations in crowdfunding campaigns, developed in the work leading to Deliverable D2.2.
This document examines how issue and ideology positions relate to participation in the dissemination of these online campaigns, from both a descriptive perspective (across different geographic scales in the EU) and a modeling perspective. The latter seeks to identify the specific association between participation in these campaigns and polarization and trust, operationalised as political dimensions. Within the theoretical framework of the SoMe4Dem project, we explore the intersection of social media affordances, online behaviors, and political participation to further our understanding of how individuals’ positions on major issues shape activity in the online domain. Our work also contributes to the study of individuals’ participation across EU countries as platforms and tools (such as these petition and crowdfunding sites) extend their agency to spheres beyond formal electoral democracy.
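The ideology scaling referenced above (Deliverable D2.1) is not detailed on this page, but a common family of approaches infers latent user positions from follow relationships. The toy example below is a minimal sketch in that spirit, taking the leading singular vector of a double-centered user-by-account follow matrix as a one-dimensional ideal point; the matrix, blocs, and resulting scale are illustrative, and the deliverable's survey calibration step is not reproduced:

```python
import numpy as np

# Toy follow matrix: rows = ordinary users, columns = political accounts.
# A 1 means the user follows that account. Latent-space models of this kind
# assume users tend to follow accounts close to them ideologically.
A = np.array([
    [1, 1, 1, 0, 0, 0],  # user following mostly "left-bloc" accounts
    [1, 1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],  # user following mostly "right-bloc" accounts
    [0, 0, 1, 1, 1, 1],
], dtype=float)

# Double-center the matrix, then take the leading singular vector:
# its entries serve as one-dimensional ideal points for the users.
C = A - A.mean(axis=1, keepdims=True) - A.mean(axis=0, keepdims=True) + A.mean()
U, s, Vt = np.linalg.svd(C, full_matrices=False)
user_positions = U[:, 0] * s[0]

print(np.round(user_positions, 2))
```

Note that the sign and scale of an SVD dimension are arbitrary; in practice (and in the project's pipeline) the recovered dimension is anchored and calibrated against external survey data before it can be read as left-right ideology.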
Disclaimer: This deliverable is a draft and not yet approved by the Commission.
Based on the research outcomes of the SoMe4Dem project, this deliverable presents a policy brief outlining five key principles for developing, using, and regulating social media platforms in democratic societies. These principles are intended to inform regulatory bodies and other stakeholders seeking to foster digital citizenship in social media environments. They start from the different roles users may adopt on social media, where they can (simultaneously) act as content consumers (Principle 1), content producers (Principle 2), community builders (Principle 3), adversaries (Principle 4), and platform owners, builders, and moderators (Principle 5).
Principle 1 emphasizes that platforms should strengthen user autonomy by enabling users to meaningfully understand, evaluate, and influence their algorithmically curated information environments through transparent controls, alternative curation modes, and tools that support critical navigation.
Principle 2 focuses on safeguarding heterogeneous participation, arguing that platforms must lower barriers to entry, counteract cumulative visibility advantages, and treat participation as a democratic right rather than a byproduct of engagement optimization.
Principle 3 shifts attention to the collective dimension of digital life over individualized engagement metrics, and calls on platforms to foster spaces that enable sustained interaction, shared agency, and mutual accountability.
Principle 4 underscores that disagreement is constitutive of democratic politics and therefore urges platforms to redesign incentive structures so that the pursuit of virality does not amplify conflict but instead supports deliberative forms of contestation.
Finally, Principle 5 advances reflexivity, in the form of continuous monitoring, transparency, and feedback mechanisms, as a core governance requirement, so that users can assess and respond to the societal impact of platforms. Platforms would then compete not only for attention but also for trust and democratic responsibility. For each principle, the deliverable further highlights supporting research from the SoMe4Dem project. By way of conclusion, we situate these principles in relation to three ongoing structural developments that are shaping the future of digital public spheres.
Disclaimer: This deliverable is a draft and not yet approved by the Commission.