Current diagnoses of a crisis of democracy at the beginning of the 21st century share a common argumentative reference point: the (often implicit) claim that the political public sphere is dysfunctional and currently undergoing structural change. The rise of social media platforms is considered one of the main drivers of this change. While social media were expected to make the public arena more open and thus more responsive, these platforms have also been argued to produce new mechanisms of fragmentation and exclusion, an erosion of norms in public debate, and a loss of trust in traditional institutions.
Over the last three years, a network of researchers from different fields has worked towards a better understanding of the impact of these developments by providing better empirical evidence, identifying and modeling the main causal mechanisms, and discussing ways of improving the capacity of social media to contribute to the functioning of the public arena in a liberal democracy. At this final conference we will present the latest results from the project, complemented by perspectives from external experts.
Most studies of online politics require the operationalization and measurement of relevant forms of political opinion. Examples include studies of online polarization, the spread and moderation of misinformation, and political biases in algorithms, among many others. In this presentation we consider computational methods, theoretically anchored in comparative politics, that use social media trace data to position large numbers of platform users along ideology and issue dimensions in a way that is comparable across countries. This comparability, we will show, is essential for developing frameworks to study social media at the European scale, and for auditing platforms as mandated by recent EU regulations such as the DSA. We illustrate this applicability through a number of case studies on content moderation and algorithm auditing.
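As a rough illustration of what such positioning methods involve, the following is a sketch under simplifying assumptions, not the project's actual pipeline: users are placed in a latent space based on which political reference accounts (parties, politicians) they follow, via correspondence analysis of the user-by-reference-account matrix, with the reference accounts serving as cross-country anchors. All data below are simulated.

```python
# Minimal sketch (simulated data): ideal-point-style scaling of users from
# follow relations via correspondence analysis. Matrix sizes, the follow
# probability, and the two-dimensional readout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Rows: platform users; columns: well-known political reference accounts.
n_users, n_refs = 500, 40
follows = (rng.random((n_users, n_refs)) < 0.15).astype(float)
# Guarantee each simulated user follows at least one reference account.
follows[np.arange(n_users), rng.integers(0, n_refs, n_users)] = 1.0

# Correspondence analysis: center the joint distribution, then SVD.
P = follows / follows.sum()
r = P.sum(axis=1, keepdims=True)          # user margins
c = P.sum(axis=0, keepdims=True)          # reference-account margins
S = (P - r @ c) / np.sqrt(r @ c)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# The first dimensions often track ideology and an issue (e.g. EU) dimension;
# which is which must be validated against known party positions per country.
user_pos = (U[:, :2] * sv[:2]) / np.sqrt(r)
ref_pos = (Vt[:2, :].T * sv[:2]) / np.sqrt(c.T)
print(user_pos.shape, ref_pos.shape)      # (500, 2) (40, 2)
```

Because the reference accounts anchor the space, scores estimated in different countries can be aligned to a common scale, which is the property that cross-country comparability rests on.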
This final presentation from the project's critical democratic theory strand synthesises the theoretical, empirical, and solution-oriented results of the SoMe4Dem project. It introduces the project's concluding thesis and diagnosis of the crisis of democracy: the structural change of the public sphere caused by social media, and the central puzzle that social media can simultaneously erode and expand democracy.
The presentation emphasises the approach introduced in Deliverables 1.1 and 1.2, namely that the effects of social media are not sufficiently incorporated into approaches of democratic theory, which poses the risk of transferring unexamined "meanings and expectations" onto new phenomena. To avoid such genetic fallacies, we once again use our previously introduced P(articipation)-P(olarization)-T(rust) triad as a systematic framework for understanding and describing the relationship between social media and democracy, conceptually re-articulating the three dimensions as interdependent, dynamic concepts. In this final step, our focus is on putting these three elements to work, thereby boosting digital competence and lateral reading interventions. Finally, on a conceptual level, we strive to formulate new forms of digital citizenship and a critique of the established standards of deliberative democracy.
The presentation reports on a test case of content moderation by the social media platform TikTok of hateful conspiracy videos targeting Muslims. The videos in question depict a European future in which, by 2050, cities have been ‘Islamised’. Made by a diverse set of creators, the videos portray one European city after another – Amsterdam 2050, Barcelona 2050, Milan 2050, and so forth – devastated after the religious and cultural shift. The paper examines how creators position these videos as content-as-usual, with platform hashtag and descriptor norms referencing memetic user culture and virality efforts. Subsequently, in the test case, the videos are reported to the platform through the content moderation reporting function as violating its hateful conspiracy policy. Only when reviewed a second time by human moderators, after appeals are issued, are certain videos removed or demoted, demonstrating an unevenness in moderation between automated and manual review. We discuss the implications of these findings in terms of automated slippage and human-machine misalignment, as well as the user reporting burden assumed in an algorithmic environment promoting personalised content. The paper concludes with a call for moderation auditing to test platform claims about the efficacy of especially automated moderation.
By systematizing findings from multiple Social Media for Democracy studies, we characterize how digital political participation is evolving across the European Union. We show how political identity expression and mobilization on X/Twitter can be used to map the issue- and ideology-specific dynamics that structure contemporary online political participation and activism, including petition-related call-to-action campaigns and other digital mobilization practices.
We show that online mobilization builds on distinct platform affordances and is constructed on top of different types of signals of political orientation and identity, including those recoverable from profile biographies and follow networks. This enables us to link political ideology, identity, and mobilization in Europe’s digital public sphere, and to discuss the implications for future CSS research on online political engagement, societal divides and polarization within and across the EU. The findings of the studies we present, when considered jointly, allow for an in-depth comparative characterization of online mobilization repertoires across European countries, issues, and ideological camps.
This presentation introduces a comparative framework for measuring agonistic and deliberative dynamics in contested public debates on cultural memory. Building on theories of agonistic democracy and deliberative democracy, and drawing on evaluative models based on DQI indicators, we present a customised methodology for assessing, measuring, and comparing levels of agonism and deliberation across social media (X) and traditional media. The framework uses media-specific operationalisations tailored to the communicative logics of each arena and is developed and tested on debates over collective memory and competing historical interpretations. We apply Greimas’s narrative analysis to identify Subject–Object–Opponent narrative triads and code the opponent’s political orientation (as a proxy for left–centre–right narrative positioning), enabling us to identify the most common narrative structures in the agonism clusters.
We present results from applying the framework to analyses at the national, cross-border, and transnational levels, with a particular focus on discussions of Giorno del ricordo (10 February), which is one of the most polarising commemorative events in Italy and the Italo-Slovenian borderland. The case illustrates how antagonistic escalation, agonistic engagement, and deliberative potential coexist across media environments, and how computational measures can help identify where dialogue-oriented openings nevertheless emerge within highly polarised memory conflicts.
Online political discussions are often dominated by a small group of active users, while most remain silent. This visibility gap can distort perceptions of public opinion and fuel polarization. Using a collective field experiment on Reddit, we examined factors predicting self-selection into silent “lurker” and active “power-user” roles and tested whether participation differentials can be reduced with norm- or incentive-based interventions. We recruited 520 participants from the United States, randomly assigned them to conditions in six private communities, and asked them to discuss 20 political issues over four weeks while completing weekly surveys. Lurking (posting nothing) was most common among users who perceived discussions as toxic, disrespectful, or unconstructive; the same perceptions also predicted power usership (more posting, conditional on not lurking). Experimentally, financial incentives for commenting reduced participation differentials, whereas we found no effect of a civility norm treatment. These findings support preference- and incentive-based accounts of participation but suggest that light-touch interventions are unlikely to bridge participation gaps, let alone reduce polarization.
Digital literacy, or the ability to navigate and evaluate online content effectively, is often proposed as a solution to misinformation. However, evidence suggests that platform familiarity alone does not guarantee improved discernment of true from false news; rather, interventions must target reflective reasoning and accuracy salience. This study investigates how social media literacy, source homophily, and informational cues influence misinformation discernment and trust dynamics under time pressure, comparing high-activity (Group 1: >100 tweets/year) and low-activity (Group 2: 0 tweets/year) social media users. We conducted a two-phase online experiment (N = 332) using a between-subjects design. Phase 1 (N = 92) measured baseline accuracy in evaluating news headlines; Phase 2 (N = 240) introduced TRUE/FALSE labels derived from prior participants, manipulated by homophily and choice. Results show that Group 1 answered more items and achieved higher raw accuracy, driven by false news detection, but exhibited no advantage in correctedness scores and followed signals – even incorrect ones – more often. Group 2 improved correctedness significantly when exposed to labels, but at the expense of time. The findings suggest that platform familiarity fosters speed, not critical reasoning, underscoring the need for literacy programs that emphasize reflective thinking and trust-building.
Caterina Cruciani, Gloria Gardenal, Anna Moretti, Costanza Sartoris, UNIVE
Introducing large language models into the world of simulations has come with a great promise: increased realism. Rather than exchanging states such as 'pro' or 'against' an opinion, agents can exchange actual arguments. And rather than having simple rules and formulas that decide whether an agent engages in a specific behavior or not, agents can rely on artificial intelligence to make these decisions. But this comes at a cost: the number of moving parts in the model becomes tremendous, and it is hard to say what exactly *causes* an outcome. In essence, clear rules are replaced by black boxes.
This is a clash of two worlds: on the one hand, one in which well-defined mathematical models are studied, in which many simplifying assumptions are made, but in which we can clearly isolate the effect of setting one specific parameter; on the other hand, one in which we try to mimic the real world as closely as possible, but which struggles to isolate effects.
In this talk, I am not going to take sides, but will instead report on the experiences from three years of discussions within the TWON project about such issues, how to deal with them, and what conclusions to draw. With this, I hope to draw a realistic picture of what is possible and what is not, and of the caveats and opportunities that have to be considered.
To understand how social media shapes online discourse or contributes to polarization, we need models of collective online choice that link users' behavioral adaptation to the emergence of complex and dynamic digital environments. This study develops a dynamic model of platform selection based on Social Feedback Theory, using multi-agent reinforcement learning to capture how user decisions are shaped by past rewards across platforms. A key parameter ($\mu$) governs whether users seek approval from like-minded peers or exposure to opposing views. Agent-based simulations combined with dynamical-systems analysis reveal a social dilemma: even when users value diversity, collective dynamics can trap online environments in polarized echo chambers that reduce overall user satisfaction. Above a critical diversity threshold, a different equilibrium appears in which one large, integrated platform dominates while smaller platforms persist as extremist niches. In an intermediate regime, the two outcomes coexist, generating path dependence and hysteresis. We further demonstrate how modest, strategically targeted interventions — such as rewarding minority participation — can destabilize polarization and promote integrated discourse. The model shifts attention from belief change to participatory choice. It links micro-level learning dynamics to macro-level online fragmentation and informs mechanism-based interventions in the digital public sphere.
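To make the mechanism concrete, here is a minimal sketch under simplifying assumptions, not the paper's exact model: agents with fixed binary opinions repeatedly pick one of several platforms via softmax choice over learned values, and the reward mixes approval from like-minded peers with exposure to opposing views, weighted by $\mu$. All parameter values are illustrative.

```python
# Toy multi-agent reinforcement-learning model of platform selection.
# mu weights exposure to opposing views against like-minded approval.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_platforms, mu, alpha, beta = 200, 3, 0.3, 0.1, 5.0
opinion = rng.choice([-1, 1], size=n_agents)   # fixed binary opinions
Q = np.zeros((n_agents, n_platforms))          # learned platform values

for step in range(2000):
    # Softmax (logit) choice over each agent's current platform values.
    logits = beta * Q
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    choice = (p.cumsum(axis=1) > rng.random((n_agents, 1))).argmax(axis=1)

    for k in range(n_platforms):
        members = choice == k
        if members.sum() == 0:
            continue
        share_pro = (opinion[members] == 1).mean()   # opinion mix on platform k
        like = np.where(opinion[members] == 1, share_pro, 1 - share_pro)
        # Reward: (1 - mu) * like-minded share + mu * opposing share.
        reward = (1 - mu) * like + mu * (1 - like)
        Q[members, k] += alpha * (reward - Q[members, k])

for k in range(n_platforms):
    m = choice == k
    if m.sum():
        print(f"platform {k}: size={m.sum()}, pro-share={(opinion[m] == 1).mean():.2f}")
```

Sweeping $\mu$, or adding a small reward bonus for being in a platform's opinion minority, is how regime changes and targeted interventions of the kind described above can be probed in this toy setting.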
Online conspiracy theory communities raise persistent concerns for digital governance, yet the behavioral dynamics that sustain participation in these spaces remain poorly understood. Using longitudinal data from a minimally moderated social platform, we analyze how social feedback, routine behavior, and ideological context shape user engagement over time. We find that repeated participation reduces sensitivity to positive and negative feedback, weakening the link between social reinforcement and continued activity. Content characteristics influence engagement unevenly, and while habitual posting is generally associated with disengagement, ideological alignment moderates this effect in conspiracy-focused communities. Together, these findings highlight how routine participation and community context can stabilize engagement even when social rewards are volatile, offering a general framework for understanding the persistence of fringe online communities.
Russia’s information activity is commonly described through overlapping labels such as “propaganda”, “disinformation”, and the umbrella term “influence operations”. This talk argues that part of this conceptual overlap reflects an underused organisational lens and a methodological gap: we rarely observe how such activity is produced as labour, routines, and pipelines, because the relevant organisations and processes are typically inaccessible to direct fieldwork. We address this gap with an OSINT-first research design and a reproducible OSINT workflow that treats publicly available labour-market traces as organisational evidence.
We compare two emblematic Russian entities: RT, a state-funded international broadcaster typically characterised as a propaganda outlet, and ANO Dialog, a parastatal organisation sanctioned for disinformation and embedded in Russia’s digital information control ecosystem. Using an OSINT workflow centred on labour-market data, we reconstruct organisational structure and production pipelines from large-scale CVs on Russia’s leading job-seeking portal HeadHunter, triangulated with document analysis of organisational materials and policy documents. The approach relies on transparent collection, entity resolution, role coding, and network reconstruction of personnel mobility and shared employment links.
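To make the workflow concrete, the following is a toy sketch, using hypothetical CV records rather than actual HeadHunter data, of two of its steps: crude entity resolution of employer names and reconstruction of a shared-employment network between organisations.

```python
# Toy sketch of the OSINT pipeline: normalise employer strings (a crude
# stand-in for entity resolution), then link organisations that share
# personnel across CV-style employment histories. Records are invented.
import itertools
import networkx as nx

cvs = [  # each CV: a person id and the employers listed in it
    {"person": "cv_001", "employers": ['АНО "Диалог"', "RT", "Regional TV"]},
    {"person": "cv_002", "employers": ["ANO Dialog", "Digital Agency X"]},
    {"person": "cv_003", "employers": ["RT ", "Digital Agency X"]},
]

ALIASES = {'ано "диалог"': "ANO Dialog", "ano dialog": "ANO Dialog", "rt": "RT"}

def resolve(name: str) -> str:
    """Normalise an employer string to a canonical entity (toy resolver)."""
    key = name.strip().lower()
    return ALIASES.get(key, name.strip())

G = nx.Graph()  # organisations linked by people who worked at both
for cv in cvs:
    orgs = {resolve(e) for e in cv["employers"]}
    for a, b in itertools.combinations(sorted(orgs), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

for a, b, d in G.edges(data=True):
    print(f"{a} -- {b}: {d['weight']} shared career(s)")
```

In the actual analysis, role coding and language repertoires attach as node and edge attributes to such a network, which is what lets organisational footprints be compared.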
Empirically, we examine what “propaganda” and “disinformation” look like as organisational routines. The analysis maps role repertoires, skill profiles, and career trajectories, distinguishing editorial and media production roles from monitoring, moderation, analytics, coordination, and campaign-adjacent functions. We also assess external orientation through language repertoires and labour-market positioning, including whether RT’s workforce is primarily organised around media production for foreign audiences or whether it also exhibits platform-facing monitoring and manipulation functions commonly associated with disinformation operations. By comparing these organisational footprints, the study evaluates whether the two labels correspond to distinct organisational forms or whether they conceal substantial convergence in how informational disorder is produced.
In this talk, I take two of my studies – one on the German Twitter follow network, the other investigating resonances between TV talk shows and TikTok channels during the recent German Federal Elections – as the starting point for further thoughts on how social media influence mechanics have gradually shifted: from a reputation game that, while somewhat meritocratic, still suffers from rich-get-richer dynamics, towards a resonance-probing lottery that lures with an egalitarian get-rich-quick appeal, but is prone to manipulation and abuse.
Online sharing strongly guides which news people encounter on a daily basis. In this context, research on the types of news shared by users of different political leanings has received considerable attention. We argue that many existing approaches (i) rely on an overly simplified measurement of political leaning, (ii) consider only the outlet level in their analyses, and/or (iii) study news circulation among partisans by making ex-ante distinctions between partisan and non-partisan news. We introduce a research pipeline that allows a systematic mapping of news sharing, with respect to both source and content, in a multidimensional political space. Based on the resulting political cartography of news sharing for the German Twittersphere, we show that the political fringes circulate news most actively, especially right-wing, elite-/EU-skeptical/protectionist users. Outlets mostly shared by right-leaning users turn out to supply news to both highly elite-/EU-skeptical/protectionist users and their ideological counterparts – but not the same stories. We do not find evidence for strong news fragmentation. However, news sharing is disproportionately reliant on a few intermediaries towards the right and elite-/EU-skeptical/protectionist dimension. Implications for future research are discussed.
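The core mapping step can be sketched as follows, on simulated data and under the simplifying assumption that user positions in the political space are already estimated: outlets and individual stories are located at the mean position of the users who share them, which is what allows outlet-level and story-level audiences to diverge.

```python
# Illustrative sketch (simulated data, not the study's): share-weighted
# positioning of outlets and stories in a two-dimensional political space.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
users = pd.DataFrame({
    "user": range(100),
    "left_right": rng.normal(0, 1, 100),   # assumed pre-estimated positions
    "eu_skeptic": rng.normal(0, 1, 100),
})
shares = pd.DataFrame({                    # one row per share event
    "user": rng.integers(0, 100, 1000),
    "outlet": rng.choice(["outlet_a", "outlet_b"], 1000),
    "story": rng.integers(0, 50, 1000),
})

df = shares.merge(users, on="user")
outlet_map = df.groupby("outlet")[["left_right", "eu_skeptic"]].mean()
story_map = df.groupby(["outlet", "story"])[["left_right", "eu_skeptic"]].mean()

# An outlet near the centre can still serve ideologically distinct audiences
# with different stories; comparing the two tables makes that visible.
print(outlet_map)
print(story_map.head())
```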
Narratives are key interpretative devices by which humans make sense of political reality. This work shows how the analysis of conflicting narratives, i.e. conflicting interpretive lenses through which political reality is experienced and told, provides insight into the discursive mechanisms of polarization in the public sphere. Building upon previous work that has structurally identified ideologically polarized issues in the German Twittersphere between 2021 and 2023, we analyze the discursive dimension of polarization by extracting textual signals of conflicting narratives from tweets of opposing opinion groups. Focusing on a selection of salient issues and events (the war in Ukraine, Covid, climate change), the analysis sheds light on the fault lines in the underlying polarized debates by exposing diverging interpretations of political reality. Furthermore, we provide initial evidence for meta-narrative patterns that may be at the origin of opinion alignment across issues.
This talk examines the use of large language models (LLMs) in the social sciences. I discuss to what extent LLMs can produce outputs that are representative of social trends, and whether they can be reliably used to annotate or analyze social science data. Beyond the well-documented concerns about representativity and reproducibility, I argue that the very training and alignment strategies underlying these models raise deeper epistemic issues. LLMs are shaped by optimization and alignment techniques that are far from neutral, and these processes inevitably influence the patterns, norms, and values reflected in their outputs. The talk builds in part on arguments developed in my book “Understanding Conversational AI” (Ubiquity Press, esp. chapter 6), and aims to clarify both the potential and the structural limitations of LLMs as tools for social scientific inquiry.
How can we determine the quality of information online? Conventionally, this is done by inspecting the content itself, such as the storyline or the actors and their relations. Modern search engines use natural language-processing tools that analyse content. We call those “endogenous” cues to information quality. Although endogenous cues are valuable, they have limitations, such as the inability to differentiate between extremist content and counterextremist content because both types of messages tend to be tagged with similar keywords. Relying on content also makes endogenous cues potentially prone to abuse for censorship purposes. By contrast, “exogenous” cues rely on the context—not content—of information to assess quality. A famous example of the use of exogenous cues is Google’s PageRank algorithm, which takes network centrality as a key indicator of quality: Well-connected websites appear higher up in search results, irrespective of their content. We explore a number of possible exogenous cues using two large Twitter/X datasets. We use NewsGuard scores to provide estimates of the quality of domains being shared by users and develop an ensemble of exogenous cues, such as cognitive centrality and the skew of distributions of shares, that can predict quality without any analysis of content. We then embed those cues into a standard newsfeed recommender system that is based on collaborative filtering by boosting recommendations based on the quality signal provided by the ensemble of exogenous cues. Using the same Twitter/X dataset, we show that the modified recommender system can continue to satisfy user preferences while enhancing the quality of recommendations. The results provide an existence proof for the design of newsfeed algorithms that provide users with higher-quality information without getting entangled in content analysis.
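A minimal sketch of the boosting idea follows, on toy data and with a simulated quality signal standing in for the ensemble of exogenous cues; it illustrates the approach rather than reproducing the study's implementation.

```python
# Item-based collaborative filtering whose scores are boosted by an
# exogenous quality signal (simulated here in place of NewsGuard-derived
# cue ensembles), trading off predicted preference and quality.
import numpy as np

rng = np.random.default_rng(3)
n_users, n_items, lam = 50, 30, 0.5
R = (rng.random((n_users, n_items)) < 0.2).astype(float)  # user-item shares
quality = rng.random(n_items)  # stand-in for the exogenous-cue ensemble

# Cosine similarity between items based on co-sharing patterns.
norms = np.linalg.norm(R, axis=0) + 1e-9
sim = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)

cf_scores = R @ sim                           # predicted user-item affinity
boosted = cf_scores * (1.0 + lam * quality)   # quality-weighted boosting
boosted[R > 0] = -np.inf                      # exclude already-seen items

top3 = np.argsort(-boosted, axis=1)[:, :3]
print("user 0 recommendations:", top3[0],
      "item quality:", quality[top3[0]].round(2))
```

The boosting weight (lam here) controls how far recommendations are shifted toward higher-quality items; the finding reported above is that such a shift can be achieved with little loss in preference satisfaction.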
Parallel to system-level efforts to address online misinformation—such as platform moderation and regulatory interventions—individuals can be empowered to navigate digital information environments more competently. Several individual-level interventions have been proposed to improve users’ media literacy and accuracy discernment, including strategies that encourage verifying the trustworthiness of claims and sources through web searches. However, recent evidence suggests that claim-based online verification can sometimes backfire, and major gaps remain regarding the effectiveness of media literacy interventions across different populations and contexts. In this talk, I present results from a study testing two media literacy interventions that rely on online verification strategies: a source-focused lateral reading video intervention designed to boost internet users’ competence to discern trustworthy from untrustworthy news sources and a claim-focused online search strategy that instructs users to search for evidence behind specific claims. I will also situate our findings within the broader research landscape on interventions to counter misinformation.
(Co-authors on the project: Lisa Oswald, Anastasia Kozyreva, Pietro Leonardo Nickl, Stefan M. Herzog, and Ralph Hertwig)
Multimodal platform architectures, algorithmic curation, and visually and audiovisually saturated communication reshape contemporary regimes of cultural memory. Whereas cultural memory once relied on relatively stable institutional frameworks – museums, archives, education, and mass media – it now unfolds within hybrid digital ecosystems where institutional narratives intersect with vernacular, user-generated, and algorithmically modulated content. The locus of memory is shifting from archival stability toward dynamic, processual, networked, and increasingly individualized forms of connective and hyper-connective memory.
Within these environments, the interplay of platform affordances, semiotic modes, and perceptual modalities operates as a material-semiotic and affective infrastructure shaping what is remembered, forgotten, and circulated. The presentation highlights a shift from narrative to diagrammatic logics of memory: condensed, affectively charged, multimodal fragments facilitated by algorithmic curation often privilege immediacy and emotional salience over contextual depth. These transformations also have democratic implications, as the combined dynamics of affordances, modes, and modalities enable both participatory and expressive forms of memory activism as well as polarization, fragmentation, and antagonistic forms of memory politics.