Most adoption in networked environments is coerced. The literature on social contagion, critical mass, and threshold dynamics converges on a finding that existing ethical frameworks are poorly equipped to handle: network topology can predict and override individual choice. This raises a question for neuroethics that deserves more precision than it typically receives. If structural position in a network determines adoption outcomes more reliably than private preference, where does agency actually reside?

The answer requires clarity about what "coercion" means in this context, how networks produce it, and why the distinction between voluntary and involuntary adoption collapses at specific density thresholds.

What Counts as Coercion

Coercion is a contested concept in philosophy, and the claim that networks coerce demands more justification than a threshold model alone can supply.

Robert Nozick (1969) defined coercion through a conditional structure: A coerces B when A threatens B with consequences that make B's alternatives unacceptable, and B acts accordingly. The threat need not be explicit. Nozick's account requires that the coercer intend to restrict options and that the target recognize the restriction. In network adoption, neither condition holds cleanly. No single node "intends" to pressure another. The pressure emerges from topology itself: from degree distributions, clustering coefficients, and the density of local ties.

This is closer to a structural unfreedom, best captured by Harry Frankfurt's (1971) hierarchical account of the will. Frankfurt distinguished between first-order desires (wanting to adopt or not) and second-order volitions (wanting to want what you want). A person is free, on Frankfurt's account, when their first-order desires align with their second-order volitions. Network pressure disrupts this alignment. You adopt a platform you dislike because the social cost of refusal exceeds your private distaste. Your first-order action (adoption) contradicts your second-order preference (resistance). You act, but as someone other than the agent you wish to be.

The structural unfreedom framework identifies the core mechanism: when an individual adopts despite negative private utility, Frankfurt's condition for unfreedom is met. The agent acts against their own reflective preferences. This matters because it shifts the ethical question from "who is coercing?" to "what structural conditions produce unfreedom?" In networks, coercion has no author. It is an emergent property of connectivity.
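The condition stated above can be made operational in a few lines. This is a minimal sketch, assuming that private utility stands in for second-order volition and observed adoption for first-order action; the `Agent` type and the utility values are illustrative, not drawn from any cited model.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    private_utility: float  # proxy for second-order volition: negative = reflectively opposes adoption
    adopted: bool           # first-order action: what the agent actually did

def is_structurally_unfree(agent: Agent) -> bool:
    """Frankfurt's condition, operationalized: action and reflective
    preference diverge. An agent who adopts despite negative private
    utility acts against their own second-order volition, coerced by
    structure rather than by any identifiable coercer."""
    return agent.adopted and agent.private_utility < 0

willing = Agent(private_utility=0.4, adopted=True)    # preference and action align
coerced = Agent(private_utility=-0.3, adopted=True)   # adopts under network pressure
holdout = Agent(private_utility=-0.3, adopted=False)  # action matches preference

assert not is_structurally_unfree(willing)
assert is_structurally_unfree(coerced)
assert not is_structurally_unfree(holdout)
```

The point of the sketch is that unfreedom here is a predicate on an agent's state, with no coercer anywhere in the signature.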

[Figure omitted: a free agent, whose first-order action aligns with second-order volition, contrasted with a coerced agent, whose first-order action contradicts reflective preference under network pressure.]
Frankfurt's condition for unfreedom: action and volition separate under network pressure.

The Critical Mass Window: 10 to 43 Percent

The empirical evidence for critical mass thresholds in networked adoption comes from multiple domains, and each study illuminates a different mechanism.

Xie et al. (2011) demonstrated through simulation and mean-field analysis that a committed minority of approximately 10 percent can flip majority opinion in a population. Their model used binary-state agents on random networks, showing that below 10 percent commitment, the minority opinion remains marginal; above it, consensus shifts rapidly. The mechanism is probabilistic: at 10 percent density, committed agents encounter uncommitted ones frequently enough to create local majorities that propagate.

Centola, Becker, Brackbill, and Baronchelli (2018) moved this from simulation to experiment. Using an online platform where participants chose between competing social norms, they found the tipping point at roughly 25 percent. Below this threshold, committed minorities failed to shift the convention. Above it, the population flipped within a few rounds. The critical finding: this was complex contagion, requiring multiple exposures, and it followed qualitatively different dynamics from simple information spread.

Everall, Tschofenig, Donges, and Otto (2025) provide the most rigorous cross-disciplinary validation of this window to date. Their systematic review in Earth System Dynamics collected 95 observations across 39 studies on complex contagion in social networks. They found the critical mass for social tipping spans 10 to 43 percent, with most cases tipping by 40 percent. The Pareto-like pattern held: roughly 20 percent of the population, once committed, can tip the remaining 80 percent. Their analysis also identified key moderating factors: trust within community structures, norm type and context, and the targeting of groups with moderate preferences and network positions (rather than highly central nodes) as the most effective paths to enabling endogenous spread. This cross-disciplinary, cross-parametric validation moves the critical mass window from an isolated finding to a repeatedly observed empirical regularity.

Chenoweth and Stephan (2011) found that nonviolent political campaigns succeed when they achieve 3.5 percent active, sustained participation. This lower threshold reflects a different mechanism: political action has high visibility and signals commitment intensity, so fewer participants generate proportionally more pressure. The 3.5 percent figure is a lower bound on cascade initiation, not the coercion window itself.

The pattern across these findings: below 10 percent, adoption is driven by agents with genuine positive preference for the innovation (Rogers, 2003). Between roughly 10 and 25 percent, social pressure from existing adopters begins exceeding the thresholds of agents whose private preference opposes adoption. The cascade accelerates, but most of the acceleration comes from unwilling participants. Above 25 percent, the new norm is functionally established. Remaining non-adopters face a binary between adoption and social isolation, and resistance collapses because the cost of non-adoption becomes prohibitive, not because preferences have changed.
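The phase structure described above can be reproduced in a minimal Granovetter-style threshold simulation. Everything in it is an illustrative modeling assumption rather than a value from the cited studies: utilities uniform on [-1, 1], thresholds uniform on [0, 1], and a fully mixed population in which every agent feels the global adoption rate.

```python
import random

random.seed(7)  # illustrative run; the qualitative pattern is seed-independent

N = 10_000
# Each agent: a private utility in [-1, 1] and a social threshold in [0, 1].
agents = [(random.uniform(-1, 1), random.uniform(0, 1)) for _ in range(N)]

# Seed the cascade with "true believers": agents who privately want to adopt.
adopted = [u > 0 for (u, _) in agents]
rate = sum(adopted) / N

# Iterate to a fixed point: pressure from current adopters recruits holdouts
# whose threshold falls below the population adoption rate.
changed = True
while changed:
    changed = False
    for i, (_, thresh) in enumerate(agents):
        if not adopted[i] and rate > thresh:
            adopted[i] = True
            changed = True
    rate = sum(adopted) / N

# Coerced adopters: adopted despite negative private utility.
coerced = sum(1 for i, (u, _) in enumerate(agents) if adopted[i] and u < 0)
print(f"final adoption: {rate:.0%}; coerced share of adopters: {coerced / sum(adopted):.0%}")
```

Counting adopters whose private utility is negative makes the section's central quantity, the coerced share of a cascade, directly measurable inside the model.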

This window is where the concept of "choice" loses statistical meaning. Individual decisions become predictable from network position and local adoption density alone.

[Figure omitted: adoption rate from 0 to 100 percent, marking 3.5 percent (Chenoweth), 10 percent (Xie et al.), 25 percent (Centola et al.), and 43 percent (Everall et al.); willing "true believer" adoption dominates below 10 percent, structure overrides preference within the window, and the norm is established beyond it.]
Coerced adoption peaks between 10% and 43%, dwarfing willing adoption across the critical mass window.

Topology and the Structure of Pressure

Network topology amplifies the coercion mechanism in specific, measurable ways. In scale-free networks, where a few nodes hold disproportionate connections, high-degree hubs transmit pressure to large portions of the network simultaneously. Coercion onset requires fewer initial adopters: sometimes under 5 percent (Watts, 2002). In small-world networks with high clustering, pressure is locally intense but spreads more slowly across clusters, requiring 10 to 20 percent for cascade. Random networks fall between these extremes.

Recent work complicates this clean topology story in a productive way. Eckles, Mossel, Rahimian, and Sen (2024) showed in Nature Human Behaviour that the long-standing distinction between simple and complex contagion topologies breaks down under realistic conditions. When there is even a small probability of adoption below the threshold, randomly rewired "long ties" accelerate the spread of threshold-based contagions. This reverses the earlier consensus, drawn from Centola and Macy (2007), that complex contagions require clustered networks with wide bridges. The implication for coercion: if long ties accelerate even socially reinforced adoption, then coercion zones may emerge faster and across more network types than a three-topology comparison would suggest.

The distinction between simple and complex contagion is central to the coercion argument. Centola and Macy (2007) showed that behaviors requiring social reinforcement (joining a protest, adopting a technology your contacts use, changing a health behavior) spread through wide bridges: clusters of connections that provide multiple, redundant signals. Simple contagion (news, information) spreads through any single connection. Coerced adoption is a complex contagion phenomenon. A single neighbor's adoption rarely pushes an unwilling agent past their threshold. It takes coordinated local pressure.
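Both the wide-bridges point and the force of the Eckles et al. revision show up in a toy model. This is a sketch under illustrative assumptions (a small ring lattice, synchronous updates, a single rewired long tie), not a reimplementation of either study: simple contagion here needs one adopted neighbor, complex contagion needs two.

```python
from collections import defaultdict

def ring_lattice(n, k=2):
    """Ring where each node links to its k nearest neighbors on each side."""
    adj = defaultdict(set)
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def spread(adj, seeds, threshold, max_steps=100):
    """Synchronous contagion: a node adopts once `threshold` of its
    neighbors have adopted. Returns steps to full adoption, or
    max_steps if the contagion stalls."""
    adopted = set(seeds)
    for step in range(1, max_steps + 1):
        new = {i for i in adj if i not in adopted
               and len(adj[i] & adopted) >= threshold}
        if not new:
            return max_steps
        adopted |= new
        if len(adopted) == len(adj):
            return step
    return max_steps

n = 40
base = ring_lattice(n)
rewired = ring_lattice(n)
rewired[0].add(n // 2)           # one long tie across the ring
rewired[n // 2].add(0)

seeds = {0, 1}                   # two adjacent committed adopters
print("simple  (1 exposure), no long tie:", spread(base, seeds, 1))
print("simple  (1 exposure), long tie:   ", spread(rewired, seeds, 1))
print("complex (2 exposures), no long tie:", spread(base, seeds, 2))
print("complex (2 exposures), long tie:   ", spread(rewired, seeds, 2))
```

On this toy ring the long tie sharply accelerates the simple contagion but barely affects the complex one, echoing Centola and Macy's original result; Eckles et al.'s point is that adding even a small probability of below-threshold adoption would let the long tie accelerate the complex case as well.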

Aiyappa, Flammini, and Ahn (2024) recently advanced this framework from the cognitive level. In Science Advances, they showed that individual cognitive processes, specifically the structure of internal belief networks with weighted connections between attitudes, produce both simple and complex contagion dynamics at the population level. The type of social contagion that emerges is a function of how individual minds are structured, and the two frameworks are complementary: the network computes decisions through the agents, and the agents' internal belief structures shape what kind of computation the network performs.

The key finding across topologies and across these newer models: coercion is not proportional to adoption rate. It peaks in a specific window and then declines as the remaining non-adopters are either highly resistant or simply disconnected.

[Figure omitted: three topologies compared. Scale-free: hub-dominated, cascade from under 5 percent. Small-world: clustered pressure, cascade from 10 to 20 percent. Random: uniform connectivity, intermediate cascade. Same agents, same preferences, different outcomes; Eckles et al. (2024) show these distinctions blur under probabilistic adoption.]
Topology determines where pressure concentrates and how fast cascades initiate.

Networks as Cognitive Architecture

The claim that social networks function as cognitive systems draws on a specific philosophical tradition.

Clark and Chalmers (1998) argued that cognitive processes extend beyond the skull when external resources play the same functional role as internal mental states. Their "parity principle" holds that if an external process, were it occurring in the head, would count as cognitive, then it counts as cognitive regardless of where it occurs. A notebook that stores beliefs functions as memory. A calculator that performs arithmetic functions as computation.

Social networks satisfy this criterion. When an individual's adoption decision is determined by the states of their neighbors, the network is computing the decision. Local pressure functions as a weighted sum across inputs: the same operation a neuron performs. Connections are synapses. Adoption events are firing patterns. The network processes information and produces behavioral outputs.
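The functional parity claimed here can be stated as code: the same threshold unit serves as a caricature of a neuron and of a network node. The weights, bias, and inputs below are arbitrary illustrative values, not parameters from any cited model.

```python
def threshold_unit(inputs, weights, bias):
    """Weighted sum plus hard threshold: the shared functional core.
    The operation is identical whether the inputs are presynaptic
    signals or neighbors' adoption states."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation + bias > 0 else 0

# Read as a neuron: three synaptic inputs, a firing threshold.
fires = threshold_unit(inputs=[1, 1, 0], weights=[0.6, 0.5, 0.9], bias=-1.0)

# Read as a network node: neighbors' adoption states, tie strengths as
# weights, private reluctance as a negative bias. Same function, same output.
adopts = threshold_unit(inputs=[1, 1, 0], weights=[0.6, 0.5, 0.9], bias=-1.0)

assert fires == adopts == 1   # 0.6 + 0.5 - 1.0 > 0: the unit "fires"/"adopts"
```

Nothing in the function distinguishes the two readings; only the interpretation of its arguments does.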

Edwin Hutchins (1995) documented this empirically in his study of naval navigation teams. He showed that cognitive labor in complex tasks is distributed across individuals and artifacts, and that the "unit of analysis" for cognition is the system, not any individual participant. No single crew member navigates the ship. The navigation emerges from the interactions between crew members, instruments, charts, and communication protocols.

This reframing has direct consequences for neuroethics. If we accept that networks perform cognition, then the study of how technology affects the brain must include how network topology structures decision-making. When enough neighbors adopt, an individual's state changes through a process that is functionally computational. The network computes the decision before the agent deliberates.

Neil Levy (2007) argued that neuroethics should concern itself with any system that modulates cognitive function, and social networks qualify. They modulate decision-making by restructuring the information and pressure environment in which choices occur. A network that amplifies social proof is doing cognitive work: filtering signals, weighting inputs, producing behavioral outputs. The network becomes cognitive architecture that bypasses volition, and the ethical question becomes: who designs the architecture?

[Figure omitted: neural computation (weighted inputs, threshold firing, behavioral output) set beside network computation (peer pressure, threshold adoption, behavioral output), labeled functionally equivalent; Clark and Chalmers (1998): if it would count as cognitive in the head, it counts as cognitive in the network.]
The parity principle: neural and network computation perform the same functional operation.

Algorithmic Amplification and the Erosion of Second-Order Reflection

The critical mass literature describes a single adoption event. Real networked environments compound the problem through algorithmic systems that continuously restructure the pressure landscape.

Lu (2024) argued in Humanities and Social Sciences Communications that personalized algorithmic decision-making creates challenges to user autonomy that are structurally difficult to eliminate. The challenges are threefold: algorithms deviate from a user's authentic self by optimizing for engagement metrics that do not track genuine preference; they create self-reinforcing loops that progressively narrow the user's exposure and therefore the user's self-concept; and they lead to a measurable decline in the user's deliberative capacities over time. On Frankfurt's framework, this is a systematic attack on the conditions for second-order reflection. The network does not merely override your preference in a single adoption decision; algorithmic curation degrades your ability to form second-order volitions at all.

Mann (2025) pushed this further in the California Management Review, arguing that technology adoption activates psychological defense mechanisms that resemble trauma response. When adoption is perceived as successful, the individual's original defense structures collapse: the user internalizes the rules imposed by the technology and develops a form of emotional identification with it, reinterpreting the origin of the constraint as a chosen relationship. Mann calls this "technological Stockholm syndrome." The implication for coercion research is significant: agents who are initially coerced into adoption may undergo a psychological process in which their preferences restructure to align with their behavior post-adoption, rendering the coercion invisible even to the person who experienced it. In biological systems, private utility itself gets rewritten.

Manufactured Consent as Measurable Process

Edward Herman and Noam Chomsky (1988) used the phrase "manufacturing consent" to describe how media systems produce public agreement through structural filtering. The critical mass literature operationalizes this: consent is manufactured when network structure pushes adoption rates past the coercion threshold, producing behavioral compliance that looks voluntary from the outside but registers as coerced at the level of individual preference.

The gap between structural determinism and human reality is itself informative. Real humans sometimes resist past their threshold. They find workarounds, form counter-networks, develop ironic compliance. The coercion rates implied by the literature represent a ceiling. Even at significantly lower rates, the ethical implications hold. If a substantial fraction of adoption in networked systems is structurally coerced, the assumption of individual consent that underlies platform governance, technology policy, and market regulation is compromised.

Conditional Agency and Distributed Responsibility

If network topology predicts individual choice with probability approaching 1 as density increases, personal consent becomes conditional on structural context. This shifts the ethical framework from autonomy (the capacity for self-governance) to what we might call conditional agency: free will modulated by topology and pressure, genuine in some configurations, illusory in others.

The responsibility implications follow directly. In a system where no single actor coerces but the structure produces coercion, who bears moral responsibility?

Traditional ethics assigns responsibility to agents with intent. Tort law looks for proximate cause. Neither framework handles emergent coercion well. The platform designer optimized for engagement. The early adopter liked the product. The algorithm maximized a metric. But the coercion happened. If the literature can predict and measure where coercion occurs, the absence of individual intent becomes insufficient defense.

Systems that amplify social influence through algorithmic curation, network-effect lock-in, or social proof mechanisms must be treated as cognitive infrastructures. They are architectures that produce specific patterns of adoption, and those patterns include systematic coercion of agents whose private preferences oppose the emergent norm.

Design Implications

If autonomy is a design variable, three interventions follow from the literature.

First, transparency about network pressure. The critical mass research can estimate coercion probability for any given network position and adoption rate. Showing users their position in the pressure landscape (how much of their adoption behavior is driven by peer effects versus genuine preference) would make structural coercion visible. Visibility does not eliminate pressure, but it restores the second-order reflection that Frankfurt identified as the condition for freedom.
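An estimate of the kind described can be sketched in a few lines, under loudly stated assumptions that are mine rather than the literature's: social thresholds uniform on [0, 1], so the probability that local pressure exceeds an agent's threshold equals the local adoption rate; and private opposition independent of threshold, with a fixed probability.

```python
def coercion_probability(local_adoption_rate, p_opposed=0.5):
    """Probability that a randomly chosen agent's next adoption is coerced.

    Assumptions (illustrative only): thresholds uniform on [0, 1], so
    P(pressure exceeds threshold) equals the local adoption rate; and
    private opposition (utility < 0) occurs independently with
    probability p_opposed."""
    p_pressure_wins = min(max(local_adoption_rate, 0.0), 1.0)
    return p_opposed * p_pressure_wins

for density in (0.05, 0.15, 0.30, 0.50):
    print(f"local adoption {density:.0%}: "
          f"coerced-adoption probability {coercion_probability(density):.0%}")
```

A deployed version would replace both assumptions with measured threshold and preference distributions; the point is only that the quantity a pressure dashboard would surface is well defined.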

Second, friction at the coercion threshold. The 10 to 25 percent window is identifiable in real time. Platforms could introduce deliberate slowdowns during cascade phases: confirmation steps, cooling-off periods, prompts that surface private preference before social pressure resolves the decision. Everall et al. (2025) support this approach from the opposite direction: their systematic review found that targeting individuals with moderate preferences and moderate network positions is most effective for enabling norm change. The same principle in reverse: protecting moderately positioned agents during cascade phases would be the most effective friction point against coerced adoption.

Third, exit architecture. Real systems could reduce coercion by lowering exit costs: data portability, interoperability standards, reduced switching penalties. When leaving is cheap, the pressure to stay despite negative utility drops. The coercion zone narrows.
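The exit-cost mechanism reduces to a one-line payoff comparison. The function and values below are an illustrative sketch, not drawn from any cited model: an agent with negative private utility stays only while the conformity benefit of staying beats the cost of leaving.

```python
def coerced_to_stay(private_utility, conformity_benefit, exit_cost):
    """True when an agent who privately opposes the platform (utility < 0)
    nevertheless stays, because staying (utility plus conformity benefit)
    beats leaving (paying the exit cost)."""
    return private_utility < 0 and private_utility + conformity_benefit > -exit_cost

# High exit cost: a dissatisfied user is held in place.
assert coerced_to_stay(private_utility=-0.4, conformity_benefit=0.1, exit_cost=0.5)

# Portability and interoperability cut the exit cost: the same user leaves.
assert not coerced_to_stay(private_utility=-0.4, conformity_benefit=0.1, exit_cost=0.1)
```

Holding preferences and pressure fixed, lowering the exit cost is the only term in the comparison a regulator can move directly, which is why exit architecture narrows the coercion zone.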

Limitations

The critical mass literature has important boundary conditions. Most models assume agents have fixed private utilities and thresholds. Real preferences shift through deliberation, persuasion, and experience. An agent who initially resists a technology may come to value it after coerced adoption. Mann's (2025) work on technological Stockholm syndrome suggests this preference revision may be psychologically mediated through defense mechanism collapse and emotional identification with the technology, raising the question of whether post-adoption preference alignment represents genuine preference formation or a deeper form of structural capture.

Most models also lack strategic behavior. Real agents form coalitions, coordinate resistance, and engage in signaling. A group of holdouts who publicly commit to non-adoption can shift local pressure dynamics in ways the current models do not capture.

The threshold mechanism in much of this literature overstates the sharpness of the coercion boundary. Human adoption decisions involve stochastic elements, emotional states, and contextual factors that create a probability distribution around the threshold. Eckles et al. (2024) demonstrated that even introducing a small probability of adoption below the threshold fundamentally changes how contagion spreads through different network structures, suggesting that clean topology-dependent results may overstate structural determinism.

These limitations bound the findings. They do not invalidate them. The literature demonstrates that network structure produces coerced adoption at scale. The precise rates and boundaries are approximations. The structural mechanism is real, and the 10 to 43 percent coercion window is now corroborated by a cross-disciplinary systematic review spanning 39 studies (Everall et al., 2025).