Modern political and cultural conflicts increasingly involve a mystifying fact: large groups of people who see the same videos, reports, and other information draw radically different conclusions (Kahan et al., 2012; Iyengar & Westwood, 2014). One group sees injustice or tyranny; the other sees necessity or self‑defense. Social science research across psychology, sociology, and political science suggests this is not mainly a matter of ignorance or lack of information, but of how human beings reason with information when beliefs, identities, values, and power are at stake (Kunda, 1990; Lodge & Taber, 2013).
This paper examines why shared information does not produce shared conclusions, drawing on cognitive psychology, moral psychology, and political theory. The aim is not to defend any ideological position, but to clarify the mechanisms that reliably produce divergent interpretations.
Information is not self‑interpreting
A common assumption in public discourse is that “the facts speak for themselves.” Empirical work on human reasoning shows this does not happen. Information must be interpreted, and interpretation is conditioned by prior beliefs, emotional commitments, and social context (Kunda, 1990; Nickerson, 1998).
Evidence is rarely appraised in a neutral, accuracy‑oriented way. Reasoning is instead shaped by “motivated” processing, in which cognitive operations are unconsciously steered towards conclusions that protect one’s existing beliefs, values, or identities (Kunda, 1990; Taber & Lodge, 2006). When evidence is ambiguous—which is often the case with video footage, eyewitness testimony, or ongoing events—motivated reasoning is especially strong, causing sincere observers to “follow the evidence” to opposite conclusions (Lord et al., 1979; Lodge & Taber, 2013).
Confirmation bias as a filter
One of psychology’s most robust findings is confirmation bias: the tendency to seek, notice, and remember information that confirms one’s prior beliefs, while discounting, reinterpreting, or otherwise discrediting information that challenges them (Nickerson, 1998; Tversky & Kahneman, 1974). Nickerson’s (1998) survey of the literature documents confirmation bias across many domains and shows that it affects hypothesis testing, memory, and interpretation, rather than merely causing people to flatly deny facts too obvious to reject.
Experimental work shows how identical mixed evidence can polarize opinion when people selectively assimilate it. Lord et al. (1979) exposed supporters and opponents of capital punishment to the same pair of pro‑ and anti‑capital‑punishment studies and found that both groups became more extreme in their original views: each treated the confirming study as methodologically sound and the disconfirming one as flawed. Over time, this selective attention and weighting produce the subjective experience that “the evidence clearly supports my position” (Lord et al., 1979; Plous, 1993).
Identity‑protective cognition
Beliefs become linked with social identity, especially partisan affiliation, moral self‑image, and group loyalty. When that linkage forms, conceding a point can feel like betraying one’s group; recognizing this helps explain why disagreement is often about belonging, not just about being right.
Experimental research on “cultural cognition” finds that people with different cultural or ideological worldviews interpret the same scientific information differently on contested issues, such as climate change or gun control (Kahan et al., 2010). In one large‑scale study of climate risk perceptions, Kahan et al. (2012) found that science literacy and numeracy often increased, rather than decreased, polarization: more knowledgeable individuals were better at selectively using evidence to support identity‑consistent conclusions. Similar dynamics appear in partisan motivated reasoning, where people scrutinize opposing arguments more harshly than those from their own side (Bolsen et al., 2013; Taber & Lodge, 2006).
Narrative framing and meaning
Humans understand reality through narratives. Events are interpreted using narrative frames that specify who acted, why they acted, who was harmed, and what the event symbolizes (Entman, 1993). Entman (1993) conceptualizes framing as selecting certain aspects of reality and making them more salient in communication so as to promote a particular problem definition, causal interpretation, moral evaluation, and proposed remedy. Once an event is placed into a narrative, new information is processed in ways that preserve that storyline (Chong & Druckman, 2007).
A single incident involving federal or state authority, for example, can be framed as proof of institutional abuse, evidence of necessary enforcement, a tragic anomaly, or a symptom of systemic failure. Each frame selectively highlights certain facts, background conditions, and causal chains over others, so observers are in effect answering different questions about what the event means (Entman, 1993; Scheufele & Tewksbury, 2006). Political communication research shows that such frames can shift the considerations people apply when forming judgments, even when the underlying factual content is held constant (Chong & Druckman, 2007).
Trade‑offs among moral values
Political psychology suggests that people organize moral judgment around multiple, sometimes competing, values rather than a single moral dimension. Haidt’s moral foundations theory argues that individuals assign different weights to moral concerns, such as care, fairness, loyalty, authority, and sanctity, and these uneven weightings contribute to reliable ideological differences (Haidt, 2013). One can build on this idea and view political conflict as a recurring trade‑off among freedom, stability, and equality: all three are desirable, but cannot be maximized simultaneously (Tetlock, 2003).
Research on basic values and “sacred” commitments shows that people tend to prioritize certain values as non‑negotiable while viewing others as conditional or context‑dependent (Schwartz, 1992; Tetlock, 2003). Perceived threats to order tend to elevate stability over freedom; perceived threats to the group elevate equality over liberty; perceived threats to autonomy elevate freedom over coordination. Studies of sacred values and taboo trade‑offs further show that when people view a core value as threatened, they resist compromise and view alternatives not just as mistaken but as morally wrong (Tetlock, 2003). As a result, the same policy or action can appear justified or unjust depending on which value feels endangered, and these shifts are experienced as moral necessity rather than inconsistency.
Power and flexible principles
Political behavior involves a recurring pattern: support for abstract principles such as checks and balances, free speech, or constraints on executive power often diminishes when those principles frustrate one’s preferred outcomes. Levitsky and Ziblatt (2019) show that parties and leaders emphasize institutional restraints much more strongly when out of power but downplay or reinterpret them when in power.
The pattern need not reflect conscious hypocrisy. People tend to see power exercised by one’s own side as legitimate, protective, and necessary, while viewing equivalent power exercised by the other side as dangerous or authoritarian (Graham & Svolik, 2020). Experimental work on democratic norms shows that partisans are more willing to tolerate norm violations when their own side benefits, even while endorsing the norms in the abstract (Graham & Svolik, 2020). The principle remains rhetorically endorsed, but its application becomes situational and identity‑dependent.
Emotion, threat, and crisis
Emotions centrally influence belief formation and political judgment. Affective intelligence theory proposes that different emotional states, such as anxiety, enthusiasm, and anger, trigger distinct processing modes: threat‑related emotions narrow attention and increase reliance on heuristics (Marcus et al., 2000). When people perceive that core values or identities are threatened, they become more receptive to information that offers protection and more dismissive of information emphasizing procedural or long‑term costs (Brader, 2006; Huddy et al., 2005).
Empirical research shows that perceived threat and anxiety increase support for coercive or exceptional measures, such as expansive security policies or emergency powers (Huddy et al., 2005). After major crises, individuals who feel more threatened are more likely to accept civil liberties restrictions and to justify actions they would otherwise oppose, especially when leaders frame such actions as necessary to counter the danger (Huddy et al., 2005; Marcus et al., 2000). Divergent groups, however, often perceive different threats—physical security, moral harm, loss of status, or institutional decay—even when exposed to the same events (Iyengar & Westwood, 2014).
Why more information is not the solution
Taken together, these mechanisms explain why simply increasing information access is not a reliable way to reduce disagreement. New information does not enter a neutral mind; it enters a structured psychological system shaped by one’s identity, values, emotions, and social incentives (Lodge & Taber, 2013; Kahan et al., 2012). Motivated reasoning and confirmation bias filter evidence, narrative frames organize it into coherent stories, moral priorities determine which trade‑offs are acceptable, power dynamics affect when principles are applied, and emotions, especially threat, affect how much weight is given to potential costs and safeguards (Entman, 1993; Marcus et al., 2000).
As a result, the same data are filtered rather than absorbed, and ambiguity is routinely resolved to preserve consistency with preexisting commitments. Moral certainty can increase with polarization, because each side experiences its own position as reasonable, evidence‑based, and morally justified within its own interpretive framework (Iyengar & Westwood, 2014; Taber & Lodge, 2006). Within any given group, disagreement feels like a clash between truth and error rather than between incompatible but structured ways of seeing.
Implications for political discourse
Understanding these dynamics does not eliminate disagreement, but it explains it. Persistent conflict does not require mass irrationality or widespread bad faith. It instead arises naturally from how human beings reason under uncertainty, moral commitment, and group affiliation (Haidt, 2013; Lodge & Taber, 2013).
More productive discourse may depend on devoting some attention to underlying interpretive structures: which values are being prioritized, which threats are being perceived, and which trade‑offs are being accepted. When participants can explicitly acknowledge these layers—motivated reasoning, identity protection, framing, moral trade‑offs, power interests—debate has a better chance of moving beyond stalemated, incompatible exchanges of conclusions drawn from the same information (Chong & Druckman, 2007; Levitsky & Ziblatt, 2019).
Conclusion
People believe different things from the same information, not because reality is infinitely malleable, but because human cognition is not value-neutral. Interpretation is filtered by motivated reasoning, identity protection, narrative framing, moral trade‑offs, emotional threat, and power dynamics. These forces are common across ideological groups and are most active when issues are morally charged or politically consequential (Kunda, 1990; Marcus et al., 2000). Recognizing this pattern does not require abandoning one’s convictions. It requires acknowledging that disagreement is often rooted less in ignorance than in the predictable structure of human reasoning itself (Haidt, 2013; Lodge & Taber, 2013).
ART Analysis: How People Draw Opposing Conclusions from the Same Facts
Awareness
The first step is awareness of what’s actually happening when people look at the same set of facts and draw opposite conclusions.
Key patterns to bring to the surface:
Disagreement usually persists even after people watch the same video, read the same report, or hear the same testimony.
Each side typically believes the other side is dishonest, stupid, or malicious.
For most people, their own conclusions feel obvious, reasonable, and morally grounded.
Reflective Questions:
When I say “the evidence is clear,” what am I assuming about how evidence should be interpreted?
Which details did I instinctively notice first—and which details did I mentally discount or dismiss?
If someone else reached the opposite conclusion, am I assuming evil intent, or could they be operating from a different set of values?
Awareness at this stage is not about giving up a position. It’s about noticing that interpretation happens before conscious reasoning begins.
Reason
This stage addresses why the mind works this way, and what incentives are at work under the surface.
Key dynamics to consider:
Reasoning is usually driven by the need to defend identity, values, or group belonging.
Moral priorities (freedom, stability, equality) act as filters; they are not neutral lenses.
Perceptions of power matter: when a group we trust exercises power, it feels legitimate; when a group we distrust exercises the same power, it feels threatening.
Analytical Questions:
Which value do I instinctively prioritize in this case: freedom, stability, or equality?
If the same action were done by a group I oppose, would my interpretation change?
Am I approaching this event more as a factual question or as a moral-symbolic question?
What outcome do I want to be true—and how might that shape my interpretation?
Reasoning is better when people assess incentives, not just arguments.
Transformation
This stage focuses on how thinking can become more disciplined without becoming passive or relativistic.
The goal is not “everyone is equally right” but:
Greater humility about certainty
More precise boundaries between facts, values, and narratives
Stronger resistance to reflexive tribal alignment
Transformational Questions:
What would it look like to hold my conclusion with confidence but less moral certainty?
Which limits on power would I still support even if my preferred side permanently controlled the system?
If I had to explain the opposing interpretation in a way they would say is fair, could I do it?
Am I using principles as constraints on my side, or only as weapons against the other?
A practical reframing question:
“What am I willing to be wrong about if new information emerges—and what am I unwilling to reconsider no matter what?”
That question reveals where belief ends and identity begins.
Closing Integration
Instead of asking:
“Who’s lying?”
“Who’s evil?”
Ask:
“Which values conflict here?”
“Which risks am I more afraid of?”
“Which trade-offs am I quietly accepting?”
People don’t disagree because reality is unknowable. They disagree because meaning is negotiated through values, identity, and power. Clearer thinking begins when those forces are made visible rather than denied.
References
Bolsen, T., Druckman, J. N., & Cook, F. L. (2013). The influence of partisan motivated reasoning on public opinion. Political Behavior, 36(2), 235–262. https://doi.org/10.1007/s11109-013-9238-0
Brader, T. (2006). Campaigning for hearts and minds: How emotional appeals in political ads work. University of Chicago Press.
Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10(1), 103–126. https://doi.org/10.1146/annurev.polisci.10.072805.103054
Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x
Graham, M. H., & Svolik, M. W. (2020). Democracy in America? Partisanship, polarization, and the robustness of support for democracy in the United States. American Political Science Review, 114(2), 392–409. https://doi.org/10.1017/s0003055420000052
Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Vintage.
Huddy, L., Feldman, S., Taber, C., & Lahav, G. (2005). Threat, anxiety, and support of antiterrorism policies. American Journal of Political Science, 49(3), 593–608. https://doi.org/10.2307/3647734
Iyengar, S., & Westwood, S. J. (2014). Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3), 690–707. https://doi.org/10.1111/ajps.12152
Kahan, D. M., Jenkins‐Smith, H., & Braman, D. (2010). Cultural cognition of scientific consensus. Journal of Risk Research, 14(2), 147–174. https://doi.org/10.1080/13669877.2010.511246
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732–735. https://doi.org/10.1038/nclimate1547
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Levitsky, S., & Ziblatt, D. (2019). How democracies die. Broadway Books.
Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge University Press. https://doi.org/10.1017/cbo9781139032490
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Marcus, G. E., Neuman, W. R., & MacKuen, M. (2000). Affective intelligence and political judgment. University of Chicago Press.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Plous, S. (1993). The psychology of judgment and decision making. McGraw-Hill.
Scheufele, D. A., & Tewksbury, D. (2006). Framing, agenda setting, and priming: The evolution of three media effects models. Journal of Communication, 57(1), 9–20. https://doi.org/10.1111/j.0021-9916.2007.00326.x
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65). Academic Press. https://doi.org/10.1016/s0065-2601(08)60281-6
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x
Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320–324. https://doi.org/10.1016/s1364-6613(03)00135-9
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
