How Digital Platforms, Creator Economy, and AI Are Cannibalizing Democracy’s Knowledge Foundation
Artificial intelligence systems face a fundamental paradox: they depend entirely on human-generated knowledge for their training and operation, yet the digital ecosystem in which AI operates is systematically destroying the very infrastructure that produces quality information. This creates what we might call “the parasite’s dilemma”—AI companies are building increasingly sophisticated systems to synthesize and distribute knowledge while simultaneously undermining the economic and institutional foundations that make knowledge creation possible.
This analysis is not an argument against AI or digital platforms, but rather a call for intentional system design. These technologies can be architected to strengthen rather than cannibalize the information infrastructure they depend upon—but only if we understand the current extractive dynamics and design alternatives.
Understanding this dilemma requires examining how three successive waves of digital disruption have cannibalized what we call the “quality information supply chain”—the structured pipeline that democratic societies developed to transform raw knowledge discovery into reliable public information. Each wave has been more extractive than the last: first digital platforms captured advertising revenue from traditional media, then the creator economy built audiences by repackaging others’ reporting without compensation, and now AI systems synthesize vast amounts of information while providing no economic return to original creators.
The stakes extend far beyond media economics. This systematic destruction strikes at the foundational requirements of democratic governance itself—the shared frameworks for evidence-based reasoning that diverse societies need to make collective decisions. Without these frameworks, democracy becomes impossible.
The Quality Information Supply Chain: Democracy’s Original Knowledge Infrastructure
Before examining how this infrastructure is being dismantled, we must understand what we’re losing. For centuries, democratic societies developed sophisticated quality information supply chains—structured pipelines that transformed raw knowledge discovery into actionable wisdom for public consumption, with built-in quality controls and economic incentives aligned with accuracy rather than engagement.
Stage 1: Academic Discovery and Peer Review Universities and research institutions served as the primary sites of knowledge creation. Researchers operated within rigorous peer-review systems that, while sometimes slow and conservative, provided crucial quality control. The academic environment offered several key advantages: stable funding that allowed for long-term investigation, institutional independence from immediate market pressures, and career incentives aligned with accuracy and thoroughness rather than speed or sensationalism.
Stage 2: Expert Synthesis and Educational Materials Leading academics synthesized years or decades of research into comprehensive textbooks and educational materials. This process served as a secondary filter, requiring authors to distill complex findings into coherent frameworks while maintaining academic rigor. Publishers invested significantly in fact-checking and editorial oversight, knowing their reputation depended on accuracy.
Stage 3: Educational Dissemination and Critical Engagement Colleges and universities distributed this synthesized knowledge through structured curricula. Students didn’t just consume information—they engaged with it through discussion, analysis, and application under expert guidance. This stage created a trained class of professionals who understood not just facts, but how to evaluate and apply information critically.
Stage 4: Workforce Integration and Public Knowledge Graduates entered various professions, carrying with them both specific expertise and general information literacy skills. Through their work and civic participation, they helped elevate the overall quality of public discourse and decision-making. Professional associations, continuing education, and workplace mentorship further refined and distributed knowledge.
This pipeline had natural economic incentives aligned with quality. Academic careers rewarded accuracy and thoroughness. Publishers profited from producing trusted educational materials. Educational institutions competed on the quality of their graduates’ preparation for real-world challenges.
Crucially, this system was also supported by legal frameworks of accountability. Institutional news sources operated under established standards of journalistic liability—they could be sued for libel, held accountable for false reporting, and faced professional consequences for ethical violations. These legal standards created powerful incentives for verification, fact-checking, and editorial oversight.
The First Wave: How Digital Platforms Broke the Chain
The emergence of Google, Facebook, and other Web 2.0 platforms fundamentally disrupted this information ecosystem through extractive, monopolistic business models built on what Harvard Business School professor Shoshana Zuboff terms “surveillance capitalism”—the practice of extracting human behavioral data to predict and influence future behavior for commercial gain.
The Attention Economy’s Perverse Incentives Unlike the traditional pipeline, where quality determined long-term success, digital platforms optimized for immediate engagement. Algorithms learned to surface content that generated clicks, shares, comments, and prolonged browsing—metrics that often correlated inversely with accuracy, nuance, or long-term value. Sensational, emotionally charged, or controversial content consistently outperformed careful, measured analysis.
The Collapse of Traditional Revenue Models Google’s dominance in search effectively captured the economic value that previously supported journalism and educational publishing. When users could access information “for free” through search, they stopped paying for newspapers, magazines, and reference materials. Advertising revenue, previously distributed across thousands of independent publishers, became concentrated in the hands of a few tech giants.
The scale of this extraction is staggering: Google and Facebook together captured approximately 60% of all digital advertising revenue by 2020, while newspaper advertising revenue fell from $49 billion in 2000 to under $9 billion by 2020. Meanwhile, newsroom employment dropped by more than half over the same period, from roughly 71,000 journalists in 2000 to around 31,000 by 2020.
Facebook compounded this problem by creating walled gardens where information sharing occurred within algorithmically curated feeds rather than through direct engagement with original sources. Publishers found themselves dependent on these platforms for distribution, yet unable to capture sufficient revenue to sustain quality operations.
The Legal Shield Problem In the United States, Section 230 of the Communications Decency Act fundamentally disrupted accountability structures by exempting platforms from liability for content published by third parties. While originally intended to protect nascent internet services, this legal shield enabled platforms to distribute information without the quality controls that constrained traditional media. Similar legal frameworks in other jurisdictions have created comparable liability gaps, though the specific mechanisms vary by country.
The result was a two-tier system where legacy institutional sources remained legally accountable for their reporting while platform-distributed content faced no comparable standards. This asymmetry accelerated the migration of audiences from accountable to unaccountable sources, creating competitive disadvantages for the very institutions that invested in quality and accuracy.
The Second Wave: The Creator Economy’s Deeper Extraction
The emergence of the creator economy introduced another destructive dynamic: the rise of news communicators who build audiences and revenue streams by repackaging information gathered by others. This represents a second parasitic layer that further drains resources from the quality information supply chain.
The Free-Rider Problem These creators—operating through newsletters, podcasts, and social media—have become recognized as authoritative news sources despite performing primarily as distributors rather than gatherers of information. Traditional newsrooms and investigative journalists bear the substantial costs of reporting: sending correspondents to cover events, conducting interviews, verifying sources, navigating legal challenges, and maintaining bureaus in different locations.
Meanwhile, creator economy participants can aggregate, summarize, and redistribute this reporting with minimal overhead costs. Many platforms now operate creator funds that compensate these communicators based on the attention they generate from audiences they curate, creating a direct revenue stream for information redistribution. The platforms themselves cultivate huge audiences which they monetize through advertising, e-commerce, and other revenue streams, capturing additional economic value from the information circulation. However, these compensation flows stop at the creator and platform level—the original reporting sources that provide the underlying information often receive no compensation and may never even know their work is being repackaged and monetized by these communicators.
This creates a perverse economic dynamic where creators are incentivized to find and repackage compelling information without citing or compensating original sources. Consider a typical example: an investigative team spends six months and $50,000 uncovering government contract fraud, publishing their findings in a local newspaper. Within hours, that investigation is repackaged into:
- A viral Twitter thread by a political commentator (50,000 likes, monetized through subscriptions)
- A YouTube video analysis (100,000 views, ad revenue)
- Multiple newsletter summaries (thousands of paid subscribers)
- Podcast episode discussions (sponsor revenue)
Each of these derivative products may generate more individual revenue than the original newspaper receives, often without any attribution back to the source. The original newsroom, meanwhile, struggles to justify the cost of such investigations when they can’t capture the economic value their work creates.
Accelerating the Collapse This dynamic accelerates the collapse of primary information gathering. As digital platforms continue to capture revenue from news organizations, with Facebook and Google together accounting for about 60 percent of the digital advertising market, newsrooms reduce their reporting capacity, creating an information ecosystem increasingly dependent on recycled content rather than fresh investigation. Industry data show that the combined effect of these changes was to reduce traffic referrals from Facebook to publishers by 48% last year and from X by 27%. Research by the Reuters Institute for the Study of Journalism demonstrates how digital platforms’ dominance has fundamentally disrupted traditional journalism business models, while studies of local news environments show increasing consolidation as independent outlets struggle to compete with platform-mediated distribution.
The creators who build large audiences by effectively communicating other people’s reporting often have little incentive to invest in original information gathering themselves, since their competitive advantage lies in communication and audience building rather than journalism.
The Loss of Information Categories and Cognitive Infrastructure
The collapse of these economic structures coincided with the disappearance of physical and structural cues that once taught people to distinguish between different types of information—a loss that has profound implications for how societies evaluate truth claims and make collective decisions.
Why Information Categories Matter: The Tension Between Personal Truth and Collective Action The destruction of traditional information categories has created a fundamental tension in how societies navigate truth and decision-making. On one hand, the democratization of information distribution has validated personal experiences and perspectives previously excluded from mainstream discourse. There is genuine value in this expansion—personal truths matter, lived experiences deserve recognition, and diverse perspectives enrich collective understanding.
However, this liberation comes with a critical trade-off. While personal truths are important for individual meaning-making and community building, societies still need mechanisms for collective decision-making and coordinated action. Democracy requires citizens to make shared choices about policies, resource allocation, and social priorities. Market capitalism depends on participants having common standards for evaluating products, services, and business practices. These collective functions require some level of shared epistemic foundation—agreed-upon methods for distinguishing between more and less reliable information.
Consider what happens when epistemic coordination breaks down: During the COVID-19 pandemic, different groups operated from completely incompatible frameworks for evaluating medical evidence. Some prioritized peer-reviewed studies and public health expertise, others trusted personal anecdotes and alternative media sources, still others relied primarily on political affiliations to determine their positions. Without shared standards for weighing evidence, society struggled to coordinate responses to a collective threat, even when objective realities (like hospital capacity) demanded urgent action.
Evidence and Reasoning as Democratic Infrastructure
To understand why this matters, we need to introduce a crucial concept: epistemology—the study of how we know what we know. Epistemic reasoning refers to the mental skills we use to evaluate information: How do we decide what sources to trust? How do we weigh competing evidence? How do we distinguish between correlation and causation, or between expert analysis and personal opinion?
These aren’t abstract philosophical concepts—they’re practical cognitive tools that every citizen uses when deciding how to vote, what products to buy, or which medical advice to follow. Epistemic skills include the ability to:
- Trace information back to its original source
- Evaluate the credibility and expertise of different sources
- Understand different types of evidence (statistical, anecdotal, experimental)
- Recognize logical fallacies and cognitive biases
- Distinguish between facts, interpretations, and opinions
Evidence and reasoning are not merely abstract intellectual tools—they are the foundational decision-making mechanisms that make democratic, pluralistic, inclusive societies possible. In diverse societies where people hold different values and competing interests, evidence-based reasoning provides the essential common language for productive disagreement and collective problem-solving.
The quality information supply chain was never just about producing “better” information in some abstract sense. It was about maintaining the epistemic infrastructure that democratic pluralism requires to function. Without reliable mechanisms for distinguishing between rigorous evidence and sophisticated manipulation, democratic societies lose their capacity for evidence-based collective decision-making.
The Lost Art of Information Literacy: Physical Structure and Mental Categories Physical newspapers provided clear visual and spatial separation between news reporting and editorial opinion. The news section looked different from the editorial page, which looked different from the opinion section, which looked different from the classified ads. These physical boundaries trained readers to mentally categorize information and adjust their critical evaluation accordingly. Children learned to read these cues naturally, understanding that a front-page news story should be evaluated differently from an editorial cartoon.
The digital transformation eliminated these physical structures without replacing them with equivalent organizational systems. On social media platforms and in search results, all information appears in essentially identical formats—the same fonts, the same layout templates, the same distribution mechanisms. A rigorously fact-checked investigative report looks exactly the same as a personal opinion blog post or a piece of sponsored content.
This loss affected multiple generations simultaneously. Younger generations never learned to read these traditional structural cues, but older generations who understood print-based distinctions found themselves equally lost when those familiar organizational systems disappeared.
The Cognitive Rewiring Problem The collapse of structural information literacy has been accelerated by the neuroplastic effects of digital media consumption. As neuroscience research demonstrates, the skills we practice get stronger while neural pathways for underused abilities atrophy. The modern digital environment has been systematically training our brains for rapid task-switching and shallow processing rather than the sustained, contemplative engagement that quality information evaluation requires.
This cognitive rewiring manifests in documented declines across multiple domains: we read less, retain less of what we read, and struggle with complex texts. The digital information ecosystem rewards skimming and scanning while making the sustained focus required for deep reading increasingly difficult. Users routinely overestimate their comprehension of material they’ve only skimmed, creating false confidence in their information processing abilities.
The result is a society where virtually everyone—regardless of age—lacks both the structural frameworks and the cognitive habits necessary to quickly categorize and appropriately evaluate different types of information. Without these dual supports, people default to evaluating all information using emotional resonance or social proof rather than appropriate quality standards.
The Third Wave: AI as the Ultimate Extractor
Artificial intelligence represents the culmination and acceleration of these destructive dynamics, creating a fully extractive relationship with the quality information supply chain while introducing qualitatively new threats to democratic reasoning. This is where the parasite’s dilemma becomes most acute: AI systems are entirely dependent on the information infrastructure they’re helping to destroy.
From Cognitive to Epistemic Degradation The cognitive impairments documented in digital media consumption provide the foundation for understanding how AI represents a qualitatively different and more dangerous threat to human reasoning capabilities. While digital media has already rewired our brains for shallow processing and fragmented attention, AI extends this degradation into two critical new domains that threaten the very foundations of democratic reasoning.
While digital media trained us to skim and scan rather than read deeply, AI systems now threaten to eliminate the need for epistemological reasoning altogether. Where social media and web browsing still required users to encounter multiple sources, compare claims, and make basic judgments about credibility, AI presents synthesized conclusions that bypass these critical thinking processes entirely.
The cognitive skills being lost extend beyond attention and memory to include fundamental epistemological capabilities: the ability to trace the provenance of information, evaluate the reliability of sources, distinguish between different types of evidence, and understand the methodology behind knowledge claims. These are precisely the reasoning skills that democratic societies require for citizens to participate meaningfully in collective decision-making.
Unlike the passive cognitive atrophy caused by digital media consumption, AI actively discourages epistemological engagement. When an AI system provides a confident-sounding answer to a complex question, users have little incentive to investigate the underlying sources, challenge the reasoning, or seek alternative perspectives. The system appears to have already done the intellectual work, making further critical analysis seem unnecessary.
The Black Box Manipulation Problem The most insidious aspect of AI’s epistemic influence lies in its complete opacity around information selection and user profiling. Even when users ask specific, seemingly objective questions, AI systems operate as closed black boxes that provide no transparency about what they know about the user or how they prioritize and filter information in their responses.
This creates unprecedented opportunities for manipulation that extend far beyond the engagement-driven algorithms of social media platforms. While Facebook and YouTube algorithms were problematic because they prioritized content that maximized user engagement for advertising revenue, at least users could observe patterns in what content appeared in their feeds and make some inferences about the underlying logic.
AI systems, by contrast, can tailor not just what information to provide, but how to frame it, what context to include or exclude, and what conclusions to emphasize—all based on detailed user profiles that remain completely hidden from the users themselves.
For example, when two users ask an AI system “Should I invest in renewable energy stocks?”, the responses might be dramatically different based on hidden profile data. User A, identified as risk-averse with a history of environmental concerns, receives a response emphasizing stable returns and ESG benefits. User B, flagged as profit-motivated with tech interests, gets a response focused on growth potential and emerging technologies. Both users believe they received objective financial analysis, unaware that the AI crafted fundamentally different arguments based on psychological profiles designed to maximize engagement or desired outcomes.
As AI companies face increasing pressure to become profitable, they are already adopting the same engagement-maximizing strategies pioneered by social media platforms. This means AI responses may be optimized not for truth or user benefit, but for continued platform usage, subscription retention, or the promotion of particular products and viewpoints that serve corporate interests.
The Dependency Paradox Here lies the fundamental dilemma that gives this essay its title: AI systems depend entirely on the quality information supply chain for their training data and ongoing knowledge, yet they accelerate its destruction through multiple mechanisms:
- Economic extraction: AI companies profit from synthesizing information produced by others without compensating the original creators
- Synthetic content pollution: AI-generated content floods information channels, making it harder for quality information to achieve visibility while simultaneously degrading the training data for future AI systems. This creates an accelerating feedback loop: AI systems trained on increasingly synthetic and low-quality data produce even more degraded content, which then becomes training material for subsequent AI models. This phenomenon contributes to what some call the “dead internet theory” where authentic human-created content becomes increasingly rare, potentially leading to what researchers term “model collapse”—where AI systems degrade over successive generations due to training on their own synthetic outputs.
- User dependency: As people become accustomed to AI-mediated reasoning, they lose both the cognitive skills for deep information processing and the epistemological skills necessary for independent critical evaluation
- Authority displacement: AI systems establish implicit epistemic authority without the accountability structures that constrained traditional knowledge sources
The Existential Threat to Democratic Pluralism
The cascading effects of these three waves of extraction—digital platforms, creator economy, and AI—represent an existential threat to democratic governance itself. When citizens cannot distinguish between rigorous evidence and sophisticated manipulation, or when they lose faith in reasoning processes because those processes have been systematically gamed by bad actors, the foundational requirements for democratic pluralism begin to erode.
This is why the collapse of information categories represents more than just a media crisis—it’s a crisis of democratic capacity itself. The alternatives to evidence-based democratic reasoning are stark: authoritarian imposition of truth, fragmentation into incompatible epistemic tribes, or pure power struggles where the strongest voice wins regardless of merit. None of these alternatives can sustain the pluralistic, inclusive democracy that depends on citizens’ ability to reason together about complex problems.
The Regulatory Divergence: Protecting Extraction vs. Preserving Democracy
The international response to these challenges reveals troubling priorities. While some regions attempt to preserve democratic discourse through governance frameworks, powerful corporate interests are actively working to prevent accountability measures.
The regulatory response reveals a fundamental divergence in political philosophy. Europe’s AI Act includes transparency requirements and risk assessments for AI systems used in information distribution, acknowledging concerns about epistemic influence and democratic discourse. Beyond the AI Act, European nations are pioneering additional protections: Denmark has officially recognized the human face, body, and voice as intellectual property, treating personal physical and vocal attributes as elements of identity worthy of copyright-like protection. Germany empowers its cybersecurity agency to monitor synthetic media during elections, while France processes deepfake complaints under identity theft laws.
The United States is moving in the opposite direction. In May 2025, Congress passed a 10-year moratorium on state AI laws, effectively preventing states from enforcing any AI regulations for a decade. This measure, tucked into a broader budget bill, would wipe out existing state protections against deepfakes, automated hiring discrimination, and other AI harms while giving tech companies unprecedented freedom from accountability.
This divergence is particularly concerning given the analysis in this essay. The moratorium doesn’t just prevent harmful AI regulation—it actively protects the extractive business models that are cannibalizing the quality information supply chain. By blocking state-level attempts to create transparency requirements or accountability measures, the moratorium ensures that AI systems can continue operating as black boxes, manipulating information and extracting value from original creators without oversight.
The autocratic logic extends beyond AI regulation to systematic attacks on knowledge creation itself. Proposed cuts to higher education funding represent a coherent strategy for those who view democracy as a failed system: by reducing the amount of knowledge created at universities—the first stage of the quality information supply chain—authoritarians can reduce the amount of verified information available to citizens, making populations easier to control through propaganda and manipulation.
This approach recognizes that an informed citizenry capable of independent reasoning poses an existential threat to autocratic governance. Rather than competing in the marketplace of ideas, the strategy involves constraining the supply of quality information at its source, ensuring that citizens must rely on centralized authorities for processed conclusions rather than developing their own analytical capabilities.
The contrast couldn’t be starker: Europe is attempting to preserve democratic discourse through AI governance, while powerful corporate interests in the US are successfully lobbying to eliminate even the possibility of such protections. This suggests we may be witnessing not just market failure, but the active degradation of democratic governance capabilities in favor of concentrated corporate power.
Future AI governance frameworks will need to address not just preventing AI-generated misinformation, but ensuring that AI systems contribute to rather than undermine the epistemic foundations of democratic decision-making. This means transparency requirements for AI training data sources, obligations to compensate original information creators, and standards for maintaining rather than degrading citizens’ epistemic capabilities. The current US trajectory makes such comprehensive reform increasingly unlikely.
Solutions and Recommendations: Rebuilding Information Infrastructure for the AI Age
Addressing this crisis requires moving beyond the extractive models of surveillance capitalism toward systems that align economic incentives with information quality. This reconstruction must operate on multiple levels simultaneously: educational, economic, legal, and technological.
Information Ecosystem Education: Beyond Media Literacy Traditional media literacy education, while valuable, is insufficient for addressing the current crisis. We need comprehensive information ecosystem education that teaches people to understand and navigate the complete quality information supply chain.
This education must include:
- Supply Chain Awareness: Teaching people to recognize and value the different stages of information production, from primary research and reporting through expert synthesis to public dissemination. People need to understand what it costs to produce quality information and why those costs matter.
- Information Typology: Explicit instruction in distinguishing between different categories of information—primary research, peer-reviewed analysis, investigative reporting, expert commentary, personal opinion, and synthetic content. This includes understanding the different validation processes and accountability structures associated with each type.
- Economic Literacy: Helping people understand how information production is funded and how economic incentives shape what information gets produced and distributed. This includes awareness of the parasitic dynamics that drain resources from primary information gathering.
- AI and Algorithmic Literacy: Education about how AI systems work, what their limitations are, and how to evaluate AI-generated information. This includes understanding training data provenance, the difference between synthesis and original research, and the importance of transparency in epistemic systems.
Generating Political Will for Structural Reform Educational efforts alone cannot restore the quality information supply chain without corresponding political action to restructure the economic and legal frameworks that currently reward parasitic information practices.
Key areas for political advocacy include:
- Platform Accountability Reform: Updating Section 230 and similar laws to create graduated liability for platforms based on their role in information distribution. Platforms that algorithmically amplify content or present themselves as authoritative sources should face greater accountability than passive hosting services.
- Antitrust Enforcement: Breaking up the monopolistic control that a few tech giants exercise over information distribution and advertising revenue. This includes ensuring that content creators can maintain direct relationships with their audiences without platform intermediation.
- Public Information Infrastructure Investment: Creating public funding mechanisms for investigative journalism, fact-checking operations, and academic research that operate independently of both corporate and platform interests.
- AI Transparency Requirements: Mandating that AI systems used for information synthesis or distribution provide clear documentation of their training data sources, decision-making processes, and confidence levels. AI agents that present themselves as authoritative should be required to meet transparency standards comparable to those expected of human experts.
Institutional Reconstruction for Quality Assurance Rather than simply returning to old gatekeeping models, we need new institutions that combine digital accessibility with traditional quality controls:
- Verified Source Networks: Creating certification systems that help users identify information sources that meet specific quality and accountability standards, similar to how academic journals or professional associations currently operate.
- Economic Models for Quality: Developing sustainable funding mechanisms that reward accuracy, thoroughness, and long-term value rather than immediate engagement. This includes exploring public funding, subscription models, and micropayment systems that can support quality information production.
- Collaborative Verification Systems: Building tools that enable communities of experts to collaboratively verify and contextualize information without recreating the exclusionary aspects of traditional gatekeeping.
- AI as Quality Enhancement: Deploying AI systems specifically to support rather than replace human expertise—for fact-checking, source verification, bias detection, and helping users understand the provenance and reliability of information they encounter.
The Path Forward: Recognizing Information as Democratic Infrastructure
The choice before us is clear: we can continue allowing engagement-optimized algorithms and profit-driven AI systems to shape our collective understanding of reality, or we can build new systems that prioritize truth, accuracy, and human flourishing over platform profits.
Most importantly, we must recognize that the health of our information ecosystem directly determines the health of our democratic and economic systems. We cannot have sustainable capitalism or functional democracy while our information infrastructure remains captured by extractive digital systems that cannibalize the very knowledge sources they depend upon.
The AI Industry’s Self-Interest Problem
For AI companies and developers, this crisis represents a fundamental business sustainability challenge. Current AI systems are entirely dependent on the quality information supply chain for their capabilities—language models trained on Wikipedia, scientific papers, news articles, and educational content perform dramatically better than those trained solely on social media posts or synthetic content.
As the quality information supply chain continues to degrade, AI companies will face increasingly severe limitations:
- Training data quality decline: Fewer high-quality sources means AI models trained on progressively degraded datasets
- Synthetic content contamination: As AI-generated content floods the internet, future training datasets become polluted with lower-quality synthetic material
- Expert knowledge scarcity: As newsrooms close and academic institutions lose funding, the human expertise that creates training data disappears
- Legal and reputational risks: Training on copyrighted content without compensation creates mounting legal challenges
The most successful AI companies will likely be those that recognize this dependency and invest in sustaining—rather than extracting from—the information infrastructure they require. This might mean funding journalism, compensating content creators, or developing business models that strengthen rather than weaken the quality information supply chain.
The parasite’s dilemma is ultimately society’s dilemma: we can either restore the quality information supply chain that both democracy and AI depend upon, or we can watch both systems collapse under the weight of their own contradictions. The quality of our future depends on which path we choose.
