"Democracy" burning
Steve Johnson on Unsplash

AI as anti-democratic infrastructure (and what education can do about it)

Alex Karp, CEO of Palantir, recently framed AI as a force that will reduce the power of “highly educated, often female voters, who vote mostly Democrat,” while increasing the power of non-college-educated, working-class men (as cited in Sirota, 2026). He did not describe this as a deliberate attempt to decide who holds influence, and one could read it as a political extension of Hans Moravec’s paradox: the cognitive work of educated professionals is proving easier to automate than embodied, manual labor. Educators must nonetheless treat this framing as a warning, because schools and universities currently enlist themselves in the same technical stack that makes this power shift possible. When institutions centralize identity, content, and analytics into platforms (LMS, SSO, and enterprise data pipelines), they create the infrastructure necessary to rank, summarize, and throttle speech and access at scale.

Karp’s remarks read more completely once you place Palantir in its political and operational context:

“I no longer believe that freedom and democracy are compatible” (Thiel, 2009).

Peter Thiel, a Palantir co-founder, frames democracy as incompatible with freedom, which implies that democratic constraint is a problem to be engineered around rather than a safeguard. One must wonder: freedom for whom? When powerful figures around a company building state-facing data systems speak this way, education must stop treating AI as a “teaching tool” and start recognizing it as a governance choice. In a platform-mediated classroom, the tool does not just answer questions; it decides which questions are allowed and which answers come prescribed.

Education policy discussion often stalls at the level of individual cognition. Rebecca Winthrop warns that student dependence on AI can produce “cognitive stunting,” and argues that educators should impose guardrails to protect student development (Winthrop, 2026). Yet cognitive development is not the only democratic risk. Students can work hard and still lose agency if private systems control what information surfaces first, what gets summarized into “the answer,” and what topics trigger refusal or forced generality, all without audit rights or meaningful notice.

This governance problem becomes concrete once generative AI moves from informal use into institutional infrastructure. Large language models generate fluent explanations, compress contested topics into short summaries, and enforce boundaries through refusals and safety filters. Those controls operate like policy because they shape what a student can easily ask, what a teacher can easily assign, and what a class can easily discuss. Institutions rarely receive reliable notice when these boundaries move: a mid-semester model update can change what a system refuses to do, how it paraphrases, or which sources it cites, without any announcement to instructors. When a campus integrates AI through vendor policies and design choices, it imports a private rule system into public education, then inherits responsibility for the consequences of upstream design decisions.

Palantir serves as a precedent for what this integration looks like in high-stakes settings. Wired describes Palantir’s Gotham as software designed for police and government clients that links people, places, and events for investigative and operational workflows (Tufekci, 2025). This does not necessarily mean that Palantir is “coming for schools.” It shows how an infrastructure and analysis firm designs software for institutional operations. Tools built for investigation, classification, and intervention normalize continuous monitoring because monitoring becomes cheap, searchable, and routinized. When similar logics enter education through analytics dashboards and AI “integrity” tooling, administrators gain a new appetite for measurement, and teachers inherit a larger burden of compliance work. Students learn that participation carries visibility, and that visibility carries risk.

The familiar “cognitive laziness” narrative misses the structural point. AI does not automatically weaken thinking; outcomes depend on task design and how educators structure revision and justification. The deeper democratic risk emerges when a small cluster of firms controls the systems (search, summarization, tutoring, and writing support) that make knowledge easy to retrieve and easy to trust. That concentration steers attention and reasoning, because every summary carries framing choices. Over time, students learn to ask only what the system responds to smoothly, and institutions learn to design around what the system allows.

Sovereignty as a decolonial practice, not a compliance program

My forthcoming article in Learning Futures and Emerging Technologies (Moravec, in process) argues that educational AI adoption often reproduces colonial relations through four linked “lanes”: infrastructure, classification, epistemology, and labor. The claim is structural. Coloniality persists through contracts, defaults, and update cycles that shift authority upward while pushing responsibility downward. This is why the usual institutional response feels inadequate. A disclosure statement, a training module, or a classroom policy regulates user conduct at the edge while leaving platform governance untouched.

Seen through that lens, Karp’s and Thiel’s statements stop being background noise. They describe pressure toward a political outcome: fewer educated publics with the confidence and capacity to contest power. AI can contribute to that outcome in two ways at once. First, it can automate or devalue parts of professional knowledge work. Second, it can centralize the governance of inquiry itself. When the same private systems set boundaries for what can be asked, what counts as credible, and what gets flagged as risky, a democratic culture of contestation becomes harder to sustain. The educational system then trains adaptation to platform rules rather than practice in public reasoning.

Sovereignty names the counter-move. In the manuscript’s terms, sovereignty is the effort to break the enclosure by relocating control over infrastructure, classification thresholds, epistemic boundaries, and repair labor back into institutions accountable to the public. That relocation is not an administrative wishlist. It is a theory of institutional agency. A school is sovereign when it can refuse defaults, inspect the system that governs inquiry, and exit without losing its records, context, and institutional memory. Without those capacities, “adoption” becomes a disguised transfer of jurisdiction.

Each lane clarifies what sovereignty must disrupt.

Infrastructural coloniality shows up when “pilots” become permanent dependencies. Identity, content, analytics, and support consolidate into one stack, and exit becomes reputationally and operationally costly. That dependency matters politically because it shifts the institution’s policy cycle onto the vendor’s update cycle. The vendor’s release becomes the institution’s rule change.

Classification coloniality shows up when institutions import automated suspicion. Detectors, proctoring flags, and predictive risk signals turn student writing and student behavior into a governance problem to be managed. This shifts the burden of proof downward, especially for students whose language use falls outside dominant registers. A system that makes accusation cheap changes the culture of learning. Students write to avoid flags. Teachers grade with an eye toward enforcement. Trust erodes, and self-censorship becomes a rational strategy.

Epistemic coloniality shows up when safety layers and ranking behavior pre-shape inquiry. Task refusals, softened answers, and hidden ranking rules decide what questions “work,” which sources surface first, and what counts as a reasonable conclusion. That boundary-setting becomes anti-democratic when it is uninspected, unappealable, and changeable by opaque updates controlled by a powerful few. In other words, it is a form of censorship and mind-shaping. It creates a practical politics of knowledge: some questions become frictional or meaningless, and students learn to avoid them.

Labor coloniality shows up when public institutions subsidize private systems through repair. Teachers and students verify hallucinations, manage disputes, rewrite assignments around platform limits, and absorb the workload of integrity processes triggered by opaque tools. The platform captures adoption and telemetry; the institution absorbs cost and conflict. This is extraction by design rather than accident.

These four lanes also clarify why sovereignty must be innovative. Institutions can design governance that behaves like democratic governance: deliberative, contestable, and reversible. They can treat model updates as policy changes that require notice and justification. They can treat refusal behavior as a curricular boundary that belongs to educators and communities rather than vendor risk teams. They can treat telemetry as extraction unless the institution can name its purpose, constrain its use, and stop it.

Innovation can also be collective. Universities can form consortia to share evaluation capacity, negotiate shared terms, and create credible exit paths. University and school systems can build “public option” AI services with transparent moderation rules and community oversight, either through open-weight deployments or institution-controlled models with published update practices. Institutions can also adopt a minimum sovereignty threshold for any AI system that touches assessment, discipline, or student support, then refuse deployment when the threshold is not met. The point is not procurement theater. The point is jurisdiction.

When sovereignty is constrained

Vendors often resist meaningful sovereignty because sovereignty limits extraction. When institutions cannot secure jurisdiction over the system, they can still teach students how jurisdiction works. Schools should treat AI literacy as a study of authority under conditions of mediation. A prompt-engineering course teaches students to elicit cleaner outputs; it does not teach them how claims earn credibility. A stronger target is source discipline. Require students to attach primary sources to any factual claim they carry forward from an AI response, then grade the evidence chain rather than the polish of the prose. When a model summarizes a politically contested topic, require students to identify what the summary emphasized, what it compressed, and what it omitted, then compare that framing against the abstracts, methods sections, or statutory text the summary claims to represent.

Task refusals and safety filters belong in the curriculum for the same reason. When a model refuses to answer a question or answers by sidestepping it, it draws a boundary around inquiry and enforces it without deliberation or appeal. Students should learn to treat that boundary as a constraint on scholarship: name it, probe it, and document its effects. This practice involves tracking which topics trigger refusals, which yield softened answers, and which sources appear when the system does respond. That work keeps the limits of inquiry contestable, which is the core democratic habit at stake.

Repair labor and due process as anti-extraction

AI adoption imposes a hidden tax on public institutions. Teachers verify claims, correct confident errors, redesign assessments to reduce false accusations, and spend time in meetings and documentation when detection tools trigger integrity procedures. Students do parallel work when they validate outputs and defend authorship. This is not incidental friction. It is a transfer of labor from public education into the stabilization of private systems. Institutions should treat that labor as evidence about how the system governs. A tool that routinely produces anxiety and dispute governs through suspicion rather than support.

Due process belongs at the center of AI policy because classification systems produce harms through opacity. Institutions should adopt a clear procedural standard: no adverse action based solely on automated detection. They should require human review, transparent standards of evidence, and remedies that do not depend on students disproving opaque algorithmic claims. These safeguards protect learning as a space where students can experiment, revise, and dissent without being trapped in a permanent presumption of guilt.

Governance has authors. Karp chose to describe AI as a lever for weakening a specific electorate. Thiel chose to cast democracy as a problem for freedom. Education must treat these statements as evidence about the political environment in which AI infrastructure is being built. The response is not more training modules. It is institutional imagination that reclaims jurisdiction across the four lanes: infrastructure that remains governable, classification that remains contestable, epistemic boundaries that remain visible, and labor that remains recognized rather than extracted.

References

Moravec, J. W. (in process). Beyond the digital enclosure: AI, coloniality, and the pursuit of educational sovereignty. Learning Futures and Emerging Technologies.

Sirota, D. (2026). Palantir CEO says AI will be used to reduce power among educated people, particularly Democrats. The New Republic. https://newrepublic.com/post/207693/palantir-ceo-karp-disrupting-democratic-power

Thiel, P. (2009). The education of a libertarian. Cato Unbound. https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian

Tufekci, Z. (2025). What does Palantir actually do? Wired. https://www.wired.com/story/palantir-what-the-company-does/

Winthrop, R. (2026). “Cognitive stunting” risk and guardrails for students using AI [LinkedIn post]. https://www.linkedin.com/feed/update/urn:li:activity:7437933804511707136/