AI widens the agency gap
Education was fiercely political long before the arrival of artificial intelligence. It has always structured access to opportunity, voice, and legitimacy. What AI changes is the tempo and reach of those structures. Systems that once operated locally now operate at scale. Classifications, predictions, and evaluations travel faster and further. When decision-making expands in this way, power shifts from individuals to architecture.
Public discussion often frames AI as a contest between those who control the tools and those who are controlled by them. The framing is familiar: a technically fluent minority accumulates advantage; everyone else adapts to automated judgments. The appeal lies in its clarity. Yet the deeper determinants predate the technology. AI extends the logic of the institution in which it operates. It does not replace it.
The central issue, then, concerns judgment. Who retains the capacity to deliberate, to dissent, and to assume responsibility for consequences? Technical proficiency matters. It is insufficient. Education must cultivate agency and self-efficacy if judgment is to remain human.
Agency refers to the capacity to initiate action under constraint and to sustain direction in conditions of uncertainty. It is ethical as well as practical, because action shapes the lives of others. Self-efficacy refers to the belief that one’s actions can influence outcomes, particularly when circumstances are difficult. It affects persistence, resilience, and willingness to revise. These capacities intersect. They are not identical. Agency without self-efficacy erodes into formal compliance. Self-efficacy without agency remains private conviction. Contemporary educational environments require both.
Neither capacity develops in the abstract. Agency grows through repeated experiences of choice, consequence, and revision. Self-efficacy strengthens when individuals see credible evidence that their decisions alter outcomes. Institutions determine whether such experiences are available. Schools can cultivate authorship, or they can reproduce deference. The distinction is structural, not rhetorical.
Industrial models of schooling prioritized standardized outcomes at scale. Age grading, pacing guides, and assessment regimes organized around external criteria reflect this legacy. Instruction became delivery. Learning became uptake. Under these conditions, students learn to align with expectations. Teachers learn to manage variance. Initiative often appears as deviation.
Over time, this architecture communicates a durable lesson about authority. Standards originate elsewhere. Evaluation flows downward. Permission precedes action. When outcomes feel predetermined, effort shifts toward anticipation rather than creation. Self-efficacy weakens because the relationship between action and consequence becomes opaque.
AI integrates smoothly into such systems. Automated grading platforms, predictive analytics, content generators, and monitoring tools are frequently adopted in the name of efficiency. Each system embeds assumptions about merit, risk, and trust. These assumptions are rarely neutral. When introduced into compliance-oriented institutions, AI consolidates existing patterns of control and normalizes the outsourcing of judgment.
Judgment, however, entails accountability. Algorithms can model patterns and generate recommendations. They cannot assume responsibility for the ethical implications of those recommendations. When AI becomes the default arbiter of educational decisions, authority moves further from those affected by it. Opportunities for contestation narrow. This outcome reflects institutional design, not technological inevitability.
The implications extend beyond pedagogy. In contexts where public discourse is already constrained, AI can intensify the narrowing of inquiry. Automated filters, risk scores, and behavioral dashboards can quietly redefine what counts as acceptable participation. Such systems scale enforcement without visible deliberation. Pluralism becomes more fragile when governance is embedded in code.
Agency and self-efficacy therefore depend on civic infrastructure. Transparent decision-making, due process, and meaningful avenues for appeal render action rational. Without these structures, exhortations to initiative lack credibility. Individuals are unlikely to invest effort where they see no path to influence.
Education mediates between individual development and collective life. When learners participate as co-authors of inquiry, they acquire expectations of participation beyond the classroom. When they are positioned as recipients of predetermined content, they internalize passivity. Teachers draw parallel conclusions about their professional agency. Institutional design shapes beliefs about whether systems respond or impose.
In an environment saturated with AI, agency requires epistemic discipline. Learners must decide what to trust, what to question, and what to treat as provisional. Fluent outputs can obscure uncertainty and bias. Without structured skepticism, deference becomes habitual. With it, AI remains a tool for inquiry rather than a source of authority.
Agency also requires moral and political literacy. Automated proctoring systems presuppose a theory of trust. Predictive analytics presuppose a theory of risk and deservingness. Training data reflects historical distributions of power. When these embedded values remain unnamed, they appear natural. Naming them reopens the possibility of deliberation.
As production costs decline, the locus of educational work shifts. If AI can generate competent drafts, presentations, or code, then the critical task lies in framing questions, evaluating assumptions, and negotiating trade-offs. Automation accelerates execution. It does not determine direction. Education must therefore concentrate on the cultivation of discernment.
Self-efficacy sustains this process under strain. It develops through meaningful responsibility and credible feedback. High-stakes testing regimes and rigid evaluative frameworks can undermine it by conflating error with deficiency. When experimentation is disproportionately penalized, avoidance becomes rational. Over time, caution replaces initiative.
Teachers encounter analogous constraints. Scripted curricula and platform analytics may narrow professional discretion. When educators perceive limited influence over learning conditions, they struggle to model the agency they are asked to foster. Institutional compliance becomes self-reinforcing.
Design choices carry significant consequences. If AI can produce polished artifacts, assessment must examine the reasoning behind them: how problems were defined, how evidence was weighed, how revisions were made. This approach preserves rigor while centering judgment.
AI literacy must also expand beyond operational competence. Students should analyze systems as socio-technical constructs shaped by incentives, data, and governance decisions. They should interrogate outputs and demand explanation. Such practices align technical understanding with civic responsibility.
Learning environments should incorporate consequential participation. Students require structured opportunities to influence direction and standards. Teachers require space to exercise professional judgment. Participation is preparation for shared governance.
Dialogue warrants protection. Evidence-based disagreement counters both algorithmic fluency and political narrowing. Spaces in which claims can be examined without retribution sustain habits essential to democratic life.
Institutional governance must remain contestable. Automated decisions should be explainable and open to appeal. Authority that cannot be questioned fosters dependency. Contestability affirms that institutions exist to serve a public, not to manage a population.
The emerging divide will not run simply between those proficient with AI and those who are not. It will run between systems that cultivate responsible authorship and those that normalize management. Structural constraints limit action. Institutional redesign can expand it. This conviction underlies the broader argument for a positive rebellion in education: a deliberate effort to build systems premised on human responsibility rather than managed passivity.
Education either normalizes control or cultivates authorship. Curriculum, assessment, and governance signal which orientation prevails. AI will amplify that signal.
If AI becomes a managerial overlay on human life, civic responsibility contracts. If it remains subordinate to human judgment, institutions must prepare individuals to exercise that judgment with competence and integrity. The decisive question concerns formation, not fluency. Do our systems prepare authors of shared life, or operators within predefined frameworks?
The direction remains unsettled. AI can extend human capacity. It can also consolidate quiet control. The outcome will depend less on software than on the institutional choices that shape its use.