
The tool was never the problem: Phone bans, AI, and the failure of control
John Moravec
A new study in the U.S. on school phone bans gives us some food for thought.
Researchers examining the use of lockable phone pouches across U.S. schools found what many would expect at a surface level: phone use drops. But the downstream effects are harder to reconcile with the policy narrative. In the first year, disciplinary incidents increase and student well-being declines. Over time, those effects fade and even reverse. Academic outcomes, meanwhile, remain largely unchanged (Allcott et al., 2026).
It is tempting to read this as a simple story about adjustment. Take something away, people react, then they adapt. There is plenty of research like this in organizational psychology, so the pattern should not surprise us. But we risk missing the more important question:
If removing phones produces disruption, what does that say about the dependency structure schools have allowed to develop?
Phones did not create distraction inside schools. They absorbed it. They filled time, mediated social interaction, and compensated for weak engagement. When they disappear, the system does not suddenly produce better learning conditions. It exposes the gaps that were already there.
The response to those gaps is revealing. Instead of redesigning learning, schools tighten control. This means more rules, more enforcement, more attention to behavior. The policy thus addresses the visible symptom, but not the underlying condition.
This pattern reflects a deeper issue that predates phones, AI, or any specific technology:
Education does not have a technology problem. It has a model-of-learning problem.
For generations, schooling has been organized less around learning than around management. It groups students by age, moves them through fixed schedules, divides knowledge into discrete units, and evaluates performance through standardized outputs. These structures made sense in systems that needed to coördinate large numbers of people efficiently. They made schooling visible and easier to manage for administrators and policymakers.
But visibility is not the same as learning.
It is important to understand that distinction because the conditions that sustained the model no longer hold. Access to information is no longer scarce. Tools can assist, extend, or replace parts of human cognition, augmenting how we learn and “know” things. The boundary between individual and distributed thinking via machines has blurred.
Yet the structure of schooling remains largely intact.
That is why the last decade feels incoherent. Not long ago, schools could not get enough technology into classrooms. Devices promised access. Platforms promised personalization. “EdTech” became a stand-in for “innovation” because the system rarely asked what educational technology was supposed to change. Now the same system retreats from innovation back to the comfort of control, locking phones away and treating generative AI as a threat to academic integrity.
This may be rationalized as a system trying to restore control over conditions it no longer understands. When a tool fits the existing model, it is adopted. When it exposes the limits of the model, it is restricted.
AI brings this contradiction into sharp focus. Students and teachers are already using these tools, often faster than institutions can respond. The gap between policy and practice continues to widen (RAND Corporation, 2025). In an increasing number of contexts, AI is embedded in how work gets done.
However, the effects of AI are not uniform. Evidence suggests that AI can support learning when it is used to extend reasoning but undermine it when it replaces cognitive effort (Khalil & Er, 2025). The tool can produce both outcomes. The difference lies in how learning is designed. This creates a problem that the current system is not equipped to handle.
For decades, education has relied on static outputs as proxies for learning. Essays, exams, and assignments were treated as evidence of individual cognition. AI breaks the assumptions we’ve long held about the assessment system. It can produce outputs that look like learning without revealing to what extent thinking occurred.
If a student can generate a high-quality response with minimal effort, the issue is not academic dishonesty. It is that the outputs we have relied on no longer carry the meaning we once assigned to them.
Researchers might describe AI and assessment as a “wicked problem,” not because it is complicated, but because it cannot be resolved within the current structure (Corbin et al., 2025). Efforts to preserve integrity through detection and surveillance attempt to restore visibility, but they do not address the underlying shift.
The system responds in the only way it knows how. It increases control. We see this in phone bans, AI detection tools, proctoring systems, and expansive policy statements about appropriate use. These anxiety-driven responses create the appearance of order by reëstablishing the old regime, but they do not resolve the contradiction.
Decades ago, we faced a similar crisis with calculators. Calculators were eventually integrated because their function was limited and well-defined. They did not challenge the structure of assessment itself.
AI does, and today’s challenges are thus wildly different. It operates across the full arc of thinking, from generating ideas to refining arguments. It does not simply assist with tasks. It reshapes them.
This is why the current moment feels unstable. The system is trying to apply old rules to a new form of cognition. As the quip commonly (mis)attributed to Albert Einstein puts it, “insanity is doing the same thing over and over again and expecting different results.”
And, in this case, the result is predictable: Control substitutes for design, policy substitutes for strategy, and restriction substitutes for equity. The gap between school and the world widens.
The way out does not lie in more restrictive policies or more enthusiastic adoption. It requires a different model of learning.
Assessment must move beyond static outputs toward evaluating process, reasoning, and judgment. Students need to demonstrate how they think, not just what they produce. Classroom practice must shift from controlling inputs to designing activity. The relevant question is not whether a student used AI, but what kind of thinking the task required and how the tool shaped that thinking.
Governance must move from prohibition to shared norms. Students and teachers need clarity about when and how tools are used, grounded in purpose rather than enforcement. Teacher preparation must also change. Educators cannot be expected to integrate tools effectively without frameworks that reflect how those tools alter cognition.
None of this represents a return to a previous version of education. The idea that the system can simply go back to promoting personal and community growth ignores the fact that it has never been structurally organized around those goals at scale.
That is the work ahead.
The phone ban study does not show that extreme control measures work. It shows that removal produces disruption and adaptation without addressing learning itself. AI does not create a new problem. It makes the existing one visible.
The system can continue to oscillate between adoption and restriction, or it can confront the harder question:
What does learning look like when tools are part of thinking?
Policymakers and school systems architects need to figure this out now.
References
Allcott, H., Baron, E. J., Dee, T., Duckworth, A. L., Gentzkow, M., & Jacob, B. (2026). The effects of school phone bans: National evidence from lockable pouches (NBER Working Paper No. 35132). National Bureau of Economic Research.
Corbin, L., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education.
Khalil, M., & Er, E. (2025). Will AI transform education? Evidence from student use of generative AI. Studies in Higher Education.
RAND Corporation. (2025). AI use in schools is quickly increasing but guidance lags.
