

Is AI making organizations lazy?
In an earlier post, I asked if AI is making humans “cognitively lazy.” The answer, I argued, is not really. It is a tool, just like a calculator or book, that can enhance human capacity. But what about at the organizational level? Within corporations, governments, and universities? I’m afraid the answer is somewhat different.
AI promises efficiency, scale, and predictive precision. And there are real gains being realized. Yet many organizations have traded understanding for output. They enjoy their dashboards, reports are generated on demand, and decision-making is better informed, but comprehension has decayed. This decay arises not from technology itself but from a fundamental confusion about the nature of knowledge.
Leaders often claim to manage “knowledge assets” when they manage data streams. They speak of “knowledge sharing” when they circulate unexamined summaries. They celebrate “insight generation” when algorithms rearrange correlations. Such language mistakes the signal for the sense, the measurable for the meaningful. The result is organizational laziness disguised as sophistication.
From data to innovation: a hierarchy often ignored
To restore clarity, we must separate four levels of cognition:
- Data are raw observations: quantitative or qualitative fragments without interpretation.
- Information is organized data that answers basic questions of who, what, when, and where.
- Knowledge is information interpreted through human experience and contextual understanding; it answers how and why.
- Innovation occurs when knowledge is applied purposefully to produce change or create value.
AI systems function superbly in the first two tiers. They can collect, classify, and correlate immense quantities of data, producing streams of information with speed and precision. Yet knowledge lies beyond this range. Knowledge is the human capacity to interpret information within context, to recognize patterns that matter, and to apply understanding in new and uncertain situations. It integrates reasoning, experience, and foresight. Unlike data, knowledge is not stored or retrieved; it is constructed through reflection, conversation, and judgment. It lives in people, not in databases.
Data systems operate through generalization. They assume that what works in one context (Kenya, for example) will work in another, such as Peru. Human knowledge recognizes the limits of such assumptions. It understands that context determines meaning, cultures are complex, and that application requires adaptation. Machines process correlations; humans discern relevance. This capacity to transfer insight across contexts, to modify understanding for new conditions, and to anticipate consequences is what defines knowledge and makes it indispensable to decision-making.
Organizations that confuse information with knowledge abandon this human advantage. They automate reporting but not learning. They perfect procedures but forget purpose. Without people who can interpret, adapt, and synthesize, they become efficient information engines trapped at the lower tiers of understanding. Real knowledge systems—human systems—remain dynamic precisely because they question, adjust, and learn.
AI and the illusion of knowing
Generative and analytical AI tools intensify this confusion. Their fluency creates the illusion of comprehension. A system that produces coherent language appears to “know,” yet it merely rearranges probabilities. When organizations use these outputs uncritically, they inherit this illusion. AI’s convenience discourages the reflective labor through which genuine knowledge arises.
Knowledge develops when people debate interpretations, compare experiences, and connect ideas to context. In contrast, AI-generated text bypasses this process. It delivers the appearance of insight while reducing the effort of inquiry. The organization becomes informationally rich and cognitively poor, a paradox of apparent intelligence built on synthetic certainty.
This explains why many so-called “knowledge organizations” plateau. Their internal conversations collapse into dashboards. Their analysts curate information, not meaning. Their decision cycles shorten even as their foresight weakens. They operate with abundant data but no epistemic depth.
Knowledge as integration vs the informatics trap
Knowledge is integrative. It connects what is known with what is still uncertain. It exists at the intersection of perception, interpretation, and action. Polanyi (1966) described this as tacit knowing: the personal dimension that cannot be fully expressed or automated. Nonaka and Takeuchi (1995) later modeled organizational knowledge as a dynamic conversion between tacit and explicit forms, mediated through dialogue and reflection. These processes remain irreducibly human.
AI can support this work but cannot perform it. It lacks consciousness of relevance, an awareness of gaps, and the ability to situate facts within meaning structures. Data and information are substrates for knowledge, but they do not become knowledge until humans integrate them into a coherent frame of understanding. The danger is not that AI will think for us, but that we will stop thinking because it seems to have done so.
However, today’s wave of AI adoption has revived an old, post-WWII fantasy that knowledge can be stored, indexed, and retrieved like a commodity. This fantasy produces informatics organizations, entities optimized for data throughput rather than insight. They excel at producing reports no one reads, metrics no one questions, and decisions no one understands.
Such organizations are lazy not because their people are idle but because their systems reward surface efficiency over deep comprehension. They mistake the circulation of information for the creation of knowledge. Their leaders rely on “evidence” without context and “learning analytics” without learning. In doing so, they abdicate the human responsibility to interpret.
Reclaiming organizational knowledge
Recovering knowledge work within organizations requires epistemic discipline. Leaders must first recognize that data and information are inputs, not outcomes. Knowledge arises when people interpret these materials within a framework that connects evidence to purpose. Treating data analysis as a substitute for judgment confines organizations to perpetual reaction. To move beyond this, they must reëstablish interpretation as a central function. Every AI-generated result should prompt a human response: what does this mean, how does it fit, and why does it matter? Without that interpretive layer, organizations mistake pattern recognition for insight.
Reclaiming knowledge also depends on deliberate synthesis. Machines can identify correlations, but humans integrate them into coherent understanding. This process demands context, comparison, and abstraction. These are capacities unique to reflective thought. AI can support the search for relationships across vast datasets, yet the act of making sense remains human. When analysts and managers explain why patterns matter, not merely that they exist, they reintroduce accountability to reasoning and transform information from static record into living knowledge that guides action.
Knowledge must also be understood as a social and evolving construct, not a static asset stored in repositories. Understanding develops through dialogue, debate, and reflection. When organizations rely exclusively on algorithmic reporting, they suppress these processes and lose their collective ability to learn. Sustained knowledge work therefore requires what might be called slow cognition: deliberate inquiry that values depth over speed. In practice, this means preserving time and space for discussion, review, and reinterpretation. Knowledge is not the end product of data analysis but the beginning of intelligent action. Only by restoring these human processes can organizations rise above informatics and recover their capacity to think.
In knowledge-based organizations, AI should extend expertise, not replace it. Human judgment remains indispensable. What humans now do has changed, but their role has not diminished. Any institution that imagines it can remove people from the equation reduces itself to the management of data and information. No organization that hopes to thrive in a knowledge-based economy can prosper with that orientation.
Perhaps the conclusion is this: AI does not erode intelligence; it reveals where none existed. Most organizations were never knowledge organizations to begin with; they were information factories. AI merely exposes the gap between data processing and understanding. To remain relevant, leaders must rebuild their epistemic infrastructure: systems that value meaning over metrics and synthesis over speed. Knowledge creation begins when organizations decide to think again.
References and recommended readings
Moravec, J. W. (2025, August 27). Is AI making us “cognitively lazy”? Education Futures. https://educationfutures.com/post/is-ai-making-us-cognitively-lazy
Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford University Press.
Polanyi, M. (1966). The tacit dimension. University of Chicago Press.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Doubleday.
Tsoukas, H. (2009). A dialogical approach to the creation of new knowledge in organizations. Organization Science, 20(6), 941–957. https://doi.org/10.1287/orsc.1090.0435
von Krogh, G., Ichijo, K., & Nonaka, I. (2000). Enabling knowledge creation: How to unlock the mystery of tacit knowledge and release the power of innovation. Oxford University Press.