As AI systems become more capable and more autonomous, the appeal of speed, efficiency and profitability is intensifying. But new research into cognitive offloading suggests the long-term consequences of unexamined and unconscious adoption may extend beyond operations and into human capability itself.
This article explores the subtle erosion of judgement, authorship and agency that can occur when fluency is mistaken for thinking and acceleration outpaces reflection. Framed through the ontological distinctions of Awareness, Authenticity and Responsibility, it challenges leaders to move beyond enthusiasm and exercise stewardship.
Because what is at stake is not merely productivity, but our human capacity to think clearly and exercise judgement.
The AI rollercoaster is gaining velocity.
Individual users love the way AI saves them time and makes tasks easier and faster. Institutions value the increase in productivity and profitability it promises. When ease and profit align, acceleration becomes inevitable. But inevitability is not the same as wisdom.
We’ve seen this pattern before.
Social media promised connection and convenience, and we embraced it with enthusiasm. Yes, there were benefits, but also online bullying, misinformation, scams, cyber threats and digital ecosystems that influenced behaviour, amplified outrage and deepened polarisation.
The rules lagged innovation.
Users followed the path laid out for them, often without fully understanding where it might lead. Not because we are careless. But because human beings are wired to conserve effort. When we are busy, distracted, tired or operating on autopilot, we default to what is simple and frictionless.
AI fits that wiring perfectly. But this time, the consequences might not just be social. They could be cognitive.
The Oracle Myth
In a recent LinkedIn post, Ricardo Perez Font described an advisory session with a senior manager who presented a beautifully polished strategic roadmap. It was articulate, structured and filled with current industry language. On the surface, it looked impressive.
Ricardo asked a simple question: “Why did you prioritise channel X over channel Y in Q3?”
The manager froze. He couldn’t explain the rationale because he hadn’t made the decision - the algorithm had.
Ricardo called this the “Oracle Myth” – the tendency to treat AI as a strategic authority rather than what it actually is: a powerful probabilistic engine generating plausible outputs based on patterns in data. It is designed to sound coherent and convincing, not to verify truth. While it can produce extraordinary results, it can also fabricate information with equal confidence. Hallucinations are not glitches in the system - they are a feature of how these models work.
The danger is not that AI produces poor work. Often, it produces excellent work. The danger is more subtle. We begin to confuse fluent language with sound reasoning, allow polished output to replace personal authorship and accept conclusions we cannot defend.
In the moment Ricardo described, the issue wasn’t technology - it was agency.
Cognitive Offloading
In February 2026, the Centre for the Critical Analysis of Digital Society released one of the most recent examinations of AI’s cognitive impact. In Technologies Smarter, Humans “Dumber”?, Dr Dirk Van Damme explores what researchers describe as “cognitive offloading”, the increasingly common practice of outsourcing memory, analysis and even elements of judgement to digital systems.
Humans have always used tools to extend capability: writing reduced the need to memorise; calculators reduced the need for mental arithmetic. But this moment is different in both scale and depth. Generative AI is not merely storing information. It is synthesising, drafting, prioritising, recommending and, in some cases, appearing to reason. At the same time, we are witnessing the rapid uptake of agentic AI systems designed not just to assist but to act: booking flights, scheduling meetings, executing transactions, managing workflows and increasingly making micro-decisions on behalf of their users.
Dr Van Damme does not argue that technology makes us inherently less intelligent. His concern is more nuanced. When individuals repeatedly rely on external systems to perform higher-order thinking tasks, they may reduce their own engagement in the cognitive processes required to develop and sustain those skills. When systems begin not only to inform our decisions but to make them, the opportunity for cognitive engagement narrows further.
Over time, effort avoidance becomes habit and habit becomes dependency.
Perhaps most concerning, says Van Damme, is the illusion of competence that fluency creates. When AI-generated responses are coherent and confident, users frequently overestimate both the accuracy of the output and their own understanding of it. Access to information begins to masquerade as mastery. The shift is subtle. It is not merely behavioural - it is cognitive.
In a conversation this week with a 25-year veteran of AI and data technologies, I commented that unless we proceed with our eyes wide open, we risk not only shooting ourselves in the foot, but in the head. It was a confronting way of putting it, but the velocity of adoption is outpacing reflection. When acceleration becomes the default, foresight becomes optional.
Awareness in AI Adoption
The question is no longer whether we use AI, but how consciously we do so. And that begins with Awareness.
Awareness is not simply noticing that AI exists or understanding how it functions. It is recognising the forces at play within and around us: the pull of ease; the pressure of competition; the promise of profitability; and the subtle drift towards autopilot when a system offers to think on our behalf.
At its heart, Awareness safeguards judgement, which is the expression of our autonomy. It is the capacity to weigh context, values, consequences and trade-offs before acting. When exercised with care, judgement enables effective and responsible action. When outsourced without reflection, it begins to atrophy.
AI can inform judgement by expanding the data available to us, surfacing patterns we might miss and generating alternative perspectives. But it cannot own the decision. When we defer too quickly, when we accept outputs we cannot interrogate, we are not augmenting autonomy – we are diluting it.
Without Awareness, acceleration feels inevitable. With it, we retain the capacity to choose.
Authenticity vs Automated Fluency
If Awareness safeguards our capacity to judge, Authenticity safeguards our ownership of what we present as our own.
Authenticity in this context is not about personal branding or emotional transparency. It is about congruence. It is the alignment between the thinking we have genuinely done and the claims we make in public. When we present a strategy, a report or a recommendation, Authenticity asks a simple question: do I truly understand and stand behind this, or am I borrowing fluency?
The “Oracle Myth” illustrates what happens when that alignment fractures. The document may be polished; the language may be persuasive; the structure may be impeccable. But if the reasoning has not been wrestled with, interrogated and internalised, authorship is compromised.
AI can accelerate drafting and expand perspective. It can suggest alternatives and surface blind spots. Used consciously, it can strengthen Authenticity by challenging assumptions. Used passively, it can erode it. When we allow generated outputs to substitute for our own cognitive labour, we risk presenting confidence without comprehension.
Over time, that gap matters. Not only for credibility, but for character and integrity.
The Cost of Abdicating Responsibility
If Awareness protects judgement and Authenticity protects authorship, Responsibility protects consequence.
Responsibility asks a broader question: not only can we do this, but what follows if we do? It shifts the frame from short-term gain to long-term impact.
For leaders, this is not abstract. It plays out in everyday choices. How consciously we use AI ourselves. How we encourage or require our teams to use it. The targets we set. The efficiencies we demand. The assumptions we allow to go unchallenged in executive and board-level conversations.
There is growing pressure in many organisations from non-technical executives who see AI as a cure-all, the next great lever of growth, the technological hope that will solve structural problems. Productivity targets are recalibrated. Headcount models are redrawn. AI initiatives are fast-tracked. The narrative is compelling.
But leadership requires more than enthusiasm - it requires discernment.
Technical leaders, in particular, have a critical role to play. They sit at the intersection of possibility and consequence. They understand both the power and the limitations of these systems. Exercising Responsibility means engaging actively in those conversations, correcting assumptions, naming risks and ensuring that governance keeps pace with ambition.
Responsibility does not mean resisting innovation. It means stewarding it. It means designing human-in-the-loop decision processes where judgement remains central. It means applying clear standards of accountability to AI-driven initiatives. It means asking not only what we gain in the next quarter, but what capacities we may erode over time.
If the Van Damme research is correct and cognitive offloading reshapes how we think, then the implications extend beyond operational efficiency. They touch the development of human capability itself.
That is not a technical issue; it is one of leadership.
An Invitation to Conscious Leadership
The momentum behind AI will continue to build. Competitive pressure will intensify. Economic incentives will grow stronger. The appeal of tools that make our work faster and easier will only increase.
Our task as leaders is to ensure our progress is guided by reflection.
We have already experienced what happens when innovation outpaces governance and adoption outpaces understanding. The social consequences are still unfolding. The cognitive consequences are only beginning to surface.
The real question is not whether AI will become embedded in every function, every industry and every workflow. It almost certainly will. The question is whether we embed it unconsciously or whether we do so with deliberate leadership.
If we allow systems to shape how we decide, how we think and how we judge without sustained scrutiny, we might not notice what we are surrendering until it is already diminished and harder to recover.
Acceleration is easy. Stewardship is harder – it requires courage, technical clarity and leaders willing to slow the conversation down long enough to ask better questions.
Because what is at stake is not merely operational efficiency or quarterly returns. It is the future of our human capacity to think clearly and exercise judgement.
