The Neuroscience Risks of Using AI to Think for Us
This article was first published by AMA and, as part of the AMA Global Network, is republished by Management Centre Europe with permission.
AI makes it easier and faster to get answers.
But using AI too often to think for us can come with hidden risks.
From a neuroscience perspective, our brains are designed to save energy. When people rely on AI instead of engaging their own thinking, skills like analysis, memory, and judgment can weaken over time.
This article explains the neuroscience risks of over‑reliance on AI and highlights why leaders need to protect human thinking capabilities as AI becomes part of everyday work.
As a hands-on CEO for over four decades and a dedicated student of neuroscience for the past 10 years, I’ve watched the rapid adoption of AI within the workforce. Yet amid the enthusiasm, there are potential business and human-factor risks that are rarely discussed.
Learning and development leaders can benefit from a neuroscience perspective on how and why frequent AI use creates business risks, and how to implement neuroscience-based “countermeasures.”
When examining the risks of using AI from a neuroscience perspective, the most essential principle for learning professionals to be aware of is that our brains have evolved to be energy-saving devices.
Primal in concept, I know, but we are both a primitive and an evolved species.
THE HUMAN BRAIN IS REMARKABLE
Within just three pounds of tissue sit 86 billion neurons, connected by 100 trillion synapses to form millions of miles of neural pathways that keep us alive. Some of our pathways carry signals automatically without us having to think about them, such as those for our breathing, heartbeat, and digestion. Others fire in response to a stimulus, such as the sight of a red stop sign or an intuited danger in the environment around us.
Every human brain has what neuroscientists call two “systems”: a fast/reactive system and a slow/thinking system. Near the center of the brain sits the almond-size amygdala, a key part of the fast/reactive system. This part of the brain is often called the primal or “reptilian” brain because it rapidly triggers our natural fight-or-flight response. It is the fast, automatic system that enables us to hit the brakes the instant we see a red light.
Our visual cortex is directly connected to this primal (fast/reactive) brain. This circuit enables us to process and filter information instantaneously based on its relevance to us at that moment. The primal system receives inputs from our senses, then determines whether that information should be passed on to the second, slower system. While we are exposed to over 30,000 messages a day, our brains consciously process only a few thousand.
The rational system is composed of the areas of the brain responsible for our logical and analytical thinking. This system operates more slowly than our primal brain areas. The rational system is where we do work tasks such as math, writing, analytics, planning, or debating pros and cons.
THE NEUROSCIENCE RISKS OF WORK QUALITY FROM AI
What’s relevant to leaders is the neuroscience of the cognitive skills required to perform at high levels on the job. These skills are as basic as memory and as sophisticated as critical thinking and problem ideation.
The risk of using AI to think for us stems from the principle that our brains are always seeking to conserve energy. Our brains save energy by developing, over time, thick, well-connected, well-insulated neural pathways. The more developed the pathway, the less energy the brain requires to send electrical signals and the more frequently it prefers to use this pathway. For example, at work, we have developed the ability to skim an inbox and instantly know what’s urgent versus noise.
Consider our neural pathways to be like the body’s muscles. The more we use them and the more weight we lift, the stronger our muscles get. Conversely, if we don’t use our muscles, they start to atrophy.
The same is true of our neural pathways. They become more efficient (use less energy) as we use them more to perform cognitive processes such as sorting, analyzing, refining, creating, and evaluating.
Unfortunately, our brains can become lazy when we use AI.
While AI enables us to obtain answers to questions quickly, we become less capable of truly knowing, understanding, or sharing the responses AI generates on our behalf. As AI becomes more pervasive in the workplace, this cognitive reduction risk is a key concern that leaders and talent developers should be aware of. Let’s examine this risk in more depth.
AI CHANGES HOW OUR BRAINS WORK
The most concerning aspect of using AI is that, from a neuroscience perspective, humans are reducing the level of their brains’ cognitive capabilities.
As detailed in a publication by MIT Media Lab (“Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task”), researchers asked 54 participants to write essays in three different ways:
1) without any technology assistance,
2) with the help of an internet search engine, and
3) with the help of OpenAI’s ChatGPT.
Over the four months of the study, researchers used electroencephalography (EEG) to monitor participants’ brain activity while they wrote their essays.
The findings: While those using AI wrote their essays 60% faster, 83% of AI chatbot users couldn’t recall a single sentence from the essays they wrote just minutes before.
Researchers also measured a 47% drop in cognitive engagement.
Low cognitive workload. Low comprehension. Low memory! This is what neuroscientists call “AI-induced cognitive atrophy.” While our brains crave ways to save energy, the more we use AI to do our thinking, the less we can structure our thoughts, trigger memories, and discuss AI outputs.
Speedy AI responses do not lead to better human comprehension.
REDUCING THE NEUROSCIENCE RISKS OF AI
What can leaders do to mitigate these neuroscience risks while enhancing human-AI collaboration? They can intentionally design learning programs and teach team leaders about work processes that harness both AI power and our human cognitive capabilities.
Specifically, frame AI as a “team member” whose ideas need human validation and critique. Establish a policy requiring team members to explain why they agree or disagree with AI suggestions. Help your AI users understand the effects of high and low cognitive load on their work quality, and create human + AI collaboration work patterns that cycle between high and low cognitive load.
The MIT study showed that the most successful essay writers weren’t those who always or never used AI. They started writing without AI and brought it in strategically only after developing their own ideas on the topic. In light of this, leaders should encourage teams to start problem solving with 15 to 30 minutes of unassisted thinking to engage their prefrontal cortex and memory systems, then use AI-assisted ideation to refine solutions, identify blind spots, and validate conclusions. This approach supports what neuroscientists call “desirable difficulty,” the cognitive effort that drives thinking and learning.
Practice human + AI collaboration. Encourage diversity in solution approaches. Diversity prevents the brain from adapting to the same problem-solving patterns. When possible, structure teams that blend AI skeptics and low users alongside AI enthusiasts and heavy users. Rotate team members to experience both AI-assisted and unassisted work, which supports solution variety and neural flexibility.
Support neural pathway development. Our memories result from what is referred to as “deep encoding.” Think about an unforgettable moment in your life and how many of your senses were stimulated at once. In a work situation, varying how information is handled deepens its encoding.
For example:
1. Ask employees to handwrite summaries of AI-generated content, a process that engages different neural pathways than typing and reading a response on a screen.
2. Use “teach-back” sessions in which team members explain a given AI output and share their positive and negative perspectives on the topic.
3. Encourage teams to demonstrate their ideation and evaluation process with each other without AI, then expand and challenge their outputs using AI.
IF AI FEELS SO GOOD TO USE, WHAT’S THE PROBLEM?
Our primal brain system is designed to immediately trigger the release of various hormones throughout our bodies in response to our senses. When we sense danger, it releases adrenaline, the hormone associated with the fight-or-flight response. When we hit a slot machine jackpot, it releases dopamine, the reward-reinforcement hormone. And when we hold a baby in our arms, it releases oxytocin, the human bonding hormone.
The risk that accompanies frequent AI use is the impact on our dopamine neurons. They signal the release of dopamine not only when we receive a reward but also in anticipation of its receipt.
While dopamine signals reward us for learning, they also reward us for the low energy cost of AI use. This, in turn, subconsciously encourages us to seek AI’s rapid rewards again and again.
Here’s the problem at work: When we rely on AI to think for us, to give us answers easily, and even to analyze data for us, we often accept AI’s initial responses as the final answer.
If we don’t call upon our cognitive capabilities to evaluate, challenge, understand, and explore the answers from AI, the technology starts to hijack our dopamine reward system.
Instant answers generated by an algorithm let our brains engage with the information only shallowly. This is the correlation the MIT researchers observed between low cognitive engagement and low comprehension and memory, a pattern that contributes to “work slop”: the low-quality output produced when we accept AI’s first answer as the final one.
OUR BRAIN STOPS TRYING TO REMEMBER
Also concerning is what neuroscience researchers call the “cognitive offloading” that results from AI use. AI chatbots encourage cognitive offloading because they reduce mental burden and brain energy use.
While the use of AI is highly beneficial in accessing information, its cumulative use creates the opposite, “cognitive debt.” The less we use our prefrontal cortex to think, the less we deeply understand and remember the information AI provides. This is important for learning professionals as they seek to roll out AI fluency and adoption programs. Using AI alone, without an understanding of the importance of cognitive effort, does not lead to mastery.
WE CAN’T BE HERE, THERE, AND EVERYWHERE
Neuroscientists are also finding that working with AI contributes to challenges in sustained attention and can degrade skills.
A Swiss study (“AI Tools in Society: Impact on Cognitive Offloading and the Future of Critical Thinking,” Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School) found that there was a significant negative correlation between frequent AI tool usage and critical thinking abilities, with younger people exhibiting higher dependence on AI tools and lower critical thinking scores.
Employees who heavily rely on AI are losing core skills at a startling rate. According to Anastasia Berg, an assistant professor of philosophy at the University of California, Irvine, quoted by Business Insider, junior employees are the most vulnerable to this deskilling process.
In another study, from Oxford University Press (“Teaching the AI-Native Generation”), a survey of 2,000 UK students showed that while students believe AI tools help them think faster, researchers say they are losing the ability to think deeply.
During a human collaboration process, when people are working together on a common topic or issue, our human-to-human connections sustain our attention. Conversely, when we interact only with AI, our focus is easily fragmented because switching between tasks feels nearly free. Yet every switch still exacts what neuroscientists call “cognitive setup time,” the mental energy required to reorient ourselves to initiate or restart a given task.
While this may feel good at first, the cumulative effect of jumping from task to task—for example, starting with prompt writing for topic A, then moving to topic B, then jumping back to topic A, and moving on to topic C—actually decreases our focus and deep work capacity. It increases our mental fatigue and reduces the comprehension of AI outputs.
Our dopamine response system rewards us for all the work we accomplish with such ease and so little energy. However, the low cognitive effort of jumping from task to task contributes to low-quality work outputs.
It’s hard to fight off the rapid dopamine reinforcement rewards from AI.
In writing this article, I used two different AIs to help craft my outline. But as I looked at each output, it was clear the article lacked an authentic voice. Worse, AI had hallucinated and provided unverifiable research findings. Aware of the risks of relying on AI, I knew that digging in and using as much of my cognitive power as possible would yield a much better article than if I had just used an AI output.
NOW WHAT?
For learning and development leaders, understanding these neuroscience risks is the first step toward proactively protecting your workforce’s cognitive capabilities while still leveraging AI’s immense potential.
Neuroscience research is becoming clearer every day. When individuals and teams default to AI without engaging their critical thinking, comprehension, and memory, their production suffers and they are less able to validate or expand on AI-generated outputs.
This is where a structured human + AI collaboration workshop can deliver a big impact throughout the workforce. Learning leader programs should avoid merely teaching AI tool usage without providing deliberate practices that engage both high cognitive and low cognitive thinking.
Practices within workshops can warm up our brains to challenge AI outputs and prevent cognitive atrophy.
By investing in human + AI collaboration development now, L&D leaders can support their organizations in maintaining the mental edge needed to innovate, rather than simply outsourcing thinking to machines.
WRITTEN BY
Russell M. Kern, CEO of Kern and Partners, specializes in team collaboration development workshops, blending human cognition with AI knowledge access. Kern is the author of the bestselling book Transform or Die: How to Build Teams that Outthink, Outpace, and Outprofit the Competition in the AI Age.