- Published
- Topics: AI, cognitive skills, work productivity, critical thinking, knowledge work, cognitive shifts
From Doer to Steward: How AI Is Rewiring the Way You Think
Microsoft researchers tracked 936 real work tasks and found AI doesn't just change what you do—it fundamentally shifts how you think. Here's what's changing, and why it matters.
There's a moment I see repeatedly in my EverydayAI training sessions. Someone
uses ChatGPT to draft an email, reads it, tweaks a sentence, and hits send. Ten
minutes instead of thirty. They feel productive. Efficient. Smart.
But here's what they don't realize: the actual work they just did is
completely different from writing that email without AI. Not just
faster—different in kind.
A new study from Microsoft Research
tracked 319 knowledge workers across 936 real-world AI tasks and found something
profound: GenAI tools are rewiring three fundamental cognitive activities. Not
replacing them—transforming them into something else entirely.
If you're using AI at work (and you probably are), your brain is already
adapting to these shifts. The question is whether you're conscious of it.
The Three Cognitive Shifts
The researchers found that when knowledge workers use GenAI tools, cognitive
effort moves in three distinct ways:
- From information gathering → to information verification
- From problem-solving → to response integration
- From task execution → to task stewardship
Let me break down what this actually means for your daily work.
Shift #1: You're Not Finding Information Anymore—You're Fact-Checking It
Before AI: You'd spend 20 minutes searching Google, clicking through seven
tabs, scanning three articles, pulling quotes, and synthesizing the information
into your own words.
With AI: You spend 30 seconds prompting ChatGPT and get a perfectly
formatted answer with citations.
Sounds like pure win, right?
Except now you face a different cognitive challenge: Is any of this actually
true?
The researchers found that 114 out of 319 participants (36%) consistently
cross-referenced AI outputs against external sources. But that means 64% didn't.
They just accepted the information and moved on, especially when the task felt
routine or low-stakes.
One participant, a lawyer (P147), captured the problem perfectly: "AI tends to
make up information to agree with whatever points you are trying to make, so it
takes valuable time to manually verify."
Notice what happened there? The cognitive load didn't disappear—it shifted.
You're no longer hunting for information; you're doing detective work:
validating sources, checking dates, verifying claims.
What This Means For You
The old skill: Knowing where to find information
The new skill: Knowing how to verify information you didn't find
You need to develop what the researchers call "information verification
literacy":
- Can you spot plausible-but-wrong claims in your domain?
- Do you know which sources are authoritative for your field?
- Can you tell when AI is synthesizing real information vs. hallucinating?
A market researcher (P232) in the study showed how to do this right: "ChatGPT
gives immediate results at sufficient detail for me to grasp industry basics.
But I still cross-check against press reports and newsletters I trust."
She's not replacing research with AI. She's using AI to accelerate initial
understanding, then deploying traditional research to validate. That's the
hybrid cognitive model that works.
Shift #2: You're Not Solving Problems—You're Integrating Solutions
Before AI: You'd sit with a blank page, think through the problem, draft
multiple solutions, evaluate trade-offs, and arrive at an answer.
With AI: You describe the problem, get three solutions instantly, and pick
one.
Again, this looks like pure efficiency. But something crucial is lost: the
generative thinking process itself.
The researchers found that knowledge workers now spend cognitive effort on
response integration—figuring out how to incorporate AI outputs into their
specific context—rather than on problem-solving from first principles.
Thirty-six of the 319 participants (11%) reported having to carefully select and
extract only relevant parts of AI responses. Like this auditor (P188) who used ChatGPT
for resume bullet points: "Some information didn't relate to my role or even to
the country I was working in. I had to critically evaluate what would apply."
Another 45 participants reported needing to modify the style of AI outputs to
match their personal voice or professional standards. A scientist (P210) noted:
"Often AI writes awful stuff like 'our groundbreaking and fundamental analysis
shows...' that sounds too emphatic and doesn't fit scientific style."
What This Means For You
The old skill: Generating solutions from scratch
The new skill: Evaluating and adapting pre-generated solutions
This shift has a hidden cost. When you always start with AI's solution, you:
- Miss the insights that come from struggling with the problem yourself
- Don't build the mental models that help you solve similar problems faster next time
- Risk accepting "good enough" solutions instead of optimal ones
A programmer (P154) in the study modeled the right approach: "When ChatGPT
solves a code problem, I make sure I understand how it works so I can do it
myself next time."
He's treating AI as a teacher, not a replacement. The output isn't the goal—the
understanding is.
Shift #3: You're Not Executing Tasks—You're Stewarding Them
This is the most profound shift, and the hardest to see while it's happening.
Before AI: You'd write the report, build the spreadsheet, design the
presentation. You were the producer.
With AI: You prompt the tool, evaluate outputs, refine prompts, integrate
results, and quality-check the final product. You're now the steward.
The researchers found this shift across all six cognitive activities in Bloom's
taxonomy. Three examples:
- Analysis: Instead of breaking down problems yourself, you're breaking down AI's understanding of the problem (48 participants reported this)
- Synthesis: Instead of putting ideas together, you're steering AI to combine them correctly (48 participants reported needing to constantly redirect AI)
- Evaluation: Instead of quality-checking your own work, you're quality-checking AI's work (42 participants reported this requires new cognitive skills)
Here's the kicker: you're still accountable for the output. The AI doesn't
take responsibility. You do.
The Stewardship Tax
One participant (P24) described the cognitive cost of this shift perfectly when
trying to generate images with DALL-E: "Image generation requires more effort
for everything except the actual image generation. I have to think of what I
want drawn, then how the AI wants it described, then correct it when it makes
wacky outputs."
Notice the cognitive layers:
- Conceptualizing the goal (what do I want?)
- Translating intent into AI-legible prompts (how does the AI understand this?)
- Evaluating outputs (is this what I meant?)
- Iterating on prompts (how do I guide it closer?)
You're no longer just doing the work. You're managing a process where someone
else (well, something else) does the work.
This is why "prompt engineering" has become a skill. It's the tax you pay for
delegating task execution to AI.
What This Means For You
The old skill: Executing tasks end-to-end
The new skill: Translating intentions, steering processes, and maintaining
quality oversight
Some questions to ask yourself:
- Am I spending more time managing AI than I would have spent doing the task?
- Do I understand the work well enough to catch when AI gets it wrong?
- Am I building or losing the skills needed to execute this task myself?
A teacher (P19) in the study showed excellent stewardship when generating an
image for a hand-washing presentation: "I noticed it was missing soap
dispensers. So I changed my prompt to include them and tried again. By thinking
about what the image really needed to show, I got a much better result."
She didn't just accept what AI produced. She stayed cognitively engaged with the
goal, caught the gap, and iterated.
The Efficiency Trap
Here's what concerns me most about these shifts: workers consistently reported
that AI made critical thinking require "less effort."
Across the cognitive activities measured:
- 72% said Knowledge tasks required less effort with AI
- 79% said Comprehension required less effort
- 69% said Application required less effort
- 76% said Synthesis required less effort
That sounds great until you realize: less effort can mean less learning.
The researchers specifically warn about this: "GenAI tools reduce the perceived
effort of critical thinking while also encouraging over-reliance on AI, with
confidence in the tool often diminishing independent problem-solving."
When everything feels easier, you stop building cognitive muscle. And when you
need that muscle for something AI can't handle? It's atrophied.
The Hybrid Cognitive Model
The knowledge workers who thrived in this study weren't rejecting AI or blindly
accepting it. They were building a hybrid cognitive model:
For verification: Use AI to gather information quickly, then verify against
trusted domain sources
For integration: Let AI generate solutions, but deeply understand them
before adapting to your context
For stewardship: Delegate execution to AI, but maintain strong conceptual
oversight and final accountability
One participant (P308) showed this perfectly when asking Claude to write web
application code: "I had to make sure it runs without error and then observe
how it functioned."
She didn't just paste code and ship it. She tested, observed, and verified
behavior. She maintained technical agency.
Your Cognitive Fitness Plan
If you're using AI regularly at work, here's how to ensure these shifts
strengthen you rather than weaken you:
1. Schedule "AI-free" time for foundational tasks
Once a week, solve a problem completely without AI. Keep your problem-solving
muscles active.
2. Verify before integrating
Treat every AI output as a draft that needs fact-checking. Build verification
into your workflow, not as an afterthought.
3. Understand before accepting
If AI solves something you couldn't solve yourself, spend time learning how it
works. Otherwise, you're just a middleman.
4. Track your skill development
Are you getting better at your core competencies, or just better at prompting
AI? The answer matters for your long-term career.
5. Practice stewardship deliberately
Get good at translating intent, catching errors, and maintaining quality
standards. These are now core skills.
The Long View
We're only a few years into the GenAI era. These cognitive shifts are still
early. Still subtle. Still reversible.
But cognitive habits compound. If you spend the next five years letting AI think
for you—even in small ways, even for "routine" tasks—you'll wake up in 2030 with
a very different skill set than you have today.
The choice isn't between using AI or not using it. That ship has sailed.
The choice is between using AI as a cognitive amplifier or a cognitive
replacement.
The researchers found that knowledge workers' tendency to reflect on their work
positively correlated with maintaining critical thinking habits, even when using
AI tools. Translation: people who already think deeply continue to think
deeply with AI.
But people who don't? AI gives them permission to think even less.
Which group are you in?
Related Reading
This post is part of a series on maintaining critical thinking in the AI era:
- The Confidence Trap: Why Trusting AI Makes You Think Less - Research reveals the paradox of AI confidence vs. self-confidence
- Don't Let AI Make You Lazy: A Practical Guide to Staying Sharp - Actionable tactics to overcome barriers and build cognitive fitness
- AI Productivity Gains: Reality vs Hype - What the data actually shows about AI's impact on work
For deeper context: Read the full
Second Renaissance analysis on how AI is
compressing centuries of transformation into decades—and why that demands
different strategies than past technological revolutions.
This post draws on: Lee, H.P., Sarkar, A., et al. (2025). "The Impact of
Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort
and Confidence Effects From a Survey of Knowledge Workers." CHI Conference on
Human Factors in Computing Systems.