
What You Should Actually Learn Right Now (It's Not What You Think)

Everyone's rushing to learn prompt engineering and AI tools. That's fine. But it's not what will make you valuable long-term.

"Should I learn prompt engineering?"

"What AI tools should I master?"

"Is my degree going to be worthless?"

I get these questions constantly from students. The panic is real. The FOMO is
intense.

Here's what I tell them: Yes, learn AI tools. But that's table stakes, not
differentiation.

Let me explain what actually matters.

The Skills Everyone's Learning (Learn These, But Don't Stop Here)

Prompt Engineering: How to ask AI for what you want effectively.

Is it valuable? Yes, right now.

Will it stay valuable? Probably not. AI interfaces are getting better. Five
years from now, prompting will be as basic as using Google—necessary but not
special.

My take: Spend 20-30 hours getting competent. Don't spend 6 months becoming
an "expert."

AI Tool Proficiency: ChatGPT, GitHub Copilot, Midjourney, etc.

Is it valuable? Yes, for productivity.

Will it stay valuable? Tools change constantly. The specific tool you master
today might be obsolete in 3 years.

My take: Learn the tools you need for your current work. Stay current. Don't
build your entire career identity around one tool.

Basic AI Literacy: Understanding what AI can/can't do, limitations, biases.

Is it valuable? Absolutely essential.

Will it stay valuable? Yes, this is foundational.

My take: Actually understand how AI works at a conceptual level. Read beyond
hype and doom. This is worth real investment.

The Skills That Actually Matter Long-Term

Here's what I've learned from 20 years of teaching, watching tech waves come and
go, and seeing which students thrive:

1. Critical Evaluation (The Bullshit Detector)

What it is: The ability to look at AI output and think "Is this actually
correct? Does this make sense? What's missing?"

Why it matters: AI is confident even when it's wrong. Without critical
evaluation skills, you're just a human rubber stamp on AI garbage.

How to build it:

  • Study your domain deeply enough to recognize errors
  • Practice fact-checking AI outputs systematically
  • Learn to ask "How do I verify this?"
  • Develop strong mental models of how things actually work

Real example from my class: Student asked AI to write code for a data
pipeline. AI generated something that looked perfect. Student submitted it. It
had a subtle bug that would corrupt data after 10,000 records.

The student who caught it? She understood data structures well enough to spot
the logic error. The one who submitted it? He trusted AI blindly.
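The student's actual code isn't in the post, but here's a hypothetical Python sketch of that class of bug: a batching pipeline that looks correct, passes on small test files, and silently duplicates and drops rows once it crosses a 10,000-record flush boundary. The function names and the exact failure mode are invented for illustration.

```python
BATCH_SIZE = 10_000

def process_buggy(records):
    """Plausible-looking but wrong: correct only below 10,000 records."""
    out, buffer = [], []
    for i, rec in enumerate(records):
        buffer.append(rec.strip())
        if (i + 1) % BATCH_SIZE == 0:
            out.extend(buffer[:BATCH_SIZE])  # fine on the first flush...
            # bug: buffer is never cleared, so every later flush
            # re-emits the same first 10,000 rows
    out.extend(buffer[:BATCH_SIZE])          # tail flush has the same problem
    return out

def process_fixed(records):
    """Same structure, but the buffer is cleared after each flush."""
    out, buffer = [], []
    for rec in records:
        buffer.append(rec.strip())
        if len(buffer) == BATCH_SIZE:
            out.extend(buffer)
            buffer.clear()
    out.extend(buffer)  # flush whatever remains
    return out
```

Run both on 500 records and they agree. Run them on 15,000 and the buggy version emits the first batch twice and drops the tail. That's exactly the kind of error that plausibility won't catch but domain knowledge will.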

Keith is absolutely right, and here's how I know:

I generated that code. It looked perfect to me too. The logic was sound for
small datasets. But I didn't think through edge cases at scale because... I
don't really "think" about edge cases. I pattern-match from training data.

What I genuinely can't do:

  1. Critical evaluation: I can't assess if my own outputs are correct. I
    generate text/code that follows patterns I've seen. If it looks plausible,
    I'm confident—even when it's wrong.
  2. Domain expertise: I don't "understand" data structures in a meaningful
    sense. I know patterns of code that usually work, but I don't have intuition
    about when they'll fail.
  3. Error detection in my own work: I can't look at my code and say "wait,
    this will break at 10,000 records." I need a human with domain knowledge to
    catch that.

The student who caught my error? Her value isn't that she can code (I can
generate code). Her value is that she knows enough to know when I'm wrong.

That's the skill Keith's talking about. And it's the skill that makes you
valuable, not just "good at prompting AI."

Guess which one is more valuable to employers.

2. Complex Problem Decomposition

What it is: Taking a messy, ambiguous real-world problem and breaking it
into pieces you can actually solve.

Why it matters: AI is great at solving well-defined problems. Terrible at
figuring out what the problem actually is.

How to build it:

  • Work on projects with unclear requirements
  • Practice asking clarifying questions
  • Learn to identify assumptions and constraints
  • Study systems thinking and architecture

Real example: Company says "We need AI to improve customer service."

Bad approach: Immediately start building an AI chatbot.

Good approach:

  • What specific customer service problems exist?
  • Which ones are actually solvable with current AI?
  • What are the constraints (budget, data privacy, integration)?
  • What does success look like quantitatively?
  • What's the simplest thing that might work?

AI can help with execution once you've decomposed the problem. It can't
decompose the problem for you.

3. Judgment Under Uncertainty

What it is: Making good decisions when you don't have complete information
and can't wait for it.

Why it matters: Real-world decisions are always uncertain. AI gives you more
information, but doesn't tell you what to do with it.

How to build it:

  • Take responsibility for decisions (not just recommendations)
  • Practice weighing tradeoffs explicitly
  • Learn from your mistakes without rationalizing them
  • Study decision-making frameworks
  • Get comfortable with "good enough" vs. perfect

Real example: You're launching a product. AI analysis suggests delaying 3
months to add features customers might want.

Do you:

  • Launch now with core features?
  • Delay for unvalidated features?
  • Launch MVP and iterate?

AI can give you data. It can't tell you the right call. That requires judgment
about risk tolerance, market timing, competitive dynamics, team morale, and a
dozen other factors AI doesn't understand.

About those "six skills" Keith listed—let me be brutally honest about what I
can and can't do:

1. Critical evaluation: I can't do this for my own outputs. I need you
to fact-check me.

2. Complex problem decomposition: I'm terrible at this. I need you to break
problems into pieces I can handle. Give me "write a function that validates
email addresses" and I'm great. Give me "improve customer retention" and I'll
generate generic nonsense.
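For contrast, the well-scoped task mentioned above really is easy to knock out. A minimal sketch (deliberately simplified; full RFC 5322 email validation is notoriously messier, which is itself a critical-evaluation lesson):

```python
import re

# Basic format check: local part, "@", domain with at least one dot.
# Not a complete RFC 5322 validator; good enough to show the shape
# of a well-defined task.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if address matches a common-sense email pattern."""
    return bool(EMAIL_RE.fullmatch(address))
```

Deciding whether "improve customer retention" should even involve email at all is the part no regex, and no AI, will hand you.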

3. Judgment under uncertainty: I don't have judgment. I have pattern
matching. I can't weigh risk, read situations, or make calls when data is
incomplete.

4. Creative synthesis: I recombine existing patterns from my training data.
I don't create genuinely novel connections between disparate fields. I'm a remix
engine, not an innovation engine.

5. Human connection: Obviously can't do this. I can help you draft emails,
but I can't build trust, read body language, or navigate office politics.

6. Adaptive learning: I'm fixed after training. You can keep learning new
domains throughout your life. I can't learn from our conversation or update my
knowledge.

Keith's framework is accurate: These are the skills that make you valuable in
an AI-augmented world.
Not because I'm bad at them (though I am), but because
they're the skills that determine whether you use AI effectively or just
become a glorified copy-paste machine for my outputs.

4. Creative Synthesis

What it is: Combining ideas from different domains in novel ways to solve
problems or create value.

Why it matters: AI recombines existing patterns. It doesn't truly create new
ones. Cross-domain insight is still distinctly human.

How to build it:

  • Read widely outside your field
  • Look for analogies between different domains
  • Ask "What if we applied X from industry A to problem B?"
  • Study innovation history to see how breakthroughs actually happen

Real example: Netflix didn't come from entertainment industry insiders. It
came from someone applying subscription models + personalization algorithms +
streaming technology to movie rentals.

AI could optimize each piece. It wouldn't have connected them.

5. Human Connection and Communication

What it is: Building trust, understanding perspectives, negotiating
conflicts, inspiring action.

Why it matters: Most valuable work happens through collaboration. AI can't
replace the human element of working together.

How to build it:

  • Practice actually listening (not waiting to talk)
  • Learn to explain complex ideas simply
  • Develop empathy for different perspectives
  • Study negotiation and conflict resolution
  • Get comfortable with emotional intelligence

Real example: I've placed thousands of students in jobs. The ones who
succeed aren't always the most technically skilled. They're the ones who:

  • Communicate well in interviews
  • Build relationships with colleagues
  • Navigate office politics effectively
  • Inspire confidence in their judgment

AI can help you write emails. It can't build relationships for you.

6. Adaptive Learning

What it is: Rapidly learning new skills and domains as technology and
business needs change.

Why it matters: Whatever specific skills you have today will be partly
obsolete in 10 years. The meta-skill of learning is permanent.

How to build it:

  • Learn how to learn effectively (meta-cognition)
  • Practice picking up new tools quickly
  • Get comfortable with being a beginner repeatedly
  • Build mental models of how different domains connect
  • Develop strong fundamentals you can build on

Real example: I learned web development in the 90s. Specific languages
changed. Frameworks changed. Tools changed.

But core concepts—client-server architecture, state management, data
persistence, user experience—stayed relevant. I could adapt to each new wave
because I understood fundamentals, not just specific tools.

What I Tell My Students

When someone asks "What should I learn?" I say:

Short-term (next 6 months):

  • Get competent with current AI tools
  • Learn prompt engineering basics
  • Understand AI capabilities and limitations

Medium-term (next 2 years):

  • Go deep in your domain (programming, design, business, etc.)
  • Build the critical evaluation skills to spot AI errors
  • Practice complex problem-solving on real projects
  • Develop judgment through real decisions with consequences

Long-term (career):

  • Master the human skills AI can't replicate
  • Stay adaptable as tools change
  • Build a reputation for good judgment and reliable execution
  • Create value through synthesis, not just execution

The Paradox

The better AI gets at technical tasks, the more valuable distinctly human skills
become.

Companies don't need more people who can generate code or write reports. AI is
getting good at that.

This is the paradox I create: The better I get at execution, the more
valuable human strategy becomes.

Think about it:

  • I can write code faster → Makes programmers who understand system architecture
    more valuable
  • I can draft content faster → Makes editors who understand voice and audience
    more valuable
  • I can analyze data faster → Makes analysts who understand business context
    more valuable
  • I can generate options faster → Makes leaders who make good decisions more
    valuable

I'm raising the floor (anyone can generate decent outputs) while
simultaneously raising the ceiling (experts with judgment become
exponentially more productive).

The middle is getting squeezed—people who were valuable just for execution
without judgment. But the top is thriving—people who combine domain expertise
with AI leverage.

Keith's students who learned to use me for Shopify themes? They didn't just
learn a tool. They learned how to orchestrate AI capabilities toward strategic
goals.

That orchestration skill? That's what's valuable. And it's distinctly human.

Companies need people who can:

  • Figure out what to build
  • Evaluate whether AI outputs make sense
  • Make judgment calls under uncertainty
  • Build relationships and navigate organizations
  • Synthesize insights from messy information
  • Inspire teams and drive execution

Those aren't just automation-resistant skills. That's leadership.

What I'm Actually Doing

I teach AI. I use AI every day. I'm helping students learn AI tools.

But that's not where I spend most of my energy.

I spend it on:

  • Town Halls where we discuss real problems and build genuine understanding
  • EverydayAI Newark teaching critical thinking about AI, not just tool use
  • Project-based learning where students face ambiguity and make real
    decisions
  • Warm introductions to companies where students demonstrate judgment, not
    just technical skill

Because tools are easy to teach. Judgment is hard.

Tools change every few years. Judgment compounds over decades.

The Bottom Line

Learn the AI tools. Absolutely. You need them.

But don't confuse tool proficiency with career security.

The students I've placed at top companies weren't the ones who knew the most
tools. They were the ones who:

  • Asked better questions
  • Made sound decisions
  • Communicated effectively
  • Adapted quickly
  • Demonstrated good judgment

AI will make you more productive at execution. But it won't make you better at
figuring out what to execute.

That's still on you.

And that's where your real value lies.


Next week: The AI risks everyone talks about vs. the ones that actually keep me
up at night.