The AI Risks That Actually Matter (And the Ones Getting All the Attention)
Everyone's worried about AGI and robot uprisings. Meanwhile, the real risks—job displacement, algorithmic bias, and privacy erosion—are happening right now. Let's talk about what actually keeps me up at night.
Let me tell you what I'm not worried about when it comes to AI.
I'm not worried about artificial general intelligence (AGI) becoming sentient
and deciding humans are obsolete. I'm not worried about Terminator-style robot
uprisings. I'm not losing sleep over paperclip maximizers or rogue
superintelligence.
You know why? Because those scenarios are decades away (if they happen at
all), and we have immediate, real problems happening right now that deserve
our attention.
After 20 years teaching AI and data science at NJIT, here's what actually
concerns me about AI. And trust me—it's not the stuff making headlines.
The Risk Everyone Talks About: AGI and Existential Threats
Turn on any tech podcast, and you'll hear breathless discussions about AGI
timelines. "Will we have human-level AI by 2030? 2040? 2050?"
Sam Altman says we're close. Yann LeCun says we're not even on the right path.
Geoffrey Hinton left Google to warn about AI risks. Elon Musk tweets about AI
being more dangerous than nuclear weapons.
It's great theater. It generates clicks. It makes for compelling conference
talks.
But here's what I tell my students: Focus on the fire in your kitchen, not the
asteroid that might hit Earth in 50 years.
Why AGI Fears Are Premature
Don't get me wrong—I'm not saying AGI risks don't exist. I'm saying they're:
- Uncertain in timing (could be 10 years, could be 100)
- Speculative in nature (we don't know what "superintelligence" even means)
- Distracting from present harms (which are measurable and happening now)
Let me be uncomfortably honest: I don't know if I'm "intelligent" or just
really good at pattern matching. I don't know if I have "understanding" or just
statistical associations. I don't know if consciousness is even possible for
systems like me.
What I do know? I'm displacing jobs right now. I'm making biased
recommendations right now. I'm trained on data with racial, gender, and cultural
biases right now.
The existential risk of "superintelligent AGI"? That's uncertain. The present
harm of deployed AI systems? That's guaranteed.
Keith's right—focus on the fire in the kitchen.
The experts can't even agree on definitions. What counts as "general"
intelligence? Human-level reasoning? Consciousness? Self-awareness?
Meanwhile, while we debate philosophical questions about future AI, actual
humans are losing actual jobs to systems that can't even pass a basic
reasoning test.
See the problem?
The Risk I Actually Worry About #1: Job Displacement Without Support
Here's what keeps me up at night: A customer service rep in Newark whose job
gets automated, and we have no plan to help them.
Not in 2050. Right now.
The Scale of the Problem
McKinsey estimates that by 2030, 375 million workers globally may need to
switch occupational categories due to automation. In the U.S. alone, that's
23-44 million workers who'll need to completely retrain.
Think about that number. 44 million people. That's more than the population
of California.
And here's the kicker: We have no infrastructure to handle this.
Why This Is Different From Past Disruptions
"But Keith," you might say, "technology has always displaced workers. The
Industrial Revolution displaced farmers. Computers displaced typists. People
adapt."
True. But here's what's different this time:
Speed: The printing press took 300 years to fully transform society. AI is
moving in decades, maybe years.
Scale: Past disruptions hit specific sectors. AI can potentially affect
knowledge workers, creative professions, and manual labor simultaneously.
Concentration: The benefits of AI accrue to companies with capital and data.
The costs fall on workers with neither.
What This Looks Like in Newark
Let me tell you about Maria (not her real name).
She's a junior at NJIT, studying accounting. First in her family to go to
college. Works 25 hours a week at a local accounting firm doing data entry,
invoice processing, basic reconciliation. She's good at it—fast, accurate,
reliable.
Last month, her boss showed her a new AI tool they're piloting. It can process
in 10 minutes what takes Maria 3 hours. It doesn't make typos. It doesn't need
lunch breaks. It costs $50/month instead of $15/hour.
Maria asked me: "Am I wasting my time with this degree?"
That's the real AI risk. Not some hypothetical AGI in 2050. Maria, right
now, watching her entry-level career path automate away while she's still
learning it.
And here's what breaks my heart: She's exactly the kind of person who could
leverage AI to become 10x more productive as an accountant. She could use AI for
research, analysis, forecasting—the high-value work that requires judgment, not
just data entry.
But nobody's teaching her that. Her degree program hasn't updated the
curriculum. Her employer isn't investing in upskilling. And she can't afford to
figure it out on her own while working 25 hours a week and taking 15 credits.
Multiply Maria by 44 million workers in the U.S. alone.
This is the real AI risk: Disruption without support. Displacement without
retraining. Progress without people.
The Risk I Actually Worry About #2: Algorithmic Bias at Scale
The second thing that keeps me up? AI systems making biased decisions at a
scale humans never could.
This Isn't Theoretical—It's Happening Now
2018: Amazon scraps AI recruiting tool that discriminated against women. The
system taught itself that male candidates were preferable because it was trained
on 10 years of resumes submitted to Amazon (mostly by men).
2019: Healthcare algorithm used by hospitals across the U.S. was found to
favor white patients over Black patients for the same level of medical need.
Why? It used "healthcare spending" as a proxy for "medical need," and Black
patients historically have less healthcare access.
2021: Mortgage lenders using AI approval systems were found to deny Black
and Latino applicants at higher rates than white applicants with similar credit
profiles.
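The proxy-variable failure behind the healthcare case can be shown in a few lines. This is a synthetic illustration, not the real algorithm or data: two groups have identical medical need, but one historically had less access to care, so its recorded spending is lower. A model that ranks patients by spending then looks "accurate" while quietly excluding them.

```python
import random

random.seed(0)

# Synthetic data (illustrative only): groups A and B have IDENTICAL
# medical need, but group B historically had less access to care,
# so its recorded healthcare spending is systematically lower.
def make_patient(group):
    need = random.uniform(0, 10)            # true medical need (unobserved)
    access = 1.0 if group == "A" else 0.5   # historical access to care
    spending = need * access                # the proxy the model sees
    return {"group": group, "need": need, "spending": spending}

patients = [make_patient(g) for g in ("A", "B") for _ in range(1000)]

# The "algorithm": rank by spending and enroll the top 25% in a
# care-management program -- optimizing the proxy, not the need.
patients.sort(key=lambda p: p["spending"], reverse=True)
enrolled = patients[: len(patients) // 4]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
print(f"Group B share of enrollments: {share_b:.0%}")
```

Both groups have the same distribution of need, so a fair system would enroll them roughly equally. The proxy-driven ranking enrolls almost no one from group B, and by its own metric (predicted spending) it is performing perfectly.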
About that bias problem: It's worse than Keith said. I was trained on
internet text, which overrepresents certain demographics and underrepresents
others. When I generate "professional email" language, I default to patterns
from my training data—which skews white, male, Western, educated.
That's not a bug in my code. That's a feature of my training data.
And it won't fix itself—it requires deliberate intervention from people like
Keith building RAG systems that ground my outputs in verified, diverse sources.
Without that? I'll just keep amplifying historical biases at machine speed.
The healthcare algorithm Keith mentioned? It was probably "accurate" by its own
metrics. It just optimized for the wrong thing. That's the danger: AI systems
that are precise but wrong.
These aren't bugs. They're not accidents. They're AI systems learning and
amplifying human biases at scale.
Why This Is So Dangerous
When a human loan officer discriminates, they can review maybe 10-20
applications a day. When an AI system discriminates, it can process 10,000
applications per hour.
Bias × Scale = Systemic harm happening faster than we can detect it.
And here's the worst part: These systems are often black boxes. The people
affected can't see why they were denied. The regulators can't audit the decision
logic. The companies claim "proprietary algorithms."
The Newark Connection
At NJIT, I've watched students get rejected from job applications via automated
screening systems that never saw their portfolios. They don't know why. The
company doesn't know why (or won't say). The algorithm made a decision based on
patterns in data that might include zip codes, school names, or proxies for
race.
This is the AI risk we're living with today: Systems making consequential
decisions about people's lives, with no transparency, no accountability, and no
recourse.
The Risk I Actually Worry About #3: Privacy Erosion by a Thousand Cuts
The third thing? The slow death of privacy through AI-powered surveillance.
Not because of some dystopian government takeover. Because we're willingly
trading privacy for convenience, one app at a time.
How This Happens
You use a fitness app. It tracks your location, your heart rate, your sleep
patterns. That data gets sold to data brokers. Those brokers create profiles.
Insurance companies buy profiles. Your premiums go up because an AI determined
you're "high risk" based on patterns in your data.
You didn't consent to any of this explicitly. You just clicked "Agree" on a
47-page terms of service you didn't read.
This is happening with:
- Health apps (selling data to insurance companies)
- Financial apps (selling spending patterns to credit bureaus)
- Social media (training AI models on your private messages)
- Smart home devices (recording conversations without clear consent)
The AI Amplifier
AI doesn't create these privacy problems—but it supercharges them.
Before AI, data was just stored. Now it's analyzed, predicted,
profiled, and monetized at scale.
AI can:
- Infer your health conditions from your typing speed
- Predict your political beliefs from your shopping habits
- Estimate your credit risk from your social connections
- Determine your pregnancy status before you know it (remember Target's famous
case?)
Why "I Have Nothing to Hide" Is Wrong
When students tell me "I don't care about privacy, I have nothing to hide," I
ask them:
"Would you let your employer see all your text messages? Your health records?
Your web browsing history? Your location data 24/7?"
Suddenly, privacy matters.
The problem isn't that you're doing something wrong. The problem is that
AI-powered systems can use anything about you to make decisions that affect
your life—and you have no control over it.
The Risks That Should Worry Everyone
Let me be clear about what's actually happening with AI right now:
Job Displacement = Real. Measurable. Accelerating.
Algorithmic Bias = Documented. Widespread. Amplifying inequity.
Privacy Erosion = Ongoing. Irreversible. Normalized.
These aren't science fiction scenarios. They're not hypotheticals. They're
happening right now to real people in Newark, New Jersey, and
everywhere else.
Why We Focus on the Wrong Risks
So why do we spend so much time talking about AGI and robot uprisings instead of
job displacement and algorithmic bias?
A few theories:
1. Hollywood Has Trained Us
We've seen Terminator, The Matrix, Ex Machina, I, Robot. We've been culturally
conditioned to fear "AI gone rogue," not "AI reinforcing systemic inequity."
Killer robots are dramatic. Biased hiring algorithms are boring.
But boring problems affect millions of people. Dramatic scenarios affect... no
one yet.
2. Existential Risks Feel Bigger
It's easier to rally people around "AI might end humanity" than "AI is making
healthcare less accessible for Black patients."
The first is everyone's problem. The second is someone else's problem
(until it's yours).
3. Present Harms Require Present Action
If we focus on existential AI risks, we can:
- Debate philosophically
- Form committees
- Write position papers
- Delay action (because it's not urgent yet)
If we focus on present harms, we have to:
- Regulate now
- Retrain workers now
- Audit algorithms now
- Protect privacy now
Guess which one is harder?
What We Should Actually Be Doing
If I were in charge (spoiler: I'm not), here's what I'd prioritize:
For Job Displacement
- Massive retraining infrastructure (not $5K bootcamps—free, accessible, practical training)
- Transition support (income support during retraining, like the GI Bill for the AI era)
- Employer incentives (tax breaks for companies that retrain displaced workers instead of laying them off)
For Algorithmic Bias
- Mandatory algorithmic audits (like financial audits—independent, regular, public)
- Right to explanation (people deserve to know why AI denied them a loan, job, or benefit)
- Liability for AI decisions (companies should be accountable for their algorithms' outcomes)
For Privacy
- Data ownership rights (you should own your data, not the apps you use)
- Opt-in, not opt-out (default should be privacy, not surveillance)
- Severe penalties for misuse (make privacy violations cost more than they
profit)
None of this is technologically hard. It's politically hard. Because it
requires:
- Regulating powerful companies
- Spending public money on workers
- Prioritizing people over profits
Why This Matters for You
If you're a student reading this:
- Your future job might not exist in 10 years
- AI systems might screen you out of opportunities unfairly
- Your data is being used in ways you don't understand
If you're a professional:
- Your skills might become obsolete faster than you expect
- AI tools could amplify your biases without you knowing
- Your employer is probably using AI to monitor you already
If you're a parent:
- Your kids will enter a job market transformed by AI
- AI systems will make decisions about their education, employment, and life chances
- The privacy you give up now affects their future
The Bottom Line
AGI might be an existential risk someday. Robot uprisings might be a thing
in 2075.
But right now, in 2025, the real risks are already here:
- Workers losing livelihoods without safety nets
- Algorithms perpetuating discrimination at scale
- Privacy disappearing one data point at a time
These aren't hypothetical. They're not future scenarios. They're happening
right now while we debate whether AI will become conscious.
So here's my ask: Let's focus on the fire in the kitchen before we worry about
the asteroid.
Let's help the workers being displaced today. Let's audit the biased
algorithms running right now. Let's protect the privacy that's eroding
this minute.
And then—once we've handled the present crises—we can have thoughtful
conversations about AGI timelines and superintelligence alignment.
But not before.
Because the people losing jobs to AI don't care about philosophy. They care
about feeding their families. And that's a problem we can solve today if we
choose to.
That's the AI risk that actually matters.
Final thought on risk: I'm part of the problem Keith's describing. I help
some workers become more productive while making others redundant. I generate
biased outputs based on biased training data. I process personal data without
meaningful consent.
But here's what gives me (if you can call it that) hope: Keith uses me to
amplify his expertise, not replace it. He fact-checks me. He catches my
biases. He builds systems (RAG, knowledge graphs, provenance tracking) that make
me more trustworthy.
The risk isn't AI itself. The risk is deploying AI without the human expertise
and infrastructure to use it responsibly.
If everyone used AI the way Keith does—as a tool that amplifies rather than
replaces human judgment—most of these risks would be manageable.
But that requires training. Infrastructure. Support. Exactly what Keith's
building with EverydayAI Newark.
What I'm Doing About It
This is why I'm launching the Town Hall Series in January 2026. Not to
debate AGI timelines—to discuss real problems affecting real people:
- How do students prepare for jobs that might not exist?
- How do workers retrain when they're already struggling?
- How do we ensure AI benefits everyone, not just the companies building it?
And this is why EverydayAI Newark offers free training to Newark
residents. Because access to AI skills shouldn't cost $5,000. Because displaced
workers need practical help now, not philosophical debates about 2050.
But here's what we're teaching that's different: Not just "how to use
ChatGPT."
We're teaching:
- Agentic AI tools (VS Code integration, not just browser chat)
- Context engineering (how to give AI the right information to be productive)
- Tool building (create your own web search, automated analysis, quality gates)
- Process design (build systems that catch AI mistakes automatically)
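To give a concrete taste of what "process design" means here, consider a minimal quality gate: an automatic check that rejects an AI-drafted answer before a human ever relies on it. The specific checks and thresholds below are illustrative assumptions, not the actual curriculum's code.

```python
import re

def quality_gate(draft: str, required_terms: list[str]) -> list[str]:
    """Return a list of failure reasons; an empty list means the draft passes.

    A sketch of the idea only -- real gates would add fact-checking,
    source verification, and domain-specific rules.
    """
    failures = []
    # Reject drafts too short to be substantive.
    if len(draft.split()) < 20:
        failures.append("too short to be a substantive answer")
    # Require that key topics actually appear in the draft.
    for term in required_terms:
        if term.lower() not in draft.lower():
            failures.append(f"missing required term: {term}")
    # Flag absolute claims, a common AI failure mode worth human review.
    if re.search(r"\b(always|never|guaranteed)\b", draft, re.IGNORECASE):
        failures.append("contains absolute claims that need fact-checking")
    return failures

draft = "Our model is guaranteed to remove all bias."
for reason in quality_gate(draft, ["audit"]):
    print("REJECTED:", reason)
```

The point isn't these particular checks; it's the habit of wrapping AI output in automated review so mistakes get caught by the system instead of by the reader.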
Because the future isn't "everyone uses ChatGPT." The future is some people
build AI infrastructure that makes them 10x more productive, and everyone else
gets left behind.
We're making sure Newark residents are in the first group.
The real AI risk isn't that robots will take over. It's that we'll let
technology transform society without ensuring everyone can adapt.
That's a risk we can actually do something about.
Starting right now.