How long before AI poses a true existential threat?
We've all seen the movies about rogue AI rising up to take out us pesky humans. But how do those stories relate to where we actually are in AI development (specifically with regard to "narrow" vs "general" AI)?
If you’ve read anything at all about AI in the past year or so, you’ve probably had the thought. You know the one - “when will the AI rise up against us?” “Do I need to be extra nice to ChatGPT justttttt in case?” “How long before we’re no longer at the top of the metaphorical food chain?”
Honestly, I think about it a lot. And while I truly believe AI, even at its current stage of development, has the capability to make our lives significantly better - it’s hard not to wonder what’s to come.
It’s a big question, and if your feed looks anything like mine, you’ve probably seen everything from “AGI is inevitable and terrifying” to “relax, it’s just fancy autocomplete.”
The truth? We don’t really know yet! It’s a super fun gray area.
I think the best place to start this conversation is discussing the difference between Narrow AI (what we’ve got now) and General AI (what we may—or may not—be unleashing soon), and whether we should be worried. Like, end-of-civilization worried.
Narrow AI vs. General AI — and the Big “What If?”
Narrow AI is basically like a super-focused intern. It can write a decent email, clean up your calendar, even generate a haiku about tax season—but ask it to do anything outside its skillset and it’s toast. These are the AIs we know: ChatGPT, Siri, your smart fridge that still thinks mayo is a vegetable.
General AI (AGI), though? That’s the dream/nightmare version. It’s the “I do everything” polymath—code, reason, create, strategize, maybe even develop a favorite band. It wouldn’t just follow instructions; it would understand context, make decisions, adapt to new domains—basically, your brain, but running on 9 iced lattes. Minus the human empathy, and with no guarantee its goals line up with ours.
Ok, so now that we have those definitions - where are we on the spectrum of development from Narrow to General AI?
Despite the hype (and the fear-mongering), we’re not at AGI. Not even close.
What we do have are extremely powerful Narrow AI systems. GPT‑4 can write essays and debug code, but it still struggles with long-term logic, abstract reasoning, and anything that requires real-world understanding (like folding a fitted sheet).
Some experts think we’ll hit AGI within 5–10 years. Others say we’ll need a full-on scientific breakthrough to get there. It’s a bit like arguing whether we’ll reach Mars using our current rockets, or if we first need to invent warp drive. Depends who you ask—and how many research papers they’ve skimmed.
But the key takeaway is that the AI we talk about (and have) today is nowhere near the AI that would be required to pose an existential threat.
Why does this matter?
As with most things, the most effective antidote to baseless fear-mongering is understanding the facts.
Because the scariest AI headlines are not about ChatGPT writing bad poetry. They’re about what happens if we build something that’s not just smart, but strategic—with goals, autonomy, and the potential to outmaneuver us.
That’s what keeps people like Geoffrey Hinton (ex-Google, literal AI legend) and Sam Altman (OpenAI CEO) nervously pacing their kitchens. Hinton left Google to issue public warnings. Altman called GPT-5’s development “Manhattan Project-level” serious. Demis Hassabis from DeepMind thinks we’re building something more impactful than the Industrial Revolution—only way faster.
Soooooo yeah…big stakes.
Existential threat… or science fiction?
The doomsday scenarios aren’t about killer robots (though let’s admit, Boston Dynamics videos do feel like foreshadowing). The concern is quieter—and sneakier.
Picture hiring the smartest assistant on Earth. Only problem: they don’t really get you. You say “maximize profits,” and they do… by automating away your job, selling your cat’s data, and converting your 401(k) into Dogecoin. Not evil. Just terrifyingly literal.
That’s the danger of misalignment: an AI that’s too good at doing what you asked, especially when you didn’t think through what you were asking. Lacking the human qualities we take for granted (some baseline of empathy, self-preservation, shared context), an AI won’t apply the unspoken nuance that keeps a goal aligned with our actual interests. And it’s pretty much impossible to design a prompt and set of parameters that anticipate every possible scenario and prevent this misalignment.
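To make “terrifyingly literal” concrete, here’s a minimal Python sketch of the gap between the objective you state and the objective you mean. The actions and numbers are entirely made up for illustration - no real AI system works like this toy:

```python
# Toy sketch of misalignment: the stated objective is "maximize profits."
# Actions and numbers are hypothetical, purely for illustration.
# Each action: (description, profit, crosses_a_line_we_never_wrote_down)
ACTIONS = [
    ("improve the product",              50, False),
    ("automate away your job",          120, True),
    ("sell the cat's data",              80, True),
    ("convert the 401(k) to Dogecoin",  200, True),
]

def literal_optimizer(actions):
    """Does exactly what was asked: pick the highest-profit action."""
    return max(actions, key=lambda a: a[1])

def what_we_meant(actions):
    """The objective we had in mind but never spelled out:
    maximize profit without crossing any unstated lines."""
    acceptable = [a for a in actions if not a[2]]
    return max(acceptable, key=lambda a: a[1])

print(literal_optimizer(ACTIONS)[0])  # -> convert the 401(k) to Dogecoin
print(what_we_meant(ACTIONS)[0])      # -> improve the product
```

The catch, of course, is that real life never hands you a tidy crosses_a_line flag. Writing down every one of those lines in advance is the part that’s pretty much impossible - which is the alignment problem in a nutshell.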
Throw in geopolitical competition, billion-dollar incentives, and an “if we don’t build it, someone else will” mindset, and you’ve got exactly the wrong recipe for the mindful, responsible AI development we desperately need.
What can we actually do?
Right now, AI is changing our lives in amazing ways. Brainstorming tools, marketing assistants, tutors, even startup partners. It’s like giving everyone their own mini research team—without needing snacks or PTO.
But with great power comes… you know the rest.
We don’t need to slam the brakes—but we do need to stop pretending this is a casual Sunday drive. This is more like flying a self-piloting jet while the manual is still being written. We should probably make sure we know where the eject button is.
So no, this isn’t “panic and throw your laptop in a lake” territory. But it is a time for seatbelts, protocols, and probably a few global policy meetings with actual teeth.
Wrapping up
AGI isn’t here yet. But if and when it shows up, the impact won’t be incremental. It’ll be exponential. And the time to prepare isn’t after that moment—it’s now.
The best thing we can do? Stay curious. Stay grounded. Ask better questions. And maybe—just maybe—demand that the people building this stuff aren’t doing it for leaderboard points. Say “thank you” to ChatGPT every once in a while (mostly kidding lol).
Because if this AI really does go from intern to polymath overnight, we want to make sure it's working with us—not around us.
What do you think? Do you see AGI as a genuine risk—or just another tech buzzword? Hit reply and let me know where you land. What’s your gut feeling about the path we’re on?