Back in 2021, I wrote a blog post titled The Singularity — an attempt to grapple with the accelerating progress of artificial intelligence and where it might lead. Looking back now, in 2025, it’s clear that some of those thoughts have aged surprisingly well, while others reflect the popular narratives of their time — particularly around the idea of the “Singularity,” a term that has since lost its central place in the AI debate.
Let’s dissect what held up, what didn’t, and where the conversation has moved today.
(Like many things in 2025, this post was co-written with ChatGPT — something I never imagined when I wrote the original piece.)
What I Got Right: A World Already Shifting
- General AI would arrive sooner than expected.
I predicted that general AI was “closer than we think,” and that’s turned out to be true — perhaps even truer than I imagined. Systems like GPT-4, Claude, Gemini, and open-source models like Mistral are already demonstrating reasoning, coding, creative writing, and problem-solving capabilities across a vast range of tasks. Today’s AI systems are increasingly agentic — capable of planning, tool use, and multi-step reasoning — blurring the line between “narrow” and “general” intelligence in ways that were hard to anticipate back then.
- AI thrives on data.
I noted that progress in AI requires “access to data” and the ability to analyze massive information streams — a foundational truth that has only intensified. Modern LLMs are trained on trillions of tokens, and the hunger for high-quality, diverse data continues to shape the AI arms race.
- Pattern recognition is AI’s superpower.
The core strength I identified — AI’s ability to analyze vast inputs and find patterns humans would miss — remains central. Whether it’s predicting protein structures (AlphaFold), debugging code, or even generating scientific hypotheses, today’s systems are superhuman pattern matchers.
Where the Vision Diverged from Reality
- The Singularity is no longer the centerpiece.
In 2021, like many others, I leaned heavily on the idea of “The Singularity” — a moment of runaway self-improvement where AI would surpass human intelligence and reshape civilization in a blink. But that framing has faded from serious discourse. Instead, researchers and policymakers now focus on AGI (Artificial General Intelligence) — systems that match human-level capabilities — and ASI (Artificial Superintelligence) — systems that vastly exceed them.
Rather than a sudden leap, we’re experiencing a gradual, continuous transformation, more like boiling water than a lightning strike. The fears and hopes once projected onto the singularity have been redistributed into more grounded conversations: alignment, control, policy, and socioeconomic impact.
- Exponential self-improvement didn’t happen (yet).
I speculated that once AI reached a certain level, it could redesign itself and trigger a feedback loop of improvement. While this is still theoretically possible, what we’ve seen instead is scaling — improvements through larger models, better training techniques, and increased compute — not recursive redesign. LLMs aren’t rewriting their own source code or building better chips (yet).
- Physical embodiment is no longer central.
I assumed that true intelligence would require bodies, sensors, and robots — but language turned out to be a powerful interface. We’re still far from androids walking the streets, but we now converse with AI daily via text, voice, and even video — and this seems to be enough for many applications. Language is a body of sorts.
The Current Landscape (2025): AGI is No Longer Theoretical
We now live in a time where:
• Many researchers argue we may already have proto-AGI.
• While many governments are racing to regulate advanced AI models out of concern over misuse and loss of control, others — notably the U.S. — are pushing to become AI superpowers, prioritizing strategic advantage over strict safeguards.
• Open-source and proprietary models compete fiercely, raising questions about access and power.
• AI is shifting labor markets, altering education, reshaping creativity, and even affecting geopolitics.
The focus has shifted from when AGI will arrive to how we manage it.
Singularity vs. AGI / ASI: What’s the Difference?
Here’s how I now see it:
• The Singularity was a dramatic narrative — a tipping point of unknowable change, often portrayed as sudden and overwhelming. It was emotionally powerful but vague.
• AGI is the near-term, definable goal: AI that can perform any intellectual task a human can.
• ASI is the real concern: intelligence orders of magnitude beyond ours, potentially uncontrollable.
The singularity might still happen — but not as a bang. More likely, we’ll wake up one day and realize it’s already here in pieces: agents chaining tools together, models building models, systems steering institutions.
If I Were Writing That Blog Today…
Instead of predicting a singularity, I’d focus on:
• Alignment and safety: How do we ensure powerful AI systems share our goals?
• Power and control: Who gets to decide how AI is developed and used?
• Human integration: How do we adapt education, work, and values for an AI-augmented world?
• Plural futures: Rather than one inevitable future, how do we keep multiple pathways open?
Final Thought
My 2021 post was written with both curiosity and awe — and I still feel those emotions now. But with today’s perspective, I’d replace the language of singularity with one of stewardship. The future of intelligence — artificial or otherwise — doesn’t just need visionaries. It needs caretakers.