Shadow AI, the AGI Mirage, and Why Curiosity Is the Last Human Edge
Sash Mohapatra

Sash spent 20 years at Microsoft guiding enterprise clients through the cloud revolution and the rise of AI. Now, as the founder of The Rift, he’s on a mission to enhance human potential by helping people develop practical, future-ready AI skills. He writes from a place of deep curiosity, exploring what it means to stay human as machines reshape the world around us.
August 17, 2025

A few weeks ago, I was speaking with a VP at a large software company, one that actually makes AI products. He admitted he didn’t know where to start with AI. Then he laughed and told me it was his nine-year-old son who showed him that you can actually ask follow-up questions to ChatGPT.
That moment has stayed with me. If executives building these products are still figuring out the basics, what does that mean for the rest of us?
We’re at an odd stage in the AI revolution. The hype has peaked. The headlines about AI changing everything are still loud, but the reality on the ground is quieter, messier, and more human. Yes, adoption is spreading. Yes, companies are piloting projects. But is the impact material? Are employees actually using AI to transform the way they work? Or are we just layering a new tool on top of old processes and calling it progress?
The Mirage of Adoption
The numbers, at first glance, look impressive. McKinsey reported that by late 2024, 78% of organizations used AI in at least one business function, up from 55% in 2023. But here’s the catch: fewer than one in five saw meaningful impact at the enterprise level. Only 17% attributed 5% or more of their earnings to AI, and most of those gains came from narrow use cases.
This disconnect is everywhere. I’ve seen AI pilots launched with tiny budgets, a few champions, and lots of ambition. A few months later, those same pilots are “elevated” to enterprise AI programs, but without real strategy or funding. The result? Nothing changes, except the slide decks.
Meanwhile, “shadow AI” is everywhere. Employees use their own tools in their own way, without training, governance, or alignment with company goals. Ask around in any office, and you’ll hear the same stories: someone quietly using ChatGPT for project planning, or Gemini for summarizing documents, or Midjourney for design drafts. Organizations are fooling themselves if they think they’ve “adopted AI” when, in reality, their people are improvising in the shadows.
It’s not that adoption isn’t happening. It’s that adoption is being mistaken for transformation.
Pilots Don’t Deliver Transformation
Here’s the catch-22 companies are stuck in: they expect ROI from pilots, but they’re piloting AI as if it were a plug-in. They layer it on top of existing processes, hoping for measurable results. But real change doesn’t come from retrofitting. It comes from re-imagining.
This was true with electricity. In the early 1900s, factories tried swapping out steam engines with electric motors while keeping their layouts the same. Productivity barely budged. The real transformation only came when they redesigned entire factories around electric power.
AI is no different. We can’t just bolt it on and expect miracles. If we want consequential impact, some processes need to be rebuilt from the ground up with AI at the core.
And yet, the potential is massive when companies do commit. Infosys recently reported manpower savings of 5% to 35% using hybrid “poly-AI” systems. Small businesses in the UK are reporting productivity boosts between 27% and 133% by using AI for things like staff scheduling and marketing. The Reserve Bank of India projected that generative AI could boost efficiency in banking by nearly 46%.
The lesson is simple: AI delivers when it’s embedded into workflows and given resources, not when it’s treated as a side experiment.
But even if every organization fixed its AI strategy tomorrow, there’s a deeper bottleneck that no budget line can solve: people themselves.
The Human Skilling Gap
Every week I meet dozens of professionals across industries, levels, and roles. Only a handful have really thrown themselves into exploring the clunky tools we have today. Most people are barely scratching the surface. Many don’t know where to start. Some hold an ideological conviction that AI diminishes human potential.
This divide is striking. On one end are highly intelligent, experienced professionals, creatives especially, who dismiss AI without trying it. They’re proud of their craft, and they see AI as a threat to it. On the other end are people who fear AI will take their jobs, and the media bubble reinforces that fear until it calcifies into apathy.
But in between, there’s a quieter group. They accept that AI is here to stay. They’re worried about falling behind. They just don’t know how to begin.
And that’s the real challenge: not adoption, but human adaptation.
Curiosity: The Meta-Skill
When people ask me what skill matters most in this AI era, my answer is always the same: curiosity.
Knowledge is now available on tap with large language models. We don’t need to memorize it. We don’t even need to know where to look for it. All we need to do is ask.
But here’s the catch: the quality of what we get depends entirely on the quality of what we ask. Inquiry is the last human edge. Curiosity leads to exploration. Exploration leads to discovery. Curiosity isn’t just a soft skill anymore; it’s the meta-skill that underpins everything else.
The data supports this. A McKinsey survey in 2025 found that while 74% of full-time employees already use AI tools like ChatGPT or Gemini at work, only 33% have received any formal training. Globally, 46% of business leaders cite workforce skill gaps as the top barrier to AI adoption. In the UK it’s even higher: 81% of firms say a lack of specialist talent is holding them back.
At the same time, demand for higher-order human skills is rising. A 2025 study found that jobs using AI tools now require 36.7% more cognitive skills and 5.2% more social skills than before. In other words: humans aren’t being replaced. They’re being asked to level up.
And curiosity is the on-ramp.
This is why I find the obsession with AGI so misplaced. While humans are struggling to adapt, the industry is pouring billions into a mythical chase.
AGI: The Mythical Chase
I don’t know how to feel about AGI. On one hand, it’s fascinating. On the other, it feels almost mythical, something humans have never experienced, can’t define, but are chasing anyway. Researchers talk about AGI as if it will arrive in a single model update, a leap that changes everything overnight. But that’s not how intelligence works.
I’ve used Claude Opus 4.1, one of the smartest models out there. Its reasoning is impressive. But its daily thread limits make it practically unusable. Intelligence without long-term memory and continuity isn’t intelligence. It’s a demo.
This is where I believe the first real “AGI-like” experiences will come from: not a model release, but an environment. Imagine models with persistent memory, multimodal interactions, and the ability to reason over time. Imagine infrastructure that lets intelligence unfold instead of resetting every few hours.
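To make that concrete, here’s a minimal sketch of what I mean by an environment: a thin layer that saves every exchange to a local file and folds the history back into the next question, so the conversation survives the end of a thread. It’s illustrative only, written in Python, with a hypothetical call_model function standing in for whichever model API you actually use, and a plain JSON file standing in for a real memory store.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical local store; a real system would use a database or vector store


def load_memory() -> list[dict]:
    """Return previously saved exchanges, or an empty history on the first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(memory: list[dict]) -> None:
    """Persist the running history so the next session starts with context, not a blank slate."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))


def call_model(prompt: str) -> str:
    """Stand-in for whatever model you use (OpenAI, Claude, a local model).
    Swap in a real API call here; the wrapper around it is the point."""
    return f"[model reply to a {len(prompt)}-character prompt]"


def ask(question: str) -> str:
    """Answer a question with the accumulated history folded into the prompt."""
    memory = load_memory()
    # Include recent exchanges so the model "remembers" across sessions,
    # instead of resetting every time a thread or context window runs out.
    context = "\n".join(f"Q: {m['q']}\nA: {m['a']}" for m in memory[-20:])
    answer = call_model(f"Earlier conversation:\n{context}\n\nNew question: {question}")
    memory.append({"q": question, "a": answer})
    save_memory(memory)
    return answer


if __name__ == "__main__":
    print(ask("What did we decide about the Q3 launch plan?"))
```

Crude as it is, the continuity here lives entirely in the scaffolding around the model, not in the model itself, which is exactly why I think the first “AGI-like” experiences will be built, not released.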
Benchmarks don’t capture that. In fact, the gap between the top models on benchmarks is shrinking, from 11.9% to just 5.4% in a year. Models are converging, not exploding apart. The magic won’t come from the next leaderboard jump. It will come from how we architect the systems around the models.
The Distraction Problem
Here’s the uncomfortable truth: the chase for AGI is distracting companies from the more urgent question of how people actually work with AI today.
You can spend billions building a model that beats every benchmark. But if it doesn’t empower employees to achieve more, it’s useless. ROI won’t come from abstract breakthroughs. It will come from embedding AI into workflows, training people to use it, and building environments where it can function in context.
Capgemini estimates that agentic AI systems that can reason, act, and coordinate could generate up to $450 billion in economic value over the next three years. But here’s the reality: only 2% of organizations have scaled these systems. The rest are either piloting or stuck in analysis paralysis.
The opportunity is massive. But the focus is misplaced.
What Comes Next
The AI story is shifting. We’ve moved past the phase of experimentation and hype. The next chapter is about depth:
Re-imagining processes, not just retrofitting them.
Skilling humans, not sidelining them.
Building environments where intelligence feels alive, not just bigger benchmarks.
It’s no longer a question of if AI will change work. It already is. The real question is how deeply we’re willing to lean in.
Executives fumbling for where to start. Employees improvising in the shadows. Professionals resisting out of pride or fear. These are the very human dynamics shaping the AI transition right now.
And yet, behind the noise, the opportunity is enormous. The businesses that embrace curiosity, rebuild workflows, and invest in infrastructure will define the next era. Not because they chased AGI, but because they built the conditions for intelligence, human and machine, to thrive together.
That’s also why I built The Rift: to help people cut through the noise, lean into curiosity, and grow with these tools rather than be sidelined by them. AI’s future isn’t only in the labs. It’s in the hands of people willing to reimagine how they work, learn, and build. And that’s where the real revolution begins.