FROM ASSISTIVE TO AGENTIC AI: An Exploration of the Hype and Reality
Sash Mohapatra

Sash spent 20 years at Microsoft guiding enterprise clients through the cloud revolution and the rise of AI. Now, as the founder of The Rift, he’s on a mission to enhance human potential by helping people develop practical, future-ready AI skills. He writes from a place of deep curiosity, exploring what it means to stay human as machines reshape the world around us.
January 13, 2025

My earliest experiences with AI were modest. I asked Siri for the weather, set reminders with Alexa, and occasionally laughed at their robotic misunderstandings. These assistants felt magical, but they were limited—always reactive, never proactive.
Fast forward to building The Rift, and everything changed. Tools like ChatGPT and Claude weren’t just assistants; they became collaborators. What began as simple Q&A turned into deep problem-solving and brainstorming. Over time, I realized I was no longer simply using AI. I was partnering with it.
Eventually, I began creating small, purpose-built agents to help automate tasks: pulling news data, transforming it, and tagging it for relevance. These weren’t just scripts. They were reliable, repeatable, and quietly running in the background. And through this experience, I began to grasp a deeper truth: the future of AI isn’t assistive. It’s agentic.
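To make that concrete, here's roughly what one of those background agents looks like in spirit: fetch, transform, tag. The feed URL, keywords, and tagging rules below are placeholders, not The Rift's actual pipeline.

```python
# A minimal sketch of a purpose-built "news agent": fetch -> transform -> tag.
# The feed URL and tag rules are illustrative placeholders only.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/ai-news.rss"  # hypothetical RSS feed
TAG_RULES = {
    "agentic": ["agent", "autonomous", "multi-agent"],
    "tooling": ["api", "sdk", "framework"],
}

def fetch_items(url: str) -> list[dict]:
    """Pull raw <item> entries from an RSS feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [
        {"title": item.findtext("title", ""), "link": item.findtext("link", "")}
        for item in root.iter("item")
    ]

def tag_item(item: dict) -> dict:
    """Tag one item for relevance using simple keyword rules."""
    text = item["title"].lower()
    item["tags"] = [tag for tag, words in TAG_RULES.items()
                    if any(w in text for w in words)]
    return item

if __name__ == "__main__":
    tagged = [tag_item(i) for i in fetch_items(FEED_URL)]
    relevant = [i for i in tagged if i["tags"]]
    print(f"{len(relevant)} relevant items out of {len(tagged)}")
```

Scheduled to run each morning, something this small is already the "reliable, repeatable, quietly running in the background" kind of helper I mean.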
What Agentic AI Really Means
In 2025, "AI agents" has become the tech world's favorite buzzword, but very few tools live up to the title. True agents:
- Act autonomously to accomplish goals.
- Adapt and learn from real-world feedback.
- Collaborate with systems and other agents.
This is a departure from automation-as-usual. It’s not about giving instructions. It’s about setting a goal and letting the system chart its own path—within boundaries you define.
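In code, the difference shows up in the control flow: instead of a fixed script, there's a loop that plans, acts, and observes until the goal is met, inside limits you set. Here's a deliberately oversimplified sketch; the planner and tools are hypothetical stand-ins for whatever model or framework you actually use.

```python
# A simplified goal-driven agent loop: plan -> act -> observe, bounded by an
# explicit allow-list and a step budget. `plan_next_step` stands in for a call
# to whatever LLM or planner you actually use.
from dataclasses import dataclass, field

ALLOWED_TOOLS = {"search", "summarize"}   # the boundary the human defines
MAX_STEPS = 5                             # hard stop so the loop can't run away

@dataclass
class AgentState:
    goal: str
    observations: list[str] = field(default_factory=list)
    done: bool = False

def plan_next_step(state: AgentState) -> tuple[str, str]:
    """Hypothetical planner: returns (tool, argument) for the next action."""
    if not state.observations:
        return "search", state.goal
    return "summarize", state.observations[-1]

def run_tool(tool: str, arg: str) -> str:
    """Stub tools; a real agent would call APIs, scripts, or a browser here."""
    return f"result of {tool}({arg!r})"

def run_agent(goal: str) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(MAX_STEPS):
        tool, arg = plan_next_step(state)
        if tool not in ALLOWED_TOOLS:          # enforce the boundary
            break
        state.observations.append(run_tool(tool, arg))
        state.done = tool == "summarize"       # crude success check
        if state.done:
            break
    return state

print(run_agent("find this week's agentic-AI announcements").observations)
```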
The Solopreneur's Reality
As a solopreneur, I fantasize about automating everything. But I’ve learned that mindless automation creates messes. The real work lies in deciding what to offload—and how.
I’ve found success in using AI to extend my thinking and execute smaller parts of a broader vision. That means:
- Clearly articulating the why behind each task.
- Breaking work into agent-executable chunks (sketched in code after this list).
- Using diagrams and frameworks to keep everything aligned.
- Repeating the vision—again and again—to maintain context.
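Here's what an "agent-executable chunk" looks like when I write one down. The fields are my own shorthand rendered as code, not a standard schema, and the example values are made up.

```python
# A lightweight "task brief" for delegating one chunk of work to an agent.
# Field names and example values are illustrative shorthand, not a standard.
from dataclasses import dataclass

@dataclass
class TaskBrief:
    why: str             # the intent behind the task, restated every time
    instructions: str    # the concrete, agent-executable chunk
    inputs: list[str]    # data or links the agent is allowed to use
    done_when: str       # an explicit, checkable definition of "finished"

brief = TaskBrief(
    why="Readers need a weekly pulse on agentic-AI news, not raw feeds.",
    instructions="Tag this week's curated items by theme and draft one-line summaries.",
    inputs=["curated_items.json"],
    done_when="Every item has at least one tag and a summary under 25 words.",
)
```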
Today, I use several agents to manage The Rift’s plumbing: data curation, tagging, and transformation. They don’t make decisions (yet), but they help me move faster. It’s nerve-wracking to let go, but that tension is where growth lives.
The Next Leap: Operator and Manus
Then came OpenAI’s Operator and China's Manus, two tools that are changing the game.
Operator lets AI take actions across the web—booking tickets, submitting forms, managing real tasks. It does so with thoughtful safety layers: it prompts user takeovers for sensitive inputs, requires approval before big actions, and refuses high-stakes tasks like banking. Watch modes and privacy settings keep things transparent and controllable.
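Operator's internals aren't public, so the sketch below isn't its implementation; it's just the general shape of an approval gate that hands control back to a human before sensitive or irreversible actions. The action names are invented for illustration.

```python
# A generic human-in-the-loop approval gate, in the spirit of the safeguards
# described above. This is a pattern sketch, not Operator's actual code.
SENSITIVE = {"enter_payment_details", "log_in"}     # always hand back to the human
NEEDS_APPROVAL = {"submit_form", "book_ticket"}     # pause and ask before acting
REFUSED = {"transfer_funds"}                        # out of scope entirely

def gate(action: str) -> str:
    """Decide how an agent may proceed with a proposed action."""
    if action in REFUSED:
        return "refuse"
    if action in SENSITIVE:
        return "handoff_to_user"
    if action in NEEDS_APPROVAL:
        return "ask_approval"
    return "proceed"

for proposed in ["search_flights", "book_ticket", "enter_payment_details", "transfer_funds"]:
    print(proposed, "->", gate(proposed))
```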
Manus goes even further. Built by the Chinese startup Monica, it executes production-grade code, interacts with terminals and APIs, and performs full workflows with minimal human involvement. It doesn’t suggest. It does.
These aren’t upgrades. They’re structural shifts—tools that transform what a single individual or a small team can achieve, if used wisely.
What Happens When You Scale It
Now imagine this agentic model running a city. In a future smart city, specialized agents could:
- Monitor and reroute traffic in real time.
- Sync public transit based on live demand.
- Coordinate emergency responses the moment incidents occur.
- Balance energy loads across districts.
- Monitor environmental data and take proactive action.
This is a Multi-Agent System (MAS): a decentralized, collaborative web of agents working together with shared data, feedback loops, and dynamic goals. It’s not science fiction. Operator and Manus are the early threads of this future’s fabric.
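Nobody knows exactly what a city-scale MAS will look like, but its basic shape, independent agents publishing observations and reacting to one another over a shared channel, fits in a few lines. The agent roles and messages below are purely illustrative.

```python
# A toy multi-agent system: agents share observations over a simple message
# bus and react to each other. Roles and messages are purely illustrative.
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe channel shared by all agents."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

def traffic_agent(bus: Bus):
    def on_incident(payload):
        print(f"traffic: rerouting around {payload['location']}")
        bus.publish("reroute", payload)
    bus.subscribe("incident", on_incident)

def transit_agent(bus: Bus):
    def on_reroute(payload):
        print(f"transit: adding buses near {payload['location']}")
    bus.subscribe("reroute", on_reroute)

bus = Bus()
traffic_agent(bus)
transit_agent(bus)
bus.publish("incident", {"location": "5th & Main", "severity": "major"})
```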
Balancing Autonomy with Responsibility
But with power comes complexity. When something goes wrong, who’s responsible? Can we trust autonomous systems to act in alignment with human values?
These aren’t abstract concerns. I’ve had automations misunderstand prompts, hallucinate summaries, and present invented details with complete confidence. Multiply that by a thousand agents, and the stakes grow exponentially.
Operator provides a glimpse into responsible design. Its layered safeguards include takeover prompts for sensitive data, final-action approvals, and security mechanisms that detect suspicious behavior. It’s still a research preview, and far from perfect, but it’s a step in the right direction.
Ultimately, we need mental models to guide us. I follow a "Pilot/Co-Pilot" mindset (sketched in code after the list below):
- Define the vision.
- Break it into tasks.
- Let agents execute.
- Supervise, refine, and adapt.
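In code terms, that mindset is just a supervision loop wrapped around whatever the agents produce: the agent drafts, the pilot reviews, and the work is refined or reclaimed. The agent call and review check below are placeholders for a real agent and for human judgment.

```python
# The Pilot/Co-Pilot loop as code: the agent executes, the pilot supervises,
# and work is refined or reclaimed. `agent_execute` and `pilot_review` are
# placeholders for a real agent call and your own review criteria.
MAX_REVISIONS = 3

def agent_execute(task: str, feedback: str = "") -> str:
    """Placeholder for handing a task (plus any feedback) to an agent."""
    return f"draft for {task!r}" + (f" revised per {feedback!r}" if feedback else "")

def pilot_review(result: str) -> tuple[bool, str]:
    """Placeholder for human review: approve, or return feedback to refine."""
    ok = "revised" in result          # stand-in criterion
    return ok, "" if ok else "tighten the summary"

def supervise(task: str) -> str:
    feedback = ""
    for _ in range(MAX_REVISIONS):
        result = agent_execute(task, feedback)
        approved, feedback = pilot_review(result)
        if approved:
            return result
    return f"escalate: pilot takes over {task!r}"   # the human reclaims the task

print(supervise("weekly newsletter digest"))
```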
The Road Ahead
We’re shifting from assistants to agents, from commands to collaboration. And while AI capabilities accelerate, the real transformation is how we choose to use them.
For me, leadership now means experimentation. I’m handing over repeatable tasks, keeping the strategy, and building guardrails that allow agents to stretch—but not snap.
We’re not building a world where humans are replaced. We’re building one where our time, energy, and creativity are finally freed to do more of what makes us human.
Agents aren’t here to take over. They’re here to team up.