Some discoveries happen in a burst—a late-night breakthrough, a bug resolved, a new app that changes the way you work. More often, insight creeps up slowly—a string of small surprises that, together, force you to see the landscape anew.

For me, that moment started not with a headline, but with a tool: DeepSeek. I run The Rift, a platform all about charting the frontier in AI, web3, and quantum. Part of the job is relentlessly stress-testing the newest models for everything from coding and copywriting to news curation and strategy. I'm always on the hunt for what actually works, not just what trends on tech Twitter.

A Personal Experiment: DeepSeek and the Unseen Shift

It was a typical evening at my desk—test script open, datasets ready, a vague hope of shaving minutes off mundane tasks. My first run with DeepSeek for auto-coding was almost accidental, one of countless experiments in the workflow. And yet, what happened next gave me pause: not only did DeepSeek turn out solid, functional code, but the cost was minuscule—sometimes less than a penny for what would've cost me a latte's worth in OpenAI credits.
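
For readers who want to reproduce that kind of run, the pattern is simply an OpenAI-compatible chat call pointed at DeepSeek's endpoint. The sketch below is a minimal illustration, not my exact pipeline; the base URL, model name, and environment variable are assumptions to verify against DeepSeek's current API documentation.

```python
# Minimal sketch of an auto-coding run: an OpenAI-compatible chat call
# pointed at DeepSeek's API. Endpoint, model id, and env var name are
# assumptions; verify them against DeepSeek's current documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed env var name
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                   # assumed model id
    messages=[
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": "Write a function that deduplicates a list while preserving order."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
# Token usage comes back with each request, which makes per-run cost tracking easy.
print(response.usage)
```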

Sure, it was missing slick search tools. Documentation still felt like a patchwork in progress. But the cost-performance ratio was undeniable. My curiosity was piqued. I'd watched countless U.S. and European labs polish model after model, always fighting for the next leaderboard spot, but here was a fresh angle: massive, quietly growing power at a wholly different price. It was my first real taste of "China edge."

Manus and the Feeling of Frictionlessness

A few months later, I caught a break with early access to Manus, a tool that offered something close to a dream for anyone doing deep research: agentic computer use. Manus wasn't just generating prose or code—it was running, using, and orchestrating a desktop environment, looking up references, and staying in flow. The interface was reminiscent of the best consumer design thinking, but what really struck me was how frictionless it all felt. In those hours, it became clear: the play here wasn't just catching up to U.S. benchmarks, but leaping ahead in user experience and workflow integration.

Once you see what's possible, you want to know where the momentum is coming from. That's when I started paying real attention to Chinese AI innovation—not just from frontier names like Baidu and Qwen, but from the thousands of hands building, tweaking, and deploying these tools at scale.

The Data Reveals a Shifting Landscape

What I discovered in my research painted a picture that most tech media was missing. Chinese AI models have rapidly closed the performance gap with their American rivals. According to Stanford's 2025 AI Index Report, the performance difference on major benchmarks like MMLU and HumanEval shrank from double digits in 2023 to near parity in 2024.

But the momentum runs deeper than model performance. Chinese models now dominate the open-source leaderboards.

  • Alibaba's Qwen models hold 3 of the top 10 spots on Hugging Face's Open LLM Leaderboard
  • Over 100,000 derivative models have been built within Qwen's ecosystem, surpassing Meta's Llama community [TechWire Asia 2025]
  • Chinese institutions produced 15 notable AI models in 2024 versus 40 from the U.S., but the quality gap has essentially disappeared [Stanford AI Index 2025]

And let's not skip the core economic lever: cost. DeepSeek's efficiency means I can mass-curate content, test, and deploy pipelines at roughly 1/100th to 1/200th of the outlay of Western APIs. When new releases hit, it isn't just the "wow" of added capability; it's practical, immediate savings that change the math of what experiments I even consider.
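
To make that ratio concrete, here is a back-of-the-envelope sketch of a batch curation job. The per-million-token prices are illustrative placeholders, not quoted rates from any provider, so substitute current pricing before leaning on the numbers.

```python
# Back-of-the-envelope cost comparison for a batch curation job.
# Prices are illustrative assumptions (USD per million tokens), not quoted rates.
PRICE_PER_M_TOKENS = {
    "frontier_api": {"input": 15.00, "output": 60.00},  # assumed premium Western API
    "budget_api":   {"input": 0.14,  "output": 0.28},   # assumed low-cost alternative
}

def job_cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD given per-million-token prices."""
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000

# Example workload: curating 500 articles, ~4k tokens in and ~1k tokens out per article.
tokens_in, tokens_out = 500 * 4_000, 500 * 1_000

for name, prices in PRICE_PER_M_TOKENS.items():
    print(f"{name}: ${job_cost(prices, tokens_in, tokens_out):.2f}")

ratio = (job_cost(PRICE_PER_M_TOKENS["frontier_api"], tokens_in, tokens_out)
         / job_cost(PRICE_PER_M_TOKENS["budget_api"], tokens_in, tokens_out))
print(f"cost ratio: ~{ratio:.0f}x")  # lands in the low hundreds under these assumptions
```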

Infrastructure: Building the Foundation for Scale

Most discussions about AI pit model against model, innovation against innovation, as if software floats in the void. The reality is deeply physical. AI needs both minds and machines working in sync: huge data centers, seamless energy flows, and roads for actual deployment.

China has reimagined the playbook here, and the infrastructure commitment is staggering.

It's infrastructure as a national strategy. While U.S. data center growth bumps into bottlenecks from grid permits and power shortages, China is building the physical backbone to support AI deployment at massive scale.

The Open Source Advantage

What strikes me most about this shift isn't just the technical capabilities; it's the strategy. While U.S. companies increasingly gate their best models behind expensive APIs, Chinese firms are embracing radical openness. DeepSeek's recent R1 model, which rivals OpenAI's o1 in reasoning capabilities, was released with open weights under a permissive license. The implications are profound.
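
That openness is practical, not just rhetorical: the weights sit on Hugging Face and drop into a standard transformers workflow. Here's a minimal sketch, assuming the distilled checkpoint deepseek-ai/DeepSeek-R1-Distill-Qwen-7B is still the published repo id and that you have a GPU with enough memory; treat the repo id and generation settings as assumptions to check against the model card.

```python
# Minimal sketch: load an open-weight distilled R1 checkpoint and ask a reasoning question.
# Assumes the Hugging Face repo id below is current and that torch + accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed repo id; check the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user",
             "content": "A train travels 120 km in 90 minutes. What is its average speed in km/h?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```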

As Nvidia's Jensen Huang observed, China's open-source AI movement serves as a "catalyst for global progress" and provides access to capabilities that help ecosystems worldwide. When a freelancer in Lagos can experiment with the same reasoning models as a researcher in Beijing, the distribution of AI capability fundamentally changes.

Culture: The Engine Behind the Numbers

What's really propelling this speed and scale? Having worked alongside teams from across Asia early in my career, I'm struck by how Chinese AI development isn't driven by individual ambition alone but by something more collective. There's a particular intensity to how problems get approached, a willingness to iterate relentlessly until something works.

The research supports these observations. Chinese tech workers average significantly more hours than their U.S. counterparts, but more importantly, they show different motivational patterns. The emphasis on long-term thinking, borrowed from traditions like "shilianjiu" (falling down nine times, getting up ten times), seems to permeate Chinese AI labs. This philosophy manifests as a willingness to test, break, rebuild, and test again with a patience that Western venture timelines rarely allow.

Distribution > Dominance

If there's a single lesson stretching across my experiments and industry research, it's this: AI's next chapter is about distribution, not just dominance. DeepSeek, Qwen, and others are "good enough" to be used anywhere. China's models may trail by percentage points on some benchmarks, but in actual adoption and cost, especially across the Global South, they're leapfrogging fast.

U.S. innovation still leads in raw research wattage; OpenAI, Google, and Anthropic are pushing boundaries. But as a user, I see the reality: friends abroad running up against paywalls, frustrating API limits, and glacial export controls. The American models can be brilliant, but for the majority, they aren't accessible or affordable at the scale they could (or should) be.

Risks and the Ongoing Experiment

None of this is without risk, and the stakes extend far beyond technology. When AI capabilities become concentrated in systems controlled by the Chinese government, questions of data sovereignty, surveillance, and information control become unavoidable. As Chinese models become the accessible choice for developers worldwide, their built-in constraints and biases propagate across countless applications, creating dependencies that run deeper than pure technology.

But the deeper concern isn't just about which country's models we use; it's about the pace of the race itself. The breathtaking speed of China's AI development, impressive as it may be, raises fundamental questions about whether we're moving too fast to ensure these systems remain aligned with human values. When the pressure to compete becomes overwhelming, corners get cut in the places that matter most: safety testing, ethical considerations, and the guardrails that keep AI systems serving humanity rather than surveilling it.

The risk of AI-enabled surveillance is particularly acute. These technologies, regardless of their origin, possess unprecedented capabilities to monitor, analyze, and predict human behavior. In the wrong hands, or even well-intentioned hands operating without sufficient oversight, they represent a threat to human autonomy that transcends geopolitical boundaries.

What we're witnessing isn't just a technological arms race, but a test of whether democratic societies can maintain their commitment to human dignity while competing with more centralized approaches. The organizational capacity on display from China is undeniably impressive, but the question remains: can we harness similar coordination for AI development without sacrificing the governance structures and human-centered values that make the technology worth building in the first place?

The answer to that question may determine not just who leads in AI, but what kind of future AI creates for all of us.
