Across Asia Pacific, the AI race is accelerating along a familiar axis: scale. Many recent model releases, from Tencent’s Hy3 Preview to DeepSeek’s V4, signal progress through size, compute intensity, and benchmark performance. The industry’s momentum appears firmly anchored in the belief that bigger models will define the next phase of AI.

But inside enterprises, a different question is emerging: one that is less about how powerful models become and more about whether they can be deployed safely, compliantly, and meaningfully within real-world constraints. In conversations with Hans Dekkers, General Manager of IBM Asia Pacific, a more grounded concern surfaces: data sovereignty.

“In Asia, when I look at the AI boom, sovereignty is mattering more than the models,” he says.

The Problem: When AI Models Meet Enterprise Reality

The disconnect begins with how enterprises operate. While the market celebrates general-purpose intelligence, companies are structured around highly specific workflows, regulatory requirements, and proprietary datasets. A single model often struggles to map cleanly onto that complexity.

Dekkers frames this as a structural mismatch rather than a technological limitation. “Winning in the AI era requires more than just tools,” he says. “It’s about a new mindset, new skills, and a new operating model.”

That operating model rarely revolves around a single system. Instead, enterprises find themselves navigating a fragmented AI landscape, where different models—global, regional, and internal—coexist but are rarely integrated. The assumption that one model could serve as the backbone of enterprise AI is eroding under the weight of real-world deployment and regional compliance.

Data Sovereignty: The Key to Unlocking Enterprise AI Potential

If the first challenge is structural, the second is regulatory. Across Asia Pacific, data sovereignty is an operational constraint shaping how AI can be used.

“99% of enterprise data is still untouched by AI,” Dekkers notes, pointing not to technical barriers but to hesitation around data exposure. Often, when asked to expose internal data to large model providers, enterprises arrive at a reluctant answer: “No.”

This reluctance is amplified by fragmented regulatory environments across the region, where data localization laws and compliance requirements vary significantly from one market to another. The traditional trade-off, centralizing for efficiency or localizing for compliance, is becoming increasingly untenable.

“We see that model is no longer viable,” Dekkers says. “The choice is not between compliance and innovation… it’s about maintaining control across your entire digital architecture.”

In this context, the value of a model is measured less by raw performance than by whether it can operate within the boundaries enterprises cannot afford to cross.

The Rise of Domain-Specific Systems

Rather than relying on a single, general-purpose model, enterprises are moving toward building and deploying multiple smaller, domain-specific systems.

Dekkers is explicit on this point. “I believe every client will have 100 to 200 of these models in the future,” he said, noting that for institutions like banks, these systems would span lending, trading, HR, and finance—each highly specialized, and all trained on enterprise data.

These models will be more aligned with specific workflows and designed to deliver accuracy within narrow domains. A media organization, for example, might operate separate models for research, editing, and publishing, each optimized for its function rather than generalized across tasks.

This marks a shift from AI as a centralized capability to AI as a distributed layer embedded across the organization. “You want these models to be 100% correct,” Dekkers adds. “You want them to be trained on your data… not on something irrelevant to your business.”

The Real Deal: Orchestrating Intelligence

As enterprises begin to operate multiple models across different environments, more challenges emerge:

How do enterprises ensure the right model is used for the right task? How do they maintain compliance when data cannot move freely across borders? And how do they integrate generic AI outputs into professional workflows?

This is where the competitive focus is shifting. When asked where large models such as DeepSeek V4 and Tencent’s Hy3 Preview fit into IBM’s vision, Dekkers noted that “AI innovation is becoming increasingly distributed and multipolar.” In response, IBM is positioning itself as a trusted, enterprise-grade orchestration platform.

The concept is a “bring your own model” environment where enterprises can deploy different models (whether from global providers, regional players like Tencent or Alibaba, or internal teams) within a single, governed system.
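The "bring your own model" idea can be illustrated with a minimal sketch. Everything below is hypothetical (the `ModelSpec` class, the `route` function, and the example registry are invented for illustration and do not represent IBM's actual platform or APIs); it simply shows how a governance layer might check workflow approval and data residency before any model is invoked.

```python
# Hypothetical sketch of a "bring your own model" router. All names here
# are illustrative, not a real vendor API. The governance layer checks
# data-residency and domain-approval rules before selecting a model.
from dataclasses import dataclass, field

@dataclass
class ModelSpec:
    name: str
    provider: str                    # "global", "regional", or "internal"
    data_regions: set = field(default_factory=set)  # regions where data may be processed
    domains: set = field(default_factory=set)       # workflows the model is approved for

def route(task_domain: str, data_region: str, registry: list) -> ModelSpec:
    """Return the first registered model approved for both the workflow
    and the region the data must stay in; fail closed otherwise."""
    for model in registry:
        if task_domain in model.domains and data_region in model.data_regions:
            return model
    raise PermissionError(
        f"No approved model for domain={task_domain!r} in region={data_region!r}"
    )

# Example registry: an internal lending model and a regional general model.
registry = [
    ModelSpec("internal-lending", "internal", {"SG", "ID"}, {"lending"}),
    ModelSpec("regional-llm", "regional", {"SG"}, {"research", "editing"}),
]

print(route("lending", "ID", registry).name)  # internal-lending
```

The design choice worth noting is that the router fails closed: if no model satisfies both the workflow and the residency constraint, the request is refused rather than silently falling back to a default model.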

“We allow clients to use the best tool for the job,” he says, “all managed within a secure and governed environment.” In this framework, orchestration becomes the central layer: the system that connects fragmented intelligence, enforces governance, and enables enterprises to scale AI without losing control.

The Future: An Operating System Under Human Rules

Looking ahead, Dekkers describes the future operating system of the enterprise as one that can “think, decide, and act across the institution, under human rules.” The underlying direction is concrete:

In a region still chasing hyperscalers, the more immediate challenge, especially for cross-border enterprises, is how to deploy AI safely, compliantly, and at scale. The services that truly empower these enterprises will be those that bridge this gap: translating cutting-edge AI capabilities into systems that let enterprises retain control over their data, adapt across regulatory boundaries, and orchestrate intelligence in a way that helps their businesses scale over the long run.