Wall Street may be debating the end of SaaS, but inside enterprises, a different problem is taking shape: AI that can think, but cannot be trusted to act. McKinsey’s recent report notes that 74% of respondents see inaccuracy as a highly relevant risk, and only about one‑third of organizations report mature governance for agentic AI.

ServiceNow is positioning its workflow‑driven AI platform as the control layer for that gap, introducing new systems to govern how AI agents operate across data, workflows, and external services. The company’s thesis runs counter to much of the current hype cycle. It asserts that the bottleneck in enterprise AI is the absence of structure around how next-gen AI models execute work inside organizations. Agents can generate insights, but they rarely carry them through to execution across disconnected systems. That disconnect, said Gaurav Rewari, EVP and GM of data and analytics at ServiceNow, has kept many AI initiatives trapped between experimentation and production.

“What we’re increasingly hearing from customers is that the first wave of AI did not come with the same level of control that enterprises expect. There’s been a lot of caution historically around human access to systems — but that same level of governance wasn’t extended to AI agents. Now there’s a backlash,” Rewari told me.

The company’s latest push aims squarely at that execution layer. It is introducing mechanisms such as a private MCP Registry to control which external services agents can access, an expanded Workflow Data Network to ensure data quality and observability at the point of use, and enhancements to its underlying RaptorDB Pro architecture to support real-time, high-volume agentic workloads without latency or duplication.

Together, these systems attempt to answer three questions most enterprises still cannot clearly address: what an AI agent is connected to, who authorized that access, and whether the system can handle the task it has been assigned under real-world constraints.

“Historically, IT built the systems and the department head owned their respective outcomes. That line is getting blurred. With agents taking on autonomous decisions and actions, you can’t separate the two the way you used to,” Chris Bedi, chief customer officer and enterprise AI advisor at ServiceNow, told me. “The fix is forcing the decision-rights conversation before you scale: Who approves what an agent can act on? How far does autonomy extend? And what triggers a human checkpoint?”
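Those three questions map naturally onto a per-agent policy record: a named approver, an autonomy ceiling, and a set of actions that always require a human checkpoint. A minimal sketch in Python (the class and field names here are hypothetical illustrations, not ServiceNow's API):

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Toy decision-rights record for one AI agent."""
    approver: str                 # who approves what the agent can act on
    max_autonomous_amount: float  # how far autonomy extends (dollar ceiling)
    checkpoint_actions: set = field(default_factory=set)  # always escalate these

    def requires_human(self, action: str, amount: float) -> bool:
        # A human checkpoint triggers on sensitive actions or amounts
        # above the agent's autonomy ceiling.
        return action in self.checkpoint_actions or amount > self.max_autonomous_amount

policy = AgentPolicy(
    approver="cfo",
    max_autonomous_amount=1_000.0,
    checkpoint_actions={"close_account"},
)

print(policy.requires_human("issue_refund", 250.0))    # within autonomy
print(policy.requires_human("issue_refund", 5_000.0))  # above the ceiling
print(policy.requires_human("close_account", 0.0))     # always checkpointed
```

The point is not the code but the discipline: autonomy becomes a recorded, reviewable setting rather than an implicit default.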

The announcements arrive on the heels of the company’s first‑quarter 2026 earnings report. Subscription revenue grew 22% year‑over‑year to $3.671 billion, while total revenue rose 22% to $3.77 billion. The company also raised its full‑year 2026 subscription‑revenue outlook from roughly $15.55 billion to $15.735 billion, citing strong AI‑driven demand, with its AI‑related product line (Now Assist) on track to approach $1 billion in annual contract value by 2026.

Despite strong operating performance, shares fell as much as 14% in after-hours trading, reflecting persistent investor anxiety about AI-driven disruption to traditional SaaS pricing models. Since then, the stock has staged a modest rebound into early May 2026, suggesting renewed confidence that its deep integration within customer workflows remains a durable advantage.

“The models are advancing quickly, but they were never the real constraint. What’s holding companies back is everything underneath: fragmented data, unclear governance, and the lack of an execution layer to turn insight into action,” said Bedi. “Despite record investment, enterprise AI maturity declined because organizations can’t operationalize what the models produce. The risk of inequality is real, because it comes down to who can connect AI to how the business actually runs.” He noted that enterprises trying to stitch together data pipelines, governance, and workflows on their own often see only a small fraction of their initiatives reach production scale.

The Emergence of Institutional Intelligence

The new capabilities are supported by ServiceNow’s proprietary Now LLM along with additional third-party models such as Azure OpenAI, Google Gemini, and Claude. Central to the platform’s revamped approach is the Context Engine, which executives describe as a “graph of graphs” — a system designed to encode business meaning directly into every AI-driven decision. By linking current workflows, policies, and historical decisions into a unified semantic layer, the platform enables agents to reason within the organization's structure rather than operate as isolated tools. Over time, this context compounds, turning each workflow and decision into training data for how the business actually runs.

Rewari framed the Context Engine as the connective tissue that turns fragmented enterprise signals into something AI can actually act on. “The Context Engine brings together the knowledge graph, decision graph, action graph, and more to provide a 360-degree view that informs actions,” he said. The architecture, he argued, is less about aggregating data and more about encoding how a business operates — linking decisions to policies, actions to outcomes, and systems to one another in real time. “Systems are only as good as the data underneath them,” Rewari said, pointing to the company’s parallel investment in Workflow Data Fabric. This layer is designed to ensure that the data feeding those decisions is not just accessible, but governed, observable, and trustworthy at scale. He asserts that the platform’s scale provides a foundation for this approach, as it processes more than 95 billion workflows and 7 trillion workflow transactions annually, generating a continuous stream of operational data that feeds into the Context Engine.
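One way to picture a "graph of graphs" is a set of typed edges that link decisions to the policies that governed them and the actions they produced, so an agent can traverse from a proposed action back to its justification. A purely illustrative sketch (the node and relation names are invented, and a production semantic layer would add ontology, permissions, and persistence):

```python
from collections import defaultdict

class ContextGraph:
    """Toy semantic layer: nodes are business entities, typed edges
    connect workflows, policies, decisions, and actions."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node)]

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def neighbors(self, node: str, relation: str) -> list:
        # Follow only edges of the requested type.
        return [dst for (rel, dst) in self.edges[node] if rel == relation]

g = ContextGraph()
g.link("workflow:returns", "produced", "decision:refund-123")
g.link("decision:refund-123", "governed_by", "policy:refunds-under-500")
g.link("decision:refund-123", "resulted_in", "action:issue-credit")

# An agent asked to approve a similar refund can check which policy
# governed the precedent before acting.
print(g.neighbors("decision:refund-123", "governed_by"))
```

The value compounds exactly as the executives describe: every new workflow adds edges, and each edge makes the next traversal more informative.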

“The data catalog we’re introducing is effectively an encyclopedia for all your data and analytics assets,” he added, describing it as a unifying reference point that brings structure and meaning to otherwise disconnected data estates. The long-term direction, Rewari suggested, is toward autonomy. “This is moving toward what we call autonomous data governance,” he said. “If an issue is detected, an agentic workflow can help remediate it automatically.”

The era of “sidecar AI,” where intelligence hovers alongside systems, is giving way to AI-native operations where reasoning and execution are tightly coupled. ServiceNow is aligning itself with that shift by embedding AI into the core fabric of its platform. The transition comes at a moment when confidence in enterprise software is increasingly unsettled. The so-called SaaSpocalypse — the theory that increasingly capable AI agents and models could erode the need for traditional software — has weighed on valuations, even as companies continue to deliver strong growth. But the narrative also overlooks a more fundamental dynamic.

While AI lowers the barrier to creating software-like functionality, it significantly raises the bar for operating that functionality reliably at scale. What appears to simplify development introduces a new class of complexity beneath the surface — around data integrity, access control, auditability, and real-time decision-making that must hold up under runtime and production conditions.

Opening Development While Preserving Control

Another addition unveiled is the MCP Registry, a private, enterprise-grade catalog for AI connections built on the Model Context Protocol. The registry introduces a governed environment where AI agents can only connect to approved systems, with permissions scoped and actions fully auditable. In effect, it extends identity and access management principles, long applied to human users, to AI agents.

Many organizations enforce strict controls over employee access to sensitive systems, yet apply far fewer controls to AI agents that increasingly interact with those same systems. As agents begin to execute tasks autonomously, that inconsistency becomes a risk. The MCP Registry operates in conjunction with the company’s broader governance framework, including its AI Control Tower and the newly introduced AI Gateway.
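In practice, a private registry of this kind reduces to three primitives: an allowlist of approved servers, scoped grants per agent, and an audit trail for every authorization check. A minimal sketch under those assumptions (class and scope names are hypothetical, not ServiceNow's or the MCP specification's API):

```python
from datetime import datetime, timezone

class PrivateRegistry:
    """Toy private MCP-style registry: agents may reach only approved
    servers, and only with the scopes explicitly granted to them."""

    def __init__(self):
        self.approved = {}   # server -> set of allowed scopes
        self.grants = {}     # (agent, server) -> set of granted scopes
        self.audit_log = []  # every check, allowed or denied

    def approve_server(self, server: str, scopes: list) -> None:
        self.approved[server] = set(scopes)

    def grant(self, agent: str, server: str, scopes: list) -> None:
        # Grants can never exceed what the server approval allows.
        allowed = self.approved.get(server, set())
        self.grants[(agent, server)] = set(scopes) & allowed

    def authorize(self, agent: str, server: str, scope: str) -> bool:
        ok = scope in self.grants.get((agent, server), set())
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "server": server, "scope": scope, "allowed": ok,
        })
        return ok

registry = PrivateRegistry()
registry.approve_server("crm-mcp", ["read:accounts", "write:notes"])
# The extra "delete:accounts" request is silently clipped to the approval.
registry.grant("billing-agent", "crm-mcp",
               ["read:accounts", "write:notes", "delete:accounts"])

print(registry.authorize("billing-agent", "crm-mcp", "read:accounts"))
print(registry.authorize("billing-agent", "crm-mcp", "delete:accounts"))
```

This is the same identity-and-access pattern long applied to human users, with the audit log supplying the "who authorized that access" answer.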

The platform is also expanding its Workflow Data Network to bring external partners such as IBM and Boomi into a unified framework for data quality, observability, and privacy. Traditional models emphasized data centralization, often described as “data gravity.” ServiceNow is advancing a different concept through zero-copy connectors and real-time integration. The platform retrieves and processes only what is necessary, reducing latency while preserving existing investments.

“We are not in the business of fighting for data gravity. In fact, we explicitly say — let the data stay where it is. Customers have already invested heavily in platforms like Snowflake, Databricks, Oracle, BigQuery, and we integrate with all of them. What matters to us is what we call knowledge gravity,” said Rewari. “If we can tap into those systems, extract the insights, and combine that with our system of action, then we can bring insights and actions together without forcing customers to centralize everything into one place. That’s why we’ve built zero-copy connectors. The goal is not to create lock-in, but to create a layer that works across an existing ecosystem.”

Supporting this layer of intelligence are new capabilities, such as Live Connect and Live Archive, which let organizations query live operational data and historical records within a single environment, while multi-modal processing supports complex workloads involving graph and time-series data. This convergence is particularly important for agentic AI, which requires immediate access to both current and historical context.

Likewise, new software development kits and Build Agent capabilities allow developers to create AI-driven workflows using tools from providers such as OpenAI and Anthropic, then deploy those applications directly into the ServiceNow platform. According to the company, what distinguishes this approach is that governance is embedded by default within every application.

Why Enterprise AI Platforms Matter More as AI Agents Get Smarter

Market narratives increasingly suggest that more capable AI agents from companies such as Anthropic and OpenAI could allow enterprises to bypass traditional software vendors and build directly on foundation models. ServiceNow executives argue the opposite dynamic is taking shape. As AI systems grow more powerful, organizations rely more, not less, on structured platforms to manage complexity. Rewari framed it as a coordination problem, arguing that AI increases the need for enterprise systems by adding new layers of orchestration, control, and accountability that only integrated platforms can handle.

“What enterprises need is air traffic control — coordinating thousands of processes and ensuring safe operations at scale,” he said. “I do think there is some level of misunderstanding in the market about what it actually takes to operationalize enterprise-grade AI. It’s far more complex than just deploying models.”

Competitors are moving in a similar direction. Data platforms such as Snowflake, Databricks, Microsoft (with Fabric), and Informatica are building AI-ready data fabrics that excel at consolidating data, metadata, and analytics for model consumption. Meanwhile, agentic orchestration platforms like IBM WatsonX, Salesforce Einstein, and UiPath are advancing multi-agent coordination, reasoning, and compliance layers that sit atop existing systems.

But across these categories, most platforms either excel at data and intelligence or at orchestration, rarely both within a single, deeply integrated system of execution. This is where ServiceNow is attempting to differentiate. Its ecosystem strategy centers on integrating data platforms into a centralized control plane, effectively sitting above the stack while remaining embedded in how work actually gets done.

The debate around the SaaSpocalypse may persist, but enterprises are not walking away from software — they are pushing it to become more integrated, accountable, and execution-ready. Bedi argues that what changes with agentic AI is not just capability, but speed and blast radius. The more useful framing, he suggests, is to treat AI agents like employees. “You would not place them in decision-making roles without clear scope, performance monitoring, and regular reviews. The same discipline applies to agents — onboard them into controlled workflows, define their autonomy, and monitor performance before issues scale.”