Docker Turns The Developer Laptop Into A Governed AI Runtime
The developer laptop has quietly become the most exposed node in the enterprise, and most security stacks cannot see what is happening on it. Docker is trying to change that. On May 12, the company announced Docker AI Governance, a control plane that lets security teams set runtime policy for AI agents from a single console and propagate it to every machine where an agent runs, including the laptops sitting outside the corporate perimeter.
The framing matters as much as the product. Docker argues that AI agents running on developer machines have effectively become production systems, reaching private repositories, production APIs, customer records and the open internet, often inside the same session and using the developer's own credentials. Continuous integration tooling does not see this activity because the agent is not a pipeline. The virtual private cloud does not see it because the laptop is outside the perimeter. Identity and access management does not see it because the agent is acting as the developer. The gap is the story.
That gap is widening quickly. Model Context Protocol, the open interface for connecting agents to external tools, has moved from a year-old standard to enterprise default in a short window. One industry analysis published last week pegs MCP adoption at around 78% inside production AI teams, with more than 9,400 servers in the public registry. Every one of those endpoints is a tool an agent can call, and most enterprises have not yet decided who is allowed to call what.
What Docker AI Governance actually does
Docker AI Governance covers four control surfaces from one admin console: network, filesystem, credentials and MCP tool access. Administrators define allow and deny rules for domains, IP ranges and filesystem paths. They set read-only or read-write scopes for mounts. They approve which MCP servers and tools are available organization-wide, with unapproved servers blocked by default. Every policy decision generates a structured event with user identity, timestamp, session context and the rule that triggered the outcome, and logs export to existing SIEM and compliance systems.
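To make the model concrete, here is a minimal sketch of what a runtime policy decision plus its structured audit event might look like. The schema is hypothetical; Docker has not published its policy format, and the field names, deny-over-allow precedence and default-deny behavior here are assumptions drawn from the description above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import fnmatch
import json

@dataclass
class Policy:
    # Hypothetical schema -- not Docker's actual policy format.
    allowed_domains: list
    denied_domains: list
    readonly_paths: list
    approved_mcp_servers: list

def evaluate_domain(policy: Policy, domain: str):
    # Assumed semantics: deny rules win over allow rules,
    # and anything unmatched falls through to a default deny.
    for pattern in policy.denied_domains:
        if fnmatch.fnmatch(domain, pattern):
            return "deny", pattern
    for pattern in policy.allowed_domains:
        if fnmatch.fnmatch(domain, pattern):
            return "allow", pattern
    return "deny", "default-deny"

def audit_event(user, session, resource, outcome, rule):
    # Structured event carrying the fields the product description lists:
    # identity, timestamp, session context and the triggering rule.
    return json.dumps({
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session": session,
        "resource": resource,
        "outcome": outcome,
        "rule": rule,
    })

policy = Policy(
    allowed_domains=["*.github.com", "registry.npmjs.org"],
    denied_domains=["*.internal.corp"],
    readonly_paths=["/src"],
    approved_mcp_servers=["github-mcp"],
)
outcome, rule = evaluate_domain(policy, "api.github.com")
print(outcome)  # allow
```

The point of the structured event is that every allow and deny lands in the SIEM with enough context to reconstruct a session after the fact.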
The enforcement model is what separates this from earlier MCP gateway products. Agent sessions run inside microVM-based sandboxes, the same primitive Docker first shipped in January with Docker Sandboxes, and every tool call routes through the Docker MCP Gateway before reaching an external system. Policy lives at the runtime layer, not as advisory rules layered on top, and propagates automatically through existing single sign-on and SCIM provisioning flows when a developer authenticates.
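The gateway-as-chokepoint idea can be sketched in a few lines. This is a hypothetical stand-in for the Docker MCP Gateway's decision point, not its real API: the only behavior taken from the announcement is that unapproved servers and tools are blocked by default before any call leaves the sandbox.

```python
class PolicyViolation(Exception):
    """Raised when a tool call fails the approval check."""

class GatewaySketch:
    # Hypothetical model of a gateway chokepoint; class and method
    # names are illustrative, not Docker's API.
    def __init__(self, approved_tools):
        # approved_tools maps an MCP server name to its approved tool set.
        self.approved_tools = approved_tools

    def call(self, server, tool, args):
        tools = self.approved_tools.get(server)
        if tools is None:
            # Blocked by default: the server was never approved.
            raise PolicyViolation(f"MCP server not approved: {server}")
        if tool not in tools:
            raise PolicyViolation(f"Tool not approved: {server}/{tool}")
        return self._forward(server, tool, args)

    def _forward(self, server, tool, args):
        # In the real product the call would leave the sandbox here;
        # this sketch just echoes the routed request.
        return {"server": server, "tool": tool, "args": args}

gateway = GatewaySketch({"github-mcp": {"read_repo", "create_pr"}})
print(gateway.call("github-mcp", "read_repo", {"repo": "acme/app"}))
```

Because the agent has no network path except through the gateway, the check is enforced rather than advisory, which is the distinction the paragraph above draws.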
The product is generally available now, with no preview gating, and pricing is handled through Docker's enterprise sales channel rather than the published Business tier.
The structural argument and its limits
Docker's pitch is that AI agent governance belongs to whoever owns the runtime that executes the agent. Endpoint security tools do not extend into clusters. Cluster security tools do not reach the laptop. Cloud security tools run in neither place. Docker covers all three because Docker is what is actually running the agent in all three, with the same sandbox primitive on the developer machine, inside Kubernetes and across cloud environments.
The argument is structurally sound, but it deserves hedging. A crowded field of MCP gateway vendors makes overlapping claims, including Bifrost, Cloudflare AI Gateway, Kong and Azure API Management, each optimized for different deployment patterns. Cloudflare leans on its existing edge network and is targeted at enterprises that already run on Cloudflare One. Kong is the default for organizations already standardized on Konnect for API governance. Bifrost emphasizes in-VPC deployments and air-gapped environments. None of them control the developer laptop the way Docker does, which is the structural point Docker is making, but several offer broader gateway functionality that Docker does not match today.
The competitive pressure from hyperscalers is the harder problem. AWS, Google Cloud and Microsoft are racing to build their own agent registries and governance layers tied to their identity and compute platforms. Docker's bet is that the runtime layer wins because it is the only layer that exists everywhere the agent runs. The hyperscalers are betting on the catalog and identity layers. Which layer enterprises commit to first will shape which vendor sets the policy model for the rest.
What CXOs should take from this
The honest test for any AI governance discussion inside an enterprise is whether someone can answer three questions today. What did an agent touch in the last hour? What credentials did it use? Where did the data go?
Most CISOs cannot answer any of those with confidence, because the agent is operating in a blind spot that traditional security tools were not built to cover. Tolerating that gap was tenable when agents were autocompleting functions. It is not tenable when agents are shipping code to main, sending emails on behalf of finance teams and querying production systems on behalf of sales.
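If structured audit events like the ones described earlier actually land in a SIEM, the first of those three questions becomes a trivial query. A minimal sketch, assuming a hypothetical event shape with `resource` and ISO-8601 `timestamp` fields (not a format Docker has published):

```python
from datetime import datetime, timedelta, timezone

def touched_last_hour(events, now=None):
    # Answers "what did an agent touch in the last hour?" from a
    # stream of structured audit events. Field names are assumptions.
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=1)
    return sorted({
        e["resource"] for e in events
        if datetime.fromisoformat(e["timestamp"]) >= cutoff
    })

now = datetime(2026, 2, 1, 12, 0, tzinfo=timezone.utc)
events = [
    {"resource": "api.github.com", "timestamp": "2026-02-01T11:30:00+00:00"},
    {"resource": "prod-db.internal", "timestamp": "2026-02-01T09:00:00+00:00"},
]
print(touched_last_hour(events, now))  # ['api.github.com']
```

The credentials and data-destination questions reduce to the same pattern over different event fields, which is why the audit schema matters as much as the enforcement.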
Three takeaways are worth the boardroom conversation. First, the developer laptop should be treated as production infrastructure for governance purposes, regardless of where it physically sits, because the credentials and access paths available to an agent running on it are production-grade. Second, runtime-level enforcement is meaningfully harder to bypass than advisory policy layers, which means platform decisions about which runtime executes the agent now carry security weight that container choices a decade ago did not. Third, MCP tool catalogs need an approval workflow today, not after the first incident, because every unapproved server is a credential disclosure waiting to happen.
Docker AI Governance does not solve agent risk on its own, and the product will have to prove its propagation model and audit fidelity at scale before security leaders sign off on the broadest agent deployments. What the launch does is force the right argument into the open. The laptop is the new production. The agent is the new workload. The runtime is the new control plane. Enterprises that accept that framing now will spend the next year choosing vendors. Those that wait will spend it explaining incidents.