The Agentic Control Plane War Just Got Real — And It's the Wrong Abstraction

Snowflake and Oracle both declared themselves the agentic control plane this week. But real enterprises have Postgres, SQL Server, Snowflake, and Oracle — and agents need a neutral API layer, not another vendor platform.

Two announcements landed within 24 hours of each other this week. Both are land grabs. Both position the same thing — a database platform — as the “control plane for the agentic enterprise.” If you squinted, you’d think it was the same press release with different logos.

On April 21, Snowflake expanded Snowflake Intelligence and Cortex Code to “power the control plane for the agentic enterprise.” Cortex Code — its AI coding agent launched in late 2025 — now reaches out of Snowflake itself and into AWS Glue, Databricks, and PostgreSQL. The pitch: developers can build agentic applications without migrating data.

On April 22, Oracle announced the Oracle AI Database Agent for Gemini Enterprise. Business users ask questions in plain English; Oracle handles natural-language-to-SQL translation and execution; Gemini Enterprise handles the UX and agent invocation. OAuth scopes every query to the user’s own database identity. Available immediately on Google Cloud Marketplace for Oracle Autonomous Database customers, with wider rollout planned this summer across 15 regions.

Separately, these are competent launches. Together, they expose the shape of the fight: every database vendor on earth now wants to be the single surface your agents talk to. The assumption underneath — that a “control plane” should live inside a database platform — is the part worth questioning.

What both announcements get right

Start with what’s genuinely useful here.

Oracle’s OAuth-scoped agent is the right security posture. Queries run as the invoking user’s database identity. Results are bounded by the schemas, tables, and row-level policies the DBA has already scoped. That’s the model enterprise security teams have been asking for since the MCP auth spec clarified OAuth 2.1 requirements earlier this month. Pinning agent access to human user identity — not a shared service account — is how you get auditability that actually holds up in a compliance review.

Snowflake’s Cortex Code expansion into Postgres, Glue, and Databricks concedes something important: no enterprise has all its data in one warehouse. Half of Snowflake’s own customers use Cortex Code, and clearly many of them are asking to connect to systems Snowflake doesn’t run. Reaching out to Postgres instead of insisting you migrate first is the right call commercially. It’s also the right call technically — the data gravity battle was decided years ago, and “get everything into my warehouse” is a losing prerequisite for agent tooling.

Both vendors also got the framing right, at least rhetorically. “Control plane” is the correct mental model. Agents need a place where tool discovery, auth, governance, audit, and invocation all converge. Without that, every team invents its own wiring, every agent ships with hardcoded credentials, and nobody can tell you what an agent touched last Tuesday at 2:14 PM.

The question isn’t whether enterprises need an agentic control plane. They do. The question is whether your warehouse vendor should be it.

Why a database vendor can’t be the neutral control plane

Here’s the structural problem. A control plane is, by definition, the thing every agent talks to. If it lives inside Snowflake, it can serve Snowflake data brilliantly and non-Snowflake data awkwardly. If it lives inside Oracle — even Oracle-via-Gemini-Enterprise — it can serve Oracle data brilliantly and non-Oracle data not at all.

Look at the plumbing. The Oracle AI Database Agent “gives authorized users a secure way to query Oracle AI Database running as Oracle AI Database@Google Cloud in plain English from Gemini Enterprise.” Unpack the qualifiers:

  1. Users authorized on this Oracle database
  2. Running Oracle AI Database (not standard Oracle)
  3. At Google Cloud (not Oracle Cloud, not on-prem)
  4. From Gemini Enterprise (not Claude, not ChatGPT, not an internal agent)

That’s four layers of vendor coupling on a single query. The agent is technically impressive. It’s also unusable for the 80% of your agent workload that doesn’t fit those four predicates.

Cortex Code is more open — it crosses into Postgres and Databricks — but it still runs from Snowflake, bills from Snowflake, logs into Snowflake’s audit trail, and assumes Snowflake is the primary. If you’re a Snowflake-first shop, that’s a fine deal. If you’re a shop with a Postgres OLTP tier, a Snowflake analytics tier, a SQL Server legacy tier, and an Oracle system-of-record, Cortex Code is another tool your platform team has to integrate alongside four other tools that all think they’re the center of the universe.

This is how we got to the present state, where 97% of enterprise AI workloads touch a database yet only 15% of the databases they touch are ready for agent access. The gap isn’t that vendors haven’t shipped MCP servers. It’s that every MCP server assumes its platform is the destination, and no single platform owns enough of the enterprise data to make that assumption true.

The agentic control plane is an API, not a platform

Here’s the inversion worth taking seriously: a control plane shouldn’t be a database platform at all. It should be a neutral API layer — one that exposes any database as both REST and MCP, enforces a single RBAC model across all of them, and ships audit logs to one place regardless of which backend the agent touched.

That’s the architecture where the “control plane” metaphor actually holds up. When a new agent arrives — Cursor, Claude Code, an internal agent, whatever Gemini Enterprise looks like in six months — it registers with the API layer once. It gets scoped credentials from the API layer. Every tool call goes through the API layer. Every audit event comes out of the API layer. The database vendors go back to being what they always were: fast, reliable, domain-optimized storage.

This is why the Faucet approach looks so boring on paper and so interesting in practice. You point one binary at Postgres, Snowflake, SQL Server, or Oracle. You get the same REST surface, the same MCP server, the same RBAC model, the same audit trail. The database vendor choice becomes an implementation detail instead of a control plane decision.

Here’s what “register a new database” looks like in Faucet:

# Register a Postgres OLTP database
faucet database add \
  --name orders \
  --type postgres \
  --dsn "postgres://reader:$PASS@orders-db.prod:5432/orders?sslmode=require"

# Register a Snowflake analytics warehouse
faucet database add \
  --name analytics \
  --type snowflake \
  --dsn "snowflake://svc_user@ACCT/ANALYTICS/PUBLIC?warehouse=AGENTS_WH&role=AGENT_READ"

# Register an Oracle system-of-record
faucet database add \
  --name erp \
  --type oracle \
  --dsn "oracle://app_user:$PASS@erp.prod:1521/ERPSVC"

Same command. Same RBAC pipeline. Same MCP output. The agent never learns the difference between orders, analytics, and erp — it just calls list_records or get_record against whichever dataset it was scoped to.

Attach role-based permissions once and they apply everywhere:

faucet role create analyst \
  --allow "orders.public.*:read" \
  --allow "analytics.reporting.*:read" \
  --deny "erp.finance.payroll:*"

faucet user create agent-claude \
  --role analyst \
  --expires 7d
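
The same boundary shows up on the REST side. This is a hedged sketch, not documented behavior: the URL paths and status codes are assumptions about Faucet’s REST layout, but the point stands that one token carries one role across every backend.

# Hypothetical REST calls with the agent-claude token; paths are illustrative
curl -s -H "Authorization: Bearer $AGENT_TOKEN" \
  "$FAUCET_URL/v1/orders/public/customers?limit=100"
# -> 200: the analyst role may read orders

curl -s -H "Authorization: Bearer $AGENT_TOKEN" \
  "$FAUCET_URL/v1/erp/finance/payroll/42"
# -> 403: denied by the analyst role's payroll rule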

The audit trail is one stream, not three:

faucet audit tail --user agent-claude --last 1h
# 2026-04-23T14:02:18Z agent-claude list_records orders.public.customers limit=100 rows=100 [OK]
# 2026-04-23T14:02:19Z agent-claude list_records analytics.reporting.daily_sales filter=date=today rows=1 [OK]
# 2026-04-23T14:02:21Z agent-claude get_record erp.finance.payroll/42 [DENIED: role 'analyst']

That last line is the thing you can’t get from a warehouse-centric control plane. When the agent tries to touch payroll from an Oracle system via a tool it discovered somewhere, the audit shows the denial with the rule that triggered it — and it shows it in the same stream as the Postgres and Snowflake calls. One place, one format, one query to answer “what did this agent touch and what was it blocked from.”

The MCP angle: why neutrality matters more now

MCP adoption is past the “will it stick” phase. The April 2026 MCP Dev Summit in New York drew roughly 1,200 attendees. AWS, Google, and Cloudflare have all doubled down. The MCP auth spec is settling. Red Hat shipped a developer preview of an MCP server for RHEL earlier this month. zMaticoo, of all companies, launched an MCP layer on April 21 to expose its ADX/DSP business data to LLMs. The protocol is everywhere.

That maturity creates a new problem. A year ago, shipping any MCP server was a differentiator. Today, every data vendor ships one, and enterprises are discovering that running 10 MCP servers (one per vendor) is just as bad as running 10 REST APIs (one per vendor) — maybe worse, because each one has its own auth model, its own tool naming convention, its own idea of pagination, and its own audit format.

The 2026 MCP roadmap explicitly names governance maturation and enterprise readiness as priorities. The pain points enterprises actually hit are audit trails, SSO-integrated auth, gateway behavior, and configuration portability. That is a list of things a database vendor can’t solve alone — because those concerns live across databases, not inside any one of them.

If your agent needs to read a customer record from Postgres and cross-reference the payment history in Snowflake and check the ERP balance in Oracle — a completely ordinary three-system question — the control plane question is: whose audit log captures that full transaction? Whose RBAC denied the ERP read? Whose auth token did the agent present at each hop? If the answer is “three different control planes, stitched together by a platform team in their spare time,” you don’t have a control plane. You have a spreadsheet of IOUs.
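
With a single layer, that question has a one-command answer. A sketch reusing the audit command from earlier: the output lines are illustrative, and the ERP denial assumes a default-deny posture for anything outside the analyst role’s allow list.

# One query covers the whole cross-system transaction (output illustrative)
faucet audit tail --user agent-claude --last 5m
# 2026-04-23T15:10:02Z agent-claude get_record   orders.public.customers/8812 [OK]
# 2026-04-23T15:10:03Z agent-claude list_records analytics.reporting.payments filter=customer=8812 rows=14 [OK]
# 2026-04-23T15:10:04Z agent-claude get_record   erp.finance.balances/8812 [DENIED: role 'analyst']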

What to look for over the next 90 days

A few predictions worth writing down so we can check them later:

Databricks will announce something nearly identical to Snowflake’s Cortex Code expansion. They have to. Being the control plane is an asymmetric race — the first to ship a credible cross-vendor agent story gets to frame the market. Expect Databricks’ version to reach into Snowflake specifically, because why not.

Microsoft will position Data API Builder (DAB) + Fabric as the Microsoft control plane answer. Last week’s DAB MCP server announcement was step one. The convergence pitch writes itself: one API layer, one MCP surface, Fabric on top for orchestration. The risk for Microsoft is that DAB’s backend coverage stops at the SQL Server family, Cosmos DB, PostgreSQL, and MySQL, so enterprise shops with Oracle or Snowflake tiers will remain unmoved.

At least one vendor will explicitly position against the “warehouse as control plane” framing. The opening is obvious. Every hyperscaler has a reason to want the control plane to live in their agent platform (Bedrock, Vertex, whatever Azure calls it this quarter) rather than in a warehouse that could get acquired by a competitor. Watch Google and AWS in particular.

The MCP gateway category will consolidate fast. There are maybe six credible “MCP gateway” products right now. By Q3 it’s two or three, and they’ll all have pivoted to calling themselves “agentic control planes.” The word “gateway” is about to disappear from marketing pages the same way “API management” quietly became “API platform” a decade ago.

The boring bet

Snowflake and Oracle both told a compelling story this week about agents and data converging on their platforms. They’re right that convergence is coming. They’re wrong about the location.

The convergence point isn’t a database. It isn’t even a warehouse that has graciously agreed to talk to other databases. It’s the thin, neutral layer that sits between your agents and every data source you have — the layer that doesn’t care which vendor you paid last quarter or plan to pay next quarter. That layer is an API.

The control plane that actually works is the one you can replace the database under without telling your agents. That’s the test. If swapping Postgres for Snowflake would require your agents to relearn their tool catalog, you don’t have a control plane. You have a dependency.
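
A sketch of that test, with one loud caveat: only database add appears above, so the update subcommand and its flags here are hypothetical, not documented Faucet CLI.

# Hypothetical: re-point the "orders" dataset from Postgres to a Snowflake copy
# (subcommand and flags are assumptions modeled on "faucet database add")
faucet database update \
  --name orders \
  --type snowflake \
  --dsn "snowflake://svc_user@ACCT/ORDERS/PUBLIC?warehouse=AGENTS_WH&role=AGENT_READ"

# Agents keep calling list_records and get_record against "orders" unchanged:
# same tool catalog, same role, same audit stream.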

Getting Started

Faucet is an open-source database-to-API generator. Point it at Postgres, MySQL, SQL Server, Snowflake, SQLite, or Oracle and you get a single-binary REST + MCP server with RBAC, OAuth, OpenAPI 3.1, and unified audit logging. No migrations, no vendor lock-in, no warehouse required.

curl -fsSL https://get.faucet.dev | sh

Then connect your first database:

faucet database add --name mydb --type postgres \
  --dsn "postgres://user:pass@host:5432/db"

faucet serve --addr :8080

Your database is now a REST API at http://localhost:8080 and an MCP server at http://localhost:8080/mcp. Add another database — different vendor, same command — and your agent catalog grows without your agent code changing.
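
For example, adding a second backend and checking what the MCP server now exposes might look like this. The database add command mirrors the earlier examples; tools/list is a standard MCP method, though the exact response Faucet returns is not shown here.

# Add a second backend with the same command, different --type
faucet database add --name warehouse --type snowflake \
  --dsn "snowflake://svc_user@ACCT/ANALYTICS/PUBLIC?warehouse=AGENTS_WH"

# Ask the MCP server which tools it exposes (initialize handshake omitted)
curl -s http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'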

The control plane is the API. The database is a backend. Treat them that way and the next vendor announcement stops being an existential question about your architecture and starts being what it should always have been: just another connector to add.