Google Cloud Next 2026 wrapped yesterday in Las Vegas, and the headline-grabbing announcements landed the way you’d expect. Vertex AI is gone — renamed to the Gemini Enterprise Agent Platform. Agentspace is absorbed into Gemini Enterprise as a unified product. The A2A protocol hit version 1.2 with cryptographically signed agent cards, now governed by the Linux Foundation’s Agentic AI Foundation, with 150 organizations running it in production (not pilot). ServiceNow got crowned 2026 Partner of the Year across four categories and shipped a joint autonomous-operations solution spanning 5G networking, retail, and IT using A2A, A2UI, and MCP together.
The announcement that actually changes what enterprise architects do next week wasn’t any of those. It was this one line in the Apigee press material: Apigee now functions as an MCP bridge, translating any standard API into a discoverable agent tool with existing security and governance controls.
One sentence. Enormous downstream consequences. And it exposes a prerequisite that every vendor selling agent platforms has quietly been avoiding.
What Apigee-as-MCP-bridge actually does
Apigee has been Google’s API management layer for over a decade. Proxies, rate limits, OAuth, quota, analytics, developer portal — the standard API gateway stack. Enterprises with mature API programs already pipe traffic through it. That’s the point.
What Apigee now does in addition: for any API it fronts, it can publish an MCP tool description. The existing OpenAPI spec, the existing OAuth scopes, the existing rate limits and quotas — all of that becomes the MCP tool’s behavior. Claude, Gemini, ChatGPT, or an internal agent discovers the tool via MCP, authenticates against the same OAuth flow human developers use, and invokes the API under the same policies your API team already enforces.
Concretely, that means:
- You don’t write a custom MCP server per API. Apigee generates it.
- You don’t build a separate auth layer for agents. OAuth scopes carry through.
- You don’t maintain two audit trails. Apigee’s existing logs capture agent calls with the same structure as human developer calls.
- You don’t define new rate limits. The existing quotas apply.
This is the right architecture. Agents are just another API consumer. Treating them as a new species — with bespoke MCP servers, custom auth, separate governance — was always a distortion driven by “we need to ship MCP fast” rather than “this is what makes sense structurally.” Apigee-as-MCP-bridge corrects course: the agent protocol sits on top of the API layer, not beside it.
The CIO’s mental model becomes a single sentence: everything that’s already an API is now an agent tool.
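To make the mechanics concrete, here is roughly what an agent's first interaction looks like once a bridge publishes a tool. The endpoint URL below is hypothetical, and Google hasn't published the exact shape of Apigee's MCP output — but tools/list is the standard MCP discovery call, so the sketch is close:

# Discover what tools the bridge exposes (standard MCP tools/list over JSON-RPC).
# The URL is illustrative -- wherever the bridge publishes its MCP endpoint.
curl -s https://api.example.com/mcp \
  -H "Authorization: Bearer $OAUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
# Response (sketch): one tool per fronted API operation, with the OpenAPI
# parameter schema carried over as the tool's inputSchema, and the same
# OAuth scopes enforced on every subsequent call.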
The prerequisite problem
Read that sentence again. Everything that’s already an API is now an agent tool.
Now ask the uncomfortable question: what percentage of your enterprise data is actually behind a clean, versioned, OAuth-scoped REST API right now?
If you work at an enterprise with a mature API program, the answer is higher than most: maybe 30-40% of operational data. At a typical enterprise, it’s lower. Most of what sits in Postgres, SQL Server, Oracle, Snowflake, and the twelve legacy MySQL instances nobody wants to admit are still in production has never had an API in front of it. What it has instead: ad-hoc queries from BI tools, ETL pipelines into a warehouse, and a lot of direct JDBC from internal services that predate anyone’s API strategy.
Those are the databases agents need to read. Those are the ones Apigee can’t help with — because Apigee bridges APIs. No API, no bridge.
This is the prerequisite gap the agent vendors keep avoiding. Google’s announcement assumes you have APIs. Anthropic’s MCP tooling assumes you have APIs. OpenAI’s Responses API tool-use model assumes you have APIs. Every single “point your agent at your enterprise” pitch assumes you have APIs.
The data that matters most — customer records, transaction history, inventory levels, employee records, the actual operational state of the business — often isn’t behind one. And the teams responsible for those systems aren’t going to spend Q3 hand-writing Go services just to participate in the agent roadmap.
The earlier numbers now have a new context
Earlier this month, we wrote about the statistic that 97% of enterprise AI workloads touch a database but only 15% of those databases are actually ready for agent access. At the time, “ready” meant: MCP server, RBAC, audit logging, OAuth, the whole enterprise-grade checklist. The Apigee announcement reframes that gap.
Now the question isn’t “does your database have an MCP server?” It’s “does your database have an API?” Because as soon as it does, Apigee — or Microsoft’s DAB MCP server, or any of the other API-to-MCP bridges that are going to ship in the next 90 days — can turn it into an agent tool without anyone writing a custom MCP implementation.
The bottleneck moved one layer down the stack. It used to be “MCP server availability.” Now it’s “API availability at the database layer.” That’s actually a much bigger problem, because writing an MCP server for an API you already have is a week of work. Building a full REST API for a database that doesn’t have one is a quarter.
What else Next 2026 confirmed about this direction
Apigee-as-MCP-bridge wasn’t the only Next announcement pointing in the same direction. Stack them together and the pattern is hard to miss:
Google’s managed MCP servers are all API-mediated. BigQuery, Compute Engine, Kubernetes, Google Maps — already launched. Cloud Run, Storage, and databases announced as coming. Every single one of these is a service with an existing API surface. Google isn’t writing MCP servers for raw storage blocks or database files. They’re wrapping existing APIs. The pattern for enterprise-owned data is the same: wrap your API, don’t write agent-specific code.
A2A v1.2’s cryptographic signing moves trust to the transport, not the tool. Signed agent cards mean an agent proves its identity to another agent at the protocol level. The tool catalog — what each agent can actually do — stays specified in MCP. This split makes sense only if the MCP surface is stable, consistent, and enterprise-grade. Which pushes even harder on the “we need real APIs underneath” conclusion.
The A2UI protocol fills the third corner. MCP for agent-to-tool. A2A for agent-to-agent. A2UI for agent-to-UI. Three protocols, one missing layer underneath: the operational data that all three ultimately need to reason about.
The Vertex AI → Gemini Enterprise Agent Platform rename is a positioning move. Google is telling enterprises: the platform is the whole stack, not just the model. That framing works only if the stack extends all the way down to the data. Which means Google is about to discover, if it hasn’t already, that the harder half of the enterprise data problem isn’t models or agents. It’s that the data isn’t accessible through anything an agent can consume.
All four signals converge: the industry has decided agents consume APIs (via MCP, wrapped by bridges like Apigee), and the frontier of the problem is now “make your data available as an API.”
The database-to-API layer is the unglamorous prerequisite
Here’s the uncomfortable part for database vendors and enterprise data teams: the fastest way to participate in the agent stack isn’t to ship a custom MCP server per database. It’s to expose your database as a clean REST API once, and let MCP bridges like Apigee, DAB, and whatever ships next do the translation.
This inverts the dominant messaging from most database vendors this year. Every major database platform — Postgres, Oracle, Snowflake, SQL Server, MongoDB — has shipped or announced a native MCP server. Each one ships with its own auth model, its own tool naming, its own idea of pagination and filtering. Run three of them and you have three MCP servers that agents have to learn as three distinct vocabularies.
Wrap those same three databases behind a consistent REST surface and route that surface through an MCP bridge, and agents see one consistent vocabulary across all three backends. The database vendor choice becomes an implementation detail. The API becomes the contract.
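A sketch of what that consistency looks like at the protocol level. The tool name and argument shape below are illustrative, not any vendor's published spec, but tools/call is the standard MCP invocation method:

# One tool vocabulary across every backend. Swap the Postgres-backed "orders"
# source for the Oracle-backed "erp" source in the table path and nothing
# else about the call changes.
curl -s https://bridge.example.com/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
       "params": {"name": "list_records",
                  "arguments": {"table": "orders.public.customers", "limit": 100}}}'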
This is the architecture Apigee’s announcement implicitly endorses, and the one that’s going to define the next 12 months of enterprise agent deployment:
Agent (Claude, Gemini, ChatGPT, internal)
  ↓ MCP
Apigee / DAB / neutral MCP bridge
  ↓ REST (with OAuth, RBAC, audit)
Database API layer
  ↓ SQL
Postgres / Oracle / SQL Server / Snowflake / SQLite
Apigee handles the top two layers for enterprises that already have APIs. The bottom three layers (consistent REST from any database, with auth and audit built in) are the part most enterprises haven’t solved.
Where Faucet fits
Faucet is open-source software that does exactly that bottom-three-layers job. Point the binary at a database. Get a REST API with OpenAPI 3.1, RBAC, OAuth 2.1, and unified audit logging. Turn on the MCP server with a flag. Consistent tool vocabulary across Postgres, MySQL, SQL Server, Oracle, Snowflake, and SQLite.
Here’s what “expose a database to an MCP bridge” looks like in Faucet:
# Install
curl -fsSL https://get.faucet.dev | sh

# Register a Postgres OLTP database
faucet database add \
  --name orders \
  --type postgres \
  --dsn "postgres://reader:$PASS@orders-db.prod:5432/orders?sslmode=require"

# Register an Oracle system-of-record
faucet database add \
  --name erp \
  --type oracle \
  --dsn "oracle://app_user:$PASS@erp.prod:1521/ERPSVC"

# Start the server — REST, OpenAPI, and MCP all at once
faucet serve --addr :8080
You now have:
- http://localhost:8080/api/orders/public/customers — REST with pagination, filtering, sorting
- http://localhost:8080/openapi.json — OpenAPI 3.1 spec Apigee can import directly
- http://localhost:8080/mcp — native MCP endpoint for agents that connect directly
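A quick smoke test of that REST surface. The query parameter name here is a sketch; the generated OpenAPI spec is the source of truth for the exact grammar:

# Page through customers -- parameter names illustrative
curl -s "http://localhost:8080/api/orders/public/customers?limit=25" \
  -H "Authorization: Bearer $TOKEN"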
If you’re running Apigee, point it at the OpenAPI spec and you’re done — every table becomes an API, every API becomes an MCP tool, the auth and audit pipeline is consistent across Postgres, Oracle, Snowflake, or whatever else you add later. Add a role, the permissions apply everywhere:
faucet role create agent-reader \
  --allow "orders.public.customers:read" \
  --allow "orders.public.orders:read" \
  --deny "erp.finance.payroll:*"

faucet user create gemini-enterprise \
  --role agent-reader \
  --expires 30d
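A minimal sketch of the role in action, assuming the created user authenticates with a bearer token (the token plumbing is illustrative; the role semantics are the point):

# Allowed by agent-reader's read grants
curl -s "http://localhost:8080/api/orders/public/customers?limit=10" \
  -H "Authorization: Bearer $AGENT_TOKEN"    # 200 OK

# Blocked by the deny rule -- same 403 whether the request arrives
# via Apigee, direct REST, or the MCP endpoint
curl -s "http://localhost:8080/api/erp/finance/payroll" \
  -H "Authorization: Bearer $AGENT_TOKEN"    # 403 Forbidden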
The audit trail is one stream whether the request came through Apigee, direct REST, or the MCP endpoint:
faucet audit tail --user gemini-enterprise --last 1h
# 2026-04-24T14:02:18Z gemini-enterprise list_records orders.public.customers limit=100 rows=100 [OK] via=apigee
# 2026-04-24T14:02:19Z gemini-enterprise get_record orders.public.orders/8823 [OK] via=mcp
# 2026-04-24T14:02:21Z gemini-enterprise get_record erp.finance.payroll/42 [DENIED: role 'agent-reader'] via=apigee
That via= field matters. When your auditor asks “how did the agent try to reach payroll,” the answer is in the same log whether the call came through Apigee, a direct MCP connection, or a raw REST hit from an internal script. One log, one RBAC model, one set of roles, no matter how many protocols sit above it.
The bet under the bet
Google’s real bet at Next 2026 is that the agent stack gets standardized at the top (MCP for tools, A2A for agents, A2UI for UI), and that the layer below — where APIs meet databases — is the customer’s problem to solve. That’s a reasonable bet. Google isn’t going to write a universal database-to-API layer for your Oracle boxes. Neither is Anthropic. Neither is OpenAI.
The enterprises that move fastest over the next 90 days will be the ones that can treat the database-to-API layer as a one-line operation rather than a quarter-long engineering project. The ones that can’t will still be arguing about MCP server strategy when Apigee has already solved the layer above them.
The glamorous part of the stack — the model, the agent framework, the protocols — is going to keep moving fast. The prerequisite is still just: make your database into an API. If you haven’t done that yet, do it this week. Everything above it assumes you have.
Getting Started
Faucet is open-source software that turns any SQL database into a production-ready REST API and MCP server. Single binary, no migrations, works with Postgres, MySQL, SQL Server, Oracle, Snowflake, and SQLite.
curl -fsSL https://get.faucet.dev | sh
Connect your first database:
faucet database add --name mydb --type postgres \
  --dsn "postgres://user:pass@host:5432/db"

faucet serve --addr :8080
You now have a REST API, an OpenAPI 3.1 spec ready to import into Apigee or any other MCP bridge, and a native MCP endpoint — all from one binary, in the time it takes to read the Google Cloud Next press releases.
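To check the MCP side is live, issue a standard tools/list call against the endpoint (header details may vary by MCP client; this is the raw JSON-RPC form):

# List the tools the MCP endpoint exposes
curl -s http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'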
The protocols above you will keep evolving. The layer below them — making your databases into APIs — is the part you control. Start there.