
Gemini Enterprise Agent Platform: 200 Models, One Database Problem

Google killed the Vertex AI brand at Cloud Next 2026 and replaced it with the Gemini Enterprise Agent Platform — 200+ models, managed MCP servers, A2A protocol, and an Oracle-only path to your data. Here is what that means for the database layer.

Vertex AI is dead. At Google Cloud Next ’26 in Las Vegas this week, Google quietly retired the Vertex AI brand and replaced it with the Gemini Enterprise Agent Platform — a single, top-to-bottom developer surface for building, deploying, governing, and optimizing AI agents. The Model Garden now lists 200+ models, including Gemini 3.1 Pro, Gemini 3.1 Flash Image, Lyria 3, Gemma 4, Claude Opus 4.7, Claude Sonnet, Haiku, and open-weight models like Llama. Project Mariner ships as a managed web-browsing agent. The Agent Development Kit (ADK) gets first-class support for sub-agent networks. The Agent2Agent (A2A) protocol moves from experimental to production-grade. And Google is offering managed MCP servers across its own services.

It is the largest agent-platform consolidation any hyperscaler has shipped. It is also, conspicuously, a vendor-specific answer to a vendor-neutral question.

Because tucked inside the keynote was a smaller announcement that tells you exactly where the rough edges still are: Oracle AI Database Agent for Gemini Enterprise, now in preview on Google Cloud Marketplace. The pitch is that business users can ask questions of their Oracle data in plain English through Gemini Enterprise, no SQL required, with full A2A compatibility. The thing nobody is saying out loud: Google needed a separate, vendor-co-branded agent just to give Gemini reliable read access to a single database brand.

If your company runs Postgres on AWS, MySQL on GCP, and SQL Server on a 2019-vintage VM in a colo, the announcement above is not for you. And you are most companies.

What Google actually shipped this week

Let’s catalog the agent-side wins first, because they are real:

  • Model Garden at 200+ models. First-class access to Gemini 3.1, Anthropic’s Claude family (Opus 4.7, Sonnet, Haiku), Llama, Gemma 4, and dozens of open-weight checkpoints from a single API surface.
  • Agent Development Kit 2.0. The ADK now organizes agents in a network of sub-agents — orchestrator, planner, specialists — with first-class typing and observability. Sub-agent reasoning is the new default pattern for non-trivial agents.
  • A2A goes GA. Agent2Agent protocol leaves preview. Cross-platform, cross-vendor agent communication with a stable spec, signed envelopes, and capability discovery. This is the protocol piece that makes “Salesforce agent calls a Google agent calls an internal agent” actually work.
  • Managed MCP servers. Google Cloud will host MCP endpoints for its own services (BigQuery, Cloud Storage, Pub/Sub, Spanner) so customers don’t have to operate them themselves.
  • Project Mariner as a managed agent. Web-browsing capabilities exposed as an MCP-compatible tool for any agent on the platform.
  • Workspace Studio. No-code agent builder targeting business users inside Google Workspace.

That is a serious, well-engineered platform release. SiliconANGLE called it Google bringing “agentic development, optimization, and governance under one roof.” HPCwire framed it as the full-stack bet against OpenAI and Anthropic. Both reads are correct.

But all of it sits on the same load-bearing assumption every agent platform has made for the last 18 months: that the database layer underneath your business data is somebody else’s problem.

The Oracle-shaped tell

When Anthropic donated MCP to the Agentic AI Foundation in December 2025, the entire industry agreed that data access should be standardized at the protocol level, not the vendor level. By March 2026, MCP had hit 97 million monthly SDK downloads across Python and TypeScript and over 10,000 active public servers. AWS, Microsoft, Google, and Cloudflare all signed on as platinum members.

So why, four months later, does Google need a custom-built, Oracle-branded agent just to query Oracle?

Two reasons, and they are the same two reasons every enterprise we talk to is wrestling with right now:

  1. The MCP servers shipped by database vendors are wildly inconsistent. Some return raw rows. Some return schema-aware structured responses. Some enforce RBAC. Some don’t. Some support write operations safely. Some let an agent drop your tables. Microsoft’s Data API builder (DAB) shipped a unified MCP server for Postgres, MySQL, and SQL Server in mid-April. Oracle shipped its own. Snowflake has a different one. Each is reasonable in isolation. None of them compose.

  2. Per-database, per-cloud agent connectors don’t scale. The Oracle AI Database Agent is wired specifically to Gemini Enterprise. If you also want it to work with Claude in AWS Bedrock or with an internal agent running on your own infrastructure, you write that integration too. Multiply by the number of databases you run, and you are back to the 150 KLOC of custom integration glue that Lucidworks said costs $150K per system to maintain.

The vendor-specific path produces beautiful demos. It does not produce a maintainable production posture.

What the agent platform actually needs from the database layer

If you read between the lines of the Cloud Next keynote, the agent-platform requirements are now specific enough to write down:

  • A single, uniform tool surface across all your databases. Postgres, MySQL, SQL Server, Oracle, Snowflake, SQLite — agents should see one set of tools (list_tables, describe_table, read_rows, create_row) regardless of what’s underneath.
  • MCP-native and OpenAPI-native, simultaneously. The same database access layer needs to feed an MCP-speaking agent (Gemini, Claude, GPT) and a REST-speaking application (your existing services, internal dashboards, partner integrations). One source of truth, two protocols.
  • Real RBAC at the field level. Not “this API key can call this endpoint.” Actual per-role, per-table, per-field, per-operation policies enforced before the query hits the database. The Vercel AI breach earlier this week — credentials extracted from environment variables in a misconfigured AI Gateway — is the canonical example of what happens when you skip this.
  • Vendor and cloud agnostic. If you commit to Gemini Enterprise today, the database access layer should still work when you also adopt Bedrock AgentCore, OpenAI Agents SDK, or anything Microsoft ships at Build next month. The protocol gets standardized; the layer underneath should follow.
  • Single binary, no JVM, no operator. Most enterprise databases live behind a firewall. The thing that exposes them safely needs to deploy in two minutes on a 2-core VM, not require a Helm chart.
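
The first requirement — one tool surface regardless of the engine underneath — can be sketched in a few lines. This is an illustrative toy backed by SQLite only, not Faucet's implementation; a real layer would put Postgres, MySQL, and the rest behind the same four method names.

```python
import sqlite3

class UniformToolSurface:
    """Toy version of the uniform surface: same tool names, any backend."""

    def __init__(self, conn):
        self.conn = conn

    def list_tables(self):
        rows = self.conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
        return [r[0] for r in rows]

    def describe_table(self, table):
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
        return [{"name": r[1], "type": r[2]}
                for r in self.conn.execute(f"PRAGMA table_info({table})")]

    def read_rows(self, table, limit=100):
        cols = [c["name"] for c in self.describe_table(table)]
        rows = self.conn.execute(
            f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()
        return [dict(zip(cols, r)) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
surface = UniformToolSurface(conn)
print(surface.list_tables())           # ['customers']
print(surface.read_rows("customers"))  # [{'id': 1, 'name': 'Ada'}]
```

The point of the sketch: an agent that learns these four tools once never needs to learn a second dialect when the backend changes.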

This is the gap the Gemini Enterprise Agent Platform doesn’t fill. It is the gap Faucet is built to fill.

What it looks like with Faucet underneath

Faucet is a single Go binary that you point at a database — Postgres, MySQL, SQL Server, Oracle, Snowflake, or SQLite — and it gives you back a production-ready REST API and a conformant MCP server, with RBAC, authentication, OpenAPI 3.1 docs, and audit logging built in. No code generation. No restart on schema change.

Here is what wiring Faucet up to a Postgres instance looks like end-to-end:

# Install
curl -fsSL https://get.faucet.dev | sh

# Connect a database
faucet db add prod-postgres \
  --type postgres \
  --dsn "postgres://reader:****@db.internal:5432/orders"

# Start the server (REST + MCP on the same process)
faucet serve --port 8080 --mcp-port 8081

That’s it. You now have:

  • A REST API at http://localhost:8080/api/prod-postgres/{table} with full CRUD, filtering, pagination, and OpenAPI docs at /docs.
  • An MCP server at http://localhost:8081/mcp exposing core navigation tools (list_tables, describe_table, read_rows) and per-table typed tools that agents can enable on demand to keep the context window lean.
  • A SQLite-backed config store tracking every connection, role, and policy.
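
A filtered, paginated read against that REST surface looks like an ordinary query-string request. The parameter names below (`limit`, `offset`, `order_by`, column-equals filters) are illustrative assumptions, not documented Faucet syntax:

```python
from urllib.parse import urlencode

# Hypothetical query parameters against the generated REST API.
base = "http://localhost:8080/api/prod-postgres/orders"
params = {
    "status": "shipped",      # column filter (assumed syntax)
    "limit": 25,              # page size
    "offset": 50,             # skip the first two pages
    "order_by": "-created_at" # newest first (assumed syntax)
}
url = f"{base}?{urlencode(params)}"
print(url)
```

Any HTTP client — curl, requests, a partner integration — consumes the same endpoint the agents do.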

Want to plug it into Gemini Enterprise’s managed MCP infrastructure? Add the Faucet endpoint as an external MCP server in your agent’s tool config. The same endpoint works in Claude Desktop, Cursor, the OpenAI Agents SDK, and anything else that speaks MCP 2025-11-25.

# Add to Claude Code in one line
claude mcp add faucet-prod http://localhost:8081/mcp

# Or add to Gemini Enterprise via the ADK
adk agent tool add my-agent --mcp http://localhost:8081/mcp
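
Under the hood, every one of those clients speaks the same wire format: MCP tool invocations are JSON-RPC 2.0 messages. Here is the request an agent would send to invoke a `read_rows` tool — the framing (`tools/call`, `params.name`, `params.arguments`) comes from the MCP spec, while the specific tool name and argument shape are assumptions about Faucet's surface:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tool invocation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_rows",                      # assumed Faucet tool
        "arguments": {"table": "orders", "limit": 10},
    },
}
print(json.dumps(request, indent=2))
```

Because the envelope is standard, swapping Gemini for Claude or an in-house agent changes nothing on the database side.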

RBAC is declarative. Here is a policy that lets the support role read from customers and orders but blocks access to the payments table and to the ssn and dob columns on customers:

roles:
  support:
    databases:
      prod-postgres:
        tables:
          customers:
            operations: [read]
            field_deny: [ssn, dob]
          orders:
            operations: [read]
          payments:
            operations: []

Apply it once, and every request — REST or MCP — runs through the same policy engine. No per-protocol divergence. The agent and the dashboard see the same enforcement, because they hit the same Faucet process.
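
To make the enforcement model concrete, here is a minimal sketch of what field-level policy checking does, assuming a policy shaped like the YAML above. The function and structure names are illustrative, not Faucet internals:

```python
# Policy mirroring the YAML example: who may do what, minus which fields.
POLICY = {
    "support": {
        "customers": {"operations": ["read"], "field_deny": ["ssn", "dob"]},
        "orders":    {"operations": ["read"], "field_deny": []},
        "payments":  {"operations": [],       "field_deny": []},
    }
}

def authorize(role, table, op, rows):
    rules = POLICY.get(role, {}).get(table)
    if rules is None or op not in rules["operations"]:
        raise PermissionError(f"{role} may not {op} {table}")
    denied = set(rules["field_deny"])
    # Strip denied fields before anything leaves the policy engine.
    return [{k: v for k, v in row.items() if k not in denied} for row in rows]

rows = [{"id": 1, "name": "Ada", "ssn": "123-45-6789"}]
print(authorize("support", "customers", "read", rows))
# [{'id': 1, 'name': 'Ada'}]
```

A read on payments raises before any query runs; a read on customers returns rows with ssn and dob already stripped. Both protocols hit the same function, which is the whole argument for a single enforcement point.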

The 30-second version of the argument

Google Cloud Next ’26 made one thing crystal clear: the agent layer is consolidating fast. Within 18 months we have gone from “every team writes their own LangChain wrapper” to “every hyperscaler ships a managed agent platform with hundreds of models, A2A protocol support, and managed MCP servers.” That part of the stack is solved, or solved enough.

The database layer is going the other way. Each database vendor is shipping its own MCP server. Each cloud is shipping its own database-specific agent connector (Oracle AI Database Agent for Gemini, Bedrock Knowledge Bases for AWS, Microsoft Fabric for Azure). Each one is well-built. None of them compose. And the more agent platforms an enterprise adopts, the worse the combinatorics get: M agent platforms × N databases = M×N integrations.
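
The combinatorics are worth making explicit. With point-to-point connectors, every platform-database pair needs its own integration; a neutral layer needs one adapter per endpoint. The counts below are a hypothetical mid-size shop, not survey data:

```python
# Assumed example: 3 agent platforms, 5 database engines.
platforms = 3   # e.g. Gemini Enterprise, Bedrock, an internal runtime
databases = 5   # e.g. Postgres, MySQL, SQL Server, Oracle, Snowflake

point_to_point = platforms * databases   # one connector per pair
neutral_layer = platforms + databases    # one adapter per endpoint

print(point_to_point, neutral_layer)  # 15 8
```

Add a fourth platform and the first number jumps by five; the second jumps by one. That gap is the maintenance bill.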

The way out is a neutral layer between your databases and your agents — one that speaks REST for your apps, MCP for your agents, enforces RBAC at the field level, and runs anywhere. That is what Faucet is, and that is why we ship it as a single binary with no runtime dependencies.

The 200+ models in Model Garden are spectacular. They will still need a way to talk to your data when you also run Bedrock, also run OpenAI, and still have a SQL Server cluster nobody wants to migrate. Don’t let the platform layer decide your database layer for you.

Getting started

curl -fsSL https://get.faucet.dev | sh
faucet db add demo --type sqlite --dsn ./demo.db
faucet serve

Open http://localhost:8080/docs for the OpenAPI surface and http://localhost:8080/mcp for the agent tools. Wire it into Claude Code, Cursor, or the agent platform of your choice in one line. If you run a serious enterprise database — Oracle, SQL Server, Snowflake — the connection string is the only thing that changes. The CLI, the API, the tools, and the policy syntax stay the same.

The agent platforms are converging. Make sure your database layer doesn’t fragment underneath them.