For the last eighteen months, almost every production MCP deployment I have looked at has had the same hidden assumption baked into it: the agent authenticates to the database as a single shared service account.
It is the default in every starter template. The agent gets a DATABASE_URL, the URL embeds credentials, those credentials map to a role with whatever permissions the team felt comfortable handing out, and every user prompt — Alice asking about her own deals, Bob asking about the whole pipeline — flows through that one identity.
This month that default got publicly demoted. On April 14, Databricks announced major enhancements to its Unity AI Gateway, the most consequential of which is on-behalf-of (OBO) execution for MCP servers. When an agent calls an MCP tool against a governed database, the call runs with the requesting user’s exact Unity Catalog permissions. Not the agent’s. Not a shared service account. The user’s.
That is a quiet announcement with loud implications. It is the first time a major data platform has made OBO the default posture for agent-to-data access, and it sets the template for what enterprise buyers are going to demand from every other MCP server in the next two quarters.
If you are building or operating an MCP server that touches a database, here is what you need to internalize.
The Service Account Anti-Pattern, In One Diagram
A typical production MCP setup today looks like this:
Alice ──┐
        │                                  ┌── PostgreSQL
Bob ────┼── LLM ── MCP server ── role ─────┤   (sees everything
        │          (svc-agent)             │    svc-agent can see)
Carol ──┘                                  └──
The MCP server holds one connection (or a pool keyed to one identity). Whatever Alice can see in her prompt history, the MCP server can also see in the database — because the MCP server is, in database terms, the same user every time.
This collapses three security boundaries into one:
- Authentication: who is talking to the LLM
- Authorization: what data the LLM can pull on their behalf
- Audit: whose action shows up in the database log
Every one of those should differ for Alice, Bob, and Carol. In a service-account world, they collapse to “svc-agent did it” — which is useless for compliance, useless for forensics, and unsafe at any non-trivial scale.
We have known this is wrong for a while. Vercel learned it publicly on April 21 when an attacker used a leaked agent service account to read production database credentials out of an environment variable store. The blast radius was the entire role’s permission set, because there was no per-user identity to constrain it.
What changed this month is that the most aggressive enterprise data vendor on the market just made fixing it the default.
What Databricks Actually Shipped
Strip away the marketing, and Unity AI Gateway’s MCP governance layer does three concrete things:
1. The MCP server is registered as a Unity Catalog object.
It has an owner, a set of grants, and lineage tracking. You can GRANT EXECUTE ON MCP SERVER ... the same way you grant table access. Every MCP server is now a first-class governed resource, not a sidecar process the platform team forgot about.
2. Tool calls execute on-behalf-of the requesting user.
When an agent calls a tool, the gateway propagates the original user’s identity into the MCP server’s database session. If Alice triggers the agent and the agent calls query_orders, the SQL runs as Alice. If Alice cannot see Bob’s orders, the agent cannot see them either, no matter how cleverly the LLM phrases the query.
3. The audit log is per-user.
Unity Catalog’s audit pipeline now records (user, agent, tool, query, rows_returned) for every MCP call. When the security team asks “who pulled customer 4421’s data on Tuesday,” the answer is a real human name, not “svc-agent.”
The Databricks announcement is the marquee one, but it is part of a pattern. Cloudflare’s Mesh launch the same week made the equivalent move at the network layer: every agent gets a distinct identity, and policies attach to that identity, not to a shared tunnel. The two announcements together signal that the industry has decided shared service accounts for agents are an artifact of the prototype era.
Why Service Accounts Won the Last Two Years
It is worth being honest about why this anti-pattern got entrenched.
Service accounts were the path of least resistance because the alternative — propagating the user identity from the chat client all the way down to the database session — required four things that did not exist when MCP first shipped:
- A standardized way for the MCP server to receive a user token from the agent runtime.
- A standardized way for the user token to map to a database principal.
- A database driver that supports session-level identity switching cheaply.
- A pooling layer that does not multiplex sessions across users.
Each of those was a research project a year ago. None of them are anymore. The MCP authorization spec went GA in late March. The big database vendors all shipped per-session SET ROLE / SET SESSION AUTHORIZATION variants. Connection poolers (PgBouncer, ProxySQL, the cloud-native ones) finally have first-class “transaction-pinned” or “session-pinned” modes that play nicely with per-user roles.
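To make the third and fourth items concrete, here is a minimal sketch of session-pinned role switching over a pooled Postgres connection, using psycopg2. The DSN and role names are hypothetical, and a production version needs real error handling; the point is the shape: switch on acquire, reset on release.

# Sketch: pin a pooled connection to a per-user role for one tool
# call, then reset before the connection goes back to the pool.
from contextlib import contextmanager
from psycopg2 import sql
from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(1, 10, dsn="dbname=app user=svc_agent")

@contextmanager
def session_as(db_role: str):
    conn = pool.getconn()
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            # Fails unless svc_agent was GRANTed membership in db_role,
            # so an unmapped user errors out instead of falling through.
            cur.execute(sql.SQL("SET ROLE {}").format(sql.Identifier(db_role)))
        yield conn
    finally:
        with conn.cursor() as cur:
            cur.execute("RESET ROLE")  # never hand back a pinned session
        pool.putconn(conn)

with session_as("alice") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM orders")  # runs with alice's grants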
The pieces are on the table. Databricks is the first major vendor to assemble them into a default. They will not be the last.
What This Means for Your MCP Server
If your MCP server is going to be deployed into a Databricks-style governance environment — and most enterprise MCP servers will be, within the year — it has to support OBO end-to-end. That means three concrete capabilities:
Identity propagation. The MCP server has to accept a user identity (a JWT, an OAuth token, an opaque session ID it can introspect) on every tool call and refuse to run if it is missing or invalid. The agent runtime is responsible for getting the identity in; the MCP server is responsible for not silently falling back to a service account when the identity is absent.
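Framework aside, the refusal logic is small. A minimal sketch using PyJWT; the claim names, audience, and the run_tool dispatcher are placeholders, not any real MCP SDK API:

import jwt  # PyJWT

PUBLIC_KEY = open("idp_public_key.pem").read()  # your IdP's signing key

class AuthError(Exception):
    pass

def authenticate(headers: dict) -> str:
    # Return the verified user identity or raise. There is
    # deliberately no "else: use the service account" branch.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise AuthError("missing bearer token")
    try:
        claims = jwt.decode(auth[len("Bearer "):], PUBLIC_KEY,
                            algorithms=["RS256"], audience="mcp-server")
    except jwt.InvalidTokenError as exc:
        raise AuthError(f"invalid token: {exc}")
    return claims["sub"]  # or "email", whichever claim you map on

def handle_tool_call(headers: dict, tool: str, args: dict):
    user = authenticate(headers)  # raises -> the call is refused
    return run_tool(tool, args, as_user=user)  # hypothetical dispatcher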
Per-user database sessions. The MCP server has to be able to translate “this user is Alice” into “execute this SQL with Alice’s database role.” For Postgres that is a SET ROLE alice after acquiring a connection. For SQL Server, EXECUTE AS USER. For Snowflake, the OAuth-on-behalf-of flow is built in. For Oracle, proxy authentication. Every major database has the primitive; the MCP server has to actually use it.
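The switch is a one-line statement on the engines where it happens inside a session; on Snowflake and Oracle it happens at connection time instead, so those take a different code path. A hedged sketch of the dispatch:

# Sketch: the identity switch differs per engine. Postgres and SQL
# Server switch within an existing session; Snowflake (OAuth
# on-behalf-of tokens) and Oracle (proxy authentication, connecting
# as svc_agent[alice]) set identity when the connection is opened.
def switch_statement(dialect: str, db_principal: str) -> str:
    # Real code must validate/quote db_principal; inlined for brevity.
    if dialect == "postgres":
        return f'SET ROLE "{db_principal}"'
    if dialect == "sqlserver":
        # Requires IMPERSONATE permission on the target user.
        return f"EXECUTE AS USER = '{db_principal}'"
    raise ValueError(f"{dialect}: identity is set at connect time, "
                     "not by a session statement")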
Per-user audit. Every query needs to log who ran it. Not just the SQL — the user. If the only thing in your audit log is the connection identity, OBO did not happen, regardless of what the marketing page says.
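The minimum viable audit record is small. A sketch, assuming JSON lines and the field names used above:

import json, time

def audit(user: str, db_role: str, tool: str, query: str, rows: int):
    # One structured line per query, keyed by the human user,
    # not the connection identity. Ship to your SIEM, not stdout.
    print(json.dumps({
        "ts": time.time(),
        "user": user,          # e.g. "alice@example.com"
        "db_role": db_role,    # the role the SQL actually ran as
        "tool": tool,
        "sql": query,
        "rows_returned": rows,
    }))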
The bar Databricks just set is that all three of these have to work out of the box, not as a custom integration that takes the platform team six weeks to wire up.
How Faucet Handles OBO
This is the part where it gets self-interested, but it is on-topic so I will be direct.
Faucet was designed from the start with per-user identity propagation as a first-class concern, because the alternative — service accounts behind every API — was already the most common security gap in REST API gateways before MCP made it ten times worse. Three pieces matter:
JWT-bound database sessions. When a request hits a Faucet endpoint with a JWT, Faucet maps the sub claim (or any configurable claim) to a database principal and issues a SET ROLE (or the engine’s equivalent) on the connection before any user query runs. The mapping is explicit, declarative, and lives in the Faucet config. There is no “fallback to default role” path; if the claim does not map, the request is rejected.
# Map JWT claim "email" to a database role
faucet config set auth.role_claim email
faucet config set alice@example.com analyst_role
faucet config set admin@example.com admin_role
# Verify
faucet auth test --token "$JWT"
# → user: alice@example.com → db_role: analyst_role
MCP server inherits the same mapping. Faucet’s MCP server is the same binary as the REST API. When an agent calls a Faucet MCP tool, the user token flows through the same identity pipeline. If Alice’s agent asks for list_orders, the underlying SQL runs as analyst_role, not as Faucet’s own service identity.
# Start the MCP server with OBO enabled
faucet mcp serve --auth obo --port 8081
# In your agent's MCP config:
{
  "mcpServers": {
    "faucet": {
      "url": "http://localhost:8081",
      "auth": {
        "type": "bearer",
        "token_source": "user_session"
      }
    }
  }
}
Per-user audit out of the box. Faucet’s audit log records (user, role, endpoint, sql, rows_returned, latency_ms) for every request, REST or MCP. That feeds straight into a SIEM or, in the Databricks case, into Unity Catalog’s audit pipeline through the standard event format.
# Tail the audit log live
faucet audit tail --filter alice@example.com
# Export the last 24h to JSON
faucet audit export --since 24h --format json > audit.json
The point is not that Faucet is the only way to do this. The point is that “OBO at the data API layer” is a feature that has to be designed in from the start. Bolting it onto an MCP server that was built around a service account is a rewrite, not a config change.
The Three Architectures That Are About to Lose
If you accept that OBO is becoming the default, three patterns that are common today are going to look obsolete by the end of the year.
Pattern 1: The “agent role” with row-level security.
You give the agent a single role and try to enforce per-user filtering through Postgres RLS policies that read from a session variable the application sets. This works in theory and breaks in practice — the session variable is one RESET away from being unset, the LLM can be coaxed into calling tools that bypass the filter, and the audit log says “agent” not “Alice.” Better than nothing, but it is a defense-in-depth layer, not a primary boundary.
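Concretely, Pattern 1 usually looks something like the sketch below (hypothetical table and policy names). Note that nothing ties the session variable to the session’s lifetime, and the database identity is svc-agent throughout:

import psycopg2

conn = psycopg2.connect("dbname=app user=svc_agent")
conn.autocommit = True
cur = conn.cursor()

# One-time setup (normally a migration):
#   ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
#   CREATE POLICY per_user ON orders
#     USING (owner = current_setting('app.current_user', true));

cur.execute("SET app.current_user = 'alice'")  # per request, in theory
cur.execute("SELECT * FROM orders")            # filtered to Alice's rows

cur.execute("RESET app.current_user")          # one stray RESET later...
cur.execute("SELECT * FROM orders")            # ...the policy matches nothing,
# and the session identity was svc_agent the whole time, so the
# audit log never named Alice in either case.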
Pattern 2: The MCP-server-per-tenant. You spin up a separate MCP server process for each customer, each with its own scoped credentials. This works but does not scale — at 1,000 tenants you are running 1,000 sidecars, the orchestration overhead eats your operations team, and you still have a service account problem within each tenant. Useful for very-high-isolation cases (regulated multi-tenant SaaS), wrong as a default.
Pattern 3: The “trust the LLM to add the WHERE clause.”
You instruct the agent in its system prompt to always include WHERE user_id = {{current_user}} and hope. Do not do this. The LLM is not a security boundary. Treat its output as untrusted user input that happens to be syntactically SQL. Every shop that is in the news for an agent data leak in the last six months had some version of this in their stack.
The pattern that wins is OBO at the data API layer, with the MCP server propagating the user identity into a per-user database session. Databricks just made it the default for their stack. Snowflake’s MCP integration uses the same primitive. Postgres has the surface area for it. The protocol-level support landed in MCP earlier this year.
There is no excuse left to ship a new MCP server in 2026 that authenticates to the database as a single shared identity.
What to Do This Quarter
If you operate an MCP server that touches a database, three things to put on the roadmap:
- Audit your current setup. Find the line in your config that holds the database credentials the MCP server uses. If those credentials grant access to data that not every user of the agent is supposed to see, you have a service-account problem. Write it down.
- Pick your identity source. Where is the user identity going to come from? OAuth via the chat client? An internal SSO? A signed JWT from your platform? You need a single answer, and it needs to be a real auth token, not a username string in a header.
- Pick your propagation point. Either (a) your MCP server itself does the SET ROLE, or (b) you put a data API layer in between (Faucet, Hasura, PostgREST in proxy-auth mode, the Databricks gateway) that handles the propagation while the MCP server just passes the token through, as sketched after this list. Option (b) is usually the right answer because it gives you the same boundary for non-agent traffic.
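A minimal sketch of option (b): the MCP server never holds database credentials and just forwards the caller’s bearer token to the data API. The URL and endpoint are placeholders:

import requests

DATA_API = "http://localhost:8081"  # Faucet, Hasura, PostgREST, ...

def list_orders(user_token: str) -> list:
    # Pass-through: the data API maps the token to a database role
    # and enforces the boundary; a 401/403 here means "refused".
    resp = requests.get(f"{DATA_API}/orders",
                        headers={"Authorization": f"Bearer {user_token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()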
The companies that get this right in the next six months are the ones that will be able to say “yes” when their largest customer asks “does the AI agent run with my employee’s permissions, or yours?” The companies that do not are the ones whose deals are going to stall in security review starting around midyear.
Getting Started
Faucet ships OBO support for Postgres, MySQL, SQL Server, Oracle, Snowflake, and SQLite, on both the REST API and the MCP server. Single binary, embedded UI, no platform team required.
curl -fsSL https://get.faucet.dev | sh
Then point Faucet at your database, define the JWT claim that maps to a database role, and the same identity propagates into every REST endpoint, every MCP tool call, and every line of the audit log. Details are in the docs at wiki.faucet.dev.
The service-account era was a phase. We are leaving it.