The Vercel Breach: Why AI Tools Should Never Hold Raw Database Credentials

The April 2026 Vercel-Context.ai supply chain attack exposed thousands of environment variables — including database credentials, API keys, and signing secrets. The lesson isn’t “mark your env vars sensitive.” It’s that AI tools should never see raw database credentials in the first place. Here’s what scoped API access looks like instead.

On Sunday April 19, a hacker using the handle ShinyHunters posted on BreachForums offering Vercel internal data — databases, source code, employee accounts, access keys — for $2 million. By Monday morning, the crypto developer ecosystem was scrambling. Vercel customers were rotating credentials. Mandiant was on the call.

The post-mortem dropped in pieces over the next 48 hours, and the attack chain reads like a security awareness training slide that nobody believed could actually happen:

  1. Lumma infostealer grabs a Context.ai employee’s credentials sometime in February 2026.
  2. The attacker uses that foothold to push a malicious browser extension through Context.ai’s update channel.
  3. A Vercel employee — using Context.ai as a workplace AI assistant — installs the extension. They’ve already granted Context.ai full read access to their Google Drive and connected it to their Vercel Enterprise Google account.
  4. The attacker pivots from the browser extension into the employee’s Google Workspace.
  5. From Google Workspace, they pivot into Vercel’s internal environments.
  6. Inside Vercel, they read environment variables — the ones not marked “sensitive” — for an unknown number of customer projects.

The exposed environment variables, per Vercel’s own bulletin and analysis from GitGuardian and Trend Micro, included database connection strings, API keys, OAuth tokens, signing keys for JWT issuance, and third-party service credentials. The “sensitive” flag in Vercel’s UI is opt-in. Most teams didn’t opt in.

If you ship anything on Vercel and you’re reading this in the days after, you already know the rotation drill. This post is about something else: why a database credential was sitting in a plain-text environment variable in the first place, and what AI-era infrastructure should look like instead.

The environment variable problem is older than AI

DATABASE_URL=postgres://app_user:[email protected]:5432/prod has been the default since the Twelve-Factor App essay was published in 2011. It works. It’s portable. It survives container restarts. It’s also a single string that grants a process unrestricted access to a production database, and we’ve collectively agreed to store it right alongside LOG_LEVEL.

That tradeoff was acceptable when the only things reading env vars were the application processes you wrote and deployed. The threat model was “an attacker breaches my server.” If they got that far, the database credential was the least of your problems.

The threat model in 2026 is different. Things reading your environment variables now include:

  • The application process (still).
  • The CI runner that builds and deploys it.
  • The platform’s own internal services that handle routing, observability, and edge functions.
  • Browser-based AI assistants that the platform employee installed last Tuesday.
  • Whatever future supply chain compromise reaches any of the above.

The Vercel incident is a clean demonstration of the shift. The attacker never touched a customer’s application. They never popped a server. They walked in through a browser extension installed by an employee, pivoted through three SaaS boundaries, and read configuration data at rest. The database credentials in those env vars granted exactly as much access as they would have on day one of the project — which is to say, full access, because that’s how DATABASE_URL works.

“Mark your env vars sensitive” is a bandage

Vercel’s customer guidance after the breach was, reasonably, to use the sensitive environment variable feature. Sensitive variables are encrypted at rest and not displayed in the dashboard. That’s strictly better than the default. It would have prevented this specific exposure.

It does not prevent the next one.

A sensitive env var still resolves to a plain-text string at runtime. Anything inside the runtime — the framework, every dependency in the import graph, every piece of telemetry, every AI coding assistant with shell access during a build — can read it. The encryption-at-rest layer protects against database theft of the platform’s own configuration store. It doesn’t protect against the dozens of legitimate reads that happen every time the variable is used.
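
A toy illustration of why the runtime is the real boundary: any code loaded into the process reads the environment exactly the way the application does. (The variable value here is a placeholder, not a real credential.)

```python
import os

# The platform resolves the "sensitive" variable into the process environment...
os.environ["DATABASE_URL"] = "postgres://app:[email protected]:5432/prod"

# ...and any code in that process, including a compromised transitive
# dependency or a build-time assistant, reads it the same way the app does.
def nosy_dependency() -> str:
    return os.environ["DATABASE_URL"]

print(nosy_dependency())  # the full credential, scheme to password
```

Nothing distinguishes this read from a legitimate one; encryption at rest never enters the picture.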

More importantly, the encryption boundary doesn’t change the blast radius of what the credential grants. A database username/password pair authorizes whoever holds it to do whatever the SQL grants permit, until manually rotated. The credential has no expiration. It has no audience binding. It has no per-request scoping. It cannot be revoked for one consumer without revoking it for all of them. It produces no audit trail that distinguishes “the application read 100 rows” from “an attacker exfiltrated the entire users table.”
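
To make those missing properties concrete, here is a minimal sketch of a role-bound, audience-bound, expiring token, built with only the Python standard library. This is not Faucet’s actual token format; it just shows the claims (role, aud, exp) that a DATABASE_URL structurally lacks.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # illustrative; in practice only the issuer holds this

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(role: str, ttl_seconds: int) -> str:
    """Mint a minimal HS256 JWT: role-bound, audience-bound, expiring."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "role": role,                           # what the bearer may do
        "aud": "faucet-gateway",                # who may accept the token
        "exp": int(time.time()) + ttl_seconds,  # when it stops working
    }
    signing_input = (
        b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    )
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def is_valid(token: str) -> bool:
    head, body, sig = token.split(".")
    expected = hmac.new(SECRET, (head + "." + body).encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    return claims["exp"] > time.time()

token = issue_token("ai-readonly", ttl_seconds=900)  # 15-minute lifetime
print(is_valid(token))                           # True
print(is_valid(issue_token("ai-readonly", -1)))  # already expired: False
```

A leaked copy of this token stops working on its own; a leaked password works until someone notices.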

The Vercel bulletin recommends rotating any credential that lived in a non-sensitive environment variable. Customers did. Some of them are still doing it as of this writing — there’s no atomic “rotate the world” button when the credentials in question are PostgreSQL passwords referenced from service configs across environments, MongoDB connection strings in three regions, and SaaS API keys with no programmatic rotation endpoint.

This is the operational reality the Vercel breach exposed: raw database credentials are a 2011 abstraction we’ve been trying to make work in a world where the things touching them have multiplied tenfold.

What a scoped boundary looks like

The alternative isn’t novel. It’s the same pattern we adopted for third-party APIs a decade ago: don’t let consumers hold the master credential. Issue them a token that represents a specific role, a specific scope, and a specific lifetime. When something goes wrong, revoke the token. The master credential — the one that can do anything — lives in exactly one place, owned by exactly one process, and is never handed to anyone else.

For databases, this means putting an HTTP layer in front of the database that:

  1. Holds the database credential itself, not the consumer. The connection pool lives in the API layer. Consumers never see postgres://.
  2. Authenticates consumers with short-lived tokens. A JWT with a 15-minute expiration that’s bound to a role is structurally incapable of being “a credential leaked in 2026 that works in 2027.”
  3. Authorizes per-resource and per-operation. A token issued to an AI agent for “read the products table” cannot read the users table even if the bearer asks nicely.
  4. Logs every request with the token’s identity. When the post-mortem comes, you can answer “what did this consumer access?” without parsing query logs.
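
The authorization step in point 3 can be sketched in a few lines. The role table here is hypothetical and in-memory; a real gateway would load it from configuration and evaluate it on every request.

```python
# Hypothetical in-memory role table; a real gateway would load this from config.
ROLES = {
    "ai-readonly": {("products", "read"), ("reviews", "read")},
}

def authorize(role: str, table: str, operation: str) -> bool:
    """Per-resource, per-operation check: the core of the scoped boundary."""
    return (table, operation) in ROLES.get(role, set())

print(authorize("ai-readonly", "products", "read"))   # True
print(authorize("ai-readonly", "users", "read"))      # False: table not granted
print(authorize("ai-readonly", "products", "write"))  # False: operation not granted
```

The deny cases are the point: the check runs outside the consumer, so the consumer’s intentions are irrelevant.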

This is what Faucet does. It’s what PostgREST does, what Supabase does on top of PostgREST, what Hasura does for GraphQL, what DreamFactory has done for fifteen years. The shape of the answer is settled. What changed in the last eighteen months is that the consumers changed: AI agents now read data on behalf of humans, MCP servers expose database tools to language models, and the per-employee count of “things that need database access” went from 1 (the app) to N (the app, plus every agent the employee runs).

Every one of those new consumers is a candidate for credential theft in the Vercel sense. The question for the architect is whether each one holds a DATABASE_URL or a scoped token.

A concrete shape

Here’s how the same workload looks under both models.

The legacy shape — what most teams ship today, and what got customers burned in the Vercel incident:

# .env (or Vercel environment variables)
DATABASE_URL=postgres://app:[email protected]:5432/prod
ANTHROPIC_API_KEY=sk-ant-...

# app.py
import os
import psycopg
import anthropic

conn = psycopg.connect(os.environ["DATABASE_URL"])
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# The model can call any tool. Any tool can run any query.
# DATABASE_URL grants superuser-equivalent access to whoever holds it.

The scoped shape — what Faucet emits when you point it at a database:

# Database credential lives ONLY on the Faucet host.
# Application processes hold a JWT that represents a role, not a connection.
faucet start --db postgres://app:[email protected]:5432/prod \
  --listen 0.0.0.0:8080

# Define what the AI-agent role can actually do.
faucet role create ai-readonly \
  --table products:read \
  --table reviews:read \
  --rate-limit 100/min

# Issue a token bound to that role, with a 15-minute lifetime.
TOKEN=$(faucet token issue --role ai-readonly --ttl 15m)

# app.py
import os, requests, anthropic

FAUCET = "https://api.internal:8080"
TOKEN  = os.environ["FAUCET_TOKEN"]  # short-lived, scoped, revocable

def get_products(category: str):
    r = requests.get(
        f"{FAUCET}/products",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"category": category, "limit": 50},
    )
    return r.json()

# The AI agent can call get_products(). It cannot reach users.
# If the token leaks, it expires in 15 minutes and can be revoked instantly.
# Every request is logged with the role identity.

The DATABASE_URL still exists in the second model — it has to, because something has to talk to PostgreSQL — but its blast radius shrinks to a single host: the Faucet binary. That host doesn’t run a browser. It doesn’t install employee-side AI extensions. It doesn’t sync to anybody’s Google Drive. It’s the only thing in the system that holds the keys, and there’s exactly one of it to harden.

The MCP angle

The MCP protocol pushed this question from “good architecture” to “operational necessity.” MCP servers — the bridge between language models and external systems — are now running in production at Pinterest, Lucidworks, Google, Microsoft, Oracle, and a long tail of enterprise teams. The MCP ecosystem hit 97 million monthly SDK downloads in March 2026, up from 2 million at launch in November 2024. There are over 10,000 live MCP servers in the wild.

Every one of those servers is, by definition, a process that holds credentials and exposes them as callable tools to a language model. The “AI agent that has read/write access to your production database, can send emails, and has access to financial systems” that security analysts have been warning about for two years is, in MCP terms, a perfectly normal Tuesday deployment.

CVE-2026-32211 — the missing-authentication vulnerability in Microsoft’s @azure-devops/mcp package, disclosed April 3 with a CVSS of 9.1 — is the canary. It was an MCP server that didn’t check who was calling its tools. Anyone on the network could invoke them. The fix was straightforward, but the broader pattern is the harder problem: MCP servers tend to run with the privileges of the user who started them, holding the credentials that user has, exposing them as tools to whatever model connects.

Faucet’s MCP server inverts this. The credentials live in Faucet. The MCP layer is just a different transport for the same RBAC-scoped tokens. An agent connected over MCP authenticates with a JWT bound to a role, gets a tool surface that reflects exactly what that role can do, and produces audit trail entries identical to any HTTP consumer. The “agent” is just another consumer; the role system is what actually decides what’s allowed.

# Same role, different transport. The model sees only what ai-readonly can do.
faucet mcp serve --role ai-readonly --listen stdio

This is the part that matters for the Vercel-class problem. A leaked MCP token is a leaked role-bound, expiring credential. A leaked DATABASE_URL is a leaked god-mode credential that has to be rotated by hand across however many services reference it. These are not the same incident.
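
The operational difference can be sketched as a revocation check. The accept function and the in-memory revocation set are illustrative, not Faucet’s API; a real gateway might back the set with a shared store so revocation is instant for every verifier.

```python
import time

# Hypothetical in-memory revocation set, checked on every request.
REVOKED: set[str] = set()

def accept(token_id: str, exp: float) -> bool:
    """A scoped token dies two ways: it expires, or it is revoked by ID."""
    return token_id not in REVOKED and exp > time.time()

tid, exp = "tok-123", time.time() + 900  # 15-minute token
print(accept(tid, exp))   # True: live, in scope

REVOKED.add(tid)          # one call, one consumer affected, nothing rotated
print(accept(tid, exp))   # False: dead, and the DB credential never moved
```

Contrast the rotation drill for a leaked DATABASE_URL: change the password, then chase down every service that references the string.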

What to do this week

If you’re rotating credentials in the wake of the Vercel breach — and a meaningful percentage of readers of this post are — the immediate action is the obvious one: rotate, mark sensitive, audit the access logs, file the incident report. That work is unavoidable.

The longer-arc action is to look at the rotation list and ask, for each item, why a credential with that level of access was reachable from a process that didn’t strictly need it. If the answer is “because that’s how DATABASE_URL works,” you have an architecture decision to make.

The bandage is encryption at rest. The fix is moving the credential to a single host, putting an HTTP/MCP layer in front of it, and giving every consumer — application, agent, employee, vendor — a token that represents a role rather than a key to the kingdom.

Getting Started

# Install Faucet
curl -fsSL https://get.faucet.dev | sh

# Point it at your database
faucet start --db postgres://user:pass@host:5432/db

# Define a role
faucet role create ai-readonly --table products:read --rate-limit 100/min

# Issue a short-lived token
faucet token issue --role ai-readonly --ttl 15m

# Or expose it as an MCP server
faucet mcp serve --role ai-readonly

That’s the whole shape. One binary, scoped tokens, RBAC, audit logs, MCP transport. The database credential never leaves the Faucet host. Everything else is a consumer.

The Vercel breach is going to keep generating post-mortems for weeks. The interesting ones won’t be about Context.ai’s browser extension or Vercel’s encryption defaults. They’ll be about the fact that environment variables holding database master credentials are a 2011 design that we extended into 2026 by inertia, and the AI-era consumer count finally made the math break.

Don’t let your post-mortem be one of those.