Last week at TDX 2026, Salesforce shipped something bigger than a feature. They shipped a pattern.
Headless 360 — announced April 15 in San Francisco — declared that every capability in Salesforce is now available through three distinct interfaces simultaneously: a REST API, an MCP tool, and a CLI command. Not “and we’ll add MCP later.” Not “API-first, and eventually MCP.” All three, day one, as peers.
The launch numbers tell you they’re serious: 60+ new MCP tools; 30+ preconfigured coding skills that give Claude Code, Cursor, Codex, and Windsurf live access to a customer’s entire org; a new DSL called Agent Script, open-sourced on the spot; and a unified AgentExchange marketplace with 10,000 Salesforce apps, 2,600+ Slack apps, and 1,000+ Agentforce agents and MCP servers. They also put a $50M Builders Fund behind it.
But the numbers aren’t the story. The architecture is. Salesforce took the stance that in 2026, any serious platform has to expose itself three ways — because it has three different kinds of consumers, and no single interface serves all three well.
This is the new default. And it has uncomfortable implications for the one system most application teams have been ignoring: the database.
Why three surfaces, not one
The temptation, once MCP took off, was to treat it as a replacement for REST. Some vendors went that way. “Why do we need both?” they asked. “Agents will use MCP. REST is legacy.”
Salesforce’s answer, delivered architecturally, is that each surface has a different consumer with different requirements:
The REST API is for deterministic code. Microservices, integrations, webhooks, backend batch jobs, third-party apps. These consumers know exactly what they want. They want predictable HTTP semantics, stable contracts, OpenAPI specs, and the ability to chain calls without a reasoning model in the loop. REST has been table-stakes for 15 years and isn’t going anywhere.
The MCP tool is for reasoning agents. Claude, Cursor, Codex, Windsurf. These consumers pick tools dynamically based on natural-language intent. They want rich descriptions, examples in tool metadata, semantic tool names, and self-documenting schemas. They also want things REST doesn’t naturally give them: tool annotations that describe side effects, progress updates, resource references that agents can introspect. MCP is a different contract shape because the consumer is a different kind of process.
The CLI command is for humans and scripts. Developers in a terminal, CI/CD pipelines, Makefiles, quick-and-dirty automation. These consumers want shell-native flags, pipe-friendly output, exit codes that mean something, and the ability to type a command from memory. cURL-ing an API from bash works, but a proper CLI collapses five lines of JSON scaffolding into one line.
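To make the CLI point concrete, compare the same write issued both ways. Everything here is illustrative: the endpoint, the payload, and the examplectl command are hypothetical stand-ins, not any particular vendor’s interface.

# The REST version: JSON scaffolding, headers, quoting
curl -X POST "https://api.example.com/v1/orders" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"customer_id": 42, "sku": "A-100", "qty": 3}'

# The CLI version: one line you can type from memory (hypothetical tool)
examplectl orders create --customer-id 42 --sku A-100 --qty 3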
A system that only exposes one of these three is going to leave two of its three constituencies reaching for workarounds. That’s what “headless” really means in Headless 360: the old assumption that a SaaS platform’s center of gravity was its browser UI is dead. The center of gravity is now the machine-readable surface — and there are three of those.
The database is the last holdout
Here’s where this gets concrete for anyone building AI applications.
Walk through the stack. Your frontend? React, Vue, Svelte — all with mature tooling. Your backend framework? Has a CLI, probably has OpenAPI generation, and every major framework now ships MCP server packages. Your observability stack? Datadog, Grafana, and Honeycomb all shipped MCP servers this year. Your CI/CD? GitHub, GitLab, CircleCI — API, CLI, and increasingly MCP.
Then you get to the database.
What does a fresh Postgres instance expose to an AI agent today? Zero surfaces. No REST API. No MCP server. The closest thing to a CLI is psql, and while you can script it with -c, it’s a raw SQL pass-through: no structured output contract, no schema-discovery tools, no auth story beyond database roles. If you want any of the three surfaces, you’re building them yourself.
Gartner’s April forecast pegs agentic AI spending at $201.9 billion in 2026, up 141% year over year. That money is flowing into tools that turn systems into agent-accessible surfaces. The vendors who got there first — Salesforce, Microsoft with Power Apps + DAB, Domo, Oracle — are capturing enterprise attention. The ones still assuming “REST API is enough” are losing it.
And the database is where every AI application eventually lands, because that’s where the data is. The pattern I keep seeing is: team builds an agent, agent works great in demo, team goes to wire it to the production database, and suddenly they’re reinventing auth, writing bespoke MCP tools, building a thin REST layer on top of the database, and debugging why the LLM keeps hallucinating column names because there’s no schema tool it can call.
Headless 360 says: that wiring should come in the box.
What Salesforce did that your database vendor didn’t
The more interesting thing about the TDX announcement isn’t that Salesforce shipped MCP support. Plenty of vendors have. It’s what they shipped alongside it.
The Agentforce Experience Layer separates what an agent does from how it appears. The same agent can render interactively in Slack, mobile, Teams, ChatGPT, Claude, Gemini, or any MCP-compatible client. That’s not just a presentation trick — it’s a bet that the client surface is going to fragment further, and you want your integration to survive that fragmentation without rewriting.
Agent Script, the new open-sourced DSL for defining deterministic agent behavior, acknowledges that pure-LLM control flow isn’t reliable enough for enterprise workflows. You need a way to say “when the user asks X, deterministically do Y, and only invoke the model for the parts that actually require reasoning.” This is the quiet admission across the industry right now: agents need guardrails, and those guardrails belong in code, not in prompts.
Lifecycle tooling — testing, evaluation, experimentation, observation, orchestration — shipped as a suite. Because the lesson from every large enterprise pilot in the last 12 months is that agents without eval and observability become incident generators. You ship them and then you can’t debug them.
Now look at your database. If you connect an agent to it today, what eval framework are you using for the tools the agent calls? What observability do you have on which queries the agent ran against which tables? What auth story governs which rows that agent was allowed to see? For most teams, the honest answer is: none of it. The database is a black box from the agent’s perspective, and from the operator’s perspective the agent is a black box. Two black boxes talking to each other.
The three-surface rule, applied to your database
This is the gap we’ve been building Faucet to close, and the TDX announcement clarifies why the three-surface approach matters in the database layer specifically.
Point Faucet at any SQL database — PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, SQLite — and it generates all three surfaces simultaneously:
A REST API with OpenAPI 3.1, per-table CRUD endpoints, filtering, pagination, and RBAC. For your microservices and integrations.
An MCP server with tools scoped to that database’s schema — query_users, create_order, list_products — plus resource discovery so an agent can introspect the schema before it writes a query. For Claude, Cursor, and anything else that speaks MCP.
A CLI (faucet) for the human and the CI pipeline. Start servers, run migrations, manage connections, inspect configs, all from the shell.
Same auth boundary, same RBAC rules, same audit log, three shapes. Whichever consumer is asking — deterministic code, reasoning agent, human in a terminal — the database speaks its language.
What this looks like in practice
Install Faucet:
curl -fsSL https://get.faucet.dev | sh
Connect a database and start serving all three surfaces:
# Connect a Postgres database
faucet connect postgres --dsn "postgres://user:pass@localhost/mydb" --name main
# Start the server — REST + MCP + admin UI on one port
faucet serve --port 8080
Two commands, and your database is now exposed three ways.
REST surface — deterministic code can hit it directly:
curl "http://localhost:8080/api/main/users?filter=active=true&limit=10" \
-H "Authorization: Bearer $FAUCET_TOKEN"
MCP surface — wire it into Claude Code with a single command:
claude mcp add --transport http faucet-main http://localhost:8080/mcp
Claude now sees a tool set generated from your schema: list_users, get_user, create_order, query_products, plus a describe_schema resource it can introspect before making up column names. Tool annotations flag destructive operations (the MCP spec’s destructiveHint) so the agent knows to confirm before calling them, mirroring the spec’s guidance on side-effect disclosure.
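If you want to eyeball that tool list without a client in the loop, MCP’s Streamable HTTP transport is just JSON-RPC over POST. A real client performs an initialize handshake first, and a strict server may insist on it, so treat this as a sketch:

# Ask the MCP endpoint for its generated tools (tools/list is a standard
# MCP method; the Accept header is required by the Streamable HTTP spec)
curl -s http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'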
CLI surface — for scripts and humans:
# Run a query against a registered connection
faucet query main "SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day'"
# List tables
faucet tables main
# Spin up a read-only role bound to specific tables
faucet role create analyst --read users,orders --db main
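And because the output is pipe-friendly and the exit codes mean something, the same commands drop straight into CI. This sketch assumes faucet query prints a bare value on success and exits non-zero on failure, which is exactly the shell contract the CLI surface exists to provide:

# Fail the pipeline if the last 24 hours produced no orders (assumed
# behavior: bare scalar output, non-zero exit on query failure)
count=$(faucet query main "SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day'")
[ "$count" -gt 0 ] || { echo "no orders in 24h, aborting"; exit 1; }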
Three surfaces. Same RBAC. Same audit log. Nothing to stitch together.
Where this leaves the database vendors
The big database vendors have been shipping MCP servers over the past six months — Oracle, Microsoft’s DAB, Snowflake, every major cloud warehouse. That’s real, and it’s good. But the rollout has two problems that Headless 360 throws into relief.
First, each vendor’s MCP server is its own surface area. If you run Postgres, MySQL, and Snowflake in production — which describes most mid-size engineering orgs — you get three different MCP servers, three different auth models, three different tool naming conventions. Agents now have to reason about which MCP server to call for which data.
Second, most of these vendor MCP servers don’t come with a matching REST API or a matching CLI. They’re MCP-only. Which means if you want a traditional integration to that same table, you’re still building a REST layer yourself, and if you want a human-operable CLI, good luck. The three-surface promise only holds if something unifies them.
The whole point of a database-to-API tool at this stage is to be the unifier — one binary, one config, one RBAC model, three surfaces, any of the major SQL engines underneath. That’s the bet. The Salesforce launch validates that the three-surface pattern is becoming the platform-layer default. The question for the database layer is whether teams will assemble that pattern themselves out of five different vendor pieces, or run one tool that does it in thirty seconds.
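Here is what that consolidation looks like, sketched with the commands shown earlier. The MySQL and Snowflake DSNs are placeholders, and serving multiple connections from one instance is assumed to work as the single-connection example does:

# Register three engines under one Faucet instance (DSNs are placeholders)
faucet connect postgres --dsn "$PG_DSN" --name app
faucet connect mysql --dsn "$MYSQL_DSN" --name legacy
faucet connect snowflake --dsn "$SNOWFLAKE_DSN" --name warehouse

# One server, one MCP endpoint, one auth model for all three
faucet serve --port 8080
claude mcp add --transport http faucet http://localhost:8080/mcp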
Getting started
Headless 360 is the kind of launch that reframes the conversation. For the rest of 2026, expect “is it headless?” to become the question enterprise buyers ask every vendor. Does your system have an API? An MCP tool? A CLI? All three, with shared auth and audit?
For your database layer, you can answer yes today:
curl -fsSL https://get.faucet.dev | sh
faucet connect postgres --dsn "$DATABASE_URL" --name prod
faucet serve --port 8080
Three surfaces. One binary. Any SQL database. The pattern Salesforce just made the default, available to your data layer in under a minute.