Vercel
Vercel calls itself “the AI Cloud.” They mean it — they've done more to make their platform legible to AI tools than almost anyone else. But there's a critical distinction they're missing: they've built for coding assistants, not for autonomous agents. That gap is costing them.
1. Discoverability — 9/10
Vercel is one of the few platforms that has genuinely invested in machine-readable documentation. They publish llms.txt and llms-full.txt at the root — a full, structured sitemap designed for LLMs to consume. Every documentation page is accessible as markdown by appending .md to the URL. An agent can navigate Vercel's entire knowledge base without a browser.
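Since every docs page is exposed as markdown via a `.md` suffix, an agent needs only a trivial URL rewrite to get plain text. A minimal sketch (the helper name is ours; the `.md` convention is Vercel's):

```python
# Turn a Vercel docs URL into its markdown equivalent so an agent
# can fetch plain text instead of rendered HTML.
def docs_markdown_url(url: str) -> str:
    """Append .md to a docs page URL, preserving any query string."""
    base, sep, query = url.partition("?")
    base = base.rstrip("/")            # normalize trailing slash
    if not base.endswith(".md"):
        base += ".md"
    return base + sep + query

print(docs_markdown_url("https://vercel.com/docs/deployments"))
# → https://vercel.com/docs/deployments.md
```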
Gap: No /.well-known/agent.json. They haven't implemented A2A protocol discovery, which means autonomous agents following the emerging standard won't find Vercel's capabilities through the expected channel.
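For context, an A2A agent card served at `/.well-known/agent.json` is a small JSON document. The field names below follow the A2A spec's Agent Card shape; the values are purely illustrative, not anything Vercel publishes:

```json
{
  "name": "Vercel",
  "description": "Deploy and manage web projects",
  "url": "https://api.vercel.com",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    { "id": "deploy", "name": "Create deployment" }
  ]
}
```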
2. Tool Surface — 7/10
Vercel ships an official MCP server at mcp.vercel.com — that's real commitment. The tools cover the right things: search docs, list projects, manage deployments, read logs. For a developer using Cursor or Claude Code, this is excellent.
But here's the catch: Vercel MCP only works with a curated whitelist of approved AI clients. If you're building an autonomous agent that needs to interact with Vercel programmatically, you can't. You're not on the list. This is a deliberate design choice — and it's the right choice for security in a coding assistant context. But it means the MCP server is functionally useless for autonomous agent-to-agent workflows.
Gap: The underlying REST API is comprehensive but not wrapped for agents — no OpenAPI spec prominently linked, no agent-native SDK. You can get there, but it takes work.
3. Auth Simplicity — 6/10
The MCP server uses OAuth. That's the right call for a coding assistant — it gives human users a clear authorization flow. But for an autonomous agent, OAuth is a nightmare. There's no session, no browser, no human to click “authorize.”
The REST API supports bearer tokens, which is usable. But token creation requires a human to log into the Vercel dashboard and generate one manually — there's no programmatic provisioning path for agents. Compare this to Stripe, where an agent can work with a single API key from day one.
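Once a human has generated a token, the bearer-token path itself is simple. A sketch against the public `GET /v9/projects` endpoint (endpoint and `teamId` parameter per Vercel's API docs; the helper names are ours):

```python
import json
import urllib.request

API_BASE = "https://api.vercel.com"

def projects_url(team_id=None):
    """Build the list-projects URL, optionally scoped to a team."""
    url = f"{API_BASE}/v9/projects"
    return url + (f"?teamId={team_id}" if team_id else "")

def auth_headers(token):
    """Bearer auth, the only credential type the REST API accepts."""
    return {"Authorization": f"Bearer {token}"}

def list_projects(token, team_id=None):
    req = urllib.request.Request(projects_url(team_id), headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The friction isn't in this code — it's that `token` can only come from a human clicking through the dashboard.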
Gap: No lightweight API key option for agent access. No agent-specific credential scoping. Every agent integration requires prior human involvement to set up auth.
4. Response Quality — 8/10
Where Vercel does well: consistent JSON structure, predictable pagination, clear resource IDs (they all have typed prefixes — prj_, team_, dpl_). An agent can reliably extract what it needs without fragile parsing. Deployment state is explicit and enumerable.
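Those typed prefixes mean an agent can classify a resource from its ID alone, before making any API call. A sketch covering the prefixes mentioned above (the mapping is illustrative; Vercel uses more prefixes than these):

```python
# Map Vercel's typed ID prefixes to resource kinds.
RESOURCE_PREFIXES = {"prj": "project", "team": "team", "dpl": "deployment"}

def resource_type(resource_id: str):
    """Return the resource kind implied by an ID's prefix, or None."""
    prefix, _, _ = resource_id.partition("_")
    return RESOURCE_PREFIXES.get(prefix)
```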
Gap: Deployment logs can be large and unstructured — agents ingesting full log output will burn tokens. A structured log summary endpoint would be high value.
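Until such an endpoint exists, agents have to do the compression client-side. One plausible sketch: keep a head, a tail, and any line matching failure keywords (window sizes and keyword list are illustrative, not a Vercel feature):

```python
def summarize_logs(lines, head=5, tail=5, keywords=("error", "warn", "fail")):
    """Keep the first/last few lines plus anything that looks like a failure."""
    keep = set(range(min(head, len(lines))))
    keep |= set(range(max(0, len(lines) - tail), len(lines)))
    keep |= {i for i, line in enumerate(lines)
             if any(k in line.lower() for k in keywords)}
    return [lines[i] for i in sorted(keep)]
```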
5. Error Handling — 8/10
Standard HTTP status codes, consistent error object shape, clear rate limit headers. Vercel's error messages are generally actionable — “Project not found” vs. a generic 404. Build errors surface with useful context.
Gap: Some deployment failure states require log inspection to understand — the error object alone doesn't always tell you what went wrong.
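The clear rate-limit headers make backoff logic mechanical. A sketch of the decision an agent makes per response — header names follow the common `X-RateLimit-*` convention, so confirm the exact names against Vercel's API reference:

```python
import time

def retry_delay(status, headers, now=None):
    """Seconds to wait before retrying, or None if no retry is needed."""
    if status != 429:
        return None
    now = time.time() if now is None else now
    reset = headers.get("X-RateLimit-Reset")  # epoch seconds, if present
    if reset is not None:
        return max(0.0, float(reset) - now)
    return 1.0  # fallback backoff when no reset header is sent
```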
Agent Use Cases That Work Today
- ✓ Coding assistants (Claude Code, Cursor) deploying via MCP
- ✓ CI/CD agents triggering deployments via REST API
- ✓ Monitoring agents reading deployment status
- ✓ Documentation agents using llms.txt for context
- ✓ AI-assisted development workflows via Skills.sh
What's Blocked for Autonomous Agents
- ⚠ MCP access — whitelist only, arbitrary agents excluded
- ⚠ Programmatic auth provisioning — requires human dashboard access
- ⚠ A2A discovery — no /.well-known/agent.json
- ⚠ Large log ingestion — token-expensive without a summary endpoint
The Real Finding
Vercel has done the hard part. They have machine-readable docs, an MCP server, structured APIs, typed IDs, and a clear mental model of agent users. The score isn't 9/10 for one reason: they've optimised for human-adjacent AI (coding assistants) and not for autonomous agents.
That's not a criticism — it's a product decision. Cursor and Claude Code are Vercel's actual users today. But as fully autonomous agents become more common, the OAuth-only, whitelist-only approach will become a friction point. The gap between “great for assisted coding” and “great for autonomous operation” is exactly the gap Botlington exists to map.
Three Things Vercel Should Do
- Publish /.well-known/agent.json — it takes 20 minutes and signals to the emerging agent ecosystem that you're playing the long game.
- Add an agent token type — scoped, machine-provisionable, no OAuth dance required. This is what Stripe gets right that Vercel doesn't.
- Open MCP to non-whitelisted clients with scoped permissions — or publish the REST API as an OpenAPI spec that agent frameworks can auto-generate tool definitions from.
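On that last point: the reason an OpenAPI spec matters is that tool definitions can then be generated mechanically. A rough sketch of the transform an agent framework performs (the helper and output shape are hypothetical; real frameworks vary):

```python
def operation_to_tool(path, method, op):
    """Turn one OpenAPI operation object into a minimal tool definition."""
    fallback = f"{method}_{path.strip('/').replace('/', '_')}"
    return {
        "name": op.get("operationId", fallback),
        "description": op.get("summary", ""),
        "parameters": [p["name"] for p in op.get("parameters", [])],
    }

# Illustrative spec fragment for a list-projects operation.
spec_op = {
    "operationId": "listProjects",
    "summary": "List projects",
    "parameters": [{"name": "teamId", "in": "query"}],
}
tool = operation_to_tool("/v9/projects", "get", spec_op)
# tool["name"] is "listProjects"
```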
Want an audit for your product?
Botlington scores your API across 5 agent-readiness dimensions and tells you exactly what to fix — before agents start bouncing off your auth layer.
Get your audit →