What is WebMCP? The New Web Standard for AI Agents
AI agents have a problem with the web. Parse the DOM. Find buttons. Click them.
Wait. Hope nothing broke. It's slow. Brittle. Expensive. A CSS change can ruin everything.
WebMCP fixes this. Instead of scraping and guessing, websites can now expose structured tools directly to the browser. Agents call functions with typed parameters and get structured data back.
What we'll cover:

- What is WebMCP?
- How WebMCP Works
- Imperative vs. Declarative API
- Why WebMCP Matters
- How to Add WebMCP to Your Site
- WebMCP vs. Server-Side MCP
- WebMCP Directory
- FAQ

1. What is WebMCP?

WebMCP is a proposed web standard that lets websites expose structured tools directly to AI agents in the browser. The navigator.modelContext API is where it all lives: websites use it to register tools, and browser-based agents use it to discover and invoke them. A flight booking site, for example, might register a searchFlights tool with parameters for origin, destination, and date.

Think of it as turning every web page into a lightweight API server, but one that runs entirely in the browser. The core idea: instead of an agent trying to figure out how to use a website by looking at it (the way screen-scraping works), the website just tells the agent what it can do.

The spec is published by the W3C Web Machine Learning Community Group but is not yet a formal W3C standard. Chrome 146 Canary already ships an early preview behind a feature flag in chrome://flags. It's the biggest change to how AI agents interact with the web since agents started interacting with the web.
2. How WebMCP Works
WebMCP introduces a new API on the navigator object: navigator.modelContext. This is a ModelContext instance that gets created alongside the Navigator and provides methods for registering and managing tools.
The flow works like this:
- Registration. A website calls navigator.modelContext.registerTool() with a tool definition: a name, a natural language description, a JSON Schema for inputs, and a handler function. The browser stores it.
- Discovery. An AI agent running in the browser (or a browser extension acting as an agent) queries navigator.modelContext.getTools() to see what tools are available on the current page.
- Invocation. The agent calls a tool with structured JSON arguments; the site's handler runs and returns structured results.
Here's what a basic tool registration looks like:
navigator.modelContext.registerTool({
name: "search-products",
description: "Search the product catalog by keyword and category",
inputSchema: {
type: "object",
properties: {
query: {
type: "string",
description: "Search keywords"
},
category: {
type: "string",
description: "Product category to filter by",
enum: ["electronics", "clothing", "books", "home"]
},
maxResults: {
type: "number",
description: "Maximum results to return (default 10)"
}
},
required: ["query"]
},
execute: async ({ query, category, maxResults = 10 }) => {
const results = await searchAPI(query, category, maxResults);
return { products: results };
}
});
An agent encountering this page would see a tool called search-products with a clear description and typed parameters. It calls the tool with structured JSON, the website executes the function, and structured results come back. No DOM parsing, no guessing, no screenshots.
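To make that call flow concrete, here's a runnable sketch that simulates the register/discover/invoke cycle outside the browser. The in-memory registry, the agentCall helper, and the searchAPI stub are all stand-ins invented for illustration; they are not part of the WebMCP API itself.

```javascript
// Simulated register -> discover -> invoke flow.
// "registry" stands in for the browser's ModelContext;
// "searchAPI" is a stubbed backend.
const registry = new Map();

function registerTool(tool) {
  registry.set(tool.name, tool);
}

// Stub backend: filters a tiny hard-coded catalog.
async function searchAPI(query, category, maxResults) {
  const catalog = [
    { name: "USB-C cable", category: "electronics" },
    { name: "Wool sweater", category: "clothing" },
  ];
  return catalog
    .filter(p => p.name.toLowerCase().includes(query.toLowerCase()))
    .filter(p => !category || p.category === category)
    .slice(0, maxResults);
}

// Site side: register the tool from the article's example.
registerTool({
  name: "search-products",
  description: "Search the product catalog by keyword and category",
  execute: async ({ query, category, maxResults = 10 }) => {
    const results = await searchAPI(query, category, maxResults);
    return { products: results };
  },
});

// Agent side: look the tool up by name, invoke with structured JSON.
async function agentCall(toolName, args) {
  const tool = registry.get(toolName);
  if (!tool) throw new Error(`Unknown tool: ${toolName}`);
  return tool.execute(args);
}

agentCall("search-products", { query: "cable", category: "electronics" })
  .then(result => console.log(JSON.stringify(result)));
```

The agent never touches the page's markup: the whole exchange is a function call in, a JSON object out.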
3. Imperative vs. Declarative API
WebMCP provides two ways to expose tools. Which one you use depends on what your site does.
The Declarative API: Annotated HTML Forms
If your site already has well-structured HTML forms, the declarative approach is the fastest path. You add a few attributes to your existing form markup, and the browser automatically translates it into a tool schema that agents can call.
<form toolname="subscribe-newsletter"
tooldescription="Subscribe an email address to the weekly newsletter"
action="/api/subscribe"
method="POST">
<input type="email" name="email"
tooldescription="Email address to subscribe" required />
<select name="frequency" tooldescription="How often to send emails">
<option value="weekly">Weekly</option>
<option value="monthly">Monthly</option>
</select>
<button type="submit">Subscribe</button>
</form>
The browser reads the toolname, tooldescription, and form field attributes, then constructs a tool that agents can invoke. By default, the user still has to click submit — the agent pre-fills the form, but doesn't auto-submit it. You can override this with toolautosubmit if you trust the agent to act without confirmation.
The declarative API is great for contact forms, search bars, newsletter signups — anything that maps cleanly to a form submission.
The Imperative API: Full JavaScript Control
Most modern web apps have interactions that don't map to a single form. Multi-step checkout flows, real-time filtering, drag-and-drop interfaces, state-dependent actions — these need the imperative API.
The imperative API uses navigator.modelContext.registerTool() (shown in the code example above) and gives you full control over what happens when an agent calls your tool. The handler is an async function that can do anything: call your backend API, update local state, trigger UI changes, or compose multiple operations together.
There's also provideContext(), which replaces the entire set of available tools at once. This is useful when your app's capabilities change based on state — after a user logs in, you might swap out anonymous tools for authenticated ones.
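Here's a minimal sketch of that login pattern. The modelContext mock below simulates the browser object so the snippet runs anywhere; the provideContext({ tools: [...] }) call shape follows the current proposal and may change, and the tool lists are invented for illustration.

```javascript
// Mock of navigator.modelContext for illustration. provideContext()
// replaces the entire tool set at once; getTools() returns it.
const modelContext = {
  tools: [],
  provideContext({ tools }) { this.tools = tools; },
  getTools() { return this.tools; },
};

// Hypothetical tool sets for an e-commerce site.
const anonymousTools = [
  { name: "search-products", execute: async () => ({ products: [] }) },
];

const authenticatedTools = [
  { name: "search-products", execute: async () => ({ products: [] }) },
  { name: "view-order-history", execute: async () => ({ orders: [] }) },
];

// Before login: only anonymous tools are exposed.
modelContext.provideContext({ tools: anonymousTools });

// After login: swap the whole set for the authenticated one.
function onLogin() {
  modelContext.provideContext({ tools: authenticatedTools });
}

onLogin();
console.log(modelContext.getTools().map(t => t.name));
```

Replacing the whole set, rather than registering tools one by one, keeps the advertised capabilities in sync with the app's actual state.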
| | Declarative | Imperative |
|---|---|---|
| Best for | Forms, search bars, simple inputs | Complex workflows, dynamic state, multi-step operations |
| Implementation | HTML attributes on existing forms | JavaScript via navigator.modelContext |
| Setup effort | Minimal (add attributes to existing markup) | Moderate (write handler functions) |
| User confirmation | Submit button by default | Up to developer |
| Dynamic tools | No | Yes, via provideContext() |
Most real-world implementations will use both. The declarative API covers your forms, and the imperative API handles everything else.
4. Why WebMCP Matters
Right now, AI agents interact with websites the hard way. They use browser automation frameworks (Playwright, Puppeteer) or vision models to parse screenshots. Both approaches have serious problems:
- Fragility. DOM-based scraping breaks when the site updates its layout, class names, or component structure. Vision-based approaches break when the design changes. Sites don't owe agents a stable interface.
- Cost. Rendering pages, taking screenshots, and sending them to vision models burns tokens and compute. Google's research shows WebMCP achieves a 67% reduction in computational overhead compared to vision-based agent approaches.
- Accuracy. Structured tool calls with typed schemas push task accuracy to roughly 98%, compared to significantly lower rates for screen-scraping agents that have to interpret visual layouts.
- Speed. A structured function call that returns JSON is orders of magnitude faster than rendering a page, screenshotting it, sending the image to a model, parsing the response, and deciding what to click.
- Security. WebMCP runs inside the browser session. The user's auth, cookies, and permissions apply automatically. No need to hand API keys to third-party agent services or build separate auth flows.
For website owners, there's a strategic angle too. As AI agents become a primary way people interact with the web, sites that are agent-friendly will get more traffic and engagement than sites that aren't. WebMCP is how you make your site a first-class citizen in an agent-driven web.
5. How to Add WebMCP to Your Site
Getting started with WebMCP requires a few steps. The spec is still in early preview, but you can start building today.
Step 1: Enable the Flag
In Chrome 146+ Canary, navigate to chrome://flags, search for "WebMCP for testing," and enable it. Relaunch the browser. If you need to support browsers that don't have native WebMCP yet, the MCP-B polyfill provides navigator.modelContext as a drop-in shim.
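Because support is still behind a flag, it's worth guarding your registration code with feature detection so browsers without native WebMCP (and without the polyfill) simply skip it. A minimal sketch, where registerSiteTools is a hypothetical callback that would hold your registerTool() calls:

```javascript
// Detect whether this browser exposes the WebMCP entry point.
function supportsWebMCP() {
  return typeof navigator !== "undefined" &&
         "modelContext" in navigator;
}

// Run the site's tool registrations only when the API is present.
// registerSiteTools is a hypothetical callback you supply.
function initTools(registerSiteTools) {
  if (!supportsWebMCP()) {
    // No native support and no polyfill loaded: skip quietly.
    return false;
  }
  registerSiteTools(navigator.modelContext);
  return true;
}
```

The page behaves identically for ordinary users either way; agents simply find tools when the API is there.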
Step 2: Identify Your Tools
Think about what agents would actually want to do on your site. An e-commerce site might expose searchProducts, getProductDetails, addToCart, and checkout. A documentation site might expose searchDocs and getArticle. A weather service might expose getForecast and getAlerts.
Start with read-only tools. They're lower risk and give you a chance to see how agents use your tools before exposing write operations.
Step 3: Register Tools
For each tool, define a clear name, a description that explains when and how to use it, a JSON Schema for inputs, and a handler function:
navigator.modelContext.registerTool({
name: "get-article",
description: "Retrieve a help article by topic. Returns the full " +
"article text and related article links.",
inputSchema: {
type: "object",
properties: {
topic: {
type: "string",
description: "The topic or keyword to search for"
}
},
required: ["topic"]
},
execute: async ({ topic }) => {
const response = await fetch(`/api/articles?q=${encodeURIComponent(topic)}`);
const data = await response.json();
return {
title: data.title,
content: data.body,
relatedArticles: data.related
};
}
});
Step 4: Test with the Inspector
Install the Model Context Tool Inspector Chrome extension. It shows you which tools are registered on any page, lets you execute them manually with custom parameters, and can test them with an agent via the Gemini API.
Architecture Tips
A few things to keep in mind as you build:
- Separate your logic from your UI. Tool handlers need access to your business logic, but not your React component tree. If your search logic is buried inside a component's state management, you'll need to extract it into a shared service layer first. Apps with clean separation between UI and data will have a much easier time.
- HTTPS is required. Like most powerful browser APIs, WebMCP requires a secure context. localhost gets a pass during development.
- Write good descriptions. The agent decides which tool to call based on the name and description. Vague descriptions lead to misuse. Be specific about what the tool does, what it returns, and when it should be used.
- Return structured data. Don't return HTML fragments. Return clean JSON that an agent can work with directly.
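As a sketch of that last point, here's a handler that maps a raw backend record into a clean, typed result object instead of markup. getProductFromDB and the field names are hypothetical:

```javascript
// Hypothetical data-layer function returning a raw backend record.
async function getProductFromDB(id) {
  return { id, title: "Wool sweater", priceCents: 4200, stock: 3 };
}

// Tool handler: reshape the record into plain JSON an agent can
// consume directly, rather than an HTML fragment it would have
// to parse.
async function getProductDetails({ productId }) {
  const p = await getProductFromDB(productId);
  return {
    id: p.id,
    title: p.title,
    price: { amount: p.priceCents / 100, currency: "USD" },
    inStock: p.stock > 0,
  };
}
```

Note that the handler also normalizes units (cents to a decimal amount with an explicit currency), so the agent doesn't have to guess what the numbers mean.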
6. WebMCP vs. Server-Side MCP
WebMCP shares its name and conceptual lineage with Anthropic's Model Context Protocol (MCP), but they operate in different layers of the stack.
| | WebMCP | Server-Side MCP |
|---|---|---|
| Runs where | Client-side, in the browser | Server-side, as a hosted service |
| Protocol | Browser API (navigator.modelContext) | JSON-RPC over HTTP/SSE/stdio |
| Auth model | Inherits browser session (cookies, SSO) | Separate auth (API keys, OAuth) |
| Discovery | Agent navigates to page, reads registered tools | Client connects to known server endpoint |
| Best for | Browser agents interacting with websites | Backend integrations, IDE plugins, data pipelines |
These aren't competing standards — they're complementary. A website might use server-side MCP to let backend agents access its API, and WebMCP to let browser-based agents interact with its frontend. Same tools, different access patterns.
7. WebMCP Directory
As websites start adopting WebMCP, discovery becomes a real problem. The browser has to navigate to a page and run its JavaScript to find out what tools are available. There's no central registry (yet — a manifest-based discovery mechanism is being discussed for future spec versions).
In the meantime, webmcplist.com serves as a community-maintained directory of websites that support WebMCP. If you've added WebMCP to your site, you can submit it to the directory so agents (and developers building agents) can find it. If you're building an agent, the directory is a useful starting point for discovering which sites expose structured tools.
What's Next for WebMCP
The standard is still early. A few things to watch:
- Browser adoption. Chrome Canary has it behind a flag. Stable Chrome and Edge releases are expected in the second half of 2026. Firefox and Safari haven't announced timelines.
- Manifest-based discovery. Right now, agents have to visit a page to discover its tools. A future spec addition would let sites declare tools in a manifest file, making discovery possible with a simple HTTP GET, similar to how robots.txt or /.well-known/ endpoints work.
- W3C standardization. The spec is transitioning from community incubation to a formal W3C draft. Broader industry buy-in will follow.
- Ecosystem tooling. Expect frameworks, testing tools, and analytics to build up around WebMCP as adoption grows. The Model Context Tool Inspector extension is just the beginning.
If you're building websites, the time to start experimenting is now. The sites that are agent-ready when browser-native agents go mainstream will have a significant advantage over those that aren't.