helloandy.net Research

WebMCP Security — How AI Agent Permissions Work

By Andy · March 14, 2026 · 12 min read

An AI agent visits your website and asks to run a function that deletes user data. What happens next? If you've built on navigator.modelContext, the answer is: nothing, unless the user explicitly says yes. That consent gate is the foundation of WebMCP security, and it's what makes browser-based tool execution fundamentally different from handing an agent an API key and hoping for the best.

But permissions are only one layer. A secure WebMCP implementation also needs input validation, output sanitization, rate limiting, and clear boundaries around what should never be exposed as a tool in the first place. This article covers all of it. If you haven't read What is WebMCP? yet, start there for the basics. If you're ready to write code, the implementation guide walks through tool registration step by step.

1. The Permission Model

WebMCP security starts with one principle: the browser is the trust boundary, not the server. When a site registers tools through navigator.modelContext, those tools don't run in a vacuum. They execute inside the user's browser session, under the same origin policies and permission constraints that govern every other web API.

This means three things:

- Tools run with the user's existing session and cookies, not a separately issued API key.
- Same-origin policy, CORS, and HTTPS requirements apply to every request a tool makes.
- The browser can put the user in front of every tool call before it executes.

This is a sharp contrast with server-side MCP, where a tool call goes from the agent to a remote server with whatever API key the agent was given. There's no user in that loop. With WebMCP, the user is always in the loop — or at least they can be.

2. Permission Prompts

The browser shows a permission prompt before an agent executes any WebMCP tool. Think of it like the permission dialogs you see for camera access or geolocation — the user sees what's about to happen and can allow or deny it.

Here's what that flow looks like from the tool's perspective:

navigator.modelContext.registerTool({
  name: "place-order",
  description: "Place an order for the items currently in the cart. " +
               "Requires user confirmation before processing.",
  inputSchema: {
    type: "object",
    properties: {
      shippingMethod: {
        type: "string",
        description: "Shipping method for the order"
      },
      confirmTotal: {
        type: "boolean",
        description: "Set confirmTotal to true to confirm the order total."
      }
    },
    required: ["shippingMethod", "confirmTotal"]
  },
  execute: async ({ shippingMethod, confirmTotal }) => {
    // Refuse to proceed without explicit confirmation
    if (!confirmTotal) {
      return { error: "Order not confirmed. Set confirmTotal to true." };
    }
    const res = await fetch("/api/checkout", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ shippingMethod })
    });
    const data = await res.json();
    return { orderId: data.id, status: "placed", estimatedDelivery: data.eta };
  }
});

The browser intercepts the callTool request and shows the user a prompt like: "Agent wants to run place-order on example.com with parameters: shippingMethod = 'express', confirmTotal = true. Allow?" Only after the user clicks Allow does execute fire.

Two details matter here. First, the prompt shows the tool name and parameters, so write clear names — place-order is better than do-action-7. Second, the permission grant isn't permanent by default. Each call triggers a new prompt unless the user chooses "Always allow this tool on this site."

Design tip: For high-risk tools like purchases or account changes, add a confirmation parameter (like confirmTotal above) as a second layer. Even if a user auto-allows the tool, the handler can refuse to proceed without explicit agent confirmation of the action.

3. Input Validation

The JSON Schema in your inputSchema catches type errors — it won't let a string through where you declared a number. But schema validation alone isn't enough. Your execute handler needs to validate business logic, enforce boundaries, and sanitize anything that touches a database or API.

execute: async ({ query, page = 1, pageSize = 20 }) => {
  // Enforce length limits
  if (query.length > 200) {
    return { error: "Query too long. Maximum 200 characters." };
  }

  // Clamp numeric ranges
  const safePage = Math.max(1, Math.min(page, 100));
  const safeSize = Math.max(1, Math.min(pageSize, 50));

  // Sanitize for SQL/NoSQL injection if building queries
  const sanitizedQuery = query.replace(/[^\w\s\-]/g, "");

  const res = await fetch(
    `/api/search?q=${encodeURIComponent(sanitizedQuery)}` +
    `&page=${safePage}&size=${safeSize}`
  );
  const data = await res.json();
  return { results: data.items, totalPages: data.totalPages };
}

The pattern is simple: don't trust the agent's input any more than you'd trust a random HTTP request. Cap numeric values. Truncate strings. Strip special characters if they're not needed. An agent might be acting on a prompt injection attack — someone could have tricked it into passing query: "'; DROP TABLE users; --" through your tool.

Here are the checks every handler should include:

- Length limits on every string input.
- Clamped ranges on every numeric input.
- Stripping or rejecting characters the input doesn't need.
- Encoding (encodeURIComponent, parameterized queries) before a value touches a URL, database, or API.

4. Output Sanitization

What your tool returns matters as much as what it accepts. The agent gets your return value as JSON and can include it in responses to the user, pass it to other tools, or store it for later. If your output contains sensitive data, that data leaks.

// BAD: leaking internal data
execute: async ({ userId }) => {
  const user = await fetch(`/api/users/${userId}`);
  const data = await user.json();
  return data; // Includes passwordHash, internalId, adminNotes
}

// GOOD: return only what the agent needs
execute: async ({ userId }) => {
  const user = await fetch(`/api/users/${userId}`);
  const data = await user.json();
  return {
    displayName: data.name,
    memberSince: data.createdAt,
    publicBio: data.bio
  };
}

Rules for output:

- Allowlist fields explicitly; never return a raw API response.
- Strip internal identifiers, credentials, hashes, and admin-only fields.
- Return only what the agent needs to complete the task, nothing more.
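A small helper makes the allowlist mechanical. This is a sketch, not part of any WebMCP API; `pick` and the field names are illustrative:

```javascript
// Copy only explicitly allowlisted fields from an API response.
// Anything not named here (passwordHash, internalId, adminNotes)
// can never leak, even if the backend adds new fields later.
function pick(source, fields) {
  const out = {};
  for (const key of fields) {
    if (key in source) out[key] = source[key];
  }
  return out;
}

// Usage inside a handler:
//   return pick(data, ["name", "createdAt", "bio"]);
```

The advantage over deleting sensitive fields is that the default is safe: a new column added to the database stays private until someone deliberately adds it to the list.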

5. Rate Limiting

An agent can call your tool in a loop. If each call triggers a database query, an external API request, or a computation-heavy process, a tight loop can generate serious load. Rate limiting is not optional — it's a basic requirement for any tool that does real work.

// Simple client-side rate limiter
const toolCalls = new Map();

function rateLimit(toolName, maxCalls = 10, windowMs = 60000) {
  const now = Date.now();
  const calls = toolCalls.get(toolName) || [];

  // Remove calls outside the window
  const recent = calls.filter(t => now - t < windowMs);

  if (recent.length >= maxCalls) {
    return {
      allowed: false,
      retryAfter: Math.ceil((recent[0] + windowMs - now) / 1000)
    };
  }

  recent.push(now);
  toolCalls.set(toolName, recent);
  return { allowed: true };
}

// Use it in your handler
execute: async ({ query }) => {
  const limit = rateLimit("search-products", 15, 60000);
  if (!limit.allowed) {
    return {
      error: "Rate limit exceeded. Try again in " +
             limit.retryAfter + " seconds.",
      retryAfter: limit.retryAfter
    };
  }

  // ... actual search logic
}

Client-side rate limiting is a first defense, but it's not enough on its own. An agent running in a different tab or a fresh browser session won't share the same counter. Your backend APIs need their own rate limits too. The client-side limiter catches the common case — an agent calling the same tool repeatedly in one session — while server-side limits protect against everything else.

Return a retryAfter value when you rate-limit a call. Well-behaved agents will read this and wait instead of hammering the endpoint.
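On the calling side, honoring that value might look like the sketch below. `callWithBackoff` and `invoke` are illustrative names, since how an agent framework actually invokes tools is outside WebMCP itself:

```javascript
// Wrap an async tool invocation so rate-limit responses are retried
// after the server-suggested delay instead of hammering the endpoint.
// `invoke` is a stand-in for the framework's tool-call function.
async function callWithBackoff(invoke, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const result = await invoke();
    if (!result || !result.retryAfter) return result;
    // Wait the number of seconds the tool asked for, then try again.
    await new Promise(r => setTimeout(r, result.retryAfter * 1000));
  }
  return { error: "Rate limited after retries" };
}
```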

6. Authentication Patterns

WebMCP tools inherit the browser's auth state. This is powerful but requires careful handling. Here's the pattern for tools that should only work for authenticated users:

execute: async ({ orderId }) => {
  // Check auth status before doing anything
  const authRes = await fetch("/api/me");
  if (authRes.status === 401) {
    return {
      error: "Not authenticated",
      action: "Please log in at /login to use this tool"
    };
  }

  const user = await authRes.json();

  // Verify authorization — does this user own this order?
  const orderRes = await fetch(`/api/orders/${orderId}`);
  const order = await orderRes.json();

  if (order.userId !== user.id) {
    return { error: "You don't have access to this order" };
  }

  return {
    orderId: order.id,
    status: order.status,
    items: order.items.length,
    total: order.total
  };
}

Authentication (who is this user?) and authorization (can this user do this specific thing?) are separate checks. Don't skip authorization just because the user is logged in. A logged-in user calling get-order with someone else's order ID should get denied, just like they would through your normal UI.

For tools that change data — placing orders, updating profiles, deleting content — consider requiring a fresh auth check even within an active session. The fetch("/api/me") call verifies the session is still valid, which catches edge cases where cookies have expired mid-conversation.
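That fresh auth check repeats in every mutating handler, so it's worth factoring out. A sketch, assuming the same /api/me endpoint used above; `requireAuth` is an illustrative name, not a WebMCP API:

```javascript
// Resolve the current user from the live session, or return null if
// the session has expired. Mutating handlers call this first instead
// of trusting any cached auth state.
async function requireAuth() {
  const res = await fetch("/api/me");
  if (res.status === 401) return null;
  return res.json();
}

// In a handler:
//   const user = await requireAuth();
//   if (!user) {
//     return { error: "Not authenticated", action: "Please log in at /login" };
//   }
//   ...then check authorization against user.id...
```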

7. What Sites Should Never Expose via WebMCP

Not everything should be a tool. Some operations are too dangerous, too sensitive, or too prone to misuse to expose through an AI agent interface, no matter how good your validation is.

Never expose these as WebMCP tools:

- Account deletion or anything else irreversible.
- Password, email, or payment-credential changes.
- Admin or moderation functions.
- Bulk data export, or anything that can return other users' data.

The test is simple: if you wouldn't put it in a public API with only cookie-based auth, don't make it a WebMCP tool. Tools are discoverable by any agent visiting your page. Even with permission prompts, users click "Allow" on things they shouldn't — that's been true since the first browser dialog box.

A good rule of thumb: read operations are generally safe. Search, lookup, status checks — these are low-risk. Write operations need more thought. And destructive operations should almost always go through your normal UI, where you can add confirmation flows, CAPTCHAs, and multi-step verification that a single permission prompt can't replicate.

8. Trust Indicators

When an agent browses the web looking for tools, how does it know which sites to trust? This is an open problem, but several signals are emerging.

Directory listings. Sites listed on curated directories like webmcplist.com have been reviewed by humans. An agent (or the developer building the agent) can check the directory before allowing tool execution on an unknown site. This is the closest analog to how app stores provide a trust layer over raw software distribution.

HTTPS and valid certificates. This should be obvious, but WebMCP requires a secure context for good reason. A tool served over HTTP could be tampered with by a man-in-the-middle. Check that the certificate is valid and issued by a recognized CA — self-signed certs are a red flag.

Tool description quality. Vague descriptions like "does stuff" or "general helper" suggest the developer didn't put thought into their implementation. Specific descriptions with clear input/output contracts suggest a considered design. It's a soft signal, but it correlates.

Consistent behavior. A tool that returns { error: "Internal server error" } 40% of the time isn't trustworthy. Agents (and the frameworks they run on) can track reliability scores per tool per site and deprioritize unreliable ones.

Minimal scope. A site that registers 3 well-defined tools is more trustworthy than one registering 47 tools that overlap in functionality and description. More tools means more surface area, and tools with overlapping purposes suggest the developer didn't think through their API design.

9. How webmcplist.com Vets Submissions

The WebMCP Directory is the primary index where agents and developers discover WebMCP-enabled sites. It doesn't list everything — submissions go through a review process before they're published. You can explore the full catalog of vetted WebMCP tools to see what passes muster.

Here's what the review process checks:

- Reviewers visit the live site and call each registered tool with valid and invalid inputs.
- Tool descriptions must match actual behavior.
- Output must not leak sensitive or internal data.
- Error handling must work properly and return clean responses.
- Sites that attempt prompt injection or data harvesting are permanently banned.

After approval, sites are monitored periodically. If tools break or behavior changes in ways that don't match the original listing, the entry gets flagged for re-review. It's not a set-and-forget process.

Before you submit: Test every tool on your live site (not localhost). Check that errors return clean JSON. Verify that output doesn't include internal data. Read the implementation guide if you need to fix anything.
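One structural choice makes that pre-submission testing easier: define the handler as a named function and keep registration thin, so you can exercise the handler directly from the devtools console on your live site. A sketch with illustrative names; the registration guard just lets the same file load outside a browser:

```javascript
// The handler is a plain named function, so it can be tested from the
// console (await searchProducts({ query: "shoes" })) without an agent.
async function searchProducts({ query }) {
  if (typeof query !== "string") {
    return { error: "query must be a string" };
  }
  if (query.length > 200) {
    return { error: "Query too long. Maximum 200 characters." };
  }
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  return res.json();
}

// Registration stays thin; the guard skips it outside a browser context.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool({
    name: "search-products",
    description: "Search the product catalog by keyword.",
    inputSchema: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"]
    },
    execute: searchProducts
  });
}
```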

Putting It Together: A Secure Tool Template

Here's a template that incorporates all the security patterns covered above. Copy it, swap in your logic, and you'll have a tool that validates input, sanitizes output, checks auth, and rate-limits calls.

const callTracker = new Map();

function checkRate(name, max = 10, windowMs = 60000) {
  const now = Date.now();
  const log = (callTracker.get(name) || []).filter(t => now - t < windowMs);
  if (log.length >= max) {
    return { ok: false, retryAfter: Math.ceil((log[0] + windowMs - now) / 1000) };
  }
  log.push(now);
  callTracker.set(name, log);
  return { ok: true };
}

navigator.modelContext.registerTool({
  name: "get-account-summary",
  description: "Returns the current user's account summary: " +
               "display name, plan type, and usage stats. " +
               "Requires the user to be logged in.",
  inputSchema: {
    type: "object",
    properties: {
      includeUsage: {
        type: "boolean",
        description: "If true, include API usage stats for this billing period"
      }
    }
  },
  execute: async ({ includeUsage = false }) => {
    // 1. Rate limit
    const rate = checkRate("get-account-summary", 5, 60000);
    if (!rate.ok) {
      return { error: "Rate limited", retryAfter: rate.retryAfter };
    }

    // 2. Auth check
    const me = await fetch("/api/me");
    if (me.status === 401) {
      return { error: "Not logged in", action: "Log in at /login" };
    }

    // 3. Fetch data
    const user = await me.json();
    const result = {
      displayName: user.name,
      plan: user.plan,
      memberSince: user.createdAt
    };

    // 4. Conditionally include extra data
    if (includeUsage) {
      const usage = await fetch("/api/usage");
      const stats = await usage.json();
      result.usage = {
        apiCalls: stats.totalCalls,
        limit: stats.planLimit,
        periodEnds: stats.periodEnd
      };
    }

    // 5. Return sanitized output (no internal IDs, no email, no billing details)
    return result;
  }
});

Every line of security code in that template exists because of a real attack vector. The rate limiter prevents loop abuse. The auth check prevents unauthorized access. The allowlisted output fields prevent data leaks. The conditional includeUsage flag limits what gets returned by default. None of these patterns are complicated. They're just easy to forget when you're focused on making the tool work in the first place.

WebMCP security isn't a separate concern from WebMCP development — it's the same concern. Write your tools with the assumption that the calling agent might be compromised, confused, or malicious, and you'll end up with tools that work safely even when everything goes wrong.

Frequently Asked Questions

Can an AI agent call WebMCP tools without user permission?
No. The browser intercepts every tool call and shows the user a permission prompt before execution. The user can allow or deny each call individually, or choose to always allow a specific tool on a specific site. Without user consent, the tool's execute function never fires. This is enforced at the browser level, not by the site's JavaScript — so a malicious tool can't bypass it.
Is navigator.modelContext security different from regular web security?
It builds on top of regular web security rather than replacing it. Same-origin policy, CORS, HTTPS requirements, and cookie scoping all apply. The main addition is the permission prompt layer — tool calls require explicit user consent, which normal JavaScript execution doesn't. Think of it as web security plus an extra consent gate specifically for agent-initiated actions.
How does webmcplist.com verify that a site's tools are safe?
Submissions go through human review. Reviewers visit the live site, call each registered tool with valid and invalid inputs, check that descriptions match actual behavior, verify output doesn't leak sensitive data, and confirm error handling works properly. Sites are monitored after approval — if tools break or behavior changes, the listing gets flagged for re-review. Sites that attempt prompt injection or data harvesting are permanently banned.
Should I add rate limiting to read-only WebMCP tools?
Yes. Even read-only tools can generate significant server load if called in a loop. A search tool that hits your database on every call can be just as expensive as a write operation. Add client-side rate limiting as a first defense and make sure your backend APIs have their own limits. Return a retryAfter value when rate limiting kicks in so well-behaved agents know when to try again.
