MCP-Ready Websites: Building the Analytics.js for AI Agents

Vyshnav S Deepak
mcp
ai-agents
javascript
web-development
automation

In my previous post about AI agents and accessibility, I talked about how agents struggle with poorly structured websites. But there's a bigger opportunity here that goes beyond just semantic HTML.

What if, instead of agents learning to navigate websites, websites could speak directly to agents?

I'm thinking about creating a JavaScript SDK that makes websites natively compatible with MCP-based agents, similar to how websites integrate with analytics or marketing tools today.

Current MCP Ecosystem

What Exists:

Core Infrastructure:

  • Official TypeScript SDK for building MCP servers and clients
  • Growing ecosystem with 40+ MCP servers in Anthropic's official repository

Browser Automation Tools:

  • Browser automation MCP servers like Browserbase and browser-agent
  • Browser tools MCP that can perform accessibility audits and DOM analysis
  • Chrome DevTools-based MCP servers for browser control

What's Missing:

A website-side JavaScript SDK that exposes MCP-compatible resources, tools, and prompts directly from the webpage itself.

Currently, agents have to scrape and navigate websites like humans do. But what if websites could expose their functionality through MCP interfaces that agents can directly use?

The SDK Opportunity: "MCP-Ready Websites"

The Vision:

Websites expose their functionality through MCP interfaces that agents call directly, rather than forcing agents to scrape and navigate pages designed for human eyes.

Think about it: when you add Google Analytics to your site, you don't make Google scrape your pages for visitor data. You expose that data through their SDK. Why should AI agents be any different?

Implementation Concept:

<!-- Website integration (like GA / marketing tools) -->
<script src="https://cdn.yoursdk.com/mcp-ready.js"></script>
<script>
  MCPReady.init({
    resources: {
      'product-catalog': () => getProductData(),
      'user-cart': () => getCurrentCart(),
      'search-results': (query) => performSearch(query)
    },
    tools: {
      'add-to-cart': (productId) => addToCart(productId),
      'checkout': (data) => processCheckout(data),
      'contact-form': (message) => submitContact(message)
    },
    prompts: {
      'navigation-help': 'This site has products, cart, and contact sections...',
      'current-context': () => getCurrentPageContext()
    }
  });
</script>
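
To picture the other side of that handshake, here is roughly how an agent might consume those exposed capabilities. Everything below (MCPAgentClient and its methods) is hypothetical, not an existing SDK:

// Agent-side sketch: MCPAgentClient and its methods are hypothetical
const client = new MCPAgentClient('https://example-shop.com');

// Discover what the page has registered via MCPReady.init()
const { resources, tools } = await client.listCapabilities();

// Read a resource instead of scraping the DOM
const cart = await client.readResource('user-cart');

// Call a tool instead of simulating clicks
await client.callTool('add-to-cart', { productId: 'sku-123' });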

Framework Integrations:

// React
import { useEffect } from 'react'
import { useMCPReady } from '@your-sdk/react'

function ProductPage({ productData, handleAddToCart }) {
  const { exposeResource, exposeTool } = useMCPReady()

  useEffect(() => {
    // Expose product data as an MCP resource
    exposeResource('current-product', () => productData)

    // Expose add-to-cart as an MCP tool
    exposeTool('add-to-cart', handleAddToCart)
  }, [productData, handleAddToCart])

  return <ProductDisplay />
}

// Next.js API routes become MCP tools automatically
export default function handler(req, res) {
  // Auto-exposed as MCP tool based on route
  // /api/checkout becomes 'checkout' tool
}
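
The Next.js piece above is hand-wavy, so here is one way the "routes become tools" idea could be wired up: a small wrapper that handles the HTTP request normally and also registers the route in the site's MCP manifest. withMCPTool and its options are made up for illustration:

// Hypothetical withMCPTool helper: serves /api/checkout as a normal API route
// and registers it as the 'checkout' MCP tool in the site's manifest
import { withMCPTool } from '@your-sdk/next'
import { processCheckout } from '../../lib/checkout'   // site-specific logic

export default withMCPTool(
  { name: 'checkout', description: 'Process the current cart as an order' },
  async (req, res) => {
    const order = await processCheckout(req.body)
    res.status(200).json(order)
  }
)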

Technical Architecture

Browser-Side MCP Server:

The SDK would essentially run a lightweight MCP server inside the browser (a rough sketch follows the list below), exposing:

  • Resources - Dynamic content, product data, user state
  • Tools - Actions agents can perform (add to cart, submit forms, etc.)
  • Prompts - Context about how to use the site effectively
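
Stripped down, that in-page server is little more than three registries plus a request dispatcher. The class below is a sketch; its method names and request shapes only loosely follow the MCP spec:

// Rough sketch of the in-page server: three registries plus a dispatcher.
// Method names and request shapes only loosely follow the MCP spec.
class BrowserMCPServer {
  constructor() {
    this.resources = new Map();   // name -> () => data
    this.tools = new Map();       // name -> (args) => result
    this.prompts = new Map();     // name -> string or () => string
  }

  registerResource(name, provider) { this.resources.set(name, provider); }
  registerTool(name, handler) { this.tools.set(name, handler); }
  registerPrompt(name, prompt) { this.prompts.set(name, prompt); }

  // Dispatch one JSON-RPC-style request arriving over any transport
  async handle(request) {
    const { method, params = {} } = request;
    switch (method) {
      case 'resources/read':
        return { contents: await this.resources.get(params.name)() };
      case 'tools/call':
        return { result: await this.tools.get(params.name)(params.arguments) };
      case 'prompts/get': {
        const prompt = this.prompts.get(params.name);
        return { prompt: typeof prompt === 'function' ? prompt() : prompt };
      }
      default:
        return { error: `Unknown method: ${method}` };
    }
  }
}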

Communication Layer:

// Multiple connection options
const mcpServer = new MCPServer({
  transports: [
    new WebSocketTransport(),     // Real-time bidirectional
    new PostMessageTransport(),  // Cross-origin communication  
    new RESTTransport()          // HTTP fallback
  ]
});
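
For instance, the postMessage option could let an agent living in a browser extension or parent frame talk to the page without any network hop. A minimal sketch, assuming the server hands each transport an onMessage callback (that interface is an assumption, not something from the MCP SDK):

// Sketch of a postMessage transport; the start(onMessage) contract is assumed
class PostMessageTransport {
  constructor(targetOrigin = '*') {
    this.targetOrigin = targetOrigin;
  }

  start(onMessage) {
    window.addEventListener('message', async (event) => {
      if (!event.data || event.data.type !== 'mcp-request') return;
      const payload = await onMessage(event.data.payload);
      // Reply to whoever sent the request (extension content script, parent frame, etc.)
      if (event.source) {
        event.source.postMessage(
          { type: 'mcp-response', id: event.data.id, payload },
          this.targetOrigin
        );
      }
    });
  }
}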

Agent Discovery:

Websites could advertise their MCP capabilities via:

<!-- Meta tag discovery -->
<meta name="mcp-enabled" content="true">
<meta name="mcp-version" content="1.0">

<!-- Well-known endpoint -->
<!-- /.well-known/mcp-manifest.json -->
{
  "version": "1.0",
  "capabilities": ["resources", "tools", "prompts"],
  "endpoints": {
    "websocket": "wss://example.com/mcp",
    "rest": "https://example.com/api/mcp"
  }
}

<!-- Structured data markup -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "mcpEnabled": true,
  "mcpCapabilities": ["ecommerce", "search", "contact"]
}
</script>
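
From the agent's point of view, discovery could be a two-step check: try the well-known manifest first, then fall back to the meta tags. A small illustrative helper (the function itself is made up, but it only uses standard web APIs):

// Illustrative discovery helper: manifest first, meta tags as a fallback
async function discoverMCP(siteUrl, doc) {
  try {
    const res = await fetch(new URL('/.well-known/mcp-manifest.json', siteUrl));
    if (res.ok) return await res.json();
  } catch (_) {
    // No manifest reachable; fall through to meta-tag detection
  }

  if (doc && doc.querySelector('meta[name="mcp-enabled"][content="true"]')) {
    const version = doc.querySelector('meta[name="mcp-version"]');
    return { version: version ? version.content : null };
  }

  return null; // Not MCP-ready
}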

Market Timing & Opportunity

The timing for this is unusually good, for a few reasons:

MCP is Early

  • Anthropic only released MCP in November 2024
  • Still in the "infrastructure building" phase
  • Limited to developer/enterprise adoption so far

Growing Adoption

  • 40+ official MCP servers and growing rapidly
  • Major players like Browserbase already building MCP integrations
  • OpenAI's recent browser integration announcements validate agent-web interaction needs

Infrastructure Gap

  • Current solutions focus on browser automation, not native website integration
  • No standard way for websites to expose functionality to agents
  • Similar to the pre-REST API era when everyone scraped websites for data

Historical Parallel

This follows the same pattern as:

  • REST APIs (2000s) - Websites exposing data instead of being scraped
  • GraphQL (2010s) - More efficient, flexible data exposure
  • MCP layer (2020s) - Native agent integration

Business Model & Market Size

Freemium SaaS Model:

  • Free tier: Basic MCP exposure for small sites
  • Pro tier: Advanced analytics, custom tools, priority support
  • Enterprise: White-label solutions, complex integrations, SLAs

Marketplace Opportunity:

  • Pre-built integrations for Shopify, WordPress, etc.
  • Template library for common use cases
  • Developer ecosystem around MCP tools

Market Validation:

  • Anthropic's Computer Use demonstrates agent-web interaction demand
  • Current browser automation market ($2B+) validates need
  • Every website becomes a potential customer

The Bigger Picture

This could become the standard way websites expose functionality to AI agents - like how REST APIs became the standard for web services.

It would be the "MCP.js" equivalent of what jQuery was for DOM manipulation, or what analytics.js was for web tracking.

The key insight is this: instead of agents learning to navigate websites like humans, websites should learn to speak directly to agents through MCP.

This connects directly back to my previous post about accessibility - while semantic HTML makes sites more agent-friendly, an MCP SDK makes them natively agent-compatible.

We're not just fixing the accessibility problem; we're leapfrogging it entirely.


What do you think? Is this the missing piece of the MCP ecosystem? I'd love to hear your thoughts - especially if you're already building with MCP or thinking about agent integrations.

© 2024 Vyshnav S Deepak