B2B Data via MCP: How to Connect AI Agents to Real-Time Signals

The Model Context Protocol (MCP) is an open standard that lets AI agents discover and use external data sources through a unified interface. Instead of writing custom API integrations for every data source, you build one MCP server and every MCP-compatible agent — Claude, Cursor, Windsurf, custom LangChain agents — can immediately access your B2B signal data. This guide shows you how to build an MCP server that connects AI agents to real-time B2B signals, with working TypeScript code and architecture patterns you can deploy today.

Quick answer: MCP (Model Context Protocol) is to AI agents what USB is to peripherals — a universal connector. An MCP server for B2B data lets any AI agent query company signals, enrich accounts, and trigger workflows through a standardized protocol. This eliminates the need for per-agent custom integrations and makes your signal data accessible to the entire emerging ecosystem of AI coding assistants, GTM agents, and autonomous workflows.

The MCP specification was open-sourced by Anthropic in November 2024 and has since been adopted by Cursor, Windsurf, Sourcegraph, Replit, and dozens of other agent platforms. As of April 2026, the MCP ecosystem includes 10,000+ community-built servers. But the B2B data category is nearly empty — only two providers (Autobound and Captain Data) offer native MCP support for sales intelligence data. This is a whitespace opportunity for GTM engineering teams.

According to the MCP specification, the protocol uses a client-server architecture where a host application (like Claude Desktop) manages multiple client connections, each connected to a specific MCP server. The protocol runs over JSON-RPC 2.0, supporting both stdio (local) and SSE (remote) transports. For B2B data, this means your signal API becomes a first-class tool that any AI agent can discover, query, and reason about — without you writing integration code for each agent framework.
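To make the JSON-RPC framing concrete, here is a sketch of what a `tools/call` exchange looks like on the wire. The tool name and arguments are illustrative (they match the server built later in this guide), not a capture of a real session:

```typescript
// A JSON-RPC 2.0 request the client sends when the agent invokes a tool.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "get_company_signals",
    arguments: { domain: "stripe.com", days_back: 30 }
  }
};

// The server's response carries the same id and a content array.
const response = {
  jsonrpc: "2.0" as const,
  id: 1, // must match the request id
  result: {
    content: [{ type: "text", text: '{"signals": []}' }]
  }
};

console.log(JSON.stringify(request));
```

The SDK handles all of this serialization for you; you never construct these messages by hand, but knowing the shape helps when debugging a server over stdio.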


MCP Architecture: Host, Client, Server

Before building, you need to understand MCP's three-layer architecture. Getting this wrong means building something that technically works but doesn't integrate cleanly with the agents your team actually uses.

The three layers

| Layer | Role | Examples | B2B Data Context |
|---|---|---|---|
| Host | The application that manages MCP client connections and enforces security policies | Claude Desktop, Cursor, VS Code, custom app | Your GTM tool, IDE, or agent runtime |
| Client | Protocol bridge between host and server; maintains a 1:1 connection with a single server | Built into the host application | Handles auth, serialization, transport |
| Server | Exposes tools, resources, and prompts to the client via MCP | Your B2B data MCP server | Wraps your signal API, exposes tools like `get_company_signals` |

The key insight: you build the server. The host and client are provided by the agent platform (Claude, Cursor, etc.). Your MCP server wraps your B2B data API and exposes it as a set of tools that any MCP-compatible agent can discover and use.

MCP primitives for B2B data

MCP servers can expose three types of primitives:

  • Tools (model-controlled): Functions the AI agent can call. For B2B data: get_company_signals, enrich_company, search_companies, get_signal_types.
  • Resources (application-controlled): Data the host application can read. For B2B data: signal type definitions, API documentation, company watchlists.
  • Prompts (user-controlled): Reusable prompt templates. For B2B data: "Research this company for outreach", "Generate a signal-based email for [domain]".

For most B2B data use cases, tools are the primary primitive. Resources and prompts are nice-to-haves that improve agent UX but aren't required.


Building a B2B Signal Data MCP Server (TypeScript)

Here's a working MCP server that connects AI agents to Autobound's Signal API. This server exposes three tools that let any MCP-compatible agent query company signals, search for companies by signal type, and list available signal types.

Prerequisites

  • Node.js 18+ and TypeScript
  • An Autobound API key (get one at autobound.ai/developers)
  • The @modelcontextprotocol/sdk npm package

Step 1: Project setup

mkdir b2b-signals-mcp && cd b2b-signals-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init --target es2022 --module nodenext --outDir dist

Step 2: Define the server with tools

// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const API_BASE = "https://signals.autobound.ai/v1";
const API_KEY = process.env.AUTOBOUND_API_KEY;

if (!API_KEY) {
  throw new Error("AUTOBOUND_API_KEY environment variable is required");
}

async function apiCall(path: string, params?: Record<string, string>) {
  const url = new URL(`${API_BASE}${path}`);
  if (params) {
    Object.entries(params).forEach(([k, v]) => url.searchParams.set(k, v));
  }
  const res = await fetch(url.toString(), {
    headers: { "Authorization": `Bearer ${API_KEY}` }
  });
  if (!res.ok) {
    const error = await res.json().catch(() => ({ message: res.statusText }));
    throw new Error(`API error ${res.status}: ${error.message}`);
  }
  return res.json();
}

const server = new McpServer({
  name: "b2b-signals",
  version: "1.0.0",
  description: "Real-time B2B company signals from Autobound"
});

// Tool 1: Get signals for a specific company
server.tool(
  "get_company_signals",
  "Get real-time business signals for a company by domain. Returns funding, " +
  "hiring, executive changes, technology adoption, SEC filings, news, and more.",
  {
    domain: z.string().describe("Company domain (e.g. 'stripe.com')"),
    signal_types: z.array(z.string()).optional()
      .describe("Filter by signal type: funding, hiring, news, sec_filing, " +
                "job_change, technology, intent, earnings, patent, expansion"),
    days_back: z.number().optional().default(30)
      .describe("How many days of signal history to return (default: 30)")
  },
  async ({ domain, signal_types, days_back }) => {
    const params: Record<string, string> = {
      domain,
      days_back: String(days_back ?? 30)
    };
    if (signal_types?.length) {
      params.signal_type = signal_types.join(",");
    }
    const data = await apiCall("/signals", params);
    return {
      content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
    };
  }
);

// Tool 2: Search companies by signal criteria
server.tool(
  "search_companies_by_signal",
  "Find companies that recently had a specific signal. " +
  "Example: 'which companies raised funding this week?'",
  {
    signal_type: z.string()
      .describe("Signal type: funding, hiring, job_change, technology, " +
                "news, sec_filing, earnings, intent, expansion"),
    industry: z.string().optional()
      .describe("Filter by industry (e.g. 'SaaS', 'FinTech', 'Healthcare')"),
    min_employees: z.number().optional()
      .describe("Minimum employee count"),
    max_employees: z.number().optional()
      .describe("Maximum employee count"),
    days_back: z.number().optional().default(7)
      .describe("How recent (default: 7 days)"),
    limit: z.number().optional().default(25)
      .describe("Max results (default: 25)")
  },
  async ({ signal_type, industry, min_employees, max_employees, 
           days_back, limit }) => {
    const params: Record<string, string> = {
      signal_type,
      days_back: String(days_back ?? 7),
      limit: String(limit ?? 25)
    };
    if (industry) params.industry = industry;
    if (min_employees) params.min_employees = String(min_employees);
    if (max_employees) params.max_employees = String(max_employees);
    const data = await apiCall("/signals/search", params);
    return {
      content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
    };
  }
);

// Tool 3: List available signal types and subtypes
server.tool(
  "list_signal_types",
  "List all available signal types and their subtypes. " +
  "Use this to understand what kinds of business events are tracked.",
  {},
  async () => {
    const data = await apiCall("/signal-types");
    return {
      content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
    };
  }
);

// Resource: signal type documentation
server.resource(
  "signal-directory",
  "signals://directory",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      mimeType: "text/markdown",
      text: "# Autobound Signal Directory\n\n" +
            "25+ signal types, 700+ subtypes, 35+ sources.\n\n" +
            "## Signal Types\n" +
            "- **Funding**: Series A-E, IPO, M&A, SPAC, debt financing\n" +
            "- **Hiring**: Job postings, headcount growth, team expansion\n" +
            "- **Executive Changes**: New CXO, VP promotions, departures\n" +
            "- **Technology**: New tool adoption, migration, contract expiry\n" +
            "- **SEC Filings**: 10-K, 10-Q, 8-K, 20-F, 6-K\n" +
            "- **Earnings**: Transcripts, revenue, guidance changes\n" +
            "- **News**: Product launches, partnerships, expansions\n" +
            "- **Intent**: Topic research, review site visits\n" +
            "- **Social**: LinkedIn posts, Reddit, Glassdoor, Twitter\n" +
            "- **Patents**: New filings, grants\n" +
            "- **Web Traffic**: SEO changes, traffic trends\n" +
            "- **GitHub**: Repository activity, tech stack signals\n\n" +
            "Full directory: https://autobound.ai/signals/directory"
    }]
  })
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("B2B Signals MCP server running on stdio");
}

main().catch(console.error);

Step 3: Build and configure

npx tsc

Add the server to your MCP client configuration. For Claude Desktop, edit ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "b2b-signals": {
      "command": "node",
      "args": ["/path/to/b2b-signals-mcp/dist/index.js"],
      "env": {
        "AUTOBOUND_API_KEY": "your-api-key-here"
      }
    }
  }
}

For Cursor, add it to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "b2b-signals": {
      "command": "node",
      "args": ["./b2b-signals-mcp/dist/index.js"],
      "env": {
        "AUTOBOUND_API_KEY": "your-api-key-here"
      }
    }
  }
}

Step 4: Test it

Once configured, restart your MCP host. In Claude Desktop, you should see the B2B signals tools available. Try:

  • "What signals do you have for stripe.com?"
  • "Which SaaS companies raised funding this week?"
  • "Show me all available signal types"
  • "Find companies with hiring signals in the healthcare industry with 100-500 employees"

The agent will call the appropriate tool, receive structured JSON, and reason about the results — generating summaries, identifying patterns, or suggesting actions based on the signal data.


Why MCP Matters for GTM Teams

The practical impact of MCP for go-to-market teams goes beyond developer convenience. Here's what changes when your B2B signal data is MCP-accessible:

1. Any team member can query signals in natural language

A sales manager doesn't need to learn an API or build a dashboard query. They open Claude and ask: "Which of my target accounts showed hiring signals this week?" The agent calls search_companies_by_signal, filters by their account list, and returns a prioritized summary. Zero code, zero training.

2. Agents compose signals into workflows automatically

An AI agent with access to both signal data (via MCP) and a CRM (via another MCP server) can autonomously build workflows: detect a funding signal → check if the company is in the CRM → if not, create a lead → if yes, update the account with the signal → trigger a sequence. These multi-step workflows emerge from the agent's reasoning, not from hardcoded automation rules.

3. Developer tools gain B2B context

When Cursor or Claude Code has access to B2B signals via MCP, developers building GTM applications can query live signal data while coding. "Show me what the API response looks like for a company with 5+ recent signals" returns real data, not mock fixtures. Read our guide to data APIs for Claude Code for more on this pattern.

4. Signal data becomes composable infrastructure

MCP makes your signal data a building block that any tool in your stack can use. Slack bot needs signals? Connect the MCP server. Custom dashboard needs data? Same server. New AI agent framework launches? It probably supports MCP. Build once, connect everywhere.


MCP vs. Direct API Integration: When to Use Each

| Criteria | MCP Server | Direct API Integration |
|---|---|---|
| Setup time | 2-4 hours for basic server | 1-2 hours per agent integration |
| Multi-agent support | Build once, all agents connect | Build separately for each agent |
| Latency overhead | ~5-20ms MCP protocol overhead | Direct, no overhead |
| Discovery | Agent auto-discovers available tools | Manually configured per integration |
| Schema evolution | Update server once, all agents get new tools | Update each integration separately |
| Best for | Multi-agent environments, developer tools, team access | Single-agent, high-volume, latency-sensitive |

Rule of thumb: If you have one agent consuming B2B data, use direct API integration. If you have two or more agents (or expect to), build an MCP server. The upfront investment pays for itself the second time you need to connect a new agent.


Autobound vs. Captain Data: MCP for B2B Data

As of April 2026, only two B2B data providers have native MCP support. Here's how they compare:

| Dimension | Autobound | Captain Data |
|---|---|---|
| Signal types | 25+ types, 700+ subtypes | LinkedIn + web scraping (limited types) |
| Data sources | 35+ independent sources | Primarily LinkedIn, Google, web |
| Company coverage | 50M+ companies | Varies by scraping scope |
| MCP tools available | Signal queries, search, type listing, batch | LinkedIn extraction, web scraping workflows |
| Data model | Typed signals with provenance and confidence | Raw scraped data (variable structure) |
| Compliance | Full provenance, compliance classification | Scraping-based (compliance risk) |
| Delivery beyond MCP | REST API, GCS push, webhooks, flat file | API, webhooks |
| Best for | Comprehensive signal intelligence via agents | LinkedIn-specific scraping workflows |

The fundamental difference: Autobound provides structured, multi-source signal intelligence with provenance and compliance metadata. Captain Data provides raw scraped data from web sources. For AI agents making business decisions based on signal data, the structured, typed approach is significantly more reliable — agents can trust Autobound's confidence scores and provenance metadata in ways they can't with raw scraped HTML.


Advanced Patterns: Production MCP Servers for B2B Data

Pattern 1: Remote SSE transport for team-wide access

The basic server above uses stdio transport (local process). For team-wide access, deploy the MCP server as a remote service using SSE (Server-Sent Events) transport:

import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";

const app = express();

// Simplified single-connection example. A production server should key
// transports by session ID so multiple clients can connect concurrently.
let transport: SSEServerTransport | undefined;

app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  if (!transport) {
    res.status(400).send("No active SSE connection");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001, () => {
  console.log("B2B Signals MCP server (SSE) on port 3001");
});

Deploy this as a Cloud Run service or Kubernetes pod, and every team member's Claude/Cursor can connect to the same MCP server.
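Once deployed, point each teammate's client at the remote endpoint. Exact remote-server configuration varies by host; stdio-only hosts can typically bridge to an SSE endpoint through the community `mcp-remote` adapter. The URL below is a placeholder for your deployment:

```json
{
  "mcpServers": {
    "b2b-signals": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-deployment.example.com/sse"]
    }
  }
}
```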

Pattern 2: Caching layer for cost control

Agent queries can be repetitive. Adding a TTL cache reduces API calls and cost:

const cache = new Map<string, { data: unknown; expiry: number }>();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function cachedApiCall(path: string, params?: Record<string, string>) {
  const key = `${path}?${new URLSearchParams(params).toString()}`;
  const cached = cache.get(key);
  if (cached && cached.expiry > Date.now()) return cached.data;
  
  const data = await apiCall(path, params);
  cache.set(key, { data, expiry: Date.now() + CACHE_TTL });
  return data;
}

Pattern 3: Composite tools for complex agent workflows

Instead of exposing raw API endpoints, create higher-level tools that combine multiple data calls into a single agent action:

// Small helper: count signals by a given key (e.g. signal_type)
function groupBy(items: any[] | undefined, key: string) {
  return (items ?? []).reduce<Record<string, number>>((acc, item) => {
    acc[item[key]] = (acc[item[key]] ?? 0) + 1;
    return acc;
  }, {});
}

server.tool(
  "research_company_for_outreach",
  "Comprehensive research on a company for sales outreach. " +
  "Returns recent signals, signal summary, and suggested talking points.",
  {
    domain: z.string().describe("Company domain")
  },
  async ({ domain }) => {
    const [signals, company] = await Promise.all([
      apiCall("/signals", { domain, days_back: "90" }),
      apiCall("/company", { domain })
    ]);
    
    const summary = {
      company: company,
      recent_signals: signals.signals?.slice(0, 10),
      signal_count_by_type: groupBy(signals.signals, "signal_type"),
      suggested_hooks: signals.signals
        ?.filter((s: any) => s.confidence > 0.9)
        ?.slice(0, 3)
        ?.map((s: any) => ({
          type: s.signal_type,
          summary: s.data,
          detected: s.detected_at
        }))
    };
    
    return {
      content: [{ type: "text", text: JSON.stringify(summary, null, 2) }]
    };
  }
);

This gives agents a single tool call that returns everything needed for outreach research, instead of requiring multiple sequential calls.


Real-World Use Cases

Account research in Claude Desktop

A sales rep opens Claude Desktop and asks: "Research Snowflake's recent activity. What signals would make good conversation starters?" Claude calls get_company_signals with domain: 'snowflake.com', receives 15 recent signals (SEC filing, 3 hiring signals, 2 technology changes, earnings transcript), and synthesizes a briefing with specific talking points grounded in real data.

Pipeline building in Cursor

A RevOps engineer building a pipeline generator in Cursor asks the AI: "Find 20 mid-market SaaS companies that raised Series B funding in the last 30 days." Cursor calls search_companies_by_signal and returns real results that the engineer can immediately use as test data — or build into the application logic.

Autonomous GTM agent

A custom LangChain agent monitors signals via MCP, detects a high-confidence funding event + hiring surge combination, looks up the account in Salesforce (via a CRM MCP server), creates a task for the account owner, and drafts a personalized email referencing both signals. The entire workflow runs autonomously, triggered by signal data flowing through the MCP server.


Frequently Asked Questions

What is Model Context Protocol (MCP)?

MCP is an open standard created by Anthropic that provides a universal way for AI agents to connect to external data sources and tools. It uses JSON-RPC 2.0 over stdio or SSE transports. Think of it as a USB-C port for AI agents — one standard connector that works with any compatible device. The specification is open-source at modelcontextprotocol.io.

Which AI agents support MCP?

As of April 2026: Claude (Desktop and Code), Cursor, Windsurf, Sourcegraph Cody, Replit, VS Code Copilot (via extension), and any custom agent built with LangChain, LlamaIndex, or the Anthropic SDK's agent framework. The ecosystem is growing rapidly — any new agent framework launching today typically includes MCP support.

Do I need to know TypeScript to build an MCP server?

The official MCP SDK is available in TypeScript and Python. The TypeScript SDK is the most mature. Python is fully supported and works well for teams already using Python for data engineering. The protocol itself is language-agnostic — any language that can handle JSON-RPC 2.0 can implement an MCP server.

How is MCP different from function calling / tool use?

Function calling (tool use) is how an LLM invokes a function within a single conversation. MCP is the protocol that governs how those functions are discovered and connected to external services. MCP sits one layer below tool use — it's the plumbing that makes tools available. An agent uses tool calling to invoke an MCP tool, and MCP handles the connection to the underlying service.

Can I use MCP with B2B data providers that don't support it natively?

Yes. The code in this guide wraps a REST API in an MCP server. You can do this with any B2B data API — ZoomInfo, Apollo, PDL, Lusha, or any other provider with a REST endpoint. The MCP server acts as an adapter layer. The advantage of providers with native MCP support (like Autobound) is that their MCP implementation is optimized, maintained, and includes tool descriptions that agents understand well.

Is MCP secure? How do I protect my API keys?

MCP inherits the security model of its transport. Stdio transport runs locally (your machine only). SSE transport should be deployed behind authentication (API key, OAuth, or mTLS). API keys are passed as environment variables to the MCP server process, never exposed to the AI agent. The host application (Claude, Cursor) enforces tool approval policies — agents must request permission before calling MCP tools.


The Bottom Line

MCP is becoming the standard interface between AI agents and external data. For B2B signal data, this means building one MCP server and getting instant compatibility with Claude, Cursor, and every other MCP-compatible agent in the ecosystem — today and in the future.

The B2B data MCP category is still early. Only Autobound and Captain Data have native support, and the gap in signal breadth (700+ subtypes vs. LinkedIn scraping) is significant. Teams that build MCP servers for their signal data now will have a structural advantage as AI agents become the primary interface for GTM workflows.

Start with the TypeScript example in this guide. Customize the tools for your team's workflows. Deploy it for your entire team. And if you want the broadest signal data to power it, explore the signal directory or book a demo to see Autobound's Signal API in action.


Last updated: April 2026. MCP specification details from modelcontextprotocol.io. For Autobound's latest MCP and API capabilities, visit autobound.ai/developers.

