Building a Production Notion MCP Gateway: Complete Technical Guide

In Part 1, we showed why RAG fails for Notion’s structured data and how MCP provides real-time context. But here’s the reality: Notion’s official MCP server handles 80% of use cases with zero setup. If you need the remaining 20%—cross-page joins, custom retry logic, semantic search, fine-grained RBAC—you’ll build a custom gateway. This guide shows you exactly how.

What You’ll Learn

  • When to use Notion’s official server vs. building custom
  • The three essential tools your gateway needs
  • Production-grade implementation in Python
  • How to handle the 25-reference limit with pagination
  • Wiring it to Claude Desktop, Cursor, and other clients
  • Monitoring, error handling, and scaling strategies

Assumed Knowledge: You’re familiar with REST APIs and HTTP, have basic Python experience (async/await), understand Notion API basics, and know MCP fundamentals from Part 1.


When to Build Custom: The Decision Matrix

Before you write a line of code, ask yourself: can Notion’s official server do what I need?

Notion’s Official MCP Server

The officially supported @notionhq/notion-mcp-server does a lot right:

What It Does:

  • OAuth 2.0 and Internal Integration Token authentication
  • Semantic search via Notion AI (if enabled on your workspace)
  • Query databases, fetch pages, create and update content
  • Markdown-optimized responses (efficient for token usage)
  • Built-in rate-limit handling with automatic backoff
  • Works out of the box with Claude Desktop, Cursor, and ChatGPT Pro

Setup is trivial:

npm install -g @notionhq/notion-mcp-server
# or run directly (configure your integration token as described in the package README)
npx -y @notionhq/notion-mcp-server

But it has hard limits:

  • Hits a 25-reference cap on rollups and relations (no automatic pagination)
  • Can’t perform cross-database joins
  • No custom tool surface (you get what Notion built)
  • Semantic search requires Notion AI (paid tier)
  • Rate limits are Notion’s, not yours to customize

When to use this: You need quick Claude Desktop integration and can live with Notion’s structural constraints. Setup time: 5 minutes.

Custom Gateway (What You’ll Build)

Building your own gives you complete control at the cost of operational responsibility:

What It Does:

  • Full control over the tool surface
  • Automatic pagination past the 25-reference limit
  • Custom retry and backoff strategies you define
  • Multi-hop joins (resolve relations across databases)
  • Embedding-based search (bring your own embeddings)
  • Fine-grained RBAC and audit logging
  • Deploy anywhere: local, Docker, Kubernetes, serverless

Setup is real work:

  • Implement core tools and test with your workspace
  • Deploy the server and wire it to clients
  • Build monitoring and observability
  • Iterate based on real usage patterns

When to use this: You need production governance, complex queries across your workspace, or behavior Notion’s official server doesn’t provide. Setup time: 4–8 hours.

Decision Matrix

| Requirement | Official | Custom |
| --- | --- | --- |
| Handle >25 relation references | ❌ | ✅ (automatic pagination) |
| Cross-database joins | ❌ | ✅ |
| Custom RBAC and permissions | ❌ | ✅ |
| Embedding-based search | ❌ (Notion AI only) | ✅ (bring your own) |
| Audit logging | Basic | Full control |
| Setup time | 5 minutes | 4–8 hours |
| Operational complexity | Low | Moderate |
| Community support | Growing | Self-supported |

The rest of this guide assumes you’ve decided to build custom. Let’s go.


Architecture: The Three Essential Tools

Every production Notion MCP gateway exposes three tools. Think of them as the query, fetch, and join operations you’d do in SQL, but adapted for Notion’s API constraints.

Tool #1: NotionDBQueryTool

Purpose: Run deterministic database queries with filtering and sorting.

What it does:

  • Accepts a database ID, filter object (JSON), and optional sorts
  • Paginates automatically through all results
  • Returns a Markdown table (LLM-friendly format)
  • Handles HTTP 429 (rate limit) and 503 (service unavailable) with exponential backoff

Example use case:

LLM: "Get all open tasks in the Engineering project assigned to me"
Tool: query_database(
  database_id="abc123...",
  filter_obj={
    "and": [
      {"property": "Status", "select": {"equals": "Open"}},
      {"property": "Project", "relation": {"contains": "Engineering"}},
      {"property": "Assignee", "people": {"contains": "<me>"}}
    ]
  }
)
Response: Markdown table with 12 matching tasks

Implementation notes:

  • Notion’s filter object follows a strict JSON schema (a minimal example follows this list)
  • Pagination is crucial: use the has_more and next_cursor fields to handle >100 results
  • Return Markdown, not JSON, to minimize token usage
  • Validate that the database_id belongs to your workspace before querying
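
Here is what a compound filter and sorts payload might look like. The property names (Status, Due) are placeholders for your own schema, and the final call assumes the NotionClient built later in this guide.

# Compound filter plus sorts for POST /v1/databases/{database_id}/query.
# "Status" and "Due" are hypothetical property names; adapt them to your schema.
filter_obj = {
    "and": [
        {"property": "Status", "select": {"equals": "Open"}},
        {"property": "Due", "date": {"on_or_before": "2025-01-31"}},
    ]
}
sorts = [{"property": "Due", "direction": "ascending"}]

# With the NotionClient defined later in this guide:
# results = await notion.query_database(database_id, filter_obj, sorts)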

Tool #2: NotionPageContentRetriever

Purpose: Fetch full page content with recursive block tree traversal.

What it does:

  • Accepts a page ID
  • Walks the block tree (paragraphs, headings, toggles, databases, embeds, etc.)
  • Converts to Markdown
  • Paginates for deep or large pages (20+ nested levels, 100+ blocks)

Example use case:

LLM: "Summarize the Q4 planning document"
Tool: retrieve_page(page_id="xyz789...")
Response: Full Markdown with headings, lists, emphasis, code blocks, and nested structure

Implementation notes:

  • Block types include: paragraph, heading_1/heading_2/heading_3, bulleted_list_item, numbered_list_item, code, quote, toggle, synced_block, image, embed, etc.
  • Handle recursion for nested blocks (toggles and synced_blocks contain children; see the sketch after this list)
  • Parse rich text carefully: extract plain text, resolve @mentions, parse links
  • Paginate for documents with >100 blocks (Notion returns 100 per page)
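
A minimal recursion sketch, assuming the NotionClient.retrieve_page_blocks and block_to_markdown helpers defined later in this guide (the single-file gateway below keeps traversal flat for brevity; the /blocks/{block_id}/children endpoint accepts any block ID, not just page IDs):

async def blocks_to_markdown(client, block_id: str, indent: int = 0) -> str:
    """Render a block subtree to Markdown, recursing into children."""
    out = []
    # retrieve_page_blocks works for any block ID, since pages are blocks
    for block in await client.retrieve_page_blocks(block_id):
        out.append(block_to_markdown(block, indent))
        if block.get("has_children"):
            # Toggles, synced blocks, and nested lists expose their children
            # through the same /children endpoint
            out.append(await blocks_to_markdown(client, block["id"], indent + 1))
    return "".join(out)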

Tool #3: NotionRelationalResolver

Purpose: Traverse relation properties to resolve linked context.

What it does:

  • Accepts a page ID and relation property name
  • Fetches all linked page IDs (paginated—this is where the >25 limit was)
  • Retrieves the title or key property from each linked page
  • Returns a structured list

Example use case:

LLM: "Who are all the stakeholders for this project?"
Tool: resolve_relation(
  page_id="abc123...",
  relation_property="Stakeholders"
)
Response: ["Alice Smith", "Bob Chen", "Carol Williams", ...]

Implementation notes:

  • The Notion API caps relation property fetches to 25 by default
  • Solution: use the paginated endpoint /v1/pages/{page_id}/properties/{property_id} with cursors
  • This allows unlimited pagination through relations
  • Concurrent fetches of linked page titles reduce latency significantly

Architecture Diagram

┌─────────────────────────────────────────────────────────┐
│ LLM Agent (Claude, Cursor)                              │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│ MCP Client                                              │
│ (stdio transport)                                        │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│ Notion MCP Gateway (Python MCP Server)                  │
│                                                          │
│  ┌──────────────────────────────────────────────────┐   │
│  │ NotionDBQueryTool                                │   │
│  │ - Filter & sort databases                        │   │
│  │ - Pagination handler                            │   │
│  │ - Returns Markdown table                         │   │
│  └──────────────────────────────────────────────────┘   │
│                                                          │
│  ┌──────────────────────────────────────────────────┐   │
│  │ NotionPageContentRetriever                       │   │
│  │ - Block tree traversal                           │   │
│  │ - Rich text → Markdown conversion                │   │
│  │ - Nested block handling                          │   │
│  └──────────────────────────────────────────────────┘   │
│                                                          │
│  ┌──────────────────────────────────────────────────┐   │
│  │ NotionRelationalResolver                         │   │
│  │ - Relation property pagination                   │   │
│  │ - Multi-hop joins                                │   │
│  │ - Async concurrent fetches                       │   │
│  └──────────────────────────────────────────────────┘   │
│                                                          │
│  ┌──────────────────────────────────────────────────┐   │
│  │ Middleware                                       │   │
│  │ - Rate limit handling (429/503)                  │   │
│  │ - Retry with exponential backoff                 │   │
│  │ - Audit logging (per-call, per-user)            │   │
│  │ - Error handling & validation                    │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│ Notion API v1 (2022-06-28)                              │
│ - Rate limits: 3 requests/second per integration        │
│ - Backoff via Retry-After header                        │
│ - Pagination with cursors                               │
└─────────────────────────────────────────────────────────┘

Implementation: Production-Ready Code

Here’s a single-file, production-ready Notion MCP gateway in Python. It’s small enough to understand in one sitting, robust enough to handle real workloads.

Core Server Code

"""
Notion MCP Gateway
==================
Production-grade MCP server for Notion workspace integration.
Handles >25 relation pagination, cross-database joins, and resilient API interaction.

Usage:
  NOTION_TOKEN=secret_xyz python -m notion_gateway
"""

import asyncio
import json
import logging
import os
import time
import uuid
from datetime import datetime
from typing import Any, Optional

import httpx
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent, ToolResult
from tenacity import (
    retry,
    stop_after_attempt,
    wait_exponential,
    retry_if_exception_type,
)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(name)s] %(levelname)s: %(message)s",
)
logger = logging.getLogger("notion_gateway")

# Server setup
server = Server("notion-gateway")
NOTION_TOKEN = os.getenv("NOTION_TOKEN")
if not NOTION_TOKEN:
    raise ValueError("NOTION_TOKEN environment variable not set")

# Notion API client
class NotionClient:
    BASE_URL = "https://api.notion.com/v1"
    TIMEOUT = 30

    def __init__(self, token: str):
        self.token = token
        self.client = httpx.AsyncClient(
            timeout=self.TIMEOUT,
            headers={
                "Authorization": f"Bearer {token}",
                "Notion-Version": "2025-06-28",
            },
        )

    @retry(
        stop=stop_after_attempt(5),
        wait=wait_exponential(multiplier=1, min=2, max=10),
        retry=retry_if_exception_type((httpx.HTTPError, asyncio.TimeoutError)),
    )
    async def request(
        self,
        method: str,
        endpoint: str,
        json_data: Optional[dict] = None,
        params: Optional[dict] = None,
    ) -> dict:
        """Make a request to Notion API with automatic retry."""
        url = f"{self.BASE_URL}{endpoint}"
        try:
            response = await self.client.request(
                method, url, json=json_data, params=params
            )
            response.raise_for_status()
            return response.json()
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 429:
                retry_after = int(e.response.headers.get("Retry-After", 5))
                logger.warning(f"Rate limited. Retrying after {retry_after}s")
                await asyncio.sleep(retry_after)
                raise
            elif e.response.status_code == 503:
                logger.warning("Notion API temporarily unavailable. Retrying.")
                raise
            else:
                logger.error(f"HTTP {e.response.status_code}: {e.response.text}")
                raise

    async def query_database(
        self,
        database_id: str,
        filter_obj: Optional[dict] = None,
        sorts: Optional[list] = None,
    ) -> list:
        """Query a Notion database with filters and sorts."""
        results = []
        start_cursor = None

        while True:
            payload = {}
            if filter_obj:
                payload["filter"] = filter_obj
            if sorts:
                payload["sorts"] = sorts
            if start_cursor:
                payload["start_cursor"] = start_cursor

            data = await self.request(
                "POST", f"/databases/{database_id}/query", json_data=payload
            )
            results.extend(data.get("results", []))

            if not data.get("has_more"):
                break
            start_cursor = data.get("next_cursor")

        return results

    async def retrieve_page_blocks(self, page_id: str) -> list:
        """Retrieve all blocks for a page."""
        blocks = []
        start_cursor = None

        while True:
            params = {"page_size": 100}
            if start_cursor:
                params["start_cursor"] = start_cursor

            data = await self.request(
                "GET", f"/blocks/{page_id}/children", params=params
            )
            blocks.extend(data.get("results", []))

            if not data.get("has_more"):
                break
            start_cursor = data.get("next_cursor")

        return blocks

    async def retrieve_page_property(
        self, page_id: str, property_id: str
    ) -> list:
        """Retrieve a specific property from a page (handles pagination)."""
        items = []
        start_cursor = None

        while True:
            params = {"page_size": 100}
            if start_cursor:
                params["start_cursor"] = start_cursor

            data = await self.request(
                "GET",
                f"/pages/{page_id}/properties/{property_id}",
                params=params,
            )
            items.extend(data.get("results", []))

            if not data.get("has_more"):
                break
            start_cursor = data.get("next_cursor")

        return items

    async def get_page(self, page_id: str) -> dict:
        """Retrieve a page object."""
        return await self.request("GET", f"/pages/{page_id}")


# Initialize Notion client
notion = NotionClient(NOTION_TOKEN)


# Tool implementations
def extract_rich_text(rich_text_list: list) -> str:
    """Convert Notion rich text to plain text."""
    return "".join(
        rt.get("plain_text", "") for rt in rich_text_list
    )


def block_to_markdown(block: dict, indent: int = 0) -> str:
    """Convert a Notion block to Markdown."""
    block_type = block.get("type")
    block_data = block.get(block_type, {})
    prefix = "  " * indent

    if block_type == "paragraph":
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}{text}\n"
    elif block_type in ["heading_1", "heading_2", "heading_3"]:
        level = int(block_type[-1])
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}{'#' * level} {text}\n"
    elif block_type == "bulleted_list_item":
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}- {text}\n"
    elif block_type == "numbered_list_item":
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}1. {text}\n"
    elif block_type == "code":
        code = extract_rich_text(block_data.get("rich_text", []))
        lang = block_data.get("language", "")
        return f"{prefix}```{lang}\n{code}\n```\n"
    elif block_type == "quote":
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}> {text}\n"
    elif block_type == "toggle":
        text = extract_rich_text(block_data.get("rich_text", []))
        return f"{prefix}{text}\n"
    elif block_type == "divider":
        return f"{prefix}---\n"
    else:
        return f"{prefix}[{block_type}]\n"


@server.call_tool()
async def query_database_tool(
    database_id: str,
    filter_obj: Optional[str] = None,
    sorts: Optional[str] = None,
) -> ToolResult:
    """Query a Notion database with filters and sorts."""
    call_id = str(uuid.uuid4())[:8]
    start = time.time()

    logger.info(
        f"[{call_id}] query_database",
        extra={"database_id": database_id},
    )

    try:
        # Parse JSON inputs
        filter_dict = json.loads(filter_obj) if filter_obj else None
        sorts_list = json.loads(sorts) if sorts else None

        # Query database
        pages = await notion.query_database(
            database_id, filter_dict, sorts_list
        )

        # Convert to Markdown table
        if not pages:
            return ToolResult(content=[TextContent(type="text", text="No results.")])

        # Extract properties for table headers
        first_page = pages[0]
        properties = first_page.get("properties", {})
        headers = list(properties.keys())[:5]  # Limit to 5 columns

        # Build Markdown table
        lines = [
            "| " + " | ".join(headers) + " |",
            "| " + " | ".join(["---"] * len(headers)) + " |",
        ]

        for page in pages:
            row = []
            for header in headers:
                prop = page["properties"].get(header, {})
                prop_type = prop.get("type")

                # Extract property value
                if prop_type == "title":
                    value = extract_rich_text(
                        prop.get("title", [])
                    )
                elif prop_type == "rich_text":
                    value = extract_rich_text(
                        prop.get("rich_text", [])
                    )
                elif prop_type == "select":
                    value = prop.get("select", {}).get("name", "")
                elif prop_type == "date":
                    value = (
                        prop.get("date", {}).get("start", "")
                        if prop.get("date")
                        else ""
                    )
                else:
                    value = "—"

                row.append(value[:50])  # Truncate long values

            lines.append("| " + " | ".join(row) + " |")

        result = "\n".join(lines)
        elapsed = time.time() - start

        logger.info(
            f"[{call_id}] success in {elapsed:.2f}s",
            extra={"rows_returned": len(pages)},
        )

        return ToolResult(
            content=[TextContent(type="text", text=result)]
        )

    except Exception as e:
        logger.error(f"[{call_id}] failed: {e}")
        return ToolResult(
            content=[
                TextContent(
                    type="text",
                    text=f"Error: {str(e)}",
                )
            ],
            is_error=True,
        )


@server.call_tool()
async def retrieve_page_tool(page_id: str) -> ToolResult:
    """Retrieve full page content as Markdown."""
    call_id = str(uuid.uuid4())[:8]
    start = time.time()

    logger.info(f"[{call_id}] retrieve_page", extra={"page_id": page_id})

    try:
        blocks = await notion.retrieve_page_blocks(page_id)
        markdown_lines = [block_to_markdown(block) for block in blocks]
        content = "".join(markdown_lines)
        elapsed = time.time() - start

        logger.info(
            f"[{call_id}] success in {elapsed:.2f}s",
            extra={"blocks_retrieved": len(blocks)},
        )

        return ToolResult(
            content=[TextContent(type="text", text=content)]
        )

    except Exception as e:
        logger.error(f"[{call_id}] failed: {e}")
        return ToolResult(
            content=[
                TextContent(
                    type="text",
                    text=f"Error: {str(e)}",
                )
            ],
            is_error=True,
        )


@server.call_tool()
async def resolve_relation_tool(
    page_id: str, relation_property: str
) -> ToolResult:
    """Resolve a relation property (handles >25 references)."""
    call_id = str(uuid.uuid4())[:8]
    start = time.time()

    logger.info(
        f"[{call_id}] resolve_relation",
        extra={
            "page_id": page_id,
            "relation_property": relation_property,
        },
    )

    try:
        # Retrieve the page to get property ID mapping
        page = await notion.get_page(page_id)
        properties = page.get("properties", {})

        if relation_property not in properties:
            return ToolResult(
                content=[
                    TextContent(
                        type="text",
                        text=f"Property '{relation_property}' not found on page.",
                    )
                ],
                is_error=True,
            )

        prop = properties[relation_property]
        if prop.get("type") != "relation":
            return ToolResult(
                content=[
                    TextContent(
                        type="text",
                        text=f"Property '{relation_property}' is not a relation.",
                    )
                ],
                is_error=True,
            )

        # Get property ID from the object
        property_id = prop.get("id")

        # Fetch relation items (paginated)
        items = await notion.retrieve_page_property(page_id, property_id)

        # Fetch titles of linked pages concurrently
        async def fetch_title(related_page_id: str) -> str:
            try:
                related_page = await notion.get_page(related_page_id)
                props = related_page.get("properties", {})
                # The title property can be named anything ("Name", "Task", ...),
                # so locate it by type rather than by key
                title_prop = next(
                    (p for p in props.values() if p.get("type") == "title"), {}
                )
                return extract_rich_text(title_prop.get("title", [])) or "[Untitled]"
            except Exception as e:
                logger.warning(f"Failed to fetch title for {related_page_id}: {e}")
                return "[Untitled]"

        # Property items for a relation have type "relation" and carry the
        # linked page ID under item["relation"]["id"]
        related_ids = [
            item.get("relation", {}).get("id")
            for item in items
            if item.get("type") == "relation"
        ]
        titles = await asyncio.gather(
            *(fetch_title(pid) for pid in related_ids if pid)
        )

        result = (
            f"Found {len(titles)} linked items:\n\n"
            + "\n".join(f"- {title}" for title in titles)
        )
        elapsed = time.time() - start

        logger.info(
            f"[{call_id}] success in {elapsed:.2f}s",
            extra={"items_resolved": len(titles)},
        )

        return ToolResult(
            content=[TextContent(type="text", text=result)]
        )

    except Exception as e:
        logger.error(f"[{call_id}] failed: {e}")
        return ToolResult(
            content=[
                TextContent(
                    type="text",
                    text=f"Error: {str(e)}",
                )
            ],
            is_error=True,
        )


# Define tools for MCP
@server.list_tools()
async def list_tools() -> list[Tool]:
    """List available tools."""
    return [
        Tool(
            name="query_database",
            description="Query a Notion database with filters and sorts. Returns results as Markdown table.",
            inputSchema={
                "type": "object",
                "properties": {
                    "database_id": {
                        "type": "string",
                        "description": "The Notion database ID (UUID format)",
                    },
                    "filter_obj": {
                        "type": "string",
                        "description": "JSON filter object (optional). Example: '{\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}'",
                    },
                    "sorts": {
                        "type": "string",
                        "description": "JSON array of sort objects (optional). Example: '[{\"property\": \"Created\", \"direction\": \"descending\"}]'",
                    },
                },
                "required": ["database_id"],
            },
        ),
        Tool(
            name="retrieve_page",
            description="Retrieve the full content of a Notion page as Markdown.",
            inputSchema={
                "type": "object",
                "properties": {
                    "page_id": {
                        "type": "string",
                        "description": "The Notion page ID (UUID format)",
                    },
                },
                "required": ["page_id"],
            },
        ),
        Tool(
            name="resolve_relation",
            description="Resolve a relation property on a page, fetching titles of all linked items. Handles >25 references.",
            inputSchema={
                "type": "object",
                "properties": {
                    "page_id": {
                        "type": "string",
                        "description": "The Notion page ID containing the relation property",
                    },
                    "relation_property": {
                        "type": "string",
                        "description": "The name of the relation property to resolve (e.g., 'Stakeholders')",
                    },
                },
                "required": ["page_id", "relation_property"],
            },
        ),
    ]


async def main():
    """Start the MCP server over stdio."""
    logger.info("Starting Notion MCP Gateway")
    logger.info("Using Notion API version 2022-06-28")
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream, write_stream, server.create_initialization_options()
        )


if __name__ == "__main__":
    asyncio.run(main())

Dockerfile for Containerization

# Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Copy requirements and install
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY notion_gateway.py .

# Set environment
ENV PYTHONUNBUFFERED=1

# Run server
ENTRYPOINT ["python", "-m", "notion_gateway"]

requirements.txt

mcp==0.1.0
httpx==0.27.0
tenacity==9.0.0

Build and Deploy

# Build the Docker image
docker build -t my-registry/notion-gateway:latest .

# Push to registry
docker push my-registry/notion-gateway:latest

# Run locally
docker run -e NOTION_TOKEN=$NOTION_TOKEN my-registry/notion-gateway:latest

Wiring to Claude Desktop & Cursor

Now that you have a running gateway, connect it to your favorite AI tools.

Claude Desktop Configuration

File location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Configuration with Docker:

{
  "mcpServers": {
    "notion-gateway": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-e", "NOTION_TOKEN=your_secret_token_here",
        "my-registry/notion-gateway:latest"
      ]
    }
  }
}

Configuration with local Python (development only):

{
  "mcpServers": {
    "notion-gateway": {
      "command": "python",
      "args": [
        "-m", "notion_gateway"
      ],
      "env": {
        "NOTION_TOKEN": "your_secret_token_here"
      }
    }
  }
}

Restart Claude Desktop to apply the changes.

Test the connection: In Claude, ask “What tools are available?” You should see three tools listed:

  • query_database
  • retrieve_page
  • resolve_relation

Cursor Configuration

If you’re using Cursor (≥ 0.46):

  1. Open Cursor Settings
  2. Search for “MCP”
  3. Click “Add remote server”
  4. Paste the same JSON configuration from above
  5. Cursor will auto-restart the container on config changes

Pro tip: Cursor picks up MCP config changes without a full app restart, so iteration is faster there than in Claude Desktop. If you’re doing heavy development with Notion, test in Cursor first.

Troubleshooting Connection Issues

| Issue | Solution |
| --- | --- |
| Tool not found in Claude | Restart Claude Desktop; verify config JSON syntax with a linter |
| 500 error from gateway | Run docker logs <container_id>; verify NOTION_TOKEN is valid and workspace-accessible |
| Timeout (>10s) | Gateway is slow; check the Notion API status page and reduce result set size in queries |
| Permission denied | Integration token doesn’t have access to the database; check Notion settings |
| "Cannot connect" | Docker daemon not running, or port conflicts; check with docker ps |

Handling the 25-Reference Limit

The Notion API returns at most 25 items for relation and rollup properties on a retrieved page object. Let’s demystify this limitation and show you the workaround.

What the Limit Actually Is

From the Notion API documentation:

“When retrieving a relation or rollup property that has more than 25 items, the API returns only the first 25.”

This bites you when:

  • A task has >25 subtasks
  • A project has >25 linked team members
  • A database rollup aggregates >25 related records

This doesn’t affect you when:

  • You’re querying a database (Notion handles pagination for you)
  • You’re fetching a page’s blocks (blocks paginate separately)
  • You’re querying properties other than relations or rollups

The Workaround: Property Pagination Endpoint

The old approach (doesn’t work for >25):

page = await notion_get(f"pages/{page_id}")
relations = page["properties"]["Stakeholders"]["relation"]  # Max 25 items

The correct approach (unlimited):

property_id = page["properties"]["Stakeholders"]["id"]
relations = []
start_cursor = None
while True:
    params = {"page_size": 100}
    if start_cursor:
        params["start_cursor"] = start_cursor
    data = await notion_get(
        f"pages/{page_id}/properties/{property_id}", params
    )
    relations.extend(data["results"])
    if not data.get("has_more"):
        break
    start_cursor = data["next_cursor"]

This endpoint is already built into NotionRelationalResolver in the code above. The resolve_relation_tool handles all pagination automatically.

Performance: Is Multi-Hop Expensive?

Let’s model a realistic scenario: A project page with 100 stakeholders.

NotionRelationalResolver flow:
  1. Fetch project page (1 API call)
  2. Get Stakeholder relation IDs via pagination (~4 calls for 100 refs)
  3. Fetch title from each stakeholder page (100 concurrent calls)
  Total: ~105 API calls
  Time: ~3–5 seconds (with async concurrency)

Is this acceptable?

  • For interactive Claude queries: Yes (user expects a few seconds)
  • For batch operations: Yes (no user waiting)
  • For agent loops that call this 10+ times per minute: No (hits rate limit)

Optimization: If a relation is queried frequently and its data rarely changes, surface the related titles directly in Notion (for example, via a rollup property) or cache the resolved result in your gateway. This avoids the multi-hop cost entirely.

When to Denormalize

Instead of traversing relations every time, consider:

  • Rollup properties: Add a rollup (e.g., Stakeholder Titles) that surfaces the related pages’ titles next to the relation
  • Formula: Use Notion’s formula syntax to compute related data at query time
  • Cache: Store results in your MCP gateway with a TTL; refresh periodically (sketched below)

Each approach trades query latency for data freshness. Choose based on your use case.
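
If you go the caching route, a minimal in-process sketch might look like the following. The cached helper and resolve_titles call are hypothetical, and a single-process dict won’t survive restarts or scale horizontally; swap in Redis or similar for real deployments.

import time
from typing import Any, Awaitable, Callable

_cache: dict[str, tuple[float, Any]] = {}

async def cached(key: str, ttl_seconds: float, fetch: Callable[[], Awaitable[Any]]) -> Any:
    """Return a cached value if it is fresher than ttl_seconds, else refetch."""
    now = time.time()
    hit = _cache.get(key)
    if hit and now - hit[0] < ttl_seconds:
        return hit[1]  # fresh enough: skip the Notion round-trips
    value = await fetch()
    _cache[key] = (now, value)
    return value

# Hypothetical usage inside resolve_relation_tool:
# titles = await cached(f"rel:{page_id}:{relation_property}", 300,
#                       lambda: resolve_titles(page_id, relation_property))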


Monitoring & Observability

Your gateway is now live. Make sure you know what it’s doing.

What to Log

Per-call logging:

import uuid
import time

@server.call_tool()
async def query_database_tool(...):
    call_id = str(uuid.uuid4())[:8]
    start = time.time()

    logger.info(f"[{call_id}] query_database called", extra={
        "database_id": database_id,
        "filter_keys": list(filter_obj.keys()) if filter_obj else [],
        "user_agent": request.headers.get("User-Agent") if hasattr(request, 'headers') else None,
    })

    try:
        result = await notion.query_database(...)
        logger.info(f"[{call_id}] success in {time.time() - start:.2f}s", extra={
            "rows_returned": len(result),
        })
        return result
    except Exception as e:
        logger.error(f"[{call_id}] failed: {e}")
        raise

Metrics to track (a minimal in-process sketch follows this list):

  • Tool call count: Which tool is called most? (query_database likely dominates)
  • Latency per tool: Is resolve_relation slow? (indicates many relations)
  • Error rate: 429s (rate limit), 403s (permission), timeouts (network)
  • Result size: Returning 1000 rows? (token explosion)
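
A simple in-process counter can capture these before you wire up Prometheus or StatsD. This sketch assumes a single-process gateway; the record and summary helpers are illustrative, not part of the gateway code above.

from collections import Counter, defaultdict

tool_calls = Counter()
tool_errors = Counter()
tool_latency = defaultdict(list)

def record(tool: str, seconds: float, ok: bool) -> None:
    """Record one tool invocation; call at the end of each handler."""
    tool_calls[tool] += 1
    tool_latency[tool].append(seconds)
    if not ok:
        tool_errors[tool] += 1

def summary(tool: str) -> dict:
    """Roll up call count, error count, and average latency for one tool."""
    latencies = tool_latency[tool] or [0.0]
    return {
        "calls": tool_calls[tool],
        "errors": tool_errors[tool],
        "avg_latency_s": round(sum(latencies) / len(latencies), 3),
    }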

Where to Send Logs

  • Local dev: stdout (readable with docker logs when running in a container)
  • Docker: mount a volume or use syslog driver to persist
  • Kubernetes: let stdout be ingested by your cluster’s log aggregator (Fluentd, Datadog, etc.)
  • Cloud: CloudWatch (AWS), Cloud Logging (GCP), Monitor (Azure)

Alerts to Set Up

  • Gateway restart loop: MCP server crashes repeatedly (likely config or token issue)
  • High 429 rate: Hitting Notion API limits (slow down queries, add caching)
  • High latency: >5s per call (network, workspace size, or API overload)
  • Permission errors: Token expired or workspace access revoked

Security & Governance

Your gateway has full access to your Notion workspace. Treat it like a database password.

Credential Management

DO:

  • Store NOTION_TOKEN in a secret manager (AWS Secrets Manager, HashiCorp Vault, Kubernetes Secrets)
  • Rotate your token annually
  • Log every tool call (not the token itself)
  • Use separate tokens for dev/test/prod if possible

DON’T:

  • Check the token into git (use .gitignore or pre-commit hooks)
  • Print the token in logs
  • Expose the token via API query parameters
  • Share one token across teams (create separate integrations per team instead)

Limiting LLM Tool Access

Your gateway exposes three read-only tools. Consider:

  • Query databases: Safe-ish (read-only). Risk: LLM could dump entire database.
  • Retrieve pages: Safe-ish (read-only). Risk: could expose sensitive documents.
  • Resolve relations: Safe-ish (read-only). Risk: could enumerate all linked items.

Mitigations:

  • Add tool-level ACLs: gate each tool behind a scope check before it touches the Notion client (see the sketch after this list)
  • Log every tool call with the query intent
  • Use Notion’s database/page-level permissions to isolate sensitive data
  • Educate users: Don’t paste sensitive prompts into Claude if you have a gateway connected
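
The MCP SDK does not ship a required_scope argument, so a scope check is something you layer on yourself. A hedged sketch, where ALLOWED_SCOPES and requires_scope are hypothetical names you would wire into your own tool registration:

from functools import wraps

# Scopes granted to this deployment (e.g., loaded from config or environment)
ALLOWED_SCOPES = {"read:databases", "read:pages"}

def requires_scope(scope: str):
    """Reject a tool call unless the deployment has been granted the scope."""
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            if scope not in ALLOWED_SCOPES:
                raise PermissionError(f"Tool requires scope '{scope}'")
            return await func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical usage: wrap a tool body before registering it
# @requires_scope("read:databases")
# async def query_database_tool(...): ...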

Audit Trail

Add a logging middleware to capture every tool call:

import logging
from datetime import datetime, timezone

logger = logging.getLogger("audit")

async def audit_log_middleware(tool_name: str, params: dict, result: Any):
    """Log every tool call for compliance/debugging."""
    logger.info(f"AUDIT: {tool_name}", extra={
        "timestamp": datetime.utcnow().isoformat(),
        "tool": tool_name,
        "params_keys": list(params.keys()),  # Not the values!
        "result_size_bytes": len(str(result)) if result else 0,
    })

Use audit logs to:

  • Answer “which LLM queries accessed my data?”
  • Detect unusual patterns (same tool called 100x in 1 minute)
  • Comply with audit requirements (SOC 2, HIPAA, GDPR)

Next Steps & Community

You now have a working Notion MCP gateway. Here’s where to go from here.

Enhancements to Consider

  1. Semantic search: Add embeddings (OpenAI, Cohere, Ollama) to find content by meaning, not just keywords (a minimal sketch follows this list)
  2. Write operations: Extend tools to create and update pages, database rows, and relations
  3. Webhook triggers: Listen for Notion webhooks and trigger LLM workflows when data changes
  4. Multi-workspace support: Handle multiple Notion accounts in a single gateway
  5. Caching layer: Cache frequently-accessed pages with a TTL; refresh on webhooks or on-demand
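
To give a flavor of the semantic-search enhancement, here is a minimal sketch with a placeholder embed() provider and a brute-force cosine-similarity index. Everything here is illustrative; a production setup would batch embeddings and use a vector store.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding provider (OpenAI, Cohere, Ollama) here."""
    raise NotImplementedError

def search(index: list[tuple[str, str, list[float]]], query: str, k: int = 5) -> list[tuple[str, str]]:
    """index rows are (page_id, title, embedding) built from retrieve_page output."""
    query_vec = embed(query)
    ranked = sorted(index, key=lambda row: cosine(row[2], query_vec), reverse=True)
    return [(page_id, title) for page_id, title, _ in ranked[:k]]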

Community & Alternatives

If building a custom gateway feels heavyweight, consider:

  • Notion’s official MCP server (@notionhq/notion-mcp-server) if you can live with its constraints
  • Community servers: suekou/mcp-notion-server, ccabanillas/notion-mcp, and others on GitHub
  • SaaS wrappers: Composio, which handles OAuth, gateway hosting, monitoring, and integrations

Each has trade-offs. Official is easiest but limited. Community servers are free but require self-hosting. SaaS wrappers offer convenience but vendor lock-in.

Troubleshooting & Support


Key Takeaways

  • Notion’s official server works for most use cases. Build custom only if you need >25 relations, joins, or behavior Notion doesn’t provide.
  • Three tools are your foundation: query_database, retrieve_page, resolve_relation.
  • The Python MCP SDK is the easiest starting point (FastMCP if you want a higher-level API). The TypeScript SDK (@modelcontextprotocol/sdk) is also solid.
  • Pagination is non-negotiable. The 25-reference limit is a pagination problem, not a blocker.
  • Retry logic (tenacity) prevents transient failures from breaking your agent loops.
  • Markdown output is more token-efficient than JSON for LLM consumption.
  • Monitor and log everything. You’re now a system of record for your Notion workspace.
  • Security matters: Store tokens in secrets managers, audit every call, educate users on risks.

Explore the MCP Ecosystem

Have you built a Notion MCP gateway or used one of the community servers? Your approach might help others in the ecosystem.

Discover more integration options: Explore our MCP server directory to find other data integration servers, development tools, and middleware that pair well with Notion.

For database-first workflows: Check out our Database MCP Servers collection for servers that complement Notion’s capabilities.

For complex orchestration: See our API Gateway and Development Tools section for servers that handle multi-source data coordination.