MCP servers I actually use to run my business in 2026

A living reference of MCP servers I use to run a business - with setup details and gotchas.


I run a B2B AdTech company as CPO. Product, code, ops, analytics, comms across Slack, Discord and Telegram. All me. MCP servers are how one person operates like a team. But here's what most "awesome MCP" lists miss: servers alone are dumb plumbing. Servers + AI skills are the actual product.

MCP + skills: the real unlock

An MCP server gives your AI a capability: "you can now access Slack." A skill tells it when, how, and why to use that capability. Skills are system prompts, rules files (.claude/rules, CLAUDE.md, AGENT.md), Perplexity Space instructions, or custom prompt templates.

Without skills, you have a toolbox. With skills, you have an operator.

Morning standup on autopilot. Every morning I type "daily standup" and get a single report: what happened in Slack, Gmail, Discord, and Huly overnight, what's on my calendar today, and what's overdue. The skill defines which channels matter, which email senders are priority, and how to format the output. The MCP stack behind it: Google Workspace + Slack + Discord + Huly + Memory. I read one screen instead of opening five apps.

Cross-database business reports. I run three separate Postgres databases. One prompt pulls data from all three, cross-references metrics, and renders charts via Quickchart or Excalidraw. The skill describes exactly which tables to query, how to join the data, what the key metrics are, and what format the output should take. Without the skill, the AI would ask me 15 clarifying questions. With the skill, it just runs.

Comms monitoring with draft responses. Slack and Discord channels generate hundreds of messages a day. A skill defines which channels to watch, what "needs my attention" means (mentions, questions, escalations), and how to draft a response in my voice. I review drafts, edit if needed, send. The AI does the reading and first-pass thinking. I do the final call.

The pattern: MCP = verbs. Skills = sentences. You need both.
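To make "skill" concrete, here is a minimal sketch of what one can look like as a rules file. The channel names, senders, and output format below are hypothetical; adapt them to your own stack.

```markdown
# CLAUDE.md — daily standup skill (illustrative example)

## When
When I type "daily standup", compile one report. Do not ask clarifying questions.

## Sources
- Slack: #support and #sales only; ignore #random
- Gmail: unread from the last 16 hours; flag anything from billing contacts or paying customers
- Huly: tasks due today or overdue

## Output
One screen, three sections: Overnight, Today, Overdue. Bullets, links inline.
```

The same structure (trigger, sources, output contract) carries over to the reporting and comms-monitoring skills below.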

Prerequisites

Before any of this works, your machine needs the AI CLIs and runtimes in place. You can follow this guide:

Make your macOS AI agents ready in 15 minutes: the only setup guide you need
A step-by-step guide to set up your macOS for AI coding agents like Codex, Claude Code, and Gemini CLI. One manual step, then a script handles the rest.

or install the AI apps manually:

# AI CLIs
curl -fsSL https://claude.ai/install.sh | bash
npm install -g @google/gemini-cli
npm install -g @openai/codex

# Python server runner
curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.cargo/bin:$PATH"

# For Run Python server
curl -fsSL https://deno.land/install.sh | sh
export PATH="$HOME/.deno/bin:$PATH"

Configs go into claude_desktop_config.json (Claude Desktop), .claude/settings.json (Claude Code), or settings.json (Gemini CLI).
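One structural note: the per-server snippets below show only the server entry itself. In claude_desktop_config.json (and Gemini CLI's settings.json) they all live under a top-level mcpServers object, so a complete file looks like this (entries taken from the servers covered below):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Projects"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```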

Base servers

Always-on. These give the AI fundamentals: file access, structured reasoning, persistent memory.

Filesystem grants read/write access to your directories. Scope it tightly in the args. Pointing at / is asking for trouble.

{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Projects"]
  }
}

Sequential Thinking forces the AI to decompose complex problems step by step instead of hallucinating a one-shot answer. Zero config beyond install.

{
  "sequential-thinking": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
  }
}

Memory stores facts persistently across sessions. The gotcha: you must define when the AI should save and where the file lives. Add rules to your system prompt like "save key decisions and user preferences to memory." Without explicit rules, it either saves everything or nothing.

{
  "memory": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-memory"],
    "env": {
      "MEMORY_FILE_PATH": "/Users/you/Projects/memory.jsonl"
    }
  }
}
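As a sketch, the "when to save" rules can be a few lines in your system prompt or CLAUDE.md. The wording below is illustrative, not canonical:

```markdown
## Memory rules (illustrative)
- Save to memory: final decisions, user preferences, recurring facts about the business.
- Do not save: intermediate reasoning, one-off task details, secrets or credentials.
- Before answering questions about past work, check memory first.
```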

Development

Context7 pulls up-to-date library docs directly into AI context. Remote server, no local install. Get an API key from the Context7 website.

{
  "context7": {
    "type": "streamable-http",
    "url": "https://mcp.context7.com/mcp",
    "headers": { "CONTEXT7_API_KEY": "<YOUR_KEY>" }
  }
}

Grep searches code across 1M+ GitHub repos. No auth, no key.

{
  "grep": {
    "type": "streamable-http",
    "url": "https://mcp.grep.app"
  }
}

Stitch is Google's AI tool for web design, in the same space as Figma. Requires a Google API key passed via the X-Goog-Api-Key header.

{
  "stitch": {
    "type": "streamable-http",
    "url": "https://stitch.googleapis.com/mcp",
    "headers": { "X-Goog-Api-Key": "<YOUR_GOOGLE_API_KEY>" }
  }
}

Talk to Figma is the quirky one. It drives Figma to make designs for you. It uses bunx (not npx), and you must have the Figma MCP plugin running inside Figma while you work. No plugin running = dead server.

{
  "talk-to-figma": {
    "command": "bunx",
    "args": ["-y", "cursor-talk-to-figma-mcp"]
  }
}

GitHub integration uses the streamable-http endpoint with a Bearer token.

{
  "github": {
    "type": "streamable-http",
    "url": "https://api.githubcopilot.com/mcp",
    "headers": { "Authorization": "Bearer <YOUR_GITHUB_TOKEN>" }
  }
}

Repomix packs entire repositories into an AI-readable format. Plain npx, no auth needed.

{
  "repomix": {
    "command": "npx",
    "args": ["-y", "repomix", "--mcp"]
  }
}

Automation and scraping

The highest-leverage category for a solo founder.

n8n connects your AI to your workflow automation engine. You need a running self-hosted n8n instance and an API key.

{
  "n8n": {
    "command": "npx",
    "args": ["-y", "n8n-mcp"],
    "env": {
      "MCP_MODE": "stdio",
      "LOG_LEVEL": "error",
      "DISABLE_CONSOLE_OUTPUT": "true",
      "N8N_API_URL": "https://n8n.yourdomain.com",
      "N8N_API_KEY": "<YOUR_N8N_KEY>"
    }
  }
}

Fetch. URL in, content out. Uses uvx, needs uv installed, no auth.

{
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch"]
  }
}

BrowserMCP bridges your actual Chrome session to the AI. First install the Chrome extension. No extension = silent failure, no error message.

{
  "browsermcp": {
    "command": "npx",
    "args": ["-y", "@browsermcp/mcp"]
  }
}

Playwright when you don't need your live browser session. It spins up its own headless instance. Better for automated scraping flows.

{
  "playwright": {
    "command": "npx",
    "args": ["-y", "@playwright/mcp"]
  }
}

Crawl4AI does AI-optimized web scraping. Heavier setup than the others: it runs via Docker or as a self-hosted deployment and is exposed as an SSE endpoint. You connect through mcp-proxy with Bearer auth. The Docker container must be running first.

{
  "crawl4ai": {
    "command": "uvx",
    "args": ["mcp-proxy", "--headers", "Authorization", "Bearer <YOUR_TOKEN>", "http://your-crawl4ai-host/mcp/sse"]
  }
}

Apify if you need pre-built scrapers for Amazon, LinkedIn, or similar platforms. Remote server, Bearer auth, and $5 of free credits every month.

{
  "apify": {
    "type": "streamable-http",
    "url": "https://mcp.apify.com",
    "headers": { "Authorization": "Bearer <YOUR_APIFY_KEY>" }
  }
}

Brave Search adds web search without Google. Needs a Brave API key.

{
  "brave-search": {
    "command": "npx",
    "args": ["-y", "@brave/brave-search-mcp-server"],
    "env": { "BRAVE_API_KEY": "<YOUR_BRAVE_KEY>" }
  }
}

Databases

PostgreSQL gives direct SQL access from the AI. I run multiple instances pointing at different databases, each with its own config block. Name them clearly. The server name is how the AI decides which DB to query.

{
  "postgres-discovery": {
    "command": "npx",
    "args": ["-y", "@henkey/postgres-mcp-server", "--connection-string", "postgresql://user:pass@host:5432/db_name"]
  }
}
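For example, three databases means three blocks with distinct names. The second name below is hypothetical, but the naming is the point: it lets a skill say "query postgres-billing for revenue" unambiguously.

```json
{
  "postgres-discovery": {
    "command": "npx",
    "args": ["-y", "@henkey/postgres-mcp-server", "--connection-string", "postgresql://user:pass@host:5432/discovery"]
  },
  "postgres-billing": {
    "command": "npx",
    "args": ["-y", "@henkey/postgres-mcp-server", "--connection-string", "postgresql://user:pass@host:5432/billing"]
  }
}
```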

Google services

Three servers, three completely different auth flows. This is where everyone gets stuck.

Google Workspace covers Gmail, Calendar, Drive, Docs, and Sheets. Uses uvx with OAuth 2.0. Create credentials in Google Cloud Console. The --tool-tier complete flag unlocks all tools. Optionally add a Programmable Search Engine for Google Search.

{
  "google_workspace": {
    "command": "uvx",
    "args": ["workspace-mcp", "--tool-tier", "complete"],
    "env": {
      "GOOGLE_OAUTH_CLIENT_ID": "<YOUR_CLIENT_ID>.apps.googleusercontent.com",
      "GOOGLE_OAUTH_CLIENT_SECRET": "<YOUR_SECRET>",
      "GOOGLE_PSE_API_KEY": "<YOUR_PSE_KEY>",
      "GOOGLE_PSE_ENGINE_ID": "<YOUR_ENGINE_ID>"
    }
  }
}

Google Cloud Run manages containerized apps. Completely different auth: gcloud CLI with Application Default Credentials. Free tier available.

gcloud auth login
gcloud config set project your-project
gcloud auth application-default login
{
  "google-cloud-run": {
    "command": "npx",
    "args": ["-y", "https://github.com/GoogleCloudPlatform/cloud-run-mcp"],
    "env": {
      "GOOGLE_CLOUD_PROJECT": "<YOUR_PROJECT>",
      "GOOGLE_CLOUD_REGION": "<YOUR_REGION>",
      "SKIP_IAM_CHECK": "false"
    }
  }
}

Google Analytics uses a service account. Not OAuth, not ADC. You need the SA email, private key (full PEM block), and GA property ID.

{
  "google-analytics": {
    "command": "npx",
    "args": ["-y", "@toolsdk.ai/mcp-server-google-analytics"],
    "env": {
      "GOOGLE_CLIENT_EMAIL": "sa@your-project.iam.gserviceaccount.com",
      "GOOGLE_PRIVATE_KEY": "-----BEGIN PRIVATE KEY-----\n<KEY>\n-----END PRIVATE KEY-----\n",
      "GA_PROPERTY_ID": "<YOUR_PROPERTY_ID>"
    }
  }
}
Server | Auth method | You need
--- | --- | ---
Google Workspace | OAuth 2.0 | Client ID + Client Secret from Cloud Console
Google Cloud Run | Application Default Credentials | gcloud CLI logged in
Google Analytics | Service Account | SA email + private key PEM + property ID

Communication

Three platforms, three entirely different setup paths.

Telegram has the most involved auth of any server in this list. You need a Telegram API app (not a bot token, an actual app from my.telegram.org). Then run an interactive auth flow in terminal with your phone number. If you have 2FA, add --password. If re-authing, add --new. This creates a local session file.

npx -y @chaindead/telegram-mcp auth --app-id <APP_ID> --api-hash <API_HASH> --phone +19001234567
{
  "telegram": {
    "command": "npx",
    "args": ["-y", "@chaindead/telegram-mcp"],
    "env": {
      "TG_APP_ID": "<YOUR_APP_ID>",
      "TG_API_HASH": "<YOUR_API_HASH>"
    }
  }
}

Discord requires a bot application at discord.com/developers. Everyone hits the same wall: you must enable three Gateway Intents (Message Content, Server Members, Presence). Miss one and the bot connects but reads nothing. Also, this server runs via uv from a local clone, not npx.

cd /path/to/mcp-discord
uv sync
{
  "discord": {
    "command": "uv",
    "args": ["--directory", "/path/to/mcp-discord", "run", "mcp-discord"],
    "env": { "DISCORD_TOKEN": "<YOUR_BOT_TOKEN>" },
    "disabled": true
  }
}

Slack is the simplest of the three. Create a Slack app, grab the xoxp user token, done. One detail: SLACK_MCP_ADD_MESSAGE_TOOL defaults to false. Set it to true or your AI can read but not write.

{
  "slack": {
    "command": "npx",
    "args": ["-y", "slack-mcp-server", "--transport", "stdio"],
    "env": {
      "SLACK_MCP_XOXP_TOKEN": "<YOUR_XOXP_TOKEN>",
      "SLACK_MCP_ADD_MESSAGE_TOOL": "true"
    }
  }
}
Platform | Auth type | Key gotcha
--- | --- | ---
Telegram | API app + interactive phone auth | Not a bot token. Full user session
Discord | Bot token + Gateway Intents | 3 intents must be enabled or reads fail silently
Slack | xoxp user token | Message sending disabled by default

Project management

Linear uses mcp-remote to proxy the hosted endpoint. No API key in the config. Auth happens through the remote flow. Claude Desktop also has a built-in Linear integration, so you might not need this at all.

{
  "linear": {
    "command": "npx",
    "args": ["-y", "mcp-remote", "https://mcp.linear.app/mcp"]
  }
}

Notion setup looks simple, but the auth format is fragile. OPENAPI_MCP_HEADERS takes a JSON string with both Authorization and Notion-Version inside. One escaped-quote mistake and it fails silently.

{
  "notion": {
    "command": "npx",
    "args": ["-y", "@notionhq/notion-mcp-server"],
    "env": {
      "OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer <YOUR_TOKEN>\", \"Notion-Version\": \"2022-06-28\"}"
    }
  }
}
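A quick sanity check before escaping the value into your config: feed the raw header string through a JSON parser. This sketch assumes Node is installed, which it is if you're running npx servers.

```shell
# Validate the headers string before escaping it into OPENAPI_MCP_HEADERS.
# If the JSON is malformed, node exits non-zero now instead of the server
# failing silently later.
HEADERS='{"Authorization": "Bearer <YOUR_TOKEN>", "Notion-Version": "2022-06-28"}'
printf '%s' "$HEADERS" | node -e '
let d = "";
process.stdin.on("data", c => d += c);
process.stdin.on("end", () => { JSON.parse(d); console.log("valid JSON"); });
'
```

If it prints "valid JSON", escape the quotes and paste it into the env block.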

Outline connects to a self-hosted wiki. Watch the npx args: it uses a --package flag with a separate -c binary name, which is unusual.

{
  "outline": {
    "command": "npx",
    "args": ["-y", "--package=outline-mcp-server", "-c", "outline-mcp-server-stdio"],
    "env": {
      "OUTLINE_API_KEY": "<YOUR_KEY>",
      "OUTLINE_API_URL": "https://wiki.yourdomain.com/api"
    }
  }
}

Huly uses login-based auth with email and password, not a token. You also pass the workspace name as an env var.

{
  "huly": {
    "command": "npx",
    "args": ["-y", "@firfi/huly-mcp"],
    "env": {
      "HULY_URL": "https://huly.app",
      "HULY_EMAIL": "<YOUR_EMAIL>",
      "HULY_PASSWORD": "<YOUR_PASSWORD>",
      "HULY_WORKSPACE": "<YOUR_WORKSPACE>"
    }
  }
}

Obsidian won't work out of the box. You first need to install the Local REST API community plugin inside Obsidian itself (Settings → Community plugins → "Local REST API" → Install → Enable). Without the plugin, the server has nothing to connect to.

{
  "mcp-obsidian": {
    "command": "uvx",
    "args": ["mcp-obsidian"],
    "env": {
      "OBSIDIAN_API_KEY": "<YOUR_KEY>",
      "OBSIDIAN_HOST": "127.0.0.1",
      "OBSIDIAN_PORT": "27124"
    }
  }
}

AI and code

Hugging Face opens access to models, datasets, and Spaces on the HF Hub. Uses mcp-remote with a Bearer token.

{
  "huggingface": {
    "command": "npx",
    "args": ["mcp-remote", "https://huggingface.co/mcp", "--header", "Authorization: Bearer <YOUR_HF_TOKEN>"]
  }
}

Gemini CLI lets Claude invoke Gemini. No auth in the config. It uses your global Gemini CLI credentials.

{
  "gemini-cli": {
    "command": "npx",
    "args": ["-y", "gemini-mcp-tool"]
  }
}

Claude Code works the other direction: it lets Gemini or another agent invoke Claude. Requires an active login session or API key.

{
  "claude-code-cli": {
    "command": "npx",
    "args": ["-y", "@steipete/claude-code-mcp"]
  }
}

Codex CLI wraps OpenAI's code agent as an MCP tool.

{
  "codex-cli": {
    "command": "npx",
    "args": ["-y", "@trishchuk/codex-mcp-tool"]
  }
}

Perplexity Ask adds real-time web search with source citations into your AI workflow.

{
  "perplexity-ask": {
    "command": "npx",
    "args": ["-y", "server-perplexity-ask"],
    "env": { "PERPLEXITY_API_KEY": "<YOUR_KEY>" }
  }
}

Run Python executes Python in a sandboxed WebAssembly environment. It needs Deno, not Node. Install Deno first, then point the command at the binary path directly. I use it for prototyping in Claude Desktop before deploying to Cloud Run.

curl -fsSL https://deno.land/install.sh | sh
{
  "Run Python": {
    "command": "/Users/you/.deno/bin/deno",
    "args": ["run", "-N", "-R=node_modules", "-W=node_modules", "--node-modules-dir=auto", "jsr:@pydantic/mcp-run-python", "stdio"]
  }
}

Productivity

Excalidraw handles diagrams and whiteboarding. Remote server, no auth, just works.

{
  "excalidraw": {
    "type": "streamable-http",
    "url": "https://mcp.excalidraw.com"
  }
}

Utilities

Desktop Commander does system-level macOS automation.

{
  "desktop-commander": {
    "command": "npx",
    "args": ["-y", "@wonderwhy-er/desktop-commander"]
  }
}

AppleScript is the escape hatch. If no MCP server exists for a macOS tool, AppleScript can probably automate it anyway. Works with any scriptable app and the Terminal.

{
  "applescript-execute": {
    "command": "npx",
    "args": ["-y", "@peakmojo/applescript-mcp"]
  }
}

Quickchart generates charts and graphs.

{
  "quickchart": {
    "command": "npx",
    "args": ["-y", "@gongrzhe/quickchart-mcp-server"]
  }
}

Tally manages online forms.

{
  "tally": {
    "command": "npx",
    "args": ["-y", "mcp-remote", "https://api.tally.so/mcp", "--header", "Authorization: Bearer <YOUR_TOKEN>"]
  }
}

Time does timezone conversions. Simple, but essential when your AI reasons about scheduling across zones.

{
  "time": {
    "command": "uvx",
    "args": ["mcp-server-time"]
  }
}

The "disabled" pattern

Don't run everything simultaneously. Each active server adds tool descriptions to context. More tools = slower, dumber output. Most servers stay "disabled": true and get flipped on per task.
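In practice that means most blocks carry the disabled flag, flipped per task. A sketch, reusing the Playwright server from above (whether the flag is honored depends on your client; if yours ignores it, remove or comment out the block instead):

```json
{
  "playwright": {
    "command": "npx",
    "args": ["-y", "@playwright/mcp"],
    "disabled": true
  }
}
```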

My always-on set: Filesystem, Memory, Sequential Thinking, Google Workspace, PostgreSQL, Slack, Discord, Telegram, Huly, AppleScript. Everything else activates on demand.