What is the Chattermill Model Context Protocol (MCP): A Complete Guide

Last Updated: March 18, 2026
Reading time: 2 minutes

AI agents have gone from futuristic concepts to indispensable daily tools, now deeply embedded in how we work. Adoption is accelerating, and so are our expectations.

Teams are becoming more AI-native. There's real curiosity and excitement about what AI agents can do - and people are actively looking for ways to use them to automate work, boost productivity, and get better insights.

But there's been a ceiling. Large Language Models (LLMs) are powerful - they can search the web and synthesize answers from vast amounts of public information.

On their own, though, they can't access your data: your files, your business systems, the tools and applications you use. They're cut off from the context and business understanding that lives in your own systems. MCP (Model Context Protocol) bridges that gap.

Model Context Protocol explained 

Model Context Protocol (MCP) is an open standard that enables Large Language Models to connect to external applications, tools, and platforms. 

Developed by Anthropic in 2024, MCP is quickly becoming the default way for AI agents (e.g. Claude, Cursor, or ChatGPT) and LLMs to interact with your data and external tools, without the need to build custom integrations.

Think of it like a universal adapter. App developers build one side of the connection (an MCP server), and AI agents build the other side (an MCP client). Once both sides are connected, they can talk to each other - the AI can pull in data from your external applications. 
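To make the adapter idea concrete, here's a sketch of what connecting an MCP server to a client typically looks like on the client side. Many MCP clients (Claude Desktop, for example) read a JSON config with an `mcpServers` section; the server name and command below are hypothetical placeholders, not Chattermill's actual setup - check your agent's documentation for the exact format:

```json
{
  "mcpServers": {
    "example-feedback-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    }
  }
}
```

Once the client has this entry, it launches the server and the two sides negotiate what tools and data the AI can call - no bespoke integration code required.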

Introducing Chattermill MCP

MCP opens the door to connecting any business data to your AI agents - and customer feedback is one of the most valuable data sources a company has.

We believe these insights should be accessible where your product, CX, marketing, and support teams need them - in the AI tools they already use every day.

Say a product manager is using an AI agent (e.g. Claude) to draft a PRD (Product Requirements Document) for improving the checkout experience. By connecting Chattermill data, they can pull in what customers have been saying about checkout - grounding the document in the real voice of the customer, quantified and at scale.

That's customer context, right where the work happens - and exactly why we built the Chattermill MCP.

The Chattermill MCP Server connects AI agents to your Chattermill data - so you can ask questions about customer feedback directly in AI agents like Claude, Claude Code, ChatGPT, Cursor, Codex, Gemini CLI, Manus, Notion Agent, or OpenClaw - and more to come.

Once connected to your preferred AI agent, you'll be able to query your Chattermill data, and the MCP server will respond with relevant insights - all without leaving your current app.

Sign up for early access and start working with your customer insights in a whole new way.

See the Chattermill MCP in action 

As a CX Manager, with the Chattermill MCP, you can bring your customer feedback directly into your AI agent. Ask what your biggest issues are, drill into the details, pull real customer quotes, and even cross-reference with operational data like delivery logs or CRM records - all in one place, without ever leaving your workflow.

And it's not just CX teams - as a Product Manager, you can use those same Chattermill insights to go from customer feedback to a fully drafted PRD, right where your team already works. 

Ask your AI agent what the biggest product issues are, drill into the details, pull user quotes, and turn it all into a drafted PRD in Notion.

You can also give your whole team access to Chattermill insights directly in their AI agent - so anyone can ask questions and get answers, right where they already work.

What you can do with the Chattermill MCP 

With the Chattermill MCP Server you can:

Get quick summaries of customer feedback

When you need a quick overview of what's happening in your feedback data, you can ask for a summary of key topics without digging through individual responses. Helpful when you need an executive summary, want to spot top issues, or just need a quick read.

Example prompt: Summarize what our customers are saying about in-store experience in the UK.

Access quantified insights 

When you want to understand the specific issues behind a theme - like online experience, customer service, or product quality - you can pull a list of those issues along with complaint volumes and example quotes.

Example prompt: Compare customer feedback in Germany between this week and last week. What issues are increasing in volume?

Build reports and analyze data

When you need to answer questions with numbers, the server can build reports with filters, breakdowns, and comparisons - just like you would in the Chattermill app. The AI agent can then interpret the results and surface insights from the data. Useful for extracting metrics, tracking NPS or CSAT, understanding trends over time, or building comparisons and breakdowns.

Example prompt: Produce a PowerPoint presentation on which themes had the highest and lowest NPS in 2025. For each of the top 3 positive and negative themes, include key Observations and relevant customer quotes.

Access raw feedback

When you need to see individual customer responses rather than summaries, you can simply ask for the actual verbatim feedback. This is most useful when you have specific filters in mind and want to read through what customers said directly.

Example prompt: I'm working on improving the digital experience for our app users - find me any relevant customer quotes on this topic.

What this means for your teams

Connecting Chattermill data to your AI agents opens up new ways of working with customer feedback. Here are some of the use cases we see teams getting value from:

MCP lets you query the data without building reports yourself

Building reports and dashboards in any platform takes time - you need to know where to look, which filters to apply, how to break down the data. With the Chattermill MCP, you just ask a question using your AI agent and get an answer in seconds, with the ability to drill deeper immediately.

Speed is part of it, but the real value is that anyone can access Chattermill insights without learning the platform first. 

MCP lets you push insights to external tools 

MCP allows AI agents to connect to multiple tools at once. So if you have the Chattermill MCP connected alongside MCPs for Google Docs, PowerPoint, Miro, or Linear, the AI can pull insights from one and push them into another. This way you can create performance reports, project updates, or briefs using data from multiple sources.

Examples: 

  • Create a Google Doc summarizing the top 5 customer complaints this quarter
  • Add a slide to my PowerPoint QBR presentation with this month's NPS trends 
  • Create a Linear ticket for the checkout issues customers reported last week

MCP can cross-reference data across different tools

MCP can connect to multiple data sources at once - which means you can cross-reference customer feedback with data from your product analytics, CRM, or internal spreadsheets.

Say your product analytics shows a feature has low adoption, and you’re trying to understand why. With MCP, you can pull that usage data and cross-reference it with customer feedback in Chattermill - going from "what's happening" to "why it's happening" in a single query.

MCP can support complex agentic workflows 

For technically advanced teams, MCP servers can unlock complex agentic workflows - combining data from different tools and building Skills: custom instructions that teach AI agents such as Claude how to complete specific tasks in a precise, repeatable way.

For Insights and Research teams, recurring tasks such as building reports, analyzing large volumes of data, or maintaining regular updates are time-consuming to execute manually. Connecting customer feedback data via MCP means these workflows can run automatically, without that recurring manual effort.

How to get started with the Chattermill MCP 

The Chattermill MCP is available with any Chattermill package. Data access for each user is based on their role and permissions in Chattermill.

It already works with agents such as Claude, Claude Code, Cursor, Codex, Gemini CLI, Manus, Notion Agent, OpenClaw, and many others, with support for more agents coming soon.

Whether you're AI-curious or aspiring to become truly AI-native, this is your opportunity to start working with customer insights in a whole new way.

Ready to try it? Sign up for the early preview to gain access to documentation and demos, and stay up-to-date with the latest changes as we continue to build.

How we’re using MCP to build agents at Chattermill

Today, the Chattermill MCP Server lets you bring customer data into a range of MCP-compatible AI agents. Soon, we'll use MCP to power agentic capabilities inside Chattermill itself.

Why does that matter? Because MCP is what allows AI agents to connect to your tools and take action - not just answer questions. 

This is just the beginning. We're committed to making customer insights more accessible, with faster workflows and smarter outputs. Look out for more updates as we continue to invest in this space.

