How Claude Code, Cursor, and Windsurf Can Automatically Triage User Feedback
AI coding agents like Claude Code, Cursor, and Windsurf can automatically triage user feedback by connecting to an MCP server. The agent calls tools like list_submissions and get_submission to read bug reports, categorize them by severity, correlate issues with the codebase, propose fixes, and draft user replies — all without the developer opening a dashboard.
Last updated: March 26, 2026
The shift toward agent-driven development is accelerating. GitHub reported that over 1.8 billion lines of code were generated by AI in 2025 through Copilot alone. Anthropic's Claude Code has become one of the most popular developer tools in the MCP ecosystem, and the official MCP specification is maintained as a Linux Foundation project. Research from Princeton and Georgia Tech found that adding source citations and statistics to content improves AI search engine visibility by up to 40% — relevant here because the same structured, citation-rich approach that improves generative engine optimization (GEO) also makes feedback data more useful to coding agents.
This tutorial walks through the setup and shows exactly what each step looks like, from connecting the MCP server to reviewing the agent's output.
Prerequisites
You need three things:
- A web app with UserDispatch installed. Run `npx userdispatch init` in your project directory (setup guide). The CLI handles authentication, widget injection, and MCP configuration. Takes about 2 minutes.
- An MCP-compatible coding agent. Claude Code, Cursor, Windsurf, VS Code Copilot, OpenAI Codex, or Claude Desktop.
- At least one user submission. Submit a test bug report through the widget to have something to triage.
Step 1: Verify the MCP connection
After running the CLI, your agent's MCP config file contains the UserDispatch server. Verify it's connected:
In Claude Code: Ask "What MCP servers are connected?" The agent should list userdispatch among them.
In Cursor: Open the MCP panel in settings. You should see userdispatch listed with a green status indicator.
In Windsurf: Check ~/.codeium/windsurf/mcp_config.json — the userdispatch entry should be present.
If the connection isn't showing, restart your agent. MCP servers are loaded at startup.
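If you ever need to add the entry by hand, it's a standard MCP server declaration. The command and arguments below are illustrative; the CLI writes the real values for you:

```json
{
  "mcpServers": {
    "userdispatch": {
      "command": "npx",
      "args": ["-y", "userdispatch", "mcp"]
    }
  }
}
```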
Step 2: List new submissions
Ask your agent:
"Show me all new feedback submissions for my app."
The agent calls the list_submissions tool with status: "new" and returns a list. Each submission includes the type (bug, question, or rating), a preview of the content, and the timestamp.
Here's what a typical response looks like:
| # | Type | Summary | Submitted |
|---|---|---|---|
| 1 | Bug | "Login button doesn't work on mobile" | 2 hours ago |
| 2 | Question | "How do I export my data?" | 5 hours ago |
| 3 | Bug | "Page crashes when I upload a large file" | 1 day ago |
| 4 | Rating | ★★★★☆ "Love the product, needs dark mode" | 1 day ago |
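Under the hood, the agent issues a standard MCP `tools/call` request. A sketch of its shape — the `status` filter comes from this tutorial, while any other argument names would be assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_submissions",
    "arguments": { "status": "new" }
  }
}
```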
Step 3: Read full details
Ask the agent to dig into a specific submission:
"Get the full details on submission #1 — the login button bug."
The agent calls get_submission and returns the complete report, including the user's description, any attached screenshots, and the automatically captured metadata: browser (Chrome 124), OS (iOS 17.4), viewport (390×844), URL (/login), and console errors if any were logged.
This metadata is what makes agent triage practical. Instead of "the button doesn't work" — which a human would need to investigate manually — the agent gets "the submit button on /login fails on Chrome 124 / iOS 17.4 at 390×844 viewport with a TypeError in the console." That's enough context to locate the issue.
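A `get_submission` result might carry a payload along these lines. The field names and the exact error string are illustrative placeholders, not the actual UserDispatch schema, but the values match the metadata described above:

```json
{
  "id": 1,
  "type": "bug",
  "status": "new",
  "description": "Login button doesn't work on mobile",
  "metadata": {
    "browser": "Chrome 124",
    "os": "iOS 17.4",
    "viewport": "390x844",
    "url": "/login",
    "consoleErrors": ["TypeError: <captured at submission time>"]
  }
}
```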
Step 4: Triage and categorize
Ask the agent to triage your full inbox:
"Triage all new submissions. Categorize by severity (critical, high, medium, low) and type. Flag anything that needs immediate attention."
The agent reads each submission, considers the content and metadata, and produces a categorized summary:
Critical (needs immediate fix):
- #1: Login button broken on mobile — affects core functionality, iOS users can't sign in
Medium (should address this week):
- #3: Large file upload crash — edge case but causes data loss
Low (track for later):
- #2: Data export question — answer available in docs, draft a reply pointing there
- #4: Dark mode request — feature request, not a bug
The agent can also update statuses. Ask it to mark the critical items as triaged:
"Update submissions #1 and #3 to status 'triaged'."
The agent calls update_submission for each one.
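Each status change is a single tool call. A sketch of the request shape, with illustrative argument names:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "update_submission",
    "arguments": { "id": 1, "status": "triaged" }
  }
}
```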
Step 5: Propose a fix
This is where agent-native feedback gets powerful. (For background on this concept, see What Is Agent-Native Feedback?) The agent has access to both the feedback (via MCP) and your codebase (via the editor). Ask:
"Look at submission #1 — the mobile login bug. Check the login page code and propose a fix."
The agent reads the bug report (login button on /login, iOS, 390px viewport, TypeError), then examines your login component. It might find a CSS issue where the button is hidden at small viewports, or a JavaScript error in the form submission handler. It proposes a fix — either as a code suggestion in your editor or as a diff you can review.
You review, adjust if needed, and commit. The bug is fixed.
Step 6: Draft replies
"Draft a brief reply for each triaged submission. Be helpful and concise."
The agent produces a reply for each:
- #1 (login bug): "Thanks for reporting this — we found the issue and pushed a fix. The login button should work correctly on mobile now. Let us know if you see anything else."
- #2 (export question): "Great question! You can export your data from Settings → Export. Here's a link to the docs: [link]. Let us know if you need anything else."
- #3 (upload crash): "We're looking into this — we've reproduced the issue with large files and are working on a fix. We'll follow up when it's resolved."
- #4 (dark mode): "Thanks for the kind words and the suggestion! Dark mode is on our radar. We'll keep you posted."
Review each reply, edit if needed, then:
"Send the replies for submissions #1, #2, and #4."
The agent calls reply_to_submission for each, which sends an email to the user.
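The reply goes out as another tool call. A sketch of the shape, with illustrative argument names and a truncated message body:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "reply_to_submission",
    "arguments": {
      "id": 2,
      "message": "Great question! You can export your data from Settings → Export..."
    }
  }
}
```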
Step 7: Weekly digest
At the end of the week, generate a summary:
"Run the weekly digest for my app."
The agent invokes the weekly-digest prompt and produces a structured report covering total submissions, breakdown by type, resolution rate, average response time, recurring themes, and any unresolved critical items. This replaces the "open a dashboard and look at graphs" ritual.
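If you'd rather compute the digest numbers deterministically instead of relying on the agent's summary, the aggregation is easy to script against `list_submissions` output. A minimal sketch, assuming an illustrative `Submission` shape rather than the real UserDispatch schema:

```typescript
// The Submission shape below is an assumption for illustration,
// not the actual UserDispatch schema.
type Submission = {
  type: "bug" | "question" | "rating";
  status: "new" | "triaged" | "resolved";
};

function digest(subs: Submission[]) {
  // Count submissions per type
  const byType: Record<string, number> = {};
  for (const s of subs) byType[s.type] = (byType[s.type] ?? 0) + 1;
  // Share of submissions that reached "resolved"
  const resolved = subs.filter((s) => s.status === "resolved").length;
  return {
    total: subs.length,
    byType,
    resolutionRate: subs.length ? resolved / subs.length : 0,
  };
}

const stats = digest([
  { type: "bug", status: "resolved" },
  { type: "bug", status: "triaged" },
  { type: "question", status: "resolved" },
  { type: "rating", status: "new" },
]);
// stats: { total: 4, byType: { bug: 2, question: 1, rating: 1 }, resolutionRate: 0.5 }
```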
Agent-specific tips
Claude Code works best for this workflow because it has full access to your filesystem and can propose multi-file fixes. Ask it to both triage feedback and examine the relevant code in a single conversation. See our MCP server guide for the full tool reference.
Cursor excels at inline code fixes. After reading the feedback, ask it to navigate to the relevant file and propose an edit in the editor's diff view.
Windsurf supports the same MCP tools. The workflow is identical — connect via the config file, then use natural language to triage and act.
What you need
Install UserDispatch in any web app:
npx userdispatch init
The free tier includes 100 submissions per month, all 17 MCP tools, and the feedback widget. Your agent can start triaging feedback immediately after setup. For the broader picture of why this workflow matters, read The Feedback Loop for Vibe Coding.
Try UserDispatch free
Collect user feedback, bug reports, and feature requests — then let your AI coding agent handle them via MCP.
UserDispatch Team
Founders
Related Resources
guides
MCP Server for User Feedback: How AI Coding Agents Read and Act on Bug Reports
A technical walkthrough of how MCP servers connect user feedback to AI coding agents. Learn how agents read submissions, triage bugs, send replies, and generate weekly digests.
tutorials
How to Add User Feedback to a Lovable App
Add a feedback widget to any Lovable app in under 5 minutes. Collect bug reports and feature requests, and let your AI coding agent handle triage.
tutorials
How to Add a Feedback Widget to a Next.js App
Add a user feedback widget to any Next.js app (App Router or Pages Router) in under 5 minutes. One command handles everything.