There's a specific kind of dread that comes with opening a support ticket that just says: "The button doesn't work."

Which button? For which user? On which browser? Since when? You don't know. And so begins the ritual: pinging the reporter, hunting through logs, trying to reproduce something you can't see, in an environment you can't replicate.

It takes forever. And the whole time, somewhere out there, a user is pressing that button over and over again, wondering why your product is broken.

We think there's a better way.

The problem with debugging in the dark

Bug triage has always had an information problem. The data you need is scattered. Session behavior lives in one tool, error logs in another, the ticket in a third, and your codebase somewhere else entirely. Pulling it together takes time that engineers don't have and context that product managers can't always provide.

Pendo Session Replay helped close some of that gap. Instead of guessing what a user experienced, you could watch it. You could see a user hitting a "Run Sync" button, see it error out, and watch them click it again and again. Frustration, made visible. Pendo flags those moments as rage clicks, surfacing them proactively so you're not waiting on a ticket to find out something's wrong.

But seeing the problem is only the start. Getting to a fix still required a lot of manual assembly.

How Pendo Session Replay connects rage click detection to root cause

Here's what the workflow looks like now.

Step one: spot the problem. A session replay surfaces a rage click on a Run Sync button. The user is clearly stuck: repeated attempts, error states, no success. You've got the visual evidence. Now you need to turn it into something actionable.

Step two: create the ticket without the busywork. In the replay player, you select Create an issue, trim the clip to the relevant window, and describe what you expected to happen. With AI enabled, Pendo generates a summary and full description (including reproduction steps and a link to the saved replay clip) and pushes it directly into Jira with fields auto-populated. No tab-switching. No copy-paste. No "I think the issue is around line 47 of the sync flow." Just a complete, ready-to-act ticket in the time it used to take to open Jira.
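To make the "ready-to-act ticket" concrete: the end result of this step is essentially a structured issue payload. The sketch below shows what such a payload might look like using Jira's REST "create issue" field shape; the project key, field values, and helper name are illustrative, not what Pendo actually generates.

```javascript
// Sketch of the kind of Jira payload such a flow might produce. Field names
// follow Jira's REST "create issue" shape; the values are illustrative.
function buildIssuePayload({ summary, steps, replayUrl }) {
  return {
    fields: {
      project: { key: "SYNC" }, // hypothetical project key
      issuetype: { name: "Bug" },
      summary,
      description:
        `Steps to reproduce:\n` +
        steps.map((s, i) => `${i + 1}. ${s}`).join("\n") +
        `\n\nReplay clip: ${replayUrl}`,
    },
  };
}

const payload = buildIssuePayload({
  summary: "Run Sync button fails on repeated clicks",
  steps: ["Open the sync page", "Click Run Sync"],
  replayUrl: "https://app.example.com/replay/abc123",
});
```

The point is that every field an engineer needs, including the link back to the saved replay clip, arrives pre-filled rather than copy-pasted by hand.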

Step three: go deeper with developer tools. When engineering opens the replay, they can toggle on the Developer tools panel and watch the session with two additional layers of context running in sync: the console tab (capturing console.log, console.warn, and console.error output, including uncaught exceptions) and the network tab (every request and response, with method, status code, bodies, and headers). Failed requests are marked in red. As the replay plays, both logs scroll in sync with the timeline, so the moment the button press fails and the network call returns a 500, you see it all at once. No need to ask the customer to reproduce it. No need to guess what the backend returned.
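Capture like this is typically built by instrumenting the browser's own APIs: wrapping the console methods and the fetch function so every entry carries a timestamp that can later be replayed against the session timeline. A minimal sketch of the idea (this is not Pendo's implementation; all names here are illustrative):

```javascript
// Illustrative devlog capture: timestamped console and network entries.
const devlog = [];

// Wrap a console method so each call is recorded before passing through.
function captureConsole(level) {
  const original = console[level];
  console[level] = (...args) => {
    devlog.push({ type: "console", level, message: args.join(" "), ts: Date.now() });
    original.apply(console, args); // still emit to the real console
  };
}

["log", "warn", "error"].forEach(captureConsole);

// A fetch wrapper records each request/response pair, flagging failures
// (status >= 400) the way a network tab marks them in red.
async function capturedFetch(url, options = {}) {
  const entry = { type: "network", method: options.method || "GET", url, ts: Date.now() };
  const res = await fetch(url, options);
  entry.status = res.status;
  entry.failed = res.status >= 400;
  devlog.push(entry);
  return res;
}
```

Because every entry shares one timestamp source, lining the logs up against the replay scrubber becomes a simple sort-and-seek over `ts`.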

Step four: triage with the Pendo MCP. This is where the workflow shifts from faster-same to genuinely different.

With the Pendo MCP connected to your AI assistant, you prompt it to triage the issue: "Take a look at the frustration data, the devlog events, and the contents of this ticket. What's the root cause?" The Atlassian MCP reads the Jira ticket and extracts the replay link. The Pendo MCP's sessionReplayList tool pulls the relevant session and its frustration events. Then devlogEvents fetches the raw HTTP requests, response details, log levels, messages, and stack traces from that specific session, all resolvable directly from the replay URL with no manual lookup required.
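The tool-call sequence the assistant runs can be sketched as a small orchestration. The tool names (sessionReplayList, devlogEvents) come from the workflow above, but the payload shapes and the regex are assumptions for illustration:

```javascript
// Sketch of the triage sequence an AI assistant might drive over MCP.
// Tool names come from the text; argument and return shapes are assumed.
async function triage(ticket, tools) {
  // 1. Extract the replay link from the Jira ticket (the Atlassian MCP's job).
  const replayUrl = ticket.description.match(/https:\/\/\S*replay\S*/)[0];

  // 2. Pull the session and its frustration events (sessionReplayList).
  const session = await tools.sessionReplayList({ replayUrl });

  // 3. Fetch raw requests, log levels, and stack traces (devlogEvents).
  const events = await tools.devlogEvents({ replayUrl });

  // 4. Surface the evidence the model will reason over: failed calls
  //    and error-level log entries from that specific session.
  const failures = events.filter(e => e.status >= 500 || e.level === "error");
  return { session, failures };
}
```

Each step resolves directly from the replay URL, which is why the ticket's embedded clip link is enough for the assistant to reconstruct the whole picture.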

The AI synthesizes everything and surfaces the root cause.

Step five: get to a fix. One more prompt (how would you fix this?) and you bring in codebase context via another MCP connection. The AI has the problem, the error evidence, and the relevant code. It gives you a concrete path forward rather than a list of things to investigate.

The Run Sync button now returns a success state.

Why this matters 

The magic here isn't any single piece. It's the connective tissue.

Session Replay captures what actually happened. Console logs capture what the browser reported. Network logs capture what the backend returned. Structured tickets preserve the context. The Pendo MCP's growing toolset makes all of it queryable by an AI that can reason over it, synthesizing across sources to produce an answer instead of a data dump.

What used to require a support rep, a product manager, and an engineer passing context between each other can now happen in a single, focused workflow. The human judgment is still there. You're steering, not delegating. But the assembly work largely disappears.

That's not a small thing. Assembly work is invisible overhead. It's the difference between an engineer spending an afternoon debugging and an engineer spending twenty minutes validating a solution. Multiply that across every bug ticket, every sprint, every team.

How does Pendo Session Replay detect rage clicks?

Pendo Session Replay automatically flags rage click events — repeated rapid clicks on the same element — as frustration signals in the session timeline. These appear as marked events in the replay scrubber, so you can jump directly to the moment a user started rage-clicking without watching the full session. Unlike standalone heatmap tools that show where rage clicks happened in aggregate, Pendo attaches each rage click event to the individual user's session, their segment, their NPS score, and their feature usage history — giving you the behavioral context that explains why the frustration occurred, not just where.
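The core of rage click detection is a simple pattern: a burst of clicks on the same element inside a short window. A minimal sketch of that logic, with thresholds chosen for illustration (not Pendo's actual values):

```javascript
// Illustrative rage click detector: flags N+ clicks on the same element
// within a short window. Assumes clicks are sorted by timestamp.
function detectRageClicks(clicks, { minClicks = 4, windowMs = 2000 } = {}) {
  const rageEvents = [];
  for (let i = 0; i < clicks.length; i++) {
    const burst = clicks.filter(
      c => c.selector === clicks[i].selector &&
           c.ts >= clicks[i].ts &&
           c.ts < clicks[i].ts + windowMs
    );
    if (burst.length >= minClicks) {
      rageEvents.push({ selector: clicks[i].selector, start: clicks[i].ts, count: burst.length });
      i += burst.length - 1; // skip past this burst so one frenzy isn't reported twice
    }
  }
  return rageEvents;
}
```

What makes the feature useful is less the detection itself than what Pendo attaches to each flagged event: the session, segment, and usage context described above.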

What's the fastest way to diagnose a rage click issue in a SaaS product?

The fastest path from rage click to diagnosis is:

(1) filter session replay by rage click events to isolate affected sessions.

(2) watch the replay alongside the user's behavioral timeline — what they did before and after — to identify whether the frustration was caused by a broken interaction, misleading UI, or a failed expectation.

(3) check whether the affected users share a segment (such as trial users, mobile users, or users on a specific plan) to determine if it's a targeted bug or a broader UX pattern.

Tools that silo your replay data from your analytics data require manual cross-referencing at steps 2 and 3. In Pendo, all three steps happen in a single view because session replay and product analytics share the same data model.
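Steps 1 and 3 together amount to filtering sessions by frustration signal and grouping by segment. A small sketch of that cross-reference, using an assumed session shape rather than Pendo's actual data model:

```javascript
// Sketch of steps 1 and 3: keep only sessions with a rage click, then
// count them per segment. The session shape here is an assumption.
function rageClicksBySegment(sessions) {
  const counts = {};
  for (const s of sessions) {
    if (!s.events.some(e => e.type === "rageClick")) continue; // step 1: filter
    counts[s.segment] = (counts[s.segment] || 0) + 1;          // step 3: group
  }
  return counts;
}
```

If the counts cluster in one segment, you are likely looking at a targeted bug; an even spread points at a broader UX problem.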

The broader pattern

This is a preview of how product development tooling is going to evolve. The shift isn't just that AI can help you write code or draft tickets. It's that AI can now reach across the tools in your stack, pull the relevant context, and reduce the cognitive overhead of figuring out what to do next.

Session Replay has always been about empathy, about putting yourself in your users' shoes. Developer tools and MCP extend that. Now the question isn't just what did the user experience? It's why did it happen, and what do we do about it? Answered faster, with less friction, and without losing the human judgment that makes the fix actually good.

The button works now. And next time something breaks, you'll know exactly how to find out why.

Session Replay Developer tools, Create an issue with AI, and the Pendo MCP are available today. To connect your AI client to Pendo, see Connect to the Pendo MCP server. To set up developer tools for your application, visit Use developer tools in Session Replay.