Editor's Note: The following is a post by Chuck Kesler, Pendo's Chief Information Security Officer.

Artificial intelligence is transforming the enterprise faster than any technology I’ve seen in my career. Like the advent of the Internet in the 1990s and the rise of cloud computing in the 2010s, it’s creating incredible opportunities to transform how we work. It also comes with new risks and new variations of old risks. As security leaders, we want to help the business move forward while understanding and addressing these risks.

At Pendo, we’ve recently been exploring this balance firsthand with Model Context Protocol, or MCP, an emerging standard that connects large language models (LLMs) with the systems and data they need to be truly useful. 

I’ve learned a lot through that process, both as a CISO and as a lifelong technologist who likes to get my hands dirty. Here’s how I think about MCP and what it means for IT and security teams navigating the AI era.

What MCP is and why it matters

MCP is an open protocol developed by Anthropic that lets AI models communicate directly with software systems. You can think of it as a kind of universal translator for LLMs: a simple, standardized way to give AI agents access to external data sources by abstracting away the underlying API calls to applications like Salesforce or Pendo.

In practice, it’s a bridge between natural language models and APIs. An MCP server translates a user’s question into a valid query, retrieves the data, and hands it back to the LLM to generate a useful response.
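To make that flow concrete, here’s a minimal sketch of an MCP server tool using the open-source MCP Python SDK. The `get_feature_usage` tool and its placeholder data are hypothetical stand-ins for illustration, not Pendo’s actual implementation:

```python
# Minimal MCP server sketch using the open-source MCP Python SDK.
# The tool below is a hypothetical stand-in, not Pendo's actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("product-analytics")

@mcp.tool()
def get_feature_usage(feature_id: str, days: int = 30) -> str:
    """Summarize usage of a product feature over the last N days."""
    # In a real server, the underlying API call to the application
    # happens here; the LLM only ever sees this tool's name and schema.
    return f"{feature_id}: 1,234 active users in the last {days} days (placeholder)"

if __name__ == "__main__":
    # Expose the tool over stdio so an MCP-aware client can discover it,
    # turn a user's question into valid arguments, and call it.
    mcp.run()
```

The LLM never constructs the API request itself; it just picks a tool and fills in typed arguments, which is exactly the translation step described above.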

Right now, we’re using MCP internally to help our own AI capabilities “talk” to product data more intelligently. We’re also making the tech available to our customers through Pendo’s new external MCP server, which makes product data instantly accessible to the LLMs, agents, and automation tools that a company’s broader teams use.
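For illustration, the client side of that kind of connection looks roughly like this with the same SDK. The server command and tool name here are assumptions carried over from the sketch above, not our external server’s actual interface:

```python
# Sketch of an MCP client session; the server command and tool name
# are assumptions carried over from the server sketch above.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server.py"])

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # An agent first discovers which tools the server offers...
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # ...then calls one with arguments derived from the user's question.
            result = await session.call_tool(
                "get_feature_usage", {"feature_id": "dashboards", "days": 7}
            )
            print(result)

asyncio.run(main())
```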

The real MCP risk: Identity and authentication

The most important question with any “agentic” AI system is simple: Whose identity is it using?

When you connect an agent to a business application, that agent typically needs to operate under your identity, which means an authentication token or credentials need to be shared with the agent. That’s powerful, but also risky. We’ve tested this ourselves: In some cases, these systems will prompt you for your username, password, and even your MFA code so they can authenticate as you. The credentials aren’t stored long-term, but the process still feels uncomfortable.

To me, authentication models that were never designed for non-human users are the biggest risk area around agentic AI right now—not prompt injection or data leakage. We need to think through how agents authenticate, how sessions are isolated, and how much access they really need.
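As a sketch of what “how much access they really need” can look like in practice, here’s one generic pattern: mint the agent a short-lived, narrowly scoped token instead of handing it the user’s credentials, and check scope and expiry on every call. This illustrates the principle; it is not a mechanism the MCP spec prescribes:

```python
# Generic least-privilege sketch for an agent session; illustrative only,
# not an MCP-defined mechanism.
import time
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    """Short-lived, narrowly scoped credential minted for one agent session."""
    subject: str                       # the human the agent acts on behalf of
    scopes: set[str] = field(default_factory=set)
    expires_at: float = 0.0

def mint_agent_token(user: str, scopes: set[str], ttl_seconds: int = 900) -> AgentToken:
    # The agent never sees the user's password or MFA code, only this token.
    return AgentToken(subject=user, scopes=scopes,
                      expires_at=time.time() + ttl_seconds)

def authorize(token: AgentToken, required_scope: str) -> None:
    if time.time() > token.expires_at:
        raise PermissionError("agent session expired; re-authenticate")
    if required_scope not in token.scopes:
        raise PermissionError(f"agent lacks scope: {required_scope}")

# An agent minted with read-only access can read analytics but not write.
token = mint_agent_token("analyst@example.com", {"analytics:read"})
authorize(token, "analytics:read")       # allowed
try:
    authorize(token, "analytics:write")  # blocked: scope was never granted
except PermissionError as err:
    print(f"blocked: {err}")
```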

As often comes up in my conversations with other security leaders working through these same challenges, these aren’t new risks. They’re just the same fundamentals in a faster, less predictable context.

Learning by building

At Pendo, we decided the best way to understand this technology was to build with it. My security team embraces this mindset, and we have built several AI tools over the past few years both to learn and to make ourselves more effective. 

This summer, we worked with our intern to build an MCP server of our own, designed to automate security workflows across tools. While we still have some work to do before we’re ready to use it on a daily basis, building it helped our security team learn how MCP works, and how it can fail.

Hands-on experience is essential. You can’t secure something you don’t understand. I encourage every IT or security leader to spend time with these technologies before trying to write policy about them.

Balancing innovation and risk reduction

Part of my job is to help the company move fast without losing control. That means saying “yes” as often as possible, but with the right guardrails.

When OpenAI releases new features, for example, our team often wants access right away. Sometimes we wait for the enterprise rollout. Other times, we’ll sandbox the consumer version for limited use. The key is to balance curiosity with caution.

Internally, we foster openness by encouraging employees to share what they’re experimenting with. We’ve built dedicated AI Slack channels where people post discoveries, and we use our own tools along with data loss prevention (DLP) and endpoint management to maintain visibility. That combination of trust and observability is what makes innovation sustainable.

Giving customers control

Not every customer has the same comfort level with AI, and that’s okay. Every AI-powered feature in Pendo can be turned on or off. Some are enabled by default, others aren’t, depending on the data they touch and the associated risk.

That approach grew out of conversations with enterprise customers who wanted clarity on how we use AI. In response, we published our AI principles, including transparency, customer choice, and responsible use of data. Putting those principles in writing helps build trust, both internally and externally.

The road ahead

Security leaders can’t afford to stand on the sidelines. The organizations that thrive with AI will be the ones that learn quickly, collaborate across teams, and adapt responsibly.

My philosophy is simple: Understand how the technology works, how it breaks, and how to manage the risk so the business can move faster, safely. That’s what we’re aiming to do at Pendo with MCP, and it’s a mindset every IT and security team can adopt as we navigate this next chapter of enterprise AI.

Want to learn more about connecting Pendo to LLM tools via MCP? Explore our Autumn Release.