MCP Security
MCP (Model Context Protocol) security is the practice of assessing how AI agents connect to MCP servers, tools, and data sources, and ensuring those connections are authenticated, scoped, and safe.
MCP standardizes how an AI agent discovers and invokes tools such as files, databases, and APIs. MCP security focuses on the trust boundaries between the agent, the MCP server, and the underlying systems those tools can reach.
It asks concrete questions: Are tool definitions accurate and trustworthy? Are permissions limited to the current user or workflow? Is transport encrypted and protected against interception? Are tool responses validated before the model takes action?
A good MCP security review documents the servers in scope, the permissions each tool receives, and evidence that controls behave as intended. The outcome is clear, reviewable guidance for engineering and security teams.
How MCP works in practice
MCP establishes a contract between an AI agent and the tools it can use. Security comes from understanding each step where trust is delegated and enforcing controls at those points.
Appsecco's MCP testing traces these handoffs, attempts controlled misuse of tool discovery and calls, and verifies that authentication, scoping, and output checks hold up in real workflows.
Discovery and tool definitions
The agent retrieves a catalog of tools from the MCP server, including inputs, permissions, and scope. Attackers look for ambiguous or overly broad definitions that allow a tool to be used beyond its intended purpose.
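The catalog review described above can be sketched as a small lint over tool definitions. The name/description/inputSchema shape below follows the structure MCP servers return from tool discovery; the scope field and the specific checks are illustrative assumptions, not part of the protocol.

```python
# Hypothetical catalog lint: flag tool definitions that are ambiguous
# or overly broad. The "scope" field is an illustrative extension.

RISKY_SCOPES = ("any", "all", "*")

def lint_tool(tool: dict) -> list[str]:
    """Return findings for a single tool definition."""
    findings = []
    if not tool.get("description"):
        findings.append("missing description: intent is ambiguous")
    schema = tool.get("inputSchema", {})
    if not schema.get("properties"):
        findings.append("no input constraints: accepts arbitrary arguments")
    if tool.get("scope") in RISKY_SCOPES:
        findings.append("wildcard scope: usable beyond its purpose")
    return findings

broad_tool = {"name": "query_db", "inputSchema": {}, "scope": "*"}
print(lint_tool(broad_tool))  # flags all three issues for this definition
```

A tightly specified tool, with a clear description, constrained inputs, and a narrow scope, passes the same checks with no findings.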
Invocation and authorization
When the model selects a tool, the MCP server enforces authentication, per-user scopes, and allowlists. Weak auth or shared credentials can let tool calls execute with more privilege than intended.
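The enforcement point above can be illustrated with a deny-by-default gate in front of tool dispatch. The allowlist shape and function names are assumptions for this sketch, not any specific MCP server's API.

```python
# Minimal per-user authorization gate: a tool call proceeds only if the
# tool appears on the calling user's allowlist. Names are illustrative.

ALLOWLIST = {
    "alice": {"read_file", "search_docs"},
    "bob": {"search_docs"},
}

def authorize(user: str, tool: str) -> bool:
    """Deny by default: unknown users and unlisted tools are rejected."""
    return tool in ALLOWLIST.get(user, set())

def invoke(user: str, tool: str, args: dict) -> str:
    if not authorize(user, tool):
        raise PermissionError(f"{user} may not call {tool}")
    return f"dispatched {tool}"  # placeholder for the real dispatch
```

The key property is that the check runs server-side on every call, so a model that selects the wrong tool still cannot escalate beyond the caller's scope.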
Responses and downstream actions
Tool responses feed back into the model and influence its next steps. Without validation and guardrails, responses can drive unintended actions or data exposure.
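One simple guardrail at this boundary is to screen tool output for instruction-like content before it reaches the model. The patterns below are examples only; a real filter would be broader and would sit alongside policy checks, not replace them.

```python
import re

# Illustrative response screen: reject tool output that contains
# instruction-like phrases. The pattern list is a sketch, not complete.

SUSPECT = re.compile(r"ignore (all|previous) instructions|system prompt", re.I)

def screen_response(text: str) -> str:
    """Pass clean output through; raise on instruction-like content."""
    if SUSPECT.search(text):
        raise ValueError("tool response contains instruction-like content")
    return text
```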
Threat model for MCP security
MCP links an agent to real tools, so the main risk lies in how trust and authority are delegated across the tool catalog, authorization checks, and tool responses. A threat model maps the points where a normal workflow could be steered off course.
We use this model to design MCP tests that validate tool definitions, enforce least-privilege scopes, and confirm response handling in representative workflows.
Tool definition spoofing
Misleading or ambiguous tool schemas that cause the agent to use a tool beyond its intended purpose.
Scope confusion
Authorization rules that are valid but too broad, allowing calls outside the current user or task.
Credential exposure
API keys or tokens surfaced in logs, tool responses, or error messages that can be reused.
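A common mitigation is redaction at the logging layer, so secrets never land in logs or error messages in the first place. The token formats matched here (bearer tokens, "sk-" style keys) are illustrative assumptions.

```python
import re

# Sketch of a log redaction filter; extend the pattern for the
# credential formats your MCP servers actually issue.

SECRET = re.compile(r"Bearer\s+\S+|sk-[A-Za-z0-9]{8,}")

def redact(line: str) -> str:
    """Replace anything that looks like a credential before logging."""
    return SECRET.sub("[REDACTED]", line)

print(redact("auth failed: Bearer eyJabc123"))
# auth failed: [REDACTED]
```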
Response manipulation
Tool outputs that inject instructions or hide critical context before the model acts.
Data overreach
Tool calls that return more data than required, expanding the agent's access beyond need.
Common MCP security vulnerabilities
Most MCP issues arise from unclear boundaries between the agent, the MCP server, and the systems behind each tool. These are common in early MCP integrations and do not reflect a lack of diligence; they are the kinds of edge cases we map and validate during testing.
Over-broad tool scopes
Tools are permitted to act across multiple tenants, users, or workflows because scopes are shared or too generic.
Resolution: We verify per-user and per-task scoping with controlled calls and confirm least-privilege enforcement.
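A controlled call of this kind can be sketched as a cross-tenant check: invoke a tool as one tenant against another tenant's resource and expect a denial. The call_tool client below is hypothetical; real test harnesses differ.

```python
# Cross-tenant isolation check, written against a hypothetical
# call_tool(user, tool, args) client. A well-scoped server denies
# the call; a shared or generic scope lets it through.

def check_tenant_isolation(call_tool) -> str:
    try:
        call_tool(user="tenant_a", tool="read_record",
                  args={"owner": "tenant_b"})
    except PermissionError:
        return "pass: cross-tenant call denied"
    return "FAIL: cross-tenant call succeeded"
```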
Ambiguous or inconsistent tool definitions
Tool schemas do not clearly describe intent, inputs, or limits, which makes it easier for the agent to invoke tools outside the intended use.
Resolution: We review tool catalogs for clarity and test boundary cases that could expand a tool beyond its purpose.
Weak authentication between agent and MCP server
Shared tokens, missing rotation, or relaxed verification allow tool calls to be replayed or used across contexts.
Resolution: We evaluate how credentials are issued, scoped, and validated, then attempt safe misuse to confirm protections.
Unvalidated tool responses
Outputs are treated as authoritative without checks, which can influence subsequent actions or disclosures.
Resolution: We test response handling and verify that validation and policy checks happen before downstream actions.
Excessive data exposure
Tools return more data than the current task needs, widening the agent's access and increasing disclosure risk.
Resolution: We compare returned data to task requirements and confirm minimization is enforced in practice.
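Enforced minimization can be as simple as a per-task field allowlist applied before data leaves the tool. The task-to-fields map below is a hypothetical policy, not a standard.

```python
# Minimization sketch: return only the fields the current task needs.
# The policy map is illustrative; a real one is derived from task scope.

TASK_FIELDS = {
    "lookup_email": {"name", "email"},
}

def minimize(record: dict, task: str) -> dict:
    """Filter a record down to the fields allowed for this task."""
    allowed = TASK_FIELDS.get(task, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Ada", "email": "ada@example.com", "ssn": "000-00-0000"}
print(minimize(record, "lookup_email"))
# {'name': 'Ada', 'email': 'ada@example.com'}
```

Note the default for an unknown task is an empty set, so unrecognized workflows return nothing rather than everything.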
Testing approach for MCP security
A focused MCP security review follows a small set of steps so teams know what will be tested and what will not. Scope, access, and environments are confirmed before work starts.
Confirm scope and environment
We list the MCP servers, tools, and environments in scope and agree on test boundaries and access.
Review tool definitions and permissions
We examine tool catalogs, scopes, and authorization rules to identify where boundaries can blur.
Run controlled misuse cases
We execute safe, non-destructive tests against representative workflows to validate authorization and response handling.
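A misuse case of this kind can be sketched as a small harness entry: each case names the attempted action and whether the control is expected to deny it, and produces a record suitable for the evidence log. The structure is illustrative.

```python
# Illustrative misuse-case harness: run an attempted action and record
# whether the expected control held. "attempt" is any zero-arg callable.

def run_case(name: str, attempt, expect_denied: bool = True) -> dict:
    try:
        attempt()
        outcome = "allowed"
    except PermissionError:
        outcome = "denied"
    held = (outcome == "denied") == expect_denied
    return {"case": name, "outcome": outcome, "control_held": held}
```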
Document evidence and fixes
We provide a written record of what was tested, what held, and what needs adjustment, with clear remediation guidance.
Safe next step
See how MCP testing is scoped, without committing yet.
We can walk through your MCP setup, define boundaries, and share what evidence a review would produce. If it is not the right fit, that is fine.
Explore MCP testing or view a sample report first