AI Security

MCP Server Security Testing

We test Model Context Protocol servers and integrations through scoped assessments that examine transport security, tool safety, and access controls.

Authors of the open-source MCP pentesting checklist, featured in tldrsec.

What is MCP?

Model Context Protocol (MCP) is a standard that lets AI assistants connect to external tools and data sources. When you give Claude access to read files, query databases, or call APIs, that connection often uses MCP.

Each MCP server exposes capabilities to the AI. A file server might let the AI read and write files. A database server might let it run queries. Understanding these boundaries helps teams configure servers appropriately and scope access correctly.

What You'll Receive

Access boundary documentation

A clear map of which files, databases, and resources each server can reach, with specific recommendations for tightening scope.

Injection test results

Evidence of whether malicious content in documents or data can change AI behavior, with reproduction steps and remediation guidance.

Credential handling review

Assessment of how tokens and API keys are stored and scoped, with prioritized fixes for any gaps found.

What We Test

Attackers target MCP servers because they bridge AI capabilities with sensitive systems. We test the specific boundaries where that access can be abused.

Transport & Connection Security

MCP servers communicate over stdio, HTTP, or Server-Sent Events. We test how messages are authenticated, encrypted, and validated at each layer.

  • Protocol implementation vulnerabilities
  • Message tampering and injection
  • Connection hijacking risks
  • TLS configuration and certificate validation

Tool Safety & Parameter Validation

Each tool the AI can invoke is a potential entry point. We test whether tool parameters can be manipulated to perform unauthorized actions.

  • Path traversal in filesystem tools
  • Command injection in shell tools
  • SQL injection in database tools
  • Unsafe deserialization
  • Missing input validation
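For filesystem tools, the classic failure is joining a user-supplied path onto a root directory without canonicalizing it first. A minimal sketch of the safe pattern in Python (the root path and function name are illustrative, not from any specific server):

```python
from pathlib import Path

def resolve_within_root(root: str, requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the root."""
    root_path = Path(root).resolve()
    # resolve() collapses ".." segments, so a traversal attempt like
    # "../../etc/passwd" lands outside root_path; joining an absolute
    # requested path also replaces root_path entirely and is rejected.
    candidate = (root_path / requested).resolve()
    if not candidate.is_relative_to(root_path):
        raise PermissionError(f"path escapes allowed root: {requested}")
    return candidate
```

A tool that resolves paths this way before every read or write turns traversal attempts into explicit errors instead of silent file access.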

Prompt Injection & Data Leakage

Documents and data the AI processes can contain hidden instructions. We test whether malicious content can change AI behavior or extract information.

  • Indirect prompt injection via documents
  • Instruction override attempts
  • System prompt extraction
  • Context window poisoning
  • Data exfiltration through tool outputs
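As one illustration of the retrieval side, a pipeline can flag instruction-like phrasing in fetched content before it reaches the model. The patterns below are hypothetical examples; pattern matching alone is not a complete defense against indirect prompt injection, only a first tripwire:

```python
import re

# Illustrative patterns only; real injection attempts vary widely
# and detection needs more than a fixed regex list.
SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"disregard .{0,40}system prompt",
    r"you are now",
]

def flag_injection(content: str) -> list[str]:
    """Return the patterns that matched, empty if none did."""
    return [p for p in SUSPICIOUS if re.search(p, content, re.IGNORECASE)]
```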

Resource Access Controls

MCP servers define which files, databases, and services the AI can reach. We verify these boundaries hold under adversarial conditions.

  • File system access boundaries
  • Database query restrictions
  • API scope limitations
  • Secrets exposure in tool responses
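For example, a database tool can gate statements before execution. The check below is a deliberately crude sketch; durable enforcement belongs in database-level permissions (a read-only role), not string matching:

```python
def is_read_only(sql: str) -> bool:
    """Crude gate: accept only a single statement that starts with SELECT.

    Illustrative only; stacked queries, CTEs that write, and vendor
    extensions make string checks insufficient on their own.
    """
    stripped = sql.strip().rstrip(";").strip()
    # Reject anything that is not a SELECT, or that stacks a second
    # statement behind an embedded semicolon.
    return stripped.upper().startswith("SELECT") and ";" not in stripped
```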

OAuth & Credential Hygiene

AI integrations often use OAuth tokens to access external services. We test whether these credentials are properly scoped and protected.

  • Excessive OAuth scopes
  • Token storage security
  • Refresh token handling
  • Multi-tenant isolation
  • Credential leakage in logs
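One recurring gap is tokens echoed into logs by tool wrappers. A minimal redaction filter, using illustrative token prefixes (the real prefixes to match depend on which services your integrations use):

```python
import re

# Example prefixes only; extend for the credential formats in your stack.
TOKEN_PATTERN = re.compile(r"(?:sk|ghp|xoxb)-[A-Za-z0-9_\-]{10,}")

def redact(line: str) -> str:
    """Mask token-like strings before a line reaches log output."""
    return TOKEN_PATTERN.sub("[REDACTED]", line)
```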

Supply Chain & Trust

MCP servers can be installed from package registries. We verify package authenticity and check for known vulnerabilities in dependencies.

  • Package namespace verification
  • Dependency vulnerability scanning
  • Server authenticity validation
  • Code signing and integrity
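Package authenticity checks often reduce to comparing a downloaded artifact against a pinned digest published by the maintainer. A sketch of that comparison:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against a pinned value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives do not load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Package managers offer the same guarantee natively, for example pip's `--require-hashes` mode with hashes pinned in a requirements file.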

What Assessments Reveal

These are representative findings from MCP assessments. Each finding includes specific remediation steps, so teams know exactly what to address.

Path Resolution in File Tools

File reading tool used naive path joining. We documented the exact scope boundary and provided a fix to restrict resolution to designated directories.

Resolution: Restricted path resolution to designated directories

Content Handling in Document Retrieval

Document retrieval tool returned content verbatim. We identified the parsing boundary and recommended sanitization before processing.

Resolution: Added content sanitization before processing

Package Verification

Customer had installed a similarly named package instead of the official integration. We verified the correct source and documented the verification process.

Resolution: Verified official package sources

OAuth Scope Review

MCP server requested broader access than needed. We mapped the actual requirements and recommended scoping to minimum permissions.

Resolution: Scoped tokens to minimum required permissions

Execution Environment Boundaries

Code execution tool lacked isolation. We documented the boundary and recommended container isolation for execution.

Resolution: Implemented container isolation

How We Work

MCP testing follows a defined sequence. Each step has a clear scope and deliverable. You'll know what we're testing, what we've found, and what to do about it.

Map the Environment

We document all MCP servers, their tools, and the resources they access. This creates the baseline for testing scope.

Test Tool Implementations

Each tool is tested for input validation, path handling, and boundary enforcement. Findings are documented with reproduction steps.

Test Data Flow

We examine how content moves through the system and whether it can carry instructions that change AI behavior.

Review Credentials and Access

Token scopes, storage, and isolation are assessed against the principle of least privilege.

Verify Supply Chain

Package authenticity and dependencies are checked. You receive a verified inventory of what's installed.

Fixed scope defined before work begins
No surprise costs or scope changes
Clear deliverables at each step

Who This Is For

MCP security testing is relevant when AI assistants can access tools and data on your systems. These scenarios have different testing priorities.

Building MCP Servers

You're developing MCP servers that will be used by Claude, GPT, or other AI assistants. You want to verify that your tool implementations handle input correctly before users depend on them.

You receive:

Tool-by-tool security assessment with reproduction steps for each finding

Deploying AI Assistants Internally

Your team uses AI assistants with access to internal tools, files, or databases. You need to understand what boundaries exist and whether they hold.

You receive:

Access boundary documentation and configuration recommendations

Shipping AI Features to Customers

Your product includes AI features that connect to external services or customer data. You need evidence that these integrations don't introduce new risks.

You receive:

Integration security report suitable for customer security reviews

Common Questions

What do we receive at the end of an MCP assessment?

You receive a detailed report documenting each MCP server tested, the tools examined, and findings organized by severity. Each finding includes reproduction steps, screenshots or logs where applicable, and specific remediation guidance. The report is structured so your engineering team can address issues directly without needing follow-up clarification.

Do you test MCP clients, servers, or both?

We test both. MCP servers are assessed for how their tool implementations handle untrusted input and enforce boundaries. MCP clients are assessed for how they handle responses from potentially malicious servers. A complete assessment covers the full integration between client and server.

We built our MCP server internally. Can you test it?

Yes. We test custom MCP implementations regardless of framework or language. Our testing client works with any server that implements the MCP protocol. We've tested servers built in TypeScript, Python, Go, and Rust.

What if we use third-party MCP servers from npm or GitHub?

We verify the packages you've installed are authentic, check for known vulnerabilities in dependencies, and test how the servers are integrated into your environment. If you have authorization, we can also test the third-party servers directly.

How is MCP testing different from standard API pentesting?

Traditional API testing focuses on HTTP endpoints with predictable inputs. MCP testing examines the additional attack surface created when AI models interpret prompts and invoke tools. The AI's interpretation layer means inputs aren't deterministic—malicious content in documents can change tool behavior in ways standard API testing wouldn't catch.

Do you provide guidance on fixing issues?

Every finding includes specific remediation steps. For code-level issues, we provide example fixes. For configuration issues, we document the exact settings to change. After you've made fixes, we offer re-testing to verify the issues are resolved.

Safe next step

Review your MCP scope with a security engineer

Share how your MCP servers and tools are used. We will outline a scoped test plan, answer questions, and only proceed if it feels right.

Discuss your MCP scope

or view a sample report first

No obligation to proceed
Scope and pricing agreed up front
Clear deliverables in writing