How We Work
How we run security testing engagements
We test the apps, APIs, cloud infrastructure, and AI/MCP integrations you place in scope, using careful, non-disruptive methods and a defined plan. This page outlines timelines, communication, and deliverables.
Engagement models
Choose a model with clear scope and outcomes
Every model starts with a written scope, defined testing window, and agreed reporting format. You see what will be tested, what will not, and what you will receive.
Project-based assessment
A fixed-scope engagement for a defined release, launch, or annual requirement.
Typical timeline: 2–4 weeks
Best for
- Pre-launch validation
- Annual compliance testing
- Major architecture changes
Included
- Written scope and rules of engagement
- Scheduled testing window
- Draft review before final report
Rolling retainer
A monthly cadence that tests new features and high-risk areas as you ship.
Typical cadence: monthly cycles
Best for
- Continuous delivery teams
- Growing product surfaces
- Ongoing security visibility
Included
- Monthly scope plan agreed in advance
- Status updates during each cycle
- Retest verification on fixes
Focused security check
A short, targeted review of a specific feature, integration, or risk area.
Typical timeline: 3–5 business days
Best for
- New integrations
- Sensitive flows (auth, payments)
- Quick validation before launch
Included
- Target list and test data confirmed up front
- Evidence-backed findings
- Clear next-step guidance
Every model includes
Evidence and reproduction steps
Each finding includes supporting evidence, the steps taken, and the affected endpoints or flows so engineering can verify it quickly.
Prioritized fix guidance
Clear remediation guidance with severity, impact, and recommended next steps.
Review-ready summary
Executive summary plus a scope statement and out-of-scope list for internal and auditor review.
Scope is fixed before work starts
We confirm targets, environments, and constraints in writing.
No surprises or hidden add-ons
Changes only happen if you request them, with a written update.
A predictable engagement timeline
Every engagement follows a clear sequence with fixed scope and pricing agreed before testing begins. You always know what is happening, when it happens, and what you will receive.
Scope confirmation
We document targets, environments, access needs, and constraints in a written scope and rules of engagement.
Schedule and communication plan
We agree on the testing window, points of contact, and update cadence so there are no surprises.
Testing window
Testing runs inside the agreed window using non-disruptive methods, with progress updates as planned.
Draft review and final report
You receive a draft for review, then a final report with clear evidence and prioritized remediation guidance.
Communication that stays predictable
Before testing begins, we agree on points of contact, update cadence, and escalation paths. During the engagement, we stick to that plan so there are no surprises.
We align to the channels your team already uses so updates stay consistent and easy to track.
Before testing starts
- Confirm primary and backup points of contact
- Agree on update cadence and preferred channels
- Document start and end dates, plus any maintenance windows
During the testing window
- Status updates delivered on the agreed cadence
- Immediate notice for anything that could affect availability
- Questions routed through your primary contact
After testing wraps
- Draft review call to validate findings and context
- Written record of any scope changes or clarifications
- Clear next steps for fixes and retest planning
What stays consistent
Deliverables that stay predictable
You receive a consistent report package on the agreed delivery date. Each item is designed to make internal review and remediation straightforward.
Scope record and test window
Written scope with in-scope and out-of-scope lists, assumptions, and the agreed testing window.
Evidence-backed findings
Clear reproduction steps, affected endpoints or flows, and impact context for engineering review.
Prioritized remediation guidance
Severity and fix guidance tied directly to each finding so teams can plan work confidently.
Review-ready summary
Executive summary and risk overview packaged for internal stakeholders and auditors.
Retest confirmation notes
Verification notes for fixes you ask us to confirm, captured in the same report package.
What stays fixed
Frequently asked questions about how we work
What is included in the scope document?
A written list of in-scope apps, APIs, environments, testing dates, access requirements, and explicit out-of-scope items. This becomes the baseline for the engagement and the report.
What access or setup do you need from our team?
We confirm credentials, test data, and environment details before we start. If production testing is required, we agree on safeguards and a defined window in writing.
How will we know what is happening during testing?
We follow the update cadence agreed at kickoff. You receive planned status updates and immediate notice if anything could affect availability.
What does the report actually include?
Each finding includes evidence, reproduction steps, affected endpoints or flows, and prioritized fix guidance. The report also includes an executive summary plus in-scope and out-of-scope lists.
Do we get a draft before the final report?
Yes. We share a draft for review so you can validate context, add clarifications, and confirm scope before final delivery.
How does retesting work?
Retests are scheduled when you request verification. We confirm scope and timing for the retest and document the results in the same report package.
Safe next step
Talk through your scope.
No commitment required.
Share what you are planning to test and your timeline. We will outline a clear, fixed-scope engagement and answer questions before you make any decision.
Start a conversation or view a sample report first