When we talk about RedVolt's Expert Review tier, the question we get most often is some variation of: what does that actually mean in practice? Every audit firm says they combine AI and humans. Most of them mean "our AI generates a list and a junior engineer edits the Word doc." That's not what we do, and the difference matters if you're spending tens of thousands of dollars on a security engagement.
Here's what actually happens in a RedVolt expert review, from the first call through the final retest.
Phase 0: The Scoping Call (30-45 minutes)
Every engagement starts with a call between your team and the senior auditor who will own the review. Not a sales engineer. Not a project manager. The human who will read your code.
The scoping call is where we answer three questions that shape the entire audit:
- What does "in scope" actually mean? Smart contract projects rarely have clean boundaries. Your token is 200 SLOC. Your vesting module is another 500. Your governance module pulls in OpenZeppelin Governor and adds 300 SLOC of custom logic. Your frontend interacts with a paymaster you didn't write. What are we auditing? The answer is usually broader than the team's initial guess — and the scoping call exists to catch that before pricing.
- What are the critical invariants? "The total supply never exceeds the max cap." "No user can withdraw more than they deposited plus rewards." "Governance can't change the oracle during a timelock." These are the things the auditor will spend the most time trying to break. If you don't have them written down, we'll help you write them. If you do, we'll ask sharp questions about whether they actually mean what you think they mean.
- What's the threat model? A DeFi protocol with a multisig controlled by five co-founders has a different threat model than one managed by a timelock-governed DAO. Your audit priorities depend on who the trusted parties are — and the scoping call is where we establish that.
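Invariants are most useful when they stop being prose and become checkable predicates. Here's a minimal sketch in Python of what that looks like — the `Vault` model, names, and numbers are hypothetical stand-ins for real contracts, not RedVolt code:

```python
MAX_CAP = 1_000_000

class Vault:
    """Toy stand-in for the contracts under audit."""
    def __init__(self):
        self.total_supply = 0
        self.balances = {}

    def deposit(self, user, amount):
        assert self.total_supply + amount <= MAX_CAP, "cap exceeded"
        self.total_supply += amount
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount):
        assert amount <= self.balances.get(user, 0), "over-withdrawal"
        self.balances[user] -= amount
        self.total_supply -= amount

# Each scoping-call invariant becomes a predicate the auditor can test:
def inv_supply_capped(v):
    # "The total supply never exceeds the max cap."
    return v.total_supply <= MAX_CAP

def inv_no_negative_balance(v):
    # Corollary of "no user can withdraw more than they deposited."
    return all(b >= 0 for b in v.balances.values())

v = Vault()
v.deposit("alice", 400_000)
v.withdraw("alice", 100_000)
assert inv_supply_capped(v) and inv_no_negative_balance(v)
```

Once stated this way, an invariant is something a fuzzer or a human can attack systematically instead of a sentence in a doc.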
ℹ️ Why this matters
The single most common reason audits miss findings isn't that the auditor is bad — it's that the scope was wrong. A thorough scoping call is the cheapest security investment we can make together. It's also the one we spend the most time on, because getting it wrong costs weeks.
Phase 1: The AI Pass (Hours, Not Days)
Once scope is locked, your code goes into our audit engine. This is where RedVolt's multi-agent architecture does its work:
Sentinel — Protocol Mapper
Builds the complete call graph, state variable inventory, and role hierarchy across every contract in scope
Viper — Vulnerability Hunter
Hunts for logic bugs, arithmetic issues, reentrancy, and economic exploits using the map from Sentinel
Warden — Access Control Auditor
Analyzes every permission check, role grant, and privilege escalation path
Phantom — Edge Case Finder
Explores multi-step attack sequences and economic edge cases
Forge — PoC Generator
Writes Foundry test cases that prove every flagged finding. If the PoC doesn't pass, the finding is flagged for human review.
Scribe — Report Synthesizer
Deduplicates findings, assigns final severity, and generates the draft report
This phase typically completes in 5-15 minutes for contracts under 5,000 SLOC, and up to 90 minutes for larger protocols. The output is a draft report with every AI-identified finding, each backed by a passing Foundry PoC. In our published benchmarks, this pass alone catches 100% of high-severity findings on Code4rena contest codebases — before a human even opens the file.
But the AI pass is not the end. It's the foundation.
Phase 2: The Expert Review (1-3 Weeks)
This is where the engagement turns into a human audit, with the AI output as a head-start. The senior auditor assigned to your project — one person, not a rotating pool — does three things in parallel:
2.1 Verify every AI finding
Every flagged vulnerability gets a manual check. The auditor reads the Forge PoC, re-runs it against the live code, and confirms:
- The finding is reproducible
- The severity is correct (not over- or under-rated)
- The remediation guidance is accurate
- The PoC actually demonstrates the claimed impact
Roughly 10-15% of AI findings get reclassified during this step. Most commonly, a "medium" gets downgraded to "low" because the exploit requires a condition that's economically infeasible, or a "high" gets upgraded to "critical" because the auditor identifies a second exploit path the AI missed.
2.2 Extend the audit beyond AI coverage
This is the part that justifies the expert tier. The auditor spends most of their time on:
- Invariant analysis. Every invariant from the scoping call gets formally stated and manually verified. Anything that can't be verified gets fuzzed with Echidna or Foundry's invariant testing framework until either the invariant holds or a counterexample is found.
- Cross-contract reasoning. The AI analyzes each contract thoroughly but struggles with attacks that chain interactions across 4+ contracts. The auditor builds state diagrams and traces flows that extend beyond the AI's effective context window.
- Protocol-specific patterns. DeFi protocols have attack patterns (JIT liquidity, sandwich attacks, liquidation MEV) that don't generalize across domains. The auditor brings domain expertise — a bridges specialist for bridges, a restaking specialist for restaking — and tests patterns the AI wouldn't generate on its own.
- Documentation vs. code. The auditor reads your whitepaper, your docs, and your marketing pages, and asks: "does the code actually do what you claim?" This is where we find findings that AI never catches because the vulnerability is in the gap between spec and implementation.
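Mechanically, invariant fuzzing means throwing random operation sequences at the contracts until the invariant breaks or the run budget is exhausted. Here's a toy Python version of what a tool like Echidna or Foundry's invariant tester does — the buggy `RewardVault` and all names are made up for illustration:

```python
import random

class RewardVault:
    """Toy vault with a planted bug: accrue() credits the reward twice."""
    def __init__(self):
        self.credit = {}     # what the contract thinks a user may withdraw
        self.withdrawn = {}

    def deposit(self, user, amount):
        self.credit[user] = self.credit.get(user, 0) + amount

    def accrue(self, user, amount):
        # BUG: reward is double-counted
        self.credit[user] = self.credit.get(user, 0) + 2 * amount

    def withdraw(self, user, amount):
        if self.withdrawn.get(user, 0) + amount <= self.credit.get(user, 0):
            self.withdrawn[user] = self.withdrawn.get(user, 0) + amount

def fuzz_invariant(runs=5_000, seed=0):
    """Random operation sequences; return a counterexample trace, or None."""
    rng = random.Random(seed)
    for _ in range(runs):
        v, trace = RewardVault(), []
        true_deposits, true_rewards = 0, 0
        for _ in range(rng.randint(1, 8)):
            op, amt = rng.choice(["deposit", "accrue", "withdraw"]), rng.randint(1, 100)
            trace.append((op, amt))
            if op == "deposit":
                v.deposit("u", amt); true_deposits += amt
            elif op == "accrue":
                v.accrue("u", amt); true_rewards += amt
            else:
                v.withdraw("u", amt)
            # Invariant: withdrawals never exceed deposits plus rewards.
            if v.withdrawn.get("u", 0) > true_deposits + true_rewards:
                return trace
    return None

counterexample = fuzz_invariant()
```

The fuzzer finds the double-credit bug quickly because the invariant is checked against ground truth tracked outside the contract, not against the contract's own (corrupted) accounting — the same reason a well-stated invariant catches bugs the code's own require-checks can't.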
💡 What a good expert review looks like
A good expert review adds 2-5 findings beyond the AI output — usually including 1 critical or high that wouldn't have been caught otherwise. If the human doesn't find anything the AI didn't, either your codebase is very simple or the review was too shallow. We consider it a failure if the human phase doesn't produce new findings.
2.3 Collaborate with your team
Throughout the review, the auditor has direct access to your team. Not a support ticket queue — a shared Slack or Discord channel, async questions, and a standing weekly check-in if you want it.
Most findings start as questions. "In this function, why does the ordering of operations work this way?" The answer your team gives usually either resolves the question or reveals the bug. This is where human auditors add disproportionate value: they ask questions an AI can't, get answers a static tool can't access, and use the context to find issues no one could have found by reading the code alone.
Phase 3: The Report
When the audit is done, you get two artifacts:
- The audit report — a professional PDF with executive summary, detailed findings (each with reproduction steps, PoC code, severity rationale, and remediation), and a methodology section documenting what was and wasn't in scope.
- The markdown findings tracker — a lightweight version of the report designed for your engineers. Each finding has a checkbox, a clear fix recommendation, and references to the exact line numbers affected.
The report explicitly separates AI-identified findings from human-added ones, so you can see what the AI pass produced vs. what required manual analysis. This transparency matters because it tells you something about your codebase: if most findings came from the human phase, your protocol has complexity the AI doesn't handle yet, and you should plan future engagements accordingly.
Phase 4: The Retest (Included)
You fix the findings. You push the fixes. The audit doesn't end there.
RedVolt includes one full retest within 60 days of the original report. The retest works in two phases:
- AI re-run. The AI audit engine runs again against your fix branch. If any original finding was not properly remediated, or if the fix introduced a new issue, it gets flagged automatically.
- Human retest. Your original auditor — same person, same mental model of the protocol — verifies every fix manually. The retest report documents which findings are resolved, which are partial fixes requiring additional work, and whether any regressions were introduced.
Most security firms charge separately for retests. We don't, because we don't think an audit is complete until the fixes are verified. If you need a second retest beyond the included one (e.g., after multiple fix iterations), we charge a reduced rate, but the first retest is part of the engagement.
What This Costs
Expert reviews are priced by scope, not hours. A typical engagement ranges from $15,000 to $150,000 depending on:
- Total SLOC in scope
- Protocol complexity (a simple vault vs. a cross-chain bridge)
- Timeline (expedited reviews cost more)
- Scope expansions during the audit (we're transparent about change orders)
Our pricing is published on the web3 auditor page, and the scoping call ends with a firm quote. No hourly billing surprises.
The Point
The expert review tier isn't "AI audit plus some extra features." It's a fundamentally different product: an AI-accelerated audit where one human auditor owns the full engagement, reads every line that matters, and stays with the project through retest.
If you're building something where the cost of a missed high-severity finding exceeds the cost of the audit itself — which is true for most protocols with real TVL — the expert tier is the right tier. If your scope is simple and your timeline is tight, the self-service AI audit gets you 90% of the coverage in under an hour for a fraction of the cost.
Either way, we're transparent about what you're paying for and what you're getting.
Ready to scope an engagement? Request an expert review and we'll schedule a scoping call within 48 hours.