Expert Review · Process · Audit Industry · Product

One Expert Per Project: Why It Matters for Security Audits (And Why Most Firms Don't Do It)

March 30, 2026 · 9 min read · RedVolt Team

If you've ever commissioned a security audit, you may have noticed something strange about the process. You start the engagement with one person — usually the senior who sold you the audit. Then, at some point, that person disappears, and a different person answers your questions. Then you get findings from a third person. By the time the final report lands, you've interacted with four or five humans, and you have no idea which one actually read your code.

This is the default in the audit industry. It's how most firms, from traditional Big 4 security practices to boutique smart contract shops to enterprise pentesting agencies, structure engagements. And it's quietly one of the biggest reasons audits miss findings that, in hindsight, were obviously there.

At RedVolt, we do the opposite. Every expert review is owned end-to-end by one senior auditor, matched to your domain, who stays with the engagement from scoping call to final retest. This isn't a scheduling accident. It's a deliberate security model, and it's worth explaining why.

The Fragmented Audit: What Most Firms Actually Do

Here's a typical engagement at a boutique audit firm, reconstructed from conversations with clients who've been through it:

1. Scoping (Week 0). A senior partner leads the scoping call. They understand your protocol deeply. They quote the engagement.

2. Handoff (Week 1). The engagement is assigned to a "team lead" — often 2-3 years junior to the partner. The team lead may or may not re-read your docs.

3. Execution (Weeks 1-3). The actual code review is done by 2-4 engineers of varying seniority. Each is assigned a set of files. They work in parallel and communicate asynchronously.

4. Synthesis (Week 3). The team lead aggregates findings, resolves conflicts, and drafts the report. They may not have independently reviewed every finding.

5. Retest (Week 5+). By now, several of the original reviewers have rotated to other projects. The retest is often done by whoever is available.

This model makes economic sense for the firm. It lets them scale revenue without scaling senior hiring. It fills junior engineers' calendars. It smooths utilization across many simultaneous engagements.

It makes much less sense for the client. And it produces a specific failure mode that we see repeatedly when we audit contracts that have already been audited by other firms.

The Failure Mode: Gaps Between Reviewers

In a multi-reviewer audit, each reviewer has a slice of the codebase. Reviewer A reads the governance module. Reviewer B reads the vault. Reviewer C reads the oracle.

Each of them is competent. Each of them reviews their slice thoroughly. Each of them finds the bugs that are local to their slice — a reentrancy in the vault, a signature replay in the governance module, a missing staleness check in the oracle.

What none of them find is the bug that requires reasoning across all three. The attack that starts by manipulating governance, propagates through the oracle, and cashes out in the vault. That attack only exists in the gaps between reviewer responsibilities. And in our experience auditing contracts that have already been audited elsewhere, these are the bugs we find about 30% of the time.

⚠️ A real example (anonymized)

A DeFi protocol came to us for a follow-up audit after a well-known firm had delivered a clean report. Our Phantom agent flagged an economic exploit that chained a governance proposal with a liquidation loop. When we dug in, the chain spanned three contracts — each of which had been reviewed by a different auditor at the previous firm. None of them had seen the other two. The finding would have cost the protocol $60M at the TVL they had at the time.

This is not a hypothetical concern. Cross-contract reasoning is the single most expensive skill in security auditing because it requires one human to build a complete mental model of the protocol. Fragmented audits make this structurally impossible.
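
To make this concrete, here is a deliberately simplified Python sketch of the pattern. The module names, parameters, and numbers are all hypothetical and real protocols are far messier; the point is only that each module's local checks are correct, so each reviewer honestly signs off on their slice, while the composition of the three is exploitable.

```python
"""Toy model of a cross-module exploit that no single-slice review would flag.

All names and numbers are hypothetical and heavily simplified. Reviewer A sees
only Governance, Reviewer B only Oracle, Reviewer C only Vault; each slice's
local invariants hold, yet the composition is exploitable.
"""

from dataclasses import dataclass


@dataclass
class Governance:
    """Reviewer A's slice: vote counting and quorum are implemented correctly."""
    quorum: int = 3

    def execute(self, proposal, yes_votes: int) -> None:
        # Local invariant holds: nothing executes without quorum.
        if yes_votes < self.quorum:
            raise PermissionError("quorum not reached")
        proposal()  # e.g. an innocuous-looking parameter change


@dataclass
class Oracle:
    """Reviewer B's slice: staleness checks are implemented correctly."""
    feeds: dict                    # feed name -> {"price": float, "age_s": int}
    source: str = "eth_usd_feed"   # which feed to trust (governance-settable)
    max_age_s: int = 3600

    def price(self) -> float:
        report = self.feeds[self.source]
        # Local invariant holds: stale reports are rejected.
        if report["age_s"] > self.max_age_s:
            raise ValueError("stale price")
        return report["price"]


@dataclass
class Vault:
    """Reviewer C's slice: the loan-to-value check is correct, given its inputs."""
    oracle: Oracle
    reserves: float = 10_000_000.0
    max_ltv: float = 0.8

    def borrow(self, collateral_units: float, amount: float) -> float:
        collateral_value = collateral_units * self.oracle.price()
        # Local invariant holds: never lend above max LTV of the reported value.
        if amount > collateral_value * self.max_ltv:
            raise ValueError("insufficient collateral")
        self.reserves -= amount
        return amount


# The cross-module attack, visible only to someone holding all three in one head.
feeds = {
    "eth_usd_feed":  {"price": 2_000.0, "age_s": 60},
    "attacker_pool": {"price": 2_000_000.0, "age_s": 10},  # manipulable "price"
}
gov, oracle = Governance(), Oracle(feeds)
vault = Vault(oracle)

# Step 1: a routine-looking governance proposal re-points the oracle's source.
gov.execute(lambda: setattr(oracle, "source", "attacker_pool"), yes_votes=3)
# Step 2: the oracle faithfully reports the fresh, manipulated price.
# Step 3: the vault lends against collateral it believes is worth 1,000x reality.
loot = vault.borrow(collateral_units=1.0, amount=1_500_000.0)
print(f"drained {loot:,.0f} against collateral actually worth about 2,000")
```

Reviewer A confirms quorum is enforced, Reviewer B confirms stale prices are rejected, and Reviewer C confirms the loan-to-value math; the exploit lives entirely in the arrows between their slices.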

The One-Expert Model

RedVolt's expert review tier assigns a single senior auditor to your engagement. Not as a gesture. As an architectural requirement. That auditor:

  • Leads the scoping call. They ask the questions. They propose the threat model. They write the engagement plan.
  • Reads every contract in scope. Not just a slice. Every line that matters, with the context of every other line.
  • Works alongside the AI. Our multi-agent audit engine runs first and produces a draft report. The auditor doesn't start from scratch — they start from the AI's output, verify it, and extend.
  • Owns communication. When your team has questions during the audit, they go to the auditor directly. When findings are disputed, the auditor resolves them. No triangulation through a project manager.
  • Writes the report. The report is their work, signed by them. If you have follow-up questions a month later, you email them specifically.
  • Conducts the retest. When you ship fixes, the same auditor verifies them. They've already internalized the protocol. They know exactly what the fix is supposed to do.

This model has tradeoffs — and it's worth being honest about them.

The Tradeoffs

Running this model isn't free. Specifically:

1. We can only run a limited number of expert reviews in parallel. Every senior auditor has a capacity limit. If we sell more engagements than we have seniors, we either break the one-expert model or decline work. We do the latter, which means our expert review tier has a waitlist. Typical lead time is 2-4 weeks from scoping call to audit start.

2. The auditor is a single point of failure. If our auditor gets sick or has a family emergency mid-engagement, there's no team to pick up the slack. We mitigate this by having a senior peer review every draft report before delivery, so the peer has enough context to finish the engagement if needed. But it's a risk, and we're transparent about it.

3. Expert reviews cost more than our AI-only tier. A dedicated senior human for 1-3 weeks is not cheap. Our pricing reflects that — expert reviews start at $15,000 for small scope and go up to $150,000+ for complex cross-chain protocols. Our AI-only audits start at $3 per SLOC, which is a fraction of the cost and a very reasonable choice for most projects.

We think these tradeoffs are the right ones for high-stakes audits. For a protocol managing real money, the cost of a missed finding dominates every other line item. The one-expert model is structurally less likely to miss findings that span the codebase. That's worth the price and the wait.

Why the Industry Doesn't Do This

If one-expert engagements produce better audits, why doesn't every firm do it?

Three reasons. They're not flattering to the industry, but they're real:

  1. Senior auditors are expensive. A senior who can own an engagement end-to-end commands a salary that makes one-expert engagements economically tight. Most firms make the math work by putting juniors on most projects and seniors only on the most prestigious ones.

  2. Scheduling is easier with rotating teams. If every auditor is interchangeable, you can fill your calendar efficiently. If each auditor owns their engagements, you end up with unpredictable gaps. Firms that optimize for utilization end up with fragmented audits.

  3. The client usually can't tell. An audit report is opaque. The client doesn't know whether it was written by one person or five. By the time the bug hits production, the firm has long since been paid and the client has moved on.

We don't solve any of these problems by being smarter than other firms. We solve them by offering two tiers — a cheap AI-only tier for projects where the economics don't justify a senior, and an expert tier for projects where they do. We're explicit about what each tier delivers, and we don't blur the line.

Matching the Expert to Your Project

One more detail that matters: we match the specific auditor to your protocol's domain.

If you're building a bridge, your auditor has previously audited bridges — they know the replay semantics, the message coordination patterns, the finality assumptions. If you're building a restaking protocol, your auditor knows the EigenLayer architecture and the operator slashing edge cases. If you're building an AA paymaster, your auditor has read ERC-4337 enough times to have opinions about the spec ambiguities.

We have about 15 senior auditors on the expert roster, each with a documented specialty. Your scoping call is with the auditor we think is the best match. If we don't have the right specialist available in your timeframe, we say so — and we'll either defer until one is free or recommend another firm.

What You Can Take From This

If you're evaluating audit firms for your next engagement, here are four questions worth asking — regardless of whether you end up working with us:

  1. "Who specifically will review my code, and what's their background?" You should get a name and a CV, not "one of our senior engineers."
  2. "How many people will touch my codebase?" Fewer is better. Ask for the exact number.
  3. "Will the same person do the retest?" If the answer is "whoever is available," the retest will find less.
  4. "What's your handling if the primary auditor becomes unavailable?" Every firm should have an answer. Most don't.

The firm's answers will tell you a lot about how they actually structure engagements, versus how their marketing describes it. The gap between those two is where clients get disappointed.


Interested in a one-expert audit for your project? Request an expert review — we'll match you to the auditor whose experience most closely matches your stack.
